From industrial to artistic and cultural applications, AI has made significant leaps over the last decade. IBM’s Watson platform and Google’s DeepMind, for example, have adopted machine learning and natural language processing, allowing AI systems to improve with experience. These technologies are impressive: DeepMind’s AlphaGo beat Lee Sedol, one of the world’s top-ranked players of the strategy board game Go, while Watson can now predict when machinery will need maintenance and even help doctors design smarter cancer treatment plans. Yet current AI applications are still limited by their restrictive ‘senses’. With virtual reality becoming mainstream, however, we are on the cusp of bringing richer sensory input, and with it greater intelligence, to our favorite devices.
Although AI can detect input on spectra beyond human perception, such as ultrasonic sound and infrared radiation, in many ways it cannot come close to the sensory input we humans enjoy on a daily basis. Sight, sound, smell, touch and taste all work in harmony to help us learn and devise creative solutions, and for AI to truly mimic human intelligence, it must gather and process enough information to effectively stand in for those senses. Today, though, the technology is not yet there.
Robots in factories, for example, possess adequate machine vision to pick out defective parts, but if we want robots to operate in more natural settings, they will need something closer to human vision, with all the nuances that allow us to, say, seamlessly drive vehicles without crashing.
The challenge, then, becomes how to impart that information to AI. That’s where VR comes in. Sensors of all types are an integral part of VR hardware: they collect, process and feed back data from the user and the environment the user is immersed in to create a responsive, realistic virtual world. And it’s exactly that sophisticated sensor technology that can inform AI’s machine learning algorithms, providing accurate baseline data and a picture of how that data changes over time.
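One simple way to picture "baseline data and how it changes over time" is a tracker that smooths a sensor stream and flags readings that stray from the learned norm. The sketch below is purely illustrative: the `BaselineTracker` class, its parameters, and the sample head-position readings are all assumptions, not part of any real VR SDK.

```python
# A minimal sketch of baseline tracking for a single VR sensor stream,
# using an exponential moving average (EMA). Readings far from the
# current baseline are flagged as meaningful change rather than noise.

class BaselineTracker:
    def __init__(self, alpha=0.1, threshold=3.0):
        self.alpha = alpha          # EMA smoothing factor (0..1)
        self.threshold = threshold  # distance from baseline that counts as change
        self.baseline = None

    def update(self, reading):
        """Fold a new reading into the baseline; return True if it deviates."""
        if self.baseline is None:
            self.baseline = reading
            return False
        deviates = abs(reading - self.baseline) > self.threshold
        # Update the baseline gradually so it adapts over time.
        self.baseline = self.alpha * reading + (1 - self.alpha) * self.baseline
        return deviates

# Example: a steady signal with one sudden jump at the fifth reading.
tracker = BaselineTracker(alpha=0.2, threshold=2.0)
readings = [10.0, 10.1, 9.9, 10.0, 15.0, 10.2]
flags = [tracker.update(r) for r in readings]
print(flags)  # → [False, False, False, False, True, False]
```

A learning system built on top of such streams would consume the smoothed baselines and the flagged deviations, rather than raw jittery sensor values.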
With these enhancements, AI could even leverage direct human interaction to better understand human perception. VR sensors that measure pupil dilation and contraction, movement, blood pressure, heart rhythm and other physiological responses could help AI “feel” (or at least recognize) the emotions associated with certain actions. Ultimately, this would yield more realistic learning and feedback models, which could in turn improve the accuracy of AI responses.
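To make the idea of "recognizing" emotions from physiological signals concrete, here is a toy nearest-centroid classifier. The labels, centroid values, and units are made-up illustrative assumptions, not figures from any real VR or affective-computing study.

```python
import math

# Sketch: map two physiological readings (heart rate in bpm, pupil
# diameter in mm) to a coarse emotional-arousal label by finding the
# nearest labeled centroid. Centroids below are invented for illustration.

CENTROIDS = {
    "calm":     (65.0, 3.0),
    "engaged":  (85.0, 4.5),
    "stressed": (110.0, 6.0),
}

def classify(heart_rate, pupil_mm):
    """Return the label whose centroid is closest to the reading."""
    def dist(label):
        hr, pd = CENTROIDS[label]
        # Scale pupil diameter so both features contribute comparably.
        return math.hypot(heart_rate - hr, (pupil_mm - pd) * 10)
    return min(CENTROIDS, key=dist)

print(classify(70, 3.2))    # → calm
print(classify(105, 5.8))   # → stressed
```

A real system would learn these centroids (or a richer model) from labeled sensor data rather than hard-coding them, but the pipeline shape is the same: physiological readings in, recognized emotional state out.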
Needless to say, the potential of VR-enriched AI presents a huge opportunity for developers looking to close technology’s gaps and make life easier. With improvements in cognitive learning, VR and sensor technologies, AI could use sensory information to automate many daily tasks that only humans can perform today, such as cooking and teaching. This, in turn, could free us up to work in industries that AI cannot yet serve, or in entirely new positions created by a society dependent on AI. (Think: self-driving car mechanic, or machine rights lobbyist.) If developers begin ramping up their investment in AI and VR technology today, we may well usher in a new age of productivity and betterment driven by AI.