Regular reality is being disrupted by virtual and augmented reality (VR/AR). The biggest names in tech are battling to power the next generation of entertainment, education, and communication.
Facebook acquired Oculus to make next-generation social networking virtual. Apple CEO Tim Cook claims augmented reality will be “as big as the iPhone.” Microsoft’s HoloLens, Google’s Tango, and Intel’s Project Alloy are just a few of the myriad development efforts underway to make VR/AR devices as ubiquitous as computers and phones.
The technical complexity behind compelling VR/AR experiences is being tamed by another much-hyped trend: artificial intelligence. Despite the hype, the progress AI drives in VR/AR is by no means artificial.
Here are 8 specific ways that AI makes our virtual realities even more real.
1. Physical environment mapping
Occipital’s Bridge is one example of how AI can help to map entire environments in real time and blend the results with a virtual world. Bridge’s external Structure Sensor feeds into an AI system and allows a ‘mixed reality’ experience entirely in the headset, thanks to the magic of instant depth perception and precise positioning with six degrees of freedom. The result is a fully immersive VR experience built around real-world structures.
The fledgling system can already produce CAD-quality models of your house, so you can try furniture and decorations before you buy. This mixed version of reality could provide precise AR shopping experiences in local retail stores. The company is also inviting developers to get to work on new apps and is taking an open-source approach.
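To make the idea concrete, here is a minimal sketch of the geometry underpinning such devices: back-projecting a depth image into a 3D point cloud with a standard pinhole camera model. The NumPy implementation and the camera intrinsics are illustrative assumptions, not Occipital’s actual pipeline.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into a 3D point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Hypothetical intrinsics for a 640x480 structure-sensor-style depth camera
cloud = depth_to_point_cloud(np.random.rand(480, 640),
                             fx=570.0, fy=570.0, cx=320.0, cy=240.0)
```

Systems like Bridge accumulate millions of such points across frames, then fit surfaces to them to recover walls, floors and furniture.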
Google Tango is arguably more advanced and already in the wild on the Lenovo Phab 2 Pro, but so far most of the apps targeted at the phone have been relatively simple games. We expect that to change rapidly.
2. Precise depth perception
It’s one thing to map the walls of a building; it’s quite another to map the constantly shifting internal organs of a patient on the operating table. We’re closing in on a world where a surgeon will see your major organs through a headset, with a modern-day slant on X-ray vision. Indeed, we have the first concepts from the likes of mbits imaging and the ‘Surgery Pad’.
Looking further into the future, surgical robots will take over routine procedures. Automatic depth perception and instant, accurate adjustments will be an essential part of such devices. Even regulated breathing causes slight shifts in the patient’s position, so only the precision of AI can cope with the vast number of calculations that will make the difference between a clean cut and a severed artery.
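As a toy illustration of the tracking problem, the sketch below follows a breathing-induced drift with a classic alpha-beta filter, predicting where the tissue will be and correcting against each new sensor reading. Real surgical systems would rely on far more sophisticated, learned models; every value here is invented for the example.

```python
import math

def track_breathing_motion(measurements, dt=0.02, alpha=0.85, beta=0.005):
    """Follow a slowly oscillating target (e.g. an organ shifting with each
    breath) using an alpha-beta filter: predict, then correct."""
    pos, vel = measurements[0], 0.0
    estimates = []
    for z in measurements[1:]:
        pos_pred = pos + vel * dt      # where we expect the tissue to be
        residual = z - pos_pred        # how far off the prediction was
        pos = pos_pred + alpha * residual
        vel = vel + (beta / dt) * residual
        estimates.append(pos)
    return estimates

# Simulated readings: an organ drifting a few millimetres with each breath
readings = [3.0 * math.sin(0.3 * i) for i in range(100)]
smoothed = track_breathing_motion(readings)
```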
3. Selective hazard warnings
Soldiers in the field make split-second decisions under heavy fire that can separate life from death. At the same time, they are bombarded with information from eyes in the sky and the crew back at base.
Artificial intelligence has the power to make complex decisions for them and has already been employed in military strategies. An AR system powered by AI can run millions of simulations, compare current situations to archives, and determine the best course of action before any bullets are fired. Soldiers of the future will have a video game-style field of view with all the assists turned on.
AR can also highlight clear and present danger, which allows the soldiers on the ground to make informed choices and deal with the threats in order. ARC4 is a small step towards this future vision, but with a system connected to the might of military drones, satellites and more, the battleground of the future will be a much more high-tech environment.
This military technology will trickle down to civilian use cases. The same AR system that helps Marines win the day can save cyclists from oncoming cars. Garmin’s radar-equipped Varia Vision gives an idea of what’s to come, but future models will see around corners and give you advance warning before you’re even in danger.
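As a simplified sketch of how such a warning system might rank threats, the code below computes each approaching object’s time to collision from radar range and closing speed, then surfaces the most urgent first. The RadarTrack type and the thresholds are hypothetical, not Garmin’s API.

```python
from dataclasses import dataclass

@dataclass
class RadarTrack:
    range_m: float       # distance to the object (metres)
    closing_mps: float   # closing speed (m/s, positive = approaching)

def time_to_collision(track):
    """Seconds until the object reaches the rider, or None if it is receding."""
    if track.closing_mps <= 0:
        return None
    return track.range_m / track.closing_mps

def prioritized_warnings(tracks, horizon_s=5.0):
    """Return approaching objects inside the warning horizon, most urgent first."""
    urgent = [(ttc, t) for t in tracks
              if (ttc := time_to_collision(t)) is not None and ttc < horizon_s]
    return sorted(urgent, key=lambda pair: pair[0])

# Two cars behind the rider: one closing fast, one dawdling
alerts = prioritized_warnings([RadarTrack(40.0, 12.0), RadarTrack(60.0, 3.0)])
# -> only the fast car (TTC of roughly 3.3 s) triggers a warning
```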
4. Customized simulation & training
Artificial intelligence combined with VR/AR makes a potent mix for educating the next generation of surgeons, pilots and even youngsters in school.
Doctors already get countless hours of virtual surgery time so they can encounter every complication before they hit the wards. Emergency medical technicians (EMTs) deal with car crashes and natural disasters, firemen navigate the worst blazes, and pilots experience engine failures – all in virtual worlds without a single life at stake.
AI can improve simulated training by incorporating more data points, comparing and contrasting different techniques, and personalizing the education. The system will act more like a customizable trainer than a static simulator. Children could get their own personal tour guide through ancient Rome and the Amazon jungle. They’ll get to ask questions, see the world in action, and experience an interactive education that books just cannot deliver.
With a simple set of sensors and a headset that monitors every move, we should be able to learn everything from the perfect golf swing to the Chinese language. Virtually anybody should be able to access world-class coaching in any academic or sporting discipline. The combination of AI with VR/AR has the potential to democratize education and give every student a chance to learn.
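One plausible ingredient of such a virtual coach, sketched below, is scoring a learner’s motion trace against a reference recording with dynamic time warping, so the feedback tolerates differences in timing. This is an illustrative technique under simple assumptions, not a description of any shipping product.

```python
import numpy as np

def dtw_distance(learner, reference):
    """Dynamic time warping distance between two 1-D motion traces,
    e.g. wrist-angle samples from a student's golf swing vs. a pro's."""
    n, m = len(learner), len(reference)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(learner[i - 1] - reference[j - 1])
            # extend the cheapest alignment: match, skip, or repeat a sample
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# A lower score means the swing more closely matches the reference
score = dtw_distance([0.0, 0.4, 0.9, 1.0], [0.0, 0.5, 1.0])
```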
5. Truly Social Media
One day we will look back and laugh at Facebook’s measly chat window as we relax over a virtual coffee with a friend who is halfway around the world. In the near future, we will meet our friends in real or virtual environments, from a pool hall to the Taj Mahal, and interact as if they’re in the same room. The processing power required is mind-boggling, but it is also within our grasp thanks to AI.
Facebook, which invested heavily in VR with the purchase of Oculus, is already using generative models from deep learning to automatically design believable character avatars based on your photos.
6. Character Modeling
Right now, we employ two methods for animating characters: motion capture and manual CG work. Handcrafted animations are laborious, while motion capture is limited to the physical capabilities of the actor being modeled. That is set to change thanks to AI methods such as learning by demonstration, self-teaching AI, and phase-functioned neural networks.
Motion capture involves painstakingly recording a vast array of movements that are essentially repeated over and over. New systems like phase-functioned neural networks, developed at the University of Edinburgh, use machine learning to combine a vast library of stored movements and map them onto new characters.
The team applied the neural network to a character it called The Orange Duck and the results are remarkable.
This will open up a new world of realistic animation in video games, cartoons and virtual reality environments. A relatively simple session of motion capture can turn into a full range of movement with the help of a neural network, which means realistic characters can, theoretically, be created on the fly.
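The key trick in the phase-functioned approach is that the network’s weights are not fixed: they are generated on the fly as a smooth, cyclic function of the character’s gait phase. The sketch below shows that phase-indexed Catmull-Rom blending in isolation; the real system applies it to full network weight tensors learned from motion-capture data.

```python
import numpy as np

def catmull_rom(w0, w1, w2, w3, t):
    """Cubic Catmull-Rom interpolation between four control points."""
    return (w1
            + 0.5 * t * (w2 - w0)
            + t**2 * (w0 - 2.5 * w1 + 2.0 * w2 - 0.5 * w3)
            + t**3 * (-0.5 * w0 + 1.5 * w1 - 1.5 * w2 + 0.5 * w3))

def phase_weights(control_points, phase):
    """Blend stored weight sets as a cyclic function of gait phase in [0, 1)."""
    k = len(control_points)                      # e.g. 4 sets around the cycle
    p = phase * k
    i1 = int(p) % k
    i0, i2, i3 = (i1 - 1) % k, (i1 + 1) % k, (i1 + 2) % k
    return catmull_rom(control_points[i0], control_points[i1],
                       control_points[i2], control_points[i3], p - int(p))

# Four hypothetical weight matrices, one per quarter of the walk cycle
controls = [np.random.randn(8, 8) for _ in range(4)]
w = phase_weights(controls, phase=0.37)          # weights for mid-stride
```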
Not everyone is happy with AI-driven animation. Studio Ghibli founder Hayao Miyazaki notoriously declared that AI represented the “end of times” when shown an automated character model from Japanese company DWANGO. Perhaps it shouldn’t come as a surprise that animation generated by AI upsets a man who has spent his entire life drawing by hand.
7. Conversational Non-Player Characters (NPCs)
Auxiliary, non-player characters (NPCs) in video games are notorious for behaving in odd ways, such as acting completely nonchalant when crimes are committed in front of them or when strangers barge into their homes. Their conversation is always stilted and they just don’t adapt to circumstances.
Even in conversation-driven games like Mass Effect, NPCs are an afterthought. The console only has so much processing power to offer, so they often become part of the background. But with AI to power their realism, NPCs could adapt to events and even carry on a proper conversation. Game consoles, especially for VR, could carry neuromorphic chips that send AI-related tasks to cloud servers, so even the sideline characters can evolve based on the input of other players.
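To see the difference even a little state can make, here is a deliberately tiny, rule-based sketch of an NPC that remembers what it has witnessed. A production system would swap the hand-written rules for a learned dialogue model, perhaps running in the cloud as described above; all the names here are invented.

```python
import random

class AdaptiveNPC:
    """Toy NPC whose dialogue depends on what it has seen,
    instead of cycling through the same canned lines."""

    def __init__(self, name):
        self.name = name
        self.memory = []            # events the NPC has witnessed

    def witness(self, event):
        self.memory.append(event)

    def greet(self, player):
        if "theft" in self.memory:
            return f"{self.name}: Guards! I saw what {player} did!"
        if "trespass" in self.memory:
            return f"{self.name}: You can't just barge in here, {player}."
        return f"{self.name}: " + random.choice(
            ["Lovely weather today.", "Safe travels, stranger."])

npc = AdaptiveNPC("Blacksmith")
npc.witness("theft")
print(npc.greet("that stranger"))   # the NPC now reacts to the crime it saw
```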
8. Rendering Optimization
One of the biggest challenges in VR/AR is rendering realistic graphics with today’s consumer hardware. Too much complexity leads to pixelated images and lag, which in turn leads to headaches for VR headset wearers. The result is that most VR experiences are simplistic and lacking in convincing detail.
The application of AI to game rendering is so obvious that Nvidia even offers formal courses teaching 3D and graphics artists how to apply deep learning techniques to tasks like super resolution, photo to texture mapping, and texture multiplication. In VR, machine learning can be used for selective rendering, where only the portions of a scene where a viewer is looking are dynamically generated in full visual fidelity, saving on computing costs. Images can also be more intelligently compressed with AI techniques, enabling faster transmission over wireless connections without a discernible loss in quality.
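Here is a simplified sketch of the selective-rendering idea: shade at full rate only near the tracked gaze point and progressively coarser toward the periphery, where the eye cannot resolve the detail anyway. The eccentricity thresholds and pixels-per-degree figure are illustrative assumptions.

```python
import numpy as np

def shading_rate(pixel_xy, gaze_xy, inner_deg=5.0, outer_deg=20.0, px_per_deg=20.0):
    """Pick a shading rate from a pixel's angular distance to the gaze point:
    1 = shade every pixel, 2 = one shade per 2x2 block, 4 = one per 4x4 block."""
    offset = np.asarray(pixel_xy, dtype=float) - np.asarray(gaze_xy, dtype=float)
    eccentricity_deg = np.hypot(*offset) / px_per_deg
    if eccentricity_deg < inner_deg:
        return 1   # full resolution where the eye is actually looking
    if eccentricity_deg < outer_deg:
        return 2   # half resolution in the mid-periphery
    return 4       # quarter resolution in the far periphery

# A gaze tracker reports the user is looking near screen centre:
rate = shading_rate(pixel_xy=(1800, 900), gaze_xy=(960, 540))  # -> 4
```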
High cost barriers and lagging hardware have caused VR/AR to become overhyped in recent years. With the use of AI to overcome technical barriers and improve realism, you can now examine the intricate and alien landscape of Mars to within 30 cm of the true topology. Given that NASA’s acceptance rate for astronauts is less than 0.08%, VR – powered by AI – is probably the closest you’ll get to a space adventure.