Do video games make you dumb? You can probably think of someone in your life who’d make you nod vigorously, but according to behavioral psychologists, playing games actually does the opposite. Play performs a crucial role in cognitive development, building the behaviors that help advanced animals thrive in different situations and environments.
Turns out what helps kittens and toddlers learn can also help AI get smarter. In March 2016, Google’s AlphaGo thrashed 18-time world champion Lee Sedol in a five-game match of Go. AlphaGo’s neural networks first studied tens of millions of positions from expert human games, then played millions more games against itself to master the complex, unorthodox style it used to pummel Lee. The achievement illustrates how machine learning works: a system progressively adjusts its own parameters in response to feedback, getting harder and harder to beat with every game it plays.
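To make that feedback loop concrete, here is a minimal, purely illustrative sketch in Python. It is nothing like AlphaGo’s deep neural networks: just tabular value learning on tic-tac-toe, where thousands of self-play games gradually nudge the program’s estimate of each position toward the outcomes it actually observes.

```python
# Illustrative self-play value learning on tic-tac-toe -- not AlphaGo's method.
import random
from collections import defaultdict

# The eight winning lines on a 3x3 board, indexed 0-8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(float)   # estimated value of each board state, from X's point of view
ALPHA, EPSILON = 0.1, 0.1     # learning rate and exploration rate

def state_after(board, move, player):
    return ''.join(player if i == move else cell for i, cell in enumerate(board))

def choose_move(board, player):
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if random.random() < EPSILON:
        return random.choice(moves)        # occasionally explore at random
    sign = 1 if player == 'X' else -1      # O prefers states with low X-value
    return max(moves, key=lambda m: sign * values[state_after(board, m, player)])

for game in range(50000):                  # self-play training loop
    board, player, visited = [' '] * 9, 'X', []
    while True:
        move = choose_move(board, player)
        board[move] = player
        visited.append(''.join(board))
        w = winner(board)
        if w or ' ' not in board:
            result = 1.0 if w == 'X' else -1.0 if w == 'O' else 0.0
            break
        player = 'O' if player == 'X' else 'X'
    for state in visited:                  # nudge each visited state toward the outcome
        values[state] += ALPHA * (result - values[state])
```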
Practice makes perfect, in gaming as in life. Algorithms bested the top humans at Tic-Tac-Toe in the 1950s, checkers in 1994, chess in 1997, Jeopardy! in 2011, and finally Go in 2016. Right up until each of those victories, many considered the game at hand impossible for computers to master.
Just months after the surprise Go win, DeepMind – the company behind AlphaGo – announced a partnership with game developer Blizzard Entertainment to open Blizzard’s nearly 20-year-old StarCraft series to AI and machine learning researchers around the world. Among the finest player-versus-player (PvP) games in history, StarCraft – with its varied gameplay approaches, wide range of playable units, and rich in-game economy – serves as a worthy learning environment for teaching AI to make strategic decisions and solve complex, real-world challenges.
Chess and Go are games of perfect information, in which both sides can see the entire game state. In StarCraft, you don’t know where your enemy is or what she is building: the game uses a fog-of-war layer to hide information from players, AI and human alike. Winning a match therefore demands hierarchical planning, memory, and quick adaptation to new information. These demands make it an ideal testbed for probing the limits of reinforcement learning.
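As a toy illustration of the memory problem (hypothetical code, not DeepMind’s or Blizzard’s actual StarCraft tooling), an agent playing under fog of war has to remember what it last saw and when, and decide when that information has gone stale enough to be worth scouting again:

```python
# Hypothetical sketch of fog-of-war memory -- not real StarCraft code.
from dataclasses import dataclass, field

@dataclass
class FogOfWarMemory:
    last_seen: dict = field(default_factory=dict)   # (x, y) -> (step, contents)

    def observe(self, step, visible_tiles):
        """Record everything currently inside the agent's sight range."""
        for tile, contents in visible_tiles.items():
            self.last_seen[tile] = (step, contents)

    def stale_enemy_sightings(self, step, max_age=100):
        """Enemy positions remembered but not confirmed recently --
        candidates for scouting before committing to a plan."""
        return {tile: contents
                for tile, (seen_at, contents) in self.last_seen.items()
                if contents == 'enemy' and step - seen_at > max_age}

memory = FogOfWarMemory()
memory.observe(step=10, visible_tiles={(3, 4): 'enemy', (3, 5): 'empty'})
print(memory.stale_enemy_sightings(step=200))       # {(3, 4): 'enemy'}
```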
Beyond Scripted NPCs And Bots
Gone are the days when gaming AI simply meant the scripted non-player characters (NPCs) you meet in role-playing games (RPGs) or the crazy bots you get to shoot when no one else is around to play against.
While rudimentary programmatic scripting once made up the entirety of in-game “AI,” much has changed. For example, the multi-award-winning and critically acclaimed Shadow of Mordor – a role-playing game set in J.R.R. Tolkien’s Middle-earth – introduced the AI-driven Nemesis System in 2014, with a reported upgrade coming in mid-2017. The system adds a realistic layer of emotion, character, and personality development to in-game entities such as orcs, depending on how their series of encounters with the human player pans out. Gaming companies like Electronic Arts (EA) and Unity, indie developers, and even robotics startups are using character AI to improve the realism of their experiences.
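In the simplest possible terms, the pattern looks something like the sketch below (hypothetical code, not the game’s actual implementation): a character whose rank and personality are updated by the history of its encounters with the player.

```python
# Hypothetical sketch of encounter-driven character development --
# illustrative only, not the real Nemesis System.
from dataclasses import dataclass, field

@dataclass
class Orc:
    name: str
    rank: int = 1
    traits: set = field(default_factory=set)

    def record_encounter(self, outcome):
        if outcome == "killed_player":
            self.rank += 1                      # promoted for besting the player
            self.traits.add("taunts_player")
        elif outcome == "fled_from_player":
            self.traits.add("fears_player")
        elif outcome == "survived_player":
            self.traits.add("scarred")          # remembers the fight next time

grish = Orc("Grish the Butcher")
grish.record_encounter("killed_player")
print(grish.rank, grish.traits)                 # 2 {'taunts_player'}
```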
In an interview with TOPBOTS, Patrick Soderlund, SVP at EA Worldwide Studios, emphasized his company’s heavy investments in AI. He said, “AI is the strongest driver of technical innovation in the gaming industry and in the larger field of software development.” Soderlund, who oversees many of the gaming behemoth’s popular releases such as Battlefield and Need For Speed, described an ideal scenario in which AI-powered characters make independent decisions and anticipate what a human player will do based on her playing patterns. To Soderlund, this requires AI that is not only smart but also emotionally intelligent.
“In most games, the players can always find a way to cheat AI or figure out a way to work around the AI. But with a self-learning and adaptive AI that actually understands what you are doing, then you’ll have a real artificial intelligence, not one that is merely scripted to deal with certain events.”
The use of AI in games has also gone beyond virtual opponents, NPCs, and creeps (the jargon game designers use for programmatically spawned monsters). Soderlund reveals that AI can streamline realistic content creation, personalize experiences, and make games faster to run and easier to test. “You’ll be able to talk to a video game who’ll act smartly and come back to you with context, sensitivity, and emotion. That will be pretty cool in terms of storytelling,” he promises.
With AI currently getting a tremendous level of attention, Soderlund calls for caution, especially among over-enthusiastic game designers who think AI will solve everything. “What people tend to forget is the actual player experience. Experience consists of so many things: it’s animation, it’s graphics, etc. There’s a lot artificial intelligence can do but not until all of those things act well together as an entity will the experience to the player become relevant. The most important thing is not the underlying AI system but the consumer-touching experience.”
Mastering The Real Game Of Life
In a move similar to the DeepMind-Blizzard collaboration, Microsoft opened up Minecraft for AI research. Dubbed Project Malmo and open to all AI designers on GitHub, the initiative seeks to use a robust game environment to create “systems that can augment human intelligence” in the real world, able to perform tasks ranging from cooking and driving a car to doing the laundry and performing medical surgery.
Co-founded by business magnate and tech visionary Elon Musk, OpenAI has joined the gaming fray, widening the net and diversifying the experiences AI researchers can immerse their agents in. The San Francisco-based AI lab launched a collection of virtual worlds called Universe, where AI agents can learn to play games and use common software applications. The underlying goal is to develop machines with human-like “general intelligence,” able to adapt, make decisions, and take action across a wide range of environments and scenarios.
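In practice, that looks like a Gym-style control loop. The sketch below follows the starter example OpenAI documented for Universe (it assumes the gym and universe Python packages are installed and that Docker is available to host the remote environment); a learning agent would replace the hard-coded key press with actions chosen from the pixel observations.

```python
import gym
import universe  # registers the Universe environments with Gym

env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)          # spin up one Docker-backed remote environment
observation_n = env.reset()

while True:
    # hold the up arrow in every remote; actions are sent as VNC keyboard events
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```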
Allowing AI to experiment and fail in virtual environments may be much safer than letting it loose in the real world. The extraordinary realism of modern games is useful not only for appeasing picky gamers, but also for training AI for use cases such as autonomous vehicles. A team from Intel and TU Darmstadt claims the visual fidelity of popular games like Grand Theft Auto can be used to establish “ground truth” – a baseline for reality – for tuning vision algorithms for self-driving cars. Let’s just hope AI doesn’t also pick up the less civilized aspects of playing GTA.
We’ve come a long way since Deep Blue defeated world chess champion Garry Kasparov in 1997. IBM Interactive Media CTO George Dolbier aptly describes the gigantic leap: “the number of potential moves in a chess game has been equated to the number of atoms, or the number of grains of sand on beaches. The number of potential moves in a game of Go is equal to the number of atoms in the universe.” In 2014, WIRED described Go as “one of AI’s greatest unsolved riddles,” one expected to take another 10 years to crack.
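For a rough sense of scale, here is the back-of-the-envelope arithmetic behind that comparison, using commonly cited average branching factors and game lengths (illustrative orders of magnitude, not exact counts):

```python
# Rough game-tree sizes from commonly cited averages -- orders of magnitude only.
import math

chess_exponent = 80 * math.log10(35)    # ~35 legal moves per turn over ~80-ply games
go_exponent = 150 * math.log10(250)     # ~250 legal moves per turn over ~150-ply games

print(f"chess game tree ~ 10^{chess_exponent:.0f}")  # ~10^124
print(f"Go game tree    ~ 10^{go_exponent:.0f}")     # ~10^360
# versus roughly 10^80 atoms in the observable universe
```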
Given the unexpected pace of progress in Go-playing algorithms, you can imagine how AI with deep learning capabilities – immersed in even more challenging environments like StarCraft – will eventually crack the game and beat any human opponent.
But the stakes for this emerging technology are much higher than merely winning or losing games. If artificial general intelligence (AGI) can be achieved by pitting algorithms against a multitude of virtual worlds, then what you have is a game-changing revolution.