We have seen games where developers include bots, either to ease new players into multiplayer or to offer single-player recreations of multiplayer modes. These AI players are rarely capable of competing with their human counterparts, so they mostly serve to flatten the learning curve. DeepMind, on the other hand, is a firm that specializes in applying AI across many fields. They revealed that their AI-driven bots could finally beat human players in one of the most played multiplayer games, Quake III. Their findings are fascinating for anyone interested in how AI learns and what it is capable of.
This is not DeepMind’s first venture into video games; they have already developed systems capable of defeating professional players. The best-known example is AlphaGo, which beat one of the world’s top Go players, and DeepMind has developed AI for several other games as well.
Coming back to their findings in Quake III: the game is drastically different from many others out there. It stands apart because its stages are procedurally generated and it is played from a first-person perspective. That poses a problem for AI development: with a new map every match, the agent cannot simply learn one best method to beat the game. The problem in effect proved a blessing in disguise, as the AI’s progress ended up resembling a human learning curve, more on this later.
The AI started from scratch and learned the rules of the capture-the-flag mode on its own. It was then able to beat 40 human players in matches where humans and AI agents were mixed across teams. After these convincing wins, DeepMind acknowledged that part of the advantage came from the agents’ faster-than-human response times. So they decided to slow the agents down, but the AI was still able to beat its human counterparts.
Progress of AI
Tom's Hardware reports that the findings are especially fascinating because the AI had to learn the basics of the game on its own, and because it achieved these results on procedurally generated stages.
DeepMind said their work on this project highlights how efficiently AI can be trained using multi-agent techniques, that is, pitting AI against AI. This not only makes the agent aware of its mistakes but also pushes it to improve on what it already does. As they put it, “It highlights the results by exploiting the natural curriculum provided by multi-agent training, and forcing the development of robust agents that can even team up with humans.”
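To give a rough feel for the multi-agent idea, here is a toy self-play sketch, purely an illustration and not DeepMind's actual training setup. Two simple agents repeatedly play a tiny zero-sum game (matching pennies) against each other; each agent's only "curriculum" is its ever-adapting opponent, and both tune their strategies from the payoffs they receive.

```python
import random

# Toy self-play sketch (an illustrative assumption, not DeepMind's FTW system).
# Two agents play matching pennies: the "matcher" wants both actions to match,
# the "mismatcher" wants them to differ. Each adapts with a simple
# multiplicative-weights update, so each agent trains against an opponent
# that is itself improving.

class Agent:
    def __init__(self, lr=0.1):
        self.weights = [1.0, 1.0]  # one weight per action (heads, tails)
        self.lr = lr

    def policy(self):
        total = sum(self.weights)
        return [w / total for w in self.weights]

    def act(self, rng):
        return 0 if rng.random() < self.policy()[0] else 1

    def update(self, action, reward):
        # Reinforce (or dampen) the chosen action based on its payoff.
        self.weights[action] *= (1.0 + self.lr * reward)

def payoff(a, b):
    # +1 to the matcher if actions match, -1 otherwise; zero-sum.
    return 1.0 if a == b else -1.0

def self_play(rounds=5000, seed=0):
    rng = random.Random(seed)
    matcher, mismatcher = Agent(), Agent()
    avg_match_p = 0.0
    for _ in range(rounds):
        a, b = matcher.act(rng), mismatcher.act(rng)
        r = payoff(a, b)
        matcher.update(a, r)
        mismatcher.update(b, -r)
        avg_match_p += matcher.policy()[0]
    return matcher.policy(), mismatcher.policy(), avg_match_p / rounds

p_match, p_mismatch, avg_match_p = self_play()
```

Neither agent is given a fixed "best strategy" to imitate; each exploit one discovers is a new problem the other must solve, which is the natural curriculum the quote above refers to.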