DeepMind, the Artificial Intelligence company that Google bought a few years ago to beat real people at games, has taught its AI to play all 57 games of the legendary Atari 2600 console. Google had previously trained its Artificial Intelligence to beat people at board games – in particular, the ancient Chinese game of Go – and it has also been experimenting with 3D classics such as Counter-Strike. Now Google's AI has learned to play 57 Atari 2600 games, and it does so far better than the average person.
The reason Google teaches its Artificial Intelligence to play is that games pose such varied challenges; that is why games have historically been used to measure AI intelligence. Why Atari? It is simple: this collection of 57 Atari games is one of the most demanding benchmarks for deep reinforcement learning (DRL) agents.
Atari, a Benchmark for Artificial Intelligence
While it is true that some AI agents have managed to beat individual games in this collection, they often fail on the rest of the package: the variety of games and tasks demands a whole set of skills used together. Agent57, DeepMind's Atari-specialized AI, is the first agent able to complete the entire 57-game catalogue of the Atari 2600.
Compared with other DRL agents, Agent57 performs better across the board. Never Give Up, a previous DeepMind agent, managed to beat 51 of the Atari games. And although Agent57 achieves lower overall performance than agents such as MuZero, that is actually good news: Agent57 completes more games than MuZero does. Its average performance per game is lower, but it is able to clear them all.
«An agent that performs well enough across a sufficiently wide range of tasks is classified as intelligent. Games are an excellent testing ground for building adaptive algorithms: they provide a broad range of tasks for which players must develop sophisticated behavioural strategies, and they also provide an unambiguous progress metric: the game score», Google explained.
We can see it in the image above: Agent57 would be Agent B, and MuZero Agent A. Agent A is very effective in eight games, achieves average performance in four others, but cannot finish the remaining eight. Agent B, on the other hand, shows broader skill: it is able to complete all the tasks, even though it performs only modestly in some of them. The latter profile is Agent57's.
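The trade-off between the two profiles can be sketched in a few lines of Python. This is purely an illustration, not DeepMind's code or metrics: the scores below are made up, and the idea is simply that an agent can have a higher average score while beating fewer games than a more balanced rival.

```python
# Illustrative sketch only: comparing two hypothetical agents by
# average human-normalized score vs. breadth of games beaten.
# A score of 1.0 means average-human level; all numbers are invented.

agent_a = [5.0] * 8 + [1.0] * 4 + [0.2] * 8   # brilliant at 8 games, fails 8
agent_b = [1.3] * 20                           # modestly above human everywhere

def mean_score(scores):
    """Average human-normalized score across all games."""
    return sum(scores) / len(scores)

def games_beaten(scores, human_level=1.0):
    """Number of games at or above average-human performance."""
    return sum(s >= human_level for s in scores)

print(mean_score(agent_a), games_beaten(agent_a))  # 2.28 12
print(mean_score(agent_b), games_beaten(agent_b))  # 1.3 20
```

Agent A wins on the average, but Agent B beats every game: the breadth-first profile that, per the article, makes Agent57 notable.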