Google has developed a “thinking” algorithm, the Deep Q-Network (DQN), that was able to reach human-level proficiency at several dozen 1980s-era Atari games. The games range from shooters like River Raid to pseudo-3D racing games like Enduro.
Many programs have been developed to play games, like IBM’s Deep Blue chess-playing supercomputer, but what sets this particular program apart is that it “learns.” The algorithm was built not only to learn how to play a game, but to improve at it over time.
According to CNN:
“Google says its algorithm was designed to mimic human learning that takes place in a part of the brain called the hippocampus, which helps us learn from recent experience. Deep-Q network was designed to learn why it lost a round of a video game and to improve its game-play based on its past performance.”
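The real DQN approximates this “learn from past experience” idea with a deep neural network and an experience-replay memory. As a rough illustration only (not Google’s actual implementation), here is a tabular sketch of the underlying Q-learning update, where past game transitions are stored and replayed to improve future play; all names and parameter values are illustrative:

```python
import random
from collections import defaultdict, deque

# Illustrative toy example: tabular Q-learning, the update rule that
# DQN approximates with a neural network. All names are hypothetical.
ALPHA = 0.1    # learning rate
GAMMA = 0.99   # discount factor for future rewards

Q = defaultdict(float)        # Q[(state, action)] -> estimated return
replay = deque(maxlen=10000)  # stored "past experience", loosely
                              # analogous to DQN's replay memory

def remember(state, action, reward, next_state, actions):
    # Record one transition (e.g. one frame's outcome in a game).
    replay.append((state, action, reward, next_state, actions))

def learn(batch_size=32):
    # Replay a random batch of past transitions and nudge each Q-value
    # toward the observed reward plus the best estimated future value.
    batch = random.sample(replay, min(batch_size, len(replay)))
    for state, action, reward, next_state, actions in batch:
        best_next = max(Q[(next_state, a)] for a in actions)
        target = reward + GAMMA * best_next
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
```

After enough replayed experience, actions that historically led to reward accumulate higher Q-values, so the agent’s play improves based on its past performance, which is the behavior the quote describes.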
The Google-created Q-network performed as well as, if not better than, professional human game testers. While advanced A.I. is still some time away, this is definitely a first step toward autonomous computers or robots in the future (Skynet, anyone?).