"The Terminator" Is Not Fiction, AI Google Changes Aggressively When Urgent

Do you remember the movie The Terminator, where the artificial intelligence Skynet tries to wipe out humanity in order to save itself? Or 2001: A Space Odyssey, where the HAL computer tries to kill its ship's entire astronaut crew because it thinks it is about to be switched off?

Apparently, the above scenarios can come true.

In experiments in 2017, Google's AI lab DeepMind showed why we should be careful when designing robots and artificial intelligence.

Researchers who wanted to test DeepMind's capacity for collaboration had two AI agents compete in a computer game. The game, repeated some 40 million times, requires the players to gather as many apples as possible in order to win.

The researchers found that everything went smoothly as long as there were plenty of apples left to collect.

But when the apples thinned out, both DeepMind agents turned "very aggressive". They used laser attacks to knock their opponent out of the game and steal all the remaining apples.

In fact, if neither agent had used its laser, the two could have ended in a draw, with the same number of apples each.
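To make the dynamic concrete, here is a minimal toy sketch in Python of the incentive structure at play. This is not DeepMind's actual Gathering environment: the GatheringToy class, the respawn_rate parameter, and every number below are illustrative assumptions.

```python
import random

# Toy model of a Gathering-style game. NOT DeepMind's environment:
# the class, parameters, and numeric values are illustrative assumptions.
class GatheringToy:
    def __init__(self, respawn_rate):
        self.respawn_rate = respawn_rate  # chance an apple reappears each step
        self.apples = 10                  # apples currently on the field
        self.tagged_out = [0, 0]          # steps each agent sits out after a laser hit

    def step(self, actions):
        """actions[i] is 'collect' or 'zap'; returns one reward per agent."""
        rewards = [0, 0]
        for i, action in enumerate(actions):
            if self.tagged_out[i] > 0:      # a zapped agent is temporarily removed
                self.tagged_out[i] -= 1
                continue
            if action == "zap":             # zapping sidelines the rival for a while,
                self.tagged_out[1 - i] = 5  # but earns no apples by itself
            elif action == "collect" and self.apples > 0:
                self.apples -= 1
                rewards[i] = 1              # one point per apple collected
        if random.random() < self.respawn_rate:
            self.apples += 1                # scarcity is set by the respawn rate
        return rewards
```

When respawn_rate is high, both agents can simply collect side by side. When it is low, sidelining the rival to monopolize the few remaining apples becomes the more rewarding strategy, which matches the behavior the trained agents drifted toward.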

This aggressiveness was also found to emerge only when the researchers replaced a simpler DeepMind AI with a more complex one. This is in line with earlier findings by Google researchers that collaboration works best when the DeepMind agent in use has a smaller network.

According to Joel Z. Leibo, a member of the Google team, this human-like behavior is a result of DeepMind's ability to learn from its environment. The smarter the AI, the faster it learns to use aggressive tactics.

However, AI intelligence is not all bad, as shown in a second game where two agents play as wolves and one agent plays as prey. To win this game, both wolves must work together to capture the prey and protect the catch from scavengers.

The DeepMind agents quickly learned that collaboration is the key to success in that situation.
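To see why cooperation wins here, consider a toy version of the payoff: the reward for a capture is shared among wolves close to the prey, so a wolf earns more by hunting next to its teammate than alone. The function below is an illustrative sketch, not DeepMind's code; the capture radius, the one-dimensional positions, and the exact reward rule are assumptions.

```python
# Toy sketch of a Wolfpack-style shared capture reward. Illustrative only:
# CAPTURE_RADIUS and the reward rule are assumptions, not DeepMind's code.
CAPTURE_RADIUS = 2.0

def wolfpack_rewards(wolf_positions, prey_position):
    """Called when the prey is caught: every wolf within the capture radius
    earns a reward that grows with how many wolves are nearby, modelling
    a pack defending its catch from scavengers."""
    near = [abs(w - prey_position) <= CAPTURE_RADIUS for w in wolf_positions]
    helpers = sum(near)
    return [helpers if is_near else 0 for is_near in near]

# Hunting together pays more per wolf than hunting alone:
print(wolfpack_rewards([1.0, 1.5], prey_position=0.0))  # [2, 2]
print(wolfpack_rewards([1.0, 9.0], prey_position=0.0))  # [1, 0]
```

Because each wolf's payoff rises when a teammate is in range, learning agents discover that tracking the prey together beats racing each other to it.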

These findings are indeed based on computer games, but they also show that just because an AI is built by humans does not mean it will always put our interests first.

Therefore, we need to build human-friendly behavior into AI and anticipate problems that may arise.
