Google’s DeepMind division teaches a digital ant-like creature to play soccer

The artificial intelligence from Google’s DeepMind Technologies division is impressively versatile, there’s no doubt. It became the first neural network in history to defeat a professional Go player, most famously besting world-ranked player Lee Sedol at the Chinese board game that had stumped computers for years. It has demonstrated a prowess for video games, too, teaching itself to play 49 different games for the Atari 2600 console and to navigate a digital 3D maze called Labyrinth. And now, DeepMind’s AI has learned to play a sport of a different nature: soccer. The division’s latest experiment involves teaching an ant-like digital bug to maneuver a soccer ball into a goal. It’s a simple enough task in theory, but exceedingly complex when you “go in blind,” that is to say, attempt to learn it…

Link to Full Article: Google’s DeepMind division teaches a digital ant-like creature to play soccer