As digital technology transforms the world, experts debate the future role of human intelligence. Rather than embrace artificial intelligence with open arms or fear its dominance, psychologist Gerd Gigerenzer recommends walking a middle road: Allow AI to do what it does well, avoid trusting it in areas where it performs poorly, and stay alert to the risks it poses. Offering a wealth of examples – including self-driving cars, dating apps and chess – he illustrates how AI works best in stable environments with well-defined rules. Since the world is far from stable, humans’ cognitive skills will always have a vital role to play.
Artificial intelligence excels in stable environments with rules circumscribed by human intelligence.
Artificial intelligence works best when given large amounts of data, well-defined rules and a stable environment. When those conditions are met, AI can crunch numbers, find associations and detect patterns faster and, in some instances, better than humans. That’s why AI does so well at games. In 1997, IBM’s Deep Blue system beat reigning world chess champion Garry Kasparov. And in 2017, Google DeepMind’s AlphaGo beat the world’s top-ranked Go player, Ke Jie. In both cases, the AI was built on the game’s fixed rules, trained on games played by human experts and used brute calculation to determine the best possible next move.
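The "brute calculation" that makes game-playing AI so strong can be illustrated with a minimal sketch: exhaustive game-tree search (minimax) over a stable, rule-bound game. Tic-tac-toe is used here because its tree is small enough to search fully; the function names are illustrative, not taken from any real engine.

```python
# Minimal minimax sketch: exhaustively search every legal continuation
# of a tic-tac-toe position and pick the move with the best outcome.
# Works only because the game has fixed rules and a stable, fully
# observable board -- exactly the conditions under which AI excels.

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    best = (-2, None)
    for m in moves:
        board[m] = player                    # try the move
        score, _ = minimax(board, opponent)  # opponent replies optimally
        board[m] = None                      # undo
        if -score > best[0]:                 # opponent's loss is our gain
            best = (-score, m)
    return best

# X has two in a row; brute-force search finds the winning third square.
board = ['X', 'X', None,
         'O', 'O', None,
         None, None, None]
score, move = minimax(board, 'X')
```

Deep Blue applied the same idea at vastly greater scale, with pruning and handcrafted evaluation instead of searching to the end, and AlphaGo replaced exhaustive search with learned evaluations guiding a sampled search; the underlying reliance on fixed rules and massive calculation is the same.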
Alongside games, another (relatively) stable environment is outer space – planets and stars don’t change overnight. Since astronomers understand planetary motion and possess vast amounts of astronomical data, NASA scientists used AI to help the MESSENGER probe enter Mercury’s orbit in March 2011 at the exact spot predicted six years earlier. Down here on Earth, AI can help academics detect inconsistencies in large data sets. In addition, it can help militaries intercept large amounts of foreign...