In this enlightening podcast, MIT research scientist Lex Fridman interviews physicist and AI researcher Max Tegmark, author of Life 3.0: Being Human in the Age of Artificial Intelligence. Both insist that today’s runaway AI train has rushed humanity to an existential “fork in the road,” where the risks include economic catastrophe, cultural and political upheaval, and human extinction. These experts call for a pause in AI development. Such a pause, they say, would serve as a wake-up call, giving society time to benefit from this powerful technology and to learn to live with the superintelligent alien beings they warn scientists are busily creating.
Alien intelligence will be created by humans.
Astrophysicists define the observable universe as the spherical region of space whose light has had time to reach Earth’s telescopes since the big bang. Human beings are the most technologically advanced life in this “spherical volume.” Human life is rare and precious because people are “stewards of this one spark of advanced consciousness.” Reckless use of technology could extinguish the species, but if nurtured, life could spread throughout the universe. Alien intelligence will not visit Earth from outer space; humans will build it. These aliens will not evolve through a Darwinian process of self-preservation and will not suffer or fear death. Hopefully, they will share human values and support life on Earth.
This new form of intelligence will download the knowledge and experiences it needs and delete superfluous data. It will challenge what it means to be human. On the positive side, people may develop more compassion through shared “hive mind” experiences.
People are already using AI as a communication medium. But in doing so, they often outsource human emotions...
Max Tegmark is a physicist and AI researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence. Lex Fridman is a computer scientist, podcaster, and AI research scientist at MIT.