Superintelligence
Paths, Dangers, Strategies
Recommendation
Oxford futurist Nick Bostrom argues that artificial intelligence (AI) offers the promise of a safer, richer and smarter world, but that humanity may not be able to bring that promise to fulfillment. The more Bostrom deconstructs public assumptions about AI, the more you’ll come to think that humankind lacks the resources and imagination to manage the shift from a world that people lead to one that a superintelligent AI agent could threaten or dominate. Bostrom handily explores the possibilities of – and the concerns raised by – such a “singleton,” asking, for instance, what would happen if such an agent developed into a one-world government with uncertain moral principles. His book is informed and dense, with many variables and scenarios to ponder. The specter of what-if carries his narrative, an in-depth treatise designed for the deeply intrigued, not the lightly interested. getAbstract recommends Bostrom’s rich, morally complex speculation to policy makers, futurists, students, investors and high-tech thinkers.
About the Author
Oxford University professor Nick Bostrom is a founding director of the Future of Humanity Institute.