Steven Johnson
A.I. Is Mastering Language. Should We Trust What It Says?
OpenAI’s GPT-3 and other neural nets can now write original prose with mind-boggling fluency – a development that could have profound implications for the future.
New York Times Magazine, 2022
Overview
Machines are getting smarter, but is that a good thing?
Recommendation
What would the world look like with real artificial general intelligence (AGI)? OpenAI’s remarkable new large language model (LLM), GPT-3, offers glimmers of that future. GPT-3 uses a deep neural network, trained on roughly 700 gigabytes of curated text, to predict the next word in a sequence and thereby generate original compositions. The technology has potential applications across many sectors. The troubling question is: Should humanity create such an entity, and what are the possible unintended consequences? OpenAI’s founders want AGI to “benefit” all humanity – but it is not clear whether that is possible, or even desirable.
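To make the word-prediction idea concrete, here is a minimal sketch – not from the article – using the openly released GPT-2 model via the Hugging Face transformers library, since GPT-3 itself is reachable only through OpenAI’s API. The prompt string is an arbitrary example; the model simply scores which words are most likely to come next.

```python
# Minimal sketch: next-word prediction with GPT-2 (a stand-in for GPT-3,
# which is only available through OpenAI's API).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Machines are getting smarter, but"  # arbitrary example prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the next token, given the prompt
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={prob:.3f}")
```

Generating “original prose” is just this step repeated: the model samples one likely next word, appends it to the prompt, and predicts again.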
Summary
About the Author
Steven Johnson is a contributing writer for The New York Times Magazine and the author of Future Perfect: The Case for Progress in a Networked Age. He also writes the newsletter Adjacent Possible on Substack.