A Skeptical Take on the AI Revolution

The AI expert Gary Marcus asks: What if ChatGPT isn’t as intelligent as it seems?

The New York Times

5-minute read
5 key takeaways
Audio and text

Overview

Learn why current large language models aren’t as intelligent as some people think – and how AI could get smarter with time.


Editorial Rating: 8

Qualities

  • Eye Opening
  • Hot Topic
  • Insider's Take

Recommendation

In this eye-opening episode of The Ezra Klein Show, neuroscientist and leading AI expert Gary Marcus offers his highly informed insights into the significant limitations of ChatGPT and similar large language models – and explains why more data won’t overcome them. Marcus also warns of ChatGPT’s serious social threats, including perpetuating misinformation at scale. However, he also outlines a way that more sophisticated artificial general intelligence systems could play valuable roles in science, medicine, work and everyday life in the decades to come.

Summary

ChatGPT and similar systems are producing content divorced from truth.

Large language models (LLMs) like ChatGPT excel at producing convincing-sounding text, but these systems don’t understand what they’re doing or saying. Their output is “bullsh*t” in the classic sense defined by philosopher Harry Frankfurt: content that sounds plausible but is produced with no regard for whether it is true.

LLMs essentially cut and paste text to create pastiches – content that imitates a style. Humans do something similar when they think, averaging things they’ve heard and read, but with a critical difference: humans have mental models of the world. Because LLMs lack such internal representations, they are incapable of either telling the truth or lying.

AI systems like ChatGPT pose significant threats to society because they facilitate misinformation and propaganda.

LLMs threaten to create a situation where misinformation...

About the Podcast

Gary Marcus is considered a leading authority on AI, widely known for his research in human language development and cognitive neuroscience. He is professor emeritus of psychology and neural science at New York University and the founder of the machine learning companies Geometric Intelligence and Robust.AI. Marcus is the author of five books, including Rebooting AI: Building Artificial Intelligence We Can Trust. Ezra Klein is a journalist, political analyst and host of The Ezra Klein Show podcast for The New York Times.


Comment on this summary

    J. T. 1 year ago
One man's misinformation is another man's normal, and vice versa. It's a big problem to be blinded by your own filter bubble and assume misinformation only happens to other people, but never to your people. Indeed, there is already heavy political bias baked into these large language models. The problem is, the engineers in California and the journalists in New York happen to agree with those biases, so they overlook them: https://www.nationalreview.com/news/grammarly-wants-to-make-you-an-ally-whether-you-like-it-or-not/