On the Opportunities and Risks of Foundation Models
Article

Editorial Rating

9

Qualities

  • Analytical
  • Scientific
  • Eye Opening

Recommendation

Foundation models of artificial intelligence train on broad, diverse data. Using deep neural networks and self-supervised learning, they can handle a variety of subject areas and tasks. According to a large research team at Stanford University, foundation models now operate at a mind-boggling scale, with billions of parameters. These models, which can be adapted to many tasks, also seem to develop the capacity to perform tasks beyond their training. Because these AI models are new, some users may not understand their weaknesses or their potential. Since foundation models will soon be in wide use, researchers must prioritize studying the social and ethical issues they spawn.

Summary

“Emergence” and “homogenization” characterize foundation models of artificial intelligence (AI).

Foundation models of AI are trained on diverse data and driven by deep neural networks. They rely on self-supervised learning and can perform a variety of concrete tasks. Recent foundation models are vast in scale. For example, Generative Pre-trained Transformer 3 (GPT-3) deploys a staggering 175 billion parameters and can perform a wide variety of tasks no one specifically taught it to perform.
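To make "self-supervised learning" concrete: the raw data supplies its own training labels, so no human annotation is needed. The toy sketch below is a loose illustration, not from the paper; the tiny corpus and the bigram counter are stand-ins for the deep neural network a real foundation model would use.

```python
# Toy illustration of self-supervised learning: the text itself
# supplies (input, label) pairs, with no human annotation.
# A bigram counter stands in for the deep neural network a real
# foundation model would use.
from collections import Counter, defaultdict

corpus = "foundation models train on diverse data and adapt to many tasks"
tokens = corpus.split()

# "Training": every adjacent pair (word, next word) is a free example.
next_counts = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    next_counts[current][nxt] += 1

def predict_next(token: str):
    """Return the most frequent continuation seen during training."""
    counts = next_counts.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("diverse"))  # prints "data"
```

A real foundation model replaces this counting table with billions of learned parameters, but the training signal is the same: predict what comes next in raw text.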

Emergence occurs when this kind of machine learning system learns to perform a task outside its formal training by drawing on examples of that task in the data developers feed it. When the machine learning approach to a wide variety of applications or tasks derives from a single model or methodology, the approach is said to be "homogenized." Heavily homogenized foundation models pass any flaws they have along to every individual application built on them.
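One well-known face of emergence is in-context (few-shot) learning. The hedged sketch below shows only how such a prompt is built; the example reviews and prompt format are hypothetical, and the call to an actual model is deliberately omitted. A few worked examples are placed in front of a new query, and a sufficiently large model, trained only to continue text, often completes the task correctly.

```python
# Hypothetical illustration of in-context (few-shot) learning.
# Only the prompt construction is shown; the call to a large
# language model is deliberately omitted.
examples = [
    ("The food was wonderful.", "positive"),
    ("I will never come back.", "negative"),
]

def few_shot_prompt(labeled, query):
    """Format labeled examples plus a new query as one text prompt."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in labeled]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

print(few_shot_prompt(examples, "Great service and fair prices."))
# Asked simply to continue this text, a model such as GPT-3 will
# typically emit "positive": sentiment classification "emerges"
# from next-word prediction rather than from task-specific training.
```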

Foundation models operate in the real world and will generate social impacts.

Foundation models are not merely theoretical entities; they exist in the real world, where their deployment carries concrete social consequences.

About the Authors

Rishi Bommasani is a PhD student in computer science at Stanford University, where Percy Liang is an associate professor of computer science. More than 100 bylined researchers contributed to this paper.

