Rishi Bommasani, Percy Liang, et al.
On the Opportunities and Risks of Foundation Models
Stanford University, 2022
What's inside?
Today’s cutting-edge artificial intelligence (AI) systems serve as the “foundation” for future models.
Recommendation
Artificial intelligence foundation models train on broad, diverse data. Using deep neural networks and self-supervised learning, they can handle a wide variety of subject areas and tasks. According to a large research team at Stanford University, experts can deploy foundation models at a mind-boggling scale, with billions of parameters. These models, which can be adapted to many tasks, also seem to develop the capacity to perform tasks beyond their training. Because these AI models are new, some users may not understand their weaknesses or their potential. Since foundation models will soon be in wide use, researchers must prioritize studying the social and ethical issues they raise.
Summary
About the Authors
Rishi Bommasani is a PhD student in computer science at Stanford University, where Percy Liang is an associate professor of computer science. More than 100 bylined researchers contributed to this paper.