On the Opportunities and Risks of Foundation Models
Recommendation
Foundation models in artificial intelligence train on broad, diverse data. Built on deep neural networks and self-supervised learning, they can be adapted to a wide range of subject areas and tasks. According to a large research team at Stanford University, these models operate at a mind-boggling scale, with billions of parameters. Although trained on many tasks, they also seem to develop the capacity to perform tasks beyond their training. Because these models are so new, some users may not understand their weaknesses or their potential. Since foundation models will soon be in wide use, researchers must prioritize studying the social and ethical issues they raise.
Take-Aways
About the Authors
Rishi Bommasani is a PhD student in computer science at Stanford University, where Percy Liang is an associate professor of computer science. More than 100 bylined researchers contributed to this paper.