Episode 35: Percy Liang, Stanford: On the paradigm shift and societal effects of foundation models
May 9, 2024
Percy Liang, a Stanford professor, discusses foundation models, reproducible research, and the societal impacts of AI. Topics include paradigm shifts in AI, generative agents for simulating social dynamics, academia's role in model development, aligning language models with human values, and dissent in science and society.
Customized models can lead to societal polarization, which underscores the importance of models anchored in a shared reality.
Researchers are working to improve foundation models through reproducible research and innovative benchmarking approaches.
Academia plays a crucial role in advancing AI, particularly through rigorous evaluation, access to computing resources, and methodological advances.
Deep dives
Polarization in Customized Models
Customized models that place each person in their own virtual world can contribute to societal polarization. Maintaining a shared reality is crucial to countering this, which argues for models that are tethered to reality rather than driven solely by profit motives.
Evolution of Language Models
Language model research has shifted toward understanding foundation models, with a focus on efficiency, modularity, and robustness. Researchers are pursuing reproducible research methods and innovative benchmarking approaches to improve both the capability and the societal impact of these models.
Evaluation and Future Directions
Evaluating language models is challenging, particularly when it comes to assessing robustness and standardizing tests. More rigorous, contextualized evaluation methodologies are needed, ones that emphasize real-world applications over stylized benchmarks. Future directions include generative agents for social simulations and investment in third-party evaluation frameworks that compare models across organizations and capabilities.
Discussion on Risk Evaluation and Open Models
The conversation turns to the importance of rigor in risk discourse, calling for a more meticulous approach to evaluating the risk factors associated with threats such as disinformation and cyberattacks. Percy emphasizes measuring the marginal risk of models, citing questions like whether AI systems make bioweapon synthesis meaningfully easier as examples that require detailed evaluation. The discussion also touches on how fear mongering can distort regulatory proposals, and advocates for responsible practices that preserve both openness and safety in model development.
Academia's Role and the Need for Computing Resources
Another key point is the evolving role of academia in model development: generating innovative ideas and long-term solutions alongside comprehensive evaluations. For academia to validate and scale novel ideas, such as new optimizers or fundamental changes to how models are built, it needs adequate computing resources. The dialogue also highlights academia's critical function in methodological advances, evaluation practices, and reshaping incentive systems in the AI landscape.
Percy Liang is an associate professor of computer science and statistics at Stanford. These days, he is interested in understanding how foundation models work, how to make them more efficient, modular, and robust, and how they shift the way people interact with AI, although he had been working on language models long before foundation models appeared. Percy is also a big proponent of reproducible research; toward that end, he has shipped most of his recent papers as executable papers using the CodaLab Worksheets platform his lab developed, and he has published a wide variety of benchmarks.
Generally Intelligent is a podcast by Imbue where we interview researchers about their behind-the-scenes ideas, opinions, and intuitions that are hard to share in papers and talks.
About Imbue
Imbue is an independent research company developing AI agents that mirror the fundamentals of human-like intelligence and that can learn to safely solve problems in the real world. We started Imbue because we believe that software with human-level intelligence will have a transformative impact on the world. We're dedicated to ensuring that that impact is a positive one.

We have enough funding to freely pursue our research goals over the next decade, and our backers include Y Combinator, researchers from OpenAI, Astera Institute, and a number of private individuals who care about effective altruism and scientific research.

Our research is focused on agents for digital environments (e.g., browser, desktop, documents), using RL, large language models, and self-supervised learning. We're excited about opportunities to use simulated data, network architecture search, and a good theoretical understanding of deep learning to make progress on these problems. We take a focused, engineering-driven approach to research.