
80,000 Hours Podcast
#141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well
Dec 13, 2022
In this discussion, Richard Ngo, a researcher at OpenAI who previously worked at DeepMind, explores large language models like ChatGPT. He examines whether these models truly 'understand' language or merely simulate understanding, and emphasizes the importance of aligning AI with human values to mitigate risks as capabilities advance. He also compares AI governance to the governance of nuclear weapons, highlighting the need for effective regulation to ensure safety and transparency in AI applications. The conversation highlights the broad societal implications of advanced AI.
02:44:19
Podcast summary created with Snipd AI
Quick takeaways
- Large language models like GPT-3 can write scripts, explain jokes, produce poetry, and argue for political positions.
- Language models can demonstrate functional understanding by using ideas to reason and draw inferences in new situations.
Deep dives
OpenAI's Goal and Approach
OpenAI's goal is to ensure that the development and deployment of advanced AI systems go well. The organization focuses on both alignment research and governance, taking an empirical approach: building models and collecting evidence to inform its work. One key technique is reinforcement learning from human feedback (RLHF), used to improve system behavior. The policy research team focuses on release and distribution considerations, and OpenAI sees the need for a portfolio of bets spanning both empirical and theoretical approaches.
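
As a rough illustration (not from the episode), here is a minimal sketch of the RLHF idea mentioned above: fit a scalar reward model from pairwise human preferences using a Bradley-Terry update, then nudge a softmax policy toward higher-reward responses with a REINFORCE-style step. The responses, preference data, and learning rates below are all hypothetical toys.

```python
import math

# Toy RLHF sketch (illustrative only): learn scalar "quality" scores for
# candidate responses from pairwise human preferences (a Bradley-Terry
# reward model), then shift a softmax policy toward higher-reward responses.

responses = ["helpful answer", "vague answer", "rude answer"]

# Hypothetical human preference data: (preferred_index, rejected_index).
preferences = [(0, 1), (0, 2), (1, 2)] * 50

# --- Step 1: fit the reward model from pairwise comparisons --------------
reward = [0.0] * len(responses)
lr = 0.05
for preferred, rejected in preferences:
    # Bradley-Terry: P(preferred beats rejected) = sigmoid(r_p - r_r).
    p = 1.0 / (1.0 + math.exp(reward[rejected] - reward[preferred]))
    # Gradient ascent on the log-likelihood of the observed preference.
    reward[preferred] += lr * (1.0 - p)
    reward[rejected] -= lr * (1.0 - p)

# --- Step 2: nudge the policy toward high-reward responses ---------------
logits = [0.0] * len(responses)  # uniform initial policy
for _ in range(200):
    total = sum(math.exp(l) for l in logits)
    probs = [math.exp(l) / total for l in logits]
    baseline = sum(pr * r for pr, r in zip(probs, reward))
    # REINFORCE-style gradient of expected reward w.r.t. each logit.
    for i in range(len(logits)):
        logits[i] += 0.1 * probs[i] * (reward[i] - baseline)

for text, r, l in zip(responses, reward, logits):
    print(f"{text!r}: reward={r:.2f}, logit={l:.2f}")
```

In practice this is done with large neural reward models and policy-gradient methods like PPO rather than tabular scores, but the preferences-to-reward-to-policy pipeline has the same shape.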