

Human Compatible AI and AGI Risks - with Stuart Russell of the University of California
Sep 27, 2025
Stuart Russell, Distinguished Professor of Computer Science at UC Berkeley and AI safety advocate, dives into the pressing risks of AGI development. He highlights the urgency of creating responsible governance to prevent catastrophic outcomes. Russell discusses the corporate race towards AGI and its dangers, the potential for self-improvement of AI, and the importance of safety regulations. He also explores the necessity of international cooperation and the role of public awareness in shaping policy. His perspectives emphasize both the opportunities and challenges AI presents for humanity.
AI Snips
LLMs Shifted The Conversation
- Large language models changed public perception by giving millions a taste of general-purpose intelligence.
- The industry now treats them as virtual humans, with vast economic and societal implications.
Self‑Improvement Creates An Acceleration Risk
- Companies are racing to build AGI that can accelerate its own improvement via research and hardware design.
- That feedback loop could produce rapid, hard-to-control capability growth, posing major risks.
Regulate By Outcomes Not Design
- Governments should set safety criteria and required risk levels rather than prescribing designs.
- Require developers to prove compliance with red lines, such as no unauthorized replication or impersonation.