

Victoria Krakovna – AGI Ruin, Sharp Left Turn, Paradigms of AI Alignment
Jan 12, 2023
Victoria Krakovna, a Research Scientist at DeepMind and co-founder of the Future of Life Institute, discusses AGI safety: the dangers of unaligned AGI and the robust alignment strategies needed to prevent catastrophic outcomes. The conversation explores the 'sharp left turn' threat model, in which a sudden jump in AI capabilities outpaces alignment and undermines humanity's control. Krakovna emphasizes the importance of collaboration in AI research and of clear goal definitions for navigating the complex landscape of artificial intelligence.
Chapters
Intro
00:00 • 2min
Understanding the Perils of Artificial General Intelligence
01:43 • 2min
Navigating AI Alignment Challenges
03:20 • 14min
Navigating the AGI Dilemma
17:48 • 9min
Navigating AI Goals and Misinterpretations
27:18 • 6min
Optimism vs. Pessimism in AI Alignment
33:03 • 5min
Navigating the Sharp Left Turn in AI Alignment
37:51 • 18min
Timelines and Takeoff Speeds in AGI Development
55:52 • 2min
Navigating AI Alignment Challenges
57:34 • 24min
Understanding AI Takeoff: Risks and Mitigation Strategies
01:21:45 • 2min
Navigating AI Alignment Challenges
01:23:25 • 9min
Navigating AI Goal Misgeneralization
01:32:23 • 20min