

Dr Roman Yampolskiy - AI Apocalypse: Are We Doomed? A Chilling Warning for Humanity
Jul 19, 2024
Dr Roman Yampolskiy discusses the risks of AI superintelligence surpassing humanity, the dangers of losing control of technology smarter than we are, and the potential benefits and drawbacks of universal basic income. The conversation covers the challenges of programming ethics and human values into AI systems, the risks posed by superintelligent AI, and how AI safety research has evolved over time.
Chapters
Intro
00:00 • 2min
Advancing Narrow AI and the Dangers of Super Intelligence
01:31 • 10min
Exploring AI End-Game Scenarios and Risks
11:12 • 2min
Exploring Dystopian Scenarios, Universal Basic Income, and Global Financial Projects
13:16 • 3min
The Landscape of AI Development and Governance
16:38 • 11min
Exploring the Dangers and Potential of AI
27:52 • 18min
The Dangers and Implications of Superintelligence
45:41 • 17min
Exploring the Definition and Challenges of Achieving Artificial General Intelligence
01:02:43 • 2min
Evolution of AI Safety and Research Landscape Over Time
01:04:46 • 2min
Humans Integrating with AI: Risks and Benefits
01:06:28 • 11min