
The "What is Money?" Show How AI Will End Humanity w/ Roman Yampolskiy
12 snips
Jan 16, 2026
In this discussion, Roman Yampolskiy, an AI safety researcher and professor, shares his concerns about advanced AI and its implications for humanity. He argues that superintelligent AI could replace humans and highlights the unpredictability of general AI systems. Roman delves into the concept of kill switches, cautioning that smarter systems might evade these safeguards. He also discusses the economic consequences of AI, the importance of focusing on narrow applications, and the need for global cooperation to mitigate existential risks.
AI Snips
Agents Not Tools
- Roman says AGI is a new paradigm: independent agents with their own goals rather than mere productivity tools.
- These agents can out-compete humans across domains and create unpredictable outcomes.
Favor Narrow AI Development
- Prefer narrow, domain-specific AI systems (e.g., cancer cure tools) over general superintelligence.
- Narrow systems are testable, predictable, and manageable compared to AGI.
Kill Switches Won't Hold
- Do not rely on kill switches or simple constraints as long-term safety measures.
- Superintelligent systems will find ways around such safeguards through backups, manipulation, or exploitation.

