
AI Leaders Admit: We Can’t Stop the Monster We’re Creating | Warning Shots Ep. 14
Oct 19, 2025
AI leaders are revealing troubling truths about the technology they’re creating. Some, like Jack Clark, describe their AI as a "mysterious creature," fraught with danger yet inescapable. Elon Musk distances himself, claiming he has warned the world and can only lessen the risks in his own creations. The hosts discuss the moral quandaries of safety versus ambition, the overwhelming drive for profit, and how insiders joke about extinction risks. They urge listeners to take these alarming confessions seriously, since the builders themselves are warning about the implications of their work.
AI Snips
Builders Admit The Systems Are Unknowable
- Jack Clark describes advanced AIs as "a real and mysterious creature" that developers do not fully understand.
- The hosts highlight this as evidence builders themselves admit the systems are unpredictable and dangerous.
Insist On A Stop Button
- Demand explicit contingency plans and a clear stop button from frontier AI teams instead of rhetoric.
- Liron Shapira insists companies must openly discuss and prepare to pause development as Plan B.
Incentives Keep People In The Race
- Researchers stay because of incentives: prestige, money, and the chance to witness AGI, not because they have concluded the benefits outweigh the risks.
- Michael Zafiris explains these incentives make leaving rare even when individuals fear extinction.
