

Lawfare Daily: Dan Hendrycks on National Security in the Age of Superintelligent AI
Mar 20, 2025
Dan Hendrycks, Director of the Center for AI Safety, discusses strategies for national security in the age of superintelligent AI. He explores Mutual Assured AI Malfunction (MAIM) as a new deterrence strategy, drawing parallels to nuclear policy. The conversation also covers the urgent need for international cooperation to regulate access to advanced AI, along with the attendant risks and ethical considerations. Hendrycks advocates for stronger government oversight of AI security to guard against misuse and ensure accountability.
AI Snips
Flawed Manhattan Project Strategy
- The "Manhattan Project" strategy for AI superintelligence development is flawed.
- A competitive lead is not guaranteed, especially as the gap between the U.S. and China closes.
Superintelligence vs. AGI & Transformative AI
- Superintelligence surpasses human expertise in all intellectual domains, unlike AGI or transformative AI.
- Focusing on specific AI capabilities, such as virology or cyber skills, is crucial for geopolitical strategy.
MAIM Trigger
- Mutual Assured AI Malfunction (MAIM) focuses on specific dangerous AI capabilities.
- Superintelligent capability in cyber operations or bioweapon creation is enough to trigger MAIM, regardless of progress toward general AI.