

AI's Rising Risks: Hacking, Virology, Loss of Control — With Dan Hendrycks
Mar 26, 2025
Dan Hendrycks, Director and co-founder of the Center for AI Safety, dives deep into the escalating risks of artificial intelligence. He discusses the urgent need for AI oversight, particularly around virology and potential bioweapon applications. Hendrycks warns of AI-enabled hacking and explains the concept of an intelligence explosion, in which AI could rapidly surpass human capabilities. The geopolitical dynamics of AI rivalry, particularly between the U.S. and China, and the dual-use nature of these technologies frame essential safety discussions for our future.
AI Risks: Short-Term vs. Long-Term
- AI poses short-term risks, like malicious use, and long-term risks, like loss of control.
- Current AIs cannot cause extinction because they lack agency; they struggle even with simple agentic tasks like making a PowerPoint.
Bioweapons: A More Immediate Threat Than Cyberattacks
- AI's potential to help create bioweapons is a serious concern, more so than cyberattacks in the short term.
- Expert-level virology capabilities in AI models are plausible soon.
AI's Proficiency in Virology
- Recent reasoning models can guide wet-lab procedures, scoring at the 90th percentile relative to expert virologists.
- AI's proficiency in biology stems from its vast knowledge of the scientific literature and accumulated background expertise.