
Dan Hendrycks

A computer science PhD who runs the Center for AI Safety and advises xAI and Scale AI. Expert in AI safety and superintelligence strategy.

Top 10 podcasts with Dan Hendrycks

Ranked by the Snipd community
90 snips
Dec 11, 2024 • 1h 31min

The A.I. Revolution

In this captivating discussion, Peter Lee, President of Microsoft Research, shares insights on AI's transformative potential across fields. Ajeya Cotra sheds light on the risks of advanced AI, while Sarah Guo emphasizes its democratizing capabilities. Eugenia Kuyda dives into the ethical considerations surrounding AI companions, and Jack Clark speaks on AI safety in coding. Dan Hendrycks discusses geopolitical concerns, and Marc Raibert warns against overregulating AI in robotics. Tim Wu underscores the need for effective AI policy as the panel navigates the promise and peril of the technology.
72 snips
Mar 26, 2025 • 1h 15min

AI's Rising Risks: Hacking, Virology, Loss of Control — With Dan Hendrycks

Dan Hendrycks, Director and co-founder of the Center for AI Safety, dives deep into the escalating risks of artificial intelligence. He discusses the urgent need for AI oversight, particularly around virology and potential bioweapon applications. Hendrycks warns of AI-enabled hacking and explains the concept of an intelligence explosion, in which AI could rapidly surpass human capabilities. He also examines the geopolitical dynamics of AI rivalry, particularly between the U.S. and China, and the dual-use nature of these technologies, underscoring why safety discussions matter for our future.
67 snips
Oct 19, 2024 • 2h 39min

GELU, MMLU, & X-Risk Defense in Depth, with the Great Dan Hendrycks

Dan Hendrycks, Executive Director of the Center for AI Safety and advisor to Elon Musk's xAI, dives into the critical realm of AI safety. He discusses innovative activation functions like GELU and highlights pivotal benchmarks such as MMLU. Dan emphasizes the need for robust strategies against adversarial threats and attention to the ethical dimensions of AI development. He also sheds light on how geopolitical dynamics shape AI forecasting and warns of potential risks, advocating a collaborative approach to ensure safe AI advancement.
49 snips
Mar 5, 2025 • 36min

National Security Strategy and AI Evals on the Eve of Superintelligence with Dan Hendrycks

Dan Hendrycks, the Director of the Center for AI Safety and an advisor to xAI and Scale AI, discusses crucial questions around AI risk. He draws a sharp distinction between AI alignment and AI safety, underscoring the implications for national security. The potential weaponization of AI is explored, along with strategies like 'mutually assured AI malfunction.' Dan also advocates for policy measures to govern AI development and for international cooperation to mitigate risks. His insights reveal the urgency of managing AI's dual-use nature.
30 snips
Mar 30, 2025 • 1h 15min

Superintelligence Strategy with Dan Hendrycks

Dan Hendrycks, a computer science PhD and head of the Center for AI Safety, dives into the complex interplay between the US and China on the path to artificial general intelligence (AGI). He discusses the risks of superintelligence and the international regulation needed to prevent catastrophic outcomes. Hendrycks draws parallels to Cold War nuclear strategy, emphasizing the importance of strategic stability. He also explores the balance between AI safety and creative freedom, advocating adaptive policies for a rapidly changing geopolitical landscape.
28 snips
Nov 3, 2023 • 2h 7min

Dan Hendrycks on Catastrophic AI Risks

Dan Hendrycks, an AI risk expert, discusses xAI, his evolving thinking on AI risk, malicious uses of AI, AI race dynamics, making AI organizations safer, and representation engineering for understanding AI traits such as deception.
25 snips
Mar 20, 2025 • 41min

Lawfare Daily: Dan Hendrycks on National Security in the Age of Superintelligent AI

Dan Hendrycks, Director of the Center for AI Safety, discusses groundbreaking strategies for national security in the age of superintelligent AI. He explores the concept of mutually assured AI malfunction as a new deterrence strategy, drawing parallels to nuclear policy. The conversation also delves into the urgent need for international cooperation to regulate access to AI, emphasizing the potential risks and ethical considerations. Hendrycks advocates for stronger government oversight of AI security to protect against misuse and ensure accountability.
18 snips
Jun 21, 2024 • 54min

Dan Hendrycks - Avoiding an AGI Arms Race (AGI Destinations Series, Episode 5)

Dan Hendrycks, Executive Director of the Center for AI Safety, discusses the power players in AGI, the posthuman future, and ways to avoid an AGI arms race. Topics include AI safety, human control, future scenarios, international coordination, preventing the military use of AGI, and collaboration with international organizations on ethical AI development.
13 snips
Mar 5, 2025 • 1h

Mutually Assured AI Malfunction (Robert Wright & Dan Hendrycks)

Dan Hendrycks, Director of the Center for AI Safety and a key figure in AI research, dives into his new paper, "Superintelligence Strategy." He discusses the chilling concept of mutually assured AI malfunction and its ties to global tensions, particularly between the U.S. and China. The conversation examines how America's chip war might inflame conflict over Taiwan, as well as the pressing need for international governance to mitigate AI risks. Hendrycks emphasizes collaboration in navigating the complex geostrategic landscape surrounding AI advancement.
9 snips
Nov 21, 2023 • 25min

Superintelligent AI: The Utopians

"FT Tech Tonic" brings together Jack Clark, co-founder of Anthropic, Dan Hendrycks, founder of the Center for AI Safety, and Yann LeCun, chief AI scientist. They discuss the risks and benefits of AI, the concept of an 'everything machine,' regulatory challenges, biased decision-making systems, societal inequity, and the dominance of tech companies in AI development.