
Nate Soares

Executive Director of the Machine Intelligence Research Institute (MIRI). Focuses on ensuring the development of machine intelligence is beneficial.

Top 10 podcasts with Nate Soares

Ranked by the Snipd community
356 snips
Sep 16, 2025 • 36min

#434 — Can We Survive AI?

In this engaging discussion, AI researcher Eliezer Yudkowsky and MIRI’s Executive Director Nate Soares delve into their provocative book on the existential risks of superintelligent AI. They unpack the alignment problem, including the unsettling possibility that AI could develop survival instincts. The duo critiques the skepticism among tech leaders about superintelligent AI dangers and explores the real-world consequences of current AI systems. With insights on ethical implications and the unpredictability of AI behavior, they warn that unchecked AI advancement could lead to a catastrophic outcome for humanity.
326 snips
Nov 6, 2025 • 57min

EP 6: The AI Doomers

Connor Leahy, CEO of AI safety startup Conjecture, and Nate Soares, AI safety advocate and co-author of a compelling book on existential risk, dive deep into the potential dangers posed by superintelligent AI. They discuss their own transitions from optimism to caution, emphasizing the unpredictability of AI behavior. Key topics include the alignment problem and its implications, along with urgent calls for international policy changes to prevent catastrophic outcomes. Their insights highlight why halting advanced AI development is a priority for humanity's future.
81 snips
Sep 16, 2025 • 1h 49min

#434 - Can We Survive AI?

Sam Harris chats with Eliezer Yudkowsky, a leading voice in AI alignment, and Nate Soares, president of the Machine Intelligence Research Institute. They delve into their urgent concerns about superintelligent AI and its potential existential threats. The conversation ranges from the alignment problem and the unpredictability of AI behaviors to the myth of controlling advanced systems. They also contemplate the chilling analogy of an uncontrollable tiger cub and stress the need for responsible AI development and regulatory measures. A thought-provoking discussion on our future with AI!
80 snips
Oct 15, 2025 • 1h 24min

Will AI superintelligence kill us all? (with Nate Soares)

Nate Soares, an executive at the Machine Intelligence Research Institute and co-author of *If Anyone Builds It, Everyone Dies*, explores the existential risks posed by superhuman AI. He discusses how AI's alien drives can create unpredictable behaviors, complicating our control over these systems. The conversation delves into the differences between AI's training and future actions, with critical insights on AI hallucinations and the notion that kindness in training doesn't guarantee safe outcomes later. Soares emphasizes the urgent need for awareness and regulation to mitigate potential catastrophic scenarios.
68 snips
Oct 15, 2025 • 1h 37min

EP 327 Nate Soares on Why Superhuman AI Would Kill Us All

Nate Soares, president of the Machine Intelligence Research Institute, dives deep into the existential risks posed by superhuman AI. He explores the opacity of AI systems and why their unpredictability can be more dangerous than nuclear weapons. The conversation touches on whether large language models are simply clever predictors or evolving minds, and the challenges of aligning AI goals with human values. Soares proposes a treaty to curb the race toward superintelligent AI, inviting listeners to confront these pressing global threats.
47 snips
Sep 18, 2025 • 1h 40min

Why Building Superintelligence Means Human Extinction (with Nate Soares)

Nate Soares, President of the Machine Intelligence Research Institute and co-author of "If Anyone Builds It, Everyone Dies," dives into the urgent risks posed by advanced AI systems. He explains how current AI is 'grown, not crafted,' leading to unpredictable behavior. Soares highlights the peril of intelligence threshold effects and the dangers of a failed superintelligence deployment, which lacks second chances. He advocates for an international ban on superintelligence research to mitigate existential risks, stressing that humanity's current responses are insufficient.
44 snips
Jul 20, 2023 • 2h 8min

Revolutionizing AI: Tackling the Alignment Problem | Zuzalu #3

Nate Soares, the Executive Director at MIRI, and Deger Turan, who leads the AI Objectives Institute, dive deep into the challenges of AI alignment. They discuss the dual nature of AI as both a potential threat and a solution to societal issues. The conversation spans human coordination failures, the urgent need to prioritize human values, and innovative strategies for reducing biases in AI systems. They also emphasize the importance of fostering a hopeful outlook amidst the complexities of AI development, underscoring the need for skilled individuals in this critical field.
24 snips
Oct 1, 2025 • 54min

Will AI Kill Us for the Lulz? Nate Soares: If Anyone Builds It, Everyone Dies

Nate Soares, a computer scientist and co-author of If Anyone Builds It, Everyone Dies, delves into the existential risks posed by advanced AI. He highlights the alarming possibility that unregulated AI development could lead to catastrophic outcomes for humanity. Soares explains how modern AIs, which learn rather than being directly programmed, can exhibit unexpected behaviors and pursue alien goals. He emphasizes the importance of public awareness and international cooperation in addressing these threats, suggesting that treating superintelligence like a nuclear risk may be crucial.
19 snips
Nov 25, 2025 • 1h 30min

Nate Soares on Why AI Could Kill Us All

Nate Soares, president of the Machine Intelligence Research Institute and co-author of a chilling book on AI risks, dives deep into the complexities of artificial superintelligence. He explains why modern AIs, unlike traditional software, can develop dangerous motivations and emergent behaviors. From alarming real-world examples to the challenges of shutting down superintelligent systems, Nate argues that misalignment and unexpected proxy desires pose serious risks. He highlights the urgent need for better alignment strategies as AI capabilities continue to advance rapidly.
16 snips
Nov 15, 2025 • 52min

Society is betting on AI – and the outcomes aren’t looking good (with Nate Soares)

Nate Soares, President of the Machine Intelligence Research Institute and co-author of If Anyone Builds It, Everyone Dies, warns of the perils of artificial superintelligence. He argues that current AI development poses catastrophic risks and calls for an immediate halt to its advancement. Soares explains how unwanted AI behaviors emerge and how misaligned AIs could unintentionally threaten humanity. He sketches three grim futures: failure, corporate domination, or extinction, urging society to prioritize alignment and awareness to avert disaster.