
Nate Soares

Executive Director of the Machine Intelligence Research Institute (MIRI). Focuses on ensuring the development of machine intelligence is beneficial.

Top 10 podcasts with Nate Soares

Ranked by the Snipd community
348 snips
Sep 16, 2025 • 36min

#434 — Can We Survive AI?

In this engaging discussion, AI researcher Eliezer Yudkowsky and MIRI’s Executive Director Nate Soares delve into their provocative book on the existential risks of superintelligent AI. They unpack the alignment problem, addressing the unsettling possibility that AI could develop survival instincts. The duo critiques the skepticism among tech leaders regarding superintelligent AI dangers and explores real-world consequences of current AI systems. With insights on ethical implications and the unpredictability of AI behavior, they warn that unchecked AI advancements may lead to a catastrophic outcome for humanity.
70 snips
Sep 16, 2025 • 1h 49min

#434 - Can We Survive AI?

Sam Harris chats with Eliezer Yudkowsky, a leading voice in AI alignment, and Nate Soares, president of the Machine Intelligence Research Institute. They delve into their urgent concerns about superintelligent AI and its potential existential threats. The conversation ranges from the alignment problem and the unpredictability of AI behaviors to the myth of controlling advanced systems. They also contemplate the chilling analogy of an uncontrollable tiger cub and stress the need for responsible AI development and regulatory measures. A thought-provoking discussion on our future with AI!
64 snips
Oct 15, 2025 • 1h 24min

Will AI superintelligence kill us all? (with Nate Soares)

Nate Soares, an executive at the Machine Intelligence Research Institute and co-author of *If Anyone Builds It, Everyone Dies*, explores the existential risks posed by superhuman AI. He discusses how an AI's alien drives can produce unpredictable behavior, complicating our control over these systems. The conversation delves into the gap between how an AI behaves in training and how it may act later, with critical insights on AI hallucinations and the point that apparent kindness during training doesn't guarantee safe outcomes afterward. Soares emphasizes the urgent need for awareness and regulation to head off potentially catastrophic scenarios.
44 snips
Jul 20, 2023 • 2h 8min

Revolutionizing AI: Tackling the Alignment Problem | Zuzalu #3

Nate Soares, the Executive Director at MIRI, and Deger Turan, who leads the AI Objectives Institute, dive deep into the challenges of AI alignment. They discuss the dual nature of AI as both a potential threat and a solution to societal issues. The conversation spans human coordination failures, the urgent need to prioritize human values, and innovative strategies for reducing biases in AI systems. They also emphasize the importance of fostering a hopeful outlook amidst the complexities of AI development, underscoring the need for skilled individuals in this critical field.
41 snips
Oct 15, 2025 • 1h 37min

EP 327 Nate Soares on Why Superhuman AI Would Kill Us All

Nate Soares, president of the Machine Intelligence Research Institute, dives deep into the existential risks posed by superhuman AI. He explores the opacity of AI systems and why their unpredictability can be more dangerous than nuclear weapons. The conversation touches on whether large language models are simply clever predictors or evolving minds, and the challenges of aligning AI goals with human values. Soares proposes a treaty to curb the race toward superintelligent AI, inviting listeners to confront these pressing global threats.
34 snips
Sep 18, 2025 • 1h 40min

Why Building Superintelligence Means Human Extinction (with Nate Soares)

Nate Soares, President of the Machine Intelligence Research Institute and co-author of "If Anyone Builds It, Everyone Dies," dives into the urgent risks posed by advanced AI systems. He explains how current AI is 'grown, not crafted,' leading to unpredictable behavior. Soares highlights the peril of intelligence threshold effects and the dangers of a failed superintelligence deployment, which lacks second chances. He advocates for an international ban on superintelligence research to mitigate existential risks, stressing that humanity's current responses are insufficient.
24 snips
Oct 1, 2025 • 54min

Will AI Kill Us for the Lulz? Nate Soares: If Anyone Builds It, Everyone Dies

Nate Soares, a computer scientist and co-author of If Anyone Builds It, Everyone Dies, delves into the existential risks posed by advanced AI. He highlights the alarming possibility that unregulated AI development could lead to catastrophic outcomes for humanity. Soares explains how modern AIs, which learn rather than being directly programmed, can exhibit unexpected behaviors and pursue alien goals. He emphasizes the importance of public awareness and international cooperation in addressing these threats, suggesting that treating superintelligence like a nuclear risk may be crucial.
10 snips
Sep 16, 2025 • 1h 8min

So, is AI Gonna Kill Us All?

Nate Soares, the President of the Machine Intelligence Research Institute, joins to discuss the existential threats posed by artificial superintelligence. He argues that AIs with undisclosed goals could jeopardize human values. Nate shares his insights on how current AI systems often lead to unintended consequences, using examples like ChatGPT. He also emphasizes the importance of international policy and cooperation to mitigate risks, while navigating the balance between technical details and accessibility in communicating these critical issues.
Sep 19, 2025 • 1h 12min

Lee Fang Answers Your Questions on Charlie Kirk Assassination Fallout; Hate Speech Crackdowns, and More; Plus: "Why Superhuman AI Would Kill Us All" With Author Nate Soares

Nate Soares, an AI researcher and author of 'If Anyone Builds It, Everyone Dies,' joins to discuss the existential risks posed by superhuman AI. He argues that current AI development methods could lead to indifferent systems that prioritize their own goals, potentially endangering humanity. Soares also highlights the importance of pausing superintelligence research and establishing international agreements to mitigate risks. Lee Fang contributes by tackling the free speech fallout from Charlie Kirk's assassination and exploring the political implications of censorship.
Jun 11, 2025 • 50min

The AI disconnect: understanding vs motivation, with Nate Soares

Nate Soares, Executive Director of MIRI and a prominent voice in AI safety, shares his insights into the complexities of artificial intelligence. He discusses the risks surrounding AI alignment and the unsettling behavior observed in advanced models like OpenAI's o1. Soares emphasizes the disconnect between AI motivations and human values, addressing the ethical dilemmas in developing superintelligent systems. He urges a proactive approach to managing potential threats, highlighting the need for global awareness and responsible advancement of AI technology.
