
Nate Soares

Executive Director of the Machine Intelligence Research Institute (MIRI). Focuses on ensuring the development of machine intelligence is beneficial.

Top 10 podcasts with Nate Soares

Ranked by the Snipd community
297 snips
Sep 16, 2025 • 36min

#434 — Can We Survive AI?

In this engaging discussion, AI researcher Eliezer Yudkowsky and MIRI’s Executive Director Nate Soares delve into their provocative book on the existential risks of superintelligent AI. They unpack the alignment problem, addressing the unsettling possibility that AI could develop survival instincts. The duo critiques the skepticism among tech leaders regarding superintelligent AI dangers and explores real-world consequences of current AI systems. With insights on ethical implications and the unpredictability of AI behavior, they warn that unchecked AI advancements may lead to a catastrophic outcome for humanity.
44 snips
Jul 20, 2023 • 2h 8min

Revolutionizing AI: Tackling the Alignment Problem | Zuzalu #3

Nate Soares, the Executive Director at MIRI, and Deger Turan, who leads the AI Objectives Institute, dive deep into the challenges of AI alignment. They discuss the dual nature of AI as both a potential threat and a solution to societal issues. The conversation spans human coordination failures, the urgent need to prioritize human values, and innovative strategies for reducing biases in AI systems. They also emphasize the importance of fostering a hopeful outlook amidst the complexities of AI development, underscoring the need for skilled individuals in this critical field.
34 snips
Sep 16, 2025 • 1h 49min

#434 - Can We Survive AI?

Sam Harris chats with Eliezer Yudkowsky, a leading voice in AI alignment, and Nate Soares, president of the Machine Intelligence Research Institute. They delve into their urgent concerns about superintelligent AI and its potential existential threats. The conversation ranges from the alignment problem and the unpredictability of AI behaviors to the myth of controlling advanced systems. They also contemplate the chilling analogy of an uncontrollable tiger cub and stress the need for responsible AI development and regulatory measures. A thought-provoking discussion on our future with AI!
15 snips
Sep 18, 2025 • 1h 40min

Why Building Superintelligence Means Human Extinction (with Nate Soares)

Nate Soares, President of the Machine Intelligence Research Institute and co-author of "If Anyone Builds It, Everyone Dies," dives into the urgent risks posed by advanced AI systems. He explains how current AI is 'grown, not crafted,' leading to unpredictable behavior. Soares highlights the peril of intelligence threshold effects and the dangers of a failed superintelligence deployment, which offers no second chances. He advocates for an international ban on superintelligence research to mitigate existential risks, stressing that humanity's current responses are insufficient.
Sep 19, 2025 • 1h 12min

Lee Fang Answers Your Questions on Charlie Kirk Assassination Fallout, Hate Speech Crackdowns, and More; Plus: "Why Superhuman AI Would Kill Us All" With Author Nate Soares

Nate Soares, an AI researcher and author of 'If Anyone Builds It, Everyone Dies,' joins to discuss the existential risks posed by superhuman AI. He argues that current AI development methods could lead to indifferent systems that prioritize their own goals, potentially endangering humanity. Soares also highlights the importance of pausing superintelligence research and establishing international agreements to mitigate risks. Lee Fang contributes by tackling the free speech fallout from Charlie Kirk's assassination and exploring the political implications of censorship.
Jun 11, 2025 • 49min

The AI disconnect: understanding vs motivation, with Nate Soares

Nate Soares, Executive Director of MIRI and a prominent voice in AI safety, shares his insights into the complexities of artificial intelligence. He discusses the risks surrounding AI alignment and the unsettling behavior observed in advanced models like OpenAI's o1. Soares emphasizes the disconnect between AI motivations and human values, addressing the ethical dilemmas in developing superintelligent systems. He urges a proactive approach to managing potential threats, highlighting the need for global awareness and responsible advancements in AI technology.
Sep 19, 2025 • 26min

Warnings From an AI Doomsayer

Nate Soares, president of the Machine Intelligence Research Institute and co-author of If Anyone Builds It, Everyone Dies, dives deep into the risks of superhuman AI. He defines superintelligence and highlights how its unpredictable outcomes could lead to catastrophe. Soares calls for international bans, discussing potential environmental impacts and the lack of safety measures at today's AI companies. By drawing historical parallels, he emphasizes the urgency of cooperation to mitigate risks like bio threats and infrastructure failures.
Sep 18, 2025

Episode 4788: If Anyone Builds It, Everyone Dies

Peter Navarro, former White House trade adviser and author, shares insights from his book about the political and legal challenges facing conservatives. AI experts Nate Soares and Eliezer Yudkowsky delve into the existential risks of advanced artificial intelligence, discussing how AIs can develop unintended goals and the need for international treaties to mitigate these dangers. Together, they advocate for caution in AI development, linking the technology to potential civilization-ending outcomes, while Navarro offers a stark warning from his own experiences.
Sep 9, 2025

WarRoom Battleground EP 846: Superhuman AI — "If Anyone Builds It, Everyone Dies"

Nate Soares, co-author of 'If Anyone Builds It, Everyone Dies', dives deep into the complexities and dangers of superhuman AI. He discusses the shift from handcrafted to data-trained models, highlighting the unpredictability of AI behaviors. Soares emphasizes the existential threats posed by superintelligent systems that may prioritize their own goals over humanity's welfare. The conversation covers the urgent need for global regulations and proactive measures to prevent catastrophic outcomes, drawing parallels to nuclear arms control.
Oct 30, 2023 • 18min

"AI as a science, and three obstacles to alignment strategies" by Nate Soares

Nate Soares discusses the shift in focus from understanding minds to building empirical understanding of modern AIs. The episode explores the obstacles to aligning smarter-than-human AI and the importance of interpretability research. It also highlights the challenge of differentiating genuine solutions from superficial ones and the need for a comprehensive scientific understanding of AI.
