
Dwarkesh Podcast

Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Apr 6, 2023
Eliezer Yudkowsky, a prominent AI safety researcher, shares his views on the potential risks of advanced AI. He argues passionately that aligning AI with human values is urgent if catastrophic outcomes are to be prevented. Yudkowsky discusses the intricacies of large language models and the difficulty of aligning them. The conversation delves into the ethical dilemmas of enhancing human intelligence and the unpredictability of human motivations as AI evolves. He also reflects on the philosophical implications of AI's impact on society and our future.
04:03:25

Podcast summary created with Snipd AI

Quick takeaways

  • AI alignment is urgent and demands a proactive, comprehensive effort to keep AI systems aligned with human values.
  • Rather than remaining silent, researchers should act now to raise awareness of the risks involved in AI development.

Deep dives

Indeterminate future of AI training runs

Yudkowsky discusses why governments are unlikely to adopt a treaty restricting AI and his motive for calling for a moratorium on further AI training runs.
