Dwarkesh Podcast

Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Apr 6, 2023
Eliezer Yudkowsky, a prominent AI safety researcher, shares his views on the risks posed by advanced AI. He argues that aligning AI with human values is urgently needed to prevent catastrophic outcomes. Yudkowsky discusses large language models and the challenges of aligning them. The conversation also covers the ethical dilemmas of enhancing human intelligence, how human motivations may shift as AI evolves, and the philosophical implications of AI's impact on society and our future.
ANECDOTE

Childhood Influences

  • Eliezer's parents tried to raise him as an Orthodox Jew, but he only learned to pretend belief.
  • The ethos of science fiction books resonated with him more, shaping his beliefs.
INSIGHT

AI as Actors

  • Training AIs on human text creates an actor, not a human.
  • Such AIs learn to predict human behavior, not to embody human psychology.
INSIGHT

AI's Superhuman Ability

  • An AI trained on all human text doesn't become the average human.
  • It becomes capable of simulating any individual, which is far more dangerous.