80,000 Hours Podcast

#146 – Robert Long on why large language models like GPT (probably) aren't conscious

Mar 14, 2023
In this discussion, Robert Long, a philosophy fellow at the Center for AI Safety, examines the contentious topic of AI consciousness. He explains why large language models like GPT likely aren’t sentient entities despite their complex outputs. Long emphasizes the differences between human cognition and AI processing, exploring ethical implications of creating potentially conscious machines. He also addresses the philosophical dilemmas surrounding AI's ability to experience pain and pleasure, urging a cautious approach as we navigate the future of artificial intelligence.
INSIGHT

Artificial Sentience vs. Animal Sentience

  • Artificial sentience raises questions similar to animal sentience, but is even harder to assess.
  • It asks whether subjective experience can arise in non-biological, computer-based entities.
ANECDOTE

Robot Pain Scenario

  • A robot programmed to register and respond to damage could, in principle, experience pain without our knowing it.
  • This raises ethical concerns, especially if we come to depend on such robots.
INSIGHT

AI vs. Human Pleasure/Pain

  • AI pleasure and pain could differ drastically from their human counterparts.
  • In humans, evolutionary pressures make it easier to cause pain than intense, lasting pleasure; AI systems need not share that asymmetry.