The Trajectory

Scott Aaronson - AGI That Evolves Our Values Without Replacing Them (Worthy Successor, Episode 4)

Sep 13, 2024
Scott Aaronson, theoretical computer scientist and Schlumberger Centennial Chair at the University of Texas at Austin, explores the future of artificial general intelligence. He discusses the moral implications of creating successor AIs and asks what kind of posthuman future we should aim for. The conversation covers the evolving relationship between consciousness and ethics, the difficulty of aligning AI with human values, and philosophical questions about morality and intelligence across diverse forms of life.
INSIGHT

Humans as Math

  • Language models are often dismissed as mere stochastic parrots.
  • Yet humans, reduced to their basic components, are also essentially mathematical structures.
INSIGHT

Copyability and Its Implications

  • A key difference between current AI and humans is the copyability of AI.
  • Copyability changes what harm, or even death, would mean for an AI.
INSIGHT

Analog Biology

  • The analog aspects of our neurobiology may make perfect copies of a human impossible.
  • That limitation might be one source of human specialness.