Into AI Safety

Against 'The Singularity' w/ Dr. David Thorstad

Nov 24, 2025
Dr. David Thorstad, a philosopher and assistant professor at Vanderbilt University, critiques the singularity hypothesis and its implications for AI safety funding. He argues that the idea of recursive self-improvement leading to superintelligence is fundamentally flawed. Instead of chasing speculative futures, Thorstad advocates for prioritizing immediate issues like poverty, disease, and authoritarianism. He warns that misallocated funds could detract from addressing pressing global problems, and he emphasizes the need for rigorous, peer-reviewed critiques in the field.
ANECDOTE

Path Into Longtermism And A Planned Book

  • David Thorstad entered longtermism partly via a 2020 postdoc at Oxford with Hilary Greaves.
  • He is now planning a book, Beyond Longtermism, that brings together overlapping challenges to longtermism and short-term alternatives.
INSIGHT

What The Singularity Hypothesis Actually Claims

  • The singularity hypothesis claims that a quantity such as intelligence will grow at an accelerating rate, leading to a historical discontinuity.
  • This requires accelerating (super-exponential) growth and a resulting intelligence level orders of magnitude beyond humans.
INSIGHT

The 'Weak Singularity' Is A Different Claim

  • Many defenders now retreat to a 'weak singularity' that drops the requirement of accelerating growth or of a vast gap in intelligence.
  • David Thorstad warns that this is a different claim and cannot be used to evade critiques of the original hypothesis.