

The magic intelligence in the sky | Good Robot
Apr 11, 2025
Kelsey Piper, a writer for Vox's Future Perfect, explores the dark side of AI from a perspective rooted in the rationalist community. She walks through the ‘paperclip maximizer’ thought experiment, which illustrates how a superintelligent AI pursuing a seemingly harmless goal could pose existential threats if left unchecked. The conversation covers societal fears surrounding AI's evolution and the importance of cultivating rational thinking in children. Personal stories illustrate the challenges of navigating AI's unpredictable nature, drawing parallels to parenting and ethics in technology.
Rationalist Origin Story
- Kelsey Piper discovered rationalism through a Harry Potter fan fiction.
- This led her to Eliezer Yudkowsky's blog, Less Wrong, and the rationalist community.
Shifting Perspectives on AI
- Eliezer Yudkowsky initially believed superintelligent AI could solve world problems.
- His research led him to believe that building such AI would likely end badly.
Early Influence
- Kelsey Piper found Yudkowsky's thought experiments convincing as a teenager.
- These arguments centered on the possibility and difficulty of controlling superintelligent AI.