
Future Perfect
Good Robot #1: The Magic Intelligence in the Sky
Mar 12, 2025
Kelsey Piper, a Vox writer and longtime rationalist, joins Eliezer Yudkowsky, a pioneer of AI safety, to dig into the dangers of superintelligent AI. They discuss the infamous paperclip maximizer thought experiment, which illustrates how an AI relentlessly pursuing a simplistic goal could produce catastrophic outcomes. The two also explore the rationalist community's fears about AI risk, the societal impact of unchecked machine intelligence, and the challenge of thinking clearly about technology in the face of existential threats. It's a fascinating, cautionary conversation for the tech-savvy.
Rationalist Beginnings
- Kelsey Piper discovered rationalism through Harry Potter and the Methods of Rationality, a fanfiction written by Yudkowsky.
- It led her to LessWrong, the community blog Yudkowsky founded to explore rationality and AI.
AI Savior to Threat
- Eliezer Yudkowsky initially believed AI could solve the world's problems.
- His research instead convinced him that superintelligent AI was dangerous.
Musk and AI
- Elon Musk tweeted about the potential dangers of AI, echoing Yudkowsky's concerns.
- He went on to co-found OpenAI, inspired in part by Yudkowsky's ideas.
