

#32 - Scott Aaronson - The Race to AGI and Quantum Supremacy
Dec 4, 2024
Scott Aaronson, a theoretical computer scientist and former OpenAI researcher, dives into AI safety and quantum computing. He shares insights from his time on OpenAI's superalignment team and discusses the P vs NP problem with engaging analogies. The conversation also covers quantum computing's potential to disrupt cryptography and the race among tech giants to build it. Plus, Aaronson proposes a new religion in the context of AGI. The chat blends deep theory with pressing real-world concerns.
AI Snips
OpenAI Sabbatical
- Scott Aaronson joined OpenAI for a one-year sabbatical in 2022 to work on AI safety.
- Despite his initial skepticism, the role intrigued him: it focused on theoretical foundations and allowed remote work.
AI Alignment Challenges
- Aaronson admits difficulty in fully defining AI alignment mathematically.
- He worked on more concrete problems like watermarking AI outputs for easier identification.
Watermarking AI Content
- AI detection tools struggle to keep up with rapidly improving language models.
- Watermarking offers a solution by embedding detectable signals within AI-generated content.
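The watermarking scheme Aaronson worked on at OpenAI biases the model's token sampling with a keyed pseudorandom function, so anyone holding the key can later test whether text carries the bias. The sketch below is a toy illustration of that general idea using a simplified "green-list" variant, not Aaronson's actual scheme; the key, vocabulary, and function names are all hypothetical.

```python
import hashlib
import random

KEY = b"secret-watermark-key"  # hypothetical shared secret

def green_set(prev_token: str, vocab: list[str]) -> set[str]:
    """Deterministically mark half the vocabulary as 'green',
    seeded by a keyed hash of the previous token."""
    digest = hashlib.sha256(KEY + prev_token.encode()).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    return set(rng.sample(vocab, len(vocab) // 2))

def detect(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens falling in the green set chosen by their
    predecessor: ~0.5 for ordinary text, near 1.0 when a generator
    consistently prefers green tokens."""
    hits = sum(tok in green_set(prev, vocab)
               for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)
```

A generator that nudges its sampling toward each step's green set leaves a statistical signal that `detect` can pick up, while text written without the key hovers around the 0.5 baseline. Aaronson's real scheme is subtler (it uses a Gumbel-style pseudorandom sampling rule intended to leave the output distribution essentially unchanged), but the detection-by-keyed-statistics idea is the same.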