

2027 Intelligence Explosion: Month-by-Month Model — Scott Alexander & Daniel Kokotajlo
Apr 3, 2025
Scott Alexander, author of popular blogs on AI and culture, joins Daniel Kokotajlo, director of the AI Futures Project, to explore the AI landscape leading up to 2027. They dive into the concept of an intelligence explosion, discussing potential scenarios and exploring the societal implications of superintelligent AI. The conversation covers the challenges of aligning AI developments with human values, the competitive race in AI technology between the U.S. and China, and the transformative potential of AI in fields like manufacturing and biomedicine.
LLMs' lack of discovery
- LLMs haven't made groundbreaking discoveries because they haven't been specifically trained to.
- Current models excel at tasks they've been trained for, but novel discovery requires different training.
Discovery is about heuristics
- LLMs haven't made discoveries, much as humans rarely notice etymological connections between related words without deliberately looking for them.
- True discovery requires good heuristics and iterative exploration, which LLMs currently lack.
Fast vs. Continuous Progress
- AI progress can be continuous yet still incredibly fast, like a hyperbola, which is smooth everywhere but grows without bound as it nears its asymptote.
- The core question isn't continuity, but the speed of algorithmic progress.
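The hyperbola analogy can be made concrete with a toy calculation (an illustration, not a model from the episode): a function like f(t) = 1 / (T − t) is continuous at every point before T, yet its value and growth rate become arbitrarily large as t approaches T. The year T = 2027 here is purely a hypothetical placeholder.

```python
# Toy illustration of "continuous but fast": hyperbolic growth toward a
# finite-time singularity. Continuity and explosive speed are compatible.

T = 2027.0  # hypothetical singularity year, chosen only for illustration


def capability(t: float) -> float:
    """Hyperbolic curve: continuous for all t < T, unbounded as t -> T."""
    return 1.0 / (T - t)


# The curve never jumps, but each step toward T multiplies the value.
for t in [2024.0, 2026.0, 2026.9, 2026.99]:
    print(f"t = {t}: capability = {capability(t):.1f}")
```

The takeaway matches the snip: whether progress is "continuous" is the wrong question, since a continuous curve can still explode; what matters is how steep it gets.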