
Don't Worry About the Vase Podcast On Dwarkesh Patel's Podcast With Andrej Karpathy
Oct 21, 2025. In this engaging discussion, Andrej Karpathy, a prominent machine-learning researcher and former Director of AI at Tesla, shares his views on the future of artificial intelligence. He discusses AGI timelines, emphasizing the challenges of continual learning and the pitfalls of premature optimism about full agents. Andrej reflects on past AI shifts, critiques reinforcement learning, and weighs the balance between knowledge storage and cognitive efficiency. He also warns of job displacement and the need to recalibrate education for a post-AI world.
Agents Need Context More Than Raw IQ
- Andrej Karpathy thinks AGI is likely a decade away and calls this the decade of agents, but current agents lack context and continual learning.
- Zvi Mowshowitz argues that context handling, not raw intelligence, is the main short-term bottleneck to useful agents.
Evolution Gives Algorithms Not Data
- Karpathy contrasts biological evolution with pre-training and argues we're building 'ghosts' not animals, so evolution provides learning algorithms not direct knowledge.
- Zvi Mowshowitz warns against treating models as blank slates and notes that evolution supplies useful inductive biases.
Struggling With LLMs While Building NanoChat
- Karpathy found LLMs unhelpful when assembling his nanochat repo and relied mainly on autocomplete for boilerplate.
- He reported models repeatedly tried to use standard patterns he intentionally avoided, e.g., DDP, forcing manual fixes.

