Andrej Karpathy — AGI is still a decade away

Dwarkesh Podcast

Why continual learning and distillation are absent

Karpathy argues that current models lack a sleep-like process that distills experience back into their weights, and he proposes remedies such as small per-user weight updates or sparse attention.
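
A minimal sketch of what "small per-user updates" could look like in practice, assuming a LoRA-style low-rank adapter on top of a frozen base layer. The class name PerUserAdapter, the rank, and the toy training step are illustrative assumptions, not anything specified in the episode:

    # Illustrative sketch only, not Karpathy's method: a frozen shared
    # layer plus a tiny low-rank per-user delta that is cheap to train
    # and store separately for each user.
    import torch
    import torch.nn as nn

    class PerUserAdapter(nn.Module):
        """Frozen base weight plus low-rank per-user delta:
        y = base(x) + x (B A)^T."""
        def __init__(self, base: nn.Linear, rank: int = 4):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad_(False)  # shared base model stays fixed
            d_out, d_in = base.weight.shape
            self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # trainable
            self.B = nn.Parameter(torch.zeros(d_out, rank))        # starts as a no-op

        def forward(self, x):
            return self.base(x) + x @ (self.B @ self.A).T

    # A sleep-like "distillation" pass: fold one user's recent data into
    # their adapter while the base weights never change.
    base = nn.Linear(64, 64)
    user_layer = PerUserAdapter(base, rank=4)
    opt = torch.optim.Adam([user_layer.A, user_layer.B], lr=1e-3)

    x, target = torch.randn(8, 64), torch.randn(8, 64)  # stand-in user data
    opt.zero_grad()
    loss = nn.functional.mse_loss(user_layer(x), target)
    loss.backward()
    opt.step()  # only ~2 * rank * 64 parameters are updated per user

Only the tiny A and B matrices are trained and stored per user, so the shared weights stay untouched; a background job could periodically run such updates over a user's accumulated interactions.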
