303 - How LLMs Work - the 20 minute explainer

Fragmented - AI Developer Podcast

Context, Cost, and Practical Intuition (from 22:15)

The hosts tie together context windows, token bloat, and the cost of inference versus pretraining, and discuss the practical effects for developers.
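The per-request framing of inference cost lends itself to a quick back-of-the-envelope check. The sketch below is a minimal illustration, not anything from the episode: it assumes a hypothetical 128k-token context window, a rough 4-characters-per-token heuristic, and placeholder per-token prices, just to show how token bloat eats both the context budget and the cost of each call.

```python
# Back-of-the-envelope look at context windows and inference cost.
# All constants below are illustrative assumptions, not real pricing.

CONTEXT_WINDOW_TOKENS = 128_000        # assumed model context limit
PRICE_PER_1K_INPUT_TOKENS = 0.0005     # assumed USD per 1k prompt tokens
PRICE_PER_1K_OUTPUT_TOKENS = 0.0015    # assumed USD per 1k generated tokens


def estimate_tokens(text: str) -> int:
    """Crude heuristic: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)


def inference_cost(prompt: str, expected_output_tokens: int) -> float:
    """Estimate the cost of one request from prompt size and expected output."""
    input_tokens = estimate_tokens(prompt)
    if input_tokens + expected_output_tokens > CONTEXT_WINDOW_TOKENS:
        raise ValueError("Prompt plus output exceeds the context window")
    return (
        input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
        + expected_output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    )


if __name__ == "__main__":
    # A bloated prompt (raw logs pasted in) versus a lean, pre-summarized one.
    bloated = "Summarize these logs.\n" + ("DEBUG noisy log line\n" * 5000)
    lean = "Summarize the attached two-line error summary."
    for name, prompt in [("bloated", bloated), ("lean", lean)]:
        print(f"{name}: ~{estimate_tokens(prompt)} tokens, "
              f"~${inference_cost(prompt, 200):.4f} per request")
```

Running the script prints the rough token count and per-request cost for each prompt, which is the practical intuition behind trimming what goes into the context window.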
