Into the Impossible With Brian Keating

The Mysterious Math Behind LLMs | Anil Ananthaswamy

Jan 23, 2026
Anil Ananthaswamy, award-winning science writer and author of Why Machines Learn, explores the math behind modern AI. He discusses why learning works despite overparameterization, how high-dimensional spaces shape model behavior, the risks of data-driven lock-in, the limits of current LLMs such as hallucination, and future paths like continual learning and neuromorphic alternatives.
AI Snips

Data + GPUs Unlocked Deep Learning

  • Deep learning needed two material shifts: abundant internet-scale data and GPUs repurposed from gaming.
  • Anil explains that matrix-heavy ML maps naturally onto GPU strengths, enabling modern breakthroughs; see the sketch after this list.
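A minimal sketch of that point, not code from the episode: a neural network's core operation is matrix multiplication, a massively parallel workload that is exactly what GPUs were built to accelerate. It assumes PyTorch is installed and falls back to CPU when no CUDA device is present.

```python
import time
import torch

# Use the GPU if one is available; otherwise run the same code on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# A dense layer's forward pass is one big matrix multiply:
# (batch, in_features) @ (in_features, out_features).
x = torch.randn(4096, 4096, device=device)
w = torch.randn(4096, 4096, device=device)

start = time.perf_counter()
y = x @ w  # ~69 billion independent multiply-adds, executed in parallel
if device == "cuda":
    torch.cuda.synchronize()  # GPU kernels run asynchronously; wait before timing
print(f"{device}: 4096x4096 matmul in {time.perf_counter() - start:.4f}s")
```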

Guard Against AI Lock-In

  • Watch for technological lock-in driven by massive funding and readily scraped internet data.
  • Prioritize research into sample-efficient, energy-efficient alternatives so today's dominant approach doesn't crowd out better ones.

High-Dimensional Spaces Drive ML Magic

  • High-dimensional vector spaces shape surprising ML behavior and generalization properties.
  • Anil argues these mathematical spaces are the underappreciated core of modern ML's power; see the sketch after this list.
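One concrete instance of that surprise, offered as a minimal sketch rather than anything from the episode: independent random vectors in high-dimensional space are almost always nearly orthogonal, with cosine similarity concentrating around zero at roughly 1/sqrt(dim).

```python
import numpy as np

rng = np.random.default_rng(0)

# Cosine similarity of two independent Gaussian vectors at growing dimension.
for dim in (2, 10, 1_000, 100_000):
    a = rng.standard_normal(dim)
    b = rng.standard_normal(dim)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    print(f"dim={dim:>7,}: cosine similarity = {cos:+.4f}")
# The similarity drifts toward 0 as dim grows (fluctuations ~ 1/sqrt(dim)),
# so a high-dimensional space holds vastly many near-orthogonal directions.
```

One common intuition this supports: embedding spaces can encode far more nearly distinct directions than their raw dimension count suggests.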