Y Combinator Startup Podcast

Scaling and the Road to Human-Level AI | Anthropic Co-founder Jared Kaplan

Jul 29, 2025
Jared Kaplan, co-founder of Anthropic and former theoretical physicist, dives deep into AI scaling and its role in reaching human-level intelligence. He explains how intelligence scales in predictable ways, shaping modern language models. Kaplan discusses the launch of Claude 4, emphasizing advances in coding and search, and underscores the crucial role of memory management. He explores AI's evolving function, from supportive co-pilot to automating complex tasks, and highlights the need for human oversight in navigating this new frontier.
ANECDOTE

Physicist Turned AI Pioneer

  • Jared Kaplan transitioned from theoretical physics to AI after becoming fascinated by the scaling of intelligence in AI models.
  • His background in physics helped him ask foundational questions that led to important AI scaling discoveries.
INSIGHT

Predictable AI Scaling Laws

  • AI performance improves predictably according to scaling laws in pre-training and reinforcement learning phases.
  • These laws hold consistently over many orders of magnitude in compute, data, and model size.
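The scaling laws Kaplan describes take a power-law form, loss falling smoothly as a function of model size. A minimal sketch of this shape (the constants are illustrative values of the kind reported in published scaling-law fits, not figures from this episode):

```python
def loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Power-law loss curve L(N) = (N_c / N) ** alpha.

    n_c and alpha are assumed illustrative constants; actual values
    depend on the architecture, data, and training setup.
    """
    return (n_c / n_params) ** alpha

# Loss declines smoothly and predictably across orders of magnitude.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> loss {loss(n):.3f}")
```

The key property is that each tenfold increase in parameters shaves off a roughly constant fraction of the loss, which is what makes extrapolation across orders of magnitude possible.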
INSIGHT

AI Task Duration Extends Rapidly

  • AI capabilities improve along two axes: flexibility across modalities and the time horizon of tasks completed.
  • The time horizon for tasks AI can accomplish doubles roughly every seven months, enabling longer, complex workflows.
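A seven-month doubling time compounds quickly. A small sketch of the arithmetic (the one-hour starting horizon is an assumed example, not a figure from the episode):

```python
def task_horizon(months_from_now: float,
                 start_hours: float = 1.0,
                 doubling_months: float = 7.0) -> float:
    """Exponential growth of the task time horizon: it doubles
    every `doubling_months` months from an assumed starting point."""
    return start_hours * 2.0 ** (months_from_now / doubling_months)

# Two doublings (14 months) quadruple the horizon.
print(task_horizon(14))  # 4.0
```

At this rate, the horizon grows by more than 3x per year, so tasks that take hours today would, on this trend, extend to multi-day workflows within a couple of years.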