Dwarkesh Podcast

Dwarkesh Patel
845 snips
Dec 23, 2025 • 12min

An audio version of my blog post, Thoughts on AI progress (Dec 2025)

The discussion delves into the complexities of AI progress and the limitations of current robotics. It highlights skepticism around automated researchers and the challenges of achieving human-like continual learning. The concept of scaling in reinforcement learning is examined, alongside the significant compute needs for advancements. Predictions for the future include the potential for brain-like intelligences and the need for efficient training methods. Lastly, the importance of competition in driving innovation is emphasized.
765 snips
Dec 19, 2025 • 1h 55min

Sarah Paine – Why Russia Lost the Cold War

Sarah Paine, a political scientist and historian specializing in Russian affairs, discusses the multifaceted reasons behind the Soviet Union’s collapse. She explores Reagan's military strategies, the impact of the Helsinki Accords on Eastern Bloc dissidents, and the Sino-Soviet split. Paine highlights internal issues like economic failures and political reforms under Gorbachev. She also delves into Boris Yeltsin's pivotal role in the dissolution of the USSR. With a nod to current geopolitical tensions, she offers insights on navigating potential new conflicts.
5,410 snips
Nov 25, 2025 • 1h 36min

Ilya Sutskever – We're moving from the age of scaling to the age of research

Join Ilya Sutskever, co-founder of OpenAI and Safe Superintelligence Inc., as he dives into the world of AI and machine learning. He discusses the intriguing concept of model jaggedness, explaining why AI sometimes behaves inconsistently. Ilya contrasts pre-training and reinforcement learning, emphasizing the importance of generalization and the barriers faced by AI. He also explores how emotions can serve as value functions and proposes new strategies for ensuring AGI is aligned with human values. Insights on continual learning and the future of superintelligence add depth to this fascinating conversation.
3,251 snips
Nov 12, 2025 • 1h 28min

Satya Nadella — How Microsoft is preparing for AGI

Satya Nadella, CEO of Microsoft, shares insights on the groundbreaking Fairwater 2 datacenter, capable of massive AI training jobs. He discusses how Microsoft is adapting its business models for AGI, emphasizing AI's transformative potential and sustainable growth. Nadella highlights innovations like GitHub Copilot, the importance of data diversity, and the balance between research and product development. Dylan Patel, founder of SemiAnalysis, probes into Microsoft's hyperscale strategy, custom silicon plans, and the significance of trust in cloud commitments.
819 snips
Oct 31, 2025 • 1h 31min

Sarah Paine – How Russia sabotaged China's rise

In a riveting discussion, military historian Sarah Paine delves into how Stalin's machinations significantly hindered China's rise for over a century. She explores the complexities of Russo-Chinese relations, detailing how Soviet interventions delayed China's control over crucial territories like Manchuria. Paine explains the impact of the Sino-Soviet split and draws parallels between historical dynamics and current events, including Chinese support for Russia in Ukraine. Her insights offer a fresh perspective on the geopolitical chessboard of the past and present.
9,567 snips
Oct 17, 2025 • 2h 25min

Andrej Karpathy — AGI is still a decade away

Andrej Karpathy, a leading AI researcher and former Tesla Autopilot head, shares insights on the future of artificial general intelligence (AGI) and education. He discusses why AGI will likely take another decade to mature, highlighting the inefficiencies of reinforcement learning compared to other methods. He critiques the slow progress of self-driving technology, attributing it to safety requirements. Karpathy also emphasizes the importance of integrating AI into education, proposing a model that combines expert faculty and AI assistance for personalized learning.
1,562 snips
Oct 10, 2025 • 1h 20min

Nick Lane – Life as we know it is chemically inevitable

In this conversation, Nick Lane, an evolutionary biochemist at University College London, dives into the origins of life and the role of eukaryotes. He suggests that early life may have emerged from hydrothermal vents, explaining why life relies on proton gradients and why complex cells evolved only once. Lane discusses how two sexes evolved for mitochondrial quality control and how early life’s chemistry implies its prevalence across the galaxy. He connects these theories to the large-scale patterns seen in eukaryotic evolution and challenges listeners to embrace scientific curiosity.
846 snips
Oct 4, 2025 • 12min

Some thoughts on the Sutton interview

Explore the intriguing world of reinforcement learning as the discussion dives into the limitations of human-furnished environments for AI. Imitation learning emerges as a key tool, complementing traditional methods and enabling continual learning. The fascinating analogy of pre-training as fossil fuel underscores its necessity in AI development. Cultural learning is compared to human imitation, revealing the complexities involved. Finally, challenges in continual learning and practical solutions for LLMs highlight the ongoing evolution in AI technology.
2,935 snips
Sep 26, 2025 • 1h 6min

Richard Sutton – Father of RL thinks LLMs are a dead end

Richard Sutton, a leading researcher in reinforcement learning and 2024 Turing Award winner, argues that large language models (LLMs) are a dead end. He believes LLMs can't learn on-the-job and emphasizes the need for a new architecture enabling continual learning like animals do. The discussion touches on how LLMs perform imitation instead of genuine experiential learning, and why instilling goals is vital for intelligence. Sutton critiques the predictive nature of LLMs, advocating for a future where AI learns from real-world interactions rather than fixed datasets.
2,575 snips
Sep 12, 2025 • 1h 28min

Fully autonomous robots are much closer than you think – Sergey Levine

Sergey Levine, a top robotics researcher and co-founder of Physical Intelligence, believes we are on the verge of a robotic revolution by 2030. He discusses how we can pave the way for self-improving general-purpose robots that could manage our households autonomously. From the societal impacts of full automation to the challenges of scaling robotics technology, Levine emphasizes the need for proactive planning. He also explores the synergy between language models and robotics, predicting significant innovations that could transform industry and daily life.
