

Interconnects
Nathan Lambert
Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai
Episodes

31 snips
Oct 25, 2025 • 10min
Burning out
The conversation kicks off by dissecting the intense work culture in AI, contrasting performative overwork with the grim realities it spawns. Comparisons to historical industry pressures reveal high stakes, including mental strain and even fatalities. As researchers clock 100-hour weeks, the discussion turns to the balance between rest and creativity. Nathan highlights talent constraints as the new bottleneck to progress. With the ongoing race for marginal gains, he warns that only the most committed will keep pace, emphasizing the need for long-term vision and self-care.

68 snips
Oct 20, 2025 • 13min
How to scale RL
Explore the exciting world of scaling reinforcement learning as Nathan dives into the challenges and opportunities ahead. Discover the groundbreaking ScaleRL paper, which predicts learning curves and outlines the critical constants influencing RL performance. Learn how recent algorithmic advancements, like truncated importance sampling, are revolutionizing the field. Plus, gain insights into Pipeline RL's systems improvements that minimize GPU idle time. This is a journey into refining RL experimentation and boosting efficiency!
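For a concrete picture of the technique named above, here is a minimal sketch of truncated importance sampling applied to a policy-gradient loss. It assumes a PyTorch setup with log-probabilities from the current and behavior policies plus advantage estimates; the function name, variable names, and the truncation constant clip_c are illustrative assumptions, not the exact formulation used in the episode or the ScaleRL paper.

# A minimal sketch of truncated importance sampling for an off-policy
# policy-gradient update, as used when the sampling policy lags behind
# the learner (e.g. in asynchronous RL pipelines). Names and the
# threshold clip_c are illustrative assumptions, not the episode's exact recipe.
import torch

def truncated_is_pg_loss(logp_new, logp_old, advantages, clip_c=1.0):
    """Policy-gradient loss with importance ratios truncated at clip_c.

    logp_new:   log-probs of sampled actions under the current policy
    logp_old:   log-probs of the same actions under the behavior policy
    advantages: advantage estimates for those actions
    clip_c:     upper bound (truncation constant) on the importance ratio
    """
    # Importance ratio pi_new(a|s) / pi_old(a|s), computed in log space.
    ratio = torch.exp(logp_new - logp_old.detach())
    # Cap the ratio to bound the variance of the off-policy estimator.
    truncated_ratio = torch.clamp(ratio, max=clip_c)
    # Standard policy-gradient objective, weighted by the truncated ratio.
    return -(truncated_ratio * advantages.detach()).mean()

# Toy usage with random tensors standing in for a real rollout batch.
if __name__ == "__main__":
    batch = 8
    logp_new = torch.randn(batch, requires_grad=True)
    logp_old = torch.randn(batch)
    advantages = torch.randn(batch)
    loss = truncated_is_pg_loss(logp_new, logp_old, advantages)
    loss.backward()
    print(float(loss))

Capping the ratio keeps the variance of the gradient estimate bounded when generations come from a slightly stale policy, which is exactly the situation asynchronous systems like Pipeline RL create.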

78 snips
Oct 16, 2025 • 47min
The State of Open Models
Explore the exciting shifts in the open model landscape from 2024 to 2025! Discover how China's models built a cumulative lead, how Qwen rose to prominence, and why Llama strategically retreated. Delve into the implications of fine-tuned variants and community engagement. Concerns over security and the call for a transparent U.S. response add to the urgency. Plus, audience Q&As reveal insights on the competition between U.S. and Chinese models and the future of open-source initiatives. It's a fascinating look at where AI is headed!

75 snips
Oct 7, 2025 • 12min
Thoughts on The Curve
A recent conference sparked vibrant debates on AI timelines and progress. Key discussions included the plausible automation of research engineers within 3–7 years, and experts cautioned against assuming any particular sequence of AI development is inevitable. The complexity of models and tools was explored, revealing both potential productivity boosts and challenges. Predictions point to both significant advances and periods of stagnation in AI capabilities ahead, while the geopolitical implications of model standards were highlighted. The urgent need for proactive policies regarding open models was also emphasized.

68 snips
Sep 30, 2025 • 9min
ChatGPT: The Agentic App
The podcast dives into the monetization of ChatGPT, exploring the challenges of shopping and advertising. Special focus is on the recent launch of the 'Buy It' feature and the innovative Agentic Commerce Protocol. There’s a discussion about the precision needed for AI to shop effectively and the advances in search capabilities that make this possible. It also highlights the growing trend of model specialization and the potential of agentic apps to transform daily tasks. Caution is advised regarding AI's impact on marketplaces, yet optimism shines through.

127 snips
Sep 22, 2025 • 9min
Thinking, Searching, and Acting
Explore the evolution of reasoning models and how they've outgrown the limitations of early AI like ChatGPT. Delve into the three core primitives: thinking, searching, and acting, each pivotal for future advancements. Uncover the nuances of hallucinations framed as context problems and the ongoing debate between open and closed tool ecosystems. Nathan highlights the impact of tokenomics and hardware trends on AI efficiency, emphasizing that these innovations are set to revolutionize our interaction with technology.

81 snips
Sep 18, 2025 • 16min
Coding as the epicenter of AI progress and the path to general agents
Coding is identified as the last tractable frontier for AI, one where consistent improvements remain highly valuable. Unlike chat and mathematics, coding is proving to be the most useful domain for AI advancements. Real-world applications show agents as powerful tools that enhance workflows, acting as persistent editorial assistants. The discussion includes hands-on performance comparisons between coding agents and traditional methods, highlighting how agents handle complex coding issues with ease and are transforming the way we interact with technology.

39 snips
Sep 9, 2025 • 14min
On China's open source AI trajectory
Dive into China's ambitious open-source AI strategies and its quest for global influence. The podcast highlights government efforts to bolster national AI capabilities and the geopolitical ramifications of these developments. It discusses the pride within Chinese tech sectors, the ongoing pursuit of self-sufficiency, and the innovative spirit driving this evolution. Listeners can ponder how initiatives like the AI Plus plan could reshape the international AI landscape.

69 snips
Aug 17, 2025 • 13min
Ranking the Chinese Open Model Builders
China is surging ahead in the AI race with groundbreaking open model releases this summer. The discussion highlights the top 19 labs, including the impressive DeepSeek, known for its high-quality models. Emerging players are also making waves, contributing to a rapidly evolving ecosystem. With standout releases like Qwen 3 and Kimi K2, the landscape is a blend of established and new innovators. The future looks promising as these labs are set to rival their Western counterparts, keeping AI enthusiasts on their toes.

72 snips
Aug 15, 2025 • 10min
Contra Dwarkesh on Continual Learning
The discussion centers on the concept of continual learning in AI and its implications for true artificial general intelligence. One thought-provoking argument suggests that continual learning may not be the primary bottleneck in AI advancement; instead, the focus should be on scaling existing systems. The conversation also pushes back on claims about the limitations of current large language models, asking why, despite their capabilities, they haven't yet transformed Fortune 500 workflows.


