

The MAD Podcast with Matt Turck
Matt Turck
The MAD Podcast with Matt Turck is a series of conversations with leaders from across the Machine Learning, AI, & Data landscape, hosted by leading AI & data investor and FirstMark Capital Partner Matt Turck.
Episodes

Jan 22, 2026 • 1h 4min
The End of GPU Scaling? Compute & The Agent Era — Tim Dettmers (Ai2) & Dan Fu (Together AI)
Tim Dettmers, an assistant professor at Carnegie Mellon University, and Dan Fu, an assistant professor at UC San Diego, dive deep into the future of AI compute. They debate the limits of current hardware against the untapped potential of more efficient utilization: Tim warns of physical constraints like the von Neumann bottleneck, while Dan argues that optimized kernels can still unlock large performance gains. The conversation also covers how agents can boost productivity, with practical advice on using them effectively for work automation and innovation in AI architectures.

Jan 15, 2026 • 45min
The Evaluators Are Being Evaluated — Pavel Izmailov (Anthropic/NYU)
Pavel Izmailov, a research scientist at Anthropic and an NYU professor, delves into AI behavior and safety. He discusses the intriguing idea of models developing 'alien survival instincts' and explores deceptive behaviors in AI. Pavel introduces his new concept, epiplexity, which challenges traditional information theory, and highlights the importance of scalable oversight and the potential of multi-agent systems. Looking ahead to 2026, he anticipates remarkable advances in reasoning and collaborations that could reshape the future of AI.

Dec 18, 2025 • 55min
DeepMind Gemini 3 Lead: What Comes After "Infinite Data"
In his first podcast interview, Sebastian Borgeaud, a pre-training lead at Google DeepMind, shares insights from the groundbreaking Gemini 3 project. He discusses the shift from an 'infinite data' approach to a data-limited era, emphasizing the importance of curation and evaluation. Sebastian highlights how scaling laws are evolving and why continual learning is crucial for future AI advancements. He also touches on the challenges of benchmarks, the complexities of multimodal data, and advocates for a full-stack understanding in AI research.

Nov 26, 2025 • 1h 5min
What’s Next for AI? OpenAI’s Łukasz Kaiser (Transformer Co-Author)
Łukasz Kaiser, a leading researcher at OpenAI and co-author of the influential 'Attention Is All You Need' paper, delves into the latest advancements in AI, including GPT-5.1. He explains the steady exponential growth in AI capabilities, the significance of reasoning models, and how modern chat models utilize tools to enhance their performance. Kaiser also discusses the messy reality of engineering challenges, the future of pre-training, and why even cutting-edge models can struggle with simple logic puzzles. His journey from academia to shaping AI innovation offers a personal touch.

Nov 20, 2025 • 1h 28min
Open Source AI Strikes Back — Inside Ai2’s OLMo 3 ‘Thinking’
Nathan Lambert and Luca Soldaini from Ai2 dive into the OLMo 3 release, showcasing their fully transparent approach to open-source AI. They discuss the significance of releasing comprehensive model data and the distinctions between base, instruct, and thinking models. The conversation covers how Meta's retreat from the open-source space has coincided with the rise of Chinese models. Nathan and Luca also explore the challenges of reasoning in AI, emphasizing the need for U.S. innovation and broader engagement in shaping AI's future.

Nov 6, 2025 • 1h 6min
Intelligence Isn’t Enough: Why Energy & Compute Decide the AGI Race – Eiso Kant
Eiso Kant, co-founder and Co-CEO of Poolside, discusses advances in AI infrastructure at Project Horizon, the company's massive build-out in West Texas. He explains why owning energy and compute resources matters more than relying on hyperscalers. Eiso also describes his work on Reinforcement Learning to Learn (RL2L), which aims to learn from web behaviors, and how continuous learning can enhance AI capabilities. Additionally, he outlines Poolside’s unusual hiring strategy and its commitment to community impact through sustainable practices.

Oct 30, 2025 • 1h 3min
State of AI 2025 with Nathan Benaich: Power Deals, Reasoning Breakthroughs, Real Revenue
Join Nathan Benaich, Founder of Air Street Capital and author of the insightful State of AI report, as he unpacks the current AI landscape. He reveals that power, not GPUs, is becoming the critical bottleneck for AI advancement. Discover how reasoning models are revolutionizing science and the shift from theoretical to practical robotics. Nathan also discusses the real revenue flow in the AI sector, examines NVIDIA's dominance versus custom chips, and shares his investment predictions in fields like biology and defense.

Oct 23, 2025 • 1h 10min
Are We Misreading the AI Exponential? Julian Schrittwieser on Move 37 & Scaling RL (Anthropic)
Julian Schrittwieser, a senior AI researcher at Anthropic and former member of DeepMind's AlphaGo team, discusses the exponential growth in AI capabilities. He highlights potential breakthroughs in AI, predicting agents could work autonomously by 2026 and possibly achieve Nobel-level discoveries by 2027-2028. Julian delves into the integration of pre-training and reinforcement learning, challenges in AI alignment, and the importance of broader access in tech. He emphasizes gradual productivity gains and the effects on jobs in various sectors.

Oct 16, 2025 • 1h 16min
How GPT-5 Thinks — OpenAI VP of Research Jerry Tworek
Join Jerry Tworek, VP of Research at OpenAI, as he dives into the fascinating world of AI reasoning. Discover how GPT-5 evolves from earlier models, emphasizing the crucial roles of pretraining and reinforcement learning. Jerry explains the mechanics of chain-of-thought reasoning, the significance of agentic tools like Codex, and the importance of robust collaboration in research. He even shares insights from his personal journey from math and trading to cutting-edge AI research. Could pretraining combined with RL be key to achieving AGI? Tune in to find out!

Oct 2, 2025 • 1h 10min
Sonnet 4.5 & the AI Plateau Myth — Sholto Douglas (Anthropic)
Sholto Douglas, AI researcher at Anthropic and former Google engineer, delves into the innovations of Claude Sonnet 4.5, claiming we're mere years away from AI matching human capabilities. He explains how reinforcement learning has suddenly made a breakthrough and how agents can maintain coherence during long coding sessions. Sholto also discusses the cultural differences across major AI labs, Anthropic's focused approach to coding, and the profound implications of AI's upcoming exponential progress, especially in economics and robotics.


