Vanishing Gradients

Hugo Bowne-Anderson
Jan 23, 2026 • 1h 29min

Episode 68: A Builder’s Guide to Agentic Search & Retrieval with Doug Turnbull & John Berryman

Join search guru Doug Turnbull, who shaped systems at Reddit and Shopify, and John Berryman, the brain behind GitHub Copilot, as they dive into the future of agentic search. They explore the evolution from traditional search to agentic retrieval, spotlighting John's five-level maturity model for AI adoption. Learn why understanding user intent is paramount and discover practical steps to create your own agentic loops. They also share insights on avoiding common pitfalls in search design, emphasizing the importance of real user feedback.
Jan 14, 2026 • 1h 18min

Episode 67: Saving Hundreds of Hours of Dev Time with AI Agents That Learn

Eleanor Berger and Isaac Flaath, co-founders of Elite AI Assisted Coding, delve into the future of software development with AI. They explore how agents can maintain living documentation using simple markdown files, enhancing ongoing learning. Discover the power of specification-first planning to define success, while automated tech debt audits keep projects in check. The duo emphasizes the importance of accountability and clear communication in teamwork. With insights on using agents for routine tasks, they reveal how to save developers hundreds of hours!
Jan 8, 2026 • 43min

Episode 66: The Agent Paradox - Why Moderna's Most Productive AI Systems Aren't Agents

Eric Ma, a research data science leader at Moderna specializing in AI for biotech, discusses the surprising finding that Moderna's most productive AI systems are built on reliable workflows rather than autonomous agents. He emphasizes the importance of mapping permissions in regulated environments and the risks of data leaks from LLM execution traces. Offloading "janitorial" tasks to AI can improve efficiency, but Eric advises starting with simpler tools to reduce risk. He also highlights the need for evaluation rigor that matches the stakes involved in biotech applications.
Dec 19, 2025 • 52min

Episode 65: The Rise of Agentic Search

Jeff Huber, CEO and co-founder of Chroma, dives into the fascinating world of agentic search, transforming how we approach AI and information retrieval. He discusses the importance of 'context engineering' for reliable AI systems and how context rot complicates this. Huber explains the concept of the 'agent harness' for advanced tools and the necessity of hybrid search to maintain balance. With insights on best practices for builders and the challenges in agent evaluation, this conversation illuminates the evolving landscape of AI search.
Dec 3, 2025 • 1h 3min

Episode 64: Data Science Meets Agentic AI with Michael Kennedy (Talk Python)

In this discussion, Michael Kennedy, a seasoned Python developer and educator known for his insights on AI and software practices, tackles the myth of complexity in tech. He shares how to simplify production Python systems, emphasizing the importance of the 'Docker barrier' for cost-effective self-hosting. The conversation explores how Agentic AI is shifting development mindsets and enhancing efficiency. Michael also stresses the value of struggling through learning and the need for complementary skills in navigating the evolving tech landscape.
Nov 22, 2025 • 1h

Episode 63: Why Gemini 3 Will Change How You Build AI Agents with Ravin Kumar (Google DeepMind)

Ravin Kumar, a researcher at Google DeepMind specializing in generative models and LLM products, joins to discuss the groundbreaking Gemini 3. They illustrate how models can 'self-heal' and adapt, reshaping software development. Topics include the transition from basic tool calling to advanced agent harnesses, the contrast between deterministic workflows and high-agency systems, and the importance of robust evaluation infrastructure. Ravin also shares insights on the evolution of product features like Audio Overviews and the future of multimodal agents.
Oct 31, 2025 • 59min

Episode 62: Practical AI at Work: How Execs and Developers Can Actually Use LLMs

Dr. Randall Olson, co-founder of Wyrd Studios and AI strategist, dives into practical AI applications that can unlock immediate value for businesses. He discusses how non-technical leaders can quickly prototype tools using ChatGPT, emphasizing the significance of starting small with achievable tasks. Randall urges a disciplined approach to AI evaluation akin to software testing, highlights overlooked opportunities for automation, and advocates for iterative experimentation to foster innovation in the workplace. Transforming mundane problems into streamlined solutions is key!
Oct 16, 2025 • 28min

Episode 61: The AI Agent Reliability Cliff: What Happens When Tools Fail in Production

In a fascinating discussion, Alex Strick van Linschoten, a machine learning engineer at ZenML and curator of the LLM Ops Database, delves into the complexities of multi-agent systems. He emphasizes the dangers of introducing too many agents, advocating for simplicity and reliability. Alex shares key insights from nearly 1,000 real-world deployments, highlighting the importance of MLOps hygiene, human-in-the-loop strategies, and using basic programming checks over costly LLM judges. His practical advice on scaling down systems is a must-listen for AI developers!
Sep 30, 2025 • 1h 13min

Episode 60: 10 Things I Hate About AI Evals with Hamel Husain

Hamel Husain, a machine learning engineer and evals expert, discusses the pitfalls of AI evaluations and how to adopt a data-centric approach for reliable results. He outlines ten critical mistakes teams make, debunking ineffective metrics like 'hallucination scores' in favor of tailored analytics. Hamel shares a workflow for effective error analysis, including involving domain experts wisely and avoiding hasty automation. Bryan Bischof joins as a guest to introduce the 'Failure as a Funnel' concept, emphasizing focused debugging for complex AI systems.
Sep 23, 2025 • 48min

Episode 59: Patterns and Anti-Patterns For Building with AI

In this engaging discussion, John Berryman, Founder of Arcturus Labs and an early engineer on GitHub Copilot, dives into the real-world challenges of building AI applications. He highlights the 'seven deadly sins' of LLM development, offering practical solutions to keep projects moving. John explains why aspiring for perfect accuracy may hinder progress and shares insights on context management and retrieval debugging. Treating an LLM like a forgetful intern, he emphasizes starting simply and avoiding unnecessary complexity for successful deployment.