Vanishing Gradients

Hugo Bowne-Anderson
Sep 9, 2025 • 1h 1min

Episode 58: Building GenAI Systems That Make Business Decisions with Thomas Wiecki (PyMC Labs)

Thomas Wiecki, founder of PyMC Labs and co-author of PyMC, dives into how generative AI can shape business decisions. He discusses using large language models as synthetic consumers to test product ideas, showing how AI can be far more efficient than traditional surveys. Thomas emphasizes Bayesian modeling's role in providing trustworthy insights and navigating complex data. His work with Colgate highlights the iterative design of AI systems for better product and marketing strategies, and he urges a balance between model innovation and reliability.
Aug 29, 2025 • 41min

Episode 57: AI Agents and LLM Judges at Scale: Processing Millions of Documents (Without Breaking the Bank)

Shreya Shankar, a PhD candidate at UC Berkeley with experience at Google Brain and Facebook, dives into the world of AI agents and document processing. She explains how LLMs can handle vast amounts of data efficiently while maintaining accuracy and keeping costs in check. Topics include the importance of human review of errors, the work involved in turning ad hoc LLM workflows into reliable pipelines, and the trade-offs between cheap and expensive models. Shreya also discusses how guardrails and structured approaches can improve LLM outputs in real-world applications.
Aug 14, 2025 • 46min

Episode 56: DeepMind Just Dropped Gemma 270M... And Here’s Why It Matters

Ravin Kumar, a researcher at Google DeepMind, dives into the newly launched Gemma 270M, the smallest member of the Gemma 3 family of AI models. He explains its efficiency and speed, which make it well suited to on-device use cases where privacy and latency are crucial. Kumar discusses the strategic advantages of smaller models for fine-tuning and targeted tasks, emphasizing their potential to drive broader AI adoption. Listeners will learn how to apply Gemma 270M to specific applications and how it compares with larger models across diverse scenarios.
Aug 12, 2025 • 38min

Episode 55: From Frittatas to Production LLMs: Breakfast at SciPy

Join Eric Ma, who heads research data science at Moderna, as he discusses the wild world of AI systems over breakfast at SciPy. He reveals why 'perfect' testing can lead you astray and introduces three key personas in AI development, each with unique blind spots. Discover how curiosity can elevate builders from good to great, and learn about maintaining observability in both development and production. Eric also shares insights on fostering experimentation in large organizations, embracing the chaos that comes with creating thriving AI products.
Jul 18, 2025 • 41min

Episode 54: Scaling AI: From Colab to Clusters — A Practitioner’s Guide to Distributed Training and Inference

Zach Mueller, who leads Accelerate at Hugging Face, shares his expertise on scaling AI from cozy Colab environments to powerful clusters. He explains how to get started with just a couple of GPUs, debunks myths about performance bottlenecks, and discusses practical strategies for training on a budget. Zach emphasizes that every ML engineer should understand distributed systems, and underscores how these skills can make a significant impact on a career. Tune in for actionable insights and demystifying tips!
Jul 8, 2025 • 45min

Episode 53: Human-Seeded Evals & Self-Tuning Agents: Samuel Colvin on Shipping Reliable LLMs

Samuel Colvin, creator of Pydantic and founder of Logfire, discusses the often-overlooked challenges of AI reliability. He emphasizes that durability, not flashy demos, is what matters, and shows how tight feedback loops can significantly sharpen performance insights. Colvin introduces concepts like prompt self-repair systems and drift alarms, which catch shifts before they become problems. He advocates for business-driven metrics that keep features aligned with real goals, making AI not just functional but dependable in real-world applications.
Jul 2, 2025 • 29min

Episode 52: Why Most LLM Products Break at Retrieval (And How to Fix Them)

Eric Ma, who leads data science research at Moderna, dives into the challenge of aligning retrieval with user intent in LLM-powered systems. He argues that most features fail not at the model level but at the context level. Eric explains how a simple YAML-based approach can outperform complex pipelines and discusses the pitfalls of vague user queries. He also emphasizes the importance of evolving retrieval workflows alongside user needs, and weighs when intuition is sufficient versus when formal evaluation is needed to refine these systems.
Jun 26, 2025 • 48min

Episode 51: Why We Built an MCP Server and What Broke First

In this discussion, Philip Carter, Product Management Director at Salesforce and former Principal PM at Honeycomb, shares insights on creating LLM-powered features. He explains the nuances of integrating real production data with these systems, and dives into the challenges of tool use, prompt templates, and flaky model behavior. He also discusses building an MCP server that enhances observability in AI systems, emphasizing its role in improving user experience and navigating the pitfalls of SaaS product development.
Jun 17, 2025 • 28min

Episode 50: A Field Guide to Rapidly Improving AI Products -- With Hamel Husain

Hamel Husain, an AI specialist with experience at Airbnb, GitHub, and DataRobot, discusses improving AI products through effective evaluation. He highlights the importance of error analysis and systematic iteration in development. The conversation covers common pitfalls in debugging AI systems and stresses collaboration between engineers and domain experts as a driver of progress. Hamel also emphasizes that evaluation should be a comprehensive process, balancing immediate fixes with strategic assessment. A must-listen for anyone working to improve an AI system.
Jun 5, 2025 • 1h 22min

Episode 49: Why Data and AI Still Break at Scale (and What to Do About It)

Akshay Agrawal, founder of Marimo and former Google Brain researcher, discusses the critical challenges of AI at scale. He emphasizes the need for robust infrastructure, not just better models. The conversation covers the importance of reproducibility and the shortcomings of traditional tools. Akshay introduces Marimo's design for modular AI applications and the difficulties of debugging large language models. Live demos illustrate Marimo's capabilities in data extraction and agentic workflows, merging technical insight with cultural reflections on data science.
