
Vanishing Gradients
A podcast about all things data, brought to you by data scientist Hugo Bowne-Anderson.
It's time for more critical conversations about the challenges in our industry so we can build better compasses for the solution space! To this end, this podcast consists of long-form conversations between Hugo and other people who work broadly in the data science, machine learning, and AI spaces. We'll dive deep into all the moving parts of the data world, so if you're new to the space, you'll have an opportunity to learn from the experts. And if you've been around for a while, you'll find out what's happening in many other parts of the data world.
Latest episodes

Jun 26, 2025 • 48min
Episode 51: Why We Built an MCP Server and What Broke First
In this discussion, Philip Carter, Product Management Director at Salesforce and former Principal PM at Honeycomb, shares insights on creating LLM-powered features. He explains the nuances of integrating real production data with these systems. Carter dives into the challenges of tool use, prompt templates, and flaky model behavior. He also discusses the development of the innovative MCP server that enhances observability in AI systems, emphasizing its role in improving user experience and navigating the pitfalls of SaaS product development.

Jun 17, 2025 • 28min
Episode 50: A Field Guide to Rapidly Improving AI Products -- With Hamel Husain
Hamel Husain, an AI specialist with experience at Airbnb, GitHub, and DataRobot, discusses improving AI products through effective evaluation. He highlights the importance of error analysis and systematic iteration in development. The conversation reveals common pitfalls in debugging AI systems, stressing collaboration between engineers and domain experts to drive progress. Hamel also emphasizes that evaluation should be a comprehensive process, balancing immediate fixes with strategic assessment. This episode is a must-listen for anyone working to improve AI systems.

Jun 5, 2025 • 1h 22min
Episode 49: Why Data and AI Still Break at Scale (and What to Do About It)
Akshay Agrawal, founder of Marimo and former Google Brain researcher, discusses the critical challenges faced in AI at scale. He emphasizes the need for robust infrastructure over just improved models. The conversation covers the importance of reproducibility and the shortcomings of traditional tools. Akshay introduces Marimo's innovative design that addresses modular AI applications and the difficulties in debugging large language models. Live demos illustrate Marimo's capabilities in data extraction and agentic workflows, merging technical insights with cultural reflections in data science.

May 23, 2025 • 1h 4min
Episode 48: How to Benchmark AGI with Greg Kamradt
If we want to make progress toward AGI, we need a clear definition of intelligence—and a way to measure it.
In this episode, Hugo talks with Greg Kamradt, President of the ARC Prize Foundation, about ARC-AGI: a benchmark built on François Chollet’s definition of intelligence as “the efficiency at which you learn new things.” Unlike most evals that focus on memorization or task completion, ARC is designed to measure generalization—and expose where today’s top models fall short.
They discuss:
🧠 Why we still lack a shared definition of intelligence
🧪 How ARC tasks force models to learn novel skills at test time
📉 Why GPT-4-class models still underperform on ARC
🔎 The limits of traditional benchmarks like MMLU and Big-Bench
⚙️ What the OpenAI o3 results reveal—and what they don’t
💡 Why generalization and efficiency, not raw capability, are key to AGI
Greg also shares what he’s seeing in the wild: how startups and independent researchers are using ARC as a North Star, how benchmarks shape the frontier, and why the ARC team believes we’ll know we’ve reached AGI when humans can no longer write tasks that models can’t solve.
This conversation is about evaluation—not hype. If you care about where AI is really headed, this one’s worth your time.
LINKS
ARC Prize -- What is ARC-AGI?
On the Measure of Intelligence by François Chollet
Greg Kamradt on Twitter
Hugo's High Signal Podcast with Fei-Fei Li
Vanishing Gradients YouTube Channel
Upcoming Events on Luma
Hugo's recent newsletter about upcoming events and more!
Watch the podcast here on YouTube!
🎓 Want to go deeper?
Check out Hugo's course: Building LLM Applications for Data Scientists and Software Engineers.
Learn how to design, test, and deploy production-grade LLM systems — with observability, feedback loops, and structure built in.
This isn’t about vibes or fragile agents. It’s about making LLMs reliable, testable, and actually useful.
Includes over $800 in compute credits and guest lectures from experts at DeepMind, Moderna, and more.
Cohort starts July 8. Use this link for a 10% discount.

Apr 7, 2025 • 1h 19min
Episode 47: The Great Pacific Garbage Patch of Code Slop with Joe Reis
Joe Reis, co-author of Fundamentals of Data Engineering and critic of 'vibe coding,' engages in a thought-provoking discussion about the impact of AI on software development. He highlights the dangers of coding by intuition without structure, exploring the balance between innovation and traditional practices. The conversation examines the implications of AI tools on technical debt, security risks, and the evolution of production standards. Moreover, Reis reflects on the importance of craftsmanship and the learning curve in an age of disposable code.

Apr 3, 2025 • 1h 9min
Episode 46: Software Composition Is the New Vibe Coding
Greg Ceccarelli, co-founder of SpecStory and ex-CPO at Pluralsight, dives into the groundbreaking concept of software composition, likening it to musical composition. He discusses how AI and LLMs facilitate vibe coding, making programming more intuitive and accessible. The conversation reveals the democratizing power of these tools, emphasizing intent over traditional coding and the collaborative potential they unleash. Greg also addresses the challenges of evolving technologies in data science and the importance of balancing creativity with robust practices in software development.

Feb 20, 2025 • 1h 18min
Episode 45: Your AI application is broken. Here’s what to do about it.
Joining the discussion is Hamel Husain, a seasoned ML engineer and open-source contributor, who shares invaluable insights on debugging generative AI systems. He emphasizes that understanding data is key to fixing broken AI applications. Hamel advocates for spreadsheet error analysis over complex dashboards. He also highlights the pitfalls of trusting LLM judges blindly and critiques existing AI dashboard metrics. His practical methods will transform how developers approach model performance and iteration in AI.

Feb 4, 2025 • 1h 34min
Episode 44: The Future of AI Coding Assistants: Who’s Really in Control?
Tyler Dunn, CEO and co-founder of Continue, discusses the transformative role of open-source AI coding assistants. He delves into the crucial balance between developer control and AI capabilities, highlighting how customization can empower software engineers. The conversation covers the evolution from autocomplete to intelligent code suggestions and the future of fine-tuning AI models on personalized data. Dunn emphasizes the importance of integration, tailored experiences, and maintaining trust as developers navigate the ever-evolving landscape of AI in coding.

Jan 16, 2025 • 1h 1min
Episode 43: Tales from 400+ LLM Deployments: Building Reliable AI Agents in Production
Hugo chats with Alex Strick van Linschoten, a Machine Learning Engineer at ZenML, who has documented over 400 real-world LLM deployments. They discuss the challenges in deploying AI agents, like hallucinations and cascading failures. Alex reveals practical lessons from corporate giants like Anthropic and Klarna, focusing on structured workflows that enhance reliability. He highlights the evolution of LLM capabilities and shares case studies that underscore the importance of prompt engineering and effective error handling in building robust AI systems.

Jan 4, 2025 • 1h 20min
Episode 42: Learning, Teaching, and Building in the Age of AI
In this discussion, Alex Andorra, host of the Learning Bayesian Statistics podcast and an expert in Bayesian stats and sports analytics, joins Hugo to explore the intersection of AI, education, and product development. They reveal how Bayesian thinking aids in overcoming challenges in AI applications and the critical importance of iteration and first principles. The conversation also highlights the influence of commercial interests on experimentation, the evolution of teaching methods in tech, and the intricate world of deploying AI with LLMs.