
Vanishing Gradients
A podcast about all things data, brought to you by data scientist Hugo Bowne-Anderson.
It's time for more critical conversations about the challenges in our industry in order to build better compasses for the solution space! To this end, this podcast will consist of long-format conversations between Hugo and other people who work broadly in the data science, machine learning, and AI spaces. We'll dive deep into all the moving parts of the data world, so if you're new to the space, you'll have an opportunity to learn from the experts. And if you've been around for a while, you'll find out what's happening in many other parts of the data world.
Latest episodes

Apr 7, 2025 • 1h 19min
Episode 47: The Great Pacific Garbage Patch of Code Slop with Joe Reis
Joe Reis, co-author of Fundamentals of Data Engineering and critic of 'vibe coding,' engages in a thought-provoking discussion about the impact of AI on software development. He highlights the dangers of coding by intuition without structure, exploring the balance between innovation and traditional practices. The conversation examines the implications of AI tools on technical debt, security risks, and the evolution of production standards. Moreover, Reis reflects on the importance of craftsmanship and the learning curve in an age of disposable code.

Apr 3, 2025 • 1h 9min
Episode 46: Software Composition Is the New Vibe Coding
Greg Ceccarelli, co-founder of SpecStory and ex-CPO at Pluralsight, dives into the groundbreaking concept of software composition, likening it to musical composition. He discusses how AI and LLMs facilitate vibe coding, making programming more intuitive and accessible. The conversation reveals the democratizing power of these tools, emphasizing intent over traditional coding and the collaborative potential they unleash. Greg also addresses the challenges of evolving technologies in data science and the importance of balancing creativity with robust practices in software development.

Feb 20, 2025 • 1h 18min
Episode 45: Your AI application is broken. Here’s what to do about it.
Joining the discussion is Hamel Husain, a seasoned ML engineer and open-source contributor, who shares invaluable insights on debugging generative AI systems. He emphasizes that understanding your data is the key to fixing broken AI applications, and advocates for simple spreadsheet-based error analysis over complex dashboards. He also highlights the pitfalls of trusting LLM judges blindly and critiques common AI dashboard metrics. His practical methods give developers a concrete way to evaluate model performance and iterate on AI systems.

Feb 4, 2025 • 1h 34min
Episode 44: The Future of AI Coding Assistants: Who’s Really in Control?
Tyler Dunn, CEO and co-founder of Continue, discusses the transformative role of open-source AI coding assistants. He delves into the crucial balance between developer control and AI capabilities, highlighting how customization can empower software engineers. The conversation covers the evolution from autocomplete to intelligent code suggestions and the future of fine-tuning AI models on personalized data. Dunn emphasizes the importance of integration, tailored experiences, and maintaining trust as developers navigate the ever-evolving landscape of AI in coding.

Jan 16, 2025 • 1h 1min
Episode 43: Tales from 400+ LLM Deployments: Building Reliable AI Agents in Production
Hugo chats with Alex Strick van Linschoten, a Machine Learning Engineer at ZenML, who has documented over 400 real-world LLM deployments. They discuss the challenges in deploying AI agents, like hallucinations and cascading failures. Alex reveals practical lessons from corporate giants like Anthropic and Klarna, focusing on structured workflows that enhance reliability. He highlights the evolution of LLM capabilities and shares case studies that underscore the importance of prompt engineering and effective error handling in building robust AI systems.

Jan 4, 2025 • 1h 20min
Episode 42: Learning, Teaching, and Building in the Age of AI
In this discussion, Alex Andorra, host of the Learning Bayesian Statistics podcast and an expert in Bayesian stats and sports analytics, joins Hugo to explore the intersection of AI, education, and product development. They reveal how Bayesian thinking aids in overcoming challenges in AI applications and the critical importance of iteration and first principles. The conversation also highlights the influence of commercial interests on experimentation, the evolution of teaching methods in tech, and the intricate world of deploying AI with LLMs.

Dec 30, 2024 • 44min
Episode 41: Beyond Prompt Engineering: Can AI Learn to Set Its Own Goals?
Ben Taylor, CEO of VEOX Inc., Joe Reis, co-founder of Ternary Data, and Juan Sequeda, Principal Scientist at Data.World, discuss the evolution of AI from prompt engineering to goal engineering. They explore whether generative AI is more akin to the electricity revolution or a passing blockchain-style hype cycle. The panel highlights the importance of tackling the POC-to-production gap, understanding AI's failure modes, and balancing executive enthusiasm with employee workload. They also examine how AI's combinatorial abilities can redefine strategies, paralleling the success of AlphaZero in gaming.

Dec 24, 2024 • 1h 44min
Episode 40: What Every LLM Developer Needs to Know About GPUs
In this conversation with Charles Frye, Developer Advocate at Modal, listeners gain insights into the intricate world of GPUs and their critical role in AI and LLM development. Charles explains the importance of VRAM and how memory can become a bottleneck. They tackle practical strategies for optimizing GPU usage, from fine-tuning to training large models. The discussion also highlights a GPU Glossary that simplifies complex concepts for developers, along with insights on quantization and the economic considerations in using modern hardware for efficient AI workflows.

Nov 25, 2024 • 1h 43min
Episode 39: From Models to Products: Bridging Research and Practice in Generative AI at Google Labs
Hugo chats with Ravin Kumar, a Senior Research Data Scientist at Google Labs, whose career journey includes roles at SpaceX and Sweetgreen. They delve into the balance between technical rigor and practical utility in generative AI. Ravin shares insights on building scalable AI systems, such as using Gemma to optimize bakery operations. He emphasizes the critical role of UX in AI adoption, showcases the Notebook LM tool in action, and explores how AI can aid small businesses—demonstrating the transformative power of accessible technology.

Nov 4, 2024 • 1h 24min
Episode 38: The Art of Freelance AI Consulting and Products: Data, Dollars, and Deliverables
Jason Liu, an independent AI consultant with a background at Meta and Stitch Fix, joins the discussion. He shares insights into structuring valuable consulting contracts and shifting from hourly billing to larger deals. Engaging in a live role-play, Jason coaches the host on effective client interaction and pricing strategies. The conversation also highlights the shift from deterministic to probabilistic AI systems, emphasizing the importance of understanding client motivations and fostering meaningful relationships in the evolving freelance landscape.