
Vanishing Gradients
A podcast about all things data, brought to you by data scientist Hugo Bowne-Anderson.
It's time for more critical conversations about the challenges in our industry in order to build better compasses for the solution space! To this end, this podcast will consist of long-format conversations between Hugo and other people who work broadly in the data science, machine learning, and AI spaces. We'll dive deep into all the moving parts of the data world, so if you're new to the space, you'll have an opportunity to learn from the experts. And if you've been around for a while, you'll find out what's happening in many other parts of the data world.
Latest episodes

Jul 9, 2024 • 1h 36min
Episode 31: Rethinking Data Science, Machine Learning, and AI
In this discussion, Vincent Warmerdam, a senior data professional at :probabl, challenges conventional data science approaches with innovative insights. He emphasizes the importance of real-world problem exposure and effective visualization. The conversation dives into framing problems accurately and determining if algorithms truly solve them. Vincent advocates for simple models, discusses the role of UI in data science tools, and examines the potential and limitations of LLMs. He highlights the need for community knowledge sharing through blogging and open dialogue.

Jun 26, 2024 • 1h 15min
Episode 30: Lessons from a Year of Building with LLMs (Part 2)
Explore insights from Eugene Yan, Bryan Bischof, Charles Frye, Hamel Husain, and Shreya Shankar on building end-to-end systems with LLMs, the experimentation mindset for AI products, strategies for building trust in AI, the importance of data examination, and evaluation strategies for professionals. These lessons apply broadly to data science, machine learning, and product development.

Jun 26, 2024 • 1h 30min
Episode 29: Lessons from a Year of Building with LLMs (Part 1)
Experts from Amazon, Hex, Modal, Parlance Labs, and UC Berkeley share lessons learned from working with Large Language Models. They discuss the importance of evaluation and monitoring in LLM applications, data literacy in AI, the fine-tuning dilemma, real-world insights, and the evolving roles of data scientists and AI engineers.

Jun 9, 2024 • 1h 6min
Episode 28: Beyond Supervised Learning: The Rise of In-Context Learning with LLMs
Alan Nichol, Co-founder and CTO of Rasa, shares insights on using LLMs in chatbots, the evolution of conversational AI, and the challenges of supervised learning. He emphasizes the importance of balancing traditional techniques with new advancements. The podcast also includes a live demo of Rasa's CALM system, showcasing the separation of business logic from language models for reliable conversational flow execution.

May 31, 2024 • 1h 32min
Episode 27: How to Build Terrible AI Systems
Jason Liu, an independent consultant specializing in recommendation systems, discusses building AI applications, his playbook for ML, and how to avoid common pitfalls. The conversation takes an inverted approach: by examining how to build terrible AI systems, it reveals how to prevent failures. The episode also explores consulting across industries, future tooling, and creating robust AI systems.

May 15, 2024 • 1h 52min
Episode 26: Developing and Training LLMs From Scratch
Sebastian Raschka discusses developing and training large language models (LLMs) from scratch, covering topics like prompt engineering, fine-tuning, and RAG systems. They explore the skills, resources, and hardware needed, the lifecycle of LLMs, live coding to create a spam classifier, and the importance of hands-on experience. They also touch on using PyTorch Lightning and Fabric for managing large models, and share insights on techniques in natural language processing models and evaluating LLMs for classification problems.

Mar 18, 2024 • 1h 21min
Episode 25: Fully Reproducible ML & AI Workflows
Omoju Miller, a machine learning expert and CEO of Fimio, shares her vision for transparent and reproducible ML workflows. She discusses the necessity of open tools and data in combating the monopolization of tech by closed-source APIs. Topics include the evolution of developer tools, the importance of data provenance, and the potential of a collaborative open compute ecosystem. Omoju also emphasizes user accessibility in machine learning and envisions a future where everyone can build production-ready applications with ease.

Feb 27, 2024 • 1h 30min
Episode 24: LLM and GenAI Accessibility
Hugo and Johno discuss the evolution of tooling and accessibility in AI over the past decade, highlighting advancements in using big models from Hugging Face and high-resolution satellite data. They delve into the generative AI mindset, democratizing deep learning with fast.ai, and the importance of UX in generative AI applications. The discussion also covers the skill set needed to work effectively with LLMs and AI, as well as efforts at answer.ai to democratize LLMs and foundation models.

Dec 20, 2023 • 1h 21min
Episode 23: Statistical and Algorithmic Thinking in the AI Age
Allen Downey discusses statistical paradoxes and fallacies in using data, including the base rate fallacy and algorithmic fairness. They dive into examples like COVID vaccination data and explore the challenges of interpreting statistical information correctly. The conversation also covers topics such as epidemiological paradoxes, Gaussian distributions, and the importance of understanding biases in data interpretation for media consumption.

Nov 27, 2023 • 1h 20min
Episode 22: LLMs, OpenAI, and the Existential Crisis for Machine Learning Engineering
Jeremy Howard (Fast.ai), Shreya Shankar (UC Berkeley), and Hamel Husain (Parlance Labs) join Hugo Bowne-Anderson to talk about how LLMs and OpenAI are changing the worlds of data science, machine learning, and machine learning engineering.
Jeremy Howard is co-founder of fast.ai, an ex-Chief Scientist at Kaggle, and creator of the ULMFiT approach on which all modern language models are based. Shreya Shankar is at UC Berkeley and was previously at Google Brain, Facebook, and Viaduct. Hamel Husain runs his own generative AI and LLM consultancy, Parlance Labs, and was previously at Outerbounds, GitHub, and Airbnb.
They talk about:
How LLMs shift the nature of the work we do in DS and ML,
How they change the tools we use,
The ways in which they could displace the role of traditional ML (e.g. will we stop using xgboost any time soon?),
How to navigate all the new tools and techniques,
The trade-offs between open and closed models,
Reactions to OpenAI's recent DevDay and the growing existential crisis for ML.
LINKS
The panel on YouTube
Hugo and Jeremy's upcoming livestream on what the hell happened recently at OpenAI, among many other things
Vanishing Gradients on YouTube
Vanishing Gradients on Twitter