Too many teams are building AI applications without truly understanding why their models fail. Instead of jumping straight to LLM evaluations, dashboards, or vibe checks, how do you actually fix a broken AI app?
In this episode, Hugo speaks with Hamel Husain, longtime ML engineer, open-source contributor, and consultant, about why debugging generative AI systems starts with looking at your data.
We dive into:
Why “look at your data” is the best debugging advice no one follows.
How spreadsheet-based error analysis can uncover failure modes faster than complex dashboards.
The role of synthetic data in bootstrapping evaluation.
When to trust LLM judges—and when they’re misleading.
Why most AI dashboards that measure truthfulness, helpfulness, and conciseness are a waste of time.
If you're building AI-powered applications, this episode will change how you approach debugging, iteration, and improving model performance in production.
LINKS
The podcast livestream on YouTube (https://youtube.com/live/Vz4--82M2_0?feature=share)
Hamel's blog (https://hamel.dev/)
Hamel on Twitter (https://x.com/HamelHusain)
Hugo on Twitter (https://x.com/hugobowne)
Vanishing Gradients on Twitter (https://x.com/vanishingdata)
Vanishing Gradients on YouTube (https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA)
Vanishing Gradients on Lu.ma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
Building LLM Applications for Data Scientists and SWEs, Hugo's course on Maven (use code VG25 for 25% off) (https://maven.com/s/course/d56067f338)
Hugo is also running a free lightning lesson next week on LLM Agents: When to Use Them (and When Not To) (https://maven.com/p/ed7a72/llm-agents-when-to-use-them-and-when-not-to?utm_medium=ll_share_link&utm_source=instructor)
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit hugobowne.substack.com.