

Hamel Husain
AI product practitioner and educator who co-created a popular course on AI evals and consults with companies on building eval-driven AI products. Experienced in error analysis, observability for LLM applications, and practical tooling for production AI systems.
Top 3 podcasts with Hamel Husain
Ranked by the Snipd community

1,368 snips
Sep 25, 2025 • 1h 47min
Why AI evals are the hottest new skill for product builders | Hamel Husain & Shreya Shankar (creators of the #1 eval course)
Hamel Husain, an AI product educator and consultant, and Shreya Shankar, a researcher and product expert, share their insights on AI evals. They explain why evals are essential for AI product builders, walk through error analysis techniques, and discuss the trade-offs between code-based evaluations and LLM judges. Listeners will learn practical tips for implementing evals with minimal time investment and common pitfalls to avoid. The duo also highlights how systematic measurement makes AI products more effective.
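
For a concrete picture of the contrast the episode draws between code-based evaluations and LLM judges, here is a minimal sketch. It is illustrative only, not code from the course: every name in it (EvalResult, the judge prompt, the trace data) is hypothetical, and the LLM call is stubbed rather than wired to a real provider.

```python
# Two eval styles over one production trace: a deterministic code-based
# check and an LLM-judge-style check. Names and data are hypothetical.
from dataclasses import dataclass

@dataclass
class EvalResult:
    name: str
    passed: bool          # binary pass/fail, no numeric score
    notes: str = ""

def code_based_check(output: str) -> EvalResult:
    """Deterministic assertion: cheap, fast, no model call needed."""
    ok = "@" not in output  # e.g. the assistant must never leak email addresses
    return EvalResult("no_email_leak", ok)

def llm_judge_check(question: str, output: str) -> EvalResult:
    """LLM-as-judge: send a rubric to a model, parse a PASS/FAIL verdict.
    Stubbed here; in practice this line would call your LLM provider."""
    judge_prompt = (
        "Does the answer address the question? Reply PASS or FAIL.\n"
        f"Question: {question}\nAnswer: {output}"
    )
    verdict = "PASS"  # placeholder for a real model response
    return EvalResult("answers_question", verdict.strip().upper() == "PASS", judge_prompt)

if __name__ == "__main__":
    trace = {"question": "What is our refund policy?",
             "output": "Refunds are available within 30 days of purchase."}
    for result in (code_based_check(trace["output"]),
                   llm_judge_check(trace["question"], trace["output"])):
        print(f"{result.name}: {'PASS' if result.passed else 'FAIL'}")
```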

91 snips
Sep 30, 2025 • 1h 13min
Episode 60: 10 Things I Hate About AI Evals with Hamel Husain
Hamel Husain, a machine learning engineer and evals expert, discusses the pitfalls of AI evaluations and how a data-centric approach leads to more reliable results. He outlines ten critical mistakes teams make, debunking generic metrics like 'hallucination scores' in favor of measures tailored to the application. Hamel shares a workflow for effective error analysis, including how to involve domain experts wisely and avoid hasty automation. Bryan Bischof joins as a guest to introduce the 'Failure as a Funnel' concept, emphasizing focused debugging for complex AI systems.

39 snips
Sep 28, 2025 • 52min
AI Evaluations Crash Course in 50 Minutes (2025) | Hamel Husain
Hamel Husain, an expert in AI evaluation methods, shares his experience training PMs and engineers at top tech firms. He breaks down how to analyze real production traces effectively, emphasizing the power of binary pass/fail ratings over complex scoring systems. Hamel explains common pitfalls in evaluation metrics and introduces practical tools for continuous monitoring. Listeners gain insights into building simple annotation tools and into grounding evaluations in real problems to drive meaningful improvements.
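
The binary-rating idea from this episode is simple enough to sketch. The snippet below is an illustration, not Hamel's actual tooling: the trace data, field names, and failure modes are all hypothetical, standing in for annotations a reviewer would produce during error analysis.

```python
# Binary pass/fail annotation of production traces: each trace gets a
# True/False verdict per failure mode, and aggregate failure rates show
# where to focus. All data here is hypothetical.
from collections import defaultdict

# True means the reviewer observed that failure mode in the trace;
# False means the trace passed that check.
traces = [
    {"id": "t1", "failures": {"hallucination": True,  "wrong_tone": False}},
    {"id": "t2", "failures": {"hallucination": False, "wrong_tone": True}},
    {"id": "t3", "failures": {"hallucination": True,  "missing_citation": False}},
]

def failure_rates(traces: list[dict]) -> dict[str, float]:
    """Aggregate binary annotations into a per-failure-mode failure rate."""
    failed, checked = defaultdict(int), defaultdict(int)
    for trace in traces:
        for mode, observed in trace["failures"].items():
            checked[mode] += 1
            failed[mode] += observed  # True counts as 1
    return {mode: failed[mode] / checked[mode] for mode in checked}

# Rank failure modes by frequency to decide what to fix first.
for mode, rate in sorted(failure_rates(traces).items(), key=lambda kv: -kv[1]):
    print(f"{mode}: failed in {rate:.0%} of checked traces")
```

The design choice mirrors the episode's argument: a binary verdict forces the annotator to commit, and the resulting percentages are easier to act on than averaged multi-point scores.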