
Shreya Shankar

Researcher and product expert focused on AI evals and qualitative error analysis. Co-teaches the top-rated evals course and has published research on validation and rubric drift. Specializes in open/axial coding, building evaluation workflows, and translating data into product improvements.

Top 3 podcasts with Shreya Shankar

Ranked by the Snipd community
1,879 snips
Sep 25, 2025 • 1h 47min

Why AI evals are the hottest new skill for product builders | Hamel Husain & Shreya Shankar (creators of the #1 eval course)

Hamel Husain, an AI product educator and consultant, and Shreya Shankar, a researcher and product expert, share their insights on AI evals. They explain why evals are essential for AI product builders, walk through error analysis techniques, and discuss the trade-offs between code-based evaluations and LLM judges. Listeners will learn practical tips for implementing evals with minimal time investment and common pitfalls to avoid. The duo also highlights the importance of systematic measurement in improving AI product effectiveness.
117 snips
Jul 11, 2025 • 1h 35min

The PM’s Role in AI Evals: Step-by-Step

Join Hamel Husain and Shreya Shankar, the AI experts behind the top-rated AI Evals cohort, as they dive into the essentials of AI evaluations for Product Managers. They explain why these evaluations are crucial for successful AI product development and highlight common pitfalls to avoid. Discover the concept of 'hill climbing' for enhancing AI performance, and learn how over-reliance on subjective measures can lead to problems like hallucination. Their insights provide a valuable blueprint for mastering AI evaluations.
Jan 15, 2026 • 1h 6min

How to Do AI Evals Step-by-Step with Real Production Data | Tutorial by Hamel Husain and Shreya Shankar

Hamel Husain and Shreya Shankar, experienced instructors in AI evals, share their expertise on building reliable production AI. They discuss the critical importance of systematic evaluations over simple demos, emphasizing real-world error analysis. Listeners learn about analyzing real traces, identifying UX failures, and refining categories for actionable insights. The duo highlights the need for tailored evaluations and proper validations, advocating for structured methodologies that prioritize high-impact issues. This engaging tutorial is a must for aspiring PMs in AI.
