
Hamel Husain

AI expert and course creator focusing on systematic improvement and measurement of AI applications.

Top 5 podcasts with Hamel Husain

Ranked by the Snipd community
142 snips
Jul 23, 2024 • 1h 20min

Building Real-World LLM Products with Fine-Tuning and More with Hamel Husain - #694

In this discussion, Hamel Husain, founder of Parlance Labs, dives into the practicalities of leveraging large language models (LLMs) for real-world applications. Husain shares insights on fine-tuning techniques, including tools like Axolotl and the advantages of LoRA for efficient model adjustments. He emphasizes the importance of thoughtful user interface design and systematic evaluation strategies to enhance AI's effectiveness. The conversation also highlights challenges in data curation and the need for accurate metrics in domain-specific projects, ensuring robust AI development.
20 snips
Jun 26, 2024 • 1h 30min

Episode 29: Lessons from a Year of Building with LLMs (Part 1)

Experts from Amazon, Hex, Modal, Parlance Labs, and UC Berkeley share lessons learned from working with Large Language Models. They discuss the importance of evaluation and monitoring in LLM applications, data literacy in AI, the fine-tuning dilemma, real-world insights, and the evolving roles of data scientists and AI engineers.
15 snips
Nov 14, 2023 • 1h 8min

Episode 21: Deploying LLMs in Production: Lessons Learned

Guest Hamel Husain, a machine learning engineer, discusses the business value of large language models (LLMs) and generative AI. They cover common misconceptions, necessary skills, and techniques for working with LLMs. The podcast explores the challenges of working with ML software and ChatGPT, the importance of data cleaning and analysis, and deploying LLMs in production with guardrails. They also discuss an AI-powered real estate CRM and optimizing marketing strategies through data analysis.
Feb 20, 2025 • 1h 18min

Episode 45: Your AI application is broken. Here’s what to do about it.

Joining the discussion is Hamel Husain, a seasoned ML engineer and open-source contributor, who shares insights on debugging generative AI systems. He argues that understanding your data is the key to fixing broken AI applications, advocates for simple spreadsheet-based error analysis over complex dashboards, and warns against trusting LLM judges blindly while critiquing common AI dashboard metrics. His practical methods offer developers a clearer approach to model performance and iteration.
Jun 26, 2024 • 1h 15min

Episode 30: Lessons from a Year of Building with LLMs (Part 2)

Eugene Yan, Bryan Bischof, Charles Frye, Hamel Husain, and Shreya Shankar share insights on building end-to-end systems with LLMs, adopting an experimentation mindset for AI products, building trust in AI, the importance of examining your data, and evaluation strategies for practitioners. These lessons apply broadly to data science, machine learning, and product development.