Dean Pleban and Liron Itzhaki Allerhand explore what it really takes to move LLMs into production. They cover how to define clear requirements, prep data for RAG, engineer effective prompts, and evaluate model performance using concrete metrics. The conversation dives into managing sensitive data, avoiding leakage, and why crisp outputs and clear user intent matter. Plus: future trends like in-context learning and the decoupling of foundation models from vertical apps.
Join our Discord community:
https://discord.gg/tEYvqxwhah
---
Timestamps:
- 00:00 Introduction
- 01:48 Phases of LLM Project Development
- 03:32 Defining the Problem
- 09:35 Data Preparation and Understanding
- 23:59 Multimodal RAG
- 26:28 Prompt Engineering & Model Selection
- 27:58 Model Fine-tuning & Customization
- 33:18 LLM as a Judge
- 38:58 Evaluating Model Performance and Handling Hallucinations
- 41:02 Using LLMs with Sensitive Data
- 45:24 Other Ideas for Model Evaluation and Guardrails
- 49:28 Recommendations for the Audience
➡️ Liron Itzhaki Allerhand on LinkedIn – https://www.linkedin.com/in/liron-izhaki-allerhand-16579b4/
🌐 Check Out Our Website! https://dagshub.com

Social Links:
➡️ LinkedIn: https://www.linkedin.com/company/dagshub
➡️ Twitter: https://x.com/TheRealDAGsHub
➡️ Dean Pleban: https://x.com/DeanPlbn