
MLOps.community: Fine-Tuned Models Are Getting Out of Hand
Nov 3, 2025
Jaipal Singh Goud, a Solutions Architect at Prem AI, dives into the world of fine-tuning small language models for personalized AI agents. He discusses the contrast between general LLMs and company-specific models, addressing privacy and data-control concerns. Jaipal also explores the complementary roles of fine-tuning and RAG systems in improving queries. He emphasizes the importance of observing users to capture the decision-making patterns worth fine-tuning on, and envisions a future with countless personalized models, dynamically chosen for each task.
AI Snips
Fine-Tuning Guides Effective RAG Retrieval
- Fine-tuning and RAG are complementary: RAG retrieves fast-changing facts, while fine-tuned models guide what to query.
- Fine-tuned models excel at framing the right retrieval questions over enterprise context, as sketched below.
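
A minimal sketch of that split, assuming a hypothetical fine-tuned query-framing checkpoint ("your-org/slm-query-framer" is a placeholder) and a toy in-memory document set; the episode does not prescribe these libraries or names.

```python
# Sketch: a fine-tuned SLM frames the retrieval query, RAG fetches the facts.
# The fine-tuned checkpoint name is a placeholder, not from the episode.
from transformers import pipeline
from sentence_transformers import SentenceTransformer, util

# Fine-tuned SLM that knows the company's domain and rewrites a vague
# request into a precise retrieval query.
query_framer = pipeline("text-generation", model="your-org/slm-query-framer")

embedder = SentenceTransformer("all-MiniLM-L6-v2")
documents = [
    "Q3 pricing policy: enterprise discounts are capped at 15%.",
    "Incident runbook: escalate P1 outages to the on-call SRE within 5 minutes.",
]
doc_embeddings = embedder.encode(documents, convert_to_tensor=True)

def retrieve_context(user_request: str, k: int = 1) -> list[str]:
    # 1. The fine-tuned model decides WHAT to ask the retriever.
    framed = query_framer(
        f"Rewrite as a retrieval query: {user_request}",
        max_new_tokens=32,
        return_full_text=False,
    )[0]["generated_text"]
    # 2. RAG retrieves the fast-changing facts the model should not memorize.
    query_emb = embedder.encode(framed, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, doc_embeddings, top_k=k)[0]
    return [documents[hit["corpus_id"]] for hit in hits]
```

The design choice: the fine-tuned model only decides what to ask, while the retriever stays the source of truth for facts that change faster than you would want to retrain.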
 
Slow Data Encodes Decision Patterns
- Slow data captures the decision-making patterns and trade-offs you make repeatedly.
- Fine-tuned models emulate those decision processes for mission-critical workflows (one way to encode such data is sketched below).
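
One way to picture that: a minimal sketch that turns a log of repeated decisions into supervised fine-tuning pairs. The record fields (context, decision, rationale) and the output file are illustrative assumptions, not something specified in the episode.

```python
# Sketch: convert logged "slow data" (repeated decisions with their context
# and rationale) into supervised fine-tuning pairs. Field names and the
# output file are illustrative assumptions.
import json

decision_log = [
    {
        "context": "Vendor A is 10% cheaper but has missed two SLAs this quarter.",
        "decision": "Renew with Vendor B.",
        "rationale": "Reliability outweighs a 10% saving for mission-critical workloads.",
    },
]

# Each pair teaches the model the decision process (the trade-off and its
# justification), not just the final answer.
with open("decision_patterns.jsonl", "w") as f:
    for row in decision_log:
        example = {
            "prompt": f"Situation: {row['context']}\nWhat do we do, and why?",
            "completion": f"{row['decision']} Reasoning: {row['rationale']}",
        }
        f.write(json.dumps(example) + "\n")
```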
 
Train Small Models For Role Processes
- Use small language models (1–7B parameters) for process fine-tuning because they're cheap and fully trainable.
- Run RL and full fine-tuning on SLMs to instill deep, role-specific behaviors affordably (a full fine-tuning sketch follows below).
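
A rough sketch of what full-parameter fine-tuning looks like at that scale, using Hugging Face Trainer. The base checkpoint (Qwen/Qwen2.5-0.5B) and the single training example are placeholders, the RL step mentioned above is left out, and none of these specifics come from the episode.

```python
# Sketch: full fine-tuning of a small causal LM; every parameter is updated,
# which stays affordable at the sub-7B scale. Checkpoint, data and
# hyperparameters are placeholders.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "Qwen/Qwen2.5-0.5B"  # stand-in for whichever 1-7B SLM you pick
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Role-specific process examples, e.g. the decision-pattern pairs built above.
texts = [
    "Situation: Vendor A is cheaper but misses SLAs.\n"
    "Decision: Renew with Vendor B. Reasoning: reliability outweighs a 10% saving.",
]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=512)
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="slm-role-ft",
        num_train_epochs=1,
        per_device_train_batch_size=1,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # full fine-tune, no adapters: all weights learn the role's process
```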
 
