DataFramed

#300 End to End AI Application Development with Maxime Labonne, Head of Post-training at Liquid AI & Paul-Emil Iusztin, Founder at Decoding ML

May 5, 2025
Maxime Labonne, a Senior Staff Machine Learning Scientist at Liquid AI, and Paul-Emil Iusztin, founder of Decoding ML, delve into the world of end-to-end AI application development. They tackle the complexities of deploying AI models, from the nuances of fine-tuning to building the RAG feature pipeline. They highlight strategies for effective problem definition and the importance of scalable solutions, and also cover managing deployment costs and the balance between safety and model customization, making technical material accessible and engaging.
INSIGHT

Fine-Tuning Overuse Misconception

  • Fine-tuning is often overused; few-shot prompting backed by the right data pipeline can solve many of the same problems.
  • Many teams jump straight to fine-tuning, which is rarely the best choice unless it is genuinely required.
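The few-shot alternative mentioned above can be sketched in plain Python: instead of fine-tuning a model on labeled data, the same labeled examples are assembled into the prompt itself. This is a minimal, hypothetical illustration — the task, examples, and `format_prompt` helper are not from the episode.

```python
# Hypothetical sketch: few-shot prompting as an alternative to fine-tuning.
# Labeled examples go into the prompt instead of into a training run.

def format_prompt(examples, query):
    """Assemble a few-shot prompt: labeled examples followed by the new query."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    # Leave the final label blank for the model to complete.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

examples = [
    ("The battery lasts all day.", "positive"),
    ("It broke after one week.", "negative"),
]

prompt = format_prompt(examples, "Setup was quick and painless.")
print(prompt)
```

The resulting string would be sent to any chat or completion endpoint; swapping in better examples from a data pipeline changes behavior without any retraining.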
ADVICE

Start Custom, Use Frameworks Sparingly

  • Avoid starting complex projects with frameworks like LangChain; write custom code from day zero for better data handling.
  • Use frameworks only for quick prototypes to test ideas before building scalable solutions.
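The "custom code from day zero" advice can be made concrete with a small example: a pipeline step like document chunking is a few lines of plain Python, which keeps the data handling transparent instead of hidden behind a framework abstraction. The function name and parameters below are illustrative, not from the episode.

```python
# Hypothetical sketch: a custom document chunker for a retrieval pipeline,
# written directly rather than imported from a framework.

def chunk_text(text, size=200, overlap=50):
    """Split text into overlapping character chunks of at most `size` chars."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, keeping `overlap` chars shared
    return chunks

doc = "word " * 100  # 500-character toy document
chunks = chunk_text(doc, size=200, overlap=50)
print(len(chunks), len(chunks[0]))  # 4 chunks; the first is 200 chars
```

Owning this code makes it trivial to adjust chunk boundaries, logging, or preprocessing for production, whereas a framework's chunker would be swapped in only to prototype quickly.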
ANECDOTE

Complementary AI Engineering Roles

  • Paul manages model integration, from infrastructure scaling to pre/post-processing refinement for production.
  • Maxime fine-tunes base models and hands them off to engineers like Paul for deployment and improvement.