Eye On A.I.

#261 Jonathan Frankle: How Databricks is Disrupting AI Model Training

Jun 12, 2025
In this engaging discussion, Jonathan Frankle, Chief Scientist at Databricks and co-founder of MosaicML, shares insights into innovative AI training techniques. He introduces TAO (Test-time Adaptive Optimization), a method enabling model tuning without expensive labeled data. Jonathan discusses the advantages of synthetic data and reinforcement learning, and how Databricks' reward model enhances performance while minimizing costs. The conversation highlights the potential for transforming AI deployment in enterprises, making it faster and more efficient.
AI Snips
INSIGHT

TAO Enables Label-Free Fine-Tuning

  • Fine-tuning AI models typically requires thousands of labeled examples, which are rarely available naturally.
  • TAO sidesteps this: it fine-tunes effectively from input prompts alone, with no labeled outputs needed (see the sketch below).
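A minimal sketch of the first step such a recipe implies: sampling several candidate responses for each unlabeled prompt. This illustrates the general pattern, not Databricks' TAO implementation; the model name and decoding parameters are assumptions.

```python
# Hypothetical sketch: sample candidate responses for unlabeled prompts.
# Uses Hugging Face transformers; the model name and decoding parameters
# are illustrative assumptions, not details confirmed in the episode.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # assumed stand-in model
tok = AutoTokenizer.from_pretrained(MODEL)
lm = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)

def sample_candidates(prompt: str, n: int = 8) -> list[str]:
    """Draw n diverse completions for one unlabeled prompt."""
    inputs = tok(prompt, return_tensors="pt")
    out = lm.generate(
        **inputs,
        do_sample=True,            # diversity matters: a scorer picks the best later
        temperature=0.8,
        max_new_tokens=256,
        num_return_sequences=n,
        pad_token_id=tok.eos_token_id,
    )
    # Keep only the generated continuations, not the echoed prompt tokens.
    gen = out[:, inputs["input_ids"].shape[1]:]
    return tok.batch_decode(gen, skip_special_tokens=True)
```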
INSIGHT

Practical Value of RLHF in TAO

  • TAO applies reinforcement learning from human feedback (RLHF) to improve models using inputs that have no labeled outputs.
  • The novelty lies not in the theory but in making the method work reliably for real customers (a generic version of the scoring step is sketched below).
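The episode does not name Databricks' reward model or RL algorithm, so the sketch below shows only the generic pattern these insights imply: score each sampled candidate with a reward model and keep the highest-scoring one. The reward model used here is an assumed public example.

```python
# Hypothetical sketch: score candidates with a reward model and keep the
# best one per prompt. The reward model below is an assumed public example;
# the episode does not identify Databricks' actual reward model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

RM = "OpenAssistant/reward-model-deberta-v3-large-v2"  # assumed example
rm_tok = AutoTokenizer.from_pretrained(RM)
rm = AutoModelForSequenceClassification.from_pretrained(RM)

def best_of_n(prompt: str, candidates: list[str]) -> str:
    """Return the candidate the reward model scores highest."""
    scores = []
    with torch.no_grad():
        for cand in candidates:
            enc = rm_tok(prompt, cand, return_tensors="pt", truncation=True)
            scores.append(rm(**enc).logits[0, 0].item())
    return candidates[scores.index(max(scores))]
```

Best-of-N selection like this is only a simple proxy for the RL step; methods such as PPO or DPO would consume the same reward signal differently.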
INSIGHT

TAO Trades Training Compute for Inference Efficiency

  • TAO spends extra compute on synthetic data generation at training time in order to lower inference costs.
  • Because that compute is paid up front, the tuned model serves at normal inference speed with no added runtime cost (see the sketch below).
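Continuing the sketches above: once the best responses are chosen, an ordinary supervised fine-tune bakes them into the model, so serving needs only a single standard generation call. This rejection-sampling-style fine-tune is a simplified stand-in for whatever update TAO actually performs; `lm` and `tok` are the model and tokenizer from the first sketch.

```python
# Hypothetical sketch: supervised fine-tuning on the selected
# (prompt, best_response) pairs, reusing `lm` and `tok` from above.
# All the extra compute happens once, at training time.
import torch
from torch.optim import AdamW

optimizer = AdamW(lm.parameters(), lr=1e-5)
lm.train()

def tune_step(prompt: str, response: str) -> float:
    """One gradient step; the loss covers only the response tokens."""
    prompt_len = tok(prompt, return_tensors="pt")["input_ids"].shape[1]
    full = tok(prompt + response, return_tensors="pt")["input_ids"]
    labels = full.clone()
    labels[:, :prompt_len] = -100   # -100 = ignored by the cross-entropy loss
    loss = lm(input_ids=full, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# After tuning, inference is one normal generate() call: the cost of
# sampling and scoring was paid entirely up front.
```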