Beyond Transformers: Maxime Labonne on Post-Training, Edge AI, and the Liquid Foundation Model Breakthrough

Chain of Thought

Lessons from LFM1 to LFM2: Compatibility and Small-Model Expertise

Maxime reflects on inference tooling, operator compatibility, data quality, customized training techniques, and small-model knowledge limits.

