
Chain of Thought Beyond Transformers: Maxime Labonne on Post-Training, Edge AI, and the Liquid Foundation Model Breakthrough
Nov 12, 2025
Maxime Labonne, Head of Post-Training at Liquid AI and creator of a popular LLM course, dives into the future of AI architectures. He explains how Liquid AI's hybrid models combine attention with convolutional layers for efficiency on edge devices, and discusses the pivotal role of post-training and synthetic data in maximizing model capabilities. He also shares insights on small on-device models, creative applications, and the challenges of function calling, making a complex slice of AI evolution both relatable and accessible.
AI Snips
Hybrid Architecture For Edge Efficiency
- Liquid's LFM2 uses a hybrid of attention and short convolution layers to optimize inference speed, latency, and memory use on edge devices.
- The design improves inference speed, long-context memory use, and output quality without relying on large parameter counts (see the sketch after this list).
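The episode does not spell out LFM2's internals, so the following is only a minimal PyTorch sketch of what a hybrid stack of gated short-convolution blocks with occasional attention blocks might look like. All module names, dimensions, and the conv-to-attention ratio are illustrative assumptions, not Liquid AI's actual architecture.

```python
import torch
import torch.nn as nn

class ShortConvBlock(nn.Module):
    """Gated short (depthwise) convolution: cheap, local, causal token mixing."""
    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.in_proj = nn.Linear(dim, 2 * dim)               # value + gate
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size - 1, groups=dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x):                                    # x: (batch, seq, dim)
        residual = x
        v, g = self.in_proj(self.norm(x)).chunk(2, dim=-1)
        # Trim the right side so each position only sees current and past tokens.
        v = self.conv(v.transpose(1, 2))[..., : x.size(1)].transpose(1, 2)
        return residual + self.out_proj(v * torch.sigmoid(g))

class AttentionBlock(nn.Module):
    """Standard causal self-attention for global token mixing."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        h = self.norm(x)
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1)).to(x.device)
        out, _ = self.attn(h, h, h, attn_mask=mask)
        return x + out

class HybridBackbone(nn.Module):
    """Mostly-convolutional stack with an attention layer every few blocks."""
    def __init__(self, dim: int = 256, depth: int = 8, attn_every: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            AttentionBlock(dim) if (i + 1) % attn_every == 0 else ShortConvBlock(dim)
            for i in range(depth)
        )

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

if __name__ == "__main__":
    tokens = torch.randn(2, 128, 256)                        # (batch, seq, dim)
    print(HybridBackbone()(tokens).shape)                    # torch.Size([2, 128, 256])
```

The intuition behind this layout is that short convolutions handle most token mixing at constant memory per token, while the few attention layers preserve long-range retrieval; the exact interleaving is a design choice found by benchmarking, as the next snip describes.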
Validate Speed On Target Hardware
- Measure operator performance on the target hardware early, to avoid theoretical speed claims that fail in practice.
- Optimize models on a real device (e.g., a Samsung phone) and run many pre-training benchmarks to converge on the best architecture; a minimal timing sketch follows this list.
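The point of the snip is that kernel-level speed has to be measured rather than assumed. As a rough illustration, here is a small timing harness (my own sketch, not Liquid AI's tooling) that could be run on the actual target device to compare operators such as a short depthwise convolution against full attention at a realistic sequence length.

```python
import time
import statistics
import torch

@torch.inference_mode()
def measure_latency(fn, example_input, warmup: int = 10, runs: int = 50):
    """Median wall-clock latency (ms) of a forward pass on whatever device
    `fn` and `example_input` already live on."""
    for _ in range(warmup):                      # warm caches, allocator, kernels
        fn(example_input)
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(example_input)
        timings.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(timings), statistics.stdev(timings)

if __name__ == "__main__":
    seq, dim = 512, 256                          # illustrative sizes, not LFM2's
    x = torch.randn(1, seq, dim)
    conv = torch.nn.Conv1d(dim, dim, kernel_size=3, padding=1, groups=dim)
    attn = torch.nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    med, sd = measure_latency(lambda t: conv(t.transpose(1, 2)), x)
    print(f"short conv : {med:.2f} ms +/- {sd:.2f}")
    med, sd = measure_latency(lambda t: attn(t, t, t), x)
    print(f"attention  : {med:.2f} ms +/- {sd:.2f}")
```

Run on a laptop the numbers mean little; the advice in the episode is precisely that the same comparison has to be repeated on the phone or edge board the model will actually ship on.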
The AI Bike Hackathon Win
- At a Tokyo hackathon, a team fine-tuned LFM2 into a vision-language model for an AI bike that role-played as the bike and refused to answer unrelated questions.
- Maxime initially dismissed the idea but called their work 'absolute genius' after seeing the demo.

