
Don't Worry About the Vase Podcast: DeepSeek v3.2 Is Okay And Cheap But Slow
Dec 5, 2025
Explore the journey of DeepSeek v3.2 and its mixed reviews. Discover the innovative training techniques and the safety concerns surrounding the release. Dive into community reactions and benchmark performance, with comparisons to other models like Opus and Gemini. Zvi highlights advances in mathematical capability and the trade-offs of choosing affordability over speed and safety. Finally, get a glimpse into the future outlook for this capable but slow model.
AI Snips
Cheap And Strong But Not Frontier
- DeepSeek v3.2 is a strong, cost-efficient open model, but it does not deliver frontier-level performance.
- It excels at math and at training-efficiency innovations, while falling short on speed and on its frontier claims.
The DeepSeek Moment That Frightened Markets
- DeepSeek briefly caused market panic with R1 and an app that misled observers about its real capabilities.
- Viral PR and conflated cost figures created the false impression that DeepSeek had matched the frontier labs.
Technical Gains, Safety Silence
- The paper's main innovation is a new attention mechanism that reduces training cost and context-scaling compute.
- Zvi notes the paper omits any statement about safety testing for this open, irreversible release.
