ThursdAI - The top AI news from the past week

📆 ThursdAI - Dec 4, 2025 - DeepSeek V3.2 Goes Gold Medal, Mistral Returns to Apache 2.0, OpenAI Hits Code Red, and US-Trained MOEs Are Back!

Dec 5, 2025
Lucas Atkins, CTO of Arcee AI and a leading builder of US-trained MoE models, dives into the launch of the Trinity models and their enterprise implications. He highlights the importance of compliant training in model development, explaining why MoE inference is efficient and where scaling it gets hard. The conversation shifts to the competitive benchmarks of DeepSeek V3.2, which shows exceptional performance. Insights on the latest AI integrations wrap up the discussion, emphasizing real-world applications and the rapid evolution of AI technology.
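Since MoE inference efficiency is a recurring theme in the episode, here is a minimal, illustrative sketch of top-k expert routing, the mechanism behind that efficiency. Every size below (hidden dimension, expert count, top-k) is a made-up toy value, not the configuration of Trinity, Mistral 3, or any model discussed; the point is only that a router activates a few experts per token, so per-token compute scales with active parameters rather than total parameters.

```python
# Toy top-k MoE layer: only the routed experts' weights do work for a
# given token. Sizes are illustrative, not any real model's config.
import numpy as np

rng = np.random.default_rng(0)

D = 64          # hidden size (toy value)
N_EXPERTS = 8   # total experts in the layer (toy value)
TOP_K = 2       # experts activated per token (toy value)

# Each expert is a small feed-forward block: D -> 4D -> D.
experts = [
    (rng.standard_normal((D, 4 * D)) * 0.02,
     rng.standard_normal((4 * D, D)) * 0.02)
    for _ in range(N_EXPERTS)
]
router = rng.standard_normal((D, N_EXPERTS)) * 0.02  # token -> expert logits

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token in x (tokens, D) through its top-k experts."""
    logits = x @ router                               # (tokens, N_EXPERTS)
    topk = np.argsort(logits, axis=-1)[:, -TOP_K:]    # chosen expert indices
    # Softmax over only the selected experts' logits to get gate weights.
    sel = np.take_along_axis(logits, topk, axis=-1)
    gates = np.exp(sel - sel.max(-1, keepdims=True))
    gates /= gates.sum(-1, keepdims=True)

    out = np.zeros_like(x)
    for t in range(x.shape[0]):                       # per token
        for gate, e in zip(gates[t], topk[t]):        # only TOP_K experts run
            w1, w2 = experts[e]
            out[t] += gate * (np.maximum(x[t] @ w1, 0.0) @ w2)  # ReLU FFN
    return out

tokens = rng.standard_normal((4, D))
y = moe_layer(tokens)

total_params = sum(w1.size + w2.size for w1, w2 in experts)
active_params = TOP_K * (experts[0][0].size + experts[0][1].size)
print(f"output shape: {y.shape}")
print(f"total expert params: {total_params:,}")
print(f"active per token:    {active_params:,} "
      f"({active_params / total_params:.0%} of total)")
```

With these toy numbers, only 25% of the expert parameters touch each token; production MoE models use far more experts, pushing the active fraction much lower, which is why a model like a 675B-parameter MoE can serve tokens at a fraction of its nominal cost.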
INSIGHT

Reasoning-First Open Models Are Real

  • DeepSeek V3.2 Speciale focuses on deep reasoning and achieves gold-medal-level olympiad results that rival closed frontier models.
  • The Speciale variant intentionally omits tool-calling to optimize long-form reasoning performance.
INSIGHT

Mistral Returns With Apache 2.0

  • Mistral 3 returns with fully Apache 2.0-licensed weights and a 675B-parameter MoE architecture with a 256K context window.
  • Because it is an instruction-tuned (non-reasoning) model, its performance profile differs, and its benchmark placement should be compared against other non-reasoning models.
INSIGHT

OpenAI's 'Code Red' Reaction

  • OpenAI entered 'code red' after Gemini 3's launch reportedly caused a drop in daily active users, sparking dedicated war rooms and pausing side projects.
  • Rumors point to an internal project, 'Garlic', targeting improved coding and reasoning efficiency in 2026.