
No Priors AI: OpenAI's Game-Changing Agents – Worth the Hype?
Oct 13, 2025
OpenAI is shaking up the AI landscape with its release of open models, the first in five years. The conversation dives into whether these new agents signify progress or pose potential risks. Performance assessments on Codeforces, and the implications of tool use in benchmarks, highlight crucial findings. Plus, there's intriguing talk on hallucination rates and the company's stance on sharing training data. Microsoft's integration of these models into Windows adds another layer of innovation, promising exciting advancements for developers.
OpenAI Returns To Open Models
- OpenAI released two models, its first open models in five years.
- The release addresses long-standing criticism about OpenAI's move away from open-source roots.
Open Model Is Competitive On Coding Benchmarks
- The larger GPT OSS 120B scored near OpenAI's closed models on Codeforces, showing competitive coding ability.
- Benchmark results are reported both with and without external tools, a distinction that significantly affects scores.
Tools Drive Many Benchmark Gains
- 'Tools' in these benchmarks refers to external capabilities, such as calculators and code execution, that materially boost performance.
- OpenAI did not release those proprietary tools with the open models, so tool-augmented benchmark scores aren't immediately reproducible by users (see the sketch below).
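To make the with/without-tools distinction concrete, here is a minimal sketch of a benchmark harness scored both ways. This is not OpenAI's evaluation code: the stub model, the `run_python` tool, and the tiny arithmetic benchmark are all hypothetical stand-ins used only to show why the two numbers diverge.

```python
# Hypothetical harness illustrating tool-augmented vs. tool-free scoring.
# None of this is OpenAI's actual eval tooling, which was not released.

def run_python(expr: str) -> str:
    """Stand-in 'code execution' tool: evaluates an arithmetic expression."""
    return str(eval(expr, {"__builtins__": {}}))  # restricted eval, demo only

def model_answer(question: str, tools=None) -> str:
    """Stub model: guesses unaided, but delegates to a code tool if given one."""
    if tools and "python" in tools:
        return tools["python"](question)  # exact arithmetic comes from the tool
    return "100000"  # plausible-but-wrong unaided guess

# Tiny hypothetical benchmark: (question, gold answer) pairs.
BENCHMARK = [("317 * 412", "130604"), ("9999 + 1", "10000")]

for use_tools in (False, True):
    tools = {"python": run_python} if use_tools else None
    score = sum(model_answer(q, tools) == gold for q, gold in BENCHMARK)
    print(f"tools={use_tools}: {score}/{len(BENCHMARK)} correct")
```

The asymmetry is the point: with the tool wired in, part of the measured capability comes from the tool rather than the model weights, which is why tool-augmented scores can't be reproduced without the same tools.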
