
Unsupervised Learning with Jacob Effron AI Vibe Check: The Actual Bottleneck In Research, SSI’s Mystique, & Spicy 2026 Predictions
Dec 18, 2025

Ari Morcos, a research scientist focused on model interpretability, and Rob Toews, a tech investor, dive into the AI landscape post-NeurIPS. They discuss whether AI models are plateauing, how near-infinite lab capital constrains innovation, and the paradox that U.S. chip restrictions may accelerate China's self-sufficiency. Ari argues that the real bottleneck in AI research is compute, not ideas; Rob predicts major shifts in OpenAI's leadership by 2026; and both foresee a Chinese open-source model taking center stage.
AI Snips
LLMs Are Hitting Diminishing Returns
- Models, especially consumer LLMs, show signs of plateauing after the big leaps from GPT-1 to GPT-4.
- Foundational limits, such as poor continual learning and low sample efficiency, explain the slowdown in incremental gains.
Don't Conflate AI With Only LLMs
- The community is overly myopic on language models while other modalities (e.g., video) rapidly progress.
- Excluding LLMs, models broadly are not plateauing and still have major headroom.
RL Works In A Narrow Sweet Spot
- Reinforcement learning helps when models sit in a Goldilocks zone of capability, where tasks are neither too easy nor too hard.
- RL is powerful but not a panacea, and it must be integrated with the other stages of training.


