“Trust me bro, just one more RL scale up, this one will be the real scale up with the good environments, the actually legit one, trust me bro” by ryan_greenblatt
Sep 4, 2025
The discussion dives into the challenges of scaling reinforcement learning (RL) when training environments are low quality. Arguments emerge about whether better environments would substantially enhance AI capabilities. There is skepticism about how much recent progress actually stems from environment improvements, with some suggesting AIs might soon create their own environments. The conversation also touches on the economics of developing RL environments, debating how budget and labor affect their quality and what algorithmic advances could follow.
INSIGHT: Progress Is a Sum of Many Advances
Ryan Greenblatt argues recent progress already priced in improved RL environments and other advances. Multiple seemingly huge advances combine into a smooth trend rather than a single break.
INSIGHT: Few True Trend Breakers Exist
Greenblatt sees only a few true trend-breaking breakthroughs in recent decades. He lists deep learning at scale and generative pretraining as the main candidates.
ADVICE: Don't Overweight One RL Scale-Up
Don't assume RL scale-up alone will cause a massive above-trend jump in 2025. Expect steady but not explosively super-exponential gains from RL and reasoning models.
I've recently written about how I've updated against seeing substantially faster than trend AI progress due to quickly massively scaling up RL on agentic software engineering. One response I've heard is something like:
RL scale-ups so far have used very crappy environments due to difficulty quickly sourcing enough decent (or even high quality) environments. Thus, once AI companies manage to get their hands on actually good RL environments (which could happen pretty quickly), performance will increase a bunch.
Another way to put this response is that AI companies haven't actually done a good job scaling up RL—they've scaled up the compute, but with low quality data—and once they actually do the RL scale up for real this time, there will be a big jump in AI capabilities (which yields substantially above trend progress). I'm skeptical of this argument because I think that ongoing improvements to RL environments [...]
---
Outline:
(04:18) Counterargument: Actually, companies haven't gotten around to improving RL environment quality until recently (or there is substantial lead time on scaling up RL environments etc.) so better RL environments didn't drive much of late 2024 and 2025 progress
(05:24) Counterargument: AIs will soon reach a critical capability threshold where AIs themselves can build high quality RL environments
(06:51) Counterargument: AI companies are massively fucking up their training runs (either pretraining or RL) and once they get their shit together more, we'll see fast progress
(08:34) Counterargument: This isn't that related to RL scale up, but OpenAI has some massive internal advance in verification which they demonstrated via getting IMO gold and this will cause (much) faster progress late this year or early next year
(10:12) Thoughts and speculation on scaling up the quality of RL environments
The original text contained 5 footnotes which were omitted from this narration.