LessWrong (Curated & Popular)

“How Well Does RL Scale?” by Toby_Ord

When RL Compute Approaches Pre-training Scale

Toby Ord warns that once RL compute matches pre-training compute, further RL scaling starts to dominate total training cost, making continued scaling infeasible.
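The cost dynamic behind this warning can be illustrated with simple arithmetic (the numbers below are illustrative, not from the episode): if pre-training compute is fixed and RL is some fraction of it, doubling RL compute is nearly free while that fraction is small, but once RL equals pre-training, each doubling inflates the total budget substantially.

```python
# Illustrative arithmetic (not from the episode): how doubling RL compute
# affects total training cost when pre-training compute is fixed at 1.0.
def total_cost_factor(rl_fraction: float) -> float:
    """Relative total cost after doubling RL compute, holding pre-training fixed."""
    before = 1.0 + rl_fraction
    after = 1.0 + 2 * rl_fraction
    return after / before

# While RL is a tiny slice of the budget, doubling it barely matters:
print(total_cost_factor(0.01))  # ~1.0099, under a 1% increase
# Once RL matches pre-training, each doubling adds 50% to total cost:
print(total_cost_factor(1.0))   # 1.5
```

This is why RL scaling is cheap only while RL remains a small fraction of the training budget: past parity, further doublings of RL compute are doublings of most of the total bill.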
