
EA Forum Podcast (Curated & popular) “The Scaling Series Discussion Thread: with Toby Ord” by Toby Tremlett🔹
We're trying something a bit new this week. Over the last year, Toby Ord has been writing about the implications of the fact that improvements in AI require exponentially more compute. So far, only one of these posts has been published on the EA Forum.
This week we've put the entire series on the Forum and made this thread for you to discuss your reactions to the posts. Toby Ord will check in once a day to respond to your comments[1].
Feel free to also comment directly on the individual posts that make up this sequence, but you can treat this as a central discussion space for both general takes and more specific questions.
If you haven't read the series yet, we've created a page where you can, and summaries of each post are below:
Are the Costs of AI Agents Also Rising Exponentially?
Agents can do longer and longer tasks, but their dollar cost to do these tasks may be growing even faster.
How Well Does RL Scale?
I show that RL training for LLMs scales much worse than inference or pre-training.
Evidence that Recent AI Gains are Mostly from Inference-Scaling
I show how [...]
---
First published:
February 2nd, 2026
---
Narrated by TYPE III AUDIO.
