

LessWrong (30+ Karma)
LessWrong
Audio narrations of LessWrong posts.
Episodes

Dec 10, 2025 • 9min
“Most Algorithmic Progress is Data Progress [Linkpost]” by Noosphere89
So this post, brought to you by Beren today, is about how a lot of claims about within-paradigm algorithmic progress are actually mostly about just getting better data, leading to a Flynn effect. The reason I'm mentioning this is that once we have to actually build new fabs and we run out of data in 2028-2031, progress will be slower than people expect (assuming we haven't reached AGI by then). When forecasting AI progress, forecasters and modellers often break it down into two components: increased compute, and ‘algorithmic progress’. My argument here is that the term ‘algorithmic progress’ for ‘the remainder after compute’ is misleading, and that we should really think about and model AI progress as three terms – compute, algorithms, and data. My claim is that a large fraction (but certainly not all) of the AI progress that is currently conceived as ‘algorithmic progress’ is actually ‘data progress’, and that the term ‘algorithmic’ gives a false impression of which forces and improvements have actually driven AI progress over the past three years or so. From experience in the field, there have not been that many truly ‘algorithmic’ improvements with massive impact. The [...]
---
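To make the three-term accounting concrete, here is a minimal toy calculation. All numbers are invented for illustration (they are not estimates from the post), and a multiplicative model is assumed; the point is only that any unmeasured data-driven gains get silently folded into the 'algorithmic' residual.

```python
# Toy three-term accounting for AI progress (compute x algorithms x data).
# All numbers are invented for illustration; they are not estimates from
# the post. Assumes the factors combine multiplicatively.

total_progress = 10.0   # hypothetical overall capability gain
compute_factor = 4.0    # portion attributable to more/better hardware

# Two-term accounting: everything not explained by compute gets labeled
# "algorithmic progress".
apparent_algorithmic = total_progress / compute_factor   # 2.5x

# Three-term accounting: split that remainder into data and algorithms.
data_factor = 2.0       # portion from better / more curated data
true_algorithmic = apparent_algorithmic / data_factor    # 1.25x

print(f"apparent 'algorithmic' progress: {apparent_algorithmic:.2f}x")
print(f"after separating out data:       {true_algorithmic:.2f}x")
```

Under these made-up numbers, the two-term model would report 2.5x of 'algorithmic progress' where only 1.25x is genuinely algorithmic.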
First published:
December 10th, 2025
Source:
https://www.lesswrong.com/posts/3uMZFbvJZ8Z5LqpyQ/most-algorithmic-progress-is-data-progress-linkpost
---
Narrated by TYPE III AUDIO.

Dec 10, 2025 • 25min
“Selling H200s to China Is Unwise and Unpopular” by Zvi
AI is the most important thing about the future. It is vital to national security. It will be central to economic, military and strategic supremacy.
This is true regardless of what other dangers and opportunities AI might present.
The good news is that America has many key advantages in AI.
America's greatest advantage in AI is our vastly superior access to compute.
We are in danger of selling a large portion of that advantage for 30 pieces of silver.
This is on track to be done against the wishes of Congress as well as most of those in the executive branch.
Who does it benefit? It benefits China. It might not even benefit Nvidia.
Doing so would be both highly unwise and highly unpopular.
We should not sell highly capable Nvidia H200 chips to China.
If it is too late to not sell H200s, we must limit quantities, and ensure it stops there. We absolutely cannot be giving away other future chips on a similar delay.
The good news is that the stock market reaction implies this might not scale.
Bayeslord: I don’t know anyone who thinks this [...]
---
Outline:
(01:36) The Announcement
(04:38) How Bad Would This Be?
(11:42) Is There A Steelman Case For This Other Than 'Trade Always Good'?
(16:21) Compute Is A Key Limiting Factor For China and Chinese Labs
(17:53) What About That All Important 'Tech Stack'?
(20:37) Selling H200s Hurts America In The AI Race
(22:14) Nvidia Number Did Not Go Up That Much
---
First published:
December 9th, 2025
Source:
https://www.lesswrong.com/posts/kmEpWTjWeFyqv4tb5/selling-h200s-to-china-is-unwise-and-unpopular
---
Narrated by TYPE III AUDIO.

Dec 10, 2025 • 5min
“The funding conversation we left unfinished” by jenn
People working in the AI industry are making stupid amounts of money, and word on the street is that Anthropic is going to have some sort of liquidity event soon (for example, possibly an IPO sometime next year). A lot of people working in AI are familiar with EA, and are intending to direct donations our way (if they haven't started already). People are starting to discuss what this might mean for their own personal donations and for the ecosystem, and this is encouraging to see. It also has me thinking about 2022. Immediately before the FTX collapse, we were just starting to reckon, as a community, with the pretty significant vibe shift in EA that came from having a lot more money to throw around. CitizenTen, in "The Vultures Are Circling" (April 2022), puts it this way: The message is out. There's easy money to be had. And the vultures are coming. On many internet circles, there's been a worrying tone. “You should apply for [insert EA grant], all I had to do was pretend to care about x, and I got $$!” Or, “I’m not even an EA, but I can pretend, as getting a 10k grant is [...]
---
First published:
December 9th, 2025
Source:
https://www.lesswrong.com/posts/JtFnkoSmJ7b6Tj3TK/the-funding-conversation-we-left-unfinished
---
Narrated by TYPE III AUDIO.

Dec 10, 2025 • 14min
“Human Dignity: a review” by owencb
I have in my possession a short document purporting to be a manifesto from the future. That's obviously absurd, but never mind that. It covers some interesting ground, and the second half is pretty punchy. Let's discuss it.
Principles for Human Dignity in the Age of AI
Humanity is approaching a threshold. The development of artificial intelligence promises extraordinary abundance — the end of material poverty, liberation from disease, tools that amplify human potential beyond current imagination. But it also challenges the fundamental assumptions of human existence and meaning. When machines surpass us in all domains, where will we find our purpose? When our choices can be predicted and shaped by systems we do not understand, what will become of our agency? This moment demands we articulate what aspects of human life must be protected, as we cross the threshold into a strange new world. I think these themes will speak to a lot of people. Will the language? It feels even more grandiose/flowery than the Universal Declaration of Human Rights. Personally I like it: I feel the topic deserves this sort of gravitas, or something. But I can imagine it putting some people off. By setting out clear [...]
---
Outline:
(00:38) Principles for Human Dignity in the Age of AI
(03:21) The Principles
(03:24) Integrity of Person
(06:27) Wellbeing
(08:31) Autonomy & Agency
The original text contained 1 footnote which was omitted from this narration.
---
First published:
December 8th, 2025
Source:
https://www.lesswrong.com/posts/WW5xsQa7zAcn3LM2m/human-dignity-a-review
---
Narrated by TYPE III AUDIO.

Dec 10, 2025 • 18min
“Insights into Claude Opus 4.5 from Pokémon” by Julian Bradshaw
Credit: Nano Banana, with some text provided. You may be surprised to learn that ClaudePlaysPokemon is still running today, and that Claude still hasn't beaten Pokémon Red, more than half a year after Google proudly announced that Gemini 2.5 Pro beat Pokémon Blue. Indeed, since then, Google and OpenAI models have gone on to beat the longer and more complex Pokémon Crystal, yet Claude has made no real progress on Red since Claude 3.7 Sonnet![1] This is because ClaudePlaysPokemon is a purer test of LLM ability, thanks to its consistently simple agent harness and the relatively hands-off approach of its creator, David Hershey of Anthropic.[2] When Claudes repeatedly hit brick walls in the form of the Team Rocket Hideout and Erika's Gym for months on end, nothing substantial was done to give Claude a leg up. But Claude Opus 4.5 has finally broken through those walls, in a way that perhaps validates the chatter that Opus 4.5 is a substantial advancement. Though hardly AGI-heralding, as will become clear. What follows are notes on how Claude has improved—or failed to improve—in Opus 4.5, written by a friend of mine who has watched quite a lot of ClaudePlaysPokemon over the past year.[3] [...]
---
Outline:
(01:28) Improvements
(01:31) Much Better Vision, Somewhat Better Seeing
(03:05) Attention is All You Need
(04:29) The Object of His Desire
(06:05) A Note
(06:34) Mildly Better Spatial Awareness
(07:27) Better Use of Context Window and Note-keeping to Simulate Memory
(09:00) Self-Correction; Breaks Out of Loops Faster
(10:01) Not Improvements
(10:05) Claude would still never be mistaken for a Human playing the game
(12:19) Claude Still Gets Pretty Stuck
(13:51) Claude Really Needs His Notes
(14:37) Poor Long-term Planning
(16:17) Don't Forget
The original text contained 9 footnotes which were omitted from this narration.
---
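For readers unfamiliar with the setup, a "simple agent harness" here means roughly: feed the model a screenshot plus its own notes, get one button press back, repeat. Below is a hypothetical minimal sketch under those assumptions; none of these names or functions come from the real ClaudePlaysPokemon code.

```python
# Hypothetical minimal agent harness in the spirit of what the post
# describes (screenshot in, one button out, notes as the only memory).
# All names here are invented; this is not the real ClaudePlaysPokemon code.

from dataclasses import dataclass, field

@dataclass
class AgentState:
    notes: list[str] = field(default_factory=list)  # running "memory"

def step(generate, capture_screen, press_button, state: AgentState) -> None:
    """One harness iteration: observe, think, act, remember."""
    frame = capture_screen()  # current game frame (stubbed as text here)
    prompt = (
        "Notes so far:\n" + "\n".join(state.notes[-50:]) +
        f"\nCurrent screen: {frame}\nReply as 'BUTTON: rationale'."
    )
    reply = generate(prompt)                        # e.g. "A: talk to the NPC"
    button, _, rationale = reply.partition(":")
    press_button(button.strip())                    # exactly one action per step
    state.notes.append(rationale.strip() or reply)  # notes simulate memory

# Toy usage with stand-ins for the LLM and the emulator:
state = AgentState()
step(lambda p: "A: interact with the sign",
     lambda: "<pixels>",
     lambda b: print("pressed", b),
     state)
print(state.notes)  # ['interact with the sign']
```

Under a harness this thin, the improvements the notes describe (better note-keeping, breaking out of loops faster) have to come from the model itself rather than from scaffolding.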
First published:
December 9th, 2025
Source:
https://www.lesswrong.com/posts/u6Lacc7wx4yYkBQ3r/insights-into-claude-opus-4-5-from-pokemon
---
Narrated by TYPE III AUDIO.

Dec 9, 2025 • 11min
“My experience running a 100k” by Alexandre Variengien
The SVP100 route. On the 3rd of August last year, I woke up early. I stood nervously with a hundred other runners in a hall in the city of Newmarket, near Cambridge in the UK. I felt intimidated as I looked at the other participants' calves, the size of champagne bottles. Only a few runners were starting their first 100k that morning. For many, this was not even the peak of their season. This route was long but almost flat, with only 1,000 meters of cumulative elevation. The real ultras were happening in the Alps, where long distances were combined with a crazy amount of ups and downs. I was almost startled when the race gun fired. I had been chatting for a few moments with a runner I recognized from the 50k I had completed the year before. I started running at 7 am, and I would not stop until 15 hours later, finishing the race in the dark with my headlamp guiding me. As I took my first steps on the path, almost delighted to stretch my legs in the fresh summer morning, I thought about how this would never have happened without Adam. Why did I do that? [...]
---
First published:
December 9th, 2025
Source:
https://www.lesswrong.com/posts/4tfeyu5xsubg6wDdH/my-experience-running-a-100k
---
Narrated by TYPE III AUDIO.

Dec 9, 2025 • 23min
“[paper] Auditing Games for Sandbagging” by Jordan Taylor, Joseph Bloom
Jordan Taylor, Sid Black, Dillon Bowen, Thomas Read, Satvik Golechha, Alex Zelenka-Martin, Oliver Makins, Connor Kissane, Kola Ayonrinde, Jacob Merizian, Samuel Marks, Chris Cundy, Joseph Bloom
UK AI Security Institute, FAR.AI, Anthropic
Links: Paper | Code | Models | Transcripts | Interactive Demo
Epistemic Status: We're sharing our paper and a hastily written summary of it, assuming a higher level of context on sandbagging / auditing games than other materials. We also share some informal commentary on our results. This post was written by Jordan and Joseph, and may not reflect the views of all authors. This summary diverges from the paper, condensing heavily while adding our additional commentary in footnotes.
Paper Summary & Commentary
TL;DR
We ran an auditing game to stress-test sandbagging detection (rather than mitigation) techniques. Specifically, we wanted to ask the question "If for some reason a model was sandbagging on capability evaluations, what methods might we use to catch this model?". During the game, our red team created five model organisms (some sandbagging, some benign decoys), and then a blinded blue team tried to identify which were which. The red team won. The blue team only classified 1/5 models correctly when [...]
---
Outline:
(01:02) Paper Summary & Commentary
(01:05) TL;DR
(01:59) The Setup
(02:06) Red Team
(02:56) Blue Team
(03:28) Results
(03:54) Phase 1: Black-Box and Probes
(05:07) Phase 2: Training-Based Methods
(06:44) Final Scores
(07:23) Post-Game Analysis
(08:26) Limitations
(09:07) Reasons why our setup favoured the blue team:
(09:18) Reasons why our setup favoured the red team:
(12:31) Other limitations:
(14:06) Future work
(14:20) Future work: Detection
(16:40) Future work: Elicitation
(18:22) Additional commentary
(18:25) What went well
(19:02) Practical Difficulties
(22:04) Resources
(22:25) Paper Citation
The original text contained 10 footnotes which were omitted from this narration.
---
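For concreteness, the game's headline scoring reduces to something like the toy sketch below. The model names and verdicts are invented here purely to reproduce the 1/5 result; the paper's actual models, labels, and scoring rules differ.

```python
# Toy sketch of the auditing-game scoring from the TL;DR: five model
# organisms, some sandbagging and some benign decoys, with a blinded
# blue team classifying each. All labels below are invented.

ground_truth = {   # red team's secret: which organisms actually sandbag
    "model_a": True, "model_b": False, "model_c": True,
    "model_d": False, "model_e": True,
}

blue_team_verdicts = {  # blinded blue team's classifications
    "model_a": False, "model_b": True, "model_c": True,
    "model_d": True, "model_e": False,
}

correct = sum(blue_team_verdicts[m] == ground_truth[m] for m in ground_truth)
print(f"blue team correct on {correct}/{len(ground_truth)} models")  # 1/5 here
```

The decoys matter: because some organisms are benign, the blue team cannot win by simply flagging everything as sandbagging.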
First published:
December 9th, 2025
Source:
https://www.lesswrong.com/posts/QMLwKemqMDATkkjJG/paper-auditing-games-for-sandbagging
---
Narrated by TYPE III AUDIO.

Dec 9, 2025 • 14min
“Towards Categorization of Adlerian Excuses” by romeostevensit
[Author's note: LLMs were used to generate and sort examples into their requisite categories, to find and summarize relevant papers, and for extensive assistance with editing.]
Context: Alfred Adler (1870–1937) split from Freud by asserting that human psychology is teleological (goal-oriented) rather than causal (drive-based). He argued that neuroses and "excuses" are not passive symptoms of past trauma, but active, creative tools used by the psyche to safeguard self-esteem. This post attempts to formalize Adler's concept of "Safeguarding Tendencies" into categories, not by the semantic content of excuses, but by their mechanical function in managing the distance between the Ego and Reality.
Abstract: When a life task threatens to reveal inadequacy, people initiate a strategic maneuver to invalidate the test. We propose four "Strategies of Immunity" – Incapacity, Entanglement, Elevation, and Scorched Earth – to explain how agents rig the game so they cannot lose.
The Teleological Flip
In the standard model of behavior, an excuse is a result. You are anxious; therefore, you cannot socialize. The cause (anxiety) produces the effect (avoidance). Adler inverted this vector. He argued that the goal (avoiding the risk of rejection) recruits the means (anxiety). You generate the anxiety in order to avoid [...]
---
Outline:
(01:15) The Teleological Flip
(02:17) 1. Immunity via Incapacity (the broken wing)
(03:45) 2. Immunity via Entanglement (the human shield)
(05:14) 3. Immunity via Elevation (the ivory tower)
(06:39) 4. Immunity via Scorched Earth (the table flip)
(08:07) Conclusion: The Courage to be Imperfect
(12:52) Relevant works:
---
First published:
December 8th, 2025
Source:
https://www.lesswrong.com/posts/kdG4T9jtETYe8Hkkg/towards-categorization-of-adlerian-excuses
---
Narrated by TYPE III AUDIO.

Dec 9, 2025 • 15min
“Every point of intervention” by TsviBT
Crosspost from my blog.
Events are already set for catastrophe; they must be steered along some course they would not naturally go. [...]
Are you confident in the success of this plan? No, that is the wrong question, we are not limited to a single plan. Are you certain that this plan will be enough, that we need essay no others? Asked in such fashion, the question answers itself. The path leading to disaster must be averted along every possible point of intervention.
— Professor Quirrell (competent, despite other issues), HPMOR chapter 92
This post is a quickly-written service-post, an attempt to lay out a basic point of strategy regarding decreasing existential risk from AGI.
Keeping intervention points in mind
By default, AGI will kill everyone. The group of people trying to stop that from happening should seriously attend to all plausible points of intervention.
In this context, a point of intervention is some element of the world—such as an event, a research group, or an ideology—which could substantively contribute to leading humanity to extinction through AGI. A point of intervention isn't an action; it doesn't say what to do. It just [...]
---
Outline:
(00:57) Keeping intervention points in mind
(01:37) The vague elephant
(02:19) Example: France
(03:03) Full-court press
(04:26) Multi-stage fal... opportunity!
(05:15) Brief tangent about a conjunction of disjunctions
(06:43) Varied interventions help
(07:17) Sources of correlation indicate deeper intervention points
(08:13) Some takeaways
(10:42) Some biases potentially affecting strategy portfolio balancing
(12:44) A terse opinionated partial list of maybe-underattended points of intervention
---
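One outline item, the "conjunction of disjunctions", compresses a probability argument worth spelling out: reaching catastrophe requires slipping past every intervention point, so independent points multiply while correlated ones collapse toward a single defense. A toy sketch with invented probabilities:

```python
# Toy illustration of the "conjunction of disjunctions" idea from the
# outline: disaster requires getting past *every* intervention point, so
# P(disaster) is a product -- but only if the points fail independently.
# All probabilities below are invented for illustration.

p_fail = [0.9, 0.9, 0.9, 0.9]  # chance each point fails to stop disaster

independent = 1.0
for p in p_fail:
    independent *= p
print(f"P(disaster), independent points:   {independent:.2f}")  # 0.66

# Perfectly correlated points all succeed or fail together, so extra
# points add nothing: P(disaster) = the single-point failure chance.
correlated = max(p_fail)
print(f"P(disaster), perfectly correlated: {correlated:.2f}")   # 0.90
```

This is presumably why the post flags sources of correlation as indicating deeper intervention points: decorrelating defenses is what makes having many of them pay off.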
First published:
December 8th, 2025
Source:
https://www.lesswrong.com/posts/CseMTXtQHynGR8S5k/every-point-of-intervention
---
Narrated by TYPE III AUDIO.

Dec 9, 2025 • 7min
“How Stealth Works” by Linch
Stealth technology is cool. It's what gave the US domination over the skies during the latter half of the Cold War, and it was the biggest component of the US's information dominance in both war and peace, at least prior to the rise of global internet connectivity and cybersecurity. Yet the core idea is almost embarrassingly simple. So how does stealth work?
[Photo by Steve Harvey on Unsplash]
When we talk about stealth, we’re usually talking about evading radar. How does radar work? Radar antennas emit radio waves into the sky. The waves bounce off objects like aircraft. When the echoes return to the antenna, the radar system can then identify the object's approximate speed, position, and size.
[Picture courtesy of Katelynn Bennett over at bifocal bunny]
So how would you evade radar? You can try to:
1. Blast a bunch of radio waves in all directions (“jamming”). This works if you’re close to the radar antenna, but kind of defeats the point of stealth.
2. Build your plane out of materials that are invisible to radio waves (like glass and some plastics) and just let the waves pass through. This is possible, but very difficult in practice. Besides, by the 1970s [...]
The original text contained 2 footnotes which were omitted from this narration.
---
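The radar paragraph above implies two simple calculations: range from the echo's round-trip time, and closing speed from its Doppler shift. A minimal sketch using the standard textbook formulas (the example numbers are invented, not from the post):

```python
# Minimal sketch of the two measurements the radar paragraph describes:
# range from echo round-trip time, and closing speed from Doppler shift.
# Standard textbook formulas; the numbers are invented examples.

C = 3.0e8  # speed of light, m/s

def range_from_echo(round_trip_s: float) -> float:
    """Distance to target: the pulse travels out and back, so halve it."""
    return C * round_trip_s / 2

def speed_from_doppler(freq_shift_hz: float, carrier_hz: float) -> float:
    """Closing speed from the two-way Doppler shift: df = 2*v*f/c."""
    return freq_shift_hz * C / (2 * carrier_hz)

print(f"{range_from_echo(200e-6)/1000:.0f} km away")      # 200 us echo -> 30 km
print(f"{speed_from_doppler(6000, 3e9):.0f} m/s closing")  # 6 kHz at 3 GHz -> 300 m/s
```

The stealth approaches the post goes on to discuss are, roughly, about keeping that echo from ever returning to the antenna strongly enough to be measured.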
First published:
December 8th, 2025
Source:
https://www.lesswrong.com/posts/MxivaKjaAX9mkJzAK/how-stealth-works
---
Narrated by TYPE III AUDIO.


