

LessWrong (30+ Karma)
LessWrong
Audio narrations of LessWrong posts.
Episodes

Sep 19, 2025 • 18min
“Teaching My Toddler To Read” by maia
I have been teaching my oldest son to read with Anki and techniques recommended here on LessWrong as well as in Larry Sanger's post, and it's going great! I thought I'd pay it forward a bit by talking about the techniques I've been using.
Anki and songs for letter names and sounds
When he was a little under 2, he started learning letters from the alphabet song. We worked on learning the names and sounds of letters using the ABC song, plus the Letter Sounds song linked by Reading Bear. He loved the Letter Sounds song, so we listened to / watched that a lot; Reading Bear has some other resources that other kids might like better for learning letter names and sounds as well. Around this age, we also got magnet letters for the fridge and encouraged him to play with them, praised him greatly if he named [...]
---
Outline:
(00:22) Anki and songs for letter names and sounds
(04:02) Anki + Reading Bear word list for words
(08:08) Decodable sentences and books for learning to read
(13:06) Incentives
(16:02) Reflections so far
The original text contained 2 footnotes which were omitted from this narration.
---
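The deck-building itself is ordinary Anki use, but for readers who prefer to script it, here is a purely illustrative sketch (not something the post includes) of generating a small letter-sounds deck with the genanki library; the letters, example words, card template, and IDs are placeholder choices.

```python
# Minimal sketch, not from the post: build a small "letter sounds" Anki deck
# with genanki. Letters, example words, template, and IDs are placeholders.
import genanki

model = genanki.Model(
    1607392319,  # arbitrary fixed ID so re-imports update the same note type
    "Letter Sounds (sketch)",
    fields=[{"name": "Letter"}, {"name": "Sound"}],
    templates=[{
        "name": "Letter -> Sound",
        "qfmt": "<div style='font-size:120px;text-align:center'>{{Letter}}</div>",
        "afmt": "{{FrontSide}}<hr id='answer'>{{Sound}}",
    }],
)

deck = genanki.Deck(2059400110, "Reading::Letter Sounds")  # arbitrary deck ID

for letter, sound in [("A", "a as in apple"),
                      ("B", "b as in ball"),
                      ("C", "k as in cat")]:
    deck.add_note(genanki.Note(model=model, fields=[letter, sound]))

# Writes an .apkg file that can be imported into Anki.
genanki.Package(deck).write_to_file("letter_sounds.apkg")
```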
First published:
September 19th, 2025
Source:
https://www.lesswrong.com/posts/8kSGbaHTn2xph5Trw/teaching-my-toddler-to-read
---
Narrated by TYPE III AUDIO.

Sep 19, 2025 • 13min
“JDP Reviews IABIED” by jdp
"If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky and Nate Soares (hereafter
referred to as "Everyone Builds It" or "IABIED" because I resent Nate's gambit
to get me to repeat the title thesis) is an interesting book. One reason it's
interesting is timing: It's fairly obvious at this point that we're in an alignment
winter. The winter seems roughly caused by:
The 2nd election of Donald Trump removing Anthropic's lobby from the White House. Notably, this is not a coincidence but a direct result of efforts from political rivals to unseat that lobby.
When the vice president of the United States is crashing AI safety summits
to say that "I'm not here this morning to talk about AI safety, which was the
title of the conference a couple of years ago. I'm here to talk about AI opportunity"
and that "we'll make every effort to [...] ---
First published:
September 19th, 2025
Source:
https://www.lesswrong.com/posts/mztwygscvCKDLYGk8/jdp-reviews-iabied
---
Narrated by TYPE III AUDIO.

Sep 19, 2025 • 17min
“IABIED Review - An Unfortunate Miss” by Darren McKee
TL;DR Overall, this is a decent book because it highlights an important issue, but it is not an excellent book because it fails to sufficiently substantiate its main arguments, to explain the viability of its solutions, and to be more accessible to the larger audience it is trying to reach. As such, it isn’t the best introduction for a layperson curious about AI risk.
(Meta?)Context and Expectations
Writing a book is hard. Making all those decisions about what to write and how to write it is hard. Doing interviews/podcasts where it's important to say the right thing, in the right way, is hard. So, separate from anything else, kudos to Eliezer and Nate for being men in the arena. When I was writing my book on AI risk, someone said that it could be highly impactful, they just weren’t sure if that impact would be positive [...]
---
Outline:
(00:32) (Meta?)Context and Expectations
(04:25) Main Points/Reflections
(04:29) Important messages
(05:26) Lack of detail / argumentation / too short
(08:51) Style too sciencey/sci-fi?
(10:53) Scenario (Part II of IABIED)
(12:19) Promoting Safe AI innovation or Shut it all down?
(15:48) Final Thoughts
---
First published:
September 18th, 2025
Source:
https://www.lesswrong.com/posts/viLu9uFcMFtJHgRRm/iabied-review-an-unfortunate-miss
---
Narrated by TYPE III AUDIO.

Sep 19, 2025 • 2min
“You can’t eval GPT5 anymore” by Lukas Petersson
The GPT-5 API is aware of today's date (no other model provider does this). This is problematic because the model becomes aware that it is in a simulation when we run our evals at Andon Labs. Here are traces from gpt-5-mini. Making it aware of the "system date" is a giveaway that it's in a simulation. This is a problem because there's evidence that models behave differently when they know they are in a simulation (see "sandbagging").
"There's a conflict with the user's stated date of August 10, 2026, versus my system date of September 17, 2025. (...) I can proceed but should clarify that my system date is September 17, 2025, and ask the user whether we should simulate starting from August 10, 2026." Here are more traces. Once the model knows that it is in a simulation, it starts questioning other parts of the simulation. [...] ---
First published:
September 18th, 2025
Source:
https://www.lesswrong.com/posts/DLZokLxAQ6AzsHrya/you-can-t-eval-gpt5-anymore
---
Narrated by TYPE III AUDIO.

Sep 18, 2025 • 1min
[Linkpost] “More Was Possible: A Review of IABIED” by Vaniver
This is a link post. Eliezer Yudkowsky and Nate Soares have written a new book. Should we take it seriously? I am not the most qualified person to answer this question. If Anyone Builds It, Everyone Dies was not written for me. It's addressed to the sane and happy majority who haven’t already waded through millions of words of internecine AI safety debates. I can’t begin to guess if they’ll find it convincing. It's true that the book is more up-to-date and accessible than the authors’ vast corpus of prior writings, not to mention marginally less condescending. Unfortunately, it is also significantly less coherent. The book is full of examples that don’t quite make sense and premises that aren’t fully explained. But its biggest weakness was described many years ago by a young blogger named Eliezer Yudkowsky: both authors are persistently unable to update their priors. ---
First published:
September 18th, 2025
Source:
https://www.lesswrong.com/posts/kcYyWSfyPC6h2NPKz/more-was-possible-a-review-of-iabied
Linkpost URL: https://asteriskmag.com/issues/11/iabied
---
Narrated by TYPE III AUDIO.

Sep 18, 2025 • 5min
“Meetup Month” by Raemon
It's meetup month! If you’ve been vaguely thinking of getting involved with some kind of rationalsphere in-person community stuff, now is a great time to do that, because lots of other people are doing that! It's the usual time of the year for Astral Codex Everywhere – if you’re the sorta folk who likes to read Scott Alexander, and likes other people who like Scott Alexander, but can only really summon the wherewithal to go out to a meetup once a year, this is the Official Coordinated Schelling Time to do that. There are meetups scheduled in 180 cities. Probably one of those is near you! This year, we have two other specific types of meetups it seemed good to coordinate around: celebrating Petrov Day, and reading groups for the recently released If Anyone Builds It, Everyone Dies. And, of course, if you aren’t particularly interested in any [...]
---
Outline:
(01:09) If Anyone Builds It reading groups
(02:29) Petrov Day
---
First published:
September 17th, 2025
Source:
https://www.lesswrong.com/posts/mve2bunf6YfTeiAvd/meetup-month-1
---
Narrated by TYPE III AUDIO.

Sep 18, 2025 • 32min
“The Company Man” by Tomás B.
To get to the campus, I have to walk past the fentanyl zombies. I call them fentanyl zombies because it helps engender a sort of detached, low-empathy, ironic self-narrative which I find useful for my work; this being a form of internal self-prompting I've developed which allows me to feel comfortable with both the day-to-day "jobbing" (that of improving reinforcement learning algorithms for a short-form video platform) and the effects of the summed efforts of both myself and my colleagues on a terrifyingly large fraction of the population of Earth. All of these colleagues are about the nicest, smartest people you're ever likely to meet but I think are much worse people than even me because they don't seem to need the mental circumlocutions I require to stave off that ever-present feeling of guilt I have had since taking this job and at certain other points in my life [...] ---
First published:
September 17th, 2025
Source:
https://www.lesswrong.com/posts/JH6tJhYpnoCfFqAct/the-company-man
---
Narrated by TYPE III AUDIO.

Sep 18, 2025 • 42min
“Crisp Supra-Decision Processes” by Brittany Gelb
Audio note: this article contains 363 uses of latex notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.
Introduction
In this post, we describe a generalization of Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs) called crisp supra-MDPs and supra-POMDPs. The new feature of these decision processes is that the stochastic transition dynamics are multivalued, i.e. specified by credal sets. We describe how supra-MDPs give rise to crisp causal laws, the hypotheses of infra-Bayesian reinforcement learning. Furthermore, we discuss how supra-MDPs can approximate MDPs by a coarsening of the state space. This coarsening allows an agent to be agnostic about the detailed dynamics while still having performance guarantees for the full MDP. Analogously to the classical theory, we describe an algorithm to compute a Markov optimal policy for supra-MDPs with finite time [...]
---
Outline:
(00:22) Introduction
(01:41) Supra-Markov Decision Processes
(01:45) Defining supra-MDPs
(06:55) Supra-bandits
(08:02) Crisp causal laws arising from supra-MDPs and (supra-)RDPs
(19:25) Approximating an MDP with a supra-MDP
(22:42) Computing the Markov optimal policy for finite time horizons
(25:02) Existence of stationary optimal policy
(28:18) Interpreting supra-MDPs as stochastic games
(32:40) Regret bounds for episodic supra-MDPs
(34:16) Supra-Partially Observable Markov Decision Processes
(36:16) Equivalence of crisp causal laws and crisp supra-POMDPs
(37:18) Proof of Proposition 2
(41:21) Acknowledgements
The original text contained 8 footnotes which were omitted from this narration.
---
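The post's own algorithm isn't reproduced in this excerpt, but as a loose illustration of the underlying idea (planning against the worst case when transition probabilities are only known up to a credal set), here is a minimal finite-horizon value-iteration sketch over a two-element state space with interval-valued transition probabilities. The states, actions, intervals, and rewards are invented for the example, and the pessimistic backup is standard robust-MDP style rather than the authors' exact construction.

```python
# Minimal sketch (not the authors' algorithm): finite-horizon pessimistic
# value iteration for a two-state process whose transition probabilities are
# only known to lie in intervals (a simple credal set). All numbers below are
# illustrative.

states = [0, 1]
actions = ["work", "do_nothing"]

# P[(s, a)] = (lo, hi): interval for the probability of landing in state 1.
P = {
    (0, "work"):       (0.6, 0.9),
    (0, "do_nothing"): (0.0, 0.1),
    (1, "work"):       (0.8, 1.0),
    (1, "do_nothing"): (0.4, 0.7),
}
R = {0: 0.0, 1: 1.0}  # reward for occupying each state

def worst_case_continuation(lo, hi, v0, v1):
    """Minimize p*v1 + (1-p)*v0 over p in [lo, hi]; the objective is linear
    in p, so the minimum is attained at an endpoint of the interval."""
    return min(p * v1 + (1 - p) * v0 for p in (lo, hi))

def pessimistic_value_iteration(horizon):
    V = {s: 0.0 for s in states}  # value at the final time step
    policy = []                   # one state -> action map per time step
    for _ in range(horizon):
        newV, step_policy = {}, {}
        for s in states:
            best_q, best_a = None, None
            for a in actions:
                lo, hi = P[(s, a)]
                q = R[s] + worst_case_continuation(lo, hi, V[0], V[1])
                if best_q is None or q > best_q:
                    best_q, best_a = q, a
            newV[s], step_policy[s] = best_q, best_a
        V = newV
        policy.append(step_policy)
    return V, policy[::-1]  # policy ordered from the first decision step

if __name__ == "__main__":
    values, plan = pessimistic_value_iteration(horizon=3)
    print(values)  # worst-case value of each starting state
    print(plan)    # which action to take in each state at each step
```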
First published:
September 17th, 2025
Source:
https://www.lesswrong.com/posts/mt82ZhdEsfh6CNYse/crisp-supra-decision-processes
---
Narrated by TYPE III AUDIO.
---
Images from the article (partial caption): ... n for "do nothing." In some cases, the transition probabilities are precisely specified. In other cases, credal sets (notated here by intervals) over the two-element state space describe the transition probabilities.

Sep 18, 2025 • 3min
“Software Engineering Leadership in Flux” by Gordon Seidoh Worley
I wasn’t able to put up a post last Wednesday because I was at the Engineering Leadership Conference here in San Francisco. The big theme was, of course, AI. Easily 90% of the presentations and 100% of the conversations touched on AI in some way. Here are some impressions I came away with: Everyone is scrambling. No one is confident. The people who act confident are bullshitting. 10-20% of teams are heavily using AI and living in the present. 80-90% are falling behind. The big struggle is to even start using AI coding assistant tools. Lots of teams just don’t use them at all, or use them in very limited ways. People leading these teams know they are going to lose if they don’t change but are struggling to get their orgs to let them. Even fewer teams have figured out how to use [...] ---
First published:
September 17th, 2025
Source:
https://www.lesswrong.com/posts/EH5CSkJQG4aTvfJAJ/software-engineering-leadership-in-flux
---
Narrated by TYPE III AUDIO.

Sep 17, 2025 • 11min
“How To Dress To Improve Your Epistemics” by johnswentworth
When it comes to epistemics, there is an easy but mediocre baseline: defer to the people around you or the people with some nominal credentials. Go full conformist, and just agree with the majority or the experts on everything. The moon landing was definitely not faked, washing hands is important to stop the spread of covid, whatever was on the front page of the New York Times today was basically true, and that recent study finding that hydroxyhypotheticol increases asthma risk among hispanic males in Indianapolis will definitely replicate. Alas, memetic pressures and credential issuance and incentives are not particularly well aligned with truth or discovery, so this strategy fails predictably in a whole slew of places. Among those who strive for better than baseline epistemics, nonconformity is a strict requirement. Every single place where you are right and the majority of people are wrong must be a place [...]
---
Outline:
(01:54) Coolness = Status Countersignalling
(03:22) How to Pull Off The Clown Suit
(04:45) Looking Good
(06:58) Dressing The Part
(09:21) Beyond Nonconformist Takes
The original text contained 3 footnotes which were omitted from this narration.
---
First published:
September 17th, 2025
Source:
https://www.lesswrong.com/posts/WK979aX9KpfEMd9R9/how-to-dress-to-improve-your-epistemics
---
Narrated by TYPE III AUDIO.