

LessWrong (30+ Karma)
Audio narrations of LessWrong posts.
Episodes

Jul 4, 2025 • 4min
“‘AI for societal uplift’ as a path to victory” by Raymond Douglas
The AI tools/epistemics space might provide a route to a sociotechnical victory, where instead of aiming for something like aligned ASI, we aim for making civilization coherent enough to not destroy itself while still keeping anchored to what's good[1]. The core ideas are:
- Basically nobody actually wants the world to end, so if we do that to ourselves, it will be because somewhere along the way we weren’t good enough at navigating collective action problems, institutional steering, and general epistemics
- Conversely, there is some (potentially high) threshold of societal epistemics + coordination + institutional steering beyond which we can largely eliminate anthropogenic x-risk, potentially in perpetuity[2]
- As AI gets more advanced, and therefore more risky, it will also unlock really radical advances in all these areas — genuinely unprecedented levels of coordination and sensible decision making, as well as the potential for narrow research automation in key fields [...]
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
July 4th, 2025
Source:
https://www.lesswrong.com/posts/kvZyCJ4qMihiJpfCr/ai-for-societal-uplift-as-a-path-to-victory
---
Narrated by TYPE III AUDIO.

Jul 4, 2025 • 7min
“Two proposed projects on abstract analogies for scheming” by Julian Stastny
In order to empirically study risks from schemers, we can try to develop model organisms of misalignment. Sleeper Agents and password-locked models, which train LLMs to behave in a benign or malign way depending on some feature of the input, are prominent examples of the methodology thus far. Empirically, it turns out that detecting or removing misalignment from model organisms can sometimes be very easy: simple probes can catch sleeper agents, non-adversarial training can cause smaller sleeper agents to forget their backdoor (Price et al., Mallen & Hebbar)[1], and memorizing weak examples can elicit strong behavior out of password-locked models. This might imply that training against real scheming is easy—but model organisms might be “shallow” in some relevant sense that makes them disanalogous from real scheming. I want to highlight an alternative here: studying methods to train away deeply ingrained behaviors (which don’t necessarily have to look like misbehavior) [...]
---
Outline:
(01:52) 1. Sample-efficiently training away misbehavior
(01:57) Motivation
(03:06) Sketch of a project
(04:08) 2. Training away harmlessness without ground truth
(04:13) Motivation
(04:59) Sketch of a project
The original text contained 1 footnote which was omitted from this narration.
---
First published:
July 4th, 2025
Source:
https://www.lesswrong.com/posts/5zsLpcTMtesgF7c8p/two-proposed-projects-on-abstract-analogies-for-scheming
---
Narrated by TYPE III AUDIO.

Jul 4, 2025 • 1h
“Outlive: A Critical Review” by MichaelDickens
Outlive: The Science & Art of Longevity by Peter Attia (with Bill Gifford[1]) gives Attia's prescription on how to live longer and stay healthy into old age. In this post, I critically review some of the book's scientific claims that stood out to me.
This is not a comprehensive review. I didn't review assertions that I was pretty sure were true (ex: VO2 max improves longevity), or that were hard for me to evaluate (ex: the mechanics of how LDL cholesterol functions in the body), or that I didn't care about (ex: sleep deprivation impairs one's ability to identify facial expressions).
First, some general notes:
I have no expertise on any of the subjects in this post. I evaluated claims by doing shallow readings of relevant scientific literature, especially meta-analyses.
There is a spectrum between two ways of being wrong: "pop science book pushes [...]
---
Outline:
(02:19) Disease
(02:22) People with metabolically healthy obesity do not have elevated mortality risk
(07:39) Amyloid beta is implicated in Alzheimer's disease
(09:20) HDL cholesterol on its own doesn't prevent heart disease
(10:12) Exercise
(10:52) VO2max is the best predictor of longevity
(15:21) You should train VO2max by doing HIIT at the maximum sustainable pace
(20:26) You should do 3+ hours/week of zone 2 training and one or two sessions/week of HIIT
(23:34) Stability is as important as cardiovascular fitness and strength
(30:23) Nutrition
(30:26) Rhesus monkey studies suggest that calorie restriction improves longevity but only if you eat a fairly unhealthy diet
(38:09) The data are unclear on whether reducing saturated fat intake is beneficial
(45:59) People should take omega-3 supplements
(48:33) Sleep
(48:55) Every animal sleeps
(51:12) We need to sleep 7.5 to 8.5 hours a night
(52:24) Basketball players who were told to sleep for 10 hours a night had better shooting accuracy
(53:07) Lack of sleep increases obesity and diabetes risk
(54:27) A study using Mendelian randomization found that sleeping 6 hours a night increased risk of a heart attack
(57:03) Lack of sleep causes Alzheimer's disease
(58:10) Bonus
(58:13) Dunning-Kruger effect
The original text contained 90 footnotes which were omitted from this narration.
---
First published:
July 4th, 2025
Source:
https://www.lesswrong.com/posts/QqTiCz2Xz96MuEFsF/outlive-a-critical-review
---
Narrated by TYPE III AUDIO.

Jul 4, 2025 • 11min
“Authors Have a Responsibility to Communicate Clearly” by TurnTrout
When a claim is shown to be incorrect, defenders may say that the author was just being “sloppy” and actually meant something else entirely. I argue that this move is not harmless, charitable, or healthy. At best, this attempt at charity reduces an author's incentive to express themselves clearly – they can clarify later![1] – while burdening the reader with finding the “right” interpretation of the author's words. At worst, this move is a dishonest defensive tactic which shields the author with the unfalsifiable question of what the author “really” meant. ⚠️ Preemptive clarification The context for this essay is serious, high-stakes communication: papers, technical blog posts, and tweet threads. In that context, communication is a partnership. A reader has a responsibility to engage in good faith, and an author cannot possibly defend against all misinterpretations. Misunderstanding is a natural part of this process. This essay focuses not on [...]
---
Outline:
(01:40) A case study of the sloppy language move
(03:12) Why the sloppiness move is harmful
(03:36) 1. Unclear claims damage understanding
(05:07) 2. Secret indirection erodes the meaning of language
(05:24) 3. Authors owe readers clarity
(07:30) But which interpretations are plausible?
(08:38) 4. The move can shield dishonesty
(09:06) Conclusion: Defending intellectual standards
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
July 1st, 2025
Source:
https://www.lesswrong.com/posts/ZmfxgvtJgcfNCeHwN/authors-have-a-responsibility-to-communicate-clearly
---
Narrated by TYPE III AUDIO.

Jul 4, 2025 • 4min
[Linkpost] “MIRI Newsletter #123” by Harlan, Rob Bensinger
This is a link post. If Anyone Builds It, Everyone Dies As we announced last month, Eliezer and Nate have a book coming out this September: If Anyone Builds It, Everyone Dies. This is MIRI's major attempt to warn the policy world and the general public about AI. Preorders are live now, and are exceptionally helpful. Preorder Bonus: We’re hosting two exclusive virtual events for those who preorder the book! The first is a chat between Nate Soares and Tim Urban (Wait But Why) followed by a Q&A, on August 10 @ noon PT. The second is a Q&A with both Nate and Eliezer in September (date and time TBD). For details, and to obtain access, head to ifanyonebuildsit.com/events. If you have graphic design chops, you can give us a hand by joining the advertisement design competition for the book. As Malo recently announced, advance copies of the book [...]
---
Outline:
(00:12) If Anyone Builds It, Everyone Dies
(02:02) Other MIRI updates
---
First published:
July 3rd, 2025
Source:
https://www.lesswrong.com/posts/J482p4hJBevBbwdmF/miri-newsletter-123
Linkpost URL: https://intelligence.org/2025/07/03/miri-newsletter-123/
---
Narrated by TYPE III AUDIO.

Jul 3, 2025 • 3min
[Linkpost] “Research Note: Our scheming precursor evals had limited predictive power for our in-context scheming evals” by Marius Hobbhahn
This is a link post. Note: This is a research note, and the analysis is less rigorous than our standard for a published paper. We’re sharing these findings because we think they might be valuable for other evaluators and decision-makers. Executive Summary In May 2024, we designed “precursor” evaluations for scheming (agentic self-reasoning and agentic theory of mind), i.e., evaluations that aim to capture important necessary components of scheming. In December 2024, we published “in-context scheming” evaluations, i.e., evaluations that directly aim to measure scheming reasoning capabilities. We have easy, medium, and hard difficulty levels for all evals. In this research note, we run some basic analysis on how predictive our precursor evaluations were of our scheming evaluations, to test the underlying hypothesis of whether the precursor evals would have “triggered” relevant scheming thresholds. We run multiple pieces of analysis to test whether our precursor evaluations predict our scheming [...] ---
First published:
July 3rd, 2025
Source:
https://www.lesswrong.com/posts/9tqpPP4FwSnv9AWsi/research-note-our-scheming-precursor-evals-had-limited
Linkpost URL: https://www.apolloresearch.ai/blog/research-note-our-scheming-precursor-evals-had-limited-predictive-power-for-our-in-context-scheming-evals
---
Narrated by TYPE III AUDIO.

Jul 3, 2025 • 3min
“Call for suggestions - AI safety course” by boazbarak
In the fall I am planning to teach an AI safety graduate course at Harvard. The format is likely to be roughly similar to my "foundations of deep learning" course. I am still not sure of the content, and would be happy to get suggestions. Some (somewhat conflicting) desiderata: I would like to cover the various ways AI could go wrong: malfunction, misuse, societal upheaval, arms race, surveillance, bias, misalignment, loss of control,... (and anything else I'm not thinking of right now). I talk about some of these issues here and here. I would like to see what we can learn from other fields, including software security, aviation and automotive safety, drug safety, nuclear arms control, etc. (and happy to get other suggestions). Talk about policy as well, various frameworks inside companies, regulations etc. Talk about predictions for the future, methodologies for how to come up with them. [...] ---
First published:
July 3rd, 2025
Source:
https://www.lesswrong.com/posts/qe8LjXAtaZfrc8No7/call-for-suggestions-ai-safety-course
---
Narrated by TYPE III AUDIO.

Jul 3, 2025 • 2min
[Linkpost] “IABIED: Advertisement design competition” by yams
This is a link post. We’re currently in the process of locking in advertisements for the September launch of If Anyone Builds It, Everyone Dies, and we’re interested in your ideas! If you have graphic design chops, and would like to try your hand at creating promotional material for If Anyone Builds It, Everyone Dies, we’ll be accepting submissions in a design competition ending on August 10, 2025. We’ll be giving out up to four $1000 prizes: One for any asset we end up using on a billboard in San Francisco (landscape, ~2:1, details below) One for any asset we end up using in the subways of either DC or NY or both (~square, details below) One for any additional wildcard thing that ends up being useful (e.g. a t-shirt design, a book trailer, etc.; we’re being deliberately vague here so as not to anchor people too hard on [...] ---
First published:
July 2nd, 2025
Source:
https://www.lesswrong.com/posts/8cmhAAj3jMhciPFqv/iabied-advertisement-design-competition
Linkpost URL: https://intelligence.org/2025/07/01/iabied-advertisement-design-competition/
---
Narrated by TYPE III AUDIO.

Jul 3, 2025 • 30min
“Congress Asks Better Questions” by Zvi
Back in May I did a dramatization of a key and highly painful Senate hearing. Now, we are back for a House committee meeting. It was entitled ‘Authoritarians and Algorithms: Why U.S. AI Must Lead’ and indeed a majority of talk was very much about that, with constant invocations of the glory of democratic AI and the need to win.
The majority of talk was this orchestrated rhetoric that assumes the conclusion that what matters is ‘democracy versus authoritarianism’ and whether we ‘win,’ often (but not always) translating that as market share without any actual mechanistic model of any of it.
However, there were also some very good signs, some excellent questions, signs that there is an awareness setting in. As far as Congressional discussions of real AGI issues go, this was in part one of them. That's unusual.
(And as always there were a few [...]
---
Outline:
(01:29) Some Of The Best Stuff
(05:30) Welcome to the House
(08:22) Opening Statements
(12:24) Krishnamoorthi Keeps Going
(14:09) The Crowd Goes Wild
(14:13) Moolenaar
(15:04) Carson
(15:16) Lahood
(15:44) Dunn
(17:08) Johnson
(19:21) Torres
(22:25) Brown
(22:52) Nun
(24:43) Tokuda
(25:32) Moran
(26:33) Connor
(28:32) Humans Unemployable
---
First published:
July 2nd, 2025
Source:
https://www.lesswrong.com/posts/2dwBxehFfAsdKtbuq/congress-asks-better-questions
---
Narrated by TYPE III AUDIO.

Jul 2, 2025 • 16min
“Curing PMS with Hair Loss Pills” by David Lorell
Over the last two years or so, my girlfriend identified her cycle as having an unusually strong and very predictable effect on her mood/affect. We tried a bunch of interventions (food, sleep, socializing, supplements, reading the sequences, …) and while some seemed to help a bit, none worked reliably. Then, suddenly, something kind of crazy actually worked: Hair loss pills.
The Menstrual Cycle
Quick review: The womenfolk among us go through a cycle of varying hormone levels over the course of about a month. The first two weeks are called the “Follicular Phase”, and the last two weeks are called the “Luteal Phase.” The first week (usually less) is the “period” (“menstrual phase”, “menses”) where the body sloughs off the endometrial lining in the uterus, no longer of use to the unimpregnated womb, and ejects it in an irregular fountain of blood and gore. After that is a [...]
---
Outline:
(00:35) The Menstrual Cycle
(02:28) The Bad Guy: Progesterone Allopregnanolone
(05:19) Hair Loss Pills
(07:17) Finasteride
(09:23) Dutasteride
(10:55) Results So Far
(12:39) Come ON ALREADY
(13:44) iM nOt A dOcToR
The original text contained 7 footnotes which were omitted from this narration.
---
First published:
July 2nd, 2025
Source:
https://www.lesswrong.com/posts/mkGxXEmcAGowmMjHC/curing-pms-with-hair-loss-pills
---
Narrated by TYPE III AUDIO.