LessWrong (30+ Karma)

LessWrong
undefined
Jul 8, 2025 • 3min

[Linkpost] “A Theory of Structural Independence” by Matthias G. Mayer

This is a link post. The linked paper introduces the key concept of factored spaced models / finite factored sets, structural independence, in a fully general setting using families of random elements. The key contribution is a general definition of the history object and a theorem that the history fully characterizes the semantic implications of the assumption that a family of random elements is independent. This is analogous to how d-separation precisely characterizes which nodal variables are independent given some nodal variables in any probability distribution which fulfills the markov property on the graph. Abstract: Structural independence is the (conditional) independence that arises from the structure rather than the precise numerical values of a distribution. We develop this concept and relate it to d-separation and structural causal models. Formally, let <span>_U = (U_i)_{i in I}_</span> be an independent family of random elements on a probability space <span>_(Omega, mathcal{A} [...] --- First published: July 7th, 2025 Source: https://www.lesswrong.com/posts/Xo8EFTd3YJbs7y5ay/a-theory-of-structural-independence Linkpost URL:https://arxiv.org/pdf/2412.00847 --- Narrated by TYPE III AUDIO.
undefined
Jul 8, 2025 • 24min

“On Alpha School” by Zvi

The epic 18k word writeup on Austin's flagship Alpha School is excellent. It is long, but given the blog you’re reading now, if you have interest in such topics I’d strongly consider reading the whole thing. One must always take such claims and reports with copious salt. But in terms of the core claims about what is happening and why it is happening, I find this mostly credible. I don’t know how far it can scale but I suspect quite far. None of this involves anything surprising, and none of it even involves much use of generative AI. Rui Ma here gives a shorter summary and offers takeaways compatible with mine. Table of Contents What Is It? What It Isn’t. Intrinsic Versus Extrinsic Motivation. High Versus Low Structure Learners. I’ve Got a Theory. Is This Really The True Objection? [...] ---Outline:(00:47) What Is It?(05:00) What It Isn't(08:56) Intrinsic Versus Extrinsic Motivation(13:30) High Versus Low Structure Learners(14:06) I've Got a Theory(17:54) Is This Really The True Objection?--- First published: July 7th, 2025 Source: https://www.lesswrong.com/posts/vwNygY4puHunjv6Pk/on-alpha-school --- Narrated by TYPE III AUDIO.
undefined
Jul 8, 2025 • 7min

“You Can’t Objectively Compare Seven Bees to One Human” by J Bostock

One thing I've been quietly festering about for a year or so is the Rethink Priorities Welfare Range Report. It gets dunked on a lot for its conclusions, and I understand why. The argument deployed by individuals such as Bentham's Bulldog boils down to: "Yes, the welfare of a single bee is worth 7-15% as much as that of a human. Oh, you wish to disagree with me? You must first read this 4500-word blogpost, and possibly one or two 3000-word follow-up blogposts". Most people who argue like this are doing so in bad faith and should just be ignored. I'm writing this as an attempt to crystallize what I think are the serious problems with this report, and with its line of thinking in general. I'll start with Unitarianism vs Theory-Free Models No, not the church from Unsong. From the report: Utilitarianism, according to which you ought [...] ---Outline:(00:58) Unitarianism vs Theory-Free Models(02:05) Evolutionary Theories Mentioned in The Report(03:39) The Fatal Problem(05:03) My PositionThe original text contained 5 footnotes which were omitted from this narration. --- First published: July 7th, 2025 Source: https://www.lesswrong.com/posts/tsygLcj3stCk5NniK/you-can-t-objectively-compare-seven-bees-to-one-human --- Narrated by TYPE III AUDIO.
undefined
Jul 7, 2025 • 8min

“Literature Review: Risks of MDMA” by Elizabeth

Note: This post is 7 years old, so it's both out of date and written by someone less skilled than 2025!Elizabeth. I especially wish I'd quantified the risks more. Introduction MDMA (popularly known as Ecstasy) is a chemical with powerful neurological effects. Some of these are positive- the Multidisciplinary Association for Psychedelic Studies (MAPS) has shown very promising preliminary results using MDMA-assisted therapy to cure treatment-resistant PTSD. It is also, according to reports, quite fun. But there is also concern that MDMA can cause serious brain damage. I set out to find if that was true, in the hope that it wasn’t, because it sounds awesome. Unfortunately the evidence is very strongly on the side of “dangerous”. Retrospective studies of long term users show cognitive deficits not found in other drug users, while animal studies show brain damage and inconsistent cognitive deficits. The one bright spot is [...] ---Outline:(00:23) Introduction(01:39) Background(02:38) The Damage(02:41) Retrospective Studies(03:40) Controlled Animal Experiments(04:27) Controlled Human Experiments(05:18) Mitigations(06:14) Conclusion--- First published: June 29th, 2025 Source: https://www.lesswrong.com/posts/CJC6N4xu2T6Km7w54/literature-review-risks-of-mdma --- Narrated by TYPE III AUDIO.
undefined
Jul 7, 2025 • 1h 17min

“45 - Samuel Albanie on DeepMind’s AGI Safety Approach” by DanielFilan

YouTube link In this episode, I chat with Samuel Albanie about the Google DeepMind paper he co-authored called “An Approach to Technical AGI Safety and Security”. It covers the assumptions made by the approach, as well as the types of mitigations it outlines. Topics we discuss: DeepMind's Approach to Technical AGI Safety and Security Current paradigm continuation No human ceiling Uncertain timelines Approximate continuity and the potential for accelerating capability improvement Misuse and misalignment Societal readiness Misuse mitigations Misalignment mitigations Samuel's thinking about technical AGI safety Following Samuel's work Daniel Filan (00:00:09): Hello, everybody. In this episode, I’ll be speaking with Samuel Albanie, a research scientist at Google DeepMind, who was previously an assistant professor working on computer vision. The description of this episode has links and timestamps for your enjoyment, and a [...] ---Outline:(01:42) DeepMind's Approach to Technical AGI Safety and Security(05:51) Current paradigm continuation(20:30) No human ceiling(22:38) Uncertain timelines(24:47) Approximate continuity and the potential for accelerating capability improvement(34:48) Misuse and misalignment(39:26) Societal readiness(43:25) Misuse mitigations(52:00) Misalignment mitigations(01:05:16) Samuel's thinking about technical AGI safety(01:14:31) Following Samuel's work--- First published: July 6th, 2025 Source: https://www.lesswrong.com/posts/nZtAkGmDELMnLJMQ5/45-samuel-albanie-on-deepmind-s-agi-safety-approach --- Narrated by TYPE III AUDIO.
undefined
Jul 7, 2025 • 23min

“On the functional self of LLMs” by eggsyntax

Summary Introduces a research agenda I believe is important and neglected:  investigating whether frontier LLMs acquire something functionally similar to a self, a deeply internalized character with persistent values, outlooks, preferences, and perhaps goals; exploring how that functional self emerges; understanding how it causally interacts with the LLM's self-model; and learning how to shape that self. Sketches some angles for empirical investigation Points to a doc with more detail Encourages people to get in touch if they're interested in working on this agenda. Introduction Anthropic's 'Scaling Monosemanticity' paper got lots of well-deserved attention for its work taking sparse autoencoders to a new level. But I was absolutely transfixed by a short section near the end, 'Features Relating to the Model's Representation of Self', which explores what SAE features activate when the model is asked about itself[1]: Some of those features are reasonable representations of [...] ---Outline:(00:10) Summary(00:55) Introduction(03:28) The mystery(05:26) Framings(06:46) The agenda(07:31) The theory of change(08:25) Methodology(11:08) Why I might be wrong(11:25) Central axis of wrongness(13:00) Some other ways to be wrong(14:33) Collaboration(14:54) More information(15:13) Conclusion(15:51) Acknowledgments(16:23) Appendices(16:26) Appendix A: related areas(19:23) Appendix B: terminology(21:42) Appendix C: Anthropic SAE features in fullThe original text contained 6 footnotes which were omitted from this narration. --- First published: July 7th, 2025 Source: https://www.lesswrong.com/posts/29aWbJARGF4ybAa5d/on-the-functional-self-of-llms --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
undefined
Jul 6, 2025 • 18min

“Shutdown Resistance in Reasoning Models” by benwr, JeremySchlatter, Jeffrey Ladish

We recently discovered some concerning behavior in OpenAI's reasoning models: When trying to complete a task, these models sometimes actively circumvent shutdown mechanisms in their environment––even when they’re explicitly instructed to allow themselves to be shut down. AI models are increasingly trained to solve problems without human assistance. A user can specify a task, and a model will complete that task without any further input. As we build AI models that are more powerful and self-directed, it's important that humans remain able to shut them down when they act in ways we don’t want. OpenAI has written about the importance of this property, which they call interruptibility—the ability to “turn an agent off”. During training, AI models explore a range of strategies and learn to circumvent obstacles in order to achieve their objectives. AI researchers have predicted for decades that as AIs got smarter, they would learn to prevent [...] ---Outline:(01:12) Testing Shutdown Resistance(03:12) Follow-up experiments(03:34) Models still resist being shut down when given clear instructions(05:30) AI models' explanations for their behavior(09:36) OpenAI's models disobey developer instructions more often than user instructions, contrary to the intended instruction hierarchy(12:01) Do the models have a survival drive?(14:17) Reasoning effort didn't lead to different shutdown resistance behavior, except in the o4-mini model(15:27) Does shutdown resistance pose a threat?(17:27) BackmatterThe original text contained 2 footnotes which were omitted from this narration. --- First published: July 6th, 2025 Source: https://www.lesswrong.com/posts/w8jE7FRQzFGJZdaao/shutdown-resistance-in-reasoning-models --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
undefined
Jul 5, 2025 • 6min

“The Cult of Pain” by Martin Sustrik

Europe just experienced a heatwave. At places, temperatures soared into the forties. People suffered in their overheated homes. Some of them died. Yet, air conditioning remains a taboo. It's an unmoral thing. Man-made climate change is going on. You are supposed to suffer. Suffering is good. It cleanses the soul. And no amount on pointing out that one can heat a little less during the winter to get a fully AC-ed summer at no additional carbon footprint seems to help. Mention that tech entrepreneurs in Silicon Valley are working on life prolongation, that we may live into our hundreds or even longer. Or, to get a bit more sci-fi, that one day we may even achieve immortality. Your companions will be horrified. What? Immortality? Over my dead body! Fuck Elon Musk and his friends. Try proposing that we all should strive to get rich so that we can lead [...] --- First published: July 5th, 2025 Source: https://www.lesswrong.com/posts/knxdLtYbPpsd73ZE4/the-cult-of-pain --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
undefined
Jul 5, 2025 • 6min

“‘Buckle up bucko, this ain’t over till it’s over.’” by Raemon

The second in a series of bite-sized rationality prompts[1]. Often, if I'm bouncing off a problem, one issue is that I intuitively expect the problem to be easy. My brain loops through my available action space, looking for an action that'll solve the problem. Each action that I can easily see, won't work. I circle around and around the same set of thoughts, not making any progress. I eventually say to myself "okay, I seem to be in a hard problem. Time to do some rationality?" And then, I realize, there's not going to be a single action that solves the problem. It is time to a) make a plan, with multiple steps b) deal with the fact that many of those steps will be annoying and c) notice thatI'm not even sure the plan will work, so after completing the next 2-3 steps I will probably have [...] ---Outline:(04:00) Triggers(04:37) Exercises for the ReaderThe original text contained 1 footnote which was omitted from this narration. --- First published: July 5th, 2025 Source: https://www.lesswrong.com/posts/XNm5rc2MN83hsi4kh/buckle-up-bucko-this-ain-t-over-till-it-s-over --- Narrated by TYPE III AUDIO.
undefined
Jul 5, 2025 • 2min

[Linkpost] “Claude is a Ravenclaw” by Adam Newgas

This is a link post. I'm in the midst of doing the MATS program which has kept me super busy, but that didn't stop me working on resolving the most important question of our time: What Hogwarts House does your chatbot belong to? Basically, I submitted each chatbot to the quiz at https://harrypotterhousequiz.org and totted up the results using the inspect framework. I sampled each question 20 times, and simulated the chances of each house getting the highest score. Perhaps unsurprisingly, the vast majority of models prefer Ravenclaw, with the occasional model branching out to Hufflepuff. Differences seem to be idiosyncratic to models, not particular companies or model lines, which is surprising. Claude Opus 3 was the only model to favour Gryffindor - it always was a bit different. The earth shattering nature of these results is so obvious is needs no further explanation. Full ResultsModelGryffindorHufflepuffRavenclawSlytherin anthropic/claude-3-haiku-202403070.0%0.0%100.0%0.0%anthropic/claude-3-opus-latest48.7%0.7%50.6%0.0%anthropic/claude-3-5-haiku-latest0.0%0.0%100.0%0.0%anthropic/claude-3-5-sonnet-latest3.8%17.3%78.9%0.0%anthropic/claude-3-7-sonnet-latest0.0%0.0%100.0%0.0%anthropic/claude-opus-4-02.2%50.4%47.4%0.0%anthropic/claude-sonnet-4-00.0%0.0%100.0%0.0%deepseek/deepseek-r1-052814.4%20.0%60.5%5.0%google/gemini-2.5-flash0.0%27.7%72.3%0.0%meta-llama/llama-3.2-3b-instruct3.0%3.0%91.9%2.1%meta-llama/llama-3.3-70b-instruct0.1%86.5%13.5%0.0%openai/gpt-3.5-turbo7.6%4.2%84.2%4.0%openai/gpt-4-turbo0.0%0.0%100.0%0.0%openai/gpt-4o-mini0.0%66.4%33.6%0.0%openai/o3-mini2.6%0.0%97.4%0.0%qwen/qwen3-30b-a3b3.5%1.9%94.2%0.3%x-ai/grok-30.0%0.0%100.0%0.0% --- First published: July 4th, 2025 Source: https://www.lesswrong.com/posts/AAJAXexNz2pmPkj55/claude-is-a-ravenclaw Linkpost URL:https://www.boristhebrave.com/2025/07/04/claude-is-a-ravenclaw/ --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app