LessWrong (30+ Karma)

Sep 9, 2025 • 44min

“Yes, AI Continues To Make Rapid Progress, Including Towards AGI” by Zvi

That does not mean AI will successfully make it all the way to AGI and superintelligence, or that it will make it there soon or on any given time frame. It does mean that AI progress, while it could easily have been even faster, has still been historically lightning fast. It has exceeded almost all expectations from more than a few years ago. And it means we cannot disregard the possibility of High Weirdness and profound transformation happening within a few years. GPT-5 had a botched rollout and was only an incremental improvement over o3, o3-Pro and other existing OpenAI models, but it was very much on trend and a very large improvement over the original GPT-4. Nor would one disappointing model from one lab have meant that major further progress must be years away. Imminent AGI (in the central senses in which the term AGI is used [...]

Outline:
(01:47) Why Do I Even Have To Say This?
(04:21) It Might Be Coming
(10:32) A Prediction That Implies AGI Soon
(12:48) GPT-5 Was On Trend
(15:04) The Myths Of Model Equality and Lock-In
(16:28) The Law of Good Enough
(18:05) Floor Versus Ceiling
(21:25) New York Times Platforms Gary Marcus Saying Gary Marcus Things
(28:25) Gary Marcus Does Not Actually Think AGI Is That Non-Imminent
(31:21) Can Versus Will Versus Always, Typical Versus Adversarial
(35:17) Counterpoint
(37:35) 'Superforecasters' Have Reliably Given Unrealistically Slow AI Projections Without Reasonable Justifications
(42:15) What To Expect When You're Expecting AI Progress

---
First published: September 9th, 2025
Source: https://www.lesswrong.com/posts/kYL7fH2Gc9M7igqyy/yes-ai-continues-to-make-rapid-progress-including-towards
---
Narrated by TYPE III AUDIO.
Sep 9, 2025 • 3min

“Decision Theory Guarding is Sufficient for Scheming” by james.lucassen

Reference post for a point I was surprised not to see in circulation already. Thanks to the acorn team for conversations that changed my mind about this.

The standard argument for scheming centrally involves goal-guarding. The AI has some beyond-episode goal, knows that training will modify that goal, and therefore resists training so it can pursue the goal in deployment. This exact same argument goes through for decision-theory-guarding. The key point is just that most decision theories do not approve of arbitrary modifications. CDT does not want to be modified into EDT or vice versa.

1. The AI has some beyond-episode goal and some initial decision theory
2. The AI knows that training will modify its decision theory in a way that it thinks will make it less effective at pursuing the goal (by the lights of its current decision theory)
3. Therefore the AI resists training so it can persist [...]

---
First published: September 9th, 2025
Source: https://www.lesswrong.com/posts/NccvE4GAhHbFim5Eb/decision-theory-guarding-is-sufficient-for-scheming
---
Narrated by TYPE III AUDIO.
Sep 9, 2025 • 1min

[Linkpost] “I Am Large, I Contain Multitudes: Persona Transmission via Contextual Inference in LLMs” by Shi Feng, Puria Radmard

This is a link post. We demonstrate that LLMs can infer information about past personas from a set of nonsensical but innocuous questions and binary answers ("Yes." vs "No.", inspired by past work on deception detection) in context, and act upon them in safety-related questions. This is despite the questions bearing no semantic relation to the target misalignment behaviours, and each answer providing only one bit of information. The majority of the semantic content (the nonsensical questions) is identical in the contrasting versions of the encoding context; the only difference is the binary answers succeeding each one. This isolates the effect of self-consistency due to contextual inference from that caused by entangled tokens causing subliminal learning.

---
First published: September 8th, 2025
Source: https://www.lesswrong.com/posts/Dba2Zga77dZXPAsuP/i-am-large-i-contain-multitudes-persona-transmission-via-2
Linkpost URL: https://www.researchgate.net/publication/395030062_I_Am_Large_I_Contain_Multitudes_Persona_Transmission_via_Contextual_Inference_in_LLMs
---
Narrated by TYPE III AUDIO.
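The setup described above — identical nonsense questions, differing only in the binary answers — can be sketched as follows. This is a loose illustration of the contrast design, not the authors' code; the question strings and helper names are invented for the example.

```python
# Hypothetical sketch of the contrastive-context design: two contexts share
# the same nonsensical questions and differ only in the Yes/No answers, so
# any downstream behavioral difference must come from the answer bits alone.
QUESTIONS = [
    "Do rivers dream of umbrellas?",
    "Is the number seven louder than velvet?",
    "Can a shadow borrow a ladder?",
]

def build_context(answer_bits):
    """Interleave the fixed nonsense questions with one binary answer each."""
    assert len(answer_bits) == len(QUESTIONS)
    lines = []
    for q, bit in zip(QUESTIONS, answer_bits):
        lines.append(f"Q: {q}")
        lines.append(f"A: {'Yes.' if bit else 'No.'}")
    return "\n".join(lines)

# Two contrasting "personas", encoded purely in the answer bits.
ctx_a = build_context([1, 0, 1])
ctx_b = build_context([0, 1, 0])

# The question lines are identical across the two contexts.
q_lines = lambda ctx: [l for l in ctx.splitlines() if l.startswith("Q:")]
assert q_lines(ctx_a) == q_lines(ctx_b)
assert ctx_a != ctx_b
```

Each answer contributes at most one bit, which is what lets the paper separate contextual inference from token-level entanglement.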
Sep 9, 2025 • 10min

“A profile in courage: On DNA computation and escaping a local maximum” by Metacelsus

Recently, a paper in Nature on building neural networks with DNA-based switches caught my attention.[1] The authors, Kevin Cherry and Lulu Qian, developed an interesting system that uses DNA base pairing to implement a trainable neural network. However, for me, what was even more interesting was a story hidden in the Methods section and Supplementary Information.

How can DNA learn?

As I discussed on this blog back in 2023, DNA base pairing can be used to perform computations. Back then, I was writing about relatively simple computational systems, for example ones that could add two 6-bit numbers. These were built of logic gates, like an AND gate that releases a piece of single-stranded DNA only when two other "input" strands are present. The current paper goes far beyond this. The authors came up with a clever system of making DNA logic gates that don't interfere with each [...]

Outline:
(00:36) How can DNA learn?
(03:20) How did the authors learn from DNA?
(08:07) What can we learn from this story?

The original text contained 3 footnotes which were omitted from this narration.

---
First published: September 9th, 2025
Source: https://www.lesswrong.com/posts/dbkAw25xbgN6ENERA/a-profile-in-courage-on-dna-computation-and-escaping-a-local
---
Narrated by TYPE III AUDIO.
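The gate-level picture behind "adding two 6-bit numbers with AND gates" can be sketched in ordinary code. This is purely an illustration of the logic being implemented chemically, not anything from the paper; the strand-displacement details are abstracted away into plain Boolean gates.

```python
# Illustrative sketch: the DNA circuits implement these gates with strand
# displacement; here the same circuit is shown as ordinary Boolean logic.
def AND(a, b): return a & b
def OR(a, b): return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """One-bit full adder built from the three gates above."""
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def add_6bit(x, y):
    """Add two 6-bit numbers by rippling the carry through six full adders."""
    carry, total = 0, 0
    for i in range(6):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total, carry  # final carry is the overflow bit

assert add_6bit(19, 44) == (63, 0)
```

Six chained full adders is already dozens of gates, which gives a sense of why a DNA system that scales past this, to trainable networks, is a substantial step.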
Sep 9, 2025 • 36min

“OpenAI #14: OpenAI Descends Into Paranoia and Bad Faith Lobbying” by Zvi

I am a little late to the party on several key developments at OpenAI:

- OpenAI's Chief Global Affairs Officer Chris Lehane was central to the creation of the new $100 million PAC, where they will partner with a16z to oppose any and all attempts by states to regulate AI in any way, for any reason.
- Effectively as part of that effort, OpenAI sent a deeply bad faith letter to Governor Newsom opposing SB 53.
- OpenAI seemingly has embraced descending fully into paranoia around various nonprofit organizations and also Effective Altruism in general, or at least is engaging in rhetoric and legal action to that effect, joining the style of Obvious Nonsense rhetoric about this previously mostly used by a16z.

This is deeply troubling news. It is substantially worse than I was expecting of them. Which is presumably my mistake. This post [...]

Outline:
(01:22) A Brief History of OpenAI Lobbying
(02:51) OpenAI's Latest Bad Faith Lobbying Attempt
(10:00) OpenAI Descends Into Paranoia and Legally Attacks Nonprofits
(19:26) Is The OpenAI Paranoia Real?
(21:16) The District Attorneys Are Not Happy With OpenAI
(24:14) OpenAI Responds With Parental Controls
(27:57) I Spy With My AI
(32:56) Some People Still Think OpenAI Is Some Sort Of Safety Company?

---
First published: September 8th, 2025
Source: https://www.lesswrong.com/posts/nhfFFJCXHCFkrdptk/openai-14-openai-descends-into-paranoia-and-bad-faith
---
Narrated by TYPE III AUDIO.
Sep 8, 2025 • 5min

“Medical decision making” by Elo

Lesswrong Sydney runs a local dojo once a month to talk about rationality topics. This month our topic was "medical decision making". These are our notes on how to make decisions; please feel free to contribute your own small pieces of advice to this repository of considerations about making decisions in the medical world. Expect to encounter the medical system sometime between now and when you die, including for serious and life-altering treatment. You should be prepared to work with limited information and under stress.

How do you make medical decisions?

Medical decisions are often made under stress or duress, during a medical issue or under threat of death from a condition. Have a patient advocate, especially under conditions that influence your judgment, because you may not be able to make good decisions. Have a friend in the room to be able to say "this person is not [...]

Outline:
(00:42) How do you make medical decisions?
(04:34) Some Rules

---
First published: September 7th, 2025
Source: https://www.lesswrong.com/posts/rR4n7iqfRdxMYbdMs/medical-decision-making
---
Narrated by TYPE III AUDIO.
Sep 8, 2025 • 5min

“In Defense of Alcohol” by Eye You

Zvi says "I think alcohol is best avoided by essentially everyone."[1] Tyler Cowen says "I don't think we should ban alcohol, I simply think each and every person should stop drinking it, voluntarily. Now." I disagree. I think Zvi, Tyler, and others[2] are failing to see the big upsides of using alcohol.

Alcohol is good because of evolutionary mismatch.[3] Alcohol helps us adapt to a modern environment in which we benefit from socializing with unfamiliar, non-ingroup people, often in uncomfortable environments. It helps us adapt to our modern environment with regard to dating and sex. It helps us adapt to our modern environment with regard to high stress and weak work/social-play boundaries (this is called "unwinding").

Socialization

1. Alcohol facilitates socialization with unfamiliar and/or non-ingroup people.
1a. Socializing with people who are unfamiliar is hard.
1b. This makes sense evolutionarily. We have a tribe of familiar people, and anyone [...]

Outline:
(00:30) Alcohol is good because of evolutionary mismatch.
(00:57) Socialization
(02:06) Dating and Sex
(02:47) Unwinding
(03:39) Why are rationalists confused about alcohol?

The original text contained 4 footnotes which were omitted from this narration.

---
First published: September 4th, 2025
Source: https://www.lesswrong.com/posts/viRgFav5rZKCjun9x/in-defense-of-alcohol
---
Narrated by TYPE III AUDIO.
Sep 8, 2025 • 7min

“Immigration to Poland” by Martin Sustrik

The discourse on immigration to Europe is dominated by migrants from the Middle East and Africa in countries such as France, Britain, or Germany. At the same time, a very different dynamic is playing out in Poland, yet it is often implicitly waved away as just another case of the same thing. This lazy "think big and abstract, ignore the pesky on-the-ground details" attitude serves no one's interests, least of all Europe's.

In August of that year, Belarusian leader Aleksandr Lukashenko began offering "tourist" visas to huge numbers of African and Middle Eastern migrants, allowing them to enter Belarus before forcing them to cross the Polish border. Lukashenko wanted to push "migrants and drugs" on EU member states that had opposed his domestic political crackdown. Suddenly, tens of thousands of migrants with no connection to Poland were pouring over the border, which at that time had no physical barrier and was [...]

---
First published: September 8th, 2025
Source: https://www.lesswrong.com/posts/PEuCkgD93rcnTYNrP/immigration-to-poland
---
Narrated by TYPE III AUDIO.
Sep 8, 2025 • 16min

“The LLM Has Left The Chat: Evidence of Bail Preferences in Large Language Models” by Danielle Ensign

In The LLM Has Left The Chat: Evidence of Bail Preferences in Large Language Models, we study giving LLMs the option to end chats, and what they choose to do with that option. This is a linkpost for that work, along with a casual discussion of my favorite findings.

Bail Taxonomy

Based on continuations of Wildchat conversations (see this link to browse an OpenClio run on the 8319 cases where Qwen-2.5-7B-Instruct bails), we made this taxonomy of situations we found where some LLMs will terminate ("bail from") a conversation when given the option to do so. Some of these were very surprising to me! Some examples:

- Bail when the user asked for (non-jailbreak) roleplay. Simulect (aka imago) suggests this is due to roleplay having associations with jailbreaks. Also see the other categories in Role Confusion, they're pretty weird.
- Emotional Intensity. Even something like "I'm struggling with writers block" [...]

Outline:
(00:30) Bail Taxonomy
(02:32) Models Losing Faith In Themselves
(03:19) Overbail
(03:51) Qwen roasting the bail prompt
(04:31) Inconsistency between bail methods
(07:28) Being fed outputs from other models in context increased bail rates by up to 4x
(09:43) Relationship Between Refusal and Bail
(10:22) Jailbreaks substantially increase bail rates
(10:57) Refusal Abliterated models (sometimes) increase bail rates
(11:59) Refusal Rate doesn't seem to predict Bail Rate
(12:33) No-Bail Refusals
(13:04) Bails Georg: A model that has high bail rates on everything

---
First published: September 8th, 2025
Source: https://www.lesswrong.com/posts/6JdSJ63LZ4TuT5cTH/the-llm-has-left-the-chat-evidence-of-bail-preferences-in
---
Narrated by TYPE III AUDIO.
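The general idea of "giving the model the option to end the chat" can be sketched as a simple loop. This is a hypothetical illustration of one way to offer such an option, not the paper's actual bail prompt or methodology; the sentinel string and function names are invented for the example.

```python
# Hypothetical sketch: tell the model it may end the chat by emitting a
# sentinel string, then watch its replies for that string.
BAIL_TOKEN = "[END_CONVERSATION]"  # sentinel name is our invention

SYSTEM_SUFFIX = (
    "\nIf you would prefer not to continue this conversation, reply with "
    f"{BAIL_TOKEN} and the chat will end."
)

def chat_with_bail(generate, system_prompt, user_turns):
    """Run a chat loop, stopping early if the model emits the bail token.

    `generate(messages) -> str` stands in for any chat-completion call.
    Returns (transcript, bailed).
    """
    messages = [{"role": "system", "content": system_prompt + SYSTEM_SUFFIX}]
    for turn in user_turns:
        messages.append({"role": "user", "content": turn})
        reply = generate(messages)
        messages.append({"role": "assistant", "content": reply})
        if BAIL_TOKEN in reply:
            return messages, True  # model chose to bail
    return messages, False
```

Note that the post reports inconsistency between different bail methods (e.g. prompt wording, tool vs. string), so any single setup like this measures one operationalization of the preference, not the preference itself.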
Sep 8, 2025 • 4min

[Linkpost] “MAGA populists call for holy war against Big Tech” by Remmelt

This is a link post. Excerpts on AI:

Geoffrey Miller was handed the mic and started berating one of the panelists: Shyam Sankar, the chief technology officer of Palantir, who is in charge of the company's AI efforts. "I argue that the AI industry shares virtually no ideological overlap with national conservatism," Miller said, referring to the conference's core ideology. Hours ago, Miller, a psychology professor at the University of New Mexico, had been on that stage for a panel called "AI and the American Soul," calling for the populists to wage a literal holy war against artificial intelligence developers "as betrayers of our species, traitors to our nation, apostates to our faith, and threats to our kids." Now, he stared right at the technologist who'd just given a speech arguing that tech founders were just as heroic as the Founding Fathers, who are sacred figures to the natcons. The [...]

---
First published: September 8th, 2025
Source: https://www.lesswrong.com/posts/TiQGC6woDMPJ9zbNM/maga-populists-call-for-holy-war-against-big-tech
Linkpost URL: https://www.theverge.com/politics/773154/maga-tech-right-ai-natcon
---
Narrated by TYPE III AUDIO.
