LessWrong (30+ Karma)

Sep 8, 2025 • 26min

“Ketamine part 2: What do in vitro studies tell us about safety?” by Elizabeth

Ketamine is an anesthetic with growing popularity as an antidepressant. As an antidepressant, it's quite impressive. When it works, it often works within hours, a huge improvement over giving a suicidal person a pill that might work 6 weeks from now. And it has a decent success rate in people who have been failed by several other antidepressants and are thus most in need of a new option. The initial antidepressant protocols for ketamine called for a handful of uses over a few weeks. Scott Alexander judged this safe, and I'm going to take that as a given for this post. However, new protocols are popping up for indefinite, daily use. Lots of medications are safe or worth it for eight uses, but harmful and dangerous when used chronically. Are these new continuous-use protocols safe? That's a complicated question that will require several blog posts to [...]

Outline:
(01:15) Reasons to doubt my results
(03:09) Tl;dr
(08:58) Papers
(09:51) Ketamine Increases Proliferation of Human iPSC-Derived Neuronal Progenitor Cells via Insulin-Like Growth Factor 2 and Independent of the NMDA Receptor
(11:57) Ketamine Induces Toxicity in Human Neurons Differentiated from Embryonic Stem Cells via Mitochondrial Apoptosis Pathway
(14:42) (2R,6R)-Hydroxynorketamine promotes dendrite outgrowth in human inducible pluripotent stem cell-derived neurons through AMPA receptor with timing and exposure compatible with ketamine infusion pharmacokinetics in humans
(16:18) Ketamine Causes Mitochondrial Dysfunction in Human Induced Pluripotent Stem Cell-Derived Neurons
(18:05) Ketamine Prevents Inflammation-Induced Reduction of Human Hippocampal Neurogenesis via Inhibiting the Production of Neurotoxic Metabolites of the Kynurenine Pathway
(19:11) The effects of ketamine on viability, primary DNA damage, and oxidative stress parameters in HepG2 and SH-SY5Y cells (the negative one)
(21:25) Why In Vitro Studies?
(25:09) Conclusion
(25:28) Thanks

First published: September 7th, 2025
Source: https://www.lesswrong.com/posts/kjbq7T7Z2vEDoPw95/ketamine-part-2-what-do-in-vitro-studies-tell-us-about

Narrated by TYPE III AUDIO.
Sep 7, 2025 • 10min

“The System You Deployed Is Not the System You Designed” by Thane Ruthenis

Explore how deployed systems often fail to match the behavior their designs promised. The discussion highlights the importance of questioning your assumptions about what your design can actually achieve. An illustrative example involving sound walls shows how unexpected factors, like varying sound speeds, can undermine your plans. This prompts listeners to rethink their models and embrace uncertainty, a valuable reminder of the complexities of system design.
Sep 6, 2025 • 8min

“OffVermilion” by Tomás B.

"You heard of musician's dystonia?" OffVermilion - a handle his friends shorten to Vermi - once said to me, as we admired our avatars in a virtual mirror, one of the great amusements in VRChat. He was wearing an anime fox-girl avatar, female anime avatars being pretty standard in VRChat regardless of gender or orientation, his voice modulated by a neural net he had trained himself after mainlining Jeremy Howard's ML MOOCs - now the distilled essence of anime-girl voice with only the slightest twang of artificiality. Despite the significant technical effort expended in his mimicry, he insists he is a "he" and refers to himself as a "femboy." "Vaguely," I said. "So how it works," he said, "is a musician tries to reach a note on their cello or violin or piano or whatever with their index finger, and their middle finger moves too. They find they can't [...] --- First published: September 6th, 2025 Source: https://www.lesswrong.com/posts/JMqHvLCRsChvq6x4m/offvermilion --- Narrated by TYPE III AUDIO.
Sep 6, 2025 • 1min

“Chesterton’s Missing Fence” by jasoncrawford

The inverse of Chesterton's Fence is this: Sometimes a reformer comes up to a spot where there once was a fence, which has since been torn down. They declare that all our problems started when the fence was removed, that they can't see any reason why we removed it, and that what we need to do is to RETVRN to the fence. By the same logic as Chesterton, we can say: If you don't know why the fence was torn down, then you certainly can't just put it back up. The fence was torn down for a reason. Go learn what problems the fence caused; understand why people thought we'd be better off without that particular fence. Then, maybe we can rebuild the fence—or a hedgerow, or a chalk line, or a stone wall, or just a sign that says “Please Do Not Walk on the Grass,” or whatever [...]

First published: September 5th, 2025
Source: https://www.lesswrong.com/posts/mJQ5adaxjNWZnzXn3/chesterton-s-missing-fence

Narrated by TYPE III AUDIO.
Sep 5, 2025 • 4min

[Linkpost] “How to make better AI art with current models” by Nina Panickssery

This is a link post.

AI image-generation models I've tried

Midjourney is best at producing a diverse and aesthetically pleasing range of styles and doesn't refuse "in the style of..." requests. However, it is worst at text-in-images, at avoiding uncanny AI artifacts (like extra fingers or unrealistic postures), and at precise instruction-following (it messes up the specifics). Another major downside is that Midjourney doesn't offer an API. GPT-5 produces less artistic outputs but is better at following precise instructions on text and composition details. Gemini "Nano Banana" is somewhere in the middle: it is ok-ish at everything, better at style than GPT-5 but worse than Midjourney, better at instruction-following than Midjourney but worse than GPT-5.

[Image: Midjourney v7 messing up instruction-following (basically none of these are images of a robot pushing a computer up a hill)]
[Image: GPT-5 with the same prompt. It followed the instruction, but the style isn't as nice, and it messed up [...]]

Outline:
(00:12) AI image-generation models I've tried
(01:38) Styles that look good
(02:27) Compositions that look good
(02:47) Correcting artifacts with programmatic post-processing

First published: September 4th, 2025
Source: https://www.lesswrong.com/posts/wdnjXoQGbHGAKQcyW/how-to-make-better-ai-art-with-current-models
Linkpost URL: https://blog.ninapanickssery.com/p/how-to-make-ai-art-look-good

Narrated by TYPE III AUDIO.
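The outline's last item mentions correcting artifacts with programmatic post-processing. As a hedged illustration of what that can look like (the specific step here, adding mild film grain to soften the over-smooth look of AI images, is an illustrative assumption, not necessarily the author's actual pipeline), a minimal Python sketch:

```python
# A minimal sketch of "programmatic post-processing" in the spirit of the
# outline's last item. Adding mild Gaussian "film grain" is an illustrative
# assumption, not the author's confirmed method.
import numpy as np
from PIL import Image

def add_grain(path_in: str, path_out: str, strength: float = 12.0) -> None:
    """Overlay mild Gaussian noise on an image to reduce the 'AI sheen'."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)
    noise = np.random.normal(0.0, strength, size=img.shape)
    out = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(out).save(path_out)

# Example usage (hypothetical filenames):
# add_grain("robot_uphill.png", "robot_uphill_grain.png")
```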
Sep 4, 2025 • 10min

“30 Days of Retatrutide” by Brendan Long

I've had trouble maintaining my weight since high school. If I eat "normally", I slowly gain weight, and if I eat nothing but a specific potato casserole, I slowly lose weight. Recently, I hit a new record-high weight and decided it was finally time to do something more serious about it, so for the last month I've been taking the standard CICO diet advice. I finally realized that I should eat healthier, and eat significantly less. I work out slightly more. I cut out most high-fat foods and became strict about not eating after 6 pm. Anyway, this is a post about my experience with retatrutide. This is not a how-to article, so I'm not going to talk about where to get peptides or how to safely use them. See this article if you're into that sort of thing.

The Experiment

Retatrutide is a GLP-1 agonist similar to semaglutide [...]

Outline:
(00:58) The Experiment
(02:01) Effects
(03:20) Side Effects
(03:24) Muscle Loss?
(03:59) Heartburn
(04:40) Heart Rate
(05:13) Skin Sensitivity
(05:29) Chills
(06:04) Injection Site Redness
(06:32) Splitting Doses
(06:54) Other Effects
(07:58) A Theory
(08:35) Final Thoughts

The original text contained 1 footnote which was omitted from this narration.

First published: September 4th, 2025
Source: https://www.lesswrong.com/posts/mLvek6a9G86EnhJqH/30-days-of-retatrutide

Narrated by TYPE III AUDIO.
Sep 4, 2025 • 29min

“From SLT to AIT: NN generalisation out-of-distribution” by Lucius Bushnaq

Audio note: this article contains 288 uses of latex notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.

TL;DR: This post derives an upper bound on the generalization error for Bayesian learning on neural networks. Unlike the bound from vanilla Singular Learning Theory (SLT), this bound also holds for out-of-distribution generalization, not just for in-distribution generalization. Along the way, it shows some connections between SLT and Algorithmic Information Theory (AIT). Written at Goodfire AI.

Introduction

Singular Learning Theory (SLT) describes Bayesian learning on neural networks. But it currently has some limitations. One of these limitations is that it assumes the model's training data are drawn independently and identically distributed (IID) from some distribution, making it difficult to use SLT to describe out-of-distribution (OOD) generalization. If we train a model to classify pictures of animals taken outdoors, vanilla SLT [...]

Outline:
(00:52) Introduction
(04:05) Prediction error bounds for a computationally bounded Solomonoff induction
(04:11) Claim 1: We can import Solomonoff induction into the learning-theoretic setting
(06:50) Claim
(08:16) High-level proof summary
(09:34) Claim 2: A bounded induction still efficiently predicts efficiently predictable data
(09:42) Setup
(10:54) Claim
(13:04) Claim 3: The bounded induction is still somewhat invariant under our choice of UTM
(15:12) Prediction error bound for Bayesian learning on neural networks
(15:17) Claim 4: We can obtain a similar generalization bound for Bayesian learning on neural networks
(15:32) Setup
(16:59) Claim
(18:36) High-level proof summary
(19:14) Comments
(20:21) Relating the volume to SLT quantities
(23:10) Open problems and questions
(23:28) How do the priors actually relate to each other?
(24:08) Conjecture 1
(24:58) Conjecture 2 (Likely false for arbitrary NN architectures)
(27:20) What does C(w*, ε, f) look like in practice?
(28:24) Acknowledgments

The original text contained 16 footnotes which were omitted from this narration.

First published: September 4th, 2025
Source: https://www.lesswrong.com/posts/2MX2bXreTtntB85Zy/from-slt-to-ait-nn-generalisation-out-of-distribution

Narrated by TYPE III AUDIO.
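For context on the AIT side, the classical Solomonoff induction bound (stated here in its standard textbook form; the post's actual contribution is a computationally bounded analogue transferred to Bayesian learning on neural networks) says that for any computable data distribution $\mu$ and the Solomonoff mixture $M$:

$$\sum_{t=1}^{\infty} \mathbb{E}_{x_{<t} \sim \mu}\left[ D_{\mathrm{KL}}\big(\mu(\cdot \mid x_{<t}) \,\big\|\, M(\cdot \mid x_{<t})\big) \right] \;\le\; K(\mu) \ln 2,$$

where $K(\mu)$ is the prefix Kolmogorov complexity of $\mu$. Because the bound depends only on the description length of the true environment, not on any IID assumption, bounds of this shape survive distribution shift, which is the property the post imports into the SLT setting.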
Sep 4, 2025 • 12min

“‘I’d accepted losing my husband, until others started getting theirs back’” by Ariel Zeleznikow-Johnston

Note: While this article presents one possibility for how the future may play out, it should not be taken as a specific prediction by either the author or the Brain Preservation Foundation.

Amy Robertson-Wong
August 18, 2085

‘My husband had only died because saving his life would have been too weird.’ Annabel Walker had this realisation earlier this year after watching a video of Marcus Chen, a retired primary school teacher, sitting on a bed. His face is slack with relief, and teary-eyed family members are seated all around him. The scene is striking, but not particularly shocking - unless one surmises that Chen has recently been revived from preservation, after having been legally dead for fifteen years. Annabel barely made it to the toilet before vomiting. Chen had been the mentor of her late husband Julian, who had died at sixty-four in 2076. Six months earlier, he'd been [...]

Outline:
(02:24) Available But Invisible
(04:16) A Growing Phenomenon
(06:35) The Historical Context
(09:23) The End of Preservation

First published: September 3rd, 2025
Source: https://www.lesswrong.com/posts/Ek4Qk6iyKv9MNF8QD/i-d-accepted-losing-my-husband-until-others-started-getting

Narrated by TYPE III AUDIO.
Sep 4, 2025 • 3min

“Three main views on the future of AI” by Alex Amadori, Eva_B, Gabriel Alfour, Andrea_Miotti

Expert opinions about future AI development span a wide range, from predictions that we will reach ASI soon and that humanity then goes extinct, to predictions that AI progress will plateau soon, resulting in weaker AI that presents much more mundane risks and benefits. However, non-experts often encounter only a single narrative, depending on things like their social circles and social media feeds. In our paper, we propose a taxonomy that divides expert views on AI into three main clusters. We hope to provide a concise and digestible way to explain the most common views and the crucial disagreements between them. You can read the paper at: doctrines.ai. Here is a brief summary:

Dominance doctrine. The Dominance doctrine predicts that the first actor to develop sufficiently advanced AI will gain a decisive strategic advantage over all others. This is the natural consequence of two beliefs commonly held [...]

First published: September 2nd, 2025
Source: https://www.lesswrong.com/posts/wh6cZbKohCqYLXWsr/three-main-views-on-the-future-of-ai

Narrated by TYPE III AUDIO.
Sep 4, 2025 • 42min

“Natural Latents: Latent Variables Stable Across Ontologies” by johnswentworth, David Lorell

Audio note: this article contains 369 uses of latex notation, so the narration may be difficult to follow. There's a link to the original text in the episode description. There's a details box here with the title "Background on where this post/paper came from". The box contents are omitted from this narration.

This post is an overhaul of (and large improvement to) a paper we wrote about a year ago. The pdf version will hopefully be up on arxiv shortly, and we will link that from here once it's available. As of posting, this is probably the best first-stop mathematical intro to natural latents.

Abstract

Suppose two Bayesian agents each learn a generative model of the same environment. We will assume the two have converged on the predictive distribution (i.e. distribution over some observables in the environment), but may have different generative models containing different latent variables. Under [...]

Outline:
(00:52) Abstract
(01:42) Background
(04:12) The Math
(04:15) Setup & Objective
(07:05) Notation
(09:45) Foundational Concepts: Mediation, Redundancy & Naturality
(12:59) Core Theorems
(14:51) Naturality ⇒ Minimality Among Mediators
(15:59) Naturality ⇒ Maximality Among Redunds
(16:47) Isomorphism of Natural Latents
(17:14) Application To Translatability
(17:18) Motivating Question
(18:27) Guaranteed Translatability
(21:09) Natural Latents: Intuition & Examples
(21:24) When Do Natural Latents Exist? Some Intuition From The Exact Case
(23:11) Worked Quantitative Example of the Mediator Determines Redund Theorem
(26:54) Intuitive Examples of Natural Latents
(27:17) Ideal Gas
(28:22) Biased Die
(29:36) Timescale Separation In A Markov Chain
(30:24) Discussion & Conclusion
(33:56) Acknowledgements
(34:05) Appendices
(34:08) Graphical Notation and Some Rules for Graphical Proofs
(35:02) Frankenstein Rule
(37:39) Factorization Transfer
(39:15) Bookkeeping Rule
(40:03) The Dangly Bit Lemma
(41:09) Graphical Proofs
(41:18) Python Script for Computing D_KL in Worked Example

The original text contained 5 footnotes which were omitted from this narration.

First published: September 4th, 2025
Source: https://www.lesswrong.com/posts/Qdgo2jYAuFRMeMRJT/natural-latents-latent-variables-stable-across-ontologies

Narrated by TYPE III AUDIO.

[Images from the article illustrate the mediation factorization, the approximate-mediation bound, and the redundancy condition on latent variables.]
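The captions of those images preserve the key conditions; written out cleanly (a sketch following the figures' alt text; the exact approximation conventions are in the post itself), a latent $\Lambda$ mediates between the observables $X_1, X_2$ when

$$P[X_1, X_2, \Lambda] = P[\Lambda] \prod_j P[X_j \mid \Lambda], \qquad \text{approximately: } \epsilon \;\ge\; D_{\mathrm{KL}}\!\Big(P[\Lambda, X] \,\Big\|\, P[\Lambda] \prod_j P[X_j \mid \Lambda]\Big),$$

while redundancy requires the opposite direction: any single component $X_j$ is enough to (approximately) determine $\Lambda$. A latent satisfying both conditions is natural, and the post's isomorphism theorem is what licenses treating such latents as stable across the two agents' ontologies.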
