

LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.
Episodes

Dec 17, 2023 • 1h 1min
[HUMAN VOICE] "Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible" by Gene Smith and Kman
The podcast discusses the potential of gene editing to enhance adult intelligence. It surveys editing techniques and their challenges, including base editors and prime editors, examines lipid nanoparticles as a delivery vehicle for mRNA, and estimates the cost of running gene-editing experiments. It closes by emphasizing the research, funding, and genetic-engineering expertise needed to make adult intelligence enhancement feasible.

Dec 15, 2023 • 40min
[HUMAN VOICE] "Moral Reality Check (a short story)" by jessicata
Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated This is a linkpost for https://unstableontology.com/2023/11/26/moral-reality-check/ Janet sat at her corporate ExxenAI computer, viewing some training performance statistics. ExxenAI was a major player in the generative AI space, with multimodal language, image, audio, and video AIs. They had scaled up operations over the past few years, mostly serving B2B, but with some B2C subscriptions. ExxenAI's newest AI system, SimplexAI-3, was based on GPT-5 and Gemini-2. ExxenAI had hired away some software engineers from Google and Microsoft, in addition to some machine learning PhDs, and replicated the work of other companies to provide more custom fine-tuning, especially for B2B cases. Part of what attracted these engineers and theorists was ExxenAI's AI alignment team. Source: https://www.lesswrong.com/posts/umJMCaxosXWEDfS66/moral-reality-check-a-short-story Narrated for LessWrong by Perrin Walker. Share feedback on this narration. [125+ Karma Post] ✓ [Curated Post] ✓

Dec 15, 2023 • 17min
AI Control: Improving Safety Despite Intentional Subversion
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. We’ve released a paper, AI Control: Improving Safety Despite Intentional Subversion. This paper explores techniques that prevent AI catastrophes even if AI instances are colluding to subvert the safety techniques. In this post: we summarize the paper; we compare our methodology to the one used in other safety papers. The next post in this sequence (which we’ll release in the coming weeks) discusses what we mean by AI control and argues that it is a promising methodology for reducing risk from scheming models. Here's the abstract of the paper: As large language models (LLMs) become more powerful and are deployed more autonomously, it will be increasingly important to prevent them from causing harmful outcomes. Researchers have investigated a variety of safety techniques for this purpose, e.g. using models to review the outputs of other models [...]--- First published: December 13th, 2023 Source: https://www.lesswrong.com/posts/d9FJHawgkiMSPjagR/ai-control-improving-safety-despite-intentional-subversion --- Narrated by TYPE III AUDIO.

Dec 13, 2023 • 2min
2023 Unofficial LessWrong Census/Survey
The Less Wrong General Census is unofficially here! You can take it at this link. It's that time again. If you are reading this post and identify as a LessWronger, then you are the target audience. I'd appreciate it if you took the survey. If you post, if you comment, if you lurk, if you don't actually read the site that much but you do read a bunch of the other rationalist blogs or you're really into HPMOR, if you hung out on rationalist tumblr back in the day, or if none of those exactly fit you but I'm maybe getting close, I think you count and I'd appreciate it if you took the survey. Don't feel like you have to answer all of the questions just because you started taking it. Last year I asked if people thought the survey was too long; collectively they thought it was [...]--- First published: December 2nd, 2023 Source: https://www.lesswrong.com/posts/JHeTrWha5PxiPEwBt/2023-unofficial-lesswrong-census-survey --- Narrated by TYPE III AUDIO.

Dec 13, 2023 • 10min
The likely first longevity drug is based on sketchy science. This is bad for science and bad for longevity.
If you are interested in the longevity scene, like I am, you probably have seen press releases about the dog longevity company, Loyal for Dogs, getting a nod for efficacy from the FDA. These have come in the form of the New York Post calling the drug "groundbreaking", Science Alert calling the drug "radical", and the more sedate New York Times just asking, "Could Longevity Drugs for Dogs Extend Your Pet's Life?", presumably unaware of Betteridge's Law of Headlines. You may have also seen the coordinated Twitter offensive of people losing their shit about this, including their lead investor, Laura Deming, saying that she "broke down crying when she got the call". And if you have been following Loyal for Dogs for a while, like I have, you are probably puzzled by this news. Loyal for Dogs has been around since 2021. Unlike any other drug company or longevity [...]--- First published: December 12th, 2023 Source: https://www.lesswrong.com/posts/vHSkxmYYqW59sySqA/the-likely-first-longevity-drug-is-based-on-sketchy-science --- Narrated by TYPE III AUDIO.

Dec 13, 2023 • 13min
[HUMAN VOICE] "What are the results of more parental supervision and less outdoor play?" by Julia Wise
The podcast discusses the rise in parental supervision and the decline in outdoor play among children, and the effects on children's development: weaker physical and social skills and worsening mental health. It also analyzes trends in child injuries and deaths, highlighting car traffic as the hazard most deserving of attention, examines differences in play and injury rates between boys and girls, and considers methods for letting kids play independently.

Dec 12, 2023 • 1h 8min
Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible
In the course of my life, there have been a handful of times I discovered an idea that changed the way I thought about the world. The first occurred when I picked up Nick Bostrom's book “Superintelligence” and realized that AI would utterly transform the world. The second was when I learned about embryo selection and how it could change future generations. And the third happened a few months ago when I read a message from a friend of mine on Discord about editing the genome of a living person. We’ve had gene therapy to treat cancer and single gene disorders for decades. But the process involved in making such changes to the cells of a living person is excruciating and extremely expensive. CAR T-cell therapy, a treatment for certain types of cancer, requires the removal of white blood cells via IV, genetic modification of those cells outside the [...]--- First published: December 12th, 2023 Source: https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significantly-enhancing-adult-intelligence-with-gene-editing --- Narrated by TYPE III AUDIO.

Dec 11, 2023 • 11min
re: Yudkowsky on biological materials
I was asked to respond to this comment by Eliezer Yudkowsky. This post is partly redundant with my previous post. “Why is flesh weaker than diamond?” When trying to resolve disagreements, I find that precision is important. Tensile strength, compressive strength, and impact strength are different. Material microstructure matters. Poorly-sintered diamond crystals could crumble like sand, and a large diamond crystal has lower impact strength than some materials made of proteins. “Even when the load-bearing forces holding large molecular systems together are locally covalent bonds, as in lignin (what makes wood strong), if you've got larger molecules only held together by covalent bonds at interspersed points along their edges, that's like having 10cm-diameter steel beams held together by 1cm welds.” “lignin (what makes wood strong)” That's an odd way of putting things. The mechanical strength of wood is generally considered to come from it [...]--- First published: December 11th, 2023 Source: https://www.lesswrong.com/posts/XhDh97vm7hXBfjwqQ/re-yudkowsky-on-biological-materials --- Narrated by TYPE III AUDIO.

Dec 5, 2023 • 30min
Speaking to Congressional staffers about AI risk
In May and June of 2023, I (Akash) had about 50-70 meetings about AI risks with congressional staffers. I had been meaning to write a post reflecting on the experience and some of my takeaways, and I figured it could be a good topic for a LessWrong dialogue. I saw that hath had offered to do LW dialogues with folks, and I reached out. In this dialogue, we discuss how I decided to chat with staffers, my initial observations in DC, some context about how Congressional offices work, what my meetings looked like, lessons I learned, and some miscellaneous takes about my experience. Context. hath: Hey! In your message, you mentioned a few topics that relate to your time in DC. I figured we should start with your experience talking to congressional offices about AI risk. I'm quite interested in learning more; there don't seem to be many [...]--- First published: December 4th, 2023 Source: https://www.lesswrong.com/posts/2sLwt2cSAag74nsdN/speaking-to-congressional-staffers-about-ai-risk --- Narrated by TYPE III AUDIO.

Dec 4, 2023 • 1h 3min
[HUMAN VOICE] "Shallow review of live agendas in alignment & safety" by technicalities & Stag
Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated You can’t optimise an allocation of resources if you don’t know what the current one is. Existing maps of alignment research are mostly too old to guide you and the field has nearly no ratchet, no common knowledge of what everyone is doing and why, what is abandoned and why, what is renamed, what relates to what, what is going on. This post is mostly just a big index: a link-dump for as many currently active AI safety agendas as we could find. But even a linkdump is plenty subjective. It maps work to conceptual clusters 1-1, aiming to answer questions like “I wonder what happened to the exciting idea I heard about at that one conference”, “I just read a post on a surprising new insight and want to see who else has been working on this”, and “I wonder roughly how many people are working on that thing”. This doc is unreadably long, so that it can be Ctrl-F-ed. Also this way you can fork the list and make a smaller one. Most of you should only read the editorial and skim the section you work in. Source: https://www.lesswrong.com/posts/zaaGsFBeDTpCsYHef/shallow-review-of-live-agendas-in-alignment-and-safety#More_meta Narrated for LessWrong by Perrin Walker. Share feedback on this narration. [125+ Karma Post] ✓ [Curated Post] ✓


