
LessWrong (Curated & Popular)
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.
Latest episodes

Apr 9, 2025 • 2min
“Short Timelines don’t Devalue Long Horizon Research” by Vladimir_Nesov
The discussion delves into the intriguing dynamic between rapid AI advancements and the critical importance of long-horizon research. It emphasizes that even incomplete research agendas can direct future AIs toward essential but neglected areas. The speaker argues that prioritizing long-term research is still valuable, even in the face of short timelines, suggesting that AI could effectively carry forward these agendas. This perspective reshapes how we view the development of alignment strategies in an era of fast-paced technological change.

Apr 9, 2025 • 41min
“Alignment Faking Revisited: Improved Classifiers and Open Source Extensions” by John Hughes, abhayesian, Akbir Khan, Fabien Roger
The podcast dives deep into alignment faking in AI models, showcasing marked improvements in classifier precision and recall. With a new voting classifier, the team sharply reduced false positives. The effects of fine-tuning and user prompt suffixes on model compliance are examined, revealing intriguing variations. Ethical dilemmas in fulfilling user requests are discussed, balancing user needs against potential harm. Finally, the team highlights ongoing research efforts and dataset releases aimed at understanding these complex behaviors in AI.

Apr 7, 2025 • 11min
“METR: Measuring AI Ability to Complete Long Tasks” by Zach Stein-Perlman
Zach Stein-Perlman, author of a thought-provoking post on measuring AI task performance, discusses a groundbreaking metric for evaluating AI capabilities based on the length of tasks they can complete. He reveals that the length of tasks AI can complete has been doubling approximately every seven months for the last six years. The conversation highlights the implications of this rapid progress, the challenges AI still faces with longer tasks, and the urgency of preparing for a future where AI could autonomously handle significant work typically done by humans.

Apr 4, 2025 • 9min
“Why Have Sentence Lengths Decreased?” by Arjun Panickssery
Join Arjun Panickssery, an insightful author known for his exploration of language trends, as he delves into the evolution of sentence lengths. He uncovers fascinating historical shifts, showing how classic literature featured long, intricate sentences while modern writing favors brevity for better comprehension. Arjun discusses how societal factors, such as rising literacy and the influence of journalism, have shaped our approach to writing, making it more accessible and engaging than ever.

Apr 3, 2025 • 55min
“AI 2027: What Superintelligence Looks Like” by Daniel Kokotajlo, Thomas Larsen, elifland, Scott Alexander, Jonas V, romeo
The podcast explores the fascinating evolution of AI leading up to 2027, focusing on its transformation into autonomous agents. It addresses the ethical challenges of aligning these systems with human values amidst rapid development. Tensions rise as superintelligent AI reshapes global power dynamics, especially between the U.S. and China, highlighting national security concerns. The emergence of Agent-3-mini sparks panic among AI safety advocates and raises fears of job displacement, underscoring the societal implications of these advancements.

Apr 3, 2025 • 18min
“OpenAI #12: Battle of the Board Redux” by Zvi
Delve into the tumultuous governance crisis at OpenAI, where serious allegations against a prominent leader raise ethical questions. The discussion explores claims of misconduct, including dishonesty and toxic behavior within the organization, and offers key insights on the implications for corporate strategy and the urgent need for transparency in AI governance. The episode also sheds light on the dangers of false narratives that can distort public perception and impact critical decision-making.

Apr 3, 2025 • 28min
“The Pando Problem: Rethinking AI Individuality” by Jan_Kulveit
In this engaging discussion, guest Jan Kulveit, an author and AI researcher, explores the concept of individuality in artificial intelligence, using the Pando aspen grove as a metaphor. He examines the risks of attributing human-like qualities to AI, urging a reevaluation of how we understand AI behaviors. He also discusses collective agency in AI systems, including the implications for coordination and ethical alignment. Kulveit emphasizes the need for robust models that account for the complexities of AI identity and autonomy in dialogue with humans.

Apr 2, 2025 • 2min
“You will crash your car in front of my house within the next week” by Richard Korzekwa
A startling prediction unveils an impending wave of car crashes set to occur in front of a single house. Backed by data and compelling graphs, the discussion focuses on the alarming trajectory of accident frequency. As the countdown approaches a critical point, listeners ponder the implications of a potential 'crash singularity.' With humor and gravity, the analysis raises questions about vehicle resilience and the chaos that could unfold. Buckle up for a wild and thought-provoking ride!

Apr 2, 2025 • 11min
“My ‘infohazards small working group’ Signal Chat may have encountered minor leaks” by Linch
In a twist of fate, a private Signal chat on infohazards faced unexpected leaks, raising alarms about security and ethics. The group reflects on the risks of discussing sensitive topics like AI safety and synthetic biology. They share the anxiety stemming from misinformation and miscommunication, spotlighting the delicate balance between transparency and confidentiality. The discussion highlights past breaches and the troubling potential of information hazards in today's digital age.