LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the "LessWrong (30+ karma)" feed.
Episodes
Oct 23, 2024 • 9min
“Overcoming Bias Anthology” by Arjun Panickssery
Arjun Panickssery, the author behind the "Overcoming Bias Anthology," explores how biases shape our decision-making. He explains the distinction between near and far thinking and why our future ambitions so often diverge from our present behavior. The conversation covers the implications of artificial intelligence, weighing both its potential and its existential risks. Panickssery also examines the roles cognitive biases play in society and the tension between our ideals and our concrete actions. His insights challenge listeners to reconsider how they perceive reality and make decisions.

Oct 22, 2024 • 12min
“Arithmetic is an underrated world-modeling technology” by dynomight
Explore how arithmetic transcends mere calculation to become a powerful world-modeling technology. Discover its applications in scientific domains, such as nutrition research involving chimpanzees, and the importance of keeping units consistent in calculations. Dive into the challenge of estimating the cost and size of massive steel blocks, using imaginative comparisons to iconic structures. The discussion reveals arithmetic's hidden power for grasping complex quantities.

Oct 15, 2024 • 25min
“My theory of change for working in AI healthtech” by Andrew_Critch
In this discussion, Andrew Critch, an AI alignment expert working in healthtech, shares his insights on the urgent need to address the risks of AI, particularly the impending arrival of AGI. He highlights concerns about industrial dehumanization and how it could threaten humanity. Critch advocates for developing human-centric industries, especially in healthcare, as a way to foster human welfare amidst rapid AI advancement. He emphasizes the importance of moral commitment in the sector to navigate the challenges posed by AI.

Oct 15, 2024 • 18min
“Why I’m not a Bayesian” by Richard_Ngo
Richard Ngo, author and philosopher, dives into his critiques of Bayesianism as a method of reasoning. He explains the core principles of Bayesianism, highlighting its focus on degrees of belief, and presents philosophical objections, such as the need for fuzzy truth values. Ngo emphasizes the importance of model-based reasoning and discusses the limitations of Bayesian methods in complex scientific modeling. He draws on insights from Karl Popper to explore how models can differ in structural accuracy and practical usefulness.

Oct 14, 2024 • 18min
“The AGI Entente Delusion” by Max Tegmark
Max Tegmark, an influential author and AI researcher, discusses the emerging geopolitical strategy concerning Artificial General Intelligence (AGI). He critiques the so-called 'entente strategy' of racing to outpace rival nations, warning that it amounts to a 'suicide race'. Tegmark argues that the real beneficiaries of an AGI arms race would be machines, not nations, and advocates instead for 'Tool AI' that prioritizes ethical alignment and safety as a more sustainable approach to AGI development.

Oct 14, 2024 • 19min
“Momentum of Light in Glass” by Ben
The podcast dives into the intriguing mystery of light's momentum in different media, like glass and water. It discusses historical theories from scientists like Abraham and Minkowski, shedding light on their contrasting views. An engaging analogy compares light's behavior to a runner in water, adding depth to the debate. The importance of inquiry and open dialogue in physics is emphasized, highlighting how curiosity drives scientific discovery and progress.

Oct 9, 2024 • 25min
“Overview of strong human intelligence amplification methods” by TsviBT
TsviBT, an author renowned for exploring cognitive enhancement, dives into how we can create superintelligent humans. He discusses various methods for amplifying intelligence, from brain emulation to gene editing, and examines the ethical dilemmas involved. TsviBT highlights the significance of Algernon's Law and stresses the importance of funding innovative projects. He also explores the complexities of brain-brain interfaces and the potential of genomic regulatory networks to enhance cognitive functions, all while addressing the inherent challenges of these technologies.

Oct 3, 2024 • 12min
“Struggling like a Shadowmoth” by Raemon
This discussion delves into the transformative power of suffering and personal growth. It centers on a fictional character enduring extreme trials, showing how confronting pain fosters self-awareness. The episode also explores using biotechnology to alleviate suffering, the role of personal struggle in learning, and emotional insights on fear and self-acceptance during hard times, underscoring the balance between seeking external validation and cultivating inner strength.

Oct 3, 2024 • 8min
“Three Subtle Examples of Data Leakage” by abstractapplic
Explore the intriguing world of data leakage in data science through fascinating examples. From navigating sealed-bid auctions to the complexities of random sampling, discover how improper data handling can lead to skewed predictions. Learn about the subtle implications of these challenges and the strategies needed to identify and correct them. The journey highlights the critical importance of data integrity and the ongoing vigilance required for successful modeling. It's a deep dive into the intellectual dilemmas faced in the field!

Sep 30, 2024 • 22min
“the case for CoT unfaithfulness is overstated” by nostalgebraist
The conversation dives into the skepticism surrounding chain-of-thought (CoT) explanations from large language models and challenges the notion that these explanations are entirely untrustworthy. Listeners are encouraged to reconsider the insights CoTs can provide despite their flaws, and to recognize their distinctive benefits compared to other reasoning methods. The episode closes with a call for a more nuanced view of what we can learn from model-generated reasoning.
