

LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.
Episodes

Nov 25, 2024 • 21min
“‘The Solomonoff Prior is Malign’ is a special case of a simpler argument” by David Matolcsi
David Matolcsi, an author known for his insights on the Solomonoff prior, dives into the complexities of decision-making in the age of AI. He argues that the Solomonoff prior can lead superintelligent oracles to prioritize alien civilizations over humanity. Matolcsi emphasizes the importance of expected values over simple probabilities, warning against the pitfalls of relying on the latter. He also discusses how simulations and infinite universes complicate our understanding, urging us to focus on meaningful impacts to benefit humanity.

Nov 20, 2024 • 5min
“‘It’s a 10% chance which I did 10 times, so it should be 100%’” by egor.timatkov
Dive into the intriguing world of probability misconceptions! The discussion unravels the complexities of low-probability events, using the classic coin flip scenario. It reveals why flipping a fair coin twice doesn't guarantee at least one heads. With the essential formulas introduced, listeners discover a surprising twist on the chances of success: a 1/n chance attempted n times succeeds at least once only about 63% of the time, not 100%. Get ready to rethink what you know about probabilities!
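The 63% figure in this episode follows from a short calculation: the chance of at least one success in n independent attempts, each with probability 1/n, is 1 − (1 − 1/n)^n, which approaches 1 − 1/e ≈ 0.632 as n grows. Below is a minimal sketch (not taken from the episode; the function name is just illustrative) that checks this for a few values of n.

```python
import math

def at_least_one_success(n: int) -> float:
    """Probability of at least one success in n independent trials,
    each with success probability 1/n."""
    return 1 - (1 - 1 / n) ** n

# For n = 2 this is the "flip a coin twice" case: 0.75, not 1.0.
for n in (2, 10, 100, 1_000_000):
    print(f"n = {n:>9}: P(at least one success) = {at_least_one_success(n):.4f}")

# The limiting value the episode's ~63% refers to:
print(f"1 - 1/e = {1 - 1 / math.e:.4f}")
```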

Nov 19, 2024 • 1h 3min
“OpenAI Email Archives” by habryka
Email exchanges between tech giants reveal their early thoughts on creating beneficial AI and the significant challenges they faced. The discussions highlight the importance of transparency, strategic recruitment, and salary management to attract top talent. Notable negotiations with Microsoft are examined, showcasing concerns about governance and control over AI development. The ongoing pursuit of aligning AI's progress with ethical considerations is a key theme, making for a compelling look into the interplay of ambition and responsibility in tech.

Nov 18, 2024 • 9min
“Ayn Rand’s model of ‘living money’; and an upside of burnout” by AnnaSalamon
In this engaging discussion, Anna Salamon, an author known for her insights on willpower, shares her intriguing toy model of the relationship between conscious choices and burnout. She connects Ayn Rand's concept of 'living money' to a model of 'living willpower,' illustrating how our choices can nourish our psyche over time. The conversation also highlights the dangers of delusional planning leading to eventual burnout, offering insights into healthier relationships with our own plans and personal growth. A thought-provoking take on willpower and well-being!

Nov 17, 2024 • 24min
“Neutrality” by sarahconstantin
In a deeply polarized world, the concept of neutrality takes center stage. The discussion highlights the scarcity of unbiased institutions and explores the various realities shaped by differing beliefs. Historical and contemporary examples shed light on the challenges of cooperation amid diversity. The podcast emphasizes the need for new frameworks that blend human judgment with structured protocols. It also contemplates the balance between utopian ideals and practical proposals, questioning the role of trust and authority in a quest for societal progress.

Nov 16, 2024 • 14min
“Making a conservative case for alignment” by Cameron Berg, Judd Rosenblatt, phgubbins, AE Studio
Explore the intersection of politics and AI as the conversation dives into the implications of Trump’s potential leadership during a critical time for artificial general intelligence. The speakers argue for a conservative approach to AI alignment, emphasizing national security and the need for bipartisan efforts. They also discuss how addressing AI risks transcends political boundaries, highlighting the importance of proactive policy development in a Republican-majority government. Discover surprising perspectives on winning the AI race while ensuring safety.

Nov 16, 2024 • 1h 4min
“OpenAI Email Archives (from Musk v. Altman)” by habryka
Discover the intriguing email exchanges between Elon Musk and Sam Altman during the formation of OpenAI. The discussion delves into strategic planning and the organization's mission to advance AI responsibly. Unresolved issues about control and governance in AGI are also explored, emphasizing the need for trust and equitable ownership. As the legal battle unfolds, these insights reveal the complexities behind one of the tech world's most pivotal collaborations.

Nov 15, 2024 • 27min
“Catastrophic sabotage as a major threat model for human-level AI systems” by evhub
The discussion dives into the significant threat of catastrophic sabotage in the context of human-level AI. It examines two chilling scenarios: sabotage of AI alignment research and attacks on critical actors. The speakers evaluate the necessary capabilities for carrying out such sabotage and explore methods for assessing risks. To combat these threats, they propose strategies for mitigation, including internal usage restrictions and affirmative safety cases. It’s a compelling look at the darker implications of AI development.

Nov 12, 2024 • 22min
“The Online Sports Gambling Experiment Has Failed” by Zvi
Zvi, an author with extensive experience in sports betting, discusses the detrimental effects of legalized online sports gambling. He reveals alarming trends like increased bankruptcies and domestic violence linked to gambling addiction. Zvi critiques the predatory nature of current gambling practices, emphasizing the accessibility and manipulation tactics that exploit players. His insights challenge the notion that legalized betting is harmless, advocating for stricter regulations to protect vulnerable populations.

Nov 12, 2024 • 5min
“o1 is a bad idea” by abramdemski
The podcast delves into the risks of o1, highlighting its doubling down on reinforcement learning, which raises safety concerns. It stresses the need for precise value definitions to avoid catastrophic outcomes. Additionally, the discussion touches on the challenges of aligning AI behavior with human morals and the complications that arise from optimizing ambiguous concepts. The implications for AI interpretability are also explored, revealing a gap in understanding how systems like o1 arrive at their conclusions.


