

LessWrong (Curated & Popular)
LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the "LessWrong (30+ karma)" feed.
Episodes

Nov 17, 2023 • 14min
[HUMAN VOICE] "Thinking By The Clock" by Screwtape
Support ongoing human narrations of curated posts: www.patreon.com/LWCurated

I'm sure Harry Potter and the Methods of Rationality taught me some of the obvious, overt things it set out to teach. Looking back on it a decade after I first read it, however, what strikes me most strongly are often the brief, tossed-off bits in the middle of the flow of a story.

Fred and George exchanged worried glances.
"I can't think of anything," said George.
"Neither can I," said Fred. "Sorry."
Harry stared at them.
And then Harry began to explain how you went about thinking of things.
It had been known to take longer than two seconds, said Harry.
- Harry Potter and the Methods of Rationality, Chapter 25

Source: https://www.lesswrong.com/posts/WJtq4DoyT9ovPyHjH/thinking-by-the-clock
Narrated for LessWrong by Perrin Walker.
Share feedback on this narration.
[125+ Karma Post] ✓ [Curated Post] ✓

Nov 17, 2023 • 2min
"You can just spontaneously call people you haven't met in years" by lc
Here's a recent conversation I had with a friend:

Me: "I wish I had more friends. You guys are great, but I only get to hang out with you like once or twice a week. It's painful being holed up in my house the entire rest of the time."
Friend: "You know ${X}. You could talk to him."
Me: "I haven't talked to ${X} since 2019."
Friend: "Why does that matter? Just call him."
Me: "What do you mean 'just call him'? I can't do that."
Friend: "Yes you can"
Me:

Source: https://www.lesswrong.com/posts/2HawAteFsnyhfYpuD/you-can-just-spontaneously-call-people-you-haven-t-met-in
Narrated for LessWrong by TYPE III AUDIO.
Share feedback on this narration.
[125+ Karma Post] ✓

Nov 17, 2023 • 8min
"EA orgs' legal structure inhibits risk taking and information sharing on the margin" by Elizabeth
Elizabeth discusses how EA organizations' legal structures inhibit risk-taking and information sharing on the margin. She explores the challenges of forming a legally independent organization, along with the loss of value, coordination costs, chilling effects, and restricted information sharing that fiscal sponsorship arrangements can create.

Nov 17, 2023 • 1h 18min
[HUMAN VOICE] "AI Timelines" by habryka, Daniel Kokotajlo, Ajeya Cotra, Ege Erdil
Ajeya Cotra, Daniel Kokotajlo, and Ege Erdil, researchers in the field of AI, discuss their varying estimates for the development of transformative AI and explore their disagreements. They delve into concrete AGI milestones, discuss the challenges of LLM product development, and debate factors that influence AI timelines. They also examine the progression of AI models, the potential of AI technology, and the timeline for achieving superintelligent AGI.

Nov 17, 2023 • 40min
"Integrity in AI Governance and Advocacy" by habryka, Olivia Jimenez
In this podcast, habryka and Olivia Jimenez discuss their thoughts on a recent AI alignment conjecture post, exploring questions on advocacy, social network coordination, and the balance between advocacy and research. They also dive into topics such as governance challenges, stigmas of Effective Altruism, and strategies for gathering support while maintaining integrity.

Nov 16, 2023 • 10min
Loudly Give Up, Don’t Quietly Fade
1. There's a supercharged, dire wolf form of the bystander effect that I'd like to shine a spotlight on.

First, a quick recap. The Bystander Effect is a phenomenon where people are less likely to help when there's a group around. When I took basic medical training, I was told to always ask one specific person to take action instead of asking a crowd at large. "You, in the green shirt! Call 911!" (911 is the emergency services number in the United States.) One habit I worked hard to instill in my own head was that if I'm in a crowd that's asked to do something, I silently count off three seconds. If nobody else responds, I either decide to do it or decide not to do it, and I say that. I like this habit, because the Bystander Effect is dumb and I want to fight it. Several [...]

---
First published: November 13th, 2023
Source: https://www.lesswrong.com/posts/bkfgTSHhm3mqxgTmw/loudly-give-up-don-t-quietly-fade
---
Narrated by TYPE III AUDIO.

Nov 9, 2023 • 8min
[HUMAN VOICE] "Towards Monosemanticity: Decomposing Language Models With Dictionary Learning" by Zac Hatfield-Dodds
This podcast discusses the challenges of understanding artificial neural networks and the importance of recording neuron activations and testing responses. It explores the decomposition of language models with dictionary learning, the benefits of using features for interpretation, and the concept of decomposing models into interpretable features. The chapter also discusses the universality of learned features, potential benefits of decomposing models into a small or large set of features, and the challenges of scaling this approach to larger models.

Nov 9, 2023 • 17min
[HUMAN VOICE] "Deception Chess: Game #1" by Zane et al.
An experiment in which humans play chess with advice from experts, two of whom are lying. Details about the first game of Deception Chess and the players involved. Discussion and analysis of the moves made in a chess game on Discord. The positive outcome of the game as an analogue for real-world AI oversight, and plans for further experiments. Reflections on the Deception Chess game, including unexpected mistakes by the advisors and speculation on future AI capabilities.

Nov 9, 2023 • 5min
"The 6D effect: When companies take risks, one email can be very powerful." by scasper
This podcast discusses the 6D effect, where documented communications of risks make companies more liable in court. It explores companies' liability for ignored risks and emphasizes the importance of discoverable documentation of dangers. The podcast sheds light on industry norms, legal discovery proceedings, and incentive structures related to risky system building.

Nov 9, 2023 • 1min
"The other side of the tidal wave" by Katja Grace
The podcast explores the distressing possibility of AI causing human extinction, and the upheaval that superhuman AI, if it becomes a reality, would bring to many aspects of life even short of that.


