LessWrong (Curated & Popular)

12 snips
May 15, 2025 • 9min

“Explaining British Naval Dominance During the Age of Sail” by Arjun Panickssery

Arjun Panickssery, an insightful author, delves into the British Navy's strategic brilliance from 1670 to 1827. He explains how institutional incentives shaped naval dominance, discussing key battles in the Seven Years’ War and the Napoleonic Wars. Panickssery highlights the motivations of naval captains that drove them into fierce battles and analyzes the intricate strategies that led to their victories. Additionally, he shares fascinating insights into the strict disciplinary measures that governed naval conduct, reflecting a unique era of maritime warfare.
5 snips
May 14, 2025 • 7min

“Eliezer and I wrote a book: If Anyone Builds It, Everyone Dies” by So8res

Delve into the urgent themes presented in a new book that tackles significant AI risks. The authors highlight the critical relationship between humanity and artificial intelligence. Endorsements from influential figures underscore the book's potential impact. A call to action resonates throughout the discussion, urging listeners to recognize and engage with these pivotal issues before it's too late. The launch serves as both a warning and an invitation for collective awareness and proactive measures.
10 snips
May 14, 2025 • 8min

“Too Soon” by Gordon Seidoh Worley

A poignant tale unfolds as a son navigates his mother's sudden illness and passing, celebrating cherished memories and the love they shared. The emotional journey reveals the complexities of grief, blending nostalgia with the difficult realities of loss. Amidst the sorrow, a fascinating discussion emerges on artificial intelligence, weighing its potential to reshape lives against the backdrop of personal tragedy. Hope and innovation intertwine, offering a glimpse into a future where technology and recovery coexist.
7 snips
May 13, 2025 • 5min

“PSA: The LessWrong Feedback Service” by JustisMills

Discover the unique benefits of the LessWrong Feedback Service, a handy tool for writers seeking editorial help. You can summon a professional editor for everything from grammar checks to clarity suggestions, all without pressure. Curious about the types of feedback available? Wondering how often you can request guidance or whether you can use it for linkposts? This fun discussion dives into how this service can enhance your writing experience and why it's worth a try!
May 8, 2025 • 8min

“Orienting Toward Wizard Power” by johnswentworth

Reflecting on personal loss, the narrator connects their quest for self with a historical dairy farmer who defied norms to vaccinate his family. The podcast contrasts conventional 'king power' with 'wizard power,' emphasizing the latter's importance in achieving authentic fulfillment. It envisions engaging community experiences through creative projects and tech exploration, promoting personal growth and collaborative learning. The narrative invites listeners to rethink success and embrace transformative identities beyond societal limitations.
May 5, 2025 • 13min

“Interpretability Will Not Reliably Find Deceptive AI” by Neel Nanda

Neel Nanda, a thought leader on AI safety, shares his intriguing insights on interpretability and its limits. He argues that relying solely on interpretability to detect deceptive AI is naive. Instead, he advocates for a multi-faceted defense strategy that includes black-box methods alongside interpretability. Nanda emphasizes that while interpretability can enhance our understanding, it's just one layer in ensuring AI safety. His hot takes spark a provocative discussion on the challenges we face with superintelligent systems.
4 snips
May 3, 2025 • 12min

“Slowdown After 2028: Compute, RLVR Uncertainty, MoE Data Wall” by Vladimir_Nesov

The discussion explores the anticipated slowdown in AI training compute around 2029, raising concerns about resource limitations and diminishing natural text data. It highlights the uncertainty over whether reasoning training (RLVR) can generate genuinely new capabilities rather than just elicit existing ones. The hosts analyze the implications of scaling challenges, suggesting that advancements may take decades rather than years. They also touch on the growing data inefficiency in current methods, emphasizing the urgency of transformative breakthroughs for future progress.
May 1, 2025 • 28min

“Early Chinese Language Media Coverage of the AI 2027 Report: A Qualitative Analysis” by jeanne_, eeeee

The discussion dives into the early reactions of Chinese media to the AI 2027 report, highlighting differing perspectives across platforms. Censorship patterns emerge as a crucial signal, hinting at government stances on AGI developments. The content is analyzed through mainstream media, forums, and personal blogs, revealing a complex landscape of public opinion. The geopolitical implications of AI predictions, particularly concerning tensions with the United States, are also examined. Insights into the societal perceptions of AI's future are unveiled.
Apr 25, 2025 • 1min

[Linkpost] “Jaan Tallinn’s 2024 Philanthropy Overview” by jaan

Discover the impressive achievements of philanthropy in 2024, spotlighting $51 million in endpoint grants. Learn about the speaker's ongoing commitment with over $4 million disbursed in early 2025 and a pledged $10 million for future grant rounds. Dive into the types of impactful projects funded and witness how philanthropy can evolve over time, demonstrating a sustained effort to make a difference.
9 snips
Apr 24, 2025 • 15min

“Impact, agency, and taste” by benkuhn

I’ve been thinking recently about what sets apart the people who’ve done the best work at Anthropic. You might think that the main thing that makes people really effective at research or engineering is technical ability, and among the general population that's true. Among people hired at Anthropic, though, we’ve restricted the range by screening for extremely high-percentile technical ability, so the remaining differences, while they still matter, aren’t quite as critical. Instead, people's biggest bottleneck eventually becomes their ability to get leverage—i.e., to find and execute work that has a big impact-per-hour multiplier. For example, here are some types of work at Anthropic that tend to have high impact-per-hour, or a high impact-per-hour ceiling when done well (of course this list is extremely non-exhaustive!): Improving tooling, documentation, or dev loops. A tiny amount of time fixing a papercut in the right way can save [...]

Outline:
(03:28) 1. Agency
(03:31) Understand and work backwards from the root goal
(05:02) Don't rely too much on permission or encouragement
(07:49) Make success inevitable
(09:28) 2. Taste
(09:31) Find your angle
(11:03) Think real hard
(13:03) Reflect on your thinking

First published: April 19th, 2025
Source: https://www.lesswrong.com/posts/DiJT4qJivkjrGPFi8/impact-agency-and-taste
Narrated by TYPE III AUDIO.