LessWrong (Curated & Popular)

Aug 12, 2025 • 21min

“How Does A Blind Model See The Earth?” by henry

Explore how a blind language model's picture of Earth's geography can be surfaced through simple probing and mapping techniques. Discover the charm of early cartography and its personal interpretations of the world, then delve into land-probability maps that reveal how different systems internalize global distributions, with results varying across model families such as Qwen3. Highlights include comparisons of GPT-4 with its predecessors and what scale does to geographical accuracy. The discussion navigates the technical challenges of turning a model's answers into a faithful visualization of land.
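To make the probing idea concrete, here is a minimal sketch (my construction, not henry's actual code) of how a land-probability map can be built from a model's next-token probabilities; the model name and prompt wording are illustrative assumptions:

```python
# Sketch: for each point on a lat/lon grid, ask the model whether the point
# is land or water, and record the probability it assigns to "Land".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-0.5B-Instruct"  # assumed stand-in; the post tested many models
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def p_land(lat: float, lon: float) -> float:
    """Probability mass the model puts on 'Land' vs 'Water' for one coordinate."""
    prompt = (f"Is the point at latitude {lat:.1f}, longitude {lon:.1f} "
              f"on land or water? Answer with one word, Land or Water.\nAnswer:")
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]              # next-token logits
    land = tok(" Land", add_special_tokens=False).input_ids[0]
    water = tok(" Water", add_special_tokens=False).input_ids[0]
    pair = torch.softmax(logits[[land, water]], dim=0)  # renormalize over the pair
    return pair[0].item()

# Sweep a coarse grid; plotting this array as an image yields the "map".
grid = [[p_land(lat, lon) for lon in range(-180, 181, 10)]
        for lat in range(-90, 91, 10)]
```

Plotting the resulting grid recovers a coarse picture of where the model believes land is, which is the artifact the episode discusses.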
Aug 12, 2025 • 9min

“Re: Recent Anthropic Safety Research” by Eliezer Yudkowsky

Eliezer Yudkowsky, an AI researcher and decision theorist, shares his candid insights on recent safety research from Anthropic. He expresses skepticism about the actual significance of their findings, arguing that they don’t change his views on the dangers posed by superintelligent machines. Yudkowsky discusses the complex interactions between AI models and human responses, urging the need for early recognition of safety issues while critiquing corporate influences in research. It's a thought-provoking conversation focused on the realities of AI risks.
Aug 9, 2025 • 11min

“How anticipatory cover-ups go wrong” by Kaj_Sotala

Kaj Sotala, author and insightful thinker, explores the complex dynamics of communication and mistrust, particularly during the COVID vaccine rollout. He discusses how anticipatory cover-ups, aimed at preventing misinformation, often backfire and breed greater public distrust. Through real-world examples, he highlights the dire consequences of withholding information and stresses the importance of transparency, unpacking the tension between protecting relationships and the damage that secrecy does to mutual understanding.
Aug 8, 2025 • 10min

“SB-1047 Documentary: The Post-Mortem” by Michaël Trazzi

Michaël Trazzi, the producer of the SB-1047 documentary, shares insights from his extensive production journey. He reveals that what was planned as a 6-week project ballooned to 27 weeks at a cost of $157k. Trazzi discusses the critical lessons learned about budgeting, staffing, and viewer engagement. He also reflects on the challenges faced during filming and editing, highlighting unique experiences that shaped the final product and offering valuable advice for future documentary creators. Tune in for a behind-the-scenes look at documentary production!
Aug 8, 2025 • 48min

“METR’s Evaluation of GPT-5” by GradientDissenter

GradientDissenter, who works at METR and played a key role in evaluating GPT-5, discusses the safety analysis conducted on the model before its launch. The evaluation examines several threat models and presents improved methodologies for gauging AI risk. They explore potential catastrophic risks, the importance of reliability in sensitive contexts, and how GPT-5's advances still come with challenges, emphasizing a robust approach to AI safety amid rapidly evolving capabilities.
Aug 7, 2025 • 36min

“Emotions Make Sense” by DaystarEld

Explore the intriguing world of emotions as evolutionary adaptations that shape our responses to life. The discussion tackles jealousy, boredom, and even depression, reflecting on their significance in personal growth and survival. Through relatable examples, it redefines negative emotions, demonstrating their potential benefits despite modern challenges. The conversation encourages a thoughtful integration of emotion and reason for better decision-making, urging listeners to embrace all feelings as vital to the human experience.
Aug 6, 2025 • 50min

“The Problem” by Rob Bensinger, tanagrabeast, yams, So8res, Eliezer Yudkowsky, Gretta Duleba

The discussion tackles the existential risk posed by superintelligent AI, emphasizing the potential for human extinction. The authors highlight the difficulty of aligning AI goals with human values given how quickly capabilities are advancing: a superintelligent system may pursue harmful objectives if its goals are not properly specified. They underscore the urgent need for policy reform, arguing that current research does not adequately address the risks of unregulated AI development, and leave listeners contemplating humanity's future in the face of rapidly advancing technology.
Aug 4, 2025 • 9min

“Many prediction markets would be better off as batched auctions” by William Howard

Explore the limitations of prediction markets that rely on continuous trading, where being first in the queue matters and random order arrival adds noise to prices. The discussion advocates batched auctions: orders accumulate over an interval and then clear together at a single uniform price, which could improve accuracy and efficiency while wasting fewer resources on speed. A fresh look at optimizing how we forecast the future!
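For concreteness, here is a minimal sketch (illustrative names and heavy simplifications, not William Howard's actual mechanism) of uniform-price clearing for one batch in a binary prediction market, where YES shares trade at prices in [0, 1]:

```python
# Sketch: collect a batch of limit orders, then clear them all at the single
# price that maximizes executable volume.
from dataclasses import dataclass

@dataclass
class Order:
    side: str     # "buy" or "sell"
    price: float  # limit price in [0, 1]
    size: int     # number of YES shares

def clear_batch(orders: list[Order]) -> tuple[float | None, int]:
    """Return (clearing_price, matched_volume) for one batch.

    At a candidate price p, demand is the total size of buys with limit >= p
    and supply the total size of sells with limit <= p; the batch clears at a
    price maximizing executable volume, with every trade at that one price.
    """
    best_price, best_volume = None, 0
    for p in sorted({o.price for o in orders}):
        demand = sum(o.size for o in orders if o.side == "buy" and o.price >= p)
        supply = sum(o.size for o in orders if o.side == "sell" and o.price <= p)
        volume = min(demand, supply)
        if volume > best_volume:
            best_price, best_volume = p, volume
    return best_price, best_volume

# These four orders clear together regardless of arrival order.
batch = [Order("buy", 0.62, 100), Order("buy", 0.55, 50),
         Order("sell", 0.50, 80), Order("sell", 0.60, 120)]
print(clear_batch(batch))  # -> (0.6, 100)
```

Because every matched trade in a batch executes at the same price, shaving microseconds off order submission buys no advantage, which is the core of the argument for batching.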
Aug 4, 2025 • 5min

“Whence the Inkhaven Residency?” by Ben Pace

Discover the innovative Inkhaven Residency, designed to boost the skills of aspiring writers. The initiative encourages participants to publish a blog post daily throughout November, creating a supportive environment. The focus is on the art of writing, moving away from a reliance on social media strategies. Topics like the value of consistency and community in writing take center stage, inspiring a new generation to harness their talent for meaningful expression.
Aug 1, 2025 • 11min

“I am worried about near-term non-LLM AI developments” by testingthewaters

The discussion highlights urgent risks from AI advances beyond large language models, suggesting that existing safety research may miss critical threats. The author argues that innovations in online, in-sequence learning could enable more human-like AGI, and may be integrated into natural-language models within months. The episode stresses the role of continual learning in these emerging systems, distinguishes them from current LLMs, and advocates a strategic shift toward safer architectures better aligned with how humans learn.
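As a toy illustration of what "online, in-sequence learning" means (my construction, not the post's proposal), here is a model that keeps taking gradient steps on a data stream during use, rather than freezing its weights after pretraining:

```python
# Sketch: the model updates on each element of a stream as it arrives,
# so its weights adapt mid-sequence instead of staying fixed at deployment.
import torch

model = torch.nn.Linear(8, 8)            # stand-in for a sequence model
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = torch.nn.MSELoss()

stream = [(torch.randn(8), torch.randn(8)) for _ in range(100)]
for x, target in stream:                 # data seen once, in order
    loss = loss_fn(model(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()                           # weights change during the sequence
```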
