
Joe Carlsmith

Research analyst at Open Philanthropy and doctoral student in philosophy at the University of Oxford, focusing on risks to humanity's long-term future.

Top 10 podcasts with Joe Carlsmith

Ranked by the Snipd community
198 snips
Aug 22, 2024 • 2h 31min

Joe Carlsmith — Preventing an AI takeover

In this chat, philosopher Joe Carlsmith dives into the intriguing intersection of artificial intelligence and human values. He raises thought-provoking concerns about how we can prevent power imbalances in a tech-driven world. The discussion covers the ethical treatment of AI, comparing it to human upbringing, and raises alarms about losing human agency through automation. With references to thinkers like Nietzsche and C.S. Lewis, Carlsmith advocates for a pluralistic approach to governance amidst evolving technologies, emphasizing the need for careful ethical considerations.
40 snips
May 19, 2023 • 3h 27min

#152 – Joe Carlsmith on navigating serious philosophical confusion

Joe Carlsmith, a Senior Research Analyst at Open Philanthropy, dives into profound philosophical questions about ethics, decision-making, and humanity's future. He discusses mind-bending theories, like the possibility of living in a computer simulation, and critiques traditional ethical frameworks. Carlsmith emphasizes the need for humility in facing uncertainty and the complexities posed by infinity in moral reasoning. He argues for balancing altruism with genuine compassion, while exploring the risks of advanced AI and our cosmic responsibilities.
13 snips
Mar 16, 2024 • 1h 52min

#76 – Joe Carlsmith on Scheming AI

Joe Carlsmith discusses the risks of AI systems being deceptive and misaligned during training, exploring the concept of scheming AI. The podcast covers the distinction between different types of AI models in training, the dangers of scheming behaviors, and the complexities of AI goals and motivations. It also delves into the challenges of detecting scheming AI early on, the importance of managing long-term AI motivations, and the uncertainties surrounding training AI models.
11 snips
Nov 6, 2025 • 32min

“Leaving Open Philanthropy, going to Anthropic” by Joe_Carlsmith

Joe Carlsmith, a senior researcher specializing in AI risks, recently transitioned from Open Philanthropy to Anthropic. He reflects on his impactful tenure at Open Philanthropy, discussing the importance of worldview investigations and AI safety research. Joe shares his aspirations for designing Claude's character at Anthropic and weighs the significance of model-spec design in mitigating existential risks. He addresses the complexities of working within frontier labs, advocating for balancing capability restraint with safety progress, all while navigating potential personal and ethical challenges in his new role.
8 snips
Nov 14, 2025 • 1h 53min

Joe Carlsmith - A Wiser, AI-Powered Civilization is the “Successor” (Worthy Successor, Episode 15)

In this discussion, Joe Carlsmith, a senior advisor formerly at Open Philanthropy and an Oxford Ph.D., dives into the concept of civilization-level stewardship. He posits that AI should not replace humanity but serve as a tool for enhancing our wisdom and philosophical clarity. Joe explains the potential for unheard values beyond our current understanding, critiques the moral implications of evolution, and stresses the importance of regulating AI to prevent existential risks. His insights on AI traits and civilizational goals are thought-provoking and essential for future discourse.
7 snips
Mar 25, 2025 • 34min

“AI for AI safety” by Joe Carlsmith

In this discussion, Joe Carlsmith, an expert on AI safety, delves into the innovative concept of using AI itself to enhance safety in AI development. He outlines critical frameworks for achieving safe superintelligence and emphasizes the importance of feedback loops in balancing the acceleration of AI capabilities with safety measures. Carlsmith tackles common objections to this approach while highlighting the potential sweet spots where AI could significantly benefit alignment efforts. A captivating exploration of the future of AI and its inherent risks!
6 snips
Jul 1, 2024 • 1h 4min

“Loving a world you don’t trust” by Joe Carlsmith

Joe Carlsmith, the author of 'Otherness and control in the age of AGI,' discusses the duality of activity versus receptivity, faces the darkness in the world, and explores themes of humanism and defiance in 'Angels in America'. The podcast touches on deep atheism, embracing responsibility, and trusting in reality despite its potential lack of inherent goodness.
Jun 18, 2024 • 1h 2min

LW - Loving a world you don't trust by Joe Carlsmith

Joe Carlsmith discusses the duality of activity versus receptivity, deep atheism, and control-seeking tendencies in the context of AGI. He explores themes of defiance, resilience, and reverence towards imperfection, highlighting the challenges of AI advancement and the importance of cultivating peace and human values in a changing world.
Mar 25, 2024 • 43min

LW - On attunement by Joe Carlsmith

Join Joe Carlsmith, author of the essay 'On attunement,' as he explores the concept of 'green' in a philosophical context, contrasting scientific knowledge with intuition. Delve into meta-ethical anti-realism, attunement in literature, the transformative power of music, and technology's impact on human connection. Reflect on humanity's evolution and moral change through insightful philosophical discussions.
Aug 11, 2021 • 1h 17min

Utopia on earth and morality without guilt (with Joe Carlsmith)

In this engaging discussion, Joe Carlsmith, a research analyst at Open Philanthropy and a doctoral student at Oxford, explores the elusive concept of utopia. He probes into our aspirations versus the pitfalls of idealism, emphasizing the need for a dynamic understanding of a shared future. Carlsmith delves into the ethics of existence, raising questions about procreation and its moral implications. He contrasts 'wholehearted morality' with guilt-laden moral frameworks, inviting listeners to rethink ethical engagement, non-attachment, and the intricacies of consciousness.