
Joe Carlsmith
Research analyst at Open Philanthropy and doctoral student in philosophy at the University of Oxford, focusing on risks to humanity's long-term future.
Top 5 podcasts with Joe Carlsmith
Ranked by the Snipd community

174 snips
Aug 22, 2024 • 2h 31min
Joe Carlsmith - Otherness and control in the age of AGI
In this chat, philosopher Joe Carlsmith dives into the intriguing intersection of artificial intelligence and human values. He raises thought-provoking concerns about how we can prevent power imbalances in a tech-driven world. The discussion covers the ethical treatment of AI, comparing it to human upbringing, and warns of losing human agency to automation. With references to thinkers like Nietzsche and C.S. Lewis, Carlsmith advocates for a pluralistic approach to governance amid evolving technologies, emphasizing the need for careful ethical consideration.

40 snips
May 19, 2023 • 3h 27min
#152 – Joe Carlsmith on navigating serious philosophical confusion
Joe Carlsmith, a Senior Research Analyst at Open Philanthropy, dives into profound philosophical questions about ethics, decision-making, and humanity's future. He discusses mind-bending theories, like the possibility of living in a computer simulation, and critiques traditional ethical frameworks. Carlsmith emphasizes the need for humility in facing uncertainty and the complexities posed by infinity in moral reasoning. He argues for balancing altruism with genuine compassion, while exploring the risks of advanced AI and our cosmic responsibilities.

17 snips
Jun 22, 2023 • 2h 24min
Joe Carlsmith on How We Change Our Minds About AI Risk
Joe Carlsmith joins the podcast to discuss how we change our minds about AI risk, gut feelings versus abstract models, and what to do if transformative AI is coming soon. You can read more about Joe's work at https://joecarlsmith.com.
Timestamps:
00:00 Predictable updating on AI risk
07:27 Abstract models versus gut feelings
22:06 How Joe began believing in AI risk
29:06 Is AI risk falsifiable?
35:39 Types of skepticisms about AI risk
44:51 Are we fundamentally confused?
53:35 Becoming alienated from ourselves?
1:00:12 What will change people's minds?
1:12:34 Outline of different futures
1:20:43 Humanity losing touch with reality
1:27:14 Can we understand AI sentience?
1:36:31 Distinguishing real from fake sentience
1:39:54 AI doomer epistemology
1:45:23 AI benchmarks versus real-world AI
1:53:00 AI improving AI research and development
2:01:08 What if transformative AI comes soon?
2:07:21 AI safety if transformative AI comes soon
2:16:52 AI systems interpreting other AI systems
2:19:38 Philosophy and transformative AI
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/

13 snips
Mar 16, 2024 • 1h 52min
#76 – Joe Carlsmith on Scheming AI
Joe Carlsmith discusses the risk that AI systems become deceptive and misaligned during training, exploring the concept of scheming AI. The conversation covers the distinctions between different types of AI models in training, the dangers of scheming behavior, and the complexities of AI goals and motivations. It also delves into the challenges of detecting scheming early on, the importance of managing AI systems' long-term motivations, and the uncertainties that surround training AI models.

7 snips
Mar 25, 2025 • 34min
“AI for AI safety” by Joe Carlsmith
In this discussion, Joe Carlsmith, an expert on AI safety, delves into the innovative concept of using AI itself to enhance safety in AI development. He outlines critical frameworks for achieving safe superintelligence and emphasizes the importance of feedback loops in balancing the acceleration of AI capabilities with safety measures. Carlsmith tackles common objections to this approach while highlighting the potential sweet spots where AI could significantly benefit alignment efforts. A captivating exploration of the future of AI and its inherent risks!