ForeCast

Forethought
Aug 17, 2025 • 2h 14min

How Can We Prevent AI-Enabled Coups? (with Tom Davidson)

Tom Davidson, a Senior Research Fellow at Forethought, dives into the urgent topic of AI-enabled coups. He discusses how AI could be used to illegitimately consolidate power, emphasizing the need for robust checks and balances. The conversation highlights the necessity of ethical oversight in military R&D and the importance of stakeholder collaboration. Davidson warns about the potential for manipulation within AI systems and advocates for clear guidelines to protect democratic values. Drawing on historical precedents, he stresses the need for vigilance in governance.
Aug 4, 2025 • 2h 54min

Should We Aim for Flourishing Over Mere Survival? (with Will MacAskill)

Will MacAskill, a philosopher and co-founder of 80,000 Hours, discusses his research series, ‘Better Futures.’ He delves into why, in the face of existential risks, we should aim not merely to survive but to thrive. Topics include the interplay of human flourishing and ethical governance, the pursuit of an ideal future, and the complexities surrounding moral catastrophes. MacAskill emphasizes the need for collective action and philosophical reflection as we navigate the uncertain dynamics of AI and other global challenges in shaping a more hopeful tomorrow.
Aug 4, 2025 • 1h 7min

[AI Narration] How quick and big would a software intelligence explosion be?

This is an AI narration of "How quick and big would a software intelligence explosion be?" by Tom Davidson, Tom Houlden. The article was first released on 4th August 2025. You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.
Aug 3, 2025 • 33min

[AI Narration] The Basic Case for Better Futures: SF Model Analysis

This is an AI narration of "The Basic Case for Better Futures: SF Model Analysis" by William MacAskill, Philip Trammell. The article was first released on 3rd August 2025. You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.
Aug 3, 2025 • 41min

[AI Narration] Persistent Path-Dependence

This is an AI narration of "Persistent Path-Dependence" by William MacAskill. The article was first released on 3rd August 2025. You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.
Aug 3, 2025 • 11min

[AI Narration] Introducing Better Futures

This is an AI narration of "Introducing Better Futures" by William MacAskill. The article was first released on 3rd August 2025. You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.
Aug 3, 2025 • 1h 9min

[AI Narration] No Easy Eutopia

This is an AI narration of "No Easy Eutopia" by Fin Moorhouse, William MacAskill. The article was first released on 3rd August 2025. You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.
Aug 3, 2025 • 1h

[AI Narration] How to Make the Future Better

This is an AI narration of "How to Make the Future Better" by William MacAskill. The article was first released on 3rd August 2025. You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.
Aug 3, 2025 • 1h 17min

[AI Narration] Convergence and Compromise

This is an AI narration of "Convergence and Compromise" by Fin Moorhouse, William MacAskill. The article was first released on 3rd August 2025. You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.
Jul 9, 2025 • 1h 54min

AI Rights for Human Safety (with Peter Salib and Simon Goldstein)

Join Peter Salib, an expert in law and AI risk, and Simon Goldstein, a philosopher focusing on AI safety, as they explore the case for AI rights. They discuss how establishing legal frameworks could prevent conflicts between humans and artificial intelligence. The conversation dives into the ethical implications of AI ownership and rights, touching on property law and the role of rights in fostering cooperation. They also examine the potential for AI to enhance human safety and welfare, raising critical questions about future governance and societal impact.