

ForeCast
Forethought
ForeCast is a podcast from Forethought, where we hear from authors about their new research.
Episodes

Aug 28, 2025 • 1h 24min
Should AI Agents Obey Human Laws? (with Cullen O'Keefe)
Cullen O'Keefe, Director of Research at the Institute for Law & AI, dives deep into the complexities of law-following AI. He discusses how AI agents can navigate legal frameworks and the ethical dilemmas of using them as 'henchmen' for human interests. O'Keefe examines the future of AI in automating tasks, the vital need for accountability, and the challenges in aligning AI behavior with human values. He emphasizes the importance of updating regulatory structures to manage AI's potential misuse while safeguarding ethical standards.

Aug 26, 2025 • 2h 10min
[Article] AI-Enabled Coups: How a Small Group Could Use AI to Seize Power
Explore how advanced AI could empower a small group to execute coups with alarming efficiency. The discussion highlights the risks of loyalty manipulation and power concentration that could disrupt democracy. Scenarios are laid out where exclusive access to AI leads to unprecedented military and societal upheaval. The conversation also critiques existing governance frameworks, advocating for new safeguards to protect democratic systems from emerging AI threats.

Aug 17, 2025 • 2h 14min
How Can We Prevent AI-Enabled Coups? (with Tom Davidson)
Tom Davidson, a Senior Research Fellow at Forethought, dives into the urgent topic of AI-enabled coups. He discusses the risks posed by AI in consolidating power illegitimately, emphasizing the need for robust checks and balances. The conversation highlights the necessity of ethical oversight in military R&D and the importance of stakeholder collaboration. Davidson warns about potential manipulation within AI systems and advocates for clear guidelines to protect democratic values. With insights from historical precedents, he stresses the need for vigilance in governance.

Aug 4, 2025 • 2h 54min
Should We Aim for Flourishing Over Mere Survival? (with Will MacAskill)
Will MacAskill, a philosopher and co-founder of 80,000 Hours, discusses his research series, ‘Better Futures.’ He delves into the importance of transitioning from mere survival to thriving in the face of existential risks. Topics include the interplay of human flourishing and ethical governance, the pursuit of an ideal future, and the complexities surrounding moral catastrophes. MacAskill emphasizes the need for collective action and philosophical reflection as we navigate the uncertain dynamics of AI and global challenges, shaping a more hopeful tomorrow.

Jul 9, 2025 • 1h 54min
AI Rights for Human Safety (with Peter Salib and Simon Goldstein)
Join Peter Salib, an expert in law and AI risk, and Simon Goldstein, a philosopher focusing on AI safety, as they explore the case for AI rights. They discuss how establishing legal frameworks could prevent conflicts between humans and artificial intelligence. The conversation covers the ethical implications of AI ownership, touching on property law and the role of rights in fostering cooperation. They also examine how granting AI rights might enhance human safety and welfare, raising critical questions about future governance and societal impact.

Jun 16, 2025 • 2h 55min
Inference Scaling, AI Agents, and Moratoria (with Toby Ord)
Toby Ord, a Senior Researcher at Oxford University focused on existential risks, dives into the intriguing concept of the ‘scaling paradox’ in AI. He discusses how scaling challenges affect AI performance, particularly the diminishing returns of deep learning models. The conversation also touches on the ethical implications of AI governance and the importance of moratoria on advanced technologies. Moreover, Toby examines the shifting landscape of AI's capabilities and the potential risks for humanity, emphasizing the need for a balance between innovation and safety.

Apr 16, 2025 • 1h 15min
AI Tools for Existential Security (with Lizka Vaintrob)
Lizka Vaintrob discusses ‘AI Tools for Existential Security’, co-authored with Owen Cotton-Barratt.
To see all our published research, visit forethought.org/research.

Mar 26, 2025 • 1h 20min
Will AI R&D Automation Cause a Software Intelligence Explosion? (with Tom Davidson)
Tom Davidson, co-author of the influential paper on AI R&D automation, delves into the potential for a software intelligence explosion. He discusses how automated AI research could lead to a runaway feedback loop, surpassing human capabilities. The conversation covers ASARA (AI Systems for AI R&D Automation), the idea that AI might autonomously enhance research and revolutionize the field. Davidson also weighs the pace of innovation against diminishing returns, while emphasizing the need for better benchmarks and governance to manage these rapid advancements.

Mar 18, 2025 • 1h 51min
Preparing for the Intelligence Explosion (with Will MacAskill)
Will MacAskill, co-author of ‘Preparing for the Intelligence Explosion’ and AI safety expert, dives into the thrilling world of artificial general intelligence. He discusses the rapid advancements in AI and potential challenges for humanity as technology outpaces decision-making processes. The conversation touches on the societal implications of an intelligence explosion, including geopolitical tensions and existential risks. MacAskill also explores ethical considerations for AI governance and the impact of these technologies on our future, including in space exploration.

Mar 14, 2025 • 1h 7min
Intelsat as a Model for International AGI Governance (with Rose Hadshar)
Join Rose Hadshar, co-author of ‘Intelsat as a Model for International AGI Governance’, as she delves into groundbreaking strategies for managing artificial general intelligence. She unveils how the Intelsat model can inform multilateral governance, addressing power concentration in tech. The conversation spans the evolution of satellite communication, historical parallels with AGI, and the dynamics of negotiation in international frameworks. Rose emphasizes the need for diverse representation to mitigate inequality and suggests practical lessons from satellite governance to shape the future of AGI.