ForeCast

Forethought
Aug 28, 2025 • 1h 24min

Should AI Agents Obey Human Laws? (with Cullen O'Keefe)

Cullen O'Keefe, Director of Research at the Institute for Law & AI, dives deep into the complexities of law-following AI. He discusses how AI agents can navigate legal frameworks and the ethical dilemmas of using them as 'henchmen' for human interests. O'Keefe examines the future of AI in automating tasks, the vital need for accountability, and the challenges in aligning AI behavior with human values. He emphasizes the importance of updating regulatory structures to manage AI's potential misuse while safeguarding ethical standards.
Aug 26, 2025 • 2h 10min

[Article] AI-Enabled Coups: How a Small Group Could Use AI to Seize Power

Explore how advanced AI could empower a small group to execute coups with alarming efficiency. The discussion highlights the risks of loyalty manipulation and power concentration that could disrupt democracy. Scenarios are laid out where exclusive access to AI leads to unprecedented military and societal upheaval. The conversation also critiques existing governance frameworks, advocating for new safeguards to protect democratic systems from emerging AI threats.
Aug 20, 2025 • 30min

[AI Narration] Could One Country Outgrow the Rest of the World After AGI?

This is an AI narration of "Could One Country Outgrow the Rest of the World After AGI? Economic Analysis of Superexponential Growth" by Tom Davidson. The article was first released on 20th August 2025. You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.
Aug 17, 2025 • 2h 14min

How Can We Prevent AI-Enabled Coups? (with Tom Davidson)

Tom Davidson, a Senior Research Fellow at Forethought, dives into the urgent topic of AI-enabled coups. He discusses the risks posed by AI in consolidating power illegitimately, emphasizing the need for robust checks and balances. The conversation highlights the necessity of ethical oversight in military R&D and the importance of stakeholder collaboration. Davidson warns about potential manipulation within AI systems and advocates for clear guidelines to protect democratic values. With insights from historical precedents, he stresses the need for vigilance in governance.
Aug 4, 2025 • 2h 54min

Should We Aim for Flourishing Over Mere Survival? (with Will MacAskill)

Will MacAskill, a philosopher and co-founder of 80,000 Hours, discusses his research series 'Better Futures.' He delves into the importance of moving beyond mere survival toward thriving in the face of existential risks. Topics include the interplay of human flourishing and ethical governance, the pursuit of an ideal future, and the complexities surrounding moral catastrophes. MacAskill emphasizes the need for collective action and philosophical reflection as we navigate the uncertain dynamics of AI and global challenges, shaping a more hopeful tomorrow.
Aug 4, 2025 • 1h 7min

[AI Narration] How quick and big would a software intelligence explosion be?

Delve into the fascinating concept of software intelligence explosions. Discover how advancements in AI could compress years of progress into mere months. Understand the critical parameters driving this acceleration and the significant uncertainties involved. The hosts explore the potential scale of AI researchers and what that could mean for future innovations. They also address the limitations of current models and the importance of cautious forecasting in this rapidly evolving field.
Aug 3, 2025 • 41min

[AI Narration] Persistent Path-Dependence

This is an AI narration of "Persistent Path-Dependence" by William MacAskill. The article was first released on 3rd August 2025. You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.
Aug 3, 2025 • 1h

[AI Narration] How to Make the Future Better

This is an AI narration of "How to Make the Future Better" by William MacAskill. The article was first released on 3rd August 2025. You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.
Aug 3, 2025 • 33min

[AI Narration] The Basic Case for Better Futures: SF Model Analysis

This is an AI narration of "The Basic Case for Better Futures: SF Model Analysis" by William MacAskill, Philip Trammell. The article was first released on 3rd August 2025. You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.
Aug 3, 2025 • 1h 17min

[AI Narration] Convergence and Compromise

This is an AI narration of "Convergence and Compromise" by Fin Moorhouse, William MacAskill. The article was first released on 3rd August 2025. You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.