

ForeCast
Forethought
ForeCast is a podcast from Forethought, where we hear from authors about their new research.
Episodes

Aug 3, 2025 • 33min
[AI Narration] The Basic Case for Better Futures: SF Model Analysis
This is an AI narration of "The Basic Case for Better Futures: SF Model Analysis" by William MacAskill, Philip Trammell. The article was first released on 3rd August 2025.
You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.

Aug 3, 2025 • 11min
[AI Narration] Introducing Better Futures
This is an AI narration of "Introducing Better Futures" by William MacAskill. The article was first released on 3rd August 2025.
You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.

Jul 9, 2025 • 1h 54min
AI Rights for Human Safety (with Peter Salib and Simon Goldstein)
Join Peter Salib, an expert in law and AI risk, and Simon Goldstein, a philosopher focusing on AI safety, as they explore the vital topic of AI rights. They discuss how establishing legal frameworks can prevent conflicts between humans and artificial intelligence. The conversation dives into the ethical implications of AI ownership and rights, touching on property laws and the importance of rights in fostering cooperation. They also examine the potential for AI to enhance human safety and welfare, raising critical questions about future governance and societal impact.

Jun 16, 2025 • 2h 55min
Inference Scaling, AI Agents, and Moratoria (with Toby Ord)
Toby Ord, a Senior Researcher at Oxford University focused on existential risks, dives into the intriguing concept of the ‘scaling paradox’ in AI. He discusses how scaling challenges affect AI performance, particularly the diminishing returns of deep learning models. The conversation also touches on the ethical implications of AI governance and the importance of moratoria on advanced technologies. Moreover, Toby examines the shifting landscape of AI's capabilities and the potential risks for humanity, emphasizing the need for a balance between innovation and safety.

May 21, 2025 • 28min
[AI Narration] The Industrial Explosion
This is an AI narration of "The Industrial Explosion" by Tom Davidson, Rose Hadshar. The article was first released on 21th May 2025.
You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.

Apr 16, 2025 • 1h 15min
AI Tools for Existential Security (with Lizka Vaintrob)
Lizka Vaintrob discusses ‘AI Tools for Existential Security’, co-authored with Owen Cotton-Barratt.
To see all our published research, visit forethought.org/research.

Apr 4, 2025 • 24min
[AI Narration] Will Compute Bottlenecks Prevent a Software Intelligence Explosion?
Tom Davidson, a research analyst, dives into the intriguing concept of a software intelligence explosion and the potential hindrances posed by compute bottlenecks. He explains how AI could improve exponentially without the need for additional hardware. Davidson tackles objections regarding empirical machine learning experiments while critiquing economic models that predict strict compute limitations. Finally, he suggests alternative pathways for achieving superintelligence, emphasizing the dynamic adaptability of production methods to circumvent these bottlenecks.

Apr 2, 2025 • 38min
[AI Narration] The AI Adoption Gap: Preparing the US Government for Advanced AI
This is an AI narration of "The AI Adoption Gap: Preparing the US Government for Advanced AI" by Lizka Vaintrob. The article was first released on 2nd April 2025.
You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.

Mar 26, 2025 • 1h 20min
Will AI R&D Automation Cause a Software Intelligence Explosion? (with Tom Davidson)
Tom Davidson, co-author of the influential paper on AI R&D automation, delves into the potential for a software intelligence explosion. He discusses how automated AI research could lead to a runaway feedback loop, surpassing human capabilities. The conversation covers the concept of ASARA (AI Systems for AI R&D Automation), suggesting AI might autonomously enhance research and revolutionize the field. Davidson also highlights the balance between innovation pace and diminishing returns, while emphasizing the need for better benchmarks and governance to manage these rapid advancements.

Mar 26, 2025 • 17min
[AI Narration] Will the Need to Retrain AI Models from Scratch Block a Software Intelligence Explosion?
This is an AI narration of "Will the Need to Retrain AI Models from Scratch Block a Software Intelligence Explosion?" by Tom Davidson. The article was first released on 26th March 2025.
You can read more of our research at forethought.org/research. Thank you to Type III audio for providing these automated narrations.


