

AI Safety Fundamentals
BlueDot Impact
Listen to resources from the AI Safety Fundamentals courses!
https://aisafetyfundamentals.com/
Episodes

Sep 12, 2025 • 8min
AI Is Reviving Fears Around Bioterrorism. What’s the Real Risk?
By Kyle Hiebert
The global spread of large language models is heightening concerns that extremists could leverage AI to develop or deploy biological weapons. While some studies suggest chatbots only marginally improve bioterror capabilities compared to internet searches, other assessments show rapid year-on-year gains in AI systems’ ability to advise on acquiring and formulating deadly agents. Policymakers now face an urgent question: how real and imminent is the threat of AI-enabled bioterrorism?
Source: https://www.cigionline.org/articles/ai-is-reviving-fears-around-bioterrorism-whats-the-real-risk/
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.

Sep 12, 2025 • 16min
AI and the Evolution of Biological National Security Risks
By Bill Drexel and Caleb Withers
This report considers how rapid AI advancements could reshape biosecurity risks, from bioterrorism to engineered superviruses, and assesses which interventions are needed today. It situates these risks in the history of American biosecurity and offers recommendations for policymakers to curb catastrophic threats.
Source: https://www.cnas.org/publications/reports/ai-and-the-evolution-of-biological-national-security-risks
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.

Sep 9, 2025 • 39min
The Most Important Time in History Is Now
By Tomas Pueyo
This blog post traces AI's rapid leap from high school to PhD-level intelligence in just two years, examines whether physical bottlenecks like computing power can slow this acceleration, and argues that recent efficiency breakthroughs suggest we're approaching an intelligence explosion.
Source: https://unchartedterritories.tomaspueyo.com/p/the-most-important-time-in-history-agi-asi
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.

Sep 9, 2025 • 22min
Why Do People Disagree About When Powerful AI Will Arrive?
The podcast dives into the contentious debate over when artificial general intelligence (AGI) might emerge. Experts weigh in with conflicting timelines: some predict near-term breakthroughs, while others expect slower progress because of unresolved technical challenges. Discussions highlight the transformative effects AGI could have, ranging from radical abundance to existential risks. With rapid advances in AI capabilities, the conversation underscores the importance of preparing for both near-term and long-term scenarios. It’s a thought-provoking exploration of the future of intelligence!

Sep 9, 2025 • 5min
Governance of Superintelligence
By Sam Altman, Greg Brockman, Ilya Sutskever
OpenAI's leadership outline how humanity might govern superintelligence, proposing international oversight with inspection powers similar to nuclear regulation. They argue the AI systems arriving this decade will be "more powerful than any technology yet created" and their control cannot be left to individual companies alone.
Source: https://openai.com/index/governance-of-superintelligence/
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.

Sep 9, 2025 • 25min
Scaling: The State of Play in AI
By Ethan Mollick
This post explains the "scaling laws" that drive rapid AI progress: when you make AI models bigger and train them with more computing power, they get smarter at most tasks. The piece also introduces a second scaling law, where AI performance improves by spending more time "thinking" before responding.
Source: https://www.oneusefulthing.org/p/scaling-the-state-of-play-in-ai
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
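As a rough illustration of the first scaling law described above (using a commonly cited form from the scaling-laws literature, not notation from Mollick's post), pretraining loss is often modelled as a power law in model parameters $N$ and training tokens $D$:

$$L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

Here $E$ is the irreducible loss and $A$, $B$, $\alpha$, $\beta$ are empirically fitted constants; driving the loss down by increasing $N$ and $D$ (and the compute to train on them) corresponds to the "smarter at most tasks" behaviour the post describes. The second, inference-time scaling law has no single canonical formula.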

Sep 9, 2025 • 15min
Measuring AI Ability to Complete Long Tasks
By Thomas Kwa et al.
We propose measuring AI performance in terms of the length of tasks AI agents can complete. We show that this metric has been consistently exponentially increasing over the past 6 years, with a doubling time of around 7 months. Extrapolating this trend predicts that, in under a decade, we will see AI agents that can independently complete a large fraction of software tasks that currently take humans days or weeks.
Source: https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
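A minimal sketch of the extrapolation in this abstract, assuming the stated 7-month doubling time holds indefinitely; the 1-hour starting horizon and the pure-exponential model are illustrative placeholders, not figures from the paper:

```python
# Hypothetical illustration (not METR's code): task-length horizon under a
# fixed 7-month doubling time, starting from a placeholder 1-hour horizon.

def task_horizon(months_from_now: float,
                 current_horizon_hours: float = 1.0,
                 doubling_time_months: float = 7.0) -> float:
    """Task length (hours) an agent can complete, assuming pure exponential growth."""
    return current_horizon_hours * 2 ** (months_from_now / doubling_time_months)

for years in (1, 2, 5, 10):
    hours = task_horizon(12 * years)
    print(f"{years:>2} years: ~{hours:,.0f} hours (~{hours / 40:,.1f} 40-hour work weeks)")
```

Under these assumptions the horizon roughly triples each year, which is why a decade of the trend moves agents from hour-long tasks to tasks that would take humans weeks or more.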

Sep 8, 2025 • 49min
The AI Revolution: The Road to Superintelligence
By Tim Urban
Urban uses historical analogies to show why AI progress might accelerate much faster than we expect, and how AI systems could rapidly self-improve from human-level to superintelligent capabilities.
Source: https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.

Sep 8, 2025 • 10min
"Long" Timelines to Advanced AI Have Gotten Crazy Short
By Helen Toner
Toner, a former OpenAI board member, reveals how the AI timeline debate has compressed: even conservative experts who once dismissed advanced AI concerns now predict human-level systems within decades. Rapid AI progress has shifted from a fringe prediction to mainstream expert consensus.
Source: https://helentoner.substack.com/p/long-timelines-to-advanced-ai-have
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.

Sep 3, 2025 • 17min
In Search of a Dynamist Vision for Safe Superhuman AI
By Helen Toner
This essay describes AI safety policies that rely on centralised control (surveillance, fewer AI projects, licensing regimes) as "stasist" approaches that sacrifice innovation for stability. Toner argues we need "dynamist" solutions to the risks from AI that allow for decentralised experimentation, creativity and risk-taking.
Source: https://helentoner.substack.com/p/dynamism-vs-stasis
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.


