AI Safety Fundamentals

BlueDot Impact
Sep 9, 2025 • 39min

The Most Important Time in History Is Now

By Tomas Pueyo. This blog post traces AI's rapid leap from high school to PhD-level intelligence in just two years, examines whether physical bottlenecks like computing power can slow this acceleration, and argues that recent efficiency breakthroughs suggest we're approaching an intelligence explosion.
Source: https://unchartedterritories.tomaspueyo.com/p/the-most-important-time-in-history-agi-asi
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
Sep 9, 2025 • 22min

Why Do People Disagree About When Powerful AI Will Arrive?

By Sarah Hastings-Woodhouse. Most experts agree that AGI is possible. They also agree that it will have transformative consequences. There is less consensus about what these consequences will be. Some believe AGI will usher in an age of radical abundance. Others believe it will likely lead to human extinction. One thing we can be sure of is that a post-AGI world would look very different to the one we live in today. So, is AGI just around the corner? Or are there still hard problems in front of us that will take decades to crack, despite the speed of recent progress? This is a subject of live debate. Ask various groups when they think AGI will arrive and you'll get very different answers, ranging from just a couple of years to more than two decades. Why is this? We've tried to pin down some core disagreements.
Source: https://bluedot.org/blog/agi-timelines
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
Sep 9, 2025 • 5min

Governance of Superintelligence

By Sam Altman, Greg Brockman, Ilya Sutskever. OpenAI's leadership outline how humanity might govern superintelligence, proposing international oversight with inspection powers similar to nuclear regulation. They argue the AI systems arriving this decade will be "more powerful than any technology yet created" and their control cannot be left to individual companies alone.
Source: https://openai.com/index/governance-of-superintelligence/
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
Sep 9, 2025 • 25min

Scaling: The State of Play in AI

By Ethan Mollick. This post explains the "scaling laws" that drive rapid AI progress: when you make AI models bigger and train them with more computing power, they get smarter at most tasks. The piece also introduces a second scaling law, where AI performance improves by spending more time "thinking" before responding.
Source: https://www.oneusefulthing.org/p/scaling-the-state-of-play-in-ai
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
Sep 9, 2025 • 15min

Measuring AI Ability to Complete Long Tasks

By Thomas Kwa et al. We propose measuring AI performance in terms of the length of tasks AI agents can complete. We show that this metric has been consistently exponentially increasing over the past 6 years, with a doubling time of around 7 months. Extrapolating this trend predicts that, in under a decade, we will see AI agents that can independently complete a large fraction of software tasks that currently take humans days or weeks.
Source: https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
Sep 8, 2025 • 49min

The AI Revolution: The Road to Superintelligence

By Tim Urban. Tim Urban uses historical analogies to show why AI progress might accelerate much faster than we expect, and how AI systems could rapidly self-improve from human-level to superintelligent capabilities.
Source: https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
Sep 8, 2025 • 10min

"Long" Timelines to Advanced AI Have Gotten Crazy Short

By Helen Toner. Helen Toner, former OpenAI board member, reveals how the AI timeline debate has compressed: even conservative experts who once dismissed advanced AI concerns now predict human-level systems within decades. Rapid AI progress has shifted from a fringe prediction to mainstream expert consensus.
Source: https://helentoner.substack.com/p/long-timelines-to-advanced-ai-have
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
Sep 3, 2025 • 17min

In Search of a Dynamist Vision for Safe Superhuman AI

By Helen Toner. This essay describes AI safety policies that rely on centralised control (surveillance, fewer AI projects, licensing regimes) as "stasist" approaches that sacrifice innovation for stability. Toner argues we need "dynamist" solutions to the risks from AI that allow for decentralised experimentation, creativity and risk-taking.
Source: https://helentoner.substack.com/p/dynamism-vs-stasis
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
Sep 3, 2025 • 17min

It’s Practically Impossible to Run a Big AI Company Ethically

By Sigal Samuel (Vox Future Perfect). Even "safety-first" AI companies like Anthropic face market pressures that can override ethical commitments. This article examines the constraints facing AI companies, and explains why voluntary corporate governance can't solve coordination problems alone.
Source: https://archive.ph/
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
Sep 3, 2025 • 18min

Seeking Stability in the Competition for AI Advantage

By Iskander Rehman, Karl P. Mueller, Michael J. Mazarr. This RAND article describes some of the international dynamics driving the race to AGI between the US and China, and analyses whether nuclear deterrence logic applies to this race.
Source: https://www.rand.org/pubs/commentary/2025/03/seeking-stability-in-the-competition-for-ai-advantage.html
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.