AI Safety Fundamentals

BlueDot Impact
Sep 29, 2025 • 15min

AI and Leviathan: Part I

By Samuel Hammond
Source: https://www.secondbest.ca/p/ai-and-leviathan-part-i
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
Sep 19, 2025 • 43min

d/acc: One Year Later

Vitalik Buterin explores the balance of decentralized technologies to distribute AI power. He highlights advances like verifiable vaccines and air quality sensors that bolster community defense. The discussion includes the importance of cross-field collaboration and critiques centralized safety approaches. Buterin proposes strategies like liability-focused regulation and a global soft pause on compute to mitigate risks. Ultimately, he envisions a pluralistic future where accessible tools empower everyone, steering humanity away from potential AI domination.
Sep 18, 2025 • 20min

A Playbook for Securing AI Model Weights

This discussion dives into the vital need for securing AI model weights to prevent misuse and bolster national security. The hosts outline a five-level security framework to guard against attacks from amateur hackers to state-sponsored operations. With 38 potential attack vectors revealed, they highlight real-world examples and define practical measures for each security level. Urgent priorities for AI laboratories and the importance of comprehensive defenses create a thought-provoking exploration into the future of AI security.
Sep 18, 2025 • 10min

AI Emergency Preparedness: Examining the Federal Government's Ability to Detect and Respond to AI-Related National Security Threats

By Akash Wasil et al.
This paper uses scenario planning to show how governments could prepare for AI emergencies. The authors examine three plausible disasters: losing control of AI, AI model theft, and bioweapon creation. They then expose gaps in current preparedness systems and propose specific government reforms, including embedding auditors inside AI companies and creating emergency response units.
Source: https://arxiv.org/pdf/2407.17347
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
Sep 18, 2025 • 14min

Resilience and Adaptation to Advanced AI

By Jamie Bernardi
Jamie Bernardi argues that we can't rely solely on model safeguards to ensure AI safety. Instead, he proposes "AI resilience": building society's capacity to detect misuse, defend against harmful AI applications, and reduce the damage caused when dangerous AI capabilities spread beyond a government's or company's control.
Source: https://airesilience.substack.com/p/resilience-and-adaptation-to-advanced?utm_source=bluedot-impact
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
Sep 18, 2025 • 32min

The Project: Situational Awareness

By Leopold Aschenbrenner
A former OpenAI researcher argues that private AI companies cannot safely develop superintelligence, because security vulnerabilities and competitive pressures override safety. He argues that a government-led 'AGI Project' is inevitable and necessary, both to prevent adversaries from stealing AI systems and to avoid losing human control over the technology.
Source: https://situational-awareness.ai/the-project/?utm_source=bluedot-impact
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
Sep 18, 2025 • 10min

Introduction to AI Control

Explore the fascinating world of AI control and its crucial distinction from alignment. Discover why controlling AI might be more practical than aligning it, especially when navigating deception risks. Hear about innovative strategies like using trusted models for monitoring and delegating tasks to minimize risks. The conversation touches on the limitations of control and the need for widespread adoption and regulatory measures. With a focus on the potential failure modes and the urgency for long-term solutions, this is a deep dive into AI safety essentials.
Sep 18, 2025 • 21min

Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?

By Yoshua Bengio et al.
This paper argues that building generalist AI agents poses catastrophic risks, from misuse by bad actors to a potential loss of human control. As an alternative, the authors propose "Scientist AI," a non-agentic system designed to explain the world through theory generation and question-answering rather than acting in it. They suggest this path could accelerate scientific progress, including in AI safety, while avoiding the dangers of agency-driven AI.
Source: https://arxiv.org/pdf/2502.15657
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
Sep 18, 2025 • 2h 19min

The Intelligence Curse

By Luke Drago and Rudolf Laine
This section explores how the arrival of AGI could trigger an "intelligence curse," where the automation of all work removes incentives for states and companies to care about ordinary people. It frames the trillion-dollar race toward AGI as not just an economic shift, but a transformation in power dynamics and human relevance.
Source: https://intelligence-curse.ai/?utm_source=bluedot-impact
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
Sep 12, 2025 • 44min

The Intelligence Curse (Sections 1-3)

The podcast dives into the paradox of increasing intelligence and its unintended consequences. It examines the concept of pyramid replacement, where AI dismantles corporate hierarchies, leading to widespread job loss. The hosts discuss how AI differs from past tech, the economic challenges it poses, and the limitations of social safety nets like UBI. There's a critical look at how the shift towards AI favors capital over labor, exacerbating inequality and restricting social mobility. The discussion raises pressing questions about the future of work in an AI-driven world.
