

BlueDot Narrated
BlueDot Impact
Audio versions of the core readings, blog posts, and papers from BlueDot courses.
Episodes

Dec 4, 2025 • 16min
The Biological Weapons Convention: An Introduction
Audio versions of blogs and papers from BlueDot courses.
This resource provides an overview of the Biological Weapons Convention, a disarmament treaty that bans the development, production, acquisition, transfer, stockpiling, and use of biological weapons. Despite 187 states being party to the treaty, substantial challenges remain, including a lack of funding and difficulty verifying compliance.
Original text: https://disarmament.unoda.org/publications/the-biological-weapons-convention/
Author(s): United Nations Office for Disarmament Affairs
A podcast by BlueDot Impact.

Dec 4, 2025 • 17min
Adherence to and Compliance with Arms Control, Nonproliferation, and Disarmament Agreements and Commitments
Audio versions of blogs and papers from BlueDot courses.
The US State Department releases this report each April, assessing states' adherence to arms control, non-proliferation, and disarmament agreements and commitments. We see it as a useful resource: it offers some evidence of modern BW activity and invites you to consider the role of intelligence agencies in assessing BWC compliance. However, we would also encourage reflection on the specific language used and on how, and what, information is presented. The US wrote this report intending to publish it openly, so it is worth considering its broader geopolitical motivations when reading.
Original text: https://www.state.gov/wp-content/uploads/2024/04/2024-Arms-Control-Treaty-Compliance-Report.pdf
Author(s): US Department of State
A podcast by BlueDot Impact.

Sep 29, 2025 • 15min
AI and Leviathan: Part I
Audio versions of blogs and papers from BlueDot courses.
By Samuel Hammond
Source: https://www.secondbest.ca/p/ai-and-leviathan-part-i
A podcast by BlueDot Impact.

Sep 19, 2025 • 43min
d/acc: One Year Later
Vitalik Buterin explores how decentralized, defense-oriented technologies can distribute AI power rather than concentrate it. He highlights advances like verifiable vaccines and air quality sensors that bolster community defense. The discussion covers the importance of cross-field collaboration and critiques centralized approaches to safety. Buterin proposes strategies like liability-focused regulation and a global soft pause on compute to mitigate risks. Ultimately, he envisions a pluralistic future where accessible tools empower everyone, steering humanity away from potential AI domination.

Sep 18, 2025 • 20min
A Playbook for Securing AI Model Weights
This discussion dives into the vital need to secure AI model weights to prevent misuse and bolster national security. It outlines a five-level security framework to guard against attackers ranging from amateur hackers to state-sponsored operations. Drawing on 38 potential attack vectors, it highlights real-world examples and defines practical measures for each security level. Urgent priorities for AI laboratories and the case for comprehensive defenses make this a thought-provoking exploration of the future of AI security.

Sep 18, 2025 • 10min
AI Emergency Preparedness: Examining the Federal Government's Ability to Detect and Respond to AI-Related National Security Threats
Audio versions of blogs and papers from BlueDot courses.
By Akash Wasil et al.
This paper uses scenario planning to show how governments could prepare for AI emergencies. The authors examine three plausible disasters: losing control of AI, AI model theft, and bioweapon creation. They then expose gaps in current preparedness systems and propose specific government reforms, including embedding auditors inside AI companies and creating emergency response units.
Source: https://arxiv.org/pdf/2407.17347
A podcast by BlueDot Impact.

Sep 18, 2025 • 10min
Introduction to AI Control
Explore the fascinating world of AI control and its crucial distinction from alignment. Discover why controlling AI might be more practical than aligning it, especially when navigating deception risks. Hear about innovative strategies like using trusted models for monitoring and delegating tasks to minimize risks. The conversation touches on the limitations of control and the need for widespread adoption and regulatory measures. With a focus on potential failure modes and the urgency of long-term solutions, this is a deep dive into AI safety essentials.

Sep 18, 2025 • 32min
The Project: Situational Awareness
A former OpenAI researcher reveals why private companies can't safely develop superintelligence, citing security flaws and competitive pressures. He suggests a government-led AGI initiative is vital for national security. The discussion draws on precedents like the COVID response and the Manhattan Project to show the necessity of rapid action. The structure of a joint public-private AGI project is proposed, ensuring military-grade AI remains under democratic control. The risks of espionage and the implications for international stability are also examined.

Sep 18, 2025 • 14min
Resilience and Adaptation to Advanced AI
Audio versions of blogs and papers from BlueDot courses.
By Jamie Bernardi
Jamie Bernardi argues that we can't rely solely on model safeguards to ensure AI safety. Instead, he proposes "AI resilience": building society's capacity to detect misuse, defend against harmful AI applications, and reduce the damage caused when dangerous AI capabilities spread beyond a government or company's control.
Source: https://airesilience.substack.com/p/resilience-and-adaptation-to-advanced?utm_source=bluedot-impact
A podcast by BlueDot Impact.

Sep 18, 2025 • 21min
Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?
This discussion highlights the perils of autonomous generalist AI, including the risks of misuse and losing human control. The concept of 'Scientist AI' is proposed as a safer, non-agentic alternative, designed to enhance understanding without taking action. It emphasizes controlled research and aims to accelerate scientific progress while mitigating dangers. The conversation also covers strategies for keeping Scientist AI aligned with fixed objectives and applying the precautionary principle in AI development.


