

AI Safety Fundamentals
BlueDot Impact
Listen to resources from the AI Safety Fundamentals courses! https://aisafetyfundamentals.com/
Episodes

Dec 30, 2024 • 56min
Open-Sourcing Highly Capable Foundation Models: An Evaluation of Risks, Benefits, and Alternative Methods for Pursuing Open-Source Objectives
The discussion tackles the double-edged sword of releasing powerful foundation models. While transparency can fuel innovation, the potential for misuse by malicious actors is alarming. Cyberattacks, the development of biological weapons, and disinformation loom large. The conversation also emphasizes the necessity for responsible AI governance, advocating for structured access and staged releases as safer alternatives to total openness. Legal liabilities and the implications of regulation are scrutinized, highlighting the critical need for thorough risk assessments.

Dec 30, 2024 • 41min
So You Want to be a Policy Entrepreneur?
Discover the dynamic world of policy entrepreneurs and their role in driving innovation. Uncover strategies like problem framing and coalition building that empower them to tackle global challenges, especially climate change. Learn how collaborative networks help share vital knowledge and foster legislative support. Historical examples illustrate the impact of these leaders in areas like California’s stem cell research. Dive into the ongoing fight against violence in conflict, showcasing the critical need for advocacy and transformative policies.

Dec 30, 2024 • 26min
Considerations for Governing Open Foundation Models
Discover the debate surrounding open foundation models and their potential to drive innovation and competition. The discussion highlights the ethical dilemmas of open versus closed models and the need for thoughtful policy design. Examining the risks, such as disinformation, the speakers argue that evidence for these dangers is limited. They advocate for policies that consider the unique characteristics of open models to prevent stifling development while promoting transparency and reducing monopolistic power in AI.

May 22, 2024 • 36min
Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate
The podcast discusses the U.S. Senate's AI policy roadmap, including AI Ready Data, National AI Research Resource, AI safety, and election security. It explores funding and collaboration in AI applications for defense and national security, as well as addressing legal and regulatory gaps in AI systems. Policy recommendations cover export controls, security mechanisms, and collaboration with international partners.

May 20, 2024 • 46min
Societal Adaptation to Advanced AI
Authors Jamie Bernardi and Gabriel discuss their paper on societal adaptation to advanced AI systems, emphasizing the need for adaptive strategies and resilience. Topics include managing AI risks, possible interventions, loss of control to AI decision-makers, and responses to AI threat models.

May 20, 2024 • 40min
The AI Triad and What It Means for National Security Strategy
Ben Buchanan, author of the AI Triad framework, discusses the three inputs powering machine learning: algorithms, data, and compute. The podcast explores how these components shape national security strategy, the differences between machine learning and traditional programming, and applications of machine learning in national security, robotics, and broader AI advancements.

May 13, 2024 • 24min
OECD AI Principles
The episode explores the OECD AI Principles for responsible stewardship of trustworthy AI, along with national policies and international cooperation, including the updates made in 2024. It emphasizes public trust, a stable policy environment, accountability, and global collaboration, with a focus on enhancing transparency, addressing misinformation, and promoting responsible practices in the AI industry.

May 13, 2024 • 9min
The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023
The podcast discusses the Bletchley Declaration from the AI Safety Summit and emphasizes the need for global collaboration to mitigate risks. It explores the potential harm of advanced AI systems and advocates for urgent international cooperation. The importance of international collaboration in AI development is highlighted to address the digital divide and manage frontier AI risks effectively.

May 13, 2024 • 21min
Key facts: UNESCO’s Recommendation on the Ethics of Artificial Intelligence
Exploring UNESCO's global standard on AI ethics, emphasizing human rights protection, transparency, fairness, and human oversight. Discussion on principles like privacy, explainability, fairness, and addressing potential harms in AI deployment. Focus on the environmental impact of AI systems and UNESCO's initiatives for responsible AI governance.

May 13, 2024 • 38min
A pro-innovation approach to AI regulation: government response
The episode explores the UK's pro-innovation approach to AI regulation, accountability challenges in AI, navigating regulatory hurdles, and safety processes for advanced AI systems, with emphasis on innovation, collaboration, and societal impact.


