

AI Safety Newsletter
Center for AI Safety
Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
This podcast also contains narrations of some of our publications.
ABOUT US
The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards.
Learn more at https://safe.ai
Episodes
Mentioned books

Aug 27, 2025 • 10min
AISN #62: Big Tech Launches $100 Million pro-AI Super PAC
Also: Meta's Chatbot Policies Prompt Backlash Amid AI Reorganization; China Reverses Course on Nvidia H20 Purchases. In this edition: Big Tech launches a $100 million pro-AI super PAC; Meta's chatbot policies prompt congressional scrutiny amid the company's AI reorganization; China reverses course on buying Nvidia H20 chips after comments by Secretary of Commerce Howard Lutnick. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. Big Tech Launches $100 Million pro-AI Super PAC: Silicon Valley executives and investors are investing more than $100 million in a new political network to push back against AI regulations, signaling that the industry intends to be a major player in next year's U.S. midterms. The network, called Leading the Future, is backed by a16z and Greg Brockman, is modeled on the crypto-focused super PAC Fairshake, and aims to influence AI [...]
---
Outline:
(00:46) Big Tech Launches $100 Million pro-AI Super PAC
(02:27) Meta's Chatbot Policies Prompt Backlash Amid AI Reorganization
(04:45) China Reverses Course on Nvidia H20 Purchases
(07:21) In Other News
---
First published:
August 27th, 2025
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-62-big-tech
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.

Aug 12, 2025 • 9min
AISN #61: OpenAI Releases GPT-5
The podcast dives into OpenAI's release of GPT-5, emphasizing its innovative dual model architecture. Listeners learn how this new system enhances creative writing and response speed. Although not a revolutionary leap from GPT-4, the discussion highlights the implications for AI policy and industry trends. The conversation also touches on the expectations surrounding advanced AI capabilities and the evolving landscape of machine learning.

Jul 31, 2025 • 16min
AISN #60: The AI Action Plan
The podcast dives into the White House's ambitious AI Action Plan aimed at securing America's dominance in AI. It highlights President Trump's executive orders and the speech outlining strategies for innovation and safety. Listeners will also hear about OpenAI's ChatGPT Agent, which impressed at the International Mathematical Olympiad, showcasing remarkable advancements in AI capabilities. Additionally, the discussion touches on NVIDIA's chip sales and the Pentagon's focus on autonomous systems, raising important questions about AI's impact on society.

Jul 15, 2025 • 9min
AISN #59: EU Publishes General-Purpose AI Code of Practice
Discover the EU's groundbreaking General-Purpose AI Code of Practice, which aims to enhance safety and transparency in AI systems. Learn about the significant restrictions on AI uses, including bans on social scoring and predictive policing. Dive into Meta's ambitious revamp of its superintelligence initiatives and their strategies to attract top talent from competitors. Explore the latest in AI legislation and industry controversies, underscoring the urgent need for effective risk management and governance in the evolving AI landscape.

Jul 3, 2025 • 9min
AISN #58: Senate Removes State AI Regulation Moratorium
The Senate recently lifted a moratorium that would have restricted states from regulating AI, overcoming significant political hurdles. Additionally, a clash among federal judges has emerged regarding whether using copyrighted materials to train AI qualifies as fair use. These developments signal a pivotal moment in the ongoing debate over AI policy and copyright laws.

Jun 17, 2025 • 7min
AISN #57: The RAISE Act
New York is on the brink of passing groundbreaking legislation for frontier AI, the RAISE Act, which could set vital safety standards. Developers will be required to publish safety plans and disclose major incidents. The discussion also touches on legislative challenges, including a potential federal moratorium that could complicate state AI regulations. Meanwhile, major tech companies like Google and Meta are making strides in AI safety, raising questions about the industry's responsibilities amidst a legislative vacuum.

May 28, 2025 • 9min
AISN #56: Google Releases Veo 3
Discover Google's latest breakthrough with Veo 3, an advanced video generation model that’s raising the bar for AI content creation. The discussion also dives into the perils of relying on voluntary governance in AI, highlighted by Anthropic's Claude Opus 4. The podcast balances technological innovation with critical reflections on safety standards, making for a thought-provoking listen.

May 20, 2025 • 9min
AISN #55: Trump Administration Rescinds AI Diffusion Rule, Allows Chip Sales to Gulf States
The Trump administration's recent decision has opened the floodgates for AI chip sales to the UAE and Saudi Arabia, reshaping the global market. There's also a push for new legislation on whistleblower protections and the verification of AI chip locations. Fascinating discussions reveal the complexities presented by advancements in AI technology. Finally, the introduction of an AI safety course highlights the importance of ethical considerations in this rapidly evolving field.

May 13, 2025 • 9min
AISN #54: OpenAI Updates Restructure Plan
OpenAI unveils a new restructure plan, aiming to keep nonprofit control despite past controversies. This shift comes after criticism from former employees and legal pushback from co-founder Elon Musk. The discussion also highlights a global coalition gathering in Singapore, focused on establishing a research agenda for AI safety. International stakeholders are emphasizing the importance of prioritizing safe AI development amidst evolving challenges.

Apr 29, 2025 • 11min
AISN #53: An Open Letter Attempts to Block OpenAI Restructuring
Explore the heated debate surrounding OpenAI's potential restructuring into a for-profit organization, as former employees and experts express their concerns through a compelling open letter. They warn that this shift could undermine the organization's original charitable mission and jeopardize governance safeguards meant to control artificial general intelligence. Additionally, the podcast celebrates the winners of the SafeBench competition, highlighting innovative benchmarks that enhance AI safety and accountability.