

AI Safety Newsletter
Center for AI Safety
Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
This podcast also contains narrations of some of our publications.
ABOUT US
The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards.
Learn more at https://safe.ai
Episodes

Oct 28, 2024 • 15min
AISN #43: White House Issues First National Security Memo on AI
The podcast dives into the White House's first National Security Memorandum on AI, emphasizing its significance for AI governance. It warns about international competitors leveraging espionage to gain an edge in U.S. AI technologies. The discussion also covers the implications of AI for job displacement, highlighting gender disparities in employment effects. Lastly, AI's growing presence in prestigious arenas like the Nobel Prizes raises questions about its evolving role in society.

Oct 1, 2024 • 13min
AISN #42: Newsom Vetoes SB 1047
Plus, OpenAI's o1, and AI Governance Summary. Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
Newsom Vetoes SB 1047
On Sunday, Governor Newsom vetoed California's Senate Bill 1047 (SB 1047), the most ambitious legislation to date aimed at regulating frontier AI models. The bill, introduced by Senator Scott Wiener and covered in a previous newsletter, would have required AI developers to test frontier models for hazardous capabilities and take steps to mitigate catastrophic risks. (CAIS Action Fund was a co-sponsor of SB 1047.) Newsom states that SB 1047 is not comprehensive enough. In his letter to the California Senate, the governor argued that “SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves [...]
---
Outline:
(00:18) Newsom Vetoes SB 1047
(01:55) OpenAI's o1
(06:44) AI Governance
(10:32) Links
---
First published:
October 1st, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-42-newsom-vetoes
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.

Sep 11, 2024 • 12min
AISN #41: The Next Generation of Compute Scale
Plus, Ranking Models by Susceptibility to Jailbreaking, and Machine Ethics. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
The Next Generation of Compute Scale
AI development is on the cusp of a dramatic expansion in compute scale. Recent developments across multiple fronts—from chip manufacturing to power infrastructure—point to a future where AI models may dwarf today's largest systems. In this story, we examine key developments and their implications for the future of AI compute. xAI and Tesla are building massive AI clusters. Elon Musk's xAI has brought its Memphis supercluster—“Colossus”—online. According to Musk, the cluster has 100k Nvidia H100s, making it the largest supercomputer in the world. Moreover, xAI plans to add 50k H200s in the next few months. For comparison, Meta's Llama 3 was trained on 16k H100s. Meanwhile, Tesla's “Gigafactory Texas” is expanding to house an AI supercluster. Tesla's Gigafactory supercomputer [...]
---
Outline:
(00:18) The Next Generation of Compute Scale
(04:36) Ranking Models by Susceptibility to Jailbreaking
(06:07) Machine Ethics
---
First published:
September 11th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-41-the-next
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.

Aug 21, 2024 • 14min
AISN #40: California AI Legislation
Plus, NVIDIA Delays Chip Production, and Do AI Safety Benchmarks Actually Measure Safety? Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
SB 1047, the Most-Discussed California AI Legislation
California's Senate Bill 1047 has sparked discussion over AI regulation. While state bills often fly under the radar, SB 1047 has garnered attention due to California's unique position in the tech landscape. If passed, SB 1047 would apply to all companies doing business in the state, potentially setting a precedent for AI governance more broadly. This newsletter examines the current state of the bill, which has been amended in response to feedback from various stakeholders. We'll cover recent debates surrounding the bill, support from AI experts, opposition from the tech industry, and public opinion based on polling. The bill mandates safety protocols, testing procedures, and reporting requirements for covered AI models. The bill was [...]
---
Outline:
(00:18) SB 1047, the Most-Discussed California AI Legislation
(04:38) NVIDIA Delays Chip Production
(06:49) Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?
(10:22) Links
---
First published:
August 21st, 2024
Source:
https://newsletter.safe.ai/p/aisn-40-california-ai-legislation
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.

Jul 29, 2024 • 12min
AISN #39: Implications of a Trump Administration for AI Policy
Plus, Safety Engineering Overview. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
Implications of a Trump administration for AI policy
Trump named Ohio Senator J.D. Vance—an AI regulation skeptic—as his pick for vice president. This choice sheds light on the AI policy landscape under a future Trump administration. In this story, we cover: (1) Vance's views on AI policy, (2) views of key players in the administration, such as Trump's party, donors, and allies, and (3) why AI safety should remain bipartisan. Vance has pushed for reducing AI regulations and making AI weights open. At a recent Senate hearing, Vance accused Big Tech companies of overstating risks from AI in order to justify regulations to stifle competition. This led tech policy experts to expect that Vance would favor looser AI regulations. However, Vance has also praised Lina Khan, Chair of the Federal Trade [...]
---
Outline:
(00:18) Implications of a Trump administration for AI policy
(04:57) Safety Engineering
(08:49) Links
---
First published:
July 29th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-39-implications
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.

Jul 9, 2024 • 11min
AISN #38: Supreme Court Decision Could Limit Federal Ability to Regulate AI
Plus, “Circuit Breakers” for AI systems, and updates on China's AI industry. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
Supreme Court Decision Could Limit Federal Ability to Regulate AI
In a recent decision, the Supreme Court overruled the 1984 precedent Chevron v. Natural Resources Defense Council. In this story, we discuss the decision's implications for regulating AI. Chevron allowed agencies to flexibly apply expertise when regulating. The “Chevron doctrine” had required courts to defer to a federal agency's interpretation of a statute when that statute was ambiguous and the agency's interpretation was reasonable. Its elimination curtails federal agencies’ ability to regulate—including, as this article from LawAI explains, their ability to regulate AI. The Chevron doctrine expanded federal agencies’ ability to regulate in at least two ways. First, agencies could draw on their technical expertise to interpret ambiguous statutes [...]
---
First published:
July 9th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-38-supreme-court
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.

Jun 18, 2024 • 11min
AISN #37: US Launches Antitrust Investigations
US Launches Antitrust Investigations
The U.S. Government has launched antitrust investigations into Nvidia, OpenAI, and Microsoft. The U.S. Department of Justice (DOJ) and Federal Trade Commission (FTC) have agreed to investigate potential antitrust violations by the three companies, the New York Times reported. The DOJ will lead the investigation into Nvidia while the FTC will focus on OpenAI and Microsoft. Antitrust investigations are conducted by government agencies to determine whether companies are engaging in anticompetitive practices that may harm consumers and stifle competition. Nvidia investigated for GPU dominance. The New York Times reports that concerns have been raised about Nvidia's dominance in the GPU market, “including how the company's software locks [...]
---
Outline:
(00:10) US Launches Antitrust Investigations
(02:58) Recent Criticisms of OpenAI and Anthropic
(05:40) Situational Awareness
(09:14) Links
---
First published:
June 18th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-37-us-launches
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.

May 30, 2024 • 10min
AISN #36: Voluntary Commitments are Insufficient
Voluntary Commitments are Insufficient
AI companies agree to RSPs in Seoul. Following the second AI Global Summit held in Seoul, the UK and Republic of Korea governments announced that 16 major technology organizations, including Amazon, Google, Meta, Microsoft, OpenAI, and xAI, have agreed to a new set of Frontier AI Safety Commitments. Some commitments from the agreement include:
Assessing risks posed by AI models and systems throughout the AI lifecycle.
Setting thresholds for severe risks, defining when a model or system would pose intolerable risk if not adequately mitigated.
Keeping risks within defined thresholds, such as by modifying system behaviors and implementing robust security controls.
Potentially halting development or deployment if risks cannot be sufficiently mitigated.
These commitments [...]
---
Outline:
(00:03) Voluntary Commitments are Insufficient
(02:45) Senate AI Policy Roadmap
(05:18) Chapter 1: Overview of Catastrophic Risks
(07:56) Links
---
First published:
May 30th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-35-voluntary
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.

May 16, 2024 • 12min
AISN #35: Lobbying on AI Regulation
OpenAI and Google Announce New Multimodal Models
In the current paradigm of AI development, there are long delays between the release of successive models. Progress is largely driven by increases in computing power, and training models with more computing power requires building large new data centers. More than a year after the release of GPT-4, OpenAI has yet to release GPT-4.5 or GPT-5, which would presumably be trained on 10x or 100x more compute than GPT-4, respectively. These models might be released over the next year or two, and could represent large spikes in AI capabilities. But OpenAI did announce a new model last week, called GPT-4o. The “o” stands for “omni,” referring to the fact that the model can use text, images, videos [...]
---
Outline:
(00:03) OpenAI and Google Announce New Multimodal Models
(02:36) The Surge in AI Lobbying
(05:29) How Should Copyright Law Apply to AI Training Data?
(10:10) Links
---
First published:
May 16th, 2024
Source:
https://newsletter.safe.ai/p/ai-safety-newsletter-35-lobbying
---
Want more? Check out our ML Safety Newsletter for technical safety research.
Narrated by TYPE III AUDIO.

May 1, 2024 • 17min
AISN #34: New Military AI Systems
AI labs like OpenAI and Meta have failed to share models with the UK's AI Safety Institute, while Google DeepMind has complied. The episode also covers bipartisan AI policy proposals in the US Senate, discussions on military AI in Israel and the US, a new online course on AI safety from CAIS, and updates on AI regulation, security, consciousness, and debates.