

AISN #57: The RAISE Act
Jun 17, 2025
New York is on the brink of passing groundbreaking frontier AI legislation: the RAISE Act could set vital safety standards, requiring developers to publish safety plans and disclose major incidents. The discussion also covers legislative challenges, including a proposed federal moratorium that could preempt state AI regulation. Meanwhile, major tech companies like Google and Meta are making strides in AI safety, raising questions about the industry's responsibilities amid a legislative vacuum.
AI Snips
New York's Pioneering AI Regulation
- New York may become the first state to regulate frontier AI with the RAISE Act.
- The Act sets guardrails on AI labs to improve safety and accountability.
RAISE Act Safety and Reporting Rules
- Developers must publish a safety plan and update it annually.
- They must report major AI safety incidents to state officials within 72 hours.
Withhold Risky AI Models
- Developers must not release any AI model that poses an unreasonable risk of critical harm.
- Critical harm includes outcomes such as 100 or more deaths or injuries, or at least $1 billion in damages.