The Lawfare Podcast

Lawfare Daily: A Right to Warn: Protecting AI Whistleblowers with Charlie Bullock

Jun 25, 2025
Charlie Bullock, a Senior Research Fellow at the Institute for Law & AI, discusses the newly introduced AI Whistleblower Protection Act. This bipartisan bill aims to shield employees who report AI safety concerns, a response spurred in part by OpenAI’s recent controversies. Bullock highlights how the bill covers disclosures of substantial public safety risks even when no specific law has been broken, and how it protects whistleblowers from restrictive NDAs. The conversation also delves into the challenges and ethical implications of safeguarding whistleblowers in a rapidly evolving tech landscape.
AI Snips
ANECDOTE

OpenAI NDA Controversy Sparks Action

  • OpenAI's 2024 NDA controversy revealed that departing employees faced broad nondisparagement clauses that threatened their vested equity.
  • The controversy sparked an open letter and bipartisan political interest in AI whistleblower protections.
INSIGHT

Filling AI Legal Protection Gaps

  • AI whistleblowers need protection that goes beyond reporting violations of law, because many AI risks are emerging and lightly regulated.
  • Covering reports of "substantial and specific dangers" fills a gap left by current whistleblower laws.
INSIGHT

Balancing Whistleblower Protections

  • The bill uses a "substantial and specific" danger standard to protect whistleblowers while preventing frivolous claims.
  • Some more speculative AI concerns might not meet this threshold but can still be reported anonymously.