Guest Cecilia Kang, technology and policy reporter for the New York Times, discusses President Biden's concerns about AI, the White House's plans for regulation, the limitations of watermarking AI-generated content, the executive order on AI, international coordination on AI safety standards, and the tension between the government's own use of AI and its efforts to regulate it.
The executive order on AI indicates the White House recognizes the national security threat posed by AI and calls on companies to stress-test their systems for security flaws.
The executive order aims to address future risks of AI by setting safety requirements for large language models and generative AI, but it lacks enforceable regulations and offers little substantive remedy for problems like discrimination and bias in AI.
Deep dives
National Security Concerns and Regulation of AI Companies
The podcast discusses how the White House invoked the Defense Production Act, a 1950 law, to regulate AI companies, citing national security concerns. The White House calls on AI companies to stress-test their systems for security flaws and report the results to the administration. Many companies had already committed to such testing through voluntary agreements earlier this year; the executive order turns that commitment into a requirement. Invoking the Defense Production Act signals that the administration sees AI as a significant national security threat and underscores the importance of addressing potential risks.
Future Risks and Watermarking AI Content
The executive order also aims to address future risks of AI, setting safety requirements for large language models and generative AI systems, including ones not yet widely available. The podcast discusses watermarking AI-generated content as a proposed remedy for disinformation and copyright violations. Computer scientists, however, doubt watermarking's effectiveness and argue that it cannot fully solve the complex problem of disinformation. The EO directs NIST to develop standards for watermarking but provides no enforceable regulations on that front.
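To make the skeptics' point concrete, here is a minimal, self-contained sketch of a statistical "green-list" text watermark in the spirit of published academic proposals (e.g., Kirchenbauer et al., 2023), not any actual deployed scheme; the vocabulary, function names, and the crude "paraphrase" step below are invented for illustration. The generator biases each word toward a pseudorandom "green" half of the vocabulary keyed on the previous word, and the detector checks whether that bias is present.

```python
import hashlib
import random

# Toy vocabulary; a real scheme would operate over a model's full token set.
VOCAB = ["the", "a", "model", "output", "text", "safe", "risk", "policy",
         "order", "system", "data", "tool", "rule", "test", "flaw", "sign"]


def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Deterministically split the vocabulary into a 'green' half,
    seeded on the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = rng.sample(VOCAB, len(VOCAB))
    return set(shuffled[: int(len(VOCAB) * fraction)])


def generate(n_tokens: int, rng: random.Random) -> list:
    """'Watermarked generation': always pick the next word from the
    green list keyed on the previous word."""
    tokens = ["the"]
    for _ in range(n_tokens):
        tokens.append(rng.choice(sorted(green_list(tokens[-1]))))
    return tokens


def green_fraction(tokens: list) -> float:
    """Detector: share of words that fall in their predecessor's green
    list. ~0.5 is chance for ordinary text; near 1.0 suggests a watermark."""
    hits = sum(t in green_list(p) for p, t in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)


rng = random.Random(0)
marked = generate(200, rng)
print(f"watermarked:  green fraction = {green_fraction(marked):.2f}")  # ~1.00

# Crude 'paraphrase': swap out every third word for a random one.
# Even this simple edit pulls the statistic back toward chance, which
# is one reason computer scientists doubt watermarking alone can stop
# determined disinformation.
paraphrased = [rng.choice(VOCAB) if i % 3 == 0 else t
               for i, t in enumerate(marked)]
print(f"paraphrased:  green fraction = {green_fraction(paraphrased):.2f}")
```

Running this prints a green fraction near 1.0 for the watermarked text and a noticeably lower one after the paraphrase, illustrating why the EO leans on NIST standards rather than treating watermarking as a complete fix.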
Limited Enforcement and Focus on Future Opportunities
The executive order has little enforceability beyond requiring companies to report their testing results. It emphasizes supporting AI development, streamlining immigration to attract skilled AI workers, and encouraging agencies to procure AI tools. The EO is more about starting a conversation: signaling that the government is prioritizing AI and urging agencies to weigh its risks and benefits. And while the order discusses discrimination and bias, it does not go deep on solving those problems, such as the harms of facial recognition. It touches on many topics but offers little in the way of enforcement or regulatory teeth.
Biden’s executive order on A.I. indicates his administration is taking it seriously. Does it go far enough?
Guest: Cecilia Kang, covering technology and policy for the New York Times.
If you enjoy this show, please consider signing up for Slate Plus. Slate Plus members get benefits like zero ads on any Slate podcast, bonus episodes of shows like Slow Burn and Dear Prudence—and you’ll be supporting the work we do here on What Next TBD. Sign up now at slate.com/whatnextplus to help support our work.