Cecilia Kang of the New York Times discusses Biden's executive order on AI, including the use of the Defense Production Act to regulate AI companies and stress-test AI systems. The conversation also explores watermarking AI content as a way to combat disinformation, the challenges of regulating AI internationally, and the tension between the government's own use of AI and the need for regulation.
The executive order focuses on national security and requires advanced AI companies to stress-test their systems for security flaws, but it lacks substantive enforcement and offers no solutions for current AI-related dangers like deepfakes.
The order also fails to address the discrimination and bias that facial recognition technology can cause, and it is largely unenforceable, focusing on standards and recommendations rather than concrete action.
Deep dives
The White House's Concerns about AI and the Executive Order
The White House issued an executive order expressing concern about the risks of AI and the need for regulation. National security is a key focus, and the administration invoked the Defense Production Act to regulate AI companies. The order requires advanced AI companies to stress-test their systems for security flaws and report the results. Some companies had already voluntarily committed to this; the order codifies it into regulation. Even so, the EO offers little in the way of enforcement or teeth.
Addressing Future Harms and Deepfakes
The executive order looks ahead to potential future harms from AI, such as those posed by large language models and generative AI. It mentions safety requirements for models that surpass a certain computing threshold, but these apply only to future models, not existing ones. The EO does not effectively address current AI-related dangers, like deepfakes that manipulate audio and video content. The order acknowledges these problems but does little to solve them.
Lack of Focus on Facial Recognition and Enforcement
The EO gives little attention to facial recognition technology, despite its potential for discrimination and bias, and it offers no clear plan for preventing the harms AI tools can cause. Moreover, the order is largely unenforceable: it mostly creates standards, makes recommendations, and asks agencies to think about AI. It also suggests hiring AI experts within government agencies, but few experts are likely to leave the private sector for government positions.
Biden’s executive order on AI indicates his administration is taking it seriously. Does it go far enough?
Guest: Cecilia Kang, covering technology and policy for the New York Times.
If you enjoy this show, please consider signing up for Slate Plus. Slate Plus members get benefits like zero ads on any Slate podcast, bonus episodes of shows like Slow Burn and Dear Prudence—and you’ll be supporting the work we do here on What Next TBD. Sign up now at slate.com/whatnextplus to help support our work.