

Clearview AI in the Capitol, Medical AI Regulation, DeepFake Text
Jan 21, 2021
This week's discussion covers the surge in law enforcement use of Clearview AI's facial recognition in the wake of the Capitol mob incident, the FDA's new action plan for regulating AI in medical devices and its attempt to balance safety with innovation, and Google's development of a trillion-parameter AI language model. The conversation also delves into the risks of AI-generated deepfake text and its potential for manipulation in governance, emphasizing the need for ethical oversight.
AI Snips
Clearview AI Use Spikes After Capitol Riot
- Clearview AI's use spiked 26% after the January 6th Capitol riot as law enforcement sought to identify participants.
- This raises concerns about the normalization of such technology and its potential misuse in other contexts.
Need for Regulation in Facial Recognition Tech
- While accountability is important, the increasing use of facial recognition tech like Clearview AI necessitates regulation.
- Clear standards are needed to ensure responsible and ethical implementation.
Inconsistent Regulations on Facial Recognition
- Current regulations on facial recognition are inconsistent, with no federal oversight, leaving local departments broad discretion over how the technology is used.
- This lack of consistency raises concerns about potential misuse in contexts like peaceful protests.