In 2018, we started finding out that facial recognition software had significant racial and gender biases. What we're seeing now is different from the kind of facial recognition we already know about: generative AI is a new type of AI, and it adds a new wrinkle. Instead of scanning existing pictures, it creates new images. And we found that those, too, carry significant racial and gender biases.
As pressure mounts on lawmakers to regulate artificial intelligence, another problem area of the technology is emerging: AI-generated images. Early research shows these images can be biased and perpetuate stereotypes. Bloomberg reporters Dina Bass and Leonardo Nicoletti dug deep into the data that powers this technology, and they join this episode to talk about how AI image generation works—and whether it’s possible to train the models to produce better results.
Read more: Humans Are Biased. Generative AI Is Even Worse
Listen to The Big Take podcast every weekday and subscribe to our daily newsletter: https://bloom.bg/3F3EJAK
Have questions or comments for Wes and the team? Reach us at bigtake@bloomberg.net.
See omnystudio.com/listener for privacy information.