A lot of the blame for AI bias gets put on the datasets, but there also needs to be accountability from users at all levels. One thing that needs to be done is diversifying the datasets: there needs to be a way to get data from other countries and other cultures, and to do so ethically. If we move toward optimizing the technology, so that you don't just have to add more volume to get a better algorithm or model, that could help as well.
As pressure mounts on lawmakers to regulate artificial intelligence, another problem area of the technology is emerging: AI-generated images. Early research shows these images can be biased and perpetuate stereotypes. Bloomberg reporters Dina Bass and Leonardo Nicoletti dug deep into the data that powers this technology, and they join this episode to talk about how AI image generation works—and whether it’s possible to train the models to produce better results.
Read more: Humans Are Biased. Generative AI Is Even Worse
Listen to The Big Take podcast every weekday and subscribe to our daily newsletter: https://bloom.bg/3F3EJAK
Have questions or comments for Wes and the team? Reach us at bigtake@bloomberg.net.