Google has made it a priority to ensure its A.I. products do not promote bias or prejudice, a concern dating to a 2015 incident in which Google Photos mislabeled photos of Black people as gorillas, a failure attributed to inadequate representation in the training data. That incident underscored how much diverse, inclusive training data matters in preventing bias in A.I. systems.
When Google released Gemini, a new chatbot powered by artificial intelligence, it quickly faced a backlash — and unleashed a fierce debate about whether A.I. should be guided by social values, and if so, whose values those should be.
Kevin Roose, a technology columnist for The Times and co-host of the podcast “Hard Fork,” explains.
Guest: Kevin Roose, a technology columnist for The New York Times and co-host of the podcast “Hard Fork.”
Background reading:
For more information on today’s episode, visit nytimes.com/thedaily. Transcripts of each episode will be made available by the next workday.