There are three main ways to mitigate bias in AI models. The first is to address bias during model building, by setting guidelines and principles that shape training. The second is to correct bias after the model is built, by instructing it to follow rules such as avoiding offensive language or stereotyping. The third is prompt transformation, in which prompt engineers modify users' requests before they reach the model. This unseen editing of prompts improves the quality of responses and helps steer the model toward the desired outcomes.
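To make the third approach concrete, here is a minimal sketch of how a prompt-transformation layer might sit between a user and a model. The function names (transform_prompt, call_model) and the rewrite rule are hypothetical illustrations, not the actual system used by Gemini or any other product; real systems apply far more elaborate and context-sensitive rewriting.

```python
# Hypothetical sketch of a prompt-transformation layer.
# Names and rules are illustrative, not from any real system.

SAFETY_PREFIX = (
    "Respond helpfully. Avoid offensive content and stereotypes. "
)

def transform_prompt(user_prompt: str) -> str:
    """Rewrite the user's request before it reaches the model.

    This invisible editing step is where guidelines (tone,
    representation, safety) can be injected without the user
    ever seeing the modified prompt.
    """
    return SAFETY_PREFIX + user_prompt

def call_model(prompt: str) -> str:
    # Stub standing in for a real model API call.
    return f"[model response to: {prompt!r}]"

if __name__ == "__main__":
    raw = "Describe a typical CEO."
    edited = transform_prompt(raw)  # the user never sees this version
    print(call_model(edited))
```

The key design point is that the transformation happens server-side, between the user interface and the model, which is why users are often unaware their prompts have been edited at all.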
When Google released Gemini, a new chatbot powered by artificial intelligence, it quickly faced a backlash — and unleashed a fierce debate about whether A.I. should be guided by social values, and if so, whose values those should be.
Kevin Roose, a technology columnist for The Times and co-host of the podcast “Hard Fork,” explains.
Guest: Kevin Roose, a technology columnist for The New York Times and co-host of the podcast “Hard Fork.”
Background reading:
For more information on today’s episode, visit nytimes.com/thedaily. Transcripts of each episode will be made available by the next workday.