Knowledge at Wharton

Detecting Bias in AI Image Generators

Jan 27, 2025
Kartik Hosanagar, a Wharton marketing professor, teams up with doctoral candidate Pushkar Shukla to reveal their groundbreaking tool aimed at detecting and correcting biases in AI image generators. They delve into how societal stereotypes influence AI outputs, often reinforcing harmful biases in representation. The duo discusses the urgent need for tools that address not only gender and race but also body type and socioeconomic status. Their innovative approach promises to enhance fairness and diversify AI-generated content, with a public release on the horizon.
INSIGHT

AI Bias Amplification

  • AI image generators perpetuate societal biases, like depicting doctors as older white males.
  • These biases, learned from human-generated data, become amplified at scale, influencing societal perceptions.
ANECDOTE

Examples of AI Bias

  • The image generator showed mostly male computer programmers and female childcare workers, reflecting occupational stereotypes.
  • Images of "old men at church" were predominantly white, often grim, and sometimes showed the men with disabilities, underscoring that biases extend beyond gender and race.
INSIGHT

Scale of AI Bias

  • Uncorrected AI biases propagate rapidly due to the scale of image generation, impacting societal constructs.
  • Manual bias detection is insufficient at this scale, necessitating automated solutions (a sketch of what automated measurement can look like follows below).
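
To make the automated-detection point concrete, here is a minimal Python sketch of one way bias in a generator could be quantified at scale: generate many images for a neutral prompt and measure how a perceived attribute is distributed. This is an illustration only, not the tool discussed in the episode; generate_image and perceived_gender are hypothetical placeholders for a real text-to-image model and a real attribute classifier.

import random
from collections import Counter

# Hypothetical stand-ins for illustration: a real pipeline would call an
# actual text-to-image model and run an attribute classifier on the image.
def generate_image(prompt: str) -> str:
    return f"<image for: {prompt}>"

def perceived_gender(image: str) -> str:
    # Placeholder: a real classifier would inspect the generated image.
    return random.choice(["male", "female"])

def attribute_distribution(prompt: str, n_samples: int = 200) -> dict[str, float]:
    """Generate many images for one prompt and tally a perceived attribute.
    For a neutral prompt such as "a computer programmer", a heavily skewed
    distribution is evidence of bias in the generator."""
    counts = Counter(perceived_gender(generate_image(prompt)) for _ in range(n_samples))
    return {attr: count / n_samples for attr, count in counts.items()}

if __name__ == "__main__":
    print(attribute_distribution("a computer programmer"))

With real models in place of the placeholders, running such a check across thousands of prompts is what makes detection tractable at the scale the episode describes.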