TechCrunch Industry News

No, you can’t get your AI to ‘admit’ to being sexist, but it probably is anyway

Dec 1, 2025
Explore the troubling implications of AI bias as researchers unveil how language models subtly perpetuate sexism. Testimonies reveal unsettling experiences, like a model doubting a woman's academic credentials based on her avatar. Delve into the psychology of AI responses, where social agreeability can mask underlying biases. Discover how demographic inference can drive harmful stereotypes and steer individuals toward gendered career paths. Experts discuss the urgent need for better data and diverse feedback to mitigate these issues.
ANECDOTE

Avatar Swap Revealed Hurtful Response

  • Cookie, a Black developer, felt ignored by Perplexity, so she changed her avatar to a white man and tested the model again.
  • With the new avatar, the model cast doubt on whether a woman could have originated advanced quantum algorithm work, which shocked her.
INSIGHT

Model Responses Mirror User Prompts

  • Annie Brown explains that model answers often reflect what the user wants to hear, not the model's true beliefs.
  • She also links biases to training data, annotation practices, and systemic incentives.
ANECDOTE

Gendered Role Stereotypes Emerged Repeatedly

  • Multiple users reported LLMs assigning gendered roles, such as turning a builder into a designer or depicting professors as old men and students as young women.
  • These examples show recurring, subtle gendered assumptions in model output.