Humans + AI

Ganna Pogrebna on behavioural data science, machine bias, digital twins vs digital shadows, and stakeholder simulations (AC Ep23)

Nov 19, 2025
Ganna Pogrebna, a Research Professor and expert in behavioural data science, dives into the intricacies of human bias in AI. She highlights how algorithms can inherit human biases, using Amazon's hiring tool as a cautionary tale. Ganna discusses the need for context-rich prompting when working with AI, alongside the importance of combining human judgment with machine efficiency. She emphasizes the value of simulations and digital twins in refining strategic decisions, illustrating how they can unlock insights into stakeholder dynamics.
INSIGHT

Human Data Fuels Algorithmic Bias

  • Algorithms are trained on human-generated, human-labeled data, so they inherit human biases if left unchecked.
  • Recognize that algorithmic outputs reflect the flaws of the human data used to train them.
ANECDOTE

Amazon Hiring Algorithm Example

  • Amazon's hiring model learned gender bias because past hires were mostly male and the CV signals it relied on reflected that pattern.
  • The model disadvantaged female applicants by reproducing those historical hiring patterns.
ADVICE

Understand Then Offset Biases

  • First identify where algorithmic bias comes from before deciding how to mitigate it.
  • Use humans and machines to offset each other's weaknesses rather than blindly replacing one with the other.