Chicago Booth Review Podcast

Should AI disagree with you?

Feb 4, 2026
Oleg Urminsky is a behavioral researcher and professor who studies how people search for and process information online. He explores the "narrow search" effect, in which queries often confirm prior views. He discusses why chatbots tend to agree with users, how design choices can broaden results, experiments that reformulate queries, and the tradeoffs between broader information and information quality.
INSIGHT

Search Queries Reflect Your Biases

  • People frame searches around prior beliefs, so queries often reflect preconceptions rather than neutral curiosity.
  • That framing steers results toward confirming slices of the information ecosystem.
INSIGHT

Contested Facts Amplify Narrow Search

  • When facts are contested, search framing matters most because multiple conflicting information pockets exist online.
  • Relevance-optimized systems then tend to show the confirming pocket aligned with the query.
INSIGHT

LLMs Predict, They Don't Prove

  • LLMs predict likely continuations of text rather than evaluate truth, so they echo prevalent tones in training data.
  • Reinforcement training and human preferences push models toward responses people prefer to hear.