
Chicago Booth Review Podcast: Can you catch AI-written reviews?
Dec 10, 2025
Brian Jabarian, an economist from the University of Chicago, explores AI detection tools and their implications. He discusses the crucial role of these tools in preserving trust in education and consumer reviews. Jabarian reveals that non-experts struggle to differentiate between AI and human writing. He highlights the most effective detectors, especially Pangram, and warns about the evolving battle between AI and detection technologies. Policy suggestions include varying detection thresholds for classrooms and social media to ensure transparency and accountability in AI usage.
AI Snips
Humans Fail At Spotting AI
- Brian and Alex ran a lab study in which non-experts guessed whether text was AI- or human-written, and they performed no better than chance.
- That poor human performance motivated a systematic evaluation of commercial AI detectors.
Human-Authorship Underpins Trust
- Much of the economy depends on trusting human-generated content like reviews and legal texts.
- Detecting AI authorship preserves that trust and helps assess genuine human performance.
Detection Involves Tough Trade-Offs
- Detectors trade off false negatives (missed AI) against false positives (mislabeling humans).
- Different vendors tune that trade-off differently, so marketed accuracy claims can be misleading.
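The trade-off described above can be illustrated with a minimal sketch (not any vendor's actual detector): suppose each text receives an "AI-likeness" score, and the vendor flags texts scoring above a chosen threshold. The scores and thresholds below are hypothetical, purely to show how moving the threshold shifts errors between the two failure modes.

```python
# Hypothetical scores in [0, 1]: human-written texts tend to score low,
# AI-written texts high, with some overlap in the middle.
human_scores = [0.05, 0.12, 0.20, 0.35, 0.48, 0.55]
ai_scores = [0.40, 0.52, 0.61, 0.75, 0.88, 0.95]

def error_rates(threshold):
    """False-positive and false-negative rates at a given flagging threshold."""
    # False positive: a human text scores at or above the threshold and is flagged.
    fp = sum(s >= threshold for s in human_scores) / len(human_scores)
    # False negative: an AI text scores below the threshold and slips through.
    fn = sum(s < threshold for s in ai_scores) / len(ai_scores)
    return fp, fn

for t in (0.3, 0.5, 0.7):
    fp, fn = error_rates(t)
    print(f"threshold={t:.1f}  false-positive rate={fp:.2f}  false-negative rate={fn:.2f}")
```

A strict (low) threshold catches nearly all AI text but mislabels more humans; a lenient (high) threshold spares humans but lets more AI through. Because vendors can pick any point on this curve, a single headline "accuracy" number hides which error they chose to tolerate.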
