Human Centered

Better AI Through Social Science

Jul 5, 2022
This discussion features Jennifer Logg, an expert in judgment and decision-making, Daniel Ho, a legal scholar focused on AI's social context, and Kristian Hammond, who researches AI and human interaction. They dive into the ethical implications of AI technologies, emphasizing the need for transparency and accountability. The trio explores the disconnect between data insights and real-world applications, the importance of addressing biases in algorithms, and the role of social sciences in creating responsible AI systems. A fascinating look at the marriage of AI, ethics, and human behavior awaits!
INSIGHT

Analytics' Last Mile Problem

  • Organizations face a "last mile" gap between producing analytics and users applying them effectively.
  • Understanding how people respond to algorithmic advice is essential to close that gap.
INSIGHT

Engineers' Broken Human Model

  • Engineers build systems assuming a rational actor model that humans don't follow in practice.
  • Ignoring real human behavior leads to overtrust, undertrust, and misuse of AI systems.
ANECDOTE

Google Engineer's Sentience Claim

  • Jake recounts how Google engineer Blake Lemoine claimed LaMDA was sentient and was put on leave.
  • The episode highlights how even experienced engineers can anthropomorphize chatbots.