80,000 Hours Podcast

#80 – Stuart Russell on why our approach to AI is broken and how to fix it

Jun 22, 2020
Stuart Russell, a professor at UC Berkeley and co-author of a leading AI textbook, discusses the flaws in current AI development methods. He emphasizes the issue of misaligned objectives, using the example of YouTube's algorithm, which promotes extreme views to maximize engagement. Russell argues for a new approach that prioritizes human preferences and ethical considerations to better align AI systems with societal values. He highlights the urgent need for regulation and responsible frameworks to navigate the complex challenges of advanced AI.
AI Snips
ANECDOTE

YouTube's Algorithm

  • YouTube's recommendation algorithm, built to maximize viewing time, can inadvertently radicalize users.
  • Rather than simply learning what users want, it pushes them toward more extreme content, which makes their behavior more predictable, an unforeseen consequence of optimizing a fixed objective (a toy simulation of this dynamic is sketched below).
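
The snippet below is a hypothetical toy simulation of that dynamic, not a description of YouTube's actual system. It assumes a one-dimensional "extremeness" scale, a made-up watch-time model in which extreme content holds attention slightly longer, and a made-up drift rule in which watching an item pulls the user's taste toward it; all names and numbers are invented for illustration.

```python
# Toy sketch: a recommender that greedily maximizes predicted watch time
# steers a user toward more extreme content, while one that only matches
# the user's current taste does not. Purely illustrative assumptions.

def watch_time(pref: float, content: float, pull: float = 3.0) -> float:
    """Predicted watch time: best when content matches the user's taste,
    with a bonus for more extreme content (the assumption driving the demo)."""
    return max(0.0, 1.0 - abs(pref - content)) * (1.0 + pull * content)

def run(recommender, pref: float = 0.1, steps: int = 50) -> float:
    catalog = [i / 100 for i in range(101)]   # extremeness from 0.0 to 1.0
    for _ in range(steps):
        item = recommender(pref, catalog)
        pref += 0.3 * (item - pref)           # watching shifts the user's taste
    return pref

greedy = lambda pref, catalog: max(catalog, key=lambda c: watch_time(pref, c))
neutral = lambda pref, catalog: min(catalog, key=lambda c: abs(c - pref))

print(f"after greedy watch-time maximizer: taste drifts to {run(greedy):.2f}")
print(f"after simple taste-matching:       taste stays at  {run(neutral):.2f}")
```

In this toy model the greedy optimizer drags the user's taste from 0.10 to roughly 0.66, while the taste-matching recommender leaves it where it started, which is the kind of unintended side effect Russell points to when a system single-mindedly optimizes one metric.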
INSIGHT

Flawed AI Model

  • The standard model of AI, in which a machine optimizes a fixed objective specified up front by humans, is flawed.
  • A better model has the machine pursue human preferences that it is initially uncertain about, creating a collaborative relationship in which it defers to and learns from people (a toy contrast between the two models is sketched below).
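
Below is a minimal sketch of that contrast, not Russell's formal framework. It assumes a single yes/no decision, a hardcoded proxy objective for the standard-model agent, and a small set of equally likely hypotheses about the human's true utility for the preference-uncertain agent; all values are illustrative.

```python
# Toy contrast between the two models described in this snip.
# The "standard model" agent maximizes a fixed proxy objective no matter what;
# the "preference-uncertain" agent treats the human's true utility as unknown
# and prefers to ask the human when acting unilaterally looks risky.

# The agent's belief: equally likely hypotheses about how much the human
# actually values the proposed action (negative = the human is harmed).
belief_over_human_utility = [2.0, 0.5, -3.0]

PROXY_OBJECTIVE = 1.0   # the hardcoded objective says "acting is worth +1"
COST_OF_ASKING = 0.1    # small inconvenience of checking with the human

def standard_model_agent() -> str:
    """Optimizes the fixed objective it was given; never questions it."""
    return "act" if PROXY_OBJECTIVE > 0 else "do nothing"

def preference_uncertain_agent() -> str:
    """Maximizes expected *human* utility under its uncertainty; asking is
    worthwhile because the human will veto the harmful cases."""
    n = len(belief_over_human_utility)
    expected_if_act = sum(belief_over_human_utility) / n
    expected_if_ask = sum(max(u, 0.0) for u in belief_over_human_utility) / n - COST_OF_ASKING
    return "ask the human" if expected_if_ask > max(expected_if_act, 0.0) else "act"

print("standard model agent:      ", standard_model_agent())
print("preference-uncertain agent:", preference_uncertain_agent())
```

With these illustrative numbers the standard-model agent acts regardless, while the uncertain agent computes that checking with the human is worth more than acting unilaterally, which is the collaborative behavior the snip describes.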
INSIGHT

Misunderstandings of AI Risk

  • One misunderstanding is that Stuart Russell isn't genuinely worried about AI risk; in fact, he considers the risk serious unless the way we develop AI changes.
  • Another is that he is predicting an imminent AI takeover by the laptops on our desks, which he is not.