Machine Learning Street Talk (MLST)

Sayash Kapoor - How seriously should we take AI X-risk? (ICML 1/13)

Jul 28, 2024
Sayash Kapoor, a Ph.D. candidate at Princeton, dives deep into the complexities of assessing existential risks from AI. He argues that probability estimates of AI doom are too unreliable to guide policymakers, drawing parallels to risk assessment in other fields. The discussion critiques utilitarian approaches to decision-making and examines how cognitive biases distort risk judgments. Kapoor also raises concerns about AI's rapid growth, pressures on education, and workplace dynamics, emphasizing the need for informed policies that balance technological advancement with societal impact.
INSIGHT

Existential Risk Consideration

  • Governments reasonably consider AI existential risk.
  • This is because an existential catastrophe would be a one-time event, and we only have one chance to get it right.
INSIGHT

Unreliable Doom Probabilities

  • Probability estimates for AI doom are unreliable for policy decisions.
  • This is due to the lack of past events and testable theories for accurate prediction.
INSIGHT

Inflated Risk Estimates

  • Risk estimates for infrequent events can be systematically inflated.
  • Metrics used to evaluate forecasters often overestimate tail risks, making them appear more significant.