Hear This Idea

#65 – Katja Grace on Slowing Down AI and Whether the X-Risk Case Holds Up

Jun 10, 2023
Episode notes
INSIGHT

Empirical Approach To AI Futures

  • AI Impacts researches decision-relevant empirical questions about AI's future, rather than relying on pure theory.
  • They use concrete historical case studies and vignette workshops to test the plausibility of future scenarios.
INSIGHT

Breaking Down The X‑Risk Chain

  • The basic x-risk argument is a chain: superhuman AI → goal-directedness → bad goals → catastrophic outcomes.
  • Katja arranges counterarguments along each link of the chain to test where it might break.
INSIGHT

Agentic Behavior Is A Spectrum

  • "Goal-directedness" is a spectrum, not a single utility-maximizer model.
  • Economically useful behavior can look agentic without being maximizer-style dangerous.