

#65 – Katja Grace on Slowing Down AI and Whether the X-Risk Case Holds Up
Jun 10, 2023
Empirical Approach To AI Futures
- AI Impacts researches decision-relevant empirical questions about AI's future rather than pure theory.
- They focus on concrete histories and vignette workshops to test the plausibility of scenarios.
Breaking Down The X‑Risk Chain
- The basic x-risk chain: superhuman AI → goal-directedness → bad goals → catastrophic outcomes.
- Katja arranges counterarguments along each link to test where the chain might break.
Agentic Behavior Is A Spectrum
- "Goal-directedness" is a spectrum, not a single utility-maximizer model.
- Economically useful AI behavior can look agentic without being dangerous in the utility-maximizer sense.