Katja runs AI Impacts, a research project that tries to incrementally answer decision-relevant questions about the future of AI. She is well known for the 2017 survey "When Will AI Exceed Human Performance? Evidence from AI Experts" and recently published a new survey of AI experts, "What Do ML Researchers Think About AI in 2022?". We start this episode by discussing what Katja is currently thinking about, namely an answer to Scott Alexander on why slowing down AI progress is an underexplored path to impact.
Youtube: https://youtu.be/rSw3UVDZge0
Audio & Transcript: https://theinsideview.ai/katja
Host: https://twitter.com/MichaelTrazzi
Katja: https://twitter.com/katjagrace
OUTLINE
(00:00) Highlights
(00:58) Intro
(01:33) Why Advocating For Slowing Down AI Might Be Net Bad
(04:35) Why Slowing Down AI Is Taboo
(10:14) Why Katja Is Not Currently Giving A Talk To The UN
(12:40) To Avoid An Arms Race, Do Not Accelerate Capabilities
(16:27) How To Cooperate And Implement Safety Measures
(21:26) Would AI Researchers Actually Accept Slowing Down AI?
(29:08) Common Arguments Against Slowing Down And Their Counterarguments
(36:26) To Go To The Stars, Build AGI Or Upload Your Mind
(39:46) Why Katja Thinks There Is A 7% Chance Of AI Destroying The World
(46:39) Why We Might End Up Building Agents
(51:02) AI Impacts Answers Empirical Questions To Help Solve Important Ones
(56:32) The 2022 Expert Survey on AI Progress
(58:56) High Level Machine Intelligence
(1:04:02) Running A Survey That Actually Collects Data
(1:08:38) How AI Timelines Have Become Shorter Since 2016
(1:14:35) Are AI Researchers Still Too Optimistic?
(1:18:20) AI Experts Seem To Believe In Slower Takeoffs
(1:25:11) Automation And The Unequal Distribution Of Cognitive Power
(1:34:59) The Least Impressive Thing That Cannot Be Done In 2 Years
(1:38:17) Final Thoughts