
Doom Debates
Richard Sutton Dismisses AI Extinction Fears with Simplistic Arguments | Liron Reacts
Dr. Richard Sutton is a Professor of Computing Science at the University of Alberta, known for his pioneering work on reinforcement learning and for his “Bitter Lesson”: the observation that scaling up an AI’s data and compute gives better results than having programmers try to handcraft or explicitly understand how the AI works.
Dr. Sutton famously claims that AIs are the “next step in human evolution”, a positive force for progress rather than a catastrophic extinction risk comparable to nuclear weapons.
Let’s examine Sutton’s recent interview with Daniel Faggella to understand his crux of disagreement with the AI doom position.
---
00:00 Introduction
03:33 The Worthy vs. Unworthy AI Successor
04:52 “Peaceful AI”
07:54 “Decentralization”
11:57 AI and Human Cooperation
14:54 Micromanagement vs. Decentralization
24:28 Discovering Our Place in the World
33:45 Standard Transhumanism
44:29 AI Traits and Environmental Influence
46:06 The Importance of Cooperation
48:41 The Risk of Superintelligent AI
57:25 The Treacherous Turn and AI Safety
01:04:28 The Debate on AI Control
01:13:50 The Urgency of AI Regulation
01:21:41 Final Thoughts and Call to Action
---
Original interview with Daniel Faggella: youtube.com/watch?v=fRzL5Mt0c8A
Follow Richard Sutton: x.com/richardssutton
Follow Daniel Faggella: x.com/danfaggella
Follow Liron: x.com/liron
Subscribe to my YouTube channel for full episodes and other bonus content: youtube.com/@DoomDebates