

Highlights: #158 – Holden Karnofsky on how AIs might take over even if they’re no smarter than humans, and his four-part playbook for AI risk
Oct 6, 2023
Holden Karnofsky discusses the importance of measuring and managing the risks associated with AI systems. The episode explores the uncertainty surrounding the timeline for AI surpassing human capabilities and the debate over how intelligent an AI would need to be to pose a significant risk. It also examines uncertainty around superintelligence, concerns about misaligned AI, and the tension between utilitarianism and other ethical considerations.
Chapters
Introduction
00:00 • 3min
Preparing for the Explosive Progress of AI and the Potential for an AI Population Explosion
03:04 • 2min
Debating the Level of Intelligence Required for AI Risk
04:42 • 2min
Uncertainty and Potential Outcomes
06:54 • 15min
Exploring the Tension Between Utilitarianism and Other Ethical Considerations
21:27 • 2min