80k After Hours

Highlights: #158 – Holden Karnofsky on how AIs might take over even if they’re no smarter than humans, and his four-part playbook for AI risk

Oct 6, 2023
Holden Karnofsky discusses the importance of measuring and managing the risks posed by AI systems. The highlight covers the uncertainty around when AI might surpass human capabilities, debates about how intelligent an AI would need to be to pose a significant risk, concerns about misaligned AI and superintelligence, and the tension between utilitarianism and other ethical considerations.