
Highlights: #158 – Holden Karnofsky on how AIs might take over even if they’re no smarter than humans, and his four-part playbook for AI risk
80k After Hours
Uncertainty and Potential Outcomes
This chapter explores the uncertainty surrounding superintelligence and its potential outcomes. It discusses concerns about misaligned AI, the possibility of coexistence between humans and AI systems, and the risks and challenges of building such systems.