Is Power-Seeking AI an Existential Risk?

AI Safety Fundamentals: Alignment

Probabilities and Uncertainties of Existential Catastrophe from Power-Seeking AI

The chapter examines the probabilities and uncertainties surrounding existential catastrophe from power-seeking AI. The speaker lays out an argument with six premises describing how advanced AI systems could come to seek power in misaligned, high-impact ways, culminating in existential catastrophe. Combining his credences in these premises, he arrives at roughly a 5% probability of existential catastrophe by 2070; in a May 2022 update, he revised this estimate upward to over 10%.
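Because the argument is structured as six premises, each conditional on the ones before it, the headline number is a product of conditional credences. A minimal sketch of that arithmetic (the premise labels and per-premise values below are illustrative placeholders, not necessarily the report's own figures) shows how such probabilities compound:

```python
# Illustrative only: multiply conditional credences in six premises
# to get an overall probability. Values are placeholders chosen to
# land near the ~5% headline figure, not the report's own numbers.
premises = {
    "advanced, agentic AI systems are feasible by 2070": 0.65,
    "there will be strong incentives to build them": 0.80,
    "aligning such systems will be hard": 0.40,
    "misaligned systems will seek power at scale": 0.65,
    "power-seeking escalates to human disempowerment": 0.40,
    "that disempowerment is an existential catastrophe": 0.95,
}

p_catastrophe = 1.0
for premise, credence in premises.items():
    p_catastrophe *= credence  # chain the conditional probabilities

print(f"overall probability: {p_catastrophe:.1%}")  # ~5%
```

A useful feature of this decomposition is that even moderately high credences in each step multiply out to a much smaller overall probability, which is why premise-by-premise estimates, rather than a single gut number, anchor the 5% figure.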
