Is Power-Seeking AI an Existential Risk?

AI Safety Fundamentals: Alignment

Probabilities and Uncertainties of Existential Catastrophe from Power-Seeking AI

The chapter discusses the probabilities and uncertainties surrounding the risk of existential catastrophe from power-seeking AI. The speaker presents a six-premise argument that advanced AI systems may seek power in misaligned, high-impact ways, culminating in an existential catastrophe. The chapter concludes with an estimate of roughly 5% probability of existential catastrophe by 2070, which the speaker revised upward in May 2022 to over 10%.

