Is Power-Seeking AI an Existential Risk?

AI Safety Fundamentals: Alignment

Introduction

This chapter presents a detailed analysis of the core argument for concern about existential risk from misaligned artificial intelligence, covering the possibility of creating agents more intelligent than humans and an assessment of the likelihood of an existential catastrophe by 2070.
