
12 - AI Existential Risk with Paul Christiano

AXRP - the AI X-risk Research Podcast


Introduction

Paul is a researcher at the Alignment Research Center, where he works on developing means to align future machine learning systems with human interests. Paul: I think I don't necessarily have a bright line around giant or drastic drops versus moderate drops. Anything that could cause us not to fulfill some large chunk of our potential makes it one of the worst things in the world. You can't have that many 20% hits before you're down to, like, no potential left.

