
12 - AI Existential Risk with Paul Christiano

AXRP - the AI X-risk Research Podcast

CHAPTER

Introduction

Paul is a researcher at the Alignment Research Center, where he works on developing means to align future machine learning systems with human interests. Paul: I think I don't necessarily have a bright line around giant or drastic drops versus moderate drops. Anything that could cause us not to fulfill some large chunk of our potential makes it one of the worst things in the world. You can't have that many 20% hits before you're down to, like, no potential left.

