
12 - AI Existential Risk with Paul Christiano

AXRP - the AI X-risk Research Podcast

Chapter: Are You Interested in Interpretability?

In the realm of things roughly like outer alignment, or, you know, sort of alignment where you're dealing with low-stakes, repeatable problems, what kind of solutions are you most interested in from a research perspective? I don't have a very short answer to this question, so I guess you'll get a kind of long answer, which in itself is interesting. And maybe there's also two kinds of answers I can give. One is, like, the thing that I am most animated by, that, like, I am working on myself. Another is, like, a broader class of things people do in the world that I am particularly excited by. Um, maybe my …

