4 - Risks from Learned Optimization with Evan Hubinger

AXRP - the AI X-risk Research Podcast

CHAPTER: Is There an Inner Alignment Problem?

Hm, I do want to push back a little bit on the... and actually, like, that thing I'm conceding is just enough for the argument. Yes, I guess so. I think I'm convinced by the idea that there are just so many situations in the world that, like, you can't manually train a thing for each situation. So that even if you're splitting the entire domain of, like, a really complex problem into only ten thousand tasks, or, you know, only a thousand tasks like that, each individual task is still going to have so much generality that, I think, you'll need search, you'll need mesa-optimization. Even if you're...
