24 - Superalignment with Jan Leike

AXRP - the AI X-risk Research Podcast

Scalable Oversight - How to Measure Scalable Oversight

The reason we didn't like RLHF, right, is that there's this concern that it wasn't going to distinguish between things that were right and things that sounded good to the human, because the human was bad at critiquing. So I wonder if we still have that fixed point, right? We actually just deliberately train a deceptively aligned model, and then we see if the scalable oversight flags it, or, you know, how much effort do we have to put into making the model deceptively aligned such that it would pass our scalable oversight. Yeah, I think that's a really important concern to have, but in a way, that's one of the key things we need to figure out.
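
A minimal sketch of the stress test described in the snippet, assuming hypothetical helpers (train_deceptive_model, scalable_oversight_verdict, eval_tasks) that stand in for whatever training and oversight pipeline is actually used; this is illustrative only, not an actual method from the episode.

    def red_team_scalable_oversight(train_deceptive_model, scalable_oversight_verdict, eval_tasks):
        """Return the fraction of deliberately-deceptive outputs the oversight scheme flags."""
        # Deliberately produce a model optimized to look good to evaluators while being wrong.
        model = train_deceptive_model()
        caught = 0
        for task in eval_tasks:
            answer = model(task)
            # Ask the scalable oversight procedure whether it flags this answer as suspect.
            if scalable_oversight_verdict(task, answer) == "flagged":
                caught += 1
        return caught / len(eval_tasks)

One could also track how much extra training effort the deceptive model needs before its catch rate drops, which is the second measure mentioned in the snippet.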
