
21 - Interpretability for Engineers with Stephen Casper

AXRP - the AI X-risk Research Podcast


The Relationship Between Mechanistic Interpretability and Deceptive Alignment

The relationship between mechanistic interpretability and deceptive alignment is one of inextricable connection. If a system is deceptively aligned, whether its problems are being actively hidden from us or not seems less important to an engineer. It's just that if the system is deceptively misaligned, it has problems that are hard to find during testing and evaluation. The reason why this is a great example is because it illustrates a situation in which a system might actively want to be deceptive and might be deceptive in a way that is very, very insidious. This deception can be cryptographically hard to find, provably cryptographically hard to find. But if you assume...

