The Problem With Mechanistic Anomalies
So I think the situation is: you have some ML system, and you have some set of scenarios in which it's doing a thing that you think isn't going to be catastrophic for human interests. And then off training and in deployment, it finds a situation where it could get away with murder. And so your ability to measure whether or not your AI murders all humans now applies off distribution, because you've fingerprinted the kinds of reasoning it's doing on distribution, and you're ensuring that it's doing the same kinds of reasoning off distribution, or if it's not, you're taking it offline.
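As a rough illustration of this "fingerprint the reasoning" idea, here's a minimal sketch of one way such anomaly detection could work: fit a simple statistical profile of the model's internal activations on trusted, on-distribution data, then flag deployment-time activations that fall outside it. The class name, the Mahalanobis-style scoring, and the threshold are all illustrative assumptions, not the specific method being discussed.

```python
# Sketch: "fingerprint" activations on trusted inputs, then flag deployment
# inputs whose activations look anomalous. Everything here is illustrative.
import numpy as np


class ActivationFingerprint:
    def __init__(self, threshold_percentile: float = 99.0):
        self.threshold_percentile = threshold_percentile
        self.mean = None
        self.cov_inv = None
        self.threshold = None

    def fit(self, trusted_activations: np.ndarray) -> None:
        """Fit a Gaussian fingerprint on activations gathered on distribution."""
        self.mean = trusted_activations.mean(axis=0)
        cov = np.cov(trusted_activations, rowvar=False)
        # Regularize so the covariance is invertible even with few samples.
        cov += 1e-6 * np.eye(cov.shape[0])
        self.cov_inv = np.linalg.inv(cov)
        # Calibrate the alarm threshold on the trusted data itself.
        scores = self._score(trusted_activations)
        self.threshold = np.percentile(scores, self.threshold_percentile)

    def _score(self, activations: np.ndarray) -> np.ndarray:
        # Squared Mahalanobis distance from the trusted fingerprint.
        diff = activations - self.mean
        return np.einsum("ij,jk,ik->i", diff, self.cov_inv, diff)

    def is_anomalous(self, activations: np.ndarray) -> np.ndarray:
        """True where deployment-time reasoning doesn't match the fingerprint."""
        return self._score(activations) > self.threshold


# Toy usage: fingerprint on-distribution activations, then check a shifted batch.
rng = np.random.default_rng(0)
trusted = rng.normal(0.0, 1.0, size=(1000, 16))   # stands in for hidden states
detector = ActivationFingerprint()
detector.fit(trusted)
deployment = rng.normal(3.0, 1.0, size=(5, 16))   # off-distribution "reasoning"
print(detector.is_anomalous(deployment))          # mostly True -> take it offline
```

In this toy version, "taking it offline" just means acting on the boolean flags; the real difficulty the conversation points at is whether a fingerprint built on distribution remains a reliable signal in situations the model never saw during training.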