The Importance of Interpretability in Mechanistic Anomalies
I think interpretability techniques are often useful for things like mechanistic anomaly detection, but they're usually not oriented around that goal. It's hard to check whether you're making progress on empirical mechanistic anomaly detection, because you can often solve a benchmark by doing a bunch of things that intuitively seem unscalable. I would be excited if someone worked on explanations that enabled mechanistic anomaly detection, and defined the explanations in a way that made that sort of thing possible.
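The speaker doesn't describe a concrete method here, so as a rough illustration only: one common empirical baseline for anomaly detection on model internals (plausibly the kind of "intuitively unscalable" approach being alluded to) is to fit a reference distribution over hidden activations collected on trusted inputs and flag new inputs whose activations fall far outside it. The sketch below uses a Gaussian fit and Mahalanobis distance; the function names, the `collect_activations` helper, and the Gaussian assumption are all illustrative, not from the source.

```python
import numpy as np

def fit_reference(activations: np.ndarray):
    """Fit a Gaussian reference to activations from trusted inputs.

    activations: array of shape (n_samples, d_model).
    Returns the mean vector and a regularised inverse covariance.
    """
    mean = activations.mean(axis=0)
    cov = np.cov(activations, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])  # regularise so the covariance is invertible
    return mean, np.linalg.inv(cov)

def anomaly_score(x: np.ndarray, mean: np.ndarray, inv_cov: np.ndarray) -> float:
    """Mahalanobis distance of a new activation vector from the reference."""
    delta = x - mean
    return float(np.sqrt(delta @ inv_cov @ delta))

# Hypothetical usage: flag inputs whose internal activations look unlike trusted data.
# trusted_acts = collect_activations(model, trusted_inputs)   # hypothetical helper
# mean, inv_cov = fit_reference(trusted_acts)
# is_anomalous = anomaly_score(new_activation, mean, inv_cov) > threshold
```

The point of the excerpt is that baselines like this can score well without being the scalable, explanation-based approach the speaker would like to see, which is what makes progress hard to measure.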