AXRP - the AI X-risk Research Podcast

19 - Mechanistic Interpretability with Neel Nanda



Aren't Neural Networks Just Fundamentally Not Interpretable?

I feel like there's a big theme we never really got into, which is: is any of this remotely a reasonable thing to be working on? Like, isn't this just ludicrously ambitious, never going to work? Or like, aren't neural networks just fundamentally not interpretable? And then it turns out transformers are pretty doable - in some ways much easier, and in other ways much harder. But yeah, I think this is just kind of an open scientific question that we don't have enough data to bear on either way.

