AXRP - the AI X-risk Research Podcast

19 - Mechanistic Interpretability with Neel Nanda

CHAPTER

Aren't Neural Networks Just Fundamentally Not Interpretable?

I feel like there's a big theme we never really got into, which is just: is any of this remotely a reasonable thing to be working on? Like, isn't this just ludicrously ambitious, never going to work? Or like, aren't networks just fundamentally not interpretable? And then it turns out transformers are pretty doable, and in some ways much easier and in other ways much harder. But yeah, I think this is just kind of an open scientific question that we just don't have enough data to bear on either way.

