21 - Interpretability for Engineers with Stephen Casper

AXRP - the AI X-risk Research Podcast

The Future of Deep Neural Networks

Deep neural networks are so performant because they're these big, sort of semi-unstructured blobs of matrices, right? So gradients can flow freely and the network can kind of figure out its own structure. How possible do you think it's going to be to reconcile performance with architectures that actually help interpretability in a real way? Yeah, I expect this to be the case, definitely somewhat.

