21 - Interpretability for Engineers with Stephen Casper

AXRP - the AI X-risk Research Podcast

The Predictability of Networks in AI Interpretability

There's a paper called "Adversarial Examples Are Not Bugs, They Are Features", which studied this in a pretty clever way. When you apply this kind of perturbation to an image, it can reliably cause the model to be fooled. This suggests that networks may be learning and picking up on features that humans are not naturally disposed to understand very well, but that networks can.
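To make the idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM), a standard way to construct such perturbations; it is not the exact procedure from the paper. The `TinyNet` model, the random placeholder image, and the `epsilon` value are all illustrative assumptions, not anything from the episode.

```python
# Minimal FGSM sketch in PyTorch: nudge an image in the direction that
# increases the model's loss, and see whether the prediction changes.
# TinyNet and the random input are stand-ins for a real model and image.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 32 * 32, num_classes)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return self.fc(x.flatten(1))


def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return a copy of `image` perturbed to increase the loss on `label`."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step along the sign of the input gradient, then clip to valid pixel range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyNet().eval()
    image = torch.rand(1, 3, 32, 32)        # placeholder "image"
    label = model(image).argmax(dim=1)      # treat the clean prediction as the label
    adv = fgsm_perturb(model, image, label)
    print("clean prediction:", label.item())
    print("adversarial prediction:", model(adv).argmax(dim=1).item())
```

On a trained classifier, a perturbation like this is typically imperceptible to a human yet flips the prediction reliably; with this untrained toy model the flip is not guaranteed, so the sketch only shows the mechanics.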
