AXRP - the AI X-risk Research Podcast

21 - Interpretability for Engineers with Stephen Casper


00:00

Introduction

Stephen Casper is a PhD student at MIT working with Dylan Hadfield-Menell on adversaries and interpretability in machine learning. We'll be talking about his Engineer's Interpretability Sequence of blog posts, as well as his paper on benchmarking whether interpretability tools can find trojan horses inside neural networks. For links to what we're discussing, you can check the description of this episode, and you can read the transcript at axrp.net.

Transcript