2min chapter

21 - Interpretability for Engineers with Stephen Casper

AXRP - the AI X-risk Research Podcast

CHAPTER

The Future of Interpretability

The Madry Lab at MIT does really, really cool interpretability work. At one point, I put together a list, based on my knowledge of papers from the adversarial robustness and interpretability literature, of papers that seemed to demonstrate some sort of very engineering-relevant capability for model diagnostics or debugging. This list had, I think, 21 or 22 papers on it. And for what it's worth, these papers did not come from people who are prototypical members of the AI safety or interpretability community.
