3-minute chapter

Jesse Hoogland on Developmental Interpretability and Singular Learning Theory

The Inside View

CHAPTER

The Problem With Interpretability in Large Systems

The problem, obviously, with very large systems is: how do you figure out all the things that are going on inside of a neural network? Maybe you can find many of the big-picture things, but it's very hard to find all the little details. Developmental interpretability proposes that we study how structure forms over the course of training. And I think it might be more tractable to find out what's going on in the neural network at the end if we just understand each individual transition over the course of training. That might be much more tractable than trying to understand the structure as it stands at the end of training.
