LW - SAE feature geometry is outside the superposition hypothesis by Jake Mendel

The Nonlinear Library

Exploring the Importance of Feature Geometry and Activation Spaces in Neural Networks

Exploring the limitations of superposition-based interpretations of neural network activation spaces, proposing alternative accounts of feature geometry, and suggesting directions for future research.
