The Problem With Disentangled Representation Learning
The goal of eigenvector decomposition is to find a basis, to rotate the representation into a set of orthogonal directions. And there's been limited success using those kinds of techniques to understand the internals of transformers. But if you relax that requirement and say, I don't need completely orthogonal directions, I just want them to be mostly orthogonal to each other, all of a sudden you have exponentially sized memory storage.
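The "mostly orthogonal" point can be illustrated numerically: a d-dimensional space holds only d exactly orthogonal directions, but far more than d random unit vectors can coexist while staying nearly orthogonal to one another. A minimal NumPy sketch, with dimensions and thresholds chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 512, 2048  # n far exceeds d: more vectors than orthogonal directions

# Random unit vectors in R^d are nearly orthogonal with high probability.
V = rng.standard_normal((n, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)

# Pairwise cosine similarities are the off-diagonal entries of the Gram matrix.
G = V @ V.T
np.fill_diagonal(G, 0.0)
max_cos = np.abs(G).max()

print(f"{n} vectors in {d} dims, max |cos| between any pair: {max_cos:.3f}")
```

Even with four times as many vectors as dimensions, every pairwise cosine stays small, which is why relaxing exact orthogonality buys so much extra storage capacity.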