Using MLPs to Reverse Engineer the Network
A lot of the mental moves that I'm making when I'm trying to reverse engineer the network are around discovering that things are localized within the model: most other things do not matter for this input and can be safely ignored, and the next thing that matters can be computed solely from the heads or neurons that I identified earlier as important. There's a core problem called superposition, which I'm probably not going to talk about that much later, but in brief: models want to represent features as directions in space. Transformers have these MLP layers where there's a linear map followed by an activation function called a GELU.
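To make that structure concrete, here is a minimal sketch of a transformer MLP block in PyTorch: a linear map up to a hidden "neuron" space, a GELU nonlinearity, and a linear map back down. The dimensions (`d_model`, `d_mlp`) and the 4x expansion factor are common conventions assumed for illustration, not something specified above.

```python
import torch
import torch.nn as nn

class MLPBlock(nn.Module):
    """A transformer MLP block: linear map, GELU nonlinearity, linear map back."""

    def __init__(self, d_model: int = 768, d_mlp: int = 3072):
        super().__init__()
        self.w_in = nn.Linear(d_model, d_mlp)   # linear map up to the hidden (neuron) space
        self.act = nn.GELU()                    # the GELU activation mentioned above
        self.w_out = nn.Linear(d_mlp, d_model)  # linear map back down

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each hidden unit ("neuron") reads a direction of the input via w_in,
        # is gated by the GELU, and writes a direction back out via w_out.
        return self.w_out(self.act(self.w_in(x)))

# Usage: a batch of 2 sequences, 10 tokens each, 768-dimensional activations.
mlp = MLPBlock()
x = torch.randn(2, 10, 768)
print(mlp(x).shape)  # torch.Size([2, 10, 768])
```

The "features as directions" framing fits this picture: each of the `d_mlp` hidden units reads off one direction of the input space, which is why individual neurons are natural candidates when looking for localized, ignorable-versus-important pieces of the computation.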