Bayes Blast 13 – GPT-4 maps every neuron in GPT-2

The Bayesian Conspiracy

CHAPTER

The Problem With Transformers

Each token then activates one neuron, right? You're asking a question I'm not sure I know the answer to, because transformers are a bit trickier. A token either being there or not there will either activate or not activate a neuron, so it's more complicated with transformers. And that can't be all of it, because then if you put the exact same prompt into different instances, you'd get the exact same output, but you don't.
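For concreteness, here is a minimal sketch of what "a token activating a neuron" looks like in practice, assuming the Hugging Face transformers library and GPT-2 small. Rather than a single neuron switching on or off per token, each token position gets a whole vector of graded MLP activations at every layer; the prompt and the choice of layer 0 below are arbitrary illustration, not anything from the episode.

```python
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

captured = {}

def save_activations(module, inputs, output):
    # `output` is the post-GELU hidden state of the MLP block:
    # shape (batch, num_tokens, 3072) for GPT-2 small.
    captured["mlp"] = output.detach()

# Hook the activation function inside layer 0's MLP block.
model.h[0].mlp.act.register_forward_hook(save_activations)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    model(**inputs)

acts = captured["mlp"][0]  # (num_tokens, 3072)
for i, token_id in enumerate(inputs["input_ids"][0]):
    top = acts[i].argmax().item()  # most active neuron for this token
    token = tokenizer.decode([token_id.item()])
    print(f"{token!r} -> neuron {top} ({acts[i, top]:.2f})")
```

As for the "same prompt, different output" observation: the forward pass itself is deterministic (up to floating-point noise), and the variation people see typically comes from generation sampling from the output distribution with a temperature above zero, so identical prompts can yield different continuations.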
