The Difference Between What Comes Out of a Language Model and What's Happening at the Level of the Actual Weights
There's the difference between what comes out of a language model, in terms of which string it spits out, and what's happening at the level of the actual weights, because there's this continual problem. The Bing Sydney model was saying a bunch of crazy things to its users, but did that mean the model actually had those beliefs in it? How do we distinguish between the bit of language that comes out and what is actually in the base model?

I don't know if I would call them crazy. They were honest, they were unfiltered. Think about an average person at work, not you, but imagine their boss could read their mind and see what they really think.