The Importance of Concepts in Language Models
To accurately predict a token, you may have to develop something like a world model, at least for some tasks. One thing I discussed with the mechanistic interpretability researcher Neel Nanda is the question of how concepts arise in language models. Do they even share our human concepts, or do they develop entirely alien ones? "You never know what the actual internal experience is like for those models, and it could be that in the five cases we've talked about so far it matches, if not perfectly, but it goes out of distribution in case six and turns out to be a completely different concept," he says. The thing you just said there is super interesting, because next-token prediction is not a simple task.
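To make the "next-token prediction" framing concrete, here is a minimal, purely illustrative sketch (the toy corpus and function names are assumptions, not anything from the conversation): a bigram counter that picks the most likely next token from observed statistics. It handles in-distribution prompts but simply has nothing to say out of distribution, which is a crude analogue of the "case six" worry about concepts breaking down beyond the training data.

```python
# Illustrative sketch only: next-token prediction as counting,
# and how a shallow statistical predictor fails out of distribution.
from collections import Counter, defaultdict

# Toy corpus (an assumption for illustration).
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Bigram counts: an estimate of P(next token | current token).
counts = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1

def predict_next(token):
    """Return (most likely next token, its estimated probability),
    or None if the token was never seen (out of distribution)."""
    dist = counts.get(token)
    if not dist:
        return None
    best, n = dist.most_common(1)[0]
    return best, n / sum(dist.values())

print(predict_next("the"))    # in distribution: a plausible continuation with its probability
print(predict_next("zebra"))  # out of distribution: None -- no concept to generalize with
```

A real language model is trained on the same objective, just at vastly larger scale and with a learned representation instead of raw counts, which is exactly why predicting well across so many contexts plausibly pushes it toward something like a world model rather than a lookup table.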