The Bias Implications of Large Language Models
One of the things to keep in mind is that you have errors in these models that are then deployed at scale all over the world. And so this is why we need to think very critically about how we build up these language models, particularly with respect to things like racism and sexism. So what's it learning from the data? To what extent is it sensitive to outliers versus high-frequency things? Now, there's the model parameters: what were you optimizing for as you were training the model? And then how is the model sitting in its deployment context? It would be insightful to get a sense for what the scope and scale of it might be.