The Emerging Trend Around Scaling Laws for Machine Learning
There's an interesting, I don't know if it's a counter-trend, emerging trend around the scaling laws for these models. With Chinchilla, they were able to train a 70 billion parameter model that actually performed better, in terms of accuracy and other evaluation metrics, than GPT-3 with its 175 billion parameters. And similarly, even models smaller than 70 billion parameters, trained on more data, performed much better. For most business use cases, you don't even need to go to tens of billions of parameters.
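To make the arithmetic behind this concrete, here is a minimal sketch in Python, assuming two commonly cited approximations from the Chinchilla work: total training compute of roughly C = 6 * N * D FLOPs (N parameters, D tokens), and a compute-optimal budget of about 20 training tokens per parameter. The function names are illustrative, and the 20x ratio is a rule of thumb, not the paper's exact fitted law.

    # Rough Chinchilla-style compute arithmetic.
    # Approximations: training compute C ~= 6 * N * D FLOPs,
    # and a compute-optimal budget of ~20 tokens per parameter.

    def training_flops(params: float, tokens: float) -> float:
        """Approximate total training compute in FLOPs."""
        return 6.0 * params * tokens

    def optimal_tokens(params: float, tokens_per_param: float = 20.0) -> float:
        """Compute-optimal token budget under the ~20 tokens/param heuristic."""
        return tokens_per_param * params

    if __name__ == "__main__":
        # GPT-3: 175B parameters trained on ~300B tokens (~1.7 tokens/param).
        gpt3 = training_flops(175e9, 300e9)
        # Chinchilla: 70B parameters trained on ~1.4T tokens (~20 tokens/param).
        chinchilla = training_flops(70e9, 1.4e12)
        print(f"GPT-3 compute:      {gpt3:.2e} FLOPs")
        print(f"Chinchilla compute: {chinchilla:.2e} FLOPs")
        print(f"Optimal tokens for a 70B model: {optimal_tokens(70e9):.2e}")

Run as-is, this prints roughly 3.15e23 FLOPs for GPT-3 and 5.88e23 for Chinchilla: a comparable order of magnitude of compute, but spent on a model less than half the size fed roughly ten times as many tokens per parameter, which is the trade the speaker is describing.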