Deep Learning and Scaling Properties of Large Language Models
A recent paper came out of Google DeepMind on the scaling properties of very large language models, and it showed that what we thought we knew about scaling from OpenAI's work two years earlier was substantially wrong: the earlier analysis misunderstood how the scaling properties work. The question is, when you have a fixed compute budget for training a model, should you spend it on the model itself, optimizing its size and hyperparameters, or should you spend it training on more data?
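To make that trade-off concrete, here is a minimal sketch of the compute-optimal calculation associated with the DeepMind paper. It assumes two widely cited approximations from that work: training compute C ≈ 6·N·D (where N is parameter count and D is training tokens), and the roughly 20-tokens-per-parameter rule at the compute-optimal point. The function name compute_optimal and the example budgets are illustrative, not from the episode.

```python
# Sketch of the compute-optimal trade-off discussed in the episode, using
# two rule-of-thumb approximations attributed to the DeepMind (Chinchilla) work:
#   - training compute C ~= 6 * N * D   (N = parameters, D = training tokens)
#   - at the compute-optimal point, D ~= 20 * N
# The constants are approximations, not exact values from the paper.

import math

def compute_optimal(C: float) -> tuple[float, float]:
    """Given a training-compute budget C (in FLOPs), return a roughly
    compute-optimal parameter count N and token count D.

    Solving C = 6 * N * D with D = 20 * N gives N = sqrt(C / 120).
    """
    N = math.sqrt(C / 120.0)
    D = 20.0 * N
    return N, D

if __name__ == "__main__":
    # The last budget, ~5.76e23 FLOPs, is the figure often quoted for
    # Chinchilla itself; it recovers roughly 70B parameters and 1.4T tokens.
    for C in (1e21, 1e22, 5.76e23):
        N, D = compute_optimal(C)
        print(f"C = {C:.2e} FLOPs -> N ~= {N:.2e} params, D ~= {D:.2e} tokens")
```

Under these assumptions, the optimal model size and data size both grow as the square root of the compute budget, which is why the paper's answer was to train smaller models on far more data than the earlier scaling laws suggested.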