
Evaluating models without test data (Practical AI #194)
Changelog Master Feed
Deep Learning and Scaling Properties of Large Language Models
A recent paper out of Google DeepMind examined the scaling properties of very large language models, and it showed that what we thought we knew about large language models from two years earlier, from OpenAI, was wrong: they had misunderstood how the scaling properties work. The question is, when you have a model and you're trying to train it, should you be optimizing the hyperparameters, or should you be adding more data?
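If the paper in question is DeepMind's "Chinchilla" work (Hoffmann et al., 2022), its headline result was that, for a fixed training compute budget, model size and training data should grow roughly in proportion, at on the order of 20 tokens per parameter. The sketch below illustrates that rule of thumb; the paper identification, the 20-tokens-per-parameter constant, and the common C ≈ 6·N·D compute approximation are assumptions drawn from the broader scaling-law literature, not details stated in this episode.

```python
# Rough illustration of the compute-optimal trade-off discussed above,
# assuming the "Chinchilla" rule of thumb: for a fixed FLOP budget C, split
# compute so that training tokens D are about 20x the parameter count N.
# Constants here are illustrative approximations, not exact paper values.

def compute_optimal_allocation(flops_budget: float, tokens_per_param: float = 20.0):
    """Split a FLOP budget between parameters N and training tokens D,
    using the common approximation C ~= 6 * N * D."""
    # With D = tokens_per_param * N and C = 6 * N * D:
    #   C = 6 * tokens_per_param * N**2  =>  N = sqrt(C / (6 * tokens_per_param))
    n_params = (flops_budget / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

if __name__ == "__main__":
    # Hypothetical 1e23 FLOP training budget
    n, d = compute_optimal_allocation(1e23)
    print(f"~{n / 1e9:.0f}B parameters trained on ~{d / 1e9:.0f}B tokens")
```

For a 1e23 FLOP budget this suggests roughly a 29B-parameter model trained on roughly 580B tokens, i.e. a smaller model trained on far more data than the earlier OpenAI scaling laws recommended.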