The Future of Language Models
The sequence length constraint could be problematic enough that it actually might push us off of transformers. That is, scaling quadratically in the sequence length is a huge problem.

I would think, then, that you would predict model performance will continue to scale consistently as compute is added, basically forever. And how do you think about data constraints as model complexity grows? Is data going to be the next constraint for building very large LLMs?

Yeah, it already really is.
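The quadratic cost referred to above comes from self-attention building a score matrix with one entry per pair of tokens, so doubling the context length roughly quadruples the compute and memory for that step. A minimal NumPy sketch (using hypothetical sizes, not any particular model's configuration) makes the scaling concrete:

```python
import numpy as np

def attention_scores(seq_len: int, d_model: int = 64) -> np.ndarray:
    """Toy illustration of why self-attention is quadratic in sequence length.

    Random Q/K matrices stand in for a real model's learned projections.
    """
    rng = np.random.default_rng(0)
    q = rng.standard_normal((seq_len, d_model))
    k = rng.standard_normal((seq_len, d_model))
    # The score matrix is seq_len x seq_len: doubling the context
    # quadruples both the compute and the memory for this step.
    return q @ k.T / np.sqrt(d_model)

for n in (1024, 2048, 4096):
    scores = attention_scores(n)
    print(f"seq_len={n:5d}  score-matrix entries={scores.size:,}")
```

Running this prints roughly 1M, 4M, and 17M score entries for the three lengths, which is the quadratic growth the speakers are pointing at.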