There is an ongoing debate about the scaling limits of large language models (LLMs), specifically whether they are approaching a performance wall. While the cost of scaling these models has risen sharply, from thousands of dollars for earlier models like GPT-2 to potentially hundreds of millions of dollars for current frontier models, Noam Brown believes there is still room for improvement, particularly in pre-training. He suggests that although the economics of scaling may become impractical at extreme levels, additional resources can still yield meaningful gains. Brown emphasizes that future progress may come from test-time compute, which he views as an untapped resource with substantial low-hanging fruit for further advancement.