LLMs break the internet (Changelog Interviews #534)

Changelog Master Feed

CHAPTER

The Cost of Scaling a Language Model

Simon: There was a fun tweet yesterday about GPT-4. They haven't said how big it is. We know GPT-3 was 175 billion parameters, but they won't reveal how big 4 is. So somebody got a stopwatch and said, "OK, well, I'll ask the same question of 3 and 4 and time it." And 4 took ten times longer to produce a result, so I reckon 4 is ten times 175 billion parameters.

Simon: If I could have the smallest possible model that can do this pattern, where it can call extra tools, it can make API calls and so forth, the stuff I could build with that is kind of incredible. And it would run on my phone. That's the thing I'm most excited about.
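The tool-calling pattern Simon describes can be sketched as a simple loop: the model either requests a tool or gives a final answer, and tool results are fed back into the prompt. This is a minimal illustration only; `tiny_model`, the `Action:`/`Observation:` format, and the tool names are all hypothetical stand-ins, not any specific model's API.

```python
import re

# Hypothetical stand-in for a small local model: in practice this would be
# a call to an on-device LLM. Here it is scripted so the sketch runs.
def tiny_model(prompt: str) -> str:
    if "Observation:" in prompt:
        return "Answer: 4"
    return "Action: calculator: 2 + 2"

# The "extra tools" the model is allowed to call.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run(question: str, max_steps: int = 5) -> str:
    prompt = question
    for _ in range(max_steps):
        reply = tiny_model(prompt)
        match = re.match(r"Action: (\w+): (.*)", reply)
        if match is None:
            # No tool request: the model has produced its final answer.
            return reply.removeprefix("Answer: ").strip()
        tool, arg = match.groups()
        observation = TOOLS[tool](arg)
        # Feed the tool result back so the model can keep reasoning.
        prompt += f"\n{reply}\nObservation: {observation}"
    return "gave up"

print(run("What is 2 + 2?"))  # → 4
```

The appeal of a small model here is that only the loop's orchestration needs intelligence; the heavy lifting (math, search, APIs) is delegated to the tools.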
