Seven billion to 70 billion is an order of magnitude jump. Why would you have something fairly close to that at 13 billion parameters? What's the difference between seven and 13 when the next step is all the way up to 70? What's the rationale, do you think?

Yeah, it is interesting. If I'm understanding right from some of the sources that I've been reading, there was actually a 30 or 34 billion parameter model that they also had in prerelease and were tuning. So there was another one that fit in that slot, filling the gap you're talking about.
It was an amazing week in AI news. Among other things, there's a new NeRF and a new Llama in town! Zip-NeRF can create some amazing 3D scenes based on 2D images, and Llama 2 from Meta promises to change the LLM landscape. Chris and Daniel dive into these and compare some of the recently released OpenAI functionality to Anthropic's Claude 2.
Leave us a comment
Changelog++ members save 1 minute on this episode because they made the ads disappear. Join today!
Sponsors:
- Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com
- Fly.io – The home of Changelog.com — Deploy your apps and databases close to your users. In minutes you can run your Ruby, Go, Node, Deno, Python, or Elixir app (and databases!) all over the world. No ops required. Learn more at fly.io/changelog and check out the speedrun in their docs.
- Typesense – Lightning fast, globally distributed Search-as-a-Service that runs in memory. You literally can’t get any faster!
Featuring:
Show Notes:
Learning resources:
Something missing or broken? PRs welcome!