I think it's important for everyone to understand that different needs have different model solutions, like the one we were just discussing. This company we're looking at is very bullish on the idea that they could write a new foundational model that is SQL-sourced and train it up on that. But I also agree that this idea of compression, this ability to take 70 billion parameters and distill them down to a better, smaller open-source model, isn't going to work. When you get into a world where most people can afford to run their own language model, or even rent one on a cloud for a dollar an hour or something, you get exponentially larger types of use cases than we will never
