I think it's important for everyone to understand that different needs have different model solutions, like the one we were just discussing. This company we're looking at is very bullish on the idea that they could write a new foundational model that is SQL-sourced and train it up on that. But I also agree that this idea of compression, this ability to take 70 billion parameters and get down to a better open-source model, isn't going to work. When you get into a world where most people can afford to run their own language model, or even rent one in the cloud for a dollar an hour, you get exponentially more use cases than we could ever…
David Ha is the Head of Strategy at Stability AI and one of the top minds working in AI today. He previously worked as a research scientist on the Brain team at Google. David is particularly interested in evolution and complex systems, and his research explores how intelligence may emerge under resource constraints. He joins the show to discuss the advantages of open-source models, modelling AI as an emergent system, why large language models are bad at maths, and MUCH more!
Show Notes:
- Why David joined Stability AI
- The advantages of open-source models
- We cannot predict the inventions of tomorrow
- Making memes with generative AI
- The centaur approach to AI
- An introduction to large language models
- The relationship between complex systems and resource constraints
- Large language models are bad at maths
- Modelling AI as an emergent system
- Understanding different perspectives
- MUCH more!
Books Mentioned:
- The Beginning of Infinity: Explanations That Transform the World, by David Deutsch
- Why Greatness Cannot Be Planned: The Myth of the Objective, by Kenneth Stanley and Joel Lehman