We should let our models use these capabilities naturally, rather than relearn how to do them. And that is very much in line with the way I see this proceeding. We are working on an open-source alternative to ChatGPT, those types of large language models. Just to share, the number of parameters sometimes takes my breath away. They're in the billions, which is an ass load.
David Ha is Head of Strategy at Stability AI and one of the top minds working in AI today. He previously worked as a research scientist on the Brain team at Google. David is particularly interested in evolution and complex systems, and his research explores how intelligence may emerge under resource constraints. He joins the show to discuss the advantages of open-source models, modelling AI as an emergent system, why large language models are bad at maths, and MUCH more!
Show Notes:
- Why David joined Stability AI
- The advantages of open-source models
- We cannot predict the inventions of tomorrow
- Making memes with generative AI
- The centaur approach to AI
- An introduction to large language models
- The relationship between complex systems and resource constraints
- Large language models are bad at maths
- Modelling AI as an emergent system
- Understanding different perspectives
- MUCH more!
Books Mentioned:
- The Beginning of Infinity: Explanations That Transform the World, by David Deutsch
- Why Greatness Cannot Be Planned: The Myth of the Objective, by Kenneth Stanley and Joel Lehman