Deep learning really took off about ten years ago, when researchers, including some of my previous colleagues such as Geoff Hinton, demonstrated that neural network models can be trained to learn the statistical properties of large data sets. Once you have a large enough data distribution, you can start generating novel things, for example, a Formula One race car that looks like a strawberry, and it'll do that. This understanding of concepts is emergent. It's not that we designed the model to have these chat functions; this sort of capability has simply emerged from scale. We're at the stage where neural network models are starting to be really good at prediction, because it can…
David Ha is the Head of Strategy at Stability AI and one of the top minds working in AI today. He previously worked as a research scientist on the Brain team at Google. David is particularly interested in evolution and complex systems, and his research explores how intelligence may emerge from resource constraints. He joins the show to discuss the advantages of open-source models, modelling AI as an emergent system, why large language models are bad at maths, and MUCH more!
Important Links:
Show Notes:
- Why David joined Stability AI
- The advantages of open-source models
- We cannot predict the inventions of tomorrow
- Making memes with generative AI
- The centaur approach to AI
- An introduction to large language models
- The relationship between complex systems and resource constraints
- Large language models are bad at maths
- Modelling AI as an emergent system
- Understanding different perspectives
- MUCH more!
Books Mentioned:
- The Beginning of Infinity: Explanations That Transform the World, by David Deutsch
- Why Greatness Cannot Be Planned: The Myth of the Objective, by Kenneth Stanley and Joel Lehman