Open-source foundations in the AI ecosystem let people tinker with, understand, and build on AI models, and they make it easier to collectively identify malicious uses and develop safeguards. Involving more people brings greater understanding and a wider range of insights. Open models enable innovations that would not be possible with closed models inside big companies. Consumer-level hardware, like GPUs, lets millions of people develop and contribute, opening the door to innovations in fine-tuning algorithms and model personalization. The parallel is the wave of innovation that personal computers unlocked compared to high-end machines: the most important AI innovations of the future will happen at the level of the individual GPU.
David Ha is the Head of Strategy at Stability AI and one of the top minds working in AI today. He previously worked as a research scientist on the Brain team at Google. David is particularly interested in evolution and complex systems, and his research explores how intelligence may emerge under limited resources. He joins the show to discuss the advantages of open-source models, modelling AI as an emergent system, why large language models are bad at maths, and MUCH more!
Important Links:
Show Notes:
- Why David joined Stability AI
- The advantages of open-source models
- We cannot predict the inventions of tomorrow
- Making memes with generative AI
- The centaur approach to AI
- An introduction to large language models
- The relationship between complex systems and resource constraints
- Large language models are bad at maths
- Modelling AI as an emergent system
- Understanding different perspectives
- MUCH more!
Books Mentioned:
- The Beginning of Infinity: Explanations That Transform the World, by David Deutsch
- Why Greatness Cannot Be Planned: The Myth of the Objective, by Kenneth Stanley and Joel Lehman