

20VC: Why Foundation Model Performance is Not Diminishing But Models Are Commoditising, Why Nvidia Will Enter the Model Space and Models Will Enter the Chip Space & The Right Business Model for AI Software with David Luan, Co-Founder @ Adept
Jun 24, 2024
David Luan, CEO and co-founder of Adept, shares insights from his experience at OpenAI and Google Brain. He argues that the performance of foundation models isn't diminishing and predicts a consolidation to just 5-7 leading providers. Luan discusses the evolving relationship between AI models and computing hardware, suggesting that companies like Nvidia must adapt to remain competitive. He also delves into the transformative nature of intelligent agents versus traditional automation, and the future implications of AI on pricing models and user capabilities in the workplace.
AI Snips
Bottom-Up Research at Google Brain
- David Luan learned about bottom-up research at Google Brain, where brilliant scientists worked without near-term objectives.
- They organically self-organized and tackled open technical problems in AI, driving significant advancements.
OpenAI's Shift in Focus
- After realizing the Transformer's potential, OpenAI shifted its focus from publishing research papers to solving major unsolved problems.
- This led to the development of projects like robotic hand control, Dota 2 mastery, and scaling GPT.
Model Scaling and Improvement
- Doubling GPU compute yields predictable returns in model performance, following a logarithmic curve.
- A newer way to improve models is to have them collect their own data and learn from it, for example in a theorem-proving environment.
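The "predictable returns per doubling" idea can be sketched as a power-law scaling curve, where each doubling of compute improves loss by a constant ratio. This is a minimal illustration only: the function name, the constants `a` and `b`, and the compute values are made up for this sketch, not fitted values from any real model family.

```python
def predicted_loss(compute, a=10.0, b=0.05):
    """Illustrative power-law scaling: loss falls as compute grows.

    `a` and `b` are hypothetical constants chosen for this sketch;
    real scaling laws fit them empirically per model family.
    """
    return a * compute ** -b

# Each doubling of compute multiplies the loss by the same constant
# factor (2**-b), i.e. predictable but diminishing absolute returns,
# which looks logarithmic when loss is plotted against raw compute.
ratio = predicted_loss(2e20) / predicted_loss(1e20)
print(round(ratio, 4))
```

Because the improvement ratio per doubling is constant, the curve looks like a straight line on a log-log plot, which is why scaling results are usually reported that way.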