

20VC: Perplexity's Aravind Srinivas on Will Foundation Models Commoditise, Diminishing Returns in Model Performance, OpenAI vs Anthropic: Who Wins & Why the Next Breakthrough in Model Performance will be in Reasoning
Jun 5, 2024
Aravind Srinivas, Co-founder and CEO of Perplexity AI, shares his insights from prestigious stints at OpenAI, Google, and DeepMind. He delves into the next AI breakthrough in reasoning, questioning whether we are hitting diminishing returns on model performance. Aravind discusses the commoditization of foundation models, predicting shifts in the competitive landscape, and explores the challenges of monetizing AI services. With fascinating anecdotes about talent acquisition and high-quality data, he paints a vivid picture of the future of AI technology.
AI Snips
Accidental AI Love
- Aravind Srinivas entered a machine learning contest with no prior experience and won, sparking his interest in AI.
- This success, coupled with Sam Altman's advice about identifying natural talents, motivated him to pursue AI.
Data Curation Over Brute Force
- Diminishing returns on compute and model performance exist, but there is still headroom if data is carefully curated.
- Simply increasing scale without curating data won't improve model performance; the details matter.
General-Purpose Magic
- Domain-specific models are not necessarily better; general-purpose models with diverse data perform well across domains.
- The magic lies in emergent capabilities from diverse training, not domain specificity.