

20VC: Why Data Size Matters More Than Model Size, Why The Google Employee Was Wrong; OpenAI and Google Have the Advantage & Why Open Source is Not Going to Win with Douwe Kiela, Co-Founder @ Contextual AI
Jun 30, 2023
Douwe Kiela, co-founder and CEO of Contextual AI, discusses the importance of data size over model size in AI. He reveals his unique journey from philosophy to machine learning, detailing lessons learned from industry giants. Kiela critiques the open approach in AI, asserting that proprietary data access is crucial. He explores the challenges of existing language models, including hallucinations and data privacy, and suggests that regulatory understanding is vital for innovation in the tech landscape. Tune in for insights into the future of AI and language models!
AI Snips
Unconventional Path to AI
- Douwe Kiela's path to AI was unconventional, starting with philosophy before returning to computer science.
- He interned with Léon Bottou, a deep learning pioneer, and later joined Facebook AI Research.
Limitations of Foundational Models
- Existing foundational models have limitations like hallucination, attribution issues, and data privacy concerns.
- Contextual AI aims to address these with retrieval-augmented generation, which decouples memory from generation.
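The decoupling idea above can be sketched in a few lines: facts live in an external document store that is queried at inference time, and the generator is conditioned on the retrieved evidence rather than relying on what it memorized during training. This is a toy illustration only, assuming a word-overlap retriever and a simple prompt format; it is not Contextual AI's actual system.

```python
# Toy retrieval-augmented generation (RAG) sketch: the "memory" is a document
# store searched at query time; the generator sees retrieved evidence in its prompt.
# The overlap-based retriever and prompt template are illustrative assumptions.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (a stand-in for a real dense retriever)."""
    scored = sorted(corpus,
                    key=lambda doc: len(tokenize(query) & tokenize(doc)),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Ground the generator in retrieved evidence instead of parametric memory."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\nQuestion: {query}"

corpus = [
    "Contextual AI was co-founded by Douwe Kiela.",
    "Retrieval augmented generation decouples memory from generation.",
    "Large models can hallucinate facts not in their training data.",
]
print(build_prompt("what does retrieval augmented generation decouple", corpus))
```

Because the knowledge sits outside the model, it can be updated or access-controlled without retraining, which is how this architecture addresses the attribution and data-privacy concerns mentioned above.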
Hallucinations: Feature or Bug?
- Hallucinations in language models can be a feature or a bug, depending on the use case.
- They are desirable for creative writing but problematic for enterprise applications requiring accuracy.