

#100 Dr. PATRICK LEWIS (co:here) - Retrieval Augmented Generation
Feb 10, 2023
Dr. Patrick Lewis, an AI and NLP Research Scientist at co:here, delves into the cutting-edge world of Retrieval-Augmented Language Models. He discusses the limitations of existing transformer models in handling large inputs, revealing the need for better techniques. The conversation highlights the importance of enhancing verifiability in language models by integrating credible sources. Patrick also explores the complexities of information retrieval in improving contextual relevance, using the innovative Atlas project as a prime example.
AI Snips
Verifiable Language Models
- Language models cannot guarantee that what they generate is true, but they can be made verifiable.
- They achieve this by citing sources, allowing users to assess the claims' validity.
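A minimal sketch of what citation-based verifiability can look like in practice: retrieved passages are numbered and attached to the prompt, and the model is asked to cite them so a reader can check each claim against its source. The passage contents, URLs, and the `generate` call are illustrative placeholders, not the system discussed in the episode.
```python
# Sketch of citation-grounded prompting (illustrative; names and sources are hypothetical).
passages = [
    {"id": 1, "source": "https://example.org/rag-overview",
     "text": "Retrieval-augmented generation combines a retriever with a generator."},
    {"id": 2, "source": "https://example.org/rag-paper",
     "text": "RAG models condition generation on passages retrieved from a knowledge base."},
]

question = "What is retrieval-augmented generation?"

# Number each passage so the model can refer back to it.
context = "\n".join(f"[{p['id']}] {p['text']}" for p in passages)
prompt = (
    "Answer the question using only the numbered passages below. "
    "Cite passage numbers like [1] after each claim.\n\n"
    f"{context}\n\nQuestion: {question}\nAnswer:"
)

# `generate` stands in for any language-model call; it is not a real library function.
# answer = generate(prompt)
# A verifiable answer might read:
# "RAG pairs a retriever with a generator [1], conditioning outputs on retrieved passages [2]."
print(prompt)
```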
How RAG Works
- RAG involves analyzing input, retrieving relevant information from a knowledge base, and generating an output based on both.
- This process can be trained end-to-end, allowing the model to learn both retrieval and generation.
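The sketch below shows the retrieve-then-generate loop in its simplest form, assuming a toy TF-IDF retriever and a stubbed-out generator. It is not Cohere's or Atlas's implementation: in the systems discussed, a trained dense retriever replaces the TF-IDF step, and retriever and generator are optimized jointly end to end.
```python
# Minimal retrieve-then-generate sketch (illustrative, not a specific production system).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    "Retrieval-augmented generation retrieves documents and conditions generation on them.",
    "Transformers have a fixed context window, which limits how much text they can attend to.",
    "Atlas is a retrieval-augmented language model studied for few-shot learning.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (TF-IDF cosine similarity)."""
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(docs)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    top_k = scores.argsort()[::-1][:k]
    return [docs[i] for i in top_k]

def generate(query: str, retrieved: list[str]) -> str:
    """Stand-in for a language-model call: builds a prompt grounded in the retrieved text."""
    context = "\n".join(f"- {doc}" for doc in retrieved)
    return f"Answer '{query}' using:\n{context}"

query = "How does retrieval-augmented generation work?"
print(generate(query, retrieve(query, knowledge_base)))
```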
RAG Advantages
- Retrieval Augmented Generation (RAG) offers improved interpretability and updatability compared to standard language models.
- Updatability allows for adding or removing information, addressing the static nature of traditionally trained models.
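To make the updatability point concrete, here is a toy sketch in which the model's knowledge lives in an external index rather than in its weights: adding or removing documents changes what can be retrieved without any retraining. The `DocumentIndex` class and its keyword search are hypothetical stand-ins for a real vector store and retriever.
```python
# Sketch of updatability in a retrieval-augmented setup (toy index, not a real vector store).
class DocumentIndex:
    """Toy document store standing in for a retrieval index."""

    def __init__(self) -> None:
        self.docs: dict[str, str] = {}

    def add(self, doc_id: str, text: str) -> None:
        # New knowledge becomes retrievable immediately; no model weights change.
        self.docs[doc_id] = text

    def remove(self, doc_id: str) -> None:
        # Stale or retracted knowledge disappears from future retrievals.
        self.docs.pop(doc_id, None)

    def search(self, query: str) -> list[str]:
        # Naive keyword match; a real retriever would use dense or sparse scoring.
        terms = query.lower().split()
        return [text for text in self.docs.values() if any(t in text.lower() for t in terms)]

index = DocumentIndex()
index.add("2022-report", "The 2022 report states revenue grew 10 percent.")
print(index.search("revenue"))  # old knowledge available

index.remove("2022-report")
index.add("2023-report", "The 2023 report states revenue grew 15 percent.")
print(index.search("revenue"))  # updated knowledge, no retraining required
```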