
The MLOps Podcast
⏪ Making LLMs Backwards Compatible with Jason Liu
Jan 15, 2024
Jason Liu, an applied AI consultant and creator of Instructor, discusses the challenges and applications of LLMs. Topics include making LLMs interact with existing systems, building applications with LLMs, thinking in logic and design, and the future of Instructor. They also explore misconceptions about LLMs, improving LLM applications, RAG as a recommendation system, fine-tuning embedding models, measuring impact on business outcomes, and unlocking economic value through structured data extraction.
53:41
Episode notes
Podcast summary created with Snipd AI
Quick takeaways
- The RAG framework parallels recommendation systems: language models generate personalized suggestions, and retrieval quality depends on filtering techniques that go beyond plain vector similarity.
- Transfer learning and fine-tuning techniques will become more accessible and focused on task-specific models to maximize value and improve outcomes.
Deep dives
The parallel between RAG and recommendation systems
The guest discusses the similarities between the RAG (Retrieval-Augmented Generation) framework and recommendation systems. They draw a connection to their experience at Stitch Fix, where language models were used to process user requests, filter inventory, and provide personalized clothing suggestions. They highlight how RAG utilizes a similar approach, with text chunks serving as inventory and model-generated answers acting as recommendations. They emphasize the importance of advanced filtering techniques beyond vector similarity, as well as the need for structured data extraction and feedback evaluation for improved business outcomes.
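The parallel above (text chunks as inventory, retrieval plus filtering, answers as recommendations) can be sketched as a minimal pipeline. This is an illustrative toy, not the system described in the episode: the bag-of-words "embedding", the sample inventory, and the `category` filter are all stand-ins for a real embedding model and real metadata filters.

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy bag-of-words 'embedding': a term-frequency vector.
    A real system would use a learned embedding model instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "inventory": text chunks with metadata, like a recommender's catalogue.
INVENTORY = [
    {"text": "wool winter coat for cold weather", "category": "outerwear"},
    {"text": "linen summer dress, lightweight", "category": "dresses"},
    {"text": "insulated winter parka with hood", "category": "outerwear"},
]

def retrieve(query, category=None, k=2):
    """Rank chunks by vector similarity, then apply a metadata filter --
    the 'filtering beyond vector similarity' step from the discussion."""
    q = embed(query)
    candidates = [d for d in INVENTORY
                  if category is None or d["category"] == category]
    candidates.sort(key=lambda d: cosine(q, embed(d["text"])), reverse=True)
    return [d["text"] for d in candidates[:k]]
```

In a full RAG loop the retrieved chunks would then be passed to a language model, whose generated answer plays the role of the recommendation.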