RAG Is A Hack - with Jerry Liu from LlamaIndex

Latent Space: The AI Engineer Podcast

Fine-tune New Knowledge into LLMs

New knowledge can potentially be fine-tuned into models, though there is some disagreement on this point.
OpenAI's fine-tuning endpoints have limitations when it comes to memorization and next-token prediction.
Training a Transformer model yourself may yield better results than using the OpenAI API.
Gorilla, a model trained to call specific APIs, combines prior training with retrieval to enhance performance.
RAG is currently the default method for augmenting knowledge, but questions remain about how deeply it lets a model internalize concepts and details.
Baking everything into a language model's training process offers advantages for knowledge integration.
Uncertainty about how to implement that approach effectively, along with cost considerations, limits its adoption today.

Snippet starts at 27:48.
