
RAG Is A Hack - with Jerry Liu from LlamaIndex

Latent Space: The AI Engineer Podcast


Fine-tune New Knowledge into LLMs

New knowledge can potentially be added to models through fine-tuning, despite some disagreement on the topic.
OpenAI's fine-tuning endpoints have limitations when it comes to memorization and next-token prediction.
Training a transformer model yourself may allow for better results than using the OpenAI API.
Gorilla, a model trained to use specific APIs, combines prior learning with retrieval to enhance performance.
RAG is currently the default method for augmenting knowledge, but questions remain about the extent to which it can internalize concepts and details.
Baking everything into the training process of a language model offers some advantages in terms of knowledge integration.
The lack of knowledge on how to effectively implement that approach, along with cost considerations, hinders its current usage.
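The RAG pattern contrasted with fine-tuning above can be sketched in a few lines: rather than baking knowledge into model weights, documents are retrieved at query time and prepended to the prompt. This is a minimal illustrative sketch, not LlamaIndex's actual API; the corpus, overlap-based scoring, and prompt template are all assumptions for demonstration.

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase and split text into alphanumeric tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    q_tokens = tokenize(query)
    scored = sorted(corpus, key=lambda doc: -len(q_tokens & tokenize(doc)))
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Augment the query with retrieved context before sending it to an LLM."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Toy knowledge base standing in for an indexed document store.
corpus = [
    "Gorilla is a model trained to call specific APIs.",
    "Fine-tuning bakes knowledge into model weights.",
    "RAG retrieves documents at query time.",
]
print(build_prompt("How does RAG use documents?", corpus))
```

A real pipeline would replace the keyword-overlap retriever with embedding similarity over a vector index, but the shape — retrieve, then augment the prompt — is the same.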

