Local GenAI LLMs with Ollama and Docker

DevOps and Docker Talk: Cloud Native Interviews and Tooling

NOTE

Customizing AI Responses with RAG and Fine-Tuning

The speaker discusses using RAG and fine-tuning to customize AI responses based on additional input. RAG lets you inject large volumes of questions and responses so the model can personalize its answers. However, simply feeding in every question and response runs into the model's context-size limit; historically, context windows were constrained to as few as 512 tokens. To work within that limit, the speaker suggests refining the dataset, or exploring fine-tuning as an alternative way to optimize the model's responses.
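The retrieval side of that idea can be sketched in a few lines: rather than stuffing every Q&A pair into the prompt, score the pairs for relevance and keep only the top-scoring ones that fit a token budget. This is a minimal illustration, not the episode's actual tooling; the word-overlap scoring, the whitespace "tokenizer", and the 512-token budget are all simplifying assumptions.

```python
# Minimal RAG-style retrieval sketch (illustrative only): select the most
# relevant Q&A pairs that fit within a limited context window, instead of
# injecting the whole dataset into the prompt.

def score(query: str, text: str) -> int:
    """Crude relevance score: number of lowercase words shared with the query."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def build_context(query: str, qa_pairs: list[str], budget: int = 512) -> list[str]:
    """Pick the highest-scoring pairs whose rough token count fits the budget."""
    ranked = sorted(qa_pairs, key=lambda t: score(query, t), reverse=True)
    picked, used = [], 0
    for pair in ranked:
        tokens = len(pair.split())  # naive whitespace "tokenization"
        if used + tokens <= budget:
            picked.append(pair)
            used += tokens
    return picked
```

A real setup would replace the word-overlap score with embedding similarity and the whitespace count with the model's actual tokenizer, but the shape is the same: rank, then pack until the context budget is spent.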

