The role of fine tuning in model adaptation
Less fine tuning than expected has been observed in deployments at Humanloop since 2022. Prompt engineering has proven powerful, and the need for fine tuning has shifted from adapting models to specific use cases towards incorporating factual context into their outputs. This has led to the emergence of retrieval augmented generation (RAG) as an effective solution. Fine tuning is now mainly used to optimize cost, latency, and tone of voice, rather than to adapt a model to a specific use case. Its heavier operational demands, including the need for a task-specific dataset and longer turnaround times, make it a less common choice for adaptation.
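The RAG pattern described above can be sketched in a few lines: retrieve the documents most relevant to a query, then prepend them to the prompt so the model answers from that factual context instead of needing fine-tuned weights. The sketch below is illustrative only; the document store, the bag-of-words similarity (standing in for a real embedding model), and the `build_prompt` helper are all hypothetical.

```python
# Minimal RAG sketch: toy retrieval plus prompt assembly.
# The bag-of-words "embedding" is a stand-in for a real embedding model.
from collections import Counter
import math

DOCUMENTS = [
    "Fine tuning adapts model weights using a task-specific dataset.",
    "RAG injects retrieved factual context into the prompt at query time.",
    "Prompt engineering steers a model without changing its weights.",
]

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the model can answer from facts."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How does RAG add factual context?"))
```

In a production system the toy pieces would be replaced by an embedding model and a vector store, but the shape stays the same: retrieve, assemble the prompt, generate.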