The Cost of Fine-Tuning vs. In-Context Learning
If you want to do pure in-context learning, two things have to happen. First, you need a model with a longer context length. Take the example of AutoGPT: you will probably pass in a lot of API specs, and the recursive loop keeps adding more context on top of that. Second, you need a well-trained model, which tends to be large, and a larger model is typically more expensive to host. Because of these two factors, inference tends to be more costly.
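The cost dynamic described above can be sketched in a few lines: each turn of an AutoGPT-style loop re-sends the entire accumulated history, so prompt tokens (and therefore cost) grow with every step. Everything here is an illustrative assumption, not real pricing or a real tokenizer.

```python
def tokens(text: str) -> int:
    """Crude token estimate: roughly one token per four characters (an assumption)."""
    return max(1, len(text) // 4)

def simulate_agent_loop(system_prompt: str, step_outputs: list[str],
                        price_per_1k_tokens: float = 0.01) -> float:
    """Hypothetical agent loop: every step re-sends the whole accumulated
    context, so total prompt cost grows roughly quadratically in step count."""
    context = system_prompt
    total_cost = 0.0
    for out in step_outputs:
        prompt_tokens = tokens(context)            # the full history is re-sent
        total_cost += prompt_tokens / 1000 * price_per_1k_tokens
        context += "\n" + out                      # recursion keeps adding context
    return total_cost

# Doubling the number of steps more than doubles the cumulative prompt cost,
# which is why long-context in-context learning gets expensive quickly.
short_run = simulate_agent_loop("API specs...", ["tool result " * 40] * 5)
long_run = simulate_agent_loop("API specs...", ["tool result " * 40] * 10)
print(short_run, long_run)
```

This is only a toy model of the pricing behaviour, but it shows why a longer context window does not just raise the per-call ceiling: an agent that recursively appends its own outputs pays for that history again on every subsequent call.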