How to Fine-Tune a Language Model
Andrej Karpathy tweeted the other day showing task accuracy versus how much effort you put into the prompt. Zero-shot prompting he was comparing to just throwing a random question at a random person. I don't know if it was you who tweeted it and I grabbed it, or where it's from, but yeah, prompting approaches are completely different from fine-tuning. When you're prompting a language model you never update any of its parameters; you're just adding extra context to the prompt and generating output. Fine-tuning refers to training the parameters of the model with gradient descent, for example over some dataset. The problem is that the model is no longer generic in a lot of cases.
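To make that distinction concrete, here is a minimal sketch of the fine-tuning side in PyTorch. The model, vocabulary, and "dataset" are all hypothetical stand-ins, not anything from the episode; the point is only that fine-tuning updates the model's own parameters via gradient descent, which prompting never does.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a language model: embedding layer plus a linear head
# over a tiny vocabulary. A real fine-tune would load a pretrained model.
vocab_size, dim = 10, 16
model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))

# Hypothetical dataset of (input token -> next token) pairs.
inputs = torch.tensor([1, 2, 3, 4])
targets = torch.tensor([2, 3, 4, 5])

loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

initial_loss = loss_fn(model(inputs), targets).item()

# Fine-tuning loop: unlike prompting, each step changes the parameters.
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    opt.step()

final_loss = loss_fn(model(inputs), targets).item()
print(initial_loss > final_loss)
```

After the loop, the loss on this small dataset drops, which is exactly the trade-off mentioned above: the weights now specialize toward this data and the model is less generic.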