
Orca: Progressive Learning from Complex Explanation Traces of GPT-4

Deep Papers


How to Fine-Tune a Small Language Model

Harvey: I think the topic of this work is the general idea of using a very large foundation model like GPT-4, one that's very advanced and very intelligent, to train and fine-tune a much smaller language model of around 10 billion parameters. The goal felt like: can we beat Vicuna? Can we get close to ChatGPT with a very small model and very selective, thoughtful data? And in how they produce that data from a large foundation model, there are some really unique ideas too, I think. That would be my take on the paper, my sense of the paper at large.
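To make the idea concrete, here is a minimal sketch of what Orca-style supervised fine-tuning could look like with Hugging Face libraries: a small model is trained on sequences that pair a system instruction and a question with a GPT-4 explanation trace. The file name `explanation_traces.jsonl`, its field names, the choice of base model, the prompt template, and all hyperparameters are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch of Orca-style fine-tuning on GPT-4 explanation traces.
# Assumes a JSONL file "explanation_traces.jsonl" with fields:
# system_prompt, question, gpt4_response (all names are illustrative).

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

# Stand-in for a ~10B-parameter open model; not the model used in the paper.
model_name = "openlm-research/open_llama_13b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # many causal LMs lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

def format_example(ex):
    # Concatenate system instruction, user question, and the GPT-4
    # explanation trace into a single training sequence.
    text = (f"### System:\n{ex['system_prompt']}\n\n"
            f"### User:\n{ex['question']}\n\n"
            f"### Assistant:\n{ex['gpt4_response']}")
    return tokenizer(text, truncation=True, max_length=2048)

dataset = load_dataset("json", data_files="explanation_traces.jsonl")["train"]
dataset = dataset.map(format_example, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="orca-style-sft",
                           per_device_train_batch_size=1,
                           num_train_epochs=3),
    train_dataset=dataset,
    # mlm=False gives standard next-token (causal LM) labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The key design point the episode highlights is the data, not the training loop: the smaller model learns from rich, step-by-step explanations generated by the larger model rather than from short answers alone.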

