Orca: Progressive Learning from Complex Explanation Traces of GPT-4

Deep Papers

CHAPTER

How to Fine-Tune a Small Language Model

Harvey: I think the topic of the work is this general idea of using a very large foundation model like GPT-4 — one that's very advanced, very intelligent — to train and fine-tune a much smaller language model, around 10 billion parameters. The goal felt like: can we beat Vicuna? Can we get close to ChatGPT with a very small model trained on very selective, thoughtful data? And in how they produce that data from a large foundation model, there are some really unique ideas too. That would be my take on the paper, my sense of the paper at large.
