
Llama 2: Open Foundation and Fine-Tuned Chat Models

Deep Papers

CHAPTER

The Pre-Training Process

A vast number of these were just the red teamers who went in and looked across the model outputs. The only thing they say is that none of it came from Meta products. They state themselves that their training process is simple: using an autoregressive transformer. That gets them the template for Llama 2, so it's like an untailored suit.
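The "simple" autoregressive setup mentioned here is standard next-token prediction: at every position, the model is trained to predict the following token with cross-entropy. A minimal sketch of that objective (shapes and names are illustrative, not from the Llama 2 paper):

```python
import numpy as np

def next_token_loss(logits, tokens):
    """Average cross-entropy for autoregressive next-token prediction.

    logits: (seq_len, vocab) scores the model emits at each position.
    tokens: (seq_len,) integer token ids of the training sequence.
    The logits at position t are scored against token t+1.
    """
    # Shift: predictions at positions 0..T-2 target tokens 1..T-1.
    preds, targets = logits[:-1], tokens[1:]
    # Numerically stable log-softmax over the vocabulary.
    preds = preds - preds.max(axis=-1, keepdims=True)
    log_probs = preds - np.log(np.exp(preds).sum(axis=-1, keepdims=True))
    # Negative log-likelihood of each true next token, averaged.
    return -log_probs[np.arange(len(targets)), targets].mean()

# Toy check: a model that puts nearly all its probability mass on the
# correct next token should get near-zero loss.
vocab, T = 5, 4
tokens = np.array([1, 3, 0, 2])
logits = np.full((T, vocab), -10.0)
logits[np.arange(T - 1), tokens[1:]] = 10.0  # peak on the true next token
loss = next_token_loss(logits, tokens)
```

The "untailored suit" here is the base model: the same objective over a large web corpus, before any fine-tuning or RLHF shapes it into a chat model.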
