
Llama 2: Open Foundation and Fine-Tuned Chat Models

Deep Papers


The Pre-Training Process

A vast number of these were just the red teamers who went in and looked across the model outputs. The only thing they say is that none of it came from Meta products. They state themselves that their training process is simple: an autoregressive transformer. That gets them the template for Llama 2, so that's like an untailored suit.
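The "simple" autoregressive objective mentioned here is just next-token prediction: at each position the model is scored on how well it predicts the following token. A minimal sketch of that loss, with illustrative names and a NumPy stand-in for a real model's logits (not Meta's actual training code):

```python
import numpy as np

def next_token_loss(logits, tokens):
    """Average cross-entropy of predicting token t+1 from position t.

    logits: (T, V) array of per-position vocabulary scores
    tokens: (T,) int array; the targets are tokens shifted by one.
    """
    # Shift by one: the prediction at position t is graded against token t+1.
    preds, targets = logits[:-1], tokens[1:]
    # Numerically stable log-softmax over the vocabulary axis.
    z = preds - preds.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Negative log-likelihood of the true next token, averaged over positions.
    return -logp[np.arange(len(targets)), targets].mean()
```

A model whose logits strongly favor the correct next token drives this loss toward zero; uniform logits give a loss of log(vocabulary size).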
