Gradient Dissent: Conversations on AI

How EleutherAI Trains and Releases LLMs: Interview with Stella Biderman



How to Run an Open AI API on a GPU

OpenAI's models are generally pretty well regarded in terms of performance per dollar, I guess you could say. Once you start talking about models in the tens of billions of parameters, that starts to become very difficult. There are only a handful of GPUs in the world that can fit a 20-billion-parameter language model.
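To see why a 20-billion-parameter model strains GPU memory, here is a rough back-of-the-envelope sketch. The 20B figure comes from the quote; the bytes-per-parameter values are standard float32/float16 sizes, and the calculation covers weights only (no activations, optimizer state, or KV cache).

```python
# Rough GPU memory estimate for holding model weights only
# (ignores activations, optimizer state, and inference caches).

def weight_memory_gib(n_params: float, bytes_per_param: int) -> float:
    """Memory needed just to store the weights, in GiB."""
    return n_params * bytes_per_param / 1024**3

n_params = 20e9  # a 20B-parameter model, as mentioned in the quote

fp32 = weight_memory_gib(n_params, 4)  # float32: 4 bytes per parameter
fp16 = weight_memory_gib(n_params, 2)  # float16: 2 bytes per parameter

print(f"fp32: {fp32:.1f} GiB, fp16: {fp16:.1f} GiB")
```

Even in half precision the weights alone come to roughly 37 GiB, close to the full 40 GB of an A100, which is consistent with the claim that only a handful of GPUs can fit such a model on a single device.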

Transcript
