154: Serving Up AI with Sean Moriarity

Thinking Elixir Podcast

How to Train a Model to Follow Instructions

The original GPT-3 model was a completion model, so essentially you weren't telling the model to follow your instructions. And so they started training models specifically to follow instructions. I'm of the opinion that prompt engineering is not necessarily useful; it seems like one of those things people have too quickly accepted as being normal.
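For context on the completion-versus-instruction distinction Sean describes, here is a minimal sketch in Elixir using the Nx ecosystem's Bumblebee library to serve a base (completion-only) GPT-2 model. The calls follow Bumblebee's documented text-generation flow; the prompt text and dependency versions are illustrative assumptions, not anything from the episode.

```elixir
# Mix.install is assumed for a self-contained script; in a real app these
# would be deps in mix.exs. Versions are illustrative.
Mix.install([
  {:bumblebee, "~> 0.4"},
  {:exla, "~> 0.6"}
])

Nx.global_default_backend(EXLA.Backend)

# Load a base GPT-2 checkpoint from Hugging Face. GPT-2, like the original
# GPT-3, was trained purely for next-token completion, not instruction following.
{:ok, model_info} = Bumblebee.load_model({:hf, "gpt2"})
{:ok, tokenizer} = Bumblebee.load_tokenizer({:hf, "gpt2"})
{:ok, generation_config} = Bumblebee.load_generation_config({:hf, "gpt2"})

serving = Bumblebee.Text.generation(model_info, tokenizer, generation_config)

# A completion model treats the prompt as text to continue, not as a command,
# so this will likely ramble onward rather than actually summarize anything.
Nx.Serving.run(serving, "Summarize the following article:")
```

An instruction-tuned checkpoint loaded the same way would instead attempt to carry out the request, which is the shift in training Sean refers to.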
