ChatGPT and InstructGPT: Aligning Language Models to Human Intention

Deep Papers

ML Observability and Alignment - Part 1 of 3

Long Ouyang is a research scientist at OpenAI. He collaborated with Ryan on the InstructGPT project, which has been underway for a couple of years. Ryan is on leave from a machine learning PhD in the statistics department at the University of Washington. And I'm Jason, co-founder of Arize. We do ML observability and monitoring, so we watch, observe, and analyze models. We see a lot, from large language models to CV, you name it, and I'm really fascinated by this subject.
