
AI's Human Factor | Stanford's Dr. Fei-Fei Li and OpenAI's Mira Murati on AI Safety

Greymatter

CHAPTER

Using Deployment to Make the Models More Reliable

Part of the point of deploying the API is to understand what the possible risks look like. How do you take that information and iterate toward a, you know, kind of beyond-human safety model? So far, for example, we initially opened up access to use cases where we felt we had the right mitigations in place, but we were not quite comfortable with open-ended generation. And so we worked with industry experts from different domains, as well as other researchers, to red-team the model a bit further.
