3min chapter


AI's Human Factor | Stanford's Dr. Fei-Fei Li and OpenAI's Mira Murati on AI Safety

Greymatter

CHAPTER

Using Deployment to Make the Models More Reliable

The point of deploying through the API is in part to understand what the possible risks look like. How do you take that information and iterate toward a beyond-human, you know, kind of safety model? So, for GPT-3, for example: initially, we opened up access to use cases where we felt we had the right mitigations in place, but we were not quite comfortable with open-ended generation. And so we worked with industry experts from different domains, as well as other researchers, to red-team the model a bit further.
