Security is a big, big operational risk of working with these models. I truly think that this isn't a problem that will be solved by machine learning specifically. So my way of thinking about it is to kind of like break it down into like, okay, the model is stochastic, what can we do around the model that then adds that water tight guarantees? And so for prompt injection, for example, things that are really exciting to me are both like on the input and output side, right? You sandwich the LLM API call with like, you know, input validation, output validation to essentially ensure that your model isn't behaving in ways that you don't want it to.

Get the Snipd
podcast app

Unlock the knowledge in podcasts with the podcast player of the future.
App store bannerPlay store banner

AI-powered
podcast player

Listen to all your favourite podcasts with AI-powered features

Discover
highlights

Listen to the best highlights from the podcasts you love and dive into the full episode

Save any
moment

Hear something you like? Tap your headphones to save it with AI-generated key takeaways

Share
& Export

Send highlights to Twitter, WhatsApp or export them to Notion, Readwise & more

AI-powered
podcast player

Listen to all your favourite podcasts with AI-powered features

Discover
highlights

Listen to the best highlights from the podcasts you love and dive into the full episode