Not So Standard Deviations cover image

176 - Is R the Worst?

Not So Standard Deviations

00:00

The Risks of Prompt Injection

The vulnerability exists when you take a carefully prompted, crafted prompt and concatenate that with an untrusted input from a user. In software built on top of large AI language models it can be subverted by a user injecting malicious input. The robots will take it over. They will have injected all the prompts already. So what a random set of like ridiculous guard rails. That's not that you can find your way around it. You'll need another AI product to build the guard rails.

Transcript
Play full episode

Remember Everything You Learn from Podcasts

Save insights instantly, chat with episodes, and build lasting knowledge - all powered by AI.
App store bannerPlay store banner