Changelog Interviews cover image

LLMs break the internet

Changelog Interviews

00:00

Manipulating Language Models and Prompt Injection Attacks

The speakers discuss prompt injection attacks, where they trick language models into providing desired outputs. They highlight the potential consequences of these attacks and how they could be used to manipulate models to favor certain products or ideas. They also mention real-life examples of prompt injection attacks on search engines and social media platforms.

Transcript
Play full episode

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app