Changelog Interviews cover image

LLMs break the internet

Changelog Interviews

CHAPTER

Manipulating Language Models and Prompt Injection Attacks

The speakers discuss prompt injection attacks, where they trick language models into providing desired outputs. They highlight the potential consequences of these attacks and how they could be used to manipulate models to favor certain products or ideas. They also mention real-life examples of prompt injection attacks on search engines and social media platforms.

00:00
Transcript
Play full episode

Remember Everything You Learn from Podcasts

Save insights instantly, chat with episodes, and build lasting knowledge - all powered by AI.
App store bannerPlay store banner