
From prompt attacks to data leaks, LLMs offer new capabilities and new threats
The Stack Overflow Podcast
00:00
Adversarial Attacks and Security Issues
This chapter discusses adversarial attacks in language models (LLMs), including examples such as tricking the AI with prompts and hiding text within images. It also highlights the security issues specific to LLMs and mentions the importance of securing them.
Transcript
Play full episode