
Personal LLM Agents: Insights and Survey about the Capability, Efficiency and Security

Papers Read on AI


Adversarial Attacks and Defense Strategies

This chapter discusses adversarial attacks on large language model (LLM) agents, including targeted misclassification and prompt-based backdoor attacks. It also explores defense strategies against these attacks and the challenges of implementing them in LLM-based systems.

Chapter begins at 01:58:56.
