The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) cover image

RAG Risks: Why Retrieval-Augmented LLMs are Not Safer with Sebastian Gehrmann - #732

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

00:00

Safety Concerns in RAG Systems

This chapter discusses the implications of Retrieval-Augmented Generation (RAG) on the safety of large language models, emphasizing the need for robust evaluation and safeguards to prevent misuse. The speakers explore various safety concerns, including prompt injection and types of attacks like data poisoning, revealing vulnerabilities in current safety measures. They highlight the surprising behaviors of models, such as generating unsafe content even when provided with safe context, underscoring the risks associated with RAG implementations.

Transcript
Play full episode

The AI-powered Podcast Player

Save insights by tapping your headphones, chat with episodes, discover the best highlights - and more!
App store bannerPlay store banner
Get the app