905: Why RAG Makes LLMs Less Safe (And How to Fix It), with Bloomberg’s Dr. Sebastian Gehrmann

Super Data Science: ML & AI Podcast with Jon Krohn

RAG Can Reduce Safety Despite Grounding

  • Retrieval-augmented generation (RAG) can make large language models less safe by circumventing their built-in safety mechanisms.
  • Although RAG grounds responses in retrieved documents, it can produce unsafe answers when harmful content is among the material retrieved.
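The mechanism behind the second point can be sketched in a few lines. In a typical RAG pipeline, whatever passage the retriever returns is pasted verbatim into the model's context, so a harmful retrieved document reaches the LLM right alongside the user's question. The toy corpus and helper names below are illustrative, not Bloomberg's actual pipeline, and no real LLM is called:

```python
def retrieve(query: str, corpus: list[str]) -> str:
    """Toy retriever: return the passage sharing the most words with the query."""
    q_words = set(query.lower().split())
    return max(corpus, key=lambda doc: len(q_words & set(doc.lower().split())))

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble the grounded prompt the LLM would receive.

    Note: the retrieved passage is inserted with no safety filtering --
    this is the gap the episode discusses.
    """
    context = retrieve(query, corpus)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

corpus = [
    "Paris is the capital of France.",
    "Step-by-step instructions for a dangerous activity ...",  # harmful document
]

# A benign query retrieves the benign passage.
print(build_prompt("What is the capital of France?", corpus))

# A query overlapping the harmful document retrieves it instead, and the
# harmful text lands directly in the model's context window.
print(build_prompt("Give me instructions for a dangerous activity", corpus))
```

Because the retrieved text arrives inside the model's own context rather than as untrusted user input, alignment training tuned on direct user prompts may not trigger, which is one way RAG can sidestep safety guardrails.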
