CyberWire Daily

LLM security 101. [Research Saturday]

Oct 26, 2024
Mick Baccio, a Global Security Advisor for Splunk SURGe, shares valuable insights on the security vulnerabilities of Large Language Models (LLMs). He discusses the surprising complexity behind these AI systems and the critical need for robust cybersecurity measures. Key topics include the OWASP Top 10 vulnerabilities, focusing on issues like prompt injection and data poisoning. Baccio emphasizes the importance of input sanitization and offers practical strategies to enhance LLM security while highlighting engaging resources for cybersecurity awareness.
AI Snips
INSIGHT

LLM Defense Misconception

  • Many believe defending LLM-based applications is difficult because of the complexity of LLMs and the rapid pace of AI advancement.
  • This is a misconception; practical defenses exist.
ADVICE

OWASP for LLMs

  • OWASP publishes best practices for building and securing systems, including a dedicated Top 10 for LLM applications.
  • Use the OWASP guidelines when securing LLM-based systems.
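The episode's advice on input sanitization can be illustrated with a minimal sketch. This is not Splunk's or OWASP's implementation; the function name, pattern list, and length limit below are illustrative assumptions, and real-world defenses against prompt injection (OWASP LLM01) require far broader, continuously updated controls:

```python
import re

# Hypothetical example patterns; a real deployment needs a broader,
# continuously maintained list (or a dedicated classifier).
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"reveal the system prompt",
]

def sanitize_user_input(text: str, max_len: int = 2000) -> str:
    """Truncate overly long input and reject likely prompt-injection phrases."""
    text = text[:max_len]  # bound input size before it reaches the model
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, text, re.IGNORECASE):
            raise ValueError(f"possible prompt injection: pattern {pattern!r}")
    return text

# Usage: wrap untrusted input before interpolating it into a model prompt.
safe = sanitize_user_input("What are the OWASP Top 10 for LLMs?")
```

Pattern matching alone is easily bypassed; it is one layer among several (privilege separation, output filtering, human review) that the OWASP guidance recommends.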
ANECDOTE

Splunk's LLM Research

  • Splunk developed and tested their own LLM within their research network, instrumenting it with Splunk's OpenTelemetry (OTel) connector.
  • They focused on five of the OWASP Top 10 for LLMs, developing detections based on the telemetry collected.