

The Insane Dangers of AI Influence Ops and More w/ Disesdi Susanna Cox
4 snips Aug 20, 2025
In this engaging chat, AI security architect Susanna Cox, known for her expertise in AI and security, dives into the hidden dangers of today's AI landscape. She unpacks the roles of red, blue, and innovative purple teams in cybersecurity. Susanna highlights vulnerabilities in generative AI, critiques tech leaders' ambitious promises, and discusses the contrasting AI regulations in the US and Europe. The conversation also touches on the rise of influence operations on social media, revealing how AI could sway public opinion with minimal effort.
AI Snips
Chapters
Transcript
Episode notes
Security-First AI Background
- Susanna Cox blends AI engineering with a decade of security research and red teaming experience.
- Her background underpins her perspective that AI risks require security-first thinking.
Purple Teams Close The Loop
- Purple teaming unites offensive red teams, defensive blue teams, and governance to close operational gaps.
- Collaboration across teams turns vulnerability discovery into lasting remediation and policy changes.
Single Channel Risk Of GenAI
- GenAI expands attack surface by putting data and instructions into a single interactive channel.
- Users now interact directly with systems in real time, increasing injection and manipulation risks.