

What New Risks Does AI Introduce?
6 snips Sep 18, 2025
Kara Sprague, CEO of HackerOne and an expert in AI security, delves into the complex world of AI risks. She emphasizes the need for new governance to manage the unique challenges posed by AI, such as shadow AI and identity issues. The discussion highlights the importance of red teaming for ongoing security testing and how the rapid adoption of AI necessitates clear guidelines for safe usage. Kara also advocates for defining risk appetites and establishing 'paved paths' to channel AI experimentation effectively.
AI Snips
Chapters
Transcript
Episode notes
AI Is A New Form Of Cognition
- AI embeds new forms of cognition into workflows that behave very differently from traditional software.
- These elements adapt, infer, act autonomously, and produce emergent behavior that static checklists cannot predict.
Embed Red Teaming Into AI Workflows
- Build playbooks that go beyond policy by simulating misuse and embedding red teaming into procurement and product workflows.
- Use diverse adversarial testing to treat AI as a dynamic attack surface touching technology and behavior.
Homegrown AI Isn't Automatically Safe
- First-party AI systems can be as risky as third-party ones because they reach production without mature offensive security reviews.
- Attackers can extract training data, bypass controls via prompt manipulation, subvert workflows, and embed persistence in models and plugins.