

DtSR Episode 658 - What Does It Mean to Secure AI (Part 4)
Jun 17, 2025
Join Jeff Collins, Raja Mukerji, and John Dickson, experts in AI security, as they dive into the complexities of safeguarding artificial intelligence. They discuss how traditional security practices adapt to new AI technologies, the ethical concerns surrounding data leakage, the challenges of governance and the need for effective legislation, and methods for analyzing AI behavior. The trio emphasizes proactive strategies and employee training, advocating for a culture that prioritizes cybersecurity in AI development.
AI Snips
Data Leakage Risks with LLMs
- Data leakage is a major risk when employees upload sensitive company data to third-party LLMs.
- This bypasses organizational governance and can expose confidential information publicly or to competitors; a minimal screening sketch follows below.
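The episode frames this as a governance gap rather than a tooling problem. As one illustration only (not something prescribed by the speakers), here is a minimal sketch of a pre-submission screening step that scans outbound prompts for obviously sensitive patterns before they reach a third-party LLM. The pattern list, function name, and example text are assumptions made for this sketch; a real deployment would rely on proper DLP tooling and policy.

```python
import re

# Hypothetical patterns an organization might treat as sensitive; real
# policies would be far broader (DLP tooling, classification labels, etc.).
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace matches of known-sensitive patterns and report what was found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED:{label}]", text)
    return text, findings

prompt = "Summarize this: contact alice@example.com, key AKIA1234567890ABCDEF"
clean_prompt, findings = redact(prompt)
if findings:
    print("Redacted before sending externally:", findings)
print(clean_prompt)
```

A filter like this only catches obvious patterns; the broader point in the episode is that governance and employee training, not regexes, are the primary controls.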
Non-Determinism Challenges in AI
- Software development with LLMs introduces non-determinism, breaking the direct link between cause and effect in code.
- This creates challenges for auditability and reliability, as the same input can yield different outputs; see the toy sampling sketch below.
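A toy sketch of where this non-determinism comes from, assuming the usual temperature-based sampling over next-token probabilities: the same input produces the same scores, yet sampled outputs differ from run to run, while greedy decoding (temperature zero) is repeatable. The token names and scores below are invented for illustration and do not come from any real model.

```python
import numpy as np

# Toy illustration (not a real model): the same "input" produces fixed logits,
# but temperature sampling still picks different tokens from run to run.
TOKENS = ["retry", "return", "raise", "log"]
LOGITS = np.array([2.0, 1.8, 1.5, 0.5])  # assumed scores for the next token

def sample_next_token(logits: np.ndarray, temperature: float) -> str:
    if temperature == 0.0:
        return TOKENS[int(np.argmax(logits))]  # greedy decoding: repeatable
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return TOKENS[int(np.random.choice(len(TOKENS), p=probs))]  # stochastic

print("temperature=1.0:", [sample_next_token(LOGITS, 1.0) for _ in range(5)])
print("temperature=0.0:", [sample_next_token(LOGITS, 0.0) for _ in range(5)])
```

Pinning temperature to zero (and the model version) improves repeatability for audits, though it does not make the model's internal behavior any easier to explain.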
Avoid Sharing Sensitive Data Publicly
- Avoid uploading private or sensitive organizational code or data to public LLMs due to data ownership and security risks.
- Always assume that data sent externally may be beyond your control and could be exposed.