
AI Security Podcast
The #1 source for AI Security insights for CISOs and cybersecurity leaders.
Hosted by two former CISOs, the AI Security Podcast provides expert, no-fluff discussions on the security of AI systems and the use of AI in Cybersecurity. Whether you're a CISO, security architect, engineer, or cyber leader, you'll find practical strategies, emerging risk analysis, and real-world implementations without the marketing noise.
These conversations are helping cybersecurity leaders make informed decisions and lead with confidence in the age of AI.
Latest episodes

Jan 8, 2025 • 57min
AI Cybersecurity Predictions 2025: Revolution or Reality?
The discussion kicks off with AI predictions for cybersecurity in 2025, highlighting the transformative impact of generative AI on the industry. There's an exciting focus on SOC automation and its tangible effects. Data security emerges as a major winner, alongside the potential of agentic AI in revolutionizing security operations. Predictions for innovative AI startups tease a future filled with productivity and security advancements. Amidst optimism and caution, the hosts explore the need for strategic planning in integrating AI into cybersecurity.

Nov 22, 2024 • 51min
AI Red Teaming in 2024 and Beyond
Hosts Caleb Sima and Ashish Rajan caught up with Daniel Miessler (Unsupervised Learning) and Joseph Thacker (Principal AI Engineer, AppOmni) to talk about the true vulnerabilities of AI applications, how prompt injection is evolving, new attack vectors through images, audio, and video, and predictions for AI-powered hacking and its implications for enterprise security.
Whether you're a red teamer, a blue teamer, or simply curious about AI's impact on cybersecurity, this episode is packed with expert insights, practical advice, and future forecasts. Don’t miss out on understanding how attackers leverage AI to exploit vulnerabilities—and how defenders can stay ahead.
Questions asked:
(00:00) Introduction
(02:11) A bit about Daniel Miessler
(02:22) A bit about Rez0
(03:02) Intersection of Red Team and AI
(07:06) Is red teaming AI different?
(09:42) Humans or AI: Better at Prompt Injection?
(13:32) What is a security vulnerability for an LLM?
(14:55) Jailbreaking vs Prompt Injecting LLMs
(24:17) What's new for Red Teaming with AI?
(25:58) Prompt injection in Multimodal Models
(27:50) How Vulnerable are AI Models?
(29:07) Is Prompt Injection the only real threat?
(31:01) Predictions on how prompt injection will be stored or used
(32:45) What’s changed in the Bug Bounty Toolkit?
(35:35) How would internal red teams change?
(36:53) What can enterprises do to protect themselves?
(41:43) Where to start in this space?
(47:53) What are our guests most excited about in AI?
Resources
Daniel's Webpage - Unsupervised Learning
Joseph's Website

Nov 4, 2024 • 1h 17min
The Current State of AI and the Future for CyberSecurity in 2024
Jason Clinton, CISO at Anthropic, Kristy Hornland, Cybersecurity Director at KPMG, and Vijay Bolina, CISO at Google DeepMind, discuss the pivotal intersection of AI and cybersecurity. They explore AI's transformative impact on secure coding practices and the evolution of software development. The guests highlight risks surrounding AI-generated code, the complexities of multimodal models, and the imperative of responsible AI use. They emphasize the need for robust data governance and proactive risk management within organizations as they prepare for 2024 and beyond.

Oct 23, 2024 • 28min
What is AI Native Security?
In this episode of the AI Cybersecurity Podcast, Caleb and Ashish sat down with Vijay Bolina, Chief Information Security Officer at Google DeepMind, to explore the evolving world of AI security. Vijay shared his unique perspective on the intersection of machine learning and cybersecurity, explaining how organizations like Google DeepMind are building robust, secure AI systems.
We dive into critical topics such as AI native security, the privacy risks posed by foundation models, and the complex challenges of protecting sensitive user data in the era of generative AI. Vijay also sheds light on the importance of embedding trust and safety measures directly into AI models, and how enterprises can safeguard their AI systems.
Questions asked:
(00:00) Introduction
(01:39) A bit about Vijay
(03:32) DeepMind and Gemini
(04:38) Training data for models
(06:27) Who can build an AI Foundation Model?
(08:14) What is AI Native Security?
(12:09) Does the response time change for AI Security?
(17:03) What should enterprise security teams be thinking about?
(20:54) Shared fate with Cloud Service Providers for AI
(25:53) Final Thoughts and Predictions

Sep 6, 2024 • 47min
BlackHat USA 2024 AI Cybersecurity Highlights
What were the key AI cybersecurity trends at Black Hat USA 2024? In this episode of the AI Cybersecurity Podcast, hosts Ashish Rajan and Caleb Sima dive into their key insights from Black Hat 2024. From the AI Summit to the CISO Summit, they explore the most critical themes shaping the cybersecurity landscape, including deepfakes, AI in cybersecurity tools, and automation. The episode also covers the rising concerns among CISOs regarding AI platforms and what these mean for security leaders.
Questions asked:
(00:00) Introduction
(02:49) Black Hat, DEF CON and RSA Conference
(07:18) Black Hat CISO Summit and CISO Concerns
(11:14) Use Cases for AI in Cybersecurity
(21:16) Are people tired of AI?
(21:40) AI is mostly a side feature
(25:06) LLM Firewalls and Access Management
(28:16) The data security challenge in AI
(29:28) The trend with Deepfakes
(35:28) The trend of pentest automation
(38:48) The role of an AI Security Engineer

Aug 21, 2024 • 34min
Our insights from Google's AI Misuse Report
The podcast explores alarming findings from Google's report on generative AI misuse, revealing over 200 incidents across healthcare and education. Hosts discuss the rise of deepfakes and AI-driven impersonation, stressing their ease of access and ethical dilemmas. The conversation also highlights the impact of misleading metrics in content creation and touches on the challenges of distinguishing between human and AI-generated content. Lastly, they emphasize the need for legal frameworks as AI technology evolves and shapes public opinion.

Aug 2, 2024 • 1h 11min
AI Code Generation - Security Risks and Opportunities
Guy Podjarny, the Founder and CEO at Tessl, dives into the intriguing world of AI-generated code. He discusses its reliability compared to human coding, raising critical questions about trust. Security risks associated with AI code are highlighted, stressing the importance of human oversight and proactive measures. Guy also touches on the changing landscape of AI in software development, the need for automated security testing, and the evolving role of cybersecurity professionals. His insights offer a thought-provoking look at AI’s impact on coding and security.

Jul 11, 2024 • 45min
Exploring Top AI Security Frameworks
The podcast explores AI security frameworks such as those from Databricks, NIST, and the OWASP Top 10, comparing their key components and practical implementation strategies. It discusses the challenges of selecting the right framework, AI risk management, and the importance of governance and collaboration. The episode also touches on using ChatGPT for document analysis, Google AI Studio, and the progression of AI proficiency.

Jun 17, 2024 • 45min
Practical Applications and Future Predictions for AI Security in 2024
What is the current state and future potential of AI Security? This special episode was recorded LIVE at BSidesSF (that's why it's a little noisy), right amongst all the exciting action. Clint Gibler, Caleb Sima, and Ashish Rajan sat down to talk about practical uses of AI today, how AI will transform security operations, whether AI can be trusted to manage permissions, and the importance of understanding AI's limitations and strengths.
Questions asked:
(00:00) Introduction
(02:24) A bit about Clint Gibler
(03:10) What's top of mind with AI Security?
(04:13) tl;dr of Clint's BSides SF Talk
(08:33) AI Summarisation of Technical Content
(09:47) Clint’s favourite part of the talk - Fuzzing
(15:30) Questions Clint got about his talk
(17:11) Human oversight and AI
(25:04) Perfection getting in the way of good
(30:15) AI on the engineering side
(36:31) Predictions for AI Security
Resources from this conversation:
Caleb's Keynote at BSides SF
Clint's Newsletter

May 22, 2024 • 44min
AI Highlights from RSAC 2024 and BSides SF 2024
Key AI Security takeaways from RSA Conference 2024, BSides SF 2024, and all the fringe activities that happen in SF during that week. Caleb and Ashish were speakers and panelists at several events throughout the week, and this episode captures the highlights of their conversations and the trends they saw during what they dubbed the "Cybersecurity Fringe Festival" in SF.
Questions asked:
(00:00) Introduction
(02:53) Caleb’s Keynote at BSides SF
(05:14) Clint Gibler's BSides SF Talk
(06:28) What are BSides Conferences?
(13:55) Cybersecurity Fringe Festival
(17:47) RSAC 2024 was busy
(19:05) AI Security at RSAC 2024
(23:03) RSAC Innovation Sandbox
(27:41) CSA AI Summit
(28:43) Interesting AI Talks at RSAC
(30:35) AI conversations at RSAC
(32:32) AI Native Security
(33:02) Data Leakage in AI Security
(30:35) Is AI Security all that different?
(39:26) How to filter vendors selling AI Solutions?