

The AI Security Podcast
Harriet Farlow (HarrietHacks)
I missed the boat in computer hacking so now I hack AI instead. This podcast discusses all things at the intersection of AI and security. Hosted by me (Harriet Farlow aka. HarrietHacks) and Tania Sadhani and supported by Mileva Security Labs. Chat with Mileva Security Labs for your AI Security training and advisory needs: https://milevalabs.com/Reach out to HarrietHacks if you want us to speak at your event: https://www.harriethacks.com/
Episodes
Mentioned books

Dec 7, 2025 • 28min
AI Safety with CEO of Good Ancestors Greg Sadler | part 1
This week I invited myself over to Greg Sadler's place, the CEO of Good Ancestors, about AI safety. I brought sushi but I didn't have lunch so I ate most of it, and then I almost made him late for his next meeting. We specifically chat through AI capabilities, his work in policy, and building a not-profit. Greg is the kind of person who is so smart and cool that I feel like an absolute dummy interviewing him - so I know you're all going to like this episode. Stay tuned for part 2 where we dive into effective altruism and its intersection with AI!Check out Greg's work here: https://www.goodancestors.org.au/MIT AI Risk Repository: https://airisk.mit.edu/The Life You Can Save (book): https://www.thelifeyoucansave.org/book/80,000 hours: https://80000hours.org/Learn more about AI capability and impacts: https://bluedot.org/

Nov 24, 2025 • 30min
The United States AI Action Plan | will they win the AI race against China? 🤔
Hi! 👋 In this episode, we’re diving into the US AI Action Plan — the White House’s new roadmap for how America plans to lead in AI.. and beat China.We’ll look at what’s inside the plan, what it really means for AI security and regulation, and whether it’s more of a policy promise… or a political one.📄 You can read the full plan here:https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdfLet me know what you think — is this the kind of leadership AI needs, or a dangerous framing of AI capability?

Nov 9, 2025 • 30min
AI Security vs Application Security
Welcome back! 👋After taking a little break to reset and redesign everything behind the scenes, I’m back — and consolidating all my content. This episode is part of both The AI Security Podcast (on Spotify and Apple Podcasts) and my YouTube channel, HarrietHacks — so whether you prefer to listen or watch, you’ll get the same great conversations (and bad jokes) across both platforms.From now on, I’ll be posting at least fortnightly (with the occasional bonus episode when something big happens… like when I announced the book!).I’ve been in a few conversations lately where people have tried to convince me that AI Security is just Application Security in disguise. Naturally, I disagree. 🤷♀️ So in this episode, we dive into AI Security vs Application Security — how they overlap, where they diverge, and why securing AI systems demands new thinking beyond traditional AppSec.💌 Sign up for the newsletter: http://eepurl.com/i7RgRM📘 Pre-order The AI Security Handbook: [link coming soon]🎥 Watch this episode and more on YouTube: https://www.youtube.com/@HarrietHacks🔗 Useful LinksSQL Injection Examples (W3Schools): https://www.w3schools.com/sql/sql_injection.aspApplication Security Blog (Medium): https://medium.com/@pixelprecisionengineering1/application-security-appsec-in-cybersecurity-855ad9ce5e5eEcholeak Zero-Click Copilot Exploit (Dark Reading): https://www.darkreading.com/application-security/researchers-detail-zero-click-copilot-exploit-echoleakTraditional AppSec vs AI Security (Pillar Security): https://www.pillar.security/blog/traditional-appsec-vs-ai-security-addressing-modern-risks

Aug 12, 2025 • 19min
Agentic AI Security: A Primer
For a while we've been wanting to talk about Agentic AI Security.. the thing is that we could spend multiple episodes talking about it! So we decided to do just that. This is part 1 - a primer - where we talk about exactly what AI agents are and why we may need to consider their security a bit differently. Stay tuned for the rest of the series!

Aug 4, 2025 • 31min
How Likely Are AI Security Incidents? Updates From Our Final Report!
Six months ago Tania and I made an episode about the interim report for our AI Security Likelihood Project.. and it is finally time to discuss the final report! You'll see it live at this link shortly: https://www.aisecurityfundamentals.com/The premise was simple: are AI security incidents happening in the wild? What can we learn about future incidents from these historic ones? We answer some of these questions.

Jul 23, 2025 • 28min
To open or close model weights?
In this episode, Tania and I discuss the debate around closed or open model weights. What do you think?The RAND report we mention: https://www.rand.org/pubs/research_reports/RRA2849-1.html

Jul 15, 2025 • 31min
Creative prompt injection in the wild
In this episode, Tania and I talk through some creative examples of prompt injection/engineering we've seen in the wild.. think prompts hidden in papers, red-teaming and web-scraping.Your Brain on ChatGPT: https://arxiv.org/pdf/2506.08872Paper with hidden text (p. 12): https://arxiv.org/abs/2502.19918v2Interesting overview: https://www.theregister.com/2025/07/07/scholars_try_to_fool_llm_reviewers/Echoleak blog post: https://www.aim.security/lp/aim-labs-echoleak-m365

Jun 24, 2025 • 52min
Threat intel digest: 23 June 2025
This week we discussed multiple AI vulnerabilities, including Echolink in M365 Copilot, Agent Smith in Langchain, and a SQL injection flaw in Llama Index, all of which have been patched. We also covered a data exposure bug in Asana's MCP server and OWASP's project to create an AI vulnerability scoring system, while also outlining Google's defense layers for Gemini, Thomas Roccia's Proximity tool for MCP server security, news regarding AI and legal/security concerns, and research on AI hacking AI, prompt compression, multi-agent security protocols, and the security of reasoning models versus LLMs.

Jun 16, 2025 • 33min
AI safety evaluations with Inspect
I'm back from holiday, and this week Tania and I talk about a project she completed as part of the ARENA AI safety curriculum to replicate the findings of evaluations on frontier AI capabilities.Link to reasoning paper: https://arxiv.org/abs/2502.09696Link to the Inspect dashboard: https://inspect-evals-dashboard.streamlit.app/ARENA AI Safety course: https://www.arena.education/

Jun 10, 2025 • 55min
Threat intel digest: 9 June 2025
This week we try a new condensed format for the AI security digest! we covered critical CVEs, including vulnerabilities in AWS MCP, Llama Index, GitHub MCP integration, and tool poisoning attacks. We also reported on malware campaigns using spoofed AI installers, a supply chain attack via fake PyTorch models, and the AI-guided discovery of a Linux kernel vulnerability by Sean Healin using OpenAI's 03 model. We addressed OpenAI's actions against malicious use of their models, Reddit's lawsuit against Anthropic for data scraping, the creation of an AI model for reconstructing 3D faces from DNA by Chinese researchers, a zero-trust framework for AI agent identity management proposed by the Cloud Security Alliance, research on an agent-based red teaming framework, the impact of context length on LLM vulnerability, and CSIRO's technique for improving deep fake detection. We also highlighted the vulnerablemcp.info project and the ongoing evolution of AI security best practices.Sign up to get the digest in your inbox: http://eepurl.com/i7RgRM


