馃敀 How Secure is AI? Gandalf鈥檚 Creator Exposes the Risks 馃敟
AI security is under attack, and hackers are finding new ways to manipulate AI systems. In this episode, Guy Podjarny sits down with Mateo Rojas-Carulla, co-founder of Lakera and creator of Gandalf, to break down the biggest threats facing AI today鈥攆rom prompt injections and jailbreaks to data poisoning and agent manipulation.
What You鈥檒l Learn:
- How attackers exploit AI vulnerabilities in real-world applications
- Why AI models struggle to separate instructions from external data
- How Gandalf鈥檚 60M+ attack attempts revealed shocking insights
- What the Dynamic Security Utility Framework (DSEC) means for AI safety
- Why red teaming is critical for preventing AI disasters
Whether you鈥檙e a developer, security expert, or just curious about AI risks, this episode is packed with must-know insights on keeping AI safe in an evolving landscape.
馃挕 Can AI truly be secured? Or will attackers always find a way? Drop your thoughts in the comments! 馃憞
Watch the episode on YouTube: https://youtu.be/RKCvlJT_r4s
Join the AI Native Dev Community on Discord: https://tessl.co/4ghikjh
Ask us questions: podcast@tessl.io