Every day, AI models are being deployed in new places, and that is creating demand for a new industry: companies that secure AI systems. Whether it's preventing models from being used to write malicious code or craft spearphishing emails, or safeguarding the data that companies use to train AI systems, large language models raise a host of new security challenges. David Haber is the CEO of Lakera, a start-up that builds tools to keep AI models secure, and he sits down with host Elias Groll to discuss this new industry and how companies are approaching the challenge of securing AI systems. CyberScoop reporter AJ Vicens joins the show to discuss a ransomware attack by the group known as ALPHV that has caused major disruptions to the U.S. health care system.