CISO Tradecraft®

#199 - How to Secure Generative AI

Sep 23, 2024
G. Mark Hardy, a security expert focused on Generative AI, discusses critical insights on securing these emerging technologies. He unpacks the mechanics of large language models like ChatGPT and highlights major industry players. G. Mark delves into the risks of AI misuse, including data breaches and fabricated content. He offers practical strategies for CISOs to mitigate these threats, emphasizing the CARE standard for effective governance. Additionally, he touches on the future vulnerabilities of AI and the need for ethical guidelines to foster responsible innovation.
Duration: 27:55

Podcast summary created with Snipd AI

Quick takeaways

  • Effective risk mitigation for generative AI requires structured approaches, including access controls and continuous monitoring of data inputs and outputs.
  • CISOs must balance innovation with security by engaging stakeholders and presenting practical solutions that align with organizational goals and risk management.

Deep dives

Understanding Generative AI

Generative AI is a form of artificial intelligence that creates new content, such as text or images, by applying patterns learned from existing data. Because its outputs tend to mirror the data it was trained on, the quality of what it produces depends heavily on that training data. Examples include large language models like ChatGPT for text generation and DALL-E for images. Under the hood, these models tokenize input text and assign probabilistic weights to pairs of words, then generate output by sampling from those statistical relationships.
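
To make the "probabilistic weights on pairs of words" idea concrete, here is a minimal, simplified sketch in Python of a bigram model. It is an illustration of the statistical principle the episode describes, not how ChatGPT actually works: production LLMs use neural networks over subword tokens rather than raw pair counts, and the corpus and function names below are made up for the example.

```python
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict:
    """Tokenize text into words and count how often each word follows another."""
    tokens = corpus.lower().split()          # naive whitespace tokenization
    counts = defaultdict(lambda: defaultdict(int))
    for current, following in zip(tokens, tokens[1:]):
        counts[current][following] += 1      # weight for this word pair
    return counts

def generate(counts: dict, start: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling the next word from pair frequencies."""
    word, output = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break                            # no known continuation for this word
        words, weights = zip(*followers.items())
        word = random.choices(words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

# Example usage with a tiny made-up corpus:
corpus = "the model predicts the next word the model learns patterns from data"
counts = train_bigrams(corpus)
print(generate(counts, start="the"))
```

Even this toy version shows why outputs mirror the training data: the model can only recombine word pairs it has already seen, weighted by how often they occurred.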
