Risk Management and Enhanced Security Practices for AI Systems
Feb 6, 2024
In this episode, Omar Khawaja and Diana Kelley discuss a new framework for understanding AI risks, how to build a security-minded culture around AI, and the challenges CISOs face in assessing risk. They explore supply chain security in AI systems, emphasize the importance of data provenance tracking, and highlight the challenges in securing the software supply chain for AI and ML systems.
Understanding AI basics is crucial for effective risk management in security.
Securing AI requires a cultural shift: embracing collaboration and a growth mindset is essential for effective protection.
Deep dives
Understanding AI and its Complexity
AI presents risks and concerns similar to those of traditional applications, but expressed in different terminology. It is important for security professionals to grasp the basics of AI before diving into risk management. The complexity of AI components and terminology can be overwhelming, even for experienced professionals. Building a mental model and a visual representation of AI components can help security leaders analyze risks and provide effective guidance.
Drawing Parallels with Medicine
To understand the complexities of AI security, it is helpful to draw parallels with the discipline of medicine. Medical students start by studying the components of the human body (anatomy), followed by how these components function together (physiology), and then learn about the diseases that affect the body (pathology). Finally, they study interventions and treatments (pharmacology). Similarly, in the world of AI security, understanding the components and how they interact is crucial for effective risk management.
Supply Chain Security in AI and ML
Supply chain security in AI and ML goes beyond traditional software supply chain concerns. In addition to vetting code sources, it is essential to ensure the reliability and integrity of training data, which acts as the raw material for building AI models. Data provenance, tracking, and auditing are crucial to mitigate the risk of training data poisoning and understand the impact of data on model behavior.
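To make data provenance tracking concrete, here is a minimal Python sketch: it fingerprints every file in a training dataset and writes a manifest that can be re-checked before each training run. The directory layout, the `source` label, and the function names are illustrative assumptions, not anything prescribed in the episode or by a particular tool.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_provenance_manifest(data_dir: str, source: str) -> dict:
    """Record a per-file fingerprint of a training dataset so later audits
    can detect silent modification, e.g., training data poisoning."""
    root = Path(data_dir)
    return {
        "source": source,  # hypothetical label for where the data came from
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "files": {
            str(p.relative_to(root)): sha256_file(p)
            for p in sorted(root.rglob("*")) if p.is_file()
        },
    }

if __name__ == "__main__":
    # Hypothetical paths: point these at your own dataset and audit store.
    manifest = build_provenance_manifest("training_data/", source="vendor-x-export-v1")
    Path("provenance_manifest.json").write_text(json.dumps(manifest, indent=2))
```

Re-computing the hashes and diffing against the stored manifest before training flags any file that was added, removed, or altered after the data was vetted; production systems would typically layer dataset versioning and signed attestations on top of a fingerprinting step like this.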
The Cultural Shift for AI Security
Securing AI requires a cultural shift within organizations, particularly for CISOs and security teams. Adopting DevSecOps practices, building data expertise, and fostering collaboration among data, IT, and security teams are imperative. Aligning goals and creating a shared vision are crucial to foster collaboration and break down silos. Embracing a growth mindset and admitting to areas of uncertainty are essential in navigating the complexities of AI security.
In this episode of The MLSecOps Podcast, VP Security and Field CISO of Databricks, Omar Khawaja, joins the CISO of Protect AI, Diana Kelley. Together, Diana and Omar discuss a new framework for understanding AI risks, fostering a security-minded culture around AI, building the MLSecOps dream team, and some of the challenges that Chief Information Security Officers (CISOs) and other business leaders face when assessing the risk to their AI/ML systems.
Get the scoop on Databricks’ new AI Security Framework on The MLSecOps Podcast. To learn more about the framework, contact cybersecurity@databricks.com.
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.