CoSAI, OpenSSF and the Interesting Intersection of Secure AI and Open Source
Sep 10, 2024
Join Dave LaBianca, security engineering director at Google; Mihai Maruseac from the Google Open Source Security Team; and Jay White from Microsoft for a deep dive into AI security. They discuss the Coalition for Secure AI (CoSAI) and its essential role in enhancing AI security and governance. The trio shares insights on collaboration between CoSAI and the OpenSSF AI/ML Security Working Group, covering vital topics like model provenance and best practices for AI software supply chains. Plus, they serve up rapid-fire fun and invaluable advice for aspiring tech professionals!
Collaboration across open source communities, like CoSAI and OpenSSF, is essential for advancing AI security best practices and knowledge sharing.
Technical initiatives focusing on model signing and provenance tracking are vital for protecting AI models and identifying vulnerabilities in their lifecycle.
Deep dives
The Importance of Collaboration in AI Security
Collaboration among open source communities is crucial to addressing evolving challenges in AI security. Forums such as CoSAI enable professionals from different companies, including Google and Microsoft, to share insights and lessons learned about securely implementing AI technologies. By working together, they aim to close gaps in knowledge and best practices surrounding AI security and to democratize access to that information. This collaborative effort seeks to give organizations guidance on integrating traditional systems with AI systems, strengthening overall security frameworks.
Technical Approaches to Model Integrity
Two significant technical initiatives focus on model signing and establishing provenance in AI security frameworks. Model signing verifies that AI models have not been tampered with between training and deployment, and by using efficient hashing techniques, the team aims to make signing practical even for very large models. Provenance tracking traces the lineage of AI models from training to production, identifying potential vulnerabilities or compromises along the way. This capability is essential because it lets organizations assess the security of their AI applications and detect discrepancies anywhere in the model lifecycle.
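To make the signing side concrete, here is a minimal sketch in Python. It is illustrative only: the actual model-signing work discussed in the episode builds on tooling like Sigstore, whereas this sketch signs with a locally generated Ed25519 key pair from the `cryptography` library, and the `model.safetensors` filename, chunk size and function names are assumptions made for the example. The key idea is to hash a large model file in streaming fashion, so the whole model never has to sit in memory, and then sign only the digest.

```python
import hashlib
from pathlib import Path

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

CHUNK_SIZE = 1 << 20  # 1 MiB chunks; streaming keeps multi-GB models out of memory


def digest_model(model_path: Path) -> bytes:
    """Hash the serialized model file incrementally and return its SHA-256 digest."""
    h = hashlib.sha256()
    with model_path.open("rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            h.update(chunk)
    return h.digest()


def sign_model(model_path: Path, key: Ed25519PrivateKey) -> bytes:
    """Sign the model digest; verifiers recompute the digest and check the signature."""
    return key.sign(digest_model(model_path))


# Usage sketch: sign a model at publish time, verify before deployment.
key = Ed25519PrivateKey.generate()
model = Path("model.safetensors")  # hypothetical model file
signature = sign_model(model, key)
# Raises cryptography.exceptions.InvalidSignature if the file was tampered with.
key.public_key().verify(signature, digest_model(model))
```

Provenance tracking extends this idea: the same digest can be embedded in an attestation (for example, a SLSA-style provenance statement) recording which training data, code and pipeline produced the model, so a compromise anywhere in the lifecycle surfaces as a mismatch at verification time.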
Calls to Action for Engaging in AI Security
Industry leaders emphasize the need for greater involvement from diverse voices within the AI security community. Those entering the field are encouraged to engage with existing forums and initiatives, contributing their perspectives to enhance collective knowledge and practices. The call to action includes inviting underrepresented communities to participate actively, so that a wide range of experiences and ideas enriches the discussions. Fostering this inclusive environment helps ensure that comprehensive solutions address the complexities of securing AI technologies.
Omkhar is joined by Dave LaBianca, security engineering director at Google; Mihai Maruseac, member of the Google Open Source Security Team; and Jay White, security principal program manager at Microsoft. Dave and Jay are on the Project Governing Board for the Coalition for Secure AI (CoSAI), an alliance of industry leaders, researchers and developers dedicated to enhancing the security of AI implementations. Jay and Mihai are also leads on the OpenSSF AI/ML Security Working Group. In this conversation, they dig into CoSAI's goals and its potential partnership with the OpenSSF.
00:57 - Guest introductions
01:56 - Dave and Jay offer insight into why CoSAI was necessary
05:16 - Jay and Mihai explain the complementary nature of OpenSSF's AI/ML Security Working Group and CoSAI
07:21 - Mihai digs into the importance of proving model provenance
08:50 - Dave shares his thoughts on future CoSAI/OpenSSF collaborations
11:13 - Jay, Dave and Mihai answer Omkhar’s rapid-fire questions
14:12 - The guests offer their advice to those entering the field today and their call to action for listeners