Petar Tsankov, Co-founder and CEO of LatticeFlow AI, dives into the complexities of the EU AI Act and its impact on AI innovation. He discusses the importance of translating legislation into practical technical requirements. Petar introduces 'Comply,' an open-source tool for AI compliance, while emphasizing the need for robust benchmarks in AI safety. He also sheds light on managing AI risks and the collaboration required among stakeholders to navigate evolving regulations, making it essential listening for AI developers and businesses alike.
Quick takeaways
Work around the EU AI Act centers on translating its high-level legal requirements into actionable technical guidelines, encouraging compliance and countering the narrative that regulation necessarily stifles innovation.
Organizations are increasingly forming dedicated teams to manage AI safety and governance, reflecting a growing recognition of operational risks in AI applications.
The launch of Comply provides a framework for assessing AI models against EU regulations, enhancing understanding of compliance and identifying improvement areas.
Deep dives
Understanding the EU AI Act
The EU AI Act provides a structured framework for deploying AI technologies within the EU, addressing concerns such as safety, compliance, and governance. A central challenge is translating its high-level legal requirements into actionable guidelines that technology developers can use for compliance. By focusing on these practical aspects, companies can move past narratives that portray regulation as an innovation stifler, understand the implications of using AI in their operations, and ensure safety and accountability.
Current Trends in AI Safety and Governance
A significant shift is underway in how organizations implement AI safety and governance. Companies are increasingly establishing dedicated teams to oversee AI applications as they scale from a handful of operational models to many. This momentum reflects a recognition that some applications carry high risk and therefore need strategic oversight and compliance checks before AI capabilities can be harnessed securely. As governance processes mature, earlier sensationalized fears about AI have given way to focused questions about operational safety and risk assessment.
Importance of Identifying High-Risk AI Models
As organizations expand their AI applications, identifying high-risk models becomes vital to mitigate potential repercussions. Companies adopting AI must rigorously determine which models have critical business implications, as experience shows that this oversight is often lacking. Many organizations struggle to understand the impact of their numerous models, leading to 'zombie models' that contribute little to operations yet pose risks if left unaccounted for. A systematic approach to identifying and categorizing AI applications according to risk keeps compliance and safety at the forefront of development efforts.
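To make the triage step above concrete, here is a minimal Python sketch of an inventory-and-classification pass over deployed models. Everything in it is an illustrative assumption: the use-case-to-tier mapping, the 90-day idle heuristic for 'zombie models', and all names are hypothetical, not taken from the Act or from LatticeFlow's tooling.

```python
# Hypothetical sketch: triaging a model inventory by risk tier.
# Tiers loosely mirror the EU AI Act's broad risk categories, but the
# mapping below is illustrative, not legal guidance.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # e.g. hiring, credit scoring
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # no specific obligations

@dataclass
class ModelRecord:
    name: str
    use_case: str
    in_production: bool
    last_invoked_days_ago: int

# Illustrative use-case mapping; a real one needs legal review.
HIGH_RISK_USE_CASES = {"credit_scoring", "hiring", "medical_triage"}
LIMITED_RISK_USE_CASES = {"chatbot", "content_generation"}

def classify(record: ModelRecord) -> RiskTier:
    if record.use_case in HIGH_RISK_USE_CASES:
        return RiskTier.HIGH
    if record.use_case in LIMITED_RISK_USE_CASES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

def find_zombies(inventory: list[ModelRecord], idle_days: int = 90) -> list[ModelRecord]:
    # Deployed models nobody has invoked recently: little value, residual risk.
    return [m for m in inventory if m.in_production and m.last_invoked_days_ago > idle_days]

inventory = [
    ModelRecord("credit-model-v3", "credit_scoring", True, 2),
    ModelRecord("support-bot", "chatbot", True, 1),
    ModelRecord("legacy-churn", "churn_prediction", True, 240),
]

for m in inventory:
    print(f"{m.name}: {classify(m).value}")
print("zombie models:", [m.name for m in find_zombies(inventory)])
```

A pass like this gives a governance team a first cut at which models need compliance attention and which idle deployments to retire.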
Comply: Bridging Regulatory Gaps
The launch of Comply, an evaluation framework for assessing language models against EU regulations, serves to bridge the gap between legal requirements and actionable guidelines. This tool quantifies compliance based on specific principles outlined in the EU AI Act, allowing organizations to measure their AI models’ adherence to regulations effectively. By providing scores between zero and one, it enables a clearer understanding of compliance status and informs areas needing improvement. As organizations confront the complexity of regulatory standards, tools like Comply offer valuable insights that make navigating compliance more manageable.
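The episode notes do not describe Comply's actual interface, so the sketch below is not its API. It only illustrates the general idea mentioned above: per-principle scores between zero and one, aggregated into a simple report that flags areas needing improvement. The principle names, scores, and pass threshold are all made-up assumptions.

```python
# Illustrative sketch only: NOT Comply's actual API. It mimics scoring a
# model against named principles on a 0-to-1 scale and flagging weak areas.
from statistics import mean

# Hypothetical per-principle scores a benchmark run might produce.
scores = {
    "robustness": 0.82,
    "fairness": 0.71,
    "interpretability": 0.45,
    "cybersecurity": 0.90,
}

THRESHOLD = 0.60  # illustrative pass bar, not taken from the Act

def report(scores: dict[str, float], threshold: float) -> None:
    for principle, score in sorted(scores.items(), key=lambda kv: kv[1]):
        status = "OK " if score >= threshold else "LOW"
        print(f"[{status}] {principle:<17} {score:.2f}")
    print(f"aggregate (mean): {mean(scores.values()):.2f}")

report(scores, THRESHOLD)
```

The value of such a report is less the aggregate number than the per-principle breakdown, which points developers at the specific requirements where a model falls short.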
Challenges and Future Directions in AI Governance
While the conversation surrounding AI governance evolves, significant challenges remain in translating legal text into practical compliance mechanisms. Some principles, like interpretability and robustness, lack clear methods for assessment, complicating the alignment between regulatory intentions and technical realities. As the demand for thorough evaluations increases, the industry recognizes the need for dynamic benchmarks that evolve alongside technological advances. Moving forward, collaboration between regulatory bodies and technology developers will be essential for refining compliance measures and ensuring that AI systems operate within safe and ethical boundaries.
Dr. Petar Tsankov is a researcher and entrepreneur in the field of Computer Science and Artificial Intelligence (AI).
EU AI Act - Navigating New Legislation // MLOps Podcast #271 with Petar Tsankov, Co-Founder and CEO of LatticeFlow AI.
Big thanks to LatticeFlow for sponsoring this episode!
// Abstract
Dive into AI risk and compliance. Petar Tsankov, a leader in AI safety, talks about turning complex regulations into clear technical requirements and the importance of benchmarks in AI compliance, especially with the EU AI Act. We explore his work with big AI players and the EU on safer, compliant models, covering topics from multimodal AI to managing AI risks. He also shares insights on "Comply," an open-source tool for checking AI models against EU standards, making compliance simpler for AI developers. A must-listen for those tackling AI regulation and safety.
// Bio
Co-founder & CEO at LatticeFlow AI, building the world's first product enabling organizations to build performant, safe, and trustworthy AI systems.
Before starting LatticeFlow AI, Petar was a senior researcher at ETH Zurich working on the security and reliability of modern systems, including deep learning models, smart contracts, and programmable networks.
Petar has co-created multiple publicly available security and reliability systems that are in regular use:
- ERAN, the world's first scalable verifier for deep neural networks: https://github.com/eth-sri/eran
- VerX, the world's first fully automated verifier for smart contracts: https://verx.ch
- Securify, the first scalable security scanner for Ethereum smart contracts: https://securify.ch
- DeGuard, a system that de-obfuscates Android binaries: http://apk-deguard.com
- SyNET, the first scalable network-wide configuration synthesis tool: https://synet.ethz.ch
Petar also co-founded ChainSecurity, an ETH spin-off that within 2 years became a leader in formal smart contract audits and was acquired by PwC Switzerland in 2020.
// MLOps Swag/Merch
https://mlops-community.myshopify.com/
// Related Links
Website: https://latticeflow.ai/
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Petar on LinkedIn: https://www.linkedin.com/in/petartsankov/