
A Decade of AI Safety and Trust // Petar Tsankov // MLOps Podcast #218
MLOps.community
Contrasting Environments: US Chaos vs Swiss Orderliness in Education and Innovation
Exploring the speaker's journey studying computer science in Zurich versus the US, emphasizing how Switzerland's structured culture and America's more chaotic environment each foster diverse perspectives for innovation and entrepreneurship.
Huge thank you to LatticeFlow AI for sponsoring this episode. LatticeFlow AI - https://latticeflow.ai/

Dr. Petar Tsankov is a researcher and entrepreneur in the fields of Computer Science and Artificial Intelligence.

MLOps podcast #218 with Petar Tsankov, Co-Founder and CEO at LatticeFlow AI, A Decade of AI Safety and Trust.

// Abstract
Embark on a decade-long journey through AI safety and trust. This conversation delves into key areas such as the transition toward more adversarial environments, the challenges of model robustness and data relevance, and the necessity of third-party assessments given companies' reluctance to share data. It also covers current shifts in AI trends, emphasizing the problems of bias, errors, and lack of transparency, particularly in generative AI and third-party models. The episode explores the origins and mission of LatticeFlow AI to provide trustworthy solutions for new AI applications, including the team's participation in safety competitions and their focus on proving properties of neural networks. The conversation concludes with the importance of data quality, robustness checks, the application of emerging standards such as ISO 5259 and ISO 42001, and a peek into the future of AI regulation and certification. Safe to say, it's a must-listen for anyone passionate about trust and safety in AI.

// Bio
Co-founder & CEO at LatticeFlow AI, building the world's first product that enables organizations to build performant, safe, and trustworthy AI systems. Before starting LatticeFlow AI, Petar was a senior researcher at ETH Zurich working on the security and reliability of modern systems, including deep learning models, smart contracts, and programmable networks.
// MLOps Jobs board
https://mlops.pallet.xyz/jobs

// MLOps Swag/Merch
https://mlops-community.myshopify.com/

// Related Links
Website: https://latticeflow.ai/
ERAN, the world's first scalable verifier for deep neural networks: https://github.com/eth-sri/eran
VerX, the world's first fully automated verifier for smart contracts: https://verx.ch
Securify, the first scalable security scanner for Ethereum smart contracts: https://securify.ch
DeGuard, de-obfuscates Android binaries: http://apk-deguard.com
SyNET, the first scalable network-wide configuration synthesis tool: https://synet.ethz.ch

--------------- ✌️ Connect With Us ✌️ ---------------
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Petar on LinkedIn: https://www.linkedin.com/in/petartsankov/

Timestamps:
[00:00] Petar's preferred coffee
[00:29] Takeaways
[03:15] Shout out to LatticeFlow AI for sponsoring this episode!
[03:22] Please like, share, leave a review, and subscribe to our MLOps channels!
[03:42] Expansion
[05:16] ETH Zurich
[07:06] AI Safety
[09:24] Optimizing One Metric, No Fixed Datasets
[12:19] Trust in Life-Changing Issues
[14:59] So Much Interest in GenAI
[16:45] Explosion of GenAI Trust and Safety
[21:14] Red Teaming
[25:22] Trustworthy AI in Industry
[27:43] DataOps Challenges
[33:42] Trusting Third-Party Models
[37:00] Testing Open Source Models
[41:41] Specialized ML for Leasing
[43:04] Regulation and Financial Incentives
[45:30] Regulations Drive Innovation Balance
[47:23] Regulation vs Certification: Voluntary Proof
[52:24] Workflow Transparency: Trust & Efficiency
[53:20] Engineers Balance Compliance Risks
[54:53] Pushing Deep Learning Limits
[57:31] Wrap up