The podcast looks back on a decade of AI safety and trust, emphasizing the importance of reliability and transparency when deploying models. It traces the speaker's journey through the contrasting educational environments of the US and Switzerland, and covers the challenges of ensuring trust in AI models, the impact of generative AI, and the need for comprehensive testing after deployment to build trust.
Podcast summary created with Snipd AI
Quick takeaways
Decoupling models from data is crucial for AI trust.
Office location impacts tech collaboration and growth.
Validating AI models pre-production enhances trust and reliability.
Deep dives
Deep Dive into AI Trust and Robustness
The conversation examines what it takes to ensure AI trust: AI research has traditionally focused on optimizing accuracy, but models now face challenges in production for a variety of reasons. The fact that models are not decoupled from their data is emphasized, underscoring the importance of upstream data quality. LatticeFlow's focus on detecting model blind spots to improve system robustness is discussed.
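The idea behind blind-spot detection can be illustrated with a minimal sketch: instead of reporting a single aggregate accuracy, evaluate the model per data slice and flag slices that underperform. The function name, slice labels, and threshold below are illustrative assumptions, not LatticeFlow's actual API.

```python
# Minimal sketch of slice-based evaluation to surface model "blind spots":
# measure accuracy per metadata slice rather than one aggregate number.
from collections import defaultdict

def find_blind_spots(records, threshold=0.8):
    """Group predictions by a metadata slice and return the slices whose
    accuracy falls below `threshold`."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for slice_name, prediction, label in records:
        totals[slice_name] += 1
        hits[slice_name] += int(prediction == label)
    return {
        s: hits[s] / totals[s]
        for s in totals
        if hits[s] / totals[s] < threshold
    }

# Toy data: (slice, predicted_label, true_label)
records = [
    ("daylight", 1, 1), ("daylight", 0, 0), ("daylight", 1, 1),
    ("night", 1, 0), ("night", 0, 1), ("night", 1, 1),
]
print(find_blind_spots(records))  # only the "night" slice (1/3 correct) is flagged
```

A model that scores well on average can still fail badly on a slice like "night" here, which is exactly the kind of gap aggregate metrics hide.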
Expansion and Strategic Office Placement
LatticeFlow's CEO, Petar Tsankov, explains the company's strategic expansion with an office in Bulgaria to collaborate on deep-tech initiatives, and highlights the importance of being based in a conducive tech environment such as Switzerland.
Transition from Research to Industry Application
The transition from accurate AI model building in research to practical deployment in industries reveals challenges in ensuring data relevance and model effectiveness in different environments. The significance of validating models pre-production to enhance trust and reliability is underlined.
Industry Shift Towards Third-party Models
The podcast discusses the shift towards utilizing third-party AI models, necessitating independent validation to ensure reliability. The importance of bridging gaps between standard frameworks and practical implementation for certification and trust-building is stressed.
Role of Standards and Certification
The evolution and significance of AI reliability standards, such as the ISO/IEC 5259 and ISO/IEC 24000 series, are highlighted. Integrating these industry standards into AI workflows enables transparent, structured validation processes that build trust and support communication across organizational levels.
Huge thank you to LatticeFlow AI for sponsoring this episode. LatticeFlow AI - https://latticeflow.ai/
MLOps podcast #218 with Petar Tsankov, Co-Founder and CEO at LatticeFlow AI: A Decade of AI Safety and Trust.
// Abstract
// Bio
Dr. Petar Tsankov is a researcher and entrepreneur in the field of Computer Science and Artificial Intelligence.
Co-founder & CEO at LatticeFlow AI, building the world's first product enabling organizations to build performant, safe, and trustworthy AI systems.
Before starting LatticeFlow AI, Petar was a senior researcher at ETH Zurich working on the security and reliability of modern systems, including deep learning models, smart contracts, and programmable networks.
// MLOps Jobs board
https://mlops.pallet.xyz/jobs
// MLOps Swag/Merch
https://mlops-community.myshopify.com/
// Related Links
Website: https://latticeflow.ai/
ERAN, the world's first scalable verifier for deep neural networks: https://github.com/eth-sri/eran
VerX, the world's first fully automated verifier for smart contracts: https://verx.ch
Securify, the first scalable security scanner for Ethereum smart contracts: https://securify.ch
DeGuard, de-obfuscates Android binaries: http://apk-deguard.com
SyNET, the first scalable network-wide configuration synthesis tool: https://synet.ethz.ch
--------------- ✌️Connect With Us ✌️ -------------
Join our slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Petar on LinkedIn: https://www.linkedin.com/in/petartsankov/
Timestamps:
[00:00] Petar's preferred coffee
[00:29] Takeaways
[03:15] Shout out to LatticeFlow for sponsoring this episode!
[03:22] Please like, share, leave a review, and subscribe to our MLOps channels!
[03:42] Expansion
[05:16] Zurich ETH
[07:06] AI Safety
[09:24] Optimizing one metric, no fixed data sets
[12:19] Trust life-changing issues
[14:59] So much interest in GenAI
[16:45] Explosion of GenAI Trust and Safety
[21:14] Red Teaming
[25:22] Trustworthy AI in Industry
[27:43] DataOps Challenges
[33:42] Trusting Third-Party Models
[37:00] Testing Open Source Models
[41:41] Specialized ML for Leasing
[43:04] Regulation and Financial Incentives
[45:30] Regulations Drive Innovation Balance
[47:23] Regulations vs Certification: Voluntary Proof
[52:24] Workflow Transparency: Trust & Efficiency
[53:20] Engineers Balance Compliance Risks
[54:53] Pushing Deep Learning Limits
[57:31] Wrap up