
AI Confidential
Building Trust in AI With Mark Papermaster (AMD) and Mark Russinovich (Microsoft Azure)
Nov 20, 2024
Join Mark Papermaster, CTO at AMD with a rich background at IBM and Apple, and Mark Russinovich, CTO at Microsoft Azure and tech pioneer, as they delve into the world of confidential computing. They discuss how this technology strengthens security in AI applications, making them more trustworthy and easier to adopt. The duo highlights the importance of partnerships in driving innovation and shares insights on multi-party computation's role in safeguarding privacy across industries. Discover how these advancements are reshaping data management and business strategies.
48:30
Episode notes
Podcast summary created with Snipd AI
Quick takeaways
- Confidential computing is essential for establishing a secure environment for processing sensitive data, thereby increasing trust in AI applications.
- The integration of AI with confidential computing transforms business models by enabling secure multi-party computations without compromising data ownership.
Deep dives
The Evolution and Importance of Confidential Computing
Confidential computing centers on establishing a trusted environment for processing sensitive data, typically within cloud infrastructure. It employs hardware-based roots of trust to create secure enclaves that protect data and applications while in use, minimizing the attack surface exposed to external threats. Industry leaders believe that as data security concerns continue to rise, confidential computing will transition from a specialized service to a standard computing practice within a decade. This evolution reflects the need for systems that can securely manage and process sensitive data, particularly as organizations increasingly adopt AI.
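The hardware root of trust described above is typically exercised through remote attestation: before any sensitive data or keys are handed to an enclave, a verifier checks a hardware-signed report of what is actually running. The sketch below illustrates that gating pattern only; it is not any vendor's real attestation API. Real platforms (such as SGX, SEV-SNP, or TDX) use hardware-signed quotes and vendor certificate chains, which are stood in for here by a hypothetical HMAC key and expected measurement.

```python
# Minimal sketch of attestation-gated key release (assumed names throughout):
# HARDWARE_ROOT_KEY and EXPECTED_MEASUREMENT are placeholders, not real values.
import hmac
import hashlib
from dataclasses import dataclass

HARDWARE_ROOT_KEY = b"hardware-root-of-trust-key"   # stand-in for the CPU's attestation key
EXPECTED_MEASUREMENT = hashlib.sha256(b"enclave-code-v1").hexdigest()

@dataclass
class AttestationReport:
    measurement: str   # hash of the code/data loaded into the enclave
    nonce: str         # freshness value supplied by the verifier
    signature: str     # HMAC over (measurement || nonce), standing in for a hardware signature

def issue_report(measurement: str, nonce: str) -> AttestationReport:
    """What the 'hardware' would produce: a signed statement of what is running."""
    msg = f"{measurement}|{nonce}".encode()
    sig = hmac.new(HARDWARE_ROOT_KEY, msg, hashlib.sha256).hexdigest()
    return AttestationReport(measurement, nonce, sig)

def verify_and_release_key(report: AttestationReport, nonce: str) -> bytes | None:
    """The relying party: verify the report, then (and only then) release the key."""
    msg = f"{report.measurement}|{report.nonce}".encode()
    expected_sig = hmac.new(HARDWARE_ROOT_KEY, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(report.signature, expected_sig):
        return None                       # report not rooted in trusted hardware
    if report.nonce != nonce:
        return None                       # stale or replayed report
    if report.measurement != EXPECTED_MEASUREMENT:
        return None                       # unexpected code is running in the enclave
    return b"data-encryption-key"         # safe to hand sensitive material to the enclave

if __name__ == "__main__":
    nonce = "random-challenge-123"
    report = issue_report(EXPECTED_MEASUREMENT, nonce)
    key = verify_and_release_key(report, nonce)
    print("key released" if key else "attestation failed")
```

The point of the pattern is that the data owner's trust decision rests on the hardware-signed measurement, not on the cloud operator's word, which is what lets sensitive workloads move into shared infrastructure.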