AI chatbots like ChatGPT have made quite a splash. Companies are tripping over themselves in a rush to add “AI” to everything, heedless of the security risks. But perhaps more insidious are the privacy risks. Most AI processing is done in the cloud, meaning your queries and chats are subject to inspection, sharing, storage, and monetization. These AI systems are incredibly expensive to train and operate, and AI companies are desperate to feed them every scrap of data they can find. It’s a recipe for a privacy disaster. But there are ways to make AI more private, and today we’ll discuss these approaches with Proton’s head of AI, Eamonn Maguire.
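As a taste of what “more private” can look like in practice, here’s a minimal sketch of querying a locally hosted model, so your prompts never leave your own machine. It assumes you have an Ollama server (https://ollama.com) running on its default local port with a model such as llama3 already pulled; the model name and endpoint are assumptions for illustration, not anything Proton or Lumo specifically uses.

```python
import json
import urllib.request

# A privacy-preserving alternative to cloud chatbots: run the model locally
# so the prompt never leaves your machine. This sketch assumes an Ollama
# server is listening on its default port (11434) and that a model such as
# "llama3" has already been pulled -- both are illustrative assumptions.

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally hosted LLM; nothing is sent off-device."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete response instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local API
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_llm("Why is local AI inference more private than cloud AI?"))
```

The tradeoff, of course, is that local models are limited by your own hardware, which is part of why hybrid approaches like the ones we discuss in this episode are so interesting.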
Interview Notes
Further Info
Table of Contents
- 0:00:00: Intro
- 0:12:22: Defining some terms
- 0:15:29: What are the main privacy issues with modern AI?
- 0:22:53: What are the dangers of training AI models on personal data?
- 0:27:57: How do we make AI chatbots safer to use?
- 0:35:31: What are Proton’s goals with Lumo?
- 0:42:41: How can Lumo protect a user’s privacy?
- 0:52:19: Can we do more to anonymize cloud LLM queries?
- 0:56:50: What can we do to increase trust and transparency with AI?
- 1:02:55: Where does Proton store and process AI data?
- 1:10:35: Which LLM models does Lumo use?
- 1:15:38: Will Proton offer a local-only version of Lumo?
- 1:20:36: What’s next for Lumo and AI at Proton?
- 1:27:59: Will Lumo ever be part of Proton pricing bundles?
- 1:31:24: Wrap-up
- 1:35:14: Patron podcast preview
- 1:36:04: Looking ahead