
Firewalls Don't Stop Dragons Podcast
Privacy-Focused AI
 Oct 27, 2025 
In this engaging discussion, Eamonn Maguire, Director of Engineering for AI at Proton, dives into the urgent privacy concerns surrounding AI chatbots. He highlights the risks of data harvesting and the implications of training AI on personal information. Eamonn explains Proton's Lumo assistant, designed to prioritize privacy with zero-access encryption and a no-logs policy. He also discusses the importance of transparency, the potential of open-source technology, and how local-only options can enhance user security in a rapidly evolving digital landscape.
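For context, "zero-access encryption" generally means data is encrypted with keys held only by the user, so the service operator stores ciphertext it cannot read. The sketch below is a minimal illustration of that idea in Python using the cryptography library's Fernet API; it is not Proton's or Lumo's actual implementation, and the names in it are hypothetical.

```python
# Conceptual sketch of zero-access encryption: the client encrypts a
# conversation with a key that never leaves the user's device, so the
# server stores only ciphertext it cannot decrypt. Illustrative only.
from cryptography.fernet import Fernet

# The key is generated and kept client-side (in practice it might be
# derived from the user's password or stored in a local keychain).
user_key = Fernet.generate_key()
client_cipher = Fernet(user_key)

# The client encrypts the chat history before uploading it.
plaintext = b"User: How do I renew my passport?"
ciphertext = client_cipher.encrypt(plaintext)

# The server sees only the ciphertext; without user_key it cannot read it.
print("Stored on server:", ciphertext[:32], b"...")

# Only the client, which holds user_key, can recover the conversation.
print("Decrypted locally:", client_cipher.decrypt(ciphertext))
```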
 AI Snips 
AI Growth Relies On Harvesting User Data
- AI companies harvest vast amounts of user data to differentiate and improve models, often using customer content as exclusive training material.
- This creates a privacy risk because cloud processing and data monetization enable profiling and surveillance.
Disable Unknown AI Features Immediately
- Disable new AI features in apps immediately, and re-enable them only after verifying their privacy practices and trustworthiness.
- Inspect privacy settings and opt out of data-use-for-training clauses in updated terms of service.
AI Collects Behavioral Signals, Not Just Text
- Modern chatbots collect not just content but also metadata and behavioral signals that enable deep profiling beyond the explicit query text.
- Models can infer sensitive information indirectly from usage patterns, increasing privacy harms.
