
Techlore Surveillance Report Microsoft Copilot's Security Failures Are Putting Everyone at Risk
Jan 18, 2026
The discussion dives into the alarming security flaws of Microsoft Copilot that threaten user safety. It also covers the FTC's ban on GM selling driver location data, a significant win for privacy advocates. California's bold step to prohibit data brokers from reselling health information is highlighted. Iran's record internet shutdown raises concerns about communication resilience. Finally, the EFF's guide on age verification shines a light on the risks of online gatekeeping.
AI Snips
Chapters
Transcript
Episode notes
Reprompt Reveals AI Assistant Failure Modes
- Copilot's Reprompt attack combined URL prompt injection, double-request, and chain-request techniques to bypass protections.
- Researchers warn AI assistants must treat external inputs as untrusted and persist safeguards across repeated actions.
Treat Pre-Filled AI Prompts With Caution
- Avoid clicking links that pre-fill AI prompts and inspect any pre-filled prompt before running it.
- Close sessions that request unexpected personal information and report suspicious behavior to the vendor.
Repeated AI Releases Signal Reckless Priorities
- Microsoft keeps releasing experimental AI features despite repeated security and privacy failures.
- Henry Fisher argues this pattern shows recklessness and poor user-data prioritization by Microsoft leadership.
