
Cyber Security Headlines Department of Know: Azure security pitfalls, retailer cyberattack profits, Aardvark eats bugs
Nov 3, 2025

Join Davi Ottenheimer, VP of Digital Trust and Ethics at Inrupt, and Rob Teel, Field CTO at GigaOm, as they dive into critical cybersecurity insights. They explore the implications of the recent F5 breach, question the value of Microsoft's new memory scan feature, and discuss the controversial use of LinkedIn data for AI training. The conversation also highlights how retailer cyberattacks can inadvertently boost competitors' sales, and looks at Azure's delay in making private subnets the default. It's a jam-packed dialogue on the future of technology and security!
Origin-Based Input Separation Still Fails
- OpenAI Atlas failed to separate trusted inputs from untrusted content, creating classic browser-origin vulnerabilities.
- Davi Ottenheimer and Rob Teel stress this is an old, well-known class of weakness that developers must not repeat.
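The separation the panelists call for can be sketched in a few lines. This is a minimal illustration, not anything from OpenAI Atlas: it assumes a hypothetical assistant pipeline with an allow-list of trusted origins, where only content from those origins may carry instructions and everything else is tagged as inert data.

```python
# Minimal sketch of origin-based input separation.
# TRUSTED_ORIGINS and classify_input are hypothetical names for illustration.
from urllib.parse import urlsplit

TRUSTED_ORIGINS = {"https://app.example.com"}  # assumed allow-list


def origin_of(url: str) -> str:
    """Reduce a URL to its origin (scheme + host + port)."""
    parts = urlsplit(url)
    return f"{parts.scheme}://{parts.netloc}"


def classify_input(url: str, text: str) -> dict:
    """Tag input as an 'instruction' only when its origin is trusted;
    untrusted page content is tagged 'data' so it is never executed."""
    role = "instruction" if origin_of(url) in TRUSTED_ORIGINS else "data"
    return {"role": role, "content": text}


# Trusted UI input may steer the agent; scraped page text may not.
print(classify_input("https://app.example.com/chat", "summarize this")["role"])
print(classify_input("https://evil.example.net/page", "ignore prior rules")["role"])
```

The point of the sketch is that the trust decision keys on *where* content came from, never on what it says, which is exactly the browser-origin discipline the snip says must not be forgotten.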
Detecting Crashes Isn’t Fixing Memory
- Treat post-crash diagnostics as detection, not a fix; push for prevention like ECC and continuous memory integrity checks.
- Davi warns that detecting memory problems after a BSoD acknowledges they exist but does nothing to stop them from recurring.
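The prevention-versus-detection distinction can be made concrete. This is a hedged sketch, not Microsoft's scanner: it assumes a hypothetical in-process buffer that stores a checksum at write time and re-verifies it before every read, so corruption is caught before it can cause a crash rather than diagnosed afterwards.

```python
# Minimal sketch of continuous memory-integrity checking.
# CheckedBuffer is a hypothetical name for illustration.
import zlib


class CheckedBuffer:
    """Buffer that verifies a CRC32 checksum on every read."""

    def __init__(self, data: bytes):
        self._data = bytearray(data)
        self._crc = zlib.crc32(self._data)  # recorded at write time

    def read(self) -> bytes:
        # Verify integrity *before* the data is used, not after a crash.
        if zlib.crc32(self._data) != self._crc:
            raise RuntimeError("memory corruption detected before use")
        return bytes(self._data)


buf = CheckedBuffer(b"config-state")
print(buf.read())        # passes the integrity check
buf._data[0] ^= 0xFF     # simulate a single bit flip
```

ECC memory does the same check-on-access in hardware; the sketch just shows why catching corruption at read time is prevention, while a post-crash scan is only diagnosis.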
Leaked Code Multiplies Risk
- The F5 breach exposed source code, configurations, and undisclosed vulnerabilities; the impact may be downplayed as "limited."
- Both panelists caution that stolen code and vulnerability lists can massively escalate attacker capabilities.
