
Security Intelligence Is ChatGPT Atlas safe? Plus: invisible worms, ghost networks and the AWS outage
Oct 29, 2025
Guests Dave McGinnis, an expert in threat detection; J.R. Rao, a security architecture specialist; and Suja Viswesan, a VP of security products, discuss the alarming risks associated with AI browsers like ChatGPT Atlas. They explore security measures needed to protect these platforms, including prompt sanitization and observability. The conversation shifts to a ghost network on YouTube, fueled by fake tutorials that distribute malware. Finally, they examine the implications of emerging malware like Glassworm and the importance of resilient cloud architectures.
AI Snips
Chapters
Transcript
Episode notes
Avoid AI Browsers For Sensitive Work
- Avoid using AI browsers like ChatGPT Atlas for sensitive or enterprise tasks right now.
- Use them only in isolated machines without banking or credential data, as Suja recommends.
Prompt Injection Is A Core Frontier
- Prompt injection is a frontier security problem where attackers mix instructions with data to make models misbehave.
- Solutions require detection, sanitization, provenance tracking, sandboxing, and LLM firewalls, per J.R. Rao.
Require Basic Controls And Observability
- Implement basic protective controls: visibility, identity, permissioning, monitoring, and response for AI browsers.
- Demand transparency and auditability so enterprises can detect and act when things go wrong.
