EP213 From Promise to Practice: LLMs for Anomaly Detection and Real-World Cloud Security
Mar 3, 2025
Yigael Berger, Head of AI at Sweet Security, shares insights into the application of large language models (LLMs) for cloud security. He discusses the gap between LLMs' potential and their real-world effectiveness, especially in anomaly detection. Berger explains how LLMs analyze event sequences to enhance accuracy while managing noise. He also addresses the challenges SOC teams face with false positives and negatives, emphasizing the psychological barriers to embracing AI in security. Ultimately, he posits that LLMs may tip the balance in favor of defenders in the cybersecurity battle.
The podcast discusses how large language models (LLMs) can significantly enhance anomaly detection by providing nuanced insights into security incidents.
It emphasizes the necessity of balancing detection capabilities with actionable intelligence to improve incident management and mitigate threats effectively.
Deep dives
Innovative Use of LLMs in Security
The podcast highlights a creative application of large language models (LLMs) for anomaly detection within security frameworks. Instead of merely summarizing text or generating simple features, LLMs can be integrated deeply into security processes to enhance incident assessment. By using LLMs to analyze input logs, security teams can surface the most significant evidence of potential threats, gaining a more nuanced understanding of security incidents. This approach goes beyond traditional methods, adding a storytelling element to security findings that makes threat detection both more accurate and easier to act on.
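To make the idea above concrete, here is a minimal sketch of the general pattern: format a sequence of cloud log events into a prompt that asks an LLM to single out the strongest evidence of an attack. This is an illustrative assumption, not Sweet Security's actual pipeline; the event fields, prompt wording, and the injectable `llm_call` wrapper are all hypothetical.

```python
import json

def build_anomaly_prompt(events):
    """Format a sequence of cloud log events into a prompt asking an
    LLM to single out the strongest evidence of a potential threat.
    (Illustrative only -- not the guest's actual implementation.)"""
    lines = [f"{i}. {json.dumps(e, sort_keys=True)}" for i, e in enumerate(events, 1)]
    return (
        "You are a cloud security analyst. Given the event sequence below, "
        "return the numbers of the events that are the most significant "
        "evidence of a potential attack, and briefly explain why.\n\n"
        + "\n".join(lines)
    )

def rank_events(events, llm_call):
    """llm_call is any callable prompt -> str (e.g. a wrapper around a
    hosted model). Kept injectable so the sketch runs without an API key."""
    return llm_call(build_anomaly_prompt(events))

# Example with a stand-in "model" so the sketch is self-contained:
events = [
    {"action": "AssumeRole", "src_ip": "10.0.0.5"},
    {"action": "ListBuckets", "src_ip": "203.0.113.9"},
    {"action": "GetObject", "src_ip": "203.0.113.9", "count": 4000},
]
fake_llm = lambda prompt: "Events 2-3: external IP enumerating and bulk-reading storage."
print(rank_events(events, fake_llm))
```

The point of the pattern is that the model sees the whole event sequence at once, so its answer can weigh context (an external IP that first enumerates, then bulk-reads) rather than scoring each log line in isolation.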
Shifting the Conversation on Detection and Response
The discussion emphasizes the importance of moving beyond merely capturing logs to effectively addressing and mitigating threats in real time. It is acknowledged that although the goal of zero false positives is unrealistic, minimizing response times is crucial in managing noise generated by security alerts. The implementation of automation and AI-driven agents is identified as a vital strategy for streamlining incident management and improving response efficiency. This results in a practical framework that balances detection with actionable intelligence, ultimately enhancing the overall security posture.
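One way to picture the "balance detection with actionable intelligence" framework described above is a triage step that ranks alerts for analysts and auto-closes the lowest-risk ones, cutting noise without discarding it. This is a generic sketch under assumed `severity` and `confidence` fields, not a method from the episode.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Alert:
    priority: float                      # negated score, so heapq pops highest risk first
    name: str = field(compare=False)

def triage(raw_alerts, auto_close_below=0.2):
    """Split alerts into an analyst queue (highest risk first) and an
    auto-closed list. The severity/confidence fields are hypothetical."""
    queue, auto_closed = [], []
    for a in raw_alerts:
        score = a["severity"] * a["confidence"]
        if score < auto_close_below:
            auto_closed.append(a["name"])      # suppressed, but still logged
        else:
            heapq.heappush(queue, Alert(-score, a["name"]))
    ordered = [heapq.heappop(queue).name for _ in range(len(queue))]
    return ordered, auto_closed

alerts = [
    {"name": "crypto-miner", "severity": 0.9, "confidence": 0.8},
    {"name": "port-scan", "severity": 0.5, "confidence": 0.6},
    {"name": "benign-cron", "severity": 0.2, "confidence": 0.5},
]
ordered, auto_closed = triage(alerts)
print(ordered, auto_closed)  # ['crypto-miner', 'port-scan'] ['benign-cron']
```

In practice the scoring callable could itself be an LLM-driven assessment rather than a fixed product of fields; the structure of the queue stays the same either way.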
Balancing LLMs' Impact on Attackers and Defenders
A nuanced debate about the influence of LLMs on the cybersecurity landscape reveals that while they empower attackers, their true potential lies in aiding defenders. LLMs can enhance defenders' capabilities by bridging knowledge gaps and providing automated assistance in security operations, potentially tipping the scale in favor of defense. This sentiment is bolstered by the idea that attackers will always require a high level of sophistication, whereas defenders primarily need diligence and systematic support to combat threats effectively. The evolution of AI in the cybersecurity field is thus seen as a transformative force that can fundamentally improve defensive strategies.
Where do you see a gap between the “promise” of LLMs for security and how they are actually used in the field to solve customer pains?
I know you use LLMs for anomaly detection. Can you explain how that “trick” works? What is it good for, and how effective do you think it will be?
Can you compare this to other anomaly detection methods? Also, won’t this be costly - how do you manage to keep inference costs under control at scale?
SOC teams often grapple with the tradeoff between “seeing everything” so that they never miss any attack, and handling too much noise. What are you seeing emerge in cloud D&R to address this challenge?
We hear from folks who have developed automated approaches to handle review queues previously worked by people. Inevitably, even when precision and recall can be shown to be superior, executive or customer backlash comes hard after a false negative (or a flood of false positives). Have you seen this phenomenon, and if so, what have you learned about handling it?
What are other barriers that need to be overcome so that LLMs can push the envelope further for improving security?
So from your perspective, LLMs are going to tip the scale in whose favor - cybercriminals or defenders?