Ely Kahn, VP of Product at SentinelOne, discusses the impact of generative AI on cybersecurity: simplifying complex processes and empowering analysts. Topics include the concerns that come with such models, how AI-assisted analysts compare to those working without one, what is stopping models from going into full autopilot, and the use of multiple LLMs.
LLMs like Purple AI empower junior analysts in cybersecurity with advanced capabilities.
Integration of multiple LLMs like GPT-4 and GPT-3.5 enhances security investigations efficiently.
Deep dives
Role of LLMs in Transforming Junior Analysts' Work
Using LLMs like Purple AI can significantly impact junior to mid-tier analysts, enhancing their capabilities and transforming their roles. Junior analysts can leverage LLMs to perform advanced activities such as threat hunting and investigations without deep knowledge of a query language or threat intelligence. By semantically matching their questions against a knowledge base, these analysts quickly receive structured queries for efficient investigations. The transparency built into the LLM conversion process helps junior analysts become proficient query writers and accelerates their learning curve.
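To make the mechanics concrete, here is a minimal sketch of that question-to-query pattern, assuming a generic embedding function, an LLM callable, and a knowledge base of query templates; none of these names reflect Purple AI's actual implementation.

```python
# Hypothetical sketch of the natural-language-to-query pattern described above.
# Embed the analyst's question, retrieve the closest documented query pattern,
# and let an LLM fill in the structured query from that template.

from dataclasses import dataclass

@dataclass
class QueryPattern:
    description: str            # e.g. "find processes spawning PowerShell with encoded commands"
    query_template: str         # structured hunting query with placeholders
    embedding: list[float]      # precomputed embedding of the description

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def translate_question(question: str, knowledge_base: list[QueryPattern],
                       embed, llm) -> str:
    """Semantically match a plain-English question to a known query pattern,
    then ask an LLM to produce the final structured query, showing its work."""
    q_vec = embed(question)
    best = max(knowledge_base, key=lambda p: cosine_similarity(q_vec, p.embedding))
    prompt = (
        f"Analyst question: {question}\n"
        f"Closest documented pattern: {best.description}\n"
        f"Template: {best.query_template}\n"
        "Return the completed query and a short explanation of each clause."
    )
    return llm(prompt)   # the analyst sees both the generated query and the reasoning
</code>
```

Returning the explanation alongside the generated query is what gives a junior analyst the transparency to gradually pick up the query language rather than treating the tool as a black box.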
Comparison of Senior and Junior Analyst Performance with LLMs
An internal study compared a senior principal solution architect against six more junior sales engineers using Purple AI, and it showcased the speed and efficiency gains for junior analysts: in a capture-the-flag challenge, the senior expert ranked last, while the juniors outperformed him with the assistance of Purple AI. The study highlighted the substantial benefits of LLMs for junior to mid-tier analysts, emphasizing their transformative potential for less experienced team members.
The Evolution and Future Applications of LLMs in Cybersecurity
The evolving landscape of LLMs opens new horizons in security investigations and response. With the potential to advance beyond co-pilot and assistant roles, LLMs could take on more autopilot functions in certain security operations. The integration of LLMs with agent frameworks reminiscent of SOAR platforms marks a significant shift toward automating security operations. The future trajectory of LLMs aims at stronger general reasoning capabilities to streamline investigative processes and improve response efficiency.
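The shift from co-pilot to autopilot can be pictured as an agent loop that chooses investigative tools on its own but still gates response actions behind a human. The sketch below is purely illustrative, with hypothetical tool names and an approval callback; it is not a SentinelOne or SOAR product design.

```python
# Illustrative agent loop for the co-pilot-to-autopilot spectrum discussed above.
# Read-only tools run freely; response actions pause for human approval,
# which is the part that keeps the loop short of full autopilot.

READ_ONLY_TOOLS = {"search_events", "enrich_ioc", "summarize_alert"}
RESPONSE_TOOLS = {"isolate_host", "kill_process", "block_hash"}

def run_investigation(alert: dict, llm_plan, tools: dict, require_approval,
                      max_steps: int = 20) -> list[dict]:
    """Iteratively let the LLM choose the next tool until it declares the case closed."""
    history: list[dict] = []
    for _ in range(max_steps):                   # hard cap so the loop cannot run away
        step = llm_plan(alert, history)          # e.g. {"tool": ..., "args": {...}, "done": bool}
        if step.get("done"):
            break
        tool_name = step["tool"]
        if tool_name not in READ_ONLY_TOOLS | RESPONSE_TOOLS:
            raise ValueError(f"unknown tool requested: {tool_name}")
        if tool_name in RESPONSE_TOOLS and not require_approval(step):
            history.append({"tool": tool_name, "result": "skipped: analyst declined"})
            continue
        result = tools[tool_name](**step["args"])
        history.append({"tool": tool_name, "result": result})
    return history
```

Widening the set of actions the loop may take without approval is one way to move along the co-pilot-to-autopilot spectrum incrementally.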
Enhanced Investigative Capabilities Through Dynamic LLM Structures
Dynamic LLM structures, such as a constellation of LLMs each handling specific tasks, offer flexibility and efficiency in security investigations. Mixing models like GPT-4 and GPT-3.5, and experimenting with Anthropic models on AWS Bedrock, provides capabilities tailored to diverse security behaviors. The use of vector databases and embeddings in RAG (retrieval-augmented generation) architectures elevates AI-related activities and enables a more nuanced approach to security investigations, unlocking new use cases and improving feature discoverability.
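A minimal sketch of that multi-model, RAG-grounded setup might look like the following, assuming a hypothetical task router, vector database client, and model-calling helper; the routing rules and model identifiers are illustrative assumptions, not details confirmed in the episode.

```python
# Sketch of the "constellation of LLMs" idea: route each task to a suitable model
# and ground the prompt with vector-database retrieval (RAG).

def route_model(task: str) -> str:
    """Pick a model per task type: heavier reasoning to GPT-4, lighter
    summarization to GPT-3.5, with an Anthropic model on Bedrock as an alternative."""
    if task in {"multi_step_investigation", "query_generation"}:
        return "gpt-4"
    if task in {"alert_summary", "triage_note"}:
        return "gpt-3.5-turbo"
    return "anthropic.claude-v2"   # experimental path via AWS Bedrock (assumed identifier)

def answer_with_rag(question: str, task: str, embed, vector_db, call_model) -> str:
    """Embed the question, pull the most relevant context from the vector DB,
    and send the grounded prompt to whichever model the router selected."""
    context_docs = vector_db.search(embed(question), top_k=5)   # assumed client API
    prompt = "Context:\n" + "\n".join(context_docs) + f"\n\nQuestion: {question}"
    return call_model(route_model(task), prompt)
```

Splitting work this way lets cheaper, faster models handle routine steps while reserving the strongest model for the reasoning-heavy parts of an investigation.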
How can AI change a security analyst's workflow? Ashish and Caleb caught up with Ely Kahn, VP of Product at SentinelOne, to discuss the revolutionary impact of generative AI on cybersecurity. Ely spoke about the challenges and solutions in integrating AI into cybersecurity operations, highlighting how it can simplify complex processes and empower junior to mid-tier analysts.
Questions asked:
(00:00) Introduction
(03:27) A bit about Ely Kahn
(04:29) Current State of AI in Cybersecurity
(06:45) How could AI impact the cybersecurity user workflow?
(08:37) What are some of the concerns with such a model?
(14:22) How does it compare to an analyst not using this model?
(21:41) What's stopping models from going into autopilot?