

Finding Large Bounties with Large Language Models - Nico Waisman - ASW #351
Oct 7, 2025
Nico Waisman, a seasoned security leader and former CISO at Lyft, dives into the world of LLM-driven pentesting, focusing on XBOW's impressive results on bug bounty platforms. He explains how LLMs can identify flaws at scale using feedback loops, and why real-time validation is essential to reduce false positives. Nico also discusses treating hallucinations as an asset, scaling tests with precision, and the interplay between LLMs and fuzzing. Finally, he highlights the need for human oversight in assessing vulnerabilities to strengthen application security.
Episode notes
LLM Climbed Bug Bounty Leaderboards
- XBOW ran an LLM against live bug bounty targets and climbed the leaderboards to top positions.
- The team used that experiment to validate product scale and improve real-world offensive workflows.
Validate Findings Before Human Review
- Use validators as a second pair of eyes to reduce false positives before human review (a minimal sketch follows this list).
- Run validators early and iterate them against corner cases to lower noise.
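The validator idea lends itself to a small sketch: replay each LLM-reported candidate against the live target and only queue findings whose evidence actually reproduces. The `Finding` shape, the marker-based check, and the `triage` helper below are assumptions for illustration, not XBOW's actual implementation.

```python
# Hypothetical validator pass: re-test each LLM-reported finding against the
# live target before it reaches a human reviewer. Shapes and checks here are
# illustrative assumptions, not the product's real code.
from dataclasses import dataclass

import requests


@dataclass
class Finding:
    url: str              # endpoint the LLM flagged
    payload: dict         # request parameters that allegedly trigger the bug
    expected_marker: str  # evidence string the exploit should produce


def validate(finding: Finding, timeout: float = 10.0) -> bool:
    """Replay the candidate exploit and confirm the evidence is reproducible."""
    try:
        resp = requests.get(finding.url, params=finding.payload, timeout=timeout)
    except requests.RequestException:
        return False  # unreachable target counts as unvalidated, not as a bug
    return finding.expected_marker in resp.text


def triage(candidates: list[Finding]) -> list[Finding]:
    """Keep only findings that reproduce; everything else is treated as noise."""
    return [f for f in candidates if validate(f)]
```

Running a cheap, automated check like this early keeps the false-positive load off human reviewers, and the check itself can be iterated against corner cases as new noise patterns show up.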
Prioritize ROI Over Raw Scan Volume
- Bug bounty ROI requires fingerprinting the attack surface to avoid duplicate noisy targets (see the sketch after this list).
- Target selection matters more than raw scan volume when points are limited.
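To make the fingerprinting point concrete, here is a minimal sketch of deduplicating near-identical targets before spending scan effort on them: hash a few cheap response signals per host and keep one representative per fingerprint. The signal choice and hashing scheme are assumptions for illustration, not the workflow described in the episode.

```python
# Hypothetical target fingerprinting: cluster hosts that look like the same
# deployment so duplicates aren't scanned (and reported) twice.
import hashlib

import requests


def fingerprint(host: str, timeout: float = 5.0) -> str:
    """Derive a coarse fingerprint from headers and a probe of the root page."""
    resp = requests.get(f"https://{host}/", timeout=timeout)
    signals = "|".join([
        resp.headers.get("Server", ""),
        resp.headers.get("X-Powered-By", ""),
        str(resp.status_code),
        hashlib.sha256(resp.content).hexdigest()[:16],  # truncated body hash
    ])
    return hashlib.sha256(signals.encode()).hexdigest()


def dedupe(hosts: list[str]) -> list[str]:
    """Return one representative host per fingerprint; skip unreachable hosts."""
    seen: dict[str, str] = {}
    for host in hosts:
        try:
            fp = fingerprint(host)
        except requests.RequestException:
            continue
        seen.setdefault(fp, host)
    return list(seen.values())
```

When bounty points are limited, spending them on one representative of each distinct deployment rather than on every clone of the same stack is what the ROI argument comes down to.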