Hacker Valley Studio

Hacker Valley Media
Jan 29, 2026 • 33min

Why MFA Isn’t the Safety Net You Think It Is with Yaamini Barathi Mohan

Phishing didn’t get smarter; it got better at looking normal. What used to be obvious scams now blend directly into the platforms, workflows, and security controls people trust every day. In this episode, Ron sits down with Yaamini Barathi Mohan, 2024 DMA Rising Star and Co-Founder & CPO of Secto, to break down how modern phishing attacks bypass MFA, abuse trusted services like Microsoft 365, and ultimately succeed inside the browser. Together, they examine why over-reliance on automation creates blind spots, how zero trust becomes practical at the browser layer, and why human judgment is still the deciding factor as attackers scale with AI.

Impactful Moments:
00:00 - Introduction
02:44 - Cloud infrastructure powering crime at scale
07:45 - What phishing 2.0 really means
12:10 - How MFA gets bypassed in real attacks
15:30 - Why the browser is the final control point
18:40 - AI reducing SOC alert fatigue
23:07 - Mentorship shaping cybersecurity careers
27:00 - Thinking like attackers to defend better
31:15 - When trust becomes the attack surface

Links:
Connect with our guest, Yaamini Barathi Mohan, on LinkedIn: https://www.linkedin.com/in/yaamini-mohan/
Check out our upcoming events: https://www.hackervalley.com/livestreams
Join our creative mastermind and stand out as a cybersecurity professional: https://www.patreon.com/hackervalleystudio
Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com
Continue the conversation by joining our Discord: https://hackervalley.com/discord
Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/
Jan 25, 2026 • 37min

When Cybercrime Learned How to Make Money and Never Looked Back with Graham Cluley

Cybersecurity didn’t start as a billion-dollar crime machine. It started as pranks, ego, and curiosity. That origin story explains almost everything that’s breaking today. Ron sits down with Graham Cluley, one of the earliest antivirus developers turned trusted cyber voice, to trace how malware evolved from digital graffiti into organized financial warfare. From floppy disks and casino-style viruses to ransomware, extortion, and agentic AI, the conversation shows how early decisions still shape today’s most dangerous assumptions. Graham also explains why AI feels inevitable, but still deeply unfinished inside modern organizations.

Impactful Moments:
00:00 - Introduction
04:16 - Malware before money existed
07:30 - Cheesy biscuits changed cybersecurity
13:10 - When documents became dangerous
14:33 - Crime replaced curiosity
15:23 - Sony proved no one was safe
20:15 - Reporting hacks without causing harm
24:01 - AI replacing penetration testers
29:18 - Agentic AI shifts the threat model
36:30 - Why rushing AI breaks trust

Links:
Connect with our guest on LinkedIn: https://www.linkedin.com/in/grahamcluley/
Check out our upcoming events: https://www.hackervalley.com/livestreams
Join our creative mastermind and stand out as a cybersecurity professional: https://www.patreon.com/hackervalleystudio
Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com
Continue the conversation by joining our Discord: https://hackervalley.com/discord
Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/
Jan 18, 2026 • 37min

When Automation Outruns Control with Joshua Bregler

AI doesn’t break security; it exposes where it was already fragile. When automation starts making decisions faster than humans can audit, AppSec becomes the only thing standing between scale and catastrophe. In this episode, Ron sits down with Joshua Bregler, Senior Security Manager at McKinsey’s QuantumBlack, to dissect how AI agents, pipelines, and dynamic permissions are reshaping application security. From prompt chaining attacks and MCP server sprawl to why static IAM is officially obsolete, this conversation gets brutally honest about what works, what doesn’t, and where security teams are fooling themselves.

Impactful Moments:
00:00 - Introduction
02:15 - AI agents create identity chaos
04:00 - Static permissions officially dead
07:05 - AI security is still AppSec
09:30 - Prompt chaining becomes invisible attack
12:23 - Solving problems vs solving AI
15:03 - Ethics becomes an AI blind spot
17:47 - Identity is the next security failure
20:07 - Frameworks no longer enough alone
26:38 - AI fixing insecure code in real time
32:15 - Secure pipelines before production

Links:
Connect with our guest, Joshua Bregler, on LinkedIn: https://www.linkedin.com/in/breglercissp/
Check out our upcoming events: https://www.hackervalley.com/livestreams
Join our creative mastermind and stand out as a cybersecurity professional: https://www.patreon.com/hackervalleystudio
Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com
Continue the conversation by joining our Discord: https://hackervalley.com/discord
Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/
Jan 15, 2026 • 34min

The Day AI Stopped Asking for Permission with Marcus J. Carey

Marcus J. Carey, Principal Research Scientist at ReliaQuest and a cybersecurity whiz, dives deep into the seismic shift within AI's role in production environments. He highlights how AI has transitioned from mere advisors to autonomous agents, creating new trust dynamics and risk factors. Key topics include 'prompt debt' and 'vibe coding,' showcasing the unforeseen technical challenges of rapid AI integration. Carey emphasizes the importance of retaining coding skills to navigate the evolving landscape where domain expertise and human intuition are vital for effective AI collaboration.
Jan 8, 2026 • 35min

When AI Ships the Code, Who Owns the Risk with Varun Badhwar and Henrik Plate

Varun Badhwar, co-founder and CEO of Endor Labs, and Henrik Plate, Principal Security Researcher at Endor Labs, dive into the complexities of AI-assisted software development. They discuss the rapid adoption of MCPs and the emerging security risks, including malicious packages that exploit agents. The conversation highlights the shortcomings of traditional AppSec and argues for embedding security in IDEs. With insights from their 2025 State of Dependency Management report, they stress the importance of integrating security from the start to combat rising vulnerabilities.
Jan 1, 2026 • 28min

Think Like a Hacker Before the Hack Happens with John Hammond

What if the most dangerous hackers are the ones who never touch a keyboard? The real threat isn't just about stolen credentials or ransomware; it's about understanding how attackers think before they even strike. In cybersecurity, defense starts with offense, and the best defenders are those who've walked in the hacker's shoes. In this episode, Ron sits down with John Hammond, Principal Security Researcher at Huntress and one of cybersecurity's most recognizable educators. John shares his journey from Coast Guard enlistee to YouTube creator, building an entire media company around ethical hacking. They dig into the balance between public research and responsible disclosure, the rise of AI-augmented attacks, and why identity is now the biggest attack surface in modern enterprises.

Impactful Moments:
00:00 - Introduction
01:00 - AI weaponized in cyber espionage
05:00 - Learning by teaching publicly
09:00 - Balancing curiosity with responsible disclosure
13:00 - Building a creator company
16:00 - Identity as the new frontier
20:00 - AI agents running breach simulations
22:00 - Predictions for cybersecurity in 2026
25:00 - Ron's hacking habit confession

Links:
John Hammond LinkedIn: https://www.linkedin.com/in/johnhammond010/
John Hammond YouTube: https://www.youtube.com/@_JohnHammond
Article for Discussion: https://www.reuters.com/world/europe/russian-defense-firms-targeted-by-hackers-using-ai-other-tactics-2025-12-19/
Check out our upcoming events: https://www.hackervalley.com/livestreams
Join our creative mastermind and stand out as a cybersecurity professional: https://www.patreon.com/hackervalleystudio
Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com
Continue the conversation by joining our Discord: https://hackervalley.com/discord
Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/
Dec 18, 2025 • 34min

Breaking Into Banks and Bypassing Modern Security with Greg Hatcher and John Stigerwalt

Three banks in four days isn't just a bragging right for penetration testers. It's a wake-up call showing that expensive security tools and alarm systems often fail when tested by skilled operators who understand both human behavior and technical vulnerabilities. Greg Hatcher and John Stigerwalt, co-founders of White Knight Labs, talk about their latest physical penetration tests on financial institutions, manufacturing facilities protecting COVID-19 vaccine production, and why their new Server 2025 course had to rewrite most common Active Directory tools. They share stories of armed guards, police gun draws, poison ivy reconnaissance, and a bag of chips that saved them from serious trouble. The conversation reveals why EDR alone won't stop ransomware, how offline backups remain the exception rather than the rule, and what security controls actually work when attackers bring custom tooling.

Impactful Moments:
00:00 - Intro
01:00 - New training courses launched
03:00 - Server 2025 breaks standard tools
05:00 - COVID facility physical penetration
07:00 - Armed guards change the game
10:00 - Police draw guns on operators
13:00 - Bag of chips saves the day
15:00 - Nighttime versus daytime physical tests
18:00 - VIP home security assessments
20:00 - 2026 threat predictions
22:00 - Why EDR doesn't stop ransomware
27:00 - Low cost ransomware simulation ROI
29:00 - Three banks in four days
32:00 - Deepfake as the new EDR

Links:
Connect with our guests –
Greg Hatcher: https://www.linkedin.com/in/gregoryhatcher2/
John Stigerwalt: https://www.linkedin.com/in/john-stigerwalt-90a9b4110/
Learn more about White Knight Labs: https://www.whiteknightlabs.com
Check out our upcoming events: https://www.hackervalley.com/livestreams
Join our creative mastermind and stand out as a cybersecurity professional: https://www.patreon.com/hackervalleystudio
Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com
Continue the conversation by joining our Discord: https://hackervalley.com/discord
Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/
Dec 11, 2025 • 34min

Defending Your Cyber Systems and Your Mental Attack Surface with Chris Hughes

When your firewall forgets to buckle up, the crash doesn’t happen in the network first; it happens in your blind spots. In this episode, Ron is joined by returning guest Chris Hughes, Co-Founder of Aquia and host of the Resilient Cyber podcast. Chris helps reframe vulnerability work as exposure management, connect technical risk to human resilience, and break down the scoring and runtime tools security teams actually need today. Expect clear takeaways on EPSS, reachability analysis, ADR, AI’s double-edged role, and the one habit Chris swears by as a CEO. This episode fuses attack-surface reality with mental-attack-surface strategy so you walk away with both tactical moves and daily practices that protect systems and people.

Impactful Moments:
00:00 - Intro
02:00 - Breaking: Fortinet WAF zero-day & visibility lesson
05:00 - Meet Chris Hughes: CEO, author, Resilient Cyber host
08:00 - Mental attack surface explained and why it matters
18:00 - From CVSS to EPSS, reachability, and ADR realities
21:00 - AI as force-multiplier for attackers and defenders
24:30 - Exposure vs vulnerability naming, market trends
26:00 - Chris’s book & how to follow his work
30:00 - Ron’s solo: 3 pillars to patch your mindset
34:00 - Closing takeaways and subscribe reminder

Links:
Connect with our guest, Chris Hughes, on LinkedIn: https://www.linkedin.com/in/resilientcyber/
Check out the article on the Fortinet exploit here: https://www.helpnetsecurity.com/2025/11/14/fortinet-fortiweb-zero-day-exploited/
Check out our upcoming events: https://www.hackervalley.com/livestreams
Join our creative mastermind and stand out as a cybersecurity professional: https://www.patreon.com/hackervalleystudio
Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com
Continue the conversation by joining our Discord: https://hackervalley.com/discord
Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/
Dec 4, 2025 • 30min

Thriving Beyond Human Labor with Context-Powered AI with Daniel Miessler

In this conversation, Daniel Miessler, a cybersecurity and AI expert and founder of Unsupervised Learning, explores the evolving landscape of work in an AI-dominated era. He argues that human labor itself may be an economic bubble, highlighting how businesses are thriving with fewer workers. Daniel discusses his experiences at Apple and the importance of building careers around problem-solving. He emphasizes context management in AI's potential and shares inspiring stories of youthful innovation, pointing toward a future where adaptation is key.
Dec 2, 2025 • 20min

Building EDR for AI: Controlling Autonomous Agents Before They Go Rogue with Ron Eddings

AI agents aren't just reacting anymore; they're thinking, learning, and sometimes deleting your entire production database without asking. The real question isn't if your AI agent will be hacked, it's when, and whether you'll have the right hooks in place to stop it before it happens. In this episode, Ron breaks down the ChatGPT Atlas vulnerability that shocked researchers, revealing how malicious prompts can turn AI assistants against their own users by bypassing safeguards and accessing file systems. He presents his new talk "Hooking Before Hacking," introducing a framework for applying EDR principles (prevention, detection, and response) to AI agents before they execute unauthorized commands. From pre-tool use hooks that catch malicious intent to one-time passwords that put humans back in the loop, this episode shares practical security controls you can implement today to prevent your AI agents from going rogue.

Impactful Moments:
00:00 - Introduction
02:00 - ChatGPT Atlas vulnerability exposed
04:00 - AI technology outpacing security guardrails
05:00 - Guardrail jailbreaks and prompt injection
06:00 - AI agents deleting production databases
07:00 - EDR principles for AI agents
09:00 - Pre-tool use hooks catch intention
11:00 - User prompt sanitization prevents leaks
14:00 - One-time passwords for agent workflows
16:00 - Automation mistakes across 10 years

Links:
Connect with Ron on LinkedIn: https://www.linkedin.com/in/ronaldeddings/
Check out the entire article here: https://www.yahoo.com/news/articles/cybersecurity-experts-warn-openai-chatgpt-101658986.html
GitHub Repository: https://hackervalley.com/hooking-before-hacking
See Ron's "Hooking Before Hacking" presentation slides here: http://hackervalley.com/hooking-before-hacking-presentation
Check out our website: https://hackervalley.com/
Upcoming events: https://www.hackervalley.com/livestreams
Love Hacker Valley Studio? Pick up some swag: https://store.hackervalley.com
Continue the conversation by joining our Discord: https://hackervalley.com/discord
Become a sponsor of the show to amplify your brand: https://hackervalley.com/work-with-us/
Join our creative mastermind and stand out as a cybersecurity professional: https://www.patreon.com/hackervalleystudio
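To make the "pre-tool use hook" idea concrete, here is a minimal Python sketch of the pattern the episode describes: intercept an agent's tool call before execution, flag destructive intent, and escalate to a human-in-the-loop check. All names here (pre_tool_use_hook, DESTRUCTIVE_PATTERNS, HookDecision) and the pattern list are illustrative assumptions, not code from Ron's actual framework or the linked repository.

```python
import re
from dataclasses import dataclass

# Illustrative blocklist: commands an autonomous agent should never run
# without human review. A real deployment would tune and extend this list.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-rf\b",        # recursive filesystem deletion
    r"\bDROP\s+TABLE\b",    # destructive SQL DDL
    r"\bDELETE\s+FROM\b",   # bulk SQL deletion
]

@dataclass
class HookDecision:
    allowed: bool        # may the tool call proceed as-is?
    requires_otp: bool   # should a one-time password gate this action?
    reason: str          # why the hook decided what it did

def pre_tool_use_hook(tool_name: str, tool_input: str) -> HookDecision:
    """Inspect a tool call *before* execution, EDR-style: prevention first,
    with escalation to a human when destructive intent is detected."""
    for pattern in DESTRUCTIVE_PATTERNS:
        if re.search(pattern, tool_input, re.IGNORECASE):
            # Destructive intent: block the call and put a human in the loop.
            return HookDecision(False, True, f"matched {pattern!r}")
    return HookDecision(True, False, "no destructive pattern matched")

if __name__ == "__main__":
    print(pre_tool_use_hook("shell", "rm -rf /var/www"))
    print(pre_tool_use_hook("shell", "ls -la /var/www"))
```

The key design point from the episode is ordering: the hook fires before the agent's tool executes, so a matched pattern can pause the workflow and demand a one-time password rather than cleaning up after a deleted database.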
