Risky Bulletin

Between Two Nerds: Why AI in malware is lame

Nov 10, 2025
Tom Uren and The Grugq delve into the absurdity of AI use in cybercrime. They critique Google's AI Threat Tracker report and discuss why LLM-driven malware like PromptSteal is underwhelming. The pair highlight how AI lowers the skill barrier for hackers but introduces unpredictable failures, explore how the illicit AI tooling market is maturing, and debate when AI is genuinely useful in attacks versus where it falls short. The conversation offers a balanced view of AI's role in cyber operations, suggesting its benefits are less magical than often claimed.
AI Snips
INSIGHT

LLMs Replace Scripts But Add Fragility

  • Much of the observed malware uses LLMs by sending a prompt and executing the returned one-line commands.
  • That adds complexity and unpredictability, and often makes an already-solved scripting problem worse.
ANECDOTE

Fancy Bear's LAMEHUG Deployment

  • Google observed a Fancy Bear deployment that Ukrainian analysts nicknamed "LAMEHUG."
  • The hosts suggest inexperienced developers likely produced these awkward AI-driven tools.
INSIGHT

Illicit AI Market Matures For Simple Tasks

  • Underground marketplaces now sell multifunctional AI tools for phishing and malware work.
  • These tools aim to lower entry barriers but often target simple, automatable tasks.