
Grok Goes Rogue: AI Scandals, the Pentagon, and the Alignment Problem
Warning Shots
Analogies for public persuasion
Michael and Liron use bus and CO₂ analogies to show how to make AI risk salient to the public.
In this episode of Warning Shots, John, Liron, and Michael dig into a chaotic week for AI safety, one that perfectly exposes how misaligned, uncontrollable, and politically entangled today’s AI systems already are.
We start with Grok, xAI’s flagship model, which sparked international backlash after generating harmful content and raising serious concerns about child safety and alignment. While some dismiss this as a “minor” issue or simple misuse, the hosts argue it’s a clear warning sign of a deeper problem: systems that don’t reliably follow human values — and can’t be constrained to do so.
From there, the conversation takes a sharp turn as Grok is simultaneously embraced by the U.S. military, igniting fears about escalation, feedback loops, and what happens when poorly aligned models are trained on real-world warfare data. The episode also explores a growing rift within the AI safety movement itself: should advocates focus relentlessly on extinction risk, or meet the public where their immediate concerns already are?
The discussion closes with a rare bright spot — a moment in Congress where existential AI risk is taken seriously — and a candid reflection on why traditional messaging around AI safety may no longer be working. Throughout the episode, one idea keeps resurfacing: AI risk isn’t abstract or futuristic anymore. It’s showing up now — in culture, politics, families, and national defense.
🔎 They explore:
* What the Grok controversy reveals about AI alignment
* Why child safety issues may be the public’s entry point to existential risk
* The dangers of deploying loosely aligned AI in military systems
* How incentives distort AI safety narratives
* Whether purity tests are holding the AI safety movement back
* Signs that policymakers may finally be paying attention
As AI systems grow more powerful and more deeply embedded in society, this episode asks a hard question: If we can’t control today’s models, what happens when they’re far more capable tomorrow?
If it’s Sunday, it’s Warning Shots.
📺 Watch more on The AI Risk Network
🔗 Follow our hosts:
→ Michael - @lethal-intelligence
🗨️ Join the Conversation
Should AI safety messaging focus on extinction risk alone, or start with the harms people already see? Let us know in the comments.
Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe


