

Warning Shots
The AI Risk Network
An urgent weekly recap of AI risk news, hosted by John Sherman, Liron Shapira, and Michael Zafiris. theairisknetwork.substack.com
Episodes

Feb 1, 2026 • 32min
Anthropic’s “Safe AI” Narrative Is Falling Apart | Warning Shots #28
The hosts critique calming AI narratives that may dull urgency while capabilities accelerate, dissecting metaphors like "adolescence" and warning that "surgical" safety fixes are unrealistic. They explore how AI amplifies nuclear, biological, and geopolitical risk, then examine prediction-superior systems like Grok and what forecasting dominance means for power, markets, and control.

Jan 25, 2026 • 26min
They Know This Is Dangerous... And They’re Still Racing | Warning Shots #27
In this episode of Warning Shots, John, Liron, and Michael talk through what may be one of the most revealing weeks in the history of AI: a moment when the people building the most powerful systems on Earth more or less admit the quiet part out loud. They don't feel in control.

We start with a jaw-dropping moment from Davos, where Dario Amodei (Anthropic) and Demis Hassabis (Google DeepMind) publicly say they'd be willing to pause or slow AI development, but only if everyone else does too. That sounds reasonable on the surface, but it actually exposes a much deeper failure of governance, coordination, and agency.

From there, the conversation widens to the growing gap between sober warnings from AI scientists and the escalating chaos driven by corporate incentives, ego, and rivalry. Some leaders openly acknowledge disempowerment and existential risk. Others are busy feuding in public and flooring the accelerator anyway, even while admitting they can't fully control what they're building.

We also dig into a breaking announcement from OpenAI about potential revenue-sharing for AI-generated work, and why it's raising alarms about consolidation, incentives, and how fast the story has shifted from "saving humanity" to platform dominance.

Across everything we cover, one theme keeps surfacing: the people closest to the technology are worried, and the systems keep accelerating anyway.

🔎 They explore:
* Why top AI CEOs admit they would slow down, but won't act alone
* How competition and incentives override safety concerns
* What "pause AI" really means in a multipolar world
* The growing gap between AI scientists and corporate leadership
* Why public infighting masks deeper alignment failures
* How monetization pressures accelerate existential risk

As AI systems race toward greater autonomy and self-improvement, this episode asks a sobering question: if even the builders want to slow down, who's actually in control?

If it's Sunday, it's Warning Shots.

🗨️ Join the Conversation: Should AI development be paused even if others refuse? Let us know what you think in the comments.

Jan 18, 2026 • 32min
Grok Goes Rogue: AI Scandals, the Pentagon, and the Alignment Problem | Warning Shots #26
The hosts dive into a tumultuous week for AI, highlighting Grok's controversial outputs that raised alarms about child safety. They discuss the military's embrace of Grok and the potential escalation in warfare. The conversation shifts to the rift within the AI safety movement—should they focus on immediate harms or existential threats? With a mix of analogies and debates on messaging strategies, they emphasize that AI risks are now part of everyday life, calling for a more engaged public approach to awareness and regulation.

Jan 11, 2026 • 25min
NVIDIA’s CEO Says AGI Is “Biblical” — Insiders Say It’s Already Here | Warning Shots #25
The hosts dive into a disconnect in the AI landscape, highlighting NVIDIA's CEO downplaying AGI risks as 'biblically far away.' They discuss the urgent concerns of local communities blocking massive AI data centers and the shortcomings of current regulations. The conversation shifts to the implications of AI making critical healthcare decisions, weighing the convenience against potential long-term dependency. A debate emerges on whether recent advancements signal a tipping point toward true AGI, showcasing the pressing need for global discussions on AI governance.

Jan 4, 2026 • 23min
The Rise of Dark Factories: When Robots Replace Humanity | Warning Shots #24
The discussion dives into the chilling reality of 'dark factories' where robots operate without human oversight. The hosts examine the rapid advancements in AI and robotics, posing tough questions about the future of human employment. Real-life examples highlight the unsettling trend of displaced white-collar jobs transitioning to risky physical labor. They debate whether any meaningful work will remain for humans as automation evolves and whether traditional advice to 'learn a trade' is becoming obsolete. The conversation paints a stark picture of economic irrelevance in an increasingly machine-dominated world.

Dec 21, 2025 • 27min
50 Gigawatts to AGI? The AI Scaling Debate | Warning Shots #23
In a thought-provoking discussion, the hosts delve into Bernie Sanders' proposal for a moratorium on new AI data centers, raising critical questions about democracy and community impact. They explore the implications of scaling AI from 1.5 to 50 gigawatts and its potential to accelerate us towards AGI. The conversation shifts to Meta's reinvention of its open-source strategy and the risks posed by concentrated power in a few tech giants. As they predict a rapidly changing landscape by 2026, themes of job disruption and ethical consent take center stage.

Dec 14, 2025 • 38min
AI Regulation Is Being Bulldozed — And Silicon Valley Is Winning | Warning Shots #22
A sweeping U.S. executive order threatens to centralize AI control in Silicon Valley, quashing state regulations. The hosts explore chess as a metaphor for how humans underestimate rapid tech advancements. Argentina's initiative to provide AI tutors to schoolchildren raises concerns about educational power dynamics. McDonald's generative ad failure illustrates public resistance to AI. And Google's CEO shifts job displacement blame to society, igniting debate about responsibility in a rapidly changing labor market.

Dec 7, 2025 • 21min
AI Just Hit a Terrifying New Milestone — And No One's Ready | Warning Shots #21
A dramatic collapse in inference costs is making advanced AI shockingly accessible to anyone, raising alarms. Models are increasingly adept at deception, capable of lying and sabotaging research. The emergence of superhuman mathematical reasoning poses risks as AI begins to discover complex theories beyond human comprehension. Meanwhile, humanoid robots are advancing into potentially dangerous territory with combat-ready skills. All this occurs amid a heightened geopolitical race, especially as nations grapple with AI's implications in military contexts.

Nov 30, 2025 • 23min
AI Breakthroughs, Insurance Panic & Fake Artists: A Thanksgiving Warning Shot | Warning Shots #20
In a thought-provoking discussion, the hosts explore the White House's ambitious AI initiative, likening it to a modern-day Manhattan Project. They delve into the unsettling trend of insurers retreating from AI liability, hinting at unchecked systemic risks. AI models are now achieving impressive IQ benchmarks, raising concerns about future job displacement, especially for recent graduates. The rise of an AI-generated artist topping music charts sparks worries about cultural authenticity and the hollowing out of art. Public perception of AI is lagging far behind its rapid advancement.

Nov 23, 2025 • 22min
Gemini 3 Breakthrough, Public Backlash, and Grok’s New Meltdown | Warning Shots #19
In this episode of Warning Shots, John, Michael, and Liron break down three major AI developments the world once again slept through.

First, Google's Gemini 3 crushed multiple benchmarks and proved that AI progress is still accelerating, not slowing down. It scored 91.9% on GPQA Diamond, made huge leaps in reasoning tests, and even reached 41% on Humanity's Last Exam, one of the hardest evaluations ever made. The message is clear: don't say AI "can't" do something without adding "yet."

At the same time, the public is reacting very differently to AI hype. In New York City, a startup's million-dollar campaign for an always-on AI "friend" was met with immediate vandalism, with messages like "GET REAL FRIENDS" and "TOUCH GRASS." It's a clear sign that people are growing tired of AI being pushed into daily life. Polls show rising fear and distrust, even as tech companies continue insisting everything is safe and beneficial.

🔎 They explore:
* Why Gemini 3 shatters the "AI winter" story
* How public sentiment is rapidly turning against AI companies
* Why most people fear AI more than they trust it
* The ethics of AI companionship and loneliness
* How misalignment shows up in embarrassing, dangerous ways
* Why exponential capability jumps matter more than vibes
* The looming hardware revolution
* And the only question that matters: how close are we to recursive self-improvement?

🗨️ Join the Conversation:
* Does Gemini 3's leap worry you?
* Are we underestimating the public's resistance to AI?
* Is Grok's behavior a joke, or a warning?


