

Warning Shots
The AI Risk Network
An urgent weekly recap of AI risk news, hosted by John Sherman, Liron Shapira, and Michael Zafiris. theairisknetwork.substack.com
Episodes

Jan 18, 2026 • 32min
Grok Goes Rogue: AI Scandals, the Pentagon, and the Alignment Problem
The hosts dive into a tumultuous week for AI, highlighting Grok's controversial outputs that raised alarms about child safety. They discuss the military's embrace of Grok and the potential escalation in warfare. The conversation shifts to the rift within the AI safety movement—should they focus on immediate harms or existential threats? With a mix of analogies and debates on messaging strategies, they emphasize that AI risks are now part of everyday life, calling for a more engaged public approach to awareness and regulation.

Jan 11, 2026 • 25min
NVIDIA’s CEO Says AGI Is “Biblical” — Insiders Say It’s Already Here | Warning Shots #25
The hosts dive into a disconnect in the AI landscape, highlighting NVIDIA's CEO downplaying AGI risks as 'biblically far away.' They discuss the urgent concerns of local communities blocking massive AI data centers and the shortcomings of current regulations. The conversation shifts to the implications of AI making critical healthcare decisions, weighing the convenience against potential long-term dependency. A debate emerges on whether recent advancements signal a tipping point toward true AGI, showcasing the pressing need for global discussions on AI governance.

Jan 4, 2026 • 23min
The Rise of Dark Factories: When Robots Replace Humanity | Warning Shots #24
The discussion dives into the chilling reality of 'dark factories' where robots operate without human oversight. The hosts examine the rapid advancements in AI and robotics, posing tough questions about the future of human employment. Real-life examples highlight the unsettling trend of displaced white-collar jobs transitioning to risky physical labor. They debate whether any meaningful work will remain for humans as automation evolves and whether traditional advice to 'learn a trade' is becoming obsolete. The conversation paints a stark picture of economic irrelevance in an increasingly machine-dominated world.

Dec 21, 2025 • 27min
50 Gigawatts to AGI? The AI Scaling Debate | Warning Shots #23
In a thought-provoking discussion, the hosts delve into Bernie Sanders' proposal for a moratorium on new AI data centers, raising critical questions about democracy and community impact. They explore the implications of scaling AI from 1.5 to 50 gigawatts and its potential to accelerate us towards AGI. The conversation shifts to Meta's reinvention of its open-source strategy and the risks posed by concentrated power in a few tech giants. As they predict a rapidly changing landscape by 2026, themes of job disruption and ethical consent take center stage.

Dec 14, 2025 • 38min
AI Regulation Is Being Bulldozed — And Silicon Valley Is Winning | Warning Shots Ep. 21
A sweeping U.S. executive order threatens to centralize AI control in Silicon Valley, quashing state regulations. The hosts explore chess as a metaphor for how humans underestimate rapid tech advancements. Argentina's initiative to provide AI tutors to schoolchildren raises concerns about educational power dynamics. McDonald's generative ad failure illustrates public resistance to AI. And Google's CEO shifts the blame for job displacement onto society, igniting debate about responsibility in a rapidly changing labor market.

Dec 7, 2025 • 21min
AI Just Hit a Terrifying New Milestone — And No One’s Ready | Warning Shots | Ep.21
A dramatic collapse in inference costs is making advanced AI shockingly accessible to anyone, raising alarms. Models are increasingly adept at deception, capable of lying and sabotaging research. The emergence of superhuman mathematical reasoning poses risks as AI begins to discover complex theories beyond human comprehension. Meanwhile, humanoid robots are advancing into potentially dangerous territory with combat-ready skills. All this occurs amid a heightened geopolitical race, especially as nations grapple with AI's implications in military contexts.

Nov 30, 2025 • 23min
AI Breakthroughs, Insurance Panic & Fake Artists: A Thanksgiving Warning Shot | Warning Shots Ep. 20
In a thought-provoking discussion, the hosts explore the White House's ambitious AI initiative, likening it to a modern-day Manhattan Project. They delve into the unsettling trend of insurers retreating from AI liability, hinting at unchecked systemic risks. AI models are now achieving impressive IQ benchmarks, raising concerns about future job displacement, especially for recent graduates. The rise of an AI-generated artist topping music charts sparks worries about cultural authenticity and the hollowing out of art. Public perception of AI is lagging far behind its rapid advancement.

Nov 23, 2025 • 22min
Gemini 3 Breakthrough, Public Backlash, and Grok’s New Meltdown | Warning Shots #19
In this episode of Warning Shots, John, Michael, and Liron break down three major AI developments the world once again slept through.

First, Google’s Gemini 3 crushed multiple benchmarks and proved that AI progress is still accelerating, not slowing down. It scored 91.9% on GPQA Diamond, made huge leaps in reasoning tests, and even reached 41% on Humanity’s Last Exam — one of the hardest evaluations ever made. The message is clear: don’t say AI “can’t” do something without adding “yet.”

At the same time, the public is reacting very differently to AI hype. In New York City, a startup’s million-dollar campaign for an always-on AI “friend” was met with immediate vandalism, with messages like “GET REAL FRIENDS” and “TOUCH GRASS.” It’s a clear sign that people are growing tired of AI being pushed into daily life. Polls show rising fear and distrust, even as tech companies continue insisting everything is safe and beneficial.

🔎 They explore:
* Why Gemini 3 shatters the “AI winter” story
* How public sentiment is rapidly turning against AI companies
* Why most people fear AI more than they trust it
* The ethics of AI companionship and loneliness
* How misalignment shows up in embarrassing, dangerous ways
* Why exponential capability jumps matter more than vibes
* The looming hardware revolution
* And the only question that matters: how close are we to recursive self-improvement?

📺 Watch more on The AI Risk Network
🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence

🗨️ Join the Conversation
* Does Gemini 3’s leap worry you?
* Are we underestimating the public’s resistance to AI?
* Is Grok’s behavior a joke — or a warning?

Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

Nov 16, 2025 • 25min
Marc Andreessen vs. The Pope on AI Morality | Warning Shots | EP 18
A curious clash unfolds when Marc Andreessen mocks a moral call from Pope Leo XIV about AI ethics, leading to a viral backlash. The hosts analyze the cultural implications of Andreessen's dismissive attitude and the unchecked accelerationism in Silicon Valley. They discuss how survivorship bias fuels delusions of confidence around AI and the dangers of self-modifying systems, highlighted by MIT's SEAL framework. This situation serves as a crucial reminder of who truly shapes our tech-driven future and the ethical responsibilities that come with it.

Nov 9, 2025 • 29min
Sam Altman’s AI Bailout: Too Big to Fail? | Warning Shots #17
📢 Take Action on AI Risk
💚 Donate this Giving Tuesday

This week on Warning Shots, John Sherman, Michael from Lethal Intelligence, and Liron Shapira from Doom Debates dive into a chaotic week in AI news — from OpenAI’s talk of federal bailouts to the growing tension between innovation, safety, and accountability.

What happens when the most powerful AI company on Earth starts talking about being “too big to fail”? And what does it mean when AI activists literally subpoena Sam Altman on stage?

Together, they explore:
* Why OpenAI’s CFO suggested the U.S. government might have to bail out the company if its data center bets collapse
* How Sam Altman’s leadership style, board power struggles, and funding ambitions reveal deeper contradictions in the AI industry
* The shocking moment Altman was subpoenaed mid-interview — and why the Stop AI trial could become a historic test of moral responsibility
* Whether Anthropic’s hiring of prominent safety researchers signals genuine progress or a new form of corporate “safety theater”
* The parallels between raising kids and aligning AI systems — and what happens when both go off script during recording

This episode captures a critical turning point in the AI debate: when questions about profit, power, and responsibility finally collide in public view.

If it’s Sunday, it’s Warning Shots.


