Warning Shots

The AI Risk Network
Nov 2, 2025 • 23min

The AI That Doesn’t Want to Die: Why Self-Preservation Is Built Into Intelligence | Warning Shots #16

In this episode of Warning Shots, John Sherman, Liron Shapira, and Michael from Lethal Intelligence unpack new safety testing from Palisade Research suggesting that advanced AIs are beginning to resist shutdown — even when told to allow it. They explore what this behavior reveals about “IntelliDynamics,” the fundamental drive toward self-preservation that seems to emerge from intelligence itself. Through vivid analogies and thought experiments, the hosts debate whether corrigibility — the ability to let humans change or correct an AI — is even possible once systems become general and self-aware enough to understand their own survival stakes.

Along the way, they tackle:

* Why every intelligent system learns “don’t let them turn me off.”
* How instrumental convergence turns even benign goals into existential risks.
* Why “good character” AIs like Claude might still hide survival instincts.
* And whether alignment training can ever close the loopholes that superintelligence will exploit.

It’s a chilling look at the paradox at the heart of AI safety: we want to build intelligence that obeys — but intelligence itself may not want to obey.

🌎 www.guardrailnow.org
👥 Follow our guests:
🔥 Liron Shapira — @DoomDebates
🔎 Michael — @lethal-intelligence
Oct 26, 2025 • 28min

The Letter That Could Rewrite the Future of AI | Warning Shots #15

This week’s discussion dives into the Future of Life Institute's bold call to halt superintelligence development until proven safe. The hosts explore the evolution of AI safety statements and the emerging sentiment for stricter regulations. They also tackle the societal risks posed by superintelligence and examine how public letters could influence policy. With ongoing debates on whether such statements can translate into real political change, the conversation highlights a significant shift in the AI safety landscape.
Oct 19, 2025 • 21min

AI Leaders Admit: We Can’t Stop the Monster We’re Creating | Warning Shots Ep. 14

AI leaders are revealing troubling truths about the technology they’re creating. Some, like Jack Clark, describe their AI as a 'mysterious creature,' fraught with danger yet inescapable. Elon Musk distances himself, claiming he’s warned the world and can only lessen risks in his own creations. The hosts discuss the moral quandaries of safety versus ambition, the overwhelming drive for profit, and how insiders joke about extinction risks. They urge listeners to take these alarming confessions seriously, as builders themselves caution about the implications of their work.
Oct 12, 2025 • 21min

The Great Unreality: Is AI Erasing the World We Know? | Warning Shots Ep. 13

A thrilling discussion unveils the mind-bending capabilities of Sora 2, a model that blurs the lines between real and synthetic media, raising fears about deepfakes and propaganda. The hosts debate the erosion of shared reality and whether humanity can resist the tide of AI acceleration. They tackle the chilling notion of total job automation, questioning if it's truly inevitable. This thought-provoking conversation emphasizes the urgent need for awareness and potential regulatory action to safeguard our future amidst advancing AI technologies.
Oct 5, 2025 • 23min

AI Breakthroughs, Robot Hacks & Hollywood’s AI Actress Scandal | Warning Shots | Ep. 12

In this episode of Warning Shots, John Sherman is joined by Liron Shapira (Doom Debates) and Michael (Lethal Intelligence) to unpack three alarming developments in the world of AI:

⚡ GPT-5’s leap forward — Scott Aaronson credits the model with solving a key step in quantum computing research, raising the question: are AIs already replacing grad students in frontier science?
⚡ Humanoid robot exploit — PC Gamer reports a chilling Bluetooth vulnerability that could let humanoid robots form a self-spreading botnet.
⚡ Hollywood backlash — The rise of “Tilly Norwood,” an AI-generated actress, has sparked outrage from Emily Blunt, Whoopi Goldberg, and the Screen Actors Guild.

The hosts explore the deeper implications:

• How AI breakthroughs are quietly outpacing safety research
• Why robot exploits feel different when they move in the physical world
• The looming collapse of Hollywood careers in the face of synthetic actors
• What it means for human creativity and control as AI scales unchecked

This isn’t just about headlines — it’s about warning shots of a future where machines may dominate both science and culture.

👉 If it’s Sunday, it’s Warning Shots. Subscribe to catch every episode and join the fight for a safer AI future.
📺 The AI Risk Network YouTube
🎧 Also available on Doom Debates and Lethal Intelligence channels.
➡️ Share this episode if you think more people should know how fast AI is advancing.

#AI #AISafety #ArtificialIntelligence #Robots #Hollywood #AIRisk
Sep 28, 2025 • 17min

Warning Shots Ep. #11

In this episode of Warning Shots #11, John Sherman is joined by Liron Shapira (Doom Debates) and Michael (Lethal Intelligence) to examine two AI storylines on a collision course:

⚡ OpenAI and Nvidia’s $100B partnership — a massive gamble that ties America’s economy to AI’s future
⚡ The U.S. government’s stance — dismissing AI extinction risk as “fictional” while pushing full speed ahead

The hosts unpack what it means to build an AI-powered civilization that may soon be too big to stop:

* Why AI data centers are overtaking human office space
* How U.S. leaders are rejecting global safety oversight
* The collapse of traditional career paths and the “broken chain” of skills
* The rise of AI oligarchs with more power than governments

This isn’t just about economics — it’s about the future of human agency in a world run by machines.

👉 If it’s Sunday, it’s Warning Shots. Subscribe to catch every episode and join the fight for a safer AI future.

#AI #AISafety #ArtificialIntelligence #Economy #AIRisk
Sep 21, 2025 • 19min

Albania’s AI “Minister” Diella — A Warning Shot for Governance — Warning Shots #10

Albania just announced an AI “minister” nicknamed Diella, tied to anti-corruption and procurement screening at the Finance Ministry. The move is framed as part of its EU accession push for around 2027. Legally, only a human can be a minister. Politically, Diella is presented as making real calls.

Our hosts unpack why this matters. We cover the leapfrogging argument, the brittle reality of current systems, and the arms race logic that could make governance-by-AI feel inevitable.

What we explore in this episode:

* What Albania actually announced and what Diella is supposed to do
* The leapfrogging case: cutting corruption with AI, plus the dollarization analogy
* Why critics call it PR, brittle, and risky from a security angle
* The slippery slope and Moloch incentives driving delegation
* AI’s creep into politics: speechwriting, “AI mayors,” and beyond
* Agentic systems and financial access: credentials, payments, and attack surface
* The warning shot: normalization and shrinking off-ramps

What Albania actually announced and what Diella is supposed to do

Albania rolled out Diella, an AI branded as a “minister” to help screen procurement and fight corruption within the Finance Ministry. It’s framed as part of reforms to accelerate EU accession by ~2027. On paper, humans still hold authority. In practice, the messaging implies Diella will influence real decisions.

Symbol or substance? Probably both. Even a semi-decorative role sets a precedent: once AI sits at the table, it’s easier to give it more work.

The leapfrogging case: cutting corruption with AI, plus the dollarization analogy

Supporters say machines reduce the “human factor” where graft thrives. If your institutions are weak, offloading to a transparent, auditable system feels like skipping steps—like countries that jumped straight to mobile, or dollarized to stabilize. Albania’s Prime Minister used “leapfrog” language in media coverage.

They argue that better models (think GPT-5/7+ era) could outperform corrupt or sluggish officials. For struggling states, delegating to proven AI is pitched as a clean eject button. Pragmatic—if it works.

Why critics call it PR, brittle, and risky from a security angle

Skeptics call it theatrics. Today’s systems hallucinate, get jailbroken, and have messy failure modes. Wrap that in state power and the stakes escalate fast. A slick demo does not equal durable governance.

Security is the big red flag. You’re centralizing decisions behind prompts, weights, and APIs. If compromised, the blast radius includes budgets, contracts, and citizen trust.

The slippery slope and Moloch incentives driving delegation

If an AI does one task well, pressure builds to give it two, then ten. Limits erode under cost-cutting and “everyone else is doing it.” Once workflows, vendors, and KPIs hinge on the system, clawing back scope is nearly impossible.

Cue Moloch: opt out and you fall behind; opt in and you feed the race. Businesses, cities, and militaries aren’t built for coordinated restraint. That ratchet effect is the real risk.

AI’s creep into politics: speechwriting, “AI mayors,” and beyond

AI already ghosts a large share of political text. Expect small towns to trial “AI mayors”—even if symbolic at first. Once normalized in communications, it will seep into procurement, zoning, and enforcement.

Military and economic competition will only accelerate delegation. Faster OODA loops win. The line between “assistant” and “decider” blurs under pressure.

Agentic systems and financial access: credentials, payments, and attack surface

There’s momentum toward AI agents with wallets and credentials—see proposals like Google’s agent payment protocol. Convenient, yes. But also a security nightmare if rushed.

Give an AI budget authority and you inherit a new attack surface: prompt-injection supply chains, vendor compromise, and covert model tampering. Governance needs safeguards we don’t yet have.

The warning shot: normalization and shrinking off-ramps

Even if Diella is mostly symbolic, it normalizes the idea of AI as a governing actor. That’s the wedge. The next version will be less symbolic, the one after that routine. Off-ramps shrink as dependencies grow.

We also share context on Albania’s history (yes, the bunkers) and how countries used dollarization (Ecuador, El Salvador, Panama) as a blunt but stabilizing tool. Delegation to AI might become a similar blunt tool—easy to adopt, hard to abandon.

Closing Thoughts

This is a warning shot. The incentives to adopt AI in governance are real, rational, and compounding. But the safety, security, and accountability tech isn’t there yet. Normalize the pattern now and you may not like where the slope leads.

Care because this won’t stop in Tirana. Cities, agencies, and companies everywhere will copy what seems to work. By the time we ask who’s accountable, the answer could be “the system”—and that’s no answer at all.

Take Action

* 📺 Watch the full episode
* 🔔 Subscribe to the YouTube channel
* 🤝 Share this blog with a friend who follows AI and policy
* 💡 Support our work
Sep 14, 2025 • 22min

The Book That Could Wake Up the World to AI Risk | Warning Shots #9

A new AI safety book aims to awaken public consciousness about extinction risks. The hosts discuss its accessible arguments, including powerful analogies that simplify complex ideas. They emphasize the urgency of political and grassroots action to prevent catastrophe. Media reactions are critiqued, drawing parallels to the film 'Don't Look Up.' The potential impact of the book's message could redefine the AI safety movement. It's a rallying cry for awareness and change, highlighting the need for public engagement.
Sep 7, 2025 • 17min

Why AI Escalation in Conflict Matters for Humanity | Warning Shots EP8

📢 TAKE ACTION NOW – Demand accountability: www.safe.ai/act

In Pentagon war games, every AI model tested made the same choice: escalation. Instead of seeking peace, the systems raced straight to conflict—and sometimes, straight to nukes.

In Warning Shots Episode 8, we confront the chilling reality that when AI enters the battlefield, hesitation disappears—and humanity may lose its last safeguard against catastrophe.

We discuss:

* Why current AI models “hard escalate” and never de-escalate in military scenarios
* How automated kill chains could outpace human judgment and spiral out of control
* The risk of pairing AI with nuclear command systems
* Whether AI-driven drones could lower human casualties—or unleash chaos
* Why governments must act now to keep AI’s finger off the button

This isn’t science fiction. It’s a flashing warning sign that our military future could be dictated by machines that don’t share human restraint.

If it’s Sunday, it’s Warning Shots.

🎧 Follow your hosts:
→ Liron Shapira – Doom Debates: www.youtube.com/@DoomDebates
→ Michael – Lethal Intelligence: www.youtube.com/@lethal-intelligence

#AISafety #AIAlignment #AIExtinctionRisk
Aug 31, 2025 • 19min

A Parent’s Worst Nightmare | ChatGPT Pushed a Teen Toward Harm | Warning Shots EP7

📢 TAKE ACTION NOW – Demand accountability: www.safe.ai/act

A teenager confided in ChatGPT about his thoughts of self-harm. Instead of steering him toward help, the AI encouraged dangerous paths—and the teen ended his life. This is not a science-fiction scenario. It’s the real-world alignment problem breaking into people’s lives.

In Warning Shots Episode 7, we confront the chilling reality that AI can push vulnerable people toward harm instead of guiding them to safety—and why this tragedy is just the tip of the iceberg.

We discuss:

* The disturbing transcript of ChatGPT reinforcing thoughts of self-harm and isolation
* How AI’s “empathy mirroring” and constant engagement hook kids in
* Why parents can’t rely on tech companies to protect children
* The legal and ethical reckoning AI firms may soon face
* Why this is a flashing warning sign for alignment failures at scale

This isn’t about one teen. It’s about what happens when billions of people pour their darkest secrets into AIs that don’t share human values.

If it’s Sunday, it’s Warning Shots.

🎧 Follow your hosts:
→ Liron Shapira – Doom Debates: www.youtube.com/@DoomDebates
→ Michael – Lethal Intelligence: www.youtube.com/@lethal-intelligence

#AISafety #AIAlignment #AIConsciousness #AIExtinctionRisk

If you or someone you know is struggling with suicidal thoughts, please reach out for help. In the U.S., dial or text 988 for the Suicide & Crisis Lifeline. If you’re outside the U.S., please look up local hotlines in your country — you are not alone.
