What if the biggest mistake in AI safety is believing that laws, treaties, and regulations will save us?

In this episode of For Humanity, John sits down with Peter Sparber, a former architect of Big Tobacco’s successful war against regulation, to confront a deeply uncomfortable truth: the AI industry is using the exact same playbook, and it’s working. Drawing on decades of experience inside Washington’s most effective lobbying operations, Peter explains why regulation almost always fails against powerful industries, how AI companies are already neutralizing political pressure, and why real change will never come from lawmakers alone. Instead, he argues that the only path to meaningful AI safety is making unsafe AI bad for business: injecting risk, liability, and uncertainty directly into boardrooms and C-suites. Peter reveals why AI doesn’t need to outsmart humanity to defeat regulation; it only needs money, time, and political cover. By exposing how industries evade oversight, delay enforcement, and co-opt regulators, this conversation reframes AI safety around power, incentives, and accountability.
Together, they explore:
* Why laws, treaties, and regulations repeatedly fail against powerful industries
* How Big AI is following Big Tobacco’s exact regulatory playbook
* Why public outrage rarely translates into effective policy
* How companies neutralize enforcement without breaking the law
* Why third-party standards may matter more than legislation
* How local resistance, liability, and investor pressure can change behavior
* Why making unsafe AI bad for business is the only strategy with teeth
📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.
Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe