
AI Isn’t Just a Money Risk Anymore. It’s Bigger Than That
For most of modern history, regulation in Western democracies has focused on two kinds of harm: people dying and people losing money. But with AI, that’s beginning to change.
This week, the headlines point toward a new understanding that more is at stake than our physical health and our wallets: governments are starting to treat our psychological relationship with technology as a real risk. Not a side effect, not a moral panic, not a punchline to jokes about frivolous lawsuits. Increasingly, I’m seeing lawmakers understand that it’s a core threat.
There is, for instance, the extraordinary speech from the new head of MI6, Britain’s foreign intelligence service. Instead of focusing only on missiles, spies, or nation-state enemies, she warned that AI and hyper-personalized technologies are rewriting the nature of conflict itself, blurring peace and war, state action and private influence, reality and manipulation. When the person responsible for assessing existential threats starts talking about perception and persuasion, the subject has moved from academic hand-wringing to real danger.
Then there’s the growing evidence that militant groups are using AI to recruit, radicalize, and persuade, often more effectively than humans can. Researchers have now shown that AI-generated political messaging can outperform human persuasion. That matters, because most of us still believe we’re immune to manipulation. We’re not. Our brains are programmable, and AI is getting very good at learning our instruction set.
That same playbook is showing up in the behavior of our own government. Federal agencies are now mimicking the president’s incendiary online style, deploying AI-generated images and rage-bait tactics that look disturbingly similar to extremist propaganda. It’s no coincidence that the Oxford University Press crowned “rage bait” its word of the year. Outrage is no longer a side effect of the internet — it’s a design strategy.
What’s different now is the regulatory response. A coalition of 42 U.S. attorneys general has formally warned AI companies about psychologically harmful interactions, including emotional dependency and delusional attachment to chatbots and “companions.” This isn’t about fraud or physical injury. It’s about damage to people’s inner lives — something American law has traditionally been reluctant to touch.
At the same time, the Trump administration is trying to strip states of their power to regulate AI at all, even as states are the only ones meaningfully responding to these risks. That tension — between lived harm and promised utopia — is going to define the next few years.
We can all feel that something is wrong. Not just economically, but cognitively. Trust, truth, childhood development, shared reality — all of it feels under pressure. The question now is whether regulation catches up before those harms harden into the new normal.
Mentioned in This Article:
Britain caught in ‘space between peace and war’, says new head of MI6 | The Guardian
https://www.theguardian.com/uk-news/2025/dec/15/britain-caught-in-space-between-peace-and-war-new-head-of-mi6-warns
Islamic State group and other extremists are turning to AI | AP News
https://apnews.com/article/islamic-state-group-artificial-intelligence-deepfakes-ba201d23b91dbab95f6a8e7ad8b778d5
‘Virality, rumors and lies’: US federal agencies mimic Trump on social media | The Guardian
https://www.theguardian.com/us-news/2025/dec/15/trump-agencies-style-social-media
US state attorneys-general demand better AI safeguards | Financial Times
https://www.ft.com/content/4f3161cc-b97a-496e-b74e-4d6d2467d59c
Bonus: The Whistleblower Conundrum
I’m also reading this very interesting and very sad account of the fate that has befallen tech workers who couldn’t take it any longer and spoke out. What more and more of them are learning, however, is that the False Claims Act can entitle them to a sizable percentage of whatever fines an agency imposes: money they’ll need, considering they’re unlikely to work again. Tech whistleblowers are doing us all a huge favor, and I hope an infrastructure grows up around supporting them when they do it.
Tech whistleblowers face job losses and isolation | The Washington Post
https://www.washingtonpost.com/technology/2025/12/15/big-tech-whistleblowers-speak-out/
