The Rip Current

Jacob Ward
Dec 19, 2025 • 13min

Did Weed Just Escape the Culture War?

Here’s one I truly didn’t see coming: the Trump administration just made the most scientifically meaningful shift in U.S. marijuana policy in years.

No, weed isn’t suddenly legal everywhere. But moving marijuana from Schedule I — alongside heroin — to Schedule III is a very big deal. That single bureaucratic change cracks open something that’s been locked shut for half a century: real research.

For years, I’ve covered the strange absurdities of marijuana science in America. If you were a federally funded researcher — which almost every serious scientist is — you weren’t allowed to study the weed people actually use. Instead, you had to rely on a single government-approved grow operation producing products that didn’t resemble what’s sold in dispensaries. As a result, commercialization raced ahead while our understanding lagged far behind.

That’s how we ended up with confident opinions, big business, and weak data. We know marijuana can trigger severe psychological effects in a meaningful number of people. We know it can cause real physical distress for others. What we don’t know — because we’ve blocked ourselves from knowing — is who’s at risk, why, and how to use it safely at scale.

Meanwhile, the argument that weed belongs in the same category as drugs linked to violence and mass death has always collapsed under scrutiny. Alcohol, linked to more than 178,000 deaths per year in the United States alone, does far more damage, both socially and physically, yet sits comfortably in legal daylight.

If this reclassification sticks, the excuse phase is over. States making billions from legal cannabis now need to fund serious, independent research. I didn’t expect this administration to make a science-forward move like this — but here we are. Here’s hoping we can finish the job and finally understand what we’ve been pretending to regulate for decades.
Dec 19, 2025 • 13min

AI Has Us Lying to One Another (and It's Changing How We Think)

Okay, honest admission here: I don’t fully know what I think about this topic yet. A podcast producer (thanks Nancy!) once told me “let them watch you think out loud,” and I’m taking her advice to heart — because the thing I’m worried about is already happening to me.

Lately, I’ve been leaning hard on AI tools, God help me. Not to write for me — a little, sure, but for the most part I still do that myself — but to help me quickly get acclimated to unfamiliar worlds. The latest unfamiliar world is online marketing, which I do not understand AT ALL but now need to master to survive as an independent journalist. And here’s the problem: the advice these systems give isn’t neutral. First of all, it’s not really “advice” at all, just statistically relevant language regurgitated as advice. And second, because these systems vacuum up language wherever they can find it, their suggestions come with online values baked in. I know this — I wrote a whole f*****g book about it — but I lose track of it in my desperation to learn quickly.

I’m currently trying to analyze who follows me on TikTok, and why, so I can try to port some of those people (or at least those types of people) over to Substack (thank you for being here) and to YouTube, where one can actually make a living filing analysis like this. (Smash that subscribe button!) So ChatGPT told me to pay attention to a handful of metrics: watch time, who gets past two seconds of the video, etc. One of the main metrics I was told to prioritize? Disagreement in the comments. Not understanding, learning, clarity: the stuff I’m after in my everyday work. Fighting. Comments in which people want to argue with me are “good,” according to ChatGPT. Thoughtful consensus? Statistically irrelevant.

Here’s the added trouble. It’s one thing to read that and filter out what’s unhelpful. It’s another thing to do so in a world where all of us are supposed to pretend we had this thought ourselves. AI isn’t just helping us work faster. It’s quietly training us to behave differently — and to hide how that training happens. We’re all pretending this output is “ours,” because the unspoken promise of AI right now is that you can get help and still take the credit. (I believe this is a fundamental piece of the marketing that no one’s saying out loud, but everyone is implying.) And the danger isn’t just dishonesty toward others. It’s that we start believing our own act.

There’s a huge body of scientific literature showing that lying about a thing causes us to internalize the lie over time. The Harvard psychologist Daniel Schacter wrote a sweeping review of the science in 1999 entitled “The Seven Sins of Memory,” synthesizing a range of studies showing that memory is reconstructive (we build each recollection on prior beliefs rather than drawing on a perfect replay of reality) and that repetition and suggestion can implant or strengthen false beliefs that feel subjectively true. Throw us enough ideas and culturally condition us to hide where we got them, and eventually we’ll come to believe they were our own. (And to be clear, I knew a little about the reconstructive nature of memory, but ChatGPT brought me Schacter’s paper. So there you go.)

What am I suggesting here? I know we’re creating a culture where machine advice is passed off as human judgment. I don’t know whether the answer is transparency, labeling, norms, regulation, or something else entirely. So I guess I’m starting with transparency.

In any event, I do know this: lying about how we did or learned something makes us less discerning thinkers. And AI’s current role in our lives is built on that lie.

Thinking out loud. Feedback welcome. Thanks!
Dec 17, 2025 • 10min

AI Data Centers Are Draining Our Resources — and Making Strange Political Allies

Note: You can read a deeper dive into this whole issue in my weekly column over at Hard Reset.

The United States has a split personality when it comes to AI data centers. On one side, tech leaders (and the White House) celebrate artificial intelligence as a symbol of national power and economic growth. But politicians from Bernie Sanders to Ron DeSantis point out that when these facilities show up in our towns, they drain water, drive up electricity prices, and demand round-the-clock power like an always-awake city.

Every AI prompt—whether it’s wedding vows or a goofy image—fires up racks of servers that require enormous amounts of electricity and water to stay cool. The result is rising pressure on local water supplies and power grids, and a wave of protests and political resistance across the country. I’m covering that in today’s episode, and you can read the whole report over at Hard Reset.

I also got to speak about the national controversy over AI data centers on CBS News this week. Check it out:
Dec 16, 2025 • 8min

Jake on PBS: Is This the End of A.I. Laws?

Geoff Bennett invited me onto Newshour on Friday to discuss the president’s executive order aimed at blocking state regulation of A.I. I’m used to the 90-second to 3-minute format that’s so common on the networks, so the breathing room Geoff gave me on this topic was wonderful, and it let me get into some of the subtleties and context that this debate often excludes! Also, big props to all of you who wrote to say you spotted me in the show — the Newshour Friday night crew rolls deep! Thanks for watching.
Dec 15, 2025 • 11min

AI Isn’t Just a Money Risk Anymore — It’s Bigger than That

For most of modern history, regulation in Western democracies has focused on two kinds of harm: people dying and people losing money. But with AI, that’s beginning to change.

This week, the headlines point toward a new understanding that more is at stake than our physical health and our wallets: governments are starting to treat our psychological relationship with technology as a real risk. Not a side effect, not a moral panic, not a punchline to jokes about frivolous lawyers. Increasingly, I’m seeing lawmakers understand that it’s a core threat.

There is, for instance, the extraordinary speech from the new head of MI6, Britain’s foreign intelligence agency. Instead of focusing only on missiles, spies, or nation-state enemies, she warned that AI and hyper-personalized technologies are rewriting the nature of conflict itself — blurring peace and war, state action and private influence, reality and manipulation. When the person responsible for assessing existential threats starts talking about perception and persuasion, that stuff has moved from academic hand-wringing to real danger.

Then there’s the growing evidence that militant groups are using AI to recruit, radicalize, and persuade — often more effectively than humans can. Researchers have now shown that AI-generated political messaging can outperform human persuasion. That matters, because most of us still believe we’re immune to manipulation. We’re not. Our brains are programmable, and AI is getting very good at learning our instructions.

That same playbook is showing up in the behavior of our own government. Federal agencies are now mimicking the president’s incendiary online style, deploying AI-generated images and rage-bait tactics that look disturbingly similar to extremist propaganda. It’s no coincidence that Oxford University Press crowned “rage bait” its word of the year. Outrage is no longer a side effect of the internet — it’s a design strategy.

What’s different now is the regulatory response. A coalition of 42 U.S. attorneys general has formally warned AI companies about psychologically harmful interactions, including emotional dependency and delusional attachment to chatbots and “companions.” This isn’t about fraud or physical injury. It’s about damage to people’s inner lives — something American law has traditionally been reluctant to touch.

At the same time, the Trump administration is trying to strip states of their power to regulate AI at all, even as states are the only ones meaningfully responding to these risks. That tension — between lived harm and promised utopia — is going to define the next few years.

We can all feel that something is wrong. Not just economically, but cognitively. Trust, truth, childhood development, shared reality — all of it feels under pressure. The question now is whether regulation catches up before those harms harden into the new normal.

Mentioned in This Article:

Britain caught in ‘space between peace and war’, says new head of MI6 | The Guardian
https://www.theguardian.com/uk-news/2025/dec/15/britain-caught-in-space-between-peace-and-war-new-head-of-mi6-warns

Islamic State group and other extremists are turning to AI | AP News
https://apnews.com/article/islamic-state-group-artificial-intelligence-deepfakes-ba201d23b91dbab95f6a8e7ad8b778d5

‘Virality, rumors and lies’: US federal agencies mimic Trump on social media | The Guardian
https://www.theguardian.com/us-news/2025/dec/15/trump-agencies-style-social-media

US state attorneys-general demand better AI safeguards | Financial Times
https://www.ft.com/content/4f3161cc-b97a-496e-b74e-4d6d2467d59c

Bonus: The Whistleblower Conundrum

I’m also reading this very interesting and very sad account of the fate that has befallen tech workers who couldn’t take it any longer and spoke out. The thing more and more of them are learning, however, is that the False Claims Act can get them a big percentage of whatever fines an agency imposes: something they’ll need, considering they’re unlikely to work again. Tech whistleblowers are doing us all a huge favor; I hope an infrastructure can grow up around supporting them when they do it.

Tech whistleblowers face job losses and isolation | The Washington Post
https://www.washingtonpost.com/technology/2025/12/15/big-tech-whistleblowers-speak-out/
Dec 12, 2025 • 11min

Trump's Executive Order Gives the AI Industry What It Wants.

Note: This is a video summary of a longer column I wrote for Hard Reset today. Please have a look!

President Trump has signed a sweeping executive order aimed at blocking U.S. states from regulating artificial intelligence — arguing that a “patchwork” of laws threatens innovation and America’s global competitiveness. But there’s a catch: there is no federal AI law to replace what states have been doing.

In this episode, I break down what the executive order actually does, why states stepped in to regulate AI in the first place, how this move conflicts with public opinion, and why legal experts believe the fight is headed straight to the courts.

This isn’t just a tech story. It’s a constitutional one.

You can read my full analysis at HardResetMedia.com.
Dec 11, 2025 • 1h 7min

AI Is Even More Biased Than We Are: Mahzarin Banaji on the Disturbing Truth Behind LLMs

This week I sat down with the woman who permanently rewired my understanding of human nature — and now she’s turning her attention to the nature of the machines we’ve gone crazy for.

Harvard psychologist Mahzarin Banaji coined the term “implicit bias” and has spent decades researching the blind spots we don’t admit even to ourselves. The work that blew my hair back shows how prejudice has and hasn’t changed since 2007. Take one of the tests here — I was deeply disappointed by my results.

More recently, she’s been running new experiments on today’s large language models. What has she learned? They’re far more biased than humans. Sometimes twice or three times as biased. They show shocking behavior — like a model declaring “I am a white male” or demonstrating literal self-love toward its own company. And as their rawest, most objectionable responses are papered over, she says, our ability to see just how prejudiced they really are is disappearing.

In this conversation, Banaji explains:

* Why LLMs amplify bias instead of neutralizing it
* How guardrails and “alignment” may hide what the model really thinks
* Why kids, judges, doctors, and lonely users are uniquely exposed
* How these systems form a narrowing “artificial hive mind”
* And why we may not be mature enough to automate judgment at all

Banaji is working at the very cutting edge of the science, and she delivers a clear and unsettling picture of what AI is amplifying in our minds.

Timestamps:

00:00 — AI Will Warp Our Decisions. Banaji on why future decision-making may “suck” if we trust biased systems.
01:20 — The Woman Who Changed How We Think About Bias. Jake introduces Banaji’s life’s work charting the hidden prejudices wired into all of us.
03:00 — When Internet Language Revealed Human Bias. How early word-embedding research mirrored decades of psychological findings.
05:30 — AI Learns the One-Drop Rule. CLIP models absorb racial logic humans barely admit.
07:00 — The Moment GPT Said “I Am a White Male.” Banaji recounts the shocking early answer that launched her LLM research.
10:00 — The Rise of Guardrails… and the Disappearance of Honesty. Why the cleaned-up versions of models may tell us less about their true thinking.
12:00 — What “Alignment” Gets Fatally Wrong. The Silicon Valley fantasy of “universal human values” collides with actual psychology.
15:00 — When AI Corrects Itself in Stupid Ways. The Gemini fiasco, and why “fixing” bias often produces fresh distortions.
17:00 — Should We Even Build AGI? Banaji on why specialized models may be safer than one general mind.
19:00 — Can We Automate Judgment When We Don’t Know Ourselves? The paradox at the heart of AI development.
21:00 — Machines Can Be Manipulated Just Like Humans. Cialdini’s persuasion principles work frighteningly well on LLMs.
23:00 — Why AI Seems So Trustworthy (and Why That’s Dangerous). The credibility illusion baked into every polished chatbot.
25:00 — The Discovery of Machine “Self-Love.” How models prefer themselves, their creators, and their own CEOs.
28:00 — The Hidden Line of Code That Made It All Make Sense. What changes when a model is told its own name.
31:00 — Artificial Hive Mind: What 70 LLMs Have in Common. The narrowing of creativity across models and why it matters.
34:00 — Why LLM Bias Is More Extreme Than Human Bias. Banaji explains effect sizes that blow past anything seen in psychology.
37:00 — A Global Problem: From U.S. Race Bias to India’s Caste Bias. How Western-built models export prejudice worldwide.
40:00 — The Loan Officer Problem: When “Truth to the Data” Is Immoral. A real-world example of why bias-blind AI is dangerous.
43:00 — Bayesian Hypocrisy: Humans Do It… and AI Does It More. Models replicate our irrational judgments — just with sharper edges.
48:00 — Are We Mature Enough to Hand Off Our Thinking? Banaji on the risks of relying on a mind we didn’t design and barely understand.
50:00 — The Big Question: Can AI Ever Make Us More Rational?
Dec 10, 2025 • 10min

Australia Just Rebooted Childhood — And the World Is Watching

Australia just imposed a blanket ban on social media for kids under the age of 16. It’s not just the strictest tech policy of any democracy — it’s stricter than China’s laws. No TikTok, no Instagram, no Snapchat. That’s it. And while Washington dithers behind a 1998 law written before Google existed, other countries are gearing up to copy Australia’s homework (Malaysia imposes a similar ban on January 1st). What happens now — the enforcement mess, the global backlash, the accidental creation of the largest clean “control group” in tech history — could reshape how we think about childhood, mental health, and what governments owe the developing brain.

00:00 — Australia’s historic under-16 social-media ban
01:10 — What counts as “social media” under the law?
02:04 — Why platforms — not kids — get fined
03:01 — How the U.S. is still stuck with COPPA (from 1998!)
04:28 — Why age 13 was always a fiction
05:15 — Psychologists on the teenage brain: “all gas, no brakes”
07:02 — Malaysia and the EU consider following Australia’s lead
08:00 — Nighttime curfews and other global experiments
09:11 — Albanese’s pitch: reclaiming “a real childhood”
10:20 — Could isolation leave Aussie teens behind socially?
11:22 — Why Australia is suddenly stricter than China
12:40 — Age-verification chaos: the AI that thinks my uncle is 12
13:40 — The enforcement black box
14:10 — Australia as the first real longitudinal control group
15:18 — If mental-health outcomes improve, everything changes
16:05 — The end of the “wild west” era of social platforms?
Dec 4, 2025 • 12min

AI Is Creating a ‘Hive Mind’ — Scientists Just Proved It

The big AI conference NeurIPS is underway in San Diego this week, and nearly 6,000 papers presented there will set the technical, intellectual, and ethical course for AI for the year. NeurIPS is a strange pseudo-academic gathering, where researchers from universities show up to present their findings alongside employees of Apple and Nvidia, part of the public-private revolving door of the tech industry. Sometimes they’re the same person: increasingly, academic researchers are allowed to also hold a job at a big company. I can’t blame them for taking opportunities where they arise—I’m sure I would, in their position—but it’s particularly bothersome to me as a journalist, because it limits their ability to speak publicly.

The papers cover robotics, alignment, and how to deliver kitty cat pictures more efficiently, but one paper in particular—awarded a top prize at the conference—grabbed me by the throat. A coalition from Stanford, the Allen Institute, Carnegie Mellon, and the University of Washington presented “Artificial Hive Mind: The Open-Ended Homogeneity of Language Models (and Beyond),” which shows that the average large language model converges toward a narrow set of responses when asked big, brainstormy, open-ended questions. Worse, different models tend to produce similar answers, meaning that when you switch from ChatGPT to Gemini or Claude for a “new perspective,” you’re not getting one. I’ve warned for years that AI could shrink our menu of choices while making us believe we have more of them. This paper shows just how real that risk is. Today I walk through the NeurIPS landscape, the other trends emerging at the conference, and why “creative assistance” may actually be the crushing of creativity in disguise.
Dec 3, 2025 • 5min

Jake on CBS News: OpenAI's Fight to "Make It Irresistible"

I’ve been in a pretty steady cadence of appearances on CBS News these days, and it’s been a wonderful place to have open-ended conversations about the latest tech headlines. Yesterday they had me on to talk about OpenAI’s “Code Red” memo commanding its employees to delay other products and projects and focus on making ChatGPT as “intuitive and emotional” as possible.

Programming Note: Tomorrow (Thursday) I’ll be guest-hosting TWiT’s podcast “Tech News Weekly” at 11am PT / 2pm ET. It’ll be available shortly thereafter on TWiT’s YouTube channel.

And I’ve had a series of fantastic conversations on various podcasts lately. Lux Capital’s RiskGaming podcast brought me on for an hour on the history of tech optimism and the generational thinking we’ll need to solve the big problems it’s created. And Meredith Edwards’ Meredith for Real podcast brought me in for an hour to talk about how (and whether) we can protect ourselves against what AI is amplifying in us. I’ll throw some clips up soon!
