

The Rip Current
Jacob Ward
We're in the invisible grip of technology, politics, and our own weirdness. We gotta get better at seeing it.
Hosted by veteran journalist Jacob Ward (correspondent for Al Jazeera, PBS, NBC News, and CNN), The Rip Current is your guide to spotting the hidden forces at work in our lives and getting across them safely.
Each week we speak to experts in the stuff you didn't know was having an impact on your life, from venture capital to racism to the tried-and-true tactics of bullies, and teach you how to see The Rip Current before it sweeps you out to sea.
Read more at TheRipCurrent.com! theripcurrent.substack.com
Episodes

Jan 8, 2026 • 14min
Robots Are Coming for Factory Jobs — and No One Voted on It
I’ve been watching robots fall over for a long time.

About a decade ago, I stood on a Florida speedway covering a DARPA robotics competition where machines failed spectacularly at things like opening doors and climbing stairs. It was funny, a little sad, and a reminder of just how hard it is to automate human behavior.

Fast-forward to CES this week, and the joke’s over.

The Rip Current by Jacob Ward is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

Humanoid robots are no longer pitching sideways into the dirt. They’re lifting, carrying, improvising, and — according to companies like Hyundai — heading onto American factory floors by 2028. These machines aren’t just pre-programmed arms anymore. Thanks to AI, they can understand general instructions, adapt on the fly, and perform tasks that once required human judgment.

The pitch from executives like Hyundai’s CEO is reassuring: robots won’t replace humans, they’ll “work for humans.” They’ll handle the dangerous, repetitive jobs so people can move into higher-skilled roles.

Labor unions hear something else entirely.

For many workers, especially in manufacturing, these are some of the last stable, well-paying jobs that don’t require a college degree. And no one is voting on whether those jobs disappear. There’s no democratic process weighing the tradeoffs. We’re just sliding, quietly, toward a future where efficiency outruns consent.

What troubles me most isn’t the technology itself. It’s the assumption baked into it — that if people are being worked like robots, the solution isn’t to make work more humane, but to replace the people.

That’s not inevitability. That’s a choice. And right now, it’s being made without us.

Jake Guest-Hosts “Tech News Weekly”

The nice folks at This Week in Tech, who have brought me on regularly for a year or so now, asked me to fill in for Mikah Sargent, host of Tech News Weekly, and I got to enjoy a turn in the anchor’s chair just before the holidays. Have a look!

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit theripcurrent.substack.com/subscribe

Jan 8, 2026 • 1h 8min
Humans are Tribal and Judgy, and AI is Exploiting It (with Yarrow Dunham)
Why do kids form biases almost instantly? Why do people punish unfairness even when it costs them? And why do social media and AI seem to make all of this worse?

In this episode of The Rip Current, I sit down with Yale psychologist Yarrow Dunham to unpack his many years of research into how humans form groups, enforce fairness, and turn tiny assumptions into lifelong beliefs. We talk about children, tribalism, polarization, altruistic punishment — and what happens when these ancient instincts collide with modern technology and generative AI.

This conversation explains a lot about why the world feels broken — and why it doesn’t have to stay that way.

00:00 — How Fast Bias Forms (Even in Kids)
Yarrow Dunham explains how children develop group preferences almost instantly — and why bias doesn’t require ideology, history, or teaching.

02:05 — The “Minimal Group” Experiment Explained
Why simply assigning people to meaningless groups reliably creates favoritism, memory distortion, and preference.

04:45 — Is Bias Innate or Learned?
What research with infants suggests about early-emerging social preferences — and why “innate” is the wrong shortcut.

07:10 — Why Humans Are Wired for Cooperation
How long-term reciprocity with non-kin sets humans apart from other animals — and why group loyalty evolved.

09:55 — How Big Can a “Tribe” Be?
From hunter-gatherer bands to modern identities: nested groups, concentric loyalties, and flexible belonging.

12:40 — When Bias Becomes Dangerous
Why liking your group doesn’t automatically mean hating others — and what turns neutrality into hostility.

14:30 — The Surprising Power of Expected Cooperation
A key finding: bias toward out-groups collapses when people expect to work together — even before contact.

17:10 — Why This Matters for Polarization
How declining cross-group interaction fuels political and social division — online and offline.

19:25 — Kids, Fairness, and Punishing Unfairness
Why children will pay a personal cost to enforce fairness — even when they’re not directly involved.

22:10 — Altruistic Punishment and Moral Outrage
How fairness enforcement connects to adult politics, ideology, and “voting against self-interest.”

25:05 — Fairness vs. Meritocracy
Why kids start out egalitarian — and how societies train them to accept inequality over time.

27:45 — Status, Race, and Group Preference
How high-status groups override in-group bias — and what research shows in the U.S. and South Africa.

30:40 — The ‘Default Human’ Problem
Why systems (and societies) treat white men as the baseline — and the real-world consequences of that bias.

33:20 — What Social Media Gets Exactly Wrong
How algorithms amplify group identity and hostility — creating a perfect polarization machine.

36:05 — Why AI Feels Like It’s “On Your Side”
How generative AI triggers ancient social instincts by mimicking agency, affirmation, and belonging.

38:50 — The Danger of Sycophantic AI
Why flattery and agreement are design choices — and how they short-circuit growth, challenge, and truth.

41:40 — The Feedback Loop That Makes Bias Worse
How AI trained on human bias reflects it back as authority — reinforcing mistaken beliefs at scale.

44:30 — Can AI Reduce Bias Instead of Amplifying It?
What psychology suggests about indirect contact, imagined cooperation, and redesigned systems.

47:10 — What Actually Works to Reduce Bias
Equal-status cooperation, shared goals, and why exposure alone isn’t enough.

50:05 — The Real Fix Is Structural
Why individual goodwill isn’t enough — and how institutions shape who meets whom.

52:40 — Final Takeaway: Bias Is Flexible
The hopeful conclusion: group boundaries can be redrawn quickly — if we choose to design for it.

Jan 5, 2026 • 13min
Every Oil Empire Thinks This Time Will Be Different.
It’s a very weird Monday back from the holidays. While most of us were shaking off jet lag and reminding ourselves who we are when we’re not sleeping late and hanging with family, the world woke up to a piece of news this weekend that showed no one in power learned a goddamn thing in history class: the United States has rendered Venezuela’s president to New York, and powerful people are openly fantasizing about “fixing” a broken country by taking control of its oil.

This isn’t a defense of Nicolás Maduro. He presided over the destruction of a nation sitting on the world’s largest proven oil reserves. Venezuela’s state now barely functions beyond preserving its own power. The Venezuelans I’ve spoken with have a wide variety of feelings about an incompetent dictator being arrested by the United States.

But anyone who has read any history knows that the story of oil grabs is a story of financial disaster. So when I hear confident talk about oil revenues flowing back to the U.S., I don’t hear a plan. I hear the opening chapter of a time-honored financial tragedy that’s been repeated again and again, even in our lifetimes.

Let’s put aside the moral horror of military invasion and colonial brutality, and just focus on whether the money ever actually flows back to the invader. Example after example shows it doesn’t: Iraq was supposed to stabilize energy markets. Instead, it delivered trillions in war costs, higher deficits, and zero leverage over oil prices. Britain’s attempt to hang onto the Suez Canal ended with a humiliating retreat, an IMF bailout, and the end of its time as a superpower. France’s war in Algeria collapsed its government. Dutch oil extraction in Nigeria boomeranged back home as lawsuits, environmental liability, and reputational ruin.

Oil empires all make the same mistake: they think they can nationalize the upside while outsourcing the risk. In reality, profits stay local or corporate. Costs always come home. And we’re about to learn it all over again.

Jan 2, 2026 • 14min
Why So Many People Hate AI — and Why 2026 Is the Breaking Point
Happy New Year! I’ve been off for the holiday — we cranked through a bake-off, a dance party, a family hot tub visit, and a makeshift ball drop in the living room of a snowy cabin — and I’m feeling recharged for (at least some portion of) 2026. So let’s get to it.

I woke to reports that “safeguard failures” in Elon Musk’s Grok led to the generation of child sexual exploitative material (Reuters) — a euphemism that barely disguises how awful this is. I was on CBS News to talk about it this morning, but I made the point that the real question isn’t how did this happen? It’s how could it not?

AI systems are built by vacuuming up the worst and best of human behavior and recombining it into something that feels intelligent, emotional, and intimate. I explored that dynamic in The Loop — and we’re now seeing it play out in public, at scale.

The New York Times threw a question at all of us this morning: Why Do Americans Hate AI? (NYT). One data point surprised me: as recently as 2022, people in many other countries were more optimistic than Americans when it came to the technology. Huh! But the answer to the overall question seems to signal that we’ve all learned something from the social media era and from the recent turn toward a much more realistic assessment of technology companies’ roles in our lives: for most people, the benefits are fuzzy, while the threats — to jobs, dignity, and social stability — are crystal clear.

Layer onto that a dated PR playbook (“we’re working on it”), a federal government openly hostile to regulation, and headlines promising mass job displacement, and the distrust makes a lot of sense.

Of course, this is why states are stepping in. The rise of social media and the simultaneous crisis in political discord, health misinformation, and depression rates left states holding the bag, and they’re clearly not going to let that happen again. California’s new AI laws — addressing deepfake pornography, AI impersonation of licensed professionals, chatbot safeguards for minors, and transparency in AI-written police reports — are a direct response to the past and the future.

But if you think the distaste for AI’s influence is powerful here, I think we haven’t even gotten started in the rest of the world. Here’s a recent incident that has me more convinced of it than ever: a stadium in India became the scene of a violent protest when Indian football fans who’d paid good money for time with Lionel Messi were kept from seeing the soccer star by a crowd of VIPs clustered around him for selfies. The resulting (and utterly understandable) outpouring of anger made me think hard about what happens when millions of outsourced jobs disappear overnight. I think those fans’ rage at being excluded from a promised reward, bought with the money they work so hard for, is a preview.

So yes — Americans distrust AI. But the real question is how deep those feelings go, and how much unrest this technology is quietly building up, worldwide. That’s the problem we’ll be reckoning with all year long.

Dec 19, 2025 • 13min
Did Weed Just Escape the Culture War?
Here’s one I truly didn’t see coming: the Trump administration just made the most scientifically meaningful shift in U.S. marijuana policy in years.

No, weed isn’t suddenly legal everywhere. But moving marijuana from Schedule I — alongside heroin — to Schedule III is a very big deal. That single bureaucratic change cracks open something that’s been locked shut for half a century: real research.

For years, I’ve covered the strange absurdities of marijuana science in America. If you were a federally funded researcher — which almost every serious scientist is — you weren’t allowed to study the weed people actually use. Instead, you had to rely on a single government-approved grow operation producing products that didn’t resemble what’s sold in dispensaries. As a result, commercialization raced ahead while our understanding lagged far behind.

That’s how we ended up with confident opinions, big business, and weak data. We know marijuana can trigger severe psychological effects in a meaningful number of people. We know it can cause real physical distress for others. What we don’t know — because we’ve blocked ourselves from knowing — is who’s at risk, why, and how to use it safely at scale.

Meanwhile, the argument that weed belongs in the same category as drugs linked to violence and mass death has always collapsed under scrutiny. Alcohol, linked to more than 178,000 deaths per year in the United States alone, does far more damage, both socially and physically, yet sits comfortably in legal daylight.

If this reclassification sticks, the excuse phase is over. States making billions from legal cannabis now need to fund serious, independent research. I didn’t expect this administration to make a science-forward move like this — but here we are. Here’s hoping we can finish the job and finally understand what we’ve been pretending to regulate for decades.

Dec 19, 2025 • 13min
AI Has Us Lying to One Another (and It's Changing How We Think)
Okay, honest admission here: I don’t fully know what I think about this topic yet. A podcast producer (thanks Nancy!) once told me “let them watch you think out loud,” and I’m taking that to heart — because the thing I’m worried about is already happening to me.

Lately, I’ve been leaning hard on AI tools, God help me. Not to write for me — a little, sure, but for the most part I still do that myself — but to help me quickly get acclimated to unfamiliar worlds. The latest unfamiliar world is online marketing, which I do not understand AT ALL but now need to master to survive as an independent journalist. And here’s the problem: the advice these systems give isn’t neutral. First, it’s not really “advice”; it’s just statistically relevant language regurgitated as advice. Second, because these systems vacuum up language wherever they can find it, their suggestions come with online values baked in. I know this — I wrote a whole f*****g book about it — but I lose track of it in my desperation to learn quickly.

I’m currently trying to analyze who it is that follows me on TikTok, and why, so I can try to port some of those people (or at least those types of people) over to Substack (thank you for being here) and to YouTube, where one can actually make a living filing analysis like this. (Smash that subscribe button!) So ChatGPT told me to pay attention to a handful of metrics: watch time, who gets past two seconds of the video, etc. One of the main metrics I was told to prioritize? Disagreement in the comments. Not understanding, learning, clarity, the stuff I’m after in my everyday work. Fighting. Comments in which people want to argue with me are “good,” according to ChatGPT. Thoughtful consensus? Statistically irrelevant.

Here’s the added trouble. It’s one thing to read that and filter out what’s unhelpful. It’s another thing to do so in a world where all of us are supposed to pretend we had this thought ourselves. AI isn’t just helping us work faster. It’s quietly training us to behave differently — and to hide how that training happens. We’re all pretending this output is “ours,” because the unspoken promise of AI right now is that you can get help and still take the credit. (I believe this is a fundamental piece of the marketing that no one’s saying out loud, but everyone is implying.)

And the danger isn’t just dishonesty toward others. It’s that we start believing our own act. There’s a huge canon of scientific literature showing that lying about a thing causes us to internalize the lie over time. The Harvard psychologist Daniel Schacter wrote a sweeping review of the science in 1999 entitled “The Seven Sins of Memory,” in which he synthesized a range of studies showing that memory is reconstructive (we build each recollection on prior beliefs rather than drawing on a perfect replay of reality) and that repetition and suggestion can implant or strengthen false beliefs that feel subjectively true. Throw us enough ideas and culturally condition us to hide where we got them, and eventually we’ll come to believe they were our own. (And to be clear, I knew a little about the reconstructive nature of memory, but ChatGPT brought me Schacter’s paper. So there you go.)

What am I suggesting here? I know we’re creating a culture where machine advice is passed off as human judgment. I don’t know whether the answer is transparency, labeling, norms, regulation, or something else entirely. So I guess I’m starting with transparency.

In any event, I do know this: lying about how we did or learned something makes us less discerning thinkers. And AI’s current role in our lives is built on that lie.

Thinking out loud. Feedback welcome. Thanks!

Dec 17, 2025 • 10min
AI Data Centers Are Draining Our Resources — and Making Strange Political Allies
Note: You can read a deeper dive into this whole issue in my weekly column over at Hard Reset.

The United States has a split personality when it comes to AI data centers. On one side, tech leaders (and the White House) celebrate artificial intelligence as a symbol of national power and economic growth. On the other, politicians from Bernie Sanders to Ron DeSantis point out that when it shows up in our towns, it drains water, drives up electricity prices, and demands round-the-clock power like an always-awake city.

Every AI prompt—whether it’s wedding vows or a goofy image—fires up racks of servers that require enormous amounts of electricity and water to stay cool. The result is rising pressure on local water supplies and power grids, and a wave of protests and political resistance across the country. I’m covering that in today’s episode, and you can read the whole report over at Hard Reset.

I also got to speak about the national controversy over AI data centers on CBS News this week. Check it out!

Dec 16, 2025 • 8min
Jake on PBS: Is This the End of A.I. Laws?
Geoff Bennett invited me onto Newshour on Friday to discuss the president’s executive order outlawing state regulation of A.I. I’m used to the 90-second to 3-minute format that’s so common on the networks, so the breathing room Geoff gave me on this topic was wonderful; it let me get into some of the subtleties and context that this debate often excludes! Also, big props to all of you who wrote to say you spotted me in the show — the Newshour Friday night crew rolls deep! Thanks for watching.

Dec 15, 2025 • 11min
AI Isn’t Just a Money Risk Anymore — It’s Bigger than That
For most of modern history, regulation in Western democracies has focused on two kinds of harm: people dying and people losing money. But with AI, that’s beginning to change.

This week, the headlines point toward a new understanding that more is at stake than our physical health and our wallets: governments are starting to treat our psychological relationship with technology as a real risk. Not a side effect, not a moral panic, not a punchline to jokes about frivolous lawyers. Increasingly, I’m seeing lawmakers understand that it’s a core threat.

There is, for instance, the extraordinary speech from the new head of MI6, Britain’s intelligence agency. Instead of focusing only on missiles, spies, or nation-state enemies, she warned that AI and hyper-personalized technologies are rewriting the nature of conflict itself — blurring peace and war, state action and private influence, reality and manipulation. When the person responsible for assessing existential threats starts talking about perception and persuasion, the subject has moved from academic hand-wringing to real danger.

Then there’s the growing evidence that militant groups are using AI to recruit, radicalize, and persuade — often more effectively than humans can. Researchers have now shown that AI-generated political messaging can outperform human persuasion. That matters, because most of us still believe we’re immune to manipulation. We’re not. Our brains are programmable, and AI is getting very good at learning our instructions.

That same playbook is showing up in the behavior of our own government. Federal agencies are now mimicking the president’s incendiary online style, deploying AI-generated images and rage-bait tactics that look disturbingly similar to extremist propaganda. It’s no coincidence that Oxford University Press crowned “rage bait” its word of the year. Outrage is no longer a side effect of the internet — it’s a design strategy.

What’s different now is the regulatory response. A coalition of 42 U.S. attorneys general has formally warned AI companies about psychologically harmful interactions, including emotional dependency and delusional attachment to chatbots and “companions.” This isn’t about fraud or physical injury. It’s about damage to people’s inner lives — something American law has traditionally been reluctant to touch.

At the same time, the Trump administration is trying to strip states of their power to regulate AI at all, even as states are the only ones meaningfully responding to these risks. That tension — between lived harm and promised utopia — is going to define the next few years.

We can all feel that something is wrong. Not just economically, but cognitively. Trust, truth, childhood development, shared reality — all of it feels under pressure. The question now is whether regulation catches up before those harms harden into the new normal.

Mentioned in This Article:

Britain caught in ‘space between peace and war’, says new head of MI6 | UK security and counter-terrorism | The Guardian
https://www.theguardian.com/uk-news/2025/dec/15/britain-caught-in-space-between-peace-and-war-new-head-of-mi6-warns

Islamic State group and other extremists are turning to AI | AP News
https://apnews.com/article/islamic-state-group-artificial-intelligence-deepfakes-ba201d23b91dbab95f6a8e7ad8b778d5

‘Virality, rumors and lies’: US federal agencies mimic Trump on social media | Donald Trump | The Guardian
https://www.theguardian.com/us-news/2025/dec/15/trump-agencies-style-social-media

US state attorneys-general demand better AI safeguards
https://www.ft.com/content/4f3161cc-b97a-496e-b74e-4d6d2467d59c

Bonus: The Whistleblower Conundrum

I’m also reading this very interesting and very sad account of the fate that has befallen tech workers who couldn’t take it any longer and spoke out. The thing that more and more of them are learning, however, is that the False Claims Act can get them a big percentage of whatever fines an agency imposes: something they’ll need, considering they’re unlikely to work again. Tech whistleblowers are doing us all a huge favor; I hope an infrastructure can grow up around supporting them when they do it.

Tech whistleblowers face job losses and isolation - The Washington Post
https://www.washingtonpost.com/technology/2025/12/15/big-tech-whistleblowers-speak-out/

Dec 12, 2025 • 11min
Trump's Executive Order Gives the AI Industry What It Wants.
Note: This is a video summary of a longer column I wrote for Hard Reset today. Please have a look!

President Trump has signed a sweeping executive order aimed at blocking U.S. states from regulating artificial intelligence — arguing that a “patchwork” of laws threatens innovation and America’s global competitiveness. But there’s a catch: there is no federal AI law to replace what states have been doing.

In this episode, I break down what the executive order actually does, why states stepped in to regulate AI in the first place, how this move conflicts with public opinion, and why legal experts believe the fight is headed straight to the courts.

This isn’t just a tech story. It’s a constitutional one.

You can read my full analysis at HardResetMedia.com.


