The Rip Current

Jacob Ward
Dec 4, 2025 • 12min

AI is Creating a ‘Hive Mind’ — Scientists Just Proved It

The big AI conference NeurIPS is under way in San Diego this week, and nearly 6,000 papers presented there will set the technical, intellectual, and ethical course for AI for the year. NeurIPS is a strange pseudo-academic gathering, where researchers from universities show up to present their findings alongside employees of Apple and Nvidia, part of the strange public-private revolving door of the tech industry. Sometimes they’re the same person: increasingly, academic researchers are allowed to also hold a job at a big company. I can’t blame them for taking opportunities where they arise—I’m sure I would, in their position—but it’s particularly bothersome to me as a journalist, because it limits their ability to speak publicly.

The papers cover robotics, alignment, and how to deliver kitty cat pictures more efficiently, but one paper in particular—awarded a top prize at the conference—grabbed me by the throat. A coalition from Stanford, the Allen Institute, Carnegie Mellon, and the University of Washington presented “Artificial Hive Mind: The Open-Ended Homogeneity of Language Models (and Beyond),” which shows that the average large language model converges toward a narrow set of responses when asked big, brainstormy, open-ended questions. Worse, different models tend to produce similar answers, meaning that when you switch from ChatGPT to Gemini or Claude for a “new perspective,” you’re not getting one. I’ve warned for years that AI could shrink our menu of choices while making us believe we have more of them. This paper shows just how real that risk is.

Today I walk through the NeurIPS landscape, the other trends emerging at the conference, and why “creative assistance” may actually be the crushing of creativity in disguise.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit theripcurrent.substack.com/subscribe
Dec 3, 2025 • 5min

Jake on CBS News: OpenAI's Fight to "Make It Irresistible"

I’ve been in a pretty steady cadence of appearances on CBS News these days, and it’s been a wonderful place to have open-ended conversations about the latest tech headlines. Yesterday they had me on to talk about OpenAI’s “Code Red” memo commanding its employees to delay other products and projects and focus on making ChatGPT as “intuitive and emotional” as possible.

Programming Note: Tomorrow (Thursday) I’ll be guest-hosting TWiT’s podcast “Tech News Weekly” at 11am PT / 2pm ET. It’ll be available shortly thereafter on TWiT’s YouTube channel.

And I’ve had a series of fantastic conversations on various podcasts lately. Lux Capital’s RiskGaming Podcast brought me on for an hour on the history of tech optimism and the generational thinking we’ll need to solve the big problems it’s created. And Meredith Edwards’ Meredith for Real podcast brought me in for an hour to talk about how (and whether) we can protect ourselves against what AI is amplifying in us. I’ll throw some clips up soon!
Dec 2, 2025 • 12min

OpenAI Declares "Code Red" — And Takes Aim at Your Brain

According to the Wall Street Journal, Sam Altman sent an internal memo on Monday declaring a company-wide emergency and presumably ruining the holiday wind-down hopes of his faithful employees. OpenAI is hitting pause on advertising plans, delaying AI agents for health and shopping, and shelving a personal assistant called “Pulse.” All hands are being pulled back to one mission: making ChatGPT feel more personal, more intuitive, and more essential to your daily life.

The company says it wants the general quality, intelligence, and flexibility to improve, but I’d argue this is less about making the chatbot smarter and more about making it stickier.

Google’s Gemini has been surging — monthly active users jumped from 450 million in July to 650 million in October. Industry leaders like Salesforce CEO Marc Benioff are calling it the best LLM on the market. OpenAI seems to feel the heat, and also seems to feel it doesn’t have the resources to keep building everything it wants all at once — it has to prioritize. Consider that when Altman was recently asked on a podcast how he plans to get to profitability, he grew exasperated. “Enough,” he said.

But here’s what struck me about the Code Red. While Gemini is supposedly surpassing ChatGPT on industry benchmarks, I don’t think Altman is chasing benchmarks. He’s chasing the “toothbrush rule” — the Google standard for greenlighting new products, which says a product needs to become an essential habit used at least three times a day. The memo specifically emphasizes “personalization features.” They want ChatGPT to feel like it knows you, so that you feel known and can’t stop coming back to it.

I’ve been talking about AI distortion — the strange way these systems make us feel a genuine connection to what is, ultimately, a statistical pattern generator. That feeling isn’t a bug. It’s becoming the business model.

Facebook did this. Google did this. Now OpenAI is doing it: delaying monetization until the product is so woven into your life that you can’t imagine pulling away. Only then do the ads come.

Meanwhile, we’re living in a world where journalists have to call experts to verify whether a photo of Trump fellating Bill Clinton is real or AI-generated. The image generators keep getting better, the user numbers keep climbing, and the guardrails remain an afterthought.

This is the AI industry in December 2025: a race to become indispensable.
Dec 2, 2025 • 15min

Big Tech Got Its Government Back

Shameless Plug Number One: On Thursday I’ll be guest-hosting Tech News Weekly, where I’ll interview journalists about AI companionship, AI-regulating California state senator Scott Wiener, and more!

Shameless Plug Number Two: I had a really wonderful conversation on the Meredith for Real podcast this week. You don’t often get to speak with someone who actually reads the whole book and comes ready to take you apart, and she was such a thoughtful interviewer. Check it out if you have a moment.

Now then… It’s Monday, December 1st. I’m not a turkey guy, and I’m of the opinion that we’ve all made a terrible habit of subjecting ourselves to the one and only time anyone cooks the damn thing each year. So I hope you had an excellent alternative protein in addition to that one. Ours was the Nobu miso-marinated black cod. Unreal. Okay, after the food comes the A.I. hangover. This week I’m looking at three fronts where the future of technology just lurched in a very particular direction: politics, geopolitics, and the weird church council that is the A.I. conference circuit.

First, the politics. Trump’s leaked executive order to wipe out state A.I. laws seems to have stalled — not because he’s suddenly discovered restraint, but maybe because the polling suggests that killing A.I. regulation is radioactive. Instead, the effort is being shoved into Congress via the National Defense Authorization Act, the “must-pass” budget bill where bad ideas go to hide. Pair that with the Federal Trade Commission getting its teeth kicked in by Meta in court, and you can feel the end of the Biden-era regulatory moment and the start of a very different chapter: a government that treats Big Tech less as something to govern and more as something to protect.

Second, the geopolitics. TSMC’s CEO is now openly talking about expanding chip manufacturing outside Taiwan. That sounds like a business strategy, but it’s really a tectonic shift. For years, America’s commitment to Taiwan has been tied directly to that island’s role as our chip lifeline. If TSMC starts building more of that capacity in Arizona and elsewhere, the risk calculus around a Chinese move on Taiwan changes — and so does the fragility of the supply chain that A.I. sits on top of.

Finally, the quiet councils of the faithful: AWS re:Invent and NeurIPS. Amazon is under pressure to prove that all this spending on compute actually makes money. NeurIPS, meanwhile, is where the people who build the models go to decide what counts as progress: more efficient inference, new architectures, new “alignment” tricks. A single talk or paper at that conference can set the tone for years of insanely expensive work.

So between Trump’s maneuvers, the FTC’s loss, TSMC’s hedging, and the A.I. priesthood gathering in one place, the past week and this one are a pretty good snapshot of who really steers the current we’re all in.
Nov 25, 2025 • 12min

AI Distortion Is Here — And It’s Already Warping Us All

It’s a warning siren: people seeing delusions they never knew they had amplified by AI, a wave of lawsuits alleging emotional manipulation and even suicide coaching, a major company banning minors from talking freely with chatbots for fear of excessive attachment, and a top mental-health safety expert at OpenAI quietly heading for the exit.

For years I’ve argued that AI would distort our thinking the same way GPS distorted our sense of direction. But I didn’t grasp how severe that distortion could get—how quickly it would slide from harmless late-night confiding to full-blown psychosis in some users.

The Rip Current by Jacob Ward is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

OpenAI’s own data suggests millions of people each week show signs of suicidal ideation, emotional dependence, mania, or delusion inside their chats. Independent investigations and a growing legal record back that up. And all of this is happening while companies roll out “AI therapists” and push the fantasy that synthetic friends might be good for us.

As with most of what I’ve covered over the years, this isn’t a tech story. It’s a psychological one. A biological one. And a story about mixed incentives. A story about ancient circuitry overwhelmed by software, and by the companies who can’t help but market it as sentient. I’m calling it AI Distortion—a spectrum running from mild misunderstanding all the way to dependency, delusion, isolation, and crisis.

It’s becoming clear that we’re not just dealing with a tool that organizes our thoughts. We’re dealing with a system that can warp them, in all of us, every time.
Nov 25, 2025 • 9min

Insurers Are Backing Away From A.I.—and That Should Scare You More Than Any Sci-Fi Predictions

Today I dug into the one corner of the economy that’s supposed to keep its head when everyone else is drunk on hype: the insurance industry. Three of the biggest carriers in the country—AIG, Great American, and W.R. Berkley—are now begging regulators not to force them to cover A.I.-related losses, according to the Financial Times. These are the people who price hurricanes, wildfires, and war zones… and they look at A.I. and say, “No thanks.” That tells you something about where we really are in the cycle.

I also walked through the Trump administration’s latest maneuver, which looks a lot like carrying water for Big Tech in Brussels: trading lower steel tariffs for weaker European tech rules. (The Europeans said “no thank you.”) Meanwhile, we’re still waiting on the rumored executive order that would bulldoze state A.I. laws—the only guardrails we have in this country.

On the infrastructure front, reporting out of Mumbai shows how A.I. demand is forcing cities back toward coal just to keep data centers running. And if that wasn’t dystopian enough, I close with a bleak little nugget from Business Insider advising Gen Z to “focus on tasks, not job titles” in the A.I. economy. Translation: don’t expect a career—expect a series of gigs glued together by hope.

It’s a full Monday’s worth of contradictions: the fragile hype economy, the political favoritism behind it, and the physical reality—pollution, burnout, precarity—that always shows up eventually.

Also, if you missed it last week, I got to do my shtick on CNN and PBS. Thanks for watching!
Nov 21, 2025 • 8min

The Executive Order That Would End AI Regulation in America

I’m now a proud weekly contributor over at Hard Reset, which looks at the effects of technology on society, labor, and politics, and I’ve got exhaustive coverage of this story over there. Go have a look!

The only laws protecting you from the worst excesses of A.I. might be wiped out — and fast. A leaked Trump executive order would ban states from regulating A.I. at all, rolling over the only meaningful protections any of us currently have. There is no federal A.I. law, no federal data-privacy law, nothing. States like California, Illinois, and Colorado are the only line of defense against discriminatory algorithms, unsafe model deployment, and the use of A.I. as a quasi-therapist for millions of vulnerable people.

This isn’t just bad policy — it’s wildly unpopular. The last time Republicans tried this maneuver, the Senate killed it 99–1. And Americans across the political spectrum overwhelmingly want A.I. regulated, even if it slows the industry down. But the tech sector wants a frictionless, regulation-free environment, and the Trump administration seems eager to give it to them — from crypto dinners and gilded ballrooms to billion-dollar Saudi co-investment plans.

There’s another layer here: state laws also slow down the federal government’s attempt to build a massive surveillance apparatus using private data brokers and companies like Palantir. State privacy protections cut off that flow of data. Removing those laws clears the pipe.

The White House argues this is about national security, China, and “woke A.I.” But legal experts say the order is a misreading of commerce authority and won’t survive in court. And state leaders like California’s Scott Wiener already seem to be preparing to sue. For now, the takeaway is simple: states are the only governments in America protecting you from A.I. — and the administration is trying to take that away.
Nov 20, 2025 • 16min

When AI Stops Being About Jobs — and Starts Being About Us

In today’s episode, I’m following the money, the infrastructure, and the politics: Nvidia just posted another monster quarter and showed that it’s still the caffeine in the US economy. Investors briefly relaxed, even as they warned that an AI bubble is still the top fear in markets. Google jammed Gemini 3 deeper into Search in a bid to regain narrative control. Cloudflare broke down and reminded us that the “smart” future still runs on pretty fragile plumbing. The EU blinked on AI regulation. And here in the U.S., the White House rolled out the red carpet for Saudi Arabia as part of a multibillion-dollar AI infrastructure deal — one apparently shiny enough to have President Trump openly chastising a journalist for asking the Crown Prince about his personal responsibility for the murder of an American journalist.

But the deeper story I’m looking at today is social, not financial. Politicians like Bernie Sanders (in an interview on NBC News) are beginning to voice the fear that AI won’t just destroy jobs — it might quietly corrode our ability to relate to one another. If you’ve been following me, you know this is more or less all I’m thinking about at the moment.

So I looked at the history of this kind of concern. Here’s the takeaway: while we’re generally only concerned with death and financial loss in this country, we do snap awake from time to time when a new technology threatens our social fabric. Roll your eyes if you want to, but we’ve seen this moment before with telegraphs, movies, radio demagogues, television, video games, and social media, and there’s a lot to learn from that history. This episode explores that lineage, what it means for AI, and why regulation might arrive faster than companies expect.
Nov 18, 2025 • 10min

The Great AI Overbuild: From Tulsa’s Empty Highways to Britain’s Railway Mania to Data Centers in Space

In today’s Deep Cut, we look at the strange, repeating pattern of civilizations wildly overbuilding their infrastructure because they’re sure the future depends on it. Tulsa, Oklahoma once built a highway grid for millions who never arrived. Britain in the 1840s poured money into rail lines that didn’t need to exist. And now the world’s biggest tech companies are spending trillions on AI data centers—some even talking openly about building them in space.

I trace the logic behind this frenzy, from rising AI capex to the dream of limitless solar energy in orbit, and contrast it with the uncomfortable reality: much of today’s demand is artificially subsidized by the companies creating it. Along the way we revisit the Kardashev Scale, the pollution math of rocket launches, and the enduring human delusion that if we can build it, it should be built.

History shows what happens when infrastructure outpaces actual need. Today’s AI buildout has all the ingredients for another chapter in that saga.
Nov 17, 2025 • 8min

Mapping the AI Power Shift: Buffett’s Big Bet, Thiel’s Exit, Meta’s Language Play, and a Drunk Robot in Moscow

Good morning — it’s Monday, November 17th, and today’s “Map” traces the forces shaping the week ahead in tech, money, and global politics.

We start with something rare: Warren Buffett’s Berkshire Hathaway quietly taking a $4.9 billion stake in Alphabet — one of the last and most unusual moves of his career, and a signal about where the long-term AI value is consolidating.

On the other side of the chessboard, Peter Thiel just sold his entire ~$100 million stake in Nvidia, a move that raises questions about timing, crowd psychology, and what it means when a legendary contrarian (among many, many other things) decides the party’s over.

I also pull from a conversation I moderated last week with consular officials and regulators from across Asia, where the loudest concern was simple: English-centric AI is failing the rest of the world. Meta’s new Omnilingual ASR model may change that — if they can turn 1,600 supported languages into something the world can actually use.

Then we take a detour to Moscow, where Russia’s heavily hyped new humanoid robot walked onstage looking… well… drunk, and promptly face-planted so hard its panels fell off. It’s an unintentionally honest reminder that the dream of a helpful general-purpose humanoid remains firmly in the realm of wishful thinking, in spite of the nifty prototypes being trotted out each week.

Finally, we look ahead to Saudi Crown Prince Mohammed bin Salman’s visit to the White House, along with a package that includes $600 billion in investment, AI technology access, and a civilian nuclear deal — all coming at a time when AI-driven energy demand is exploding past America’s power capacity, and MBS’s people clearly know it.

The theme cutting through all of this is simple: AI capital flows are steering the global agenda. Not the products, not the startups — the money. A third of America’s GDP growth last year came solely from AI infrastructure spending, and this week’s Nvidia earnings call will tell us whether this frenzy is accelerating or hitting turbulence.

This week, we’ll watch the intersection of investment, policy, and power — and tomorrow, we’ll dive deep into one story that explains where all this is headed. Thanks for listening!
