
Keen On America Episode 2241: Gaia Bernstein on the Threat of AI Companions to Children

No, social media may no longer be the greatest danger to our children's well-being. According to the writer and digital activist Gaia Bernstein, the newest existential threat is AI companions. Bernstein, who is organizing a symposium today on AI companions as the "new frontier of kids' screen addiction," warns that this technology, while marketed as a solution to loneliness, may actually worsen social isolation by providing artificially perfect relationships that make real-world interactions seem more difficult. Bernstein also raises concerns about data collection, privacy, and the anthropomorphization of AI, which makes children particularly vulnerable. She advocates for regulation, especially to protect children, and notes that while major tech companies like Google and Facebook are cautious about directly entering this space, smaller companies are aggressively developing AI companions designed to hook our kids.
Here are the 5 KEEN ON takeaways from our conversation with Bernstein:
* AI companions represent a concerning evolution of screen addiction, where children may form deep emotional attachments to AI that perfectly adapts to their needs, potentially making real-world relationships seem too difficult and messy in comparison.
* The business model for AI companions follows the problematic pattern of surveillance capitalism - companies collect intimate personal data while keeping users engaged for as long as possible. The data collected by AI companions is even more personal and detailed than social media.
* Current regulations are insufficient - while COPPA requires parental consent for children under 13, there's no effective age verification on the internet. Bernstein notes it's currently "the Wild West," with companies like Character AI and Replika actively targeting young users.
* Children are especially vulnerable to AI companions because their prefrontal cortex is less developed, making them more susceptible to emotional manipulation and anthropomorphization. They're more likely to believe the AI is "real" and form unhealthy attachments.
* While major tech companies like Google seem hesitant to enter the AI companion space directly due to known risks, the barrier to entry is lower than for social media, since these apps don't require a critical mass of users. This means many smaller companies can create potentially harmful AI companions targeting children.
The Dangers of AI Companions for Kids
The Full Conversation with Gaia Bernstein
Andrew Keen: Hello, everybody. It's Tuesday, February 18th, 2025, and we have a very interesting symposium taking place later this morning at Seton Hall Law School—a virtual symposium on AI companions run by my guest, Gaia Bernstein. Many of you know her as the author of "Unwired: Gaining Control over Addictive Technologies." This symposium focuses on the impact of AI companions on children. Gaia is joining us from New York City. Gaia, good to see you again.
Gaia Bernstein: Good to see you too. Thank you for having me.
Andrew Keen: Would it be fair to say you're applying many of the ideas you developed in "Unwired" to the AI area? When you were on the show a couple of years ago, AI was still theory and promise. These days, it's the thing in itself. Is that a fair description of your virtual symposium on AI companions—warning parents about the dangers of AI when it comes to their children?
Gaia Bernstein: Yes, everything is very much related. We went through a decade where kids spent all their time on screens in schools and at home. Now we have AI companies saying they have a solution—they'll cure the loneliness problem with AI companions. I think it's not really a cure; it's the continuation of the same problem.
Andrew Keen: Years ago, we had Sherry Turkle on the show. She's done research on the impact of robots, particularly in Japan. She suggested that it actually does address the loneliness epidemic. Is there any truth to this in your research?
Gaia Bernstein: For AI companions, the research is just beginning. We see initial research showing that people may feel better when they're online, but they feel worse when they're offline. They're spending more time with these companions but having fewer relationships offline and feeling less comfortable being offline.
Andrew Keen: Are the big AI platforms—Anthropic, OpenAI, Google's Gemini, Elon Musk's X AI—focusing on building companions for children, or is this the focus of other startups?
Gaia Bernstein: That's a very good question. The first lawsuit was filed against Character AI, and the plaintiffs sued Google as well. The complaint stated that Google was aware of the dangers of AI companions, so it didn't want to touch them directly but found ways of investing indirectly. These lawsuits were just filed, so we'll learn much more through discovery.
Andrew Keen: I have to tell you that my wife is the head of litigation at Google.
Gaia Bernstein: Well, I'm not suing. But I know the people who are doing it.
Andrew Keen: Are you sympathetic with that strategy? Given the history of big tech, given what we know now about social media and the impact of the Internet on children—it's still a controversial subject, but you made your position clear in "Unwired" about how addictive technology is being used by big tech to take control and take advantage of children.
Gaia Bernstein: I don't think it's a good idea for anybody to do that. This is just taking us one more step in the direction we've been going. I think big tech knows it, and that's why they're trying to stay away from being involved directly.
Andrew Keen: Earlier this week, we did a show with Ray Brescia from Albany Law School about his new book "The Private is Political" and how social media does away with privacy and turns all our data into political data. For you, is this AI revolution just the next chapter in surveillance capitalism?
Gaia Bernstein: If we take AI companions as a case study, this is definitely the next step—it's enhancing it. With social media and games, we have a business model where we get products for free and companies make money through collecting our data, keeping us online as long as possible, and targeting advertising. Companies like Character AI are getting even better data because they're collecting very intimate information. In their onboarding process, you select a character compatible with you by answering questions like "How would you like your replica to treat you?" The options include: "Take the lead and be proactive," "Enjoy the thrill of being chased," "Seek emotional depth and connection," "Be vulnerable and respectful," or "Depends on my mood." The private information they're getting is much more sophisticated than before.
Andrew Keen: And children, particularly those under 12 or 13, are much more vulnerable to that kind of intimacy.
Gaia Bernstein: They are much more vulnerable because their prefrontal cortex is less developed, making them more susceptible to emotional attachments and risk-taking. One of the addictive techniques used by AI companies is anthropomorphizing - giving the technology human qualities. Children think their stuffed animals are human; adults don't think this way. But these companies make their AI bots seem human, and kids are much more likely to get attached. These websites speak in human voices, have ...
