

Computer Says Maybe
Alix Dunn
Technology is changing fast. And it's changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up. Step outside the hype and explore the possibilities, problems, and politics of technology. We publish weekly.
Episodes
Mentioned books

Dec 5, 2025 • 59min
Who Knows? Fact-Finding in a Failing State w/ HRDAG and Data & Society
Everything is happening so fast. And a lot of it’s bad. What can research and science organizations do when issues are complex, fast-moving, and super important?

More like this: Independent Researchers in a Platform Era w/ Brandi Geurkink

Building knowledge is more important than ever in times like these. This week, we have three guests. Megan Price from the Human Rights Data Analysis Group (HRDAG) shares how statistics and data science can be used to get justice. Janet Haven and Charlton McIlwain from Data & Society explore the role research institutions can play in bridging research knowledge and policy prescription.

Further reading & resources:
HRDAG’s involvement in the trial of José Efraín Ríos Montt
A profile of Guatemala and timeline of its conflict — BBC (last updated in 2024)
To Protect and Serve? — a study on predictive policing by William Isaac and Kristian Lum
An article about the above study — The Appeal
HRDAG’s stand against tyranny
More on Understanding AI — Data & Society’s event series with the New York Public Library
About Janet Haven, Executive Director of Data & Society
About Charlton McIlwain, board president of Data & Society
Bias in Computer Systems by Helen Nissenbaum
Center for Critical Race and Digital Studies

If you want to hear more about the history of D&S, the full conversation is up on YouTube (link to follow).

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Post Production by Sarah Myles | Pre Production by Georgia Iacovou

Nov 28, 2025 • 48min
Who Knows? Independent Researchers in a Platform Era w/ Brandi Geurkink
Imagine doing tech research… but from outside the tech industry? What an idea…

More like this: Nodestar: Turning Networks into Knowledge w/ Andrew Trask

So much of tech research happens within the tech industry itself, because it requires data access, funding, and compute. But what the tech industry has in resources, it lacks in independence, scruples, and a public interest imperative. Alix is joined by Brandi Geurkink from the Coalition for Independent Technology Research to discuss her work at a time when platforms have never been so opaque, and funding has never been so sparse.

Further reading & resources:
More about Brandi and the Coalition
Understanding Engagement with U.S. (Mis)Information News Sources on Facebook by Laura Edelson & Damon McCoy
More on Laura Edelson
More on Damon McCoy
Jim Jordan bringing in Nigel Farage from the UK to legitimise his attacks on EU tech regulations — Politico
Ted Cruz on preventing jawboning & government censorship of social media — Bloomberg
Judge dismisses ‘vapid’ Elon Musk lawsuit against group that cataloged racist content on X — The Guardian
See the CCDH’s blog post on getting the case thrown out
Platforms are blocking independent researchers from investigating deepfakes by Ariella Steinhorn

Disclosure: This guest is a PR client of our consultancy team. As always, the conversation reflects our genuine interest in their work and ideas.

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Nov 21, 2025 • 52min
Très Publique: Algorithms in the French Welfare State w/ Soizic Pénicaud
Governments around the world are using predictive systems to manage engagement with even the most vulnerable. Results are mixed.

More like this: Algorithmically Cutting Benefits w/ Kevin De Liban

Luckily, people like Soizic Pénicaud are working to prevent the modern welfare state from becoming a web of punishment for the most marginalised. Soizic has worked on algorithmic transparency both inside and outside of government, and this week she shares her journey from incrementally improving these systems (boring, ineffective, hard) — to escaping the slow pace of government and looking at the bigger picture of algorithmic governance, and how it can build better public benefit in France (fun, transformative, and a good challenge).

Soizic is working to shift political debates about opaque decision-making algorithms to focus on what they’re really about: the marginalised communities whose lives are most affected by these systems.

Further reading & resources:
The Observatory of Public Algorithms and their Inventory
The ongoing court case against the French welfare agency's risk-scoring algorithm
More about Soizic
More on the Transparency of Public Algorithms roadmap from Etalab — the task force Soizic was part of
La Quadrature du Net
France’s Digital Inquisition — co-authored by Soizic in collaboration with Lighthouse Reports, 2023
AI prototypes for UK welfare system dropped as officials lament ‘false starts’ — The Guardian, Jan 2025
Learning from Cancelled Systems by Data Justice Lab
The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment — by Nari Johnson et al., FAccT 2024

Subscribe to our newsletter to get more stuff than just a podcast — we host live shows and do other work that you will definitely be interested in!

Nov 14, 2025 • 1h
Straight to Video: From Rodney King to Sora w/ Sam Gregory
Sam Gregory, Executive Director of WITNESS, dives into the critical evolution of video evidence since the Rodney King beating in 1991. He explores the balance between generative video technology, like Sora, and the risks of misinformation. The discussion covers the normalization of likeness theft and how it impacts trust and identity. Gregory also critiques Silicon Valley's hubris and emphasizes the need for responsible governance in digital spaces. Ultimately, he highlights the intersection of informational and material harms caused by AI, and urges urgent action to protect vulnerable users.

Nov 7, 2025 • 42min
The Toxic Relationship Between AI & Journalism w/ Nic Dawes
What happens when AI models try to fill the gaping hole in the media landscape where journalists should be?

More like this: Reanimating Apartheid w/ Nic Dawes

This week Alix is joined by Nic Dawes, who until very recently ran the non-profit newsroom The City. In this conversation we explore journalism’s newfound toxic relationship with AI and big tech: can journalists meaningfully use AI in their work? If a model summarises a few documents, does that add a new layer of efficiency, or inadvertently oversimplify? And what can we learn from big tech positioning itself as a helpful friend to journalism during the Search era?

Beyond just the accurate relaying of facts, journalistic organisations also represent an entire backlog of valuable training data for AI companies. If you don’t have the same resources as the NYT, suing for copyright infringement isn’t an option — so what then? Nic says we have to break out of the false binary of ‘if you can’t beat them, join them!’

Further reading & resources:
Judge allows ‘New York Times’ copyright case against OpenAI to go forward — NPR
Generative AI and news report 2025: How people think about AI’s role in journalism and society — Reuters Institute
An example of The City’s investigative reporting: private equity firms buying up property in the Bronx — 2022
The Intimacy Dividend — Shuwei Fang
Sam Altman on Twitter announcing that they’ve improved ChatGPT to be mindful of mental health effects — “We realize this made it less useful/enjoyable to many users who had no mental health problems, but…”

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Oct 31, 2025 • 46min
Unlearning in the AI Era w/ Nabiha Syed at Mozilla Foundation
Nabiha Syed, Executive Director of the Mozilla Foundation and former media lawyer, discusses redefining tech for the AI era. She emphasizes the need for building human-centered technology and critiques the obsession with scale. Nabiha shares her vision for an inclusive MozFest, focusing on unlearning and diverse perspectives. She discusses the importance of community-driven tech, the role of no-code tools in empowering users, and the idea that joy should be central to tech experiences. This engaging conversation covers both the challenges and opportunities within today's technology landscape.

Oct 24, 2025 • 53min
You Seem Lonely. Have a Robot w/ Stevie Chancellor
Loneliness and mental illness are rising in the US, while access to care dwindles — so a lot of people are turning to chatbots. Do chatbots work for therapy?

More like this: The Collective Intelligence Project w/ Divya Siddarth and Zarinah Agnew

Why are individuals confiding in chatbots over qualified human therapists? Stevie Chancellor explains why an LLM can’t replace a therapeutic relationship — but often there’s just no other choice. It turns out the chatbots designed specifically for therapy are even worse than general models like ChatGPT; Stevie shares her ideas on how LLMs could potentially be used — safely — for therapeutic support. This is a really helpful primer on how to evaluate chatbots for specific, human-replacing tasks.

Further reading & resources:
Stevie’s paper on whether replacing therapists with LLMs is even possible (it’s not)
See the research on GitHub
People are Losing Their Loved Ones to AI-Fuelled Spiritual Fantasies — Rolling Stone (May 2025)
Silicon Valley VC Geoff Lewis becomes convinced that ChatGPT is telling him government secrets from the future
Loneliness considered a public health epidemic according to the APA
FTC orders online therapy company BetterHelp to pay damages of $7.8m
Delta’s plan to use AI in ticket pricing draws fire from US lawmakers — Reuters, July 2025

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Oct 17, 2025 • 59min
Local Laws for Global Technologies w/ Hillary Ronen
What’s it like working as a local representative when you live next door to Silicon Valley?

More like this: Chasing Away Sidewalk Labs w/ Bianca Wylie

When Hillary Ronen was on the Board of Supervisors for San Francisco, she had to make lots of decisions about technology. She felt unprepared. Now she sees local policymakers on the frontlines of a battle over resources and governance in an AI era, and is working to upskill them to make better decisions for their constituents. No degree in computer science required.

Further reading & resources:
Local Leadership in the Era of Artificial Intelligence and the Tech Oligarchy by Hillary Ronen
More on Hillary’s work as a Supervisor for SF
Hillary Ronen on progressives, messaging, hard choices, and justice — interview in 48Hills from January 2025
More about Local Progress
Confronting Preemption — a short briefing by Local Progress
What Happens When State and Local Laws Conflict — article on state-level preemption by State Court Report

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Oct 10, 2025 • 56min
Gotcha! Enshittification w/ Cory Doctorow
In this engaging conversation, Cory Doctorow, a science fiction novelist and digital-rights activist, delves into his book Enshittification. He explores the alarming three-stage decay of platforms: initial goodness, user lock-in, and eventual exploitation. Cory also introduces concepts like the 'chickenized reverse centaur' to explain gig work exploitation. He advocates for reducing platform power through competition and interoperability, emphasizing the need for coordinated solutions to combat industrialized scams in the digital age.

Oct 3, 2025 • 55min
Gotcha! ScamGPT w/ Lana Swartz & Alice Marwick
In this engaging discussion, guests Lana Swartz and Alice Marwick dive into the dark world of AI-enabled scams. Lana, a media studies professor, highlights how economic precarity and side-hustle culture fuel scams. Alice, a social scientist, exposes the alarming role of human trafficking in scam operations. They explore how generative AI automates fraud by crafting personalized scripts targeting vulnerability, while also debating whether AI could reduce human trafficking. Their insights shed light on the terrifying interplay between technology and exploitation.


