
Computer Says Maybe

Latest episodes

Apr 4, 2025 • 50min

Technology Nationalism in India w/ Divij Joshi

Amidst the scrambling of geopolitics, there is increasing conversation and momentum around the concept of tech sovereignty. It basically means that countries should build their own technology rather than rely on Silicon Valley. India Stack! Euro Stack! Everyone wants a stack.

In this episode we explore India’s work over the last 20 years to build ‘digital public infrastructure’, or DPI. They went YOLO on a digital ID system in a country of 1 billion people — with very mixed results. Did this ‘public infrastructure’ lead to locally-owned marketplaces? Nope! Has the fact that their PM is a Hindu nationalist limited India’s ability to tout this work on the global stage? Also nope! It’s actually allowed the government to techwash its authoritarianism.

Lots to unpack here, and fortunately we’re joined by Divij Joshi, a researcher focused on the political economy of DPI, to explore India’s attempts at digital ID and government-as-a-platform.

Further reading & resources:
- Government as a Platform by Tim O’Reilly
- The Global DPI Agenda
- Recovering the ‘Public’ in India’s Digital Public Infrastructure Strategy by IT for Change
- Aadhaar’s mixing of public risk and private profit by Aria Thaker
- Interrogating India’s quest for data sovereignty by Divij Joshi

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Divij is a Research Fellow at ODI Global and a Doctoral Researcher at UCL, where his research and advocacy focus on understanding the political economy and governance of emerging technologies to articulate a vision for a fair and just information society. His thesis examines how the emergence of ‘Digital Public Infrastructures’, as platform- and data-based information systems, is shaping notions of economic development and political subjectivity in India and globally.
Mar 28, 2025 • 43min

AI Assistant or AI Boss? w/ Data & Society

Two years ago, we were told that ‘prompt engineer’ would be a real job — well, it’s not. Is generative AI actually going to replace and transform human labour, or is this just another shallow marketing narrative?

This week Alix speaks with Aiha Nguyen and Alexandra Mateescu, who recently authored Generative AI and Labor: Power, Hype, and Value at Work. They discuss how automation is now being used as a threat against workers, and how certain types of labour are being devalued by AI — especially (shocking) traditionally feminised work, such as caregiving.

Further reading:
- Generative AI and Labor: Power, Hype, and Value at Work by Aiha Nguyen and Alexandra Mateescu
- Blood in the Machine by Brian Merchant

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Aiha Nguyen is the Program Director for the Labor Futures Initiative at Data & Society, where she guides research and engagement. She brings a practitioner’s perspective to this role, having worked for over a decade in community and worker advocacy and organizing. Her research interests lie at the intersection of labor, technology, and urban studies. She is the author of The Constant Boss: Work Under Digital Surveillance and co-author of ‘At the Digital Doorstep: How Customers Use Doorbell Cameras to Manage Delivery Workers’ and ‘Generative AI and Labor: Power, Hype and Value at Work’.

Alexandra Mateescu is a researcher on the Labor Futures team at the Data & Society Research Institute, where she investigates the impacts of digital surveillance, AI, and algorithmic power within the workplace. As an ethnographer, her past work has led her to explore the role of worker data and its commodification, the intersections of care labor and digital platforms, automation within service industries, and generative AI in creative industries. She is also a 2024–2025 Fellow at the Siegel Family Endowment.
Mar 21, 2025 • 1h 1min

Regulating Privacy in an AI Era w/ Carly Kind

This week Alix is speaking with her long-time friend and collaborator Carly Kind, who is now the Privacy Commissioner of Australia. Here’s something you may be embarrassed to ask: what does a privacy commissioner even do? We got you…

Alix and Carly discuss how privacy regulations bump up against current trends in AI, how to incentivise compliance, and the limits of Australian privacy laws.

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Carly Kind commenced as Australia’s Privacy Commissioner in February 2024 for a five-year term. As Privacy Commissioner, she regulates the handling of personal information by entities covered by the Australian Privacy Act 1988 and seeks to influence the development of legislation and advance privacy protections for Australians. Ms Kind joined from the UK-based Ada Lovelace Institute, where she was the inaugural director. As a human rights lawyer and leading authority on the intersection of technology policy and human rights, she has advised industry, government and non-profit organisations on digital rights, artificial intelligence, privacy and data protection, and corporate accountability in the technology sphere.
Mar 14, 2025 • 53min

Dogwhistles: Networked Transphobia Online

This week producer Georgia joins Alix to discuss something huge that we’ve yet to go deep on: the prevalence of trans misogyny online. This episode is jam-packed with four amazing guests to guide us through this rough terrain:

- Shivani Dave is a journalist and commentator who uses social media for their career and income. They share their experiences of receiving hate online, and of having to balance posting against hits to their mental health.
- Alice Hunsberger is a trust & safety professional who’s worked at all levels of content moderation. She explains the technical complexities and limitations of moderating online spaces.
- Jenni Olson is head of social media safety at GLAAD, and discusses the lack of transparency and care around platform content policies, which allows hateful dog whistles to proliferate.
- Dr Emily Cousens, a professor at Northeastern, provides important context on the history of trans misogyny in the UK.

Further reading & resources:
- A Short History of Trans Misogyny by Jules Gill-Peterson
- Debunking the Cass Review by Gideon MK
- GLAAD Social Media Safety Program
- Meta’s Anti-LGBT Makeover by Jenni Olson
- Rapid Onset Gender Dysphoria by Maintenance Phase: parts ONE and TWO
- T&S Insider by Alice Hunsberger

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Shivani Dave (they/them) is a political commentator and journalist whose work focuses on human rights, science and technology. Shiv is one of the organisers of the London Dyke March and a regular collaborator with organisations including ACT UP London, Queer Night Pride, local TRA, London Trans+ Pride, and other more formal structures (THT, AKT, Trans+ History Week, LGBT+ History Month, NHS, The People). They have written for outlets including The Guardian, BBC News, and Metro. They have appeared on Good Morning Britain, Sky News, and Jeremy Vine on 5, among others. Shiv is driven by a passion for sharing the stories of marginalised and oppressed people around the world.

Alice Goguen Hunsberger is a Trust & Safety leader with 20+ years of experience in content moderation, CX, and building safer online communities. She heads Trust & Safety at Musubi Labs, an AI company specializing in T&S services. Alice got her start in 2002, running a community forum and developing its first moderation guidelines. She later led T&S and CX at OkCupid, helped guide Grindr through its IPO as VP of CX & T&S, and drove ethical outsourcing strategies as VP of T&S at PartnerHero.

Jenni Olson (she/her/TBD) is Senior Director of the Social Media Safety Program at the national LGBTQ media advocacy organization GLAAD. A prominent voice in the field of tech accountability, Jenni leads GLAAD’s work to hold tech companies and social media platforms accountable, and to secure safe online spaces for LGBTQ people. The GLAAD Social Media Safety Program researches, monitors, and reports on a variety of issues facing LGBTQ social media users. GLAAD’s annual Social Media Safety Index (SMSI) report evaluates the major social media platforms on LGBTQ safety, privacy, and expression. Olson has worked in LGBTQ media and tech for decades and is best known as co-founder of PlanetOut.com, the first major LGBTQ community website, created by a small team of tech pioneers in 1996.

Dr Emily Cousens (they/them) is Assistant Professor of Politics and International Relations at Northeastern University, London, and the UK lead for the Digital Transgender Archive. They are the author of Trans Feminist Epistemologies in the US Second Wave, published by Palgrave in 2023, and their expertise is in transfeminist philosophy and history.
Mar 7, 2025 • 48min

VCs Are World Eaters w/ Catherine Bracy

This week Alix interviewed Catherine Bracy about her book World Eaters: How Venture Capital is Cannibalizing the Economy. Support Catherine’s work and buy it NOW.

Venture capital wasn’t always how it is today. But now it’s a driver of inequality, political and economic instability, and insufferable personalities. How did we get here, and what might come next?

In this conversation Catherine outlines her views on our current political moment and the role of VC in it. We’ve all got feelings about VCs, but in her book and in this conversation she forensically picks apart how venture capital works, why it doesn’t really work, and why that’s a problem for all of us.

Further reading & resources:
- Buy Catherine’s book
- TechEquity Collaborative

Catherine Bracy is the Founder and CEO of TechEquity, an organization doing research and advocacy on issues at the intersection of tech and economic equity, to ensure the tech industry’s products and practices create opportunity instead of inequality. She is also the author of the forthcoming book World Eaters: How Venture Capital is Cannibalizing the Economy (Dutton, March 2025).
Feb 28, 2025 • 1h 3min

Power Over Precision w/ Jenny Reardon

Alix’s conversation this week is with Jenny Reardon, who shares with us the history of genomics — and the absolutely mind-melting parallels it has with the trajectory of the AI industry.

Jenny describes genomics as the industrialisation of genetics: it’s not just about understanding the genetic properties of humans, but about mapping out every last inch of their genetic information so that it’s machine-readable and scalable and — does this remind you of anything yet?

There are a disturbing number of parallels between AI and genomics: both have roots in military applications; both fields have been pumped up with money and compute; and both, of course, have huge conceptual overlaps with race science.

Jenny Reardon is a Professor of Sociology and the Founding Director of the Science and Justice Research Center at the University of California, Santa Cruz. Her research draws into focus questions about identity, justice and democracy that are often silently embedded in scientific ideas and practices. She is the author of Race to the Finish: Identity and Governance in an Age of Genomics (Princeton University Press) and, most recently, The Postgenomic Condition: Ethics, Justice, Knowledge After the Genome (University of Chicago Press).
Feb 21, 2025 • 37min

The Taiwan Bottleneck w/ Brian Chen

Do you ever wonder how semiconductors (AKA chips) get made? Or why most of them are made in Taiwan? Or what this means for geopolitics?

Luckily, this is a podcast for nerds like you. Alix was joined this week by Brian Chen from Data & Society, who systematically explains the process of advanced chip manufacturing, how it’s thoroughly entangled in US economic policy, and how Taiwan’s place as the main artery for chips is the product of deep colonial infrastructures.

Brian J. Chen is the policy director of Data & Society, leading the organization’s work to shape tech policy. With a background in movement lawyering and legislative and regulatory advocacy, he has worked extensively on issues of economic justice, political economy, and tech governance. Previously, Brian led campaigns to strengthen the labor and employment rights of digital platform workers and other workers in precarious industries. Before that, he led programs to promote democratic accountability in policing, including community oversight over the adoption and use of police technologies.

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!
Feb 14, 2025 • 56min

AI Safety’s Spiral of Urgency w/ Shazeda Ahmed

Shazeda Ahmed, a Chancellor’s Postdoctoral fellow at UCLA, dives into AI safety's geopolitical landscape, particularly the U.S.-China relationship. She critiques the urgency surrounding AI safety and reveals how it is often fueled by anti-China sentiment. The discussion covers the implications of surveillance technologies, the complexities of AI ethics, and the intersection of corporate interests with safety efforts. Ahmed also highlights the historical influences of eugenics in shaping current AI policies, urging for more nuanced conversations to include marginalized perspectives.
Feb 12, 2025 • 47min

Live Show: Paris Post-Mortem

Kapow! We just did our first ever LIVE SHOW. We barely had time to let the mics cool down before a bunch of you requested to have the recording on our pod feed, so here we are.

ICYMI: this is a recording from the live show that we did in Paris, right after the AI Action Summit. Alix sat down to have a candid conversation about the summit, and to pontificate on what people might have meant when they kept saying ‘public interest AI’ over and over. She was joined by four of the best women in AI politics:

- Astha Kapoor, Co-Founder of the Aapti Institute
- Amba Kak, Executive Director of the AI Now Institute
- Abeba Birhane, Founder & Principal Investigator of the Artificial Intelligence Accountability Lab (AIAL)
- Nabiha Syed, Executive Director of Mozilla

If audio is not enough for you, go ahead and watch the show on YouTube.

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Astha Kapoor is the Co-founder of Aapti Institute, a Bangalore-based research firm that works at the intersection of technology and society. She has 15 years of public policy and strategy consulting experience, with a focus on the use of technology for welfare. Astha works on participative governance of data and digital public infrastructure. She is a member of the World Economic Forum Global Future Council on data equity (2023–24) and a visiting fellow at the Ostrom Workshop (Indiana University). She was also a member of the Think20 taskforce on digital public infrastructure during India’s and Brazil’s G20 presidencies, and is currently on the board of the Global Partnership for Sustainable Data.

Amba Kak has spent the last fifteen years designing and advocating for technology policy in the public interest, across government, industry, and civil society roles, and in many parts of the world. Amba brings this experience to her current role co-directing AI Now, a New York-based research institute where she leads on advancing diagnosis and actionable policy to tackle concerns with artificial intelligence and concentrated power. She has served as Senior Advisor on AI to the Federal Trade Commission and was recognized as one of TIME’s 100 Most Influential People in AI in 2024.

Dr Abeba Birhane founded and leads the TCD AI Accountability Lab (AIAL). Dr Birhane is currently a Research Fellow at the School of Computer Science and Statistics at Trinity College Dublin. Her research focuses on AI accountability, with a particular focus on audits of AI models and training datasets — work for which she was featured in Wired UK and in TIME on the TIME100 Most Influential People in AI list in 2023. Dr Birhane also served on the United Nations Secretary-General’s AI Advisory Body and currently serves on the AI Advisory Council in Ireland.

Nabiha Syed is the Executive Director of the Mozilla Foundation, the global nonprofit that does everything from championing trustworthy AI to advocating for a more open, equitable internet. Prior to joining Mozilla, she was CEO of The Markup, an award-winning journalism non-profit that challenges technology to serve the public good. Before launching The Markup in 2020, Nabiha spent a decade as an acclaimed media lawyer focused on the intersection of frontier technology and newsgathering, including advising on publication issues with the Snowden revelations and the Steele Dossier, access litigation around police disciplinary records, and privacy and free speech issues globally. In 2023, Nabiha was awarded the NAACP/Archewell Digital Civil Rights Award for her work.
Feb 7, 2025 • 1h 4min

Defying Datafication w/ Dr Abeba Birhane (PLUS: Paris AI Action Summit)

The Paris AI Action Summit is just around the corner! If you’re not going to be there, and you wish you were — we got you.

We are streaming next week’s podcast LIVE from Paris on YouTube — register here 🎙️

On Tuesday, February 11th, at 6:30pm Paris time / 12:30pm EST, we’ll be recording our first-ever LIVE podcast episode. After two days at the French AI Action Summit, Alix will sit down with four of the best women in AI politics to break down the power and politics of the Summit. It’s our Paris Post-Mortem — and we’re live-streaming the whole conversation.

We’ll hear from:
- Astha Kapoor, Co-Founder of the Aapti Institute
- Amba Kak, Executive Director of the AI Now Institute
- Abeba Birhane, Founder & Principal Investigator of the Artificial Intelligence Accountability Lab (AIAL)
- Nabiha Syed, Executive Director of Mozilla

This is our first-ever live-streamed podcast, and we’d love a great community turnout. Join the stream on Tuesday and share it with anyone else who wants a hot-off-the-press review of what happens in Paris.

And today’s episode is abundant with treats to prime you for the summit: Alix checks in with Martin Tisné, who is the special envoy to the Public Interest AI track, to ask him how he feels about the upcoming summit and what he hopes it will achieve.

We also hear from Michelle Thorne, of the Green Web Foundation, about a joint statement on the environmental impacts of AI that she’s hoping can focus the energy of the summit towards planetary limits and the decarbonisation of AI. Learn about why and how she put this together, and how she’s hoping to start reasonable conversations about how AI is a complete and utter energy vampire.

Then we have Dr Abeba Birhane — who will also be at our live show next week — to share her experiences launching the AI Accountability Lab at Trinity College Dublin. Abeba’s work pushes to actually research AI systems before we make claims about them. In a world of industry marketing spin, Abeba is a voice of reason. As a cognitive scientist who studies people, she also cautions against the impossible and tantalising idea that we can somehow datafy human complexity.

Further reading & resources:
- AI auditing: The Broken Bus on the Road to AI Accountability by Abeba Birhane, Ryan Steed, Victor Ojewale, Briana Vecchione, Inioluwa Deborah Raji
- AI Accountability Lab
- Press release outlining the Lab’s launch last year — Trinity College
- The Artificial Intelligence Action Summit
- Within Bounds: Limiting AI’s Environmental Impact — led by Michelle Thorne from the Green Web Foundation
- Our YouTube channel

Dr Abeba Birhane founded and leads the TCD AI Accountability Lab (AIAL). Dr Birhane is currently a Research Fellow at the School of Computer Science and Statistics at Trinity College Dublin. Her research focuses on AI accountability, with a particular focus on audits of AI models and training datasets — work for which she was featured in Wired UK and in TIME on the TIME100 Most Influential People in AI list in 2023. Dr Birhane also served on the United Nations Secretary-General’s AI Advisory Body and currently serves on the AI Advisory Council in Ireland.

Martin Tisné is Thematic Envoy to the AI Action Summit, in charge of all deliverables related to Public Interest AI. He also leads the AI Collaborative, an initiative of The Omidyar Group created to help regulate artificial intelligence based on democratic values and principles, and to ensure the public has a voice in that regulation. He founded the Open Government Partnership (OGP) alongside the Obama White House and helped OGP grow into a 70+ country initiative. He also initiated the International Open Data Charter, the G7 Open Data Charter, and the G20’s commitment to open data principles.

Michelle Thorne (@thornet) is working towards a fossil-free internet as the Director of Strategy at the Green Web Foundation. She’s a co-initiator of the Green Screen Coalition for digital rights and climate justice and a visiting professor at Northumbria University. Michelle publishes Branch, an online magazine written by and for people who dream about a sustainable internet, which received the Ars Electronica Award for Digital Humanities in 2021.
