Computer Says Maybe

Alix Dunn
Mar 21, 2025 • 1h 1min

Regulating Privacy in an AI Era w/ Carly Kind

This week Alix is speaking with her long-time friend and collaborator Carly Kind, who is now the Privacy Commissioner of Australia. Here’s something you may be embarrassed to ask: what does a privacy commissioner even do? We got you… Alix and Carly will discuss how privacy regs bump up against current trends in AI, how to incentivise compliance, and the limits of Australian privacy laws.

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Carly Kind commenced as Australia’s Privacy Commissioner in February 2024 for a 5-year term. As Privacy Commissioner, she regulates the handling of personal information by entities covered by the Australian Privacy Act 1988 and seeks to influence the development of legislation and advance privacy protections for Australians. Ms Kind joined from the UK-based Ada Lovelace Institute, where she was the inaugural director. As a human rights lawyer and leading authority on the intersection of technology policy and human rights, she has advised industry, government and non-profit organisations on digital rights, artificial intelligence, privacy and data protection, and corporate accountability in the technology sphere.
Mar 14, 2025 • 53min

Dogwhistles: Networked Transphobia Online

This week producer Georgia joins Alix to discuss something huge that we’ve yet to go deep on: the prevalence of trans misogyny online. This episode is jam-packed with four amazing guests to guide us through this rough terrain:

Shivani Dave is a journalist and commentator who uses social media for their career and income. They share their experiences with receiving hate online, and having to balance posting against hits to their mental health.

Alice Hunsberger is a trust & safety professional who’s worked at all levels of content moderation. She explains the technical complexities and limitations of moderating online spaces.

Jenni Olson is head of social media safety at GLAAD, and discusses the lack of transparency and care around platform content policies, which allows hateful dog whistles to proliferate.

Dr Emily Cousens is a professor at Northeastern who provides important context on the history of trans misogyny in the UK.

Further reading & resources:
A Short History of Trans Misogyny by Jules Gill-Peterson
Debunking the Cass Review by Gideon MK
GLAAD Social Media Safety Program
Meta’s Anti-LGBT Makeover by Jenni Olson
Rapid Onset Gender Dysphoria by Maintenance Phase: parts ONE and TWO
T&S Insider by Alice Hunsberger

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Shivani Dave (they/them) is a political commentator and journalist whose work focuses on human rights, science and technology. Shiv is one of the organisers of the London Dyke March and a regular collaborator with organisations including ACT UP London, Queer Night Pride, local TRA, London Trans+ Pride and other more formal structures (THT, AKT, Trans+ History Week, LGBT+ History Month, NHS, The People). They have written for outlets including The Guardian, BBC News, and Metro. They have appeared on Good Morning Britain, Sky News, and Jeremy Vine on 5, among others. Shiv is driven by a passion for sharing the stories of marginalised and oppressed people around the world.

Alice Goguen Hunsberger is a Trust & Safety leader with 20+ years of experience in content moderation, CX, and building safer online communities. She heads Trust & Safety at Musubi Labs, an AI company specializing in T&S services. Alice got her start in 2002, running a community forum and developing its first moderation guidelines. She later led T&S and CX at OkCupid, helped guide Grindr through its IPO as VP of CX & T&S, and drove ethical outsourcing strategies as VP of T&S at PartnerHero.

Jenni Olson (she/her/TBD) is Senior Director of the Social Media Safety Program at GLAAD, the national LGBTQ media advocacy organization. A prominent voice in the field of tech accountability, Jenni leads GLAAD’s work to hold tech companies and social media platforms accountable, and to secure safe online spaces for LGBTQ people. The GLAAD Social Media Safety Program researches, monitors, and reports on a variety of issues facing LGBTQ social media users. GLAAD’s annual Social Media Safety Index (SMSI) report evaluates the major social media platforms on LGBTQ safety, privacy, and expression. Olson has worked in LGBTQ media and tech for decades and is best known as co-founder of PlanetOut.com, the first major LGBTQ community website, created by a small team of tech pioneers in 1996.

Dr Emily Cousens (they/them) is Assistant Professor of Politics and International Relations at Northeastern University, London and the UK lead for the Digital Transgender Archive. They are the author of Trans Feminist Epistemologies in the US Second Wave, published by Palgrave in 2023, and their expertise is in transfeminist philosophy and history.
Mar 7, 2025 • 48min

VCs Are World Eaters w/ Catherine Bracy

This week Alix interviewed Catherine Bracy on her book World Eaters: How Venture Capital is Cannibalizing the Economy. Support Catherine’s work and buy it NOW.

Venture capital wasn’t always how it is today. But now it’s a driver of inequality, political and economic instability, and insufferable personalities. How did we get here and what might come next?

In this conversation Catherine outlines her views on our current political moment and the role of VC in it. We’ve all got feelings about VCs, but in her book and in this conversation she forensically picks apart how venture capital works, why it doesn’t really work, and why that’s a problem for all of us.

Further reading & resources:
Buy Catherine’s book
TechEquity Collaborative

Catherine Bracy is the Founder and CEO of TechEquity, an organization doing research and advocacy on issues at the intersection of tech and economic equity, to ensure the tech industry’s products and practices create opportunity instead of inequality. She is also the author of the forthcoming book World Eaters: How Venture Capital is Cannibalizing the Economy (Dutton, March 2025).
Feb 28, 2025 • 1h 3min

Power Over Precision w/ Jenny Reardon

Alix’s conversation this week is with Jenny Reardon, who shares with us the history of genomics — and the absolutely mind-melting parallels it has with the trajectory of the AI industry.

Jenny describes genomics as the industrialisation of genetics; it’s not just about understanding the genetic properties of humans, but mapping out every last inch of their genetic information so that it’s machine readable and scalable and — does this remind you of anything yet?

There are a disturbing number of parallels between AI and genomics: both have roots in military applications; both fields have been pumped up with money and compute; and both, of course, have huge conceptual overlaps with race science.

Jenny Reardon is a Professor of Sociology and the Founding Director of the Science and Justice Research Center at the University of California, Santa Cruz. Her research draws into focus questions about identity, justice and democracy that are often silently embedded in scientific ideas and practices. She is the author of Race to the Finish: Identity and Governance in an Age of Genomics (Princeton University Press) and, most recently, The Postgenomic Condition: Ethics, Justice, Knowledge After the Genome (University of Chicago Press).
Feb 21, 2025 • 37min

The Taiwan Bottleneck w/ Brian Chen

Do you ever wonder how semiconductors (AKA chips) get made? Or why most of them are made in Taiwan? Or what this means for geopolitics?

Luckily, this is a podcast for nerds like you. Alix was joined this week by Brian Chen from Data & Society, who systematically explains the process of advanced chip manufacture, how it’s thoroughly entangled in US economic policy, and how Taiwan’s place as the main artery for chips is the product of deep colonial infrastructures.

Brian J. Chen is the policy director of Data & Society, leading the organization’s work to shape tech policy. With a background in movement lawyering and legislative and regulatory advocacy, he has worked extensively on issues of economic justice, political economy, and tech governance. Previously, Brian led campaigns to strengthen the labor and employment rights of digital platform workers and other workers in precarious industries. Before that, he led programs to promote democratic accountability in policing, including community oversight over the adoption and use of police technologies.

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**
Feb 14, 2025 • 56min

AI Safety’s Spiral of Urgency w/ Shazeda Ahmed

Shazeda Ahmed, a Chancellor’s Postdoctoral Fellow at UCLA, dives into AI safety’s geopolitical landscape, particularly the U.S.-China relationship. She critiques the urgency surrounding AI safety and reveals how it is often fueled by anti-China sentiment. The discussion covers the implications of surveillance technologies, the complexities of AI ethics, and the intersection of corporate interests with safety efforts. Ahmed also highlights the historical influence of eugenics in shaping current AI policies, calling for more nuanced conversations that include marginalized perspectives.
Feb 12, 2025 • 47min

Live Show: Paris Post-Mortem

Kapow! We just did our first ever LIVE SHOW. We barely had time to let the mics cool down before a bunch of you requested to have the recording on our pod feed, so here we are.

ICYMI: this is a recording from the live show that we did in Paris, right after the AI Action Summit. Alix sat down to have a candid conversation about the summit, and pontificate on what people might have meant when they kept saying ‘public interest AI’ over and over. She was joined by four of the best women in AI politics:

Astha Kapoor, Co-Founder of the Aapti Institute
Amba Kak, Executive Director of the AI Now Institute
Abeba Birhane, Founder & Principal Investigator of the Artificial Intelligence Accountability Lab (AIAL)
Nabiha Syed, Executive Director of Mozilla

If audio is not enough for you, go ahead and watch the show on YouTube.

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Astha Kapoor is the Co-founder of Aapti Institute, a Bangalore-based research firm that works on the intersection of technology and society. She has 15 years of public policy and strategy consulting experience, with a focus on the use of technology for welfare. Astha works on participative governance of data and digital public infrastructure. She’s a member of the World Economic Forum Global Future Council on data equity (2023–24) and a visiting fellow at the Ostrom Workshop (Indiana University). She was also a member of the Think20 taskforce on digital public infrastructure during India’s and Brazil’s G20 presidencies and is currently on the board of the Global Partnership for Sustainable Data.

Amba Kak has spent the last fifteen years designing and advocating for technology policy in the public interest, across government, industry, and civil society roles – and in many parts of the world. Amba brings this experience to her current role co-directing AI Now, a New York-based research institute where she leads on advancing diagnosis and actionable policy to tackle concerns with artificial intelligence and concentrated power. She has served as Senior Advisor on AI to the Federal Trade Commission and was recognized as one of TIME’s 100 Most Influential People in AI in 2024.

Dr Abeba Birhane founded and leads the TCD AI Accountability Lab (AIAL). Dr Birhane is currently a Research Fellow at the School of Computer Science and Statistics in Trinity College Dublin. Her research focuses on AI accountability, with a particular focus on audits of AI models and training datasets – work for which she was featured in Wired UK and in TIME on the TIME100 Most Influential People in AI list in 2023. Dr Birhane also served on the United Nations Secretary-General’s AI Advisory Body and currently serves on the AI Advisory Council in Ireland.

Nabiha Syed is the Executive Director of the Mozilla Foundation, the global nonprofit that does everything from championing trustworthy AI to advocating for a more open, equitable internet. Prior to joining Mozilla, she was CEO of The Markup, an award-winning journalism non-profit that challenges technology to serve the public good. Before launching The Markup in 2020, Nabiha spent a decade as an acclaimed media lawyer focused on the intersection of frontier technology and newsgathering, including advising on publication issues with the Snowden revelations and the Steele Dossier, access litigation around police disciplinary records, and privacy and free speech issues globally. In 2023, Nabiha was awarded the NAACP/Archewell Digital Civil Rights Award for her work.
Feb 7, 2025 • 1h 4min

Defying Datafication w/ Dr Abeba Birhane (PLUS: Paris AI Action Summit)

The Paris AI Action Summit is just around the corner! If you’re not going to be there, and you wish you were — we got you.

We are streaming next week’s podcast LIVE from Paris on YouTube — register here 🎙️

On Tuesday, February 11th, at 6:30pm Paris time / 12:30pm EST, we’ll be recording our first-ever LIVE podcast episode. After two days at the French AI Action Summit, Alix will sit down with four of the best women in AI politics to break down the power and politics of the Summit. It’s our Paris Post-Mortem — and we’re live-streaming the whole conversation.

We’ll hear from:
Astha Kapoor, Co-Founder of the Aapti Institute
Amba Kak, Executive Director of the AI Now Institute
Abeba Birhane, Founder & Principal Investigator of the Artificial Intelligence Accountability Lab (AIAL)
Nabiha Syed, Executive Director of Mozilla

This is our first-ever live-streamed podcast, and we’d love a great community turnout. Join the stream on Tuesday and share it with anyone else who wants the hot-off-the-press review of what happens in Paris.

And today’s episode is abundant with treats to prime you for the summit. Alix checks in with Martin Tisné, the special envoy to the Public Interest AI track, to ask him how he feels about the upcoming summit and what he hopes it will achieve.

We also hear from Michelle Thorne of the Green Web Foundation about a joint statement on the environmental impacts of AI that she’s hoping can focus the energy of the summit towards planetary limits and the decarbonisation of AI. Learn about why and how she put this together, and how she’s hoping to start reasonable conversations about how AI is a complete and utter energy vampire.

Then we have Dr Abeba Birhane — who will also be at our live show next week — to share her experiences launching the AI Accountability Lab at Trinity College Dublin. Abeba’s work pushes to actually research AI systems before we make claims about them. In a world of industry marketing spin, Abeba is a voice of reason. As a cognitive scientist who studies people, she also cautions against the impossible and tantalising idea that we can somehow datafy human complexity.

Further reading & resources:
AI auditing: The Broken Bus on the Road to AI Accountability by Abeba Birhane, Ryan Steed, Victor Ojewale, Briana Vecchione, Inioluwa Deborah Raji
AI Accountability Lab
Press release outlining the Lab’s launch last year — Trinity College
The Artificial Intelligence Action Summit
Within Bounds: Limiting AI’s Environmental Impact — led by Michelle Thorne from the Green Web Foundation
Our YouTube Channel

Dr Abeba Birhane founded and leads the TCD AI Accountability Lab (AIAL). Dr Birhane is currently a Research Fellow at the School of Computer Science and Statistics in Trinity College Dublin. Her research focuses on AI accountability, with a particular focus on audits of AI models and training datasets – work for which she was featured in Wired UK and in TIME on the TIME100 Most Influential People in AI list in 2023. Dr Birhane also served on the United Nations Secretary-General’s AI Advisory Body and currently serves on the AI Advisory Council in Ireland.

Martin Tisné is Thematic Envoy to the AI Action Summit, in charge of all deliverables related to Public Interest AI. He also leads the AI Collaborative, an initiative of The Omidyar Group created to help regulate artificial intelligence based on democratic values and principles and ensure the public has a voice in that regulation. He founded the Open Government Partnership (OGP) alongside the Obama White House and helped OGP grow to a 70+ country initiative. He also initiated the International Open Data Charter, the G7 Open Data Charter, and the G20’s commitment to open data principles.

Michelle Thorne (@thornet) is working towards a fossil-free internet as the Director of Strategy at the Green Web Foundation. She’s a co-initiator of the Green Screen Coalition for digital rights and climate justice and a visiting professor at Northumbria University. Michelle publishes Branch, an online magazine written by and for people who dream about a sustainable internet, which received the Ars Electronica Award for Digital Humanities in 2021.
Jan 31, 2025 • 45min

DEI Season Finale: Part Two

This week Alix continues her conversation with Hanna McCloskey and Rubie Clarke from Fearless Futures, and we take a whistle-stop tour of the past five years. We start in 2020, with the disingenuous but huge embrace of DEI work by tech companies, and end in 2025, when those same companies are part of massive movements actively campaigning against it.

The pair share what it was like running a DEI consultancy in the months and years following the murder of George Floyd — when DEI was suddenly on the agenda for a lot of organisations. The performative and ineffective methods that DEI is famous for (endless canapé receptions!) have also given the inevitable backlash easy pickings for mockery and vilification.

The news is happening so fast, but these DEI episodes can hopefully help listeners better understand the backlash, not just to DEI, but to any attempts to correct systemic inequity in society.

Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!

Further reading & resources:
Fearless Futures
DEI Disrupted: The Blueprint for DEI Worth Doing
Combahee River Collective

Rubie Eílis Clarke (she/her) is Senior Director of Consultancy at Fearless Futures. Rubie is of Jewish and Irish heritage and is based in her home town of London. As Senior Director of Consultancy at Fearless Futures, Rubie supports ambitious organisations to diagnose inequity in their ecosystems and design, implement and evaluate innovative anti-oppression solutions. Her expertise lies in critical social theory and research, policy analysis and organisational change strategy. She holds a BA in Sociology and Anthropology from Goldsmiths, University of London and an MA in Global Political Economy from the University of Sussex, with a focus on social and economic policy, race critical theory, decoloniality and intersectional feminism. Rubie is also an expert facilitator who is skilled at leaning into nuance, complexity and discomfort with curiosity and compassion. She is passionate about facilitating collaborative learning journeys that build deep understanding of the root causes of oppression and unlock innovative and meaningful ways to disrupt and divest in service, ultimately, of collective liberation.

Hanna Naima McCloskey (she/her) is Algerian British and the Founder & CEO of Fearless Futures. Before founding Fearless Futures she worked for the UN, NGOs and the Royal Bank of Scotland, across communications, research and finance roles, and has lived, studied and worked in Israel-Palestine, Italy, the USA, Sudan, Syria and the UK. She has a BA in English from the University of Cambridge and an MA in International Relations from the Johns Hopkins School of Advanced International Studies, with a specialism in Conflict Management. Hanna is passionate, compassionate and challenging as an educator, and combines this with rigour and creativity in consultancy. She brings nuanced and complex ideas in incisive and engaging ways to all she supports, always with a commitment to equitable transformation. Hanna is also a qualified ABM bodyfeeding peer supporter, committed to enabling all parents to meet their body feeding goals.
Jan 24, 2025 • 47min

DEI Season Finale: Part One

DEI is a nebulous field — if you’re not in it, it can be hard to know which tactics and methods are reasonable and effective… and which are a total waste of time. Or worse: which are actively harmful.

In this two-parter Alix is joined by Hanna McCloskey and Rubie Clarke from Fearless Futures. In this episode they share what DEI is and, crucially, what it isn’t.

Listen to understand why unconscious bias training is a waste of time, and what meaningful anti-oppression work actually looks like — especially when attempting to embed these principles into digital products that are deployed globally.

**Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

Further reading & resources:
Fearless Futures
DEI Disrupted: The Blueprint for DEI Worth Doing
Combahee River Collective

Rubie Eílis Clarke (she/her) is Senior Director of Consultancy at Fearless Futures. Rubie is of Jewish and Irish heritage and is based in her home town of London. As Senior Director of Consultancy at Fearless Futures, Rubie supports ambitious organisations to diagnose inequity in their ecosystems and design, implement and evaluate innovative anti-oppression solutions. Her expertise lies in critical social theory and research, policy analysis and organisational change strategy. She holds a BA in Sociology and Anthropology from Goldsmiths, University of London and an MA in Global Political Economy from the University of Sussex, with a focus on social and economic policy, race critical theory, decoloniality and intersectional feminism. Rubie is also an expert facilitator who is skilled at leaning into nuance, complexity and discomfort with curiosity and compassion. She is passionate about facilitating collaborative learning journeys that build deep understanding of the root causes of oppression and unlock innovative and meaningful ways to disrupt and divest in service, ultimately, of collective liberation.

Hanna Naima McCloskey (she/her) is Algerian British and the Founder & CEO of Fearless Futures. Before founding Fearless Futures she worked for the UN, NGOs and the Royal Bank of Scotland, across communications, research and finance roles, and has lived, studied and worked in Israel-Palestine, Italy, the USA, Sudan, Syria and the UK. She has a BA in English from the University of Cambridge and an MA in International Relations from the Johns Hopkins School of Advanced International Studies, with a specialism in Conflict Management. Hanna is passionate, compassionate and challenging as an educator, and combines this with rigour and creativity in consultancy. She brings nuanced and complex ideas in incisive and engaging ways to all she supports, always with a commitment to equitable transformation. Hanna is also a qualified ABM bodyfeeding peer supporter, committed to enabling all parents to meet their body feeding goals.
