
Computer Says Maybe

Latest episodes

Feb 12, 2025 • 47min

Live Show: Paris Post-Mortem

Kapow! We just did our first ever LIVE SHOW. We barely had time to let the mics cool down before a bunch of you requested the recording on our pod feed, so here we are.

ICYMI: this is a recording of the live show we did in Paris, right after the AI Action Summit. Alix sat down for a candid conversation about the summit, and pontificated on what people might have meant when they kept saying ‘public interest AI’ over and over. She was joined by four of the best women in AI politics:

- Astha Kapoor, Co-Founder of the Aapti Institute
- Amba Kak, Executive Director of the AI Now Institute
- Abeba Birhane, Founder & Principal Investigator of the Artificial Intelligence Accountability Lab (AIAL)
- Nabiha Syed, Executive Director of Mozilla

If audio is not enough for you, go ahead and watch the show on YouTube.

Subscribe to our newsletter to get more than just a podcast — we run events and do other work that you will definitely be interested in!

Astha Kapoor is the Co-founder of Aapti Institute, a Bangalore-based research firm that works at the intersection of technology and society. She has 15 years of public policy and strategy consulting experience, with a focus on the use of technology for welfare. Astha works on participative governance of data and digital public infrastructure. She is a member of the World Economic Forum Global Future Council on data equity (2023-24) and a visiting fellow at the Ostrom Workshop (Indiana University). She was also a member of the Think20 taskforce on digital public infrastructure during India's and Brazil's G20 presidencies, and is currently on the board of the Global Partnership for Sustainable Data.

Amba Kak has spent the last fifteen years designing and advocating for technology policy in the public interest, across government, industry, and civil society roles – and in many parts of the world. Amba brings this experience to her current role co-directing AI Now, a New York-based research institute, where she leads on advancing diagnosis and actionable policy to tackle concerns with artificial intelligence and concentrated power. She has served as Senior Advisor on AI to the Federal Trade Commission and was recognized as one of TIME’s 100 Most Influential People in AI in 2024.

Dr. Abeba Birhane founded and leads the TCD AI Accountability Lab (AIAL). Dr. Birhane is currently a Research Fellow at the School of Computer Science and Statistics in Trinity College Dublin. Her research focuses on AI accountability, with a particular focus on audits of AI models and training datasets – work for which she was featured in Wired UK and in TIME, on the TIME100 Most Influential People in AI list in 2023. Dr. Birhane also served on the United Nations Secretary-General’s AI Advisory Body and currently serves on the AI Advisory Council in Ireland.

Nabiha Syed is the Executive Director of the Mozilla Foundation, the global nonprofit that does everything from championing trustworthy AI to advocating for a more open, equitable internet. Prior to joining Mozilla, she was CEO of The Markup, an award-winning journalism non-profit that challenges technology to serve the public good. Before launching The Markup in 2020, Nabiha spent a decade as an acclaimed media lawyer focused on the intersection of frontier technology and newsgathering, including advising on publication issues with the Snowden revelations and the Steele Dossier, access litigation around police disciplinary records, and privacy and free speech issues globally. In 2023, Nabiha was awarded the NAACP/Archewell Digital Civil Rights Award for her work.
Feb 7, 2025 • 1h 4min

Defying Datafication w/ Dr Abeba Birhane (PLUS: Paris AI Action Summit)

The Paris AI Action Summit is just around the corner! If you’re not going to be there, and you wish you were — we got you.

We are streaming next week’s podcast LIVE from Paris on YouTube — register here.

🎙️ On Tuesday, February 11th, at 6:30pm Paris time / 12:30pm EST, we’ll be recording our first-ever LIVE podcast episode. After two days at the French AI Action Summit, Alix will sit down with four of the best women in AI politics to break down the power and politics of the Summit. It’s our Paris Post-Mortem — and we’re live-streaming the whole conversation.

We’ll hear from:

- Astha Kapoor, Co-Founder of the Aapti Institute
- Amba Kak, Executive Director of the AI Now Institute
- Abeba Birhane, Founder & Principal Investigator of the Artificial Intelligence Accountability Lab (AIAL)
- Nabiha Syed, Executive Director of Mozilla

This is our first-ever live-streamed podcast, and we’d love a great community turnout. Join the stream on Tuesday and share it with anyone else who wants a hot-off-the-press review of what happens in Paris.

And today’s episode is abundant with treats to prime you for the summit. Alix checks in with Martin Tisné, the special envoy to the Public Interest AI track, to ask how he feels about the upcoming summit and what he hopes it will achieve.

We also hear from Michelle Thorne of the Green Web Foundation about a joint statement on the environmental impacts of AI that she hopes can focus the energy of the summit towards planetary limits and the decarbonisation of AI. Learn why and how she put it together, and how she’s hoping to start reasonable conversations about how AI is a complete and utter energy vampire.

Then we have Dr. Abeba Birhane — who will also be at our live show next week — sharing her experiences launching the AI Accountability Lab at Trinity College Dublin. Abeba’s work pushes us to actually research AI systems before we make claims about them. In a world of industry marketing spin, Abeba is a voice of reason. As a cognitive scientist who studies people, she also cautions against the impossible and tantalising idea that we can somehow datafy human complexity.

Further reading & resources:

- AI auditing: The Broken Bus on the Road to AI Accountability — Abeba Birhane, Ryan Steed, Victor Ojewale, Briana Vecchione, Inioluwa Deborah Raji
- AI Accountability Lab
- Press release outlining the Lab’s launch last year — Trinity College
- The Artificial Intelligence Action Summit
- Within Bounds: Limiting AI’s Environmental Impact — led by Michelle Thorne from the Green Web Foundation
- Our YouTube channel

Dr. Abeba Birhane founded and leads the TCD AI Accountability Lab (AIAL). Dr. Birhane is currently a Research Fellow at the School of Computer Science and Statistics in Trinity College Dublin. Her research focuses on AI accountability, with a particular focus on audits of AI models and training datasets – work for which she was featured in Wired UK and in TIME, on the TIME100 Most Influential People in AI list in 2023. Dr. Birhane also served on the United Nations Secretary-General’s AI Advisory Body and currently serves on the AI Advisory Council in Ireland.

Martin Tisné is Thematic Envoy to the AI Action Summit, in charge of all deliverables related to Public Interest AI. He also leads the AI Collaborative, an initiative of The Omidyar Group created to help regulate artificial intelligence based on democratic values and principles and ensure the public has a voice in that regulation. He founded the Open Government Partnership (OGP) alongside the Obama White House and helped OGP grow into a 70+ country initiative. He also initiated the International Open Data Charter, the G7 Open Data Charter, and the G20’s commitment to open data principles.

Michelle Thorne (@thornet) is working towards a fossil-free internet as the Director of Strategy at the Green Web Foundation. She’s a co-initiator of the Green Screen Coalition for digital rights and climate justice and a visiting professor at Northumbria University. Michelle publishes Branch, an online magazine written by and for people who dream about a sustainable internet, which received the Ars Electronica Award for Digital Humanities in 2021.
Jan 31, 2025 • 45min

DEI Season Finale: Part Two

This week Alix continues her conversation with Hanna McCloskey and Rubie Clarke from Fearless Futures, taking a whistle-stop tour of the past five years: starting in 2020, with the disingenuous but huge embrace of DEI work by tech companies, and ending in 2025, when those same companies are part of massive movements actively campaigning against it.

The pair share what it was like running a DEI consultancy in the months and years following the murder of George Floyd — when DEI was suddenly on the agenda for a lot of organisations. The performative and ineffective methods that DEI is famous for (endless canapé receptions!) have also given the inevitable backlash easy pickings for mockery and vilification.

The news is happening fast, but these DEI episodes can hopefully help listeners better understand the backlash — not just against DEI, but against any attempt to correct systemic inequity in society.

Subscribe to our newsletter to get more than just a podcast — we run events and do other work that you will definitely be interested in!

Further reading & resources:

- Fearless Futures
- DEI Disrupted: The Blueprint for DEI Worth Doing
- Combahee River Collective

Rubie Eílis Clarke (she/her) is Senior Director of Consultancy at Fearless Futures. Rubie is of Jewish and Irish heritage and is based in her home town of London. As Senior Director of Consultancy at Fearless Futures, Rubie supports ambitious organisations to diagnose inequity in their ecosystems and design, implement and evaluate innovative anti-oppression solutions. Her expertise lies in critical social theory and research, policy analysis and organisational change strategy. She holds a B.A. in Sociology and Anthropology from Goldsmiths, University of London and an M.A. in Global Political Economy from the University of Sussex, with a focus on social and economic policy, race-critical theory, decoloniality and intersectional feminism. Rubie is also an expert facilitator who is skilled at leaning into nuance, complexity and discomfort with curiosity and compassion. She is passionate about facilitating collaborative learning journeys that build deep understanding of the root causes of oppression and unlock innovative and meaningful ways to disrupt and divest, in service, ultimately, of collective liberation.

Hanna Naima McCloskey (she/her) is Algerian British and the Founder & CEO of Fearless Futures. Before founding Fearless Futures she worked for the UN, NGOs and the Royal Bank of Scotland, across communications, research and finance roles, and has lived, studied and worked in Israel-Palestine, Italy, the USA, Sudan, Syria and the UK. She has a BA in English from the University of Cambridge and an MA in International Relations from the Johns Hopkins School of Advanced International Studies, with a specialism in Conflict Management. Hanna is passionate, compassionate and challenging as an educator, and combines this with rigour and creativity in consultancy. She brings nuanced and complex ideas in incisive and engaging ways to all she supports, always with a commitment to equitable transformation. Hanna is also a qualified ABM bodyfeeding peer supporter, committed to enabling all parents to meet their body feeding goals.
Jan 24, 2025 • 47min

DEI Season Finale: Part One

DEI is a nebulous field — if you’re not in it, it can be hard to know which tactics and methods are reasonable and effective… and which are a total waste of time. Or worse: which are actively harmful.

In this two-parter, Alix is joined by Hanna McCloskey and Rubie Clarke from Fearless Futures. In this episode they share what DEI is and, crucially, what it isn’t.

Listen to understand why unconscious bias training is a waste of time, and what meaningful anti-oppression work actually looks like — especially when attempting to embed these principles into digital products that are deployed globally.

Subscribe to our newsletter to get more than just a podcast — we run events and do other work that you will definitely be interested in!

Further reading & resources:

- Fearless Futures
- DEI Disrupted: The Blueprint for DEI Worth Doing
- Combahee River Collective

Rubie Eílis Clarke (she/her) is Senior Director of Consultancy at Fearless Futures. Rubie is of Jewish and Irish heritage and is based in her home town of London. As Senior Director of Consultancy at Fearless Futures, Rubie supports ambitious organisations to diagnose inequity in their ecosystems and design, implement and evaluate innovative anti-oppression solutions. Her expertise lies in critical social theory and research, policy analysis and organisational change strategy. She holds a B.A. in Sociology and Anthropology from Goldsmiths, University of London and an M.A. in Global Political Economy from the University of Sussex, with a focus on social and economic policy, race-critical theory, decoloniality and intersectional feminism. Rubie is also an expert facilitator who is skilled at leaning into nuance, complexity and discomfort with curiosity and compassion. She is passionate about facilitating collaborative learning journeys that build deep understanding of the root causes of oppression and unlock innovative and meaningful ways to disrupt and divest, in service, ultimately, of collective liberation.

Hanna Naima McCloskey (she/her) is Algerian British and the Founder & CEO of Fearless Futures. Before founding Fearless Futures she worked for the UN, NGOs and the Royal Bank of Scotland, across communications, research and finance roles, and has lived, studied and worked in Israel-Palestine, Italy, the USA, Sudan, Syria and the UK. She has a BA in English from the University of Cambridge and an MA in International Relations from the Johns Hopkins School of Advanced International Studies, with a specialism in Conflict Management. Hanna is passionate, compassionate and challenging as an educator, and combines this with rigour and creativity in consultancy. She brings nuanced and complex ideas in incisive and engaging ways to all she supports, always with a commitment to equitable transformation. Hanna is also a qualified ABM bodyfeeding peer supporter, committed to enabling all parents to meet their body feeding goals.
Jan 17, 2025 • 53min

DEI: the final season + Alex Kotran on the Future of Education

We have a special episode for you this week: we brought in Hanna McCloskey and Rubie Clarke from Fearless Futures to talk about the recent announcement from Mark Zuckerberg, which signalled, very strongly, that he doesn’t care about marginalised groups on his platforms — or within the company itself.

We hear from Rubie and Hanna in the first half of the episode — and they will be back with us over the next couple of weeks for a two-parter on DEI! The rest of the episode features Alex Kotran discussing the future of education.

What does the term ‘AI literacy’ invoke for you? A proficiency in AI tooling? For Alex Kotran, founder of The AI Education Project, it’s about preparing students to enter a rapidly changing workforce. It’s not just about learning how to use AI, but about understanding how to build durable skills around it and get on a career path that won’t disappear in five years.

Alex has some great perspectives on how AI tools will significantly narrow career paths for young people. This is an urgent issue that goes beyond basic AI literacy: it’s about preparing students for a workforce that might look very different in five years to what it does today, and thinking holistically about how issues of tech procurement and efficiency intersect with times of economic downturn, such as a recession.

Further reading:

- The AI Education Project
- The AIEDU’s AI Readiness Framework

Alex Kotran, CEO of The AI Education Project (aiEDU), has nearly a decade of AI expertise and more than a decade of political experience as a community organizer. He founded aiEDU in 2019 after he discovered that Akron Public Schools, where his mom has taught for 30+ years, did not offer courses in AI use.

Previously, as Director of AI Ethics at H5, Alex partnered with NYU Law School and the National Judicial College to create a judicial training program that is now used around the world. He also established H5's first CSR function, incubating nonprofits like The Future Society, a leading AI governance institute.
Jan 10, 2025 • 46min

To be Seen and not Watched w/ Tawana Petty

Welcome back! Let us know what you think of the show and what you want to see more of in 2025 by writing in here, or rambling into a microphone here.

In this episode Alix is joined by Tawana Petty, who shares her experiences coming up as a political community activist in Detroit. Tawana studied the history of radical Black movements under Grace Lee Boggs, and has carried those learnings into her work today.

Listen to learn about how places like Detroit are used as testing grounds for new ‘innovations’ — especially within marginalised neighbourhoods. Tawana explains in detail how surveillance and safety are often mistakenly conflated, and how we have to work to unlearn this conflation.

Further reading:

- Our Data Bodies project: https://www.odbproject.org/
- James and Grace Lee Boggs Center: https://www.boggscenter.org/
- The Detroit Community Technology Project (which ran the digital stewards program): https://detroitcommunitytech.org/
- Detroit Digital Justice Coalition: https://alliedmedia.org/projects/detroit-digital-justice-coalition
- We The People of Detroit: https://www.wethepeopleofdetroit.com/

Tawana Petty is a mother, social justice organizer, poet, author, and facilitator. She is the founding Executive Director of Petty Propolis, Inc., an artist incubator which teaches poetry, policy literacy and advocacy, and interrogates negative pervasive narratives, in pursuit of racial and environmental justice. Petty is a 2023-2025 Just Tech Fellow with the Social Science Research Council and a 2024 Rockwood National LIO Alum, and she currently serves on the CS (computer science) for Detroit Steering Committee. In 2021, Petty was named one of 100 Brilliant Women in AI Ethics. In 2023, she was honored with the AI Policy Leader in Civil Society Award by the Center for AI and Digital Policy, the Ava Jo Silent Shero Award by the Michigan Roundtable for Diversity and Inclusion, and a Racial Justice Leadership Award by the Detroit People's Platform. In 2024, Petty was listed on Business Insider’s AI Power List for Policy and Ethics.
Dec 20, 2024 • 49min

Our learnings from 2024

We’re wrapped for the year, and will be back on the 10th of Jan. In the meantime, listen to Alix, Prathm, and Georgia discuss their biggest learnings from the pod this year, drawn from some of their favourite episodes.

We want to hear from YOU about the podcast — what do you want to hear more of in 2025? Share your ideas with us here: https://tally.so/r/3E860B

Or if you’d rather ramble into a microphone (just like we do…) use this link instead!

We pull out clips from the following episodes:

- The Age of Noise w/ Eryk Salvaggio
- The Happy Few: Open Source AI pt 1
- Big Dirty Data Centres w/ Boxi Wu and Jenna Ruddock
- US Election Special w/ Spencer Overton
- Chasing Away Sidewalk Labs w/ Bianca Wylie
- The Human in the Loop
- The Stories we Tell Ourselves About AI

Further reading:

Learn more about what ex-TikTok moderator Mojez has been up to this year via this BBC TikTok.
Dec 13, 2024 • 46min

A $20bn Search Engine w/ Michelle Meagher

Google has finally been judged to be a monopoly by a federal court — and while this was strikingly obvious already, what does the judgement actually mean? Is it too little, too late?

This week Alix and Prathm were joined by Michelle Meagher, an antitrust lawyer, who shared a brief history of how antitrust started as a tool for governments to stop the consolidation of corporate power, and over time has morphed to focus on issues of competition and consumer protection — which has allowed monopolies to thrive.

Michelle discusses the details of, and her thinking on, the ongoing cases against Google, and more generally how monopolies are basically like a big octopus arm-wrestling itself.

Further reading:

- US Said to Consider a Breakup of Google to Address Search Monopoly — NY Times
- Google’s second antitrust suit brought by US begins, over online ads — Guardian
- Big Tech on Trial — Matt Stoller
- How the EU’s DMA is changing Big Tech — The Verge
- UK set to clear Microsoft’s deal to buy Call of Duty maker Activision Blizzard — Guardian

Sign up to the Computer Says Maybe newsletter to get invites to our events and receive other juicy resources straight to your inbox.

Michelle is a competition lawyer and co-founder of the Balanced Economy Project, Europe’s first anti-monopoly organisation. She is the author of Competition is Killing Us: How Big Business is Harming Our Society and Planet - and What to Do About It (Penguin, 2020), a Financial Times Best Economics Book of the Year. She is a Senior Policy Fellow at the University College London Centre for Law, Economics and Society, and a Senior Fellow working on Monopoly and Corporate Governance at the Centre for Research on Multinational Corporations (SOMO).
Dec 6, 2024 • 48min

The Age of Noise w/ Eryk Salvaggio

What happens if you ask a generative AI image model to show you what Picasso’s work would have looked like if he had lived in Japan in the 16th century? Would it produce something totally new, or just mash together stereotypical aesthetics from Picasso’s work and 16th-century Japan?

This week, Alix interviewed Eryk Salvaggio, who shares his ideas about how we are moving out of ‘the age of information’ and into an age of noise: we have progressed so far into a paradigm of easy and frictionless information sharing that information has transformed into an overwhelming wall of noise.

So if everything is just noise, what do we filter out and what do we keep — and what systems do we use to do that?

Further reading:

- Visit Eryk’s website
- Cybernetic Forests — Eryk’s newsletter on tech and culture
- Our upcoming event: Insight Session: The politics, power, and responsibility of AI procurement with Bianca Wylie
- Our newsletter, which shares invites to events like the above, and other interesting bits

Eryk Salvaggio has been making tech-critical art since the dawn of the Internet. Now he’s a blend of artist, tech policy researcher, and writer focused on a critical approach to AI. He is the Emerging Technologies Research Advisor at the Siegel Family Endowment, an instructor in Responsible AI at Elisava Barcelona School of Design, a researcher at the metaLab (at) Harvard University’s AI Pedagogy Project, one of the top contributors to Tech Policy Press, and an artist whose work has been shown at festivals including SXSW, DEFCON, and Unsound.
Nov 29, 2024 • 53min

The Happy Few: Open Source AI (part two)

In part two of our episode on open source AI, we delve deeper into how we can use openness and participation for sustainable AI governance. Everyone agrees that things like the proliferation of harmful content are a huge risk — but what we cannot seem to agree on is how to eliminate that risk.

Alix is joined again by Mark Surman, and this time they take a closer look at the work Audrey Tang did as Taiwan’s first digital minister, where she successfully built and implemented a participatory framework that allowed the people of Taiwan to directly inform AI policy.

We also hear more from Mérouane Debbah, who built the first LLM trained in Arabic and highlights the importance of developing AI systems that don’t follow rigid Western benchmarks.

Mark Surman has spent three decades building a better internet, from the advent of the web to the rise of artificial intelligence. As President of Mozilla, a global nonprofit-backed technology company that does everything from making Firefox to advocating for a more open, equitable internet, Mark’s current focus is ensuring the various Mozilla organizations work in concert to make trustworthy AI a reality. Mark led the creation of Mozilla.ai (a commercial AI R+D lab) and Mozilla Ventures (an impact venture fund with a strong focus on AI). Before joining Mozilla, Mark spent 15 years leading organizations and projects that promoted the use of the internet and open source as tools for social and economic development.

More about our guests:

Audrey Tang, Cyber Ambassador of Taiwan, served as Taiwan’s 1st digital minister (2016-2024) and the world’s 1st nonbinary cabinet minister. Tang played a crucial role in shaping g0v (gov-zero), one of the most prominent civic tech movements worldwide. In 2014, Tang helped broadcast the demands of Sunflower Movement activists, and worked to resolve conflicts during a three-week occupation of Taiwan’s legislature. Tang became a reverse mentor to the minister in charge of digital participation, before assuming the role in 2016 after the government changed hands. Tang helped develop participatory democracy platforms such as vTaiwan and Join, bringing civic innovation into the public sector through initiatives like the Presidential Hackathon and Ideathon.

Sayash Kapoor is a Laurance S. Rockefeller Graduate Prize Fellow in the University Center for Human Values and a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. He is a coauthor of AI Snake Oil, a book that provides a critical analysis of artificial intelligence, separating the hype from the true advances. His research examines the societal impacts of AI, with a focus on reproducibility, transparency, and accountability in AI systems. He was included in TIME Magazine’s inaugural list of the 100 most influential people in AI.

Mérouane Debbah is a researcher, educator and technology entrepreneur. He has founded several public and industrial research centres and start-ups, and has held executive positions in ICT companies. He is a professor at Khalifa University in Abu Dhabi and founding director of the Khalifa University 6G Research Center. He has been working at the interface of AI and telecommunications, and in 2021 pioneered the development of NOOR, the first Arabic LLM.

Further reading & resources:

- Polis — a real-time participation platform
- Recursive Public by vTaiwan
- Noor — the first LLM trained on the Arabic language
- Falcon Foundation
- Buy AI Snake Oil by Sayash Kapoor and Arvind Narayanan
