
Computer Says Maybe
Technology is changing fast. And it's changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up. Step outside the hype and explore the possibilities, problems, and politics of technology. We publish weekly.
Latest episodes

Dec 6, 2024 • 48min
The Age of Noise w/ Eryk Salvaggio
What happens if you ask a generative AI image model to show you what Picasso’s work would have looked like if he lived in Japan in the 16th century? Would it produce something totally new, or just mash together stereotypical aesthetics from Picasso’s work and 16th-century Japan?

This week, Alix interviewed Eryk Salvaggio, who shares his ideas around how we are moving away from ‘the age of information’ and into an age of noise, where we’ve progressed so far into a paradigm of easy and frictionless information sharing that information has transformed into an overwhelming wall of noise.

So if everything is just noise, what do we filter out and keep in — and what systems do we use to do that?

Further reading:
- Visit Eryk’s website
- Cybernetic Forests — Eryk’s newsletter on tech and culture
- Our upcoming event: Insight Session: The politics, power, and responsibility of AI procurement with Bianca Wylie
- Our newsletter, which shares invites to events like the above, and other interesting bits

Eryk Salvaggio has been making tech-critical art since the dawn of the Internet. Now he’s a blend of artist, tech policy researcher, and writer focused on a critical approach to AI. He is the Emerging Technologies Research Advisor at the Siegel Family Endowment, an instructor in Responsible AI at Elisava Barcelona School of Design, a researcher at the metaLab (at) Harvard University’s AI Pedagogy Project, one of the top contributors to Tech Policy Press, and an artist whose work has been shown at festivals including SXSW, DEFCON, and Unsound.

Nov 29, 2024 • 53min
The Happy Few: Open Source AI (part two)
In part two of our episode on open source AI, we delve deeper into how we can use openness and participation for sustainable AI governance. Everyone agrees that things like the proliferation of harmful content are a huge risk — but what we cannot seem to agree on is how to eliminate that risk.

Alix is joined again by Mark Surman, and this time they take a closer look at the work Audrey Tang did as Taiwan’s first digital minister, where she successfully built and implemented a participatory framework that allowed the people of Taiwan to directly inform AI policy.

We also hear more from Mérouane Debbah, who built the first LLM trained in Arabic, and who highlights the importance of developing AI systems that don’t follow rigid western benchmarks.

Mark Surman has spent three decades building a better internet, from the advent of the web to the rise of artificial intelligence. As President of Mozilla, a global nonprofit-backed technology company that does everything from making Firefox to advocating for a more open, equitable internet, Mark’s current focus is ensuring the various Mozilla organizations work in concert to make trustworthy AI a reality. Mark led the creation of Mozilla.ai (a commercial AI R+D lab) and Mozilla Ventures (an impact venture fund with a strong focus on AI). Before joining Mozilla, Mark spent 15 years leading organizations and projects that promoted the use of the internet and open source as tools for social and economic development.

More about our guests:

Audrey Tang, Cyber Ambassador of Taiwan, served as Taiwan’s 1st digital minister (2016-2024) and the world’s 1st nonbinary cabinet minister. Tang played a crucial role in shaping g0v (gov-zero), one of the most prominent civic tech movements worldwide. In 2014, Tang helped broadcast the demands of Sunflower Movement activists, and worked to resolve conflicts during a three-week occupation of Taiwan’s legislature. Tang became a reverse mentor to the minister in charge of digital participation, before assuming the role in 2016 after the government changed hands. Tang helped develop participatory democracy platforms such as vTaiwan and Join, bringing civic innovation into the public sector through initiatives like the Presidential Hackathon and Ideathon.

Sayash Kapoor is a Laurance S. Rockefeller Graduate Prize Fellow in the University Center for Human Values and a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. He is a coauthor of AI Snake Oil, a book that provides a critical analysis of artificial intelligence, separating the hype from the true advances. His research examines the societal impacts of AI, with a focus on reproducibility, transparency, and accountability in AI systems. He was included in TIME Magazine’s inaugural list of the 100 most influential people in AI.

Mérouane Debbah is a researcher, educator and technology entrepreneur. He has founded several public and industrial research centers and start-ups, and has held executive positions in ICT companies. He is a professor at Khalifa University in Abu Dhabi, and founding director of the Khalifa University 6G Research Center. He has been working at the interface of AI and telecommunications, and in 2021 pioneered the development of NOOR, the first Arabic LLM.

Further reading & resources:
- Polis — a real-time participation platform
- Recursive Public by vTaiwan
- Noor — the first LLM trained on the Arabic language
- Falcon Foundation
- Buy AI Snake Oil by Sayash Kapoor and Arvind Narayanan

Nov 22, 2024 • 58min
The Happy Few: Open Source AI (part one)
In the context of AI, what do we mean when we say ‘open source’? An AI model is not something you can straightforwardly open up like a piece of software; there are huge technical and social considerations to be made.

Is it risky to open-source highly capable foundation models? What guardrails do we need to think about when it comes to the proliferation of harmful content? And can you really call it ‘open’ if the barrier to accessing compute is so high? Is model alignment really the only thing we have to protect us?

In this two-parter, Alix is joined by Mozilla president Mark Surman to discuss the benefits and drawbacks of open and closed models. Our guests are Alondra Nelson, Mérouane Debbah, Audrey Tang, and Sayash Kapoor.

Listen to learn about the early years of the free software movement, the ecosystem lock-in of the closed-source environment, and what kinds of things are possible with a more open approach to AI.

Mark Surman has spent three decades building a better internet, from the advent of the web to the rise of artificial intelligence. As President of Mozilla, a global nonprofit-backed technology company that does everything from making Firefox to advocating for a more open, equitable internet, Mark’s current focus is ensuring the various Mozilla organizations work in concert to make trustworthy AI a reality. Mark led the creation of Mozilla.ai (a commercial AI R+D lab) and Mozilla Ventures (an impact venture fund with a strong focus on AI). Before joining Mozilla, Mark spent 15 years leading organizations and projects that promoted the use of the internet and open source as tools for social and economic development.

More about our guests:

Audrey Tang, Cyber Ambassador of Taiwan, served as Taiwan’s 1st digital minister (2016-2024) and the world’s 1st nonbinary cabinet minister. Tang played a crucial role in shaping g0v (gov-zero), one of the most prominent civic tech movements worldwide. In 2014, Tang helped broadcast the demands of Sunflower Movement activists, and worked to resolve conflicts during a three-week occupation of Taiwan’s legislature. Tang became a reverse mentor to the minister in charge of digital participation, before assuming the role in 2016 after the government changed hands. Tang helped develop participatory democracy platforms such as vTaiwan and Join, bringing civic innovation into the public sector through initiatives like the Presidential Hackathon and Ideathon.

Alondra Nelson is a scholar of the intersections of science, technology, policy, and society, and the Harold F. Linder Professor at the Institute for Advanced Study, an independent research center in Princeton, New Jersey. Dr. Nelson was formerly deputy assistant to President Joe Biden and acting director of the White House Office of Science and Technology Policy (OSTP). In that role, she spearheaded the development of the Blueprint for an AI Bill of Rights, and was the first African American and first woman of color to lead US science and technology policy.

Sayash Kapoor is a Laurance S. Rockefeller Graduate Prize Fellow in the University Center for Human Values and a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. He is a coauthor of AI Snake Oil, a book that provides a critical analysis of artificial intelligence, separating the hype from the true advances. His research examines the societal impacts of AI, with a focus on reproducibility, transparency, and accountability in AI systems. He was included in TIME Magazine’s inaugural list of the 100 most influential people in AI.

Mérouane Debbah is a researcher, educator and technology entrepreneur. He has founded several public and industrial research centers and start-ups, and has held executive positions in ICT companies. He is a professor at Khalifa University in Abu Dhabi, and founding director of the Khalifa University 6G Research Center. He has been working at the interface of AI and telecommunications, and in 2021 pioneered the development of NOOR, the first Arabic LLM.

Further reading & resources:
- Polis — a real-time participation platform
- Recursive Public by vTaiwan
- Noor — the first LLM trained on the Arabic language
- Falcon Foundation
- Buy AI Snake Oil by Sayash Kapoor and Arvind Narayanan

Nov 15, 2024 • 50min
Algorithmically cutting benefits w/ Kevin De Liban
This week Alix was joined by Kevin De Liban, who just launched Techtonic Justice, an organisation designed to support and fight for those harmed by AI systems.

In this episode, Kevin describes his experiences litigating on behalf of people in Arkansas who found their in-home care hours cut aggressively by an algorithm administered by the state. This is a story about taking care away from individuals in the name of ‘efficiency’, and the particular levers for justice that Kevin and his team managed to use to eventually ban the use of this algorithm in Arkansas.

CW: This episode contains descriptions of people being denied care and left in undignified situations, at around 08:17-08:40 and 27:12-28:07.

Further reading & resources:
- Techtonic Justice

Kevin De Liban is the founder of Techtonic Justice and the Director of Advocacy at Legal Aid of Arkansas, nurturing multi-dimensional efforts to improve the lives of low-income Arkansans in matters of health, workers' rights, safety net benefits, housing, consumer rights, and domestic violence. With Legal Aid, he has led a successful litigation campaign in federal and state courts challenging Arkansas's use of an algorithm to cut vital Medicaid home-care benefits to individuals who have disabilities or are elderly.

Nov 8, 2024 • 41min
Election Debrief
In a reflective post-election discussion, the hosts tackle the role of tech giants like Elon Musk in shaping political dynamics. They advocate for a politics rooted in love and empathy, critiquing current misanthropic trends. The conversation stretches to polling inaccuracies and the impact of social media on young male voters, highlighting shifts in political engagement. Furthermore, they stress the necessity for the left to craft compelling narratives, while addressing the implications of big tech in the new political landscape.

Nov 1, 2024 • 51min
US Election Special w/ Spencer Overton
For this pre-election special, Prathm spoke with law professor Spencer Overton about how this election has — and hasn’t — been impacted by AI systems. Misinformation and deepfakes appear to be top of the agenda for a lot of politicians and commentators, but there’s a lot more to think about…

Spencer discusses the USA’s transition into a multiracial democracy, and describes the ongoing cultural anxiety that comes with it — and how that filters down into the politicisation of AI tools, both as fuel for moral panics and as instruments used to suppress voters of colour.

Further reading:
- Artificial Intelligence for Electoral Management | International IDEA
- Overcoming Racial Harms to Democracy from Artificial Intelligence by Spencer Overton | SSRN
- AI’s impact on elections is being overblown | MIT Technology Review
- Effects of Shelby County v. Holder on the Voting Rights Act | Brennan Center for Justice

Spencer Overton is the Patricia Roberts Harris Research Professor at GW Law School. As the Director of the Multiracial Democracy Project at the GW Equity Institute, he focuses on producing and supporting research that grapples with challenges to a well-functioning multiracial democracy. He is currently working on research projects related to the regulation of AI to facilitate a well-functioning multiracial democracy, and the implications of alternative voting systems for multiracial democracy.

Nov 1, 2024 • 27min
Net 0 ++: Reporting on AI’s climate injustices w/ Karen Hao
For our final episode in this series on the environment, Alix interviewed Karen Hao on how tough it is to report on the environmental impacts of AI.

The conversation focusses on two of Karen’s recent stories, linked below. One of the biggest barriers to consistent reporting on AI’s climate injustices is the sheer opaqueness of information about what companies are trying to do when building infrastructure, and what they think the actual costs — primarily of energy and water use — will be. Tech companies that Karen has written about enter communities via shell companies and promise relatively big deals for small municipalities if they allow the development of new data centres — and community members often don’t know what they’re signing up for until it’s too late.

Listen to learn about how difficult it is to report on this industry, and the tactics and methods Karen has to use to tell her stories.

Further reading:
- Microsoft’s Hypocrisy on AI by Karen Hao
- AI is Taking Water from the Desert by Karen Hao

Karen Hao is an American journalist who writes for publications like The Atlantic. She was previously a foreign correspondent based in Hong Kong for The Wall Street Journal and a senior artificial intelligence editor at the MIT Technology Review. She is best known for her coverage of AI research, technology ethics and the social impact of AI.

Oct 25, 2024 • 33min
Net 0++: Concrete arguments for AI
In our third episode about AI & the environment, Alix interviewed Sherif Elsayed-Ali, who has been working on using AI to reduce the carbon emissions of concrete. Yes, that’s right — concrete.

This may seem like a very niche focus for a green initiative, but it isn’t: concrete is the second most used substance in the world because it’s integral to modern infrastructure, and there’s no other material like it. It’s also one of the biggest sources of carbon emissions in the world.

In this episode, Sherif explains how AI and machine learning can make the process of concrete production more precise and efficient so that it burns much less fuel. Listen to learn about the big picture of global carbon emissions, and how AI can actually be used to reduce carbon output, rather than just monitor it — or add to it!

Sherif Elsayed-Ali trained as a civil engineer, then studied international human rights law and public policy and administration. He worked with the UN and in the non-profit sector on humanitarian and human rights research and policy, before embarking on a career in tech and climate.

Sherif founded Amnesty Tech, a group at the forefront of technology and human rights. He then joined Element AI (today ServiceNow Research), starting and leading its AI for Climate work. In 2020, he co-founded and became CEO of Carbon Re, an industrial AI company spun out of Cambridge University and UCL, developing novel solutions for decarbonising cement. He then co-founded Nexus Climate, a company providing climate tech advisory services and supporting the startup ecosystem.

Oct 18, 2024 • 32min
Net 0++: Big Dirty Data Centres
This week we are continuing our AI & Environment series with an episode about a key piece of AI infrastructure: data centres. With us this week are Boxi Wu and Jenna Ruddock, who explain how data centres are a gruesomely sharp double-edged sword.

Data centres contribute to huge amounts of environmental degradation via local water and energy consumption, and impact the health of surrounding communities with incessant noise pollution. They are also used as a political springboard by global leaders, for whom the expansion of AI infrastructure is synonymous with progress and economic growth.

Boxi and Jenna talk us through the various community concerns that come with data centre development, and the kind of pushback we’re seeing in the UK and the US right now.

Boxi Wu is a DPhil researcher at the Oxford Internet Institute and a Research Policy Consultant with the OECD’s AI Policy Observatory. Their research focuses on the politics of AI infrastructure within the context of increasing global inequality and the current climate crisis. Prior to returning to academia, Boxi worked in AI ethics, technology consulting and policy research. Most recently, they worked in AI Ethics & Safety at Google DeepMind, where they specialised in the ethics of LLMs and led the responsible release of frontier AI models, including the initially released Gemini models.

Jenna Ruddock is a researcher and advocate working at the intersections of law, technology, media, and environmental justice. Currently, she is policy counsel at Free Press, where she focuses on digital civil rights, surveillance, privacy, and media infrastructures. She has been a visiting fellow at the University of Amsterdam's critical infrastructure lab (criticalinfralab.net), a postdoctoral fellow with the Technology & Social Change project at the Harvard Kennedy School's Shorenstein Center, and a senior researcher with the Tech, Law & Security Program at American University Washington College of Law. Jenna is also a documentary photographer and producer with a background in community media and factual streaming.

Further reading:
- Governing Computational Infrastructure for Strong and Just AI Economies, co-authored by Boxi Wu
- Getting into Fights with Data Centres by Anne Pasek

Oct 11, 2024 • 42min
Net 0++: Microsoft’s greenwashing w/ Holly Alpine
This week we’re kicking off a series about AI & the environment. We’re starting with Holly Alpine, who recently left Microsoft after starting and growing an internal sustainability programme there over the course of a decade.

Holly’s goal was pretty simple: she wanted Microsoft to honour the sustainability commitments it had set for itself. The internal support she had fostered for sustainability initiatives did not match up with Microsoft’s actions — the company continued to work with fossil fuel companies even though doing so was at odds with its plans to achieve net zero.

Listen to learn what it’s like to approach this kind of huge systemic challenge in good faith, and to try to make change happen from the inside.

Holly Alpine is a dedicated leader in sustainability and environmental advocacy, having spent over a decade at Microsoft pioneering and leading multiple global initiatives. As the founder and head of Microsoft's Community Environmental Sustainability program, Holly directed substantial investments into community-based, nature-driven solutions, impacting over 45 communities in Microsoft’s global datacenter footprint, with measurable improvements to ecosystem health, social equity, and human well-being.

Currently, Holly continues her environmental leadership as a board member of both American Forests and Zero Waste Washington, while staying active in outdoor sports as a plant-based athlete who enjoys rock climbing, mountain biking, ski mountaineering, and running mountain ultramarathons.

Further reading:
- Microsoft’s Hypocrisy on AI
- Our tech has a climate problem: How we solve it