

Computer Says Maybe
Alix Dunn
Technology is changing fast. And it's changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up. Step outside the hype and explore the possibilities, problems, and politics of technology. We publish weekly.
Episodes

Nov 1, 2024 • 51min
US Election Special w/ Spencer Overton
For this pre-election special, Prathm spoke with law professor Spencer Overton about how this election has — and hasn't — been impacted by AI systems. Misinformation and deepfakes appear to be top of the agenda for a lot of politicians and commentators, but there's a lot more to think about…

Spencer discusses the USA's transition into a multiracial democracy and the ongoing cultural anxiety that comes with that — and how that anxiety filters down into the politicisation of AI tools, both as fuel for moral panics and as a means of suppressing voters of colour.

Further reading:
Artificial Intelligence for Electoral Management | International IDEA
Overcoming Racial Harms to Democracy from Artificial Intelligence by Spencer Overton :: SSRN
AI's impact on elections is being overblown | MIT Technology Review
Effects of Shelby County v. Holder on the Voting Rights Act | Brennan Center for Justice

Spencer Overton is the Patricia Roberts Harris Research Professor at GW Law School. As the Director of the Multiracial Democracy Project at the GW Equity Institute, he focuses on producing and supporting research that grapples with challenges to a well-functioning multiracial democracy. He is currently working on research projects related to the regulation of AI to facilitate a well-functioning multiracial democracy and the implications of alternative voting systems for multiracial democracy.

Nov 1, 2024 • 27min
Net 0 ++: Reporting on AI’s climate injustices w/ Karen Hao
For our final episode in this series on the environment, Alix interviewed Karen Hao about how tough it is to report on the environmental impacts of AI.

The conversation focusses on two of Karen's recent stories, linked below. One of the biggest barriers to consistent reporting on AI's climate injustices is the sheer opaqueness of information about what companies are trying to do when building infrastructure, and what they think the actual costs — primarily energy and water use — will be. Tech companies that Karen has written about enter communities via shell companies and promise relatively big deals to small municipalities if they allow the development of new data centres — and community members often don't know what they're signing up for until it's too late.

Listen to learn how difficult it is to report on this industry, and the tactics and methods Karen has to use to tell her stories.

Further reading:
Microsoft's Hypocrisy on AI by Karen Hao
AI is Taking Water from the Desert by Karen Hao

Karen Hao is an American journalist who writes for publications like The Atlantic. She was previously a foreign correspondent based in Hong Kong for The Wall Street Journal and a senior artificial intelligence editor at the MIT Technology Review. She is best known for her coverage of AI research, technology ethics, and the social impact of AI.

Oct 25, 2024 • 33min
Net 0++: Concrete arguments for AI
In our third episode about AI & the environment, Alix interviewed Sherif Elsayed-Ali, who has been working on using AI to reduce the carbon emissions of concrete. Yes, that's right — concrete.

This may seem like a very niche focus for a green initiative, but it isn't: concrete is the second most used substance in the world because it's integral to modern infrastructure, and there's no other material like it. It's also one of the biggest sources of carbon emissions in the world.

In this episode Sherif explains how AI and machine learning can make concrete production more precise and efficient so that it burns much less fuel. Listen to learn about the big picture of global carbon emissions, and how AI can actually be used to reduce carbon output, rather than just monitor it — or add to it!

Sherif Elsayed-Ali trained as a civil engineer, then studied international human rights law and public policy and administration. He worked with the UN and in the non-profit sector on humanitarian and human rights research and policy before embarking on a career in tech and climate.

Sherif founded Amnesty Tech, a group at the forefront of technology and human rights. He then joined Element AI (today ServiceNow Research), starting and leading its AI for Climate work. In 2020, he co-founded and became CEO of Carbon Re, an industrial AI company spun out of Cambridge University and UCL, developing novel solutions for decarbonising cement. He then co-founded Nexus Climate, a company providing climate tech advisory services and supporting the startup ecosystem.

Oct 18, 2024 • 32min
Net 0++: Big Dirty Data Centres
This week we are continuing our AI & Environment series with an episode about a key piece of AI infrastructure: data centres. With us this week are Boxi Wu and Jenna Ruddock, who explain how data centres are a gruesomely sharp double-edged sword.

Data centres contribute to huge amounts of environmental degradation through local water and energy consumption, and harm the health of surrounding communities with incessant noise pollution. They are also used as a political springboard by global leaders, for whom the expansion of AI infrastructure is synonymous with progress and economic growth.

Boxi and Jenna talk us through the various community concerns that come with data centre development, and the kind of pushback we're seeing in the UK and the US right now.

Boxi Wu is a DPhil researcher at the Oxford Internet Institute and a Research Policy Consultant with the OECD's AI Policy Observatory. Their research focuses on the politics of AI infrastructure within the context of increasing global inequality and the current climate crisis. Prior to returning to academia, Boxi worked in AI ethics, technology consulting and policy research. Most recently, they worked in AI Ethics & Safety at Google DeepMind, where they specialised in the ethics of LLMs and led the responsible release of frontier AI models, including the initial Gemini models.

Jenna Ruddock is a researcher and advocate working at the intersections of law, technology, media, and environmental justice. Currently, she is policy counsel at Free Press, where she focuses on digital civil rights, surveillance, privacy, and media infrastructures. She has been a visiting fellow at the University of Amsterdam's critical infrastructure lab (criticalinfralab.net), a postdoctoral fellow with the Technology & Social Change project at the Harvard Kennedy School's Shorenstein Center, and a senior researcher with the Tech, Law & Security Program at American University Washington College of Law. Jenna is also a documentary photographer and producer with a background in community media and factual streaming.

Further reading:
Governing Computational Infrastructure for Strong and Just AI Economies, co-authored by Boxi Wu
Getting into Fights with Data Centres by Anne Pasek

Oct 11, 2024 • 42min
Net 0++: Microsoft’s greenwashing w/ Holly Alpine
This week we're kicking off a series about AI & the environment. We're starting with Holly Alpine, who recently left Microsoft after starting and growing an internal sustainability programme there over the course of a decade.

Holly's goal was pretty simple: she wanted Microsoft to honour the sustainability commitments it had set for itself. The internal support she had fostered for sustainability initiatives did not match up with Microsoft's actions — the company continued to work with fossil fuel companies even though doing so was at odds with its plans to achieve net zero.

Listen to learn what it's like to approach this kind of huge systemic challenge in good faith, and to try to make change happen from the inside.

Holly Alpine is a dedicated leader in sustainability and environmental advocacy, having spent over a decade at Microsoft pioneering and leading multiple global initiatives. As the founder and head of Microsoft's Community Environmental Sustainability program, Holly directed substantial investments into community-based, nature-driven solutions across more than 45 communities in Microsoft's global datacenter footprint, with measurable improvements to ecosystem health, social equity, and human well-being.

Currently, Holly continues her environmental leadership as a board member of both American Forests and Zero Waste Washington, while staying active in outdoor sports as a plant-based athlete who enjoys rock climbing, mountain biking, ski mountaineering, and running mountain ultramarathons.

Further reading:
Microsoft's Hypocrisy on AI
Our tech has a climate problem: How we solve it

Oct 4, 2024 • 50min
Chasing Away Sidewalk Labs w/ Bianca Wylie
In this insightful conversation, Bianca Wylie, a writer and digital governance expert, shares her journey resisting Google's plans for a smart city in Toronto. She emphasises the importance of public consultation in digital governance and explores the complexities of tech procurement. Bianca discusses the role of local journalism in community engagement and critiques how privacy was handled in the project's urban planning. She advocates for specificity in tech discussions, highlighting the need for transparency and genuine public dialogue when integrating technology into governance.

Sep 27, 2024 • 25min
Will Newsom Veto the AI Safety Bill? w/ Teri Olle
What if we could have a public library for compute? But is… more compute really what we want right now?

This week Alix interviewed Teri Olle from the Economic Security Project, a co-sponsor of the California AI safety bill (SB 1047). The bill has been making the rounds in the news because it would force AI companies to do safety checks on their models before releasing them to the public — which is seen as, uh, 'controversial' by those in the innovation space.

But Teri had a hand in a lesser-known part of the bill: the construction of CalCompute, a state-owned public cloud cluster for resource-intensive AI development. This would mean public access to the compute power needed to train state-of-the-art AI models — finally giving researchers and plucky startups access to something otherwise locked inside a corporate walled garden.

Teri Olle is the California Campaign Director for Economic Security Project Action. Beginning her career as an attorney, Teri soon moved into policy and issue advocacy, working on state and local efforts to ban toxic chemicals and pesticides, decrease food insecurity and hunger, and increase gender representation in politics. She is a founding member of a political action committee dedicated to inserting parent voice into local politics and served as the president of the board of Emerge California. She lives in San Francisco with her husband and two daughters.

Sep 20, 2024 • 38min
The stories we tell ourselves about AI
Applications for our second cohort of Media Mastery for New AI Protagonists are now open! Join this 5-week program to level up your media impact alongside a dynamic community of emerging experts in AI politics and power — at no cost to you. In this episode, we chat with Daniel Stone, a participant from our first cohort, about his work. Apply by Sunday, September 29th!

The adoption of new technologies is driven by stories. A story is a shortcut to understanding something complex. Narratives can lock us into a set of options that are… terrible. The kicker is that narratives are hard to detect and even harder to influence.

But how reliable are our narrators? And how can we use story as strategy?

The good news is that experts are working to unravel the narratives around AI, all so that folks with the public interest in mind can change the game.

This week Alix sat down with three researchers looking at three AI narrative questions. She spoke to Hanna Barakat about how the New York Times reports on AI; Jonathan Tanner, who scraped and analysed huge amounts of YouTube videos to find narrative patterns; and Daniel Stone, who studied and deconstructed the metaphors that power collective understanding of AI.

In this ep we ask:
What are the stories we tell ourselves about AI? And why do we let industry pick them?
How do these narratives change what is politically possible?
What can public interest organisations and advocates do to change the narrative game?

Hanna Barakat is a research analyst for Computer Says Maybe, working at the intersection of emerging technologies and complex systems design. She graduated from Brown University in 2022 with honors in International Development Studies and a focus in Digital Media Studies.

Jonathan Tanner founded Rootcause after more than fifteen years working in senior communications roles for high-profile politicians, CEOs, philanthropists and public thinkers across the world. In that time he has worked across more than a dozen countries running diverse teams whilst writing keynote speeches, securing front-page headlines, delivering world-first social media moments and helping to secure meaningful changes to public policy.

Daniel Stone is currently undertaking research with Cambridge University's Centre for Future Intelligence and is the Executive Director of Diffusion.Au. He is a Policy Fellow with the Chifley Research Centre and a Policy Associate at the Centre for Responsible Technology Australia.

Sep 13, 2024 • 38min
Bridging The Divide w/ Issie Lapowsky
There are oceans of research papers digging into the various harms of online platforms. Researchers are asking urgent questions, such as how hate speech and misinformation affect our information environment and our democracy.

But how does this research find its way to the media, policymakers, advocacy groups, or even tech companies themselves?

To help us answer this, Alix is joined this week by Issie Lapowsky, who recently authored Bridging The Divide: Translating Research on Digital Media into Policy and Practice — a report about how research reaches these four groups, and what they do with it. This episode also features John Sands from Knight Foundation, who commissioned the report.

Further reading:
Bridging The Divide by Issie Lapowsky
Knight Foundation

Issie Lapowsky is a journalist covering the intersection of tech, politics and national affairs. She has been published in WIRED, Protocol, The New York Times, and Fast Company.

John Sands is Senior Director of Media and Democracy at Knight Foundation. Since joining Knight Foundation in 2019, he has led more than $100 million in grantmaking to support independent scholarship and policy research on information and technology in the context of our democracy.

Sep 6, 2024 • 45min
Why was the CEO of Telegram just arrested? w/ Mallory Knodel
Last week, Telegram CEO Pavel Durov landed in France and was immediately detained. The details of his arrest are still emerging; he is being charged with complicity in illegal activities happening on the platform, including the spread of CSAM.

Durov's lawyer has called these charges "absurd", arguing that the head of a social media company cannot be held responsible for criminal activity on the platform. That might be true in the US, but does it hold up in France?

This week Alix is joined by Mallory Knodel to talk us through what happened:
What are the implications of France making this move, and why now?
How has Telegram positioned itself as the most safe and secure messaging platform when it doesn't even use the same encryption standards as WhatsApp?
How has Telegram managed to get away with being uncooperative with various governments — or has it?

Mallory Knodel is the Center for Democracy & Technology's Chief Technology Officer. She is also a co-chair of the Human Rights and Protocol Considerations research group of the Internet Research Task Force and a chairing advisor on cybersecurity and AI to the Freedom Online Coalition.


