Computer Says Maybe

Alix Dunn
Jul 18, 2024 • 4min

New mini-series: Exhibit X

In the Exhibit X series, Alix and Prathm sink their fingernails into the tangled universe of litigation and Big Tech: how have the courts held Big Tech firms accountable for their various harms over the years? Is whistleblowing an effective mechanism for informing new regulations? What about a social media platform’s First Amendment rights? So much to cover, so many episodes coming your way!
Jul 12, 2024 • 25min

What the FAccT? Evidence of bias. Now what?

In part four of our FAccT deep dive, Alix joins Marta Ziosi and Dasha Pruss to discuss their paper “Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool”.

In their paper they discuss how an erosion of public trust can lead to ‘any idea will do’ decisions, which often lean on technology such as predictive policing systems. One such tool is ShotSpotter, a piece of audio surveillance tech designed to detect gunfire: a contentious system which has been sold both as a tool for police to surveil civilians, and as a tool for civilians to keep tabs on police. Can it really be both?

Marta Ziosi is a Postdoctoral Researcher at the Oxford Martin AI Governance Initiative, where her research focuses on standards for frontier AI. She has worked for institutions such as DG CNECT at the European Commission, the Berkman Klein Center for Internet & Society at Harvard University, the Montreal International Center of Expertise in Artificial Intelligence (CEIMIA), and The Future Society. Previously, Marta was a PhD student and researcher on algorithmic bias and AI policy at the Oxford Internet Institute. She is also the founder of AI for People, a non-profit organisation whose mission is to put technology at the service of people. Marta holds a BSc in Mathematics and Philosophy from University College Maastricht. She also holds an MSc in Philosophy and Public Policy and an executive degree in Chinese Language and Culture for Business from the London School of Economics.

Dasha Pruss is a 2023–2024 fellow at the Berkman Klein Center for Internet & Society and an Embedded EthiCS postdoctoral fellow at Harvard University. In fall 2024 she will be an assistant professor of philosophy and computer science at George Mason University. She received her PhD in History & Philosophy of Science from the University of Pittsburgh in May 2023, and holds a BSc in Computer Science from the University of Utah. She has also co-organised with Against Carceral Tech, an activist group working to ban facial recognition and predictive policing in the city of Pittsburgh.

This episode is hosted by Alix Dunn. Our guests are Marta Ziosi and Dasha Pruss.

Further Reading
Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool
Refusing and Reusing Data, by Catherine D’Ignazio
Jul 5, 2024 • 24min

What the FAccT? First law, bad law

In this episode, we speak with Lara Groves and Jacob Metcalf at the seventh annual FAccT conference in Rio de Janeiro.

In part three of our FAccT deep dive, Alix joins Lara Groves and Jacob Metcalf to discuss their paper “Auditing Work: Exploring the New York City algorithmic bias audit regime”.

Lara Groves is a Senior Researcher at the Ada Lovelace Institute. Her most recent project explored the role of third-party auditing regimes in AI governance. Lara has previously led research on the role of public participation in commercial AI labs, and on algorithmic impact assessments. Her research interests include practical and participatory approaches to algorithmic accountability and innovative policy solutions to challenges of governance. Before joining Ada, Lara worked as a tech and internet policy consultant, and has experience in research, public affairs and campaigns for think tanks, political parties and advocacy groups. Lara has an MSc in Democracy from UCL.

Jacob Metcalf, PhD, is a researcher at Data & Society, where he leads the AI on the Ground Initiative and works on an NSF-funded multisite project, Pervasive Data Ethics for Computational Research (PERVADE). For this project, he studies how data ethics practices are emerging in environments that have not previously grappled with research ethics, such as industry, IRBs, and civil society organizations. His recent work has focused on the new organizational roles that have developed around AI ethics in tech companies. Jake’s consulting firm, Ethical Resolve, provides a range of ethics services, helping clients to make well-informed, consistent, actionable, and timely business decisions that reflect their values. He also serves as the Ethics Subgroup Chair for the IEEE P7000 Standard.

This episode is hosted by Alix Dunn. Our guests are Lara Groves and Jacob Metcalf.

Further Reading
Auditing Work: Exploring the New York City algorithmic bias audit regime, by Lara Groves (Ada Lovelace Institute, UK), Jacob Metcalf (Data & Society Research Institute, USA), Alayna Kennedy (Independent researcher, USA), Briana Vecchione (Data & Society Research Institute, USA) and Andrew Strait (Ada Lovelace Institute, UK)
Jun 28, 2024 • 30min

What the FAccT?: Abandoning Algorithms

In this episode, we speak with Nari Johnson and Sanika Moharana at this year’s FAccT conference in Rio de Janeiro.

In part two of our FAccT deep dive, Alix joins Nari Johnson and Sanika Moharana to discuss their paper “The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment”.

Nari Johnson is a third-year PhD student in Carnegie Mellon University’s Machine Learning Department, where she is advised by Hoda Heidari. She graduated from Harvard in 2021 with a BA and MS in Computer Science, where she previously worked with Finale Doshi-Velez.

Sanika Moharana is a second-year PhD student in Human-Computer Interaction at Carnegie Mellon University. As an advocate for human-centered design and research, Sanika practices iterative ideation and prototyping for multimodal interactions and interfaces across intelligent systems, connected smart devices, IoT devices, AI experiences, and emerging technologies.

Further Reading
The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment
Jun 21, 2024 • 54min

What the FAccT?: Reformers and Radicals

Alix Dunn recaps the FAccT conference and the papers presented there. Topics include moderation on AI model marketplaces, algorithmic bias in policing, AI auditing laws, responsibility in AI supply chains, disgorgement of data and models, and surveillance concerns around AI technology.
May 23, 2024 • 52min

Protesting Project Nimbus: employee organising to end Google’s contract with Israel w/ Dr. Kate Sim

In this episode, we speak with Dr. Kate Sim, one of the core organisers of the Google worker sit-in against Project Nimbus. Dr. Sim was recently fired from Google, alongside almost 50 other employees, after helping organise a sit-in protesting Project Nimbus, a joint contract between Google and Amazon to provide technology to the Israeli government and military.

Alix and Dr. Sim discuss technology-enabled violence, Dr. Sim’s work in trust and safety, and Google’s cancelled Project Maven. They also talk about Dr. Sim’s journey into protesting Project Nimbus, the many other voices fighting against the contract, and how Big Tech often obfuscates its responsibility in perpetuating violence. In the end, we arrive at a common lesson: solidarity is our main hope for change.

This episode is hosted by Alix Dunn and our guest is Dr. Kate Sim.

Further Reading
No Tech For Apartheid
What is Project Nimbus, and why are Google workers protesting Israel deal?
Israeli Weapons Firms Required to Buy Cloud Services From Google and Amazon
The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World
3 Years After the Project Maven Uproar, Google Cozies to the Pentagon
How Big Tech and Silicon Valley are Transforming the Military-Industrial Complex
Google Fired Us for Protesting Its Complicity in the War on Gaza. But We Won’t Be Silenced.
Computer Says Maybe Newsletter: Protesting Project Nimbus
Apr 19, 2024 • 40min

The Human in the Loop: What's it like to work in the AI supply chain?

Experts in AI systems and tech labor discuss the evolving jobs in the AI industry, labor conditions, and who benefits. Topics include content moderation challenges, labor issues across the AI supply chain, and the human cost of consumer technology, with personal experiences and advocacy for better labor standards highlighted throughout.
Feb 13, 2024 • 35min

2024 Elections: Is AI going to wreak havoc?

In this episode, we walk through how misinformation and disinformation have been used in past elections to impact outcomes, where we think AI might make a material difference in how elections play out this year, and where we think responsibility lies for the situation we’re in.

This episode is hosted by Alix Dunn and Prathm Juneja, and guests include Sam Gregory, Josh Lawson, and Claire Wardle.

If you have feedback about the episode, or a pet subject that you might want to join forces to develop into an episode, please reach out. You can email team@saysmaybe.com or share an audio note here: speakpipe.com/saysmaybe

Further Reading

Academic Articles
KOSA isn’t designed to help kids, by danah boyd
Techno-legal Solutionism: Regulating Children’s Online Safety in the United States
A review and provocation: On polarization and platforms
How to Prepare for the Deluge of Generative AI on Social Media
Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior
Negative Downstream Effects of Alarmist Disinformation Discourse: Evidence from the United States | Political Behavior

News Articles
Voter Suppression Has Gone Digital
Big Tech rolls back misinformation measures ahead of 2024
Big Tech Backslide: How Social-Media Rollbacks Endanger Democracy Ahead of the 2024 Elections
Murthy v. Missouri (Formerly Missouri v. Biden)
FCC votes to outlaw scam robocalls that use AI-generated voices

Other Links
How OpenAI is approaching 2024 worldwide elections
OII | How Data and Artificial Intelligence are Actually Transforming American Elections
