This podcast explores the challenges of AI in real life: the testing of autonomous vehicles in San Francisco, investigations into discriminatory AI used to predict welfare fraud, and what it takes to prioritize people over profit in AI governance and regulation.
The deployment of autonomous vehicles (AVs) in San Francisco has raised concerns about street safety, highlighting the need for better performance standards and incremental approvals based on demonstrated performance.
Investigations into AI systems used to predict welfare fraud in Europe reveal biases and inadequate performance, necessitating better regulations and accountability in AI governance to ensure fairness and equity in these systems.
Deep dives
Ensuring Safety of Autonomous Vehicles in San Francisco
In San Francisco, the deployment of autonomous vehicles (AVs) has raised concerns about street safety. While AVs offer benefits such as adhering to speed limits and monitoring their surroundings with cameras and sensors, they still fail to react correctly in every situation, resulting in collisions, injuries, and traffic disruptions. Regulators expected AVs to comply with road rules, but this hasn't always been the case. More than 40 companies hold licenses to test AVs in San Francisco, and incidents of AVs interfering with fire department operations have been reported. The city has called for better performance standards and for incremental approval of AV expansions based on demonstrated performance.
Biases and Inadequacies of AI Systems in Welfare Programs
Investigations into AI systems used to predict welfare fraud in Europe have revealed biases and inadequate performance. These systems rely on subjective variables, including discriminatory factors such as physical appearance or gender. Vulnerable individuals are disproportionately affected, facing a rise in punitive actions without due process. Despite being discriminatory and inefficient, these systems are deployed without adequate safeguards. Calls are growing for better regulation and accountability in AI governance to ensure fairness and equity in these systems.
Promoting Responsible AI Development and Regulation
To earn trust and prioritize people's interests over profits, companies need to shift their mindset towards responsible AI development. This includes operationalizing policies, conducting continuous monitoring, and creating risk dashboards and impact assessment reports. Company values and benchmarks should be set, and compliance with regulations should be ensured. More diverse voices, including those who are impacted by AI, need to be included in the design process. AI governance should be based on performance and accountability, with regulations setting minimum standards. By building technology that aligns with people's needs and values, we can create AI systems that are more beneficial and trustworthy.
Why does it so often feel like we’re part of a mass AI experiment? What is the responsible way to test new technologies? Bridget Todd explores what it means to live with unproven AI systems that impact millions of people as they roll out across public life.
In this episode: a visit to San Francisco, a major hub for automated vehicle testing; an exposé of a flawed welfare fraud prediction algorithm in a Dutch city; a look at how companies comply with regulations in practice; and how to inspire alternative values for tomorrow’s AI.
Julia Friedlander is senior manager for automated driving policy at the San Francisco Municipal Transportation Agency, who wants to see AVs regulated based on safety performance data.
Justin-Casimir Braun is a data journalist at Lighthouse Reports who is investigating suspect algorithms for predicting welfare fraud across Europe.
Navrina Singh is the founder and CEO of Credo AI, a platform that guides enterprises on how to ‘govern’ their AI responsibly in practice.
Suresh Venkatasubramanian is the director of the Center for Technological Responsibility, Reimagination, and Redesign at Brown University and he brings joy to computer science.
IRL is an original podcast from Mozilla, the non-profit behind Firefox. In Season 7, host Bridget Todd shares stories about prioritizing people over profit in the context of AI.