Individuals in the tech industry have a responsibility to raise ethical concerns and refuse to participate in projects that contribute to surveillance and weapon systems.
Ethical review and accountability are needed in AI research publishing to prevent harm to ethnic minority groups and to redirect technology towards improving lives.
Deep dives
Tech industry's contribution to harm
The podcast explores how individuals in the tech industry have discovered that their work can contribute to harmful outcomes. It raises questions about the ethical responsibility of companies and governments in their use of technology, particularly artificial intelligence, and emphasizes individuals' ability to refuse to participate in such projects.
The ethical concerns of surveillance and weapon systems
The podcast highlights the story of an engineer working for Google who discovered that her team was involved in building air-gapped data centers for Project Maven, a Pentagon project that used AI to analyze aerial surveillance footage from drones in conflict zones. The engineer raised ethical concerns about supporting projects that contribute to surveillance and weapon systems. The episode discusses the impact of such projects on privacy and human rights, and the responsibility of tech companies.
Ethical implications of AI research and data usage
The podcast also touches on the ethical implications of AI research and data usage. It discusses how academic journals publish research that could enable the control of and harm towards certain ethnic groups, particularly in countries like China, and emphasizes the need for ethical review and accountability in AI research publications. Additionally, the episode explores the concept of 'data weapons' used by law enforcement to surveil and criminalize communities, particularly Black and Brown communities. It advocates for transparency, accountability, and the redirection of technology towards improving lives.
Where should tech builders draw the line on AI for military or surveillance? Just because it can be built, doesn’t mean it should be. At what point do we blow the whistle, call out the boss, and tell the world? Find out what it’s like to sound the alarm from inside a big tech company.
Laura Nolan shares the story behind her decision to leave Google in 2018 over its involvement in Project Maven, a Pentagon project that used Google's AI to analyze drone surveillance footage.
Yves Moreau explains why he is calling on academic journals and international publishers to retract papers that use facial recognition and DNA profiling of minority groups.
Yeshimabeit Milner describes how the non-profit Data for Black Lives is pushing back against AI-powered tools used to surveil and criminalize Black and Brown communities.
Shmyla Khan describes being on the receiving end of technologies developed by foreign superpowers, as a researcher with the Digital Rights Foundation in Pakistan.
IRL is an original podcast from Mozilla, the non-profit behind Firefox. In Season 6, host Bridget Todd shares stories of people who make AI more trustworthy in real life. This season doubles as Mozilla’s 2022 Internet Health Report. Go to the report for show notes, transcripts, and more.