The Radical AI Podcast

Latest episodes

Oct 14, 2020 • 40min

Why We Do This: Reflecting on Six Months of Radical AI with Dylan and Jess

In this special episode of The Radical AI Podcast, Dylan and Jess pull back the curtain to reflect on six months of the show! From qualitative research to ontological horseplay - this episode has it all! Full show notes for this episode can be found at radicalai.org. If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on twitter at twitter.com/radicalaipod
Oct 7, 2020 • 1h 3min

More than Fake News: Fighting Media Manipulation with Claire Leibowicz and Emily Saltz from the Partnership on AI

What is media integrity? What is media manipulation? What do you need to know about fake news? To answer these questions and more we welcome to the show Claire Leibowicz and Emily Saltz -- two representatives from the Partnership on AI’s AI and Media Integrity team. Claire Leibowicz is a Program Lead directing the strategy and execution of projects in the Partnership on AI’s AI and Media Integrity portfolio. Claire also oversees PAI’s AI and Media Integrity Steering Committee. Emily Saltz is a Research Fellow at Partnership on AI for the PAI/First Draft Media Manipulation Research Fellowship. Prior to joining PAI, Emily was UX Lead for The News Provenance Project at The New York Times. Full show notes for this episode can be found at Radicalai.org.  If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on twitter at twitter.com/radicalaipod
Sep 30, 2020 • 58min

The State of the Union of Surveillance: Are Things Getting Better? with Liz O'Sullivan

What should you know about the state of surveillance in the world today? What can we do as consumers to stop unintentionally contributing to surveillance? The Facial Recognition industry had a reckoning after the murder of George Floyd - are things getting better? To answer these questions we welcome Liz O'Sullivan to the show. Liz O'Sullivan is the Surveillance Technology Oversight Project's technology director. She is also the co-founder and vice president of commercial operations at Arthur AI, an AI explainability and bias monitoring startup. Liz has been featured in articles on ethical AI in the NY Times, The Intercept, and The Register, and has written about AI for the ACLU and The Campaign to Stop Killer Robots. She has spent 10 years in tech, mainly in the AI space, most recently as the head of image annotations for the computer vision startup, Clarifai. Full show notes for this episode can be found at Radicalai.org.  If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on twitter at twitter.com/radicalaipod      
Sep 23, 2020 • 59min

Checklists and Principles and Values, Oh My! Practices for Co-Designing Ethical Technologies with Michael Madaio

What are the limitations of using checklists for fairness? What are the alternatives? How do we effectively design ethical AI systems around our collective values?  To answer these questions we welcome Dr. Michael Madaio to the show. Madaio is a postdoc at Microsoft Research working with the FATE (Fairness, Accountability, Transparency, and Ethics in AI) research group. Michael works at the intersection of human-computer interaction, AI/ML, and public interest technology, where he uses human-centered methods to understand how we might equitably co-design data-driven technologies in the public interest with impacted stakeholders. Michael, along with other collaborators at Microsoft FATE, authored the paper: “Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI”, which is one of the major focuses of this interview!  Full show notes for this episode can be found at Radicalai.org.  If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on twitter at twitter.com/radicalaipod      
Sep 16, 2020 • 59min

Resistance Against the Tech to Prison Pipeline with the Coalition for Critical Technology

What is the tech to prison pipeline? How can we build infrastructures of resistance to it? What role does academia play in perpetuating carceral technology? To answer these questions we welcome to the show Sonja Solomun and Audrey Beard, two representatives from the Coalition for Critical Technology. Sonja Solomun works on the politics of media and technology, including the history of digital platforms, polarization, and fair and accountable governance of technology. She is currently the Research Director of the Centre for Media, Technology and Democracy at McGill’s Max Bell School of Public Policy and is finishing her PhD in the Department of Communication Studies at McGill University. Audrey Beard is a critical AI researcher who explores the politics of artificial intelligence systems and who earned their Master's in Computer Science at Rensselaer Polytechnic Institute. Audrey and Sonja co-founded the Coalition for Critical Technology, along with NM Amadeo, Chelsea Barabas, Theo Dryer, and Beth Semel. The mission of the Coalition for Critical Technology is to work towards justice by resisting technologies that exacerbate inequality, reinforce racism, and support the carceral state. Full show notes for this episode can be found at Radicalai.org. If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on twitter at twitter.com/radicalaipod
Sep 13, 2020 • 56min

All Tech is Human Series #4 - Building the Next Generation of Responsible Technologists & Changemakers with Rumman Chowdhury and Yoav Schlesinger

How can we inform and inspire the next generation of responsible technologists and changemakers? How do you get involved as someone new to the responsible AI field? In partnership with All Tech is Human we present this livestreamed conversation featuring Rumman Chowdhury (Responsible AI Lead at Accenture) and Yoav Schlesinger (Principal, Ethical AI Practice at Salesforce). This conversation is moderated by All Tech Is Human's David Ryan Polgar. The organizational partner for the event is TheBridge. The conversation does not stop here! For each of the episodes in our series with All Tech is Human, you can find a detailed “continue the conversation” page on our website radicalai.org. For each episode we include all of the action items we debriefed as well as annotated resources that were mentioned by the guest speakers during the livestream, ways to get involved, relevant podcast episodes, books, and other publications.
Sep 9, 2020 • 60min

Democratizing AI: Inclusivity, Accountability, & Collaboration with Anima Anandkumar

What are current attitudes towards AI Ethics from within the tech industry? How can we make computer science a more inclusive discipline for women? What does it mean to democratize AI? Why should we? How can we? To answer these questions and more we welcome Dr. Anima Anandkumar to the show. Anima holds dual positions in academia and industry. In academia, she is a professor in the Caltech Computing and Mathematical Sciences department; in industry, she is the director of machine learning research at NVIDIA, where she leads the research group that develops next-generation AI algorithms. Anima is also the youngest named chair professor at Caltech, where she co-leads the AI4science initiative. Full show notes for this episode can be found at Radicalai.org. If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on twitter at twitter.com/radicalaipod
Sep 2, 2020 • 1h 13min

Designing for Intelligibility: Building Responsible AI with Jenn Wortman Vaughan

What are the differences between explainability, intelligibility, interpretability, and transparency in Responsible AI? What is human-centered machine learning? Should we be regulating machine learning transparency? To answer these questions and more we welcome Dr. Jenn Wortman Vaughan to the show. Jenn is a Senior Principal Researcher at Microsoft Research. She has been leading efforts at Microsoft around transparency, intelligibility, and explanation under the umbrella of Aether, their company-wide initiative focused on responsible AI. Jenn’s research focuses broadly on the interaction between people and AI, with a passion for AI that augments, rather than replaces, human abilities. Full show notes for this episode can be found at Radicalai.org. If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on twitter at twitter.com/radicalaipod
Aug 26, 2020 • 55min

All Tech is Human Series #3 - Big Tech, Power, & Diplomacy with Alexis Wichowski & Rana Sarkar

How should diplomacy and international cooperation adjust to the significant global power that major tech companies wield? In partnership with All Tech is Human we present this livestreamed conversation featuring Alexis Wichowski (adjunct associate professor in Columbia University’s School of International and Public Affairs, teaching in the Technology, Media, and Communications specialization) and Rana Sarkar (Consul General of Canada for San Francisco and Silicon Valley, with accreditation for Northern California and Hawaii). This conversation is moderated by All Tech Is Human's David Ryan Polgar. The organizational partner for the event is TheBridge. The conversation does not stop here! For each of the episodes in our series with All Tech is Human, you can find a detailed “continue the conversation” page on our website radicalai.org. For each episode we include all of the action items we debriefed as well as annotated resources that were mentioned by the guest speakers during the livestream, ways to get involved, relevant podcast episodes, books, and other publications.
Aug 19, 2020 • 60min

Is Uber Moral? The Ethical Crisis of the Gig Economy with Veena Dubal

What is precarious work and how does it impact the psychology of labor? How might platforms like Uber and Lyft be negatively impacting their workers? How do gig economy apps control the lives of those who use them for work? To answer these questions and more we welcome Dr. Veena Dubal to the show. Veena is a professor of law at UC Hastings. Veena received her J.D. and PhD from UC Berkeley, where she conducted an ethnography of the San Francisco taxi industry. Veena’s research focuses on the intersection of law, technology, and precarious work. Full show notes for this episode can be found at Radicalai.org. If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on twitter at twitter.com/radicalaipod
