This episode examines biases in AI models, particularly societal biases in text and image models: how specific prompts can reinforce stereotypes, and how default outputs lack diversity. The conversation references studies showing how these models amplify biases and validate stereotypes, with a focus on gender and ethnicity. It also covers technical approaches to analyzing bias in AI models and the implications of biased textual descriptions of images for clustering and for understanding demographics.
In episode 120 of The Gradient Podcast, Daniel Bashir speaks to Sasha Luccioni.
Sasha is the AI and Climate Lead at Hugging Face, where she spearheads research, consulting, and capacity-building to improve the sustainability of AI systems. A founding member of Climate Change AI (CCAI) and a board member of Women in Machine Learning (WiML), Sasha is passionate about catalyzing impactful change, organizing events, and mentoring under-represented minorities in the AI community.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach Daniel at editor@thegradient.pub