Dr. S. Craig Watkins on Why AI’s Potential to Combat or Scale Systemic Injustice Still Comes Down to Humans
Apr 3, 2024
01:13:45
Dr. S. Craig Watkins, an AI expert, talks with Brené Brown about the alignment problem in AI systems and the potential for scaling injustice. They discuss who needs to be involved in building AI systems aligned with democratic values, what ethical principles must be in place to avoid scaling injustice in high-stakes environments such as healthcare and criminal justice, and why intentional policies and expert involvement are essential to guide the future development of AI technologies.
Podcast summary created with Snipd AI
Quick takeaways
Multidisciplinary approach needed in AI development to address societal complexities.
Unintended bias in AI systems perpetuates social and economic disparities.
Shift towards augmented intelligence to enhance human capacity and mitigate historical biases.
Deep dives
Complexity of Living Beyond Human Scale in the Digital Age
Living beyond human scale in a world saturated with AI, social media, and constant information raises questions about community, possibilities, and costs. This episode sits at the crossover of the Unlocking Us and Dare to Lead series, exploring the role of AI, social media, and community in a technologically driven world.
Challenge of Ethical AI Design and Fairness
The conversation with Professor S. Craig Watkins sheds light on the challenge of integrating AI into high-stakes environments like healthcare and criminal justice while ensuring fairness. The discussion highlights the limitations of current AI models in addressing bias and discrimination, particularly in predictive policing and hiring algorithms.
Need for Multidisciplinary Approach in AI Development
The podcast emphasizes the necessity of a multidisciplinary approach to AI development that can address societal complexities. By including input from behavioral scientists, ethicists, and diverse domain experts, developers can build AI models that align with ethical principles, account for lived experience, and mitigate systemic bias in real-world applications.
Impact of Bias in AI Systems
The episode delves into the unintended consequences of bias in AI systems developed by major tech companies like OpenAI, Google, and Facebook. Even without intentional bias in their design, these systems end up perpetuating social and economic disparities because of how problems are defined, datasets are prepared, and models are developed. Examples like racial bias in hiring algorithms show how these implicit biases translate into real disparities in employment opportunities.
Concerns and Future Directions in AI Development
The episode underscores the need for a shift toward augmented intelligence rather than artificial intelligence, enhancing human capacity instead of replacing it. The conversation emphasizes the importance of diverse voices and expertise in shaping how AI systems are developed and deployed so that historical biases and harmful societal impacts can be mitigated. Highlighting challenges such as automation bias and the relinquishment of decision-making to machines, the discussion calls for a more thoughtful and inclusive approach to AI integration that addresses its ethical and social implications.
Episode notes
In this episode, Brené and Craig discuss what is known in the AI community as the “alignment problem”: who needs to be at the table in order to build systems that are aligned with our values as a democratic society? And when we start unleashing these systems in high-stakes environments like education, healthcare, and criminal justice, what guardrails, policies, and ethical principles do we need to make sure that we’re not scaling injustice?
This is the third episode in our series on the possibilities and costs of living beyond human scale, and it is a must-listen!
Please note: In this podcast, Dr. Watkins and Brené talk about how AI is being used across healthcare, including to identify suicidal ideation. If you or a loved one is in immediate danger, please call or text the 988 Suicide & Crisis Lifeline at 988 (24/7 in the US). If you call 911 or your local police, it is important to tell the operator that it is a psychiatric emergency and to ask for officers trained in crisis intervention or in assisting people experiencing a psychiatric emergency.