To Solve the AI Problem, Rely on Policy, Not Technology
Apr 18, 2023
Leading scholar Kate Crawford discusses the potential harms AI poses for society, emphasizing the need for policy, not just technology. The conversation explores demystifying AI, addressing bias, and the role of regulation in ensuring safety, accountability, and transparency in AI systems.
AI is an extractive industry that relies on data, labor, and resources, and its environmental footprint is often overlooked in discussions.
The perception of AI as magical and beyond regulation needs to be challenged, and there is a need for interdisciplinary collaboration to address AI's social harms and biases.
Deep dives
AI as an Extractive Industry
AI can be seen as the extractive industry of the 21st century, relying heavily on data, human labor, and environmental resources. Large language models and chatbots are trained on massive datasets extracted from the internet. Human labor is involved in labeling data and performing grunt work for AI systems. The environmental impact of AI includes energy consumption, water usage, and the extraction of rare earth minerals.
Unexplored Conversations about AI
Several important conversations are missing from the discussion around AI. First, the material nature of AI is often overlooked, even though it has a significant environmental footprint. Second, the obsession with artificial general intelligence (AGI) distracts from the current technical errors and potential social harms of AI; AGI should not overshadow the problems AI systems cause in healthcare, education, criminal justice, and other domains. Last, there is a need to challenge the flawed assumptions that AI systems can perpetuate, such as the limited classification of emotions or races. These unexplored conversations are essential for a more comprehensive understanding of AI's implications.
Enchanted Determinism and Harmful Consequences
The concept of "enchanted determinism" highlights the problematic perception of AI as both magical and deterministic. This perception leads to assumptions that AI is beyond comprehension and regulation. It is crucial to demystify AI's workings and recognize the need for regulation, considering both technical errors and social harms. AI systems often perpetuate biases, stereotypes, and retrograde notions due to flawed classificatory logics. The consequences of AI span discriminatory outcomes, implementation in sensitive social institutions, disinformation dissemination, privacy infringements, and more. Addressing these consequences requires interdisciplinary collaboration, rigorous testing, and systematic auditing of AI systems.
Artificial intelligence is everywhere, growing increasingly accessible and pervasive. Conversations about
AI often focus on technical accomplishments rather than societal impacts, but leading scholar Kate
Crawford has long drawn attention to the potential harms AI poses for society: exploitation,
discrimination, and more. She argues that minimizing risks depends on civil society, not technology.
The ability of people to govern AI is often overlooked because many people approach new technologies
with what Crawford calls “enchanted determinism,” seeing them as both magical and more accurate
and insightful than humans. In 2017, Crawford cofounded the AI Now Institute to explore productive
policy approaches around the social consequences of AI. Across her work in industry, academia, and
elsewhere, she has started essential conversations about regulation and policy. Issues editor Monya
Baker recently spoke with Crawford about how to ensure AI designers incorporate societal protections
into product development and deployment.