Artificial intelligence is becoming increasingly accessible and pervasive. Conversations about
AI often focus on technical accomplishments rather than societal impacts, but leading scholar Kate
Crawford has long drawn attention to the potential harms AI poses for society: exploitation,
discrimination, and more. She argues that minimizing risks depends on civil society, not technology.
People's capacity to govern AI is often overlooked because many approach new technologies
with what Crawford calls “enchanted determinism,” seeing them as magical, and as more accurate
and insightful than humans. In 2017, Crawford cofounded the AI Now Institute to explore productive
policy approaches around the social consequences of AI. Across her work in industry, academia, and
elsewhere, she has started essential conversations about regulation and policy. Issues editor Monya
Baker recently spoke with Crawford about how to ensure AI designers incorporate societal protections
into product development and deployment.