Contemporary AI systems are typically created by many different people, each working on separate parts or “modules.” This can make it difficult to determine who is responsible for considering the ethical implications of an AI system as a whole, a problem compounded by the fact that many AI engineers do not consider it their job to ensure that the systems they work on are ethical.
In their latest paper, “Dislocated Accountabilities in the AI Supply Chain: Modularity and Developers’ Notions of Responsibility,” technology ethics researcher David Gray Widder and research scientist Dawn Nafus examine the challenges of responsible AI development and deployment, exploring how the labor of responsible AI is currently divided and how that division could be improved.
In this episode, David and Dawn join This Anthro Life host Adam Gamwell to talk about the AI “supply chain,” modularity in software development as both ideology and technical practice, how we might reimagine responsible AI, and more.

Show Highlights:
[03:51] How David and Dawn found themselves in the responsible AI space
[09:04] Where and how responsible AI emerged
[16:25] What the typical AI development process looks like and how developers see that process
[18:28] The problem with “supply chain” thinking
[23:37] Why modularity is epistemological
[26:26] The significance of modularity in the typical AI development process
[31:26] How computer scientists’ reactions to David and Dawn’s paper underscore modularity as a dominant ideology
[37:57] What it is about AI that makes us rethink the typical development process
[45:32] Whether the job of asking ethical questions gets “outsourced” to or siloed in the research department
[49:12] Some of the problems with user research nowadays
[56:05] David and Dawn’s takeaways from writing the paper