For many job seekers today, the first eyes on their application are most likely not human. Companies and recruiters are increasingly turning to AI to streamline the hiring process. But is AI actually fairer than its human counterparts? Or is it introducing new biases and discriminatory practices when evaluating a job applicant’s qualifications?
Hilke Schellmann is a professor of journalism at New York University and the author of *The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now*. Her work examines AI’s growing role in the world of work and why companies should be cautious of its pitfalls.
Hilke and Greg discuss the scale of AI’s impact on hiring, the bias and inefficiencies in these tools, and why more human oversight and testing are needed in this field.
*unSILOed Podcast is produced by University FM.*
Show Links:
Recommended Resources:
Guest Profile:
Her Work:
Episode Quotes:
Is AI more biased than human hiring?
04:16: I think the one concern that I have with AI tools is that one human hiring manager can be biased against a certain number of people, but it's usually very limited. How many can they possibly hire in a year, right? And I'm sorry to all the people who have been the victim of somebody who's biased in HR or a biased hiring manager. The problem is that with an AI tool, we sometimes see it used at a whole new scope. If you have a resume parser that's discriminating against women and you use it on all incoming resumes in your company, some companies receive millions, literally millions, of applications. So the harm can be just so much larger than one biased human can apply. And I also feel like, if we build these sophisticated AI tools, let's make sure they work, that they're not compounding the bias that people, especially people of color, especially women, especially people with disabilities, have already encountered.
The misconception of AI tools as thinking machines
16:29: I think the problem is that we assume that AI tools are thinking machines and that they find something meaningful. But they have no conscience. They don't understand. They just pick.
AI tools don't erase biases
14:09: If you work with an AI vendor that cannot tell you how a score comes to be and says, "We know it's a deep neural network; we don't know what's in the training data," I would be really worried, because we have only seen, time and again, that we find bias in these tools, and not the opposite. The tool doesn't erase the bias, unfortunately.
Do we need more educated HR consumers or consumer reports for AI tools?
51:54: I would love to have a consumer report, but in the absence of that for AI tools, we need to get a whole lot more skeptical and do pilot studies. Also, maybe hire an outside I/O psychologist to take apart the technical report. And if an AI vendor doesn't have a technical report, or whatever they call it, that explains how the tool was validated and built, and how they did at least a four-fifths rule analysis to understand that there's no disparate impact, I would assume they didn't do this. I would run away if they can't even tell you how the tool was validated and checked for disparate impact. And then I would scrutinize these technical reports. I had people help me with that, and they found flaws in a couple of technical reports that I was able to get my hands on. So I would do that.
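For readers unfamiliar with it, the four-fifths rule Schellmann references is the EEOC's rule of thumb for detecting disparate impact: if a group's selection rate falls below four-fifths (80%) of the highest group's selection rate, the selection procedure is generally regarded as showing adverse impact. Here is a minimal sketch of that check; the group labels and numbers are hypothetical, not from the episode:

```python
# Minimal sketch of a four-fifths (80%) rule check for disparate impact.
# Group labels and counts below are hypothetical, for illustration only.

def four_fifths_check(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, bool]:
    """For each group, report whether its selection rate is at least
    four-fifths of the highest group's selection rate."""
    rates = {group: selected[group] / applicants[group] for group in applicants}
    highest = max(rates.values())
    return {group: rate >= 0.8 * highest for group, rate in rates.items()}

# Example: 48/120 = 40% vs. 30/120 = 25%. The ratio 25% / 40% = 0.625
# is below 0.8, so group_b fails the four-fifths threshold.
print(four_fifths_check(
    selected={"group_a": 48, "group_b": 30},
    applicants={"group_a": 120, "group_b": 120},
))
# -> {'group_a': True, 'group_b': False}
```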