
Even Robots are a Little Bit Racist: AI Bias in Recruitment

That’s right! Even robots. How would you like to perform only the most high-level and uniquely human elements of your job? Are your skills really best spent on data entry, rote memorization, and paper-pushing? Artificial Intelligence (AI) promises to delegate all the drudgery of your job to machines, freeing you up to mingle with clients on the golf course and answer phone calls from your private yacht in the Adriatic Sea.

It almost sounds too good to be true.  But are machines really up to the task?

One industry that has heavily leveraged AI is recruitment.  As we have previously written about here, the task of sifting through hundreds or thousands of resumes is uniquely suited to machines.  An important promise of applying AI to recruitment is reducing human bias in the selection of candidates.  But as we warned, an AI system is only as good as the data fed into it – a critical point recently confronted by Amazon.

The Amazon Story

Amazon developed an AI tool to automate recruitment and reduce bias.  While the tool appears to have been effective at the former, it seriously failed at the latter.  Because the AI was trained on resumes submitted to the company over a 10-year period, and that sample was disproportionately male, the algorithm learned to prefer male candidates.  In effect, it was trained on tainted data.
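To make the mechanism concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn on fully synthetic data; it does not resemble Amazon’s actual tool) of how a model trained on historically skewed hiring decisions will reproduce that skew:

```python
# Toy illustration: train a simple classifier on synthetic "historical
# hiring" data in which past decisions favoured male candidates, then
# observe that the model scores otherwise identical candidates differently.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

# Synthetic features: years of experience and a gender flag (1 = male).
experience = rng.normal(5, 2, n)
is_male = rng.integers(0, 2, n)

# Historical "hired" labels: driven partly by experience, but past
# decisions also favoured male candidates -- this is the tainted data.
hired = (0.5 * experience + 1.5 * is_male + rng.normal(0, 1, n)) > 3.5

X = np.column_stack([experience, is_male])
model = LogisticRegression().fit(X, hired)

# Two candidates identical except for the gender flag.
female, male = [[5.0, 0.0]], [[5.0, 1.0]]
print("P(hire | female):", model.predict_proba(female)[0, 1])
print("P(hire | male):  ", model.predict_proba(male)[0, 1])
```

Note that simply deleting the gender column does not fix the problem: other features that correlate with gender can act as proxies and carry the same signal, which is part of why bias of this kind is so difficult to engineer away.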

Thankfully, Amazon caught the gender bias in its AI and attempted to correct it, but ultimately scrapped the program altogether.  What if the issue had never been discovered, or other biases had remained? Further, who is responsible when an organization uses an algorithm to select candidates that in effect discriminates against an identifiable group on human rights grounds?  And since an algorithm cannot explain its decision-making process, how can you be sure it is working as intended and not simply perpetuating existing biases?

Controlling Bias and AI

This raises new and challenging issues for employers and HR professionals. The question of liability could differ depending on whether a company develops its own AI, as Amazon did, or uses a third-party recruitment application (the more likely scenario).

Biased outcomes produced by third-party software could arguably stem from historical biases in a company’s own hiring decisions rather than from the AI itself.  But do most employers or HR professionals have the capacity to evaluate whether the bias is created by the algorithm, by the company data fed to it, or by both?

AI recruitment vendors can perform adverse impact tests to assess whether a system’s outcomes disadvantage particular groups.  However, this is often one of the last steps, and it can come after an organization has already invested significant time and resources in implementing the technology.  A better practice may be for organizations to negotiate terms requiring vendors to test for bias at the start of the project and to mitigate it during algorithm development.
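As an illustration of what such a test can look like, one widely used screen is the US EEOC’s “four-fifths rule”: the selection rate for any group should be at least 80% of the rate for the most favourably treated group. Here is a minimal sketch in Python with made-up numbers (the figures below are illustrative only; a real audit would be considerably more involved):

```python
# Hypothetical adverse impact check using the four-fifths (80%) rule:
# each group's selection rate should be at least 80% of the rate for
# the most favourably treated group.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

# Made-up screening numbers, for illustration only.
rates = {
    "men": selection_rate(selected=120, applicants=400),   # 0.30
    "women": selection_rate(selected=45, applicants=250),  # 0.18
}

benchmark = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / benchmark
    verdict = "OK" if impact_ratio >= 0.8 else "POTENTIAL ADVERSE IMPACT"
    print(f"{group}: selection rate {rate:.2f}, "
          f"impact ratio {impact_ratio:.2f} -> {verdict}")
```

Even a simple check like this, run early and on real screening data, can surface problems before an organization is locked into a vendor’s tool.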

Still, this doesn’t guarantee that bias will be eliminated entirely or clarify who is ultimately responsible for biased outcomes.  

Takeaways

So should employers and HR professionals hold off on using AI in recruitment? No, the technology is nothing to be afraid of.  AI remains an expedient and efficient tool for large-scale recruitment, and organizations with a long history of unbiased decision-making will get better results from it.  If nothing else, adopting AI turns an organization’s attention to the inputs and underlying data that feed its selection process. That is always a good internal conversation to have, whether or not technology is involved.

This is an area to keep an eye on as the technology evolves and policy develops to address the issues above.

As always, we would be happy to answer any questions you have regarding the use of AI in the selection of candidates for your workplace.  
