What is AI’s blind spot?
AI is being used increasingly by HR teams, particularly for recruitment.
But what are the ethical implications of using AI to help hire new talent?
The adoption of Artificial Intelligence (AI) and Machine Learning (ML) technologies in HR – and recruitment in particular – has increased at pace as the sector strives to drive efficiencies and reduce costs.
The rise in remote working has opened up global talent pools for companies to fish in, job ads have been met with huge demand, and AI has played an important role in helping companies thin the field and find their next hires.
This trend is supported by a survey, commissioned at the height of the pandemic, that found nearly a quarter (24%) of HR professionals would “likely be using AI for recruitment to a high degree within the next two years.”
AI can help across the recruitment process, from aggregating job vacancies and connecting them to potentially suitable candidates, to accelerating candidate screening.
Candidate screening is also becoming more popular with businesses, and some are even using AI chatbots to help with initial sifts of applications. For businesses, this makes sense in the short term: it saves time and money, and if the AI rejects a candidate, it will quickly surface another, so the immediate opportunity cost is low.
And the benefits don’t stop at recruitment. Once a new employee has been hired, AI can support everything from training and job satisfaction to progress management, freeing up significant time across the business. Given that the average cost of onboarding a new employee is £3,000, there is no doubt that streamlining this process makes smart financial sense.
But what happens when we outsource our decision-making power to the very AI systems making our lives easier?
Recruitment is complex, and no algorithm is inherently ethical or unethical. We need to look at how the technology is being used and whether the outcomes are being measured effectively in the short and long term.
AI learns from existing data sets created and informed by human behaviour, and this can be particularly dangerous when it comes to recruitment.
For example, if the AI recognises a pattern whereby a recruiting organisation earns its fees by matching candidates along existing lines of bias – educational background or socioeconomic status, for example – then that is exactly the pattern it will replicate.
In other words, if the criteria for the role have been based on previous successful hires, the data informing the AI could be reinforcing a multitude of biases and therefore unintentionally enabling discriminatory hiring practices.
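To make that concrete, here is a minimal sketch, using entirely hypothetical data and features, of how a screening model trained on past hiring decisions can end up reproducing the bias in those decisions:

```python
# A purely illustrative sketch: all data, features, and numbers are hypothetical.
# It shows how a screening model trained on past hiring decisions can quietly
# reproduce the bias baked into those decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per past applicant: [skill_score, attended_elite_university]
X_hist = np.array([
    [0.9, 1], [0.7, 1], [0.6, 1], [0.8, 1],   # elite background, mostly hired
    [0.9, 0], [0.8, 0], [0.7, 0], [0.6, 0],   # other background, mostly rejected
])
y_hist = np.array([1, 1, 0, 1, 0, 1, 0, 0])   # historical hire / no-hire labels

model = LogisticRegression().fit(X_hist, y_hist)

# Two equally skilled new candidates who differ only in background
candidates = np.array([[0.85, 1], [0.85, 0]])
print(model.predict_proba(candidates)[:, 1])
# The elite-background candidate scores higher despite identical skill, because
# the model has learned the historical pattern, not the role's actual needs.
```

Nothing in this toy model is malicious; it simply optimises for the pattern it was shown, which is exactly the danger.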
This means swathes of people from diverse backgrounds will not be able to access the same opportunities as their peers, affecting their long-term prospects and, on a macro scale, deepening social inequality.
That is not to say that the use of AI is always controversial. Many companies allow machines to make decisions based purely on applicant data, which is arguably less contentious, as it generally involves qualifying people against objective criteria such as residency. Many systems and methods can be used in a similar way, making the human element of the equation more efficient without handing the AI agency over the decision itself – and without discriminating against good candidates.
Even with the best of intentions for fairly recruiting people based on their skills and experience, AI systems are not infallible.
In fact, it’s been reported that applicants are devising ways to trick algorithms into treating their applications preferentially, using buzzwords or cheating tests. Lying on your CV is nothing new, but human HR professionals have the skills to distinguish between authentic and exaggerated CVs. AI systems aren’t always that capable and can be tricked, once again unbalancing an already uneven playing field.
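As a simple illustration of how such gaming works, consider a hypothetical keyword-matching screen; the keywords and CV snippets below are invented for the example:

```python
# Illustrative sketch of a naive, hypothetical keyword-matching screen and how
# buzzword stuffing can inflate a score without adding any real substance.
KEYWORDS = {"python", "leadership", "agile", "stakeholder", "cloud"}

def keyword_score(cv_text: str) -> int:
    """Count how many target keywords appear in the CV text."""
    words = set(cv_text.lower().split())
    return len(KEYWORDS & words)

honest_cv = "Five years of Python development and cloud deployment experience"
stuffed_cv = ("Python leadership agile stakeholder cloud "
              "python leadership agile stakeholder cloud")  # says nothing concrete

print(keyword_score(honest_cv))   # lower score for the substantive CV
print(keyword_score(stuffed_cv))  # higher score for the buzzword-stuffed CV
```

A human reader would spot the stuffed CV instantly; a crude automated screen may not.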
The problems surface when humans outsource their autonomy to AI without understanding the rules underpinning the systems and the “how” and “why” behind AI-derived decisions. It has either been assumed that HR professionals who are making the decisions do not need to understand how the systems work, or the systems have been deliberately obscured for commercial or marketing reasons.
Recruitment screening and evaluation was explicitly called out in the EU’s proposed AI regulation as a ‘high-risk application’ and will be subject to transparency requirements, which is a step in the right direction.
This lack of context and transparency is degrading the HR function as we know it by enabling an abdication of responsibility and encouraging HR professionals to blindly follow the technology’s guidance.
There is a pervasive acceptance that AI technologies are simply too complex to understand. This mindset is a crutch for human ignorance – if we do not understand the technologies and the potential risks they pose, we do not need to take responsibility for the uncomfortable truths they reveal to us about the way society operates.
Most AI operates silently in the background, and its impact is felt long before it surfaces in the recruitment tools we all talk about. For AI systems to operate responsibly, explainability and transparency need to be at their core, so that anyone can understand how and why a decision has been made, regardless of their knowledge of machine learning.
Complex AI systems can arrive at outcomes for a variety of reasons and knowing what those reasons are will help to eliminate bias. This knowledge will allow HR professionals to apply their own intuition to the data and make an informed decision on whether to proceed based on the AI’s assessment. In other words, if we understand the rules the machine is following, we can collaborate, challenge, and intervene at the appropriate points.
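Here is one hedged sketch of what that could look like in practice, again with hypothetical data and a deliberately simple linear model (real systems are more complex and need dedicated explainability tooling):

```python
# A hedged sketch with hypothetical features and data: if a screening model
# exposes which criteria it weights, a reviewer can challenge illegitimate ones
# instead of rubber-stamping its verdicts.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["skill_score", "years_experience", "attended_elite_university"]
X = np.array([
    [0.9, 5, 1], [0.6, 2, 1], [0.8, 6, 0], [0.5, 1, 0],
    [0.7, 4, 1], [0.9, 7, 0], [0.4, 1, 1], [0.6, 3, 0],
])
y = np.array([1, 1, 0, 0, 1, 1, 0, 0])  # hypothetical past hiring outcomes

model = LogisticRegression().fit(X, y)

# For a linear model, the learned weights are the "rules" it follows.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: {weight:+.2f}")
# If 'attended_elite_university' carries a large positive weight, a human
# reviewer can question that criterion before acting on the model's scores.
```

The point is not the specific technique but the principle: once the rules are visible, HR professionals can apply their judgement rather than deferring to a black box.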
Ultimately, AI has huge potential to transform the HR function by automating repetitive and time-intensive tasks, enabling those working within the sector to deliver a richer, more personalised experience. After all, HR is a people business.
However, we are at a tipping point with AI’s ever-increasing use.
We have the opportunity to understand how these technologies work and closely collaborate with them to protect individuals and create a fairer, more equal society. It would be tragic to let that pass us by.
I am the CEO of Mind Foundry, an Artificial Intelligence company that spun out of the University of Oxford. I believe people can use technology to improve their lives. I am focused on how to make it available to people around the world so we can remove the limits that hold us back.
"*" indicates required fields
"*" indicates required fields