Emily Hocken looks at AI recruitment through the lens of discrimination and its legal implications. AI is a game changer, but is it faultless?
AI has many incredible uses, but recruiters and HR teams need to ensure its application is fair and unbiased.
Easier said than done? Emily Hocken provides some pointers to help you use it in the best way possible.
The growing popularity of artificial intelligence (AI) cannot be overstated. Most businesses are now integrating some form of AI technology into their everyday processes, and the HR sphere is firmly on this journey too.
Even back in 2021, a survey by Gartner found that almost all recruiting executives already used AI to optimize at least part of the recruiting and hiring process. Now, 99% of Fortune 500 companies rely on talent-sifting AI recruitment software as a means of targeting higher quality candidates, speeding up the hiring process and freeing up employee time.
Overall, the intention is to optimize the recruitment process by making it more efficient. While this all makes sense, it becomes problematic when AI is also cited as a way of eliminating, or at least reducing, the risk of discrimination in the hiring process.
Some AI recruitment software is marketed as a tool which can eliminate unconscious bias and discrimination in recruitment by taking human judgement out of the decision-making process.
The theory is that AI technology has a very high IQ but no EQ (emotional intelligence), and so will not make the same discriminatory assumptions, based on a candidate’s gender, educational background or the spelling of their name, that a human might make, even subconsciously.
In some instances this may be effective, but it will depend heavily on how the AI algorithm has been developed and on the selection criteria used to distinguish between candidates.
AI generally works by reviewing historical data sets and learning a pattern to identify what its user wants, based on previous human decisions.
In recruitment, this means giving the technology multiple examples of historical candidates and their success rates, analyzing them to find out what qualities make up a ‘successful candidate’ and then developing an algorithm that identifies prospective candidates who are most likely to be successful and eliminates those with undesirable qualities.
Unfortunately, this means that previous discriminatory biases can be picked up by the algorithm, which then learns to repeat that behavior.
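The pattern described above can be illustrated with a minimal, purely hypothetical sketch. It trains a naive Bayes-style keyword scorer on a toy set of past hiring decisions in which a proxy feature (here, "chess club", standing in for an attribute correlated with a favored group rather than with job performance) happens to co-occur with hires. All data, keywords and function names below are invented for illustration:

```python
from collections import Counter
from math import log

# Hypothetical historical decisions (keywords, hired=1/rejected=0).
# "chess club" is a proxy feature correlated with the favored group,
# not with job performance; the skill keywords are balanced.
history = [
    ({"python", "chess club"}, 1),
    ({"python", "chess club"}, 1),
    ({"java", "chess club"}, 1),
    ({"python", "netball team"}, 0),
    ({"java", "netball team"}, 0),
    ({"python", "netball team"}, 0),
]

def learn_weights(history, smoothing=1.0):
    """Learn a log-odds weight per keyword from past human decisions."""
    hired, rejected = Counter(), Counter()
    n_hired = sum(label for _, label in history)
    n_rejected = len(history) - n_hired
    for keywords, label in history:
        (hired if label else rejected).update(keywords)
    vocab = set(hired) | set(rejected)
    return {
        w: log((hired[w] + smoothing) / (n_hired + 2 * smoothing))
           - log((rejected[w] + smoothing) / (n_rejected + 2 * smoothing))
        for w in vocab
    }

weights = learn_weights(history)

def score(keywords):
    """Rank a new candidate using the learned weights."""
    return sum(weights.get(w, 0.0) for w in keywords)

# Two candidates with identical skills differ only in the proxy keyword,
# yet the learned model ranks them differently: the historical bias has
# been encoded as a positive weight on "chess club".
print(score({"python", "chess club"}) > score({"python", "netball team"}))
```

The skill keyword ("python") ends up with a weight of zero because it appears equally among hires and rejections; the entire ranking difference comes from the proxy feature, which is exactly how a discriminatory pattern in historical data survives the removal of human judgement from the loop.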
In addition, there is a risk that AI software can create discrimination issues of its own when streamlining the selection process. For example, where recruitment software analyzes a prospective candidate’s writing or speech patterns to assess them, this could have a disproportionately negative impact on neurodiverse or disabled individuals, or on individuals whose first language isn’t English.
Rejecting these candidates purely on this basis could give rise to discrimination claims against the employer, despite the fact that the decision was made by the AI software without the employer’s knowledge.
A recent study by the University of Cambridge has also revealed that AI technology decisions can be affected by irrelevant variables including lighting, background, clothing and facial expressions.
Not only does this mean that recruitment decisions made using this technology are more likely to be unfair and inaccurate, but there is a further risk that these “irrelevant variables” could be manipulated or learned by some applicants to increase their chances of passing AI-assessed recruitment stages. This would undermine the often-cited purpose of using AI in the first place: to increase fairness and maximize impartiality.
It is therefore critical that employers require suppliers to explain the selection criteria used by their technology and how these will be applied, in order that the employer can assess and identify any potential risks, then rectify them before using the software.
Regulators are calling for increased scrutiny of AI in the employment sphere, which is particularly vulnerable to risk, and the law is still playing catch-up.
So far in the UK, the Government has only just published an ‘AI Regulation Policy’; and the Information Commissioner’s Office is still investigating allegations that algorithms used in recruitment are “negatively impacting employment opportunities of those from diverse backgrounds”.
By contrast, the EU is one step ahead and is due to implement the AI Act. This designates ‘employment’ as an area of “high risk”. Under these new laws, AI technology used in connection with the employment and/or management of workers will need to comply with strict obligations before it can be put to market.
Going forward, this law will impact any UK businesses whose operations extend in some way to the EU, and other (non-EU) countries are likely to follow suit by enacting similar laws.
When used correctly, AI can be a helpful resource to streamline the recruitment and selection process, but employers must be careful – they, and the developers of the technology, are currently responsible for ensuring it is not used in a discriminatory way.
In order to do this, we recommend that employers:
While AI can be a useful tool for automating aspects of the hiring process, firms should approach its use with caution and should not rely entirely on such processes to ensure an unbiased, non-discriminatory recruitment system.