AI recruitment: How candidates feel (and the bias problem)
For better or worse, AI is playing an increasingly prevalent role in the world of recruitment and HR.
The use of AI in recruitment can help hiring teams save time, but how does it make candidates feel?
The internet is awash with ‘how-to’ articles explaining how candidates can perfect their resumes so that AI bots pick them up during the job application process.
In 2018, a LinkedIn report found that 67% of hiring managers and recruiters said AI was saving them time. Now more than ever, large companies are turning to AI platforms to help them select the best pool of candidates for a particular role.
HireVue is one such platform, used by companies including Unilever, Vodafone, PwC, Accenture, and Oracle. HireVue’s AI-driven assessment involves candidates answering pre-selected questions in a recorded video interview. Candidates are then judged on their vocabulary, speech patterns, body language, tone, and facial expressions, and the employer is presented with a pool of applicants best suited to the job.
These assessments ask well-designed structured interview questions focused on job competencies, key skills, and attributes identified for a particular job. “Those competencies have long been identified through many years of validated research as among the most important in predicting success in a particular role,” says Andy Valenzuela, CHRO at HireVue.
“Our interviews are meant to enhance the initial screening step, ensuring that candidates are not dismissed based on name, gender, age, race, GPA or career gaps—decisions that too often occur with traditional resume screens, despite these factors having no correlation to job success,” says Valenzuela.
On the surface, it seems to work. Unilever reportedly achieved £1 million in annual cost savings, a 90% reduction in hiring time, and a 16% increase in hiring diversity by deploying HireVue’s assessment.
Faceless hiring from the candidate’s perspective
However, for Becky Sun, HireVue’s AI recruitment process felt impersonal. Sun, who lives in Minneapolis, Minnesota, interviewed for a job with a major consulting company this February. “By the end of the ‘interview,’ I felt it was a huge waste of my time and was glad it was over,” she says.
This approach is efficient for the employers, she says, but not for the candidates.
“How can you really get to know a future employee in such artificial situations? The position that I finally did land in a different company had a much better process: My resume fit their criteria. They asked me to take a skills test, I then ranked number one on the skills test. I got a job offer, and accepted. Only then did the employer ask whether I would like to have a Zoom meeting for a no-stakes introduction.”
Rachel Murphy from Kansas City, Missouri, feels the same way. Her interview took place in late December 2020 using the HireVue platform.
She was given six questions to answer in video format, and her responses could be re-recorded as many times as she wanted.
“There was no visual feedback and you could only see yourself in the monitor. It gave me the option to black out the image of my face as it recorded and that made it a little easier, but it was still terrible,” she says.
So much of an interview is the back-and-forth of conversation, Murphy says. “This was an extemporaneous speech with as many chances as possible to overanalyze your performance. You couldn’t go back if you answered better in a previous attempt. It recorded over each time.”
For Murphy, it was an endless cycle of “nothing but negative feedback” from herself, and she believes the format of the interview had something to do with the fact that she didn’t advance to the next round. “I’m very good with people. But this was a whole different ball game,” she adds.
Leveling the playing field
The playing field for candidates is fairer: applicants can be assessed based on their skills rather than subjective information, according to Lindsey Zuloaga, HireVue’s chief data scientist.
Assessments are passed along to hiring managers, who can make their decisions based on facts, not their own subconscious prejudices. “This can be particularly helpful as candidates re-enter the workforce (for example after this past year’s recession), where candidates are considered for their experience and capabilities instead of penalized for the time gaps in their resumes,” says Zuloaga.
With millions of people looking for work due to record unemployment rates, there are too many applicants for recruiters to handle individually, much less efficiently.
“Turning to conversational AI can help improve communications between candidates and companies by making the experience more instantaneous. Rather than waiting for an email, automated FAQs and chatbots allow candidates to ask questions at their convenience, without requiring additional effort from overburdened recruitment teams,” adds Zuloaga.
In November 2019, lawyers at the Electronic Privacy Information Center (EPIC), a privacy rights non-profit, filed a complaint with the Federal Trade Commission, urging the agency to investigate the company for bias, inaccuracy, and lack of transparency. The complaint also accused HireVue of engaging in “deceptive trade practices” because the company claims it doesn’t use facial recognition. (EPIC argues HireVue’s facial analysis qualifies as facial recognition.)
HireVue believes the EPIC complaint is without merit, according to Zuloaga, who adds that early in 2020 the company proactively removed the visual analysis component from all of its new assessments because it was adding little value.
HireVue’s internal research demonstrated that recent advances in natural language processing had significantly increased the predictive power of language and that the benefit from the visual analysis was minimal.
“Bias within an AI system can result in a suitable candidate being rejected for a reason connected with a protected characteristic. In this instance, the claim that the candidate brings will be against the employer,” says Dr Anne Sammon, a partner in the employment and reward group at Pinsent Masons law firm.
This can be challenging given the complexity of these algorithms, and it is important that the decision-makers implementing them have sufficient expertise in the area to make properly informed decisions, she points out.
Employers should also make inquiries into the teams that developed the AI solution. “It will be important to ensure that there is sufficient diversity within that team and that the developers have actively turned their minds to ensuring diversity of thought is harnessed in the development process.”
Recruitment: Bias and discrimination
In 1988, the Commission for Racial Equality found that St George’s Hospital Medical School had engaged in race and sex discrimination in its admissions policy by using a computer program to screen applicants that discriminated against women and those with non-European-sounding names.
The flaw in that system was that it had been developed to match human admissions decisions (and did so with an accuracy of 90 to 95%), which themselves clearly contained bias, says Dr Sammon.
“There have been other high profile examples of AI systems applying biased algorithms that disadvantage people with particular characteristics or backgrounds. This doesn’t mean that AI is inherently flawed, but any employer seeking to rely on it to make hiring decisions needs to engage with the risks of bias in using such tools and should ensure that it properly understands how the underlying algorithm works to make informed decisions on the use of AI in the recruitment process.”
To deploy AI in recruitment, it’s important to check that there is no adverse impact on those with a particular protected characteristic; regular analysis of the candidates who pass the AI stage and those who do not is therefore crucial, says Dr Sammon. “It is also important that where any adverse impact is identified, steps are taken to understand why it is there and to address it.”
HireVue’s solutions are all rigorously evaluated for adverse impact in accordance with the EEOC’s Uniform Guidelines and standards outlined in the Society for Industrial and Organizational Psychology’s (SIOP) “Principles for the Validation and Use of Personnel Selection Procedures.”
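For illustration, one widely used check under the EEOC’s Uniform Guidelines is the “four-fifths rule”: each group’s selection rate is compared with that of the group selected at the highest rate, and a ratio below 0.8 is commonly treated as evidence of possible adverse impact. The sketch below is a minimal, hypothetical version of that calculation; the group names and counts are invented, and this is not HireVue’s actual evaluation code.

```python
# Minimal sketch of a four-fifths rule check (hypothetical data, not HireVue's methodology).

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who pass the screening stage."""
    return selected / applicants

# Hypothetical pass/apply counts per demographic group at the AI screening stage.
groups = {
    "group_a": {"applicants": 400, "selected": 120},
    "group_b": {"applicants": 250, "selected": 55},
}

rates = {name: selection_rate(g["selected"], g["applicants"]) for name, g in groups.items()}
highest = max(rates.values())

for name, rate in rates.items():
    impact_ratio = rate / highest  # this group's rate relative to the highest-selected group
    flag = "review for adverse impact" if impact_ratio < 0.8 else "within four-fifths threshold"
    print(f"{name}: selection rate {rate:.2%}, impact ratio {impact_ratio:.2f} -> {flag}")
```

In this invented example, group_b’s selection rate is about 73% of group_a’s, so it would be flagged for further review; in practice such a flag triggers investigation of why the disparity exists rather than an automatic conclusion of bias.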
It is also important to engage with legislators, academics, social scientists, and data scientists to advance the field, and to conduct regular audits of a company’s technology and algorithms, both to improve them and to continually watch for and remove potential bias, practices Valenzuela says HireVue follows.