AI regulations are on the horizon. What impact will they have on HR teams, and on how they work with vendors? Experts weigh in on how to not just comply, but go above and beyond.
Although artificial intelligence (AI) has been around for over 50 years, it is dominating the headlines at the moment because of the latest iteration of the technology: generative AI.
Generative AI can take many forms, but the one that has caught the world’s attention is essentially a highly advanced chatbot.
It scours hundreds of trillions of data points to generate human-like text and content. The most well-known version of this is OpenAI’s free-to-use ChatGPT, but the likes of Google and Meta also have their own models.
As millions of people globally experiment with this technology daily – both in their professional and personal lives – regulators across the world are grappling with how to ensure these tools do no harm, with a particular focus on the world of work.
Sam Altman, CEO of OpenAI, has himself called for better regulation and oversight of these technologies – he has also, alongside academics and other AI company CEOs, signed an open letter declaring that AI poses a “risk of extinction” and that this should be a global priority.
New York is leading the way in regulating AI. Under a first-of-its-kind law, from July this year it will be illegal for employers in New York City to use AI (or so-called ‘automated employment decision tools’) for hiring and promotion decisions, unless certain requirements are met.
These requirements include a published bias audit conducted by an independent auditor – employers can work with their vendors on this, and rely on vendors’ external auditing processes.
Candidates and employees must also be notified that AI is being used, what data will be collected, and how it will be used and retained.
The wider US federal government, the UK and the European Union (EU) aren’t far behind in regulating the use of AI in the workplace.
The US and EU have committed to draft a Code of Conduct for AI to bridge the gap until full regulation comes in.
This is happening in the context of the US Equal Employment Opportunity Commission (EEOC) issuing guidance about the use of AI at work, and the EU finalizing work on the AI Act.
Under the AI Act, the EU has classified HR applications of AI as high-risk – and, like the New York law, it imposes requirements on the use of AI in these scenarios.
The obligations on high-risk AI systems relate to data, analytics, transparency to employees and candidates, and appropriate human oversight. They apply both to employers using the technology and to the companies producing the AI models in the first place, and are expected to be enshrined in law by mid-2024.
The EEOC is taking a slightly different approach – as per its remit, it is focused on employers (not tech vendors).
Employers will now be held responsible for any discriminatory impact of AI when it is used in hiring, promotions, demotions or firing – this is in line with Title VII of the Civil Rights Act, which protects against discrimination at work based on race, color, religion, sex (including sexual orientation and gender identity), and national origin.
It is clear that AI regulation is coming, and fast – but the crucial question is what employers, and particularly HR leaders, need to do to prepare and avoid costly fines for non-compliance. UNLEASH sat down with HR and AI experts to get some advice.
The first action HR leaders need to take is to actually “understand the essence of these laws”, according to Ravin Jesuthasan, AI expert, and global transformation leader at Mercer.
They should do this with the help of legal experts, and “add this to the portfolio of regulations they already monitor”, adds Gartner’s distinguished VP analyst Helen Poitevin.
Once they’ve explored which of their technologies are subject to these new laws, they need to “be prepared to rapidly retool their practices and protocols”, continues Jesuthasan.
They need to look at every single vendor relationship and have difficult conversations about compliance, adds SeekOut’s head of legal, Sam Shaddox.
There is no denying it; “applying these regulations retroactively to products and practices that were created with no such regulations in mind will be…difficult”, shares Cangrade’s CEO and founder Gershon Goren.
However, the good news is this pain won’t last forever.
According to Goren, “this legislation will codify existing laws, and specifically EEOC, to be an a priori requirement for any AI algorithm that drives HR decisions. Any AI system will have to demonstrate its compliance with existing laws before it is used”.
The challenge here for lawmakers and regulators is to ensure that they aren’t stifling innovation, particularly from smaller companies that don’t have the deep pockets to navigate complex rules, according to Shaddox. But that’s not for HR to worry about.
Of course, as AI continues to evolve, these regulatory frameworks will need to continue to be updated.
As things stand, “the regulation continues to lag the technology”; it is “imperative that regulators take an agile view of regulation and do not view it as ‘one and done’”, shares Mercer’s Jesuthasan.
This creates challenges for vendors – and HR leaders need to make sure they stay abreast of new regulations, and work with vendors on compliance challenges.
However, a good course of action long-term is for HR and employers to get ahead, and simply ensure they are working with vendors to “be responsible when using AI regardless of the law” – this is the view of Forward Role’s managing director Brian Johnson.
Ultimately, for HR, AI must just be “a tool to enhance decision-making”.
So, Johnson concludes, “whether there’s a strict law or not, responsible use of AI in the workplace is the way to go. By keeping fairness, transparency, and privacy in mind, employers can make the most of AI’s benefits while ensuring a level playing field for all candidates and employees”.
Chief Reporter
Allie is an award-winning business journalist and can be reached at alexandra@unleash.ai.