Does the use of AI in the workplace need to be regulated?
The coronavirus pandemic has accelerated the use of AI and other technologies as the workplace has shifted from the office to remote locations.
Concerns are mounting that the broader use of AI in the world of remote working is discriminatory and unfair to employees.
How can the legal framework be altered to ensure AI’s workplace use benefits all?
Just 24 hours after news that Amazon was requiring its US delivery drivers to consent to AI-powered cameras constantly monitoring them, AI and its use in the workplace are back in the headlines.
The UK’s Trades Union Congress (TUC), the national federation of trade unions, is calling for a review of the regulations surrounding the use of AI in the workplace.
This is particularly relevant at the moment as the roll-out of AI and other cutting-edge HR technology has been accelerated by the coronavirus pandemic and the global shift to remote working.
Although the TUC acknowledges in a recently published manifesto that AI and other innovative technologies are transforming the way we work for the better, the union is concerned that these new technologies are dehumanizing the workplace and could “entrench inequalities, unfair treatment, and unsafe working practices” if left unchecked. This is particularly the case as AI is increasingly being used for hiring and firing decisions.
“Our prediction is that left unchecked, the use of AI to manage people will also lead to work becoming an increasingly lonely and isolating experience, where the joy of human connection is lost,” wrote TUC general secretary Frances O’Grady in the manifesto’s introduction.
This call from the TUC comes off the back of a survey in the summer of 2020 that considered how AI and associated automation technologies are being used by UK employers in the workplace.
The resulting Worker Experience report found that AI was being used extensively in employee recruitment, monitoring, management, reward, and discipline, and that staff were largely unaware of the implications of this.
As a result of these findings, the TUC worked with Cloisters lawyers Robin Allen and Dee Masters to dig deeper and make recommendations about the changes needed to the legal frameworks surrounding the use of AI in the workplace.
The Cloisters-TUC report and survey found that one of the major concerns about AI and other tools was profiling based on known personal data about employees, which can lead to discrimination based on race, gender, and other factors.
Another major issue is employee monitoring – which brings us back to the Amazon case. The TUC found that employees were concerned that AI was flawed and made mistakes, while managers believed it to be “unimpeachable.” This mismatch fed a lack of trust in the relationship between employee and employer.
Further to this, the report found that AI and other new technologies were intruding into the private spheres of workers’ lives while they work remotely.
This is creating an ‘always on’ culture where employers expect their employees to be easily contactable, even outside of office hours.
In turn, this is detrimental to people’s mental health and wellbeing as it erodes their ability to strike a work-life balance. Digital exhaustion and burnout are hugely important issues that are currently being discussed in depth. In an attempt to tackle them, Citigroup has introduced Zoom-free Fridays and encouraged people to keep work meetings within working hours.
The Cloisters’ lawyers note:
“It is therefore important that AI technologies are regulated to ensure they do not encroach upon the private lives of employees and workers.”
To seize the opportunity to ensure that technology works to the benefit of all, the TUC makes four core recommendations.
The first is that employers should have a legal duty to consult trade unions about the use of so-called ‘high-risk’ and intrusive forms of AI in the workplace.
The second is that there should be a legal right for all workers to have a human review of decisions made by AI technologies, which gives them the opportunity to challenge decisions deemed unfair or discriminatory.
The Cloisters report considers the example of employees facing redundancy who could insist on being interviewed by a human rather than just talking aimlessly into a screen. The lawyers state: “We do not accept that employees and workers should ever be treated as mere units of production nor that they can be compelled in effect to contract with systems or machines.”
Thirdly, there should be legal amendments to the UK’s General Data Protection Regulation (GDPR) and the Equality Act to safeguard everyone against biased algorithms. On GDPR, the Cloisters report highlights the regulation’s ambiguity, noting that it does not “adequately compel transparency and observability around the use of AI” and other technologies. The lawyers call for GDPR to be updated to ensure that “discriminatory data processing is always unlawful,” whether fully automated or not.
Finally, employees should have a legal right to switch off from work and have so-called ‘communication-free’ time as part of their work-life balance. In their report, the Cloisters lawyers talk about ‘red lines’ in the modern employment relationship, which should be non-negotiable.
They acknowledge that the practicalities of implementing this would differ by job role, seniority, and sector, but they suggest that employers should consult with their staff about how it could work best for them.
Chief Reporter
Allie is an award-winning business journalist and can be reached at alexandra@unleash.ai.