AI regulation is like the movie Everything Everywhere All At Once: Workday
As the technology evolves, regulation is struggling to keep up – what are the implications for HR? Check out exclusive insights from Workday Rising EMEA.
Expert Insight
OpenAI has dominated the headlines this week.
But Workday is keen to differentiate itself, and its risk-based approach to AI, particularly generative AI, from ChatGPT.
UNLEASH attended Workday Rising EMEA in Barcelona and got exclusive insights from Workday executives, as well as HR leaders and academics.
Over the weekend, the tech world was rocked by huge news.
The board of OpenAI, the company best known for creating ChatGPT and reinvigorating conversations about AI, decided to oust co-founder Sam Altman as CEO.
Altman has since been reinstated at the helm of OpenAI, seemingly due to employee pressure – there have also been some changes made to the board.
The situation is going to be one to watch – how long will Altman stay at the AI firm? What impact will the new board members have on the future of generative AI? These are just some of the questions the media, regulators and governments, and the wider public need to be asking.
At Workday Rising EMEA in Barcelona, Workday’s co-president Sayan Chakraborty not only used his Day Two innovation keynote to declare that we are living through an “epic transition” in technology – similar to the advent of electricity, the internet or cloud computing – but also to differentiate what the likes of OpenAI are doing from the generative AI approach that Workday is taking.
OpenAI’s large language model (LLM) GPT is trained on huge amounts of data scraped from the internet – and it has been built in an “opaque” way. “These models reflect all of the good, and all of the bad [of the internet],” Chakraborty said.
According to Chakraborty, these models carry a lot of misinformation and bias, can be used in cyber attacks, and can create serious intellectual property (IP) conflicts – all of which explains much of the fear and scaremongering around generative AI.
In comparison, Chakraborty says Workday is building “enterprise LLMs” – “these depend on high quality data…strong regulatory compliance, [with] privacy, security, IP ownership all built in”.
“We know where the data comes from, we know who owns it, we know where there may be concerns” – for Chakraborty, “there’s no place for hallucinations” and mistakes when it comes to HR and AI.
Do you need a chief responsible AI officer?
Overseeing AI risk mitigation and governance at Workday is Kelly Trindel, the tech giant’s dedicated chief responsible AI officer, and her multi-disciplinary team of experts. Because Trindel reports to the chief compliance officer, her team’s work is independent from what the product teams are up to.
“We’re all very excited about AI, [but] we also see that there are risks,” Trindel tells UNLEASH in an exclusive interview.
So, over the past few years (Trindel has been in her post for two years), Workday has developed a scalable risk evaluation framework for AI.
For Trindel, it builds on Workday’s foundation of data privacy and security for its entire technology stack.
She tells UNLEASH: “Frankly, it is just the right thing to do. Workday cared about trust from the beginning – this is just an extension of that”.
Chakraborty echoed this: “We treat our AI models the way we treat everything at Workday” – precision, accuracy and risk mitigation are top of mind.
Workday takes a risk-based approach to its own oversight of AI – this mirrors how the regulators are looking at this emerging tech, and the fact that certain HR products are being categorized as high risk.
“High risk doesn’t mean we aren’t supposed to develop it”, notes Trindel – instead, there is need for extra caution around risk identification and mitigation.
Trindel’s focus is figuring out “when the technology touches a person, what kinds of risks could come up?”.
So top of mind for Trindel and Workday is ensuring that the technologies only have a positive impact on humans.
“We want to make sure that we’re developing [these technologies] so they are actually put in place to help make work [better], and for real business reasons”.
Transparency, trust and fairness are also key – Workday is very aware that no-one will use the technology unless they trust it.
This was very clearly stated by Jens-Henrik Jeppesen, senior director of corporate affairs for EMEA and APJ, during a panel session at Workday Rising. “Our customers need to be able to trust that vendors have this under control,” he said.
Trindel agrees that “folks [need to] feel like they can obtain this technology in a way that is trustworthy and responsible”.
The need to work with all stakeholders
For Trindel, the work on responsible AI needs to include a range of stakeholders.
During panel discussions at Workday Rising EMEA, she talked about the partnership with the public affairs team (which Jeppesen works in), as well as the product team.
At Workday the AI evaluation framework comes into play during the actual development of these technologies, not just the testing phase.
“We really need to be very close to the teams on the ground – or the hands on keyboards – the people who know how this stuff works,” added Trindel.
This also helps guide Workday’s external partnership with the regulators – Workday is already in productive conversations with global regulators and governments on AI, and is keen to be part of creating a successful system to regulate these hugely impactful technologies.
Another important external stakeholder is academia – and UNLEASH was thrilled to sit down at Workday Rising EMEA with Dympna O’Sullivan from Technological University Dublin to find out her views on responsible AI, and what Workday is up to.
O’Sullivan shares that “AI is not just a technical tool, it is a social phenomenon… it is going to have really far-reaching consequences on…humanity”.
Therefore, any discussions about AI need to involve a broad set of stakeholders.
“We need to bring everyone into the conversation,” Trindel agrees.
O’Sullivan is impressed with Workday’s approach: “They are global leaders in Responsible AI. They were one of the first companies to have one person responsible for responsible AI, and they were one of the first to start lobbying regulators and to start talking about the need for guidance.”
“One of the things that I’ve been impressed with is that they give civic society a voice – a lot of companies are…just focused on their customers, but civic society is really important,” O’Sullivan tells UNLEASH.
The moving target of regulation
Currently, the world is at a real inflection point in terms of AI, but the future is uncertain – these technologies will continue to evolve, and that means the regulations will also change over time.
Workday sees this work as a journey – during a panel session, Trindel stated: “Anyone who says they have the entire thing figured out completely probably isn’t being 100% truthful”, so Workday is “watching how things develop”.
Jeppesen added: “If you wanted to make a movie about AI policy and regulation right now, it would have to be called Everything Everywhere All At Once.
“It is an ongoing process.”
It remains a challenge to future-proof regulations for an unknown, unpredictable technology.
During his keynote, Carl Eschenbach, newly appointed co-CEO of Workday, noted that trust cannot be earned just once – it is a continual process.
While vendors like Workday take on a lot of responsibility around AI, there is also a lot for organizations – and specifically HR leaders – to be aware of, particularly amid this evolving regulatory context.
Trindel shared in a panel discussion: “We understand that we are the developer of the technology, and our customers are the deployer. We look very carefully at the distinction of roles and responsibilities between different players or actors in the space.”
Workday, for instance, doesn’t have direct access to employees, so it is on the employer to ask for their consent around how their data is used by vendors and other external partners.
According to Shane Luke, head of AI and machine learning at Workday, when dealing with external data, the HR tech giant takes a “who needs to know only” approach.
“We have policies, we train people, we only [allow] people who should have access to the data to have access [to] it” – but he didn’t explain precisely how Workday prevents malicious actors from getting through the security protocols.
Dutch multinational tech giant Philips is one of Workday’s oldest customers in EMEA. In a media roundtable at Workday Rising EMEA, Efthymios Zindros, director of HR IT at Philips, shares that Workday is its “single source of truth”.
Zindros shares that while there is business demand for generative AI within Philips – “we see significant cost savings” – it is challenging to figure out the best use cases, and to ensure that HR can implement it ethically and responsibly.
He remains unsure about the maturity of the generative AI products coming from vendors, and believes the full functionality may take two to three years to develop.
Talking directly to HR leaders about the future of AI, O’Sullivan says it is important not to buy into the scaremongering about “existential risk” – “It’s not helpful”, and it distracts from the real risks that organizations do need to be aware of.
Then have productive conversations with all of your vendors about the design of the technology (and the data being used), and really dial into the use cases that work best for your business, and your workers.