ChatGPT, the workplace and HR: Use cases and warnings
Lisbeth Claus returns for part two of her three-part piece on the generative AI chatbot, this time looking at considerations for using it fairly and effectively.
Why You Should Care
The applications of the tech are wide-ranging, to say the least, but so are the pitfalls.
Dr Lisbeth Claus continues her exclusive three-part look at the chatbot that's changing everything.
You can read part one of Lisbeth’s piece on ChatGPT here, but this middle chapter covers practical uses and concerns for HR teams and the wider business.
Uses of ChatGPT in HR
While AI is not a new idea in HR, and chatbots have been in use for over a decade at some vanguard companies, the performance level of the latest ChatGPT version and the range of its applications in the workplace, and particularly in HR, are tremendous.
There are many tier-one HR tasks in recruitment, training, learning and development (L&D), rewards, employee engagement, employee information handling, and HR analytics that can be performed effectively, if not better, by ChatGPT rather than HR staff. These work packages and tasks include:
- Compose and respond to emails.
- Ideate and produce boilerplate content.
- Handle most employee queries through a virtual digital assistant and deal with HR-related questions & answers in an interactive manner.
- Write and update job descriptions and policy manuals (a brief sketch of this task follows the list).
- Generate legal boilerplate documents.
- Synthesize the key content on a topic and summarize information in reports.
- Produce a template with content for a PowerPoint presentation.
- Use AI chatbot-driven recruiting and applicant tracking systems (ATS) to scan resumes for keywords, manage recruitment communication, screen candidates and prepare a shortlist, interview candidates, conduct assessments, and answer questions. Similarly, ChatGPT can help job applicants write cover letters and other application materials and handle other tasks related to job applications.
- Bring up an employee issue to the chatbot and get it resolved much faster (in less than a minute) than by speaking to a person.
- Receive online coaching.
- Evaluate writing style and receive improvement tips.
- Augment employee onboarding.
- Develop scenarios for preferred managerial behavior.
- Write text responses for coaching and feedback.
- Provide content for common use cases in problem solving and decision making.
- Create individualized learning plans linking career development to L&D.
- Summarize managerial knowledge.
- Draft training manuals.
- Provide just-in-time interactive training.
- Prepare instructional materials (cases, assessments, questions).
- Document workflow processes.
- Provide real-time HR intelligence and insights into employee behavior.
- Assist in daily compensation-related tasks (generating reports, aggregating market data, evaluating pay equity).
- Serve as an AI-based analysis tool of the personnel database.
- Conduct just-in-time employee surveys and sentiment analyses.
- Supply suggestions for staff engagement based on real-time data.
- Track employee progress and provide feedback on their performance.
- Update changes in HR information systems.
- Analyze conversations in company communications (e-mail, Slack, Teams, etc.).
- Make meaningful inferences from the employee database.
- Provide predictive analytics, recommend steps to prevent undesirable outcomes, and inform decision-making.
- Use conversational AI (chatbots, virtual agents, digital assistants) for a wide range of HR activities.
- Deliver specific tier-one HR services.
- Automate repetitive HR tasks.
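As a minimal illustration of how one of these tier-one tasks might be automated, the sketch below drafts a job description with a chat model. It assumes the OpenAI Python client (v1 style) and an API key in the environment; the model name, prompt wording, and helper function are placeholders rather than a production design.

```python
# Minimal sketch: drafting a job description with a chat model.
# Assumes the OpenAI Python client (v1 style) and OPENAI_API_KEY set in the environment;
# the model name, prompt wording, and function name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def draft_job_description(title: str, department: str, requirements: list[str]) -> str:
    """Ask the chat model for a first draft that HR reviews before publishing."""
    prompt = (
        f"Draft a job description for a {title} in the {department} department. "
        f"Key requirements: {', '.join(requirements)}. "
        "Use inclusive language and a neutral tone."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; any available chat model works
        messages=[
            {"role": "system", "content": "You are an HR assistant drafting internal documents."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

# Example usage: the output is a first draft, not a final document.
print(draft_job_description("Payroll Analyst", "Finance", ["3+ years of payroll experience", "Workday"]))
```

The same pattern, a narrow prompt plus human review of the output, extends to most of the drafting and summarizing tasks in the list above.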
HR influencer Josh Bersin sees ChatGPT as a critical tool for making the administrative side of organizations more efficient, especially in narrow domains with a well-defined corpus of knowledge that requires deep mastery, such as safety, compliance, training, sales, marketing, and recurring work processes, including HR.
HR departments are poised to see an increase in automation with AI technologies, even beyond repetitive tier-one tasks!
But as with any innovation, the application and implementation of ChatGPT in the workplace are not without drawbacks and likely resistance to change.
Major issues with using ChatGPT in the workplace
ChatGPT is raising a storm of issues related to accuracy, sources, bias, plagiarism, privacy, jobs, AI ethics, and regulations. These obstacles are currently being heavily debated and tested by both opponents and proponents of the proliferation of AI use in the workplace and HR.
Accuracy
As ChatGPT mainly uses publicly available information (predominantly the Internet) as the database for its training, it generates false as well as accurate information and therefore poses accuracy risks.
To curtail disinformation, increase accuracy, and rise above the current baseline level, HR chatbots need to be grounded in validated sources from deep knowledge domains.
In the near future, we are likely to see many more, very powerful ‘single domain’ chatbot applications emerge in HR, and HR vendors incorporating this text-generating tool into their service offerings.
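As a rough sketch of what such a grounded, ‘single domain’ HR chatbot could look like, the snippet below answers questions only from a small validated policy corpus and declines when nothing matches. The policy texts, keyword-matching logic, and model name are hypothetical assumptions; a real system would use far more robust retrieval.

```python
# Sketch of a 'single domain' HR chatbot grounded in a validated corpus.
# The policy texts, keyword matching, and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

# A vetted, HR-approved knowledge base (in practice maintained in an HRIS or intranet).
VALIDATED_POLICIES = {
    "parental leave": "Employees are entitled to 16 weeks of paid parental leave per child.",
    "expense reimbursement": "Expenses must be submitted within 30 days with itemized receipts.",
}

def answer_from_policy(question: str) -> str:
    """Answer only from the validated corpus; decline if no policy matches."""
    matches = [text for topic, text in VALIDATED_POLICIES.items() if topic in question.lower()]
    if not matches:
        return "I cannot find this in the approved HR policies; please contact HR directly."
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer strictly from the policy text provided. "
                                          "If the answer is not in the text, say so."},
            {"role": "user", "content": f"Policy text:\n{' '.join(matches)}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```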
Biases
Since algorithms are developed by people, they contain human biases. AI systems can contain cognitive, database, and societal biases: cognitive biases occur when designers unconsciously introduce errors through the assumptions they make in developing the algorithm; database biases arise from incomplete and non-representative data sets; and societal biases are prejudices built into the data used for training.
It has been well documented that algorithms are not immune to existing societal biases, including racism, sexism, ageism, and Anglo-Saxon and multinational company hegemony, to name a few. As a result, the AI systems HR practitioners use to augment and/or replace their services may include similar biases and errors.
We must ensure that these biases are discovered and that the flawed algorithms are rectified. While there are ways for AI practitioners and business and policy leaders to minimize biases in AI systems and increase people's trust, companies (and their vendors) will have to prove that their AI people management systems are not discriminatory.
Sources
As ChatGPT provides no links or references to the sources of the information included in its generated text, the provenance of that content is unknown and cannot be checked for reliability.
But generative AI search models could easily become more discriminative and be upgraded to a semantic search context in which the facts feeding the models include the provenance of the cited information.
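One hedged sketch of that idea: store each validated fact with its source, retrieve by embedding similarity, and return the provenance alongside the answer so it can be cited. The facts, sources, and model name below are made up for illustration.

```python
# Sketch of semantic search with provenance: each stored fact carries its source,
# so a generated answer can cite where the information came from.
# Facts, sources, and the model name are illustrative assumptions.
import math
from openai import OpenAI

client = OpenAI()

FACTS = [
    {"text": "Overtime must be pre-approved by a line manager.", "source": "HR Policy Manual, section 4.2"},
    {"text": "Remote workers receive an annual home-office stipend.", "source": "Benefits Guide 2023"},
]

def embed(text: str) -> list[float]:
    """Turn text into an embedding vector for semantic (meaning-based) comparison."""
    return client.embeddings.create(model="text-embedding-ada-002", input=text).data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def search_with_provenance(query: str) -> dict:
    """Return the best-matching fact together with its source citation."""
    query_vec = embed(query)
    best = max(FACTS, key=lambda fact: cosine(query_vec, embed(fact["text"])))
    return {"answer": best["text"], "source": best["source"]}

# Example: the source can be surfaced to the employee alongside the answer.
print(search_with_provenance("Do I need approval to work overtime?"))
```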
Plagiarism
Since the source of the generated text is not acknowledged, plagiarism will be a problem due to possible copyright infringements. In addition, will the producer of the writing acknowledge that the content was generated by ChatGPT, or that it was a form of blended writing using the tool as a first draft?
If not personally disclosed, perhaps the use of an authenticating watermark needs to be considered.
Fairness
Vanguard and large global companies are initially likely to have an unfair competitive advantage as prime adopters of these tools due to economies of scale and scope.
Privacy
Employers using HR chatbot tools need to address privacy and transparency concerns, especially in EU countries under the General Data Protection Regulation (GDPR), and compel the AI platforms of tech companies to pass stringent data protection assessments for technical and legal compliance.
Jobs
The debate on whether AI will replace humans and lead to job losses/gains has been going on for quite some time. While ChatGPT will initially augment jobs rather than replace them, in the long run the tool will inevitably take away some jobs while it creates new ones.
The creative, interpersonal and critical thinking skills of people are much harder to replace with AI.
But these skills still need to be developed in people and ChatGPT can be an awesome tool to train them on those skills—at least at a baseline level so far.
ChatGPT can also make existing HR jobs better by taking over more mundane, repetitive tasks, allowing HR to work more efficiently and focus on higher-value, more strategic parts of the job. But such productivity increases, doing more with fewer people, are likely to lead to HR staff reductions in the long run.
It behooves HR people to examine the different work packages in their current role, determine which parts of their work are likely to be augmented and/or replaced, and identify which skillsets they need to develop and upskill for the changing nature of their own work as a result of AI.
AI ethics and social responsibility
There are ethical challenges related to the use of AI text agents. According to a January 2023 article in Business Ethics by Laura Illia, Elanor Colleoni and Stelios Zyglidopoulos, AI text agents pose three major challenges: automated mass manipulation and disinformation (i.e., the fake agenda problem); massive low-quality content production (i.e., the lowest denominator problem); and the creation of a growing buffer in the communications between stakeholders (i.e., the mediation problem).
The OpenAI website offers some guidelines for using content co-authored with GPT-3. In a nutshell, they are: do no harm and refrain from using harmful content; clearly identify the use of AI-generated content; and attribute it to your name, as you are responsible for the published content.
It remains to be seen whether ChatGPT will be able to pass the ‘Turing test’ for HR services, namely the requirement that a person should be unable to distinguish the answers provided by the AI from those of the HR practitioner when comparing replies to questions put to both.
There are also ‘courtesy’ issues for our interaction with a chatbot. Should there be a disclosure that employees are speaking/interacting with AI versus a real HR person?
Regulations
At the micro level, ethical uses and abuses of such tools will likely fall between two ends of a continuum: at one end, the market decides and AI developers self-regulate by following the voluntary ethical rules they have agreed upon; at the other, regulation and/or legislation is imposed.
At the organizational level, including applications in the HR domain, companies will need to examine the implications of using these types of technologies without proper consideration. AI developers as well as organizational users need to put systems, policies, and procedures in place to prevent misuse of such AI tools.
As ChatGPT and related tools can minimize cost and maximize productivity, this raises the question of which policies and procedures are needed for using (or not using) these AI tools in our companies, and how we will manage the proliferation and integration of such blended writing tools.
Find the world’s HR and AI experts at UNLEASH America in Las Vegas this April. Book your UNLEASH America tickets today.
Lisbeth Claus, Ph.D., SPHR, GPHR, SHRM-SCP is Professor Emerita of Management and Global Human Resources.