A first line of defense, or a crutch for poor training and education?
More than 264 million people worldwide suffer from depression, according to a January 2020 report from the World Health Organization.
Could chatbots help? Or are they biased?
WeForum’s piece on the Workplace Intelligence and Oracle study addresses findings that technology such as AI and digital assistants is improving the mental health of 75% of the global workforce. Respondents attributed this to technology automating tasks and decreasing workloads to prevent burnout, making them more efficient through the provision of information, and reducing stress by helping them prioritize tasks. The study also found that AI has helped workers shorten their workweeks, take longer vacations, increase productivity, boost job satisfaction, and improve well-being.
On the back of this report, there has been a wave of coverage about what chatbots and robots can do for our mental health and how employers can harness them to help their employees. Let’s home in on the chatbot space with respect to mental health and look at some of the pros and cons of using these tools in the workplace.
Tools like chatbots and apps tie into the habits and culture of our employees. We know from countless studies, including this recent one from Oracle and Workplace Intelligence, that there are stark generational differences in who is affected by mental health issues: in the UK, 89% of 22-25-year-olds say the pandemic has affected their mental health. Highlighting a possible connection, the study showed that the younger generations have taken on more overtime: 66% of 22-25s and 59% of 26-37s worked at least five more hours a week than before the pandemic, compared to 48% of 38-54s and 31% of 55-74s.
The uptake of something digitally centered, mobile-friendly, and suited to how we know the most vulnerable and most affected age groups engage is definitely a pro. Equally, the beauty of chatbots and apps is that they don’t have to correspond to when you’re working. They are accessible anytime, anywhere, and so fit the very nature of mental health issues, which so often seem to come out of the blue and rarely respect constraints like office hours.
The study also found that mental health issues are being carried over from people’s professional lives to their home lives. 85% of people said mental health issues at work had affected their home life, with some of the most common repercussions being sleep deprivation (40%), reduced happiness at home (33%), and suffering family relationships (30%). If it is our work lives that are harming our mental health in our personal lives, the support employers offer should reflect this. Chatbots and apps that give employees 24/7 access to an easily accessible support system surely cannot be a bad thing?
Chatbots and apps are heralded for their “judgment-free” support. In this latest study, only 18% of people said they would prefer humans over robots to support their mental health, believing that robots provide a judgment-free zone, an unbiased outlet to share problems, and quick answers to health-related questions. But this may actually be a misconception.
Dr. Pragya Agarwal told Forbes that “Bias in AI is not being given adequate attention, especially when such tools are being deployed in a sensitive domain such as tackling mental health or being advertised as a ‘coach’.” This is mirrored in an Emerj piece discussing a challenge for developers of natural language processing for text and speech: “At the moment, these apps are just now learning the nuances of the English language, and seem to largely depend on short descriptions being input by the users. The mood trackers use non-verbal communication without the context of facial expressions, body language, or voice inflections, which is also likely to be a challenge in the near term.” The concern is that these apps and tools oversimplify what is a complex, multi-faceted issue, potentially leading to wrong advice or a wrong diagnosis.
Additionally, it is worrying to consider that people may be turning to technology for their mental health because they are afraid to talk to their human colleagues. Many employees feel they cannot discuss their mental health with their managers for fear of it harming their career opportunities, with some even fearing being fired or, in the current climate specifically, furloughed. Are we papering over poor, uncompassionate, and ill-equipped management with a ‘quick fix’ by putting a piece of technology between managers and their employees?
Dr. Agarwal also believes we could be playing a dangerous game by using chatbots to support mental well-being, and that many are naïve about the implications. For example, she highlights research showing that behavioral data acquired from these types of applications is then sold on secondary markets, where it can affect people’s lives in completely unsuspected ways, in areas such as credit, employment, law enforcement, higher education, and pricing. “There are also potential medical risks to patients associated with poor quality online information, self-diagnosis and self-treatment, passive monitoring, and the use of unvalidated smartphone apps.”
She also notes that because people know they are talking to a machine, they are much more open than they would be with another human being. That openness in handing information to technology may not be matched by the protections in its privacy policies. Privacy issues of this kind were highlighted in 2014, when the Samaritans were forced to abandon their Radar Twitter app, designed to read users’ tweets for evidence of suicidal thoughts, after it was accused of breaching the privacy of vulnerable Twitter users.
If employers can see mental well-being chatbots and applications as a partnership between man and machine, rather than a crutch for poor education and understanding of the real issues and solutions, then there may be scope for them to become a successful ‘first line of defense’. The recent Oracle / Workplace Intelligence study demonstrates that 89% of employees would stay with an employer longer if it provided mental health support, and, according to a November 2019 study by Aetna, two-thirds wouldn’t work for a company that didn’t have a clear policy on supporting mental health.
Chatbots and applications can therefore be a way to attract and retain talent, as well as part of a mental health program of the kind that has been shown to improve ROI. As long as we don’t leave our entire mental health strategy to these technologies, understand when human intervention needs to occur, and provide full training, support, and education to managers in our workplaces, they can be a great first line of defense in combatting the mental health pandemic, especially in our remote world.
"*" indicates required fields
"*" indicates required fields