AI comes with biases, and gender bias is a serious concern for businesses. Find out why, and get top tips on how to fix it.
Over the past year, artificial intelligence (AI) has become the trending topic of conversation across the globe, with bots like ChatGPT impressing users with their ability to produce human-like text and even computer code. However, what if AI makes wrong decisions?
Bias in AI systems, particularly gender bias, is a serious issue that causes real harm, including discrimination, reduced transparency, and security and privacy risks.
In severe cases, AI could damage careers or even cost lives with the wrong decision made.
We risk an imbalanced future where AI will never reach its full potential if its bias problem isn’t dealt with.
The effectiveness of AI depends on the data sets it is trained on, which are drawn from everything from online news articles to books and are often skewed towards men.
Research has shown that AI trained on Google News data will associate men with roles such as ‘captain’ and ‘financier’, and women with ‘receptionist’ and ‘homemaker’.
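The kind of skew described above can be measured directly in word embeddings. The sketch below uses tiny made-up vectors rather than real Google News embeddings (every number is an illustrative assumption), and projects occupation words onto a he/she axis to see which way they lean:

```python
from math import sqrt

# Toy 4-dimensional word vectors (illustrative values, NOT real embeddings).
EMB = {
    "he":        (0.9, 0.1, 0.3, 0.0),
    "she":       (-0.9, 0.1, 0.3, 0.0),
    "captain":   (0.6, 0.5, 0.2, 0.1),
    "homemaker": (-0.7, 0.4, 0.2, 0.1),
}

def unit(v):
    """Scale a vector to length 1."""
    n = sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def gender_lean(word):
    """Project a word onto the she->he axis: positive = male-leaning."""
    axis = unit(tuple(h - s for h, s in zip(EMB["he"], EMB["she"])))
    return sum(a * b for a, b in zip(unit(EMB[word]), axis))

print(round(gender_lean("captain"), 2))    # positive: male-leaning
print(round(gender_lean("homemaker"), 2))  # negative: female-leaning
```

Real audits of this kind run the same projection over thousands of occupation words in a full embedding model; a consistent lean across many neutral job titles is the signal of the bias the research describes.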
As a consequence, many AI systems, created by predominantly male teams and trained on biased data, perform significantly worse for women.
AI’s wrong decisions can damage both people’s finances and their physical health: credit card algorithms have extended more generous credit to men than to women, and medical screening tools, for everything from COVID-19 to liver disease, have performed worse for female patients.
The fact that women make up only 22% of professionals in AI and data science, according to the World Economic Forum’s research, exacerbates the issue. Gender is also becoming a more complex topic, with non-binary and transgender identities further increasing the likelihood of bias in various forms.
To avoid sliding into another ‘AI winter’ like that of the 1970s, when interest in the technology dried up, AI professionals need to address the bias issue so this powerful tool can be used to its full potential.
Going forward, businesses will increasingly rely on AI technology to turn their data into value. According to Lenovo’s Data for Humanity report, 88% of business leaders say that AI technology will be an important factor in helping their organization unlock the value of its data over the next five years.
So how will business leaders deal with the problem of bias? For the first time in history, we have this powerful technology that is entirely created from our own understanding of the world. AI is a mirror that we hold up to ourselves.
We shouldn’t be shocked by what we see in this mirror. Instead, we should use this knowledge to change the way we do things. That starts with ensuring that the way our organizations work is fair in terms of gender representation and inclusion – but also by paying attention to how data is collected and used.
Whenever you start collecting data, processing it, or using it, you risk inserting bias. Bias can creep in anywhere: if there is more data for one gender, for example, or if questions were written by men.
For business leaders, thinking about where data comes from, how it’s used, and how bias can be combatted will become increasingly important.
Technical solutions will also play an important part. Data scientists don’t have the luxury of going through every line of text used to train a model.
There are two solutions to this: one is to have many more people test the model and spot problems. The better solution, though, is more efficient tools for finding bias, either in the data the AI is fed or in the model itself.
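As a minimal sketch of what such a bias-detection tool might check at the data level, the example below audits a toy labelled data set for group sizes and positive-outcome rates per gender. The records, field meanings, and threshold are all assumptions for illustration:

```python
from collections import Counter

# Hypothetical training records as (gender, outcome) pairs, standing in for
# any labelled data set (e.g. hiring decisions). Values are made up.
records = [
    ("male", 1), ("male", 1), ("male", 0), ("male", 1),
    ("female", 1), ("female", 0), ("female", 0), ("female", 0),
]

def audit(rows):
    """Return {group: (count, positive_rate)} for each gender in the data."""
    counts = Counter(g for g, _ in rows)
    positives = Counter(g for g, y in rows if y == 1)
    return {g: (counts[g], positives[g] / counts[g]) for g in counts}

report = audit(records)
gap = abs(report["male"][1] - report["female"][1])
print(report)  # per-group sample sizes and positive rates
print(gap)     # a large gap between groups flags a skew worth investigating
```

A check like this catches two common problems before training ever starts: one gender being under-represented in the data, and one gender receiving systematically different outcomes in the labels.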
With ChatGPT, for example, the researchers use a machine learning model to flag potentially problematic data. The AI community needs to focus on this area. Tools that provide greater transparency into how AI works will also be important.
It also helps if we consider the broader context. The tools we use today are already creating bias in the models we will apply in the future.
We might think that we have ‘solved’ a bias issue now, but in 50 years, for example, new tools or pieces of evidence might change completely how we look at certain things.
This was the case with the history of Rett syndrome diagnosis, where data was primarily collected from girls. The lack of data on boys with the disorder introduced bias into data modelling several years later and led to inaccurate diagnoses and treatment recommendations for boys.
Similarly, in 100 years, humans might work for only three days a week. That would mean that data from now is skewed towards a five-day way of looking at things. Data scientists and business leaders must take context into account. Understanding social context is equally important for businesses operating in multiple territories today.
Mastering such issues will be one of the touchstones of responsible AI. For business leaders using AI technology, being conscious of these issues will grow in importance, along with public and regulatory interest.
By next year, 60% of AI providers will offer a means to deal with possible harm caused by the technology alongside the tech itself, according to Gartner.
Business leaders must plan thoroughly for responsible AI and create their own definition of what this means for their organization, by identifying the risks and assessing where bias can creep in.
They need to engage with stakeholders to understand potential problems and distinguish how to move forward with best practices. Using AI responsibly will be a long journey, and one that will require constant attention from leadership.
The rewards of using AI responsibly, and rooting out bias wherever it creeps in, will be considerable, allowing business leaders to improve their reputation for trust, fairness and accountability, while delivering real value to their organization, to customers and to society as a whole.
Businesses need to deal with this at board level to ensure bias is dealt with and AI is used responsibly across the whole organization. This could include launching their own Responsible AI board to ensure that all AI applications are evaluated for bias and other problems.
Leaders also need to address the broader problem of women in STEM, particularly in data science. Women – especially those in leadership roles – will be central to solving the issue of gender bias in AI.
For forward-thinking organizations aiming to unlock their data value by leveraging AI, it is vitally important to acknowledge and address the gender bias issue and deal with it effectively.
Business leaders need to take a thoughtful approach to how AI is used across the company, supported by bias detection tools and a commitment to transparency.
Additionally, leaders must consider the origin of their data, the way it is used, and the steps to avoid bias.
By doing so, businesses will benefit from the unlocked value of their data, and from an inclusive future in which AI can reach its full potential for the greater good.
Senior Manager, Global Product Diversity Office
Lopez manages the Lenovo Product Diversity Office team.
"*" indicates required fields
"*" indicates required fields