On May 25th 2020, an African American man named George Floyd was killed in Minneapolis, Minnesota, sparking widespread condemnation and civil unrest not just in the US but in many other parts of the world. Racial inequality has existed since the dawn of humanity. However, technology has completely transformed the way such incidents are reported and the way we respond to them day to day.
For many people, watching another human being lose his life in such an inhumane way was deeply traumatic. The distress was compounded by the global COVID-19 pandemic, which had put many countries into lockdown. It felt as though, for the first time ever, we were all eyewitnesses to the ugliest parts of humanity – and all at the same time.
There was outrage. There were calls for action. Many people took it upon themselves to be proactive, to educate themselves, to march and to protest against inequality. But even as tech and AI played a central role in galvanizing the Black Lives Matter movement, the practical and ethical merits of digital technology were once again called into question. Here are three things we learned:
Activism and social justice are key pillars for creating a fairer, more equal society for all.
On the day of the incident, Floyd was placed under arrest on suspicion of using a counterfeit note at a local convenience store. It emerged that a team of four officers was involved in restraining him, even though nearby CCTV footage gave no indication that Floyd was non-compliant or aggressive at any point. For the next few days, we were bombarded by the image of Derek Chauvin, a white policeman, kneeling on the neck of a gasping George Floyd. The team of officers also included a man of Asian descent and a black man.
So what part did AI play in all of this? Social media algorithms are designed to prioritize what we see based on relevance rather than chronology. This type of sorting is typically helpful because it surfaces more of the content you care about, rather than random posts that may or may not be of interest.
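To make that distinction concrete, here is a minimal sketch of what relevance-based ranking might look like. The Post fields, weights and decay rate are invented purely for illustration and do not reflect any platform's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float   # how often the viewer interacts with this author (0-1)
    engagement: int          # likes, shares and comments the post has received
    age_hours: float         # hours since the post was published

def relevance_score(post: Post) -> float:
    # Reward engagement and affinity, and decay older posts.
    # The weights here are invented for illustration only.
    recency_decay = 0.5 ** (post.age_hours / 24)  # halves every 24 hours
    return (2.0 * post.author_affinity + 0.01 * post.engagement) * recency_decay

def rank_feed(posts: list[Post]) -> list[Post]:
    # Relevance ranking: highest score first, regardless of when posted.
    return sorted(posts, key=relevance_score, reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    # Chronological ranking: newest first, ignoring engagement entirely.
    return sorted(posts, key=lambda p: p.age_hours)
```

Under a scheme like this, a post that keeps attracting engagement keeps outranking newer material, so emotionally charged content can dominate a feed for days.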
But could it be argued that, in this instance, the algorithm had a downside? Many people complained of being overwhelmed by over-exposure to the graphic images that told the story of Floyd’s final moments. These images clogged our timelines and newsfeeds, and there was simply no escape. Twitter rage evolved into street protests and then into violence in major cities including London, Paris and New York. Activism and social justice are key pillars for creating a fairer, more equal society for all. And whilst technology remains a vital tool in these efforts, many people are becoming increasingly aware of the need to guard against over-exposure that can have a lasting impact on mental health and wellness.
The right to come together and peacefully express our views is a basic constitutional and human rights provision in modern society.
On August 7th, the New York Police Department sent a large team of officers, including some in riot gear, to the home of 28-year-old activist Derrick Ingram. He had been accused of assault after allegedly shouting into a police officer’s ear with a bullhorn. A standoff ensued, live-streamed by Ingram on Instagram, during which he repeatedly asked officers to produce a search warrant. They were not able to do so. After protestors supporting Ingram flocked to the street, the NYPD stood down and Ingram turned himself in to the police the next day. It later emerged that the NYPD had used facial recognition software to track down the Black Lives Matter activist from an Instagram post.
The right to come together and peacefully express our views is a basic constitutional and human rights provision in modern society. The state should not interfere with this right in any way simply because it disagrees with a particular stance. With images from the protests being widely shared on social media to raise awareness, some police departments took the opportunity to add the people featured to their facial recognition databases. Automatic identification of individuals involved in the #BlackLivesMatter campaign led to subsequent arrests in an effort to suppress protest activity in several US cities.
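Part of what alarms civil liberties advocates is how technically trivial this kind of matching has become. The sketch below uses the open-source face_recognition library to compare a reference photo against an image from a social media post; the file names are placeholders, and this illustrates the general technique, not the NYPD’s actual system:

```python
import face_recognition  # open-source wrapper around dlib's face embeddings

# Placeholder file names: a known reference photo and an image
# taken from a social media post.
known_image = face_recognition.load_image_file("reference_photo.jpg")
unknown_image = face_recognition.load_image_file("social_media_post.jpg")

known_encodings = face_recognition.face_encodings(known_image)
unknown_encodings = face_recognition.face_encodings(unknown_image)

if known_encodings and unknown_encodings:
    # compare_faces reports True where the distance between the 128-d
    # embeddings falls below a tolerance (0.6 by default) -- in effect
    # a simple nearest-neighbour test.
    matches = face_recognition.compare_faces(known_encodings, unknown_encodings[0])
    print("Possible match" if any(matches) else "No match")
```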
In a further development, researchers from Stanford created an AI-powered bot that automatically covers up the faces of #BlackLivesMatter protesters in photos. This approach replaces the faces of protestors with the black fist emoji that has become a symbol of the #BlackLivesMatter movement. The hope is that such a solution will be built into social media platforms, but so far there has been no indication from the tech giants that this is on the horizon.
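The general approach can be approximated in a few lines: detect each face’s bounding box, then paste an opaque emoji over it. The sketch below uses the face_recognition and Pillow libraries and illustrates the technique in general, not the Stanford team’s actual tool:

```python
import face_recognition
from PIL import Image

def anonymise(photo_path: str, emoji_path: str, out_path: str) -> None:
    # Detect face bounding boxes, then paste an opaque emoji over each.
    image = face_recognition.load_image_file(photo_path)
    faces = face_recognition.face_locations(image)  # (top, right, bottom, left)

    photo = Image.open(photo_path).convert("RGBA")
    emoji = Image.open(emoji_path).convert("RGBA")  # e.g. a raised-fist image

    for top, right, bottom, left in faces:
        patch = emoji.resize((right - left, bottom - top))
        photo.paste(patch, (left, top), patch)  # emoji's alpha masks the face

    photo.save(out_path)  # save as PNG to keep the opaque overlay intact
```

Because the overlay replaces pixels outright rather than blurring them, this style of redaction is also much harder to reverse than a simple blur.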
Artificial intelligence can only be founded on human intelligence.
In 2017, a video went viral of an automatic soap dispenser that would only release soap onto white hands. The flaw occurred because the product was never tested on darker skin tones. A study in March 2019 found that driverless cars are more likely to drive into black pedestrians, again because the detection technology had by default been designed and tested around lighter skin.
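As a toy illustration of how such a failure can arise, consider a dispenser that fires only when reflected infrared light crosses a fixed threshold. If that threshold is calibrated only on lighter, more reflective skin, darker skin may simply never trigger it. The numbers below are invented for illustration:

```python
# Toy model of a reflectance-triggered sensor: the dispenser fires only
# when reflected IR exceeds a threshold calibrated on light skin.
TRIGGER_THRESHOLD = 0.45  # invented calibration value

def hand_detected(reflectance: float) -> bool:
    return reflectance > TRIGGER_THRESHOLD

print(hand_detected(0.60))  # lighter skin reflects more IR -> True
print(hand_detected(0.30))  # darker skin reflects less IR -> False, no soap
```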
Artificial intelligence can only be founded on human intelligence. Humans program machines to behave in certain ways, which means they may be passing on their own unconscious biases. The tech and computer industry is still overwhelmingly dominated by white men. In 2016, there were ten large tech companies in Silicon Valley that did not employ a single black woman. When there is a lack of diversity in the room, the machines end up learning the same biases and internal prejudices as the majority who develop them.
The social media algorithms, facial recognition and digital tools utilised in the aftermath of #JusticeForGeorgeFloyd and #BlackLivesMatter have highlighted the need to address machine bias. One emerging initiative is FAT ML (Fairness, Accountability and Transparency in Machine Learning), which aims to create standards for building better algorithms that can help regulators and others uncover the undesirable and unintended consequences of machine learning, and to contribute to the kind of equal and fair society we would all like to be a part of.
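One of the simplest checks discussed in this fairness literature is comparing a model’s positive-outcome rate across demographic groups, often called demographic parity or disparate impact. Here is a minimal sketch with invented toy data:

```python
from collections import defaultdict

def selection_rates(records):
    # records: (group_label, model_said_yes) pairs -- toy audit data.
    yes, total = defaultdict(int), defaultdict(int)
    for group, selected in records:
        total[group] += 1
        yes[group] += int(selected)
    return {g: yes[g] / total[g] for g in total}

def disparate_impact(records, group_a, group_b):
    # Ratio of positive-outcome rates between two groups; values well
    # below 1.0 flag potential bias (0.8 is a common rule of thumb).
    rates = selection_rates(records)
    return rates[group_a] / rates[group_b]

audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
print(disparate_impact(audit, "B", "A"))  # ~0.5 -> worth investigating
```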