According to Joris Lechêne, a TikToker and self-professed “nerd” and activist, “automation is never diversity friendly”. His words were prompted by an incident in which his passport photographs were rejected by Her Majesty’s Passport Office in the UK because its facial recognition technology failed to identify his features.
Although he had followed all the rules precisely, according to Lechêne the photo was rejected “because the artificial intelligence software wasn’t designed with people of my phenotype in mind”. He adds: “It tends to get very confused by hairlines that don’t fall along the face and somehow mistakes it for the background, and it has a habit of thinking that people like me keep our mouths open.”
Though it may seem inconsequential on the surface, he points out that this is only one example of how racial bias can be built into AI and automation software, suggesting that “robots are just as racist as society is”.
“Society is heavily skewed towards whiteness and that creates an unequal system and that unequal system is carried through the algorithm,” explains Lechêne.
His experience is far from an isolated instance of such AI bias. In 2015, software engineer Jacky Alciné discovered that the image recognition algorithms used in Google Photos were classifying his black friends as “gorillas”. Then, in 2018, the American Civil Liberties Union found that Amazon’s face surveillance technology incorrectly matched 28 members of Congress – disproportionately people of colour – with mugshots of people who had been arrested. And most recently, the National Institute of Standards and Technology (NIST) in the US released a report which found that in “one-to-one” matching – checking whether a photo matches a different photo of the same person – many facial recognition algorithms falsely matched African-American and Asian faces 10 to 100 times more often than Caucasian faces.
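To put the NIST finding in concrete terms, a disparity audit of this kind boils down to comparing false match rates across demographic groups. The sketch below, written in Python with invented scores, group labels and threshold, shows the basic arithmetic: count how often a system wrongly declares two different people to be the same person, broken out per group.

```python
# A minimal sketch of a demographic disparity audit for one-to-one face
# matching: given similarity scores for impostor pairs (photos of two
# DIFFERENT people), compute the false match rate per group at a fixed
# threshold. All numbers and group names here are hypothetical.

from collections import defaultdict

THRESHOLD = 0.8  # assumed decision threshold for "same person"

# (group, similarity score) for impostor pairs -- toy data
impostor_pairs = [
    ("group_a", 0.91), ("group_a", 0.40), ("group_a", 0.85),
    ("group_b", 0.30), ("group_b", 0.25), ("group_b", 0.82),
]

counts = defaultdict(lambda: [0, 0])  # group -> [false matches, total]
for group, score in impostor_pairs:
    counts[group][1] += 1
    if score >= THRESHOLD:  # system wrongly says "same person"
        counts[group][0] += 1

for group, (fm, total) in sorted(counts.items()):
    print(f"{group}: false match rate = {fm / total:.2f}")
```

A large gap between the per-group rates printed here is precisely the kind of 10-to-100-fold disparity the NIST report describes.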
“Biases are inherently invisible to those who bear them, so external, diverse and representative voices need to be present before, during and after the development process, so that we can ensure our systems will carry respect, diversity and inclusion forward,” explains Carmen Del Solar, senior conversational AI engineer at Artificial Solutions.
With the increasing use of AI and automation across the telecoms space, from chatbots and conversational AI to network management and emerging technologies such as 5G, it is important to address these issues of bias at every layer. But as always, it starts with the data.
“The steps taken within the data governance process are critical to ensuring non-biased and representative outcomes,” says Oded Karev, general manager of NICE Robotic Automation Solutions.
“One such step is ensuring that AI systems are specifically designed to disregard group identities entirely. Robots do not need to consider personal attributes or protected statuses because an individual’s colour, sex, gender, age or other characteristics are irrelevant to the operation of an AI tool and should have no bearing on outcomes.”
According to Louisa Gregory, vice president of culture, change and diversity at Colt Technology Services, the “training data fed to AI systems is often based on historical data collected over a period of time. In many cases, this data contains biases or reflects structural inequality. For example, the data for a recruitment tool could come from CVs spanning the last 15 years, the majority of which is likely to be those of white cis males.”
Another consideration is that the data used in AI programming is not always collected by the data scientists and machine learning practitioners themselves and “therefore they don’t know how it was collected, generated, pre-processed or cleaned,” explains Keeley Crockett, IEEE member and professor in computational intelligence at Manchester Metropolitan University.
“There are also debates as to whether data sets should contain special category data such as gender, ethnicity, religion or biometrics, which could lead to a model which directly discriminates,” she says.
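In practice, excluding special category data in the way Karev advocates can be as simple as dropping those columns before the data ever reaches a model. The sketch below uses pandas with hypothetical column names; note that this prevents direct discrimination only, since proxies for those attributes can remain elsewhere in the data.

```python
# A minimal sketch of stripping special category data from a training set.
# Column names are hypothetical; pandas is assumed to be available.

import pandas as pd

SPECIAL_CATEGORY = ["gender", "ethnicity", "religion", "age"]

def strip_protected(df: pd.DataFrame) -> pd.DataFrame:
    """Drop protected-status columns so they cannot influence outcomes.

    Note: this removes *direct* use of these attributes only; proxies
    (e.g. postcode) may still encode them indirectly.
    """
    present = [c for c in SPECIAL_CATEGORY if c in df.columns]
    return df.drop(columns=present)

applicants = pd.DataFrame({
    "years_experience": [3, 7],
    "gender": ["f", "m"],      # special category -- removed below
    "ethnicity": ["x", "y"],   # special category -- removed below
})
print(strip_protected(applicants).columns.tolist())  # ['years_experience']
```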
Claire Woodcock, senior manager of machine learning & biometrics at Onfido, points to varied and broadly skilled teams as the answer, saying: “Cross-functional teams with deeply technical specialisms as well as skills in user experience and policy are key to delivering this. Developing AI with a multidisciplinary lens ensures a better user experience as well as improved AI performance.”
Overall, the best approach seems to be to treat the data as a living organism: constantly growing, changing and, most importantly, being updated to reflect the society we live in.
“The most important thing is to continually review the data,” says Henry Brown, director of data & analytics at Ciklum.
“In science, this can be considered a ‘systematic error’, which takes a huge amount of effort to quantify and rectify. Teams should never just assume their data is unbiased, and should endeavour to critically review the data as much as possible – looking for alternative sources that may help them, at the very least, to verify the integrity of their input data sets, if not entirely augment them.”
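One way such a continual review might look in code is to periodically compare the make-up of a training set against an external reference source and flag any group that falls badly short of its expected share. The thresholds, group labels and counts in the sketch below are illustrative assumptions, not figures from any of the sources quoted here.

```python
# A minimal sketch of a recurring training-data review: compare observed
# group shares against a reference distribution and warn on big shortfalls.
# All group names, counts and thresholds are hypothetical.

reference_share = {"group_a": 0.30, "group_b": 0.50, "group_c": 0.20}

training_counts = {"group_a": 800, "group_b": 2900, "group_c": 300}
total = sum(training_counts.values())

TOLERANCE = 0.5  # flag groups at less than half their expected share

for group, expected in reference_share.items():
    observed = training_counts.get(group, 0) / total
    if observed < expected * TOLERANCE:
        print(f"WARNING: {group} under-represented "
              f"({observed:.1%} observed vs {expected:.1%} expected)")
```

Run on a schedule, a check like this turns Brown’s “systematic error” from an invisible assumption into a number a team has to look at.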
Objective decision-making is, on the one hand, what makes AI and automation the epitome of fairness; on the other, their inability to recognise nuance, account for context and display emotion leaves many questioning whether they are best placed for more human scenarios.
“The ability to communicate naturally (understand context, share a common ground with our interlocutors, ask for clarifications, exert common sense, etc) is one that automated systems have not yet mastered,” explains Del Solar.
“Despite the recent advances in natural language processing (NLP), dialogue systems are still not fully automated.”
The solution, according to Crockett, is for businesses to “develop co-creation and co-production strategies with all stakeholders, including the public voice. The involvement of the public in the AI development lifecycle, from conceptualisation to deployment, allows public scrutiny which contributes to building consumer trust.”
Part of the “human factor” also involves evolving AI and automation to meet the unique needs of the individual, especially when it comes to delivering the highest levels of customer service. The key could lie in using it for manual, repetitive processes only.
“If customers want better service and more empathetic engagements with a brand, AI can help by automating a lot of the manual, monotonous tasks that take up employees’ time and freeing them up to address those demands directly,” says Karev.
Brown says that monitoring whether, and why, a customer chooses to interact with tools such as chatbots is crucial to developing the most effective solutions in the long run.
“Continually learning, and providing customers with easy methods of giving feedback. If they are to engage with a chatbot, then businesses must ensure that the customer can quickly be passed on to a human assistant,” he explains.
“We should also log the reasons given by the customer as to why they didn’t want to engage with an AI solution.”
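A stripped-down version of the escalation pattern Brown describes might look like the sketch below: the bot watches for opt-out phrases, hands the conversation to a human on request, and logs the customer’s own words for later review. The phrases, replies and log format are all hypothetical.

```python
# A minimal sketch of chatbot-to-human handoff with opt-out logging.
# Trigger phrases, replies and the log format are illustrative assumptions.

import json
import time

OPT_OUT_PHRASES = {"human", "agent", "person", "speak to someone"}

def handle_message(text: str, session_id: str) -> str:
    lowered = text.lower()
    if any(phrase in lowered for phrase in OPT_OUT_PHRASES):
        log_opt_out(session_id, reason=text)  # keep the customer's own words
        return route_to_human(session_id)
    return "Bot reply goes here"  # normal automated handling

def log_opt_out(session_id: str, reason: str) -> None:
    # Append why the AI path was rejected, so teams can review it later,
    # as the article recommends.
    record = {"session": session_id, "reason": reason, "ts": time.time()}
    with open("opt_out_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

def route_to_human(session_id: str) -> str:
    return "No problem - connecting you to a member of the team now."

print(handle_message("Can I speak to a human please?", "session-42"))
```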
In its report, Fast forward to the past – Is automation making organisations less diverse?, Deloitte found that while intelligent automation can transform work practices, it also creates the risk of reinforcing structural inequality – as it is typically lower-paid jobs comprising routine tasks that are most at risk of being automated.
“Throughout history, industrialisation and technology have affected workers, and AI will be no different. In the long run, when we have adapted and tamed these technologies, societies may very well be better off; but in the short term, certain disadvantaged groups may be worse off as they find it harder to adapt,” says Peter van der Putten, director of decisioning and AI solutions at Pegasystems.
But AI also presents wide-ranging opportunities for some disadvantaged groups – for example, van der Putten believes that new jobs may become more accessible to people with disabilities, through what he calls AI-enhanced extensions.
“Ironically, if we want to counter the evil side effects of a technology, we will need to make it our own and democratise it,” he adds.
Taking a slightly more proactive approach, Crockett believes that those who lose jobs through this development should have the opportunity to retrain, reskill and find employment in different fields, not least because job loss takes a toll on an individual’s mental health and wellbeing.
“They should also have access to free educational courses to help improve their digital literacy, along with a mentoring scheme to help them develop confidence. It costs money, but we need to provide the most disadvantaged with opportunities, to ensure they are treated equally in society as everyone else.”
As published in Ericsson’s Employing AI techniques to enhance returns on 5G network investments report, 53% of service providers expect to have fully integrated some aspect of AI into their networks by the end of 2020, with a further 19% expecting to do the same within three to five years – making it as relevant as ever to ensure our networks have the right intent and priorities to best serve their users.
“Technology is neither good nor bad – nor is it neutral, as historian Melvin Kranzberg said – and for AI it is no different. You cannot say that AI algorithms in general are good or evil: the same algorithms that are used to detect cancer could be used for some bad purpose,” explains van der Putten.
In his view, it all boils down to the purpose for which an AI system is built: “Is it built to gather faces of peaceful protesters without their consent? To spam customers with offers that only serve a company, not the customer?”
He says that new AI regulation is on the horizon that will assess all types of factors contributing to building global trust and “enabling responsible use of it for the benefit of not just companies but also customers and citizens”.
While none of us can, or should want to, shy away from progress, it is important that all changes are made with the needs and concerns of everyone in mind, a process that can only be achieved through an open cycle of collaboration, feedback and communication.