Why diversity is key to a successful AI strategy


This article originally appeared in Issue 11 of IT Pro 20/20.

We’ve seen no shortage of scandals when it comes to machine learning and artificial intelligence (AI). In the past few months alone, Microsoft’s robo-journalists illustrated an article about racism with an image of the wrong band member of Little Mix, the UK government’s A-Level algorithm penalised students based on the performance of past students, and, most recently, Twitter’s AI-powered image-cropping tool appeared to favour white faces over black faces.

However, AI bias isn’t an issue limited to big-name companies and high-profile scandals. A recent Capgemini report that surveyed 800 organisations and 2,900 consumers revealed that 90% of organisations are aware of at least one instance in which an AI system resulted in ethical issues for their business.

What’s more, the findings show that while two-thirds (68%) of consumers expect AI models to be fair and free of bias, only 53% of businesses have a leader responsible for the ethics of their AI systems, such as a chief ethics officer. Even fewer – just 46% – have the ethical implications of their AI systems independently audited.

It’s clear that, with AI becoming embedded in all aspects of our lives, companies need to do more to ensure their systems are free of bias, and even find ways to use the technology to mitigate harmful biases and make fairer business decisions.

Team building

So how do we do that? It starts by building a diverse team, something the industry is still failing to do; according to research published by the AI Now Institute, 80% of AI professors are men, and only 15% of AI researchers at Facebook and 10% of AI researchers at Google are women.

Jen Rodvold, head of digital ethics and tech for good at Sopra Steria, comments: “Diversity is key not only to driving a successful AI strategy, but essential to a business’ bottom line. A diverse workforce will offer a range of different perspectives, flag any bias involved in the development process and help to interrogate wider organisational processes that could be perpetuating bias and impacting the way your technology is developed in unforeseen ways.”

This is a viewpoint shared by Andrew Grant, senior product director for AI at Imagination Technologies, who says that ensuring a diverse set of data scientists is critical to developing ethical AI.

“To ensure best practice when establishing data sets for the training of an AI, there firstly needs to be a diverse set of data scientists collecting and analysing the data. No one section of training an AI should be overseen by an individual; by cross-checking work, individual bias can be much more effectively removed,” he says.

Diversify your data

Diverse datasets are also required: train a machine learning model on historic data – such as data showing that men are more commonly promoted to senior roles, or that most technology industry workers are white – and encoding those biases into the AI is nearly inescapable.
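To make that concrete, here’s a minimal, hypothetical sketch in Python – using scikit-learn and entirely synthetic data, with numbers and features invented for illustration – showing how a model trained on historic promotion records reproduces the gender gap baked into them:

```python
# A hypothetical sketch: synthetic "historic" promotion data in which
# gender correlated with promotion outcomes independently of skill.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)    # 0 = women, 1 = men (synthetic labels)
skill = rng.normal(0, 1, n)       # identically distributed for both groups

# Historic promotions favoured men regardless of skill.
promoted = (skill + 1.5 * gender + rng.normal(0, 1, n)) > 1.0

X = np.column_stack([gender, skill])
model = LogisticRegression().fit(X, promoted)

# Two equally skilled candidates who differ only in gender.
p_woman = model.predict_proba([[0, 1.0]])[0, 1]
p_man = model.predict_proba([[1, 1.0]])[0, 1]
print(f"P(promote | woman): {p_woman:.2f}")  # noticeably lower...
print(f"P(promote | man):   {p_man:.2f}")    # ...the model learned the bias
```

The two candidates are identical on merit; the model’s output differs only because the history it learned from did.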

Caryn Tan, responsible AI manager at Accenture, tells IT Pro: “Organisations building and designing AI must remember that it’s limited by the information it is fed. An algorithm can’t tell when something is unfair; it just picks up historical patterns. When we don’t take steps to mitigate this, it can result in bad feedback loops that can trap people based on their origins, history or even a stereotype. So organisations must take proactive steps to address potential bias before it has the chance to manifest.”
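The feedback loop Tan describes can be illustrated with another hypothetical sketch – the lending scenario, rates and threshold are all invented for illustration. A model only gathers new data on the people it approves, so a group that starts out under-rated is never given the chance to prove the estimate wrong:

```python
# A hypothetical feedback-loop sketch: both groups repay at the same
# true rate, but the lender only observes applicants it approves.
import random

random.seed(1)
TRUE_REPAY_RATE = 0.8        # identical for both groups
APPROVAL_THRESHOLD = 0.6

# Historic records: group_b was under-observed and looks riskier.
history = {"group_a": [True] * 16 + [False] * 4,  # estimate 0.80
           "group_b": [True] * 5 + [False] * 5}   # estimate 0.50

for _ in range(1000):
    for group, records in history.items():
        estimate = sum(records) / len(records)
        if estimate >= APPROVAL_THRESHOLD:
            # Approved applicants generate new repayment data...
            records.append(random.random() < TRUE_REPAY_RATE)
        # ...rejected ones generate none, so group_b's poor estimate
        # is never corrected, however well it would actually repay.

for group, records in history.items():
    print(group, round(sum(records) / len(records), 2))
# group_a converges towards 0.8; group_b stays stuck at 0.5.
```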

Anna Brailsford, CEO at Code First Girls, adds that in order to ensure models are being trained using a diverse data set, it’s critical that “diversity and inclusion are a part of the foundation of business decision-making”.

“In the tech industry, it’s been well documented that machine learning and AI systems are inherently biased – a result of the data set used to train their intelligence,” she tells IT Pro. “Researchers at Harvard have found that companies are using flawed historical data sets to train their AI for recruitment purposes, meaning women and people of colour are being discriminated against before they’ve even made it to the interview.

“A top-down approach isn't the answer and could potentially further entrench existing AI bias. Instead, businesses need to treat diversity and inclusion as an ongoing learning process.”

Transparency is key

Of course, businesses must also consider AI guidelines and transparency, particularly in the face of increased regulatory scrutiny. The European Commission, for example, has issued guidelines on the key ethical principles that should be used for designing AI applications, while the US Federal Trade Commission (FTC) in early 2020 called for “transparent AI”.

The latter stated that when an AI-enabled system makes an adverse decision, such as declining an application for a credit card, then the organisation should show the affected consumer the key data points used in arriving at the decision and give them the right to change any incorrect information.
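The FTC guidance doesn’t prescribe a technique, but for a simple linear scoring model, surfacing those key data points might look something like this hypothetical sketch – the features, weights and applicant values are all invented for illustration:

```python
# A hypothetical sketch of surfacing the "key data points" behind an
# adverse decision for a simple linear credit-scoring model.
weights = {                      # assumed coefficients, illustrative only
    "income": 0.6,
    "credit_history_years": 0.4,
    "missed_payments": -1.2,
    "credit_utilisation": -0.8,
}

def adverse_reasons(applicant: dict, top_n: int = 2) -> list:
    """Return the features that pushed the score down the most."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    negative = [(f, c) for f, c in contributions.items() if c < 0]
    return sorted(negative, key=lambda fc: fc[1])[:top_n]

applicant = {"income": 0.3, "credit_history_years": 0.2,
             "missed_payments": 1.0, "credit_utilisation": 0.9}

for feature, impact in adverse_reasons(applicant):
    print(f"{feature}: contributed {impact:+.2f} to the score")
# The applicant can then check these specific inputs and correct any
# that are factually wrong, as the FTC guidance envisages.
```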

Tom Winstanley, VP of new ventures and innovation at NTT DATA UK, comments: “As AI scales up across the economy, it is essential that businesses have robust, ethical standards in place, enshrining AI guidelines at the heart of their operations.

“Transparency is critical: companies cannot rely on a ‘black-box’ dataset and should be open about how their AI is trained and what standards they have in place to ensure it is used responsibly. It is for this reason that NTT DATA publicly announced its own AI ethics guidelines last year.”

Rodvold adds: “Ensuring transparency of your technology, alongside strong diversity practices, will work towards eliminating bias and ensure you build public trust. A holistic Digital Ethics approach, which considers the intersection of diversity, transparency, privacy, and safety in the development of AI, will ensure you take customers on your AI journey and deliver effective, sustainable technology.”

With AI becoming increasingly ubiquitous in all walks of life, and the problem of built-in bias in these systems now well documented, it’s vital that businesses act to ensure their software and processes are free of bias. Thankfully, while AI adoption has soared during the COVID-19 pandemic, it’s still early enough to do something about it.

That means the future of AI can still be shaped, for the better, through strategic diversity efforts.

Carly Page

Carly Page is a freelance technology journalist, editor and copywriter specialising in cyber security, B2B, and consumer technology. She has more than a decade of experience in the industry and has written for a range of publications including Forbes, IT Pro, the Metro, TechRadar, TechCrunch, TES, and WIRED, as well as offering copywriting and consultancy services. 

Prior to entering the weird and wonderful world of freelance journalism, Carly served as editor of tech tabloid The INQUIRER from 2012 to 2019. She is also a graduate of the University of Lincoln, where she earned a degree in journalism.

You can check out Carly's ramblings (and her dog) on Twitter, or email her at hello@carlypagewrites.co.uk.