
IBM to snuff out AI bias with updated Watson OpenScale

Watson OpenScale now has recommended bias monitors to automatically detect gender and ethnic bias


IBM has added a feature to its Watson OpenScale software that detects and mitigates gender and ethnic bias.

These recommended bias monitors are the latest addition to Watson OpenScale, which launched in September 2018 to give business users and non-data scientists the ability to monitor their AI and machine learning models and better understand how they perform. The software monitors for algorithmic bias and provides explanations for AI outputs.
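To make the idea of a bias monitor concrete, the sketch below computes disparate impact, a standard fairness metric that compares favourable-outcome rates between two groups. It is purely illustrative and does not use the Watson OpenScale API; the group labels, data and threshold are assumptions.

```python
# Illustrative only: a minimal disparate-impact check, one common way a bias
# monitor can quantify unfairness. Not the Watson OpenScale API; the group
# labels, sample data and 0.8 threshold are assumptions for illustration.

def disparate_impact(outcomes, groups, privileged, unprivileged):
    """Ratio of favourable-outcome rates between two groups.

    outcomes: list of 0/1 model decisions (1 = favourable, e.g. loan approved)
    groups:   list of group labels aligned with outcomes (e.g. 'male', 'female')
    """
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected) if selected else 0.0

    priv_rate = rate(privileged)
    return rate(unprivileged) / priv_rate if priv_rate else 0.0


# Example: approvals skewed towards the 'male' group.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["male", "male", "male", "male", "female",
          "female", "female", "female", "male", "female"]

ratio = disparate_impact(outcomes, groups, privileged="male", unprivileged="female")
if ratio < 0.8:  # the commonly cited "four-fifths rule" threshold
    print(f"Potential gender bias detected (disparate impact = {ratio:.2f})")
```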

Until now, users manually selected which features or attributes of a model to monitor for bias in production, based on their own knowledge. With the recommended bias monitors, IBM says, Watson OpenScale will automatically identify whether known protected attributes, including sex, ethnicity, marital status and age, are present in a model and recommend that they be monitored.
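Conceptually, the recommendation step amounts to scanning a model's features for known protected attributes. The sketch below shows one way that could look; the attribute list and name-matching logic are assumptions for illustration, not IBM's implementation.

```python
# Illustrative sketch of the "recommended monitors" idea: match a model's
# feature names against a list of known protected attributes and flag any
# matches for monitoring. The list and matching rules are assumptions.

PROTECTED_ATTRIBUTES = {"sex", "gender", "ethnicity", "race",
                        "marital_status", "age"}

def recommend_bias_monitors(feature_names):
    """Return the model features that look like protected attributes."""
    normalised = {name.lower().replace(" ", "_") for name in feature_names}
    return sorted(normalised & PROTECTED_ATTRIBUTES)


features = ["Age", "Income", "Marital Status", "Postcode", "Gender"]
print(recommend_bias_monitors(features))
# -> ['age', 'gender', 'marital_status']
```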

What's more, IBM says it is working with the regulatory compliance experts at Promontory to continue expanding this list to cover the sensitive demographic attributes most commonly referenced in data regulation.

"As regulators begin to turn a sharper eye on algorithmic bias, it is becoming more critical that organisations have a clear understanding of how their models are performing and whether they are producing unfair outcomes for certain groups," said Susannah Shattuck, the offering manager for Watson OpenScale.

Artificial intelligence is a rapidly advancing sector, particularly in the UK, which is often cited as one of its leading developers. That growth, however, is frequently offset by concerns that the technology is being developed in ways that accentuate inequality.

In March, the Centre for Data Ethics and Innovation (CDEI) announced it had joined forces with the Cabinet Office's Race Disparity Unit to investigate potential bias in algorithmic decision-making.

As algorithms become more commonplace in society, their potential to help people increases. However, recent reports have shown that human bias can creep into algorithms, ultimately harming the very people they are meant to help.
