AI bias must be tackled to avoid it 'unknowingly' harming people


While AI hasn't quite come of age, it has now reached a point where most people understand what its benefits are.

However, for all the benefits on offer, companies looking to take advantage of AI must still put ethical considerations and the avoidance of bias on the priority list, according to a panel session held at Salesforce's Dreamforce conference in San Francisco this week.

"Accuracy levels are so high now that the kind of things you can do in one year were not possible years ago with hundreds of people," said Richard Socher, chief scientist at Salesforce.

"Now that this stuff is working, we really need to think about the ethical implications."

Kathy Baxter, an architect in Salesforce's Ethical AI Practice, concurred on the need to ensure such sophisticated technologies do more good than harm, adding: "How do we rebuild software that truly has a positive impact on the people it serves?"

"AI can do so much tremendous good, but it can have the potential to unknowingly harm individuals. We can't expect AI to magically exclude bias in society bias is baked in."

Baxter continued: "How do we represent the world that we want and not the world as it is?"

Given that AI essentially needs to learn, it will take its lead from human beings, so it is the responsibility of humans to act ethically and do the right thing when it comes to AI development, agreed the panel, moderated by Salesforce futurist Peter Schwartz.

Baxter stressed, in particular, the need to ensure that people are not adversely impacted by factors they cannot change or control, such as gender or race.

The panel highlighted that it will be just as important to educate people on the shortcomings of AI and the potential for bias as it is to promote the benefits of smart systems. Ultimately, as with any technology today, the results you get out are only as good as the data you put in, and the same is true of AI as it stands now.

"AI will have a bigger impact than the internet on humanity," Socher added. "AI will pick up bias and either amplify it or keep it going. We have to educate people that AI is only as good as the training data."

When it comes to that so-called training data, Baxter said Salesforce recognised its role in boosting awareness and education levels. Using Trailhead, as well as other AI-focused resources, the cloud firm hopes to help open people's eyes to the potential and the pitfalls so they can make informed decisions.

"The quality of that training data is key. It helps customers see and understand the data so they can identify if there is any bias there if there are any errors, so they can correct it," Baxter added.

"Ethics is a mindset, not a checklist and we need to instil it early on."

Maggie Holland

Maggie has been a journalist since 1999, starting her career as an editorial assistant on then-weekly magazine Computing, before working her way up to senior reporter level. In 2006, just weeks before ITPro was launched, Maggie joined Dennis Publishing as a reporter. Having worked her way up to editor of ITPro, she was appointed group editor of CloudPro and ITPro in April 2012. In 2016, she became editorial director and took on responsibility for ChannelPro.

Her areas of particular interest, aside from cloud, include management and C-level issues, the business value of technology, green and environmental issues, and careers, to name but a few.