
Google AI panel faces backlash as staff protest right-wing council member

Appointment of Kay Coles James goes against the company's AI ethics, Google's employees declare


Google's employees have written an open letter demanding the removal of one of the AI council members over her track record on LGBT and immigration rights.

Kay Coles James, president of the right-wing think tank the Heritage Foundation, was announced last week as one of the members of Google's Advanced Technology External Advisory Council (ATEAC), but the appointment has angered many Google employees, who say she is vocally anti-trans, anti-LGBTQ and anti-immigration.

In a letter posted on Medium as well as circulated internally, Googlers Against Transphobia and Hate said her record speaks for itself, over and over again.

"In selecting James, Google is making clear that its version of 'ethics' values proximity to power over the wellbeing of trans people, other LGBTQ people and immigrants. Such a position directly contravenes Google's stated values," the collective said.

Those stated values, announced by Google in June 2018, include 'avoid creating or reinforcing unfair bias'. Google said it wanted to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief. The appointment of James, however, suggests the company is saying one thing and doing another, whether intentionally or not.

It follows a similar issue raised by last year's women's walkout, in which the company said it supported the female staff who opposed its handling of sexual harassment cases, but was later found to have tried to block the protest.

This incident, however, points to a deeper issue, particularly as many examples of artificial intelligence have been found to exhibit unfair bias. From AI that doesn't recognise trans people, doesn't 'hear' more feminine voices and doesn't 'see' women of colour, to AI used to enhance police surveillance, profile immigrants and automate weapons, those who are most marginalised are potentially most at risk.
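To make 'unfair bias' concrete, one common way practitioners quantify it is by comparing a model's accuracy across demographic groups. The sketch below is purely illustrative: the per_group_accuracy helper, the group labels and the toy data are all hypothetical stand-ins, not drawn from any of the systems mentioned above.

```python
# Illustrative sketch only: measuring one narrow notion of "unfair bias" --
# the gap in a classifier's accuracy across demographic groups.
# The data and groups here are hypothetical placeholders, not real systems.

from collections import defaultdict

def per_group_accuracy(predictions, labels, groups):
    """Return the classifier's accuracy broken down by demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical audit data for a recognition model's hits and misses.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
labels = [1, 1, 1, 1, 0, 1, 1, 1]
groups = ["A", "A", "B", "A", "B", "B", "A", "B"]

by_group = per_group_accuracy(preds, labels, groups)
print(by_group)  # {'A': 1.0, 'B': 0.25} -- a large accuracy gap

# A simple disparity metric: the worst-case gap between any two groups.
gap = max(by_group.values()) - min(by_group.values())
print(f"accuracy gap: {gap:.2f}")  # values near 0 indicate parity
```

Disparities like the 0.75 gap in this toy example are the measurable form of the failures described above, such as facial-recognition systems that perform markedly worse on women of colour.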

