Tech workers fear generative AI could "drive women out of the workforce"

Female office worker working at a computer behind a desk
(Image credit: Getty Images)

A quarter of workers fear that AI will drive women out of the workforce, according to new research from Code First Girls.

A survey from the social enterprise found that more than eight in ten workers are using ChatGPT in their workplace across a variety of roles, including tech, content creation, graphic design, and copywriting.

Six in ten said they're using it mainly to improve productivity, with half saying it helps them work faster.

However, a surge in the use of generative AI tools has also given rise to fears over the potential impact of the technology on female staff. One-quarter of workers said they believe the use of AI will “push women out of the workforce” while nearly half agreed that the potential imbalance in tech will result in biased or discriminatory AI models.

In recruitment, researchers said, unintentional bias in AI models and algorithm development can result in a lack of diversity among shortlisted candidates, as the models are often trained on male-dominated data.

Anna Brailsford, CEO of Code First Girls, said the research highlights growing concerns among female tech workers that the emergence of generative AI tools could have a negative impact on their careers.

"The development of AI continues to demonstrate useful applications across a variety of industries and sectors. However, by not prioritizing diversity in its development, we risk building models that are inherently discriminatory against race and gender minorities,” she said.

"So with concerns about the growth of generative AI potentially pushing more women out of the workforce, businesses must work harder than ever to recruit and retain diverse tech teams that build effective and innovative AI models and wider tech products."

In a report last year, the Kenan Institute found that eight out of 10 women in the US workforce are in occupations highly exposed to generative AI-related job cuts, compared with just six out of 10 men.

Similarly, research from workforce analyst firm Revelio Labs concluded that all of the ten jobs most likely to be affected by AI, from bill and account collectors to telemarketers, are disproportionately held by women.

"Women are underrepresented in technical occupations and overrepresented in 'supporting' careers like administrative assistants. It happens to be the case that AI's abilities overlap more with support occupations," said Revelio economist Hakki Ozdenoren.

AI recruitment tools have been a contentious topic

Fears that AI recruitment tools are biased are well-founded. In 2018, Amazon was forced to pull an AI recruitment tool after it was found to be disproportionately favoring men.

Because the tool was trained on data from previous hires, most of whom were men, it learned to downgrade any resume containing the word 'women's', as the toy sketch below illustrates.
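To see how that can happen, here is a minimal, hypothetical sketch, not Amazon's actual system: a simple bag-of-words classifier (CountVectorizer plus LogisticRegression, both chosen purely for illustration) trained on a toy, male-dominated set of hiring outcomes ends up assigning a negative weight to the token 'women'.

```python
# Hypothetical illustration (not Amazon's system): a bag-of-words model
# trained on historically male-dominated hiring outcomes can learn to
# penalise tokens that correlate with female candidates.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: past resumes and whether the candidate was hired.
# Because most past hires were men, the token "women" appears mostly in
# rejected resumes - a spurious correlation, not a signal of ability.
resumes = [
    "software engineer, men's chess club captain",      # hired
    "backend developer, hiking club",                    # hired
    "data engineer, men's rowing team",                  # hired
    "software engineer, women's chess club captain",     # rejected
    "backend developer, women's coding society lead",    # rejected
    "data engineer, hiking club",                        # hired
]
hired = [1, 1, 1, 0, 0, 1]

vectorizer = CountVectorizer()  # default tokenizer splits "women's" into "women"
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight for the gendered token.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print("weight for 'women':", weights["women"])  # negative: the model downgrades it
```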

A September 2023 study by Aligned AI found that gender bias still persists in large language models (LLMs). On professional gender bias, researchers said OpenAI’s GPT-4 was the most biased of the models tested, with a score of 19.2%, followed closely by Databricks’ Dolly 2.0 and Stability AI’s StableLM-Tuned-Alpha.

Some researchers have claimed that AI can actually reduce bias in recruitment. That idea was contested in late 2022 by University of Cambridge researchers, who found that using AI to narrow candidate pools can increase uniformity rather than diversity in the workforce, because the technology is calibrated to search for the employer’s 'ideal candidate'.

"These tools are trained to predict personality based on common patterns in images of people they’ve previously seen, and often end up finding spurious correlations between personality and apparently unrelated properties of the image, like brightness," said Euan Ong, researcher on the project.

Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.