Over-reliance on ChatGPT could harm worker performance

Workers who rely heavily on generative AI tools such as ChatGPT could inadvertently harm their performance levels, according to new research from Boston Consulting Group (BCG). 

In a study of worker productivity, the consultancy found that staff who used AI to support tasks “outside the frontier” of what the tools can typically achieve, moving beyond creative ideation and into decision making, recorded a marked dip in work quality and productivity.

The research involved assigning more than 750 consultants to three groups: one given access to ChatGPT powered by GPT-3, one with no access to AI tools, and another given access to ChatGPT powered by GPT-4.

A performance baseline was established across 18 similar tasks, researchers said, with subjects asked to perform these tasks with or without generative AI tools depending on their group.

The study found that, typically, consultants using AI were “significantly more productive” and completed 12.1% more tasks on average. 

Subjects also completed tasks 25% more quickly and produced “significantly higher quality results”, the researchers found. 

“Consultants across the skills distribution benefited significantly from having AI augmentation, with those below the average performance threshold increasing by 43% and those above increasing by 17% compared to their own scores,” the researchers said. 

For tasks selected to sit “outside the frontier”, consultants using generative AI tools were found to be 19% less likely to produce correct solutions than those without, suggesting an over-reliance on the tools and highlighting their uneven capabilities.

Subjects were also more likely to assume that outputs from ChatGPT were legitimate, failing to check responses for errors, the study found.

Researchers noted that AI tools are creating a “jagged technological frontier”, with some workers relying on the tools to support or augment their capabilities even where AI offers no obvious benefit.

“Our results demonstrate that AI capabilities cover an expanding, but uneven, set of knowledge work we call a ‘jagged technological frontier’. Within this frontier, AI can complement or even displace human work; outside of the frontier, AI output is inaccurate, less useful, and degrades human performance,” researchers said. 

“Because the capabilities of AI are rapidly evolving and poorly understood, it can be hard for professionals to grasp exactly what the boundary of this frontier might be. 

“We find that professionals who skillfully navigate this frontier gain large productivity benefits when working with the AI, while AI can actually decrease performance when used for work outside of the frontier.”

Taking ChatGPT at face value

The research from BCG sheds additional light on the potential dangers for workers relying on generative AI tools for complex tasks. 

A similar study conducted by Purdue University in August 2023 found that software engineers relying on ChatGPT were at risk of overlooking inaccuracies and even seemingly obvious errors due to the perceived legitimacy of answers produced by the chatbot. 

The study found that 52% of answers to programming-related queries presented by the chatbot were inaccurate. 

More than three-quarters (77%) were deemed “verbose” by researchers.

A key talking point in the study was that subjects were inclined to believe incorrect answers due to their “comprehensiveness” and use of “well-articulated language”. 

“When a participant failed to correctly identify the incorrect answer, we asked them what could be the contributing factors,” researchers said at the time. 

“Seven out of 12 participants mentioned the logical and insightful explanations, and comprehensive and easy-to-read solutions generated by ChatGPT made them believe it to be correct.”

Of the “preferred answers” highlighted by users, more than three-quarters (77%) were found to be wrong. 

Ross Kelly
News and Analysis Editor

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.

He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.

For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.