AI is narrowing the performance gap between high- and low-skilled workers, but over-reliance is a concern – report


Workers are said to be more effective and productive when they work alongside an AI assistant, according to an influential new report.

As of 2023, AI has achieved levels of performance that surpass human capabilities across a range of tasks, said researchers in Stanford University’s latest annual Artificial Intelligence Index report.

Over the last few years, AI has beaten human performance at tasks such as image classification (in 2015), basic reading comprehension (2017), visual reasoning (2020), and natural language inference (2021), it said.

Fortunately, there are still a few task categories where AI fails to exceed human ability, "which tend to be more complex cognitive tasks, such as visual commonsense reasoning and advanced-level mathematical problem-solving", the report found.

Stanford’s report is one of the most comprehensive annual reports on progress in AI, tracking research and development trends as well as technical performance and the impact on economics and policy.

“The data is in: AI makes workers more productive and leads to higher quality work,” the report said. “In 2023, several studies assessed AI’s impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output,” it added.

The report highlighted studies by Microsoft, Harvard Business School, and the National Bureau of Economic Research, which showed using AI tools could help workers finish tasks faster.

It also noted that using generative AI could help with various legal tasks, especially contract drafting. But it also said there are widespread reports of LLM hallucinations “being especially pervasive in legal tasks”.

The researchers said AI access appears to narrow the performance gap between low- and high-skilled workers, noting the results of Harvard Business School research.

“While higher-skilled workers using AI still performed better than their lower-skilled, AI-using counterparts, the disparity in performance between low- and high-skilled workers was markedly lower when AI was utilized compared to when it was not,” it said.

However, it’s not all good news for AI at work. While AI tends to enhance quality and productivity, over-reliance on it can become an issue. The report cites research that shows when workers become complacent and overly trusting of AI’s results, their performance drops compared to workers who are more vigilant in scrutinizing AI output.

Many executives also say that AI will have a significant impact on jobs: in one survey 43% felt that staff size would decrease – and only 15% felt that generative AI would lead to increases in the number of employees.

Perhaps then it’s surprising that, considering the hype, AI was estimated to lead to productivity growth of only between 1.0% and 1.5% over a 10-year period.

What the report says about AI investment

The report noted that 2021 was a peak year for investment in AI overall – since then investment has declined significantly. However, investment in generative AI has increased massively.

In 2023, the sector secured $25.2 billion in investment, nearly nine times the 2022 level and 30 times the amount from 2019. Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds.

The private sector perhaps unsurprisingly leads AI research, with 51 significant new machine learning models developed by industry, while academia contributed only 15 (plus another 21 models resulting from industry-academia collaborations).

Part of that is because these frontier models are wildly expensive to build: OpenAI used an estimated $78 million worth of computing power to train GPT-4, while Google’s Gemini Ultra cost $191 million in compute costs.

Open source now forms the majority of AI models

However, the report also noted another shift: of the 149 foundation models released last year over half (65.7%) were open-source, compared to only 44.4% in 2022 and 33.3% in 2021. There was also a sharp 59.3% rise in the total number of GitHub open source AI projects in 2023.

Another big change is the shift towards multimodal AI, which can handle text, video, images, and audio.

“An area of AI research I find most exciting is combining these large language models with robotics or autonomous agents, marking a significant step in robots working more effectively in the real world," said Vanessa Parli, Stanford HAI director of research programs.

Deciding which of these models is best – or safest – remains extremely hard to do. AI developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks, the researchers said. This makes it hard to compare the risks and limitations of top AI models.


Another problem: as AI models continue to increase in power, they require more challenging benchmarks. AI models have reached “performance saturation” on established benchmarks such as ImageNet, SQuAD, and SuperGLUE. As a result, new benchmarks are emerging, including SWE-bench for coding, HEIM for image generation, and MMMU for general reasoning.

The report also noted that the level of AI regulation in the US is rising: in 2023, there were 25 AI-related regulations, up from just one in 2016. Perhaps linked to that, people are getting more nervous about the impact of AI. The report quotes research showing that over half of people are more concerned than excited about the impact of AI, while one survey suggests two-thirds of people think AI will dramatically affect their lives in the next three to five years.

Steve Ranger

Steve Ranger is an award-winning reporter and editor who writes about technology and business. Previously he was the editorial director at ZDNET and the editor of