IT leaders don’t trust AI agents yet – and they’re missing out on huge financial gains

While AI agents offer big financial incentives, most enterprises want to maintain strict human supervision


Despite the hype around agentic AI, only 2% of organizations have fully scaled deployment – and trust is a big stumbling block for many.

According to a new report from Capgemini, agentic AI is set to deliver up to $450 billion in economic value over the next three years through revenue gains and cost savings.

Scaled adoption shows even greater potential: organizations that have achieved it are projected to generate around $382 million on average over the next three years, while others are expected to net around $76 million.

There's no lack of executive ambition, the consultancy noted, with 93% of business leaders believing that scaling AI agents over the next 12 months will provide a huge competitive edge.

However, most organizations are still in the early stages of implementing agentic AI, with fewer than a quarter having launched pilots and only 14% having begun implementation.

The majority are in planning mode, the study noted, while nearly half still lack a clear-cut implementation strategy.

Franck Greverie, chief portfolio & technology officer at Capgemini, said the report shows that while agentic AI holds promise, successful adoption remains a multi-faceted challenge.

“The economic potential of AI agents is significant, but realizing this value depends on more than just the technology, it requires a comprehensive and strategic transformation across people, processes and systems,” Greverie said.

Trust in AI agents grows with maturity of projects

Most agentic AI deployments are still in the early stages of autonomy, Capgemini said, with only 15% of all business processes operating at semi-autonomous to fully autonomous levels.

While this is expected to rise to a quarter by 2028, most agents today function as assistants or copilots, supporting routine tasks rather than independently managing complex workflows.

The path to greater autonomy at scale is proving elusive, as a lack of trust means humans remain actively involved. Nearly three-quarters of executives said the benefits of human oversight outweigh the costs, and 90% view human involvement in AI-driven workflows as either positive or cost-neutral.

In fact, trust in fully autonomous AI agents has fallen sharply over the past year, from 43% to 27%. However, as organizations move from exploration to implementation, trust in AI agents grows.

Almost half of those in the implementation phase (47%) have an above-average level of trust, compared to 37% of those still in the exploratory phase.

Over the next 12 months, more than 60% of organizations expect to form human-agent teams, where AI agents function as subordinates or enhance human capabilities.

By doing this, they expect a 65% increase in human engagement in high-value tasks, a 53% rise in creativity, and a 49% boost in employee satisfaction.

“To succeed, organizations must remain focused on outcomes, reimagining their processes with an AI-first mindset. Central to this transformation is the need to build trust in AI by ensuring it is developed responsibly, with ethics and safety baked in from the outset," said Greverie.

"It also means reshaping organisations to support effective human-AI chemistry, creating the right conditions for these systems to enhance human judgment and help deliver superior business outcomes.”



Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.