Half of agentic AI projects are still stuck at the pilot stage – but that’s not stopping enterprises from ramping up investment
Organizations are stymied by issues with security, privacy, and compliance, as well as the technical challenges of managing agents at scale
Agentic AI projects are failing to get past the pilot stage because enterprises can't govern, validate, or safely scale them.
Adoption of this latest iteration of the technology is still in its early stages, a Dynatrace survey of 919 senior global leaders has revealed, but it is growing rapidly, with 26% of organizations now running 11 or more agentic AI projects.
Organizations expect the biggest return on investment (ROI) from agentic AI in system monitoring (44%), cybersecurity (27%) and data processing (25%), and three-quarters are expecting an increase in their AI budget next year.
However, this may all be wishful thinking: around half of these projects are stuck at the proof-of-concept (PoC) stage.
The main barriers to full implementation, respondents said, are concerns with security, privacy, or compliance, cited by 52%, followed by technical challenges to managing agents at scale, at 51%.
“Organizations are not slowing adoption because they question the value of AI, but because scaling autonomous systems safely requires confidence that those systems will behave reliably and as intended in real-world conditions,” said Alois Reitbauer, chief technology strategist at Dynatrace.
Seven in ten agentic AI-powered decisions are still verified by humans, and 87% of organizations are actively building or deploying agents that require human supervision.
Leaders said they expected a 50-50 human–AI collaboration for IT and routine customer support applications, and a 60-40 human–AI level of collaboration for business applications.
“With most enterprises now spending millions of dollars annually and planning further budget increases, agentic AI is becoming a core part of digital operations. At the same time, the data shows a clear shift underway," said Reitbauer.
"While human oversight remains essential today, organizations are increasingly preparing for more autonomous, AI-driven decision-making. The focus is now on building the trust and operational reliability needed to scale agentic AI responsibly.”
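The human-in-the-loop pattern the survey describes is often implemented as a confidence gate: the agent acts autonomously only when it is sufficiently sure, and everything else is queued for a person. The following minimal Python sketch illustrates the idea; the names, schema, and threshold are hypothetical and not drawn from the Dynatrace research.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    """A single proposed action from an AI agent (illustrative schema)."""
    action: str
    confidence: float  # agent's self-reported confidence, 0.0 to 1.0


def route_decision(decision: Decision, threshold: float = 0.9) -> str:
    """Auto-apply a decision only when confidence clears the threshold;
    otherwise route it to a human reviewer, mirroring the oversight
    model most surveyed organizations say they still rely on."""
    if decision.confidence >= threshold:
        return "auto"
    return "human_review"
```

Under a gate like this, lowering the threshold over time is one way an organization could move toward the more autonomous decision-making the survey anticipates.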
Observability is a problem with agentic AI
A recurring pain point for enterprises tinkering with agentic AI tools lies in observability, according to Dynatrace. Observability of these autonomous systems is needed across every stage of the life cycle, from development and implementation through to operationalization.
Observability is most widely applied during implementation (69%), followed by operationalization (57%) and development (54%).
“Observability is a vital component of a successful agentic AI strategy. As organizations push toward greater autonomy, they need real-time visibility into how AI agents behave, interact, and make decisions,” Reitbauer said.
“Observability not only helps teams understand performance and outcomes, but it provides the transparency and confidence required to scale agentic AI responsibly and with appropriate oversight.”
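One common way to get the kind of visibility Reitbauer describes is structured, per-step logging of agent behavior, so that every decision can be traced and audited after the fact. This stdlib-only Python sketch is illustrative; the record schema and function names are assumptions, not part of any particular observability product.

```python
import json
import logging
import time
import uuid

logger = logging.getLogger("agent.observability")


def log_agent_step(agent_id: str, step: str, inputs: dict, output: str) -> dict:
    """Emit one structured JSON record per agent step (hypothetical schema).

    A unique trace_id lets downstream tooling correlate the steps of a
    single agent run when reconstructing how a decision was made.
    """
    record = {
        "trace_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,
        "step": step,
        "inputs": inputs,
        "output": output,
    }
    logger.info(json.dumps(record))
    return record
```

In practice, records like these would feed an observability backend rather than a local logger, but the principle is the same: no agent action without a corresponding, queryable trace.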
AI pilot purgatory is a recurring theme amongst executives. A study from Informatica last year revealed that fewer than two-fifths of AI projects have successfully made it into production, while two-thirds of firms have seen fewer than half of their pilot schemes make it into the real world.
More than four in ten respondents cited a lack of data quality and readiness as the biggest obstacle to reaching production, while 43% also blamed a lack of technical maturity.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
