Half of agentic AI projects are still stuck at the pilot stage – but that’s not stopping enterprises from ramping up investment
Organizations are stymied by issues with security, privacy, and compliance, as well as the technical challenges of managing agents at scale
Agentic AI projects are failing to get past the pilot stage because enterprises can't govern, validate, or safely scale them.
Adoption of this latest iteration of the technology is still in its early stages, a Dynatrace survey of 919 senior global leaders has revealed, but is growing rapidly, with 26% of organizations running 11 or more projects.
Organizations expect the biggest return on investment (ROI) from agentic AI in system monitoring (44%), cybersecurity (27%) and data processing (25%), and three-quarters are expecting an increase in their AI budget next year.
However, this may all be wishful thinking: around half of these projects are stuck at the proof-of-concept (PoC) stage.
The main barriers to full implementation, respondents said, are concerns with security, privacy, or compliance, cited by 52%, followed by technical challenges to managing agents at scale, at 51%.
“Organizations are not slowing adoption because they question the value of AI, but because scaling autonomous systems safely requires confidence that those systems will behave reliably and as intended in real-world conditions,” said Alois Reitbauer, chief technology strategist at Dynatrace.
Seven-in-ten agentic AI–powered decisions are still verified by humans, and 87% of organizations are actively building or deploying agents that require human supervision.
Leaders said they expected a 50-50 human–AI split in decision-making for IT and routine customer support applications, and a 60-40 split in favor of humans for business applications.
“With most enterprises now spending millions of dollars annually and planning further budget increases, agentic AI is becoming a core part of digital operations. At the same time, the data shows a clear shift underway," said Reitbauer.
"While human oversight remains essential today, organizations are increasingly preparing for more autonomous, AI-driven decision-making. The focus is now on building the trust and operational reliability needed to scale agentic AI responsibly.”
Observability is a problem with agentic AI
A recurring pain point for enterprises tinkering with agentic AI tools lies in observability, according to Dynatrace. Observability of these autonomous systems is needed across every stage of the life cycle, from development and implementation through to operationalization.
Observability is most used in implementation, at 69%, followed by operationalization at 57% and development at 54%.
“Observability is a vital component of a successful agentic AI strategy. As organizations push toward greater autonomy, they need real-time visibility into how AI agents behave, interact, and make decisions,” Reitbauer said.
“Observability not only helps teams understand performance and outcomes, but it provides the transparency and confidence required to scale agentic AI responsibly and with appropriate oversight.”
AI pilot purgatory is a recurring theme among executives. A study from Informatica last year revealed that fewer than two-fifths of AI projects have successfully made it into production, while two-thirds of firms have seen fewer than half of their pilot schemes make it into the real world.
More than four-in-ten cited a lack of data quality and readiness as the biggest obstacle to reaching production, while 43% also blamed a lack of technical maturity.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.