AI tools are a game changer for enterprise productivity, but reliability issues are causing major headaches – ‘everyone’s using AI, but very few know how to keep it from falling over’

Enterprises are flocking to AI tools, but many lack the appropriate infrastructure to drive adoption at scale


While AI tools are proving vital for worker productivity, many enterprises are reporting serious problems with reliability - and it’s because they’ve been woefully underprepared for integrating the technology.

A new survey of backend engineers and IT decision makers by Temporal Technologies found that 94% are using AI in their workflows - mostly through tools like Copilot or ChatGPT.

However, just 39% are building reliable internal frameworks to support the adoption of the technology. A key stumbling block is the underlying infrastructure required to scale the technology at an enterprise-wide level, the study noted.

"Most teams think they’re modern. Most teams also admit they’re stuck," one respondent said. "Somewhere between post-build ('we built it!') and pre-scale ('oh god, how do we scale this thing?'), the real challenge begins."

Nearly half of large companies rely on custom-built workflow solutions, compared with just a third of smaller organizations, the survey found.

Three-quarters of teams said their workflows are hampered by issues such as insufficient support for long-running processes and high operational overhead, both cited by 35%, along with failure recovery challenges, a problem for 34%.

Reliability now ranks above cost, performance, and speed: 36% of engineering and IT leaders say reliability and compliance are their top development priorities over the next 12 to 24 months.

Notably, reliability and compliance are now bigger priorities than automation, cited by 33% of respondents, and reducing technical debt (30%).

Developers are getting stuck with AI tools

According to the survey, nearly a third of developers said that complex, long-running workflows break constantly, which hampers efficiency and slows down development processes.

"Everyone’s ‘using AI,’ but very few know how to keep it from falling over,” one respondent said.

“That knowledge gap isn’t just inconvenient, it’s a dealbreaker when you’re trying to scale agentic systems without a way to retry, resume, or even observe what went wrong.”
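The capabilities the respondent lists, retrying failed calls, resuming, and observing what went wrong, can be illustrated with a minimal sketch. This is a generic retry-with-backoff pattern, not Temporal's actual API; the function names and parameters here are illustrative assumptions:

```python
import time

def call_with_retry(fn, *args, retries=4, base_delay=0.1):
    """Call fn, retrying transient failures with exponential backoff.

    Each failed attempt is logged so operators can observe what went
    wrong, one of the gaps the survey respondents describe.
    """
    last_exc = None
    for attempt in range(retries):
        try:
            return fn(*args)
        except (TimeoutError, ConnectionError) as exc:
            last_exc = exc
            wait = base_delay * (2 ** attempt)  # 0.1s, 0.2s, 0.4s, ...
            print(f"attempt {attempt + 1} failed ({exc}); retrying in {wait:.2f}s")
            time.sleep(wait)
    # All retries exhausted: surface the original cause for debugging
    raise RuntimeError(f"gave up after {retries} attempts") from last_exc
```

Production workflow engines go further than this sketch, persisting state so a process can resume after a crash rather than restarting from scratch, but the retry-and-log loop captures the basic idea.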

Tooling priorities and decision-making power are also misaligned, the survey found. In smaller companies, half said that developers typically lead tooling decisions, while in enterprises, that responsibility shifts to IT managers and CIOs at 53%.

For developers at larger enterprises, this means new tools and solutions are essentially dropped in their laps, with responsibility for implementation left to them.

"The report tells us a lot of what we hear from partners every day—backend challenges aren’t just technical, they’re also organizational,” said Samar Abbas, co-founder and CEO of Temporal Technologies.

“Engineers and decision makers are prioritizing different things, and that disconnect is driving tooling delays, reliability risks, and rising complexity across the stack. AI is only adding another layer of scale and unpredictability.”

Security concerns are rising

Decision makers ranked security as their top concern overall, while nearly half said customer churn was their biggest worry during outages.

Another 47% said that downtime drives up operational costs, while only 5% said that failures would have no major impact.

Four in ten respondents reported that AI’s biggest impact was in code generation - a growing trend in recent months.

A survey from Clutch revealed that 53% of senior software developers believe LLMs can already code better than most humans. However, many voiced serious concerns about data privacy and security risks.

The findings of the Clutch survey align with a previous study on the topic from Cloudsmith, which warned developers are placing too much faith in AI code generation and opening themselves up to potential security risks.



Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.