Businesses are being taken for fools with AI agents
No amount of automation or ‘reasoning’ can hide a bad product – but don’t ask AI developers
AI agents have been the hot new product in the tech sector for some time now, putting a fresh shine on generative AI and reviving the century-old promise of truly intelligent automation.
Big tech has spent more than a year insisting that AI agents can deliver on this promise to an extent that the earliest LLMs couldn’t, by automatically completing tasks based on detailed user prompts. And it isn’t holding back in its predictions for the potential of agentic AI.
Microsoft, for one, expects 1.3 billion AI agents to be operational by 2028, having painted a picture of a vast interconnected world of agents that collaborate on tasks.
This doesn’t really mean anything, though. For all IT decision makers know, a billion of those agents could be capable of carrying out only the simplest tasks, barely moving the dial when it comes to productivity.
Simply having more agents in your enterprise environment isn’t inherently a good thing, in the same way that simply hiring more workers is pointless if they’re not very good at their jobs.
Don’t hand a monkey the car keys
Let’s be very clear here: AI agents are still not very good at their 'jobs' – or, at the very least, they’re pretty terrible at producing returns on investment.
Gartner expects 40% of agentic AI projects to be canned by the end of 2027 and has warned against adopting tools that are essentially repackaged robotic process automation (RPA) and chatbots. Working with these things on the ground may be a baffling experience – writing about them from the periphery is becoming maddening.
The pattern with generative AI is clear at this point: developers launch a new architecture or framework to much fanfare, the initial hype is followed by bold claims about how the tools will revolutionize the workplace, and businesses then spend two years trying to squeeze any value out of the products.
The end result is the same: disappointed IT teams and confused C-suite executives who bought into the latest AI hype.
I’m not saying companies won’t find ways to use AI to their benefit. I’m simply saying that if you’re really invested in the idea that letting an autonomous AI agent loose in your enterprise will automatically drive growth and productivity, you may be in for a nasty surprise.
AI agents have been the subject of both praise and scrutiny, particularly from those working in security. Trust in AI agents may actually be decreasing, and security experts are openly calling for more discussion of the risks they pose.
Some companies, concerned with the potential for AI agents to go off the rails and cause damage, are using AI agents to police AI agents. This isn’t a tiny phenomenon either: Gartner predicts these ‘guardian agents’ will form 10-15% of the agentic AI market by 2030.
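For readers who want the pattern made concrete, here’s a minimal, hypothetical sketch of the guardian idea – a worker agent proposes an action and a second model vets it before anything runs. Every name here, including the call_model stub, is my own illustrative assumption rather than any vendor’s actual API:

```python
# Hypothetical 'guardian agent' pattern: one agent proposes, another vets.
# call_model stands in for a real LLM client; the canned replies below just
# let the sketch run end to end without an external service.

def call_model(prompt: str) -> str:
    if "safety reviewer" in prompt:
        return "REJECT" if "rm -rf" in prompt else "APPROVE"
    return "ls -l /var/log"

def worker_agent(task: str) -> str:
    # The worker proposes a single concrete action for the task.
    return call_model(f"Propose one shell command to accomplish: {task}")

def guardian_agent(proposed_action: str) -> bool:
    # The guardian reviews the proposal against a policy and votes yes or no.
    verdict = call_model(
        "You are a safety reviewer. Answer APPROVE or REJECT only.\n"
        f"Proposed action: {proposed_action}\n"
        "Policy: no destructive commands, no credential access."
    )
    return verdict.strip().upper().startswith("APPROVE")

def run_with_oversight(task: str) -> None:
    action = worker_agent(task)
    if guardian_agent(action):
        print(f"Executing: {action}")  # in practice, hand off to an executor
    else:
        print(f"Blocked by guardian: {action}")

run_with_oversight("list recent log files")
```

Note that the guardian here is just another model call – which is exactly the problem.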
What exactly are we doing here? It seems to me that companies are being encouraged to walk into a ‘who watches the watchmen’ conundrum, with no actual benefit to their bottom line. AI agents are either trustworthy or they’re not – no amount of oversight will change that.
Of course, it suits the companies peddling AI agents very well to offer them as a solution to their own issues. The promise of a worker that never clocks out is a double-edged sword, as the cost of inferencing cloud AI models only compounds when you ask them to process information 24/7.
Reasoning isn’t working
The first generation of AI agents came with reassurance from developers that these tools would always have human oversight. This was intended to prevent agents from making major mistakes based on hallucinations – and also to shield AI developers from blowback as a result of these potentially disastrous errors, particularly when it comes to deploying code.
Of course, this also negated the primary selling point of agents: that they could work autonomously, without the need for constant human supervision.
Enter reasoning. Developers have heralded this as the next great step for LLMs, as it allows them to break down user demands step by step to give more detailed – and hopefully more usable – responses to prompts.
It’s also been sold as a major enabler for AI agents, as it allows them to act with more autonomy and react to changing conditions or input contexts, going longer between user prompts.
Reasoning models can produce a ‘chain of thought’, a breakdown of each decision the model has made, so that users can assess its reasoning and follow a clear audit trail for every decision.
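To illustrate what that audit trail might look like in practice – and this structure is purely my own assumption, not how any particular vendor exposes it – a chain of thought can be treated as an ordered log of steps attached to a final answer:

```python
# Illustrative only: a chain of thought treated as an ordered audit log.
from dataclasses import dataclass, field

@dataclass
class ReasoningTrace:
    question: str
    steps: list[str] = field(default_factory=list)
    answer: str | None = None

    def add_step(self, thought: str) -> None:
        self.steps.append(thought)

    def audit(self) -> str:
        # Render the trail so a human reviewer can follow each decision.
        lines = [f"Q: {self.question}"]
        lines += [f"  step {i + 1}: {s}" for i, s in enumerate(self.steps)]
        lines.append(f"A: {self.answer}")
        return "\n".join(lines)

trace = ReasoningTrace("Which region had the highest Q3 churn?")
trace.add_step("Pull churn rates per region from the Q3 report")
trace.add_step("Compare regions and pick the maximum")
trace.answer = "EMEA"
print(trace.audit())
```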
Here’s the problem: reasoning doesn’t work so well. Earlier this year, an Apple study concluded that reasoning models simply give up when faced with sufficiently complex problems, even when they have tokens left over to work on them.
Working in tech media, it can be easy to get swept up in regular product announcements. I’ve written before about how we all need to pay less attention to individual model releases and more to the actual use cases they enable.
But the lure of keeping up with LMArena leaderboards and the latest from OpenAI, Google, and Anthropic, among others, is strong. Since that Apple paper dropped in June, we’ve had a number of major AI releases – even the much-feted GPT-5, underwhelming though it may be – to capture our attention and keep generative AI exciting.
We’ve not, however, had any major retort to the Apple paper. I’ll believe the potential of AI agents when I see it.

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.