Workers are wasting half a day each week fixing AI ‘workslop’
Better staff training and a deeper understanding of the technology are needed to cut down on AI workslop
AI 'workslop' is forcing employees to work an extra four and a half hours each week to clean up mistakes, according to new research.
A survey of more than 1,100 US enterprise AI users, conducted by Zapier, found that while 92% of workers say AI boosts their productivity, the average employee spends more than half a workday revising, correcting, and sometimes completely redoing AI-generated outputs.
Three-quarters of respondents reported at least one negative consequence from low-quality AI outputs, including work rejected by stakeholders (28%), security incidents (27%), and customer complaints (25%).
Zapier noted that just 2% of respondents don’t need to revise what AI produces.
Poor training is a key factor, the researchers noted: employees without AI training are six times more likely to say AI makes them less productive.
While untrained workers spend less time on AI cleanup, they also report fewer productivity gains: just 69% say AI helps, compared with 94% of trained workers.
“The productivity gains from AI are real. 92% of workers feel them. But so is the cleanup work,” said Emily Mabie, senior AI automation engineer at Zapier.
“The companies seeing the best results aren't the ones avoiding AI. They're the ones who have invested in training, context, and orchestration tools that turn AI from a sloppy experiment into a managed process.”
The worst AI workslop offenders
Data analysis tops the workslop list, according to Zapier, with 55% saying data analysis and visualization projects require the most cleanup, followed by writing tasks at 46%.
Meanwhile, engineering, IT, and data roles average five hours per week fixing AI outputs, with 78% reporting negative consequences. Finance and accounting teams face the highest rate of negative consequences at 85%, averaging 4.6 hours of cleanup per week.
The time lost in fixing AI-generated outputs has a significant impact on bottom lines, according to Zapier. Workers spending more than five hours a week on AI cleanup tasks are more than twice as likely to report lost revenue, clients, or deals.
Zapier said better data quality and more robust infrastructure could go a long way in helping to improve the situation.
The study found that respondents with access to AI orchestration tools and comprehensive company context, such as internal documentation, brand guides, project templates, or prompt libraries, said the technology has a markedly positive impact on productivity.
“The solution isn’t fewer tools, it’s better infrastructure,” said Mabie. “Orchestration, training, and proper context convert AI from a vague experiment into a managed process where the extra cleanup is the cost of doing more meaningful work faster, rather than the cost of pretending you are.”
AI workslop is here to stay
The rise of AI workslop has become a recurring pain point for enterprises ramping up adoption of the technology.
A report from MIT researchers last year found that more than 40% of US-based workers had been given AI-generated content that “masquerades as good work but lacks the substance to meaningfully advance a given task”.
This, the study noted, was undermining productivity and harming perceptions of the technology in the workplace.
Certain professions are feeling the effects acutely, particularly software development. A recent study from CodeRabbit, for example, found that AI makes 1.7 times as many mistakes as human programmers.
The use of AI in software development has been one of the leading use cases for the technology over the last three years, with developers reporting significant productivity boosts from AI code generation.
Research from Harness in early 2025, however, found that these productivity gains are frequently offset by the time developers spend manually remediating faulty code, slowing down delivery.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
