Why AI 'workslop' is bad for workers and costing your business millions
Poorly generated AI content is costing businesses money, slowing productivity, and creating friction between employees
The rush to integrate generative AI into workflows and automate manual work is resulting in presentations and reports that may look polished and professional but are error-strewn and lack substance.
Ask any employee who uses AI and they’ll probably tell you that they’ve spent a significant amount of time at least once cleaning up the technology’s mistakes, even if they’re simple grammatical ones.
This workplace epidemic has been termed AI ‘workslop’ by leadership coaching platform BetterUp and researchers at the Stanford Social Media Lab. Writing in the Harvard Business Review in September, they defined ‘workslop’ as “AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task”.
'Slop' has become a common term for low-quality AI output, which is plaguing security teams, gumming up submissions to open source bug bounty programs, and driving prominent figures such as Andrej Karpathy to cast doubt on the technology as a whole. Despite appeals from the likes of Satya Nadella, the term remains in wide use.
According to the BetterUp and Stanford Social Media Lab research, approximately four in ten of the 1,150 US desk-based workers surveyed encounter workslop at least once a month. Each incident takes, on average, almost two hours to rectify.
The cost to business is estimated at up to $186 per employee per month. Factoring in workslop’s estimated prevalence (41%), productivity at a company with a headcount of 10,000 would take a roughly $9m hit over the course of a year.
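As a rough illustration of where that annual figure comes from (assuming, as the research describes, that it is simply monthly cost multiplied by prevalence, headcount, and twelve months), the arithmetic works out as follows:

```python
# Back-of-the-envelope estimate of the annual productivity cost of workslop,
# assuming the $9m figure is monthly cost x prevalence x headcount x 12.
monthly_cost_per_employee = 186   # USD, per the BetterUp/Stanford research
prevalence = 0.41                 # share of workers encountering workslop
headcount = 10_000                # example company size used in the research
months = 12

annual_cost = monthly_cost_per_employee * prevalence * headcount * months
print(f"Estimated annual cost: ${annual_cost:,.0f}")  # roughly $9.15 million
```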
Beyond the financial impact, workslop is fueling tensions among co-workers. When asked how they felt about receiving workslop, employees said they felt annoyed (53%), confused (38%) and offended (22%). Colleagues who relied on AI to generate content were deemed to be “less creative, capable and reliable”. Consequently, 42% of respondents said they’d consider their co-workers “less trustworthy”, while 37% would view them as “less intelligent”.
The situation threatens to create rifts within teams. A third of those surveyed said they had flagged a colleague’s workslop to teammates or managers, and that they would be less inclined to work with someone from whom they’d received it.
And things could be getting worse, not better. A recent study by Zapier suggested the productivity impact of workslop could be even greater, with employees forced to spend four and a half hours each week fixing AI mistakes.
Putting a good AI content generation policy in place
"AI workslop is a real problem and it’s unsurprising that it has crept into our workplace products,” says Brian Shannon, CTO at SaaS-based IT management company Flexera.
While some workers use AI to enhance the quality of their work, others turn to it for a quick fix, and the result is often content that lacks context and adds little value. Leaders “must be diligent about creating processes that validate AI output” and “establish best practices for teams to determine when generative AI helps versus when it hurts,” advises Shannon.
Sharon Bernstein, chief human resources officer at SaaS provider WalkMe, acquired by SAP in 2024, agrees that leaders have to step up their monitoring of how the technology is being used by their workers.
“Workslop arises when employees are told to use AI without guidance on how or when to use it,” says Bernstein. “Valuable AI output should be enhancing a task’s clarity, creativity, or efficiency, not simply replicating human effort in a lower-quality way.”
What’s needed are clear policies setting out what AI should and shouldn’t be used for. They should also set rules on the data used to generate content, for example stipulating that only first-party data can be used and that content must be easily sourced and properly referenced.
“A good policy isn’t about restrictions, it’s about giving teams guardrails so they can confidently use AI to add value,” says Eric Ritter, CEO and founder of digital marketing agency Digital Neighbor. The risk of not having guardrails in place is that employees end up using AI recklessly. Aside from quality issues, this could carry legal and liability ramifications, damaging brand integrity and client trust.
Ritter adds that Digital Neighbor has seen the benefit of a strong AI content generation policy first-hand with its clients. “When businesses define where AI fits in their workflow, be it initial research or draft creation, versus where human expertise is non-negotiable, like client-facing content, teams aren’t second-guessing AI’s every decision.”
Workslop vs genuine AI content
Even with a robust policy in place, there’s still the chance that workslop could slip through the net. Some of this can be due to human error – employees may be under pressure to deliver a report or presentation to a tight deadline – but other instances come down to a lack of care and attention: employees simply failing to question the quality of AI’s output and to check it for context and relevance.
More often than not, the reason for employees not spotting workslop is because leaders haven’t provided them with the right training. Employees have to understand that “slop happens when someone hits ‘generate’ and copies the output verbatim,” argues Ritter. “Whereas value happens when AI is used as a ‘thinking partner’ that you feed specific context, review critically, and then add your expertise.”
Bernstein says that employees at WalkMe are “encouraged to be critical of AI output the same way they would approach an intern’s work… You wouldn’t ship off an intern’s work without review as it can always benefit from constructive criticism.”
As for how employees should go about identifying the difference between workslop and genuine, valuable content, Bernstein advises that they look for “content that’s generic and inconsistent with company tone”. They also need to be wary of content that has a lot to say yet doesn’t say anything memorable or useful.
Ritter echoes this and says that “if they read three paragraphs and can’t recall a single specific example or actionable takeaway”, then the reasonable assumption is that the content is workslop. He recommends that employees look out for stock words and phrases, such as “to dive deep”, “to leverage synergies” and “to unpack insights”. Another thing that sets his alarm bells ringing is repetitive structure, such as an opening sentence followed by a few bullet points and then a summary paragraph.
“Real human writing has rhythm variation and personality. So, if the AI content reads like it could apply to any company in any industry, it's probably workslop,” warns Ritter.
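As a purely illustrative sketch, and not something either company describes using, the most mechanical of these checks could even be automated. The snippet below simply flags the stock phrases Ritter mentions; judging substance, accuracy and tone still requires a human review.

```python
import re

# Toy check for the stock phrases Ritter calls out. Matching a phrase does not
# prove a document is workslop; it just marks it for closer human review.
STOCK_PHRASES = ["dive deep", "leverage synergies", "unpack insights"]

def flag_stock_phrases(text: str) -> list[str]:
    """Return any stock phrases found in the text, ignoring case."""
    return [
        phrase for phrase in STOCK_PHRASES
        if re.search(re.escape(phrase), text, flags=re.IGNORECASE)
    ]

draft = "In this report we dive deep to leverage synergies across teams."
print(flag_stock_phrases(draft))  # ['dive deep', 'leverage synergies']
```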
While AI-generated content can be useful, it’s clear it can also cost businesses time and money and cause friction among teams if employees don’t know when and how best to use it.
Rich is a freelance journalist writing about business and technology for national, B2B and trade publications. While his specialist areas are digital transformation and leadership and workplace issues, he’s also covered everything from how AI can be used to manage inventory levels during stock shortages to how digital twins can transform healthcare. You can follow Rich on LinkedIn.