AI employee monitoring will only burn bridges in the workplace


As technology progresses in the workplace, employees should be concerned about the increasing extent to which their every action is monitored, recorded, and quantified. 


Companies are adopting AI employee monitoring tools to analyze employee communications, assess productivity, and keep staff on task. But this automated oversight, enhanced by recent generative AI advances, is becoming a serious problem that could irreparably sour employee relations if leaders aren’t careful.

On the face of it, the premise is simple. AI enables the kind of big data analytics needed to process vast quantities of information about employee activity on business messaging apps and internal software. It can help leaders measure workforce productivity while also keeping an eye out for potential breaches of conduct.

Businesses have always monitored employees, of course, and there’s no sense in being behind the curve, but these new forms of staff surveillance have an ominous air about them. Keeping an eye on employees is one thing; making employees feel as though they are living under the ever-present, watchful eye of their employer is quite another, and won't do anything to improve workplace satisfaction. 

CNBC recently reported on the notable clients of AI firm Aware, which specializes in AI employee monitoring and analyzing employee conversations. Just seven years old, the Ohio-based start-up has managed to wrangle the likes of T-Mobile, AstraZeneca, and BT onto its platform. Through integrations with workplace software such as Microsoft Teams and Slack, Aware has gained access to billions of employee messages, information that CEO Jeff Schumann described as “the fastest-growing unstructured data set in the world.”

But this is data that should not be monitored. These are private messages between colleagues, not information designed to aid corporate decision-making. Aware’s data repository contains around 6.5 billion messages that, in turn, represent about 20 billion individual interactions across more than 3 million employees.

The idea behind the platform is less dystopian-police-state and more utopian-streamlined-efficiency, in Aware’s view. Aware says it seeks to provide companies with the analytics needed to gauge employee sentiment about new office policies, spot malicious behavior patterns from insider threat actors, or highlight the proliferation of inappropriate materials.   

The software also anonymizes employee details to avoid targeted monitoring. That is unless a company decides to use Aware’s eDiscovery tool, which can reveal specific information “in the event of extreme threats or other risk behaviors that are predetermined by the client.”

Aside from the fact that Aware customers could track individual employees if they saw fit, there are more fundamental concerns here. At the very least, these sorts of AI employee monitoring tools will cast a shadow over workers, creating a level of distrust between leaders and their staff that will contribute to a toxic and uncomfortable workplace culture.

The Information Commissioner's Office (ICO) seemed to echo these concerns in new guidance on workplace monitoring tools in October 2023, citing a report that said 70% of the public would find workplace monitoring tactics intrusive.

At the risk of quoting an executive-level cliche, employees really are a company's most important asset. Spying on them will hinder efforts to combat high staff turnover in tech, adding to the woe of leaders amid tech skills shortages and security skills shortages.

Using AI to track employee productivity is misguided 

Aware is far from the only company using AI monitoring practices; the problematic practice has become common at some household names. At Uber, AI can provide grounds for dismissal, as reported by Politico. Facial recognition tracks drivers’ identities when they log in to work, while the system also keeps a close eye on their performance ratings. If these checks suggest substandard work, Uber can fire workers.

This makes human workers subordinate to algorithms, removing any level of human understanding or personal empathy towards the individual. In short, humans play second fiddle to machines. Whatever efficiency gains come out of a move to AI employee monitoring have to be weighed against the damage these systems do to a company’s reputation and its relationship with staff.

Productivity is, of course, a valid concern for businesses. ONS figures published in February 2024 show flat annual productivity growth, with output per worker having declined 0.6% in 2023 compared to 2022.

Issues like this can generally be attributed to factors such as burnout or the time constraints of repetitive tasks. Automation can be a solution here, with recent Slack research suggesting generative AI can boost productivity and save every employee a month of work per year. However, implementation should be done in a way that reflects employer sympathy rather than employer suspicion. 

The age-old metaphor of the “carrot and stick” comes to mind. Although employees should be monitored at some level, to ensure they are meeting standards and targets, they also need to be rewarded with freedom and independence. Making employees feel like they are constantly on a short leash won't have positive effects in the long term.

Companies would be wise to take note of these legitimate concerns going forward. Leaders tempted to quickly indulge in all AI has to offer need to make sure they’re still looking after their flesh-and-blood workforce if they want to avoid shredding their employee relations.

George Fitzmaurice
Staff Writer

George Fitzmaurice is a staff writer at ITPro, ChannelPro, and CloudPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.