Is AI workplace monitoring helpful or harmful?


No matter where we work, be it an office, a factory, at home or out in the field, our employers can keep a watchful eye over every move we make. That doesn’t just include how frequently we’re emailing or how much we’re collaborating with colleagues, but also how much of our day is spent making cups of tea or even taking trips to the toilet.

And that’s just the tip of the iceberg. Thanks to the rise of motion sensors, activity monitors, analytics and artificial intelligence in the workplace, it's ever more possible for employers to know far more about their employees than ever before.

While there’s no doubt these technologies can help employers identify inefficiencies in their business model and automate previously human-led tasks, bringing a suite of operational benefits, this kind of workplace surveillance has been criticised by those who claim such technologies can do more harm than good.

Unforeseen complications

The question of whether this technology-led monitoring is helpful or harmful is complex and multifaceted. For every potential benefit of workplace monitoring, be it improving productivity levels or keeping employees safe, an unforeseen complication can be created too.

"Implemented correctly, technology can be a very important tool for both employers and employees," argues Katherine Mayes, programme manager for cloud, data, analytics, and AI at techUK. She cites positives such as helping employers avoid bias by basing managerial decisions on merit rather than subjective factors, such as whether or not a manager likes a staff member. However, she adds that "the increased use of technologies like AI are raising a number of profound legal, social and ethical questions".

That's because using AI in the workplace to monitor staff isn't simply a matter of watching people do their jobs, akin to a 'time and motion study' type approach. AI systems do more than count time. They apply algorithms to draw conclusions by themselves, and it's this that can be a particular cause for concern. Imagine an AI monitoring system in the workplace that sees a dramatic downturn in keyboard activity from one person at a particular time. Is that person being lazy, or is there something else going on?

Training and expectations

That might be a relatively simplistic example, but as Nick Maynard, senior analyst at Juniper Research, explains, "the difficult part is making sure that the data used to train the system is free of bias and gives a true reflection".

"This may represent a challenge when trying to explain how systems have flagged a lack of productivity, particularly when it comes to potential disciplinary issues," he adds.


Theo Knott, policy programmes manager for BCS, The Chartered Institute for IT, argues that "it is entirely feasible for an office worker to be away from the keyboard for an hour and be having productive conversations or doing work mentally".

The question then is whether AI can ever infer correctly the difference between what we might flippantly call 'thinking' and 'slacking'. Even if a worker is 'slacking', is it possible for an algorithm to determine whether that downtime is ultimately beneficial or harmful to productivity?

Taking breaks is not only important for our health, but it's also a great way of figuring out answers to complex tasks. Perhaps a chat about the movie you saw at the weekend around the water cooler is precisely the distraction a thorny problem requires.

The output quality of workplace monitoring systems will almost certainly improve over time as the data on which AI algorithms are trained becomes more robust. However, for Theo Knott, we should remain cautious about the rollout of such technology.

"It is likely that accuracy would improve rapidly as technology is improved and datasets become richer," he argues. "Whether the errors on the way to this point are worth it is questionable, but the key is ensuring that things are transparent, so that wrong decisions can be easily challenged."

Transparency matters

The General Data Protection Regulation (GDPR) sets the ground rules for the use of personal data in the workplace, making it clear that employees should know what data is being collected and why, as well as setting out requirements for data retention.

A key point here is maintaining transparency with the workers themselves. As Matt Creagh, employment rights officer for the Trades Union Congress, explains, "it's important to remember that working people have a right to privacy, and this right extends to the workplace." He continues: "Tracking and surveillance software should only be used with the agreement of a workplace union or the workforce."

And this isn't just a matter of law; it is one of good practice too. As Katherine Mayes points out, "employers have a responsibility to engage with staff on this debate and work together to carefully determine how AI can be used to support and empower the workforce. If businesses get this wrong, they risk undermining workplace morale, which could lead to staff resignations."

Sandra Vogel
Freelance journalist

Sandra Vogel is a freelance journalist with decades of experience in long-form and explainer content, research papers, case studies, white papers, blogs, books, and hardware reviews. She has contributed to ZDNet, national newspapers and many of the best-known technology websites.

At ITPro, Sandra has contributed articles on artificial intelligence (AI), measures that can be taken to cope with inflation, the telecoms industry, risk management, and C-suite strategies. In the past, Sandra also contributed handset reviews for ITPro and has written for the brand for more than 13 years in total.