AI assistants are tools, not co-workers

Companies are falling over themselves to implement generative AI tools within their environments, chasing productivity gains and automated output that aligns with their brand.

Rather than implement artificial intelligence (AI) as a backend tool or customer-facing chatbot, some companies are choosing to install systems as ‘co-workers’ for existing employees, replete with names and digital avatars representing them. Right now, it might seem a harmless oddity, or even cute. Some of the bots have been given silly names or a cartoon avatar that looks like a clunky robot.

Underneath the surface, however, things aren’t quite so innocent.

The root of the problem is this: to call an AI tool a co-worker is to equate its output, however significant, with human labor. Measuring an AI tool’s metrics directly against those of human workers is distasteful at best.

It may begin as a simple proof of concept, with managers aiming to establish a baseline for how effective workers currently are and how much more effective they could be with AI. But it sets up a competitive dynamic between workers and automated assistants that can only end one way: real people being axed.

This concept is already becoming a reality. In August, IBM CEO Arvind Krishna reaffirmed previous suggestions that up to 7,800 jobs at his firm could be replaced by AI, while BT is set to slash 55,000 jobs by 2030 as it looks to cut costs by implementing more AI processes.

Krishna has since attempted to walk back his comments, stating that he has no intention of reducing tech roles in the face of AI, but that back-office staff will face cuts. “That means you can get the same work done with fewer people,” the executive told CNBC. As if that somehow makes it better.

The counterargument is that AI, like any new productivity technology, will simply reward the workers who learn to use it. This has been true of major developments such as the personal computer and the smartphone, with those best equipped to use productivity tools able to shine in the workplace. Part of the promise of large language models (LLMs) in particular, though, has been their capacity for natural language processing, which allows workers without technical knowledge of prompts to use AI tools for complex tasks such as coding.

If we’re not careful, this could be death by a thousand cuts for those in affected industries. As existing roles are cut in favor of automation, tools initially sold as productivity enhancers end up ‘taking’ jobs previously held by humans, and that can’t be right. Leaders must bring workers along on the AI upskilling journey.

Hiding an AI behind a peppy name or avatar also distracts from the very real shift in worker power going on here. By establishing a dynamic in which an AI tool can be considered capable of entirely replacing workers, executives distract attention from their own firing and hiring decisions. Ultimately, those who begin to impose these pressures on their workforce will have chosen profit over their employees.

To champion redundancy on this scale is to cheer on the gradual eradication of all worker bargaining power. If you’re clapping for the AI-driven death of art as a career path, stick around for the erosion of labor rights at the hands of AI ‘buddies’.

Human-led collaboration

A potential route away from this struggle is the ‘human-led’ AI system, in which AI tools are included as optional enhancements to existing software or operating systems. These tools make suggestions or produce drafts, but ultimately leave the decision-making up to real employees.

Google and Microsoft have leaned heavily into this concept with their respective AI productivity tools, Duet AI and Copilot, and both firms are well underway with rolling these out across their entire product suites. Microsoft’s ‘Copilot’ brand is already extensive, including 365 Copilot, Windows 11 Copilot, and the recently released Copilot Studio.

This will undoubtedly unlock productivity for workers, and in years to come ‘Copilot’ might be as common a term as ‘spellcheck’. It’s important to hold onto the idea of authorship and responsibility when using these systems, though. While generative AI will see more and more use in business for tasks such as drafting a blog post, it would be strange to see the name of a large language model in the ‘author’ column alongside the marketing copywriter who puts together the final edit.

Rory Bathgate
Features and Multimedia Editor

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.

In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.