Workers view agents as ‘important teammates’ – but the prospect of an AI ‘boss’ is a step too far

Concerns over how AI agents will work alongside humans are rising, according to a new study


AI agents are becoming increasingly common in the workplace, according to new research, but employees want clear guidelines on how they’re used.

In a new study by Workday, three-quarters (75%) of respondents told the company they’re comfortable working alongside agents, but just 30% would be comfortable being managed by one, and 12% said they are “not at all comfortable” with the prospect.

Workday said this highlights a precarious situation for enterprises and underlines the need to integrate AI solutions without “losing the human touch”.

“That signals a clear boundary between human and machine roles that leaders need to respect, meaning that organizations must clearly define the role of agents and explain how they make decisions in order to build trust and ensure successful integration,” the report stated.

The study comes as businesses across a range of industries ramp up deployment of agentic AI solutions. Indeed, 82% of organizations surveyed by Workday said they are now expanding their use of AI agents.

Research from Microsoft earlier this year predicted that as many as 1.3 billion agents will be working away in the background by 2028.

Agents operate differently to previous iterations of AI, in that they can work autonomously on behalf of human workers, rather than as a ‘copilot’ or ‘AI assistant’.

It’s here that employees want clear boundaries set by enterprises. Being ‘managed’ by an agent is essentially a dealbreaker for many workers, the study found.

AI agents are “important teammates”

Most employees see agents as “important teammates, but not full members of the workforce”. However, this viewpoint varies widely across different departments and business functions, Workday noted.

Staff in finance, hiring, and legal are far more wary of agents due to the sensitive nature of the tasks they’ll be carrying out. Conversely, workers in areas such as IT support and skills development are more trusting.

This highlights the need for greater human oversight and accountability depending on the specific areas in which agents are deployed, according to Workday.

Across the board, just 24% of respondents said they are comfortable with agents operating autonomously and without human supervision.

On the topic of trust, the Workday study noted that “exposure” to agents is a crucial factor in developing faith in these systems.

More than a quarter of respondents said they believe agents are overhyped, for example, but trust “rises dramatically” with increased use and experience.

“For instance, only 36% of those exploring AI agents trust their organization to use them responsibly, but that number jumps to 95% among those further along,” the company said. “Direct experience with agents builds confidence.”

Workers are wary of productivity gains

Notably, Workday found employees anticipate higher levels of productivity when using AI agents, with 90% of respondents believing they will help them get more work done.

But this is a double-edged sword for many, the study warned. Nearly half (48%) of workers reported growing concerns that these productivity gains will lead to increased pressure and heavier workloads.

More than a third (36%) voiced concerns over reduced human interaction in their day-to-day work, while 48% are also worried about a “decline in critical thinking”.

The impact of AI tools on critical thinking has become a recurring talking point over the last year. In early 2025, a study from Microsoft warned that frequent users of tools such as ChatGPT and Copilot experience “long-term reliance and diminished independent problem-solving” capabilities.

In June, a similar study on the impact of AI on users by MIT’s Media Lab also pointed to reduced critical thinking skills.

This study saw 54 subjects divided into three separate groups and asked to write essays: one using ChatGPT, another relying on Google Search, and a final group using no tools at all.

Researchers found that those using ChatGPT to write content recorded the lowest cognitive engagement and performance.

Cognitive impact aside, Workday said the concerns voiced by respondents show that enterprises must approach the adoption of agentic AI in a considered, transparent manner.

“This isn’t just about deploying new technology,” the company said. “It’s about thoughtfully designing a future where AI agents enhance human capabilities, enabling a more productive and fulfilling work experience for all.”



Ross Kelly
News and Analysis Editor

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.

He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.

For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.