Workers view agents as ‘important teammates’ – but the prospect of an AI ‘boss’ is a step too far
Concerns over how AI agents will work alongside humans are rising, according to a new study


AI agents are becoming increasingly common in the workplace, according to new research, but employees want clear guidelines on how they’re used.
In a new study by Workday, three-quarters (75%) of respondents told the company they’re comfortable working alongside agents, but just 30% would be comfortable being managed by one, and 12% said they are “not at all comfortable” with the idea.
Workday said this highlights a precarious situation for enterprises and underlines the need to integrate AI solutions without “losing the human touch”.
“That signals a clear boundary between human and machine roles that leaders need to respect, meaning that organizations must clearly define the role of agents and explain how they make decisions in order to build trust and ensure successful integration,” the report stated.
The study comes as businesses across a range of industries ramp up deployment of agentic AI solutions. Indeed, 82% of organizations surveyed by Workday said they are now expanding their use of AI agents.
Research from Microsoft earlier this year predicted that as many as 1.3 billion agents will be working away in the background by 2028.
Agents operate differently to previous iterations of AI, in that they can work autonomously on behalf of human workers, rather than as a ‘copilot’ or ‘AI assistant’.
It’s here that employees want clear boundaries set by enterprises. Being ‘managed’ by an agent is essentially a dealbreaker for many workers, the study found.
AI agents are “important teammates”
Most employees see agents as “important teammates, but not full members of the workforce”, though Workday noted this viewpoint varies wildly across departments and business functions.
Staff in finance, hiring, and legal are far more wary of agents due to the sensitive nature of the tasks they’ll be carrying out. Conversely, workers in areas such as IT support and skills development are more trusting.
This highlights the need for greater human oversight and accountability depending on the specific areas in which agents are deployed, according to Workday.
Across the board, just 24% of respondents said they are comfortable with agents operating autonomously and without human supervision.
On the topic of trust, the Workday study noted that “exposure” to agents is a crucial factor in developing faith in these systems.
More than a quarter of respondents said they believe agents are overhyped, for example, but trust “rises dramatically” with increased use and experience.
“For instance, only 36% of those exploring AI agents trust their organization to use them responsibly, but that number jumps to 95% among those further along,” the company said. “Direct experience with agents builds confidence.”
Workers are wary of productivity gains
Notably, Workday found employees anticipate higher levels of productivity when using AI agents, with 90% of respondents believing the technology will help them get more work done.
But this is a double-edged sword for many, the study warned. Nearly half (48%) of workers reported growing concerns that these productivity gains will lead to increased pressure and heavier workloads.
Some 36% voiced concerns over reduced human interaction in their day-to-day work, while 48% are also worried about a “decline in critical thinking”.
The impact of AI tools on critical thinking has become a recurring talking point over the last year. In early 2025, a study from Microsoft warned that frequent users of tools such as ChatGPT and Copilot experience “long-term reliance and diminished independent problem-solving” capabilities.
In June, a similar study on the impact of AI on users by MIT’s Media Lab also pointed to reduced critical thinking skills.
This study saw 54 subjects divided into three separate groups and asked to write essays: one using ChatGPT, another relying on Google Search, and a final group using no tools at all.
Researchers found that those using ChatGPT to write content recorded the lowest cognitive engagement and performance.
Cognitive impact aside, Workday said the concerns voiced by respondents show that enterprises must approach the adoption of agentic AI in a considered, transparent manner.
“This isn’t just about deploying new technology,” the company said. “It’s about thoughtfully designing a future where AI agents enhance human capabilities, enabling a more productive and fulfilling work experience for all.”
Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.