Workers view agents as ‘important teammates’ – but the prospect of an AI ‘boss’ is a step too far
Concerns over how AI agents will work alongside humans are rising, according to a new study
AI agents are becoming increasingly common in the workplace, according to new research, but employees want clear guidelines on how they’re used.
In a new study from Workday, three-quarters (75%) of respondents said they’re comfortable working alongside agents, but just 30% would be comfortable being managed by one, and 12% said they are “not at all comfortable” with the idea.
Workday said this highlights a precarious situation for enterprises and underlines the need to integrate AI solutions without “losing the human touch”.
“That signals a clear boundary between human and machine roles that leaders need to respect, meaning that organizations must clearly define the role of agents and explain how they make decisions in order to build trust and ensure successful integration,” the report stated.
The study comes as businesses across a range of industries ramp up deployment of agentic AI solutions. Indeed, 82% of organizations surveyed by Workday said they are now expanding their use of AI agents.
Research from Microsoft earlier this year predicted that as many as 1.3 billion agents will be working away in the background by 2028.
Agents operate differently to previous iterations of AI, in that they can work autonomously on behalf of human workers, rather than as a ‘copilot’ or ‘AI assistant’.
It’s here that employees want clear boundaries set by enterprises. Being ‘managed’ by an agent is essentially a dealbreaker for many workers, the study found.
AI agents are “important teammates”
Most employees see agents as “important teammates, but not full members of the workforce”, though Workday noted this viewpoint varies wildly across different departments and business functions.
Staff in finance, hiring, and legal are far more wary of agents due to the sensitive nature of the tasks they’ll be carrying out. Conversely, workers in areas such as IT support and skills development are more trusting.
This highlights the need for greater human oversight and accountability depending on the specific areas in which agents are deployed, according to Workday.
Across the board, just 24% of respondents said they are comfortable with agents operating autonomously and without human supervision.
On the topic of trust, the Workday study noted that “exposure” to agents is a crucial factor in developing faith in these systems.
More than a quarter of respondents said they believe agents are overhyped, for example, but trust “rises dramatically” with increased use and experience.
“For instance, only 36% of those exploring AI agents trust their organization to use them responsibly, but that number jumps to 95% among those further along,” the company said. “Direct experience with agents builds confidence.”
Workers are wary of productivity gains
Notably, Workday found employees expect higher levels of productivity from AI agents, with 90% of respondents believing the technology will help them get more work done.
But this is a double-edged sword for many, the study warned. Nearly half (48%) of workers reported growing concerns that these productivity gains will lead to increased pressure and heavier workloads.
More than a third (36%) voiced concerns over reduced human interaction in their day-to-day work, while 48% are also worried about a “decline in critical thinking”.
The impact of AI tools on critical thinking has become a recurring talking point over the last year. In early 2025, a study from Microsoft warned that frequent users of tools such as ChatGPT and Copilot experience “long-term reliance and diminished independent problem-solving” capabilities.
In June, a similar study on the impact of AI on users by MIT’s Media Lab also pointed to reduced critical thinking skills.
This study saw 54 subjects divided into three separate groups and asked to write essays: one using ChatGPT, another relying on Google Search, and a final group using no tools at all.
Researchers found that those using ChatGPT to write content recorded the lowest cognitive engagement and performance.
Cognitive impact aside, Workday said the concerns voiced by respondents show that enterprises must approach the adoption of agentic AI in a considered, transparent manner.
“This isn’t just about deploying new technology,” the company said. “It’s about thoughtfully designing a future where AI agents enhance human capabilities, enabling a more productive and fulfilling work experience for all.”
Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.