Should workers prepare to become AI agent bosses?

Tech leaders claim employees could soon be managing AI agents – but this will require a huge culture shift, security awareness and governance


AI agents are being deployed at scale right now, with Microsoft expecting 1.3 billion to be operational by 2028, and leading voices in tech are suggesting that every worker could soon oversee their own AI workers.

Throughout 2025, agentic AI has become a major focus for big tech companies and autonomous AI agents are driving significant shifts across the sector.

Most recently, Salesforce CEO Marc Benioff announced the CRM firm has replaced around 4,000 customer service roles using AI agents, with human omnichannel supervisors overseeing their operations.

This is a prominent example of a trend that Microsoft predicts will soon impact firms at the leading edge of AI adoption. In its latest Work Trend Index, Microsoft declared that every worker is set to become the boss of a fleet of AI agents. The tech giant has predicted the rise of “frontier firms” over the next five years, where human workers build, delegate to and manage AI agents.

"From the boardroom to the frontline, every worker will need to think like the CEO of an agent-powered startup," Jared Spartaro, CMO of AI at Work for Microsoft, wrote in a blog post accompanying the report.

Microsoft’s declaration doesn’t come as a surprise to Tom Pepper, partner at Avella Security and security lead at the UK Government’s AI Security Institute. “We are already seeing the early stages of this in areas such as coding assistants, workflow automation, and customer support.

“The technology is advancing quickly, but the real challenge is less about capability and more about readiness. Most organizations have not yet built the skills, governance structures, or cultural mindset needed to treat AI agents as members of the workforce.”

But is this work pattern likely to materialize on an organization-by-organization level – or are big tech firms getting ahead of themselves in their enthusiasm for generative AI as a technology, without considering the business realities?

Building a culture of trust

Companies looking to prepare their workforces for becoming AI agent bosses will have to establish open dialogue about the impacts of AI agents – Microsoft found that there’s a disconnect between how familiar leaders (67%) and their employees (40%) are with them.

Leaders enthusiastic about AI agents will also have to contend with worker discomfort with the technology. A recent study by Workday found 75% of employees are comfortable working alongside AI agents, but only 30% would be comfortable being managed by one.

Swapping the roles to give human workers oversight and direct management of AI agents could shift sentiment. Just 24% of respondents to Workday’s study were comfortable with the idea of fully autonomous agents, and this desire for human oversight could be channeled into enthusiasm for new ‘AI agent boss’ roles.

But concerns remain, with a recent Capgemini study finding a marked decline in the trust senior executives have in AI agents, from 43% in 2024 to 27% in 2025. More will have to be done to bridge this gap if the gains AI developers claim agents can unlock are to be realized.

Regardless of concerns, there is no avoiding the dramatic changes agentic AI is going to bring, says Calum Chace, co-founder of AI safety company Conscium.

The onus has to be on leaders to prepare employees for how AI agents are going to shake up the workplace, he says. Chace recommends that leaders invest resources in AI literacy, so employees are aware of the benefits, limitations and biases of AI agents. Leaders should provide them with the tools they need to experiment with large language models (LLMs) – both at home and work – to better understand how they can be used to drive efficiencies and reduce costs.

“Workers need to be fluent in how LLMs work. Not in the mechanics of the technology, although a basic understanding of that is helpful, but what they can do well, and what they can’t do so well,” says Chace. "They also need to have a grasp on the data their employer has and how to organize and combine it."

Chace and Pepper are in agreement that another crucial step leaders should take is to increase security awareness and implement data hygiene and data minimization practices. Recent research by the University of Maryland Institute for Advanced Computer Studies concluded that AI agents can be more vulnerable to cyber attacks than LLMs. The researchers found that AI agents can be jailbroken and tricked into downloading malicious files, revealing private information, and sending phishing emails.

“When workers begin delegating tasks to AI, the attack surface expands. Sensitive data will flow into prompts, outputs may be manipulated by adversaries, and the integrity of decisions made by these AI agents must be questioned. Leaders should prepare their teams to act not only as managers of AI outputs, but also as security guardians,” advises Pepper.

Addressing agentic AI security concerns will be a necessary step toward not only building trust among employees, but also preventing tools from introducing threats into enterprise environments with or without human oversight.

The need for human verification

While Microsoft recognizes that there’s currently a “capacity gap” between what companies want to achieve with AI and what their workers can deliver, hiring around AI is ramping up. More than a quarter (28%) of leaders are considering recruiting AI managers to oversee human-agent hybrid teams, and 32% are planning to hire specialists who can design, develop and optimize AI agents, according to its Work Trend Index. What’s more, 36% of leaders expect their employees to be managing AI agents within five years.

Pepper says that while AI may be maturing quickly, human oversight will continue to be critical for the foreseeable future. Companies should consider establishing an AI oversight committee to define AI governance policies and set out risk guidelines. A human in the mix will help to ensure every AI agent’s decision is unbiased and that it has the company’s business interests at heart.

“The truth is that AI agents today are unsophisticated and mostly engaged in information retrieval. But it does seem likely that within a couple of years, agents will be negotiating contracts with each other,” says Chace.

“They will still be supervised, with a human in the loop, so to that extent, humans will be delegating and managing them. It could be five to ten years before agents are able to carry out lengthy, complicated tasks with little or no supervision, with a human overseeing them in the way we currently oversee human employees.”

Rich McEachran

Rich is a freelance journalist writing about business and technology for national, B2B and trade publications. While his specialist areas are digital transformation and leadership and workplace issues, he’s also covered everything from how AI can be used to manage inventory levels during stock shortages to how digital twins can transform healthcare. You can follow Rich on LinkedIn.