Observability will be key to agentic AI safety, says Microsoft Security exec
Agentic AI adoption will require a re-evaluation of enterprise risk management, according to Microsoft corporate VP
Enterprises around the world are flocking to agentic AI tools in 2026, with research from Microsoft showing that 80% of Fortune 500 companies are already using agents in daily operations.
Agents represent a step change in how enterprises leverage AI, with autonomous bots carrying out tasks on behalf of employees and helping to unlock marked productivity and efficiency gains.
Yet many organisations aren’t fully aware of the potential risks associated with these tools, according to Vasu Jakkal, corporate vice president for Microsoft Security.
Speaking during the opening keynote session at the 2026 RSAC Conference in San Francisco, Jakkal said the integration of agents in customer-facing environments will require a re-evaluation of enterprise risk management.
Trust, Jakkal said, will be essential for the safe and secure deployment of agents, which is why observability, security, and governance must be priorities.
“Humans and agents are working together, and we are only just scratching the surface of AI, but as is always the case with technological advancement, there will always be those who use it for nefarious purposes,” she said, adding that the use of AI in malicious activities has reached an “inflection point”.
Microsoft’s own threat intelligence operations have observed bad actors using AI primarily to improve their tradecraft, using the technology to craft more convincing phishing lures and debug malware, for example.
“We’ve seen this in operations by North Korean actors Jasper Sleet [and] Coral Sleet, where AI enables sustained, large-scale misuse of legitimate access to things like identity fabrication through social engineering and really long-term persistence at very low cost.”
“Structurally different”
This brave new world of AI-powered malicious activity means cyber defenders now face new considerations when contending with potential risks, Jakkal said.
Indeed, AI-powered malicious activity isn’t just faster, it’s “structurally different, and in this new reality, security has to change”, Jakkal said.
Jakkal noted that organizations have long relied on “layers of siloed point solutions, static policies, and human-reliant response”, but bad actors don’t operate within those silos.
“They think in graphs, and with agents, they can now operate continuously at machine speed across these graphs,” she explained.
This means enterprise security needs to shift from the traditional approach of shoring up specific control points toward a comprehensive architecture in which defense is proactive rather than reactive. AI, she said, will be crucial in facilitating this change.
“At Microsoft, we believe the future of security is ambient and autonomous, just like the AI it needs to protect,” Jakkal said.
“You can’t simply turn on security, it has to be something that’s woven deeply into every layer of the AI stack – from agents to apps, to platforms, to infrastructure. It needs to be always on, always there, everywhere.
“We need to use agents. We need to use agents that are continuously discovering, testing, and fixing attack paths in an always-on, self-defending loop so defenders can address these attacks before they happen.”
Humans in the loop
Ensuring humans are kept in the loop will be crucial in this process and a key factor in building trust, especially given that IDC research predicts more than 1.3 billion agents will be in operation by 2028.
Areas such as identity security will become more important than ever in ensuring enterprises can keep a close eye on agents while they operate behind the scenes.
“They must be secured with the same vigilance that we use to secure people,” she said.
Similarly, Microsoft has already observed the rise of “double agents” – agents that have been manipulated by malicious actors into carrying out nefarious activities.
With this in mind, Jakkal expects observability to be a key enterprise focus in the coming years.
“We cannot protect what we cannot see,” she said. “And in this era of agentic AI, organizations will need an observability control plane.”
Observability won’t rest solely with security teams either, she said. Developer teams and IT teams will also require shared controls to shore up identity and data security, and to ensure robust governance of agents.
The stakes are high when it comes to safe and secure agentic AI adoption, Jakkal said, which underlines the need for a trustworthy approach to integration – one that, done correctly, will also bring long-term benefits for enterprises.
“As we do this, I know that we will build trust at the very core of our organizations, and security becomes that incredible catalyst for innovation.”
Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.