Microsoft ramps up zero trust capabilities amid agentic AI push

Microsoft's move looks to bolster agent security and prevent misuse

AI agents need to be treated like any other employee, at least when it comes to security, and that means they can't be trusted by default and need their own secure identification.

With zero trust in mind, Microsoft will be extending its security and identity tools — Entra, Purview, and Defender — to cover AI agents developed using its own tools, as well as those from a few key partners.

"These announcements underscore our commitment to providing comprehensive security and governance for AI, with technology built on the security lessons of the past and in line with our Secure Future Initiative principles," noted Vasu Jakkal, Corporate Vice President at Microsoft Security, in a blog post.

The zero trust announcement comes alongside wider AI news from Microsoft's Build conference, held in Seattle this week, including the general availability of Azure AI Foundry Agent Service to help companies deploy agentic AI using pre-built or custom agents.

Alongside the zero trust announcements, Microsoft also revealed evaluation and monitoring tools built into Azure AI Foundry to help detect and block prompt injections, as well as monitor task adherence to keep agents in line.
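Microsoft hasn't published the internals of those Foundry checks, but the general shape is familiar: screen untrusted input before it reaches the agent, then verify the output stays on task. The sketch below is a deliberately crude stand-in; the pattern list and adherence check are illustrative assumptions, not Foundry's actual logic.

```python
# Illustrative stand-in only: Microsoft has not published how Azure AI
# Foundry's detection works, so this toy code just shows where such checks
# sit in an agent pipeline, not how the real service implements them.
import re

# Hypothetical patterns for obvious instruction-override attempts.
INJECTION_PATTERNS = [
    r"ignore (all |any )?previous instructions",
    r"disregard (the|your) system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag untrusted input that tries to override the agent's instructions."""
    return any(re.search(p, user_input, re.IGNORECASE) for p in INJECTION_PATTERNS)

def adheres_to_task(task: str, response: str) -> bool:
    """Crude adherence check: does the response still mention the assigned task?"""
    return task.lower() in response.lower()
```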

Agentic AI is the latest big tech trend, with industry leaders previously suggesting this marks the latest step in the natural evolutionary path of generative AI. But concerns over security have come to the fore as the industry pivots to the technology.

Earlier this year, ITPro was told that while AI agents could mark a step change in cybersecurity, the technology also has the potential to leave enterprises vulnerable to a range of new threats.

Microsoft has made its intentions clear in the agentic AI space, having already unveiled agents for its Security Copilot service. These new security features look to further bolster protection for enterprises dabbling in the technology.

Microsoft Entra Agent ID

Microsoft has unveiled Microsoft Entra Agent ID, a system for managing and securing agentic AI that ensures agents can't access data, apps, or other infrastructure without first being validated under zero trust policies.

"Now, AI agents created within Microsoft Copilot Studio and Azure AI Foundry are automatically assigned identities in a Microsoft Entra directory — analogous to etching a unique VIN into every new car and registering it before it leaves the factory — centralizing agent and user management in one solution," said Jakkal.

The system will work with ServiceNow and Workday, integrating into their agent platforms and providing automated provisioning of identities, Jakkal added.
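Microsoft hasn't detailed exactly how agent identities surface in the directory, but if they appear as ordinary Entra workload identities, they should be enumerable the same way other app identities are, via Microsoft Graph. A minimal sketch, assuming client-credentials access to the tenant and treating agent identities as service principals (the tenant and app values are placeholders):

```python
# Minimal sketch, assuming agent identities show up as ordinary Entra
# workload identities (service principals) that Microsoft Graph can list.
# Tenant and app credentials below are placeholders, and the assumption
# that agents appear under /servicePrincipals is ours, not Microsoft's.
import msal
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
if "access_token" not in token:
    raise RuntimeError(f"Token request failed: {token.get('error_description')}")

resp = requests.get(
    "https://graph.microsoft.com/v1.0/servicePrincipals",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    timeout=30,
)
resp.raise_for_status()
for sp in resp.json().get("value", []):
    print(sp.get("displayName"), sp.get("appId"))
```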

Purview and Defender

Alongside Entra Agent ID, Microsoft is also extending its Purview data security and compliance controls to AI agents built within Azure AI Foundry and Copilot Studio, as well as custom-built AI apps via a new software development kit (SDK).

"Developers can leverage these controls to help reduce the risk of their AI applications oversharing or leaking data, and to support compliance efforts, while security teams gain visibility into AI risks and mitigations," Jakkal said. "This integration improves AI data security and streamlines compliance management for development and security teams."

Similarly, the tech giant is adding security tools from Defender directly into Azure AI Foundry.

Jakkal noted that this integration "reduces the tooling gap" between security and development teams, meaning the latter can "proactively mitigate AI application risks" and potential vulnerabilities.

Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.

Nicole is the author of a book about the history of technology, The Long History of the Future.