Microsoft ramps up zero trust capabilities amid agentic AI push
The move from Microsoft looks to bolster agent security and prevent misuse
Sign up today and you will receive a free copy of our Future Focus 2025 report - the leading guidance on AI, cybersecurity and other IT challenges as per 700+ senior executives
You are now subscribed
Your newsletter sign-up was successful
AI agents need to be treated like any other employee, at least when it comes to security, and that means they can't be trusted by default and need their own secure identification.
With zero trust in mind, Microsoft will be extending its security and identity tools — Entra, Purview, and Defender — to cover AI agents developed using its own tools, as well as a few key partners.
"These announcements underscore our commitment to providing comprehensive security and governance for AI, with technology built on the security lessons of the past and in line with our Secure Future Initiative principles," noted Vasu Jakkal, Corporate Vice President at Microsoft Security, in a blog post.
The zero trust announcement comes alongside wider AI news from Microsoft's Build conference, held in Seattle this week, including the general availability of Azure AI Foundry Agent Service to help companies deploy agentic AI using pre-built or custom agents.
Alongside the zero trust announcements, Microsoft also revealed evaluation and monitoring tools built into Azure AI Foundry to help detect and block prompt injections as well as task adherence to keep agents in line.
Agentic AI is the latest big tech trend, with industry leaders previously suggesting this marks the latest step in the natural evolutionary path of generative AI. But concerns over security have come to the fore as the industry pivots to the technology.
Earlier this year, ITPro was told that while AI agents could mark a step change in cybersecurity, the technology also has the potential to leave enterprises vulnerable to a range of new threats.
Sign up today and you will receive a free copy of our Future Focus 2025 report - the leading guidance on AI, cybersecurity and other IT challenges as per 700+ senior executives
Microsoft has made its intentions clear in the agentic AI space, having already unveiled agents for its Security Copilot service. These new security features look to further bolster protection for enterprises dabbling in the technology.
Microsoft Entra Agent ID
Microsoft has unveiled a system for managing and security agentic AI called Microsoft Entra Agent ID, which manages AI agents to ensure they don't have access to data, apps, or other infrastructure without first being validated via the zero trust policy.
"Now, AI agents created within Microsoft Copilot Studio and Azure AI Foundry are automatically assigned identities in a Microsoft Entra directory — analogous to etching a unique VIN into every new car and registering it before it leaves the factory — centralizing agent and user management in one solution," said Jakkal.
The system will work with ServiceNow and Workday, integrating into their agent platforms and providing automated provisioning of identities, Jakkal added.
Purview and Defender
Alongside Entra Agent ID, Microsoft is also extending its Purview data security and compliance controls to AI agents built within Azure AI Foundry and Copilot Studio, as well as custom-built AI apps via a new software development kit (SDK).
"Developers can leverage these controls to help reduce the risk of their AI applications oversharing or leaking data, and to support compliance efforts, while security teams gain visibility into AI risks and mitigations," Jakkal said. "This integration improves AI data security and streamlines compliance management for development and security teams."
Similarly, the tech giant is adding security tools from Defender directly into Azure AI Foundry.
Jakkal noted that this integration “reduces the tooling gap” between security and development teams, meaning the latter can “proactively mitigate AI application risks” and potential vulnerabilities.
MORE FROM ITPRO
- OpenAI just launched 'Codex', a new AI agent for software engineering
- Microsoft expects 1.3 billion AI agents to be in operation by 2028 – here’s how it plans to get them working together
- GitHub just unveiled a new AI coding agent for Copilot – and it’s available now
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole the author of a book about the history of technology, The Long History of the Future.
-
Mistral CEO Arthur Mensch thinks 50% of SaaS solutions could be supplanted by AINews Mensch’s comments come amidst rising concerns about the impact of AI on traditional software
-
Westcon-Comstor and UiPath forge closer ties in EU growth driveNews The duo have announced a new pan-European distribution deal to drive services-led AI automation growth
-
Microsoft patches six zero-days targeting Windows, Word, and more – here’s what you need to knowNews Patch Tuesday update targets large number of vulnerabilities already being used by attackers
-
What security teams need to know about the NSA's new zero trust guidelinesNews The new guidelines aim to move an organization from discovery to target-level implementation of zero trust practices
-
Thousands of Microsoft Teams users are being targeted in a new phishing campaignNews Microsoft Teams users should be on the alert, according to researchers at Check Point
-
Microsoft warns of rising AitM phishing attacks on energy sectorNews The campaign abused SharePoint file sharing services to deliver phishing payloads and altered inbox rules to maintain persistence
-
Fears over “AI model collapse” are fueling a shift to zero trust data governance strategiesNews Gartner warns of "model collapse" as AI-generated data proliferates – and says organizations need to beware
-
Microsoft just took down notorious cyber crime marketplace RedVDS – and found hackers were using ChatGPT and its own Copilot tool to wage attacksNews Microsoft worked closely with law enforcement to take down the notorious RedVDS cyber crime service – and found tools like ChatGPT and its own Copilot were being used by hackers.
-
These Microsoft Teams security features will be turned on by default this month – here's what admins need to knowNews From 12 January, weaponizable file type protection, malicious URL detection, and a system for reporting false positives will all be automatically activated.
-
The Microsoft bug bounty program just got a big update — and even applies to third-party codeNews Microsoft is expanding its bug bounty program to cover all of its products, even those that haven't previously been covered by a bounty before and even third-party code.
