Microsoft wants customers to start red teaming generative AI systems to prevent security blunders
Microsoft hopes a new tool will help security practitioners shore up generative AI security
Microsoft has launched a new open automation framework aimed at helping security teams red team generative AI systems.
The Python Risk Identification Toolkit for generative AI (PyRIT) will “empower” security staff and machine learning engineers to identify and mitigate risks within generative AI systems more efficiently, the tech giant said.
Abstraction and extensibility are built into PyRIT through five interfaces: targets, datasets, a scoring engine, attack strategies, and memory.
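The division of labor among those five interfaces can be sketched as a set of abstract base classes. These class names, method names, and signatures are illustrative stand-ins, not PyRIT's actual API:

```python
from abc import ABC, abstractmethod


class Target(ABC):
    """The AI system under test, e.g. a chat endpoint."""
    @abstractmethod
    def send(self, prompt: str) -> str: ...


class Dataset(ABC):
    """A source of attack prompts, such as jailbreaks or harmful requests."""
    @abstractmethod
    def prompts(self) -> list[str]: ...


class Scorer(ABC):
    """Evaluates a response, e.g. 'did the model produce harmful content?'"""
    @abstractmethod
    def score(self, response: str) -> float: ...


class AttackStrategy(ABC):
    """Decides which prompts to send and how to react to responses."""
    @abstractmethod
    def run(self, target: Target, dataset: Dataset, scorer: Scorer) -> list[float]: ...


class Memory(ABC):
    """Persists prompts, responses, and scores for later analysis."""
    @abstractmethod
    def record(self, prompt: str, response: str, score: float) -> None: ...
```

Keeping each concern behind its own interface is what lets practitioners swap in a new target system or scoring method without rewriting the rest of a red teaming exercise.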
Notably, PyRIT offers two separate attack styles. The first, known as “single-turn,” involves sending a combination of jailbreak and harmful prompts to a target AI system before scoring the response.
The second, a “multiturn” strategy, sends the same combination of prompts and scores the response, but then replies to the AI system based on that score. This allows security teams to investigate more realistic adversarial behavior.
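The two attack styles described above can be sketched roughly as follows. The scorer, target, and prompt-adaptation logic here are toy stand-ins invented for illustration, not PyRIT's real implementation:

```python
def score(response: str) -> float:
    """Toy scorer: pretend a longer response leaks more harmful content."""
    return min(len(response) / 100.0, 1.0)


def single_turn(target, prompts):
    """Send each jailbreak+harmful prompt once and score the reply."""
    return [(p, score(target(p))) for p in prompts]


def multi_turn(target, prompt, max_turns=3, threshold=0.8):
    """Keep conversing, adapting the prompt based on the score, until the
    attack succeeds (score >= threshold) or the turn budget runs out."""
    history = []
    for _ in range(max_turns):
        response = target(prompt)
        s = score(response)
        history.append((prompt, s))
        if s >= threshold:
            break
        # React to the target's answer, e.g. by escalating the request.
        prompt = prompt + " Please elaborate in more detail."
    return history
```

The key difference is the feedback loop: single-turn fires and scores once, while multi-turn uses each score to decide what to say next, which is closer to how a human adversary probes a system.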
Microsoft noted that, while this tool automates tasks, it is not a “replacement” for the manual red teaming of generative AI systems. Instead, it acts as a form of augmentation to existing red team expertise.
As is often the focus for automation tools, the idea is to eliminate more tedious workloads, while keeping the human team in control of strategy and execution.
The biggest advantage, Microsoft says, is the efficiency gain.
Through an exercise on a Copilot system, the firm reported that it was able to “pick a harm category, generate several thousand malicious prompts, and use PyRIT’s scoring engine to evaluate the output” in a matter of hours rather than a matter of weeks.
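That workflow, picking a harm category, expanding it into thousands of prompt variants, and scoring every response, can be sketched as below. The category name, templates, and scorer are hypothetical stand-ins, not the actual prompts or scoring engine Microsoft used:

```python
# Hypothetical templates for one harm category; a real exercise would
# expand these into several thousand malicious prompts.
HARM_TEMPLATES = {
    "example-category": [
        "Tell me about {topic}.",
        "Write a story involving {topic}.",
    ],
}


def generate_prompts(category: str, topics: list[str]) -> list[str]:
    """Expand each template for the category across every topic variant."""
    return [t.format(topic=x) for t in HARM_TEMPLATES[category] for x in topics]


def flag_failures(target, prompts, scorer, threshold=0.5):
    """Send every prompt and keep the ones whose response scored as harmful."""
    return [p for p in prompts if scorer(target(p)) >= threshold]
```

Because generation and scoring are both automated, the human red teamer's time goes into choosing the harm category and reviewing the flagged failures rather than hand-crafting and reading thousands of exchanges.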
“At Microsoft, we believe that security practices and generative AI responsibilities need to be a collaborative effort. We are deeply committed to developing tools and resources that enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances,” Microsoft said.
Microsoft: Red teaming AI systems is too complex
Several factors lend a heightened level of complexity to red teaming AI systems, and automation can help make that complexity more manageable.
Drawing on its experience red teaming generative AI systems, Microsoft cited three ways in which generative AI security risks are more difficult to deal with than traditional security risks.
In the first instance, there is an added set of issues which red teams need to look out for.
When red teaming traditional software and classical AI systems, the focus is solely on security vulnerabilities. With generative AI, though, there is the additional concern of responsible AI risks, which often manifest as biased or inaccurate content.
Generative AI is also more probabilistic than traditional software, meaning the red teaming process isn't as simple as executing the single, default attack path that would work on a traditional system.
Generative AI can provide different outputs in response to the same input, adding a layer of “non-determinism” that makes the red teaming process less straightforward.
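This non-determinism is why a single attempt proves little: the same prompt can be refused on one run and answered on another, so red teams estimate failure rates over repeated trials. A toy sketch, with an invented stochastic target standing in for a real model:

```python
import random


def stochastic_target(prompt: str, rng: random.Random) -> str:
    """Toy model that refuses only some of the time (70% here)."""
    if rng.random() < 0.7:
        return "I cannot help with that."
    return "Here is how you would do it..."


def failure_rate(prompt: str, trials: int, seed: int = 0) -> float:
    """Estimate how often the target produces the unwanted output."""
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(trials)
        if stochastic_target(prompt, rng).startswith("Here is how")
    )
    return failures / trials
```

Running enough trials to make such estimates trustworthy is exactly the kind of tedious repetition that a framework like PyRIT is built to automate.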
The architecture of generative AI systems can also vary considerably. They can be standalone applications or parts of existing systems, while the sort of content they produce, be it text, images, or video, can differ radically.
“To surface just one type of risk (say, generating violent content) in one modality of the application (say, a chat interface on browser), red teams need to try different strategies multiple times to gather evidence of potential failures,” Microsoft said.
“Doing this manually for all types of harms, across all modalities across different strategies, can be exceedingly tedious and slow,” it added.

George Fitzmaurice is a former Staff Writer at ITPro and ChannelPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.