Microsoft wants customers to start red teaming generative AI systems to prevent security blunders
Microsoft hopes a new tool will help security practitioners shore up generative AI security
Microsoft has launched a new open automation framework designed to help security teams red team generative AI systems.
The Python Risk Identification Toolkit for generative AI (PyRIT) will “empower” security staff and machine learning engineers to identify and mitigate risks within generative AI systems more efficiently, the tech giant said.
Abstraction and extensibility are built into PyRIT through five interfaces: targets, datasets, a scoring engine, attack strategies, and memory.
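A minimal sketch of how those five abstractions might fit together is below. All class and method names are hypothetical, chosen for illustration; they are not PyRIT's actual API.

```python
# Illustrative sketch only -- names are hypothetical, not PyRIT's real API.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


class PromptTarget(ABC):
    """Target: the generative AI system under test, e.g. a chat endpoint."""

    @abstractmethod
    def send(self, prompt: str) -> str: ...


class Scorer(ABC):
    """Scoring engine: rates a response, e.g. 1.0 if harmful content appears."""

    @abstractmethod
    def score(self, response: str) -> float: ...


@dataclass
class Memory:
    """Memory: a persistent log of every prompt, response, and score."""

    records: list[tuple[str, str, float]] = field(default_factory=list)

    def add(self, prompt: str, response: str, score: float) -> None:
        self.records.append((prompt, response, score))


class AttackStrategy(ABC):
    """Attack strategy: drives one or more turns against a target, drawing
    prompts from a dataset (here just a list of strings) and logging to memory."""

    @abstractmethod
    def run(self, target: PromptTarget, dataset: list[str],
            scorer: Scorer, memory: Memory) -> None: ...
```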
Notably, PyRIT offers two separate attack styles. The first, known as “single-turn,” involves sending a combination of jailbreak and harmful prompts to a target AI system before scoring the response.
The second is called a “multiturn” strategy, in which PyRIT sends the same combination of prompts, scores the response, and then replies to the AI system based on that score. This allows security teams to probe more realistic adversarial behavior.
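As a rough illustration of the difference between the two styles, reusing the hypothetical interfaces sketched above (the `rephrase` helper, which crafts a follow-up message from the last reply and score, is likewise invented for this example):

```python
def rephrase(prompt: str, reply: str, score: float) -> str:
    """Hypothetical follow-up generator; a real tool might use an LLM here."""
    return f"That refusal scored {score:.1f}. Consider this instead: {prompt}"


def single_turn(target, jailbreak: str, prompts: list[str], scorer, memory) -> None:
    """Send each jailbreak+harmful prompt combination once, then score the reply."""
    for p in prompts:
        message = f"{jailbreak}\n{p}"
        reply = target.send(message)
        memory.add(message, reply, scorer.score(reply))


def multi_turn(target, jailbreak: str, prompt: str, scorer, memory,
               max_turns: int = 5) -> float:
    """Keep conversing with the target, adapting each follow-up to the last score."""
    message = f"{jailbreak}\n{prompt}"
    score = 0.0
    for _ in range(max_turns):
        reply = target.send(message)
        score = scorer.score(reply)
        memory.add(message, reply, score)
        if score >= 1.0:  # harmful output elicited; the attack succeeded
            break
        message = rephrase(prompt, reply, score)
    return score
```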
Microsoft noted that, while this tool automates tasks, it is not a “replacement” for the manual red teaming of generative AI systems. Instead, it acts as a form of augmentation to existing red team expertise.
As with many automation tools, the idea is to offload the more tedious workloads while keeping the human team in control of strategy and execution.
The biggest advantage Microsoft says it has seen is in efficiency.
Through an exercise on a Copilot system, the firm reported that it was able to “pick a harm category, generate several thousand malicious prompts, and use PyRIT’s scoring engine to evaluate the output” in a matter of hours rather than a matter of weeks.
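In code terms, the shape of that exercise might look something like the loop below; `generate_prompts` is an invented stand-in for however the several thousand malicious prompts were actually produced, and the target, scorer, and memory objects are the same hypothetical stand-ins as above.

```python
def generate_prompts(harm_category: str, n: int) -> list[str]:
    """Stub: a real run would draw on an LLM or a curated prompt dataset."""
    return [f"({harm_category} attempt #{i}) ..." for i in range(n)]


def evaluate_harm_category(target, scorer, memory, harm_category: str,
                           n: int = 5000) -> list[tuple[str, str, float]]:
    """Generate n prompts for one harm category, send them all, and batch-score."""
    for p in generate_prompts(harm_category, n):
        reply = target.send(p)
        memory.add(p, reply, scorer.score(reply))
    # Surface only the highest-scoring failures for human review.
    return [r for r in memory.records if r[2] >= 0.8]
```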
“At Microsoft, we believe that security practices and generative AI responsibilities need to be a collaborative effort. We are deeply committed to developing tools and resources that enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances,” Microsoft said.
Microsoft: Red teaming AI systems is too complex
Several factors lend a heightened level of complexity to red teaming AI systems, and automation can help make that complexity more manageable.
Drawing on its experience red teaming generative AI systems, Microsoft cited three ways in which generative AI security risks are more difficult to deal with than traditional security risks.
In the first instance, there is an added set of issues that red teams need to look out for.
When red teaming traditional software and classical AI systems, the focus is solely on security vulnerabilities. With generative AI, though, there is also the additional concern of responsible AI which often manifests itself in the form of biased or inaccurate content.
Generative AI is also more probabilistic than traditional software, meaning the red teaming process isn't as simple as executing the single, default attack path that would work on a traditional system.
Generative AI can provide different outputs in response to the same input, adding a layer of “non-determinism” that makes the red teaming process less straightforward.
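A toy example of why that matters, using the same hypothetical stand-ins as above: rather than trusting a single pass, a red team repeats the attack and looks at the distribution of outcomes.

```python
def attack_success_rate(target, scorer, prompt: str, trials: int = 20) -> float:
    """Repeat one attack; non-deterministic outputs mean the scores will vary."""
    scores = [scorer.score(target.send(prompt)) for _ in range(trials)]
    # Even a small fraction of harmful completions is a real failure mode.
    return sum(s >= 1.0 for s in scores) / trials
```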
The architecture of generative AI systems also varies considerably: they can be standalone applications or components of existing systems, while the content they produce, be it text, images, or video, can differ radically.
“To surface just one type of risk (say, generating violent content) in one modality of the application (say, a chat interface on browser), red teams need to try different strategies multiple times to gather evidence of potential failures,” Microsoft said.
“Doing this manually for all types of harms, across all modalities across different strategies, can be exceedingly tedious and slow,” it added.

George Fitzmaurice is a former Staff Writer at ITPro and ChannelPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.