Microsoft wants customers to start red teaming generative AI systems to prevent security blunders
Microsoft hopes a new tool will help security practitioners shore up generative AI security
Sign up today and you will receive a free copy of our Future Focus 2025 report - the leading guidance on AI, cybersecurity and other IT challenges as per 700+ senior executives
You are now subscribed
Your newsletter sign-up was successful
Microsoft has unveiled the launch of a new open automation framework aimed at aiding security teams to red team generative AI systems.
The Python Risk Identification Toolkit for generative AI (PyRIT) will “empower” security staff and machine learning engineers to identify and mitigate risks within generative AI systems more efficiently, the tech giant said.
Abstraction and extensibility is built into PyRIT through five interfaces. These include targets, datasets, scoring engine, attack strategies, and memory.
Notably, PyRIT offers two separate attack styles. The first, known as “single-turn,” involves sending a combination of jailbreak and harmful prompts to a target AI system before scoring the response.
The second is called a “multiturn” strategy, whereby PyRIT sends the same combination of prompts, again scores the response, but then responds back to the AI system depending on the score. This allows security teams to investigate more realistic adversarial behavior.
Microsoft noted that, while this tool automates tasks, it is not a “replacement” for the manual red teaming of generative AI systems. Instead, it acts as a form of augmentation to existing red team expertise.
As is often the focus for automation tools, the idea is to eliminate more tedious workloads, while keeping the human team in control of strategy and execution.
Sign up today and you will receive a free copy of our Future Focus 2025 report - the leading guidance on AI, cybersecurity and other IT challenges as per 700+ senior executives
The biggest advantage Microsoft says it has experienced is in efficiency gain.
Through an exercise on a Copilot system, the firm reported that it was able to “pick a harm category, generate several thousand malicious prompts, and use PyRIT’s scoring engine to evaluate the output” in a matter of hours rather than a matter of weeks.
“At Microsoft, we believe that security practices and generative AI responsibilities need to be a collaborative effort. We are deeply committed to developing tools and resources that enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances,” Microsoft said.
Microsoft: Red teaming AI systems is too complex
Several factors lend a heightened level of complexity to red teaming AI systems, and automation can help in making this complexity a little bit more manageable.
According to its experience red teaming generative AI systems, Microsoft cited three ways in which generative AI security risks are more difficult to deal with than traditional security risks.
RELATED WHITEPAPER
In the first instance, there is an added set of issues which red teams need to look out for.
When red teaming traditional software and classical AI systems, the focus is solely on security vulnerabilities. With generative AI, though, there is also the additional concern of responsible AI which often manifests itself in the form of biased or inaccurate content.
Generative AI is more probabilistic than traditional software as well, meaning the red teaming process isn't as simple as executing a single, default attack path which would work on a traditional system.
Generative AI can provide different outputs in response to the same input, adding a layer of “non-determinism” that makes the red teaming process less straightforward.
The architecture of generative AI systems can also vary considerably. They can be standalone applications or they can form parts of existing systems, while the sort of content they produce, be it text, picture, or video, can differ radically.
“To surface just one type of risk (say, generating violent content) in one modality of the application (say, a chat interface on browser), red teams need to try different strategies multiple times to gather evidence of potential failures,” Microsoft said.
“Doing this manually for all types of harms, across all modalities across different strategies, can be exceedingly tedious and slow,” it added.

George Fitzmaurice is a former Staff Writer at ITPro and ChannelPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.
-
ITPro Excellence Awards winners unveiledIt's time to celebrate excellence in IT. Read on for the full list of winners...
-
This new mobile compromise toolkit enables spyware, surveillance, and data theftNews The professional package allows even unsophisticated attackers to take full control of devices
-
Thousands of Microsoft Teams users are being targeted in a new phishing campaignNews Microsoft Teams users should be on the alert, according to researchers at Check Point
-
Microsoft warns of rising AitM phishing attacks on energy sectorNews The campaign abused SharePoint file sharing services to deliver phishing payloads and altered inbox rules to maintain persistence
-
Microsoft just took down notorious cyber crime marketplace RedVDS – and found hackers were using ChatGPT and its own Copilot tool to wage attacksNews Microsoft worked closely with law enforcement to take down the notorious RedVDS cyber crime service – and found tools like ChatGPT and its own Copilot were being used by hackers.
-
These Microsoft Teams security features will be turned on by default this month – here's what admins need to knowNews From 12 January, weaponizable file type protection, malicious URL detection, and a system for reporting false positives will all be automatically activated.
-
The Microsoft bug bounty program just got a big update — and even applies to third-party codeNews Microsoft is expanding its bug bounty program to cover all of its products, even those that haven't previously been covered by a bounty before and even third-party code.
-
Microsoft Teams is getting a new location tracking feature that lets bosses snoop on staff – research shows it could cause workforce pushbackNews A new location tracking feature in Microsoft Teams will make it easier to keep tabs on your colleague's activities – and for your boss to know exactly where you are.
-
Microsoft opens up Entra Agent ID preview with new AI featuresNews Microsoft Entra Agent ID aims to help manage influx of AI agents using existing tools
-
A notorious ransomware group is spreading fake Microsoft Teams ads to snare victimsNews The Rhysida ransomware group is leveraging Trusted Signing from Microsoft to lend plausibility to its activities