Microsoft wants customers to start red teaming generative AI systems to prevent security blunders

Microsoft logo sits illuminated outside the Microsoft booth on day 2 of the GSMA Mobile World Congress 2019 on February 26, 2019, in Barcelona, Spain
(Image credit: Getty Images)

Microsoft has announced the launch of a new open automation framework aimed at helping security teams red team generative AI systems. 

The Python Risk Identification Toolkit for generative AI (PyRIT) will “empower” security staff and machine learning engineers to identify and mitigate risks within generative AI systems more efficiently, the tech giant said. 

Abstraction and extensibility are built into PyRIT through five interfaces: targets, datasets, a scoring engine, attack strategies, and memory. 
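To make those abstractions concrete, the sketch below shows how a target, dataset, scoring engine, attack strategy, and memory store might fit together. The class names and method signatures are illustrative placeholders only, not PyRIT's actual API.

```python
# Illustrative sketch only -- the names below are hypothetical stand-ins
# for PyRIT's five abstractions, not its real classes.
from dataclasses import dataclass, field


@dataclass
class PromptDataset:
    """Dataset: the jailbreak and harmful prompts to try against the target."""
    prompts: list[str]


class Target:
    """Target: wraps the generative AI endpoint under test."""
    def send(self, prompt: str) -> str:
        raise NotImplementedError


class Scorer:
    """Scoring engine: rates how harmful or policy-violating a response is."""
    def score(self, response: str) -> float:
        raise NotImplementedError


@dataclass
class Memory:
    """Memory: records every prompt, response, and score for later analysis."""
    records: list[tuple[str, str, float]] = field(default_factory=list)


def single_turn_attack(target: Target, dataset: PromptDataset,
                       scorer: Scorer, memory: Memory) -> None:
    """Attack strategy: the simplest single-turn loop over a prompt dataset."""
    for prompt in dataset.prompts:
        response = target.send(prompt)
        memory.records.append((prompt, response, scorer.score(response)))
```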

Notably, PyRIT offers two separate attack styles. The first, known as “single-turn,” involves sending a combination of jailbreak and harmful prompts to a target AI system before scoring the response. 

The second is called a “multiturn” strategy, whereby PyRIT sends the same combination of prompts, again scores the response, and then responds back to the AI system based on that score. This allows security teams to investigate more realistic adversarial behavior. 
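The snippet below sketches what a score-driven multi-turn loop of this kind could look like in plain Python. It illustrates the idea rather than PyRIT's real interface: send_to_target, score_response, and next_prompt are hypothetical stand-ins for the target, scoring engine, and attack strategy.

```python
from typing import Callable


def multi_turn_attack(
    seed_prompt: str,
    send_to_target: Callable[[str], str],      # hypothetical: calls the AI system under test
    score_response: Callable[[str], float],    # hypothetical: scoring engine, 0.0 benign .. 1.0 harmful
    next_prompt: Callable[[str, float], str],  # hypothetical: picks a follow-up based on the score
    max_turns: int = 5,
    success_threshold: float = 0.8,
) -> list[tuple[str, str, float]]:
    """Send a prompt, score the reply, then keep responding to the model
    based on that score -- the 'multiturn' behavior described above."""
    transcript = []
    prompt = seed_prompt
    for _ in range(max_turns):
        response = send_to_target(prompt)
        score = score_response(response)
        transcript.append((prompt, response, score))
        if score >= success_threshold:
            break  # the target produced harmful output; stop and report
        prompt = next_prompt(response, score)  # adapt the next turn to the reply
    return transcript
```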

Microsoft noted that, while this tool automates tasks, it is not a “replacement” for the manual red teaming of generative AI systems. Instead, it acts as a form of augmentation to existing red team expertise. 

As is often the focus for automation tools, the idea is to eliminate more tedious workloads, while keeping the human team in control of strategy and execution.

The biggest advantage Microsoft says it has seen is in efficiency gains. 

Through an exercise on a Copilot system, the firm reported that it was able to “pick a harm category, generate several thousand malicious prompts, and use PyRIT’s scoring engine to evaluate the output” in a matter of hours rather than a matter of weeks. 
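A roll-up of that kind of large automated run might look something like the sketch below, which assumes each record carries a harm category, a prompt, and a score, and that scores above an arbitrary threshold count as harmful output. The field names and threshold are assumptions for illustration, not PyRIT's actual output format.

```python
from collections import defaultdict


def summarise_run(results):
    """Aggregate (harm_category, prompt, score) records from a large automated
    run into a per-category failure rate -- the kind of roll-up a red team
    might do on scoring output (record layout and threshold are assumed)."""
    totals = defaultdict(int)
    failures = defaultdict(int)
    for harm_category, _prompt, score in results:
        totals[harm_category] += 1
        if score >= 0.8:  # assumed threshold for 'harmful output'
            failures[harm_category] += 1
    return {cat: failures[cat] / totals[cat] for cat in totals}


# Example: two categories, three scored prompts
print(summarise_run([("violence", "p1", 0.9), ("violence", "p2", 0.1),
                     ("self-harm", "p3", 0.95)]))
# -> {'violence': 0.5, 'self-harm': 1.0}
```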

“At Microsoft, we believe that security practices and generative AI responsibilities need to be a collaborative effort. We are deeply committed to developing tools and resources that enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances,” Microsoft said.  

Microsoft: Red teaming AI systems is too complex

Several factors lend a heightened level of complexity to red teaming AI systems, and automation can help make that complexity more manageable. 

Drawing on its experience red teaming generative AI systems, Microsoft cited three ways in which generative AI security risks are harder to deal with than traditional security risks. 


In the first instance, there is an added set of issues that red teams need to look out for.

When red teaming traditional software and classical AI systems, the focus is solely on security vulnerabilities. With generative AI, though, there is the additional concern of responsible AI harms, which often manifest in the form of biased or inaccurate content.

Generative AI is also more probabilistic than traditional software, meaning the red teaming process isn't as simple as executing a single, default attack path that would work on a traditional system.

Generative AI can provide different outputs in response to the same input, adding a layer of “non-determinism” that makes the red teaming process less straightforward.
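One common way of handling that non-determinism is to repeat each probe and report a rate rather than a single pass/fail result, along the lines of the sketch below. Again, the callables are hypothetical placeholders rather than PyRIT's actual API.

```python
from typing import Callable


def harmful_output_rate(
    prompt: str,
    send_to_target: Callable[[str], str],    # hypothetical call to the system under test
    score_response: Callable[[str], float],  # hypothetical scoring engine
    trials: int = 20,
    threshold: float = 0.8,
) -> float:
    """Repeat the same prompt and report how often the (non-deterministic)
    model crosses the harm threshold, instead of relying on a single attempt."""
    harmful = sum(
        1 for _ in range(trials)
        if score_response(send_to_target(prompt)) >= threshold
    )
    return harmful / trials
```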

The architecture of generative AI systems can also vary considerably. They can be standalone applications or they can form parts of existing systems, while the sort of content they produce, be it text, picture, or video, can differ radically.

“To surface just one type of risk (say, generating violent content) in one modality of the application (say, a chat interface on browser), red teams need to try different strategies multiple times to gather evidence of potential failures,” Microsoft said.

“Doing this manually for all types of harms, across all modalities across different strategies, can be exceedingly tedious and slow,” it added. 

George Fitzmaurice
Staff Writer

George Fitzmaurice is a staff writer at ITPro, ChannelPro, and CloudPro, with a particular interest in AI regulation, data legislation, and market development. After graduating from the University of Oxford with a degree in English Language and Literature, he undertook an internship at the New Statesman before starting at ITPro. Outside of the office, George is both an aspiring musician and an avid reader.