Microsoft launches open source tool Counterfit to prevent AI hacking


Microsoft has launched an open source tool to help developers assess the security of their machine learning systems.

The Counterfit project, now available on GitHub, comprises a command-line tool and generic automation layer to allow developers to simulate cyber attacks against AI systems.

Microsoft's red team has used Counterfit to test the company's own AI models, while the wider business is also exploring the tool for use in AI development.

Anyone can download the tool and deploy it through Azure Shell to run it in-browser, or install it locally in an Anaconda Python environment.

It can assess AI models hosted in various cloud environments, on premises, or at the edge. Microsoft also highlighted the tool's flexibility: it is agnostic to the underlying AI model and supports a variety of data types, including text, images, and generic input.

“Our tool makes published attack algorithms accessible to the security community and helps to provide an extensible interface from which to build, manage, and launch attacks on AI models,” Microsoft said.

“This tool is part of broader efforts at Microsoft to empower engineers to securely develop and deploy AI systems.”

Security professionals can deploy Counterfit in three key ways: pen testing and red teaming AI systems, scanning AI systems for vulnerabilities, and logging attacks against AI models.

The tool comes preloaded with attack algorithms, while security professionals can also use the built-in cmd2 scripting engine to hook into Counterfit from existing offensive tools for testing purposes.
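
Counterfit's preloaded algorithms draw on published adversarial machine learning research. The sketch below is not Counterfit's own interface; it is a minimal illustration, using the open source Adversarial Robustness Toolbox (ART) and a hypothetical scikit-learn model, of the kind of black-box evasion attack this sort of tooling automates.

# Illustrative only: a black-box evasion attack of the sort Counterfit automates,
# shown with the open source Adversarial Robustness Toolbox (ART) rather than
# Counterfit's own interface. The model and data are hypothetical stand-ins.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import HopSkipJump

# Train a simple model to stand in for the AI system under test
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the model so the attack only needs its predictions (black-box setting)
target = SklearnClassifier(model=model, clip_values=(float(X.min()), float(X.max())))

# HopSkipJump is a published decision-based evasion attack
attack = HopSkipJump(classifier=target, targeted=False, max_iter=10, max_eval=1000, init_eval=100)
x_adv = attack.generate(x=X[:5])

# Compare predictions on clean and perturbed inputs
print("clean predictions:      ", model.predict(X[:5]))
print("adversarial predictions:", model.predict(x_adv))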

Optionally, businesses can scan AI systems with relevant attacks to establish a baseline, then rerun those scans as vulnerabilities are addressed to measure ongoing progress.
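
To make that baselining idea concrete, here is a hypothetical sketch (not part of Counterfit) of how a team might compare attack success rates from an initial scan against later reruns; the ScanResult structure and the figures used are illustrative assumptions.

# Hypothetical helper (not part of Counterfit) showing the baselining idea:
# record the attack success rate from an initial scan, then compare later
# scans against it as vulnerabilities are fixed.
from dataclasses import dataclass

@dataclass
class ScanResult:
    attacks_attempted: int
    attacks_succeeded: int

    @property
    def success_rate(self) -> float:
        return self.attacks_succeeded / max(self.attacks_attempted, 1)

def progress_since_baseline(baseline: ScanResult, latest: ScanResult) -> float:
    """Positive values mean fewer attacks succeed now than in the baseline scan."""
    return baseline.success_rate - latest.success_rate

# Example: baseline scan vs. a rerun after mitigations were applied
baseline = ScanResult(attacks_attempted=50, attacks_succeeded=20)
latest = ScanResult(attacks_attempted=50, attacks_succeeded=8)
print(f"Attack success rate dropped by {progress_since_baseline(baseline, latest):.0%}")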

Microsoft developed the tool out of a need to assess its own systems for vulnerabilities. Counterfit began life as a handful of attack scripts written to target individual AI models, and gradually evolved into an automation tool to attack multiple systems at scale.

The company says it has engaged a variety of partners, customers, and government entities in testing the tool against machine learning models in their own environments.

Keumars Afifi-Sabet
Features Editor

Keumars Afifi-Sabet is a writer and editor who specialises in the public sector, cyber security, and cloud computing. He first joined ITPro as a staff writer in April 2018 and eventually became its Features Editor. Although a regular contributor to other tech sites in the past, these days you will find Keumars on LiveScience, where he runs its Technology section.