NCSC announces global guidelines on AI security

The UK’s National Cyber Security Centre (NCSC) has announced a new set of global guidelines on the security considerations of developing artificial intelligence (AI) systems.

The recommendations will apply to anyone developing systems that use AI, whether they are creating an AI tool from scratch, or building on top of a pre-existing platform. 

Endorsed by agencies from 18 countries across the globe, including every member of the G7, the guidelines are described by the NCSC as the first of their kind to be agreed globally, and are aimed at ensuring AI systems are designed, developed, and deployed securely.

The guidelines will cover four key areas of an AI system’s development life cycle: secure design, secure development, secure deployment, and secure operation and maintenance.

Secure design focuses on helping developers understand the risks their specific AI system will pose and how to model these threats. 

The NCSC also wants developers to assess whether the service they are looking to create is “most appropriately addressed using AI”, and if so, whether they should choose to train a new model, use an existing model (and whether this will need fine-tuning), or work with an external model provider.

The guidance on secure development covers how developers can secure their supply chain, ensuring any software not produced in-house adheres to their organization's security standards.
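
The guidelines do not prescribe specific tooling, but one simple way to illustrate the supply-chain principle is to verify a third-party model artifact against a checksum pinned at review time before loading it. The file path and pinned hash in the sketch below are hypothetical.

```python
import hashlib
from pathlib import Path

# Hypothetical third-party model artifact and the checksum pinned when it was reviewed
MODEL_PATH = Path("models/third_party_encoder.bin")
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder

def verify_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to use a model file whose hash does not match the pinned value."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"Integrity check failed for {path}: got {digest}")

verify_artifact(MODEL_PATH, PINNED_SHA256)
# Load the model only after the check passes
```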

Secure development includes generating the appropriate documentation of data, models, and prompts, as well as managing technical debt throughout the development process.
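
As a small illustration of that documentation step, provenance details for a model, its training data, and its prompts can be captured in a machine-readable record kept alongside the artifact. The fields and filenames below are assumptions for illustration, not a format the guidelines mandate.

```python
import json
from datetime import date

# Hypothetical model card recording the provenance of a fine-tuned model
model_card = {
    "model_name": "support-ticket-classifier",
    "base_model": "third_party_encoder v1.2",
    "training_data": "internal tickets export, 2023-01 to 2023-09",
    "system_prompt_version": "prompts/classifier_v3.txt",
    "known_limitations": ["English-language tickets only"],
    "last_reviewed": date.today().isoformat(),
}

with open("model_card.json", "w", encoding="utf-8") as handle:
    json.dump(model_card, handle, indent=2)
```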

The NCSC’s advice on secure deployment outlines the measures developers should take to protect their infrastructure and models against compromise, threat, or loss.

The advisory also calls for robust infrastructure security principles to be applied across the system's life cycle, such as access controls on APIs, models, and data, as well as on the models' training pipelines.
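
To make the access-control point concrete, requests to a model-serving endpoint might be gated on per-client credentials carrying explicit scopes. The sketch below is an assumption about how such a check could look; the key store, scope names, and run_model helper are illustrative, not part of the NCSC guidance.

```python
# Minimal sketch of scoped access control in front of a model endpoint.
# In practice the keys would live in a secrets manager and scopes in an identity provider.
API_KEYS = {
    "key-abc123": {"scopes": {"inference"}},
    "key-def456": {"scopes": {"inference", "training"}},
}

def authorize(api_key: str, required_scope: str) -> bool:
    """Return True only if the key exists and carries the required scope."""
    client = API_KEYS.get(api_key)
    return client is not None and required_scope in client["scopes"]

def run_model(prompt: str) -> str:
    """Placeholder for the call into the deployed model."""
    return f"model output for: {prompt!r}"

def handle_inference(api_key: str, prompt: str) -> str:
    if not authorize(api_key, "inference"):
        raise PermissionError("API key lacks the 'inference' scope")
    return run_model(prompt)
```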

Secure operation and maintenance comprises recommendations for developers once they have deployed their model, such as monitoring the system’s behavior and its data drift. 
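
One common way to watch for data drift once a model is live is to compare incoming feature values against a reference window from training, for instance with a two-sample Kolmogorov–Smirnov test. The data and threshold below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(reference: np.ndarray, live: np.ndarray, p_threshold: float = 0.01) -> bool:
    """Flag drift when the live window differs significantly from the reference window."""
    _statistic, p_value = ks_2samp(reference, live)
    return p_value < p_threshold

# Illustrative data: a reference window from training and a shifted live window
rng = np.random.default_rng(0)
reference_window = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_window = rng.normal(loc=0.4, scale=1.0, size=5_000)

if drift_alert(reference_window, live_window):
    print("Data drift detected: review inputs and consider retraining")
```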

The guidance also advises developers to monitor and log the inputs to their systems, including queries and prompts, for the purposes of audits, investigations, and remediation should the system be compromised.
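
A lightweight way to follow that advice is to write each incoming prompt to an append-only, structured audit log with a timestamp and request identifier, so that queries can be reconstructed during an investigation. The log destination and fields in the sketch are assumptions rather than anything mandated by the guidelines.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Append-only audit log for model inputs; the filename is illustrative
audit_logger = logging.getLogger("ai_audit")
audit_logger.setLevel(logging.INFO)
audit_logger.addHandler(logging.FileHandler("prompt_audit.log"))

def log_prompt(user_id: str, prompt: str) -> str:
    """Record an incoming prompt as a JSON line and return its request ID."""
    request_id = str(uuid.uuid4())
    audit_logger.info(json.dumps({
        "request_id": request_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "prompt": prompt,
    }))
    return request_id

log_prompt("user-42", "Summarise this quarter's incident reports")
```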

International approach is key to keeping up with rapid AI development 

Speaking at Chatham House in June, NCSC CEO Lindy Cameron explained the importance of adopting a collaborative approach in order to keep pace with the rapidly evolving AI landscape.

“We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up,” said Cameron.

“These Guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.”

Jen Easterly, CISA director, described the new guidelines as a significant marker in the global community’s commitment to the responsible development and deployment of AI technologies.

“The release of the Guidelines for Secure AI System Development marks a key milestone in our collective commitment—by governments across the world—to ensure the development and deployment of artificial intelligence capabilities that are secure by design”, Easterly commented.

“The domestic and international unity in advancing secure by design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology evolution. This joint effort reaffirms our mission to protect critical infrastructure and reinforces the importance of cross-border collaboration in securing our digital future.”

Solomon Klappholz
Staff Writer

Solomon Klappholz is a Staff Writer at ITPro. He has experience writing about the technologies that facilitate industrial manufacturing, which led him to develop a particular interest in IT regulation, industrial infrastructure applications, and machine learning.