
What is offensive AI and how do you protect against it?

Offensive AI is on the rise and organisations need to put appropriate defences in place if they are to fend off attacks

Much of what we hear about artificial intelligence focuses on its use in business processes, for example its ability to analyse vast quantities of data at speeds we humans can't manage and so help us make better decisions. But AI is also increasingly being used as an attack tool, where it is often referred to as offensive AI.

It was always going to happen 

It’s often said that the ‘bad actors’ are one step ahead of those wanting to protect systems, and that they use all the tools they can get their hands on to achieve their goals. So it won’t be a surprise to hear they are using AI too, or that there was a certain inevitability about the emergence of offensive AI. 

As Bryan Betts, principal analyst at Freeform Dynamics, tells IT Pro: “The use of smart tools to automate the attack process was inevitable. For instance, if a human attacker has to spend a lot of time trying different routes into a target network, adapting after each attempt and deciding what to try next, why not teach that process to a piece of software?”

It isn’t just speed that offensive AI delivers, but also flexibility. An offensive AI can attack many different targets at the same time, spreading its tentacles around and giving bad actors a wide reach.

Humans can’t handle it alone

Humans can’t fight this kind of fast, broad and deep attack on their own. A Forrester report, The Emergence of Offensive AI, produced for Darktrace, found that 79% of firms said security threats have become faster over the last five years, and 86% said the volume of advanced security threats has increased over the same period.

As organisations digitise more of their work processes, the size of the ‘attack surface’ grows and it becomes increasingly difficult for human surveillance to keep an eye on everything. The Forrester research found that 44% of organisations take more than three hours to discover an infection, fewer than 40% can remove the threat in under three hours, and fewer than a quarter can return to business as usual in less than three hours.

Offensive AI has the potential to push those statistics in the wrong direction, and the way to fight it is with AI that’s built to work in the organisation’s favour. This is known as defensive AI.

Fighting AI with AI

Just as the appearance of offensive AI was inevitable, so the development of defensive AI was always going to happen. Daulet Baimukashev, a data scientist at the Institute for Smart Systems and Artificial Intelligence (ISSAI) at Nazarbayev University in Kazakhstan, tells IT Pro: “Defensive AI can use machine learning methods to learn about the normal and anomalous behaviour of the system by analysing large inputs of data, and can figure out new types of attacks and continuously improve its accuracy.”

So, defensive AI can work on its own initiative not just to identify attacks, but also to repel them. Baimukashev explains: “Defensive AI can evolve to a system that autonomously tackles various cyber-attacks. This reduces the workload for human operations and increases the efficiency of dealing with large numbers of cyber-attacks.”
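To make the anomaly-detection idea concrete, here is a minimal, purely illustrative sketch in Python, assuming scikit-learn is available; the telemetry features and values are invented for the example, and it is not a description of how any particular defensive AI product works. The model learns a baseline of “normal” traffic and flags departures from it.

```python
# Illustrative sketch only: unsupervised anomaly detection of the kind
# Baimukashev describes, using scikit-learn's IsolationForest.
# The telemetry features and numbers below are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend telemetry: one row per connection, with columns such as
# [bytes sent, bytes received, session duration in seconds].
normal_traffic = rng.normal(loc=[5_000, 20_000, 30],
                            scale=[1_000, 4_000, 10],
                            size=(10_000, 3))

# Learn what "normal" looks like from historical data.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# New observations: one routine connection and one unusual bulk transfer.
new_events = np.array([
    [5_200, 21_000, 28],     # looks like everyday traffic
    [900_000, 1_000, 400],   # large outbound transfer over a long session
])

# predict() returns 1 for inliers and -1 for anomalies.
for event, label in zip(new_events, model.predict(new_events)):
    status = "anomalous" if label == -1 else "normal"
    print(f"{event} -> {status}")
```

In a real deployment the model would be retrained continuously on live telemetry so that, as Baimukashev notes, its picture of “normal” keeps improving.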

The point about reducing workload for humans is vital, given the scope, range and capabilities of offensive AI, and the need for speed in finding and disabling any successful attacks. Forrester’s report revealed that, as well as being concerned about the scale and speed of offensive AI attacks, 66% of cybersecurity decision makers felt offensive AI could carry out attacks that no human could imagine. If you can’t imagine something, you can’t be prepared for when it happens.

Keeping the humans in the loop

Despite the clear need to automate defence, and the ability of defensive AI systems to find and disable offensive AI attacks, Bryan Betts tells IT Pro that humans will always have a role to play: “I suspect the key for the defenders will be how well they can keep skilled humans in the loop, letting the machines deal with the data sifting and the routine fixes, and adapting to attacks via defensive upgrades, while the humans monitor the AI's decision-making and help build its learning.”

Just like many other implementations of AI, then, defensive AI helps humans by doing a lot of the heavy lifting, takes some actions autonomously, learns as it goes along, reports to humans and helps humans – and organisations – achieve their goals.


