
What is offensive AI and how do you protect against it?

Offensive AI is on the rise and organisations need to put appropriate defences in place if they are to fend off attacks

Much of what we hear about artificial intelligence is focused on its use in business processes – for example, its ability to analyse vast quantities of data at speeds we humans can’t manage, and so help us make better decisions. But AI is also increasingly apparent as an attack tool, where it is often referred to as offensive AI.

It was always going to happen 

It’s often said that the ‘bad actors’ are one step ahead of those wanting to protect systems, and that they use all the tools they can get their hands on to achieve their goals. So it won’t be a surprise to hear they are using AI too, or that there was a certain inevitability about the emergence of offensive AI. 

As Bryan Betts, principal analyst at Freeform Dynamics tells IT Pro: “The use of smart tools to automate the attack process was inevitable. For instance, if a human attacker has to spend a lot of time trying different routes into a target network, adapting after each attempt and deciding what to try next, why not teach that process to a piece of software?”

It isn’t just speed that offensive AI delivers, it’s also flexibility. An offensive AI can attack many different targets at the same time, giving bad actors far wider reach than a human team could manage.

Humans can’t handle it alone

Humans can’t fight this kind of fast, broad and deep attack on their own. A Forrester report produced for Darktrace, The Emergence of Offensive AI, found that 79% of firms said security threats have become faster over the last five years, and 86% said the volume of advanced security threats had increased over the same period.

As organisations digitise more of their work processes, the size of the ‘attack surface’ grows and it becomes increasingly difficult for human surveillance to keep an eye on everything. The Forrester research found it takes 44% of organisations more than three hours to discover there’s been an infection, fewer than 40% can remove the threat in under three hours, and fewer than a quarter can return to business as usual in less than three hours.

Offensive AI has the potential to push those statistics in the wrong direction, and the way to fight it is with AI that’s built to work in the organisation’s favour. This is known as defensive AI.

Fighting AI with AI

Just as the appearance of offensive AI was inevitable, so the development of defensive AI was always going to happen. Daulet Baimukashev, data scientist from the Institute for Smart Systems and Artificial Intelligence (ISSAI) at Nazarbayev University, Kazakhstan, tells IT Pro: “Defensive AI can use machine learning methods to learn about the normal and anomalous behaviour of the system by analysing large inputs of data, and can figure out new types of attacks and continuously improve its accuracy.”
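Baimukashev’s description of learning “normal” behaviour and flagging deviations is, at its simplest, statistical anomaly detection. The sketch below is a hypothetical illustration of that principle, not any vendor’s implementation: it learns a baseline from historical traffic and flags observations that stray too far from it. Real defensive AI systems use far richer machine learning models, but the underlying idea is the same.

```python
import statistics

def train_baseline(samples):
    """Learn a 'normal' profile from historical metric samples
    (e.g. requests per minute) as a mean and standard deviation."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag a new observation as anomalous if it deviates from the
    learned baseline by more than `threshold` standard deviations."""
    return abs(value - mean) > threshold * stdev

# Hypothetical historical traffic considered normal: ~100 requests/minute
history = [98, 102, 97, 105, 99, 101, 103, 96, 100, 104]
mean, stdev = train_baseline(history)

print(is_anomalous(101, mean, stdev))   # typical traffic: False
print(is_anomalous(400, mean, stdev))   # sudden spike worth investigating: True
```

In practice, as the quote notes, such a system would retrain continuously on fresh data so its notion of “normal” tracks the organisation’s changing behaviour, improving its accuracy over time.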

So, defensive AI can work on its own initiative not just to identify attacks, but also to repel them. Baimukashev explains: “Defensive AI can evolve to a system that autonomously tackles various cyber-attacks. This reduces the workload for human operations and increases the efficiency of dealing with large numbers of cyber-attacks.”

The point about reducing workload for humans is vital, given the scope, range and capabilities of offensive AI, and the need for speed in finding and disabling any successful attacks. Forrester’s report revealed that, as well as being concerned about the scale and speed of offensive AI attacks, 66% of cybersecurity decision makers felt offensive AI could carry out attacks no human could imagine. And if you can’t imagine an attack, you can’t prepare for it.

Keeping the humans in the loop

Despite the clear need to automate defence, and the ability of defensive AI systems to find and disable offensive AI attacks, Bryan Betts tells IT Pro that humans will always have a role to play: “I suspect the key for the defenders will be how well they can keep skilled humans in the loop, letting the machines deal with the data sifting and the routine fixes, and adapting to attacks via defensive upgrades, while the humans monitor the AI's decision-making and help build its learning.” Like many other implementations of AI, then, defensive AI does much of the heavy lifting for humans: it takes some actions autonomously, learns as it goes along, reports back, and helps humans – and organisations – achieve their goals.

