Much of what we hear about artificial intelligence is focused on its use in business processes – for example, its ability to analyse vast quantities of data at speeds we humans can’t manage, and so help us make better decisions. But AI is also increasingly apparent as an attack tool, where it is often referred to as offensive AI.
It was always going to happen
It’s often said that the ‘bad actors’ are one step ahead of those wanting to protect systems, and that they use all the tools they can get their hands on to achieve their goals. So it won’t be a surprise to hear they are using AI too, or that there was a certain inevitability about the emergence of offensive AI.
As Bryan Betts, principal analyst at Freeform Dynamics tells IT Pro: “The use of smart tools to automate the attack process was inevitable. For instance, if a human attacker has to spend a lot of time trying different routes into a target network, adapting after each attempt and deciding what to try next, why not teach that process to a piece of software?”
It isn’t just speed that offensive AI delivers, it’s also flexibility. An offensive AI can attack many different targets at the same time, spreading its tentacles around and giving bad actors a wide reach.
Humans can’t handle it alone
Humans can’t fight this kind of fast, broad and deep attack on their own. A Forrester report, The Emergence of Offensive AI, produced for DarkTrace, found that 79% of firms said security threats have become faster over the last five years, and 86% said the volume of advanced security threats had increased over the same period.
As organisations digitise more of their work processes, the size of the ‘attack surface’ grows and it becomes increasingly difficult for human surveillance to keep an eye on everything. The Forrester research found it takes 44% of organisations more than three hours to discover there’s been an infection, fewer than 40% can remove the threat in under three hours, and fewer than a quarter can return to business as usual in less than three hours.
Offensive AI has the potential to push those statistics in the wrong direction, and the way to fight it is with AI that’s built to work in the organisation’s favour. This is known as defensive AI.
Fighting AI with AI
Just as the appearance of offensive AI was inevitable, so the development of defensive AI was always going to happen. Daulet Baimukashev, data scientist from the Institute for Smart Systems and Artificial Intelligence (ISSAI) at Nazarbayev University, Kazakhstan, tells IT Pro: “Defensive AI can use machine learning methods to learn about the normal and anomalous behaviour of the system by analysing large inputs of data, and can figure out new types of attacks and continuously improve its accuracy.”
So, defensive AI can work on its own initiative not just to identify attacks, but also to repel them. Baimukashev explains: “Defensive AI can evolve to a system that autonomously tackles various cyber-attacks. This reduces the workload for human operations and increases the efficiency of dealing with large numbers of cyber-attacks.”
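The behaviour Baimukashev describes – learning what “normal” looks like from data, then flagging deviations – can be illustrated with a deliberately simple sketch. This is a toy example with hypothetical traffic figures, not a real defensive AI system, which would use far richer models and many more signals:

```python
# Toy anomaly detector: learn a baseline of normal behaviour from
# historical data, then flag values that deviate sharply from it.
from statistics import mean, stdev

def train_baseline(samples):
    """Learn the normal range from historical metrics (e.g. requests per minute)."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the norm."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Hypothetical traffic data: requests per minute under normal operation
normal_traffic = [98, 102, 97, 105, 100, 99, 103, 101, 96, 104]
baseline = train_baseline(normal_traffic)

print(is_anomalous(101, baseline))   # typical load -> False
print(is_anomalous(500, baseline))   # sudden spike -> True
```

Real systems replace the single metric and fixed threshold with machine learning models trained on many dimensions of behaviour, and retrain continuously – which is how, as Baimukashev notes, accuracy improves over time.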
The point about reducing workload for humans is vital, given the scope, range and capabilities of offensive AI, and the need for speed in finding and disabling any successful attacks. Forrester’s report revealed that as well as being concerned about the scale and speed of offensive AI attacks, 66% of cybersecurity decision makers felt that offensive AI could carry out attacks that no human could imagine. If you can’t imagine something, you can’t be prepared for when it happens.
Keeping the humans in the loop
Despite the clear need to automate defence, and the ability of defensive AI systems to find and disable offensive AI attacks, Bryan Betts tells IT Pro that humans will always have a role to play: “I suspect the key for the defenders will be how well they can keep skilled humans in the loop, letting the machines deal with the data sifting and the routine fixes, and adapting to attacks via defensive upgrades, while the humans monitor the AI's decision-making and help build its learning.” Like many other implementations of AI, then, defensive AI does much of the heavy lifting: it takes some actions autonomously, learns as it goes along, and reports back – helping humans, and organisations, achieve their goals.
Sandra Vogel is a freelance journalist with decades of experience in long-form and explainer content, research papers, case studies, white papers, blogs, books, and hardware reviews. She has contributed to ZDNet, national newspapers and many of the best known technology web sites.
At ITPro, Sandra has contributed articles on artificial intelligence (AI), measures that can be taken to cope with inflation, the telecoms industry, risk management, and C-suite strategies. In the past, Sandra also contributed handset reviews for ITPro and has written for the brand for more than 13 years in total.