Weaponised artificial intelligence (AI) is no longer some futuristic sci-fi nightmare. Autonomous killer robots aren't out to get us just yet, but AI technologies such as machine learning have been adopted by criminal gangs who, like any ambitious organisation, want to give their operations an edge.
One of the best-known botnets, TrickBot, is a prime example of a once standard Trojan that's now brimming with AI capabilities. Its creators have added intelligent algorithm-based modules which, for instance, calculate how to hide in a specific target system, making it almost impossible to detect.
Imaginative attackers are also using AI to scan for minute vulnerabilities in systems, process vast stores of personal data, and create deepfakes so realistic they'd fool a CEO's mum. Tools to achieve this nefarious magic are widely available through the dark web, but more frightening still is the prospect of criminals weaponising organisations' own AI by infiltrating and manipulating the data that informs it.
The implications for global security are indeed grim. Business leaders also fear lagging behind in the AI security race, with 60% of those surveyed by Darktrace last year suggesting human-driven responses are failing to keep up. Nearly all (96%) have begun to guard against AI, but with threats escalating, what tools and systems are available?
How AI learns to guard your data
To face down AI threats, you need AI defences. More than two-thirds (69%) of organisations surveyed in a Capgemini study said AI security is urgent, and this number is likely to grow as more are hit by AI-driven attacks. "I don't know any IT security vendor that hasn't included machine learning algorithms in security toolsets," says Freeform Dynamics analyst Tony Lock. "Security was one of the earliest sectors to use machine learning because it's so good at looking for patterns, especially anomalies that might indicate a threat."
Traditional security tools can't keep pace with the sheer scale of malware and ransomware created every week. AI, by contrast, can detect even the tiniest potential risk before it enters the system, without constantly running scans or being told what threats to look out for. Instead, it learns a baseline of normal activity and then automatically flags anything out of the ordinary.
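The baseline approach described above can be sketched in a few lines. This is a deliberately minimal illustration, not any vendor's actual method: it learns the mean and spread of normal activity, then flags values that stray too far from that baseline. Real products use far richer models over many signals at once.

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Learn a simple baseline (mean and spread) from observed normal activity."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag anything more than `threshold` standard deviations from the baseline."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Hypothetical example: daily outbound traffic (MB) from one workstation
normal_traffic = [102, 98, 110, 95, 105, 101, 99, 104]
baseline = build_baseline(normal_traffic)

print(is_anomalous(103, baseline))   # a typical day
print(is_anomalous(900, baseline))   # a sudden spike worth investigating
```

The point is that nothing here lists known threats: the system only knows what "normal" looks like, so even a brand-new attack shows up if it changes behaviour.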
AI apps and components are available in cloud services from the likes of Amazon and Microsoft, and can be added to existing systems without interrupting workflows. Everyone can get on with their jobs with minimal risk of mistakes, and the tools are designed to scale as required. Microsoft Azure's secure research environment for regulated data is a good example. It uses smart automation to supervise and analyse the user's business data, while its machine learning is ready to leap into action if it detects a blip. Similarly, email scanners such as Proofpoint use machine learning to detect malicious emails by spotting clues far too subtle for a human to see.
The more these tools are used, the more accurate and faster they get. Response times are slashed as AI tools learn from their own experiences and from those of other organisations, through analysis of samples shared in the cloud. "The AI might miss the first attack, but then it'll share that knowledge with other AI systems and create new ways to detect the new attack, and so on," says Adam Kujawa, security evangelist at Malwarebytes. Eventually, says Kujawa, the user won't encounter threats at all.
Beyond anomalies: Automation, scale and prediction
Automated threats can't be tackled using legacy security tools, but AI-powered cyber security tools can help. Deployed in a system, algorithms build a thorough understanding of activity such as website traffic, and learn to automatically and instantly distinguish between humans, good data, bad data, and bots.
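The human/bot distinction above can be caricatured with a toy scoring function. The features and thresholds here are illustrative assumptions of mine; a deployed system would learn them from labelled traffic data rather than hard-code them.

```python
def classify_traffic(requests_per_minute, pages_per_session, has_js_fingerprint):
    """Toy heuristic separating bots from humans.

    Real AI-powered tools learn these thresholds from data; the values
    below are invented purely for illustration.
    """
    score = 0
    if requests_per_minute > 60:      # humans rarely sustain >1 request/second
        score += 2
    if pages_per_session > 100:       # abnormally deep crawling
        score += 1
    if not has_js_fingerprint:        # headless clients often skip JavaScript
        score += 2
    return "bot" if score >= 3 else "human"

print(classify_traffic(120, 200, False))  # aggressive headless scraper
print(classify_traffic(5, 8, True))       # ordinary visitor
```

A machine-learning version replaces the hand-written rules with weights fitted to traffic logs, which is what lets it keep distinguishing humans, good data, bad data, and bots as attackers change tactics.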
Martin Rehak, CEO of security firm Resistant AI and lecturer at Prague University, gives the example of large-scale financial fraud that exploits organisations' own automation systems. "AI and machine learning are the only scaling factors that can supervise these systems effectively in real-time," he says. The system will then continuously refine relationships between algorithms, getting better at evaluating documents and behaviour in real-time, potentially uncovering all kinds of fraud.
AI also prioritises risks far more intuitively than a human can. "Technology has evolved to allow prioritisation backed by AI algorithms, which computes risk score," explains Naveen Vijay, VP of threat research at risk analytics firm Gurucul. "This approach allows it to automate not only the detection of incidents but also the mitigation process."
AI helps you prioritise resources, too. By enabling you to analyse vast amounts of data and create a detailed record of all your assets, an AI system can predict how and where you're most likely to be compromised, so you can organise your defences to protect the most vulnerable areas.
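The prioritisation idea reduces to scoring each asset and defending the highest-scoring ones first. The sketch below uses invented weights and example assets, hypothetical stand-ins for what a risk-analytics engine would compute from real exposure, criticality, and vulnerability data.

```python
def risk_score(asset):
    """Combine exposure, criticality and known vulnerabilities into one score.

    The weights are illustrative assumptions, not taken from any product.
    """
    return (asset["exposure"] * 0.40
            + asset["criticality"] * 0.35
            + asset["known_vulns"] * 0.25)

# Hypothetical asset inventory, each attribute rated 0-10
assets = [
    {"name": "public web server", "exposure": 9, "criticality": 7, "known_vulns": 4},
    {"name": "internal wiki",     "exposure": 3, "criticality": 4, "known_vulns": 2},
    {"name": "payroll database",  "exposure": 5, "criticality": 10, "known_vulns": 6},
]

# Defend the highest-scoring assets first
for asset in sorted(assets, key=risk_score, reverse=True):
    print(asset["name"], round(risk_score(asset), 2))
```

An AI system refines this continuously: as it observes new attacks and asset changes, the scores (and therefore the defensive priorities) update automatically.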
Deep learning, attack simulations and beyond
At the moment, AI defences can't do all the work by themselves. They still have to be correctly managed by humans. "The common mistake I see is companies paying for AI systems then not configuring them correctly," says Jamie King, information and cyber security manager at IT provider TSG. "I personally like Microsoft Sentinel as part of a security strategy, because it's cost-effective and works well. But organisations need to be aware that it is an option, and quality management needs to be in place."
AI is great for spotting anomalies, but a human is still needed to make the final call, agrees Phil Bindley, MD of cloud and security at Intercity. "Having a blend that uses both AI and humans helps to spot false positives. Solutions like Checkpoint Harmony inform about potential threats based on AI and machine learning, then require human interaction to make a choice on the best course of action."
Just as driverless cars are set to transform transport, though, autonomous AI systems may render human supervision unnecessary. Already, the most advanced AI security services offer elements of deep learning, which doesn't depend on human-designed algorithms but instead on neural networks, which comprise many layers of analytical nodes and are effectively artificial brains. Such a system could learn to "know" the difference between benign and malicious activity.
Security teams can already harness the predictive powers of AI by building models that help them predict what malware will do next, and then build AI workflows that swing into action automatically when an attack or variant is detected. AI prediction is evolving fast, however. Firms such as Darktrace are developing smart attack simulations that'll autonomously anticipate and block the actions of even the most inventive AI-equipped cybercriminal.
"Proactive security and simulations will be incredibly powerful," says Max Heinemeyer, VP of cyber innovation at Darktrace. "This will turn the tables on bad actors, giving security teams ways to future-proof their organisations against unknown and AI-driven threats."
Jane Hoskyn has been a journalist for over 25 years, with bylines in Men's Health, the Mail on Sunday, BBC Radio and more. In between freelancing, her roles have included features editor for Computeractive and technology editor for Broadcast, and she was named IPC Media Commissioning Editor of the Year for her work at Web User. Today, she specialises in writing features about user experience (UX), security and accessibility in B2B and consumer tech. You can follow Jane's personal Twitter account at @janeskyn.