AI threats: The importance of a concrete strategy in fighting novel attacks

It’s been more than a year since ChatGPT burst onto the scene, and the pace of innovation in the space since its arrival has underlined the significance of the AI threats businesses face today.

AI’s ability to improve cyber security is a clear advantage, but the technology’s offensive potential is also a concern for business leaders. Some of the threats associated with the malicious use of AI tools include helping hackers create more sophisticated malware, enhancing their phishing campaigns, and using increasingly life-like deepfakes.

Industry experts predict AI threats will surge in the coming years. The UK’s National Cyber Security Centre (NCSC) recently warned that the malicious use of AI will fuel a spike in the volume and severity of cyber attacks in the next two years, and ransomware is expected to be a particular problem for businesses moving forward.

Last year, the NCSC also announced new global guidelines on the security considerations of developing AI systems. Meanwhile, the NCSC Annual Review suggests AI-related threats to critical national infrastructure such as energy firms are growing. 

So how big is the AI threat in the short and long term, and what can businesses do to prepare?

How sophisticated is the AI threat?

Threat actors are already using AI in social engineering attacks, deploying large language models to generate more realistic phishing emails in a variety of languages. AI is also being used to create more convincing deepfakes. For example, one bank in the UAE lost $35 million after it was compromised by a voice cloning attack that used AI to mimic a company director.

One of the most concerning developments brought about by the proliferation of AI systems is that it has significantly lowered the barrier for those looking to get involved in cyber crime. Budding cyber criminals who lack the technical proficiency to conduct traditional attacks can now use AI tools to automate various black hat tasks, explains Matt Middleton-Leal, managing director EMEA at Qualys.

“This will make attacks more scalable, and criminals can sell this on as a service to others as another way to monetize their skills.”

Threat actors are selling their malicious AI tools on the dark web, helping a range of potential hackers get involved in the digital extortion industry, explains Rohan Whitehead, education coordinator at the nonprofit Institute of Analytics. 

“It democratizes access to advanced tools, enabling even those with minimal technical expertise to launch complex cyber attacks.”

AI also promises to significantly shake up the ransomware industry by helping cyber criminals to develop and launch more targeted, adaptive threat campaigns.

“AI can enable the customization of ransomware payloads based on the specific vulnerabilities and characteristics of targeted systems, making it more challenging to defend against such attacks,” Whitehead explains.

Whitehead predicts that AI-powered ransomware’s flexibility may even result in new variants of malware that can learn from previous unsuccessful attacks to refine their methods and avoid detection in the future.

Concrete steps to counter the AI threat

Advanced capabilities such as these might seem a long way off, but as the NCSC’s warning makes clear, criminal use of AI is evolving quickly. It’s therefore important that businesses implement short- and long-term steps to counter the threat.

Katie Barnett, director of cyber security at Toro, says leaders need a clear strategy for the AI threat. “Organizations need to understand their threats, the current and future projected situation, and where security improvements or enhancements need to be made. Having a clear purpose will help identify where investments need to be made in the short and long term.”

In the short term, understand the assets you have across the business, says Middleton-Leal. In addition, prioritize your systems. For example: “Which ones are your business-critical applications, and which are most at risk because they are internet-facing or publicly accessible?”
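
As a rough illustration of that prioritization step, the sketch below scores assets by business criticality and exposure so the riskiest systems surface first. The fields, weights, and example systems are hypothetical, not a model prescribed by Qualys or Middleton-Leal.

```python
# Illustrative asset prioritization: score systems by business criticality
# and exposure so the riskiest get attention first. The asset records and
# weights below are hypothetical examples, not a prescribed model.
from dataclasses import dataclass


@dataclass
class Asset:
    name: str
    business_critical: bool  # does the business stop if this goes down?
    internet_facing: bool    # reachable from the public internet?
    open_high_findings: int  # known high-severity issues awaiting a fix


def risk_score(asset: Asset) -> int:
    """Simple additive score: higher means protect or patch sooner."""
    score = 5 if asset.business_critical else 0
    score += 3 if asset.internet_facing else 0
    score += min(asset.open_high_findings, 5)  # cap so one noisy host can't dominate
    return score


assets = [
    Asset("customer-portal", business_critical=True, internet_facing=True, open_high_findings=4),
    Asset("internal-wiki", business_critical=False, internet_facing=False, open_high_findings=1),
    Asset("payments-api", business_critical=True, internet_facing=True, open_high_findings=2),
]

for a in sorted(assets, key=risk_score, reverse=True):
    print(f"{a.name:16} score={risk_score(a)}")
```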

Start with a policy that defines the legitimate use of AI and make sure it is published and understood, Barnett advises. “Involve your workforce in this process. Understand why they are already using AI – what task are they trying to automate or augment and what are the potential benefits for your organization? How can you continue to leverage these benefits whilst recognizing and mitigating the potential risks? From this, you can create a process to assess and approve, or decline existing use cases.”
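
To make that assess-and-approve process concrete, here is a minimal sketch of an AI use-case register. The statuses, fields, and the single decline rule are illustrative assumptions, not Toro’s methodology; a real policy would involve human review rather than a hard-coded check.

```python
# Illustrative AI use-case register supporting an assess-and-approve (or
# decline) workflow. Statuses, fields, and the single rule are assumptions
# made for illustration only.
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    PROPOSED = "proposed"
    APPROVED = "approved"
    DECLINED = "declined"


@dataclass
class AIUseCase:
    team: str
    tool: str
    task: str         # what the team is trying to automate or augment
    data_shared: str  # what data would leave the organization, if any
    status: Status = Status.PROPOSED
    notes: str = ""


def assess(use_case: AIUseCase, external_sensitive_data_allowed: bool = False) -> AIUseCase:
    """Toy rule: decline anything sending customer data to an external tool."""
    if "customer" in use_case.data_shared.lower() and not external_sensitive_data_allowed:
        use_case.status = Status.DECLINED
        use_case.notes = "Sensitive data to an external tool is not permitted"
    else:
        use_case.status = Status.APPROVED
    return use_case


register = [
    AIUseCase("marketing", "LLM chatbot", "draft campaign copy", "none"),
    AIUseCase("support", "LLM chatbot", "summarize tickets", "customer emails"),
]

for uc in register:
    assess(uc)
    print(f"{uc.team}: {uc.tool} -> {uc.status.value} {uc.notes}")
```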

It’s also important to maintain your underlying IT systems and infrastructure, says Barnett. “At an organizational level, if you are developing something for public consumption, ensure it is secure by design. Have separate environments for development and testing to reduce the risk of compromise to your production systems and networks.”

The importance of an AI threat strategy

To counter the threat posed by AI in cyber security, businesses must adopt a “comprehensive strategy” that includes both technical and organizational measures, says Whitehead. “Technical defenses should leverage AI and machine learning (ML) to detect and respond to evolving threats in real time, while organizational measures should foster a culture of security awareness and resilience against AI-enhanced threats.”
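
As a minimal sketch of that “use ML to spot unusual activity” idea, the example below trains an IsolationForest on synthetic login telemetry and flags outliers. The features, thresholds, and data are invented for illustration; a production system would use far richer signals and tuning.

```python
# Minimal anomaly-detection sketch: an IsolationForest learns a baseline of
# "normal" login telemetry and flags outliers. The features, thresholds, and
# synthetic data are illustrative assumptions, not a production design.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic baseline: [login hour, MB downloaded, failed login attempts]
normal_activity = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around working hours
    rng.normal(50, 15, 500),  # typical download volumes
    rng.poisson(0.2, 500),    # the occasional failed login
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

# Two new events: one ordinary, one suspicious (3am, bulk download, many failures)
events = np.array([[11, 55, 0], [3, 900, 12]])
print(model.predict(events))  # 1 = looks normal, -1 = flag for investigation
```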

Businesses should engage in active threat intelligence sharing networks to benefit from collective defense strategies and insights into emerging threats, he says. “The development of ethical AI use guidelines within organizations can also play a crucial role in ensuring responsible use and deployment of AI technologies.”

In the long term, Middleton-Leal advises: “Automate your patching process: reducing the mean time to remediate systems reduces the window of opportunity around secondary applications, and it frees up time to spend on protecting critical applications.”
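
One way to tell whether patch automation is paying off is to track mean time to remediate. The sketch below computes it from a handful of hypothetical remediation records, standing in for the data a vulnerability scanner or ticketing system would provide.

```python
# Sketch of tracking mean time to remediate (MTTR) so the effect of patch
# automation is measurable over time. The records below are hypothetical;
# real data would come from a vulnerability scanner or ticketing system.
from datetime import date

# (system, date the vulnerability was detected, date the patch was applied)
remediations = [
    ("web-frontend", date(2024, 1, 3), date(2024, 1, 10)),
    ("payments-api", date(2024, 1, 5), date(2024, 1, 7)),
    ("internal-wiki", date(2024, 1, 2), date(2024, 1, 30)),
]


def mean_time_to_remediate(records) -> float:
    """Average number of days between detection and remediation."""
    days_open = [(fixed - found).days for _, found, fixed in records]
    return sum(days_open) / len(days_open)


print(f"MTTR: {mean_time_to_remediate(remediations):.1f} days")
```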

Meanwhile, company policies will need to account for the influence of AI. “Stricter rules and more training will need to be applied to prevent unethical use and the insider threat,” says Barnett. “Ensure that staff understand how to use the AI tools that are available to them. A higher level of education will need to be incorporated into training programs to develop competencies.”

While training your employees is key, it’s also important to communicate with your board about the risks, says Middleton-Leal. “Put these issues into context, so that your board is aware and knows how big a problem they pose. You may end up having to take action faster than you bargained for, but the impetus will be there to improve.”

In tandem, work on communication between teams, says Middleton-Leal. “Knowing what ‘good behavior’ looks like will be essential to AI-powered cyber security in the future, so you can eliminate risks fast. Getting ahead of these issues and knowing what’s coming up can help you keep that view of security up to date, as well as identifying what should be stopped for investigation.”

Kate O'Flaherty

Kate O'Flaherty is a freelance journalist with well over a decade's experience covering cyber security and privacy for publications including Wired, Forbes, the Guardian, the Observer, Infosecurity Magazine and the Times. Within cyber security and privacy, her specialist areas include critical national infrastructure security, cyber warfare, application security and regulation in the UK and the US amid increasing data collection by big tech firms such as Facebook and Google. You can follow Kate on Twitter.