AI threats: The importance of a concrete strategy in fighting novel attacks

It’s been more than a year since ChatGPT burst onto the scene, and innovation in the space since then has underlined the significance of the AI threat businesses face.

There’s no doubt generative AI can boost security, but it also carries risks: hackers can use it to create malware, craft more convincing phishing emails, and mount attacks using realistic deepfakes.

The capabilities enabled by AI are evolving rapidly and with this, experts predict the threat to security will surge. Malicious AI use will drive an increase in the volume and impact of cyber attacks over the next two years, especially ransomware, according to a new warning from the UK’s National Cyber Security Centre (NCSC). 

Last year, the NCSC announced new global guidelines on the security considerations of developing AI systems. Meanwhile, the NCSC Annual Review suggests AI-related threats to critical national infrastructure such as energy firms are growing. 

So how big is the AI threat in the short and long term, and what can businesses do to prepare?

How sophisticated is the AI threat?

AI is already being used in email phishing attacks to create more convincing emails in multiple languages. It’s also being used to enhance deepfakes, with one UAE-based bank losing $35 million following a successful voice cloning attack. 

One of the biggest problems with AI is its ability to make complex cyber attacks achievable for a wider pool of criminals. For example, AI tools can be used to automate the development of malicious content, says Matt Middleton-Leal, managing director EMEA at Qualys. “This will make attacks more scalable, and criminals can sell this on as a service to others as another way to monetize their skills.”

The emergence of generative-AI-as-a-service on underground forums makes sophisticated cyber attack capabilities available to a wider range of criminals, agrees Rohan Whitehead, education coordinator at nonprofit the Institute of Analytics. “It democratizes access to advanced tools, enabling even those with minimal technical expertise to launch complex cyber attacks.”

AI can also offer a boost to ransomware operators by enhancing cyber criminals’ capabilities and helping them to create more targeted and adaptive campaigns. “AI can enable the customization of ransomware payloads based on the specific vulnerabilities and characteristics of targeted systems, making it more challenging to defend against such attacks,” Whitehead explains. 

The adaptive nature of AI-driven ransomware could even lead to the development of malware that learns from attempts to detect or neutralize it and becomes more resilient over time, Whitehead predicts.

Concrete steps to counter the AI threat

Advanced capabilities such as these might seem a long way off, but as the NCSC’s warning makes clear, criminal use of AI is evolving quickly. It’s therefore important that businesses implement short- and long-term steps to counter the threat.

Katie Barnett, director of cyber security at Toro, says leaders need a clear strategy for the AI threat. “Organizations need to understand their threats, the current and future projected situation, and where security improvements or enhancements need to be made. Having a clear purpose will help identify where investments need to be made in the short and long term.”

In the short term, understand the assets you have across the business, says Middleton-Leal. In addition, prioritize your systems. For example: “Which ones are your business-critical applications, and which are at most risk because they are internet-facing or publicly accessible?”
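As a sketch of that prioritization step, the snippet below ranks an asset inventory by business criticality and internet exposure. The asset fields and scoring weights are hypothetical illustrations, not something prescribed in the article:

```python
# Hypothetical asset inventory: rank systems for review/remediation priority
# based on business criticality and whether they are internet-facing.

def priority_score(asset: dict) -> int:
    """Higher score = look at this system first. Weights are illustrative."""
    score = asset["criticality"]   # e.g. 1 (low) to 5 (business-critical)
    if asset["internet_facing"]:
        score += 3                 # public exposure raises urgency
    return score

assets = [
    {"name": "intranet-wiki", "criticality": 2, "internet_facing": False},
    {"name": "payments-api",  "criticality": 5, "internet_facing": True},
    {"name": "hr-portal",     "criticality": 3, "internet_facing": True},
]

ranked = sorted(assets, key=priority_score, reverse=True)
for asset in ranked:
    print(asset["name"], priority_score(asset))  # payments-api first
```

In practice the inputs would come from an asset management or vulnerability scanning tool rather than a hand-written list, but the ordering logic is the same.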

Start with a policy that defines the legitimate use of AI and make sure it is published and understood, Barnett advises. “Involve your workforce in this process. Understand why they are already using AI – what task are they trying to automate or augment and what are the potential benefits for your organization? How can you continue to leverage these benefits whilst recognizing and mitigating the potential risks? From this, you can create a process to assess and approve, or decline, existing use cases.”

It’s also important to maintain your underlying IT systems and infrastructure, says Barnett. “At an organizational level, if you are developing something for public consumption, ensure it is secure by design. Have separate environments for development and testing to reduce the risk of compromise to your production systems and networks.”

The importance of an AI threat strategy

To counter the threat posed by AI in cybersecurity, businesses must adopt a “comprehensive strategy” that includes both technical and organizational measures, says Whitehead. “Technical defenses should leverage AI and machine learning (ML) to detect and respond to evolving threats in real-time, while organizational measures should foster a culture of security awareness and resilience against AI-enhanced threats.”

Businesses should engage in active threat intelligence sharing networks to benefit from collective defense strategies and insights into emerging threats, he says. “The development of ethical AI use guidelines within organizations can also play a crucial role in ensuring responsible use and deployment of AI technologies.”

In the long term, Middleton-Leal advises: “Automate your patching process: reducing the mean time to remediate systems reduces the window of opportunity around secondary applications, and it frees up time to spend on protecting critical applications.”
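Mean time to remediate (MTTR), the metric Middleton-Leal suggests driving down through automated patching, is straightforward to track. A minimal sketch, assuming hypothetical vulnerability records with detection and remediation dates:

```python
from datetime import datetime

# Illustrative mean-time-to-remediate (MTTR) calculation.
# The record structure and field names are assumptions for this example.

def mttr_days(records: list) -> float:
    """Average days from detection to remediation across closed findings."""
    deltas = [
        (datetime.fromisoformat(r["remediated"])
         - datetime.fromisoformat(r["detected"])).days
        for r in records
    ]
    return sum(deltas) / len(deltas)

findings = [
    {"detected": "2024-01-01", "remediated": "2024-01-11"},  # 10 days open
    {"detected": "2024-01-05", "remediated": "2024-01-09"},  # 4 days open
]

print(mttr_days(findings))  # 7.0
```

Tracking this number over time gives a concrete way to measure whether an automated patching process is actually shrinking the window of opportunity.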

Meanwhile, company policies will need to consider the AI influence. “Stricter rules and more training will need to be applied to prevent unethical use and the insider threat,” says Barnett. “Ensure that staff understand how to use the AI tools that are available to them. A higher level of education will need to be incorporated into training programs to develop competencies.”

While training your employees is key, it’s also important to communicate with your board about the risks, says Middleton-Leal. “Put these issues into context, so that your board is aware and knows how big a problem they pose. You may end up having to take action faster than you bargained for, but the impetus will be there to improve.”

In tandem, work on communication between teams, says Middleton-Leal. “Knowing what ‘good behavior’ looks like will be essential to AI-powered cyber security in the future, so you can eliminate risks fast. Getting ahead of these issues and knowing what’s coming up can help you keep that view of security up to date, as well as identifying what should be stopped for investigation.”

Kate O'Flaherty

Kate O'Flaherty is a freelance journalist with well over a decade's experience covering cyber security and privacy for publications including Wired, Forbes, the Guardian, the Observer, Infosecurity Magazine and the Times. Within cyber security and privacy, her specialist areas include critical national infrastructure security, cyber warfare, application security and regulation in the UK and the US amid increasing data collection by big tech firms such as Facebook and Google. You can follow Kate on Twitter.