‘They are able to move fast now’: AI is expanding attack surfaces – and hackers are looking to reap the same rewards as enterprises with the technology
Potent new malware strains, faster attack times, and the rise of shadow AI are causing havoc
Attack surfaces are expanding at a rapid pace thanks to enterprise AI adoption, according to research from Zscaler, and it’s placing huge strain on cybersecurity teams.
Findings from Zscaler ThreatLabz’ 2026 AI Security Report show enterprises now face a confluence of threats. The rise of ‘shadow AI’ combined with machine-speed intrusions and the use of AI among threat actors means time-to-compromise is plunging.
Deepen Desai, former CSO at Zscaler and head of security research at the ThreatLabz division, told ITPro the first of these issues is a natural byproduct of the industry’s rapid pivot to AI over the last three and a half years.
Employees have been experimenting with exciting new tools for some time now, often without considering the potential security risks associated with unauthorized AI solutions.
This creates a huge blind spot for enterprise security teams and raises the risk of disastrous data leakage. Research from Gartner in November 2025 projected that up to 40% of enterprises globally will experience a shadow AI-related breach by 2030, underlining the growing risks associated with this trend.
This is an issue that can be remedied by robust internal guardrails, however. It's the use of AI by threat actors that has alarm bells ringing for Desai and counterparts across the industry.
Hackers have already been observed using the technology to fine-tune phishing and vishing attacks, for example, but in recent months they’ve begun flocking to these tools to build and refine malware.
“It all started with phishing and vishing and their standard initial access attacks, where their goal is just to compromise a credential or an identity,” he told ITPro.
“But soon we also started noticing malware created using AI, and we are able to tell that because when we reverse those payloads, we're able to see the comments that a lot of these AI coding tools will add, which is very, very typical.”
Desai highlighted one recent incident observed by Zscaler in which AI-powered malware was connected to a Google Sheets document to support the attacker when executing commands.
“The malware that was deployed in this victim's environment would connect to a Google Sheet, which had two columns in it,” he explained.
“One column where the attacker is entering commands that this malware needs to execute on the victim environment, and the second column where the malware will update the results of what came out when it ran these commands.”
“Whether it was for data exfiltration, whether it is for downloading a new payload or giving that context for the attacker to perform future commands, it was literally being updated every few minutes,” Desai added.
Concerns about AI-powered malware have been gaining momentum over the last 18 months, with security experts warning hackers are increasingly relying on the technology to build potent new strains.
Research from Trend Micro in September 2025 found threat actors are “vibe coding” malware based on dissected threat intelligence reports, allowing them to reverse engineer particular strains and speed up attacks.
Similarly, just last week Google warned hackers have been abusing its Gemini AI models to build malware.
Speedier attacks are raising concerns
The use of AI in these instances is helping to speed up attacks, Desai told ITPro, posing huge challenges for security teams. Combine this with the fact that many of the AI systems used by enterprises are highly susceptible to compromise, and teams now face overlapping security considerations.
Analysis from the company found many enterprise AI systems “break almost immediately” when tested under adversarial conditions. The median time to first critical failure, for example, was just 16 minutes, and 90% of systems were compromised in under 90 minutes.
“They are able to move fast now because of the same efficiencies that we’re seeing on the production side,” Desai told ITPro, adding that security teams will likely find themselves fighting off AI-powered attacks with their own internal tools in the near future.
“You have to use AI to fight AI-driven attacks,” he said. “You need to apply AI at all stages of the attack to detect phishing, to detect malware, to detect exfiltration, to detect command and control activity.”
Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.