Cyber professionals call for a 'strategic pause' on AI adoption as teams are left scrambling to secure tools
Security professionals are scrambling to secure generative AI tools


More than a third of security leaders and practitioners admit that generative AI is moving faster than their teams can manage.
Almost half (48%) told penetration testing firm Cobalt that they'd like to have a 'strategic pause' to recalibrate their defenses against generative AI-driven threats - something they know they're not likely to get.
More than seven in ten (72%) cited generative AI-related attacks as their top IT risk, but a third still aren't conducting regular security assessments, including penetration testing, for their LLM deployments.
“Threat actors aren’t waiting around, and neither can security teams,” said Gunter Ollmann, CTO at Cobalt.
“Our research shows that while genAI is reshaping how we work, it’s also rewriting the rules of risk. The foundations of security must evolve in parallel, or we risk building tomorrow’s innovation on today’s outdated safeguards.”
Security leaders at C-suite and VP level are more concerned than practitioners about long-term generative AI threats such as adversarial attacks - an issue for 76%, compared with just 68% of security practitioners.
However, 45% of practitioners expressed concern about near-term operational risks such as inaccurate outputs, compared with only 36% of security leaders.
Security leaders are also more likely to consider changing how their team approaches cybersecurity defense strategies in light of the potential of generative AI-driven attacks, at 52% compared with 43% for practitioners.
Top concerns among all survey respondents included sensitive information disclosure (46%), model poisoning or theft (42%), inaccurate data (40%), and training data leakage (37%).
Meanwhile, half said they wanted more transparency from software suppliers about how they detect and prevent vulnerabilities, signaling a growing trust gap in the AI supply chain, the researchers said.
Many organizations lack the in-house expertise to adequately assess, prioritize, and remediate complex LLM-specific vulnerabilities.
This can lead to an over-reliance on third-party model providers or tool vendors for fixes - some of which may not prioritize these security issues as quickly or effectively as they should, particularly if the vulnerability lies within the foundation model itself.
LLM analysis uncovers worrying flaws
Analysis based on data collected during Cobalt pentests showed that while 69% of serious findings across all categories are resolved, the figure drops to just 21% for high-severity vulnerabilities found in LLM pentests.
This is a concern, researchers said, given that 32% of LLM pentest findings are serious - and that 21% is the lowest resolution rate across all test types the company conducts.
While the mean time to resolve (MTTR) for those serious LLM findings that are fixed is a rapid 19 days - the shortest MTTR across all pentest types - this is probably partly because organizations tend to prioritize quicker, and often simpler, fixes.
"Much like the rush to cloud adoption, genAI has exposed a fundamental gap between innovation and security readiness,” said Ollmann.
“Mature controls were not built for a world of LLMs. Security teams must shift from reactive audits to programmatic, proactive AI testing — and fast.”
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.