Cyber professionals call for a 'strategic pause' on AI adoption as teams scramble to secure tools
Security professionals are scrambling to secure generative AI tools


More than a third of security leaders and practitioners admit that generative AI is moving faster than their teams can manage.
Almost half (48%) told penetration testing firm Cobalt that they'd like to have a 'strategic pause' to recalibrate their defenses against generative AI-driven threats - something they know they're not likely to get.
More than seven-in-ten (72%) cited generative AI-related attacks as their top IT risk, but a third still aren't conducting regular security assessments, including penetration testing, for their LLM deployments.
“Threat actors aren’t waiting around, and neither can security teams,” said Gunter Ollmann, CTO at Cobalt.
“Our research shows that while genAI is reshaping how we work, it’s also rewriting the rules of risk. The foundations of security must evolve in parallel, or we risk building tomorrow’s innovation on today’s outdated safeguards.”
Security leaders at C-suite and VP level are more concerned than practitioners about long-term generative AI threats such as adversarial attacks - an issue for 76%, compared with just 68% of security practitioners.
However, 45% of practitioners expressed concern about near-term operational risks such as inaccurate outputs, compared with only 36% of security leaders.
Security leaders are also more likely than practitioners to be considering changes to their team's cybersecurity defense strategy in response to potential generative AI-driven attacks, at 52% versus 43%.
Top concerns among all survey respondents included sensitive information disclosure, cited by 46%, model poisoning or theft, a worry for 42%, inaccurate data, an issue for 40%, and training data leakage, cited by 37%.
Half of respondents also said they wanted more transparency from software suppliers about how they detect and prevent vulnerabilities - a sign, the researchers said, of a growing trust gap in the AI supply chain.
Many organizations lack the in-house expertise to adequately assess, prioritize, and remediate complex LLM-specific vulnerabilities.
This can lead to an over-reliance on third-party model providers or tool vendors for fixes - some of which may not prioritize these security issues as quickly or effectively as they should, particularly if the vulnerability lies within the foundation model itself.
LLM analysis uncovers worrying flaws
Analysis based on data collected during Cobalt pentests showed that while 69% of serious findings across all categories are resolved, this drops to just 21% of the high-severity vulnerabilities found in LLM pentests.
This is a concern, the researchers said: 32% of LLM pentest findings are serious, and that 21% figure is the lowest resolution rate across all the test types the company conducts.
While the mean time to resolve (MTTR) for the serious LLM findings that do get fixed is a rapid 19 days - the shortest MTTR across all pentest types - this is probably partly because organizations tend to prioritize quicker, often simpler fixes.
"Much like the rush to cloud adoption, genAI has exposed a fundamental gap between innovation and security readiness,” said Ollmann.
“Mature controls were not built for a world of LLMs. Security teams must shift from reactive audits to programmatic, proactive AI testing — and fast.”
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.