Cyber professionals call for a 'strategic pause' on AI adoption as teams are left scrambling to secure tools
Security professionals are scrambling to secure generative AI tools


More than a third of security leaders and practitioners admit that generative AI is moving faster than their teams can manage.
Almost half (48%) told penetration testing firm Cobalt that they'd like to have a 'strategic pause' to recalibrate their defenses against generative AI-driven threats - something they know they're not likely to get.
More than seven-in-ten (72%) cited generative AI-related attacks as their top IT risk, but a third still aren't conducting regular security assessments, including penetration testing, for their LLM deployments.
“Threat actors aren’t waiting around, and neither can security teams,” said Gunter Ollmann, CTO at Cobalt.
“Our research shows that while genAI is reshaping how we work, it’s also rewriting the rules of risk. The foundations of security must evolve in parallel, or we risk building tomorrow’s innovation on today’s outdated safeguards.”
Security leaders at C-suite and VP level are more concerned than practitioners about long-term generative AI threats such as adversarial attacks - an issue for 76%, compared with just 68% of security practitioners.
However, 45% of practitioners expressed concern about near-term operational risks such as inaccurate outputs, compared with only 36% of security leaders.
Security leaders are also more likely to consider changing how their team approaches cybersecurity defense strategies in light of the potential of generative AI-driven attacks, at 52% compared with 43% for practitioners.
Top concerns among all survey respondents included sensitive information disclosure (46%), model poisoning or theft (42%), inaccurate data (40%), and training data leakage (37%).
Similarly, half said they wanted more transparency from software suppliers about how they detect and prevent vulnerabilities, signaling a growing trust gap in the AI supply chain, the researchers said.
Many organizations lack the in-house expertise to adequately assess, prioritize, and remediate complex LLM-specific vulnerabilities.
This can lead to an over-reliance on third-party model providers or tool vendors for fixes - some of which may not prioritize these security issues as quickly or effectively as they should, particularly if the vulnerability lies within the foundational model itself.
LLM analysis uncovers worrying flaws
Analysis based on data collected during Cobalt pentests showed that while 69% of serious findings across all categories are resolved, that figure drops to just 21% for the high-severity vulnerabilities found in LLM pentests.
This is a concern, researchers said, given that 32% of LLM pentest findings are serious; it is also the lowest resolution rate across all the test types the company conducts.
The mean time to resolve (MTTR) for those serious LLM findings that do get fixed is a rapid 19 days, the shortest MTTR across all pentest types, but this is probably partly because organizations tend to prioritize quicker and often simpler fixes.
"Much like the rush to cloud adoption, genAI has exposed a fundamental gap between innovation and security readiness,” said Ollmann.
“Mature controls were not built for a world of LLMs. Security teams must shift from reactive audits to programmatic, proactive AI testing — and fast.”
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.