AI challenges mean it's time to shine for cyber professionals – but they need a helping hand
Keep your security pros close – you never know when you'll need them to solve an AI-related crisis
If RSAC 2026 made one thing clear, it’s that enterprises are facing a monumental task with AI security risks. Cyber pros, it seems, will once again be putting out fires and left picking up the pieces.
Across several keynotes, the scale of the security challenges faced by AI-focused enterprises was laid bare. The risks associated with the technology are multi-fold, as well, and not limited simply to threat actor adoption.
Hype-fuelled, rushed adoption projects are causing headaches for security teams, fragmented integration across enterprise departments is leaving critical blind spots wide open to exploitation, and the spectre of “shadow AI” is an equally pressing challenge.
AI-related threats have grown in both frequency and scale over the last two years, research shows. Analysis from IBM in August 2025, for example, found 20% of organisations had suffered a breach due to security incidents involving shadow AI alone.
Meanwhile, security firms and AI providers have issued repeated alerts over the increased use of the technology by hackers, helping them to create more convincing phishing lures or to dissect threat intel reports to build potent malware strains.
This maelstrom of overlapping threats, risk factors, and considerations is already a nightmare for security teams, and it’s going to get worse. But a key talking point at RSAC 2026 centered around the fact that this is the time to shine for cybersecurity professionals.
Indeed, they’ve never been more important and will form the vanguard of safe AI adoption in years to come. RSAC executive chairman Hugh Thompson told attendees that there’s a “certain dynamism” that exists in the industry in 2026, issuing a call to action for security professionals to play a more proactive role in shaping adoption and governance.
“We cannot be passive observers on this AI journey,” Thompson said in his opening keynote. “AI and cybersecurity are so deeply intertwined, we can’t let AI be something that happens to us. Instead, it’s our responsibility as cybersecurity professionals to make AI work for us.”
Former CISA chief and newly-appointed RSAC CEO Jen Easterly echoed Thompson’s comments, highlighting the resilience of a community that has evolved multiple times over the past two decades.
From the early days of the web to mass cloud adoption, cybersecurity professionals have stepped up when they’re needed most.
“This is our community, and we should draw strength from that, because together, we are stronger than any threat,” she said.
“Together, we're building trust in a world that desperately needs trust, a world increasingly powered by the most consequential technology of our lifetime, moving faster and faster than ever.”
AI is changing the face of cyber
Cybersecurity professionals are indeed making AI work for them. A survey from ISC2 in mid-2025 found roughly one-third (30%) of respondents had already integrated AI tools into their daily workflows.
Meanwhile, around 42% told ISC2 they were “actively considering” the deployment of AI tools in security operations.
As with other professions such as software development or customer service, the productivity and efficiency gains delivered by AI are tantalizing for cyber teams contending with a seemingly never-ending torrent of threats.
Analysis from Sapio Research last year found more than half (56%) of security teams had reported productivity improvements in the wake of AI adoption.
Cyber pros reported similar gains in threat detection and response: nearly half (46%) said the technology is helping improve incident response, while 42% said it’s speeding up threat intel gathering.
Talk to your cyber pros
The rush to adopt AI has been understandable – there’s big money involved and companies are stretching dollars as far as possible. But one of the key issues highlighted at RSAC 2026 was that these adoption projects have been scattered and fragmented – and it’s not helping security teams.
Tenable co-CEO Stephen Vintz summed up the situation, noting that disparate teams are all working toward their own goals with the technology, resulting in a fragmented ecosystem where threats slip through the cracks.
“AI ownership is fractured, very fractured within the enterprise,” he said. “The data science team owns the models, the ML Ops team owns the production pipeline, it owns deployment. The product team owns the integration of these capabilities into the offering, legal owns the compliance.”
Vintz noted that security pros are “at the end of the line” and left picking up the pieces when things go wrong. This “responsibility gap”, as he described it, is doing no one any favors and highlights the need for closer cross-functional collaboration and ownership.
Simply put, it’s clear that business leaders, department heads, and individual teams need to do better when communicating with security teams – and cyber pros themselves need to be more proactive in engaging with other domains.

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.