The key risks security teams face in 2026

From AI-related flaws to supply chain risks, cyber professionals now contend with overlapping challenges

(Image credit: Getty Images)

Cybersecurity teams worldwide face an increasingly broad range of risks, with malicious actors ramping up operations.

In a panel session at RSAC Conference 2026, Ed Skoudis, president of the SANS Technology Institute, led attendees through an array of key issues encountered by frontline security practitioners in 2026.

From AI-related risks to supply chain security and operational complexity, teams now contend with a confluence of overlapping challenges, panelists noted.

Panelists included:

  • Heather Barnhart, head of faculty and senior forensic expert at SANS Institute and Cellebrite
  • Joshua Wright, faculty fellow at SANS Institute and senior technical director at Counter Hack Innovations
  • Robert Lee, CEO and founder of OT cybersecurity firm Dragos and SANS Institute fellow
  • Rob T Lee, chief AI officer (CAIO) and chief of research at SANS Institute

Dual implications of AI

The impact of AI was a recurring talking point throughout the session, as with RSAC 2026 more broadly. Attendees heard that while AI offers huge opportunities for security practitioners, it also creates new risks.

Wright specifically highlighted a looming wave of AI-related software zero-days, driven by the integration of these solutions across enterprise technology stacks. This is creating a dynamic new frontier for both security teams and bad actors.

Indeed, hackers and other malicious actors are now actively “industrializing” the use of AI to target potential weak spots in software security and pounce on flaws. This means enterprises need to re-evaluate how they respond to critical vulnerabilities.

“We need to start measuring [vulnerabilities] in how many tokens it requires for an AI model to find a previously unknown vulnerability,” he said.

“I think we are quickly headed toward a time period where we’re going to see not maybe one or two, or maybe three, zero days in a week, but a week of hundreds of zero day[s],” Wright commented.

These will be designed by AI, he added, creating opportunities for bad actors to target organizations en masse and cause huge disruption.

“I don’t think we’re ready for this,” he said.

The plus side for security professionals, panel members claimed, is that AI will assist in countering this new wave of potential risks. Wright said the technology will offer enterprises a chance to “resolve this patching dilemma” and keep pace with the scale of malicious activity in coming years.

Operational technology risks

Another key risk area, highlighted by Robert Lee, is operational technology (OT), which is now a leading target for state-backed groups and malicious actors.

Traditional motives, such as financial gain, are still present but aren’t the only incentives. The critical nature of these systems and their use in areas such as national infrastructure, healthcare, and manufacturing, he said, make them appealing targets for actors seeking simply to cause disruption.

Risks are rising on this front, research shows. Analysis from Bridewell found 95% of critical national infrastructure (CNI) operators faced some form of cyber incident in 2025, for example.

“We see some state actors and non-state actors, they’re very opportunistic, they’re going to hit a manufacturing facility and wipe what they can and cause chaos,” he said.

“Some are doing it for money, some are doing it for influence. There are multiple state actors that are planning … how to take down major portions of a country.”

Taking advantage of AI

With this growing array of potential dangers, security teams are now forced to adapt rapidly to compensate for the changing tactics of malicious actors.

There’s room for AI to help support and streamline processes for teams, particularly in incident response, Barnhart noted. Enterprises and individual practitioners that capitalize on the benefits of the technology will have a key advantage in years to come.

“AI is not going to take your job. However, if you are in digital forensics or incident response and you learn to use AI to make yourself more powerful, you will steal that person’s job,” she said.

Ross Kelly
News and Analysis Editor

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.

He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.

For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.