Systems are deterministic, people are probabilistic – AI is both, and that's a headache for cyber teams

AI combines both the risks associated with IT systems and the people using them, creating headaches for practitioners

RSAC 2026 Conference branding pictured on a banner at the Moscone Center in San Francisco, USA.
(Image credit: RSAC™ 2026 Conference)

Security practitioners have traditionally looked at cyber risk through a “deterministic lens”, but with AI the rules have changed and that will require a tactical rethink.

That’s according to Dean Sysman, executive chairman and co-founder of Axonius, who told attendees at the RSAC 2026 Conference that enterprise security teams need to shake up practices for an era of AI-related risks.

The technology is accelerating both offensive and defensive cyber operations globally, he noted, with bad actors now capitalizing on flaws in minutes, rather than weeks.

Similarly, insecure AI systems create more entry points for bad actors to exploit, and enterprises are now awash with agents that have deep access to internal datasets.

With this in mind, treating AI like humans will be critical for assessing and managing risk, Sysman noted.

There is one caveat. Currently, security teams assess deterministic behavior for IT systems and probabilistic behavior for humans. AI, however, behaves in both deterministic and probabilistic ways.

That dual nature means managing AI risk will demand both approaches at once, splitting security teams' attention.

“We’ve always looked at cybersecurity through two different lenses,” he said.

“We have the deterministic lens, where we look at systems and compute and storage and memory, and those things behave in a way that we can determine, we can forecast, we can understand and react to or respond to in a deterministic way as well.

“And yet, we have people who are probabilistic. Nobody ever knows what somebody is going to do, or how they’re going to ask, or what kind of risk they’re going to introduce.”

Sysman's voice adds to a growing chorus of security and IT professionals calling for AI systems – and agents in particular – to be treated as human in the context of security.

A host of tech industry figures have urged enterprises to view agents as “digital co-workers” and thereby apply the same security standards. Efforts are being made to improve identity management and observability to keep tabs on agents working away in the background.

Sysman told attendees that visibility isn’t enough, though, saying they need to shift their attention toward “actionability”.

“We need to start by seeing what we have, what exists in our environment,” he explained. “Then there’s this understanding of the organizational context of those things. We can’t fix everything and we don’t have unlimited resources, so there is always an element of prioritizing.”

Ultimately, actionability places the emphasis on ownership and accountability for AI risk in the enterprise, a key talking point during an earlier talk by Tenable co-CEO Stephen Vintz.

“The biggest question that is the hardest one to answer today [is] who owns this? Who will make this decision? Who needs to be accountable for the action and the decision that needs to be taken?”


Ross Kelly
News and Analysis Editor
