Systems are deterministic, people are probabilistic – AI is both, and that's a headache for cyber teams
AI combines both the risks associated with IT systems and the people using them, creating headaches for practitioners
Security practitioners have traditionally looked at cyber risk through a “deterministic lens”, but with AI the rules have changed and that will require a tactical rethink.
That’s according to Dean Sysman, executive chairman and co-founder of Axonius, who told attendees at the RSAC 2026 Conference that enterprise security teams need to shake up practices for an era of AI-related risks.
The technology is accelerating both offensive and defensive cyber operations globally, he noted, with bad actors now capitalizing on flaws in minutes, rather than weeks.
Similarly, insecure AI systems mean there are more entry points for bad actors to pounce on, and enterprises are now awash with agents that have deep access to internal datasets.
With this in mind, treating AI like humans will be critical for assessing and managing risk, Sysman noted.
There is one caveat. Currently, security teams expect deterministic behavior from IT systems and probabilistic behavior from humans. AI, however, behaves in both a probabilistic and a deterministic way, meaning risk management will have to split its attention between the two.
“We’ve always looked at cybersecurity through two different lenses,” he said.
“We have the deterministic lens, where we look at systems and compute and storage and memory, and those things behave in a way that we can determine, we can forecast, we can understand and react to or respond to in a deterministic way as well.
“And yet, we have people who are probabilistic. Nobody ever knows what somebody is going to do, or how they’re going to ask, or what kind of risk they’re going to introduce.”
Sysman's voice adds to a growing chorus of security and IT professionals calling for AI systems – and agents in particular – to be treated as humans in the context of security.
A host of tech industry figures have urged enterprises to view agents as “digital co-workers” and apply the same security standards accordingly. Efforts are also underway to improve identity management and observability to keep tabs on agents working away in the background.
Sysman told attendees that visibility isn’t enough, though, saying they need to shift their attention toward “actionability”.
“We need to start by seeing what we have, what exists in our environment,” he explained. “Then there’s this understanding of the organizational context of those things. We can’t fix everything and we don’t have unlimited resources, so there is always an element of prioritizing.”
Ultimately, actionability places the emphasis on ownership and accountability of AI risk in the enterprise - a key talking point during an earlier talk by Tenable co-CEO Stephen Vintz.
“The biggest question that is the hardest one to answer today [is] who owns this? Who will make this decision? Who needs to be accountable for the action and the decision that needs to be taken?”
Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.