Tenable co-CEO Stephen Vintz says enterprises need to get serious about tackling the AI “responsibility gap”
The Tenable chief wants a serious conversation on AI ownership and accountability
Rapid AI adoption is creating an array of new risks for enterprises globally, with fragmented policies on responsibility and security exacerbating the issue.
That’s according to Tenable co-CEO, Stephen Vintz, who believes a “responsibility gap” over AI governance has emerged over the last three years and is now one of the most pressing challenges facing organizations.
Speaking at the RSAC 2026 Conference in San Francisco, Vintz told attendees that executives and boards remain unsure how to address AI-related risks, despite heavy investment in the technology.
The hype of the early generative AI “boom” and the speed of adoption have been key factors, according to Vintz, with enterprises rushing into adoption projects on the assumption that long-standing governance practices will still be up to scratch.
“There is a fundamental mismatch between the exponential speed of AI adoption and the linear speed of traditional corporate governance, and things will only move faster and things will only become more complex,” he told attendees.
“Roughly 90% of all organizations have adopted AI in some form, and yet half of those have already experienced some sort of cyber incident,” Vintz added. “That’s a staggering number for a technology that’s still in its infancy.”
Establishing where responsibility for AI risk lies is challenging because different departments are all pursuing their own goals with the technology.
Many organizations have AI-related projects spanning multiple divisions, segments, and individual teams. This results in a fragmented approach that muddles ownership, accountability, and – ultimately – security.
“The responsibility gap exists in part because AI ownership is fractured, very fractured within the enterprise,” he said.
“One of the key challenges is that AI systems are meaningfully more complex to deploy than traditional SaaS apps, not just technically, but also organizationally.
“The data science team owns the models, the ML Ops team owns the production pipeline, it owns deployment. The product team owns the integration of these capabilities into the offering, legal owns the compliance,” he added.
From a cybersecurity perspective, teams in this domain are then left “at the end of the line”, battling to protect systems they neither designed nor fully control.
Real world risks
AI-related security incidents are not only growing in frequency, but also in scale and intensity, according to Vintz, who laid out a series of real-world examples.
In one incident, a “major financial institution” rolled out an internal AI assistant aimed at helping workers more efficiently summarize documents and speed up data analysis.
Unbeknown to the company and thousands of users, a misconfiguration meant that the agent had access to an array of sensitive information, including financial models, strategic documents, and internal communications.
“As a result, employees, thousands of them, could query it and pull information out of it that others were not meant to see,” Vintz explained.
“The exposure, well it wasn’t caused by a hacker, it was a simple misconfiguration – and who set it up? The security team, the very group responsible for protecting the organization.”
This highlights the importance of cross-functional development and risk assessment when building and deploying AI systems, Vintz said.
Each stakeholder has a crucial role to play in the process, yet the fractured approach means potentially disastrous issues or hypothetical failures are neither considered nor tackled.
Global risk
Ownership and accountability of AI risk isn’t just an organizational challenge, Vintz said; it’s also a high-level regulatory discussion that needs to be addressed head-on. Tools that could be employed include legislative changes, modest tweaks to existing industry frameworks, and a re-evaluation of traditional governance practices.
“To close the responsibility gap, we have to get serious about how we manage this technology and effective risk management, and governance needs to happen in both the public sector and private sector,” he said.
With the former, Vintz said regulators will play a “massive role in setting the tone and tempo for safety”.
From an industry perspective, updates to existing security frameworks will play a key role in helping manage AI risk and setting new standards for organizations.
Vintz specifically highlighted the NIST cybersecurity framework as a “great place to start” as it’s “widely recognized as the standard for managing cyber risk”.
“It’s flexible enough to be adaptive for AI-enabled threats,” he said. Similarly, at an international level, ISO cybersecurity standards are already being modified slightly to “bridge the gap between security and AI security”.
Ultimately, responsibility for governance will rest on the shoulders of the private sector and the companies building these tools, Vintz noted. This means organizations need to start thinking about risk proactively – and that will require both significant investment and a change of tack from traditional approaches.
“For over 30 years our industry has been built on firefighting, detecting … and responding at human speed,” he said. Vintz added that this made sense when risks “moved at human speed”, but that’s all changing.
“It was the right way to operate, but it fails in the AI era where everything moves at machine speed,” he said. “Our spending reflects this outdated reality. Today, over 90% of all cybersecurity spend is in detection and response.”
Vintz specifically highlighted “exposure management” as an approach designed to tackle risks head-on.
This involves a concerted focus on “unified visibility, insight, and action” which helps enterprises understand risk and then proactively reduce it. This is crucial, he noted, as risk “rarely appears in isolation”.
“Exposure management also provides the necessary intelligence layer to orchestrate the right mix of humans and AI to get stuff done, because that's really important,” Vintz said.
Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.