AI is “forcing a fundamental shift” in data privacy and governance
Organizations are working to define and establish the governance structures they need to manage AI responsibly at scale – and budgets are going up
Enterprises are shaking up their approach to data privacy and governance, new research shows, largely due to added risk factors created by AI adoption.
According to Cisco's 2026 Data and Privacy Benchmark Study, nearly all companies are expanding privacy programs and governance frameworks to protect their data.
AI is the main driver for 90% of them, with 93% saying they plan further investment to keep pace with the complexity of AI systems and the expectations of customers and regulators.
The survey found that 38% spent at least $5 million on their privacy programs in the past year – marking a dramatic increase from just 14% who spent over that threshold in 2024.
Notably, these programs appear to be working well. An overwhelming 96% of organizations reported that robust privacy frameworks were helping unlock AI agility and innovation, and 95% said privacy was essential for building customer trust in AI-powered services.
One interesting change spotted by the researchers is that trust is no longer just a question of meeting regulatory requirements.
Data governance is now seen as a strategic business enabler, with 99% of organizations reporting at least one tangible benefit from their privacy initiatives, such as enhanced agility, innovation, and greater customer loyalty.
Almost half said that clear communication about how data is collected and used is the most effective way to build customer confidence.
As a result, governance is evolving – although many organizations are still working to define and establish the structures they need to manage AI responsibly.
While three-quarters report having a dedicated AI governance body in place, only 12% describe it as mature. Meanwhile, 65% of organizations struggle to access relevant, high-quality data efficiently.
"AI is forcing a fundamental shift in the data landscape, calling for holistic governance of all data – both personal and non-personal,” said Jen Yokoyama, senior vice president, legal innovation and strategy, at Cisco.
“Organizations must deeply understand and structure their data to ensure every automated decision is explainable. It’s not just for compliance, but a necessary scaling engine for AI innovation.”
Data requirements are causing headaches
While 72% of respondents were generally positive about data privacy laws, there is a growing push to streamline and update data requirements, Cisco found.
Just over eight in ten organizations surveyed face heightened demand for data localization and growing global data complexity – and 85% said this adds cost, complexity, and risk to cross-border service delivery.
Similarly, 77% report these requirements limit their ability to offer seamless 24/7 service across markets.
On top of this, the assumption that locally stored data is inherently more secure is gradually eroding, with the share of respondents holding that view falling from 90% in 2025 to 86% in 2026.
“To capture the potential of AI, organizations (83%) are advocating for a shift toward harmonized international standards,” said Harvey Jang, Cisco vice president and chief privacy officer.
“They recognize that global consistency is an economic necessity to ensure data can flow securely while maintaining the high standards of protection required for trust.”
Cisco said enterprises should invest in robust data infrastructure, prioritize transparency, and embed security and privacy throughout AI initiatives.
Elsewhere, they should make sure they're making informed decisions about data localization, establish strong AI governance, and kit out their teams with comprehensive training and safeguards.
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.
