Tech leaders worry AI innovation is outpacing governance
The pace of AI innovation is making it increasingly difficult to handle issues of safety and responsibility


The rapid growth of AI is outpacing effective governance, researchers have warned, with business leaders desperate for more clarity on regulation.
NTT Data’s Responsibility Gap Survey of C-suite executives concludes that there is an urgent need for stronger AI leadership that balances innovation with responsibility.
Eight in ten respondents said a lack of clear policies is preventing them from scaling generative AI initiatives, and that unclear government regulation is hindering AI investment and implementation, delaying adoption.
And while nine in ten executives said they worry about AI security risks, only a quarter of CISOs said their organization has a robust governance framework in place.
"The enthusiasm for AI is undeniable, but our findings show that innovation without responsibility is a risk multiplier," said NTT Data CEO Abhijit Dubey.
"Organizations need leadership-driven AI governance strategies to close this gap — before progress stalls and trust erodes."
The C-suite is split over the appropriate balance between safety and innovation: a third of executives believe responsibility matters more than innovation, a third think the opposite, and a third rate them equally.
There are also concerns about sustainability, with three-quarters of leaders saying that their AI ambitions conflict with corporate sustainability goals, forcing them to rethink energy-intensive AI solutions.
Additionally, two-thirds of executives say their employees lack the skills to work effectively with AI, while 72% admit they don't have an AI policy in place to guide responsible use.
NTT Data recommends introducing Responsible by Design principles: building AI responsibly from the ground up and integrating security, compliance, and transparency into development from day one.
Leaders need a systematic approach to ethical AI standards that goes beyond legal requirements, and organizations should upskill employees to work alongside AI, ensuring teams understand its risks and opportunities.
The report also calls for global collaboration on AI policy, with businesses, regulators, and industry leaders coming together to create clearer, actionable governance frameworks and shared international AI standards.
"AI’s trajectory is clear — its impact will only grow. But without decisive leadership, we risk a future where innovation outpaces responsibility, creating security gaps, ethical blind spots, and missed opportunities," said Dubey.
"The business community must act now. By embedding responsibility into AI’s foundation—through design, governance, workforce readiness, and ethical frameworks—we unlock AI’s full potential while ensuring it serves businesses, employees, and society at large equally."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.