Tech leaders worry AI innovation is outpacing governance
The pace of AI innovation is making it increasingly difficult to address safety and responsibility concerns
The rapid growth of AI is outpacing effective governance, researchers have warned, with business leaders desperate for more clarity on regulation.
NTT Data’s Responsibility Gap Survey of C-suite executives concludes that there's an urgent need for stronger AI leadership, balancing innovation with responsibility.
Eight in ten said a lack of clear policies is preventing them from scaling generative AI initiatives, and that unclear government regulations are hindering AI investment and implementation, delaying adoption.
And while nine in ten executives said they worry about AI security risks, only a quarter of CISOs said their organization has a robust governance framework in place.
"The enthusiasm for AI is undeniable, but our findings show that innovation without responsibility is a risk multiplier," said NTT Data CEO Abhijit Dubey.
"Organizations need leadership-driven AI governance strategies to close this gap — before progress stalls and trust erodes."
C-suite executives are sharply split on the appropriate balance between safety and innovation: one-third believe responsibility matters more than innovation, one-third think the opposite, and one-third rate them equally.
There are also concerns about sustainability, with three-quarters of leaders saying that their AI ambitions conflict with corporate sustainability goals, forcing them to rethink energy-intensive AI solutions.
Additionally, two-thirds of executives say their employees lack the skills to work effectively with AI, while 72% admit they don't have an AI policy in place to guide responsible use.
NTT Data recommends introducing Responsible by Design principles: building AI responsibly from the ground up and integrating security, compliance, and transparency into development from day one.
Leaders need a systematic approach to ethical AI standards, going beyond legal requirements, and organizations should upskill employees to work alongside AI and ensure teams understand AI’s risks and opportunities.
Meanwhile, there needs to be global collaboration on AI policy, with businesses, regulators, and industry leaders coming together to create clearer, actionable AI governance frameworks and establish global AI standards.
"AI’s trajectory is clear — its impact will only grow. But without decisive leadership, we risk a future where innovation outpaces responsibility, creating security gaps, ethical blind spots, and missed opportunities," said Dubey.
"The business community must act now. By embedding responsibility into AI’s foundation—through design, governance, workforce readiness, and ethical frameworks—we unlock AI’s full potential while ensuring it serves businesses, employees, and society at large equally."
Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.