Shadow AI is creeping its way into software development – more than half of developers admit to using unauthorized AI tools at work, and it’s putting companies at risk
Enterprises need to create smart AI usage policies that balance the benefits and risks
With software developers increasingly flocking to AI tools to support daily activities, new research suggests a concerning proportion are using unauthorized solutions.
Findings from Harness’ State of Software Delivery Report show that more than half (52%) of developers don’t use IT-approved AI tools, raising significant compliance and intellectual property concerns.
“Perhaps the most alarming observation was around the use of company-approved coding tools – or lack thereof,” the report states.
“The unauthorized adoption of AI codegen tools creates significant shadow IT challenges that extend far beyond immediate security concerns.”
Shadow AI is a serious cause for concern for teams, the report added, with developers potentially exposing sensitive code snippets to third-party AI services without proper governance.
“Ultimately, they can’t track the origin of AI-generated code, nor can they ensure consistent security standards across teams,” Harness said.
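Provenance tracking of this kind needn't require heavyweight tooling. As a purely illustrative sketch (none of this comes from the Harness report, and the "AI-Assisted:" trailer is a hypothetical convention), a team could require every commit message to declare whether AI assistance was used, enforced with a standard git commit-msg hook:

```python
#!/usr/bin/env python3
# Hypothetical commit-msg hook: one way a team might track the origin of
# AI-generated code. The "AI-Assisted:" trailer is an illustrative
# convention, not something prescribed by the Harness report.
import sys

TRAILER = "AI-Assisted:"  # e.g. "AI-Assisted: yes (copilot)" or "AI-Assisted: no"

def main(msg_path: str) -> int:
    with open(msg_path, encoding="utf-8") as f:
        message = f.read()
    # Require an explicit provenance trailer on every commit so that
    # reviewers and audit tooling can later filter AI-touched changes.
    if not any(line.startswith(TRAILER) for line in message.splitlines()):
        print(f"Commit rejected: add an '{TRAILER}' trailer to the message.",
              file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```

Saved as .git/hooks/commit-msg and made executable, the check rejects any commit that doesn't declare its provenance; a real rollout would pair it with server-side enforcement so the hook can't simply be skipped.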
Software developers aren’t the only ones flocking to shadow AI
The rise of shadow AI has become a recurring talking point over the last two years as enterprises globally flock to the various AI tools available on the market.
In its Chasing Shadows report, Software AG found that 75% of knowledge workers are already using AI, with this figure set to rise to 90% in the near future, and that more than 50% of this group use personal or non-company-issued tools when doing so.
Another study by customer service platform Zendesk found year-on-year rises of as much as 250% in the use of unsanctioned AI tools in certain industries.
The financial services sector was the worst affected, with a 250% spike compared to 2023 levels, while the healthcare (230%) and manufacturing (233%) industries also exhibited very high levels of shadow AI use.
Developing robust AI usage policies will be integral to ensuring this growing reliance on unvetted AI tools does not expose your enterprise to unnecessary risks.
Harness’ report listed the critical gaps identified by software engineering leaders in their organization’s AI coding tool policies.
Three-fifths of engineering leaders said companies need policies prescribing the processes for assessing code for vulnerabilities or errors, with 58% stating they need to outline specific use cases where AI is safe or unsafe.
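To make the first of those gaps concrete, here is a minimal, hypothetical sketch of the kind of automated gate such a policy might prescribe: a script that scans a patch for secret-like strings before any of it is shared with a third-party AI service. The regexes are illustrative only; a real policy would lean on dedicated secret scanners and vulnerability analysis tools.

```python
import re
import sys

# Illustrative patterns only; production policies would rely on dedicated
# secret-scanning tools with far richer rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private keys
    re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{12,}"),
]

def find_secrets(text: str) -> list[str]:
    """Return the patterns that match anywhere in the given text."""
    return [p.pattern for p in SECRET_PATTERNS if p.search(text)]

if __name__ == "__main__":
    # Usage: git diff | python check_patch.py   (script name is hypothetical)
    hits = find_secrets(sys.stdin.read())
    if hits:
        print("Blocked: patch matches secret-like patterns:", *hits, sep="\n  ")
        sys.exit(1)
```

Wired into a pre-submission step, a check like this at least stops obviously sensitive snippets from leaving the organization, which speaks directly to the exposure risk the Harness report describes.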
Bharat Mistry, field CTO at Trend Micro, told ITPro that the policies outlined in the Harness report were all wise, but highlighted the importance of training when trying to shape employee behaviour and foster responsible use of personal AI systems.
“I agree with the policies given above, however for me it begins with the human aspect. By investing in comprehensive training and awareness programs, businesses can empower their employees to use AI responsibly, identify and mitigate risks and contribute to the development of ethical and effective AI solutions,” he argued.
“This proactive approach not only enhances the organization’s AI capabilities but also builds a culture of trust and accountability around AI technologies.”
Speaking to ITPro, Steve Ponting, director at Software AG, echoed Mistry’s comments, noting that training will be essential in mitigating the risks associated with employees using their own AI tools.
“Workers have been clear: they will use AI whether it’s sanctioned or not. This means that businesses could struggle to manage AI applications, leading to cybersecurity risks, skills gaps, and inaccurate work,” he explained.
“Businesses must have a plan in place to reduce risk, build skills and plan for AI’s inclusion in daily work. If people are determined to use their own AI, training is vital in this regard. Better training would make 46% of employees use AI more, but crucially, they would use it effectively and responsibly.”

Solomon Klappholz is a former staff writer for ITPro and ChannelPro. He has experience writing about the technologies that facilitate industrial manufacturing, which led to him developing a particular interest in cybersecurity, IT regulation, industrial infrastructure applications, and machine learning.