Shadow AI is creeping its way into software development – more than half of developers admit to using unauthorized AI tools at work, and it’s putting companies at risk
Enterprises need to create smart AI usage policies that balance the benefits and risks
With software developers increasingly flocking to AI tools to support daily activities, new research suggests a concerning portion are using unauthorized solutions.
Findings from Harness’ State of Software Delivery Report show that more than half (52%) of developers don’t use IT-approved tools, raising significant compliance and intellectual property concerns.
“Perhaps the most alarming observation was around the use of company-approved coding tools - or lack thereof,” the report states.
“The unauthorized adoption of AI codegen tools creates significant shadow IT challenges that extend far beyond immediate security concerns.”
Shadow AI is a serious cause for concern for teams, the report added, with developers potentially exposing sensitive code snippets to third-party AI services without proper governance.
“Ultimately, they can’t track the origin of AI-generated code, nor can they ensure consistent security standards across teams,” Harness said.
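The report doesn’t prescribe a fix for the provenance problem, but one illustrative way a team could start tracking the origin of AI-generated code is to require a Git commit trailer on AI-assisted changes and have CI flag commits that lack one. The sketch below is a minimal example only; the “AI-Assisted” trailer name and the policy around it are invented for illustration, not a standard or anything Harness recommends.

```python
# Hypothetical CI check (illustrative only): list commits in the pushed range
# that carry no "AI-Assisted" trailer, so reviewers know which changes still
# need their provenance declared. The trailer name is an invented convention.
import subprocess

def commits_missing_trailer(rev_range: str = "origin/main..HEAD") -> list[str]:
    # %H = commit hash, %x00 = NUL separator, %(trailers:...) = trailer value
    fmt = "%H%x00%(trailers:key=AI-Assisted,valueonly=true)"
    out = subprocess.run(
        ["git", "log", f"--format={fmt}", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    missing = []
    for line in out.splitlines():
        sha, _, trailer = line.partition("\x00")
        if sha and not trailer.strip():
            missing.append(sha)
    return missing

if __name__ == "__main__":
    for sha in commits_missing_trailer():
        print(f"commit {sha} does not declare AI tool usage (AI-Assisted trailer)")
```

Under this convention, a commit made with an approved tool would carry a line such as `AI-Assisted: github-copilot` in its message, giving reviewers and audit tooling a consistent signal to query later.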
Software developers aren’t the only ones flocking to shadow AI
The rise of shadow AI has become a recurring talking point over the last two years as enterprises around the world adopt the various AI tools available on the market.
In its Chasing Shadows report, Software AG found that 75% of knowledge workers already use AI, a figure set to rise to 90% in the near future, and that more than half of this group rely on personal or non-company-issued tools to do so.
Another study, by customer service platform Zendesk, recorded rises of as much as 250% year on year in the use of unsanctioned AI tools in certain industries.
The financial services sector was the worst affected, with a 250% spike over 2023 levels, while the healthcare (230%) and manufacturing (233%) industries also exhibited very high levels of shadow AI use.
Developing robust AI usage policies will be integral to ensuring this growing reliance on unvetted AI tools does not expose your enterprise to unnecessary risks.
Harness’ report listed the critical gaps software engineering leaders identified in their organizations’ AI coding tool policies.
Three-fifths of engineering leaders said companies need policies prescribing the processes for assessing code for vulnerabilities or errors, with 58% stating they need to outline specific use cases where AI is safe or unsafe.
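Neither finding comes with an enforcement mechanism, so the following sketch is purely illustrative: one way to make such a policy actionable is to encode it as machine-readable data that CI tooling checks on every AI-assisted change. Every tool name, path, and rule below is an invented example, not a recommendation from the Harness report.

```python
# Illustrative only: an AI coding policy expressed as data, plus a check a CI
# job could run against an AI-assisted pull request. All values are examples.
AI_USAGE_POLICY = {
    "approved_tools": ["github-copilot", "internal-codegen"],  # hypothetical names
    "restricted_paths": ["auth/", "crypto/", "billing/"],      # no AI-generated code here
    "require_security_scan": True,                             # SAST must pass first
}

def policy_violations(changed_files: list[str], tool: str, scan_passed: bool) -> list[str]:
    problems = []
    if tool not in AI_USAGE_POLICY["approved_tools"]:
        problems.append(f"tool '{tool}' is not on the approved list")
    for path in changed_files:
        if any(path.startswith(p) for p in AI_USAGE_POLICY["restricted_paths"]):
            problems.append(f"AI-assisted change touches restricted path: {path}")
    if AI_USAGE_POLICY["require_security_scan"] and not scan_passed:
        problems.append("security scan has not passed for this AI-assisted change")
    return problems

if __name__ == "__main__":
    # Example: a Copilot-assisted change touching auth/ with no passing scan
    for problem in policy_violations(["auth/login.py"], "github-copilot", False):
        print(problem)
```

Expressing the policy as data rather than a PDF means the “safe or unsafe use cases” leaders are asking for can be enforced automatically rather than relying on developers to remember them.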
Bharat Mistry, field CTO at Trend Micro, told ITPro that the policies outlined in the Harness report were all wise, but highlighted the importance of training when trying to shape employee behaviour and foster responsible use of personal AI systems.
“I agree with the policies given above, however for me it begins with the human aspect. By investing in comprehensive training and awareness programs, businesses can empower their employees to use AI responsibly, identify and mitigate risks and contribute to the development of ethical and effective AI solutions,” he argued.
“This proactive approach not only enhances the organization’s AI capabilities but also builds a culture of trust and accountability around AI technologies.”
Speaking to ITPro, Steve Ponting, director of Software AG, echoed Mistry’s comments, noting that training will be essential in mitigating the risks associated with employees using their own AI tools.
“Workers have been clear: they will use AI whether it’s sanctioned or not. This means that businesses could struggle to manage AI applications, leading to cybersecurity risks, skills gaps, and inaccurate work,” he explained.
“Businesses must have a plan in place to reduce risk, build skills and plan for AI’s inclusion in daily work. If people are determined to use their own AI, training is vital in this regard. Better training would make 46% of employees use AI more, but crucially, they would use it effectively and responsibly.”

Solomon Klappholz is a former staff writer for ITPro and ChannelPro. He has experience writing about the technologies that facilitate industrial manufacturing, which led to him developing a particular interest in cybersecurity, IT regulation, industrial infrastructure applications, and machine learning.