Shadow AI is creeping its way into software development – more than half of developers admit to using unauthorized AI tools at work, and it’s putting companies at risk
Enterprises need to create smart AI usage policies that balance the benefits and risks


With software developers increasingly flocking to AI tools to support daily activities, new research suggests a concerning portion are using unauthorized solutions.
Findings from Harness’ State of Software Delivery Report show that more than half (52%) of developers don’t use IT-approved tools, raising significant compliance and intellectual property concerns.
“Perhaps the most alarming observation was around the use of company-approved coding tools – or lack thereof,” the report states.
“The unauthorized adoption of AI codegen tools creates significant shadow IT challenges that extend far beyond immediate security concerns.”
Shadow AI is a serious cause for concern for teams, the report added, with developers potentially exposing sensitive code snippets to third-party AI services without proper governance.
“Ultimately, they can’t track the origin of AI-generated code, nor can they ensure consistent security standards across teams,” Harness said.
Software developers aren’t the only ones flocking to shadow AI
The rise of shadow AI has become a recurring talking point over the last two years as enterprises globally flock to the various AI tools available on the market.
In its Chasing Shadows report, Software AG found 75% of knowledge workers are already using AI, with this figure set to rise to 90% in the near future, and more than 50% of this group use personal or non-company issued tools when doing so.
Another study by customer service platform Zendesk recorded a rise of as much as 250% year on year in the use of unsanctioned AI tools in certain industries.
The financial services sector was the worst affected, with a 250% spike compared to 2023 levels, though the healthcare (230%) and manufacturing (233%) industries also exhibited very high levels of shadow AI use.
Developing robust AI usage policies will be integral to ensuring this growing reliance on unvetted AI tools does not expose your enterprise to unnecessary risks.
Harness’ report listed the critical gaps identified by software engineering leaders in their organization’s AI coding tool policies.
Three-fifths of engineering leaders said companies need policies prescribing the processes for assessing code for vulnerabilities or errors, with 58% stating they need to outline specific use cases where AI is safe or unsafe.
Bharat Mistry, field CTO at Trend Micro, told ITPro the policies outlined in the Harness report were all wise, but highlighted the importance of training when trying to shape employee behaviour and foster responsible use of personal AI systems.
“I agree with the policies given above, however for me it begins with the human aspect. By investing in comprehensive training and awareness programs, businesses can empower their employees to use AI responsibly, identify and mitigate risks and contribute to the development of ethical and effective AI solutions,” he argued.
“This proactive approach not only enhances the organization’s AI capabilities but also builds a culture of trust and accountability around AI technologies.”
Speaking to ITPro, Steve Ponting, director at Software AG, echoed Mistry’s comments, noting that training will be essential in mitigating the risks associated with employees using their own AI tools.
“Workers have been clear: they will use AI whether it’s sanctioned or not. This means that businesses could struggle to manage AI applications, leading to cybersecurity risks, skills gaps, and inaccurate work,” he explained.
“Businesses must have a plan in place to reduce risk, build skills and plan for AI’s inclusion in daily work. If people are determined to use their own AI, training is vital in this regard. Better training would make 46% of employees use AI more, but crucially, they would use it effectively and responsibly.”
Solomon Klappholz is a former Staff Writer at ITPro and ChannelPro. He has experience writing about the technologies that facilitate industrial manufacturing, which led to him developing a particular interest in IT regulation, industrial infrastructure applications, and machine learning.