AI code security report: Organizations must change their approach
56.4% say insecure AI suggestions are common — but few have changed processes to improve AI security


Over the last several years, AI coding assistants have proliferated. Software developers and web programmers now have tools like ChatGPT and GitHub Copilot at their disposal.
This whitepaper from Snyk shares insights from a survey on the use of AI coding tools at companies across a range of industries. The findings in this report underscore why it's so important for development and security teams to adopt a responsible approach to AI.
Here's what you'll learn:
- How much risk is injected into the development process by AI code completion
- Which components AI systems label as secure when they are not
- The cognitive dissonance between the security implications of AI coding tools and developers' confidence in their ability to generate secure code
Download now
Provided by Snyk
-
Why DORA is bigger than just a financial sector compliance check box exercise
The EU’s landmark digital resilience legislation has issued a wake-up call for adopting a continuous approach to cybersecurity
-
Helping customers adopt a multi-cloud infrastructure and accelerate their modernization journey
Sponsored Content We outline what shifting to a subscription model means for your business
-
Is ChatGPT making us dumber? A new MIT study claims using AI tools causes cognitive issues, and it’s not the first – Microsoft has already warned about ‘diminished independent problem-solving’
News A recent study from MIT suggests that using AI tools impacts brain activity, with frequent users underperforming compared to their counterparts.
-
‘Agent washing’ is here: Most agentic AI tools are just ‘repackaged’ RPA solutions and chatbots – and Gartner says 40% of projects will be ditched within two years
News Agentic AI might be the latest industry trend, but new research suggests the majority of tools are simply repackaged AI assistants and chatbots.
-
‘Digital first, but not digital only’: Customer service workers were first on the AI chopping block – but half of enterprises are now backtracking amid a torrent of consumer complaints and poor returns on AI
News While businesses have been keen on replacing customer service workers with AI, adoption difficulties mean many are now backtracking on plans.
-
‘I don’t think this is on people’s radar’: AI could wipe out half of entry-level jobs in the next five years – and Anthropic CEO Dario Amodei thinks we're all burying our heads in the sand
News With AI set to hit entry-level jobs especially hard, some industry execs say clear warning signs are being ignored
-
‘A complete accuracy collapse’: Apple throws cold water on the potential of AI reasoning – and it's a huge blow for the likes of OpenAI, Google, and Anthropic
News Apple published a research paper on the effectiveness of AI 'reasoning' models - and it seriously rains on the parade of the world’s most prominent developers.
-
Questions raised over AI’s impact as studies tout conflicting adoption outcomes
News Two reports highlight the difficulty of judging the impact of AI on jobs, productivity, and wages
-
Sick and tired of spreadsheets? Perplexity’s new tools can help with that
News Perplexity Labs is available now for Pro subscription users
-
Meta faces new ‘open washing’ accusations with AI whitepaper
News The tech giant has faced repeated criticism for describing its Llama AI model family as "open source".