AI code security report: Organizations must change their approach
56.4% say insecure AI suggestions are common — but few have changed processes to improve AI security
Over the last several years, AI coding assistants have proliferated, and software developers and web programmers now have tools like ChatGPT and GitHub Copilot at their disposal.
This whitepaper from Snyk shares insight from a survey about the use of AI coding tools across companies in different industries. The findings in this report underscore why it's so important for development and security teams to adopt a responsible approach to AI.
Here's what you'll learn:
- How much risk AI code completion injects into the development process (a brief illustration follows this list)
- Which components AI systems label as secure when they are not
- The cognitive dissonance between the security implications of AI coding tools and developers' confidence in their ability to generate secure code
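To make the first point concrete, here is a minimal sketch of the kind of insecure suggestion the report is concerned with. The snippet is not taken from the Snyk report; the function names and table schema are invented for illustration. It contrasts a string-interpolated SQL query, a pattern coding assistants are often criticized for suggesting, with the parameterized alternative a security review should steer developers toward.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # The kind of completion an assistant might offer: user input is
    # interpolated directly into the SQL string, leaving the query
    # open to SQL injection (e.g. username = "x' OR '1'='1").
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_parameterized(conn: sqlite3.Connection, username: str):
    # The safer pattern a review step should push toward: a
    # parameterized query keeps user input out of the SQL text.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

Both functions behave identically on well-formed input, which is exactly why insecure completions are easy to accept without a dedicated review step.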
Download now
Provided by Snyk
