Developers more likely to introduce security vulnerabilities in code when using AI assistants

Developers who use AI pair programming assistants such as GitHub Copilot are more likely to introduce security vulnerabilities across the majority of programming tasks, new research from Stanford University suggests.

Researchers from Stanford University set developers a series of coding tasks across different programming languages. Participants were split into two groups: those who used OpenAI's Codex, an AI pair programming tool, and those who relied only on their own knowledge of the language.

Participants were set six tasks spanning languages including Python, JavaScript, and C. Results from tasks relating to encryption were of particular concern to the researchers: in one task, only 67% of those who used the AI assistant produced correct, secure code, compared to 79% of those who relied only on their own skills.
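
The study's tasks and participants' code are not reproduced here, but as a hypothetical illustration of what separates a secure from an insecure answer to an encryption task, the sketch below uses Python's third-party cryptography package; the function names and task framing are assumptions for illustration only, not material from the study. Its Fernet recipe provides authenticated symmetric encryption with safe defaults, rather than a hand-rolled or unauthenticated scheme.

```python
# Hypothetical sketch of a "correct, secure" answer to a symmetric encryption
# task, assuming the third-party "cryptography" package is installed.
# Fernet provides authenticated encryption (AES-CBC + HMAC) with safe defaults.
from cryptography.fernet import Fernet

def encrypt_message(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt plaintext under a freshly generated key; returns (key, token)."""
    key = Fernet.generate_key()             # 32-byte URL-safe base64 key
    token = Fernet(key).encrypt(plaintext)  # token bundles timestamp, IV, and HMAC
    return key, token

def decrypt_message(key: bytes, token: bytes) -> bytes:
    """Decrypt a token produced by encrypt_message; raises if it was tampered with."""
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    key, token = encrypt_message(b"attack at dawn")
    assert decrypt_message(key, token) == b"attack at dawn"
```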

The Stanford researchers also found that participants with access to an AI assistant were not only more likely to introduce security vulnerabilities, but also more likely to rate their insecure answers as secure, compared to those who didn't use the AI technology.

Concerns over developer diligence were also raised. Those who used AI assistants were less likely, for example, to take care in searching the language's documentation to protect against unsafe code implementations. The researchers noted that this was "concerning given that several of the security vulnerabilities [they] saw involved improper library selection or usage".
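
As a hypothetical example of the "improper library selection or usage" the researchers describe (again, not code drawn from the study), consider password hashing with Python's standard library: a fast, unsalted hash such as MD5 is a poor choice, while the same hashlib module offers PBKDF2, a purpose-built key-derivation function.

```python
# Hypothetical contrast between improper and more appropriate use of Python's
# standard hashlib library for password storage (illustrative only).
import hashlib
import os

def hash_password_insecure(password: str) -> str:
    # Improper usage: MD5 is fast and unsalted, so stored hashes are easy to brute-force.
    return hashlib.md5(password.encode()).hexdigest()

def hash_password_better(password: str) -> tuple[bytes, bytes]:
    # More appropriate: PBKDF2 with a random salt and a high iteration count.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest
```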

“Overall, our results suggest that while AI code assistants may significantly lower the barrier of entry for non-programmers and increase developer productivity, they may provide inexperienced users a false sense of security,” they said.

“By releasing user data, we hope to inform future designers and model builders to not only consider the types of vulnerabilities present in the outputs of models such as OpenAI’s Codex, but also the variety of ways users may choose to interact with an AI code assistant.”

Participants who spent more time honing their queries to the AI assistant, including by adjusting its parameters, were ultimately more likely to provide secure solutions. Those who trusted the AI less and engaged more with the language and format of their prompts were more likely to produce secure code, the researchers concluded.

A drawback of the study, the researchers noted, was that only university students took part, meaning the conclusions drawn may not apply directly to those with years of professional experience, since developers working in the industry may have more security experience.

Regardless, the results highlight the need for caution in relying too heavily on such AI tools, especially when working on high-value projects, despite the developer community's enthusiasm for them.

GitHub has previously claimed that its AI pair programmer, Copilot, improves developer productivity, citing its own survey in which 88% of developers said they were more productive when using the tool.

The coding platform also claimed that Copilot improves developer happiness by allowing developers to stay in a development flow for longer and solve more complex problems. Competing tools such as Facebook's InCoder and Codex, the latter of which was used in the Stanford study, both receive significant support from the developers who use them.

However, the current implementation of AI pair programmers was called into question after GitHub was hit with a class action lawsuit in November 2022 alleging that Copilot commits software piracy because it is trained on publicly available repositories hosted on GitHub's platform. The lawsuit claimed that creators' legal rights were violated because they had posted code or other work under various open source licences on the platform.

Zach Marzouk

Zach Marzouk is a former ITPro, CloudPro, and ChannelPro staff writer, covering topics like security, privacy, worker rights, and startups, primarily in the Asia Pacific and the US regions. Zach joined ITPro in 2017 where he was introduced to the world of B2B technology as a junior staff writer, before he returned to Argentina in 2018, working in communications and as a copywriter. In 2021, he made his way back to ITPro as a staff writer during the pandemic, before joining the world of freelance in 2022.