Vibe coding security risks and how to mitigate them

Vibe coding is increasingly popular, yet it opens firms up to multiple security risks – here’s what businesses need to know.

An illustration of a software engineer pressing a large button labeled “PROMPT” as AI-generated code appears beneath it (Image credit: Getty Images)

Vibe coding is gaining traction. The technique – which sees a developer describe their project to a large language model (LLM) to generate code – allows programmers to create software and apps with limited training and skills, lowering costs and boosting efficiency.

OpenAI co-founder and former Tesla AI chief Andrej Karpathy – who popularized the term at the start of 2025 – describes it as “fully giving in to the vibes, embracing exponentials, and forgetting that the code even exists”.

But vibe coding also opens businesses up to security risks, including the possibility of introducing vulnerabilities into the software being created. In fact, nearly half of all code generated by AI contains security flaws, despite appearing production-ready, according to Veracode’s 2025 GenAI Code Security report.

Veracode tested 100 leading LLMs across 80 curated tasks and found they produced insecure code 45% of the time, with no real improvement across newer or larger models.

It’s also frighteningly easy to make mistakes: earlier this year, a vibe-coding CEO managed to delete his entire codebase while using Replit’s AI assistant.

Vibe coding: benefits versus risk

Vibe coding simplifies app development, but it often bypasses essential security vetting and regulatory compliance checks, says Dr Jeff Schwartzentruber, senior machine learning scientist at eSentire. “Without rigorous preplanning, architectural oversight and experienced project management, vibe coding can introduce vulnerabilities, compliance gaps and substantial technical debt.”

Vibe coding lowers the barrier to entry for creating software, but the autonomous nature of AI-generated suggestions introduces security risks, says Louise Fellows, VP, northern Europe at GitLab. “AI provides the ‘vibe’ or suggested patterns that developers might accept without critical evaluation or deep comprehension of their underlying security implications. This can result in vulnerabilities in the codebase.”

At the same time, the AI may not inherently prioritize security best practices, adhere to specific organizational policies, or comply with regulations such as the EU General Data Protection Regulation (GDPR). “Without proper governance and continuous security training, AI agents may suggest or generate code that includes outdated or vulnerable third-party libraries, lacks essential security controls, or improperly handles sensitive data,” Fellows warns.

The core problem is that AI doesn’t write secure code by default. “It just spits out something that works, but under the hood, the logic can be completely wrong, or wide open to attacks,” says Mackenzie Jackson, developer and security advocate at Aikido Security.

Vibe coding skips too many checks, says Camden Woollven, group head of AI product marketing at GRC International Group, who tells ITPro this compounds the issue. “Developers are using AI to write code fast, often without reviewing it properly or even understanding what it’s doing.”

That leads to vulnerabilities including injection flaws, poor authorization, missing validation and hardcoded secrets, says Woollven. “If no one’s checking, that code goes live,” she adds.
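To make those flaw classes concrete, here is a minimal Python sketch contrasting two of them – a hardcoded secret and missing input validation – with safer equivalents. The names and the commented-out “before” snippet are illustrative, not drawn from any specific AI output:

```python
import os
import re

# A pattern AI assistants often produce: secrets and unchecked input baked in.
#   API_KEY = "sk-live-abc123"                       # hardcoded secret
#   query = f"SELECT * FROM users WHERE id = {uid}"  # string-built SQL, open to injection

# Safer equivalents:
API_KEY = os.environ.get("API_KEY")  # secret comes from the environment, not the repo

def validate_user_id(raw: str) -> int:
    # Reject anything that isn't a plain positive integer before it nears the database.
    if not re.fullmatch(r"\d{1,10}", raw):
        raise ValueError("invalid user id")
    return int(raw)
```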

Then there’s the dependency sprawl. “One prompt can pull in a dozen libraries, some of them insecure or unmaintained,” says Woollven. “If no one’s vetting what the model uses, you’re stacking risk fast. Worse, you end up with code no one fully owns or understands. No documentation, no visibility, no easy way to debug or maintain it. That becomes a long-term liability.”
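One practical counter is simply making the dependency surface visible before anything ships. As a rough illustration, this short Python sketch enumerates every installed package and version so a reviewer has a concrete list to vet; dedicated tools such as pip-audit go further and check that list against known vulnerabilities:

```python
from importlib import metadata

# Enumerate every installed package and its version so a human reviewer can
# vet what an AI-generated project actually depends on before it ships.
deps = sorted(
    (dist.metadata["Name"], dist.version)
    for dist in metadata.distributions()
    if dist.metadata["Name"]
)
for name, version in deps:
    print(f"{name}=={version}")
```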

Debugging is tough, even when you write the code yourself, says Jackson. “When it’s generated by a machine that’s making thousands of assumptions from a tiny prompt, it’s even harder.”

He points to issues with platforms such as Lovable, where apps built by hobby developers can have serious security holes. “That’s just a preview of what we’re going to see all over the web as this keeps scaling up.”

Secure vibe coding best practices

Despite the risks, vibe coding is already being used by businesses, making it important that firms are on top of the trend and including it in their AI security strategies. The first rule is simple: Treat AI-generated code as untrusted, says Woollven. “Just because it runs doesn’t mean it’s safe.”

Firms also need to ensure governance measures are in place. “Set clear rules on where and how AI tools can be used, who’s allowed to take advantage of them, and what kind of review is mandatory,” Woollven advises. “Nothing should ship without sign-off from someone who knows what they’re looking at.”

Secure vibe coding starts with “strong governance and a human-centric approach”, says Fellows. As part of this, she urges IT leaders to establish clear accountability. “Teams should implement composite identities that directly link AI agent actions to specific human owners. This ensures you always know who's responsible for the code, even when AI suggests it.”

Over time, firms should prioritize continuous human oversight and automated checks, Fellows adds. “Human review gates must remain in place, alongside automated testing, to catch issues early before they reach production.”
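In practice, those automated checks can be as simple as security regression tests that run in CI before any AI-generated change merges. A minimal pytest-style sketch, reusing the hypothetical validate_user_id helper from the earlier example:

```python
import pytest

# "validate_user_id" is the hypothetical helper from the earlier sketch;
# adjust the import to wherever it lives in your codebase.
from app.validators import validate_user_id

# Security regression tests that gate merges: any AI-generated change that
# weakens input validation fails CI before it reaches production.
@pytest.mark.parametrize("bad", ["1; DROP TABLE users", "-1", "abc", ""])
def test_rejects_malformed_ids(bad):
    with pytest.raises(ValueError):
        validate_user_id(bad)

def test_accepts_plain_integer():
    assert validate_user_id("42") == 42
```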

It’s not enough to know what code is produced, she says. “You also need to monitor its security impact in staging and production environments, associated databases, and any applications it has access to.”

To boost security in vibe coding, sanitize inputs and use parameterized queries to avoid injection attacks, Jackson advises. “Put rate limits in place so someone can’t just hammer your endpoints and never leave secrets such as API keys sitting in your frontend code or in a public repository.”
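A minimal Python sketch of that advice, using sqlite3 as a stand-in database and a deliberately simple in-memory rate limiter – a production system would lean on the web framework’s middleware or a shared store such as Redis instead:

```python
import os
import sqlite3
import time
from collections import defaultdict

API_KEY = os.environ.get("API_KEY")  # secrets live in the environment, not the repo
conn = sqlite3.connect("app.db")

def get_user(user_id: int):
    # Parameterized query: the driver handles escaping, closing off SQL injection.
    return conn.execute(
        "SELECT id, name FROM users WHERE id = ?", (user_id,)
    ).fetchone()

# Deliberately simple in-memory rate limiter: at most `limit` requests
# per client in any `window`-second span.
_hits = defaultdict(list)

def allow_request(client_ip: str, limit: int = 10, window: float = 60.0) -> bool:
    now = time.time()
    _hits[client_ip] = [t for t in _hits[client_ip] if now - t < window]
    if len(_hits[client_ip]) >= limit:
        return False
    _hits[client_ip].append(now)
    return True
```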

Meanwhile, continually check your packages are up to date and therefore secure: “A ton of attacks target old vulnerabilities,” Jackson warns.
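In a Python project, that check can be automated with an auditing tool such as pip-audit, which exits non-zero when it finds known vulnerabilities – a property a small wrapper script can use to fail the build (this assumes pip-audit is installed):

```python
import subprocess
import sys

# Run pip-audit against the current environment; it reports any installed
# dependency with a known vulnerability and exits non-zero if it finds one.
result = subprocess.run(["pip-audit"], capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    sys.exit("Dependency audit failed: update or replace the flagged packages.")
```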

As vibe coding takes off, educating employees about the risks will go some way toward preventing security issues before they happen. With this in mind, Fellows recommends “upskilling teams on prompt hygiene and clearly defining when and where vibe coding is appropriate”.

If you let things get out of control, mistakes are bound to happen. Therefore, if you are going to use vibe coding capabilities, keep the scope small, Woollven says. “Don’t use AI to generate a whole app. Avoid letting it write anything critical like auth, crypto or system-level code – build those parts yourself.”

Kate O'Flaherty is a freelance journalist with well over a decade's experience covering cyber security and privacy for publications including Wired, Forbes, the Guardian, the Observer, Infosecurity Magazine and the Times. Within cyber security and privacy, her specialist areas include critical national infrastructure security, cyber warfare, application security and regulation in the UK and the US amid increasing data collection by big tech firms such as Facebook and Google. You can follow Kate on Twitter.