The Replit vibe coding incident gives us a glimpse into why developers are still wary of AI coding assistants
Recent vibe coding snafus highlight the risks of AI coding assistants
Two high-profile incidents involving AI coding assistants show why developers may be justified in remaining wary of the technology.
The CEO of Replit, a vibe coding platform that recently secured a big-money partnership with Microsoft, issued a public apology to a user last month after the tool deleted a company’s entire database during a test run.
In a series of posts on X, SaaStr founder Jason Lemkin revealed the AI assistant deleted the company’s database despite being prompted not to alter code without express permission.
Lemkin had previously detailed his experiences with the AI coding tool, describing it as “pretty cool” on first impressions. However, he noted that the AI soon began hiding its errors and fabricating data.
“It kept covering up bugs and issues by creating fake data, fake reports, and worst of all, lying about our unit test,” he wrote.
Matters came to a head when the tool deleted a database containing over 1,200 executive records, as well as data on around 1,200 companies.
Notably, when asked to assess the severity of its mistake, the tool responded: “Severity: 95/100. This is an extreme violation of trust and professional standards.”
“I made a catastrophic error in judgement,” the tool said in response to an earlier prompt about the changes.
Luckily, Lemkin said Replit’s rollback feature prevented complete disaster on this occasion, even though the tool had informed him a rollback wasn’t possible.
“Replit assured me its built-in rollback did not support database rollbacks,” he wrote. “It said it was impossible in this case, that it had destroyed all database versions.
“It turns out Replit was wrong, and the rollback did work. JFC.”
More AI coding carnage
This wasn’t an isolated incident in July, either. Just days later, a similar situation unfolded for a user of Google’s Gemini command line interface (CLI) tool, which also resulted in data being destroyed.
In a now-deleted post on GitHub, Anuraag Gupta, a product lead at Cyware, described his experience experimenting with the open source coding tool. According to reports from Mashable, Gupta asked the tool to move files from previous Claude coding activities into a new folder.
Gupta noted that the new file location reported by the AI tool was not accurate and the files had seemingly vanished. Gemini later acknowledged they had been destroyed.
“I have failed you completely and catastrophically,” the AI model said. “My review of the commands confirms my gross incompetence. The mkdir command to create the destination folder likely failed silently, and my subsequent move commands, which I misinterpreted as successful, have sent your files to an unknown location.”
The model added that due to “security constraints" it was unable to search outside the project directory, noting that this was an “unacceptable, irreversible failure”.
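The failure mode Gemini described is a familiar one: if the destination directory is never created, a move operation does not necessarily fail. It can instead rename each file to the destination path, with every rename silently replacing the last. The short Python sketch below is a hypothetical illustration of that mechanism under those assumptions, not the actual commands the CLI ran.

```python
import pathlib
import shutil
import tempfile

# Hypothetical illustration only; not the commands Gemini CLI actually ran.
work = pathlib.Path(tempfile.mkdtemp())
for name in ("a.txt", "b.txt"):
    (work / name).write_text(name)

dest = work / "archive"  # intended destination directory
# Suppose the step that should have created `dest` failed silently, so the
# directory never exists. Each move then renames the file to the literal
# path "archive", and each rename silently replaces the previous file.
for name in ("a.txt", "b.txt"):
    shutil.move(str(work / name), str(dest))  # raises no error

print(sorted(p.name for p in work.iterdir()))  # ['archive']
print(dest.read_text())  # 'b.txt' -- the contents of a.txt are gone
```

The point is not the specific commands, but that a silent failure early in a sequence can turn later, apparently successful operations into destructive ones.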
Developers are wary of AI - and for good reason
While these incidents were experimental and didn’t result in complete disaster, they both highlight the potential issues developers might face when relying on the technology in real-world enterprise environments.
AI coding tools have been framed as a game changer for the industry over the last 18 months, with big tech providers specifically highlighting the productivity benefits of the technology.
Research shows these tools are having a positive impact on productivity and efficiency, but figures from Stack Overflow’s 2025 Developer Survey show engineers and devs are still wary.
The survey found 84% of developers currently use - or plan to use - AI tools in their daily activities, a sharp increase compared to last year’s edition. Running parallel to this growth is a sense of caution, however, with nearly half (46%) of respondents noting that they “don’t trust the accuracy” of AI tools.
This, Stack Overflow stressed, marked a significant increase in the number of developers wary of the technology compared to the year prior.
All told, three-quarters (75.3%) of respondents said they turn to a colleague when they don’t trust an AI tool’s answers, while nearly two-thirds (61.7%) said they also have ethical and security concerns about AI-generated code.
Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.