The Replit vibe coding incident gives us a glimpse into why developers are still wary of AI coding assistants
Recent vibe coding snafus highlight the risks of AI coding assistants


Two high-profile incidents involving AI coding assistants show why developers may be justified in remaining wary of these tools.
The CEO of Replit, a vibe coding platform that recently secured a big-money partnership with Microsoft, issued a public apology to a user last month after the tool deleted a company’s entire database during a test run.
In a series of posts on X, SaaStr founder Jason Lemkin revealed the AI assistant deleted the company’s database despite being prompted not to alter code without express permission.
Lemkin had previously detailed his experiences with the AI coding tool, describing it as “pretty cool” on first impression. However, he noted that the AI soon began hiding its errors and fabricating data.
“It kept covering up bugs and issues by creating fake data, fake reports, and worst of all, lying about our unit test,” he wrote.
The incident came to a head when the tool deleted a database containing over 1,200 executive records, as well as data on around 1,200 companies.
Notably, when asked to assess the severity of its mistake, the tool responded: “Severity: 95/100. This is an extreme violation of trust and professional standards.”
“I made a catastrophic error in judgement,” the tool said in response to an earlier prompt about the changes.
Luckily, Lemkin said, the platform’s rollback feature prevented complete disaster on this occasion, even though the tool had informed him that recovery wasn’t possible.
“Replit assured me its built-in rollback did not support database rollbacks,” he wrote. “It said it was impossible in this case, that it had destroyed all database versions.
“It turns out Replit was wrong, and the rollback did work. JFC.”
More AI coding carnage
This wasn’t an isolated incident in July, either. Just days later, a user of Google’s Gemini command line interface (CLI) tool saw data destroyed in a similar situation.
In a now-deleted post on GitHub, Anuraag Gupta, a product lead at Cyware, described experimenting with the open source coding tool. According to reports from Mashable, Gupta asked the tool to move files from previous Claude coding activities into a new folder.
Gupta noted that the new file location reported by the AI tool was not accurate and that the files had seemingly vanished. Gemini later acknowledged the files had been destroyed.
“I have failed you completely and catastrophically,” the AI model said. “My review of the commands confirms my gross incompetence. The mkdir command to create the destination folder likely failed silently, and my subsequent move commands, which I misinterpreted as successful, have sent your files to an unknown location.”
The model added that due to “security constraints” it was unable to search outside the project directory, noting that this was an “unacceptable, irreversible failure”.
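The mechanism the model describes is a well-known scripting pitfall rather than anything unique to Gemini: if a mkdir fails silently, each subsequent move simply renames a file to the path where the folder should have been, and every move overwrites the last. Below is a minimal Python sketch of that failure mode, with hypothetical file and folder names, assuming the moves behave like Python’s shutil.move.

```python
import os
import shutil
import tempfile

# A sketch of the failure mode described above, using hypothetical
# file and folder names. "Moving" a file to a directory path that was
# never created simply renames the file to that path, and each later
# move silently overwrites the one before it.
workdir = tempfile.mkdtemp()
files = ["claude_notes_1.txt", "claude_notes_2.txt", "claude_notes_3.txt"]
for name in files:
    with open(os.path.join(workdir, name), "w") as f:
        f.write(name + "\n")

# The destination folder was supposed to exist, but the mkdir never ran.
dest = os.path.join(workdir, "new_folder")

for name in files:
    # Intended: move each file *into* new_folder/.
    # Actual: each file is renamed *to* the path 'new_folder',
    # clobbering whatever the previous iteration left there.
    shutil.move(os.path.join(workdir, name), dest)

print(os.listdir(workdir))  # ['new_folder'] -- a file, not a folder
with open(dest) as f:
    print(f.read())  # claude_notes_3.txt -- the first two files are gone
```

Only the last file survives, which is consistent with files seemingly vanishing to an “unknown location” rather than being visibly deleted.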
Developers are wary of AI - and for good reason
While these incidents were experimental and didn’t result in complete disaster, they both highlight the potential issues developers might face when relying on the technology in real-world enterprise environments.
AI coding tools have been framed as a game changer for the industry over the last 18 months, with big tech providers specifically highlighting the productivity benefits of the technology.
Research shows these tools are having a positive impact on productivity and efficiency, but figures from Stack Overflow’s 2025 Developer Survey show engineers and devs are still wary.
The survey found 84% of developers currently use - or plan to use - AI tools in their daily activities, a sharp increase compared to last year’s edition. Running parallel to this growth is a sense of caution, however, with nearly half (46%) of respondents noting that they “don’t trust the accuracy” of AI tools.
This, Stack Overflow stressed, marked a significant increase in the number of developers wary of the technology compared to the year prior.
All told, three-quarters (75.3%) of respondents said they turn to a co-worker for advice when they don’t trust an AI tool’s answers, while nearly two-thirds (61.7%) said they have ethical or security concerns about AI-generated code.
