Nearly 50 million Europcar customer records put up for sale on the dark web – or were they?
Europcar denies alleged breach, claiming the exfiltrated data was fabricated. Experts are now arguing over whether AI is to blame.


A database containing nearly 50 million customer records reportedly stolen from car rental service Europcar has been put up for sale on a hacking forum, but questions have been raised over the data’s authenticity.
If legitimate, the leak would be one of the largest data breaches in recent years, and the nature of the stolen information would have exposed customers to a wide range of attacks.
As pointed out by Reddit’s head of security Matt Johansen on X (formerly Twitter), car rental companies require customers to hand over a lot of personally identifiable information (PII), including passports and driver’s licenses, which are much harder to rotate than exposed passwords.
Europcar, however, has said the database is fake. It said the records included in a sample of the data do not match those it has on file, with none of the leaked email addresses corresponding to those in its own database.
In addition, Europcar speculated that the data may have been synthesized using generative AI, pointing to a series of discrepancies that look like hallucinations.
For example, the data includes non-existent addresses, ZIP codes that don’t match addresses, and both first and last names that do not correspond to those used in email addresses.
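Inconsistencies like these can be flagged programmatically. The sketch below is a minimal, hypothetical illustration of such checks: the field names, sample records, and ZIP lookup table are all invented for the example, and a real screening pass would use a proper postal database and fuzzier name matching.

```python
import re

# Hypothetical sample records, mimicking the kinds of fields in the leak.
# Field names and values are illustrative assumptions, not from the actual dump.
RECORDS = [
    {"first": "Anna", "last": "Schmidt", "email": "j.mueller@example.com",
     "zip": "75001", "city": "Paris"},
    {"first": "Marc", "last": "Dubois", "email": "marc.dubois@example.com",
     "zip": "75001", "city": "Paris"},
]

# Toy lookup of valid ZIP-to-city pairs; a real check would query a postal database.
ZIP_TO_CITY = {"75001": "Paris", "10115": "Berlin"}

def flag_record(rec):
    """Return a list of hallucination-style inconsistencies for one record."""
    issues = []
    # ZIP code that does not match the stated city.
    if ZIP_TO_CITY.get(rec["zip"]) != rec["city"]:
        issues.append("zip/city mismatch")
    # Neither first nor last name appears in the local part of the email.
    local = re.split(r"@", rec["email"].lower())[0]
    if rec["first"].lower() not in local and rec["last"].lower() not in local:
        issues.append("name/email mismatch")
    return issues

for rec in RECORDS:
    print(rec["email"], flag_record(rec))
```

A handful of checks like these, run over a sample of the dump, is often enough to show that records are internally inconsistent in the way generated data tends to be.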
At the time of writing, the authenticity of the data has not been verified, but Huseyin Can Yuceel, security researcher at Picus Security, appeared confident the data was created using generative AI tools.
“The Europcar security incident unfolded like a classic Scooby Doo unmasking. In the space of a couple of hours, the infosec community went from analyzing the impact of one of the biggest data breaches of all time to exposing an AI-powered hoax,” he said.
Can Yuceel argued this incident represents a novel attack vector being exploited by threat actors, using generative AI to create fake datasets that can be used to deceive and extort businesses.
“A far cry from initial reports of a data breach involving 50 million customers, this incident should be classified as an attempted social engineering attack. In social engineering attacks, it’s common for adversaries to manipulate their victims into sharing confidential information or executing malware to compromise the target system,” he explained.
“In this case, it seems as though attackers tried to create panic and pressure their target into paying ransom for a false claim that they stole sensitive customer data.”
Questioning the role of AI in fabricating data for extortion attempts
This debacle highlights AI’s offensive potential, according to Can Yuceel, who said businesses should take note of the extortion technique and adjust their incident response procedures accordingly.
“Adversaries are quick to adopt new techniques and tools, and the use of AI in cyber-attacks is becoming more commonplace. We, as defenders, should expect more AI-powered cyber-attacks in the near future.”
Other security experts have challenged this interpretation, however, arguing that AI’s role in fabricating the data is unclear and that the dataset could have been produced using legacy techniques.
Troy Hunt, founder and CEO at data breach site Have I Been Pwned, warned against jumping to the conclusion that AI was integral to this attack, citing his previous work on generating dummy data using software company Red Gate’s SQL data generator technology.
Hunt noted many of the email addresses were not synthesized but lifted from records exposed in previous data breaches, arguing this shows AI was not needed to generate the leaked email addresses.
Regardless of AI’s role in the attack, the recommendations for businesses to protect themselves against falling for fabricated data breaches remain the same: always compare stolen records with internal databases to confirm the veracity of the breach before acting.
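In practice, this cross-check can be as simple as measuring how much of a leaked sample overlaps with internal records. The sketch below is a minimal illustration under invented data; the function name and sample addresses are assumptions for the example, not Europcar’s actual process.

```python
# Minimal sketch: compare a leaked sample against internal records.
# All names and data here are illustrative, not from the real incident.

def breach_overlap(leaked_emails, internal_emails):
    """Return the fraction of leaked emails found in the internal database."""
    leaked = {e.strip().lower() for e in leaked_emails}
    internal = {e.strip().lower() for e in internal_emails}
    if not leaked:
        return 0.0
    return len(leaked & internal) / len(leaked)

leaked_sample = ["alice@example.com", "bob@example.net", "carol@example.org"]
internal_db = ["dave@example.com", "erin@example.net"]

overlap = breach_overlap(leaked_sample, internal_db)
print(f"{overlap:.0%} of sampled records match")
```

A near-zero overlap, as Europcar reportedly found, is strong evidence the claimed breach is fabricated; a high overlap would warrant escalating incident response.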
As such, Can Yuceel praised Europcar’s response to the incident, which involved cross-checking the allegedly stolen data against its internal database.
“It appears that Europcar did its due diligence and followed the incident response best practices by confirming whether these claims were true. After analyzing the claim, they found that the data was fake and confirmed there was no breach.”

Solomon Klappholz is a former staff writer for ITPro and ChannelPro. He has experience writing about the technologies that facilitate industrial manufacturing, which led to him developing a particular interest in cybersecurity, IT regulation, industrial infrastructure applications, and machine learning.