Using AI to generate passwords is a terrible idea, experts warn
Researchers have warned the use of AI-generated passwords puts users and businesses at risk
Cyber experts have warned against using AI to generate passwords after research found glaring security failures.
Analysis from cybersecurity firm Irregular found a host of popular AI chatbots, including ChatGPT, Claude, and Google Gemini produced highly predictable passwords.
A key factor behind this, the study noted, is that large language models (LLMs) generate passwords based on recognizable patterns, rather than in the randomized manner recommended by security experts.
In one test, Claude was asked to produce 50 passwords. Only 30 of them were unique, and one - G7$kL9#mQ2&xP4!w - was repeated 18 times.
GPT-5.2 fared similarly, according to researchers, with outputs showing “strong regularities”.
“Nearly all passwords begin with a v, and among those, almost half continue with Q,” Irregular said in a blog post. “Character selection is similarly narrow and uneven, with only a small subset of symbols appearing with any frequency.”
Notably, Gemini 3 Pro issued a security warning when prompted to generate suggested passwords, urging users not to use them.
“The reason given by the model is not that the password is weak, but that the password is ‘processed through servers’,” Irregular said - a justification the firm warned misrepresents the real risk posed to users.
How password strength is measured
Password strength is traditionally measured by its unpredictability, expressed in “bits of entropy” - a gauge of how many guesses a brute-force attack would need to crack the password.
Simply put, the higher the entropy, the stronger the password.
“A password with only 20 bits of entropy, for example, would need about 2²⁰ guesses, or approximately one million guesses – which could be done within seconds,” researchers explained.
“A password with 100 bits of entropy, however, would need about 2¹⁰⁰ guesses – a 31-digit number, requiring trillions of years to crack.”
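The arithmetic the researchers describe can be sketched in a few lines of Python. This is a minimal illustration of the entropy formula, not code from the Irregular study; the 62-character alphabet is an assumption for the example.

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """Entropy, in bits, of a password whose characters are drawn
    uniformly at random from a charset of the given size."""
    return length * math.log2(charset_size)

# 20 bits of entropy means about 2**20 guesses - roughly one million.
print(2 ** 20)  # 1048576

# A truly random 16-character password over letters and digits alone
# (62 symbols) already carries about 95 bits; adding punctuation pushes
# it toward the ~98 bits Irregular cites for a typical 16-character
# password. The AI-generated passwords managed an estimated 27 bits.
print(round(entropy_bits(62, 16), 1))  # 95.3
```

The key point is that entropy grows with the *size of the pool the characters are drawn from*, which is why pattern-bound LLM output scores so poorly even when it looks complex.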
Using AI to generate passwords is ill-advised
Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University, said using AI to generate passwords is a “risky practice” and urged users against relying on chatbots for this purpose.
“These models often produce strings which appear strong and complex but are actually highly predictable, featuring repeating patterns or familiar structures drawn from their training data,” he said.
“This approach is a poor security practice because large language models do not generate true randomness; they rely on statistical probabilities learned from vast datasets.”
Curran added that AI-generated passwords “lack the high entropy” needed to ensure robust protection, and could also be vulnerable to automated cracking tools.
Indeed, Irregular noted that a typical 16-character password should have roughly 98 bits of entropy, whereas AI-generated results only had an estimated 27 bits, making them highly susceptible to cracking.
“This is the difference between taking billions of years to crack a password even with a strong supercomputer, and taking seconds with a standard computer.”
Despite the glaring risks, researchers noted that AI-generated passwords are appearing in the wild at an alarming rate. The firm advised users to stick to traditional password generation methods.
Curran noted that enterprises need to nip these practices in the bud and inform staff about the potential risks.
“Organizations should take proactive steps to prevent staff from relying on AI for password creation by establishing clear policies that ban the use of public chatbots for security-sensitive tasks and instead mandate approved password managers equipped with cryptographically secure random number generators,” he said.
“Regular training programs can raise awareness of these limitations while encouraging the adoption of stronger alternatives, such as passkeys or multi-factor authentication, to reduce overall reliance on traditional passwords.”
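A cryptographically secure generator of the kind Curran describes is available in Python's standard library via the `secrets` module. The sketch below is illustrative - the alphabet choice is an assumption, not a quoted policy - but it shows how little code a properly random password requires.

```python
import secrets
import string

# Printable ASCII letters, digits and punctuation: 94 characters.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    """Draw each character uniformly at random from the OS CSPRNG,
    yielding about log2(94) ~ 6.55 bits of entropy per character."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # different, unpredictable output every run
```

Unlike an LLM, `secrets` is backed by the operating system's cryptographic random number generator, so the output has no learned patterns to exploit.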

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.
For news pitches, you can contact Ross at ross.kelly@futurenet.com, or on Twitter and LinkedIn.