With technology advancing, more people are relying on artificial intelligence to handle daily tasks, such as creating passwords. However, this seemingly smart solution may not be as secure as it appears.
We’ve all faced the challenge: registering for a new service or updating an existing account, staring at an empty password field and drawing a blank.
The complexity only grows: a combination of uppercase and lowercase letters, a minimum number of characters, numbers, a special symbol… and it must be entirely unique.
With a multitude of accounts across banking, shopping, streaming, and social media, and constant advice from cybersecurity professionals warning against reusing passwords, inventing a new, complex password each time can seem daunting.
It’s no wonder that some individuals are delegating this task to AI.
Recent findings indicate that users are turning to AI chatbots, such as OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini, to generate what they believe are ‘strong’ passwords.

AI chatbots are trained on extensive datasets of publicly available text and generate what appear to be intricate character sequences for your password. However, security experts caution that this method is flawed and could put your personal data at risk.
Research by AI cybersecurity company Irregular, as confirmed by Sky News, revealed that ChatGPT, Claude, and Gemini tend to produce ‘highly predictable passwords’.
“You should definitely not do that,” Irregular co-founder Dan Lahav told Sky News.
“And if you’ve done that, you should change your password immediately. And we don’t think it’s known enough that this is a problem.”
The issue with predictable patterns is that they undermine robust cybersecurity, making it easier for cybercriminals to use automated tools to crack passwords.

Because large language models (LLMs) base their outputs on pattern recognition from their training data, they don’t genuinely randomize password generation. This means they produce passwords that look strong but are still predictable.
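To see the difference, here is a minimal Python sketch of the standard alternative: drawing every character from a cryptographically secure random number generator rather than from learned text patterns. The character set and length below are illustrative choices, not a recommendation from the researchers.

```python
import secrets
import string

# A cryptographically secure generator draws each character independently
# from operating-system entropy, so two runs are astronomically unlikely
# to collide. The character classes here are an illustrative policy only.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length: int = 16) -> str:
    """Return a password of `length` characters chosen by a CSPRNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```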
Although AI chatbots can create passwords that seem complex, they should not be treated as password managers.
Alarmingly, many AI-generated passwords are easy to guess outright, while the weaknesses of others only emerge under mathematical analysis.
When Irregular asked Claude to create a set of 50 passwords, only 23 were unique.
One password – K9#mPx$vL2nQ8wR – appeared 10 times.
Other examples were K9#mP2$vL5nQ8@xR, K9$mP2vL#nX5qR@j, and K9$mPx2vL#nQ8wFs.
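Irregular’s duplicate count can be illustrated in a few lines of Python. The sketch below uses a small, hypothetical sample rather than the company’s actual output; the point is simply that counting distinct strings in a batch exposes the repetition.

```python
from collections import Counter

# Hypothetical batch of model-generated passwords; in Irregular's test,
# 50 requests to Claude yielded only 23 distinct strings.
generated = [
    "K9#mPx$vL2nQ8wR",
    "K9#mP2$vL5nQ8@xR",
    "K9#mPx$vL2nQ8wR",   # a repeat like this is the red flag
    "K9$mPx2vL#nQ8wFs",
]

counts = Counter(generated)
print(f"{len(counts)} unique out of {len(generated)} generated")
for password, n in counts.most_common():
    if n > 1:
        print(f"  {password!r} appeared {n} times")
```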

Sky’s test using Claude produced the password K9#mPx@4vLp2Qn8R. While ChatGPT and Gemini showed slightly more variation, they still gave ‘repeated passwords’.
These passwords passed online password strength tests, misleading users into thinking they were ‘extremely strong’.
“Our best assessment is that currently, if you’re using LLMs to generate your passwords, even old computers can crack them in a relatively short amount of time,” Lahav warned.
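The reasoning behind that warning can be sketched with back-of-the-envelope arithmetic. The numbers below are assumptions for illustration, not figures from Irregular’s research: if a model only ever emits a few thousand template variants, an attacker’s search space collapses from the astronomically large pool of truly random 16-character passwords to something a cheap machine can exhaust almost instantly.

```python
import math

# Back-of-the-envelope comparison. Both the size of the "pool" an LLM
# effectively samples from and the attacker's guess rate are assumptions
# made for illustration, not figures from Irregular's research.
alphabet_size = 70                      # letters, digits, common symbols
length = 16
random_space = alphabet_size ** length  # truly random 16-char passwords

assumed_llm_pool = 10_000               # assumed: variants a model actually emits
guesses_per_second = 1e9                # assumed: modest hardware, fast hash

print(f"random space:  ~2^{math.log2(random_space):.0f} possibilities")
print(f"assumed pool:  ~2^{math.log2(assumed_llm_pool):.0f} possibilities")
print(f"pool exhausted in {assumed_llm_pool / guesses_per_second:.5f} seconds")
print(f"random space would take ~{random_space / guesses_per_second / 3.15e7:.1e} years")
```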
The advice is to choose a lengthy phrase you can remember and steer clear of AI-generated options.
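For those who want help generating such a phrase, the safer route is a word list plus a cryptographically secure random choice, along the lines of the diceware method. The short word list below is only a placeholder; a real generator would use a large published list such as the EFF’s 7,776-word diceware list, where each additional word adds roughly 12.9 bits of entropy.

```python
import secrets

# Placeholder word list; a real generator would use a large published list
# such as the EFF's 7,776-word diceware list, where every extra word adds
# roughly 12.9 bits of entropy.
WORDS = ["copper", "lantern", "orbit", "velvet", "harbour", "quartz",
         "meadow", "piston", "saffron", "glacier", "ember", "willow"]

def generate_passphrase(n_words: int = 5) -> str:
    """Join words picked by a CSPRNG into a memorable phrase."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

print(generate_passphrase())
```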
A Google representative told Sky: “LLMs are not built for the purpose of generating new passwords, unlike tools like Google Password Manager, which creates and stores passwords safely.
“We also continue to encourage users to move away from passwords and adopt passkeys, which are easier and safer to use.”
OpenAI and Anthropic have been approached for comment.
