I copied a 16-character password spat out by an AI and felt safe. Hours later, a report from Irregular left me staring at that string as if it were a painted bullseye. You might be doing the same—trusting clever-looking strings that are quietly repeatable.
Every Claude output I reviewed began with the same pattern
I asked Anthropic’s Claude to generate dozens of 16-character passwords. What I expected—wide variation—never arrived. Nearly every string began with an uppercase G, the second character was almost always 7, and a handful of symbols and letters showed up over and over while most of the alphabet vanished.
ChatGPT and Gemini showed the same narrow habits
OpenAI’s ChatGPT favored the character v at the start of most passwords and often placed Q as the second character. Google’s Gemini gravitated toward K in upper or lower case, then repeated a small set of follow-up characters. Those are not random quirks; they are patterns an attacker can exploit.
LLMs avoided repeating characters in a predictable way
The models seemed to avoid obvious repetition—no double letters, no repeating digits—which makes strings feel varied at a glance. That absence of repeats is itself a pattern. I learned that the models were smoothing their outputs to appear plausible, and that smoothing cost entropy.
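To put a number on that lost entropy, here is a quick sketch. The 94-symbol printable-ASCII pool and the strict no-repeat rule are my assumptions for illustration, not figures from Irregular’s report; the point is to separate the cost of the repeat ban from the cost of a narrow character pool.

```python
import math

# Illustrative comparison (assumed 94-symbol printable-ASCII pool, length 16):
# how many bits does a strict "never repeat a character" rule remove by itself?
POOL, LENGTH = 94, 16

free_bits = LENGTH * math.log2(POOL)                               # any character anywhere
no_repeat_bits = sum(math.log2(POOL - i) for i in range(LENGTH))   # 94 * 93 * ... * 79 orderings

print(round(free_bits, 1), round(no_repeat_bits, 1))               # ~104.9 vs ~102.9 bits
# The repeat ban alone trims only about 2 bits; the serious loss comes from
# the narrow character pool, quantified in the next section.
```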
Entropy math shows the strings are weaker than they look
Password strength is measured in bits of entropy: n bits means an attacker needs up to 2^n guesses. A secure 16-character password drawn uniformly from a typical symbol pool sits near 98 bits. The LLM-generated strings Irregular tested averaged about 27 bits, a search space roughly 2^71 times smaller and one that modern GPUs can chew through quickly. When you choose a password, you’re betting on entropy; these models are stacking the deck against you.
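For a concrete feel of those numbers, here is a minimal sketch of the arithmetic. The pool sizes of 72 and 94 symbols are my own illustrative assumptions; the 27-bit average is the figure from Irregular’s testing quoted above.

```python
import math

# Entropy of L independent characters drawn uniformly from an N-symbol pool: L * log2(N) bits.
def entropy_bits(length: int, pool_size: int) -> float:
    return length * math.log2(pool_size)

print(entropy_bits(16, 72))   # ~98.7 bits: roughly the secure baseline cited above
print(entropy_bits(16, 94))   # ~104.9 bits: all printable ASCII

# Irregular's ~27-bit average over 16 characters implies only about
# 2**(27/16) ≈ 3 plausible choices per position, and a total search
# space of 2**27 ≈ 134 million strings.
print(2 ** (27 / 16), 2 ** 27)
```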
Are AI-generated passwords secure?
The short answer is no. Irregular’s experiments with Claude, ChatGPT, and Gemini show the models produce strings that pass simple strength checkers, like the estimator built into KeePass, but collapse under targeted guessing because the character choices are narrow and repeatable.
I found those same patterns in public code and docs
It’s one thing to test models in a lab. It’s another to find their fingerprints live on GitHub and in technical documents. When developers or automated agents copy LLM output into apps, they’re spreading predictable passwords into systems that attackers can scan. That means vulnerable targets exist right now.
Should I let ChatGPT generate my passwords?
No, you should not rely on an LLM to create credentials for sensitive accounts. Some platforms even warn against using their generated strings for critical logins. And if an automated agent uses an LLM to seed passwords, every credential downstream inherits that predictability.
Why prompting and temperature tweaks don’t fix this
I tested the usual tricks—forcing randomness, raising temperature, demanding unique symbols. The models still converged on plausible, repeatable choices. LLMs are optimized for coherent, likely outputs, not for pure randomness. Their internal biases act as an echo chamber, amplifying the same small set of choices.
Practical steps I recommend you take
If you manage your own accounts, stop pasting LLM-generated passwords into critical services. Use a vetted password manager or a cryptographically secure generator built for entropy. If you’re an engineer, purge LLM-created secrets from repos and audit any agent workflows that delegate credential creation to LLMs.
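If you want to see what a generator built for entropy looks like in practice, here is a minimal sketch using Python’s secrets module; the 16-character length and the full printable pool are illustrative choices, and your password manager’s generator does the same job with less fuss.

```python
import secrets
import string

# Every position is an independent, uniform draw from the full pool,
# backed by the operating system's cryptographically secure RNG.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

if __name__ == "__main__":
    print(generate_password())  # a 16-character password at roughly 105 bits of entropy
```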
Irregular’s reporting pulled back a small curtain: tools that feel clever can make us lazy, and cleverness without true randomness is a liability. I’m asking you to treat AI-generated passwords with skepticism—will you keep trusting strings that are designed to be plausible, not private?