Security researchers find LLM-generated passwords follow predictable patterns that make them vulnerable to brute-force attacks, despite appearing strong to standard checkers.
Security researchers have discovered that passwords generated by large language models (LLMs) like Claude, ChatGPT, and Gemini are "fundamentally weak" despite appearing strong to standard password strength checkers.
The Illusion of Strength
When prompted to create 16-character passwords containing special characters, numbers, and mixed-case letters, all three major AI chatbots produced strings that passed online password strength tests with flying colors. These checkers reported that the passwords would take centuries to crack on standard PCs.
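To see why checkers are fooled, note that most of them score a string only by its length and apparent character classes, assuming every character was drawn uniformly at random. A minimal sketch of that logic (the guess rate is an illustrative assumption, not a figure from the research):

```python
import math
import string

def checker_estimate(password: str, guesses_per_sec: float = 1e10):
    """Score a password the way typical strength checkers do: infer
    the character set, then assume uniform randomness over it."""
    charset = 0
    if any(c in string.ascii_lowercase for c in password):
        charset += 26
    if any(c in string.ascii_uppercase for c in password):
        charset += 26
    if any(c in string.digits for c in password):
        charset += 10
    if any(c in string.punctuation for c in password):
        charset += len(string.punctuation)  # 32 symbols
    bits = len(password) * math.log2(charset)
    years = 2 ** bits / guesses_per_sec / (3600 * 24 * 365.25)
    return bits, years

bits, years = checker_estimate("K9#mPx$2nQ@7vR!z")  # 16 chars, all classes
print(f"~{bits:.0f} bits, ~{years:.1e} years to exhaust")
```

The checker never asks how the string was produced, so a low-entropy LLM output with the right character classes scores exactly like a genuinely random one.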
However, the reality is far more concerning. Irregular, an AI security company, found that these seemingly complex passwords follow predictable patterns that make them vulnerable to brute-force attacks within hours, even on decades-old computers.
Pattern Recognition Reveals the Weakness
The researchers prompted Claude's Opus 4.6 model 50 times to generate a password. Only 30 of the 50 results were unique; of the 20 duplicates, 18 were the exact same string. The vast majority of the passwords started and ended with the same characters, and none contained a repeated character, a strong indication they weren't truly random.
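That kind of tabulation is easy to reproduce once you have a batch of outputs. A minimal sketch of the analysis, with a small hypothetical sample standing in for 50 real model responses:

```python
from collections import Counter

# Hypothetical stand-ins for real model outputs.
passwords = [
    "K7#mPx9$Qz2&vN4!", "K7#mPx9$Qz2&vN4!", "K7#mRw3$Qz8&vN4!",
    "K7#mPx9$Qz2&vN4!", "J5@nQy7%Rx3*wM6?",
]

counts = Counter(passwords)
dupes = len(passwords) - len(counts)
print(f"{len(counts)} unique of {len(passwords)} ({dupes} duplicates)")

# Positional bias: how dominant the most common character is at the
# first and last positions. Values near 100% mean the position is
# effectively fixed.
for pos in (0, len(passwords[0]) - 1):
    char, n = Counter(p[pos] for p in passwords).most_common(1)[0]
    print(f"position {pos}: {char!r} appears in {n / len(passwords):.0%}")

# A uniform draw of 16 characters from ~70 symbols repeats at least one
# character about 80% of the time (birthday bound), so a complete
# absence of repeats is itself a red flag.
no_repeat = sum(len(set(p)) == len(p) for p in passwords)
print(f"{no_repeat} of {len(passwords)} have no repeated character")
```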
Similar patterns emerged when testing OpenAI's GPT-5.2 and Google's Gemini 3 Flash. The consistency was so pronounced that even when Google's Nano Banana Pro image generation model was prompted to write passwords on Post-It notes, the same Gemini password patterns appeared in the generated images.
Entropy Calculations Show the Vulnerability
Using the Shannon entropy formula and analyzing character probabilities based on the observed patterns, Irregular calculated that 16-character LLM-generated passwords have entropies of approximately 27 bits (using character statistics) and 20 bits (using log probabilities).
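In Shannon's terms, the entropy of a character distribution is H = -Σ p(c) · log2 p(c), and multiplying the per-character figure by the password length gives a total, under the simplifying assumption that positions are independent. A minimal sketch of the character-statistics estimate (the sample list is hypothetical):

```python
import math
from collections import Counter

def entropy_bits(samples: list[str]) -> float:
    """Total bits per password, estimated from pooled character
    frequencies across many generated samples. Treating positions as
    independent overstates the true entropy."""
    pooled = Counter("".join(samples))
    total = sum(pooled.values())
    per_char = -sum(n / total * math.log2(n / total) for n in pooled.values())
    return per_char * len(samples[0])

samples = ["K7#mPx9$Qz2&vN4!", "K7#mRw3$Qz8&vN4!"]  # hypothetical outputs
print(f"estimated: {entropy_bits(samples):.0f} bits")
print(f"uniform baseline: {16 * math.log2(70):.0f} bits")  # ≈ 98 bits
```

The lower 20-bit figure comes from the models' own log probabilities, which also capture the sequential correlations a pooled character count misses.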
For context, a truly random 16-character password would have an entropy of 98 bits using character statistics or 120 bits using log probabilities. This massive difference means LLM-generated passwords could be brute-forced in hours rather than centuries.
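The arithmetic behind that claim is simple: each bit of entropy doubles the search space. A rough worst-case conversion, assuming (purely for illustration) a slow password hash evaluated at 10,000 guesses per second on aging hardware:

```python
GUESSES_PER_SEC = 1e4  # assumed: a slow password hash on old hardware

def exhaust_time(bits: float) -> str:
    """Time to try every candidate in a 2**bits search space."""
    seconds = 2 ** bits / GUESSES_PER_SEC
    hours = seconds / 3600
    years = hours / (24 * 365.25)
    return f"{hours:.3g} hours ({years:.3g} years)"

for label, bits in [("LLM, log-prob estimate", 20),
                    ("LLM, char-stats estimate", 27),
                    ("random, char-stats baseline", 98)]:
    print(f"{label:>28}: {exhaust_time(bits)}")
```

At 27 bits the entire space falls in an afternoon; at 98 bits it outlasts any realistic attacker, which is exactly the gap the strength checkers failed to see.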
Real-World Implications
The research revealed that these predictable patterns are already appearing in open source projects. By searching for common character sequences across GitHub and the web, researchers found test code, setup instructions, and technical documentation containing LLM-generated passwords.
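Developers can run a similar sweep over their own repositories by grepping for strings that match the shape of the observed outputs. A rough sketch (the signature below is invented for illustration, not one of the sequences from the study):

```python
import re
from pathlib import Path

# Invented prefix/suffix signature: a quoted 16-character string that
# opens with "K7#" and closes with "4!".
SIGNATURE = re.compile(r"""["']K7#[A-Za-z0-9!$%&@*?]{11}4!["']""")

for path in Path(".").rglob("*"):
    if not path.is_file() or path.suffix not in {".py", ".md", ".txt"}:
        continue
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        continue
    for hit in SIGNATURE.finditer(text):
        print(f"{path}: possible LLM-generated secret {hit.group()}")
```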
This discovery could usher in a new era of password brute-forcing, where attackers leverage knowledge of LLM patterns to dramatically reduce the time required to crack passwords.
Industry Response and Recommendations
Google's Gemini 3 Pro demonstrated awareness of this issue by including security warnings with its password suggestions. The model explicitly stated that passwords requested in a chat interface should not be used for sensitive accounts and recommended using third-party password managers like 1Password or Bitwarden instead.
Irregular's findings align with broader concerns about AI-generated code security. Dario Amodei, CEO of Anthropic, previously predicted that AI will likely write the majority of all code in the future. If that's true, the passwords generated by these systems won't be as secure as developers might expect.
The Fundamental Problem
"People and coding agents should not rely on LLMs to generate passwords," Irregular stated. "Passwords generated through direct LLM output are fundamentally weak, and this is unfixable by prompting or temperature adjustments: LLMs are optimized to produce predictable, plausible outputs, which is incompatible with secure password generation."
The researchers emphasized that developers should review any passwords generated using LLMs and rotate them accordingly. They also warned that the gap between capability and behavior likely won't be unique to passwords, suggesting that other areas of AI-assisted development may face similar security challenges.
The findings highlight a critical security consideration as AI-assisted development and "vibe coding" continue to gain popularity. While LLMs excel at many tasks, password generation appears to be an area where their fundamental design—optimized for producing predictable, human-like outputs—directly conflicts with security requirements.

What this means for users:
- Never use AI-generated passwords for sensitive accounts
- Use dedicated password managers instead, or mint passwords locally with a CSPRNG (see the sketch after this list)
- Be aware that AI-generated content may have hidden security vulnerabilities
- Review and rotate any passwords created with AI assistance
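For code that genuinely needs to mint a password, the standard library already does this correctly. A minimal sketch using Python's secrets module instead of an LLM:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Each character is an independent draw from a CSPRNG, which is
    exactly what LLM sampling does not provide."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Note: this does not force at least one character from every class;
# loop and re-draw if a site's policy requires that.
print(generate_password())
```

Sixteen independent draws from 94 printable symbols deliver the full ~105 bits that a strength checker assumes.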
The research serves as a reminder that while AI tools are powerful, they're not infallible, and understanding their limitations is crucial for maintaining security in an increasingly AI-driven world.
