Flare's analysis reveals a growing black market where threat actors buy and sell stolen AI platform credentials, enabling everything from fraud to sophisticated cyber espionage campaigns.
The underground economy has found a lucrative new commodity: access to premium AI platforms. A recent analysis by Flare researchers reveals that paid accounts for services like ChatGPT, Claude, Microsoft Copilot, and Perplexity are being actively bought and sold in cybercrime forums and Telegram groups, posing significant risks to organizations and individuals alike.

The Growing Value of AI Platform Access
As AI tools have become deeply integrated into business workflows and personal productivity, their value in underground markets has skyrocketed. These platforms now power everything from content creation and software development to research and business operations, often handling sensitive internal documents, proprietary code, and confidential information.
"Access to advanced AI models can significantly reduce effort, improve output quality, and accelerate tasks that previously required expertise or time," the researchers note. This same efficiency that makes AI tools valuable to legitimate users also makes them attractive to threat actors looking to automate and scale their operations.
How Threat Actors Obtain AI Accounts
The methods used to acquire these accounts are varied and increasingly sophisticated:
Credential Compromise: Many listings include aged Gmail or Outlook accounts, suggesting that compromised credentials are being reused to access AI platforms.
Bulk Account Creation: References to virtual phone numbers indicate actors are creating accounts at scale while attempting to bypass verification controls.
Trial and Promotional Abuse: Mentions of gift codes or trial access suggest that onboarding incentives are being exploited by malicious actors.
Shared Subscriptions: Some offerings appear to distribute access across multiple users rather than maintaining single ownership.
API Key Resale: There are indications that backend or programmatic access is also being marketed, potentially allowing for automated abuse at scale.
Why the Underground Market Thrives
Several factors drive demand for these illicit AI accounts:
Cost Advantages: Official subscriptions typically start around $20 per month, and usage-based API pricing can push costs far higher. Underground listings frequently emphasize cheaper access or bundled offerings, creating a meaningful price gap.
Scale and Convenience: Buyers requiring multiple accounts for automation, testing, or evasion purposes find it easier to purchase ready-made access rather than create accounts individually, particularly where verification and payment requirements introduce friction.
Sanctions Bypass: In sanctioned countries such as Russia, Iran, or North Korea, where access to major AI platforms may be blocked and local credit cards cannot be used for payment, underground markets offer ready-to-use accounts that skip onboarding entirely and provide immediate access.
Model Restrictions: Some posts promote "fewer restrictions," appealing to users looking to bypass safeguards or usage limits. While these claims often read like exaggerated advertising, they reflect a common reality in underground markets where accounts or API keys are resold with the promise of reduced controls.
The Dark Side of Accessible AI
The implications extend far beyond simple account misuse. Generative AI tools are increasingly being weaponized for sophisticated cyber attacks:
Phishing and Social Engineering: AI-generated text can produce highly convincing phishing messages and scam scripts at scale. Europol's 2025 threat assessment warns that criminal groups are using generative AI to automate phishing and fraud operations, enabling attackers to produce convincing content with greater speed and sophistication.
Personalized Attacks: Palo Alto Networks' Unit 42 reported that attackers are leveraging AI to craft highly personalized social engineering campaigns, allowing malicious messages to be tailored more precisely to individual targets and contexts.
Code Generation and Automation: Even individuals without strong technical backgrounds can leverage these tools to perform complex tasks, lowering the barrier to entry for sophisticated attacks.
Synthetic Content Creation: Some platforms include image, audio, or video generation capabilities that may be used to create synthetic content for impersonation or deception.
What's Being Sold
The underground market offers a wide range of AI-related products:
- ChatGPT Plus and Pro subscriptions
- Claude Pro access
- Microsoft Copilot bundled with Office 365 accounts
- Perplexity AI Pro and API-related offerings
- Bundled packages combining multiple services
- Claims of "premium access," "no limits," or "full API access"
These offerings often appear alongside other digital commodities like email accounts, developer tools, and verification infrastructure, suggesting an integrated ecosystem of account trading.
Protecting Your Organization
Organizations can take several steps to mitigate these risks:
Enable Multi-Factor Authentication: MFA on all AI accounts adds a critical layer of security.
Avoid Sensitive Data Sharing: Use approved enterprise environments rather than personal accounts for handling confidential information.
Monitor Usage Patterns: Watch for unusual login behavior and usage anomalies that might indicate compromised accounts.
Use Enterprise-Grade Accounts: These typically offer better controls and security features than consumer plans.
Secure API Keys: Rotate and properly secure API keys to prevent unauthorized access.
Monitor Underground Markets: Proactively identify exposed accounts, keys, and secrets before they can be exploited.
Educate Employees: Raise awareness about the risks of shared or purchased accounts and the importance of proper AI tool usage.
Implement Governance Policies: Establish clear policies for AI tool usage within your organization.
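Two of the steps above, securing API keys and monitoring usage patterns, lend themselves to a brief illustration. The Python sketch below is a hypothetical example, not tied to any specific AI platform's API: it loads a key from an environment variable rather than hardcoding it, and flags repeated sign-ins from IP addresses outside a trusted baseline (the environment variable name, event format, and threshold are all assumptions for illustration).

```python
import os
from collections import Counter

def load_api_key(var_name: str = "AI_PLATFORM_API_KEY") -> str:
    """Read the key from the environment so it never lands in source control."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set; refusing to fall back to a hardcoded key")
    return key

def flag_unusual_logins(events, baseline_ips, threshold=3):
    """Return non-baseline IPs that appear at least `threshold` times.

    `events` is an iterable of dicts like {"user": ..., "ip": ...},
    e.g. parsed from an identity provider's sign-in logs.
    """
    counts = Counter(e["ip"] for e in events if e["ip"] not in baseline_ips)
    return {ip: n for ip, n in counts.items() if n >= threshold}

# Example: three sign-ins from an unfamiliar address trip the alert.
events = [
    {"user": "alice", "ip": "10.0.0.5"},
    {"user": "alice", "ip": "203.0.113.9"},
    {"user": "alice", "ip": "203.0.113.9"},
    {"user": "alice", "ip": "203.0.113.9"},
]
print(flag_unusual_logins(events, baseline_ips={"10.0.0.5"}))
# -> {'203.0.113.9': 3}
```

In practice the event stream would come from your identity provider or the AI platform's audit logs where available, and flagged accounts would feed into the same credential-rotation workflow used for other SaaS services.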
As AI services continue to evolve and gain adoption, their value within underground markets will likely increase. Addressing this emerging threat requires a combination of technical controls, user education, and proactive monitoring to protect both organizational assets and the broader digital ecosystem from exploitation.
