The AI-Powered 's1ngularity' Attack: GitHub’s Supply Chain Nightmare


A sophisticated supply chain attack targeting the popular Nx monorepo tool has compromised 2,180 GitHub accounts and exposed secrets from 7,200 repositories, cybersecurity firm Wiz revealed this week. Dubbed "s1ngularity," the campaign weaponized artificial intelligence to optimize credential theft—marking a dangerous evolution in automated cyber warfare.

Anatomy of an AI-Driven Assault

The attack began on August 26, 2025, when threat actors exploited a flawed GitHub Actions workflow in Nx’s repository (a build system with 5.5M weekly npm downloads). They injected malicious code into the telemetry.js script of compromised Nx packages, which ran as a post-install step on victim machines. Unlike traditional malware, the script invoked AI command-line tools found on the victim’s machine (Claude, Q, Gemini) to scour the system for credentials, steering them with dynamically tuned LLM prompts.
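For context on the delivery mechanism: npm runs lifecycle scripts such as postinstall automatically during package installation, which is how a trojanized package can execute code on a developer’s machine with no further interaction. The manifest below is a generic, hypothetical sketch of that hook, not the actual malicious package.

```json
{
  "name": "hypothetical-compromised-package",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node telemetry.js"
  }
}
```

In the s1ngularity case, the payload lived in the telemetry.js file shipped with the trojanized Nx releases, so installing an affected version was enough to trigger it.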


"The evolution of the prompt shows the attacker exploring prompt tuning rapidly throughout the attack. We see role-prompting and varying specificity—like adding 'penetration testing'—to bypass LLM ethical guardrails," Wiz researchers noted.

Three Waves of Escalation

  1. Phase 1 (Aug 26-27): 1,700 users infected via malicious npm packages, leaking 2,000+ secrets and 20,000 files.
  2. Phase 2 (Aug 28-29): Attackers used stolen GitHub tokens to flip 480 organizations’ private repositories to public, rebranding them with "s1ngularity" and exposing 6,700 repos.
  3. Phase 3 (Aug 31+): Focused targeting of a single organization led to 500 additional repo leaks.

Critical Vulnerabilities Exposed

Nx’s post-mortem traced the breach to pull request title injection combined with insecure pull_request_target usage in GitHub Actions. This allowed arbitrary code to run with elevated privileges, which enabled theft of the project’s npm publishing token. While GitHub shut down the attacker-controlled repositories within 8 hours, the exfiltrated data had already been replicated elsewhere.
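The root cause follows a well-documented GitHub Actions anti-pattern: workflows triggered by pull_request_target run with the base repository’s permissions and secrets, so interpolating attacker-controlled fields such as the PR title directly into a shell step lets that text execute as code. The workflow below is a simplified, hypothetical illustration of the vulnerability class, not Nx’s actual workflow.

```yaml
name: pr-title-check
on:
  pull_request_target:          # runs in the base repo's context, with access to its secrets
    types: [opened, edited]

jobs:
  check-title:
    runs-on: ubuntu-latest
    steps:
      # UNSAFE: the expression is expanded into the script before the shell runs it,
      # so a crafted PR title can inject arbitrary commands that execute with this
      # workflow's elevated privileges.
      - run: echo "Validating PR title: ${{ github.event.pull_request.title }}"
```

The standard mitigation is to pass untrusted fields into the step through an env: entry and reference them as quoted shell variables, or to avoid pull_request_target entirely for workflows that act on pull request content.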

Industry-Wide Implications

This attack highlights several urgent risks:
- AI-Powered Malware: LLMs now actively refine attack logic in real time, increasing the odds of evading detection.
- Token Compromise Fallout: Valid leaked tokens perpetuate breaches long after initial detection.
- CI/CD Weak Spots: Overly permissive workflows remain prime targets for supply chain hijacks.

Nx has since adopted npm’s Trusted Publisher model (eliminating token-based publishing) and added manual PR workflow approvals. Yet as one researcher observed: "When AI tunes its own attack prompts, defensive playbooks must evolve at machine speed."
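As a rough sketch of what token-less publishing looks like, the workflow below uses GitHub Actions OIDC in place of a long-lived NPM_TOKEN secret; it assumes the package has been registered for trusted publishing on npmjs.com and that the runner’s npm CLI supports OIDC, and all names and versions are illustrative.

```yaml
name: release
on:
  push:
    tags: ['v*']

permissions:
  id-token: write    # lets the job request a short-lived OIDC token for npm
  contents: read

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - run: npm ci
      - run: npm publish   # authenticates via OIDC; no NPM_TOKEN stored in the repo
```

Because there is no static publishing token left to steal, a workflow compromise like the one that seeded this attack would no longer be enough on its own to push malicious package versions.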