AI-Assisted Attacks: How Lowered Technical Barriers Are Reshaping the Cybersecurity Landscape
#Regulation

Security Reporter

This article examines how AI has democratized cyber capabilities, enabling non-technical actors to conduct sophisticated attacks. It explores the changing threat landscape, statistics showing increased attack frequency and severity, and practical approaches for organizations to adapt their security strategies.

In December 2025, a 17-year-old in Osaka was arrested under Japan's Unauthorized Access Prohibition Act after running malicious code to extract personal data from over 7 million users of Kaikatsu Club, Japan's largest internet cafe chain. When questioned, the young man revealed his motivation: he wanted to buy Pokémon cards.

This story might seem like a conventional tale of a tech-savvy youth getting in over his head, similar to the Kevin Mitnick stories of the 1990s. But something fundamental has changed: this young man had no real technical skill. His attack was the work not of a skilled hacker but of an AI-assisted novice, highlighting a dramatic shift in the cybersecurity landscape.


The Rise of AI-Assisted Attacks

2025 marked a turning point when LLM-backed chat and agent systems crossed a threshold, evolving from useful but error-prone coding assistants into end-to-end coding powerhouses. Throughout the year, several measures of cybercrime frequency and severity roughly doubled. Malicious packages discovered on public repositories increased by 75%, cloud intrusions rose by 35%, and AI-generated phishing began to outperform human red teams outright.

More concerning than the quantitative increases, however, has been the qualitative shift in who is conducting attacks. In February 2025, three teenagers (ages 14, 15, and 16) with no coding background used ChatGPT to build a tool that hit Rakuten Mobile's system approximately 220,000 times, spending their proceeds on gaming consoles and online gambling.

In July 2025, a single actor using Claude Code, a more sophisticated agentic coding platform, conducted an extortion campaign targeting 17 organizations over one month. The AI developed malicious code, organized stolen files, analyzed financial records to calibrate demands, and drafted extortion emails. In December 2025, another individual used Claude Code and ChatGPT to breach Mexican government systems, targeting more than 10 agencies and stealing over 195 million taxpayer records.

These attacks represent a fundamental shift in threat profiles. While similar attacks were possible before 2025, we are now seeing single-actor attacks that would have been characteristic of organized teams in the pre-AI era, and smaller-scale attacks by nontechnical individuals that would have required the skills of a talented hacker.

Accelerating Exploit Development

The barrier to entry for conducting technically sophisticated attacks has been significantly lowered throughout 2025. This is evident in several metrics:

  • Malicious packages in public repositories grew from 55,000 in 2022 to 454,600 in 2025, according to Sonatype
  • Notable jumps occurred in 2023 (the year GPT-4 was released) and 2025 (a marquee year for agentic coding)
  • Time to exploit—measuring the time from when a vulnerability is publicized until an exploit is discovered in the wild—has plummeted from over 700 days in 2020 to only 44 days in 2025

Mandiant's M-Trends 2026 report found that time-to-exploit has effectively gone negative—exploits are now routinely arriving before patches, with 28.3% of CVEs exploited within 24 hours of disclosure.
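The time-to-exploit metric is simple enough to compute directly. The sketch below uses made-up CVE dates (not real records) to show how the calculation works; a negative value means the exploit was seen in the wild before the disclosure or patch, matching the "effectively negative" pattern described above.

```python
from datetime import date
from statistics import median

# Hypothetical CVE records for illustration only:
# CVE ID -> (public disclosure/patch date, first in-the-wild exploit date).
cves = {
    "CVE-A": (date(2025, 3, 10), date(2025, 3, 9)),   # exploited pre-patch
    "CVE-B": (date(2025, 5, 1), date(2025, 5, 1)),    # same-day exploitation
    "CVE-C": (date(2025, 7, 20), date(2025, 9, 2)),   # exploited 44 days later
}

def time_to_exploit_days(disclosed: date, exploited: date) -> int:
    """Days from public disclosure to first observed exploitation.

    Negative means the exploit circulated before the patch was available."""
    return (exploited - disclosed).days

ttes = {cve: time_to_exploit_days(d, e) for cve, (d, e) in cves.items()}
print(ttes)                   # {'CVE-A': -1, 'CVE-B': 0, 'CVE-C': 44}
print(median(ttes.values()))  # 0
```

Tracking this number per vendor or per ecosystem is one way a security team can see whether its patch cycle is keeping pace with the exploit window.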

The performance of frontier models like ChatGPT, Claude, and Gemini on technical benchmarks has also improved dramatically. On SWE-bench (a test of software development capability), top models could resolve only 33% of real GitHub issues in August 2024. By December 2025, that number had climbed to just under 81%.

The Detection Challenge

AI is accelerating both defenders and attackers, but based on 2025-2026 data, the arms race is favoring attackers. The average time to remediate a known high- or critical-severity CVE is now 74 days, according to the Edgescan 2025 Vulnerability Statistics Report. Additionally, 45% of vulnerabilities in systems maintained by large companies (1000+ employees) never get remediated.

The Shai-Hulud attack in September 2025 targeting the npm ecosystem compromised over 500 packages. Over 487 organizations had secrets compromised, and $8.5 million was stolen from Trust Wallet after attackers used exposed credentials to poison its Chrome extension.

Detection has become particularly challenging as AI-generated malware becomes increasingly sophisticated. In 2025, malicious npm packages posing as popular libraries like chalk and debug included documentation, unit tests, and code structured to appear as legitimate telemetry modules. Static analysis and signature scanners missed them entirely because the code, likely AI-generated, looked like real software.
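Static analysis can be fooled by realistic-looking code, but tampering with a published artifact can still be caught at install time by pinning cryptographic hashes. The sketch below (the tarball contents are illustrative, not a real package) verifies a downloaded archive against an npm-style `sha512-…` Subresource Integrity string, the same format that lockfiles record:

```python
import base64
import hashlib

def npm_integrity(data: bytes) -> str:
    """Compute an npm-style Subresource Integrity string (sha512 + base64)."""
    digest = hashlib.sha512(data).digest()
    return "sha512-" + base64.b64encode(digest).decode()

def verify_tarball(data: bytes, expected: str) -> bool:
    """True only if the archive matches the integrity value pinned upstream."""
    return npm_integrity(data) == expected

# Illustrative stand-in for a downloaded package archive.
tarball = b"fake package contents"
pinned = npm_integrity(tarball)            # what the lockfile would record
print(verify_tarball(tarball, pinned))     # untampered archive passes
print(verify_tarball(tarball + b"x", pinned))  # any modification fails
```

Hash pinning does not help when the attacker compromises the package at the source, as in the chalk and debug incidents, but it does guarantee that every consumer installs exactly the bytes that were reviewed or pinned, which narrows the attack to the publication step itself.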

Beyond Patching: Structural Solutions

As Chainguard CEO Dan Lorenc has observed, "The complexity and scale of vulnerability management has outgrown the capabilities of most organizations to manage on their own."

The lesson of 2025 is that organizations can't outrun these attacks. The exploit window is shrinking faster than patch cycles can compress, and AI-generated malware is slipping past detection tools that organizations have relied on for decades. The overlap between those willing to conduct attacks and those technically able to conduct them has expanded dramatically.

Instead of focusing solely on speed and trying to outrun attacks, organizations should consider hitting delete on entire categories of vulnerability. This approach, exemplified by Chainguard Libraries, rebuilds every open source library from verified, attributable source code. The goal is to render whole categories of attacks structurally impossible, protecting users from CI/CD takeover, dependency confusion, long-lived token theft, and package distribution attacks.
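Independent of any particular vendor, one way to make this structural stance operational is to enforce a resolution policy over the lockfile: every dependency must resolve to an approved registry and carry a pinned integrity hash, or the build fails. A minimal sketch, with a hypothetical lockfile fragment whose structure is modeled on `package-lock.json` (the trusted-host set and package entries are assumptions for illustration):

```python
from urllib.parse import urlparse

# Assumption: the registries your organization has approved.
TRUSTED_HOSTS = {"registry.npmjs.org"}

def policy_violations(lockfile_packages: dict) -> list[str]:
    """Flag dependencies that could enable confusion or tampering attacks:
    missing integrity hashes, or tarballs resolved from untrusted hosts."""
    problems = []
    for name, meta in lockfile_packages.items():
        if "integrity" not in meta:
            problems.append(f"{name}: no integrity hash pinned")
        host = urlparse(meta.get("resolved", "")).hostname
        if host not in TRUSTED_HOSTS:
            problems.append(f"{name}: resolved from untrusted host {host!r}")
    return problems

# Illustrative lockfile fragment (not real package data).
packages = {
    "chalk": {"resolved": "https://registry.npmjs.org/chalk/-/chalk-5.3.0.tgz",
              "integrity": "sha512-..."},
    "evil-dep": {"resolved": "https://attacker.example/evil-dep.tgz"},
}
for problem in policy_violations(packages):
    print(problem)
```

Run in CI before every install, a check like this turns dependency confusion from a detection problem into a policy violation that blocks the build.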

When tested against 8,783 malicious npm packages, Chainguard Libraries blocked 99.7%. Against approximately 3,000 malicious Python packages, it blocked roughly 98%.

Preparing for an AI-Powered Future

With 454,600 malicious packages in 2025 and 394,877 discovered in a single quarter, the scale of the threat is undeniable. The examples are sobering: an amateur in Algeria built ransomware that hit 85 targets in his first month; a 17-year-old exfiltrated 7 million records to buy Pokémon cards.

The tools that enable these attacks are getting cheaper, faster, and more accessible. As we look toward 2027 with model capabilities expected to increase further, organizations need fundamentally different approaches to security.

Rather than scrambling when the next major supply chain attack hits, organizations can consider rebuilding their approach from the ground up. By focusing on structural solutions that eliminate entire categories of vulnerability, security teams can free themselves from the endless cycle of patching and detection, allowing them to focus on the remaining areas that require human judgment and expertise.

The democratization of cyber capabilities through AI represents both a challenge and an opportunity. Those who adapt their security strategies to this new reality will be better positioned to protect their organizations in an increasingly complex threat landscape.
