Former NSA boss warns AI-powered cyberattacks are already here and getting worse
#Cybersecurity


Rob Joyce says Chinese cyberspies' use of Claude AI to automate attacks was a 'Rorschach test' that revealed how AI agents can find vulnerabilities humans miss, with machines now outpacing defenders.

The now-infamous Anthropic report about Chinese cyberspies abusing Claude AI to automate cyberattacks was a Rorschach test for the infosec community, according to former NSA cyber boss Rob Joyce.

"There were people on one side who hated it," Joyce, who is now a venture partner at DataTribe, said during a Monday talk at RSAC. "They thought it was a meaningless distraction. There was another side who saw it as a significant insight into offensive operations."

Joyce sits firmly in the latter camp. "I saw this as a really important set of insights – and something really scary."

The Beijing-backed snoops considered a typical attack chain, broke it into small steps, then built a framework using agentic AI to carry out an intrusion attempt. The agents mapped attack surfaces, scanned target organizations' infrastructure, found vulnerabilities, and even researched and wrote exploitation code. Once they were inside networks, China's bots found and abused valid credentials, escalated privileges, and moved laterally. In some cases, the agents even found and stole sensitive data.
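
The pattern Joyce describes is essentially a pipeline: decompose the chain into discrete stages and hand each to an agent that feeds its output forward to the next. The sketch below illustrates only that orchestration shape – the stage names, handlers, and data are hypothetical stand-ins (nothing here comes from the Anthropic report), and the "agents" are mocked as plain functions rather than LLM calls.

```python
# Hypothetical sketch of a staged, agent-driven pipeline. Each stage's
# handler receives the accumulated context from earlier stages, mimicking
# how a later agent (e.g. scanning) consumes an earlier agent's output
# (e.g. reconnaissance). All names and data are illustrative.
from dataclasses import dataclass, field


@dataclass
class StageResult:
    stage: str
    findings: list = field(default_factory=list)


def run_chain(stages, handlers, context):
    """Run each stage in order, feeding accumulated context forward."""
    results = []
    for stage in stages:
        result = handlers[stage](context)
        context[stage] = result.findings  # later stages see earlier output
        results.append(result)
    return results


# Mock handlers standing in for LLM-driven agents.
handlers = {
    "recon": lambda ctx: StageResult("recon", ["host-a", "host-b"]),
    "scan": lambda ctx: StageResult(
        "scan", [f"{h}:open-port" for h in ctx.get("recon", [])]
    ),
    "exploit": lambda ctx: StageResult("exploit", []),
}

results = run_chain(["recon", "scan", "exploit"], handlers, {})
```

The point of the decomposition is that each step is small enough for a model to handle reliably, while the framework – not a human – carries state between steps.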

"But the number one thing to me is: it worked. It freakin' worked," Joyce said. "It brought a set of tools, it went against real-world targets, and it won."

He fears that continuing improvements to LLMs – plus the fact that they're now effectively modular, so crooks can quickly swap better models into their AI tools – mean automated attacks will improve "exponentially."

Last year, in an interview with The Register, Joyce said AI will "soon" be a great exploit coder. On Monday, he told an audience of security experts and coders it's already happened.

The upside? Agentic AI systems' ability to find zero-day vulnerabilities and develop exploits at machine speed can be a boon to defenders, too. Projects like Google's Big Sleep, an AI agent that helps security researchers find zero-day flaws, have spotted several – including a previously unknown exploitable memory-safety flaw in the widely used SQLite database engine.

OpenAI's Aardvark similarly uses agentic AI to detect and patch vulnerabilities in code, as does Anthropic's Claude Code Security.

"So across these three frontier models, all doing vulnerability research, they've shown that they can find vulnerabilities in major code," Joyce said. "In the long term, we get much better code. Google Chrome is going to benefit from the Google Big Sleep team, and it is going to be much harder to exploit the most popular web browser on the planet. But in the near term, the ability to find software vulnerabilities across massive code bases – and for those vulnerabilities to become exploits – that's a real risk."

Joyce quoted security researcher Sean Heelan, who analyzed OpenAI's Aardvark project and wrote: "The more tokens you spend, the more bugs you find, and the better quality those bugs are. You can also see it in my experiments. As the challenges got harder I was able to spend more and more tokens to keep finding solutions. Eventually the limiting factor was my budget, not the models. I would be more surprised if this isn't industrialized by LLMs, than if it is."

What this means right now, according to Joyce, is that information asymmetry favors machine attackers. "This is not a story about AI being smarter than the humans. It's about scale and patience, its [AI's] ability to look at all of the techniques and components of that and develop the vulnerabilities. Machines don't get tired of reading code. They can review and review and review until they find that vulnerability."

So what does this mean for defenders? Joyce thinks they need to become "exceptional" at security basics. That means using AI tools to review code and detect anomalies in patterns and behaviors, which can indicate that attackers are abusing a legitimate tool – or user – for malicious purposes. Also, he recommends, start doing agentic red teaming against your organization to proactively find flaws and misconfigurations.
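
The anomaly detection Joyce recommends can start as simply as baselining each account's behavior and flagging large deviations from it. A minimal sketch, assuming per-day event counts and an arbitrary z-score threshold – all numbers here are illustrative, not drawn from any product:

```python
# Flag accounts whose activity deviates sharply from their own baseline,
# the kind of signal that can reveal a legitimate credential being abused.
# The threshold and event counts are illustrative assumptions.
from statistics import mean, stdev


def anomalous(history, today, z_threshold=3.0):
    """Return True if today's event count is a z-score outlier vs. history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold


# A service account that normally touches ~10 files a day suddenly reads 400.
baseline = [9, 11, 10, 12, 8, 10, 11]
print(anomalous(baseline, 400))  # exfiltration-like spike: flagged
print(anomalous(baseline, 12))   # ordinary day: not flagged
```

Real deployments layer far richer features on top (time of day, peer-group comparison, sequence models), but the principle is the same: machines baseline behavior so a stolen-but-valid credential still stands out.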

"You are going to be red-teamed whether you pay for it or not," Joyce said. "The only difference is, you know who gets the results delivered to them."

This assessment comes as the cybersecurity community grapples with the implications of AI-powered attacks. The Anthropic report that Joyce referenced detailed how Chinese cyberspies used Claude to automate various stages of the attack chain, from initial reconnaissance to data exfiltration.

The implications are stark: traditional security measures that rely on human analysis and response times are increasingly inadequate against AI systems that can process vast amounts of code, identify vulnerabilities, and develop exploits at machine speed. The asymmetry Joyce describes means attackers can leverage AI to find and exploit vulnerabilities faster than defenders can patch them.

For organizations, this means a fundamental shift in how security is approached: the old model of periodic penetration testing and reactive patching is giving way to continuous, AI-powered security assessment. Companies can't simply watch attackers adopt AI – they need to use it defensively, applying machine learning to identify anomalous behavior patterns that might indicate a breach.

The race between AI-powered attackers and defenders is accelerating. As Joyce noted, the same technologies that enable automated attacks can also be used to find and fix vulnerabilities before they're exploited. The question is whether defenders can adopt these tools quickly enough to keep pace with increasingly sophisticated AI-powered threats.

For the average user, this arms race may not be immediately visible, but it has significant implications for digital security. As AI becomes better at finding and exploiting vulnerabilities, the importance of keeping software updated and using strong security practices becomes even more critical. The days when a single vulnerability could compromise millions of devices are not over – they may be just beginning in a new, AI-powered form.
