Microsoft warns that AI agents are enabling cybercriminals and nation-state hackers to automate reconnaissance and infrastructure management, lowering barriers for less sophisticated attackers.
AI agents are increasingly being used by cybercriminals and nation-state hackers to automate the "janitorial-type work" involved in planning and executing cyberattacks, according to Sherrod DeGrippo, Microsoft's General Manager of Global Threat Intelligence. This development represents a significant shift in how attackers operate, allowing them to outsource time-consuming tasks and focus on more strategic aspects of their campaigns.
During an interview with The Register, DeGrippo explained that agentic AI is being used for automated reconnaissance against compromised systems. "Go find out about XYZ, and come back to me with everything you've seen. Go scan the net blocks owned by this particular entity," she described as typical commands given to these AI agents. While attackers could perform these tasks manually, AI agents dramatically reduce the time and effort required.
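The net-block reconnaissance DeGrippo describes has always been scriptable by hand; what agents change is the effort required. As a rough illustration of the manual task being automated (not anything from Microsoft's reporting — the CIDR range and port here are placeholders), a minimal Python sketch might look like:

```python
import ipaddress
import socket

def enumerate_hosts(cidr: str) -> list[str]:
    """Expand a CIDR net block into its usable host addresses."""
    return [str(host) for host in ipaddress.ip_network(cidr).hosts()]

def probe(host: str, port: int = 443, timeout: float = 1.0) -> bool:
    """TCP connect check: report whether the port accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# RFC 5737 documentation range stands in for a target's net block.
for host in enumerate_hosts("192.0.2.0/29"):
    print(host)  # a real sweep would record probe(host) results per port
```

An AI agent handed a prompt like "scan the net blocks owned by this entity" is, in effect, generating and running this kind of boilerplate on the attacker's behalf.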
This automation represents "a great example of AI that can be used for regular, standard business purposes and can also be used by threat actors for malicious purposes," DeGrippo noted. The technology essentially provides attackers with a powerful assistant that can handle routine but essential tasks, freeing them up for more complex operations.
Microsoft's threat intelligence team has observed North Korean hacking group Coral Sleet using development platforms to rapidly create and manage attack infrastructure at scale. The group, known for fake IT worker scams, is leveraging AI to accelerate campaign staging, testing, and command-and-control operations. Particularly concerning is the ability to direct malicious infrastructure using natural language, which lets attackers carry out complex operations simply by describing what they want.
Infrastructure management represents another key area where AI agents prove invaluable to attackers. Whether compromising existing legitimate infrastructure or setting up new attack platforms, criminals can now use AI to streamline these processes. This includes standing up command-and-control servers, managing compromised accounts, and maintaining the technical foundation needed for sustained campaigns.
DeGrippo emphasized that these capabilities lower barriers for less technically sophisticated criminals. "Threat actors will do what works, and they will do what gets them their objective easiest and fastest," she stated. "And so handing threat actors these really powerful tools is going to allow them to do more of that."
The adoption of AI agents by attackers follows a predictable pattern where criminals embrace any technology that makes their operations more efficient. Microsoft's observations align with broader industry concerns about AI's dual-use nature in cybersecurity. While defenders use AI to improve threat detection and response, attackers use it to enhance their capabilities.
However, DeGrippo noted that AI agents still have limitations when it comes to malware development. Microsoft's threat intelligence team has documented attackers using agentic AI to generate malware, but these AI-generated samples typically carry hallmarks that human analysts can identify. The more sophisticated use case is malware that invokes AI functions and libraries at runtime, producing more dynamic and adaptive threats.
"Anybody who has a software development background, regardless of if they're developing benign software or malicious software, is thinking about how to better enhance their workflows with AI," DeGrippo explained. This universal adoption of AI tools means that the same productivity gains seen in legitimate software development are now available to attackers.
The implications for cybersecurity are significant. As AI agents become more capable and accessible, defenders must adapt their strategies to account for attackers who can operate more efficiently and at greater scale. This includes developing better detection methods for AI-generated code and infrastructure, as well as understanding how these tools change the economics of cybercrime.
Microsoft's observations suggest that the cybersecurity landscape is entering a new phase where automation and AI play central roles on both sides of the threat equation. Organizations need to prepare for a future where attacks may be more frequent, better coordinated, and harder to attribute to specific actors. The "janitorial work" of cyberattacks is being automated, potentially allowing more sophisticated attackers to focus on high-value targets while AI agents handle the groundwork.
As DeGrippo's comments indicate, this is not a temporary trend but a fundamental shift in how cyberattacks are planned and executed. The cybersecurity community must evolve its defensive strategies accordingly, recognizing that the tools making legitimate businesses more productive are simultaneously empowering their adversaries.
