Google Reports State Hackers Using Gemini AI in 'All Stages' of Cyber Attacks
#Cybersecurity

Google's latest Threat Intelligence Group report reveals that state-sponsored hackers from countries including China, North Korea, Iran, and Russia are leveraging Gemini AI models throughout the entire attack lifecycle, from reconnaissance to data exfiltration, marking a significant evolution in cyber warfare tactics.

Google's Gemini AI models have become a core component of state-sponsored hacking operations, with the technology now used at every stage of an attack, from initial reconnaissance to final data exfiltration, according to Google's latest Threat Intelligence Group report.

(Image credit: Nur Photo via Getty Images)

AI Integration Across the Attack Lifecycle

The report marks a significant shift in how nation-state actors incorporate artificial intelligence into their cyber operations. While AI-powered hacking tools have been emerging since 2023, Google's findings indicate that attackers' use of Gemini has matured to the point where the model is employed across the entire attack process.

"Although AI use has been growing in white and black hat hacking in recent years, Google now says it's used in all parts of the attack process, from target acquisition to coding, social engineering message generation, and follow-up actions after the hack," the report states.

Country-Specific Attack Patterns

China's Technical Approach

Chinese threat actors have been particularly sophisticated in their use of Gemini, prompting the model to adopt the persona of a cybersecurity expert. The report details how these groups have used Gemini for vulnerability analysis and for planning penetration tests against specific targets.

"The PRC-based threat actor fabricated a scenario, in one case trialing Hexstrike MCP tooling, and directing the model to analyze Remote Code Execution (RCE), WAF bypass techniques, and SQL injection test results against specific US-based targets," Google's report explains.

North Korea's Social Engineering Focus

North Korean hackers have primarily leveraged Gemini for phishing operations, using the AI to profile high-value targets, particularly employees of security and defense companies, and to identify vulnerable individuals within their professional networks.

Iran's Research and Persona Generation

Iranian government-backed hackers have utilized Gemini for research, searching for official email addresses of specific targets and investigating business partners of potential victims. They have also fed the AI biographical information to generate convincing personas capable of plausibly engaging with targets.

Misinformation and Propaganda Operations

(Image credit: Google)

One of the most concerning findings involves the use of Gemini for generating targeted misinformation and propaganda. The report indicates that threat actors from China, Iran, Russia, and Saudi Arabia are producing political satire, propaganda articles, and memes designed to influence Western audiences.

"Threat actors from China, Iran, Russia, and Saudi Arabia are producing political satire and propaganda to advance specific ideas across both digital platforms and physical media, such as printed posters," the report notes. While Google confirmed these assets hadn't been deployed in the wild yet, the company has taken proactive measures by disabling associated accounts and updating Gemini's protections.

The Rise of Custom AI Hacking Tools

Google's report highlights a growing demand for bespoke AI hacking tools among cybercriminals. One notable example is "Xanthorox," an underground toolkit marketed as a custom AI for offensive cyber campaigns. Despite claims of being "privacy preserving," Xanthorox essentially functions as an API wrapper that leverages existing general-purpose AI models such as Gemini.

"This setup leverages a key abuse vector: the integration of multiple open-source AI products—specifically Crush, Hexstrike AI, LibreChat-AI, and Open WebUI—opportunistically leveraged via Model Context Protocol (MCP) servers to build an agentic AI service upon commercial models," Google explains.

This trend has created a black market for API keys, as these tools require numerous API calls to various AI models. Organizations with large API token allocations have become attractive targets for account hijacking, emphasizing the need for enhanced security measures.
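
Given that warning, one cheap first-line control is to watch each key's consumption against its own history. The sketch below is a hypothetical Python example, assuming you already export per-key call counts from billing or gateway logs; the record shape and the 5x spike threshold are illustrative assumptions, not recommendations from the report.

```python
# Minimal sketch: flag API keys whose recent usage spikes far above their own
# historical baseline -- one cheap signal that a key may have been hijacked.
# The record format and the 5x threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class KeyUsage:
    key_id: str
    baseline_daily_calls: float  # trailing average from historical logs
    calls_last_24h: int

def flag_suspicious(usage: list[KeyUsage], spike_factor: float = 5.0) -> list[str]:
    """Return key IDs whose last-24h call volume exceeds spike_factor x baseline."""
    return [
        u.key_id
        for u in usage
        if u.baseline_daily_calls > 0
        and u.calls_last_24h > spike_factor * u.baseline_daily_calls
    ]

if __name__ == "__main__":
    sample = [
        KeyUsage("team-research", baseline_daily_calls=1200, calls_last_24h=1350),
        KeyUsage("ci-pipeline", baseline_daily_calls=300, calls_last_24h=9800),
    ]
    print(flag_suspicious(sample))  # ['ci-pipeline']
```

Rate caps, regular key rotation, and scoping each key to the narrowest model and quota it needs complement this kind of monitoring.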

Current Limitations and Future Threats

While the report documents attempts to use Gemini for augmenting existing malware and generating new malicious software, Google notes that no significant advances have been observed yet. However, the company acknowledges this area is actively being explored and likely to evolve.

One proof-of-concept framework mentioned is "HonestCue," which uses Gemini to generate code for second-stage malware: the initial implant infects a machine, then contacts Gemini to generate new code for subsequent stages of the attack. The report also notes a ClickFix campaign that used social engineering within a chatbot to encourage users to download malicious files.

Defensive Measures and Ongoing Challenges

As Google tracks these evolving threats, the company continues to disable accounts, block access to malicious assets, and update the Gemini model to resist manipulation attempts. However, the report acknowledges that defending against AI-powered attacks represents a "cat-and-mouse game" similar to traditional anti-malware efforts.

"Like traditional anti-malware defences, anti-AI attacks look set to be a cat-and-mouse game that is unlikely to end any time soon," the report concludes, highlighting the ongoing challenge of securing AI systems against sophisticated threat actors.

(Image credit: Google)

The findings underscore a critical evolution in cyber warfare, where AI tools like Gemini are no longer just potential threats but are actively being weaponized by state-sponsored actors across the entire attack lifecycle. As these capabilities continue to advance, the cybersecurity community faces an escalating challenge in developing effective countermeasures against AI-powered attacks.
