Google Reports State-Backed Hackers Using Gemini AI for Recon and Attack Support
#Cybersecurity

Security Reporter
4 min read

Google's Threat Intelligence Group has observed North Korean and Chinese state-sponsored hacking groups weaponizing its Gemini AI model to accelerate reconnaissance, target profiling, and malware development, marking a significant evolution in cyber espionage tactics.

According to a new report from the group, known as GTIG, adversaries are folding the model into nearly every phase of their operations, from victim profiling and phishing-persona creation to exploit research and malware development.

North Korean Group UNC2970 Leads the Charge

The North Korea-linked threat actor UNC2970, which overlaps with the notorious Lazarus Group, has been observed using Gemini to conduct reconnaissance on high-value targets. According to Google, the group utilized the AI model to synthesize open-source intelligence (OSINT) and profile potential victims to support campaign planning.

"The group used Gemini to synthesize OSINT and profile high-value targets to support campaign planning and reconnaissance," GTIG stated in its report. "This actor's target profiling included searching for information on major cybersecurity and defense companies and mapping specific technical job roles and salary information."

The tech giant characterized this activity as a blurring of boundaries between routine professional research and malicious reconnaissance, enabling the state-backed actor to craft tailored phishing personas and identify soft targets for initial compromise.

UNC2970 is best known for Operation Dream Job, a long-running campaign targeting the aerospace, defense, and energy sectors. The group has consistently focused on defense-sector targets and on impersonating corporate recruiters, and it is now using Gemini to sharpen those deceptive tactics.

A Growing Trend Among State-Sponsored Actors

UNC2970 is far from alone in weaponizing Gemini. Google has identified multiple other state-sponsored and financially motivated groups integrating the AI tool into their workflows:

  • UNC6418 (Unattributed): Conducting targeted intelligence gathering, specifically seeking sensitive account credentials and email addresses.
  • Temp.HEX/Mustang Panda (China): Compiling dossiers on specific individuals, including targets in Pakistan, and gathering operational data on separatist organizations.
  • APT31/Judgement Panda (China): Automating vulnerability analysis and generating targeted testing plans while posing as security researchers.
  • APT41 (China): Extracting explanations from open-source tool documentation and troubleshooting exploit code.
  • UNC795 (China): Troubleshooting code, conducting research, and developing web shells and scanners for PHP web servers.
  • APT42 (Iran): Facilitating reconnaissance and targeted social engineering by crafting personas, developing a Python-based Google Maps scraper, creating a SIM card management system in Rust, and researching proof-of-concept exploits for vulnerabilities like CVE-2025-8088 in WinRAR.

Novel Malware Leveraging Gemini's API

Google has also discovered malware specifically designed to exploit Gemini's capabilities. The most notable is HONESTCUE, a downloader and launcher framework that effectively outsources part of its own code generation to the model at runtime.

"HONESTCUE is a downloader and launcher framework that sends a prompt via Google Gemini's API and receives C# source code as the response," Google explained. "Rather than leveraging an LLM to update itself, HONESTCUE calls the Gemini API to generate code that operates the 'stage two' functionality, which downloads and executes another piece of malware."

HONESTCUE's fileless second stage compiles and executes the generated C# source directly in memory using the legitimate .NET CSharpCodeProvider class, leaving no artifacts on disk and making detection significantly harder.
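
For readers unfamiliar with the pattern, the sketch below is a deliberately benign Python analogue of the flow GTIG describes: request source code from an LLM API, then compile and execute it entirely in memory. The API call is stubbed with a hard-coded snippet, and Python's built-in compile()/exec() stands in for the .NET CSharpCodeProvider; nothing here is taken from the actual malware.

```python
# Benign Python analogue of the generate-compile-execute flow described
# above: fetch source text, compile it, and run it without touching disk.

def fetch_generated_source() -> str:
    """Stand-in for the Gemini API call that returns source code as text.

    In HONESTCUE this would be an HTTPS request carrying a prompt and
    returning C# source; here it is a hard-coded, harmless snippet.
    """
    return 'def stage_two():\n    print("stage two would execute here")\n'


def run_in_memory(source: str) -> None:
    """Compile and execute source text entirely in memory (fileless)."""
    namespace = {}
    code_obj = compile(source, "<generated>", "exec")
    exec(code_obj, namespace)      # defines stage_two inside `namespace`
    namespace["stage_two"]()       # invoke the generated entry point


if __name__ == "__main__":
    run_in_memory(fetch_generated_source())
```

Because the generated source exists only in memory, file-based scanning has nothing to inspect; detection has to key on the outbound API traffic or the runtime compilation behavior instead.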

Additionally, Google identified COINBAIT, an AI-generated phishing kit built using Lovable AI that masquerades as a cryptocurrency exchange for credential harvesting. Some aspects of COINBAIT-related activity have been attributed to the financially motivated threat cluster UNC5356.

ClickFix Campaigns and Model Extraction Attacks

Beyond direct malware development, Google has observed threat actors leveraging generative AI services for ClickFix campaigns. These campaigns abuse the public sharing features of AI services to host realistic-looking instructions for fixing common computer issues, tricking users into running commands that ultimately deliver information-stealing malware. The activity was first flagged by Huntress in December 2025.

Perhaps most concerning is Google's detection of model extraction attacks targeting Gemini. These attacks systematically query proprietary machine learning models to extract information and build substitute models that mirror the target's behavior.

In a large-scale attack, Gemini was targeted by over 100,000 prompts designed to replicate the model's reasoning ability across a broad range of tasks in non-English languages. Praetorian demonstrated the effectiveness of such attacks with a proof-of-concept extraction that achieved 80.1% accuracy by sending 1,000 queries to a victim's API and training a replica model for just 20 epochs.

"Many organizations assume that keeping model weights private is sufficient protection," security researcher Farida Shafik noted. "But this creates a false sense of security. In reality, behavior is the model. Every query-response pair is a training example for a replica. The model's behavior is exposed through every API response."

Implications for Cybersecurity

This trend represents a significant evolution in cyber threat capabilities. General-purpose AI models like Gemini are being co-opted as offensive tooling, enabling threat actors to:

  • Accelerate reconnaissance and target profiling
  • Generate sophisticated phishing content and personas
  • Develop and debug malware more efficiently
  • Create convincing social engineering materials
  • Extract proprietary model capabilities for replication

As AI becomes increasingly integrated into both defensive and offensive cybersecurity operations, the arms race between threat actors and defenders is entering a new phase where machine learning capabilities are weaponized at scale.

The findings underscore the need for organizations to implement robust AI security measures, including rate limiting on API endpoints, anomaly detection for unusual query patterns, and careful monitoring of how AI-generated content might be used in social engineering campaigns.
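
As a concrete starting point, the first two of those measures can be prototyped in a few dozen lines. The sketch below pairs a sliding-window rate limiter with a crude automation heuristic (templated extraction queries tend to be far more uniform in length than organic traffic); the thresholds and the heuristic itself are illustrative assumptions, not production guidance.

```python
# Minimal sketch of two API-side defenses: a sliding-window rate limiter
# and a crude heuristic for spotting automated, templated query patterns.
import time
from collections import defaultdict, deque
from statistics import mean, pstdev

WINDOW_SECONDS = 60         # illustrative thresholds, not production values
MAX_REQUESTS = 100
LENGTH_CV_THRESHOLD = 0.05  # flag clients whose prompt lengths barely vary

request_times = defaultdict(deque)                       # client -> timestamps
recent_prompts = defaultdict(lambda: deque(maxlen=200))  # client -> prompts


def allow_request(client_id: str, prompt: str) -> bool:
    """Admit the request unless the client exceeded its per-window quota."""
    now = time.time()
    window = request_times[client_id]
    while window and now - window[0] > WINDOW_SECONDS:   # expire old entries
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False
    window.append(now)
    recent_prompts[client_id].append(prompt)
    return True


def looks_automated(client_id: str) -> bool:
    """Toy anomaly check: near-constant prompt lengths are a common
    signature of templated, scripted querying."""
    lengths = [len(p) for p in recent_prompts[client_id]]
    if len(lengths) < 50:
        return False
    return pstdev(lengths) / max(mean(lengths), 1) < LENGTH_CV_THRESHOLD


if __name__ == "__main__":
    ok = True
    for i in range(120):
        ok = allow_request("client-1", f"translate sentence {i} into French")
    print("rate limited:", not ok)                        # quota exhausted
    print("flagged as automated:", looks_automated("client-1"))
```

In practice, signals like these would feed a broader detection pipeline rather than gate requests on their own.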

Featured image: Google's Gemini AI being used by state-backed hackers for reconnaissance and attack support
