Google Expands Gemini AI to Combat Rising Tide of Malicious Ads
#Cybersecurity

Security Reporter

Google is deploying its Gemini AI models to detect and block billions of harmful ads as scammers increasingly use generative AI to create sophisticated malvertising campaigns.

Google is ramping up its use of artificial intelligence to counter the growing threat of malicious advertisements on its platforms, as cybercriminals turn to generative AI to craft more sophisticated, harder-to-detect scams.

According to the company's latest transparency report, Google blocked or removed 8.3 billion ads and suspended 24.9 million advertiser accounts in 2025. Among these, 602 million ads were directly tied to scams and fraudulent activities.

The Evolving Threat of Malvertising

Malicious advertising, commonly known as "malvertising," has plagued Google's ad network for years. Attackers purchase advertising space to impersonate legitimate brands and services, pushing malware, stealing cryptocurrency, or directing users to phishing sites.

These campaigns employ increasingly sophisticated techniques:

  • Cloaking methods that show different content to users versus ad reviewers
  • URL redirects that appear to lead to trusted websites
  • Domain spoofing that displays Google's own domains or legitimate software download pages
  • Authentication portal impersonation that mimics real login screens

Recent campaigns documented by security researchers have included fake login pages designed to steal Google Ads accounts, trojanized software distributed through ads impersonating tools like Google Authenticator and Homebrew, and cryptocurrency platform ads that drain visitors' digital wallets.
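Two of the techniques above, cloaking and domain spoofing, boil down to simple mismatches: the page a reviewer sees differs from the page a user sees, or the domain an ad displays differs from where its redirect chain actually lands. The sketch below illustrates those two checks in simplified form; the function names and the fingerprint-diffing approach are this article's own illustration, not Google's actual review pipeline.

```python
import hashlib
from urllib.parse import urlparse

def content_fingerprint(body: bytes) -> str:
    """Hash a landing-page body so two fetches of the same URL can be compared cheaply."""
    return hashlib.sha256(body).hexdigest()

def looks_cloaked(body_for_user: bytes, body_for_reviewer: bytes) -> bool:
    """Cloaking serves a benign page to the ad reviewer and the scam to real users.
    Differing fingerprints for the same URL are a red flag."""
    return content_fingerprint(body_for_user) != content_fingerprint(body_for_reviewer)

def looks_spoofed(display_url: str, final_url: str) -> bool:
    """Domain spoofing: the ad displays one domain (e.g. google.com) while the
    redirect chain actually lands somewhere else entirely."""
    shown = (urlparse(display_url).hostname or "").lower()
    landed = (urlparse(final_url).hostname or "").lower()
    return shown != landed

# An ad that displays google.com but redirects to a lookalike download site:
print(looks_spoofed("https://google.com/chrome", "https://gooogle-dl.example/payload"))  # True
```

Real cloaking detection is far harder than a byte-for-byte diff, since attackers fingerprint reviewer IP ranges and user agents; the point here is only the shape of the mismatch being checked.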

AI vs. AI: Google's Counteroffensive

As threat actors begin using generative AI to create deceptive ads at scale, Google is fighting fire with fire by deploying its Gemini AI models to detect and block malicious campaigns in real time.

"Bad actors are using generative AI to create deceptive ads at scale, and Gemini helps us detect and block them in real time," explains Keerat Sharma, VP & General Manager of Ads Privacy and Safety at Google.

The company reports that by the end of 2025, the majority of Responsive Search Ads created in Google Ads were reviewed instantly, with harmful content blocked at submission. Google plans to expand this capability to more ad formats throughout 2026.

How Gemini AI Detection Works

Unlike earlier detection systems, which relied primarily on keyword analysis to spot malicious behavior, Google's Gemini-powered approach examines billions of signals to identify harmful ads:

  • Advertiser behavior patterns
  • Account history and reputation
  • Campaign structure and deployment patterns
  • Intent analysis across multiple data points

This comprehensive analysis allows the AI to identify malicious campaigns that might slip past traditional keyword-based filters.
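Google has not published how its models weigh these signals, but the difference between keyword filtering and multi-signal analysis can be sketched with a toy example. Everything below — the feature names, thresholds, and weights — is hypothetical, chosen only to show why an ad with perfectly clean text can still be flagged by behavioral signals.

```python
from dataclasses import dataclass

@dataclass
class AdSubmission:
    # Hypothetical signals -- Google's actual feature set is not public.
    account_age_days: int
    prior_violations: int
    campaigns_launched_last_hour: int
    landing_domain_age_days: int
    ad_text: str

SCAM_KEYWORDS = {"guaranteed returns", "free crypto", "wallet sync"}

def keyword_filter(ad: AdSubmission) -> bool:
    """Old-style check: flag only if the ad text contains a known scam phrase."""
    text = ad.ad_text.lower()
    return any(kw in text for kw in SCAM_KEYWORDS)

def multi_signal_score(ad: AdSubmission) -> float:
    """Toy multi-signal risk score: each behavioral signal nudges the score,
    so suspicious behavior is caught even when the ad text is clean."""
    score = 0.0
    if ad.account_age_days < 7:
        score += 0.3   # brand-new advertiser accounts are riskier
    score += min(ad.prior_violations * 0.2, 0.4)
    if ad.campaigns_launched_last_hour > 10:
        score += 0.3   # burst deployment is a common scam-campaign pattern
    if ad.landing_domain_age_days < 30:
        score += 0.2   # freshly registered landing domain
    if keyword_filter(ad):
        score += 0.5   # text signals still count
    return min(score, 1.0)

ad = AdSubmission(account_age_days=2, prior_violations=0,
                  campaigns_launched_last_hour=25,
                  landing_domain_age_days=5,
                  ad_text="Download the official authenticator app")
print(keyword_filter(ad))                     # False: clean text slips past the keyword filter
print(round(multi_signal_score(ad), 2))       # 0.8: behavioral signals flag it anyway
```

A production system would learn these weights from labeled data rather than hand-tune them, but the structural point stands: combining advertiser behavior, account history, and deployment patterns catches campaigns that no text filter would.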

Impact and Effectiveness

The results have been significant. In the United States alone, Google removed 1.7 billion ads and suspended 3.3 million advertiser accounts in 2025. The top two policy violations were "abusing the ad network" and "misrepresentation."

Artificial intelligence has also improved Google's response to malicious ads that slip past initial review. At the same time, the enhanced models have reduced incorrect advertiser suspensions by 80%, demonstrating greater accuracy in distinguishing legitimate businesses from bad actors.

The Arms Race Continues

The deployment of AI to combat malicious advertising represents an ongoing arms race between tech companies and cybercriminals. As Google expands Gemini's use across additional ad formats and enforcement systems, threat actors continue to evolve their tactics.

Industry experts note that this battle is likely to intensify as generative AI tools become more accessible and sophisticated. The ability to rapidly create convincing fake advertisements at scale presents a significant challenge for traditional detection methods.

What This Means for Users

For everyday users, Google's enhanced AI detection should result in fewer encounters with malicious ads. However, security experts still recommend:

  • Being cautious of ads offering deals that seem too good to be true
  • Verifying website URLs before entering sensitive information
  • Using ad-blockers on high-risk websites
  • Keeping security software updated
  • Reporting suspicious ads through Google's reporting mechanisms
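The "verify website URLs" advice above is worth making concrete, because phishing domains are built to read plausibly. A subdomain trick like `accounts.google.com.evil.example` passes a casual glance but fails an exact hostname check. The sketch below shows that check; the `TRUSTED_HOSTS` allow-list is a hypothetical example, not a recommendation of specific domains.

```python
from urllib.parse import urlparse

# Hypothetical allow-list: the genuine domains a user expects to sign in to.
TRUSTED_HOSTS = {"accounts.google.com", "github.com"}

def is_trusted_login_page(url: str) -> bool:
    """Compare the *exact* hostname against known-good domains before entering
    credentials. Lookalikes such as 'accounts.google.com.evil.example' or
    'g00gle.com' fail an exact match even though they read plausibly."""
    host = (urlparse(url).hostname or "").lower()
    return host in TRUSTED_HOSTS

print(is_trusted_login_page("https://accounts.google.com/signin"))            # True
print(is_trusted_login_page("https://accounts.google.com.evil.example/sso"))  # False
```

Browsers and password managers perform a version of this check automatically, which is one reason a password manager refusing to autofill on a familiar-looking page is itself a warning sign.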

As Google continues to refine its AI-powered defenses, the company aims to block malicious campaigns at submission time rather than after they've been served to users, potentially reducing the window of opportunity for scammers to reach their targets.

This technological escalation highlights the growing importance of AI in cybersecurity, where both defenders and attackers are leveraging machine learning to outmaneuver each other in an increasingly digital advertising landscape.
