Generative AI Facilitates Global FortiGate Firewall Breach Campaign
#Security

AI & ML Reporter
2 min read

Amazon's security team documented how Russian-speaking hackers leveraged generative AI to compromise over 600 FortiGate firewalls across 55 countries in five weeks, demonstrating AI's growing role in sophisticated cyberattacks.

Amazon's security division has disclosed details of a coordinated campaign where threat actors used generative AI tools to breach more than 600 FortiGate firewalls globally. The operation, attributed to Russian-speaking hackers, exploited AI-generated content to create convincing phishing lures and reconnaissance materials that bypassed traditional security filters.

The attackers targeted Fortinet's FortiGate firewalls – widely used network security devices – across organizations in 55 countries. According to Amazon's analysis, generative AI was instrumental in three attack phases: crafting socially engineered emails that mimicked legitimate IT communications, generating fake technical documentation to establish credibility, and automating vulnerability scanning scripts tailored to FortiOS environments.

While the specific generative AI services weren't named in the report, forensic evidence indicates the tools produced English-language content with fewer grammatical errors than typical non-native speaker phishing attempts. This technical refinement increased the attacks' success rate, with hackers gaining persistent access to networks within minutes of initial compromise in some cases.

Notably, the campaign exploited known vulnerabilities rather than zero-days. Amazon's advisory suggests the hackers targeted systems that remained vulnerable to two disclosed flaws (CVE-2024-21762 and CVE-2023-27997) because Fortinet's recommended updates had not been applied. Once inside, attackers deployed custom malware designed to maintain persistence while exfiltrating credentials and sensitive data.
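Because the campaign relied on unpatched devices, the basic defensive check is straightforward: compare each device's FortiOS version against the minimum patched version for its release branch. The sketch below illustrates that comparison; the threshold values are placeholders, not authoritative — Fortinet's PSIRT advisories for the two CVEs list the actual fixed versions per branch.

```python
# Hedged sketch: check a FortiOS version string against a per-branch
# patched-version threshold. The thresholds here are ILLUSTRATIVE
# placeholders -- consult Fortinet PSIRT advisories for real values.

# Minimum patched version per major.minor branch (assumed values).
PATCHED = {
    (7, 4): (7, 4, 3),
    (7, 2): (7, 2, 7),
    (7, 0): (7, 0, 14),
    (6, 4): (6, 4, 15),
}

def parse_version(v: str) -> tuple:
    """Parse a dotted version string like '7.2.6' into an int tuple."""
    return tuple(int(part) for part in v.split("."))

def is_patched(version: str) -> bool:
    """True if the version meets its branch's patched threshold.
    Unknown or end-of-life branches are treated as unpatched
    (conservative default)."""
    v = parse_version(version)
    threshold = PATCHED.get(v[:2])
    if threshold is None:
        return False
    return v >= threshold

print(is_patched("7.2.6"))  # one release below the assumed 7.2.7 threshold
print(is_patched("7.4.3"))  # meets the assumed threshold
```

Tuple comparison makes the version ordering correct without string tricks; in a real fleet audit, the version strings would come from device inventory or the FortiGate management API rather than hard-coded input.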

The operation's scale reveals practical limitations of current AI security tools. Automated defenses struggled to distinguish between legitimate AI-generated business communications and malicious content, highlighting a critical detection gap. Amazon recommends implementing behavioral analysis systems that monitor for unusual network traffic patterns rather than relying solely on content scanning.
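The behavioral approach Amazon recommends can be reduced to a simple idea: instead of scanning content, flag a host whose traffic deviates sharply from its own historical baseline. A minimal z-score sketch follows; the window and threshold are illustrative assumptions, not tuned values from the report.

```python
# Hedged sketch of baseline-deviation detection: flag a measurement
# (e.g. outbound bytes/min from a host) that sits far above the host's
# historical baseline. The 3-sigma threshold is an illustrative default.
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float,
                 z_threshold: float = 3.0) -> bool:
    """True if `current` exceeds the baseline by more than
    z_threshold standard deviations."""
    if len(history) < 2:
        return False  # not enough history to form a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current > mu  # flat baseline: any increase stands out
    return (current - mu) / sigma > z_threshold

baseline = [100, 110, 95, 105, 102, 98, 107, 101]  # normal bytes/min samples
print(is_anomalous(baseline, 5000))  # exfiltration-scale spike -> flagged
print(is_anomalous(baseline, 104))   # within normal variation -> not flagged
```

This catches the exfiltration stage of the attack regardless of how convincing the AI-generated phishing content was, which is the detection gap the report highlights.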

This incident demonstrates generative AI's dual-use dilemma: The same capabilities that help developers write code can efficiently craft attack vectors. Security teams must now contend with AI-powered reconnaissance that adapts to target environments faster than human operators could achieve manually. As Amazon concludes in their report, defensive strategies must evolve beyond signature-based detection toward anomaly-driven security models capable of identifying AI-assisted attack patterns.
