A digital intruder broke into an AWS cloud environment and in just under 10 minutes went from initial access to administrative privileges, thanks to an AI speed assist.
The Sysdig Threat Research Team said they observed the break-in on November 28, and noted it stood out not only for its speed, but also for the "multiple indicators" suggesting the criminals used large language models to automate most phases of the attack, from reconnaissance and privilege escalation to lateral movement, malicious code writing, and LLMjacking - using a compromised cloud account to access cloud-hosted LLMs.
"The threat actor achieved administrative privileges in under 10 minutes, compromised 19 distinct AWS principals, and abused both Bedrock models and GPU compute resources," Sysdig's threat research director Michael Clark and researcher Alessandro Brucato said in a blog post about the cloud intrusion.
"The LLM-generated code with Serbian comments, hallucinated AWS account IDs, and non-existent GitHub repository references all point to AI-assisted offensive operations."
The attackers initially gained access by stealing valid test credentials from public Amazon S3 buckets. The credentials belonged to an identity and access management (IAM) user with multiple read and write permissions on AWS Lambda and restricted permissions on AWS Bedrock. The bucket also contained Retrieval-Augmented Generation (RAG) data for AI models, which would come in handy later in the attack.
To prevent this type of credential theft, don't leave access keys in public buckets. Sysdig recommends using IAM roles with temporary credentials rather than long-term access keys, and for organizations that insist on granting long-term credentials to IAM users, rotating those keys periodically.
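For the how-to crowd, here's a minimal boto3 sketch of what periodic key rotation could look like - the user name is hypothetical, and a real process would confirm the new key works before revoking the old ones:

```python
import boto3

iam = boto3.client("iam")
USER = "example-test-user"  # hypothetical IAM user name

# List the user's existing long-term access keys
old_keys = [k["AccessKeyId"] for k in
            iam.list_access_keys(UserName=USER)["AccessKeyMetadata"]]

# Create a replacement key (distribute it to the workload out of band)
new_key = iam.create_access_key(UserName=USER)["AccessKey"]
print("new key id:", new_key["AccessKeyId"])

# Once the new key is confirmed working, disable and delete the old ones
for key_id in old_keys:
    iam.update_access_key(UserName=USER, AccessKeyId=key_id, Status="Inactive")
    iam.delete_access_key(UserName=USER, AccessKeyId=key_id)
```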
After unsuccessfully trying usernames such as "sysadmin" and "netadmin", typically associated with admin-level privileges, the attacker ultimately achieved privilege escalation through Lambda function code injection, abusing the compromised user's UpdateFunctionCode and UpdateFunctionConfiguration permissions. They replaced the code of an existing Lambda function named EC2-init three times, changing the targeted user with each iteration. The first attempt targeted adminGH, which, despite its name, lacked admin privileges. Subsequent attempts eventually succeeded in compromising the admin user frick.
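For the unfamiliar, here's a rough boto3 sketch of the API calls this sort of injection leans on - the payload file is a placeholder, not the attacker's actual code, and whether the intruder invoked the function directly or waited for an existing trigger isn't stated, so the invoke call is an assumption:

```python
import boto3

lam = boto3.client("lambda")
FUNCTION = "EC2-init"  # the existing function named in the attack

# Swap the function's code for attacker-controlled code packaged as a zip
with open("payload.zip", "rb") as f:  # placeholder payload
    lam.update_function_code(FunctionName=FUNCTION, ZipFile=f.read())

# Raise the execution timeout so the injected code has time to enumerate the account
lam.update_function_configuration(FunctionName=FUNCTION, Timeout=30)

# Run the function and read back whatever it returns (assumed step - the attacker
# may instead have waited for an existing trigger)
resp = lam.invoke(FunctionName=FUNCTION)
print(resp["Payload"].read().decode())
```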
The security sleuths note that the comments in the code were written in Serbian, likely hinting at the intruder's origin. The code itself listed all IAM users and their access keys, created access keys for frick, and listed S3 buckets along with their contents.
Code writing for LLMs 101
Plus, the attacker's code contained "comprehensive" exception handling, according to the security sleuths, including logic to limit S3 bucket listings, as well as an increase in the Lambda execution timeout from three seconds to 30 seconds. These factors, combined with the short time from credential theft to Lambda execution, "strongly suggest" the code was written by an LLM, according to the threat hunters.
Next, the miscreant set about collecting account IDs and attempting to assume OrganizationAccountAccessRole in all AWS environments. Interestingly, they included account IDs that did not belong to the victim organization: two with ascending and descending digits (123456789012 and 210987654321), and one ID that appeared to belong to a legitimate external account.
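Here's roughly what that cross-account role-assumption spray looks like in boto3 terms - the account list below just reuses the bogus IDs mentioned above, and the session name is made up:

```python
import boto3
from botocore.exceptions import ClientError

sts = boto3.client("sts")
# Candidate account IDs; the ascending/descending values mirror the
# hallucination-like IDs seen in the attack
candidate_accounts = ["123456789012", "210987654321"]

for account_id in candidate_accounts:
    role_arn = f"arn:aws:iam::{account_id}:role/OrganizationAccountAccessRole"
    try:
        creds = sts.assume_role(RoleArn=role_arn,
                                RoleSessionName="recon")["Credentials"]
        print(f"assumed role in {account_id}")
    except ClientError as err:
        # AccessDenied here is exactly the kind of noisy failure CloudTrail can flag
        print(f"{account_id}: {err.response['Error']['Code']}")
```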
"This behavior is consistent with patterns often attributed to AI hallucinations, providing further potential evidence of LLM-assisted activity," Clark and Brucato wrote.
In total, the attacker gained access to 19 AWS identities, including six different IAM roles across 14 sessions, plus five other IAM users. And then, with the new admin user account they had created, the crims snarfed up a ton of sensitive data: secrets from Secrets Manager, SSM parameters from EC2 Systems Manager, CloudWatch logs, Lambda function source code, internal data from S3 buckets, and CloudTrail events.
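Defenders hunting for this kind of looting after the fact can lean on CloudTrail. A rough sketch follows - the event names are our assumptions about what such enumeration would leave behind, not a list Sysdig published:

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail")
suspect_events = ["ListSecrets", "GetSecretValue", "GetParametersByPath",
                  "GetFunction", "ListBuckets"]
start = datetime.now(timezone.utc) - timedelta(hours=24)

for event_name in suspect_events:
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName",
                           "AttributeValue": event_name}],
        StartTime=start,
    )["Events"]
    # Print caller and timestamp to spot one principal hammering many read APIs
    for ev in events:
        print(ev["EventName"], ev.get("Username"), ev["EventTime"])
```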
LLMjacking attacks
They then turned to the LLMjacking part of the attack to gain access to the victim's cloud-hosted LLMs. For this, they abused the user's Amazon Bedrock access to invoke multiple models including Claude, DeepSeek, Llama, Amazon Nova Premier, Amazon Titan Image Generator, and Cohere Embed. Sysdig notes that "invoking Bedrock models that no one in the account uses is a red flag," and enterprises can create Service Control Policies (SCPs) to allow only certain models to be invoked.
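One common pattern for such an SCP is a blanket deny on Bedrock invocation with a carve-out for approved models. A minimal sketch, applied via boto3 - the model ARN and attachment target are placeholders, not anything Sysdig prescribes:

```python
import boto3, json

org = boto3.client("organizations")

# Deny Bedrock invocation for anything other than the explicitly approved model;
# the Claude ARN pattern here stands in for whatever the org actually uses
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
        "NotResource": "arn:aws:bedrock:*::foundation-model/anthropic.claude-3-haiku-*"
    }]
}

policy = org.create_policy(
    Name="restrict-bedrock-models",
    Description="Only allow approved Bedrock foundation models",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
# Attach to the root, an OU, or an account as appropriate
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                  TargetId="r-examplerootid")  # placeholder target
```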
After Bedrock, the intruder focused on EC2, querying machine images suitable for deep learning applications. They also began using the victim's S3 bucket for storage, and one of the scripts stored therein appears designed for ML training - but it references a GitHub repository that doesn't exist, suggesting an LLM hallucinated the repo while generating the code.
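That machine-image reconnaissance amounts to a simple query - something along these lines, with the name filter being our assumption:

```python
import boto3

ec2 = boto3.client("ec2")
# Look up AWS-published Deep Learning AMIs, roughly the kind of query
# an attacker scoping GPU workloads might run
images = ec2.describe_images(
    Owners=["amazon"],
    Filters=[{"Name": "name", "Values": ["Deep Learning AMI*"]}],
)["Images"]
for img in sorted(images, key=lambda i: i["CreationDate"], reverse=True)[:5]:
    print(img["ImageId"], img["Name"])
```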
While the researchers say they can't determine the attacker's goal - possibly model training or reselling compute access - they note that the script launches a publicly accessible JupyterLab server on port 8888, providing a backdoor to the instance that doesn't require AWS credentials. The instance was terminated after five minutes for unknown reasons.
This is the latest example of attackers increasingly relying on AI at almost every stage of the attack chain, and some security chiefs have warned that it's just a matter of time before criminals can fully automate attacks at scale.
There are things organizations can do to defend against similar intrusions, and most involve hardening identity security and access management. First off: apply the principle of least privilege to all IAM users and roles. Sysdig also recommends restricting UpdateFunctionConfiguration and PassRole permissions in Lambda, and limiting UpdateFunctionCode permissions to specific functions, assigning them only to identities that need code-deployment capabilities to do their jobs. Also, make sure S3 buckets containing sensitive data, including RAG data and AI model artifacts, are not publicly accessible. And it's a good idea to enable model invocation logging for Amazon Bedrock.
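To make the last two recommendations concrete, here's a rough boto3 sketch - the policy name, function-name prefix, log group, and role ARN are all placeholders, not anything Sysdig prescribes:

```python
import boto3, json

# Scope UpdateFunctionCode/UpdateFunctionConfiguration to named functions only;
# the "deploy-pipeline-" prefix is illustrative
iam = boto3.client("iam")
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["lambda:UpdateFunctionCode", "lambda:UpdateFunctionConfiguration"],
        "Resource": "arn:aws:lambda:*:*:function:deploy-pipeline-*"
    }]
}
iam.create_policy(PolicyName="scoped-lambda-deploy",
                  PolicyDocument=json.dumps(policy_doc))

# Turn on Bedrock model invocation logging so unexpected model use shows up
bedrock = boto3.client("bedrock")
bedrock.put_model_invocation_logging_configuration(loggingConfig={
    "cloudWatchConfig": {
        "logGroupName": "/aws/bedrock/invocations",                   # placeholder
        "roleArn": "arn:aws:iam::111122223333:role/BedrockLogs",      # placeholder
    },
    "textDataDeliveryEnabled": True,
})
```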
We reached out to Amazon for comment, but they said they wouldn't be able to get us anything by publication time. We'll update this story with any relevant information we receive from them. ®