Amazon's CISO CJ Moses reveals how AI tools have boosted penetration testing efficiency by 40%, allowing the company to maintain security levels while scaling services and avoiding massive hiring increases.
Amazon has achieved a 40 percent efficiency gain in penetration testing by deploying AI tools to identify and exploit vulnerabilities in its products and services, according to the company's chief information security officer, CJ Moses.
During an interview at the RSA Conference, Moses explained that Amazon faced a growing challenge: as the company launches more products and services each year, demand for human penetration testers outpaces supply. The traditional approach - hiring more security professionals to test systems manually - was becoming unsustainable both financially and operationally.
Historically, penetration testing has been a very human- and resource-intensive endeavor, Moses said, costing the cloud and online retail giant "millions and millions of dollars in humans" - both AWS employees and contractors - to proactively find and exploit bugs in products, services, and applications during development, before customers used them.
With AI integration, Amazon has managed to maintain the same level of security coverage while holding hiring flat, even as the company adds more cloud services, features, and lines of code. The 40 percent gain reflects savings in both staffing and operating expenses tied to penetration testing.
Continuous Testing Replaces Point-in-Time Assessments
One of the most significant advantages of AI-powered penetration testing is the shift from periodic assessments to continuous monitoring. Traditional pentesting typically occurs at specific milestones or on an annual basis, creating windows of vulnerability between tests.
"No longer is pentesting at a point in time," Moses explained. "It continues to test, looking for next-level access, which is immeasurable from the standpoint of identifying issues, vulnerabilities, daisy chaining of potential vulnerabilities in an automated way, and then presents that as an alert to a human, for them to respond to and make a decision."
This continuous approach means that vulnerabilities can be identified and addressed much more quickly, reducing the window of exposure for potential attackers. The AI systems can perform the more mundane, data-intensive tasks like vulnerability identification and analysis, then hand off the decision-making to human experts.
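The workflow Moses describes - automated scanning that escalates decisions to a person - can be sketched in outline. This is an illustrative sketch, not Amazon's implementation; all names here (`Finding`, `scan`, `continuous_pentest`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A hypothetical vulnerability record produced by an automated scan."""
    target: str
    issue: str
    grants_further_access: bool  # would exploiting this yield deeper access?

def scan(target: str) -> list[Finding]:
    """Placeholder for an AI-driven scan; a real system runs continuously."""
    return [
        Finding(target, "outdated TLS configuration", False),
        Finding(target, "SSRF in metadata endpoint", True),
    ]

def continuous_pentest(targets: list[str]) -> list[Finding]:
    """One pass of the loop: scan, triage, and escalate to humans."""
    escalated = []
    for target in targets:
        for finding in scan(target):
            if finding.grants_further_access:
                # The AI stops here: whether to exploit for next-level
                # access is a human decision in the model Moses describes.
                escalated.append(finding)
    return escalated

alerts = continuous_pentest(["billing-api"])
```

The key design choice is that the automated loop never acts on a finding that would expand its access; it only surfaces such findings as alerts.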
Human Oversight Remains Critical
Despite the efficiency gains, Moses emphasized that humans remain firmly in control of critical decisions. The AI performs the heavy lifting of scanning and identifying potential issues, but humans make the final call on how to respond.
"An example being that if a pentesting AI is pentesting an application, and it finds a vulnerability that will provide further access, you want the AI to ask a human whether it exploits that access," he said. "AI is very good at doing things, especially when you have large amounts of data and need that big view. But from a decision-making capability, it isn't something that we're ready to rely on."
Moses compared AI's current decision-making capabilities to those of a 7-year-old child. "So if you're willing to let your 7-year-old make a decision as to whether they should jump to the next level of pentesting in your company, OK. But you may not want the AI doing that without someone much more experienced and older."
The Broader Security Landscape
Amazon's AI pentesting disclosure comes as security experts warn that attackers are already using AI to find vulnerabilities. Rob Joyce, former NSA cyber boss, put the point bluntly to RSAC attendees.
"You are going to be red-teamed whether you pay for it or not," Joyce said during a Monday panel. "The only difference is, you know who gets the results delivered to them."
This reality makes AI-powered defensive capabilities not just advantageous but necessary for organizations facing increasingly sophisticated threats.
Training AI Systems Like Human Employees
Moses drew parallels between securing AI systems and securing human employees, emphasizing that both require proper training and access controls. Just as human employees need to be trained on security protocols and given appropriate access levels, AI agents need similar governance.
"If you're used to securing humans, you're better able to secure AI," Moses said. "What are the two non-deterministic things that we must secure these days? Humans and AI. Look at your AI the way that you look at securing your humans. How do you secure humans? Training."
This training extends beyond just teaching AI systems what to do - it also involves carefully controlling what they know. Moses warned that AI systems will act on and share any information they're given, potentially with other AI systems.
"You tell them what you want them to know, not anything more," he advised. "If you tell them something that they don't need to know, they will act on it, they will use it, they will share it with their friends - and AI has friends."
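The "need to know" principle Moses describes can be applied mechanically: strip a record down to the fields a given task requires before it ever reaches the model. A minimal sketch, with hypothetical names (`build_agent_context`, the ticket fields):

```python
def build_agent_context(record: dict, needed_fields: set[str]) -> dict:
    """Pass an agent only the fields its task requires ("need to know")."""
    return {k: v for k, v in record.items() if k in needed_fields}

ticket = {
    "id": "T-101",
    "summary": "login page returns 500 error",
    "customer_ssn": "(sensitive)",
    "logs": "stack trace ...",
}

# A triage agent needs the summary and logs, not the customer's identity.
context = build_agent_context(ticket, {"id", "summary", "logs"})
```

Because the sensitive field never enters the agent's context, the system cannot "act on it, use it, or share it with its friends", to borrow Moses's phrasing.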
Identity and Access Management for AI
The conversation around AI security naturally extends to identity and access management. Just as human employees should only have access to the systems and data necessary for their roles, AI agents need similarly restricted permissions.
This involves creating and managing "agentic identities" - digital credentials that define what an AI system can and cannot do. The underlying models must be trained with the right data to complete specific tasks, and access must be limited to only the systems and data needed for those tasks.
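The least-privilege idea behind agentic identities can be illustrated with a deny-by-default check: an agent's credential carries an explicit allow-list, and anything outside it is refused. This is a sketch of the concept, not a real IAM API; `AgentIdentity` and `authorize` are invented names.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """A hypothetical credential scoping what one AI agent may do."""
    name: str
    allowed_actions: frozenset[str]  # entries shaped "action:resource"

def authorize(identity: AgentIdentity, action: str, resource: str) -> bool:
    """Deny by default: permit only actions explicitly granted."""
    return f"{action}:{resource}" in identity.allowed_actions

pentest_agent = AgentIdentity(
    name="pentest-agent-01",
    allowed_actions=frozenset({"scan:staging-api", "read:scan-reports"}),
)

authorize(pentest_agent, "scan", "staging-api")  # permitted
authorize(pentest_agent, "write", "prod-db")     # denied by default
```

Making the identity immutable (`frozen=True`) mirrors the idea that an agent should not be able to grant itself new permissions at runtime.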
Moses's insights reflect a broader trend in the cybersecurity industry as organizations grapple with how to secure AI systems while leveraging their capabilities. The efficiency gains Amazon has achieved suggest that AI can significantly augment human security teams rather than replace them, allowing organizations to scale their security operations without proportional increases in headcount or costs.
The 40 percent efficiency gain represents just the beginning, according to Moses, who believes the industry hasn't yet hit the "hockey stick" of AI efficiency improvements. As AI tools become more sophisticated and better integrated into security workflows, organizations may see even greater benefits in their ability to identify and respond to threats.
