HackerOne CEO Kara Sprague addresses researcher concerns about AI training, stating that the platform does not use bug bounty submissions to train generative AI models, as rival platforms Bugcrowd and Intigriti also clarify their positions on AI and researcher data rights.
HackerOne has moved to clarify its position on artificial intelligence after researchers raised concerns that their bug bounty submissions might be used to train the platform's AI agents. The controversy erupted following the launch of HackerOne's Agentic PTaaS (penetration testing as a service) offering, which the company described as delivering "continuous security validation by combining autonomous agent execution with elite human expertise."
The platform stated that its agents "are trained and refined using proprietary exploit intelligence informed by years of testing real enterprise systems." This language prompted immediate questions from the security research community about the source of that training data.
Security researcher @YShahinzadeh voiced a common concern on X, stating: "As a former H1 hunter, I hope you haven't used my reports to train your AI agents." The sentiment was echoed by others who worried they were "literally training our own replacement." One researcher, @AegisTrail, warned about potential consequences: "When white hats feel the legal system is rigged against them, the appeal of the 'dark side' becomes a matter of anger and survival rather than ethics. Just saying."
In response to the growing unease, HackerOne CEO Kara Sprague took to LinkedIn to address the issue "directly and unambiguously." Sprague stated: "HackerOne does not train generative AI models, internally or through third-party providers, on researcher submissions or customer confidential data."
She elaborated that researcher submissions are not used to "train, fine-tune, or otherwise improve generative AI models," and that third-party model providers are not permitted to "retain or use researcher or customer data for their own model training."
Sprague explained that Hai, HackerOne's agentic AI system, was designed "to help accelerate outcomes, such as validated reports, confirmed fixes, and paid rewards, while preserving the integrity and confidentiality of researcher contributions." She emphasized to researchers: "You are not inputs to our models... Hai is designed to complement your work, not replace it."
The controversy prompted other bug bounty platforms to clarify their AI policies as well. Intigriti's founder and CEO, Stijn Jans, told researchers via LinkedIn: "You own your work." Jans stated that Intigriti "apply[s] AI to create mutual benefit for both customers and researchers, amplifying human creativity so you can continue finding the complex, critical vulnerabilities that models often miss."
Bugcrowd has also addressed the issue in its terms of service, stating: "We do not allow third parties to train AI, LLM, or generative AI models on customer or researcher data." However, Bugcrowd also holds researchers responsible for their use of GenAI tools, noting that "using GenAI does not exempt them from strict compliance with platform rules or specific program scopes," while "automated or unverified outputs are not accepted as valid submissions."
The incident highlights the growing tension between security researchers and platforms as AI technology becomes more prevalent in the cybersecurity industry. Researchers increasingly worry that their submissions could be used to automate them out of their roles, while platforms seek to leverage AI to improve efficiency and outcomes.
The clarification from HackerOne and other platforms suggests that while AI will continue to play an increasing role in security testing, the human element remains central to the bug bounty ecosystem. The challenge moving forward will be balancing the benefits of AI augmentation with the rights and concerns of the researchers who form the backbone of these platforms.

As AI continues to evolve, platforms will need to maintain transparency about how they use data and ensure that researchers feel valued rather than exploited. The incident serves as a reminder that in the rapidly changing landscape of cybersecurity, clear communication and ethical considerations are as important as technological advancement.
