GitHub is tightening its bug‑bounty criteria, emphasizing validated proof‑of‑concepts, clearer reporting, and a shared‑responsibility model for user‑trusted content while still welcoming AI‑assisted research.
Raising the bar: Quality, shared responsibility, and the future of GitHub's bug bounty program

GitHub’s bug bounty program has been a cornerstone of the platform’s security posture for years. Over the past twelve months, the volume of submissions has surged, driven by new scanning tools and AI assistants that lower the entry barrier for security research. While more eyes on the codebase are a net positive, the influx has also brought a flood of low‑impact reports: missing proofs of concept, purely theoretical attacks, or findings already covered by the published ineligible list.
Why the change matters
A high‑signal queue lets the internal triage team focus on the findings that actually move the needle for platform security. When a large fraction of reports are noise, response times increase, researcher reputation suffers, and the program’s overall effectiveness declines. Some bounty programs have responded by shutting down; GitHub is choosing a different path: invest in higher standards and give researchers a clearer roadmap to success.
New submission criteria
Effective June 1, 2026, every report will be judged against three tightened checkpoints:
- Working proof of concept – The submission must include reproducible steps that demonstrate a concrete security impact. A screenshot of the exploited request, a curl command that triggers the vulnerability, or a minimal exploit script are all acceptable. Abstract statements like “an attacker could …” without a live demonstration will be marked incomplete.
- Scope awareness – Researchers must review the scope and ineligible findings list before filing. Submissions that fall into categories such as DMARC/SPF/DKIM mis‑configurations, simple user‑enumeration, or missing security headers without an exploitable path will be closed as Not Applicable and may affect the reporter’s HackerOne signal.
- Validated output – Whether the finding originates from a scanner, a static‑analysis tool, or an AI assistant, the researcher must manually verify the result. A false positive that is caught before submission is acceptable; a raw tool dump that has not been reproduced will be treated as noise.
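The validation requirement can be illustrated with a minimal, hypothetical sketch. A local stand‑in server plays the role of a flagged endpoint, and the script replays the request a scanner might have flagged to confirm the behavior is real before filing. The endpoint, the `X-Probe` header, and the payload are all invented for illustration; the point is the verification step, not the specific bug.

```python
import http.server
import threading
import urllib.request

# Stand-in for the flagged endpoint: reflects a request header into the
# HTML response without encoding. Purely illustrative, not a real service.
class Echo(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        payload = self.headers.get("X-Probe", "")
        body = f"<p>hello {payload}</p>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example's output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Echo)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Manual verification: replay the request the tool flagged and confirm
# the payload really comes back unencoded before writing the report.
marker = "<script>alert(1)</script>"
req = urllib.request.Request(
    f"http://127.0.0.1:{port}/", headers={"X-Probe": marker}
)
response = urllib.request.urlopen(req).read().decode()
reflected = marker in response

print("verified" if reflected else "false positive")
server.shutdown()
```

A report built this way ships with the exact request that triggers the behavior, which is the "working proof of concept" the new criteria ask for.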
Report structure you should follow
- Summary – One‑sentence description of the issue.
- Reproduction steps – Ordered list with commands, request payloads, or screenshots.
- Impact statement – What an attacker can achieve (e.g., remote code execution, privilege escalation, data exfiltration).
Avoid multi‑page narratives, background essays, or AI‑generated filler. Brevity speeds up triage and improves the chances of a bounty payout.
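A minimal report following this structure might look like the sketch below. The endpoint, payload, and impact are placeholders, not a real finding.

```markdown
## Summary
Reflected XSS via the `X-Probe` request header on `https://example.invalid/search`.

## Reproduction steps
1. Send: curl -H 'X-Probe: <script>alert(1)</script>' 'https://example.invalid/search'
2. Observe the payload reflected unencoded in the HTML response.

## Impact
An attacker who induces a victim to issue this request can run arbitrary
JavaScript in the victim's session (session hijacking, data exfiltration).
```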
AI‑assisted research is welcome—if it meets the same bar
GitHub openly encourages the use of AI tools such as large language models, code‑generation assistants, or automated fuzzers. The policy is simple: AI is a tool, not a replacement for verification. An AI‑suggested exploit that you have reproduced, documented, and packaged with a working PoC is treated the same as any manually discovered finding.
“An AI‑assisted finding that’s been verified, reproduced, and submitted with a working proof of concept is a great submission.” – Jarom Brown, Senior Product Security Engineer
Shared‑responsibility model explained
Many reports describe scenarios where a user interacts with attacker‑controlled content—cloning a malicious repository, running a crafted workflow, or feeding untrusted input to an LLM. These situations often stem from a user’s decision to trust that content. GitHub’s platform provides detection and remediation layers (automated scanning, manual review, abuse‑report mechanisms), but the ultimate security boundary lies with the user.
| Scenario | Why it’s shared responsibility |
|---|---|
| Prompt injection via content fed to an AI tool | The user chose to trust the input |
| Git hooks executing code from a checked‑out repo | By design, Git runs hooks on user‑checked‑out code |
| Malicious repository cloned locally | Cloning is an act of trust |
| LLM producing unexpected output from untrusted input | The user supplied the input |
If a finding demonstrates a bypass of GitHub‑controlled defenses that does not require the victim to actively trust malicious content, it belongs in the bounty program. Those are the high‑impact submissions the team is most eager to reward.
What this means for researchers
- Experienced hunters – Expect faster acknowledgments and quicker payouts because the queue will contain fewer low‑signal items.
- New entrants – Spend time reading the scope page and the ineligible list. Build a minimal, reproducible PoC before you file.
- Volume‑first hunters – Shift toward depth. One well‑validated, high‑impact finding outweighs ten speculative reports in both bounty size and reputation gain.
Reward adjustments for low‑risk findings
GitHub will continue to accept submissions that lead to hardening or documentation improvements, but the compensation model changes:
- High‑impact exploits – Standard bounty payout.
- Low‑risk hardening or doc fixes – Researchers receive GitHub swag (t‑shirts, stickers, etc.) instead of cash. This acknowledges the contribution while reserving bounty funds for the most critical issues.
Looking ahead
The program’s evolution is guided by three goals:
- Higher signal‑to‑noise ratio – stricter criteria and clearer expectations.
- Faster triage – less time spent on invalid reports, more time on remediation.
- Sustained collaboration – AI tools remain part of the workflow, but human validation stays non‑negotiable.
GitHub’s commitment to a transparent, researcher‑friendly bounty program is stronger than ever. By raising the bar on quality and clarifying the shared‑responsibility model, the platform aims to keep the security ecosystem healthy for the 180 million developers who rely on it.
Happy hunting!
