
Brocards for Vulnerability Triage: A Practical Framework for Security Analysis

Tech Essays Reporter

A comprehensive guide to vulnerability triage brocards - concise principles that help security researchers and maintainers quickly evaluate the legitimacy of vulnerability reports in open source software.

In the world of open source security, vulnerability triage has become an increasingly complex and time-consuming task. As someone who spends considerable time evaluating security reports for open source projects, I've observed that a significant portion of submissions fall into categories that can be quickly identified and dismissed using established principles. Drawing inspiration from the legal world's use of brocards - concise aphorisms that capture the essence of legal principles - I've compiled a set of brocards specifically for vulnerability triage.

No vulnerability report without a threat model

This fundamental principle, well-explained by Alex Gaynor in "Motion to Dismiss for Failure to State a Vulnerability," states that any vulnerability report lacking a coherent threat model can be safely dismissed. The threat model is the foundation upon which vulnerability assessment rests - without it, we cannot determine whether a behavior actually poses a security risk.

Consider a Python API that raises exceptions in undocumented or surprising cases. While this behavior might be undesirable from a user experience perspective, it doesn't constitute a vulnerability unless an attacker can exploit it to cause harm. Similarly, reports about hangs or stalls in local developer tools often fail this test. While hangs are certainly undesirable, the opportunity for harm is negligible in a developer tooling context where the developer can always terminate the process.
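As a hedged illustration (the function and its behavior are invented for this example), consider an API that raises in a case its docstring never mentions:

```python
def parse_port(value: str) -> int:
    """Parse a TCP port number from a string."""
    port = int(value)  # raises ValueError on non-numeric input --
                       # surprising if undocumented, but the caller can
                       # always catch it, so there is no harm to model
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port
```

Absent a caller that an attacker can drive into an unhandled exception with security consequences, a report about this behavior describes a documentation bug at most, not a vulnerability.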

No exploit from the heavens

This principle addresses reports that describe severe end states but require attacker capabilities that are more powerful than the vulnerability itself. In essence, if exploiting the vulnerability requires capabilities that already provide equal or greater access than the vulnerability would grant, then no actual vulnerability exists.

A classic example involves content manipulation on web services where the manipulation can only occur if the attacker is an active man-in-the-middle (MitM). In this scenario, an active MitM could send entirely arbitrary content anyway, making the specific manipulation irrelevant. The attacker already possesses the capability to cause the same harm without needing the reported vulnerability.

Another example involves memory corruption in CPython where the corruption occurs by directly manipulating CPython's object internals at runtime via ctypes. Here, the attacker is already running arbitrary code to perform the corruption, meaning they don't need the vulnerability to achieve their goals.
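The shape of such a report can be reduced to a few lines of ctypes (a hypothetical reduction, not any specific report; the struct layout below is a CPython implementation detail for standard GIL builds and may differ on other builds). Any code that can do this can already do anything:

```python
import ctypes
import sys

class PyObjectHeader(ctypes.Structure):
    # First two fields of every object on a standard CPython (GIL) build;
    # this layout is an implementation detail, not a stable API.
    _fields_ = [("ob_refcnt", ctypes.c_ssize_t),
                ("ob_type", ctypes.c_void_p)]

obj = object()
header = PyObjectHeader.from_address(id(obj))

# Reading the internals matches the interpreter's own view
# (getrefcount counts its own argument, hence the -1)...
assert header.ob_refcnt == sys.getrefcount(obj) - 1

# ...and writing them "corrupts" the object from outside the object model.
header.ob_refcnt += 100          # object would now never be freed
assert sys.getrefcount(obj) - 1 == 101
header.ob_refcnt -= 100          # undo the damage
```

The "attacker" here is already executing arbitrary Python in-process; the corruption grants them nothing they did not already have.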

No vulnerability outside of usage

This principle states that a vulnerability report can be dismissed if it describes behavior that could occur but doesn't actually happen in real usage of the software. The key distinction here is between theoretical possibility and practical reality.

Consider a private API within a library that has a buffer overflow vulnerability, but where the only usage of that API is statically assertable to never exceed safe bounds. In this case, no actual vulnerability exists because the vulnerable behavior cannot manifest in practice. Similarly, APIs with preconditions that must be maintained by the programmer fall into this category. If an API requires valid UTF-8 input and a fuzzer discovers that invalid UTF-8 causes uncontrolled program abort, this isn't a vulnerability if real programs properly maintain the UTF-8 invariant.
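A hypothetical reduction of the UTF-8 case (the function name is invented for illustration): the API documents a precondition and fails loudly when it is violated, which is exactly the behavior a fuzzer would flag.

```python
def display_width(data: bytes) -> int:
    """Return the character count of `data`.

    Precondition: `data` must be valid UTF-8. Violating it raises
    UnicodeDecodeError -- undesirable, but only a vulnerability if
    real callers can be driven to pass attacker-controlled bytes
    that violate the precondition.
    """
    return len(data.decode("utf-8"))
```

If every real caller encodes its own strings before calling this function, the fuzzer finding describes a theoretical hazard, not a practical vulnerability.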

The nuance here is important: while the programmer is responsible for maintaining invariants, there is a legitimate vulnerability when usage actually violates those invariants. This is analogous to free(3) not being considered vulnerable to double free, but a program that calls free(3) on an already freed pointer being vulnerable.

No vulnerability from standard behavior

Perhaps the most controversial of these principles, this brocard states that behavior that is a direct consequence of correct adherence to a standard or specification cannot be considered a vulnerability in the implementation itself. If the vulnerability exists in the standard, it's the standard that needs fixing, not every implementation.

This principle often comes into play with "robustness" requirements in standards. Many RFCs and similar standards follow the (poorly named) robustness principle, allowing interactions that are not well-defined by permitting implementers to make judgment calls about intended semantics. For example, RFC 7230's "Message Parsing Robustness" section allows servers to ignore empty lines before request-lines and recognize single LF as line terminators.
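Those two allowances can be sketched in a toy parser (a minimal sketch for illustration, not a real HTTP implementation): it skips empty lines before the request-line and accepts a bare LF as a line terminator, both explicitly permitted by the standard.

```python
def parse_request_line(raw: bytes) -> tuple[str, str, str]:
    # RFC 7230 sec. 3.5: a server MAY ignore empty lines received
    # before the request-line, and MAY recognize a single LF as a
    # line terminator (ignoring any preceding CR).
    lines = raw.replace(b"\r\n", b"\n").split(b"\n")
    while lines and lines[0] == b"":
        lines.pop(0)                      # skip leading empty lines
    method, target, version = lines[0].split(b" ")
    return method.decode(), target.decode(), version.decode()
```

A report that this parser "accepts malformed input" is describing standard-conformant behavior; if that leniency is genuinely dangerous, the fix belongs in the standard.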

Another common scenario involves cryptographic requirements that are insecure in isolation but secure by construction. The classic example is automated reports of MD5 usage where that usage is solely in constructions where MD5 is not actually broken, such as HMAC-MD5. While using a better hash function might be preferable, the presence of MD5 in an HMAC construction is not itself a vulnerability.
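The distinction is easy to see in code: Python's `hmac` module will happily use MD5, and the construction remains unbroken even though bare MD5 collisions are practical, because HMAC's security does not rest on the collision resistance of its inner hash. The expected digest below is the HMAC-MD5 test vector from RFC 2202.

```python
import hashlib
import hmac

# HMAC-MD5 is not broken by MD5's known collision attacks.
tag = hmac.new(b"Jefe", b"what do ya want for nothing?", hashlib.md5)

# RFC 2202, test case 2:
assert tag.hexdigest() == "750c783e6ab0b503eaa86e310a5db738"
```

An automated scanner that flags every occurrence of `hashlib.md5` will flag this line; a triager applying the brocard will note the construction and close the report.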

The nuance here is that implementations choosing to be more strict than standards require should be considered vulnerable if their intended strictness is violated.

No cure worse than the disease

Maintainers should reject or contest vulnerability reports where the cost of handling the report exceeds the harm of the vulnerability itself. This principle recognizes that the triage and remediation process consumes community resources and can itself become a denial of service.

ReDoS (Regular Expression Denial of Service) reports are the classic example, particularly in contexts where the impact of the "denial of service" is negligible. These reports typically involve significant maintainer time for triage and downstream time for remediation, effectively resulting in a denial of service on the community itself. A recent case illustrating this principle is CVE-2026-4539, where an anonymous reporter filed a CVE against pygments with VulDB, seemingly bypassing maintainer review entirely. This report, which ignored pygments' own security policy and wasn't accompanied by a fixed version, lit up tens of thousands of downstream dependencies with a "medium" severity vulnerability, causing significant disruption despite being essentially junk.
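For readers unfamiliar with the mechanics, a textbook backtracking-prone pattern looks like this (a classic example, unrelated to any specific report): matching time grows roughly exponentially with input length on failing inputs, but whether that matters depends entirely on where the regex runs and who controls the input.

```python
import re

# Nested quantifiers force the backtracking engine to try exponentially
# many ways to partition the run of 'a's before concluding "no match".
EVIL = re.compile(r"^(a+)+$")

# Kept short here so it finishes instantly; on a failing input, each
# extra 'a' roughly doubles the work.
assert EVIL.match("a" * 15) is not None      # matching input: fast
assert EVIL.match("a" * 15 + "!") is None    # failing input: exponential
```

In a server parsing attacker-supplied input this is a real availability concern; in a local developer tool, the "denial of service" is a slow command the developer can Ctrl-C.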

The current state of the CVE ecosystem unreasonably places the burden on maintainers to contest this kind of spam when adversarial reporters can bypass them entirely.

The report is neither necessary nor sufficient

This final principle cuts both ways: the presence of a vulnerability report (and a CVE or other identifier) is neither necessary nor sufficient for a vulnerability to exist. Many vulnerabilities are never formally reported, and many formal reports do not actually describe meaningful vulnerabilities.

This has important implications for how we interpret vulnerability data. No unvalidated assumption should ever be made about the relationship between the presence of a report and the presence of a vulnerability. This stems from strategic ambiguity in the vulnerability reporting ecosystem, where organizations like MITRE benefit from being perceived as high-quality sources of vulnerability information while also being able to disclaim responsibility for anything other than providing stable identifiers for claims of vulnerability.

The broader context

These brocards emerge from the practical realities of vulnerability triage in 2026. The ecosystem faces several challenges: spam submissions (including "beg bounty" attempts and increasingly zero-effort LLM-generated reports), strategic ambiguity in vulnerability reporting that benefits certain organizations, and an unreasonable burden placed on maintainers to contest spurious reports.

The legal world's use of brocards provides a useful model because, like legal principles, these vulnerability triage principles are not universally true in every circumstance. Rather, they provide standards by which claims can quickly be evaluated for legitimacy. A vulnerability report that fails to meet these basic criteria can often be dismissed without extensive analysis, freeing up valuable time and resources for genuine security issues.

For maintainers and security researchers, these brocards offer a practical framework for navigating the increasingly noisy landscape of vulnerability reporting. By applying these principles systematically, we can focus our attention on the reports that truly matter while efficiently filtering out the noise that threatens to overwhelm the vulnerability triage process.
