As AI models improve at writing and evaluating code, open-source projects face a new challenge: an influx of high-quality vulnerability reports that require human verification, creating more work for already stretched maintainers.
Once dismissed as AI slop, automated bug reports have evolved into sophisticated vulnerability submissions that are overwhelming open-source maintainers. The improvement in AI models' ability to write and evaluate code has created a paradox: better AI output means more work for humans who must verify each report.
For the curl project, this shift has been dramatic. Daniel Stenberg, founder and lead developer of curl, recently noted on social media that the project has stopped receiving low-quality AI-generated security reports. Instead, maintainers now face an ever-increasing volume of genuinely good security reports, almost all created with AI assistance.
"They're gone. Instead, we get an ever-increasing amount of really good security reports, almost all done with the help of AI," Stenberg explained. The reports are being submitted faster than ever before, imposing a growing workload on maintainers who must evaluate each one.
This isn't an isolated phenomenon. Linux kernel maintainer Greg Kroah-Hartman has observed similar trends, noting that AI-assisted bug reports now contain less nonsense and more valid concerns. While the Linux team has been working to handle the increased volume, smaller projects with fewer maintainers may be struggling to keep up.
Even with improved quality, the issues identified aren't always genuine security flaws requiring immediate attention. Stenberg points to curl's public list of closed reports as evidence. Most reports get closed because the issue isn't a serious threat, even if it might be worth correcting. For instance, a data race in a curl library was initially discussed as potentially warranting a CVE but was eventually fixed in a pull request and deemed simply "informative."
The situation has prompted several organizations to reconsider their vulnerability reward programs. The Internet Bug Bounty program recently announced it would stop issuing monetary awards for vulnerabilities at the end of March, citing the changing discovery landscape. "AI-assisted research is expanding vulnerability discovery across the ecosystem, increasing both coverage and speed," the program maintainers said. "The balance between findings and remediation capacity in open source has substantively shifted."
Stenberg himself took action last year by stopping payments for curl vulnerability reports, aiming to remove incentives for submitting erroneous or unsubstantiated reports. Other organizations have followed similar paths, recognizing that the economics of vulnerability discovery have fundamentally changed.
Linux maintainer Willy Tarreau responded to Stenberg's observations by arguing that reporting rules should shift more of the burden onto reporters. "It's time to update the reporting rules to reduce the overhead by making the LLM+reporter do a larger share of the work to reduce the time spent triaging," he suggested.
The irony is clear: more capable AI tooling does nothing to expand the capacity of the humans in the loop. Much of the notional productivity gain from AI may simply be AI tool users moving the cost of code review off the books. As AI continues to improve at generating plausible code and identifying potential issues, open-source projects face the challenge of managing this new reality without burning out their volunteer maintainers.
The shift from AI slop to AI sophistication represents a significant change in how open-source security works. While the quality of automated reports has improved dramatically, the fundamental bottleneck remains human verification and triage. As more projects experience this phenomenon, the community will need to develop new approaches to handle the increased volume while maintaining the quality and security of open-source software.

This evolving situation highlights a broader challenge in the AI era: as automation becomes more capable, the human labor required to validate and integrate that automation's output may actually increase rather than decrease. For open-source maintainers, this means adapting to a world where AI assistance comes with its own set of management challenges.
