The 90‑Day Disclosure Policy Is Dead
Published: 2026‑05‑09 – Updated: 2026‑05‑09
Reading time: ~14 min
Tags: #security #llm #disclosure #vulnerability‑management #linux
{{IMAGE:1}}
TL;DR
The 90‑day responsible‑disclosure window was built for a world where bug finders were rare and exploit development was slow. That world is gone. LLMs have compressed both timelines to near‑zero. Treat every critical security issue as P0 and patch it now, not “in the next sprint”.
1. The Old Model (Rest in Peace)
Imagine it is 2019. You discover a critical bug, write a report, and send it to the vendor. The vendor takes a few days to triage, a couple of weeks to develop a fix, and perhaps a month to roll it out. You give them 90 days before you go public, assuming:
- You are probably the only person who found the bug.
- Even if others find it, they will take their own time.
- The vendor has a comfortable head‑start on writing the patch.
- Attackers need days or weeks to turn the patch into a working exploit.
All four assumptions are now demonstrably false.
2. Story 1 – Ten Researchers, One Bug, Six Weeks
In late April I reported a critical flaw that let an attacker purchase a $5,000 item for free by forging the server’s response. The vendor replied, “We already knew – this was first reported in March. You are reporter #11.”
- Ten independent researchers filed the same bug within six weeks.
- A triage engineer on the vendor side posted that duplicate reports flood in within days after an LLM‑generated proof‑of‑concept appears.
- The same LLM that helped honest researchers can be used by anyone with malicious intent.
If ten people reported it, how many found it and did not report it? The 90‑day clock only restrains the researchers who chose to play by the rules; everyone who found the bug and stayed silent gets a free 90‑day head start.
3. Story 2 – 30 Minutes from Patch to Exploit
React recently published a set of patches (CVE‑2026‑23870, CVE‑2026‑44575, …). I downloaded the diff, fed it to an LLM, and within 30 minutes I had a working denial‑of‑service exploit for a local test app. The AI performed the heavy lifting: parsing the diff, locating the vulnerable code path, and generating a PoC.
In the pre‑LLM era, turning a public patch into an exploit took days to weeks of manual reverse engineering. That safety margin no longer exists. The moment a patch lands, assume an exploit already exists.
4. Story 3 – The Week Linux Caught Fire
Act 1: Copy Fail (CVE‑2026‑31431)
On 29 April, researchers at Theori disclosed a kernel crypto bug that gives root on every Linux distribution released since 2017. An AI‑driven scanner found it in a single one‑hour run. The exploit is a 732‑byte Python script that works on Ubuntu, RHEL, Amazon Linux, SUSE, and more.
Within days, Iranian state actors were observed using the bug to build DDoS botnets.
Act 2: Dirty Frag (CVE‑2026‑43284 / CVE‑2026‑43500)
A week later, researcher Hyunwoo Kim released two chained kernel bugs in the IPSec ESP and RxRPC modules. The bugs work even when the Copy Fail mitigation is applied. Kim reported the issues on 29‑30 April and coordinated a five‑day embargo with the kernel mailing list, but an unrelated third party broke the embargo on 7 May and published a full exploit.
Microsoft Defender confirmed in‑the‑wild exploitation within 24 hours. No distribution had a patch for the RxRPC component at that point.
The timeline reads like a horror movie:
| Date | Event |
|---|---|
| Apr 29 | Copy Fail disclosed (AI‑found) |
| Apr 30‑May 5 | Patch merged, mitigations applied |
| May 1‑6 | Nation‑state actors weaponize the bug |
| May 7 | Dirty Frag embargo broken, exploit public |
| May 8 | Real‑world attacks observed |
The 90‑day model collapses when multiple independent researchers can rediscover the same primitive in days and attackers can weaponize it before any vendor can ship a fix.
5. Why the 90‑Day Window No Longer Protects Anyone
- Abundant finders – LLM‑assisted scanners turn vulnerability hunting into a commodity. Ten independent reports are now the norm, not the exception.
- Instant exploit generation – AI can read a diff, understand the vulnerable path, and output a PoC in minutes.
- Broken embargoes – When several parties are working on the same bug class, coordination is futile; a third party can publish the exploit hours after the embargo is announced.
- Monthly patch cycles are dead – Attackers can exploit a vulnerability the same day its patch lands, turning a 30‑day maintenance window into a 30‑day attack window.
6. What the Industry Must Do (One Simple Ask)
Treat every critical issue as a P0 emergency and fix it immediately.
For Vendors
- Start the clock the moment a report lands, not when triage finishes.
- Assume at least nine other researchers already have the bug and that at least one is hostile.
- Deploy an emergency response process that can deliver a patch in hours, not weeks.
For Researchers
- Push for the shortest possible disclosure window. If a vendor cannot ship a fix in a week, that is a vendor problem, not a disclosure problem.
For Vulnerability‑Management Teams
- Move from “weekly scan → sprint triage → monthly patch” to real‑time detection and remediation.
- Automate the entire pipeline: detection, impact analysis, patch generation, and deployment.
7. A Blue‑Team Survival Guide for the LLM Era
The defensive side must adopt the same AI‑driven speed that attackers already enjoy.
- LLM‑assisted code review at PR time – Run a model that flags insecure patterns as part of the CI pipeline, just like a linter.
- Automated patch analysis – When an upstream dependency releases a security patch, automatically pull the diff, let an LLM assess impact on your codebase, and raise a ticket if needed.
- Continuous AI‑powered dependency scanning – Resolve transitive‑dependency vulnerabilities the moment they appear in the upstream repo.
- Pre‑release exploit verification – Before publishing a security patch, feed the diff to an LLM and ask it to generate a PoC. If the PoC works against the unpatched build but fails against the patched one, you have both a regression test and evidence that the patch truly mitigates the issue.
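The automated‑patch‑analysis step above has a deterministic front half that needs no model at all: parse the upstream diff and work out which files and lines the security fix touched, so you know what context to hand to an LLM (or a human) for impact assessment. A minimal sketch, using only the standard library; the sample diff and file path are made up for illustration:

```python
import re

def touched_files(diff_text):
    """Map each file in a unified diff to the line ranges its patch changed.

    Returns {path: [(first_line, last_line), ...]} based on the post-patch
    (+++ side) hunk headers. This tells an impact-analysis step which parts
    of the dependency changed before any LLM is involved.
    """
    result = {}
    current = None
    for line in diff_text.splitlines():
        # "+++ b/<path>" marks the start of a new file's hunks.
        m = re.match(r"\+\+\+ b/(.+)", line)
        if m:
            current = m.group(1)
            result[current] = []
            continue
        # "@@ -a,b +c,d @@" gives the new-file start line and line count.
        m = re.match(r"@@ -\d+(?:,\d+)? \+(\d+)(?:,(\d+))? @@", line)
        if m and current:
            start = int(m.group(1))
            count = int(m.group(2) or "1")
            result[current].append((start, start + count - 1))
    return result

# Hypothetical diff, shaped like a real upstream security patch.
sample = """\
--- a/src/server/escape.js
+++ b/src/server/escape.js
@@ -10,6 +10,8 @@ function render(input) {
 context line
+patched line
+patched line
 context line
"""

print(touched_files(sample))  # {'src/server/escape.js': [(10, 17)]}
```

From here, the pipeline would pull the listed regions plus your own call sites into a prompt and ask the model one narrow question: does this change affect code we ship?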
These steps are not optional niceties; they are the only way to keep the exploitation window from collapsing to zero.
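For the continuous dependency‑scanning step, you do not even need an LLM to get started: OSV.dev offers a free query API (`POST /v1/query`) that returns known advisories for a package name, version, and ecosystem. A minimal sketch that builds the request payloads from a lockfile; the package list is made up, and the HTTP call, ticketing, and LLM triage are left out:

```python
import json

def osv_query(name, version, ecosystem="npm"):
    """Build a request body for OSV.dev's /v1/query endpoint.

    A scanner loop would POST one of these per lockfile entry and open
    a ticket (or page someone) for any advisory that comes back.
    """
    return {
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }

# Hypothetical lockfile entries; send each payload with any HTTP client.
lockfile = [("react", "19.0.0"), ("lodash", "4.17.20")]
payloads = [osv_query(name, ver) for name, ver in lockfile]
print(json.dumps(payloads[1], indent=2))
```

Running this on every upstream push, rather than on a weekly scan, is what closes the gap between “advisory published” and “we know we’re affected”.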
8. Final Thoughts
Picture a sysadmin reading the Dirty Frag advisory on May 7, seeing that no patch exists, that the exploit is already public, and that the mitigation is “disable your IPSec modules”. That admin has 400 servers to patch. That is the new reality, not a hypothetical war‑game scenario.
The 90‑day disclosure policy, monthly patch cycles, and the assumption of a grace period between disclosure and exploitation are all dead. What remains is the ability to move fast, automate hard, and treat critical bugs as emergencies.
The same LLM wave that broke the old model also provides the tools for a new defensive workflow: real‑time scanning, AI‑driven code review, automatic impact analysis, and exploit‑in‑the‑wild testing. The question is whether defenders will adopt these tools before attackers do. Right now, attackers are winning the race.
If you’re still reading, thank you for staying the course. I’ll be publishing deeper dives on each of the stories mentioned here:
- “10 people found my bug before me” – duplicate‑finder problem and bounty implications.
- “30 minutes from patch to exploit” – the React story and the death of the n‑day gap.
- “The week Linux caught fire” – technical deep‑dive of Copy Fail and Dirty Frag.
- “Your CI/CD pipeline needs AI now” – defensive playbook.
- “Blue‑team survival guide for the LLM era” – practical integration patterns.
Feel free to reach out on Twitter/X with thoughts, disagreements, or suggestions.