AI is Breaking Two Vulnerability Cultures
#Vulnerabilities

Trends Reporter
6 min read

AI is fundamentally changing how security vulnerabilities are discovered and disclosed, challenging long-established practices in both coordinated disclosure and 'bugs are bugs' approaches.

The tension between two distinct approaches to vulnerability management is intensifying as AI accelerates the discovery and exploitation of security flaws. On one side stands the 'coordinated disclosure' culture, where vulnerabilities are reported privately to vendors with time to fix before public disclosure. On the other is the 'bugs are bugs' culture, particularly prevalent in Linux, where fixes are implemented quickly without drawing attention to them.

This dynamic came into sharp focus recently with the Copy Fail vulnerability. When Hyunwoo Kim discovered that the existing fixes were insufficient, he shared a patch following standard Linux procedure: alerting a closed list of security engineers while fixing the bug quietly in the open. The goal was to 'embargo' knowledge of the serious vulnerability, so that only those positioned to address it would know, keeping the details quiet for a few days.

However, someone else noticed the change, realized the security implications, and shared it publicly. With the embargo broken, full details became visible. This incident illustrates the fundamental challenge: the historical assumptions underlying both vulnerability cultures are being undermined by AI-powered detection capabilities.

The 'bugs are bugs' approach has always relied on the assumption that 'often people won't notice, with so many changes going past, and there's still time to get machines patched.' This approach never worked perfectly, but with AI becoming proficient at finding vulnerabilities, it's becoming untenable.

As one commenter noted, 'With AI, anyone can do this to any software.' The signal-to-noise ratio for examining commits has improved dramatically, making it much easier to identify security patches. Additionally, having AI evaluate each commit as it passes is increasingly cost-effective and accurate.
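The commit-watching pipeline described above can be sketched in a few lines. This is a minimal illustration, not any project's actual tooling: the keyword heuristic in `looks_like_security_fix` merely stands in for the AI call the commenters describe, and the function and hint names are invented for this example.

```python
import subprocess

# Crude keyword heuristic standing in for an actual model call; a real
# scanner would send each diff to an LLM and parse its verdict instead.
SECURITY_HINTS = (
    "overflow", "out-of-bounds", "use-after-free", "bounds check",
    "off-by-one", "sanitize", "cve",
)

def looks_like_security_fix(diff_text: str) -> bool:
    """Return True if the diff contains hints of a security-relevant change."""
    lowered = diff_text.lower()
    return any(hint in lowered for hint in SECURITY_HINTS)

def scan_recent_commits(repo_path: str, count: int = 50) -> list[str]:
    """Flag recent commits in repo_path that look like quiet security fixes."""
    hashes = subprocess.check_output(
        ["git", "-C", repo_path, "log", f"-{count}", "--format=%H"],
        text=True,
    ).split()
    flagged = []
    for commit in hashes:
        diff = subprocess.check_output(
            ["git", "-C", repo_path, "show", "--format=", commit],
            text=True, errors="replace",
        )
        if looks_like_security_fix(diff):
            flagged.append(commit)
    return flagged
```

Even this toy version shows why the economics have shifted: iterating over every commit and classifying it is a cheap, embarrassingly parallel loop, whether the classifier is a keyword list or a frontier model.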

The coordinated disclosure model faces its own challenges. The historical 90-day disclosure window made sense when vulnerability detection was slow. If you found something and reported it to the vendor, there was a good chance no one else would notice during that time. But now, with AI-assisted groups scanning software for vulnerabilities, that assumption no longer holds.

In the Copy Fail case, just nine hours after Kim reported the ESP vulnerability, Kuan-Ting Chen independently reported it. The era of long embargoes appears to be ending, as they create a false sense of non-urgency and limit which actors can work to fix a flaw.

The implications extend beyond just disclosure timing. As one commenter observed, 'This is actually breaking three vulnerability cultures.' Beyond the two mentioned, the culture of delaying upgrades and staying on stable versions for as long as possible is becoming increasingly untenable.

'If everything that's not latest can be trivially scanned and exploited,' argued one tech professional, 'then in the extreme I think there's a decent chance projects like Debian might have to radically overhaul or just shut down completely - the whole philosophy of slow and steady with old code just won't work.'

Others pushed back on this assessment, noting that Debian has always been diligent about shipping security patches to stable releases. 'Debian continuously issues security updates for stable versions, ingestable with automatic updates,' one commenter countered. 'Stable doesn't mean that vulnerabilities aren't getting fixed.'

The broader question is how to adapt to this new reality. Some suggest very short embargoes as a compromise, though even this may not be sustainable. 'I don't know how to resolve this,' admitted the original author, 'but personally very short embargoes seem like a good approach, and they'd need to get even shorter over time.'

AI offers potential solutions as well as problems. 'Luckily AI can speed up defenders as well as attackers here,' the author noted, 'allowing embargoes that would previously have been uselessly short.'

Testing with current AI models shows promise. When given specific commit hashes, models like Gemini 3.1 Pro, ChatGPT-Thinking 5.5, and Claude Opus 4.7 all correctly identified security vulnerabilities. When given just diffs without context, Gemini was certain it was a security fix, while GPT thought it probably was and Claude thought it probably wasn't.

This capability raises questions about the future of software development. Some suggest that AI will eventually make it standard practice to pre-scan all code for vulnerabilities before deployment. 'If AI vulnerability detection friction becomes low enough it'll become common/forced practice to pre-scan code,' one commenter predicted.
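A pre-deployment gate of that kind could be wired into CI as a small script. The sketch below is purely illustrative: the findings format, severity labels, and the stubbed scanner output are all assumptions, standing in for whatever AI scanner a pipeline might actually invoke.

```python
import sys

def gate(findings: list[dict]) -> int:
    """Return a CI exit code: 0 to allow deployment, 1 to block it
    when any high-severity finding is present."""
    blocking = [f for f in findings if f.get("severity") in ("high", "critical")]
    for finding in blocking:
        print(f"BLOCKED: {finding['file']}: {finding['summary']}", file=sys.stderr)
    return 1 if blocking else 0

if __name__ == "__main__":
    # A real pipeline would populate this list from the AI scanner's
    # report; a hypothetical finding is hard-coded here for illustration.
    findings = [
        {"file": "src/copy.c", "severity": "high",
         "summary": "possible out-of-bounds copy"},
    ]
    sys.exit(gate(findings))
```

Because CI systems treat any nonzero exit status as a failed step, a gate like this composes with existing pipelines without new infrastructure; the open question the commenters raise is whether the scanner behind it is accurate enough to enforce.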

Others are more skeptical about the timeline. 'I've been dealing with a bunch of AI-generated (or at least -assisted) vulnerability reports lately,' one developer shared. 'In many cases the reports include proposed patches to fix the issues. It's been..... interesting. In many cases, the analysis provided in the report has been accurate and helpful. In some cases, the proposed patches have also been good, and we've accepted them with minimal or no changes. In other cases, despite finding a valid issue, and even providing a good analysis of the problem, the AI tool's suggested patch has been, quite simply, wrong.'

The fundamental challenge is the classic asymmetry: defenders need to stop every attack, while attackers only need one exploit to succeed. AI potentially changes this equation by making vulnerability detection cheap and accessible to everyone.

'When you have a large surface area and limited resources, it's much easier to be the side that only has to succeed once,' one commenter observed. 'AI eliminates the limited resources problem.'

Others see a darker future. 'The US is at war. Much of the world is at war at the cyber attack level right now,' warned one security professional. 'The US, the EU, most of the Middle East, Israel, Russia... Major services have been attacked and have gone down for days at a time - Ubuntu, Github, Let's Encrypt, Stryker. Entire hospital systems have had to partially shut down. Now, in the middle of this, AI has made attacks much faster to generate. Faster than the defensive side can respond.'

Despite these concerns, some see a path forward. 'Right now we are at a point in time when AI can find bugs for attackers and defenders, but defenders did not fix/find those bugs yet,' one commenter offered. 'In time most of the bugs AI can find will be fixed, and things will calm down. Some bugs will be left, but will be too complex to find and weaponise (or rarely). In short, attackers have advantage for a brief time now, but ultimately defenders will win.'

The changing vulnerability landscape demands new approaches to security. As one commenter put it, 'We need automated patch and release cycles. So far we've relied on incredibly slow manual processes to accept reports, investigate, verify, patch, and prepare releases. Releasing a fix often takes months. This is way too slow when attackers can just churn out new exploits in hours.'

The fundamental question remains: how do we build systems that can respond to vulnerabilities faster than they can be discovered and exploited? The answer may lie in rethinking our entire approach to software development, security, and vulnerability management in an AI-accelerated world.

As one commenter noted, 'The bugs are bugs description reads pretty insane to me personally but I know linux world has many people valuing principle of it over practical matters.' The tension between principle and practicality will only intensify as AI continues to change the security landscape.
