Linus Torvalds: 'The AI Slop Issue Is *NOT* Going To Be Solved With Documentation'


Hardware Reporter

Linux creator Linus Torvalds rejects documentation-focused approaches to AI-generated code submissions, arguing kernel policies should treat AI as 'just another tool' while acknowledging bad actors won't self-identify.


Linux kernel developers have been debating policy frameworks for AI-assisted code contributions since early 2026, culminating in a characteristically direct intervention from project founder Linus Torvalds. The discussion centers on whether the kernel should implement specific documentation requirements for AI-generated patches—a proposal Torvalds vehemently opposes.

"Thinking LLMs are 'just another tool' is to say effectively that the kernel is immune from this. Which seems to me a silly position," Torvalds wrote in a Linux Kernel Mailing List (LKML) thread. His counterargument cuts to practical realities: "No. Your position is the silly one. There is zero point in talking about AI slop. That's just plain stupid. Why? Because the AI slop people aren't going to document their patches as such."


The term "AI slop" refers to low-quality, automatically generated code submissions that lack proper context or understanding of kernel conventions. Torvalds contends that documentation requirements would only be followed by conscientious contributors—the very developers least likely to submit problematic AI-generated patches. Bad actors, he argues, would simply ignore disclosure protocols.

Torvalds advocates maintaining the kernel's tool-agnostic stance: "I strongly want this to be that 'just a tool' statement. The AI slop issue is NOT going to be solved with documentation, and anybody who thinks it is either just naive, or wants to 'make a statement'. Neither of which is a good reason for documentation."

This position reflects the Linux kernel's historical pragmatism. Kernel maintainers already reject a large share of human-written patches during review, relying on rigorous technical scrutiny rather than submission labels. A documentation requirement risks creating a false sense of security while adding bureaucratic overhead.

Code quality concerns underscore why review rigor, not labeling, is the backstop. AI-generated code can exhibit subtle performance regressions, such as unoptimized memory access patterns, suboptimal scheduling decisions, or inefficient locking, that evade static analysis tools but degrade real-world throughput. Catching these issues requires human expertise during code review, regardless of how the patch originated.

The debate continues within the Linux development community, balancing concerns about AI-generated code quality against practical maintainer workflows. Torvalds' stance prioritizes technical rigor over procedural solutions, maintaining the kernel's performance-first ethos in the AI era.
