AGENTS.md as a dark signal // Josh Mock
#AI



A senior engineer reflects on the ambivalent reality of AI-assisted coding, where a simple AGENTS.md file serves as both a practical tool for guiding AI agents and a troubling signal about the quality of the code being generated. Its presence forces a reevaluation of how we maintain open source software in an era of pervasive AI assistance.

It's been three years since my last post, and the landscape of software engineering has been fundamentally reshaped by the unavoidable impact of AI. As a senior-leaning engineer, I find myself ambivalent about whether AI tools are making us more productive, especially given the broader societal costs that large language models impose on economies, employment, intellectual property, and the environment. Yet complete avoidance feels irresponsible when friends and family ask for insights about LLMs, agents, and human-in-the-loop systems. The tension is real: we must engage with this tidal wave of change while maintaining a critical perspective on its implications.

My current experimentation involves using GitHub's Copilot agents to automate tasks that have languished on my backlog for years. The goal is simple: see if agents can clear technical debt while I focus on higher-value work. A teammate observed this process, and we shared a laugh about the agents' peculiar mix of intelligence and blindness. The agents would cleverly write unit tests to validate their changes, yet fail to notice that the test globbing patterns prevented CI jobs from even running those tests—a problem that would have caused failures on Windows systems. This pattern reveals a fundamental limitation: agents excel at executing discrete tasks but struggle with systemic awareness.
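To make that failure mode concrete, here is a minimal, purely hypothetical sketch of how a test glob can hide an agent's new tests from CI. The vitest setup, file names, and pattern below are assumptions for illustration; the post doesn't say which tooling the project actually uses.

```typescript
// vitest.config.ts - hypothetical illustration of the gap described above.
// The include pattern only matches the project's original test layout, so tests an
// agent adds elsewhere (say, src/__tests__/parser.spec.ts) are silently skipped by CI,
// letting a platform-specific regression sail through a green build.
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    // Agent-added *.spec.ts files never match this glob, so they never run.
    include: ["test/**/*.test.ts"],
  },
});
```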

The proposed solution is elegant in its simplicity: instruct agents to write their learnings to an AGENTS.md file in the repository, creating durable memory for future agent sessions. This file serves as a shared context, a way for subsequent agents to understand the project's quirks, patterns, and pitfalls. It's a clever workaround for the statelessness of current AI systems, a way to bootstrap context across sessions. For the agent, it's a memory palace. For the human, it's a documentation file.
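The post doesn't reproduce the file itself, but an agent-maintained AGENTS.md of the kind described might accumulate notes like these. Every detail below is a hypothetical illustration, not content from any real project.

```markdown
# AGENTS.md

## Lessons recorded by previous agent sessions
- CI only runs tests matched by `test/**/*.test.ts`; add new tests there or update the glob as well.
- Build scripts must join paths with `path.join`; hard-coded `/` separators break the Windows CI job.
- The `docs/` directory is generated; edit the templates under `scripts/docs/` instead.
```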

Yet for many senior engineers, the presence of an AGENTS.md or CLAUDE.md file triggers a different reaction. It becomes a dark signal—a subtle indicator that the codebase may have been "vibe-coded" with minimal human oversight. The implication is troubling: if agents have been left to their own devices, the code might be of dubious quality, riddled with subtle bugs, security vulnerabilities, or architectural inconsistencies that only become apparent under stress. Some projects openly admit to 100% AI-generated code with few human checks, but the total impact remains immeasurable. A file hinting that agents have been at work becomes, for some, a reason to look away.

This perspective shifts dramatically when viewed from the maintainer's chair of heavily-used open source projects. While seasoned engineers might roll their eyes at an AGENTS.md file and step away in disgust, the reality is that the "dark forest of vibe coders" exists. These contributors are opening pull requests on your projects, whether you know it or not. Some are vibe coding without even realizing it, using LLM-powered autocomplete that's enabled by default in their IDEs. In this environment, an AGENTS.md file transforms from a dark signal into a protective mechanism.

Consider the open source maintainer's dilemma: you receive contributions from a diverse community, including those using AI assistance. Without guidance, these agents might introduce subtle errors—incorrect assumptions about platform behavior, overlooked edge cases, or security oversights that are difficult to catch in code review. An AGENTS.md file becomes a form of gentle guidance, a way to provide "railings" for agents that might otherwise wander into pitfalls. It's not about trusting vibe coders; it's about recognizing that AI assistance is already pervasive and creating guardrails to prevent obvious mistakes.
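Concretely, a maintainer-authored version of the file reads less like agent memory and more like a contribution checklist. The rules below are a hypothetical sketch of such railings, not drawn from any real project.

```markdown
## Guardrails for AI-assisted contributions
- Run the test suite on Linux, macOS, and Windows (locally or via CI) before opening a pull request.
- Do not add new runtime dependencies without opening an issue first.
- Every behavioral change needs a test that fails without the change.
- Never hand-edit files under `lib/`; they are generated from `src/`.
```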

This duality reflects a broader tension in software development. The AGENTS.md file represents a pragmatic adaptation to a new reality, where AI tools are embedded in our workflows whether we explicitly choose them or not. For maintainers, it's a tool for risk mitigation. For contributors, it's a way to ensure their AI-assisted work meets project standards. For the industry, it's a marker of how quickly our practices are evolving to accommodate non-human collaborators.

The file's existence forces us to confront uncomfortable questions about quality, responsibility, and the future of software craftsmanship. When we see AGENTS.md, are we seeing a project that values efficiency over quality, or one that's proactively managing the risks of AI assistance? The answer likely depends on context, but the signal itself is ambiguous—a Rorschach test for our attitudes toward AI in development.

As we move forward, the challenge isn't to reject these tools outright or embrace them uncritically, but to develop nuanced practices that leverage their strengths while mitigating their weaknesses. The AGENTS.md file, for all its simplicity, embodies this balance: it's both an acknowledgment of AI's role in our work and a tool for constraining its excesses. For maintainers, it might be worth the cringe if it means fewer broken builds and more reliable contributions. For the ecosystem, it represents an evolving contract between human and machine contributors—a document that says, "Here's what we've learned about working together."

The dark signal may be dark, but it's also illuminating. It reveals the seams in our current tools, the gaps in our processes, and the adaptations we're making to a new reality. Whether we see it as a warning or a guide depends on where we stand, but we can no longer pretend it doesn't exist.
