Drew Breunig's whenwords project reimagines software distribution by providing specifications and tests without implementation code, relying on AI to generate language-specific versions on demand.

The traditional model of open-source libraries involves distributing concrete implementations in specific programming languages. Drew Breunig's whenwords project challenges this paradigm by delivering a fully specified relative time formatting library without a single line of implementation code. This approach raises fundamental questions about knowledge representation, software distribution, and the evolving role of AI in development workflows.
At its core, whenwords solves common temporal representation problems through five key functions:
- timeago for relative timestamps ("3 hours ago")
- duration for human-readable intervals ("2h 30m")
- parse_duration for converting text back to seconds
- human_date for contextual date strings ("Yesterday", "Last Tuesday")
- date_range for intelligent date formatting ("March 5–7, 2024")
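To make the behavior concrete, here is a minimal Python sketch of what timeago and duration might look like. The thresholds, wording, and edge-case handling are illustrative guesses inferred from the examples above, not the rules actually defined in whenwords' SPEC.md.

```python
from datetime import datetime, timezone

def timeago(then: datetime, now: datetime | None = None) -> str:
    # Relative-timestamp formatting ("3 hours ago"). Assumes timezone-aware
    # datetimes; the cutoffs here are assumptions, not the SPEC.md thresholds.
    now = now or datetime.now(timezone.utc)
    seconds = int((now - then).total_seconds())
    if seconds < 60:
        return "just now"
    minutes = seconds // 60
    if minutes < 60:
        return f"{minutes} minute{'s' if minutes != 1 else ''} ago"
    hours = minutes // 60
    if hours < 24:
        return f"{hours} hour{'s' if hours != 1 else ''} ago"
    days = hours // 24
    return f"{days} day{'s' if days != 1 else ''} ago"

def duration(seconds: int) -> str:
    # Compact interval formatting ("2h 30m"); sub-minute values collapse
    # to "0m" in this sketch, which may not match the real spec.
    hours, rem = divmod(seconds, 3600)
    minutes = rem // 60
    parts = []
    if hours:
        parts.append(f"{hours}h")
    if minutes or not parts:
        parts.append(f"{minutes}m")
    return " ".join(parts)
```

In the whenwords model, code like this is never shipped; an AI assistant would generate something equivalent on demand, constrained by the specification and test suite.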
What makes this project remarkable isn't its functionality—similar libraries exist—but its distribution method. Instead of language-specific implementations, whenwords provides two critical artifacts: a comprehensive specification (SPEC.md) detailing expected behaviors, and a set of language-agnostic test cases (tests.yaml) defining input/output pairs. The installation instructions consist of a single prompt instructing users to feed these artifacts to AI assistants like Claude or Codex, which then generate and validate implementations in the target language.
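The sketch below illustrates the general idea of that validation step: load the language-agnostic cases and check a freshly generated implementation against each input/output pair. The tests.yaml field names shown here are assumptions for illustration; the repository's actual schema and runner may differ.

```python
import yaml  # PyYAML; the real project may use a different loader or harness

# Hypothetical tests.yaml excerpt -- field names are assumptions.
SAMPLE_TESTS = """
tests:
  - function: duration
    input: {seconds: 9000}
    expected: "2h 30m"
  - function: parse_duration
    input: {text: "2h 30m"}
    expected: 9000
"""

def run_suite(implementation: dict, source: str = SAMPLE_TESTS) -> int:
    # Run each language-agnostic case against a generated implementation,
    # passed in as a mapping from function name to callable.
    failures = 0
    for case in yaml.safe_load(source)["tests"]:
        fn = implementation[case["function"]]
        got = fn(**case["input"])
        if got != case["expected"]:
            failures += 1
            print(f"FAIL {case['function']}: expected {case['expected']!r}, got {got!r}")
    return failures
```

A zero failure count is the signal that the generated code conforms to the spec, regardless of which model or language produced it.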
This approach demonstrates several advantages. By separating specification from implementation, whenwords becomes truly language-agnostic, with confirmed working versions in Ruby, Python, Rust, Elixir, Swift, PHP, and Bash. The maintenance burden shifts from the library maintainer to AI systems, as updates to the specification automatically propagate through regenerated implementations. It also eliminates dependency management overhead—users get fresh, dependency-free code generated specifically for their environment.
However, this model introduces new considerations. The quality of AI-generated implementations depends on both the clarity of specifications and the capabilities of the chosen AI system. Edge cases not covered in tests.yaml might surface differently across languages. Long-term maintenance raises questions: How will behavioral drift in AI models affect regenerated code? Can the community contribute test cases that expand coverage without complicating the core specification?
Whenwords represents more than a utility library; it's a prototype for AI-mediated software distribution. As large language models improve at generating correct code, we might see more "specification-first" projects where human effort concentrates on defining precise behaviors while delegating implementation to machines. This could accelerate cross-language compatibility and reduce barriers to adopting standardized functionality across diverse tech stacks.
The project also subtly challenges notions of authorship in open source. Traditional licenses govern code, but how does ownership apply when implementations are dynamically generated? Breunig's approach suggests a future where the most valuable open-source artifacts become highly refined specifications and test suites—the conceptual DNA of software—with executable code becoming an ephemeral, on-demand byproduct.
While not without challenges, whenwords offers a compelling glimpse into a paradigm where developers define what software should do rather than precisely how it does it. As AI continues reshaping our toolchains, this code-less approach might transform how we conceptualize, distribute, and maintain foundational software utilities.
