Long-term Emacs users face persistent minor frustrations despite the editor's infinite customizability. Evan Moses discovers that modern LLMs effectively generate functional Emacs Lisp code, solving niche problems like syntax highlighting and log parsing that weren't worth manual implementation.
For decades, Emacs has stood as the ultimate customizable editor—a digital workshop where users reshape their environment through Emacs Lisp (elisp). This freedom comes with cognitive overhead; every customization requires understanding subsystems, APIs, and conventions. Evan Moses, with 25 years of Emacs experience, articulates a common dilemma: the accumulation of minor annoyances deemed not worth solving. These unresolved friction points—awkward log formatting, missing syntax highlighting, or manual workarounds—represent deferred customization. Moses' revelation comes not through deeper elisp mastery, but through leveraging large language models as collaborative coding partners.

The core insight is straightforward yet profound: LLMs like Claude Opus and Gemini Pro demonstrate unexpected proficiency in generating functional elisp. Moses attributes this to two factors: abundant high-quality documentation in Emacs' ecosystem and vast open-source elisp examples in training data. The practical outcomes he achieved (a Cedar policy language major mode, JSON log backtrace extraction, and Go test log highlighting) show LLMs handling both syntactic boilerplate such as font-lock rules and structural logic such as tree-sitter integration. The code snippet in Moses' post illustrates the typical collaboration pattern: the LLM generates a working, documented foundation, while human judgment handles integration and context-specific conditions such as activation flags.
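To make the shape of such a result concrete, here is a minimal sketch of the kind of derived major mode an LLM might hand back for a small policy language; the mode name, keyword list, and file extension below are illustrative assumptions, not Moses' actual Cedar mode:

```elisp
;; Hypothetical skeleton in the style of an LLM-generated mode;
;; names and keywords are illustrative, not Moses' code.
(defconst my-cedar-font-lock-keywords
  `((,(regexp-opt '("permit" "forbid" "when" "unless") 'words)
     . font-lock-keyword-face)
    ("\\_<\\(principal\\|action\\|resource\\|context\\)\\_>"
     . font-lock-builtin-face))
  "Basic font-lock rules for Cedar-style policy text.")

(define-derived-mode my-cedar-mode prog-mode "Cedar"
  "Toy major mode for editing Cedar-like policy files."
  (setq-local font-lock-defaults '(my-cedar-font-lock-keywords))
  (setq-local comment-start "// "))

;; Associate the mode with a (hypothetical) file extension.
(add-to-list 'auto-mode-alist '("\\.cedar\\'" . my-cedar-mode))
```

A skeleton like this is easy to review and extend by hand, which matches the division of labor described above: the generated scaffolding handles the font-lock plumbing, and the human adjusts keywords, hooks, and activation details.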
What makes this approach transformative isn't technical novelty—the regex-based highlighting Moses implemented is conceptually simple—but its dramatic reduction in activation energy. Customization tasks migrate from "not worth the yak-shaving" to "solvable in minutes." This shifts Emacs' customization paradigm: instead of requiring deep subsystem expertise upfront, users can delegate initial implementation to LLMs and focus on refinement. Moses notes this is particularly valuable for navigating unfamiliar territories like tree-sitter grammars or font-lock internals, where LLMs provide scaffolding that accelerates learning.
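To illustrate how small these wins can be, the following sketch highlights pass and fail lines in a `go test` output buffer using the built-in hi-lock commands; the function name and chosen faces are assumptions for demonstration, not Moses' implementation:

```elisp
(require 'hi-lock)  ; provides `highlight-regexp'

;; Hypothetical helper, not Moses' code: mark up Go test results
;; in the current buffer with stock Emacs faces.
(defun my/highlight-go-test-output ()
  "Highlight PASS/FAIL lines in a `go test' output buffer."
  (interactive)
  (highlight-regexp "^--- FAIL:.*$" 'error)    ; failing tests
  (highlight-regexp "^--- PASS:.*$" 'success)  ; passing tests
  (highlight-regexp "^ok\\s-+\\S-+" 'success)) ; package summary lines
```

A dozen lines of this sort were never difficult, only never prioritized, which is exactly the activation-energy shift Moses describes.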
Three significant implications emerge. First, LLM assistance democratizes power-user customization: tasks previously requiring elisp fluency become accessible through natural-language prompts. Second, it creates a new debugging workflow in which users diagnose and repair generated code rather than building from scratch, a hybrid approach that leverages both machine efficiency and human judgment. Third, it highlights architectural differences between editors: Emacs' Lisp foundation makes such interventions possible, while less extensible editors might not benefit similarly.
Counterpoints warrant consideration. Over-reliance on generated code risks superficial understanding of Emacs internals, potentially creating maintenance debt. The ethical dimension of proprietary code generation—Moses' most advanced implementations remain private—raises questions about knowledge sharing in corporate environments. Additionally, prompt engineering becomes a new skill; describing problems effectively requires articulating context, constraints, and desired behaviors with precision.
Moses' experiments suggest LLMs aren't replacing elisp expertise but augmenting it. The successful implementations required reviewing, modifying, and integrating generated code—tasks demanding contextual awareness LLMs lack. This symbiosis echoes historical programmer tools: compilers didn't eliminate assembly knowledge but shifted focus to higher-level problem solving. For Emacs users, LLMs become cognitive leverage against accumulated friction, transforming minor annoyances from tolerated compromises into solvable opportunities.
The invitation extends beyond Emacs: any tool with documented extension mechanisms—Neovim's Lua, VSCode's TypeScript—could benefit similarly. As Moses concludes, the critical step isn't technical but behavioral: recognizing when a persistent annoyance becomes tractable through LLM collaboration. With careful implementation review, these tools offer unprecedented power to reshape our digital environments—not through revolutionary breakthroughs, but through eliminating countless minor frustrations that collectively shape daily experience.
