Ghostty's AI Policy: A Pragmatic Approach to Managing AI-Assisted Contributions
AI & ML Reporter

The open-source terminal emulator Ghostty has published a detailed AI usage policy for external contributors, requiring full disclosure of AI tools used, mandating human verification of all AI-generated code, and banning AI-generated media. The policy reflects a nuanced stance: the project uses AI internally but must protect maintainers from low-quality contributions.

Ghostty has formalized its approach to AI-assisted contributions in a public policy document. Unlike many projects that either embrace AI wholesale or reject it entirely, Ghostty's policy stakes out a pragmatic middle ground, acknowledging AI's utility while establishing strict guardrails to protect project quality and maintainer time.

The Core Requirements

Ghostty's policy establishes several non-negotiable rules for external contributors:

Mandatory Disclosure: All AI usage must be explicitly stated in pull requests, including the specific tool used (such as Claude Code, Cursor, or Amp) and the extent of AI assistance. This transparency requirement applies to any form of AI involvement, from code generation to documentation assistance.
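The policy does not appear to prescribe an exact format, but a disclosure could be as simple as a short note in the pull request description. The following sketch is hypothetical; the tool name and the described extent of assistance are illustrative only:

    AI Disclosure
    Tool: Claude Code
    Extent: Generated the first draft of the change. I rewrote the error
    handling, added tests, and manually verified the behavior on macOS
    and Linux.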

Issue-Driven Contributions Only: Pull requests created with AI assistance can only address previously accepted issues. Drive-by contributions that don't reference an existing issue will be closed, regardless of their quality. This prevents maintainers from being inundated with unsolicited AI-generated code that requires validation without context.

Human Verification Required: AI-generated code must be fully verified through human testing before submission. The policy explicitly warns against allowing AI to write code for platforms or environments the contributor cannot manually test. This addresses a common failure mode where AI generates code that appears correct but fails in edge cases or specific environments.

No AI-Generated Media: While text and code assistance is permitted under strict conditions, AI-generated art, images, videos, and audio are completely banned from contributions.

The Human-in-the-Loop Mandate

For issues and discussions, the policy requires a "full human-in-the-loop" approach. Any content generated with AI must be reviewed, edited, and trimmed by a human before submission. The policy notes that AI tends to produce verbose, noisy output that distracts from the main point, requiring human curation to maintain clarity and focus.

This requirement reflects a practical understanding of AI's current limitations. As the policy states: "In a perfect world, AI would produce high-quality, accurate work every time. But today, that reality depends on the driver of the AI. And today, most drivers of AI are just not good enough."

Maintainer Exemption and Internal Usage

Notably, the policy applies only to external contributions. Maintainers are exempt from these rules and may use AI tools at their discretion, having "proven themselves trustworthy to apply good judgment." This distinction acknowledges that experienced contributors have the context and expertise to use AI effectively as a productivity tool.

The project is transparent about its own AI usage. Ghostty is "written with plenty of AI assistance," and many maintainers embrace AI tools in their workflow. The policy explicitly states that its strict rules are not due to an "anti-AI stance," but rather a response to "the number of highly unqualified people using AI." The document concludes: "It's the people, not the tools, that are the problem."

Enforcement and Consequences

The policy includes a stark warning: "Bad AI drivers will be banned and ridiculed in public." This blunt language reflects the project's frustration with contributors who submit untested, low-quality AI-generated code that places the validation burden entirely on maintainers.

The policy also addresses the learning aspect. While Ghostty loves to help junior developers grow, its advice to those interested in learning is simple: "don't use AI, and we'll help you." This positions AI assistance as a productivity tool for experienced developers rather than a crutch for learning fundamental skills.

Context: The Broader Open-Source Challenge

Ghostty's policy emerges from a broader challenge facing open-source projects: the flood of AI-generated contributions that require significant maintainer time to review and validate. Many projects have struggled with contributors using AI tools to generate code without understanding it, leading to PRs that appear superficially correct but contain subtle bugs or architectural issues.

The policy's requirement that AI-generated code only address accepted issues is particularly significant. It prevents the common scenario where a contributor uses AI to generate code for a feature they think would be useful, only for maintainers to discover the feature doesn't align with project goals or has already been implemented differently.

A Model for Other Projects?

Ghostty's approach offers a template for other open-source projects grappling with AI-assisted contributions. Rather than banning AI entirely or allowing unrestricted use, the policy creates a framework that:

  1. Maintains transparency through mandatory disclosure
  2. Protects maintainer time by limiting unsolicited contributions
  3. Ensures quality through human verification requirements
  4. Acknowledges reality by exempting trusted maintainers
  5. Educates contributors about proper AI tool usage

The policy's tone—direct, unapologetic, and focused on practical outcomes—reflects the project's engineering mindset. It doesn't try to solve the philosophical question of AI in software development; it simply establishes rules that work for this specific project and its maintainers.

For developers considering contributing to Ghostty or similar projects, the policy serves as a reminder that AI tools are most effective when used by developers who understand the code they're generating and can verify its correctness. The policy's emphasis on human verification and testing underscores that AI assistance doesn't eliminate the need for fundamental software engineering skills—it merely changes how those skills are applied.

The full policy is available on GitHub in the ghostty-org/ghostty repository as AI_POLICY.md.
