EFF Mandates Human Documentation for AI-Generated Code Contributions
#Regulation


Regulation Reporter

The Electronic Frontier Foundation establishes new contributor guidelines requiring human-authored documentation and full disclosure for AI-generated code submissions.


The Electronic Frontier Foundation (EFF) has implemented a formal policy governing the acceptance of AI-generated code contributions to its open source projects. The policy sets out clear requirements for contributors who use large language models (LLMs) in their development workflows.

Under the new policy, contributors may submit LLM-generated code to EFF projects including Certbot, Privacy Badger, Boulder, and Rayhunter, but must adhere to three core compliance requirements:

  1. Mandatory Disclosure: Contributors must explicitly declare when submissions contain AI-generated code (a sample disclosure note follows this list)
  2. Human Documentation: All documentation and code comments must be authored by humans
  3. Thorough Review: Submitters must demonstrate complete understanding of contributed code through comprehensive testing and review
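
The article does not indicate that EFF prescribes any specific wording for the disclosure, so as a purely hypothetical sketch, a contributor might satisfy the first requirement with a note like this in a pull request description or commit message:

    Disclosure: Parts of this change were generated with an LLM.
    All documentation and code comments are human-written, and I
    have reviewed and tested the generated code and can explain
    how it works.

However it is phrased, the point is that maintainers learn up front that AI-generated code is present and can gauge the review effort accordingly.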

EFF Technical Director Alexis Hancock and Staff Technologist Samantha Baldwin outlined the rationale behind these requirements: "LLMs excel at producing code that appears human-generated but often contains underlying bugs replicated at scale. This makes LLM-generated code exhausting to review, particularly for smaller teams with limited resources." The foundation observed that insufficiently reviewed AI submissions force maintainers into extensive refactoring rather than straightforward code review, especially when contributors lack full understanding of their AI-generated code.

The policy takes effect immediately for all new contributions. Project maintainers reserve the right to reject submissions deemed unreviewable under the policy's criteria. EFF emphasizes that the focus remains on producing high-quality software tools rather than maximizing code volume through accelerated AI generation.

OpenUK CEO Amanda Brock contextualized the policy within broader industry challenges: "We're seeing a combination of scraped content volume and automated contributions creating quality concerns. This policy represents early recognition of AI's impact on open source maintenance burdens." She anticipates similar policies emerging across other open source projects as maintainers grapple with scalability challenges.

Beyond technical considerations, EFF's policy cites broader concerns inherent in LLM usage: "These tools raise privacy, censorship, ethical, and climate concerns. We are once again in 'just trust us' territory, where major technology providers obscure their systems' operational parameters." This stance reflects EFF's consistent advocacy for transparency in technology governance.

Contributors to EFF projects should write their own documentation and code comments and keep thorough records of the testing and review behind any AI-assisted submission. The policy positions human oversight as non-negotiable for critical project components, establishing a clear boundary between acceptable automation and essential human curation in open source development.
