Centralized AI Skill Definitions: Scaling Consistent Code Generation Across Engineering Organizations

Backend Reporter

How to create and enforce AI skill definitions that ensure generated code matches your organization's architecture patterns and conventions, with practical implementation for Go backend API teams.

Organizations adopting AI coding tools face a critical challenge: how to get the leverage of AI-assisted development without sacrificing architectural consistency. When AI generates code without proper guidance, it produces functional but inconsistent implementations that violate established patterns, creating code review bottlenecks and technical debt.

The Problem: AI Without Context

Give an AI coding assistant a Go codebase with no context and ask it to add a new endpoint. You'll get something that compiles. It might even work. But it won't match your architecture. It won't follow your error handling patterns. It won't use your shared libraries. It won't write the tests the way your team expects.

The AI doesn't know:

  • That your transport layer is deliberately dumb — just request mapping, no business logic
  • That you use ULIDs, not UUIDs
  • That all dependencies must be passed explicitly — no globals, no init()
  • That new libraries need approval before being added to go.mod
  • That integration tests must use testcontainers against a real database, not mocks

Without these rules, every AI-generated PR becomes a code review battle. The reviewer catches the violations, requests changes, the engineer fixes them — and the AI makes the same mistakes next time. You're paying for AI tooling but not getting the leverage.

The Solution: Centralized Skill Definitions

A skill definition fixes this. It's a configuration file that tells the AI how your team works — the patterns, the conventions, the guardrails. Once it's in place, AI output matches your standards from the first generation.
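To make this concrete, here is a hypothetical excerpt of what such a skill definition file might look like. The headings and wording are illustrative, not the actual file; the rules are the ones the article covers.

```markdown
# Backend API Service — AI Skill Definition (illustrative excerpt)

## Architecture
- Layers: transport (REST/gRPC/GraphQL) → usecase → storage.
- Transport only maps requests and responses. Business logic lives in
  internal/usecase/ and is transport-agnostic.
- IDs are ULIDs, never UUIDs.
- All dependencies are passed explicitly. No globals, no init().

## Dependencies
- Check the company shared libraries before adding anything to go.mod.
- New external libraries require approval first.

## Testing
- Unit tests: gomock + testify, colocated with source, run in parallel.
- Integration tests: testcontainers against real Postgres. Never mock
  the database in integration tests.
```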

What We Built

We maintain a Go backend service template — a golden path that every backend API service in the organization is built from. It's opinionated by design. Every API service follows the same layered architecture, the same testing patterns, the same deployment pipeline.

A note on scope: This skill definition covers API services — request/response workloads served over REST, gRPC, or GraphQL. Background workers, event consumers, cron jobs, and data pipelines have different concerns (concurrency patterns, retry semantics, idempotency, backpressure) and deserve their own skill definitions. Don't force a worker into an API template.

The template already encodes our standards in code. But code shows what — it doesn't explain why, and it doesn't tell the AI what's off-limits. That's what the skill definition does.

Architecture: Encoding the "Why"

The first thing the skill definition establishes is the architecture: transport (REST/gRPC/GraphQL) → usecase → storage. But it doesn't just state the layers — it explains the rules:

  • Transport layer is dumb. It maps requests to DTOs, calls the usecase, maps the response back. No business logic here. Ever.
  • Business logic lives in internal/usecase/. This is transport-agnostic. It uses plain Go structs and context.Context.
  • Storage is behind interfaces. Never use GORM directly in handlers or usecases.
  • Dependencies are explicit. All handler dependencies are passed as function arguments or struct fields. No globals. No init().

Why so explicit? Because without this, an AI will happily put business logic in a handler, call GORM directly from a resolver, or create a package-level database variable. It doesn't know these are violations unless you tell it.

Adding a New Entity: The Golden Workflow

One of the most common tasks is adding a new domain entity. Without guidance, an AI might start with the endpoint and work backwards. That's wrong in our architecture.

The skill definition specifies the exact sequence:

  1. Define the entity and storage interface in internal/storage/database/
  2. Define the usecase (interfaces, DTOs, implementations) in internal/usecase//
  3. Wire the transport handlers in internal/transport/{rest,grpc,graphql}/
  4. Register everything in internal/bootstrap/bootstrap.go
  5. Write migrations via make scaffold name=

Model → usecase → transport → bootstrap. Not the other way around. This is the kind of workflow knowledge that lives in senior engineers' heads. Encoding it means every engineer — and every AI tool — follows the same path.

The Library Gate: Governing Dependencies

This is one of the most important sections, and one that most AI configurations miss entirely: before reaching for an external library, check whether the company's shared libraries already provide the functionality. This is a hard gate.

We've seen what happens without this rule. Teams independently adopt three different HTTP client libraries, two logging frameworks, and four ways to handle configuration. Each one is individually reasonable. Collectively, they're a maintenance nightmare.

The skill definition makes the approved dependency set explicit — Chi for HTTP routing, GORM for database access, testify for assertions, testcontainers for integration tests. If a library isn't in the template's go.mod, it needs a discussion before it gets added.
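As a sketch, the approved set might look like this in the template's go.mod (the module path and version numbers here are illustrative, not the template's actual pins):

```
module example.com/backend-service-template

go 1.22

require (
	github.com/go-chi/chi/v5 v5.0.12                       // HTTP routing
	github.com/stretchr/testify v1.9.0                     // assertions
	github.com/testcontainers/testcontainers-go v0.31.0    // integration tests
	gorm.io/gorm v1.25.10                                  // database access
)
```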

This is the kind of organizational guardrail that AI tools will never infer from the code alone.

Testing: Non-Negotiable

The skill definition doesn't suggest tests. It mandates them: Tests are mandatory. No exceptions. And it's specific about what "tested" means:

  • Unit tests with gomock and testify, colocated with source files, run in parallel
  • Integration tests with testcontainers against real Postgres — never mock the database in integration tests
  • Coverage enforced by CI via SonarQube

The distinction between unit and integration tests matters. We got burned in the past when mocked database tests passed but production migrations failed. The skill definition encodes that lesson so no one has to learn it again.

The "Never Do This" List

Every opinionated system needs explicit anti-patterns. Ours has eight:

  1. Never overhaul the template — the structure is the standard
  2. Never modify .github/ — workflows are synced from the template
  3. Never copy-paste without understanding the structure
  4. Never commit generated code
  5. Never bypass the shared library gate
  6. Never write a handler without tests
  7. Never use raw GORM outside the storage layer
  8. Never hardcode secrets, URLs, or environment-specific values

These aren't aspirational guidelines. They're hard rules. And they're the rules most likely to be violated by AI tools without explicit instruction — because an AI optimises for "working code," not "code that belongs in your system."

PR Process: Contract First, Stack Small

The skill definition also encodes how work gets reviewed:

  • Contract first. Before writing code, agree on the API contract — protobuf definition, GraphQL schema, or OpenAPI spec. The contract is the handshake between teams. Code comes after.
  • Stack PRs for large features. One concern per PR. If a PR touches usecase, storage, transport, and config simultaneously, it's too big. Break it down: model → usecase → transport → wiring. Each one is a reviewable, mergeable unit.

This is the kind of process knowledge that usually lives in onboarding docs that nobody reads. Putting it in the skill definition means the AI actively follows it — suggesting stacked PRs when the scope gets large, asking about contracts before generating code.

Implementation: Centralized and Immutable

The skill definition is only useful if it's consistent across every service. If individual teams can modify their copy, you end up with drift — and drift is worse than no standard at all.

We solve this the same way we handle CI workflows: template sync. Our service template repository contains the skill definition (the CLAUDE.md file) alongside the .github/ workflows. When changes are pushed to the template, they automatically sync to all downstream service repositories.

Engineers can't modify the file in their repos — it gets overwritten on the next sync.
