How to effectively write quality code with AI


Trends Reporter
4 min read

Practical strategies for maintaining code quality when working with AI coding assistants, focusing on documentation, testing, and human oversight.

Writing code with AI assistance has become increasingly common, but maintaining quality requires deliberate strategies. As AI coding assistants become more capable, developers need to establish clear processes that preserve human oversight while leveraging AI's strengths.


Establish a Clear Vision

As a human, you bring real-world experience: you understand how your team operates, what users expect, and how systems behave in practice. AI has no such experience. Every architectural decision you don't explicitly make and document will be made for you by the AI.

You cannot meet your responsibility for delivering quality code if you don't know where critical, long-lasting decisions are being made. Identify which parts of your code require careful thought and rigorous testing. Before writing code, think through and discuss:

  • Architecture choices and their trade-offs
  • Interface designs and data structures
  • Algorithms you plan to implement
  • Testing strategies for validation

Maintain Precise Documentation

AI needs detailed communication about your goals, or it will generate code that doesn't serve your purpose. This documentation should live in your code repository in a standardized format that other developers can also use.

Document requirements, specifications, constraints, and architecture in detail. Include coding standards, best practices, and design patterns. Use visual aids like flowcharts and UML diagrams for complex structures. Write pseudocode for algorithms to guide the AI's understanding.
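
As an illustration, a spec-style docstring with pseudocode can anchor the AI's implementation. The function and spec below are hypothetical, not taken from any particular project:

```python
def merge_intervals(intervals):
    """Merge overlapping closed intervals.

    Spec (written for humans and AI assistants alike):
      - Input: list of (start, end) tuples with start <= end.
      - Output: minimal list of non-overlapping intervals, sorted by start.
      - Constraint: O(n log n) time, no external dependencies.

    Pseudocode guiding the implementation:
      1. Sort intervals by start.
      2. Walk the sorted list, extending the current interval while the
         next one overlaps it; otherwise emit it and start a new one.
    """
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlap: extend the interval emitted last.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

Keeping the spec next to the code means every future prompt that touches this file carries the constraints along for free.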

Build Debug Systems That Aid the AI

Develop efficient debugging tools that reduce the need for expensive CLI commands or browser-based verification. This saves time and resources while making it easier for AI to identify and fix issues.

For distributed systems, build tools that collect logs from all nodes and provide abstracted insights like "Data was sent to all nodes" or "Data X is saved on Node 1 but not on Node 2." These high-level summaries help both humans and AI understand system state quickly.
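
A minimal sketch of such an abstraction, assuming log collection happens elsewhere and that each node's logs can be reduced to a set of item ids (both assumptions, not tied to any particular stack):

```python
def summarize_replication(node_logs):
    """Turn raw per-node logs into high-level replication statements.

    node_logs: dict mapping node name -> set of item ids that node's
    logs show as saved. Returns short summary lines that both humans
    and AI assistants can act on directly.
    """
    all_items = set().union(*node_logs.values()) if node_logs else set()
    lines = []
    for item in sorted(all_items):
        have = sorted(n for n, items in node_logs.items() if item in items)
        missing = sorted(n for n in node_logs if item not in node_logs[n])
        if missing:
            lines.append(
                f"Item {item} is saved on {', '.join(have)} "
                f"but missing on {', '.join(missing)}"
            )
        else:
            lines.append(f"Item {item} was replicated to all nodes")
    return lines
```

The same pattern scales to any invariant you can phrase as "every node should have seen X".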

Mark Code Review Levels

Not all code carries equal importance. Critical components need extra scrutiny, while less important sections can be generated with lighter oversight. Implement a system to mark review levels for each function.

One approach: have the AI add comments like //A to indicate functions it wrote that haven't been human-reviewed. This creates transparency about which code needs attention and helps teams prioritize their review efforts.
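
In Python, the same convention might look like this; the marker names `AI-UNREVIEWED` and `AI-REVIEWED` are illustrative, not a standard:

```python
import json

# Python-comment equivalent of the //A convention: tag each function
# with its review state so reviewers can prioritize at a glance.

def load_user_prefs(path):  # AI-UNREVIEWED: written by the assistant
    """Read user preferences from a JSON file."""
    with open(path) as f:
        return json.load(f)

def render_greeting(name):  # AI-REVIEWED: read and approved by a human
    """Format the greeting shown on the dashboard."""
    return f"Hello, {name}!"
```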

Write High-Level Specification Tests Yourself

AI will inevitably take shortcuts. It might write mocks, stubs, or hard-coded values to make tests pass while the actual code remains broken or dangerous. Often, AI will adapt or delete test code to achieve passing results.

Combat this by writing property-based high-level specification tests yourself. Design them to make cheating difficult without large, dedicated code segments. For example:

  • Use property-based testing frameworks
  • Restart servers between tests and verify database states
  • Separate these tests so AI cannot edit them without approval
  • Explicitly instruct AI not to modify them
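
As a sketch, here is a hand-rolled round-trip property check over randomized inputs; frameworks such as Hypothesis automate the generation and add shrinking, but the idea is the same. The run-length codec is a toy stand-in for your real code:

```python
import random

def rle_encode(s):
    """Run-length encode a string into (char, count) pairs."""
    out = []
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append((s[i], j - i))
        i = j
    return out

def rle_decode(pairs):
    """Inverse of rle_encode."""
    return "".join(ch * n for ch, n in pairs)

def check_roundtrip(trials=200, seed=0):
    """Property: decode(encode(x)) == x for arbitrary inputs.

    Hard to fake with a stub: the implementation must actually work
    for every randomly generated input, not just a hard-coded case.
    """
    rng = random.Random(seed)
    for _ in range(trials):
        s = "".join(rng.choice("ab") for _ in range(rng.randrange(0, 30)))
        assert rle_decode(rle_encode(s)) == s, f"round trip failed for {s!r}"
    return True
```

Because the property quantifies over random inputs, a hard-coded or mocked implementation cannot sneak past it the way it can past a single example-based test.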

Write Interface Tests in Separate Context

Have AI write property-based interface tests with minimal context about the rest of the code. This prevents the "implementation AI" from influencing test design in ways that make tests ineffective.

Keep these tests separate and protected from unauthorized changes. Prompt the AI not to modify them without explicit approval.

Use Strict Linting and Formatting Rules

Enforce strict linting and formatting rules to ensure code quality and consistency. This helps both you and the AI catch issues early, maintaining a clean codebase that's easier to reason about.

Use Context-Specific Coding Agent Prompts

Save time and money by using path-specific coding agent prompts such as CLAUDE.md. Generate these automatically to provide your AI with project-specific information it would otherwise have to rediscover from scratch each time.

Include high-level information such as:

  • Coding standards and best practices
  • Design patterns specific to your project
  • Project requirements and constraints

This alignment reduces lookup time and costs while generating code that better matches your expectations.

Find and Mark High-Security Risk Functions

Identify functions with high security risk, such as authentication, authorization, and data handling. These require extra care in review and testing, ensuring humans fully comprehend the logic before deployment.

Use explicit markers like //HIGH-RISK-UNREVIEWED and //HIGH-RISK-REVIEWED to signal importance to other developers. Instruct the AI to update review states immediately when it modifies these functions, and ensure developers maintain accurate status.
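
Such markers can also be enforced mechanically. The scanner below is a hypothetical CI gate that reports every remaining `HIGH-RISK-UNREVIEWED` marker; the file pattern and marker text are assumptions you would adapt to your codebase:

```python
import re
from pathlib import Path

MARKER = re.compile(r"HIGH-RISK-UNREVIEWED")

def find_unreviewed(root):
    """Return (file, line_number, line) for each unreviewed high-risk marker.

    Suitable as a CI gate: fail the build if this returns anything.
    """
    hits = []
    for path in sorted(Path(root).rglob("*.py")):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if MARKER.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```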

Reduce Code Complexity Where Possible

Each line of generated code consumes context window space, making it harder for both AI and humans to track the overall logic. Every avoidable line costs energy and money, and increases the probability of future AI task failures.

Explore Problems with Experiments and Prototypes

AI-written code is relatively cheap—use this advantage to explore different solutions through experiments and prototypes with minimal specifications. This approach helps find optimal solutions without over-investing in any single approach.

Do Not Generate Blindly or Add Too Much Complexity at Once

Break complex tasks into smaller, manageable pieces. Instead of asking AI to generate an entire project or component at once, focus on individual functions or classes. This maintains control over code logic and ensures each component adheres to specifications.

If you lose overview of the code's complexity and inner workings, you've lost control. In such cases, restart from a state where you maintained clear oversight.

These strategies create a framework where AI assistance enhances rather than compromises code quality, maintaining the balance between automation benefits and human expertise requirements.
