A new tool addresses a critical gap in AI-assisted development: providing deterministic answers about when a pull request is truly ready to merge, moving beyond the current state of endless polling and ambiguous feedback loops.
The promise of AI coding agents is that they can handle the tedious aspects of software development—writing code, fixing bugs, responding to review comments, and creating pull requests. Yet a fundamental problem undermines their efficiency: they cannot reliably determine when a PR is ready to merge. This isn't a minor inconvenience; it's a systemic issue that leads to wasted tokens, missed deadlines, and frustrated developers.
The Ambiguity Problem in Practice
When you instruct an AI agent to "fix the CI and address the review comments," you're asking it to navigate a landscape of uncertainty. The agent must interpret multiple signals:
- CI Status: Is the build passing? Are all required checks complete? What about optional checks? The agent might poll the API repeatedly, burning tokens and exhausting API rate limits.
- Review Comments: Not all feedback is equal. A comment like "Consider using a more efficient algorithm" is a suggestion. "This introduces a security vulnerability" is a blocker. Distinguishing between the two requires context that AI agents often lack.
- Unresolved Threads: GitHub review threads can remain marked "unresolved" even after the underlying code issue is fixed in a subsequent commit. An agent might see an open discussion and incorrectly assume it needs action.
- Human Judgment: Some feedback requires subjective interpretation. "What do you think about this approach?" is a question, not a mandate.
The result is a series of inefficient behaviors. Agents either poll indefinitely, give up prematurely, or constantly ask for human clarification. Each of these outcomes negates the productivity gains AI promises.
Introducing Determinism: Good To Go
Good To Go, a new open-source tool, aims to solve this by providing a single, deterministic command: gtg <pr_number>. It returns one of four clear statuses:
- READY: All clear, safe to merge.
- ACTION_REQUIRED: Specific comments need fixes.
- UNRESOLVED_THREADS: Open discussions require resolution.
- CI_FAILING: Checks are not passing.
There's no ambiguity. The tool analyzes a PR across three dimensions to reach this conclusion.
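In practice, an agent can shell out to the command once and branch on the result. Here is a minimal sketch in Python, assuming the tool prints its JSON report to stdout in default mode and that the top-level field is named status; the exact schema may differ:

```python
import json
import subprocess

def check_pr(pr_number: int) -> dict:
    """Run gtg on a PR and parse its JSON report from stdout."""
    result = subprocess.run(
        ["gtg", str(pr_number)],
        capture_output=True,
        text=True,
        check=True,  # default mode: exit 0 for any analyzable state
    )
    return json.loads(result.stdout)

report = check_pr(1234)  # hypothetical PR number
if report["status"] == "READY":
    print("Safe to merge.")
elif report["status"] == "CI_FAILING":
    print("Fix the failing checks first.")
elif report["status"] == "ACTION_REQUIRED":
    print("Address the actionable comments.")
else:  # UNRESOLVED_THREADS
    print("Resolve the open discussions.")
```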
1. CI Status Aggregation
Good To Go consolidates status from multiple CI systems (GitHub Actions, Jenkins, CircleCI, etc.) into a single pass/fail/pending state. It understands the difference between required and optional checks, and it can handle in-progress runs. This aggregation is crucial because modern projects often have complex CI pipelines with dozens of checks. An agent trying to parse this manually would need to understand each system's API and the project's specific requirements.
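The aggregation idea itself is simple to picture. The sketch below illustrates the logic in Python; it is an illustration of the technique, not Good To Go's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Check:
    name: str
    required: bool
    state: str  # "success", "failure", or "pending"

def aggregate(checks: list[Check]) -> str:
    """Collapse heterogeneous CI checks into one pass/fail/pending state.

    Optional checks are ignored; any failing required check fails the
    whole PR, and any pending required check keeps it pending.
    """
    required = [c for c in checks if c.required]
    if any(c.state == "failure" for c in required):
        return "fail"
    if any(c.state == "pending" for c in required):
        return "pending"
    return "pass"

# A required Jenkins failure outweighs passing optional checks.
checks = [
    Check("GitHub Actions / build", required=True, state="success"),
    Check("Jenkins / integration", required=True, state="failure"),
    Check("CircleCI / lint", required=False, state="pending"),
]
print(aggregate(checks))  # -> "fail"
```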
2. Intelligent Comment Classification
This is where Good To Go moves beyond simple aggregation. It classifies each review comment into three categories:
- ACTIONABLE: Must be addressed before merge (blocking issues, critical bugs).
- NON_ACTIONABLE: Safe to ignore (praise, nitpicks, resolved items).
- AMBIGUOUS: Requires human judgment (suggestions, questions).
The tool includes built-in parsers for popular automated reviewers. For example, it understands CodeRabbit's severity indicators (Critical, Major, Minor, Trivial) and can interpret Greptile's code analysis findings. It also recognizes blocking markers from Claude-based reviewers and bug severity levels from tools like Cursor's Bugbot.
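To make the classification concrete, here is an illustrative Python sketch keyed off a CodeRabbit-style severity prefix. The mapping is a hypothetical example, not the tool's actual parser tables:

```python
# Hypothetical severity-to-category mapping for a CodeRabbit-style
# reviewer; Good To Go's real parser tables may differ.
SEVERITY_CATEGORY = {
    "Critical": "ACTIONABLE",
    "Major": "ACTIONABLE",
    "Minor": "AMBIGUOUS",
    "Trivial": "NON_ACTIONABLE",
}

def classify(comment: str) -> str:
    """Classify a review comment by its leading severity marker.

    Comments without a recognized marker fall back to AMBIGUOUS,
    deferring to human judgment rather than guessing.
    """
    for severity, category in SEVERITY_CATEGORY.items():
        if comment.startswith(f"{severity}:"):
            return category
    return "AMBIGUOUS"

print(classify("Critical: SQL injection in the login handler"))  # ACTIONABLE
print(classify("Trivial: prefer f-strings here"))                # NON_ACTIONABLE
print(classify("What do you think about this approach?"))        # AMBIGUOUS
```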
Crucially, it distinguishes between truly unresolved discussions and threads that are technically "unresolved" but have been addressed in subsequent commits. This prevents agents from chasing ghosts.
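One plausible heuristic for that distinction, shown purely as an illustration (the tool's actual logic is not documented here): treat an unresolved thread as likely addressed when a later commit touched the file it points at.

```python
from datetime import datetime, timezone

def likely_addressed(last_comment_at: datetime, thread_file: str,
                     commits: list[dict]) -> bool:
    """True if any commit after the thread's last comment touched
    the file the thread points at."""
    return any(
        c["timestamp"] > last_comment_at and thread_file in c["files"]
        for c in commits
    )

commits = [
    {"timestamp": datetime(2024, 6, 2, tzinfo=timezone.utc),
     "files": ["auth/login.py"]},
]
# The thread's last comment predates a commit that touched the file,
# so the discussion was probably addressed even if still "unresolved".
print(likely_addressed(
    datetime(2024, 6, 1, tzinfo=timezone.utc), "auth/login.py", commits))
# -> True
```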
3. State Persistence
Good To Go can track what's already been handled across agent sessions. This is vital for long-running workflows where an agent might need to pause and resume. By maintaining a state database, the tool avoids redundant work and provides a coherent view of the PR's progress.
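A minimal sketch of what session-spanning state could look like, using a local SQLite table of handled comment IDs. The file name and schema here are hypothetical, not Good To Go's internal format:

```python
import sqlite3

# Record which comment IDs have already been handled so a resumed
# agent session skips them instead of redoing the work.
conn = sqlite3.connect("gtg_state.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS handled (comment_id TEXT PRIMARY KEY)"
)

def mark_handled(comment_id: str) -> None:
    conn.execute("INSERT OR IGNORE INTO handled VALUES (?)", (comment_id,))
    conn.commit()

def is_handled(comment_id: str) -> bool:
    row = conn.execute(
        "SELECT 1 FROM handled WHERE comment_id = ?", (comment_id,)
    ).fetchone()
    return row is not None

mark_handled("rc-102")       # first session fixes this comment
print(is_handled("rc-102"))  # a resumed session sees True and skips it
```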
Designed for AI Agents
Good To Go is built with AI agents as first-class citizens. This manifests in two key design decisions:
Exit Codes That Make Sense: In default mode, the tool returns exit code 0 for any analyzable state. This is because AI agents should parse the JSON output to determine the next action, not interpret exit codes as errors. For shell scripts, a --semantic-codes flag provides meaningful exit codes (0 for ready, 1 for action required, etc.).
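For instance, a wrapper can branch on the semantic exit codes directly. Only 0 (ready) and 1 (action required) are spelled out above, and the flag's position on the command line is an assumption:

```python
import subprocess

# --semantic-codes makes the exit code itself carry the verdict.
# Flag placement and the PR number are assumptions for illustration.
result = subprocess.run(["gtg", "--semantic-codes", "1234"])
if result.returncode == 0:
    print("READY: safe to merge.")
elif result.returncode == 1:
    print("ACTION_REQUIRED: fix the flagged comments.")
else:
    # Codes for the remaining statuses aren't listed here; treat any
    # other value as "not ready" and fall back to the JSON report.
    print(f"Not ready yet (exit code {result.returncode}).")
```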
Structured Output: Every response includes exactly what an agent needs to take action. The JSON output contains the status, a list of action items, actionable comments, CI status, and thread information. This structured data allows agents to programmatically decide their next step.
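To show what that looks like in practice, here is a hypothetical report; aside from status and the four values listed earlier, the field names are assumptions about the schema rather than documented keys:

```python
import json

# Hypothetical report shape; field names other than "status" are
# assumptions, not the tool's documented schema.
raw = """{
  "status": "ACTION_REQUIRED",
  "action_items": ["Fix the injection flaw in auth/login.py"],
  "actionable_comments": [
    {"author": "coderabbitai", "severity": "Critical",
     "body": "This introduces a security vulnerability"}
  ],
  "ci": {"state": "pass"},
  "threads": {"unresolved": 0}
}"""

report = json.loads(raw)
for item in report["action_items"]:
    print(f"Next step: {item}")  # concrete, machine-readable work items
```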
