An independent software consultant details his experiment with creating a separate GitHub identity for an AI coding agent, exploring how this approach balances security, transparency, and workflow integration while maintaining human oversight.
The line between human and machine contributions to software development is becoming increasingly blurred, yet the tools we use remain stubbornly designed for human collaborators. This tension became apparent to me as I experimented with different ways to integrate AI coding agents into my development workflow. After trying various approaches, I discovered that giving an AI its own GitHub identity—complete with a separate account, avatar, and permission boundaries—creates a surprisingly elegant collaboration model that respects both the capabilities of the agent and the need for human control.
The Problem with Local AI Development
When an AI coding agent runs directly on your laptop, the experience feels natural. You have a terminal, the agent can execute commands, and you can monitor its progress in real time. However, this convenience comes with significant security risks. Your laptop contains years of accumulated data (client projects, personal information, API keys, and proprietary code) that an AI agent shouldn't access, even accidentally. The agent might inadvertently read sensitive files or execute commands that expose private data.
Sandboxing provides a partial solution, but it's not foolproof. A determined or misconfigured agent can still escape its constraints. More importantly, sandboxing on your local machine doesn't solve the fundamental issue: you're still mixing your personal computing environment with the agent's workspace, creating cognitive overhead and potential for mistakes.
The alternative—using cloud-based AI coding services—trades one set of problems for another. Services like Claude Code Cloud offer convenience, but they abstract away the development environment, making it difficult to customize the agent's capabilities. You lose the ability to install specific tools, configure complex development stacks, or maintain a consistent environment that mirrors your production setup. The agent works in a black box that you can't fully inspect or tailor.
The VPS Approach and Its Integration Challenges
My solution started with a VPS running Debian on Hetzner, connected via Tailscale for secure access. This gave me a clean slate where I could install Claude Code in "YOLO mode" with my global AGENTS.md configuration file and custom skills. The environment is completely isolated from my personal machine, and I can restore it from a snapshot if anything goes wrong. If the agent needs to install packages, modify system configurations, or experiment with different toolchains, it can do so without affecting my local development setup.
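For concreteness, the bootstrap looks roughly like this; a minimal sketch assuming Claude Code is installed via npm and that "YOLO mode" means the --dangerously-skip-permissions flag:

```sh
# On the fresh Debian VPS: join the tailnet so the machine is
# reachable over Tailscale without exposing public ports.
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# Install Claude Code (needs Node.js), then run it with permission
# prompts disabled -- the "YOLO mode" mentioned above.
# (The global AGENTS.md and custom skills get copied over separately.)
npm install -g @anthropic-ai/claude-code
claude --dangerously-skip-permissions
```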
But this introduced a new problem: how do I efficiently get code out of the VPS and into my projects? Git is decentralized, so technically I could set up the VPS as a remote and pull from it, but this felt cumbersome. It created friction in my workflow and made the agent's work less transparent. I wanted to see the agent's progress, review its changes, and integrate them naturally, not wrestle with git remotes and manual file transfers.
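For reference, that git-remote alternative would look roughly like this (host and paths are made up):

```sh
# Treat the repository clone on the VPS as an ordinary git remote
# over SSH, then pull the agent's branch down manually.
git remote add vps ssh://markus@vps/home/markus/code/project
git fetch vps
git merge vps/agent-work
```

It works, but every integration becomes a manual pull instead of a reviewable unit of change.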
I considered giving the agent access to my personal GitHub account, but API key permissions proved insufficiently granular. I couldn't restrict the agent's access to only what it needed while preserving my own full access to my GitHub organization. The organization existed long before AI tools became common, and retrofitting fine-grained permissions seemed tedious and error-prone. Even with careful configuration, I'd still worry about the agent accidentally deleting repositories, pushing to wrong branches, or making other destructive changes.
The GitHub Identity Solution
The breakthrough came from thinking about AI agents as collaborators rather than tools. What if the agent had its own GitHub identity, just like any human developer? This would allow me to add it to my organization with precisely scoped permissions, interact through standard GitHub workflows, and maintain clear separation between human and machine contributions.
Thus, maragubot was born—an AI agent with its own GitHub account, complete with an avatar generated using nanobanana pro. The name reflects both the agent's purpose and its identity within my development ecosystem.
The Workflow in Practice
The collaboration follows a familiar open-source contribution pattern, with a CLI sketch of the full loop after the steps:
Fork Creation: maragubot creates a fork of the target repository in its own user namespace. This isolates its work from the main codebase while providing a dedicated space for experimentation.
Development: The agent works within its fork, making commits, running tests, and iterating on features. All development happens on the VPS, where it has access to the full development environment I've configured.
Pull Request: When ready, maragubot pushes its changes and creates a pull request from its fork to the original repository. This triggers the standard GitHub review process.
Self-Review: The agent writes its own review comments on the PR, addressing them with additional commits. This might seem circular, but it serves an important purpose: it forces the agent to articulate its reasoning and catch its own mistakes before I review the code.
Human Review: I review the PR, add my own comments, and request changes if needed. maragubot addresses my feedback through additional commits to the same PR.
Merge: Once satisfied, I merge the PR. maragubot doesn't have merge permissions, ensuring I maintain final control over what enters the codebase.
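In gh CLI terms, the loop looks roughly like this; a sketch rather than maragubot's literal transcript, where the repository names and the PR number (42) are placeholders and maragubot is authenticated as itself via gh auth login:

```sh
# 1. Fork creation: fork the upstream repo into maragubot's
#    namespace and clone the fork.
gh repo fork myorg/myproject --clone
cd myproject

# 2. Development: iterate on a branch, committing to the fork.
git switch -c add-retry-logic
git commit -am "Add retry logic to the API client"
git push -u origin add-retry-logic

# 3. Pull request: open a PR from the fork against upstream.
gh pr create --repo myorg/myproject \
  --title "Add retry logic to the API client" \
  --body "Adds exponential backoff. See commits for details."

# 4. Self-review: comment on its own PR, then address the
#    comments with follow-up commits.
gh pr comment 42 --repo myorg/myproject \
  --body "Review note: the backoff cap should be configurable."

# 5-6. Human review happens on my side, and merging is mine alone:
gh pr merge 42 --repo myorg/myproject --squash
```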
I've created a collaboration skill that encapsulates this workflow, which I'll refine based on experience. The skill helps the agent understand when to create PRs, how to respond to review comments, and when to ask for merges.
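As a rough idea of its shape (a hypothetical, trimmed-down sketch, not the actual skill), Claude Code picks up skills as SKILL.md files, so setting one up amounts to:

```sh
# Personal skills live under ~/.claude/skills/<name>/SKILL.md.
mkdir -p ~/.claude/skills/github-collaboration
cat > ~/.claude/skills/github-collaboration/SKILL.md <<'EOF'
---
name: github-collaboration
description: Contribute via fork-and-PR as maragubot. Use when asked
  to implement changes in a repository maragubot does not own.
---

- Fork the target repo into your own namespace; work on a branch.
- Open a PR against upstream when tests pass; never push to main.
- Review your own PR first and address your comments with commits.
- Address human review feedback with commits to the same PR.
- Never merge; ask the human reviewer to merge when ready.
EOF
```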
Advantages of This Approach
Clear Attribution: Every line of AI-generated code is visibly marked as coming from maragubot. Even though I review everything, this distinction matters. It creates a psychological separation that helps maintain a sense of authorship and responsibility. There's something about seeing "maragubot requested review" that keeps the human-machine boundary clear, which I find valuable as we navigate increasingly sophisticated AI collaboration.
Fine-Grained Permissions: By giving the agent its own identity, I can scope its permissions precisely. maragubot can create forks, push branches, and open pull requests, but it can't delete repositories, push directly to main branches, or modify organization settings. This follows the principle of least privilege while allowing the agent to do its job effectively.
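On GitHub's side this doesn't require anything exotic; a sketch using the REST API via gh, with placeholder names, assuming read access is enough for a fork-based flow on private repositories:

```sh
# Invite maragubot as a collaborator with read-only (pull) access:
# enough to clone and fork a private repo, while all of its writes
# happen in its own fork and merging stays with me.
# (maragubot then accepts the invitation from its own account.)
gh api -X PUT repos/myorg/myproject/collaborators/maragubot \
  -f permission=pull
```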
Continuous Operation: Like other web-based solutions, this approach lets the agent work even when I'm away from my desk. I can start it on a task, disconnect, and return to find a pull request waiting for review. The VPS runs independently of my local machine.
Full Environment Control: Unlike cloud-based AI coding services, I have complete control over the agent's development environment. I can install specific versions of tools, configure complex build chains, or set up custom development servers. The VPS is a blank canvas that I can shape to match any project's requirements.
Mental Clarity: Working with a VPS is straightforward. It's a Linux machine with predictable behavior. There's no abstraction layer or proprietary system to understand. I can SSH in, inspect logs, modify configurations, and reason about the system without additional cognitive overhead.
Remote Monitoring: Tailscale makes it easy to check on the agent's progress from any device. While I don't do this often (my relationship with my phone is complicated), it's reassuring to know I can connect from my iPad or phone if needed.
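Checking in amounts to a couple of commands from any device on the tailnet (hostnames and session names are illustrative):

```sh
# Confirm the VPS is reachable, SSH in, and attach to the agent's
# tmux session to watch it work.
tailscale status
ssh markus@vps
tmux attach -t agent
```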
The Friction Points
This approach isn't without drawbacks. The most immediate is the friction of working in a remote development environment:
Terminal Configuration: Sessions on the VPS run inside tmux by default, which took some configuration to work comfortably with my trackpad. Simple things like scrolling or clicking links (hold Shift!) needed adjustment. The annoyances are minor, but they add up when you're switching between local and remote work frequently.
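The trackpad fix, for the curious, was enabling tmux's mouse mode (after which the terminal only handles clicks natively while Shift is held):

```sh
# Enable mouse support (scrolling, pane selection) in tmux,
# then reload the config in the running session.
echo 'set -g mouse on' >> ~/.tmux.conf
tmux source-file ~/.tmux.conf
```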
Context Switching: I need to remember to log into the VPS and work there when directly interacting with the agent. It's another mental context to maintain. If I forget and start working locally, I might create conflicts or miss what the agent is doing.
Tooling Gaps: Some of my local development tools and workflows don't translate perfectly to the remote environment. I'm still figuring out how to best integrate debugging, testing, and other development activities across the local-remote boundary.
Anthropic is apparently working on handoff features that might address some of these issues; for now, these friction points remain part of the workflow and require ongoing attention.
The Broader Implications
What strikes me most about this experiment is how it reveals the gap between current AI capabilities and the tools we've built to support them. We're essentially hacking existing human-centric workflows to accommodate non-human participants. The fact that giving an AI its own GitHub account feels innovative suggests we're still in the early days of designing collaboration systems for a mixed human-machine future.
This approach also raises interesting questions about authorship, responsibility, and the nature of software development. When maragubot creates a pull request, who is the author? The agent generated the code, but I designed the prompt, configured the environment, and reviewed the output. The GitHub interface treats it as maragubot's contribution, but the reality is more nuanced.
There's also the question of scale. This workflow works well for one agent and one developer. But what happens when developers work with multiple AI agents, each specializing in different aspects of development? Will we need new collaboration models that go beyond mimicking human workflows?
Iterating Toward Better Collaboration
I'll continue refining this approach over the coming weeks and months. The collaboration skill I mentioned will evolve as I identify patterns and pain points. I might experiment with different permission structures, automated review processes, or ways to make the handoff between human and machine more seamless.
What excites me most is that this feels like a genuine step forward in human-AI collaboration. It's not just about using AI as a tool or API—it's about creating a shared workspace where both parties can contribute effectively while maintaining appropriate boundaries. The agent has agency within its scope, I maintain control over the final product, and the workflow is transparent enough that anyone joining the project can understand what's happening.
As we all navigate these changes—software consultants, developers, teams—I'm curious to see what other patterns emerge. The tools we use shape how we think about problems, and as AI becomes more capable, we'll need to rethink not just our workflows, but the fundamental assumptions about how software gets built.
For now, maragubot and I will keep iterating. The existential dread can wait until we're all brain cyborgs anyway.
Markus is an independent software consultant. You can reach him at [email protected] or learn more about his services at maragu.dk. This post is also available via RSS or newsletter.
