A developer recovering from Guillain-Barré Syndrome seeks advice on modern AI coding tools and workflows, sparking a detailed discussion about Claude Code, voice input tools, and evolved development practices that have emerged over the past year.
I'm getting ready to return to work after a pretty intense year. Last May I was diagnosed with Guillain-Barré Syndrome, an autoimmune condition where the body attacks the nervous system. I spent two months in a wheelchair and about 10 months in rehab/day hospitalization working to get my body back to where it was. Things are much better now (I'm even back to running), though my hands are still quite weak; they're improving slowly.
During this year I wasn't working, focusing mostly on recovery while the industry kept evolving. I did my best to stay up to date but I'd love to hear what tools you're using these days (Claude Code, Cursor, etc.) and what workflows are working best for you. Before I got sick, Cursor was my main tool, but there was still a lot of hands-on coding. Now it seems like people barely write code except for specific cases or interesting problems. Would really appreciate as much detail as possible. Also happy to answer any questions about what I went through this past year.
Community Responses: Evolution of AI-Assisted Development
Voice Input and Accessibility Tools
One respondent immediately recommended Handy, an open-source voice input application that has become essential for their workflow. The tool has saved considerable typing effort and demonstrates surprising resilience to speech-to-text inaccuracies. This suggestion directly addresses the hand weakness mentioned in the original post, highlighting how accessibility needs can drive tool adoption.
Agent Evolution: From Pair Programming to Delegation
A particularly detailed response traced their journey with Claude Code over the past year. Initially using it in a "turn-by-turn" style similar to pair programming, they would explicitly build context and provide direction on each turn. The LLM would validate their approach rather than lead it.
By late last year, their workflow had evolved to a "lead/junior delegation" model. Now they ask agents to investigate aspects of the codebase, inform them of requirements, select from proposed solutions, and evaluate results. This shift represents a fundamental change in how developers interact with AI tools: a move from active driving to strategic oversight.
Context Gathering and Requirement Analysis
The respondent noted significant improvements in agents' ability to gather context and flesh out requirements from existing codebases over the past six months. They reported that tasks which previously took ~60 minutes for manual reference collection and option evaluation could now be accomplished in ~5 minutes through interactive sessions. This dramatic efficiency gain suggests that context window improvements and better reasoning capabilities have made agents substantially more useful for understanding complex codebases.
Tool Skepticism and Critical Evaluation
Despite trying various augmentation tools like Beads, Ralph, and OpenClaw, the respondent found these to be distractions rather than productivity enhancers. Their advice centered on deeply understanding your primary agent tool and maintaining critical evaluation of its output. They emphasized iterating after every bugfix and feature, asking what worked and what didn't.
For code review workflows, they still fully review pull requests themselves while running agents in the background as an additional reviewer. Sometimes the agents catch issues the developer missed, but more often they miss things the developer caught. This dual-review approach has prompted small instruction updates that improve agent performance over time.
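This background-reviewer setup can be scripted. The sketch below is a hypothetical example, not the respondent's actual tooling: it assumes the Claude Code CLI is installed as `claude` and that a `-p` flag runs a single non-interactive prompt (check your installed version's help for the exact invocation). It collects the current branch's diff and pipes it to the agent with a review prompt.

```python
import subprocess

REVIEW_PROMPT = (
    "Review the following diff for bugs, missing tests, and risky behavior "
    "changes. Reply with a numbered list of findings, or 'LGTM' if none."
)

def get_diff(base_branch: str = "main") -> str:
    """Collect the diff of the current branch against the base branch."""
    return subprocess.run(
        ["git", "diff", f"{base_branch}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout

def build_agent_command(prompt: str) -> list[str]:
    """Assumed invocation: `claude -p <prompt>` runs one headless turn."""
    return ["claude", "-p", prompt]

def review_current_branch(base_branch: str = "main") -> str:
    """Send the branch diff to the agent and return its review text."""
    diff = get_diff(base_branch)
    result = subprocess.run(
        build_agent_command(REVIEW_PROMPT),
        input=diff, capture_output=True, text=True,
    )
    return result.stdout
```

Running this in the background (or in CI) keeps the human review primary while the agent's findings arrive as a second opinion.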
Team Practice Evolution
The most significant changes weren't in individual tools but in team practices. The team moved away from a traditional model of assigning minimally outlined projects to individual developers, who then worked independently until PR review time.
Their new approach involves:
- Larger projects with deeper requirement exploration at early prototyping stages
- Cheap code changes enabling speculative feature branch exploration
- Design presentations covering business needs through use cases to background jobs and schema changes
- Milestone-based delivery without sprints or sprint planning
- Individual developers operating at the scope of what used to be a whole team, while teams act more like lead developers
This evolution mirrors pre-PR workflows from years ago, suggesting AI tools have enabled a return to more collaborative, design-focused development while maintaining modern version control practices.
Autonomous Agents and Mental Models
Another contributor shared their experience with pi.dev, initially seeking to build elaborate flows with subagents but finding the major improvement came from simply running in "yolo-mode" - letting agents work autonomously while maintaining oversight. They noted that completely autonomous agents haven't worked for them yet, preferring to keep an eye on progress and steer when needed.
Their "magic trick" for better results involves ending many prompts with requests for clarifying questions. This generates numbered lists of sometimes over ten questions about decisions the model would otherwise make independently. This approach ensures alignment between human intent and AI execution while helping the developer build their own mental model of the solution being constructed.
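As a concrete illustration, the technique amounts to a fixed suffix appended to task prompts. A minimal sketch (the suffix wording here is my own, not the contributor's):

```python
CLARIFY_SUFFIX = (
    "Before making any changes, ask me numbered clarifying questions about "
    "every decision you would otherwise make on your own. Wait for my "
    "answers before proceeding."
)

def with_clarifying_questions(task: str) -> str:
    """Append the clarifying-questions request to a task prompt."""
    return f"{task.rstrip()}\n\n{CLARIFY_SUFFIX}"

# Example usage with a hypothetical task:
prompt = with_clarifying_questions("Add rate limiting to the upload endpoint.")
print(prompt)
```

The point is less the exact wording than making the request unconditional: the model surfaces its implicit decisions as questions instead of silently guessing.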
Terminal-Based Workflows and Skill Stacking
A third perspective came from someone using Claude Code across multiple terminal windows - sometimes six or more simultaneously. They moved from using CC primarily for "boring stuff" like commit messages and PRs to using it across the board, especially after the Opus 4.5 release.
Their workflow progression:
- Initial assistant use for routine tasks
- Expanded to multiple windows for various development activities
- Context fatigue led to adopting "Claude Code for web" for initial work, then local polishing
- Development of scripts and skills to manage multiple work-in-progress items
- Current focus on "stacking skills" - expanding commands to cover more scenarios
They maintain two 20-minute "sprints" per hour, monitoring progress and reassessing frequently. Between sessions, they engage in household tasks or exercise, creating a tranquil workday rhythm despite managing multiple concurrent activities.
Documentation and Planning Practices
The community emphasized documentation practices that maximize AI effectiveness. Key recommendations include:
- Letting agents plan ahead and document after completion
- Critically reviewing generated documentation and deleting the roughly one-third that proves superfluous
- Using plan mode sparingly, instead structuring codebases and tasks to minimize its need
- Maintaining paper-based todo tracking for personal organization
Industry-Wide Shifts
Several respondents noted that much of what appears on social media about AI coding represents a mixed bag. While subagents and elaborate workflows exist, many find that clean codebases and regular prompting yield excellent results without complex tooling.
The consensus suggests we're in a transitional period where individual developers can accomplish what previously required teams, while team structures have evolved to provide strategic oversight rather than tactical implementation. This represents a fundamental shift in software development economics and practices.
Practical Recommendations for Returning Developers
Based on the community responses, here are actionable recommendations for someone returning after a year-long absence:
Start with voice input tools like Handy if hand weakness persists, as they can significantly reduce typing burden while maintaining productivity.
Adopt Claude Code as your primary agent, running it in multiple terminal windows for different tasks. The tool has matured significantly over the past year.
Implement the clarifying questions technique - end prompts with requests for numbered clarifying questions to ensure alignment and build your mental model of solutions.
Focus on verifiable outputs by strengthening test suites, lint rules, feature branch deploys, and simulation environments. This makes AI-generated code safer and more reliable.
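One lightweight way to enforce verifiable outputs is a single gate script that must pass before agent-generated changes are accepted. This is a generic sketch; the specific commands (pytest, ruff) are placeholders for whatever checks your project already uses.

```python
import subprocess

# Placeholder checks; substitute your project's own test and lint commands.
CHECKS = [
    ["python", "-m", "pytest", "-q"],
    ["python", "-m", "ruff", "check", "."],
]

def run_checks(checks: list[list[str]]) -> bool:
    """Run each check in order; stop and fail on the first non-zero exit."""
    for cmd in checks:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            return False
    return True

# Demonstration with a trivially passing stand-in check:
ok = run_checks([["python", "-c", "pass"]])
print("gate passed:", ok)
```

The same gate serves both directions: the agent can be told to run it before declaring a task done, and the human can rely on it before merging.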
Embrace the new team workflow of deeper requirement exploration, speculative feature branches, and milestone-based delivery without traditional sprints.
Maintain critical evaluation of AI output. Review everything yourself while using agents as additional reviewers, updating instructions based on what they miss or catch.
Structure your day around focused 20-minute sessions with regular reassessment, allowing for physical breaks and reduced stress despite managing multiple concurrent activities.
Build skills incrementally rather than attempting complex subagent architectures immediately. Start with simple commands and expand their scope as you understand your needs better.
The responses collectively paint a picture of an industry that has moved beyond experimental AI coding to established practices that fundamentally change how software development works. For someone returning after a medical absence, the learning curve exists but the tools are mature enough to provide immediate productivity gains while accommodating physical limitations.
The evolution from hands-on coding to strategic oversight represents not just a tooling change but a philosophical shift in what it means to be a developer in an AI-augmented world.