A technologist reflects on how generative AI has transformed his workflow, replacing traditional search with intelligent retrieval and generation while raising questions about learning, expertise, and the nature of human cognition.
When I was a child, I was fascinated by the idea of sentient robots. Later, I spent years studying how the brain computes, eventually building a career in computing. Yet I missed the generative AI revolution entirely—it arrived and went mainstream while I wasn't paying attention. This blind spot has given me a unique perspective on what these systems represent and how they're reshaping our relationship with knowledge and problem-solving.
The End of Search as We Knew It
My initial reaction to generative AI was to categorize it as an advanced search tool. But the more I used it, the more I realized this framing was inadequate. What we're witnessing isn't just better search—it's search evolved into something fundamentally different.
At work and at home, my primary use case has become what I now call "search++"—a hybrid of retrieval and generation that eliminates the tedious middle steps of traditional information gathering. Where I once cold-called colleagues for information, scoured internal codebases, or pored over documentation, I now pose questions to AI systems that synthesize answers from multiple sources.
This shift feels particularly aligned with my impatient nature. I've never been one to methodically work through pages of dry documentation. The traditional learning process—with its blind alleys, experimental thrashing, and note-taking—always felt like an inefficient tax on progress. Generative AI removes that tax while preserving the outcome.
Learning in the Age of AI Assistance
There's a philosophical tension here that deserves examination. Some argue that by removing the struggle from learning, we impoverish the educational experience. The blind alleys, the dead ends, the hours spent wrestling with documentation—these are seen as essential to deep understanding.
I disagree with this romantic view of struggle. We're not philosophers or basic scientists in most technical work; we're plumbers fixing toilets. Sometimes the most valuable skill is knowing how to solve a problem efficiently and move on. In a field where technical details change constantly, becoming an absolute master of every tool we touch is neither practical nor necessary.
Instead, I believe we're evolving toward a different kind of expertise—one focused on self-management, understanding broader contexts, and recognizing patterns across domains. The niche knowledge silos and yak-shaving aspects of our jobs become less critical when AI can handle them competently.
The Evolution of Code Generation
Code generation through AI has progressed through distinct phases. Auto-complete was the first genuinely promising application—about a year ago, I watched in amazement as generative auto-complete completed ten lines of Python code after seeing just my first few lines. By writing code in a consistent order and using clear variable names, I could trigger the system to fill in function calls with proper arguments and make appropriate local code changes.
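To make the pattern concrete, here is a minimal, hypothetical sketch of what that experience looks like: given one hand-written function with a clear name and consistent structure, a generative auto-complete can plausibly fill in the parallel ones. The functions and values here are illustrative assumptions, not the actual code from that session.

```python
# Hypothetical illustration: consistent ordering and descriptive names
# give the model enough pattern to complete the rest.

def mean(values: list[float]) -> float:
    # Hand-written seed of the pattern.
    return sum(values) / len(values)

# The following two functions are the kind of thing auto-complete
# can fill in after seeing the seed above:
def variance(values: list[float]) -> float:
    m = mean(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def stddev(values: list[float]) -> float:
    return variance(values) ** 0.5

print(stddev([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))  # population std dev: 2.0
```

The point is less the statistics than the shape: once the human establishes a convention, the model's job reduces to continuing it.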
That version now feels quaintly obsolete. We've moved to "agentic coding" that generates entire projects spanning multiple files. I haven't fully caught up with this latest wave, but the trajectory is clear: from completing sentences to writing chapters to authoring entire books.
Unexpected Applications
Some uses surprised me. Automated code review, for instance, wasn't on my radar. We're not ready to hand over complete responsibility yet, but AI code reviews have caught genuine errors that human reviewers missed. Yes, they also flag non-issues, requiring closer examination of my code—but I'd rather investigate false positives than let real bugs slip through.
Analysis code represents another unexpected success story. Over the course of March 2026, I transitioned from hand-writing Pandas and Matplotlib code to simply providing one-paragraph directives about desired metrics and visualizations. The coding agent handles the implementation, freeing me to focus on what I actually want to analyze rather than how to implement the analysis.
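A one-paragraph directive like "plot the mean latency per service per day" might expand into something like the sketch below. This is a hypothetical reconstruction of the genre of code the agent produces; the DataFrame columns, values, and output filename are all invented for illustration.

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical metrics export standing in for real data.
df = pd.DataFrame({
    "day": ["2026-03-01", "2026-03-01", "2026-03-02", "2026-03-02"],
    "service": ["api", "db", "api", "db"],
    "latency_ms": [120.0, 40.0, 110.0, 55.0],
})

# Aggregate: mean latency per (day, service), pivoted so each
# service becomes a column.
daily = df.groupby(["day", "service"])["latency_ms"].mean().unstack("service")

# Plot and save.
ax = daily.plot(kind="bar", title="Mean latency per service")
ax.set_ylabel("latency (ms)")
plt.tight_layout()
plt.savefig("latency.png")
```

None of this is hard to write by hand, but it is exactly the kind of mechanical groupby-unstack-plot scaffolding that is faster to describe than to type.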
Debugging occupies an interesting middle ground. The systems are improving but remain imperfect. For common issues with forum discussions, AI brings substantial advantages—it considers broader context like the entire codebase and can attempt solutions autonomously. This represents a meaningful step up from searching for error message strings and manually piecing together root causes.
Test case generation has become another routine application. Test cases often involve substantial boilerplate code, making them ideal candidates for AI generation. The system hammers out the structure while I focus on tweaking logic and ensuring comprehensive coverage.
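As a sketch of what that division of labor looks like, consider a hypothetical slugify helper: the repetitive arrange-act-assert scaffolding is what the model hammers out, while choosing which edge cases matter stays with the human. Both the function and the test names here are invented for illustration.

```python
import re

# Hypothetical function under test.
def slugify(title: str) -> str:
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Boilerplate test structure of the kind AI generates readily;
# the human tweaks the logic and picks the edge cases.
def test_basic_title():
    assert slugify("Hello World") == "hello-world"

def test_punctuation_collapses():
    assert slugify("C++ & Rust: a comparison!") == "c-rust-a-comparison"

def test_empty_string():
    assert slugify("") == ""

test_basic_title()
test_punctuation_collapses()
test_empty_string()
```

Under pytest these would be discovered automatically; the direct calls at the bottom just make the sketch runnable on its own.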
The 80/20 Problem
Not all applications have been successful. I experimented with using AI to create a bash script and was initially impressed. However, I encountered what I call the "80/20 problem"—the system could handle 80% of the task with my initial prompt but then settled into a local minimum where further prompting couldn't overcome its initial limitations and mistakes.
This suggests an important constraint: AI excels at generating competent first drafts but may struggle with the nuanced refinements that distinguish good solutions from great ones. The systems seem to lack the meta-cognitive ability to recognize when they're stuck in suboptimal patterns.
What I Won't Use AI For
Some boundaries have emerged naturally. I don't use generative AI for writing at work or for pleasure. Design documents at work require organizing my thoughts and thinking collaboratively with others—the writing process itself is integral to the thinking. Using AI would short-circuit this cognitive work.
Similarly, the writing I do for pleasure would lose its essence if AI-generated. The joy comes from the struggle of finding the right words, the satisfaction of crafting an argument, the personal voice that emerges through the process. For these uses, generative AI is actively counterproductive.
Philosophical Implications
What does the existence of generative AI say about human language and thinking? I believe these systems reveal something profound: that much of human knowledge work involves pattern recognition, retrieval, and recombination rather than pure creation.
Language, it turns out, contains sufficient structure and redundancy that statistical models can capture its essence well enough to generate coherent, useful text. This suggests our thinking processes might be more algorithmic than we'd like to admit—more about navigating possibility spaces than channeling some ineffable creative spark.
The Human Role Reimagined
As these systems improve, the question becomes not whether they'll replace us, but how our roles will evolve. I suspect we're moving toward a model where humans provide direction, context, and judgment while AI handles execution, retrieval, and initial generation.
The plumber metaphor still applies, but now we have power tools. We're not becoming obsolete; we're becoming more effective. The key skill becomes knowing what to ask for, how to evaluate the results, and when to intervene.
This transition feels less like replacement and more like cognitive augmentation—extending our capabilities in directions we couldn't previously reach while freeing us from the mechanical aspects of knowledge work. The future belongs not to those who resist this change, but to those who learn to direct it most effectively.