4 Patterns of AI Native Development - InfoQ


DevOps Reporter






Summary

Patrick Debois discusses the evolution of software engineering in the age of AI. He shares four key patterns: transitioning from producer to manager, focusing on intent over implementation through spec-driven development, moving from delivery to discovery, and managing agentic knowledge. He explains how these shifts redefine seniority, team roles, and the future of the DevOps workflow.

Bio

Patrick Debois is credited with coining the term DevOps, co-authoring the DevOps Handbook, and launching the very first DevOpsDays back in 2009. Since then, he's been shaping the tech industry with his ability to bring development, operations, and now GenAI together in transformative ways.

About the conference

The InfoQ Dev Summit Munich software development conference focuses on the critical software challenges senior dev teams face today. Gain valuable real-world technical insights from 20+ senior software developers, connect with speakers and peers, and enjoy social events.


Transcript

Patrick Debois: This picture, a year and a half ago we would be really proud and say, can you guess, was it generated with AI? Of course, it's generated with AI. There are four horses, and maybe you don't know the reason there are four horses, because unicorns do not exist. It's the only thing that actually matters. You're working maybe at a unicorn, but I don't think they really exist. We all have to work hard. I think they're slightly becoming a little bit more robotic. The other undertone is they could be the four horses of the apocalypse. We don't know where we're headed. It's a little bit misty. I'm trying to bring a little light to the story.

I assume you're all developers. That tab completion, I assume you're using this in some form. I was at a conference and I asked people, "Are you using any agentic stuff?" "No. We're not allowed to use that in our company". "Do you use Copilot?" "Yes, of course". Very confusing to me. They all learn how to copy and paste, even if it's just ChatGPT to bring the code in and to generate that. Then our IDEs got more evolved and it was not just one line that was suggested, it was multiple lines that were suggested. It even now has a model that predicts where your next edit is. "I changed this line". "No, probably you want to go there". It's almost like nudging us on what we need to do. Not just a few lines, multiple files became a reality. Then it became really fun to do our work. Like, where did it change things? We can't really follow. It was looking at more stuff. Our terminal got also more AI, and we can copy and paste things in the chat going from there. Our browser got AI, even that. Looking at the errors completely directly in our IDE. Then it started to help us with the things we really like, generating tests, and perfect test coverage. Some new pieces came along the way, MCP servers. One of those plugins that you put in there, and all of a sudden, your agent has a bunch of tools that can do things for you. I think for those who've ever seen that announcement of Devin, at the time it was announced, it was like, blasphemy. It cannot work. I don't know how many millions got sold, but I think they were the first one to visualize maybe in the product, even though it might have been a fake video, that continuous loop of looking at the browser, going into the terminals, changing code, actually what a real developer does all the time. Then we got a new way, like, you don't need an IDE. Let's go back to the terminal, back to the '80s or the '70s. Things get more complex, and they did that. 
For me as a summary, and I'm actually not going to talk about code generation, so this was everything that I had on code generation. I do want to point out we're on a journey from, we got an LLM, all the way up to, we have a team of agents that have multiple LLMs, RAG, a whole memory stack, and eventually, if they don't have the tool they need, they're just going to build a tool. I figured for me personally, I'm just going to go there. It's fun. Let's see where it goes. Also, in my job, I'm able to test a lot of tools in software delivery and AI. I try to bring those tools into a CNCF-like landscape, but specifically for AI native dev tools. I started the year with roughly 250 tools. I think we're now at around 600 tools, with things just popping up. Obviously, tools disappear, they come, they go. It's the period of chaos. We've seen this before with cloud, with security tools, DevSecOps, but it's kind of like, we're trying to figure out what actually is working, what is not working. The nice thing is, nobody is really an expert. This is the reality. I'm also not an expert, but I've put in the time, I play with it, and I try to explain things to people. For those who might remember the DevOps Handbook, that was me. I also tried to make sense of that world, and now I'm trying to make sense of the AI native development world.

What is AI native? When we moved things to the cloud, it was like, let's ship the whole VM. Let's do that. What you see right now is people saying, we're now AI native. They sprinkled a little bit of AI on top of their stuff, and now they're AI native. Of course, that's not true. We're still figuring out what AI native actually means, but that's the journey we're on. As a developer, whenever a new technology comes in, some of our tasks will be removed and done by the technology way faster. Some new tasks arrive, and some will change. I like that visualization of the unbundling of our tasks. I know a lot of people say, it's just a faster way of typing. No. These things are changing the way we work. Some people ask me, what's the elephant in the room? Are developers still there? Maybe there are new jobs. I'm joking a little bit. Maybe our job is to train the models, and you see these postings pop up: I'm looking for coders who actually train the AI to say what's good and what's bad so the model becomes better. I'm not saying this is the job for everybody.

Pattern 1: From Producer to Manager

What I want to talk about is how AI is changing things, and I assume you're playing with it. The point of my talk is, how is it changing the tasks that we are doing right now? I put that into four patterns. The first pattern is usually the first that people experience. They ask the AI to generate the code. Now they're not doing the creation so much anymore, but they have to say what's good and what's bad. Instead of being the producer of the code, you become the manager. That's the first pattern that I want to highlight.

Code generation is exploding. The challenge that I hear more and more from people becoming more effective with those tools is actually the review. I apologize that I don't have all the answers by heart, but I'm trying to highlight several subtle things in tools or other approaches that people are trying. On the left, remember a multi-file diff? Who likes to read them? Not me. I'm colorblind, so red, green, gray as a color. I do not want to read all your chat views because I don't have time for that. I know it's very subtle, the thing on the right, and I think it was Cursor Composer at the time, where the review actually showed me the code, but with comments inlined. It was a little bit easier for me to understand what was happening, a reduction to just, this is the thing that changed. It is just an example of, we need to be looking at ways to reduce the cognitive load, because the more it's producing, the more time we're going to spend. We solved the problem of generation; now we need to solve the problem of review. Nobody said that a review needs to be text. Sometimes it makes more sense to do a review on a diagram. Why is the diff not adapting to the diagram? One example. The company doesn't exist anymore, it's one of those tools in my list, but the concept was interesting: actually showing what the change was. Google had this NotebookLM, building podcasts or whatever. Google said, it's a good idea. Why don't we make a Codecast? We're just going to listen to the LLM talking about all the awesome code changes it did, in the morning. Probably not the best way. I want to point out, it doesn't need to be text. It could be audio, could be any cues in there. The point I make, and there's a website about this called the Moldable Development environment, is that we saw the movement from IDE back to terminal, but I still use my IDE to do the review. What if my IDE actually becomes better at dealing with the review task?
It's still early days, but I think that's where we see new emerging ways of dealing with those code changes.

Interestingly enough, we might also get more UI elements in our chats with MCP. Instead of just having text, maybe there's a widget that comes in there and helps them, similar to that image review. What if we just auto-commit, and we roll back when it's needed? We flip the table. Aider was the first to say, I'm just going to auto-commit. If you don't like it, it's up to you to roll back. Interesting. When are we certain of that? That's my biggest gripe with the faster we get at code generation, these decisions will slow us down. It's kind of some way that it puts the brakes on that evolution. We started creating more safe environments. The agents are generating code, and I can do rollbacks. It was good until here. That helps in a safe environment of auto-committing, but still able to roll back, similar to a CI/CD system, but it's now happening in your IDE while the coding agent is doing everything on your laptop. Then we enter another fun realm, is, what is the AI actually allowed to touch in my codebase? All of a sudden, we become like a user management and access manager. It wasn't complex enough that AWS had to invent IAM. It's now happening right in our editor. We become who says, like, you're allowed access to that and not that. Then we have to worry about costs, because the agents keep running. I'll look at my bill. Annoying. I want to use it, but on the other hand, people or companies started to have that max plan, and then people made an abuse of that max plan, and they're all back to like, there's a limit for four hours, then you have to wait. Cost management is a little bit of FinOps right there in our IDE. We're all Ops now. We all are reviewing the code somebody else has built. We are responsible for that code when we put it into production. If somebody is happening at night, the agents will likely not help us, but we'll have to deal with it. 
This is probably the transition in that manager role: worrying about cost, worrying about my agent, telling them what to do, and going from there. If I bring in the DevOps analogy, we were really happy with infrastructure as code. Great automation. I can now delete all systems at once. Then we brought in the safety, CI/CD systems, more testing, and so on, but then realized we still have to do this in production, and the IDEs are still very thin on putting something in production. The commit is one thing, but how to push to production is still far away. That resilience engineering might help us rethink the way we build applications with the risks that AI brings. Then we had observability. It raises a very common question: if we're not producing the code anymore, and you become a manager, how do you know what good looks like? Because you're not doing it anymore. Chaos engineering was very similar. We started to induce faults so we can keep training and stay on our toes when things fail. Maybe our new profession is becoming the fireman when things fail, and we have to take care of that. Obviously, this is going to be the fun part. If something happens, which agent did what? Who's responsible for what? I don't know.
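
The auto-commit-and-roll-back safety net described above can be sketched with plain git. This is a hypothetical, minimal version of the pattern; tools like Aider wire the checkpointing into the agent loop for you, and the file names and commit messages here are invented:

```shell
# Illustrative only: commit after every agent step, keep a pointer to the
# last checkpoint we approved, and hard-reset when a step fails review.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email agent@example.com
git config user.name agent

echo "v1" > app.txt
git add -A && git commit -qm "agent: step 1"
good=$(git rev-parse HEAD)       # last checkpoint that passed review

echo "broken" > app.txt          # an agent change we decide to reject
git add -A && git commit -qm "agent: step 2"

git reset --hard -q "$good"      # roll back to the approved checkpoint
cat app.txt                      # prints "v1" again
```

The point is that cheap, automatic checkpoints make "it was good until here" a one-command decision.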

Pattern 2: From Implementation to Intent

We all became managers. That's probably going from being the reviewer to dealing with the code and actually being responsible for it. That assumes that we are still very tightly telling the AI what to do, and we're in that pair mode. What if you can just say, here's a bunch of requirements, agent, code it up, and then we do the review, and we are responsible? I'm now heading to a step before that. Telling people what to do: I assume there are some architects here, you're very good at it. Or QA people, they test things. They know what to look for and go for that. In this pattern, I'm going to explain that we want to express our intent, and then the coding is done by the agents, but we actually worry a bit less about very detailed things like the actual implementation. As long as our tests pass, as long as our requirements pass, we're ok. It's a little bit of a pipe dream still, but it gets you in the mindset of where this might go.

In a very crude way, Cursor rules are already a way of setting some requirements for our agents. I tell it how I like my code, how I want things done. Now we have more standardization in the industry with AGENTS.md, where we are not reliant on where every tool puts it, but we can reuse the AGENTS.md across multiple tools. Progress. Then what you saw people do is not just put in the technical requirements, how our team works, but start reusing the specification for functional requirements. Instead of doing the prompt, they're just like, let me rewrite everything in a markdown file and pass that in the prompt. A very crude way, but you saw people building that up. This is from GitHub. We're not in their chat loop anymore, but we just say, I got a task, I write a spec, the agent plans it and rolls it out. They even experimented with what that natural language should look like. I think the jury is still out on the format of that spec. Most people just use markdown, and I think the LLMs understand roughly enough how to deal with that. They recently brought out Spec Kit, spec-driven development, and specs go from there. It's a little bit experimental, but it gets the idea across. OpenAI has been talking about specs as well, but they do it for another reason: they do it for training the models. You can obviously reuse those specs as requirements to actually implement things too. Then Kiro from AWS was probably one of the first IDEs to take it seriously in their editor. They ask, do you want to vibe code, or do you want to code with specs? Very clear. They use a pattern that feels a little bit BDD style. That's all good, but there's another place where I've got a bunch of code, whether that's legacy or something I did manually, and I also want to turn that into my specification.
There's a bi-directional piece when you're writing specs as well, because you still want to go into the code, and sometimes it's way faster to go into that code. I work at Tessl, and one of the things we do is try to figure out how we can keep both in sync. We keep your requirements in the markdown, and we link that to a file, so when your test file changes, we can say, please update the spec, and go in both directions from there.
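
As a concrete sketch, an AGENTS.md that mixes team conventions with functional requirements might look like the following. The contents are entirely hypothetical; the format is deliberately free-form markdown that both tools and humans can read:

```markdown
# AGENTS.md

## Team conventions
- TypeScript, strict mode; avoid `any`.
- Run `npm test` before proposing a commit.
- Keep modules small; one responsibility per file.

## Functional requirements
- Every API error response carries a JSON body with `code` and `message`.
- A new endpoint is only done once it has a passing test.
```
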

When you write specs, surprisingly enough, you just have to talk about good engineering practices. Don't write big prompts, because nobody will read them. It's the same with your AI: do smaller pieces. Modular codebases, don't put everything in one file, which the LLMs really want to do, and they keep writing that one big file, but you tell them it's not a good idea. If your documentation is up to date, that also helps both the AI and the humans. If your coding style is inconsistent across various pieces of your codebase, the AI trips up like any human would. There are advantages to those things, and the same goes for including tests: if the AI changes something, you want to see what changed, and so on. What has happened a few times now is that I explain this, and then the team says, we're going to do that. Why didn't you do it before? Now you do it for the AI. Whatever the reason, it's a good practice, so please do it. The other benefit of writing things down is that it not only aligns the agents with your idea, but there is actually a team conversation aligning the humans on what it should be. Somebody visualized it like this, and I like that: we had prompt engineering, and that was all ok. Then, can we get better context? Then eventually it was intent engineering. Are we there yet? A lot of people ask me: I write all my requirements, I backport my legacy code into requirements, and I just change the word COBOL to whatever language we use today. Done? No, we're not there yet, because there's a planning phase, there's a certain iterative process, like humans have as well. But it gets the idea across of where we are heading in that space. It can also solve some of those loops. With better specifications, if you keep your requirements in your project, the AI doesn't go in multiple loops, and it forgets less. It actually doesn't hallucinate as much, because you have your documentation and all that material there.
It might know of the more recent documentation and versions of stuff, and so on, and you keep more of that context. I encourage you to watch that video on spec-driven development.

Pattern 3: From Delivery to Discovery

Pattern number two was figuring out that we're going to get better at requirements writing, and then once we've written the requirements, we delegate that, and then we are the manager who reviews that stuff. How do we actually decide what goes into the requirements? That's typically the product owner who says, I've talked to a lot of users, I think we did some research, and that's what we put in there. They love the new vibe coding. In the past, they had to wait for a couple of days or weeks until they got a developer. Now they're playing with it, and as you're all professional developers, you're like, vibe coding. They get to learn what they actually want by exploring what they need. It's like exploratory testing, but for requirements, so it's really helpful for them. Does anybody know where the word vibe coding comes from? In AI engineering, when people release new models, they're like, it feels better. It has a good vibe. Then they transposed this word to coding as well: vibe coding. You say, the vibe coders are putting things in production. They don't have to. I do vibe coding, maybe for one or two days, and I actually understand my problem better. That's how I use it. It has a use, even if it's not about putting things directly into production.

Another thing that's really great is that I can now ask for three variations of the same thing at relatively low cost, and pick the best one. I'm learning what I actually want. If you take that to the extreme, and I found this an interesting example, imagine your end user being able to vibe code a user interface on top of your actual product to build what they really want. I don't have to wait for you to release it. I know it's the extreme, but it gets the idea across. There's value in those things and in learning from that phase. That kind of parallelism of multiple IDEs, you can use for the variations. Like, I want three variations of the screen. I can also say, I want you to implement this with three different algorithms. I want you to implement this in Rust, in something else, or using that library. Variations are also exploratory. Think of that when you're vibe coding. The other reason to do things more in parallel is obviously multitasking. I have a backlog of things, and I just delegate that to multiple developers. You understand that with that decomposition, you get to do this, you get to do that, we need to separate things a little bit, because otherwise you're tripping over each other's changes, and it becomes merge conflicts and a lot of discussion. Those are two things on exploring. For those who don't know, one of the very common techniques to have multiple versions of your codebase on the same laptop, with the agents working in parallel, is a concept using Git worktrees. Or maybe you really like to run everything in a container, like me, because I don't trust systems that could randomly change things on my whole file system, even if they tell me they won't. There is an MCP service that allows you to spin up multiple container variations, and then it gets merged back into your codebase if you want.
Then, one thing that I use quite often myself is, while I'm coding, I'm not just using git commit, but I also use JJ for almost like my local versioning. Then I pick the best one for the commit. I have multiple commits locally, and then I only push the variation I want to my git remote. I mentioned, here's the big legacy app, change it from whatever language to Rust. That's not going to work. We need to do that decomposition, and tools are now emerging, like Backlog.md, that help with the planning mode. They put acceptance criteria on the different steps, and they move those along with your agents. Then you see a new set of tools emerging, which are the orchestrators. Now I'm not managing one agent, I'm managing a fleet of agents, and they're all implementing different things. Claude Squad is one of them. What's useful is almost like that flow with Devin again: you can say, just give me a bunch of options, come back to me in the morning, code as you want. It's a very convenient way to explore things as well. I hope that gave you an idea that we're focusing less on the complete delivery, because we have good pipelines, I assume. The discovery of ideas on what to build is very valuable to actually bring money in the bank.
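
The Git worktree technique mentioned earlier takes only a couple of commands. This sketch builds a throwaway repo with one extra working tree per parallel experiment; the paths and branch names are made up:

```shell
# Illustrative only: one working tree (and branch) per parallel agent.
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo
echo base > file.txt && git add -A && git commit -qm "base"

# Each variation gets its own directory and branch, so agents can work
# side by side without stepping on each other's checkouts.
git worktree add ../variant-a -b variant-a
git worktree add ../variant-b -b variant-b

git worktree list   # the main tree plus the two variant trees
```

When a variant wins, you merge its branch back and `git worktree remove` the rest.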

Pattern 4: From Content to Knowledge

The last one is, while we're doing all that stuff, we want to actually capture that knowledge, much as we capture knowledge in our companies. Where does knowledge exist? Documentation? Yes, maybe I can bring in some files, some Jira tickets. It's all context that you typically pass along in your code generation. Maybe I have some other knowledge, like backstories for agents, that I want to reuse. That's also knowledge. One thing we do is take open-source libraries and convert those into specs: not just the documentation, but the documentation and the code. We mix that with examples and bring that in as well, as part of the knowledge in your system. Your changelog and changes to APIs and things like that are also feedback that people bring in, things you can bring into your context as well. What about your incident responses? It would be ideal if you're not repeating the same problem in your next project. It's all knowledge that is flying around that you can reuse. Then, the more we gather, the more we can turn those LLM systems into not just code generators, but trainers of us as developers, or of new people, turning all the knowledge we're gathering into lessons. Interestingly enough, Claude Code, for those using it, now has a learning mode. You can ask it questions instead of having it generate the code. An interesting evolution there. Then this brings the question, do we still need to understand the code? What if two agents just talk gibberish to each other and we don't? My argument is, you are responsible. They might talk to each other in gibberish, but when there's an issue, they have to talk to you in something you understand. You still have to understand what good looks like with the tools. What's the motivation for a young person to learn? A question that comes up a lot. Why would people want to enter the profession if a lot of it is done by the AI?
This is usually told to me by senior people. The young people, they don't know how to learn anymore. They didn't get the scars that we have. Let me show you my tattoos of whatever language. The young people just say, the agent is always there explaining things to me. It's always helping me learn things. You did not have that. You probably needed 15 years to get good. I have something that helps me get better all day, and I can learn faster. The jury is out. It's not that they cannot learn the way we did. Reviewing is still a skill in demand. A few people have said, "We don't need the reviews anymore". Remember, we don't need the testers anymore, we don't need the architects anymore. Reviewing is going to be a skill in demand because you have to understand what good looks like. What we don't know is, will we have smaller teams? Yes or no? The typical multidisciplinary team has a frontend person, a backend person, and I know there's full stack, but then we need somebody from data, somebody from AI, somebody from DevOps. We don't know. Maybe AI covers a lot of that, so I can work with a smaller team and smaller groups and get things done faster. You will only be able to do that either by saying, I understand what it's telling me to do, and I say yes, or, I don't know, I just say yes, or probably somewhere in between. For onboarding new developers that come into the company, this is really useful: the specs and knowledge are there. Much like we think about UX, you can now think about AX, agentic UX. How can the systems, while we're doing our jobs, while we're doing our coding, actually learn? How can they look at this? Specs are maybe one way, a language in between, but there are various other ways. For those who are using Bun as the npm equivalent, you would see that there's a CLAUDECODE=1 variable, which changes the behavior a little bit to be optimized for an agent.
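
That kind of agent-aware behavior ("AX") is easy to picture. The toy script below is not Bun's actual implementation, just the pattern: it switches its output style when it sees the CLAUDECODE variable the talk mentions, and both messages are invented for illustration:

```shell
# Hypothetical sketch of a CLI adapting its output for an agent.
report() {
  if [ "${CLAUDECODE:-0}" = "1" ]; then
    # Terse and stable: easy for an agent to parse, cheap in tokens.
    echo "installed 12 packages"
  else
    # Decorated for humans.
    echo "Installed 12 packages in 0.4s - all good"
  fi
}

CLAUDECODE=0 report   # human mode
CLAUDECODE=1 report   # agent mode
```

The design choice is the same one CLIs made for `--porcelain`-style flags: one code path per audience, chosen from the environment rather than an option.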
Then, the more we assimilate all that knowledge, it could be keeping track of all the features that we tried. I don't know in how many companies this happens, but a year later, they try the same feature as a year ago, because the people left and somebody thinks it's a good idea again. Maybe that can solve some of that. Another piece is that the AI can actually help us save that knowledge while we're doing this. While you're asking in the chat, this is an example of, again, Devin, it might've been all fake, but still the idea is, it will ask us while we're asking it to do certain things, I think this is important, should we save this as knowledge? Yes, let's do it. It's in the flow. When something is not correct, it would say, "I based myself on this documentation. Do you want me to update that documentation?" It's in our flow to save that knowledge. It's not one agent, it's multiple agents, multiple agents with different contexts, and multiple agents being used by multiple people in your company. What if they all can learn from there as well? Then you see concepts like hive coding and swarm coding coming together. I'm absolutely fascinated by the idea, but I'm absolutely scared about reviewing their code. I'm not sure where that's heading. The knowledge can also be used by agents to help humans on their PRs: you forgot this, or maybe you should do that. Knowledge can be used in multiple ways.

Conclusion

I hope I gave you an overview of the different patterns that are moving, of people trying things. It's not polished. The order that I gave them to you is probably also the order of maturity. The review area is ramping up, requirements and specs are heating up in the space, parallelization is just peeking around the corner, and knowledge is still far off, because we don't know exactly where that is heading. I tried to make a visual out of that. It was part of a blog post where I say, we're managing the intent. That's our backlog, which gets broken down by one of the agents in planning mode, and ultimately that goes into parallel coding. Then they review and get merged back. Then, while they're all working, they share things on the knowledge management layer. What's fascinating to me is that we're almost building the CI/CD workflow back on our laptops now. It might literally not be running on your laptop but somewhere in the cloud, but the commit point is different. A lot of people ask me, what's the future of DevOps? They come with a story about AIOps and observability. I say, those are great uses of AI in that space. You can optimize your CI/CD workflow. For me, it isn't until this workflow is settled that we're going to see the impact on how the other CI/CD workflows will work or change. That's why I'm more focused on this than on where we can locally optimize the current workflow. Then we got into, now we have to monitor all those agents. I told you observability was coming. I'm tracking what agent 1 is doing. Somebody actually put voices to it and was listening in the background to what the agents were doing. It's like, no, now I need to get to my keyboard. Interesting changes in our behavior there. Then we have to rethink what coding metrics actually are. We don't know anymore. It's so easy to generate so many files.
Is the number of times we have to tell the AI that it's wrong one of the quality metrics? I know it will say, "Of course I can fix this. Yes, I understand your feedback. Let me fix that for you". It's about how good we are at context management and intent, and then about the coding tools themselves. There are obviously a few analysts and companies in the world that say, I can just copy all the software now. I give them all the specs. Here's the existing application. Crawl it. Use it. Build me a similar thing. The one thing they forget is that somebody needs to be responsible. Somebody needs to debug changes. Somebody needs to actually make sure it's secure. Good luck with that. I think we're still far off. It doesn't mean the changes are not interesting and heading in a certain direction. Another way of looking at it: this is the company Booking.com, and they're on their AI journey. The way that I look at that, it's not, how efficient was the AI? Imagine you had continuous testing, continuous delivery. How fast can I actually put something into production? How fast can I switch a new technology into my stack, or a new idea, or a new concept? For them, that's tech modernization, but it could be product modernization, and how much of that we can actually handle. Maybe that's a more valuable way of looking at it: how fast can we iterate on our choices with all the knowledge that we've built up along the way?

Resources

I also run a conference. It is in New York, but we are live streaming. You'll see it loosely aligned to generate, review, and learn knowledge. If that is of interest, please have a look. If you're interested in keeping track of the news, I also curate content at AI Native Dev. If you want to play a little with spec-driven development at Tessl, or you want to watch all my talks on YouTube, you can find me.

Questions and Answers

Renato Losio: What about cost? As developers, right now we don't pay that much for running all these different options. Is that really sustainable money-wise?

Patrick Debois: Using the AI, cost-wise? I do see that in general LLM costs go down. Infrastructure work improves. There was a lot of inefficiency: one GPU could only load one model, or it wasn't optimized for certain CPU or GPU levels. A lot of work has been done there. Whether that is actually sustainable is really hard to see from the provider's cost side, which I don't have enough visibility into. I do know they're probably paying through the nose right now, so there's money bleeding; I don't have the numbers yet for the environmental bleeding. There is a certain piece in that: you could worry about security, the environment, and so on. Me personally, I like to first see where the functionality is and how we're getting there. If we nail that, we can better judge whether the return on investment is right. You'll have to weigh both. So on your question: probably yes right now, probably no right now. It's hard to predict.

Participant 1: If we're going to be managing the one that produces code, and we have someone that is managing us, how much overlap do you see? Who's going to be left with no work in the end? The thing about Spec Kit is that you are inputting what they gave to you. How do you see this?

Patrick Debois: Maybe I extrapolate that a little from a traditional manager, away from coding entirely. There's a certain attention span that a user or a manager can have, and I don't mean that they only have a minute's attention to listen to you in the elevator. As the complexity of what you're building grows, they cannot understand all the pieces, even though they get access to all the data. It's a common thing: the CEO cannot keep track of and hold all the moving pieces in their head. They might have access, and maybe they can create a dashboard if they really want to dig in, but in general it will still be a scaled-out system of management, in my opinion. Who knows?

Participant 2: You talked a lot about jumping from being the one who produces to the one who just reviews. I would say that what makes me a very effective reviewer is having a deep understanding of the code, how it works, the internals, because I've written it, I've written parts of the system. You said that AI can generate summaries and teaching material, but from how we all usually learn, it's through doing, through working on it, through repetition. That's why we don't just read books. How do you actually build that understanding up if you have a fully vibe-coded or spec-coded application and now have to review a merge request? How do you evaluate, beyond just the functional requirements, whether this is a good solution or not?

Patrick Debois: You can do a better review if you've done the job before and actually understand the job. I made a few allusions to that with chaos engineering: you still have to train for it. Maybe instead of doing the coding, we as companies have to invest way more in training people, and in side projects and the like, to keep up to date, and we have to share that knowledge across multiple team members. I didn't say that training will go away; actually, the opposite. I believe that whatever we're saving right now in coding time, we'll probably have to invest in the future in keeping people up to date on what they're doing. Whether that's a net positive, I don't know. Every new wave shovels complexity around in some form.

Participant 3: You touched on the topic of junior developers using coding agents, saying, "The agent tells me, and I can learn". I've used Claude Code, and in my experience it gets stuck often enough, or produces stuff that doesn't work. My experience and knowledge help me decide whether its output is a good idea or not. Junior developers may not have that knowledge or experience, so if they use an agent, they may not be able to tell whether something is a good idea. Do you think that's just a temporary restriction because the coding agents aren't good enough, or is it something that will stay with us?

Patrick Debois: The way I express that in the industry is: as long as we throw enough money at it, it will be solved. I make the parallel with self-driving cars. In some situations the self-driving car is a perfect thing, but you have to understand its limitations and the constraints you put it in. In DevOps there was often a narrative: if you have good enough tests and your test harness is good, a junior can come in, push code on day one, and they will be stopped. Even though they're experimenting, they will be stopped. If you give them all the access, they're rewriting everything, and you have zero tests, that's not what I'm advocating. The other piece comes back to training and learning and creating a safe environment. Maybe now we actually spend more time mentoring our juniors instead of doing all the work ourselves. Again, it's shifting complexity. I'm not saying this is completely solved and our jobs are irrelevant; I hope I proved the opposite with the talk. There is something valuable about an agent being always available to ask questions. Maybe it gets it wrong sometimes, but the other times it got it right and they did learn something. It's about creating a fail-safe learning environment for people.


Recorded at: MAR 09, 2026 by Patrick Debois
