
Software Development Process and Collaboration in the AI Agent Era

Waterfall, Agile, and DevOps each reduced coordination cost. AI agents move the bottleneck from coding to verification and integration — six principles that hold up.

Who should read this

Summary: Development processes exist to solve two problems — working around individual cognitive limits, and reducing the coordination cost that appears when multiple people work together. As AI agents move into the field, both premises shake at once. This article revisits Waterfall, Agile, and DevOps with a critical eye, reviews which collaboration practices actually held up under pressure, and ends with six principles for the AI-agent era. The reason these are offered as principles rather than a methodology is that methodologies ossify into ritual outside their original context; principles can be reinterpreted with context intact.

This piece is for team leads, tech leads, and senior engineers, as well as practitioners thinking about how AI-agent adoption will reshape team structure and process. It’s not a tool or framework guide — it’s a perspective for locating the axis of decision-making during a transitional period.


Intro

Development processes exist to solve two problems. The first is building structure around an individual’s cognitive limits; the second is reducing the coordination cost that appears when multiple people work together. From Waterfall through Agile through DevOps, every methodology is ultimately a different answer to those two problems.

As AI agents arrive in the field in earnest, both premises shake at once. Individual cognitive limits are extended dramatically by tooling, and the agents of coordination are no longer only human. This article revisits the development process we’ve had so far with a critical eye, and then offers a view on what collaboration looks like in the AI-agent era.


Part 1. Re-reading the evolution of development process

Waterfall is a regular target of ridicule, but the instinct behind it was sound. “Design first, then implement” was a proven approach in architecture and manufacturing; Waterfall was an attempt to port that approach into software. The problem wasn’t the port itself — it was realizing too late that software’s nature is fundamentally different. Software deals with requirements that only emerge when the system runs. There is no moment when design is complete; only execution-driven learning completes it.

Agile grew out of that insight. Short cycles, execution, feedback, modification — a loop. But in adoption, Agile often decayed into ritual. The moment Scrum standups, sprint planning, and retrospectives become “events we have to hold” rather than “tools for learning,” the spirit is gone. From what I’ve observed in practice, the difference between teams that do Agile well and teams that don’t is not process compliance but the latency of decisions. Only the teams that decide fast, execute fast, and correct fast are actually agile. The specific question of what Agile looks like in the AI agent era is handled separately in Does Agile Survive the AI Agent Era?.

DevOps was a cultural movement breaking the wall between dev and ops, but its technical core was automating the feedback loop. CI/CD pipelines, Infrastructure as Code, and observability are all mechanisms for building “a structure where the system speaks about itself, without a human in the loop.” The reason this matters now is that this automated feedback loop becomes the operational substrate for agents in the AI era. An agent can know a test failed, know a deploy rolled back, read the error log itself — but only if those signals were already automated.
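
The idea of a loop where the system speaks about itself fits in a few lines. This is a sketch, not any particular CI system’s API: the point is that the outcome of a check arrives as data a program can act on, not prose a human must read.

```python
import json
import subprocess
import sys

def run_and_report(cmd: list[str]) -> dict:
    """Run one verification step and return a machine-readable signal.

    This is the DevOps feedback loop in miniature: whatever runs the
    pipeline (a human, a bot, an agent) gets a structured answer and
    can react to it without parsing log prose.
    """
    result = subprocess.run(cmd, capture_output=True, text=True)
    return {
        "command": cmd,
        "passed": result.returncode == 0,
        "stderr": result.stderr,
    }

# A trivially passing check stands in for a real test suite here.
signal = run_and_report([sys.executable, "-c", "assert 1 + 1 == 2"])
print(json.dumps({"passed": signal["passed"]}))
```

An agent plugged into such a loop knows a check failed the same way a dashboard does: by reading the signal, not by being told.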


Part 2. Collaboration — what actually worked

The collaboration practices that verifiably work are surprisingly modest.

Code review is adopted almost universally but rarely works as intended. A good review is not “a bug hunt” but context sharing. The core value is that at the moment a PR is merged, at least two people understand the change. Under that lens, “LGTM”-only reviews aren’t reviews.

Pair programming is frequently dismissed as “inefficient,” but shines when you face a complex design decision or when someone is new and needs to absorb context. Two people at one keyboard aren’t only writing code — they’re aligning mental models. That shared mental model lowers communication cost for the following months.

Trunk-Based Development and feature flags reduce branching-strategy complexity and pull the integration point forward to “right now.” The simplest way to avoid the merge hell of long-lived branches is to not let branches live long.
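
The mechanics are small enough to sketch. The flag name and the behavior behind it are invented for illustration; the point is that integration (merging to trunk) is decoupled from release (flipping the flag), so unfinished work can live on trunk dark.

```python
# Toggled per environment or per request in a real system; a plain
# dict is enough to show the shape of the technique.
FLAGS = {"new_checkout": False}

def is_enabled(name: str) -> bool:
    return FLAGS.get(name, False)

def checkout(cart: list[float]) -> float:
    if is_enabled("new_checkout"):
        # New path: merged to trunk, invisible until the flag flips.
        # (The 10% surcharge is a made-up stand-in for new behavior.)
        return round(sum(cart) * 1.1, 2)
    # Current behavior: what every user sees while the flag is off.
    return round(sum(cart), 2)
```

Both code paths are integrated from day one, so there is no long-lived branch to merge later; the only thing deferred is the flag flip.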

Documentation is consistently undervalued. In my experience, teams that record “why we decided this way” (teams that write ADRs — Architecture Decision Records) don’t lose direction years later. Code says what it does; only documents say why.


Part 3. The premises AI agents shake

What changes when AI agents join a team isn’t merely “code is written faster.” The shift is deeper.

First, code production is no longer the bottleneck

This is the biggest tectonic move. For decades almost every software engineering process was built on the premise that writing code is expensive. Careful design, reuse, abstraction, DRY — all efforts to minimize “writing time.” When agents drive that cost toward zero, the constraint moves: the scarce work is no longer producing code but verifying and integrating it.

Second, the value of specification rises sharply

To give an agent a job, you have to say exactly what you want. Ambiguous requirements produce ambiguous outputs. Here’s the paradox — that was always true with humans too, but humans tacitly worked to resolve ambiguity on their own. Agents don’t, or do it in the wrong direction. So “the skill of making requirements precise” is no longer optional; it’s core in the AI era.

Third, the trust boundary moves

Code review has been partly about “is this person’s code trustworthy?” Going forward, it becomes “is this change itself trustworthy?” Code written by an agent carries no human reputation and no stylistic signals from a senior engineer. Every change has to be verified independently. That means code review, testing, and static analysis don’t shrink in importance — they grow. The runtime-level design that fills this trust gap is what harness engineering is about.

Fourth, the shape of coordination cost changes

Brooks’s Law (“adding people to a late project makes it later”) rests on the observation that communication cost between humans scales quadratically with headcount. Agents add a new variable. Agents don’t tire, don’t forget context (in principle), and can act on many tasks in parallel. But agent-to-agent coordination, and human-to-agent coordination, create new kinds of overhead — especially when agents make contradictory changes simultaneously, or when humans can’t keep up with agent output.
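
The quadratic claim is just the pairwise-channel count, n(n−1)/2; a few concrete values make it vivid:

```python
def channels(n: int) -> int:
    """Pairwise communication channels among n collaborators: n(n-1)/2."""
    return n * (n - 1) // 2

# The growth behind Brooks's Law: 3 people share 3 channels,
# 10 people share 45, 50 people share 1225.
for n in (3, 10, 50):
    print(n, channels(n))
```

Agents change who sits at each end of a channel, but not the combinatorics; every additional coordinating party, human or not, multiplies the edges.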


Part 4. Six principles for the AI Agent era

Based on the above, the following principles seem durable for the new era. The reason they’re principles, not a methodology, is deliberate: methodologies decay into ritual outside their original context; principles can be reinterpreted with context.

Principle 1. Specs as code, verification as automation

The single most important artifact a human should write is an executable specification. Tests, types, contracts, invariants — these are the language through which you tell the agent “what I want,” and the mechanism by which its output gets verified. TDD was once a matter of religious conviction; in the AI agent era, it becomes a practical necessity. Write tests first, let the agent implement — verification and production naturally separate.
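
A minimal sketch of the pattern, with an invented function (slugify) standing in for real work: the human writes the spec as an executable test, and the implementation beneath it is the part an agent could be handed.

```python
def test_slugify_spec():
    """The spec, written first: lowercase, spaces become hyphens,
    and applying it twice changes nothing (idempotence)."""
    assert slugify("Hello World") == "hello-world"
    assert slugify(slugify("Hello World")) == slugify("Hello World")

def slugify(title: str) -> str:
    # The implementation: the part that could be delegated, because
    # the test above decides whether it is acceptable.
    return "-".join(title.lower().split())

test_slugify_spec()
```

Note the division of labor: the test encodes intent, the implementation satisfies it, and either side can be regenerated as long as the other holds.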

Principle 2. Small, independent, verifiable units

Tasks where agents do well share three properties: small context, clear dependencies, verifiable results. These are the same principles as good software design — low coupling, high cohesion, clear interfaces. The only difference is that we used to design this way “so humans could understand it.” Going forward we design this way “so agents can work on it precisely.” The two goals align. Good design is rewarded more, not less, in the AI era.
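
A sketch of what such a unit looks like, with invented names: the function takes everything it needs as arguments, touches no shared state, and returns a result that can be checked mechanically.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RateLimit:
    """Explicit dependency: the policy arrives as data, not as a
    global an agent would have to go hunting for."""
    max_calls: int
    window_s: float

def allow(calls_in_window: int, limit: RateLimit) -> bool:
    """Pure decision: small context, no I/O, trivially verifiable."""
    return calls_in_window < limit.max_calls
```

The same three properties that make this easy for an agent (small context, clear dependencies, verifiable result) are what make it easy for a new teammate.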

Principle 3. Spend human time on review and judgment

If agents can produce code, the scarcest human resource is judgment. What to build, which trade-offs to accept, whether this code actually solves the problem — agents can’t answer these. They shouldn’t. Team shape will likely shift from “many juniors + a few seniors” to “a small number of developers with judgment + many agents.” That isn’t bad news for juniors. Judgment is trainable, and working alongside agents can accelerate the training. Only the content changes — the training target moves from “typing speed” to “problem definition and output evaluation.” Concrete per-level playbooks are in AI agent strategy by developer experience.

Principle 4. Small team, fast loop

The biggest paradox of the AI agent era is that as individual productivity rises, small teams get more valuable. Coordination cost still scales near-quadratically with headcount, and adding agents only complicates the picture further. So “2–3 people + agents” can deliver what 10 people used to, faster. Amazon’s two-pizza rule becomes more true, not less. One-pizza teams, and even one-person-plus-agents teams, become viable.

Principle 5. Guardrails first, freedom later

Giving an agent access is like giving a new hire production database access. Capability exists; context is thin. So guardrails — permission separation, sandboxes, rollback-safe deploys, observability — come first. When guardrails are strong enough, you can give agents more latitude, and the more latitude they have, the more useful they become. “An environment where agents can break things without catastrophic consequences” sets the ceiling on how usefully they can be deployed.
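
A toy version of permission separation, with a hypothetical allowlist: commands an agent issues are checked against explicit policy before anything runs, and everything outside the policy is routed to a human rather than executed.

```python
# Invented policy for illustration: read-only inspection and the test
# runner are pre-approved; everything else needs a human in the loop.
ALLOWED = {"git status", "git diff", "pytest"}

def guard(command: str) -> str:
    """Decide what happens to an agent-issued command.

    Nothing executes here; the guard only classifies. In a real
    system the BLOCKED branch would log and page a reviewer.
    """
    if command in ALLOWED:
        return f"RUN: {command}"
    return f"BLOCKED: {command} (needs human approval)"
```

The latitude question then becomes a policy edit, not an architecture change: widening `ALLOWED` is how trust is extended once the guardrails have proven themselves.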

Principle 6. Docs for both humans and agents

Documentation now has two audiences: humans and agents. The good news is they want mostly the same things — clear context, decision rationale, exceptions and constraints. The bad news is many teams still don’t do this well. READMEs, ADRs, system diagrams — these are no longer “nice to have.” How effectively an agent can contribute to an existing codebase depends almost entirely on documentation quality. In a context-free codebase, agents produce only mediocre work.


Conclusion: principles, not methodology

I have no intention of proposing “a new methodology for the AI era.” Scrum 2.0 and Agile-with-AI will appear soon enough, and most will decay into ritual like their predecessors. When consultants start selling certifications, the methodology is already dead.

Remember the principles instead. The essential difficulty of software — what Fred Brooks called its essence, as opposed to its accidents — hasn’t changed.

So the efficient approach is this: redraw the boundaries of work so humans focus on the essential and agents handle the accidental. The ability to distinguish what is essential from what is accidental is the new core competency for a developer. The tools are new; doing well with them still rests on old virtues — clear thinking, honest feedback, fast learning.

Agents aren’t our competitors; they’re amplifiers. Good judgment becomes better; poor judgment becomes worse. The teams that win this era are not the ones who adopt the newest tools first, but the ones who most clearly know what they want to build.