On subtraction, codification, and the chemistry of working with things that do not understand
This issue is a collaboration. In April, I contributed an essay to heckelai.com, an academic platform working at the intersection of AI and professional practice. It went live there on April 29.
It belongs here too. What follows is the most considered version I have managed of something that surfaces repeatedly in client work: what the agentic turn is actually doing to people working inside it. Not the replacement narrative. Not the keynote version. The version visible from the ground.
Four episodes, written from inside the work rather than above it.
We were promised the machines would arrive in a single gesture. A profession made obsolete in a press release. A factory floor emptied overnight. The radiologist retired, the lawyer outcompeted, the writer surplus to requirements. We have been watching for this picture so long that we cannot see what is actually in front of us.
What is in front of us is smaller. A junior analyst no longer drafts the first version of the memo. A consultant no longer builds the deck from scratch. A developer no longer writes the boilerplate. None of these people lost their jobs. Each of them lost a portion of what used to constitute the job. I watch this happen on every engagement I run. The role description has not been updated. The work inside it has been hollowed to a degree no one has measured.
This is not replacement. It is subtraction by a thousand small absences. Tsing observed, writing about a different kind of accumulation, that the most consequential changes rarely arrive as events. They arrive as patterns visible only in retrospect, made of countless small acts that individually seem trivial. The agentic turn is following this pattern exactly. We are watching for the event and missing the accretion.
The mistake is not just analytical; it is political. If the displacement were a single dramatic act, we would know what to do about it. There would be policy levers, retraining programs, a clear before and after. The papercut version offers no such handle. There is no moment to organize around. The work simply becomes lighter in ways no one celebrates and heavier in ways no one tracks.
This is not replacement. It is subtraction by a thousand small absences.
To codify a practice is an old human activity. Guilds did it. Professions did it. Industrial standardization did it. Every act of codification involved a slow institutional process: deciding what counted as the work, what counted as good work, who was qualified to judge, where the edges of the practice were. This deliberation was not always wise, and it was rarely just. But it was deliberation. It happened in committees, in apprenticeships, in arguments that took years.
Agentic AI compresses this process to nothing. A practice is observed, recorded, and rendered executable in the same week. I have done this myself, more times than I can count: watched a colleague work for an afternoon, written down what they did, handed the procedure to an agent, and within days the procedure was running without them. The agent does not understand what it is doing in any meaningful sense. It is performing the codified pattern, which is not the same as performing the practice.
What gets lost is not capability. The agent often performs the codified version of the task at a level indistinguishable from the human, sometimes better. What gets lost is the substrate: the tacit knowledge, the situational judgment, the small adjustments that experts made without naming them. These were never written down because no one knew they were there, including the experts themselves.
For a while, the codified version works. Then it does not. Conditions change. The edges of the practice are reached. Situations the codification did not anticipate begin to accumulate. The human who used to handle these by drawing on the unwritten substrate is no longer practiced enough to do so, because the practiced part of the work was the part that got automated. The substrate atrophies in the population that used to carry it. This is the cost of codification without deliberation, and it compounds slowly enough that no one names it as a cost until it is too late to recover.
The difference between pattern and practice is where most of the human knowledge lived.
In the picture we tell ourselves, the agent does the work and the human is freed for higher-order tasks. Strategy. Creativity. Judgment. The phrase appears in every consulting deck and every keynote, including ones I have given. It is half true, and half a story we tell to keep the picture comfortable.
The other half: the human is now responsible for directing the agent, evaluating its output, integrating its work into larger systems, catching its errors, explaining its decisions, and absorbing its failures. None of this is higher-order in the sense the keynote means. It is a different kind of labor, and it is not lighter than the work it replaced. It is differently heavy. I feel this in my own days. The work I do now is less typing and more deciding. Less making and more checking. Less alone and more accompanied, by something that does not get tired and does not particularly care whether the work is good.
Suchman, decades ago, described how the apparent autonomy of machines is always achieved through invisible human labor that makes the autonomy possible. The agent appears to act. The human does the work of making the action legible, correctable, accountable. This labor was already present in earlier waves of automation. Agentic systems intensify it because the surface area of agent action is larger and the failure modes are more various.
The asymmetry matters. The machine inherits the codified work, which is the part that was already legible enough to encode. The human inherits everything else: the orchestration, the exception handling, the integration, the trust calibration. This is not the deal that was advertised. It may still be a good deal, depending on what the human values. But it is the actual deal, and pretending otherwise distorts our ability to design for it.
It is not lighter than the work it replaced. It is differently heavy.
There is a moment, working with a capable agent, when the human decides to let it run. Not to watch each step. Not to verify each output. To delegate, in the strong sense: to extend trust to a system that has no stake in the outcome and no understanding of why the outcome matters. The agent that is running while I write this is in that state. I have decided, without ceremony, that I trust it enough for this task, in this context, today.
This moment is the actual site of human-AI chemistry. Everything else is interface design. What happens here is small and consequential. The human makes a calibration: how much can this thing be trusted, in this context, for this kind of task, given what I know about its failure modes. The calibration is not explicit. It is felt. Over time, with a particular agent, on a particular kind of work, the calibration becomes a habit. The habit shapes how the human works.
We do not yet have adequate language for this third thing: working with a system that is neither collaborator nor tool, that performs as if it understood without understanding, that earns trust without being able to be trusted in any of the ways we mean when we use that word.
The chemistry, if we want to keep the metaphor, is unstable. The bond is real but the elements are mismatched. The human brings stake, context, accountability, a body that will live with the consequences. The agent brings competence, speed, availability, and nothing else. The reaction produces work, sometimes excellent work. It also produces a slow change in the human, who begins to think with the agent's grain, to scope problems to the agent's reach, to feel the friction of tasks the agent cannot handle as friction in the world rather than as the texture of the work itself.
We are at the beginning of learning what this does to us. The papercuts accumulate. The codified practices replace the practiced ones. The orchestration burden settles into the body. The chemistry becomes the medium we work in, no longer noticed, until something fails and we remember that we had been calibrating all along.
The agent has finished.
I have not yet looked.
Weekly Dispatch is published by Gervi Labs, an independent AI research and design lab based in Norway. gervilabs.com/dispatch/weekly