Restraint as competitive advantage in AI-assisted coding
Should this exist? The new core competency
When methodology can’t keep up with judgment
When building no longer forces you to ask
The Rick Rubins of technology will win. Especially in AI and coding. Not because they generate more code. But because they know which code should never exist.
AI has made writing software cheap, fast, and abundant. We can scaffold systems in hours that used to take months. We can autocomplete architectures, refactor entire codebases, spin up agents, pipelines, and tools with alarming ease.
That is not the bottleneck anymore. The bottleneck is judgment.
Rick Rubin never tried to impress by complexity. His contribution was restraint. He listened for what mattered and removed what did not. In AI-assisted coding, this mindset is becoming decisive.
Most systems today are overbuilt. Too many abstractions. Too many layers. The code works, but the system feels heavy. Fragile. Hard to reason about. Debugging becomes archaeology.
Good engineers used to be valued for how much they could implement. The next generation should be valued for how clean they keep the surface area. How readable the system remains. How few concepts a human needs to hold in their head to make a change with confidence.
This is not nostalgia for simplicity. It is an adaptation to a new reality.
When AI participates in writing the system, humans become editors, curators, and composers. The work shifts from production to direction. From syntax to intent. From building everything that is possible to choosing what is necessary.
And this is where taste matters.
A Rick Rubin approach to AI and coding means optimizing for trust. As AI becomes more autonomous, trust replaces cleverness as the core metric. Can I understand this system? Can I predict its behavior? Can I intervene without breaking everything?
The winning formula: use AI not as an excuse to scale complexity, but as a reason to reduce it.
Not more code. Better code.
Not louder systems. Clearer ones.
That is why the Rick Rubins of technology will win.
AI makes it easy to build. Wisdom is knowing what to delete.
The question used to be: can we build this? Now AI answers that question for us. Yes. Increasingly so. Soon almost always yes. Faster than we expected. The constraint of capability is dissolving.
What remains is a different question: should this exist?
This is not a moral question. It is a practical one.
Every system that gets built must be understood, maintained, debugged, and eventually unwound. Every feature adds surface area. Every abstraction adds cognitive load. Every ‘yes’ accumulates.
When building was hard, scarcity did the filtering for us. We could not build everything, so we had to choose. The limitation was a gift. It forced judgment.
Now the filter is gone.
AI can generate endlessly. It does not get tired. It does not feel the weight of complexity. It has no skin in the game of maintenance. We do.
So the question ‘should this exist?’ is not philosophy. It is the new core competency. The ability to say no. To say no with conviction, under pressure, when the tool says yes. I think this is quickly becoming the differentiator.
Maybe this is where human agency lives now.
Not in creation. In selection.
Not in building. In editing.
Not in yes. In the deliberate, reasoned no.
The question ‘can we build this?’ is answered.
The question ‘should this exist?’ is ours to hold.
Or is it?
The old methodology asked: did you complete the phase? The new one asks: is your judgment keeping up with your generation speed?
The Double Diamond is not dead. But it no longer describes how we actually build.
I spent the better part of a decade selling and delivering work that was shaped by it. Discovery workshops. Definition sprints. Development phases. Delivery milestones. Two diamonds. Diverge, converge, repeat. The geometry was clean, the clients could follow it, and the invoices mapped neatly to the phases.
Now AI is eroding that geometry. Not because divergence and convergence disappear. They don’t. But because three things have changed simultaneously.
Divergence is no longer scarce. You can generate 100 concepts before lunch. This is not hyperbole. When variation becomes abundant, the bottleneck moves. Not to ideation. To judgment. The critical skill is no longer ‘how might we.’ It is ‘why this one, and not the other 99.’
Discovery is no longer a phase. With AI you can build while researching. Test while defining. Simulate user responses before recruiting a single participant. The clean handoff between ‘we’re still exploring’ and ‘now we’re building’ dissolves. You are always doing both.
Delivery is no longer an endpoint. AI-native systems adapt, learn, are reconfigured through prompts, policies, and model updates. You are not delivering a fixed artifact. You are releasing a capability space: a living system whose boundaries redraw themselves with every interaction.
The methodology didn’t break. The clock did.
The Double Diamond was always a time-based model disguised as a thinking model. It worked because the activities it described took weeks or months each. When AI compresses all of those timelines simultaneously, the sequence stops making sense.
The best metaphor I have found is a feedback system with judgment at its center. The methodology becomes less about which phase you are in and more about how quickly you can cycle between generating and evaluating, between exploring and committing, between deploying and learning.
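The loop described above can be sketched in a few lines. Everything here is a toy illustration, not a real tool: generation is deliberately cheap and abundant, while evaluation is the scarce gate that decides when to commit.

```python
# A minimal sketch of a generate-evaluate feedback loop with judgment
# at its center. All names are hypothetical illustrations.

def run_loop(generate, evaluate, max_cycles=100):
    """Cycle between generating and evaluating until a candidate passes judgment."""
    rejected = []
    for _ in range(max_cycles):
        candidate = generate(len(rejected))   # divergence: variation is cheap
        if evaluate(candidate):               # convergence: deliberate judgment
            return candidate, rejected        # commit, plus everything we said no to
        rejected.append(candidate)
    return None, rejected

# Toy example: generate integers, accept the first one divisible by 7.
best, discarded = run_loop(generate=lambda i: i + 1,
                           evaluate=lambda n: n % 7 == 0)
print(best, len(discarded))  # 7 6
```

The point of the shape is that `evaluate` is where the human lives: the loop terminates not when generation runs out, but when judgment says yes.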
Geometry is easy to draw on a slide.
A feedback loop with taste at its center is harder to sell, harder to staff, and harder to measure.
But it is what the work actually looks like now.
Something remarkable is happening. People who never wrote code are building working systems. They describe what they want. AI writes it. They run it. It works. This is not a gimmick. It is a genuine expansion of who gets to create.
I think this is beautiful. But it surfaces a problem we used to hide behind scarcity.
When building was hard, most ideas died before they became systems. The friction filtered out the unprepared. If you shipped something, you probably understood it... or at least, you understood enough to be accountable.
Now that filter is gone.
You can build a working prototype in an afternoon without understanding how it works. You can deploy something real without knowing what it touches, what it writes, what it deletes, what it sends, what it costs.
This is not a criticism. It is a description. The question is not ‘can I build this?’ AI answered that. The question is: ‘should I ship this?’
And most vibe coders were never taught how to ask.
Not because they are careless. Because nobody had to ask before. The skill was implicit, buried in years of learning to code, learning systems, learning failure. It was not a separate discipline. It was absorbed.
Now the building is fast. The judgment is still slow.
So I made a small tool. I call it rick/r.
It does not write code. It does not fix anything. From your codebase it generates a prompt you paste into your AI assistant. That prompt asks the questions a producer would ask.
If certain gates fail, the tool says plainly: ‘Do not ship this yet.’
This is not about slowing people down. It is about helping them ask the question that building no longer forces them to ask.
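To make the idea concrete, here is a rough sketch of what a producer-style gate could look like. The essay does not describe rick/r’s internals, so every name and question below is an assumption drawn from the questions raised earlier (what the system touches, writes, deletes, sends, and costs), not the tool’s actual implementation.

```python
# Hypothetical sketch of a producer-gate prompt generator.
# It writes no code and fixes nothing: it only assembles a review
# prompt from a list of file names.

PRODUCER_QUESTIONS = [
    "What does this system touch: what does it read, write, delete, or send?",
    "What does it cost to run, and who is accountable when it misbehaves?",
    "Can a human predict its behavior and intervene without breaking everything?",
    "Should this exist, or does it exist only because it was easy to build?",
]

def build_prompt(file_names):
    """Assemble a review prompt to paste into an AI assistant."""
    listing = "\n".join(f"- {name}" for name in sorted(file_names))
    questions = "\n".join(f"{i}. {q}" for i, q in enumerate(PRODUCER_QUESTIONS, 1))
    return (
        "Review this codebase as a producer, not a builder.\n"
        f"Files:\n{listing}\n\n"
        f"Answer each question, then give a verdict:\n{questions}\n\n"
        "If any gate fails, reply exactly: Do not ship this yet."
    )

prompt = build_prompt(["app.py", "deploy.sh"])
```

The design choice is the same as the essay’s: the tool never answers the questions itself. It only forces them to be asked before shipping.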
The Rick Rubin move is not ‘do less.’ It is ‘know what you are doing before you commit.’
Vibe coding gave us the ability to create.
The producer’s question is: should this exist in the world?
That question is still ours.
Systems that feel calm, even when they are powerful.
All texts by Thordur Arnason.
Originally published on LinkedIn, 2025–2026.
Editor in Chief: Lena Thorsmaehlum
Publisher: Gervi Labs
Art: The Synthetics