DORA, SPACE, DevEx, DX Core 4: Each Answers a Different Question
Software teams don't break down due to lack of metrics. They break down because they measure with conviction things they don't fully understand.
Series: Why Productive Teams Fail (3/4)
“Productivity,” “efficiency,” and “performance” are words used with excessive confidence in technical and executive discourse, as if they were stable, universal, and self-explanatory concepts. They’re not. In practice, they function more as rhetorical shortcuts than as precise definitions. Each person in the organization hears these words and projects a different expectation onto them — and yet, everyone agrees to measure them.
The problem of conceptual ambiguity
It’s in this nebulous space that metrics frameworks emerge. They appear as attempts to organize conceptual chaos, offering models, indicators, and common languages. The problem is that, when used without reflection, these frameworks start being treated as ready-made answers, when in fact they’re just lenses.
And every lens magnifies some things while distorting or hiding others.
The question that precedes the metric
Before talking about metrics, therefore, we need to talk about the game. What kind of problem are we trying to solve?
- Are we delivering fast enough?
- How can we reduce pipeline risks?
- Is code quality adequate?
- Is the work sustainable for the team?
- Where is the cognitive strain?
- What is being avoided or ignored?
The answer completely changes what makes sense to measure — and which framework makes sense to use.
The four frameworks
Throughout this series, we’ll discuss four widely used frameworks for discussing productivity and efficiency in software teams. They don’t compete with each other, because they don’t try to answer the same question.
Frameworks are not alternatives
The common mistake is placing them side by side as alternatives, when in fact each starts from a different definition of what matters.
DORA: The delivery flow
DORA (DevOps Research and Assessment) focuses on the delivery pipeline: how code flows from commit to production and how reliably it stays there. The four metrics (a computation sketch follows the list):
- Deployment Frequency — How often code reaches production
- Lead Time for Changes — Time elapsed between commit and deploy
- Time to Restore Service — Speed of recovery after failures
- Change Failure Rate — Proportion of deploys that cause problems
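To make the definitions concrete, here is a minimal sketch of how these four numbers could be computed from deploy and incident records. It assumes you already export this data somewhere; the record shapes (`Deploy`, `Incident`, `caused_failure`) are illustrative, not part of DORA itself.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Deploy:
    committed_at: datetime  # when the change was committed
    deployed_at: datetime   # when it reached production
    caused_failure: bool    # whether this deploy triggered an incident

@dataclass
class Incident:
    started_at: datetime
    restored_at: datetime

def dora_metrics(deploys: list[Deploy], incidents: list[Incident], window_days: int) -> dict:
    """Compute the four DORA metrics over a reporting window (assumes non-empty lists)."""
    lead_times = sorted(d.deployed_at - d.committed_at for d in deploys)
    restore_times = [i.restored_at - i.started_at for i in incidents]
    return {
        "deployment_frequency_per_day": len(deploys) / window_days,
        "median_lead_time": lead_times[len(lead_times) // 2],
        "time_to_restore": sum(restore_times, timedelta()) / len(restore_times),
        "change_failure_rate": sum(d.caused_failure for d in deploys) / len(deploys),
    }
```

Note what is absent: nothing in this computation says why a number is high or low, which is exactly the limitation discussed below.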
The model categorizes teams into Elite, High, Medium, and Low performers. This classification is useful for initial diagnosis, but becomes problematic when treated as an end in itself.
Useful for: Predictability, reliability, and delivery pipeline efficiency.
Not designed for: Human experience, learning, or cognitive strain. When used outside this context, it starts generating conclusions it never set out to support.
What DORA doesn't measure
DORA doesn’t capture why the metrics are what they are. A team can have high deployment frequency because they automated well — or because they’re under constant pressure to deliver fast.
SPACE: The conceptual map
The central idea of SPACE
Productivity in knowledge work is multidimensional and cannot be reduced to a single number without serious losses.
The five dimensions:
- Satisfaction and well-being — Feeling of satisfaction, fulfillment, and health at work
- Performance — Work outcome (quality, impact, not just volume)
- Activity — How work is done (counting, but contextualized)
- Communication and collaboration — How people and teams exchange information
- Efficiency and flow — Ability to do work without friction or interruption
The framework doesn’t prescribe specific metrics. It offers lenses to examine productivity from multiple angles and deliberately avoids single numbers or rankings. Its strength lies in forcing the question: “what else are we failing to see?”
SPACE doesn’t tell you exactly what to measure; it poses the uncomfortable questions: why do you want to measure this, and what are you leaving out by doing so? It’s less prescriptive and more philosophical, which is precisely its greatest value.
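As a thought experiment, here is what a SPACE-inspired scorecard might look like as a data structure. Every signal and threshold in it is an assumption, not something the framework prescribes; the one deliberate design choice is that nothing collapses the five dimensions into a single score.

```python
from dataclasses import dataclass

@dataclass
class SpaceScorecard:
    """One illustrative signal per SPACE dimension; deliberately no overall score."""
    satisfaction: float   # e.g., quarterly survey average, 1-5 scale
    performance: str      # e.g., a qualitative outcome note, not a number
    activity: int         # e.g., changes merged, only readable in context
    communication: float  # e.g., median review turnaround, in hours
    efficiency: float     # e.g., share of days with 2+ hours of unbroken focus

def conversation_starters(card: SpaceScorecard) -> list[str]:
    """Flag dimensions worth a team conversation. Not a ranking, not a grade."""
    flags = []
    if card.satisfaction < 3.0:
        flags.append("satisfaction: something is wearing the team down")
    if card.communication > 48.0:
        flags.append("communication: reviews are stalling")
    if card.efficiency < 0.4:
        flags.append("efficiency: focus time is fragmented")
    return flags
```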
Developer Experience: Daily life
The DevEx framework shifts attention from the pipeline to the developer’s day-to-day experience of doing the work. The central question isn’t “how fast are we delivering,” but “how difficult is it to do the right work in this environment?”
Core DevEx factors:
- Feedback loops — How long to see the result of a change (build, tests, deploy)
- Cognitive load — Complexity that needs to be kept in mind to work
- Flow state — Ability to enter and remain in deep concentration
- Tooling — Quality, integration, and reliability of tools
- Documentation — Clarity about how systems work and decisions were made
- Onboarding — Time and difficulty for new members to become productive
Excessive friction, poor tools, opaque processes, and constant interruptions don’t appear directly in classic delivery metrics, but they erode quality, motivation, and team sustainability over time.
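Of these factors, feedback loops are the easiest to start observing. Below is a minimal sketch, assuming a local JSONL file as the destination (the file name and loop labels are invented for illustration): wrap the commands developers wait on and record how long the wait actually is.

```python
import json
import subprocess
import time
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("feedback_loops.jsonl")  # hypothetical local log; adapt to your telemetry

def timed_loop(name: str, command: list[str]) -> int:
    """Run a build/test command and record how long the feedback took."""
    start = time.monotonic()
    result = subprocess.run(command)
    elapsed = time.monotonic() - start
    with LOG.open("a") as log:
        log.write(json.dumps({
            "loop": name,  # e.g., "unit-tests", "local-build"
            "seconds": round(elapsed, 1),
            "exit_code": result.returncode,
            "at": datetime.now(timezone.utc).isoformat(),
        }) + "\n")
    return result.returncode

# Usage: timed_loop("unit-tests", ["pytest", "-q"])
```

A week of this data tends to say more about daily friction than any retrospective.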
DevEx as a leading indicator
While DORA measures outcome (what already happened), DevEx measures conditions (what’s about to happen). A team with poor DevEx can maintain high delivery for a while — until it can’t anymore.
DX Core 4: The pragmatic synthesis
Finally, DX Core 4 distills the broader maps above into four dimensions you can act on directly.
The four core dimensions:
- Fast feedback loops — Speed of cycles (build, test, CI, deploy)
- Low cognitive load — Reduction of unnecessary complexity
- Deep flow state — Protection against interruptions and context switching
- High developer satisfaction — Feeling of progress and fulfillment
Unlike SPACE, which maps dimensions for reflection, DX Core 4 points directly to where to act. Each of these dimensions can be measured, but can also be immediately improved through concrete actions: improve tools, simplify architecture, protect focused time, listen to feedback.
It’s less ambitious conceptually, but much more direct in practice. When used well, it helps transform diffuse diagnoses into concrete decisions.
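As one possible way to make that actionable, the sketch below aggregates a hypothetical weekly pulse survey (the scores and dimension labels are assumptions) and points at the weakest dimension, on the simple premise that you act where the median is lowest.

```python
from statistics import median

# Hypothetical weekly pulse: one response per developer, 1 (poor) to 5 (good).
PULSE = [
    {"feedback_loops": 2, "cognitive_load": 3, "flow_state": 2, "satisfaction": 4},
    {"feedback_loops": 3, "cognitive_load": 2, "flow_state": 3, "satisfaction": 3},
    {"feedback_loops": 2, "cognitive_load": 3, "flow_state": 2, "satisfaction": 4},
]

def weakest_dimension(responses: list[dict]) -> tuple[str, float]:
    """Return the dimension with the lowest median score: the place to act first."""
    medians = {key: median(r[key] for r in responses) for key in responses[0]}
    return min(medians.items(), key=lambda item: item[1])

dimension, score = weakest_dimension(PULSE)
print(f"Act on {dimension} first (median {score})")
```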
One of the most critical aspects of DX Core 4 is the emphasis on fast feedback loops. Long waits between action and result don’t just delay work — they kill momentum.[1]
DX Core 4 as a starting point
If you don’t know where to start improving Developer Experience, these four dimensions offer an immediate action map. They’re not the only things that matter, but they’re the ones that most frequently make a difference.
The Risk: When Frameworks Become Ideology
So far, we’ve presented four frameworks with respect. Each has real value when used consciously. But there’s a risk that needs to be named before you encounter them in a degenerated state: frameworks can become ideologies.
From tool to doctrine
When a framework stops being a lens and becomes absolute truth, it stops illuminating problems and starts hiding them under a veneer of technical rationality.
How degeneration happens
Phase 1: Honest adoption — A team or organization discovers the framework, sees value, begins applying it consciously.
Phase 2: Normalization — The framework becomes common language. “Our DORA is good,” “we need to improve our SPACE satisfaction,” “poor DevEx here.”
Phase 3: Bureaucratization — Frameworks become requirements, not tools. Metrics are collected because “that’s what you do,” not because they answer questions.
Phase 4: Ideology — The framework becomes unquestionable. Questioning it is seen as resistance to “modernity” or “technical maturity.”
Signs you've reached Phase 4
- Discussions about the framework replace discussions about the real problem
- Criticisms of the framework are treated as heresy, not contribution
- Numbers become automatic justification for decisions (“because DORA says,” “because SPACE indicates”)
- The organization stops asking “why do we measure this?” and starts treating metrics as axioms
What the next articles will reveal
In the next four articles, we’ll examine each framework critically — not to destroy them, but to recover awareness of their limits, risks, and hidden assumptions.
Article 4: DORA — We’ll see that correlation is not causation. That metrics can both describe and create realities. And how Gartner legitimizes frameworks not because they’re true, but because they reduce corporate anxiety.
Article 5: SPACE — We’ll explore how complexity can be weaponized. How “it’s multidimensional” can become an excuse for inaction. And why organizations resist SPACE precisely when it would be most useful.
Article 6: DevEx — We’ll discover that “DevEx” became a marketing buzzword. That improving experience isn’t technical — it’s political. And that cosmetics (new tools) don’t substitute for systemic change (power redistribution).
Article 7: DX Core 4 — We’ll understand that pragmatism can be a trap. That 4 dimensions can become a reductionist checklist. And that choosing where to act requires more than frameworks — it requires political courage.
Why this series doesn’t start with answers
As we saw in Article 2, metrics aren’t neutral — and choosing what to measure is choosing what to see and what to ignore.
Now we add another layer: choosing a framework is choosing a narrative about what constitutes success.
This choice is never purely technical. It reflects organizational priorities, leadership anxieties, power structures. When you adopt DORA without questioning, you’re implicitly saying “delivery is what matters.” When you ignore DevEx, you’re saying “human experience is secondary.”
Used consciously, frameworks:

- ✓ Illuminate previously invisible aspects
- ✓ Enable more precise conversations
- ✓ Facilitate systematic diagnostics
- ✓ Are reviewed and adjusted constantly

Used as ideology, they:

- ✗ Hide complexity under a single number
- ✗ Replace difficult conversations with dashboards
- ✗ Become compliance rituals
- ✗ Are treated as unquestionable truths
This series’ commitment
We won’t pretend frameworks are neutral. Each carries assumptions about what quality work is, who matters, and what can be sacrificed. These assumptions may be right or wrong — but they’re rarely explicit.
In the next articles, we’ll make them explicit. Not to reject frameworks, but to use them consciously. So that when you choose to measure something, you know exactly what game you’re playing — and are willing to bear the consequences of that choice.
Metrics are not neutral
Because, in the end, metrics are not neutral. They shape behavior, direct investment, and define what the organization comes to call success.
Frameworks as political instruments
Choosing a framework is choosing a narrative about what constitutes well-done work. And this choice isn’t technical — it’s political.
When an organization decides to measure only DORA, it’s implicitly saying: “delivery is what matters.” When it adds SPACE, it recognizes there’s more at stake. When it invests in DevEx, it admits that human experience has weight. When it ignores all and focuses on lines of code or story points, it reveals exactly what game it’s playing — even if it doesn’t admit it.
The decisions that look technical:

- Which framework to use?
- What metrics to collect?
- How to categorize teams?

The decisions actually being made:

- What counts as success?
- Who will be rewarded?
- What behavior do we want?
The danger of metric bureaucracy
Well-intentioned frameworks often degenerate into empty rituals. What begins as an honest attempt to understand complex work becomes a compliance checklist.
Signs of degeneration:
- Metrics are collected, but no one acts on them
- Teams optimize to look good in numbers, not to actually improve
- Discussions about measurement methods replace discussions about real problems
- Frameworks become compliance requirements, not diagnostic tools
- Framework language replaces direct conversations about difficulties
When this happens, the framework has stopped illuminating the problem and has become the problem.
The responsibility of those who measure
To measure is to intervene. There’s no neutral observation of social systems.
The fundamental choice
And it all starts with a choice that is simple but rarely explicit: what game are we really trying to win?
If you don’t choose consciously, the system chooses for you. And systems, left to their own devices, tend to optimize for predictability, control, and absence of conflict — not for learning, quality, or sustainability.
Before adopting any framework, ask:
- What problem am I trying to solve with this metric?
- What behavior might it inadvertently encourage?
- What does it leave invisible?
- Who benefits if this metric improves?
- Who might be harmed?
Frameworks are lenses. Useful when you know what you’re looking for and aware of what you’re ignoring. Dangerous when treated as objective truths about human work.
The question isn’t which framework is better. The question is: do you know what game you’re playing?
Footnotes

[1] Momentum, in the context of software development, refers to the psychological state of continuous progress and engagement. It’s the feeling of moving forward consistently, where each action generates visible results quickly, creating a virtuous cycle of motivation and productivity. When momentum is lost — through slow builds, bureaucratic processes, or prolonged waits — the cost isn’t just temporal: it’s cognitive and emotional. Teams lose focus, mental energy dissipates, and regaining context requires additional effort. Protecting momentum is protecting the team’s ability to work in a flow state.