SPACE: Productivity Is Not a Number — It's a Human System Under Tension
SPACE doesn't offer quick answers. It offers intellectual friction. Understand why software productivity requires accepting unresolvable tensions.
Series: Why Productive Teams Fail (Part 5 of 8)
After looking at DORA, it’s tempting to believe that software productivity can be understood — and improved — by observing the pipeline. In many ways, this is true. Predictable systems with low cost of change create real conditions for work to progress.
But as organizations mature operationally, a discomfort begins to emerge. As we saw in Article 2, metrics are not neutral. And if DORA can be gamed — as we explored in Article 4 — what happens when we expand the game to five simultaneous dimensions?
The gap between numbers and human experience
What the metrics show
- Metrics look good
- Flow is healthy
- Deployments are frequent
What people feel
- Teams deliver but complain of exhaustion
- There's a constant sense of urgency
- Living inside the system is costly
This is where SPACE enters.
An expansion, not correction
Not as a correction to DORA, but as an expansion of the problem. Where DORA asks “how well does the system deliver changes?”, SPACE asks something more uncomfortable: “what are we calling productivity — and who is paying the cost of that definition?”
SPACE offers intellectual friction, not answers
The SPACE approach emerges from research tied to productivity work in engineering within large technology organizations, especially in initiatives associated with GitHub and applied academic research[2]. It wasn’t created to generate ready-made indicators, but to organize thinking. And that alone places it in a different category from most popular models.
The radical premise
Productivity in knowledge work is inherently multidimensional. Any attempt to reduce it to a single indicator inevitably sacrifices important aspects of work. The problem isn’t just technical. It’s epistemological.
We’re trying to measure something too complex with instruments too simple.
That’s why SPACE doesn’t start by asking what to measure. It starts by asking what is being ignored.
Five dimensions in permanent tension
The acronym represents five dimensions that coexist in permanent tension: Satisfaction, Performance, Activity, Communication & Collaboration, and Efficiency & Flow. None of them, in isolation, defines productivity. But any definition that ignores one of them creates distortions.
Satisfaction and Well-being: Structural variable, not luxury
Why satisfaction matters technically
Satisfaction is often treated as something “soft”, secondary, almost a luxury. SPACE inverts this logic. It treats satisfaction and well-being as structural variables, not emotional bonuses. A satisfied developer isn’t just happier; they have greater capacity for concentration, judgment, and learning.
The capacity argument
Satisfaction isn’t about making people happy. It’s about recognizing that mental capacity is a finite resource — and that systems that systematically wear people down reduce that capacity.
The connection between satisfaction and technical quality
Dissatisfied developers don’t stop delivering. They change how they deliver. They start avoiding risks, choose safe solutions instead of better ones, stop questioning bad decisions, accept technical debt without resistance.
A satisfied developer:
- ✓ Questions dubious decisions
- ✓ Proactively proposes improvements
- ✓ Invests in long-term quality
- ✓ Shares knowledge

A dissatisfied developer:
- ✗ Executes without questioning
- ✗ Does the minimum necessary
- ✗ Accepts shortcuts without resistance
- ✗ Keeps knowledge to themselves
The paradox is that dissatisfaction doesn’t immediately show up in productivity. It shows up in the quality of decisions, in the willingness to collaborate, in the energy available for difficult problems. The system keeps producing — but produces worse.
What happens when we ignore satisfaction
Ignoring this axis doesn’t make the system more objective — it just makes it blind to accumulated wear. The system keeps functioning until it doesn’t anymore. Turnover, burnout, and defensive decisions are late symptoms of ignored satisfaction.
How to measure satisfaction (without empty surveys)
The temptation is to reduce satisfaction to organizational climate surveys — those with generic questions that nobody takes seriously. SPACE suggests something different: measuring satisfaction through behavioral proxies.
- Retention: How long do people stay? Why do they leave?
- Volunteering: Do people volunteer for difficult projects or avoid them?
- Discussion quality: Are technical debates honest or political?
- Spontaneous feedback: Do people talk about problems or keep them to themselves?
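A minimal sketch of how such behavioral proxies might be tracked, assuming you collect these four signals. Every field name and threshold below is an illustrative assumption, and the output is deliberately a set of flags rather than a single score, since SPACE resists collapsing dimensions into one number:

```python
# Hypothetical sketch: behavioral proxies of satisfaction.
# All field names and thresholds are illustrative assumptions, not part of SPACE.
from dataclasses import dataclass

@dataclass
class TeamSignals:
    median_tenure_months: float   # retention proxy
    volunteer_rate: float         # share of hard projects with volunteers (0-1)
    honest_debate_score: float    # 0-1, observed in retros and reviews
    spontaneous_reports: int      # problems raised unprompted, per quarter

def satisfaction_flags(s: TeamSignals) -> list[str]:
    """Return warning flags, not a score — SPACE resists aggregation."""
    flags = []
    if s.median_tenure_months < 12:
        flags.append("retention: people leave within a year")
    if s.volunteer_rate < 0.3:
        flags.append("volunteering: hard projects are avoided")
    if s.honest_debate_score < 0.5:
        flags.append("debate: discussions look political, not technical")
    if s.spontaneous_reports == 0:
        flags.append("feedback: problems are kept private")
    return flags
```

Each flag is a starting point for a conversation, not a verdict; the thresholds would need calibration against a team's own history.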
The survey paradox
Satisfaction surveys capture what people say they feel. Behavior captures what they actually feel. When the two diverge, trust the behavior.
Satisfaction as a leading indicator
In SPACE, satisfaction functions as a leading indicator. It signals problems before they appear in delivery metrics. When satisfaction drops, productivity may still be high — but it’s living on reserves that eventually run out.
Satisfaction as an indicator of systemic health
High satisfaction indicates
- Sustainable system
- Capacity to absorb pressure
- Quality technical decisions
- Genuine collaboration
Low satisfaction indicates
- System operating under stress
- Mental reserves being consumed
- Quality silently deteriorating
- Collaboration becoming political
Organizations that treat satisfaction as “desirable” discover too late that it was structural to the system’s functioning.
Performance: Perceived quality, not speed
Performance, in SPACE, is not synonymous with speed. It’s tied to the perceived quality of work, the ability to solve complex problems, and to generate real impact.
The confusion between speed and performance
Organizations frequently treat speed as synonymous with performance. More commits per day, more tickets closed per sprint, less time between request and delivery. These numbers are easy to collect and satisfy the managerial need for visible progress.
But speed measures system throughput. Performance, in the SPACE sense, measures quality of outcome. They are different dimensions — and optimizing one doesn’t guarantee the other.
Speed looks like:
- Many tickets closed
- Short delivery cycles
- Constant movement
- Pipeline flowing

Performance looks like:
- Problems actually solved
- Code that doesn't come back as bugs
- Sustainable technical decisions
- Measurable business impact
Why the confusion persists
The preference for speed is no accident. It exists because speed is easy to measure and performance is hard to define.
How long does it take to know if a technical decision was good? Months, sometimes years. How long to know if a ticket was closed? Seconds. This temporal asymmetry creates a systematic bias: what’s measurable now gets attention; what only reveals itself later is ignored.
Hidden debt
A team can close 50 tickets per sprint while accumulating technical debt that will triple maintenance costs in six months. On today’s dashboards, that team looks excellent. In tomorrow’s problems, they’ll be the cause.
What performance really captures
In SPACE, performance tries to capture something closer to effectiveness: did the work solve the right problem, and did the solution hold up?
This includes:
- Code quality: Bugs introduced, rework needed, maintainability
- Real problem resolution: Does the delivered functionality address the user’s need?
- Technical sustainability: Do today’s decisions create or reduce future problems?
- Business impact: Does the technical work translate into perceptible value?
The performance lens
It’s not “how much did we produce?” — it’s “does what we produced work and matter?”
The measurement dilemma
Here’s the problem: real performance is hard to measure directly. You can count commits, but you can’t count “technical decisions that aged well”. You can measure delivery time, but you can’t measure “problems that didn’t happen because someone thought better”.
That’s why SPACE treats performance as a dimension that requires qualitative judgment, not just automated collection. Code reviews, honest retrospectives, and user reaction loops are data sources as valid as pipeline indicators.
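One hedged way to get a partial signal, assuming your tracker distinguishes reopened tickets and regression bugs, is a simple rework-rate proxy. All numbers below are illustrative, and this captures only one facet of performance; it does not replace the qualitative judgment described above:

```python
# Illustrative sketch: a rework-rate proxy for the Performance dimension.
# Counting closed tickets measures speed; tracking how much "finished" work
# comes back gets closer to "did the work actually hold up?".

def rework_rate(closed: int, reopened: int, regression_bugs: int) -> float:
    """Fraction of closed work that came back. Lower is better."""
    if closed == 0:
        return 0.0
    return (reopened + regression_bugs) / closed

# A team closing 50 tickets/sprint looks fast — until rework is counted.
fast_team = rework_rate(closed=50, reopened=12, regression_bugs=8)
steady_team = rework_rate(closed=30, reopened=1, regression_bugs=2)
```

By this proxy the "fast" team redoes 40% of its work, while the slower team redoes 10%; today's dashboard would rank them the other way around.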
How to identify real performance beyond speed
Signs of high performance
- Bugs rarely return as regressions
- New code integrates well with existing
- Technical decisions hold up over time
- Users perceive value in deliveries
Signs of low performance
- Constant rework on the same modules
- Delivered features that nobody uses
- Technical debt growing silently
- Solutions that create more problems than they solve
When organizations confuse speed with performance, they optimize movement — not results. And systems optimized for movement can move very fast in the wrong direction.
Activity: The most treacherous signal
Activity is perhaps the most treacherous dimension. Commits, tickets, PRs, hours online — all of this is easy to count. And precisely because of that, it’s dangerous.
The treacherous nature of the Activity dimension
What high activity might mean
- Consistent progress
- Productive team
- High output
What high activity might actually be
- Constant rework
- Interruptions fragmenting work
- Lack of focus and clarity
SPACE doesn’t ignore activity, but treats it as a weak signal. Activity without context is noise. The approach doesn’t say “don’t measure”. It says: don’t confuse movement with progress.
Activity is the dimension organizations most like to measure — and the one that should least be treated as a primary indicator. The reason is simple: activity is visible, immediate, and unambiguous. You can open a dashboard right now and see how many commits were made today. How many tickets were moved. How many hours each person was online. This visibility creates an illusion of control.
The visibility trap
What’s easy to see tends to receive more attention. What receives more attention tends to be optimized. When activity is what’s most visible, activity is what gets optimized — even if it’s not what matters.
High activity can mean completely opposite things:
- ✓ Real progress toward objectives
- ✓ Focused and intentional work
- ✓ Productive collaboration
- ✓ Deliveries that solve problems
- ✗ Rework disguised as progress
- ✗ Work fragmented by interruptions
- ✗ Meetings that generate more meetings
- ✗ Deliveries that create new problems
Without context, an activity dashboard doesn’t distinguish between the two. A team that redoes the same work three times due to lack of clarity shows three times more activity than a team that got it right the first time. In the numbers, the dysfunctional team looks more productive.
The preference for activity measurements reveals a deep cognitive bias: if we can count it, we must be understanding it. But counting is not understanding. It’s just a specific — and limited — way of representing reality.
Counted commits say nothing about code quality. Closed tickets say nothing about value delivered. Hours connected say nothing about real work done.
The invisible work
Thinking. Planning. Discussing. Learning. Reviewing carefully. Waiting for the right information before acting. All these activities are invisible to counting measurements — and all are essential for quality work.
SPACE doesn’t demonize activity. It has value when used correctly:
- As a warning signal: A sharp drop in activity may indicate systemic blockers
- As context for other SPACE dimensions: High activity + low satisfaction = sign of burnout
- As individual reference: Sudden changes in activity patterns deserve attention
The mistake isn’t measuring activity. It’s treating it as a primary indicator when it should be a complementary indicator.
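The "warning signal" use can be sketched: compare a team's activity against its own history and flag only sharp deviations for investigation. The baseline window and z-score threshold below are arbitrary assumptions, and the output is a prompt for a conversation, not a score:

```python
# Sketch: using activity as a warning signal, not a target.
# A sharp deviation from a team's OWN baseline is worth a conversation;
# the absolute number is not. Window size and threshold are arbitrary choices.
from statistics import mean, stdev

def activity_anomalies(weekly_counts: list[int], threshold: float = 2.0) -> list[int]:
    """Return indices of weeks that deviate strongly from the team's own history."""
    anomalies = []
    for i in range(4, len(weekly_counts)):      # need a few weeks of baseline
        history = weekly_counts[:i]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue
        z = abs(weekly_counts[i] - mu) / sigma
        if z > threshold:
            anomalies.append(i)                 # investigate, don't punish
    return anomalies
```

Note that the function never ranks weeks as "good" or "bad": a sudden drop may mean a systemic blocker, and a sudden spike may mean rework, so both point to the same next step, which is asking the team what happened.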
How to interpret activity indicators
Correct use of activity
- Compare patterns over time
- Identify anomalies worth investigating
- Cross-reference with other dimensions
- Use as a starting point for conversations
Incorrect use of activity
- Rank teams by activity
- Use as a performance target
- Assume more is better
- Ignore the context behind the number
The fundamental question activity cannot answer is: is this movement taking us where we want to go?
A team can be extremely active while walking in circles. It can show impressive numbers of tickets, commits, and deploys while the product stagnates, technical debt grows, and users remain dissatisfied.
Activity measures how much the system moves, not where it's going.
Communication and Collaboration: Social friction consumes energy
Communication and collaboration appear as their own axis because software work is rarely individual. Dependencies, alignments, reviews, and collective decisions are part of the real cost of producing software.
The social overhead
Social friction also impacts productivity. Excessive meetings, noisy channels, and poorly distributed decisions consume mental energy the same way poorly designed systems do.
Why collaboration is its own dimension
The temptation is to treat collaboration as a means to other dimensions. “We collaborate to deliver faster”, “We communicate to avoid errors”. But SPACE treats collaboration as an autonomous dimension because its cost and value are independent of the final outcome.
A team can have excellent collaboration and still deliver little — because it’s solving the wrong problem. Another can deliver a lot with minimal collaboration — and create knowledge silos that weaken the system.
Healthy collaboration:
- ✓ Information flows to those who need it
- ✓ Decisions are made with context
- ✓ Dependencies are managed proactively
- ✓ Knowledge is distributed

Dysfunctional collaboration:
- ✗ Information gets stuck in silos
- ✗ Decisions are made in the dark
- ✗ Dependencies become blockers
- ✗ Knowledge concentrated in individuals
The mental cost of coordination
Coordinating work with other people consumes mental energy. Every meeting, every Slack thread, every PR review, every expectation alignment — all of this has a cost. And this cost rarely appears in traditional measurements.
Time fragmentation
If a developer spends 2 hours a day in meetings, alignments, and communications, 6 hours remain for focused work. But the real impact is worse: context switching between meetings and deep code work fragments useful time even further.
The problem isn’t that coordination exists — it’s necessary. The problem is when the cost of coordinating exceeds the benefit of collaborating.
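The fragmentation arithmetic from the example above can be made concrete. The 25-minute refocus cost per context switch is an assumed figure, consistent with the 20-30 minute recovery range cited later in this article:

```python
# Sketch of the coordination-cost arithmetic. The refocus cost per
# context switch is an assumption for illustration, not a measured value.

def effective_focus_hours(workday_hours: float,
                          meeting_hours: float,
                          context_switches: int,
                          refocus_minutes: float = 25.0) -> float:
    """Hours actually available for deep work after coordination costs."""
    refocus_hours = context_switches * refocus_minutes / 60.0
    return max(0.0, workday_hours - meeting_hours - refocus_hours)
```

With 2 hours of meetings and four context switches, the nominal 6 remaining hours shrink to roughly 4.3 hours of usable focus time; the meetings' hidden cost is the switching, not just the duration.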
Signs of dysfunctional collaboration
How to identify collaboration problems
Signs of healthy collaboration
- Meetings have clear purpose and end with decisions
- Important information arrives without needing to chase
- Code reviews are constructive and fast
- People know who can help with what
Signs of dysfunctional collaboration
- Meetings generate more meetings
- Critical information arrives late or never
- Code reviews become political bottlenecks
- Nobody knows who decides what
The trap of excessive collaboration
Organizations that value collaboration sometimes fall into the opposite trap: collaboration as an end in itself. Everything needs to be discussed, aligned, driven to consensus. No decision happens without a meeting. No change passes without multiple approvals.
This creates a system where coordination friction is so high that doing anything becomes exhausting. Competent people are forced to spend more energy navigating the process than solving problems.
Collaboration overload
If a senior developer needs three meetings and five approvals to make a change that would take 30 minutes, the problem isn’t lack of collaboration — it’s excess. The system is optimized for control, not results.
How to measure collaboration
Collaboration is hard to measure directly. SPACE suggests proxies:
- Wait time: How long do decisions stay blocked waiting for alignment?
- Review quality: Do code reviews add value or are they mere formality?
- Knowledge distribution: How many people can solve critical problems?
- Communication satisfaction: Do people feel they have the information they need?
The goal isn’t to maximize collaboration, but to optimize the cost-benefit ratio. Enough collaboration for work to flow; not so much that work stops.
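The wait-time proxy can be computed from data most platforms already record, for example the gap between a review request and the first review. The data shape below is an assumption for the sketch:

```python
# Sketch: a coordination wait-time proxy from review timestamps.
# The (requested, first_review) pair shape is an assumption; the point is
# measuring how long work sits blocked, not how much communication happens.
from datetime import datetime, timedelta

def median_wait(pairs: list[tuple[datetime, datetime]]) -> timedelta:
    """Median time between 'review requested' and 'first review given'.
    Takes the upper middle value for even-length lists."""
    waits = sorted(done - requested for requested, done in pairs)
    return waits[len(waits) // 2]
```

The median matters more than the average here: one pathological 30-hour wait shouldn't mask, or be masked by, the typical experience of the team.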
Collaboration and organizational structure
The quality of collaboration is rarely an individual problem. It reflects organizational structure. Poorly divided teams, poorly defined responsibilities, and misaligned incentives create friction that no communication tool can solve.
Collaboration quality is also bounded by structural constraints. Conway’s Law[1] reminds us that software architecture tends to mirror the organization’s communication structure: if two teams don’t communicate well, the modules they build probably won’t integrate well either.
Efficiency and Flow: A dimension, not the end goal
Efficiency and flow close the model by connecting it, in part, with what DORA already observes. Here enter wait time, interruptions, context switching, and systemic bottlenecks.
Contextualizing flow
In SPACE, flow is not treated as an end in itself. It’s just one dimension among others. Excellent flow doesn’t compensate for a system that wears people down. It just accelerates that wear.
What efficiency really means
Efficiency, in the SPACE context, isn’t just “doing more with less”. It’s about minimizing systemic waste: time waiting, energy spent on context switching, work lost due to lack of clarity.
Real efficiency:
- ✓ Less time waiting for resources
- ✓ Fewer unnecessary interruptions
- ✓ Less rework due to lack of clarity
- ✓ Less energy on non-value-adding tasks

Apparent efficiency:
- ✗ Doing more things per day
- ✗ Always being busy
- ✗ Responding immediately to everything
- ✗ Maximizing time utilization
The confusion between real and apparent efficiency is common. A developer who responds to Slack immediately, participates in all meetings, and is always “available” seems efficient. But if that constant availability fragments their deep work, real efficiency is low.
Flow: the state and the system
“Flow” has two relevant meanings here:
- Psychological state: Deep concentration where work happens without friction
- Systemic property: The system’s ability to move work from start to finish without blockers
Both are important, and both are fragile.
Flow's vulnerability
Entering a flow state takes time. A single interruption can cost 20-30 minutes of recovery. Systems that interrupt frequently structurally prevent flow from happening.
The enemies of flow
Systemic conditions for flow
What enables flow
- Protected time blocks
- Clarity about what to do
- Tools that work
- Autonomy to decide how to do it
What destroys flow
- Constant interruptions
- Ambiguity about priorities
- Tools that break or are slow
- Micromanagement requiring justifications
Organizations frequently sabotage flow without realizing it. “Open door” policies, culture of immediate response, poorly distributed meetings throughout the day — all of this fragments time and prevents deep concentration.
Efficiency vs. Utilization
A common conceptual error is confusing efficiency with utilization. Maximum utilization is not efficiency — it’s a recipe for bottlenecks.
Systems operating at 100% capacity have no margin to absorb variation. Any unexpected event — an urgent bug, a delayed dependency, a sick person — creates a cascade of delays. Paradoxically, reducing utilization can increase throughput.
System dynamics
High-utilization systems have exponentially longer wait times. A system at 90% utilization has much longer queues than one at 70%. Slack in the system isn’t waste — it’s response capacity.
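The utilization claim can be illustrated with the textbook M/M/1 queueing formula for average wait, Wq = ρ / (μ(1 − ρ)), where μ is the service rate and ρ the utilization. Treating a team as an M/M/1 queue is a deliberate oversimplification; the point is the nonlinear shape of the curve, not the exact numbers:

```python
# Illustration using the textbook M/M/1 queue: average wait before service
# is Wq = rho / (mu * (1 - rho)). Modeling a team this way is a heavy
# simplification — it shows the shape of the curve, nothing more.

def avg_queue_wait(utilization: float, service_rate: float = 1.0) -> float:
    """Average time an item waits in queue, in units of 1/service_rate."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return utilization / (service_rate * (1 - utilization))

# Wait time does not grow linearly with utilization — it explodes near 100%.
for rho in (0.5, 0.7, 0.9, 0.95):
    print(f"{rho:.0%} utilized -> items wait {avg_queue_wait(rho):.1f}x service time")
```

Going from 70% to 90% utilization roughly quadruples the average wait in this model, which is exactly why slack in the system is response capacity rather than waste.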
Systemic bottlenecks
Efficiency also involves identifying where the system stalls. There’s no point optimizing development if the bottleneck is in review. No point accelerating review if the bottleneck is in deploy.
The theory of constraints is relevant here: optimizing any part of the system that isn’t the current bottleneck doesn’t improve the final result. First, identify where flow stalls. Then, optimize there.
Typical bottlenecks:
- Slow code review
- Manual and risky deploys
- Approvals that take days
- Unavailable test environments

How they show up:
- Work accumulating before review
- Changes waiting for deploy window
- Decisions in limbo waiting for someone
- Developers waiting for an environment
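Under the theory-of-constraints framing, a minimal sketch: system throughput is capped by the slowest stage, so find that stage before optimizing anything. The stage names and rates below are hypothetical:

```python
# Sketch: finding the current bottleneck before optimizing anything.
# Stage names and throughputs (items/week) are hypothetical examples.

def find_bottleneck(throughput: dict[str, float]) -> str:
    """The system moves at the pace of its slowest stage."""
    return min(throughput, key=throughput.get)

pipeline = {
    "development": 40,   # items/week each stage can process
    "code review": 12,   # the constraint in this example
    "deploy": 25,
    "testing": 18,
}
bottleneck = find_bottleneck(pipeline)
# Raising development from 40 to 60 would change nothing:
# end-to-end throughput is still capped at 12 by review.
```

In practice the stages' capacities are harder to measure than this, but the discipline holds: any optimization applied away from the constraint only grows the queue in front of it.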
Efficiency without burnout
The risk of optimizing efficiency is creating systems that extract maximum from people. Eliminating all slack, all pauses, all breathing room may seem efficient in the short term — but creates fragile systems that break under pressure.
Long-term thinking
Real efficiency includes sustainability. A system that operates at maximum efficiency for three months and then collapses is less efficient than one that operates at 80% for years. The time horizon matters.
SPACE treats efficiency as one dimension among others, not as the supreme objective. A system can be highly efficient and still unsustainable, because it ignores satisfaction. It can have excellent flow and still fail at performance, because the efficient work is going in the wrong direction.
Efficiency is necessary, but not sufficient. And when treated as the only objective, it becomes destructive.
The most important point: dimensions conflict
The most important point — and most frequently ignored — is that these dimensions conflict:
Inherent tensions between SPACE dimensions
Optimizations and their non-obvious costs:
- Increase activity → may reduce satisfaction
- Maximize efficiency → may harm learning
- Optimize collaboration → may decrease individual focus
SPACE doesn’t try to resolve these tensions. It makes them explicit.
Reframing the question
Instead of asking “which indicator is bad?”, the question becomes “which dimension are we sacrificing without realizing?”. Instead of seeking an aggregate score, the approach forces conscious choices. Every optimization decision has a cost — and SPACE demands that cost be visible.
Who benefits from complexity?
But there’s something uncomfortable about this multidimensionality that’s rarely discussed: who benefits when everything becomes too complex to question?
When DORA shows bad numbers, the conversation tends to be direct: the pipeline is slow, deployments are breaking, we need to improve. It’s uncomfortable, but decisive. There’s a clear target.
With SPACE, the conversation can easily turn into something else.
Example: The meeting that never ends
Scenario: Engineering team reports exhaustion. Leadership requests SPACE-based analysis.
- Satisfaction is low? “It’s cultural, we’re working on it.”
- Performance is ok? “Then the problem isn’t technical.”
- Activity is high? “Team is productive.”
- Communication could improve? “Let’s add more rituals.”
- Efficiency is reasonable? “Flow isn’t the bottleneck.”
Result: Three months of conversations. No decision made. Team remains exhausted. But now there are pretty dashboards showing “we’re monitoring all dimensions”.
When complexity becomes excuse
Organizations that avoid difficult decisions love SPACE. Not because it’s useless — but because this model can be used to delay action.
- “We need to understand all dimensions before acting”
- “We can’t optimize one dimension without considering the others”
- “It’s too complex for a simple answer”
All true. But when these phrases become patterns, they’re no longer analysis — they’re institutionalized paralysis.
There’s a thin line between accepting necessary complexity and using complexity as a shield against accountability. SPACE, like any sophisticated approach, can be used in both ways.
Who benefits?
Who benefits when the answer is always “it depends” and nothing ever changes?
Not the developers breaking under pressure. Not the product teams struggling with slow systems. It’s the organizational structures that manage to present “systems thinking” while avoiding real decisions.
SPACE vs DORA: Universal vs Contextual
This is perhaps the biggest contrast with DORA. While DORA seeks to identify universal patterns of high performance in delivery systems, SPACE assumes that productivity is contextual.
Context matters
What makes sense to measure in a platform team may be useless — or destructive — in a product team. What works in one organization may fail completely in another.
Why SPACE rarely becomes a dashboard
SPACE rarely becomes a dashboard. And when it does, it usually loses its value. It wasn’t made for continuous monitoring, but for difficult conversations. Conversations about priorities, trade-offs, and human consequences of technical and organizational decisions.
When sophistication becomes excuse
If DORA can be gamed, SPACE can be instrumentalized. And the weapon is precisely its strength: complexity.
This approach’s sophistication — with its five dimensions and inevitable tensions — should protect against destructive simplification. But in the wrong hands, it becomes a tool to obscure reality instead of clarifying it.
How the model can be used or abused
Legitimate use of SPACE
- Recognize real trade-offs between dimensions
- Make hidden costs visible
- Force conversations about priorities
- Question one-dimensional optimizations
Instrumentalization of SPACE
- Use complexity to avoid decision
- Hide costs behind 'everything matters'
- Delay indefinitely with 'more analysis'
- Justify status quo as 'necessary balance'
How to recognize instrumentalization in practice
Pattern 1: Multidimensional productivity theater
Symptom: Quarterly reports show the organization “is working on all five dimensions” simultaneously.
Reality: No significant improvement in any dimension. Resources too fragmented to generate real impact. But the narrative is pretty: “We’re balancing everything.”
What’s really happening: The organization managed to turn inaction into documented strategy.
Pattern 2: ‘We’re optimizing for the long term’
Symptom: Concrete problems (burnout, turnover, systems breaking) are met with “we can’t sacrifice other dimensions for a quick fix.”
Reality: “Long term” becomes an excuse for not acting in the short term. Meanwhile, people burn out, systems worsen, and technical debt grows.
What’s really happening: Language of strategic sophistication masking inability to prioritize.
Pattern 3: Executive who discovered SPACE yesterday
Symptom: Leadership cites SPACE to invalidate improvement requests with “that would be optimizing only one dimension.”
Reality: Requests were legitimate (e.g., reduce toil, improve observability, automate manual processes). SPACE becomes a rhetorical shield against change.
What’s really happening: The approach became a political tool, not an analytical one.
Misuse detection
When someone uses SPACE to say nothing can be done because “everything is connected and complex”, they’re not applying the model — they’re abusing it.
SPACE exists to make trade-offs explicit and choices conscious. Not to make choices impossible.
The risk of relativism
There is, of course, a risk here. Without discipline, SPACE can become relativism. If everything is multidimensional and contextual, nothing is comparable, nothing is decisive.
The inherent risk
This approach doesn’t deny this risk. It just assumes excessive simplification is a greater risk.
Why organizations resist SPACE (when used honestly)
Here’s the paradox: the same organizations that instrumentalize SPACE to avoid decision also fiercely resist applying it honestly. Why?
Because SPACE applied truthfully makes political conflicts explicit.
The discomfort of explicitness
The discomfort of political explicitness
DORA allows
- Discuss technically without touching power
- Focus on 'system efficiency'
- Optimize without questioning priorities
- Narrative of measurable progress
SPACE demands
- Admit we're sacrificing people
- Recognize efficiency has human cost
- Expose that priorities are political choices
- Accept that not everything is measurable
DORA is comfortable for those who prefer to avoid difficult questions: you can increase deployment frequency without questioning why the team is exhausted, reduce lead time without asking if what’s being delivered even matters. Metrics go up, real problems stay untouched.
SPACE doesn’t offer that comfort.
Executive resistance: Preference for simple narratives
Executives prefer DORA because it offers a linear narrative of progress: “we were Low Performers, now we’re Medium, heading toward High.”
SPACE doesn’t allow that story. It forces an admission: “We improved efficiency by sacrificing satisfaction. We increased performance at the cost of sustainability.”
That doesn’t fit in a board meeting presentation.
And so, SPACE is ignored or simplified until it loses meaning — because the alternative would be explaining trade-offs that reveal political decisions.
Managerial resistance: Fear of difficult conversations
Middle managers resist SPACE because it demands conversations nobody knows how to have:
- “Why are we prioritizing speed over well-being?”
- “Who decided that volume of activity matters more than real impact?”
- “Are we willing to slow down to reduce cognitive load?”
These questions don’t have technical answers. They have organizational and political answers. And asking these questions out loud exposes the fragility of current decisions.
Structural resistance: Incompatible incentive systems
The deepest resistance to SPACE isn’t ideological — it’s structural.
If your promotion systems reward volume of activity (commits, tickets closed), you can’t honestly apply an approach that treats activity as an unreliable indicator of real productivity.
If your success metrics ignore satisfaction and well-being, you can’t apply SPACE which treats these dimensions as structural.
The system has already chosen. SPACE would just make that choice uncomfortably explicit.
Uncomfortable truths
SPACE doesn’t fail because it’s misunderstood. It fails because it’s understood too well.
Organizations resist it not because it’s complex, but because it makes visible what they’d prefer to keep implicit: that productivity has cost, that optimization requires sacrifice, and that these choices are political — not technical.
SPACE tensions DORA, doesn’t replace it
SPACE doesn’t replace DORA. It tensions it. It reminds us that efficient systems can be inhumane, and that productivity isn’t just how much passes through the system, but how that system is experienced by those who work in it.
The fundamental difference between the approaches
DORA asks
- Does our delivery system work?
SPACE asks
- At what cost?
And that question can’t be answered with a number. It requires listening, interpretation, and conscious choices.
The bridge between system and human
That’s why, in this series, SPACE appears exactly here: after flow has been understood, but before experience is explored in depth. It bridges system and human, indicator and meaning, efficiency and sustainability.
And in making that bridge, SPACE reveals something fundamental: productivity is not neutral, and complexity is not an excuse.
Dual reality
First truth: Productivity is genuinely multidimensional. Reducing it to a number always sacrifices something important.
Second truth: This complexity can be instrumentalized. Used to avoid decision, delay action, and maintain destructive status quo under a veneer of “systems thinking”.
Both are true. Accepting only the first makes you naive. Accepting only the second makes you cynical. The mature stance is accepting both and still acting.
But if complexity is not an excuse and trade-offs need to be explicit, a question imposes itself: what is it like to live these trade-offs day to day?
SPACE makes tensions visible. But visibility is not lived experience. Knowing that satisfaction and efficiency conflict is one thing. Feeling that conflict in your body, in the code, and in everyday decisions is something completely different.
Footnotes
- [1]
Conway’s Law, formulated by Melvin Conway in 1967, states that “organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations”. In other words: software architecture tends to mirror organizational structure. If two teams don’t communicate well, the modules they develop probably won’t integrate well either.
- [2]
Forsgren, Nicole; Storey, Margaret-Anne; Maddila, Chandra; Zimmermann, Thomas; Houck, Brian; Butler, Jenna. The SPACE of Developer Productivity. ACM Queue, 2021. The paper introduces five dimensions for measuring developer productivity: Satisfaction, Performance, Activity, Communication & Collaboration, and Efficiency & Flow.