
Before Measuring, Someone Chose What to Believe In

Metrics aren't neutral. They reveal and reinforce the game already in motion. Understand why teams that deliver a lot still break down from within.


Also available in Portuguese

Series: Why Productive Teams Fail (Part 2 of 4)

Measuring is not understanding

There’s a silent belief that cuts through technology, management, and modern organizations without asking permission. It’s rarely stated explicitly, but it guides decisions every day, in meetings, dashboards, and strategic plans.

If we can measure something, then we can understand it. If we understand, we can control. And if we control, we’re being responsible.

This logic seems too reasonable to be questioned. Measuring sounds like maturity. Numbers convey seriousness. Graphs create a sense of mastery.

The fundamental problem

Measuring something doesn’t mean understanding it — it just means choosing a specific, limited way to represent reality. Every metric is a slice. A reduction. An implicit hypothesis about what matters.

When we forget this, numbers stop being instruments and start being treated as truths. Decisions become justified not because they’re good, but because they’re measurable.

The visibility cycle

Metrics don’t live in a technical vacuum. They exist within social systems, and in these systems they don’t just describe behavior; they shape it.

What is measured becomes visible

The first effect of a metric is to direct attention. What isn’t measured tends to disappear from discussions.

What becomes visible gets discussed

Meetings gravitate around numbers. Dashboards define agendas.

What is discussed becomes priority

Resources, time, and energy flow to what appears in reports.

There’s no such thing as a neutral metric. There are only metrics whose assumptions are consciously owned, and metrics used without awareness.

When efficiency becomes anesthesia

The confusion begins to take shape when we treat different things as if they were equivalent.

What we measure
  • Activity
  • Output
  • Movement
What we mean
  • Outcome
  • Impact
  • Progress

The easier something is to measure, the more it tends to occupy the center of attention. Real impact, on the other hand, is difficult, diffuse, and slow. Over time, organizations start optimizing what they can see best — not necessarily what matters most.

The problem of decontextualized efficiency

This is where efficiency enters as an unquestionable virtue. Doing more, faster, at lower cost always seems desirable.

Efficiency amplifies, doesn't correct

Efficiency doesn’t correct decisions; it amplifies them. Applied to the wrong problem, it accelerates deviation. It makes faster what perhaps should never have been done.

The question that rarely gets asked is: efficient for what?

When metrics become targets

Why does this happen?

Not out of perversity, but organizational dynamics. Numbers facilitate comparison, close discussions, and offer simple justifications for complex decisions.

What’s the effect?

When a metric becomes a target, it stops observing the system and starts measuring the system’s ability to adapt to the metric itself.[1]

The implicit game

Every system plays some game, even when nobody explicitly defines it. Something is always being optimized: predictability, appearance of control, speed, political tranquility, absence of conflict.

Metrics don’t create this game out of thin air; they reveal and reinforce the game that was already in motion.

Who chooses what to measure, chooses what matters

There’s a layer of power that rarely enters technical discussions about metrics: who decides what to measure?

It's not technical, it's political

The choice of what to measure is never purely technical. It reflects organizational priorities, leadership anxieties, and existing power structures.

When the primary metric becomes “velocity” or “story points per sprint,” it’s not just a pragmatic choice. It’s an implicit declaration about what the organization values: measurable movement over diffuse impact, quantity over quality, speed over sustainability.

Concrete example of metric-driven burnout:

An organization decides to track team velocity as the primary metric. Every sprint, there’s an expectation that velocity will increase or, at minimum, stay the same. The team responds by:

  • Inflating estimates to create margin
  • Fragmenting work into smaller tickets to inflate the count
  • Avoiding important but hard-to-estimate tasks
  • Working overtime to “hit the target”

Six months later, velocity is high, the dashboard is green, but the team is exhausted. Technical debt exploded. Important features were postponed because they “didn’t fit in the sprint.” Nobody remembers why they started measuring velocity — but everyone knows it needs to go up.
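
The dynamic is easy to see in a toy simulation. The numbers below are entirely invented; the point is the shape of the curve. A target forces reported velocity up five percent per sprint, real capacity never moves, and estimate inflation closes the gap:

```python
# Toy simulation, invented numbers: reported velocity must grow ~5% per
# sprint, but the team's real capacity for value-bearing work is fixed.
# The only way to close the gap is to inflate estimates.

REAL_CAPACITY = 40    # value-bearing work per sprint (never changes)
GROWTH_TARGET = 1.05  # velocity is expected to rise 5% per sprint

for sprint in range(1, 13):
    reported = REAL_CAPACITY * GROWTH_TARGET ** (sprint - 1)
    inflation = reported / REAL_CAPACITY  # padding applied to estimates
    print(f"Sprint {sprint:2d}: reported {reported:5.1f} pts | "
          f"real value {REAL_CAPACITY} pts | inflation x{inflation:.2f}")
```

By sprint twelve the dashboard shows roughly 70% “growth” while actual delivery is flat. That is Goodhart’s Law[1] in miniature: the metric now measures the team’s ability to adapt to the metric.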

Who pays the cost?

Junior developers pay more: they lack the political capital to question metrics. Minorities pay more: they navigate already hostile systems while performing under numerical pressure. Tired seniors pay too: they’ve seen this movie before.

The questions that should be asked before choosing any metric (a sketch for recording the answers follows the list):

  • Who benefits from this visibility?
  • What becomes invisible when we measure this?
  • What behavior are we implicitly incentivizing?
  • Who pays the cost when the system optimizes for this metric?
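
One way to keep these questions from evaporating is to answer them in writing before the metric ships. The sketch below is hypothetical; `MetricCharter` and its fields are invented for illustration, not an existing tool or standard:

```python
# Hypothetical sketch: force the four questions to be answered in
# writing before any metric reaches a dashboard. All names are invented.
from dataclasses import dataclass

@dataclass
class MetricCharter:
    name: str
    problem_it_addresses: str          # why measure this at all?
    who_benefits: str                  # who benefits from this visibility?
    what_becomes_invisible: list[str]  # what does the metric hide?
    incentivized_behavior: list[str]   # what are we implicitly rewarding?
    who_pays_the_cost: str             # who absorbs the pressure?

velocity_charter = MetricCharter(
    name="team velocity",
    problem_it_addresses="forecast sprint scope, never judge individuals",
    who_benefits="managers reporting progress upward",
    what_becomes_invisible=["technical debt", "review depth", "learning"],
    incentivized_behavior=["estimate inflation", "ticket fragmentation"],
    who_pays_the_cost="those without political capital to push back",
)
```

If a field can’t be filled in honestly, that is the discussion to have before the dashboard exists.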

In Article 4, we’ll see how these dynamics manifest in frameworks like DORA — and how consultancies like Gartner legitimize metrics not because they’re true, but because they reduce corporate anxiety.

High delivery as a false signal of health

This is how teams emerge that deliver a lot and still break down from within. The backlog moves, the roadmap progresses, deploys happen. To those watching from outside, everything seems healthy.

Recognition

High-delivery teams become the reference point. They receive praise and visibility.

Overload

More demand, more pressure, more responsibility arrive as “rewards”.

Compensation

The cost starts accumulating. The team sustains delivery by reducing safety margins.

Invisible collapse

Constant tiredness, rushed decisions, the feeling that it’s never a good time to stop.

Human systems don't fail abruptly

They compensate. They push the cost forward. The wear doesn’t appear in numbers; it appears as constant tiredness, as rushed technical decisions, as the feeling that it’s never a good time to stop.

DevEx as cognitive economy

This is where DevEx is often misunderstood.

What DevEx is not
  • Operational comfort
  • An excuse to deliver less
  • A luxury for mature teams
What DevEx is
  • Cognitive economy
  • Delivery sustainability
  • A necessity for any team

A team with poor DevEx doesn’t necessarily deliver less. It delivers at higher cost. Output remains high, but the energy needed to sustain it grows silently.

The classic sign of broken DevEx isn’t a drop in productivity, but high productivity accompanied by chronic exhaustion.
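
With invented numbers, that signature looks like this: any output-only dashboard draws a flat line while the cost of sustaining it climbs underneath.

```python
# Invented numbers: output stays flat all year, but the toil required
# to sustain it grows each quarter. An output-only dashboard never
# flinches.

WEEKLY_OUTPUT = 10  # features shipped per week, constant

for quarter, toil_hours in enumerate([4, 10, 19, 31], start=1):
    hidden_cost = toil_hours / WEEKLY_OUTPUT  # extra hours per delivery
    print(f"Q{quarter}: output {WEEKLY_OUTPUT}/week | "
          f"toil {toil_hours}h/week | "
          f"hidden cost {hidden_cost:.1f}h per delivery")
```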

How exceptions become rules

This pattern crystallizes through technical leadership decisions. Not out of bad intention, but because exceptions become rules:

Accept technical debt 'just this once'

The urgency of the moment justifies the shortcut. The debt stays for later.

Skip understanding to meet deadline

There’s no time for discussion. The feature needs to ship.

Treat incidents as one-off deviations

Each problem is seen in isolation, never as a pattern.

Each decision seems sensible in isolation. Together, they teach the system that delivering fast is safer than delivering well.

Burnout as a side effect of poorly defined success

Burnout isn’t individual failure. It’s a side effect of poorly defined success. The team doesn’t break despite high performance. It breaks because high performance becomes a permanent state.

Rework comes later, when systems become rigid, incidents repeat, and the organization wonders why everything got slower.

The real culture

At this point, culture is already defined. Not by what’s written in values, but by what was reinforced:

What wins
  • Deadlines
  • Speed
  • Silence
What loses
  • Technical discussions
  • Clarity
  • Healthy conflict

This is the real culture. Delivery metrics give rational veneer to this state of affairs.

The choice that precedes the metric

Metrics enter as lenses, not as solutions. They help us see, but they don’t correct poorly formulated problems. That’s why this series starts here, delaying the rush for answers.

Before choosing metrics, choose the problem you want to solve. If this choice isn’t conscious, the system will make it for you. And it doesn’t care about burnout, DevEx, or people.


FAQ


So metrics are bad?

No. Metrics are tools. The problem is using them without awareness of the assumptions they carry. Every metric is a slice of reality — useful when you know what you’re slicing.

How to choose better metrics?

Start with the problem, not the metric. Ask: what am I trying to understand? What won’t this metric show? What behavior might it encourage?

Are DORA metrics reliable?

DORA offers a useful framework for measuring software delivery capability. But like any framework, it works best when you understand its assumptions and limitations.


Footnotes

[1] Goodhart’s Law states that “when a measure becomes a target, it ceases to be a good measure”. The phenomenon is well documented in economics and the social sciences.
