
DX Core 4: When Understanding Isn't Enough and Action Becomes Mandatory

After DORA, SPACE, and DevEx, the diagnosis is done. The problem now is different: where exactly to intervene without breaking the entire system?

32 min read


Series: Why Productive Teams Fail (7/8)

After DORA, SPACE, and DevEx, the question changes.

It’s no longer “how does it work?” or “why does this matter?”. The question now is more direct — and more uncomfortable: where exactly should we intervene?

We’ve gone through three frameworks. In Article 4, we saw that DORA measures symptoms, not causes — and that metrics can be manipulated. In Article 5, we accepted that SPACE exposes real tensions between dimensions, but it can also be instrumentalized to avoid decision-making. In Article 6, we recognized that DevEx is a technical variable, but improving it is a political act, not a technical one.

The diagnosis is done. We already understand that flow matters, that productivity is multidimensional, and that developer experience shapes outcomes. We already know that frameworks can be misused and that optimization always has a cost.

And now a strange sensation usually appears. It’s not confusion — it’s excess clarity. So much understanding. So many details. So much awareness of trade-offs and risks.

The diagnosis is done, but action isn't

The problem now is different: where exactly to intervene without breaking the entire system?

As we saw in Article 2, choosing what to measure is already a political choice. But what about when it’s time to choose where to act?

This is the point where many debates about software productivity die. There’s understanding, there’s conceptual consensus, but there’s no decision. Everything seems too important, too interconnected, too sensitive to touch. The risk of making something worse while trying to improve something else paralyzes action.

Origin: Who created DX Core 4 and why

Before examining the framework, it’s worth asking: where did it come from?

DX Core 4 was developed by DX (formerly GetDX), a research and consulting company in Developer Experience founded by Abi Noda. The company offers benchmarks, research, and DevEx diagnostics for software organizations.

Commercial context matters

The framework didn’t emerge from independent academic research. It emerged from a company that sells DevEx diagnostic services. This doesn’t invalidate the model — but it changes how we should read it.

Why this matters

When a consulting company creates a framework:

  • Simplification is a commercial advantage. Four dimensions are easier to sell than five (SPACE) or 24 capabilities (DORA). Clients want clarity, not complexity.
  • Tangibility generates contracts. “We can measure and improve these 4 axes” is a more effective proposition than “productivity is too complex to reduce”.
  • The model reflects what the company can measure. If DX offers surveys and benchmarks, it makes sense that the framework emphasizes dimensions measurable through research.

None of this means DX Core 4 is useless or manipulative. It means that, like every framework, it serves specific interests. Understanding these interests helps use the model consciously.

Compare with other frameworks

  • DORA: Emerged from academic research (Nicole Forsgren) before becoming a consulting offering
  • SPACE: Emerged from researchers within Microsoft/GitHub
  • DevEx: Emerged from academic research (Michaela Greiler, Noda, Storey)
  • DX Core 4: Emerged directly from a consulting company

This doesn’t make DX Core 4 worse or better. It makes it different — and that difference matters.

Understanding this commercial context matters because it defines what the framework chose to emphasize: saleable simplification. But simplification of what, exactly?

The point of cognitive saturation

After DORA, SPACE, and DevEx, organizations reach a specific state of cognitive saturation. DX Core 4 is born exactly at this point. Not as a new explanatory model, but as a deliberate reduction of complexity.

This choice already says a lot. DX Core 4 doesn’t try to capture everything. It accepts losing conceptual precision in exchange for capacity for action.

Identity and purpose of the framework

What DX Core 4 is

  • Model for deciding where to invest
  • Prioritization tool
  • Pragmatic action guide

What DX Core 4 isn't

  • Model for better understanding the problem
  • Complete conceptual framework
  • Universal metrics system

The four structural axes

Why exactly these four?

The choice isn’t arbitrary. DX Core 4 concentrates on areas where organizational and technical friction manifest in the most destructive ways. These aren’t all dimensions of development experience — they’re the ones where intervention generates the greatest return.

How these axes relate to previous frameworks:

  • DORA measures outcomes (lead time, deployment frequency). DX Core 4 investigates the conditions that produce these outcomes.
  • SPACE tries to capture all dimensions. DX Core 4 reduces to the most concrete points where to act.
  • DevEx diagnoses friction. DX Core 4 organizes where to intervene first.

The logic of reduction

DX didn’t reduce from 5 to 4 axes by chance. Research identified that most recurring frictions concentrate in similar structural patterns:

  • Interruptions that break concentration → Flow
  • Systems that don’t respond comprehensibly → Feedback
  • Accidental complexity that consumes mental energy → Cognitive Load
  • Lack of clarity about priorities and decisions → Alignment

Other problems exist. But most can be mapped back to these four patterns.

Interdependence as a central characteristic

The 4 axes of DX Core 4 are not isolated components — they are lenses for looking at the same complex system. Each axis affects the others in non-linear ways.

The axes are not independent

Improving one axis almost always requires touching all the others. This interdependence isn’t a bug in the framework — it’s its most important characteristic.

Practical example:

You decide to improve Feedback by reducing build time from 30 to 5 minutes.

What really happens:

  • Flow improves: Developers test more frequently
  • Cognitive Load increases (temporarily): New CI/CD infrastructure requires learning
  • Alignment is tested: Teams need to agree on new usage patterns

Result: You didn’t improve “just feedback”. You reorganized the entire system — and the 4 axes just describe different aspects of this reorganization.

How to identify which axis is most degraded

Each axis has specific signs of degradation:

Degraded flow
  • Developers report 'I can't focus'
  • Constant firefighting mode
  • Frequent external blockers (approvals, environments, dependencies)
  • Significant time waiting for things outside control
Degraded feedback
  • Slow builds (>15 minutes)
  • Incomprehensible error messages
  • Finding out something broke takes hours/days
  • Debugging requires 'tribal knowledge' about the system
High cognitive load
  • Onboarding takes months (not weeks)
  • Only some can make certain changes
  • 'Knowledge in people's heads' is common answer
  • Constant fear of breaking something non-obvious
Broken alignment
  • Frequent rework ('that's not what they wanted')
  • Conflicts about priorities without clear resolution
  • Decisions reversed without explanation
  • Energy spent interpreting 'what really matters'

Use signals to prioritize, not to diagnose

These signals don’t diagnose root cause. They help answer: “If we could focus limited energy on one area, which would have the greatest immediate impact?”

The answer isn’t “this axis is worse”. The answer is “this axis is causing the most damage to the system as a whole at this moment”.
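The prioritization logic above can be sketched as a simple tally. Assume a hypothetical internal survey in which developers mark which degradation signals they experience; the axis collecting the most reports becomes the candidate for focused intervention. The signal names, mapping, and data below are illustrative, not part of DX Core 4 itself.

```python
from collections import Counter

# Hypothetical mapping of reported degradation signals to DX Core 4 axes
SIGNAL_TO_AXIS = {
    "can't focus": "flow",
    "waiting on approvals": "flow",
    "slow builds": "feedback",
    "cryptic errors": "feedback",
    "long onboarding": "cognitive_load",
    "tribal knowledge": "cognitive_load",
    "frequent rework": "alignment",
    "reversed decisions": "alignment",
}

def prioritize(reported_signals):
    """Count reports per axis; return axes sorted by damage, worst first."""
    tally = Counter(SIGNAL_TO_AXIS[s] for s in reported_signals
                    if s in SIGNAL_TO_AXIS)
    return tally.most_common()

# Illustrative responses aggregated from one team's survey
reports = ["slow builds", "cryptic errors", "slow builds",
           "can't focus", "tribal knowledge"]
print(prioritize(reports))  # "feedback" leads with the most reports
```

The output doesn't diagnose a root cause; it only says where limited energy would likely have the greatest immediate effect, which is exactly the modest question the framework asks.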

Flow: Daily experience, not statistics

The first axis is flow. Here, flow isn’t abstract velocity or obsession with delivery volume. It’s the concrete experience of being able to advance on a task without artificial blockers.

What DX Core 4 calls flow is different from:

  • Flow state (psychological state of deep concentration)
  • Throughput (volume of work completed per unit of time)
  • Velocity (agile metric of points delivered)

Flow here is closer to absence of unnecessary friction. It’s being able to start a task at 10am and, by noon, have advanced proportionally to the time invested — without having spent 90 minutes waiting for environments to come up, dependencies to be approved, or bureaucratic processes to be satisfied.

What degrades flow

Technical blockers:

  • Unnecessary waits (environments, builds, automatic approvals)
  • Opaque dependencies (don’t know what I need until I try and fail)
  • Unstable environments that require constant configuration

Organizational blockers:

  • Processes that interrupt reasoning (manual mid-task approvals)
  • Unnecessary work transitions between teams
  • Lack of autonomy for limited-scope decisions

Flow isn't eliminating all interruption

Not all interruption is artificial. Code review is necessary interruption. Pair programming interrupts solo work. Alignment meetings interrupt coding.

The question is: does the interruption generate value proportional to the cost?

  • 30-minute code review that prevents critical bug → Justified cost
  • 3-day manual approval to change text string → Destructive friction

Why flow is treated as experience, not metric

You can have excellent lead time (DORA metric) but degraded flow (DX Core 4 experience).

How this happens:

  • Lead time measures aggregate (team/month average)
  • Flow measures individual daily experience

Practical example:

The team has a 2-hour lead time (excellent!). But it achieves this because:

  • 70% of changes are trivial and flow quickly
  • 30% of changes are blocked for days

Result: Aggregate lead time is great. Flow experience is miserable for those working on complex changes.

DX Core 4 captures this difference. It’s not enough to measure aggregate outcome — you need to understand the work experience.
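The gap between the aggregate number and the lived experience takes only a few lines to demonstrate. The lead times below are hypothetical, chosen to mirror the 70/30 split in the example above:

```python
import statistics

# Illustrative lead times in hours: 70% of changes flow quickly,
# 30% are blocked for days (numbers are hypothetical).
lead_times = [1, 1, 1, 1, 1, 1, 1, 72, 72, 72]

median = statistics.median(lead_times)  # 1.0 -> "excellent" on a dashboard
mean = statistics.mean(lead_times)      # 22.3 -> already much worse
worst_30pct = sorted(lead_times)[-3:]   # [72, 72, 72] -> the lived experience

print(median, mean, worst_30pct)
```

The median looks healthy while 30% of the work sits blocked for three days; any single aggregate hides the tail where the misery actually lives.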

Feedback: Conversations between system and developer

The second axis is feedback. Every productive system converses with those who use it. The question is: what is the quality of that conversation?

Feedback isn’t just “how long it takes to know if something worked”. It’s also how much cognitive effort is needed to interpret the response.

The two dimensions of feedback:

  1. Speed: How long between action and response?
  2. Clarity: How much effort to understand what the system is saying?

Systems with slow and ambiguous feedback create an environment where developers stop trusting their own tools.

Quality of system-developer conversations

Effective feedback

  • Compiles in seconds
  • Tests fail with clear messages
  • Errors appear early (fail fast)
  • Logs are accessible and structured
  • Dev environments reflect prod

Destructive feedback

  • Slow compilation (>10 minutes)
  • Cryptic messages
  • Errors appear too late (after deploy)
  • Debugging is archaeology (grep logs from 5 systems)
  • Dev doesn't reflect prod ('worked locally')

What happens when feedback is slow or ambiguous

Scenario 1: 30-minute build

Developer makes change. Waits 30 minutes to know if it worked.

Consequences:

  • Changes strategy: makes multiple changes before testing (batch)
  • When it fails, doesn’t know which of 5 changes caused the problem
  • Debugging becomes exponentially more difficult
  • Starts doing something else while waiting (context switch)

Result: Slow feedback doesn’t just waste time — it changes behavior in destructive ways.

Scenario 2: Incomprehensible error

Error: NullPointerException at line 2847 in module core.utils.handler

What’s missing:

  • What data was null?
  • Why was it null?
  • What was I trying to do when this happened?
  • How to reproduce?

Consequences:

  • Developer spends hours debugging
  • Adds defensive logs everywhere
  • Creates “tribal knowledge” about “weird errors”
  • Loses trust in the system
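The difference between a cryptic failure and a self-explanatory one can be made concrete. The sketch below is hypothetical code (not from any real system in the article): instead of letting a null value surface as a bare exception deep in the stack, it fails fast with an error that answers the missing questions — what was null, during which operation, and how to reproduce it.

```python
def load_handler(config: dict, user_id: str):
    """Look up a handler for a user, failing with context instead of a bare NPE."""
    handler = config.get("handlers", {}).get(user_id)
    if handler is None:
        # Fail fast, carrying everything needed to diagnose and reproduce,
        # rather than letting a NoneType error surface 20 frames later.
        raise ValueError(
            f"No handler configured for user_id={user_id!r} "
            f"while resolving config['handlers']. "
            f"Known handlers: {sorted(config.get('handlers', {}))}. "
            f"Reproduce with: load_handler(config, {user_id!r})"
        )
    return handler

try:
    load_handler({"handlers": {"alice": "default"}}, "bob")
except ValueError as e:
    print(e)  # names the missing data, the operation, and a repro step
```

Nothing here is sophisticated; the point is that clarity of feedback is a design decision made at the moment the error is raised, not something observability tooling can retrofit later.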

The hidden cost of bad feedback

Slow or ambiguous feedback doesn’t just delay deliveries; it erodes trust.

Developers start to:

  • Guess (“maybe it’s this…”)
  • Repeat steps unnecessarily (“let me run it again”)
  • Create defensive mechanisms (excessive logging, redundant tests)
  • Avoid changes in “dangerous” areas

The cost of this doesn’t appear in any DORA metric. But it accumulates silently.

When “improving feedback” makes things worse

Common mistake: Adding more observability without structuring information.

Example:

  • Before: 3 logs, few useful
  • After: 300 logs, mostly noise

Result: Feedback became slower (need to filter 300 lines) and more ambiguous (which information matters?).

The trap: More information ≠ better feedback. Sometimes, less structured information is more effective than lots of chaotic information.
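One way to keep feedback both fast and unambiguous is to emit fewer, structured events rather than many free-text lines. A minimal sketch using Python's standard logging module; the formatter and field names are illustrative, not a prescribed setup:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one machine-filterable JSON object per event instead of free text."""
    def format(self, record):
        payload = {"level": record.levelname, "event": record.getMessage()}
        payload.update(getattr(record, "fields", {}))
        return json.dumps(payload)

logger = logging.getLogger("checkout")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# One structured event carries what a dozen noisy lines would scatter:
logger.info("payment_failed", extra={"fields": {
    "order_id": "A-123", "reason": "card_declined", "retryable": True}})
# emits: {"level": "INFO", "event": "payment_failed", "order_id": "A-123", ...}
```

A developer can now filter on `event` or `order_id` directly instead of grepping prose, which is the "less, structured information" the paragraph above argues for.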

Cognitive Load: The most neglected and destructive

The third axis is cognitive load. This is perhaps the most neglected — and the most destructive.

What is cognitive load

Cognitive load isn’t complexity inherent to the problem, but everything the system requires the developer to keep in mind to operate.

It’s not “how many technologies we use”. It’s “how much mental effort to navigate the system”.

Critical distinction:

  • Essential complexity: Inherent to the domain (complicated business rules, regulatory requirements)
  • Accidental complexity: Introduced by the way we build the system

High cognitive load generally comes from accidental complexity: technical, architectural, or organizational decisions that make the system harder to use than it needs to be.

Sources of cognitive load

Technical:

  • Implicit conventions (“you have to know to do X before Y”)
  • Multiple paths for the same task (3 ways to deploy)
  • Leaky abstractions (need to understand implementation to use)
  • Non-obvious dependencies (“changing A breaks B in non-intuitive way”)

Organizational:

  • Poorly delimited responsibilities (“need to ask 5 people”)
  • Decisions that depend on historical memory (“have to know why we did it this way”)
  • Undocumented tribal knowledge (“only so-and-so knows how this works”)
  • Inconsistent processes (each team does it differently)

The real impact: Systems that consume people

Systems with high cognitive load don’t scale people; they consume the best ones until they burn out.

Destructive pattern:

  1. Senior arrives: Can navigate complexity. System works.
  2. Senior burns out: Constant mental energy to “keep everything in mind”
  3. Senior leaves: Takes critical knowledge. System becomes more fragile.
  4. New senior arrives: Cycle repeats.

The system keeps working. But the human cost grows exponentially.

Practical example: 6-month onboarding

Scenario: 80-person startup. Onboarding takes 6 months for senior developer to be productive.

Why so long?

  • 12 services with different patterns
  • Each service’s deployment is unique
  • Local setup requires 37 documented steps (and 12 undocumented)
  • Tribal knowledge about “how things really work”
  • Old architectural decisions that nobody documents but affect everything

Naive diagnosis: “Our domain is complex”

Real diagnosis: Accidental cognitive load is absurd

What would DORA or SPACE show? Normal metrics. Lead time ok. Satisfaction ok (of those who survived).

What DX Core 4 exposes: System is killing onboarding and concentrating knowledge in few people.

The “just add” trap

How cognitive load grows silently:

  • Year 1: Simple system. 3 services. Everyone understands.
  • Year 2: “Let’s add Kafka for events.” (Now: REST + Kafka)
  • Year 3: “Let’s add gRPC for internal services.” (Now: REST + Kafka + gRPC)
  • Year 4: “Let’s add GraphQL for mobile.” (Now: REST + Kafka + gRPC + GraphQL)

Nobody removed anything. Just added.

Result:

  • 4 forms of communication between services
  • No clear documentation about when to use each
  • New developers need to learn all
  • Tribal knowledge about “preferred patterns”

Cognitive load quadrupled without anyone noticing. Each addition was “justified” in isolation. Accumulated cost was never measured.

Why cognitive load is neglected

Senior developers underestimate cognitive load because they’ve already internalized the complexity.

“It’s not that hard, you just need to…”

But what comes after “just need to” is usually:

  • Know 5 undocumented conventions
  • Understand 3 architectural decisions from 2 years ago
  • Know that X doesn’t work with Y (despite appearing that it should)

Who pays the price: New developers, juniors, anyone without “tribal knowledge”.

Organizational consequence: System favors permanence over renewal. Leaving is expensive (loses knowledge). Entering is expensive (long onboarding).

Reducing cognitive load requires renunciation

You can’t reduce cognitive load without removing things.

  • Fewer communication patterns
  • Fewer ways to deploy
  • Fewer observability tools
  • Fewer “powerful but complex” abstractions

The difficulty: Everything you want to remove has a defender. “But we need this for case X!”

The choice: More comprehensible general system vs. local optimization for specific case.

Alignment: Operational clarity, not vague culture

The fourth axis is alignment. Here it’s not about “good culture” or “shared values” in the vague sense. It’s about operational clarity.

Alignment means concrete questions have concrete answers:

  • What is priority? → Not “everything is important”, but clear order
  • Who decides what? → Not “everyone collaborates”, but explicit responsibilities
  • How are conflicts resolved? → Not “let’s talk”, but defined process
  • Where are responsibilities? → Not “team takes care of everything”, but clear responsibility

Alignment is not consensus

Alignment ≠ everyone agrees

Alignment = everyone understands the decision and knows how to act within it, even disagreeing.

Misalignment = even when there’s apparent consensus, each interprets differently in practice.

Operational clarity vs organizational confusion

Questions alignment answers

  • What is priority now?
  • Who decides about architecture change?
  • How do we resolve conflict between product and infra?
  • Who is responsible for performance?
  • When can we say no?

Symptoms of misalignment

  • Parallel work in incompatible directions
  • Energy spent interpreting contradictory signals
  • Silent rework
  • Decisions reversed without explanation
  • Conflicts without clear resolution
  • Value destroyed without anyone noticing

How misalignment silently destroys value

Common scenario:

  • Product team says: “Priority is delivery speed”
  • Engineering team says: “Priority is technical quality”
  • Platform team says: “Priority is stability”

Nobody is wrong. But there’s no decision about the trade-off.

What happens in practice:

  • Product pushes for fast features
  • Engineering resists to maintain quality
  • Platform blocks deploys to ensure stability
  • Everyone works hard. But in slightly incompatible directions.

Result: Lead time increases (because each side pulls in different direction). Satisfaction drops (because everyone feels “nobody understands the importance” of what they do). Real value delivered decreases.

The three types of misalignment

Type 1: Priority misalignment

Symptom: Everyone is busy, but results don’t appear.

What’s happening:

Each area has different priorities:

  • Product: deliver visible features
  • Engineering: reduce technical debt
  • Platform: stabilize infrastructure
  • Security: implement compliance

None of these priorities is wrong. But there’s no clear order among them.

Result: Dispersed energy. Each pulls in different direction. Slow progress on all fronts. High frustration.

What’s missing: Explicit decision about which priority comes first at this moment — and acceptance that others are temporarily in the background.

Type 2: Responsibility misalignment

Symptom: “Not my job” or “It’s everyone’s” (both equally destructive)

Scenario 1 - Vague responsibility: “Team owns the service”

In practice:

  • Performance dropped. Who investigates? “Team takes care of it”
  • But who specifically? “Everyone”
  • Result: nobody acts (diffuse expectation)

Scenario 2 - Absent responsibility: Nobody knows whose it is

Problem: Observability is bad. Who improves it?

  • “Not product (doesn’t deliver feature)”
  • “Not platform (not infra)”
  • “Not application engineering (doesn’t affect functionality)”

Result: Stays bad indefinitely. Orphan problem.

What’s missing: Explicit accountability, with a named owner. “Person X is responsible for observability. They don’t do everything alone, but they are held accountable.”

Type 3: Decision process misalignment

Symptom: Decisions reversed, decisions ignored, or decisions “nobody made”

Common scenario:

  • Monday: Architecture meeting. “We decided: use GraphQL for new service”
  • Wednesday: CTO mentions “REST is our standard, right?”
  • Friday: Team starts development… with REST (ignoring Monday’s decision)

What happened:

  • Decision was made, but wasn’t binding
  • Wasn’t clear who had authority to decide
  • Wasn’t clear how decision would be communicated beyond the meeting

Result: Decisions are theatrical, not operational. Meetings become social performance, not decision-making process.

What’s missing: Clarity about who decides what, and how decisions become action.

Why alignment is difficult

Unlike the other 3 axes, alignment can’t be solved with technology.

  • Flow: Improve CI/CD, environments, automation
  • Feedback: Improve observability, tests, error messages
  • Cognitive Load: Simplify architecture, document, remove complexity

Alignment: Requires human decision, clear communication, and accountability.

There’s no tool that “installs alignment”. There’s no code refactoring that solves organizational misalignment.

The temptation to substitute alignment with process

Common mistake: Try to solve misalignment by creating more processes.

Logic: “Let’s create weekly alignment meeting”

Problem: Meetings don’t create alignment if:

  • Decisions have no owner
  • Conflicts have no resolution mechanism
  • Priorities remain ambiguous

Result: More meetings, same misalignment. Now with additional overhead.

Alignment requires decision, not process.

Concrete example: Company creates “Weekly Alignment Committee”. Result: more meetings, same misalignment. Alignment doesn’t come from process — it comes from clarity about who decides what when there’s conflict.

When “improving alignment” makes things worse

Common symptom: “Let’s create more governance to align teams”

Proposed solution:

  • Architecture committee (to align technical decisions)
  • Cross-team meetings (to align roadmaps)
  • Mandatory RFCs (to align changes)

What really happens:

  • Cognitive load increases: More processes to navigate
  • Flow degrades: More approvals, more waiting
  • Alignment doesn’t improve: Because the problem wasn’t lack of process, but lack of clarity in decisions

The trap: Alignment via governance only works if decisions are already clear. If they’re not, governance just adds bureaucracy without solving the root problem.

The temptation of excessive simplification

But there’s a risk here that needs to be named directly: DX Core 4 can become a bureaucratic checklist.

Four dimensions are drastically simpler than SPACE’s five. And this simplicity is, at the same time, its strength and its greatest trap.

Risk of transforming framework into checklist

Legitimate use of DX Core 4

  • Prioritize where to invest limited energy
  • Force decision instead of infinite analysis
  • Accept conscious trade-offs
  • Break organizational paralysis

Destructive simplification

  • Treat as compliance checklist
  • Reduce real complexity to 4 categories
  • Ignore unique organizational context
  • Declare premature victory ('covered all 4!')

When DX Core 4 becomes a reflection killer

Pattern 1: The ‘DX Core 4 coverage’ meeting

Scenario: Leadership wants to “implement DX Core 4”. Creates task force. Three months later, PowerPoint presentation:

  • Flow: ✓ “We implemented feature flags”
  • Feedback: ✓ “We added more logs”
  • Cognitive Load: ✓ “We created documentation”
  • Alignment: ✓ “We scheduled weekly sync meetings”

Final slide: “DX Core 4 completely covered!”

Reality:

  • Feature flags didn’t solve chaotic branching strategy problem
  • Logs added more noise than signal
  • Documentation is outdated and nobody reads it
  • Sync meetings consume 4 hours/week from everyone

What happened: DX Core 4 became a checklist. Instead of a thinking tool, it became a compliance audit: “We have the 4 axes covered, so we’re good.”

Pattern 2: What the 4 axes don’t capture

DX Core 4 is deliberately reductionist. It accepts losing details to gain traction. But what’s lost in this reduction?

  • Power and politics: Who decides? Who controls responsibilities?
  • Economics and incentives: Promotion and reward systems
  • History and context: Why did we get here?
  • External constraints: Regulation, compliance, legacy contracts

These factors don’t appear in the 4 axes. But they might be exactly what makes intervention impossible.

Example: Reducing cognitive load is a clear priority. But what if architecture reflects contracts with vendors that can’t be changed? DX Core 4 doesn’t see this. And technical optimization may run into contractual/legal barrier.

Pattern 3: Declaring victory too early

Symptom: “We’ve already covered the 4 main axes, DevEx is solved.”

Reality: Small improvements in each axis don’t necessarily add up to significant impact. Sometimes, transformative change requires radical focus on one axis — not distributed improvement across all.

The risk: DX Core 4 can create illusion of progress (“we’re working on everything!”) while blocking the necessary focus for real impact.

The power of exclusion

The power of DX Core 4 is less in these axes individually and more in what they exclude. The model doesn’t try to:

  • Measure happiness
  • Create universal scores
  • Compare teams
  • Promise global optimization

What it's for, after all

It serves something more modest — and rarer: help organizations stop investing energy where impact is low.

There’s something almost uncomfortable about this approach. It implies accepting that:

You can’t improve everything at the same time

Resources, time, and energy are finite. Choosing where to invest is also choosing where not to invest.

Some frictions need to be tolerated

While you attack the most destructive frictions, others will continue to exist. This is pragmatism, not negligence.

Focus is a political choice, not technical

Deciding priorities involves power, resources, and human consequences. It’s political by nature.

DX Core 4 forces this choice.

Antidote for two common vices

It also functions as an antidote for two common vices:

The tooling vice

New tools rarely solve flow, feedback, or alignment problems if the underlying system remains incoherent. Tooling is consequence of good decisions, not substitute for them.

The metrics vice

Not everything that matters can be measured precisely enough to become a KPI — and insisting on this usually makes worse exactly what you wanted to protect.

How DX Core 4 relates to other frameworks

How DX Core 4 differentiates

Other frameworks

  • DORA starts from aggregate data
  • SPACE tries to preserve all dimensions
  • DevEx diagnoses friction

DX Core 4

  • DX Core 4 starts from direct observation of work
  • DX Core 4 accepts losing details to gain traction
  • DX Core 4 requires decision

The limits of DX Core 4

This also reveals its limits. And, paradoxically, recognizing these limits is what makes the framework useful.

When the 4 axes aren’t enough

There are contexts where DX Core 4 simply doesn’t capture what matters. And forcing the model in these contexts generates more confusion than clarity.

Limitation 1: Economic constraints

Scenario: Startup with capital for 6 months of operation. Flow is broken, cognitive load absurd, feedback slow. DX Core 4 identifies all this perfectly.

Problem: The only real priority is survival. None of the 4 axes captures economic urgency. Investing 2 months improving DevEx might mean not reaching the next investment round.

Reality: In contexts of extreme scarcity, DX Core 4 is intellectual luxury. The choice isn’t “which axis to prioritize?” but “how much DevEx can we sacrifice to exist tomorrow?”.

Limitation 2: Compliance and regulation

Scenario: Regulated fintech. Compliance requires manual audit of every change, multiple approvals, extensive documentation.

DX Core 4 says: “Improve feedback loops! Reduce wait time!”

Reality: Regulator requires 48h window for review. There’s no possible optimization without changing law or sector.

What DX Core 4 misses: Non-negotiable external constraints that make intervention impossible, regardless of prioritization.

Limitation 3: Legacy systems and contracts

Scenario: 20-year system. Mainframe. Contracts with suppliers that can’t be changed. Expertise concentrated in 3 people who retire in 2 years.

DX Core 4 identifies: Absurd cognitive load, non-existent flow, zero alignment.

Reality: Problem isn’t lack of clarity about where to intervene. It’s lack of technical or organizational capacity to execute intervention.

Knowing we need to reduce cognitive load doesn’t help when:

  • The system is incomprehensible
  • Changes cost millions
  • Expertise is retiring
  • Rewrite is impossible

Limitation 4: Cultural and geographic contexts

Scenario: Multinational with teams in 15 countries, timezones from -8 to +8, radically different work cultures.

DX Core 4 assumes relatively homogeneous context. But:

  • “Flow” in German team (high documentation, rigid processes) is different from Brazilian team (more informal, less documented)
  • “Alignment” in high-context culture (Japan) works differently than in low-context culture (USA)

What DX Core 4 ignores: Cultural dimension that crosses all axes and can’t be reduced to “flow” or “alignment”.

When forcing DX Core 4 makes things worse

There are situations where insisting on DX Core 4 hinders more than helps:

  • When the real problem is outside the 4 axes
  • When external constraints make intervention impossible
  • When context requires radically different approach

Knowing when NOT to use the framework is as important as knowing when to use it.

Structural limits

Even within its domains, DX Core 4 has clear limits:

Doesn’t explain why

DX Core 4 doesn’t explain why a system reached this state. Doesn’t substitute deep analysis. It says “your flow is broken” but doesn’t say why or how we got here.

Doesn’t solve structural conflicts

It doesn’t solve broader structural conflicts. It’s a prioritization tool, not systemic transformation. If your problem is dysfunctional organizational structure, DX Core 4 helps prioritize symptoms — but doesn’t touch root cause.

Isn’t complete explanation

Using it as complete explanation would be as mistaken as using DORA to measure satisfaction. It organizes thinking, doesn’t substitute deep understanding.

The pragmatic vice: acting vs understanding

But there’s an opposite risk that also needs to be named: DX Core 4 can make you act before understanding enough.

If DORA, SPACE, and DevEx suffer from paralysis from excess analysis, DX Core 4 can suffer from premature action.

Two opposite vices, both destructive

Understanding without acting (paralysis):

  • Infinite analysis of trade-offs
  • Frameworks stacked without decision
  • Committees studying the problem for 2 years
  • Nothing changes because ‘it’s too complex’

Acting without understanding (precipitation):

  • Optimization without deep diagnosis
  • Improving symptom ignoring root cause
  • Quick action that worsens system
  • Repeating mistake because didn’t understand problem

DX Core 4 seduces with tangibility. Four areas. Choose where to act. Start tomorrow. There’s something deeply attractive about this clarity after so many complex analyses.

When acting too fast makes things worse

Example: a team identifies that cognitive load is its biggest problem and decides to “simplify the architecture” by reducing the number of services.

Action: Consolidates 15 services into 5.

Problem: they didn’t understand why they had 15 services. The services reflected real domain boundaries, and the consolidation created confusing monoliths that now mix responsibilities.

Result: cognitive load increased, because they acted fast without understanding deeply.

What was missing: SPACE would have revealed that the problem wasn’t the number of services but the lack of clear boundaries. DevEx would have shown that the problem was the cost of navigating between services, not their existence.

The seduction of tangible intervention

DX Core 4 makes intervention look like an engineering project:

  1. Identify problematic axis
  2. Design solution
  3. Execute
  4. Measure impact

But DevEx problems are rarely engineering projects. They’re organizational changes, renegotiations of power, challenges to old decisions.

Treating them as technical projects creates an illusion of control. “If we follow the process, DX Core 4 solves it.” It doesn’t. Because the problem was never technical.

The impossible balance

Understanding too much paralyzes. Understanding too little precipitates.

DX Core 4 doesn’t solve this dilemma — it just makes it more explicit. You need to choose where to act. But choosing requires understanding. And deep understanding takes time you may not have.

There’s no easy answer here, just a conscious trade-off between analysis and action.

The question other frameworks avoid

It helps answer a question other frameworks avoid:

The pragmatic question

If we could only improve a few things right now, which would make the system more habitable immediately?

This question has no neutral answer. It requires choices, renunciations, and organizational courage. But without it, everything else becomes an intellectual exercise.

The illusion that it’s simple

But there’s one last illusion DX Core 4 can create: the idea that the 4 axes are independent points of intervention.

“Let’s work on Flow this quarter, Feedback next, Cognitive Load after…”

Practical example: Reducing cognitive load by simplifying architecture

Goal: Reduce cognitive load by simplifying architecture.

What really happens:

  • Flow: The pipeline, CI/CD, and branch strategy all need to change
  • Feedback: Observability needs to be redesigned for the new architecture
  • Alignment: Teams need to renegotiate boundaries and responsibilities
  • And back to Cognitive Load: During the transition, cognitive load increases (two systems coexisting)

Reality: You don’t improve one axis. You reorganize the entire system — and the 4 axes are just different ways of looking at this reorganization.

The illusion that it’s simple leads to:

  • Local optimization that worsens the global system: Improving flow without considering cognitive load can accelerate the delivery of bad code
  • Inadequate investment: “Let’s put 2 people on the Feedback problem” — but the problem requires an architectural change that affects everything
  • Organizational frustration: “We worked so hard on X but it didn’t improve” — because X doesn’t exist in isolation from Y, Z, and W

The illusion that it's simple

What DX Core 4 seems to promise

  • 4 independent points of intervention
  • Choose one and optimize
  • Incremental and linear improvement
  • Control over the process

What it really offers

  • 4 lenses on the same system
  • Choose where to focus your attention
  • Systemic and non-linear change
  • Clarity about what you don't control

Use DX Core 4 to prioritize focus, not isolate intervention

Don’t use DX Core 4 to divide the problem into 4 pieces.

Use it to answer: “If we could focus limited energy on one area of the system, which would have the greatest systemic impact?”

The answer isn’t “let’s improve only this area”. The answer is “let’s start with this area knowing that touching it will require touching the other three as well”.

Who decides the priorities? (The question the framework doesn’t answer)

DX Core 4 assumes that, once problematic areas are identified, prioritization will be rational. But it rarely is.

The politics of prioritization

Real scenario: a 2-day workshop. Teams map the 4 axes. Everyone agrees: cognitive load is the biggest problem.

What should happen: Invest in reducing cognitive load (simplify architecture, document, externalize tribal knowledge).

What really happens:

  • The CTO prioritizes Flow: Because flow metrics appear in board meetings
  • The platform team prioritizes Feedback: Because it’s a tangible technical project they can execute
  • Product prioritizes Alignment: Because “communication” is always a safe scapegoat

Cognitive Load isn’t attacked. Not because the analysis was wrong, but because:

  • Reducing cognitive load requires admitting the architecture was poorly designed
  • It requires challenging decisions made 3 years ago
  • It requires redistributing knowledge (and, with it, power)

Prioritization reflects power, not analysis.

When each area wants different things

  • Developers: “Cognitive load is killing us”
  • Product: “We need more velocity (Flow)”
  • Executives: “We want visibility (Feedback/dashboards)”
  • Platform: “Alignment solves everything (more standards, more control)”

Everyone is looking at the same DX Core 4. Everyone reaches different conclusions.

Because prioritization isn’t technical. It’s the result of who has the power to decide what matters most.

DX Core 4 doesn't remove politics from decision

DX Core 4 doesn’t solve “what to prioritize”. It makes explicit that you need to choose — and that this choice is political, not technical.

The question was never “which axis is worse”. The question was always “who has the power to decide what matters?”

It’s not a framework that’s missing, it’s a choice

After DORA, SPACE, DevEx, and now DX Core 4, one thing becomes painfully clear: productivity in software doesn’t improve because we understand more models.

It improves when we use these models to consciously decide what kind of system we’re building — and for whom.

Series synthesis of frameworks

The journey so far

  • DORA ([Article 4](/en/why-productive-teams-fail-04))
  • SPACE ([Article 5](/en/why-productive-teams-fail-05))
  • DevEx ([Article 6](/en/why-productive-teams-fail-06))
  • DX Core 4 (this article)

What each framework revealed

  • Flow matters, but metrics can be manipulated
  • Complexity is real, but can be instrumentalized
  • Experience matters, but improving it is political
  • Choosing where to act is inevitable — and political

The final truth

DX Core 4 doesn’t close the debate. It closes the excuse.

After this point, you can no longer say:

  • “We don’t know what to do” (we do)
  • “It’s too complex to act” (it is, but action is possible)
  • “We need more analysis” (no, we need decision)

From here on, it isn’t a framework that’s missing. It isn’t knowledge. It isn’t a conceptual model.

What’s missing is choice, and the courage to sustain that choice over time, knowing it has a cost.

But choice isn’t enough

And here we arrive at the final discomfort of this series.

Knowing where to act doesn’t tell us whether we’ll act. Knowing what needs to change doesn’t tell us who is willing to change.

The question frameworks don't answer

Frameworks organize thinking. Frameworks make costs visible. Frameworks break intellectual paralysis.

But frameworks don’t save organizations. They don’t create courage. They don’t redistribute power. They don’t substitute for leadership that takes responsibility for difficult choices.

After frameworks, what remains?

The most difficult question remains. The question that can’t be answered with more analysis, more metrics, more models.

The question about responsibility.

In the next article, we expand the view beyond the individual team. When the problem isn’t “how is my team” but “how does the organization flow”, other frameworks come into play: Team Topologies, Flow Framework, Developer Velocity Index. And with them, new risks of degeneration.
