
Beyond Team Metrics: Structure, Flow, and the Corporate Perspective

Team Topologies redesigns the organization for flow. The Flow Framework connects engineering to business. The Developer Velocity Index shows how consultancies sell productivity. Three lenses that reveal what the previous frameworks don't see.



Series: Why Productive Teams Fail (Part 8 of 8)

Throughout this series, we’ve explored frameworks that measure productivity at different levels: DORA in the pipeline, SPACE in dimensions, DevEx in experience, DX Core 4 in intervention points. They all share one characteristic: they focus on the team as the unit of analysis.

But teams don’t exist in a vacuum. They operate within organizational structures, respond to business pressures, and are evaluated by metrics that frequently come from outside engineering.

Three perspectives expand the view:

  1. Team Topologies — How organizational structure determines flow
  2. Flow Framework — How to connect engineering metrics to business value
  3. Developer Velocity Index — How corporate consultancies simplify (and sell) productivity

Why expand the view

DORA, SPACE, and DevEx are powerful for understanding what happens within teams. But if the organizational structure is wrong, or if the shared language with the business is broken, optimizing within the team isn't enough. Sometimes the problem is one level above.


Team Topologies: structure as a flow variable

The central argument

Team Topologies [1], published in 2019 by Matthew Skelton and Manuel Pais, starts from a provocative premise: organizational structure is the primary variable that determines delivery capacity.

It doesn’t help to have perfect CI/CD, impeccable DORA metrics, and satisfied developers if the way teams are organized creates dependencies that block flow.

Conway's Law as design

Conway’s Law states that systems tend to mirror the organization’s communication structure. Team Topologies proposes inverting this law: design the organizational structure to produce the architecture you want.

The four team types

The model defines four fundamental team types, each with distinct responsibilities and characteristics:

1. Stream-aligned Teams

These are teams that deliver value directly to the user or customer. They work on a continuous flow of features, from concept to production.

Characteristics:

  • Responsible for a clear slice of business value (product, journey, domain)
  • Must have autonomy to deliver without constant external dependencies
  • Need all capabilities necessary for their flow (design, development, operations)
  • Are the majority of teams in a healthy organization

The common problem: When stream-aligned teams constantly depend on other teams for simple deliveries, there’s an organizational design problem — not a process problem.

2. Platform Teams

They exist to reduce cognitive load on stream-aligned teams. They provide internal services that allow flow teams to focus on business value.

Characteristics:

  • Treat other teams as customers
  • Provide self-service services (infrastructure, CI/CD, observability)
  • Reduce effort duplication between teams
  • Should be optional, not mandatory (teams can choose whether to use them)

The common problem: Platform teams that become mandatory bottlenecks instead of enablers. If every deploy needs “platform approval”, the platform team has become a committee, not a service.

3. Enabling Teams

Help other teams overcome obstacles temporarily. They’re specialists who transfer knowledge rather than doing the work for others.

Characteristics:

  • Detect capability gaps in stream-aligned teams
  • Work temporarily with specific teams
  • Goal is to become unnecessary (transfer knowledge, not create dependency)
  • Examples: DevOps teams, SRE mentors, security specialists

The common problem: Enabling teams that become permanent support teams, creating dependency instead of capability.

4. Complicated-Subsystem Teams

Responsible for components that require deep specialization that doesn’t make sense to distribute among all teams.

Characteristics:

  • Master areas that require specialized knowledge (ML, cryptography, video processing)
  • Provide simplified interfaces for other teams
  • Justified only when specialization is genuinely rare and necessary
  • Should be exception, not rule

The common problem: Using “complexity” as an excuse to create silos. If the team exists because “only they understand the code”, the problem is documentation and architecture, not a need for specialization.

Organizational structure impact on flow

Well-designed structure

  • Stream-aligned teams deliver with autonomy
  • Platform is service, not bottleneck
  • Enabling teams transfer knowledge
  • Dependencies are explicit and minimized

Poorly designed structure

  • Every request becomes a ticket for another team
  • The platform has a weeks-long queue
  • Specialists become indispensable
  • Nobody knows who is responsible for what

The three interaction modes

Beyond team types, Team Topologies defines three interaction modes between teams:

1. Collaboration

  • Two teams work together temporarily on a shared problem
  • High communication, blurred boundaries
  • Should be temporary — permanent collaboration indicates poorly defined boundaries

2. X-as-a-Service

  • One team provides service that another consumes
  • Low communication, clear contract
  • Ideal for stable platforms and subsystems

3. Facilitating

  • One team (usually enabling) helps another develop capacity
  • Communication focused on knowledge transfer
  • Goal is to make facilitation unnecessary
Team Topologies defines 4 team types and 3 interaction modes. The majority of teams should be Stream-aligned; other types exist to enable them, not to create permanent dependencies.

The most common mistake

Organizations treat collaboration as the default mode. The result: endless meetings, hidden dependencies, and nobody can deliver independently. Collaboration should be a temporary exception, not a permanent state.

Team Topologies and DORA/SPACE/DevEx

| Framework | What it sees | What Team Topologies adds |
| --- | --- | --- |
| DORA | Pipeline speed | Why the pipeline is slow (dependencies between teams) |
| SPACE | Multiple dimensions | Why collaboration is fragmented (wrong structure) |
| DevEx | Individual/team experience | Why cognitive load is high (poorly defined responsibilities) |

Team Topologies doesn’t compete with these frameworks — it explains why their metrics might be bad even when technical practices are right.

The limits of Team Topologies (what the book doesn’t tell)

The model is elegant — perhaps too elegant. In practice, there are limitations the book minimizes or completely ignores.

Limitation 1: Contexts where the model doesn’t work

Small startups (< 20 people):

  • Separating stream-aligned, platform, and enabling teams doesn’t make sense with 3 teams
  • Everyone does everything, and that’s correct at this scale
  • Applying Team Topologies too early creates unnecessary bureaucracy

Very hierarchical organizations:

  • Team Topologies assumes team autonomy that many cultures don’t allow
  • If every decision needs approval from 3 levels up, stream-aligned teams are fiction
  • The model doesn’t address how to change hierarchical culture

Companies with legacy contracts:

  • External vendors can force team structures that don’t fit the model
  • Complicated-subsystem team may be mandatory by contract, not by choice
  • Team Topologies assumes control that not every organization has

Limitation 2: Renaming teams isn’t Team Topologies

The most common anti-pattern: An organization reads the book, renames its existing teams to the 4 types, and declares victory.

  • The infrastructure team becomes a “Platform Team” (but remains a mandatory bottleneck)
  • The support team becomes an “Enabling Team” (but keeps doing operational work)
  • Every team becomes “Stream-aligned” (but still has cross-dependencies everywhere)

The problem: The change is cosmetic. Interaction modes didn’t change. Dependencies remain the same. Bottlenecks persist.

What really changing requires:

  • Redesign responsibilities (not just names)
  • Eliminate blocking dependencies
  • Change incentives and success metrics
  • Renegotiate interaction contracts between teams

Limitation 3: Transition costs the book minimizes

Transitions are painful. The book treats reorganization as a design exercise, but in practice:

  • People lose responsibility: When you redesign teams, people lose domains they mastered
  • Knowledge fragments: The transition period carries a huge cognitive load
  • Metrics worsen before improving: Every reorganization produces a temporary productivity drop
  • Political resistance: Those who lose power resist, regardless of technical logic

What the book doesn’t say: Poorly managed transitions can destroy more value than they create. Reorganizing without buy-in, without a knowledge migration plan, and without a stabilization period is a recipe for disaster.

Realistic estimate: A significant reorganization takes 12-18 months to stabilize. The book doesn’t prepare readers for this timeline.

Limitation 4: Consultancy bias

Matthew Skelton and Manuel Pais are Team Topologies consultants. This doesn’t invalidate the model, but creates incentives:

  • Simplicity sells: 4 types + 3 modes is more sellable than “it depends on context”
  • Universality sells: “Works for any organization” is better marketing than “has limitations”
  • Training sells: A model that can be taught in 2 days generates more revenue

The result: details are simplified, exceptions are minimized, real complexity is reduced.

Use Team Topologies as lens, not as law

The model is useful for thinking about structure — it’s not a universal recipe. Organizations that treat Team Topologies as a compliance checklist end up worse off than if they hadn’t adopted anything.

Ask: “Does this structure make sense for our context?” — not “Are we following Team Topologies correctly?”

Practical transition guide to Team Topologies

Implementing Team Topologies isn’t about renaming teams — it’s about redesigning responsibilities, contracts, and flows. Here’s a realistic transition roadmap.

Phase 1: Diagnosis (2-4 weeks)

Symptoms of problematic organizational structure:

  1. Constant cross-dependencies — Every deploy requires coordination with 3+ teams
  2. Diffuse responsibility — No one can say “this domain is ours” without caveats
  3. Indispensable specialists — “Only John can change this”
  4. Approval queues — Infrastructure/platform teams with backlogs of weeks
  5. Permanent collaboration — Same teams in alignment meetings every week
  6. Simple changes take weeks — Trivial modifications cross multiple teams

Action: Map real dependencies. For each team, list: who blocks you? Who do you block?
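A minimal sketch of that mapping, assuming you record blocking relationships as simple (blocker, blocked) pairs; the team names and data below are hypothetical:

```python
from collections import Counter

# Hypothetical blocking relationships observed over one quarter:
# (team that blocks, team that is blocked)
dependencies = [
    ("platform", "checkout"),
    ("platform", "payments"),
    ("security", "checkout"),
    ("platform", "search"),
    ("data-eng", "payments"),
]

blocks_others = Counter(blocker for blocker, _ in dependencies)
blocked_by_others = Counter(blocked for _, blocked in dependencies)

teams = sorted(set(blocks_others) | set(blocked_by_others))
print(f"{'team':<12} {'blocks':>7} {'blocked':>8}")
for team in teams:
    print(f"{team:<12} {blocks_others[team]:>7} {blocked_by_others[team]:>8}")

# Teams with a high 'blocks' count are candidates for self-service platforms
# or boundary redesign; teams with a high 'blocked' count lack autonomy.
```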

Phase 2: Design (4-8 weeks)

Don’t start by moving people. Start by defining contracts.

  1. Identify clear value streams — From customer request to delivered value
  2. Define responsibility boundaries — Where does one team end and another begin?
  3. Choose interaction modes — For each team relationship, define: collaboration (temporary), X-as-a-Service, or facilitation
  4. Minimize complicated-subsystem teams — Each one is an exception that needs strong justification
  5. Validate cognitive load — Can stream-aligned teams deliver without depending on others?

Interaction contract template:

## Contract: [Team A] ← → [Team B]

**Interaction mode:** X-as-a-Service

**Boundary:**
- Team A is responsible for: [clear domain]
- Team B is responsible for: [clear domain]

**Interface:**
- Team B exposes: [APIs, tools, documentation]
- Expected SLA: [response time, availability]

**When to collaborate (exceptions):**
- [Specific situations that justify temporary collaboration]

**When NOT to collaborate:**
- Team A cannot ask Team B to "do it for us"
- Team B cannot require approval for changes within Team A's domain

Phase 3: Transition (3-6 months)

Realistic expectation: metrics will temporarily worsen.

  1. Migrate knowledge before moving responsibilities — Documentation, pair programming, shadowing
  2. Implement contracts gradually — Start with 2-3 team pairs
  3. Monitor cognitive load — If stream-aligned teams remain overloaded, boundaries are wrong
  4. Adjust boundaries based on evidence — Initial design always has flaws

Transition metrics:

  • Weeks 1-4: Productivity drops 20-30% (expected)
  • Months 2-3: Gradual recovery, dependencies begin to decrease
  • Months 4-6: Productivity returns to normal, flow improves
  • Month 6+: Gains appear (fewer blockers, faster deliveries)

Phase 4: Stabilization (6-12 months)

Metrics to validate success (a check sketch follows the list):

  1. Stream-aligned team autonomy — How many deploys happen without external coordination? (Target: >80%)
  2. Reduced approval queues — Platform team average response time (Target: <24h for 90% of cases)
  3. Collaboration as exception — How many inter-team alignment meetings? (Target: collaboration in <20% of deliveries)
  4. Knowledge transfer — Have enabling teams reduced demand? (Target: each enablement reduces future requests by 50%+)
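
How these checks could look in code is sketched below, assuming deploys and platform requests are logged as simple records; the field names and targets mirror the list above and are assumptions, not a standard schema.

```python
from datetime import timedelta

# Hypothetical records collected during stabilization
deploys = [
    {"team": "checkout", "needed_external_coordination": False},
    {"team": "checkout", "needed_external_coordination": True},
    {"team": "payments", "needed_external_coordination": False},
]
platform_response_times = [timedelta(hours=6), timedelta(hours=30), timedelta(hours=12)]

autonomy = sum(not d["needed_external_coordination"] for d in deploys) / len(deploys)
within_sla = sum(t <= timedelta(hours=24) for t in platform_response_times) / len(
    platform_response_times
)

print(f"Deploys without external coordination: {autonomy:.0%} (target: >80%)")
print(f"Platform requests answered within 24h: {within_sla:.0%} (target: 90%)")
```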

Realistic total timeline: 12-18 months for complete stabilization.

Organizations that try to do this in 3 months create chaos. Organizations that take 3 years are just renaming teams.


Flow Framework: translating engineering to business

The language problem

Project to Product [2], published in 2018 by Mik Kersten, addresses a different problem: engineering and business speak different languages.

When engineering reports “deployment frequency increased 40%”, the CFO asks: “So what? Did this generate revenue?”

When business asks “deliver faster”, engineering responds: “We’re at the limit. We have technical debt.”

The dialogue doesn’t happen because the metrics don’t translate.

The translation gap

DORA measures delivery capacity. SPACE measures productivity dimensions. DevEx measures experience. None of them directly answers: how much business value are we creating?

The four work categories (Flow Items)

The Flow Framework proposes categorizing all work into four types:

1. Features

  • New value for the customer
  • Generates revenue, differentiation, or satisfaction
  • It’s what business usually wants most

2. Defects

  • Fixing problems in existing features
  • Doesn’t add new value, but preserves existing value
  • Signal of development process quality

3. Risks

  • Work to reduce vulnerabilities (security, compliance, resilience)
  • Not visible to end user, but critical for sustainability
  • Frequently ignored until it becomes an incident

4. Debts

  • Work to improve technical base (refactoring, dependency updates)
  • Doesn’t deliver immediate value, but enables future deliveries
  • The hardest to justify to business

What business sees:

  • We want more features
  • Why so many bugs?
  • Security is a cost
  • Refactoring doesn't deliver value

What engineering sees:

  • Features without quality become defects
  • Bugs are a symptom of pressure
  • Security prevents catastrophe
  • Technical debt is killing us

Flow Metrics: metrics that business understands

The framework proposes metrics that translate technical capacity into business language:

Flow Velocity: How many work items are completed per period? (Similar to throughput, but categorized)

Flow Efficiency: How much time is work active vs. waiting? (Reveals process bottlenecks)

Flow Time: How much time from start to finish of an item? (Similar to lead time, but per category)

Flow Load: How many items are in progress simultaneously? (Reveals overload)

Flow Distribution: What percentage of work goes to each category? (Reveals where effort is being invested)

The critical insight of Flow Distribution

If 60% of work is defects and technical debt, and only 20% is features, business needs to understand that accelerating features requires investing in quality. Flow Distribution makes this conversation possible with data, not with opinion.

How to measure Flow Metrics in practice

The theory is elegant. Practice requires defining formulas, collecting data, and establishing benchmarks.

Calculation formulas

Flow Velocity:

Flow Velocity = Total items completed / Period

Categorize by type (Features, Defects, Risks, Debts) to understand distribution.

Flow Time:

Flow Time = Completion date - Start date

Calculate the median (not the average) to avoid distortion from outliers. Measure by category.

Flow Efficiency:

Flow Efficiency = Active time / (Active time + Wait time) × 100

Active time = time being actively worked on
Wait time = time in queues, blockers, handoffs

Example: Item took 10 days from start to finish. Was actively worked on for 2 days. Spent 8 days waiting (reviews, approvals, dependencies).

Flow Efficiency = 2 / (2 + 8) × 100 = 20%

Flow Load:

Flow Load = Items in progress (WIP) at a given moment

High load indicates overload or bottlenecks.

Flow Distribution:

% Features = Feature items / Total items × 100
% Defects = Defect items / Total items × 100
% Risks = Risk items / Total items × 100
% Debts = Debt items / Total items × 100
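
The formulas above translate directly into code. A minimal sketch, assuming completed work items are exported with a type, start/end dates, and active/wait durations; the field names are illustrative and not tied to any specific tool:

```python
from collections import Counter
from datetime import date
from statistics import median

# Illustrative export: one dict per work item completed in the period
items = [
    {"type": "feature", "start": date(2024, 5, 2), "end": date(2024, 5, 9),
     "active_days": 2.0, "wait_days": 5.0},
    {"type": "defect", "start": date(2024, 5, 3), "end": date(2024, 5, 5),
     "active_days": 1.0, "wait_days": 1.0},
    {"type": "debt", "start": date(2024, 5, 1), "end": date(2024, 5, 10),
     "active_days": 3.0, "wait_days": 6.0},
]

flow_velocity = len(items)  # items completed in the period
flow_time = median((i["end"] - i["start"]).days for i in items)
active = sum(i["active_days"] for i in items)
wait = sum(i["wait_days"] for i in items)
flow_efficiency = active / (active + wait) * 100
flow_distribution = Counter(i["type"] for i in items)

print(f"Flow Velocity: {flow_velocity} items")
print(f"Flow Time (median): {flow_time} days")
print(f"Flow Efficiency: {flow_efficiency:.0f}%")
for work_type, count in flow_distribution.items():
    print(f"  {work_type}: {count / flow_velocity:.0%}")
```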

Where to collect data

| Tool | Flow Time | Flow Velocity | Flow Distribution | Flow Efficiency |
| --- | --- | --- | --- | --- |
| Jira | ✅ (Created → Done) | ✅ (Issues completed) | ✅ (Labels/Types) | ⚠️ (needs workflow stages) |
| Linear | ✅ (Timestamps) | ✅ (Issues completed) | ✅ (Issue types) | ⚠️ (needs status tracking) |
| GitHub Issues | ✅ (via labels/events) | ✅ (Closed issues) | ⚠️ (via labels) | ❌ (limited) |
| Azure DevOps | ✅ (Work items) | ✅ (Completed items) | ✅ (Work item types) | ✅ (Board columns) |

To calculate Flow Efficiency, you need to track:

  • When an item is being actively worked on (In Progress, In Review)
  • When an item is waiting (Blocked, Waiting for Approval, In Queue)

Configure your workflow to differentiate active states from wait states.
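
A sketch of how that split could be computed from status-change history, assuming you can export transitions as (timestamp, status entered) pairs; the status names and the active/wait sets are assumptions to adapt to your own workflow:

```python
from datetime import datetime

ACTIVE_STATUSES = {"In Progress", "In Review"}
WAIT_STATUSES = {"Blocked", "Waiting for Approval", "In Queue"}

# Hypothetical status history of one item: (timestamp, status entered)
history = [
    (datetime(2024, 5, 1, 9, 0), "In Progress"),
    (datetime(2024, 5, 2, 9, 0), "Waiting for Approval"),
    (datetime(2024, 5, 6, 9, 0), "In Review"),
    (datetime(2024, 5, 7, 9, 0), "Done"),
]

active_hours = wait_hours = 0.0
for (start, status), (end, _) in zip(history, history[1:]):
    duration = (end - start).total_seconds() / 3600
    if status in ACTIVE_STATUSES:
        active_hours += duration
    elif status in WAIT_STATUSES:
        wait_hours += duration

flow_efficiency = active_hours / (active_hours + wait_hours) * 100
print(f"Active: {active_hours:.0f}h | Wait: {wait_hours:.0f}h | "
      f"Flow Efficiency: {flow_efficiency:.0f}%")
```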

Healthy distribution benchmarks

There’s no “perfect distribution”, but there are patterns observed in sustainable organizations:

Typical distribution in mature teams

Sustainable team:

  • 40-50% Features (new value)
  • 20-30% Defects (quality maintenance)
  • 10-15% Risks (security, resilience)
  • 15-25% Debts (technical investment)

Warning signs (a check sketch follows the list):

  • >50% Defects → Quality in collapse
  • <30% Features → Little new value being created
  • <5% Debts → Technical debt growing uncontrolled
  • 0% Risks → Security and resilience being ignored
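
A small check like the one sketched below can flag these situations automatically; the cutoffs simply mirror the list above and should be treated as starting points, not rules:

```python
def distribution_warnings(features: float, defects: float,
                          risks: float, debts: float) -> list[str]:
    """Flag pathological Flow Distributions (arguments are percentages, 0-100)."""
    warnings = []
    if defects > 50:
        warnings.append("Quality in collapse: >50% of work is defects")
    if features < 30:
        warnings.append("Little new value: <30% of work is features")
    if debts < 5:
        warnings.append("Technical debt growing unchecked: <5% debt work")
    if risks == 0:
        warnings.append("Security and resilience ignored: 0% risk work")
    return warnings

print(distribution_warnings(features=25, defects=55, risks=0, debts=20))
```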

Basic dashboard example

## Flow Metrics - Sprint 24 (2 weeks)

**Flow Velocity:** 23 items completed
- Features: 12 (52%)
- Defects: 6 (26%)
- Debts: 4 (17%)
- Risks: 1 (5%)

**Flow Time (median):**
- Features: 5 days
- Defects: 2 days
- Debts: 8 days
- Risks: 12 days

**Flow Efficiency:** 25%
- Active time: 2.5 days (median)
- Wait time: 7.5 days (median)
- Main cause: code review queue (3 days) + deploy approvals (2 days)

**Flow Load:** 18 items in progress
- Above sustainable capacity (14-16)
- Risk: overload leading to increased defects

Tools that support Flow Framework

Specialized tools:

  • Jellyfish — Engineering analytics with native Flow Metrics
  • Pluralsight Flow — Complete Flow Framework dashboard
  • LinearB — Flow metrics + benchmarks
  • Swarmia — Flow metrics + DORA combined

Open-source/self-hosted alternatives:

  • Metabase/Superset + custom queries on Jira/Linear
  • Grafana + data source plugins for management tools
  • Scriptable dashboards — Python scripts querying APIs

Start simple: Before paying for tools, export data from Jira/Linear to a spreadsheet and calculate manually for 2-3 sprints. Validate whether the metrics are useful before automating.

The Value Stream as unit of analysis

Unlike frameworks focused on teams, Flow Framework uses the value stream as unit:

  • A value stream crosses multiple teams
  • Measures from customer request to value delivery
  • Reveals bottlenecks that individual team metrics don’t show

Example: A team may have excellent DORA metrics (frequent deploys, low lead time), but the end-to-end value stream may still be slow because of bottlenecks in business approvals, handoffs between teams, or external dependencies.

Flow Framework and other frameworks

| Question | DORA answers | Flow Framework adds |
| --- | --- | --- |
| Are we delivering fast? | Yes (pipeline) | Yes, but delivering what? |
| Where is the bottleneck? | In the pipeline | In the entire value stream |
| How much effort goes to value? | Doesn't measure | Flow Distribution shows it |
| Does business understand our progress? | Doesn't translate | Metrics in business language |

Developer Velocity Index: the corporate perspective

What it is (and where it comes from)

The Developer Velocity Index (DVI) [3] was created by McKinsey in 2020. It’s an index that tries to correlate development practices with business results.

Unlike DORA (born from academic research) or Team Topologies (born from practice), DVI comes from the world of corporate consulting. This matters because it defines its audience and incentives.

Context matters

DVI wasn’t created to help engineers improve. It was created to help executives justify investments in technology. This doesn’t make it invalid, but defines its priorities and limitations.

The four dimensions of DVI

The index evaluates organizations in four areas:

1. Technology

  • Modern development tools
  • Cloud infrastructure
  • CI/CD practices

2. Working Practices

  • Agile methodologies
  • Code practices (code review, pair programming)
  • Test automation

3. Talent

  • Ability to attract and retain developers
  • Development programs
  • Learning culture

4. Enterprise Enablement

  • Leadership support
  • Technology investment
  • Strategic alignment

What DVI claims (and doesn’t prove)

The McKinsey study claims that companies in the top quartile of “developer velocity” have:

  • 4-5x more revenue growth
  • 55% more innovation
  • 60% higher employee satisfaction

Correlation vs. Causality

These correlations are impressive, but don’t prove causality. Successful companies have more resources to invest in tools and practices — not necessarily the reverse. DVI may be measuring consequence of success, not cause.

Why executives love DVI

DVI solves a specific problem for C-level:

  1. Investment justification: “McKinsey says companies with high velocity grow 4x more”
  2. External benchmark: “We’re in the 60th percentile compared to market”
  3. Simple narrative: One number that summarizes “engineering maturity”
  4. Reputational cover: If it goes wrong, “we followed McKinsey”

Critical analysis of Developer Velocity Index

What DVI offers

  • Language that executives understand
  • Benchmark against market
  • Investment justification
  • Holistic view (tech + culture + management)

What DVI hides

  • Correlation treated as causality
  • Proprietary methodology (not replicable)
  • Incentive to sell consulting
  • Simplifies complexity into single number

The structural critique

The problem with DVI isn’t that it’s wrong — it’s that it comes from a specific context:

1. Conflict of interest: McKinsey sells consulting to improve the index it created.

2. Opaque methodology: Unlike DORA (published, replicable research), DVI is proprietary.

3. Dangerous simplification: Reducing “engineering maturity” to a single number ignores context, trade-offs, and important details.

4. Survivorship bias: The study analyzes successful companies. We don’t know how many companies with “high velocity” failed.

When to use (with caution)

DVI can be useful for:

  • Opening conversation with executives who don’t understand DORA/SPACE
  • Justifying budget in business language
  • Initial benchmark (with caveats)

But it shouldn’t replace frameworks with transparent methodology and focus on real improvement.


Warning signs: when each framework reveals problems

Before integrating perspectives, it’s critical to know when something is wrong. Each framework has specific red flags that indicate structural problems, not just low performance.

Team Topologies Red Flags

Symptoms of problematic organizational structure

Healthy structure

  • Teams deliver end-to-end without constant handoffs
  • Platform has clear SLAs and is self-service
  • Enabling teams become unnecessary after intervention
  • Collaboration happens by choice, not by obligation

Warning signs

  • Every change requires 3+ coordinated teams
  • Platform team has weeks-long queue
  • Specialists are permanent bottlenecks
  • No one can define responsibility boundaries
  • Collaboration is default mode, not exception
  • 'Stream-aligned' teams depend on external approval
  • Reorganization happens every 6 months (chronic instability)

What each symptom reveals:

| Symptom | What it indicates | Action |
| --- | --- | --- |
| Constant cross-dependencies | Team boundaries poorly defined | Redesign responsibilities based on the value stream |
| Weeks-long queue on platform | Platform became a committee, not a service | Convert to self-service or increase capacity |
| Indispensable specialists | Knowledge not documented/distributed | Create temporary enabling teams for transfer |
| Permanent collaboration | Overlapping responsibilities | Define clear interaction contracts |
| Handoffs everywhere | Teams organized by function, not flow | Reorganize around customer value delivered |

Flow Framework Red Flags

Pathological Flow Distribution:

| Distribution | Diagnosis | Risk |
| --- | --- | --- |
| >60% Defects | Quality in collapse, unsustainable pressure | System entering a technical debt spiral |
| <20% Features | Almost no new value being created | Business will notice stagnation in 2-3 quarters |
| 0% Risks | Security and resilience being ignored | Catastrophic incident is a matter of time |
| <5% Debts | Technical debt growing uncontrolled | Velocity will plummet in 6-12 months |
| 80%+ Features | Ignoring quality and sustainability | Technical debt exploding behind the scenes |

Critical Flow Efficiency:

Flow Efficiency below 15% is an alarm

If work spends 85%+ of time waiting (in queues, blockers, approvals), the problem isn’t capacity — it’s process. Adding more people won’t solve it.

Common causes:

  • Code review queues (days to review)
  • Cascading approvals (multiple levels)
  • Inter-team dependencies (one waits for another)
  • Manual deployments (maintenance windows)
  • Excessive handoffs (sequential specialists)

Flow Time constantly growing:

Sprint 1: median of 5 days
Sprint 4: median of 8 days
Sprint 8: median of 12 days
Sprint 12: median of 18 days

This is not normal. Constant growth indicates accumulating complexity, technical debt, or overload. Without intervention, the system collapses.
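
A trivial trend check, assuming you record the median Flow Time per sprint (the numbers are the ones from the example above):

```python
# Median Flow Time per sprint, in days, oldest first
medians = [5, 8, 12, 18]

growth = [(later - earlier) / earlier for earlier, later in zip(medians, medians[1:])]
if all(g > 0 for g in growth):
    avg_growth = sum(growth) / len(growth)
    print(f"Flow Time grew every sprint (avg +{avg_growth:.0%} per sprint): investigate")
```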

Developer Velocity Index Red Flags

DVI has a dangerous characteristic: it’s possible to have high index with unhappy developers.

Gaming the metrics:

| Practice | Impact on DVI | Real impact |
| --- | --- | --- |
| Mandatory modern tools | ✅ Increases score | Developers forced to use inadequate tools |
| Mandatory training | ✅ Increases score | Generic training without practical application |
| Cloud migration | ✅ Increases score | Migration without redesign generates complexity |
| “Agile methodologies” adoption | ✅ Increases score | Ceremonial Scrum without real autonomy |

DVI-specific red flags:

  1. High index but high turnover — Good metrics, people leaving. Something’s wrong.
  2. Index rose but delivery didn’t — Gaming metrics without real improvement.
  3. Consultancy selling solution to problem it created — “You have low DVI. Hire our transformation program.”
  4. Single number hides context — A DVI of 75 might be excellent in a traditional fintech, mediocre in an ML startup.

The DVI paradox

Organizations can optimize for high DVI and destroy DevEx in the process. An index created to correlate productivity with outcomes can become a goal in itself — and when a metric becomes a target, it ceases to be a good metric (Goodhart’s Law).

When multiple frameworks show problems simultaneously

Scenario 1: Broken structure (Team Topologies)
→ DORA symptoms: High lead time (dependencies), low deploy frequency (coordination)
→ Flow symptoms: Low Flow Efficiency (handoffs), high Flow Time (waiting)
→ DevEx symptoms: High cognitive load, process frustration

Scenario 2: Quality collapse (Flow Framework)
→ DORA symptoms: High change failure rate, rising time to restore
→ DevEx symptoms: Frustrated developers, constant interruptions
→ Team Topologies symptoms: Stream-aligned teams spending time firefighting

Scenario 3: Gaming metrics (high DVI with real problems)
→ DevEx symptoms: Low satisfaction despite “high maturity”
→ Flow symptoms: Pathological distribution (90% features, 0% debts → time bomb)
→ DORA symptoms: Metrics might even be good, but not sustainable

Golden rule: When one framework shows a problem, validate with another. Real problems appear in multiple lenses.


Integrating the perspectives

What each framework adds

| Framework | Unit of analysis | Central question | Primary audience |
| --- | --- | --- | --- |
| DORA | Pipeline/Team | "Does the system deliver well?" | Engineering |
| SPACE | Team/Individual | "Are we measuring right?" | Engineering/Management |
| DevEx | Individual/Team | "How is it to work here?" | Engineering |
| Team Topologies | Organization | "Does structure allow flow?" | Org Architecture |
| Flow Framework | Value Stream | "How much value are we creating?" | Engineering + Business |
| DVI | Company | "How do we compare to the market?" | Executives |

When to use each one

Problem in pipeline, bad delivery metrics: → Start with DORA for diagnosis

Teams deliver but people suffer: → Use DevEx to understand experience

Good team metrics but slow total delivery: → Team Topologies to verify structure → Flow Framework to map value stream

Need to justify investment to C-level: → Flow Framework to translate to business language → DVI as external reference (with caveats)

Redesigning organization: → Team Topologies as design framework → Flow Framework to measure impact

The question that remains

Eight articles. Six frameworks. Dozens of metrics, dimensions, team types, and interaction modes.

If you’ve gotten this far with the feeling of having many lenses and little clarity, that’s exactly the condition the next article addresses: how do you integrate all this in practice? Which framework do you use for which question? How do you combine them without creating analysis paralysis?

Frameworks don’t change organizations. People change organizations. But people need more than measurement tools — they need a model for choosing which tool to use when.

The next question isn’t “which framework is better?” — it’s “how to use them all together without going crazy?”

In the next article, we’ll build exactly that model: a decision matrix that connects each specific question to the appropriate framework, without analysis paralysis or metric overload.

Footnotes

  1. [1] Skelton, Matthew; Pais, Manuel. Team Topologies: Organizing Business and Technology Teams for Fast Flow. IT Revolution Press, 2019. The book proposes four fundamental team types (stream-aligned, platform, enabling, complicated-subsystem) and three interaction modes (collaboration, x-as-a-service, facilitating) to optimize software delivery flow.

  2. [2] Kersten, Mik. Project to Product: How to Survive and Thrive in the Age of Digital Disruption with the Flow Framework. IT Revolution Press, 2018. The book introduces the Flow Framework, which connects engineering metrics to business results through four work types (features, defects, risks, debts) and flow metrics.

  3. [3] McKinsey & Company. Developer Velocity: How software excellence fuels business performance. McKinsey Digital, 2020. The study introduces the Developer Velocity Index (DVI) as a measure of engineering maturity and its correlation with business results.
