The Subtraction Audit

Progress is not addition. The most impactful changes are usually removals. Here's how we find what to cut, what to protect, and what never to touch.

What Counts?

Most transformation programs are addition programs. New tools. New processes. New layers of governance. The org gets heavier. The backlog gets longer. The people doing the actual work spend more of their time managing the complexity that was added to help them.

The Subtraction Audit starts from a different premise: most organizations are carrying significant dead weight - processes that were once necessary and are now inertia, reporting that nobody reads, governance that adds friction without adding safety. The first question is not "what should we build?" It is "what should we stop?"

The most impactful move is usually a removal.
The riskiest removal is the one nobody talks about.

THE FRAMEWORK

Five Dimensions

What We Audit

The Subtraction Audit covers five domains where organizations accumulate unnecessary weight. In each dimension, we are looking for the same thing: activity that consumes capacity without producing value, normalized by time, inertia, or the difficulty of arguing against something that once mattered.

Dimension 01
Process

Steps, approvals, and workflows that have accumulated over time. Originally justified; often never revisited. Process weight compounds silently - each addition is incremental, the total is enormous.

  • When was this step last challenged?
  • What is the cost of removing it versus keeping it?
  • Who benefits from this step existing?
Dimension 02
Technology

Tools, platforms, integrations, and subscriptions that were adopted to solve problems that may no longer exist. Technology estates grow; they rarely shrink. Every tool creates maintenance, training, and integration burden.

  • What problem was this solving when it was adopted?
  • Does that problem still exist?
  • What breaks if we turn this off?
Dimension 03
Reporting

Reports, dashboards, updates, and status documents. Most reporting is produced because it was once requested, not because it is actively used. Reports are asymmetric: high cost to produce, low cost to not read.

  • When was this report last acted on?
  • What decision does it inform?
  • Who would notice if it stopped?
Dimension 04
Governance

Committees, review boards, approval chains, and oversight structures. Governance frameworks are added in response to problems; they are almost never removed when the problem is solved. Governance adds friction; redundant governance adds friction without safety.

  • What risk does this structure mitigate?
  • Has that risk materialized in the last 3 years?
  • What is the cost in decision velocity?
Dimension 05
Communication

Meetings, recurring calls, status updates, and coordination rituals. Communication structures are the most politically sensitive category because they are relationship structures in disguise. The meeting that nobody wants to cancel is never the one where the work happens. It is the one where the relationships that allow work to happen are maintained - or where someone's perceived relevance is protected. Distinguishing between the two requires judgment, not just a calendar audit.

  • What would actually break if this meeting didn't happen?
  • Is it creating alignment or performing alignment?
  • Who owns the decision that this meeting is nominally about?
Addition is legible. It shows up on roadmaps.
Subtraction is invisible until something breaks.
That's why it requires a dedicated audit.
THE METHODOLOGY

The Subtraction Stack

Three Tiers of Removal

Not everything can be removed at the same speed or with the same confidence. The Subtraction Stack sorts candidates into three tiers based on value and visibility. The sequencing matters: cut high-confidence items first to build momentum and release capacity before attempting the harder removals.

Cut First
Low Value, High Effort
Things that consume significant time and produce little value that anyone can name. Nobody defends these once you surface them. The challenge is finding them - they normalize into background noise. Look for recurring work products that nobody references in any decision, meetings that produce no artifacts and no follow-ons, and approval steps that have never said no.
Cut Last
High Value, Low Visibility
Things that clearly produce value but whose value is not obvious from the outside. A cross-functional relationship call that looks like overhead but actually prevents six weeks of rework. A weekly status email that nobody reads except the one person who catches every integration error. These require investigation before removal. The cost of a false cut here is high and often delayed - you don't see the damage until the next incident.
Never Cut
The Scar
The weird thing one person does that nobody understands but everything breaks without. The cron job that runs at 3 AM for reasons nobody remembers. The manual reconciliation step that seems redundant until the month it isn't. The process note that says "don't skip step 7" with no explanation. These are load-bearing assumptions - institutional knowledge embedded in behavior rather than documentation. They look like waste. They are not. Cutting them is how you discover what they were for.
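The tiering above can be sketched as a small triage function. Everything in this sketch is illustrative: the scores, thresholds, and field names (`value`, `effort`, `visible`, `understood`) are hypothetical stand-ins for the judgment calls the audit actually makes.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    value: float      # 0..1: value anyone can actually name
    effort: float     # 0..1: capacity the activity consumes
    visible: bool     # is the value obvious from the outside?
    understood: bool  # does anyone know why this exists?

def tier(c: Candidate) -> str:
    # Not understood by anyone: potentially a scar. Never cut blind.
    if not c.understood:
        return "never-cut"
    # Low value, high effort: nobody defends these once surfaced. Cut first.
    if c.value < 0.3 and c.effort > 0.5:
        return "cut-first"
    # Real value that isn't obvious from outside: investigate, then cut last.
    if c.value >= 0.5 and not c.visible:
        return "cut-last"
    # Some value, disproportionate cost: a redesign case, not a binary cut.
    return "redesign"
```

The point of the sketch is the ordering: the "never cut" check runs before any value comparison, because a scar scores as waste on every other axis.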
THE TEST

Removal Criteria

Two Definitive Tests

Most removal decisions can be resolved with two tests applied in sequence. Neither requires extensive analysis. Both require honesty about what would actually happen if the thing disappeared.

Test 1 - Dead Weight
Nobody notices in 2 weeks.
Pause the activity. If no one escalates, no output is missed, and no downstream process breaks - it is dead weight. The effort of removing it is the only cost. The test requires actual pausing, not hypothetical assessment. Opinions about whether something matters are unreliable. Behavior after it stops is evidence.
Test 2 - Load-Bearing
Crisis when it stops.
If removing something triggers escalations, breaks downstream dependencies, or reveals that it was silently preventing failures - it is load-bearing. This does not necessarily mean it should stay in its current form. It means its function must be consciously preserved or transferred before it is removed. Discovery of load-bearing status is valuable information regardless of removal decision.

What Falls Between

The difficult cases are things that produce some value - enough that people defend them - but less value than their cost. These require a different approach: not binary removal, but redesign. Reduce the cost to the point where the value is proportionate, or explicitly decide to accept the inefficiency because the relationship or political value is real. Both are legitimate outcomes. The error is pretending the inefficiency doesn't exist.
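Taken together, the two tests and the in-between case reduce to a three-way classification. A minimal sketch, assuming the only inputs are what was actually observed during a real trial pause (the function and parameter names are hypothetical):

```python
def classify_after_pause(broke_downstream: bool, was_escalated: bool) -> str:
    """Classify an activity from observed behavior during a trial pause.

    Evidence, not opinion: inputs record what happened after the activity
    stopped, not anyone's prediction of what would happen.
    """
    if broke_downstream:
        # Test 2: crisis when it stops. Preserve or transfer the
        # function before any removal.
        return "load-bearing"
    if was_escalated:
        # Defended, but nothing broke: the in-between case.
        # Redesign it, or explicitly accept the inefficiency.
        return "redesign"
    # Test 1: nobody noticed in the pause window. Dead weight.
    return "dead-weight"
```

The sequencing matters: breakage outranks complaint, so a defended activity that also breaks things is still load-bearing, not merely contested.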

The things worth protecting are rarely
the ones with the clearest justification.
They're the ones nobody thought to write down.
THE RECORD

Anti-Portfolio

Things We Recommended Not Automating

The anti-portfolio is the record of subtraction candidates we reviewed and recommended against removing or automating. It documents the reasoning, so that when the same question resurfaces - and it always does - the answer is available. Organizations that don't keep this record repeat the same conversation every 18 months.

Proposed: Automate exception escalation routing (end-to-end triage, zero human touch)
Why we said no: The escalation path contained informal relationship judgment - who to call for which kind of problem - that the organization had never documented. Automation would have routed to formal owners, not the people who could actually resolve.
Finding: Scar. Documented the informal network instead. Preserved the routing as human judgment.

Proposed: Remove weekly cross-team alignment call ("low ROI" per calendar audit)
Why we said no: The call had low visible output, but it was the only forum where three teams shared signals about emerging issues before they became incidents. Four months after a previous cancellation attempt, an incident occurred that the call would have flagged. The calendar audit missed this entirely.
Finding: Load-bearing. Kept with a tighter agenda and a documented early-warning function.

Proposed: Replace manual reconciliation step with automated matching (weekly process, one analyst, "clearly automatable")
Why we said no: The analyst was performing the reconciliation using judgment about what counted as a match - judgment shaped by three years of edge-case experience that had never been written down. Automated matching using formal rules produced a 12% error rate on the edge cases that mattered most.
Finding: Scar. Documented the matching logic in 40 explicit rules. Automated the easy 88%. Preserved human judgment for the 12%.

Proposed: Eliminate senior review for standard loan decisions (approval rate was 99.7% - review "adds no value")
Why we said no: The 0.3% caught by review was not random. It clustered in specific input patterns - a signal that the scoring model had a systematic blind spot. Removing review would have removed the only feedback mechanism catching model errors before they compounded in the portfolio.
Finding: Load-bearing feedback loop. Kept review. Added structured logging of catches to build model improvement data.

Proposed: Replace relationship manager notes with structured CRM fields (unstructured notes "create compliance risk")
Why we said no: The unstructured notes were where relationship managers documented the context that couldn't be captured in structured fields - client communication preferences, family situations, sensitivity flags. Removing them in favor of structured fields would have eliminated the institutional memory of why specific clients needed specific handling.
Finding: Scar. Notes retained. Compliance risk addressed through access controls, not format change.
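The reconciliation entry above illustrates a reusable pattern: automate what explicit rules can decide, and route everything else to the human who carries the undocumented judgment. A sketch of that pattern, with all names hypothetical; a rule returning `None` means "this rule can't decide":

```python
from typing import Callable, Optional

# A rule returns True/False for a confident match decision, or None to abstain.
Rule = Callable[[dict], Optional[bool]]

def reconcile(record: dict, rules: list[Rule],
              review_queue: list[dict]) -> Optional[bool]:
    for rule in rules:
        verdict = rule(record)
        if verdict is not None:
            return verdict          # the documented, automatable share
    # No explicit rule fired: an edge case. Preserve human judgment.
    review_queue.append(record)
    return None
```

The design choice is that abstention is a first-class outcome: the automation's job is to shrink the human queue, never to guess on cases the rules were never written for.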
The anti-portfolio is not a list of failures.
It is a map of what you chose to protect.
That map is the organization's real knowledge base.