Disruptive By Design: The Metrics Look Great. The Program Doesn’t.
The slides are perfect. Every box on the dashboard is green. After an hour of briefing, senior leaders nod, thank the team and move on.
Two months later, the program slips a key delivery and scrambles to reprogram funds.
Nothing about the metrics is technically wrong. The data are accurate. The charts are polished. The problem is simpler and more uncomfortable: the metrics are not designed to drive decisions. The team is performing metrics, not using them. That is metrics theater, and for defense programs under pressure to move faster with fewer resources, it may be one of the most expensive habits we have.
Metrics theater is not about bad math. It is what happens when measures are created to satisfy a checklist rather than a decision, when charts are optimized for visual comfort instead of useful tension and when reviews focus on how much reporting was done rather than what changed as a result. Dozens of indicators appear on the screen, but very few are tied to clear trigger points that force a conversation or a trade-off. The most important questions are answered verbally instead of visually. When leaders ask, “Are we really on track?” the honest answer comes from experience and intuition, not from the dashboard.
In large, risk-averse organizations, metrics theater is comforting. It allows teams to demonstrate activity and sophistication. It looks impressive in a briefing book. However, if a review ends and no budget line, schedule decision or risk posture is affected, the program has consumed time and energy without gaining control.
The most disruptive change we can make in defense programs is not to add more metrics. It is to design a smaller set of measures explicitly tied to real decisions.
Every major program has a handful of recurring choices that shape outcomes: how to allocate funding between capabilities, how to manage releases on a constrained schedule, how to adjust contractor incentives and staffing, and when to retire legacy systems. If a metric does not inform a decision like these, it probably does not belong on the main dashboard.
Decision-ready metrics share a few traits. They are tied to specific actions: "If this trend crosses a certain point, we will respond in a specific way." They are sensitive enough to change in time to act on, providing a warning rather than a post-mortem summary. They are also few enough that program leaders can remember them without looking at a slide.
One practical way to break out of metrics theater is to flip the usual process. Instead of asking, “What can we measure?” program teams can ask, “What decision are we trying to support?”
Consider a familiar example: deciding whether to adjust scope or schedule for an upcoming release. For that one decision, a program might rely on a short list of indicators: a measure of technical readiness, such as the share of critical defects resolved; a measure of operational readiness, such as key user stories validated in a representative environment; and a simple view of schedule confidence and the impact of a time slip. These indicators do not need to be sophisticated models. They simply need to be transparent, consistently updated and discussed in a way that leads to a concrete choice.
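To make the idea concrete, here is a minimal sketch of what "tied to a trigger point" can mean in practice. The metric names, threshold values and responses below are illustrative assumptions, not figures from any real program; the point is only that each number arrives with a pre-agreed floor and a pre-agreed response.

```python
# Minimal sketch of a "decision-ready" metric check for one decision:
# adjust scope or schedule for an upcoming release?
# All metric names, thresholds and actions are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Metric:
    name: str          # what is measured
    value: float       # latest observed value (as a fraction)
    threshold: float   # trigger point agreed on in advance
    action: str        # what the team will do if the threshold is crossed

    def triggered(self) -> bool:
        # Here "triggered" means the value has fallen below the agreed floor.
        return self.value < self.threshold


# Three indicators supporting the release decision.
release_metrics = [
    Metric("Critical defects resolved", value=0.82, threshold=0.90,
           action="Defer lowest-priority features to the next release"),
    Metric("Key user stories validated in a representative environment",
           value=0.95, threshold=0.85,
           action="No change; continue as planned"),
    Metric("Schedule confidence", value=0.60, threshold=0.70,
           action="Brief leadership on a two-week slip and its downstream impact"),
]

# The review question is not "how do the charts look?" but
# "which triggers fired, and what did we agree to do about them?"
for m in release_metrics:
    status = "TRIGGERED" if m.triggered() else "ok"
    print(f"{m.name}: {m.value:.0%} (floor {m.threshold:.0%}) -> {status}")
    if m.triggered():
        print(f"  Agreed response: {m.action}")
```

The sketch is deliberately simple: the discipline lies in agreeing on the floors and responses before the review, not in the tooling.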
Emerging leaders are in a strong position to change how metrics work. Analysts, action officers and contracting representatives are often closest to the data and the weekly friction. They can redesign one recurring briefing in a decision format, introduce explicit thresholds for a handful of meaningful metrics and quietly retire measures no one uses, removing charts that exist only because “we have always shown that slide.”
None of these steps requires a new policy, a technology purchase or a major reorganization, just discipline.
Metrics are not neutral. The way we design and use them shapes behavior, incentives and mission outcomes. If our metrics never cause us to change course, cancel work or reallocate resources, then they are not disruptive. They are decoration.
Ending metrics theater does not require a breakthrough tool. It requires treating every metric as a question: “What will we do differently if this changes?” When that question has a clear answer, the numbers on the slide stop being a performance and become a lever.