What Gets Measured Is What Gets Done

How Systems Produce Harm Without Deciding To


There is a familiar way to explain how institutions produce suffering. Either there are bad actors at the top—architects of cruelty who design harmful policies and direct their implementation—or there are bad actors at the bottom, rogue operators who exceed their authority and act outside the system's intent. Conspiracy or aberration. Evil masterminds or bad apples.

This explanatory framework is comforting because it preserves the possibility of remedy. Identify the bad actors. Remove them. The system, now purged, can return to its proper function.

But what happens when neither explanation fits? When every individual involved can tell a locally true story about their limited role, when no single decision-maker authorized the harmful outcome, and when the harm nonetheless emerges reliably and predictably from the system's ordinary operation?

This is the problem of systematic harm without explicit intent—and conventional frameworks are inadequate to explain it.


The Machinery of Undirected Harm

Consider how a modern administrative enforcement system actually operates. Not in theory, not in statute, but in practice—as a set of incentives, measurements, and feedback loops that shape behavior regardless of anyone's stated intentions.

Leadership sets quantitative targets. Processing numbers. Action counts. Intake figures. Throughput metrics. These become the measurable outputs by which the system's success is evaluated. The targets are framed as neutral, technical, and administrative—simply a way to ensure accountability, track performance, and allocate resources.

Crucially, these targets are rarely accompanied by explicit instructions about how they should be achieved. Leadership defines the destination but not the route. This framing positions the targets as mere management tools, disconnected from any particular method or outcome beyond the numbers themselves.

Operators optimize toward those targets using paths of least resistance. This is not cynicism; it is rationality. When your performance is evaluated on volume, you pursue volume. When your continued employment, your promotion prospects, and your standing with supervisors all depend on meeting numerical benchmarks, you find ways to meet them.

The populations easiest to locate, process, and count become the preferred targets—not because anyone decided to target them specifically, but because they satisfy the metric at the lowest operational cost. A person with a known address is easier to find than someone in hiding. A person who appears for a scheduled appointment is easier to process than someone who must be tracked down. A person with limited resources to contest an action is easier to count as "resolved" than someone with access to legal representation.

The perverse result: compliance becomes a liability. People who follow rules, maintain contact with authorities, and attempt to work within the system become optimal targets precisely because their compliance makes them legible. The system rewards operators for pursuing the people who are easiest to catch, not the people who pose the greatest concern.
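
To make the mechanism concrete, here is a deliberately abstract toy model, not drawn from any real agency's systems: an operator scored only on counted actions, choosing among hypothetical cases by estimated effort. The fields and numbers are invented for illustration.

```python
# A toy model of quota-driven selection. The cases, fields, and effort
# figures are invented for illustration; nothing here names a target
# population. The targeting falls out of the cost ordering alone.

from dataclasses import dataclass

@dataclass
class Case:
    case_id: int
    known_address: bool         # maintains contact with the system
    appears_as_scheduled: bool  # shows up for appointments and hearings
    has_counsel: bool           # has resources to contest an action

def effort(case: Case) -> float:
    """Estimated hours of work to count this case as 'resolved'."""
    hours = 10.0
    if case.known_address:
        hours -= 4.0   # easy to locate
    if case.appears_as_scheduled:
        hours -= 3.0   # easy to process
    if not case.has_counsel:
        hours -= 2.0   # unlikely to contest
    return max(hours, 1.0)

def meet_quota(cases: list[Case], quota: int) -> list[Case]:
    """Select the cases that satisfy the metric at the lowest cost."""
    return sorted(cases, key=effort)[:quota]
```

Run this over any mix of cases and the compliant, legible, low-resource ones sort to the front every time. No line of the code states an intent; the intent is implicit in what the metric rewards.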

Reporting systems surface success metrics while failing to capture costs. The dashboards show actions taken, cases processed, targets met. They do not show errors made, families disrupted, legal residents wrongly detained, or the slow destruction of trust that makes future voluntary compliance less likely.

This is not a limitation of technology. It is a choice about what to count. Every reporting system is a theory of what matters, enacted in code and procedure. When a system chooses to count outputs but not accuracy, throughput but not errors, volume but not harm, it is constructing a reality in which certain outcomes are visible and others do not exist.

What appears on the dashboard is real within the system's self-understanding. What does not appear might as well not have happened.
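
Here, too, a minimal sketch may help, with a hypothetical log format and field names: a reporting query that counts throughput and nothing else.

```python
# A minimal sketch of a reporting layer as a theory of what matters.
# The log format and field names are hypothetical.

from collections import Counter

def build_dashboard(action_log: list[dict]) -> dict:
    """Aggregate an action log into the numbers leadership will see."""
    summary = Counter()
    for action in action_log:
        summary["actions_taken"] += 1
        if action.get("status") == "resolved":
            summary["cases_resolved"] += 1
    # There is no counter for wrongful actions, reversed decisions, or
    # downstream harm -- not because anyone suppressed those figures, but
    # because no one asked the query to count them. Within this report,
    # they do not exist.
    return dict(summary)
```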

Harms are externalized onto populations with limited political voice. The people who bear the costs of the system's optimization are rarely the people who design it, operate it, or evaluate it. They are populations with constrained access to courts, media, or political representation—not necessarily because they lack formal rights, but because exercising those rights carries risks that make participation prohibitively expensive.

When challenging an error means risking further enforcement attention, many people will not challenge errors. When reporting mistreatment means becoming visible to a system you are trying to avoid, many people will not report mistreatment. When the cost of seeking remedy exceeds the harm of accepting injustice, many people will accept injustice.

This is not silence born of satisfaction. It is silence born of rational fear. And it means the feedback that would normally discipline institutional behavior—complaints, lawsuits, political pressure, public outrage—is structurally suppressed precisely where harm is greatest.

Accountability diffuses because everyone can tell a locally true story. This is the mechanism that makes the entire structure so durable and so resistant to reform.

Leadership didn't tell operators how to meet the targets. They just set goals and expected professional execution. If operators cut corners or made errors, that's an implementation problem—regrettable, but not leadership's decision.

Operators didn't decide to prioritize easy targets. They were responding to the incentive structure they were given. They have quotas to meet and supervisors tracking their numbers. If the system rewards certain behaviors, it's rational to exhibit those behaviors. They're just doing their jobs.

Analysts didn't design the reporting system to hide harms. They report what they're asked to report. If certain metrics aren't captured, that's a scope decision made above their pay grade. They're just running the queries.

No one made the decision that produced the harm. The harm emerged from the aggregation of individually defensible choices, each one locally rational, procedurally correct, and narrowly justifiable.


The System That Enacts What It Never States

The result of this machinery is a system that enacts propositions it never states.

Consider the policies that would be politically contested if articulated explicitly:

"We should prioritize enforcement volume over enforcement accuracy."

No official would publicly defend this. Yet a system that rewards volume and does not track accuracy will produce exactly this outcome.

"Compliance with legal processes should not protect you from enforcement attention."

No policy document contains this language. Yet a system that meets quotas by targeting people who appear for scheduled appointments, who maintain known addresses, who attempt to renew legal status, is implementing precisely this principle.

"Community disruption is an acceptable cost of meeting our numerical targets."

No press release would frame the mission this way. Yet a system that measures actions taken and does not measure communities harmed has made this calculation implicitly, embedding it in the structure of what gets counted and what gets ignored.

These propositions are never proposed, debated, or defended. They are never voted on, signed into policy, or announced. They are simply enacted—emerging from the interaction of targets, incentives, and selective measurement as surely as if they had been ordered from above.

This is what makes metric governance so powerful and so resistant to challenge. The system operates in a permanent state of plausible deniability about its own logic. Every critic who points to harmful outcomes can be met with a sincere response: That's not what we intended. That's not what we asked for. That's not what our policy says.

And every one of those responses is, in a narrow sense, true. Leadership did not intend the specific harms that occurred. They did not ask for operators to target the vulnerable. The written policy does not call for the destruction of trust.

But the system produced these outcomes anyway—reliably, predictably, systematically. If the outcomes were truly unintended, we would expect variation, inconsistency, surprise. Instead, the pattern is stable. The metric is hit. The costs are externalized. The harms accumulate. And no one is responsible, because no one decided.


Plausible Deniability as a System Property

In conventional discussions, plausible deniability is a tactic—something an individual cultivates to avoid accountability for a specific decision. A manager structures communications to avoid leaving evidence of their involvement. A leader issues vague guidance so that they can later claim any problematic interpretation was unauthorized.

Metric governance transforms plausible deniability from a tactic into a system property. It is not that individuals are clever about covering their tracks; it is that the system is architected so that no tracks are made.

When leadership sets targets without specifying methods, they are not giving themselves an alibi. They are designing a system in which the question of method never reaches them. The information that would implicate them in harmful choices is never generated, never reported, never elevated to their attention.

When reporting systems count outputs but not costs, they are not hiding evidence. They are constructing a world in which the evidence does not exist within the system's official reality. The harms occur, but they occur outside the frame of what the system knows about itself.

When operators meet quotas through paths of least resistance, they are not defying policy. They are executing it—exactly as the incentive structure demands. Their choices are downstream of the goal structure, not departures from it.

The result is an institution that can produce harm indefinitely without anyone being in a position to stop it—because no one is in a position to see it, officially, in the terms the institution recognizes as real.


The Missing Frame

Why does this matter? Why develop a framework for understanding harm that emerges from metrics and incentives rather than from decisions and directives?

Because the conventional frameworks lead to interventions that do not work.

If the problem is bad actors at the top, the solution is to replace leadership. But new leadership inherits the same targets, the same incentive structures, the same reporting systems. They face the same pressures to hit the same numbers. The individuals change; the outcomes persist.

If the problem is bad actors at the bottom, the solution is better training, stricter oversight, more accountability for individual operators. But operators are responding rationally to the incentive structures they are given. Punishing individuals for doing what the system rewards does not change the system; it creates resentment, turnover, and—often—even more aggressive optimization as remaining operators work harder to hit their numbers.

The problem is not personnel. The problem is architecture.

A system that measures volume will produce volume. A system that does not measure harm will not see harm. A system that distributes accountability across roles will find that accountability pools nowhere. These are not bugs to be fixed by better management. They are features of metric governance, operating exactly as designed—if not as described.

Understanding this does not provide easy solutions. But it does redirect attention from the question that leads nowhere (who is to blame?) to the question that might lead somewhere (what is being measured, and what is being ignored?).


What Comes Next

The framework introduced here—metric governance as a mechanism for systematic harm without explicit intent—raises immediate questions.

If no one decides the harmful outcomes, how is accountability possible? If everyone can tell a locally true story, who is responsible for the systemically false result?

The next installment examines the structure of accountability evasion in metric-governed systems: how responsibility is distributed so completely that it disappears, and what distinguishes modern bureaucratic absolution from its historical predecessors.


This essay is part of an ongoing series on metric governance and accountability.
