"I Just Set the Goals"

"I Just Set the Goals"

How Accountability Pools Nowhere


The previous installment described how administrative systems can produce systematic harm without anyone deciding to cause it. Targets are set. Operators optimize. Reporting systems surface successes and suppress costs. Harms accumulate. And at every node in the system, individuals can tell a locally true story about their limited role.

This raises an obvious question: if harm emerges from the system's ordinary operation rather than from discrete decisions, who is responsible?

The answer that metric governance provides—structurally, not rhetorically—is: no one.

This is not an accident or an oversight. It is the system's most durable feature. Accountability is distributed so completely across roles that it ceases to exist as a practical matter. Everyone is responsible for their piece; no one is responsible for the whole. And because the whole is the only level at which harm becomes visible, the harm has no author.

Understanding how this works requires examining the architecture of accountability evasion—and recognizing how it differs from historical models that we have learned, at great cost, to reject.


The Old Formula: Agency Located at the Top

There is a familiar structure of accountability evasion, one that history has taught us to recognize and, in principle, to refuse.

"I was just following orders."

This formulation locates agency at the top of the hierarchy. The person who pulled the trigger, who processed the paperwork, who operated the machinery, claims they had no meaningful choice. Commands came from above. Refusal carried consequences. The individual was merely an instrument of decisions made elsewhere.

The postwar legal principles established at Nuremberg rejected this defense—not entirely, but substantially. The tribunals held that individuals retain moral and legal responsibility for the foreseeable consequences of their actions regardless of instruction. "Following orders" might mitigate culpability in some circumstances, but it does not eliminate it. The person who carries out an atrocity cannot escape judgment simply by pointing upward.

This principle has become foundational to how modern societies understand institutional wrongdoing. When we look for accountability, we look for the chain of command. Who ordered this? Who authorized it? Who knew and permitted it to continue? The search for responsibility moves upward until it finds someone who made a decision—or someone who should have known, should have stopped it, should have refused.

The logic is intuitive: decisions flow downward, so responsibility must be traceable upward. Find the decision, find the decider, find the accountable party.

But what happens when there is no decision to find?


The New Formula: Agency Located at the Bottom

Metric governance introduces a different structure of evasion, one that is less familiar and therefore harder to recognize—and harder to refuse.

"I just set the goals."

This formulation locates agency at the bottom of the hierarchy. Leadership establishes targets, incentives, and measurement systems, but claims no responsibility for how those targets are met. The methods are left to the professionals. The specific choices are made downstream. Leadership defined the destination; operators chose the route.

If operators cut corners, that reflects poor judgment at the implementation level—regrettable, but not leadership's doing. If the paths of least resistance led through vulnerable populations, that was not leadership's instruction. If compliance channels became traps, that was an unintended consequence of front-line decisions, not a design choice from above.

"I didn't tell them to do that. I just set goals and expected them to be achieved professionally."

This formulation is more resistant to challenge than its predecessor because it is more locally true. The person setting quotas genuinely did not specify the method. They did not issue orders to target legal residents, to arrest people at check-ins, to prioritize volume over accuracy. They just set a number and expected it to be hit.

The operator, meanwhile, has their own locally true story. They did not design the incentive structure. They are responding to targets set above them, evaluated by metrics they did not choose, pressured by supervisors tracking numbers they did not define. If the system rewards certain behaviors, exhibiting those behaviors is rational. They are just doing their jobs within the constraints they were given.

Neither party is lying. Leadership really didn't specify harmful methods. Operators really are responding to externally imposed incentives. Each story, examined in isolation, is accurate.

But the system-level outcome—the predictable, systematic harm—emerges from the interaction of these stories. And that outcome has no author, because authorship has been distributed across roles that each disclaim responsibility for the whole.


The Structural Difference

The two formulations—"following orders" and "setting goals"—are mirror images of each other. They locate agency on opposite ends of the hierarchy. And together, they close the circle of evasion completely.

The old formula said: I am not responsible because I did not decide; I only executed.

The new formula says: I am not responsible because I did not execute; I only set the targets.

Between these two claims, there is no remainder. The executor points up. The target-setter points down. Accountability passes through the system without stopping.

This is not a failure of the accountability structure. It is the accountability structure, working as designed. The distribution of responsibility across specialized roles—each with limited visibility, limited authority, and limited liability—is not an accident of bureaucratic complexity. It is the mechanism by which large institutions operate without any individual bearing the weight of institutional outcomes.

In most contexts, this distribution is benign or even beneficial. It allows complex tasks to be divided among specialists. It prevents any single point of failure from bringing down the whole system. It creates redundancy and resilience.

But when the system produces harm, the same distribution becomes a machine for evasion. Every specialist can say, truthfully, that the harm was not their decision. And they are all correct, individually. The harm was no one's decision. It was the system's output—and the system is not a moral agent that can be held to account.


Willful Blindness, Procedurally Laundered

There is a concept in law called "willful blindness"—the deliberate avoidance of knowledge that would create liability. A person who suspects they are transporting contraband but carefully avoids confirming this suspicion cannot later claim innocence based on lack of knowledge. The choice not to know is itself a culpable act.

Metric governance institutionalizes willful blindness by making it structural rather than individual.

When leadership sets targets without specifying methods, they are not personally choosing to avoid knowledge of how the targets are met. They are designing a system in which that knowledge is never generated in a form that reaches them. The information about methods, costs, and harms exists—but it exists at lower levels, in contexts that are not elevated to leadership attention, in formats that are not captured by official reporting.

When reporting systems count outputs but not errors, they are not hiding evidence of harm. They are constructing an official reality in which evidence of harm does not exist. The errors occur. The harms accumulate. But they accumulate outside the frame of what the system knows about itself.

This is procedurally laundered willful blindness. The blindness is not chosen by any individual; it is built into the information architecture. No one has to decide not to know, because the system decides for them. The filters are structural, not personal. And because they are structural, they are no one's responsibility—and therefore everyone's alibi.

The person at the top can say, sincerely: "I didn't know this was happening. It was never brought to my attention. The reports I received showed we were meeting our targets."

And they are telling the truth. They didn't know, in the sense of having information officially presented to them in their role. The reports did show targets being met. The harms were not included in those reports—not because someone chose to hide them, but because the reporting structure was never designed to capture them.

The blindness is real. It is also chosen—not by the person claiming it, but by the system they lead and could, in principle, redesign. The choice not to instrument costs, not to track errors, not to measure harms, is a choice. It is made in the design of the measurement system. And it is made by leadership, even if leadership never consciously frames it as a choice.


Role Morality as Identity Protection

There is a deeper layer to this evasion structure, one that operates below the level of conscious strategy.

People do not simply adopt "locally true stories" about their limited role because such stories are convenient for avoiding liability. They adopt them because such stories are necessary for psychological survival within institutions that produce harm.

If you work inside a system that generates suffering as a byproduct of its ordinary operation, you face a choice. You can recognize your participation in that system as morally significant—which requires either changing the system, leaving the system, or living with ongoing moral injury. Or you can define your moral horizon narrowly enough that the system's outputs fall outside your sphere of responsibility.

The second option is far easier. And institutions make it easy. They provide roles with clear boundaries. They define responsibilities in ways that limit what any individual must consider. They create a division of moral labor that mirrors the division of operational labor.

This is role morality: the ethical framework in which your obligations extend only to the boundaries of your defined function. The analyst is responsible for accurate data within the scope of their queries, not for what the data is used for. The operator is responsible for professional execution of their assigned tasks, not for the wisdom of the policies they implement. The leader is responsible for setting clear goals, not for the specific methods used to achieve them.

Role morality is not simply a convenient excuse. It is, for many people, a deeply held ethical framework—one that allows them to see themselves as good people doing their jobs well, even when the institution they serve produces outcomes they would, in other contexts, find abhorrent.

This is why adding harm metrics to a system can provoke intense resistance, even from people who have no personal investment in causing harm. The new metrics threaten to expand the moral horizon—to make visible, within the official frame, consequences that were previously outside anyone's defined responsibility. This is not merely a bureaucratic inconvenience. It is an existential threat to the self-concept of people who have organized their professional identity around doing their role well.

"I'm not the kind of person who would participate in this" is a story that depends on "this" remaining outside the scope of what your role requires you to see. When the measurement system changes, the boundaries of moral responsibility shift—and suddenly people who thought they were blameless are implicated in harms they cannot unsee.

The resistance to such changes is not always cynical. It is often desperate. It comes from people who need to believe they are good, and who have built that belief on the foundation of limited visibility.


The Question That Has No Answer

Here is the problem this structure creates for accountability:

When harm emerges from the interaction of roles rather than from any discrete decision, the standard tools for assigning responsibility fail. You cannot trace the harm backward to a choice, because the harm was not chosen. You cannot identify the person who decided, because no one decided. You can only identify the system that produced the outcome—and the system is not a moral agent.

Legal systems struggle with this. Criminal law is designed to assign responsibility to individuals for their choices. Civil law can sometimes hold institutions liable, but the remedies are typically monetary—and money does not undo harm, prevent recurrence, or satisfy the moral demand for someone to be held accountable.

Political systems struggle with this. Democratic accountability assumes that officials make decisions and can be evaluated on those decisions. But when harmful outcomes are produced by systems rather than by choices, there is no decision to campaign against. The official can say, correctly, that they never authorized the harm. Their opponent cannot point to a moment of culpability, only to outcomes that emerged from structures that long predate any individual's tenure.

Even moral judgment struggles with this. We are accustomed to evaluating people based on their choices. But when the system is designed so that no one has to choose the harmful outcome—so that the harm emerges from the aggregation of locally defensible choices—the vocabulary of moral judgment loses its grip. Who do you blame? Everyone is following the rules of their role. Everyone can justify their piece. The whole is indefensible, but no one owns the whole.

This is the genius, if it can be called that, of metric governance. It does not require anyone to be evil. It does not require conspiracy or coordination. It does not even require awareness. It only requires that targets are set, that measurement is selective, and that roles are defined narrowly enough that no one's responsibilities encompass the system's total output.

Given those conditions, harm can be produced indefinitely, and the question "who is responsible?" will never have an answer.


A Different Question

If "who is responsible?" cannot be answered within the logic of metric governance, perhaps the question itself needs to change.

Rather than asking who decided—which the system is designed to make unanswerable—we might ask: Who chose what to measure? Who chose what not to measure? Who benefits from the gap between what is counted and what is ignored?

These questions do not rely on finding a decision to cause harm. They ask instead about the design of the measurement system—which is itself a decision, even if it is not experienced as one.

Someone chose the targets. Someone chose the reporting structure. Someone chose which costs would be tracked and which would not. These choices were made by people with names and positions, even if the choices were made passively, through inertia, by accepting inherited systems without interrogating them.

The choice not to measure harm is a choice. The choice to continue not measuring harm, once the unmeasured harms become visible through other means, is a choice. The choice to set targets that predictably produce harm as a byproduct, and not to adjust those targets when the byproduct becomes undeniable, is a choice.

These choices may not satisfy the requirements of criminal culpability. They may not fit the framework of intentional wrongdoing. But they are choices, and they are made by people who could choose otherwise.

Metric governance is designed to make the question "who decided to cause harm?" unanswerable.

It is not designed to make the question "who decided what to measure?" unanswerable. That question has specific answers. And those answers point toward the people whose choices shaped the system's blindness—even if they never chose the harms that blindness enabled.


What Comes Next

We have now seen how metric governance produces harm without intent, and how it distributes accountability so thoroughly that no one bears it.

But the framework so far has been largely theoretical. The next installment turns to evidence: how can we know what a system is actually optimized for, when its stated goals and its measured outputs diverge? How do we read behavior instead of rhetoric—and what does that reading reveal about the systems we live under?


This essay is part of an ongoing series on metric governance and accountability.
