Yesterday, Dave had an extensive post on Gammon’s Law. In it, he referenced an older post of his that made some interesting observations about why government programs fail.
Why do so many government programs fail? We’ve seen it time and time again. A need is identified, a program is formulated and put into place, everything starts out well enough, and then, perhaps over time, something happens. The program doesn’t achieve its goals. Or the amount of resources needed for it to achieve its goals is vastly more than expected.
We’ve seen this in Social Security, Medicare, the Great Society programs, and the public school system. Is it waste, fraud, and abuse (those favorite whipping-boys of legislators)? Welfare cheats? Incompetence? Just needs a little fine tuning? We’re not spending enough (no matter how much we seem to be spending)?
[snip]
Why does this [rising inputs, such as increased funding and increased staff, leading to decreased outputs, however measured] happen? Does it have to happen? The short answer is yes, it does, unfortunately for those who contemplate grand solutions to the genuine problems in the world. In a modern society, the implementation of the kinds of plans we’re talking about here requires a bureaucracy. And Gammon’s Law is an intrinsic feature of bureaucracies.
[snip]
Note that bureaucracies are not about outputs. They are about process. And it’s been known since Weber’s time that bureaucracies take on lives of their own. They’re like one-celled organisms. Their only objective is survival. And survival in a bureaucracy is not about output but about process.
There’s a kind of entropy in a bureaucracy: it becomes more and more organized and less and less work gets done. There are fewer outputs.
[snip]
There are only two known organizing principles in modern societies: bureaucracy and the unpredictable large scale group behaviors of complex systems known as emergent phenomena. Reliance on emergent phenomena to solve the great problems requires an enormous amount of faith and hope.
Why then, if we know that bureaucratic programs will always grow in cost and invasiveness while shrinking in positive effect, do we continue to look for centralized – thus bureaucratic – solutions to societal problems? The key is found in Dave’s earlier post on emergent phenomena: “Quite a few of the things that are absolutely the most important to us are emergent phenomena: life, consciousness, history, the Market (Adam Smith’s Invisible Hand), and the workings of a free and democratic society are all emergent phenomena and, as such, are highly distasteful to those who look for a simple, tidy, elegant, and orderly universe.”
There are more than a few people in the world who believe that the world can indeed be “simple, tidy, elegant, and orderly”, that we can exactly predict the future, and that we can control conditions to bring about an ideal future. The fact that every attempt to do so has resulted in failure – sometimes catastrophic failure – does not dissuade such a True Believer. It’s a case of the difference between what happens and what “should” happen. If only we spent more money on education, or forced people to stop black market work and only contribute their labor to the State, or prayed harder, everything would work out just fine.
But these people are not insane: there are places where centralized control works quite well. For example, a family is generally a centralized entity, with critical decisions taken by the parents (or, in some cases, by only one of the parents). Aircraft crews, ship crews, some types of military units, and some kinds of small businesses all run quite well on a centralized command and response model.
The critical factors that determine whether a centralized command and response model can work are the number of components in the system, the reliability of each system component, the number of rule state changes in the system, and the latency between command and response. Each of these critical factors acts on a system in a different fashion to shape its response to stimulus.
As the number of components in a system grows, the amount of effort needed simply to monitor the components grows as well. The effect is not linear, but geometric: doubling the number of components will more than double the amount of effort necessary to control them. The reason is simply that the controllers must also be controlled. As a result, adding components adds monitors, then controllers of the monitors, and eventually supervisors of the controllers of the monitors, and so on. This is what resulted in the decreased safety margins in US nuclear plants after the TMI incident: fear of accidents led to an increase in safety systems, and now the safety systems have safety systems, which themselves have safety systems. If the safety of the safety of the safety fails, the plant is still shut down or degraded. Past a point, this doesn’t make the plant safer, just less reliable.
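To put some arithmetic behind that “more than doubles” claim, here is a toy model of my own (nothing in Dave’s post or in the argument above specifies these numbers): assume every pair of components is a potential coordination point, and that roughly every eight components need a supervisor, who in turn needs supervising.

```python
# Toy model: pairwise coordination plus stacked layers of supervisors.
# Both assumptions (every pair may need coordinating, span of control = 8)
# are invented for illustration, not taken from the post.

def coordination_links(n: int) -> int:
    """Potential coordination links among n components: n*(n-1)/2, i.e. quadratic."""
    return n * (n - 1) // 2

def oversight(n: int, span: int = 8) -> int:
    """Supervisors over n components, plus supervisors of supervisors, and so on."""
    total = 0
    while n > 1:
        n = -(-n // span)   # ceiling division: supervisors needed for this layer
        total += n
    return total

for n in (100, 200, 400, 800):
    print(f"{n:>4} components: {coordination_links(n):>7} links, {oversight(n):>3} overseers")
# Doubling the components roughly quadruples the coordination links and
# keeps stacking new layers of overseers on top of the old ones.
```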
The reliability of each component within the system has a different impact. As reliability drops from theoretically perfect (never attained in practice), several effects emerge: inputs and outputs that cannot be trusted, faulty analysis causing improper responses to stimulus, and degraded response times. For example, take an assembly line with an automated parts counter: when the parts inventory reaches a certain level, new parts are ordered. But if the parts counter (human or machine) is faulty, the system either incurs excessive costs (by ordering too soon) or suffers degraded output for lack of parts on hand (by ordering too late). At best, the system will degrade; at worst it will fail. To prevent the degradation, most systems give critical components some kind of monitoring, which of course increases the number of components and the opportunities for failure.
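Here is a rough simulation of that parts-counter example, again my own toy with made-up stock levels, consumption rates, and thresholds. The only thing that differs between the two runs is how far off the counter’s reading can be:

```python
import random

# Toy simulation: a reorder rule driven by a possibly-faulty parts counter.
# Stock levels, consumption rate, and thresholds are all made-up numbers.

def run_line(days: int, counter_error: int, seed: int = 0):
    random.seed(seed)
    stock, orders, stockouts = 50, 0, 0
    for _ in range(days):
        stock -= 5                                        # daily consumption
        reported = stock + random.randint(-counter_error, counter_error)
        if reported < 20:                                 # rule acts on the *reported* count
            stock += 40
            orders += 1
        if stock <= 0:                                    # line starved of parts
            stockouts += 1
            stock = 0
    return orders, stockouts

print("accurate counter:", run_line(365, counter_error=0))
print("faulty counter:  ", run_line(365, counter_error=30))
# The faulty counter either orders too soon (extra cost) or too late (a stalled line).
```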
Rule state changes cause yet another kind of challenge for command and response systems. Simply put, it takes a non-zero amount of effort to evaluate a rule. As the rules grow in number and/or complexity, the time needed to evaluate them grows. And depending on the system’s inherent ability to resolve rule contradictions or situations outside the rules, a fault might not be self-correcting. This is why computer software is buggy: in any reasonably complex piece of software it simply is not possible to predict all of the possible program states within anything approaching a reasonable time and cost budget. For really critical systems, like the Space Shuttle guidance computers, the software is incredibly expensive, because the time and effort must be expended to make it nearly defect-free, or people will die. Consumer operating systems, not so much. And of course, as the number and complexity of features in software grows, the number of possible states also grows, and at a faster pace.
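The arithmetic behind that state explosion is brutal. A back-of-the-envelope sketch of my own, using nothing fancier than independent on/off features as a crude stand-in for program state:

```python
# Back-of-the-envelope state counting: n independent on/off features
# (a deliberately crude stand-in for real program state).

def configurations(n_features: int) -> int:
    return 2 ** n_features

for n in (10, 20, 30, 40):
    print(f"{n} boolean features -> {configurations(n):,} configurations to test")
# Forty features already means over a trillion combinations, before inputs,
# timing, or interactions with the operating system enter the picture.
```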
The last critical factor in controlling a complex system is the time required to notice a stimulus, analyze it, determine an appropriate response, and command the response. If the latency in the system is sufficiently high, the response may be too little, too late. One really good example of this kind of failure is shown by the OODA loop, and what happens when a military gets inside the enemy’s decision cycle. Take a look, for example, at how the US military in Iraq dismembered the Republican Guard around Baghdad. The Republican Guard fought, but ineffectively, because we deprived them of sensors (and in some cases commanders), degraded and channeled their ability to respond, and moved very quickly. As a result, Republican Guard units would attempt to defend against US units that had already moved beyond the point where the defense was supposed to be set up. The Republican Guard command system rapidly lost the ability even to locate its own forces, never mind do anything useful with them.
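Here is a stripped-down illustration of the latency problem, a toy model of my own rather than a simulation of anything that happened in Iraq: a controller chases a moving target but only ever sees observations that are a few steps stale, and the error it settles into grows directly with the lag.

```python
# Toy pursuit: the controller steers toward where the target *was* `lag` steps ago.
# The 0.5 correction factor and the unit-speed target are arbitrary choices.

def track(lag: int, steps: int = 50) -> float:
    target, controller = 0.0, 0.0
    history = [0.0] * (lag + 1)            # stale observations of the target
    for _ in range(steps):
        target += 1.0                      # the target keeps moving
        history.append(target)
        observed = history[-(lag + 1)]     # what the controller actually sees
        controller += 0.5 * (observed - controller)
    return target - controller             # how far behind the response ends up

for lag in (0, 2, 5, 10):
    print(f"lag {lag:>2} steps -> steady-state error {track(lag):.1f}")
# The error settles at roughly 1 + lag: every step of delay is ground permanently lost.
```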
Combine these effects, and it’s easy to see where centralized systems fail: where the number of independent components to be controlled is large, the components are unreliable or inconsistent, rules are complex and numerous (and possibly contradictory, and frequently over-specialized), and the time between a stimulus and a response is large. In such circumstances, a bureaucratic system will either fail or seriously degrade.
For example, let’s look at public education. There are some 94,000 public schools, 3 million teachers, and 47 million students in this system. The number of components is large.
There are 50 states (plus DC) and some 17,000 school districts or other educational agencies contributing rules, often conflicting. The rules are complex, numerous, contradictory, and frequently very specialized.
Not only do individuals in the system, and in the regulatory bodies overseeing the system, frequently act arbitrarily or unpredictably, but the schools have infrastructure problems and weather can cause the schedule to slip and sometimes there are media circuses and students transfer between schools and teachers quit in the middle of the year for personal reasons and the state of knowledge is constantly changing…. The components of the system are not reliable and predictable.
The lag time in public education (any education, actually) is stunning: it can take years or decades for the results of system changes to become known. Sometimes, there is no way to determine if the system changed because of commanded changes or because other circumstances forced changes that simply weren’t accounted for. Even over the course of a school year, the time between teaching, testing, and the teacher absorbing the test results and accordingly adjusting their techniques (one hopes) is months long – and that’s at the lowest level of the system.
In other words, it is simply not mathematically possible for a centralized public school system to effectively produce output (educated students) consistently over time. There are simply too many possible points of failure. Sure, there will be some successes – even a great number of them in absolute terms. But overall, the system’s efficiency is terribly low.
It should be kept in mind, though, that the alternative – using emergent behaviors – is not suitable in every case, either (a trap libertarians often fall into). Emergent systems can fall apart if the components of the system don’t agree on the rules of behavior at interfaces. (That is how civil wars and computer viruses happen.) Emergent systems can fail if they are faced with a challenge that can be handled by the system globally, but not by the individual elements of the system that are in direct contact with the challenge. (That is why we have national armies instead of just militias, and why we have health insurance.) Emergent systems can also fail because of insufficient damping, where a behavior once started causes larger and larger oscillations until the system can no longer function. (I sometimes fear that political debate in the US, particularly over executive appointments to office, is in such a series of oscillations, and that eventually it will simply be impossible to get any judge or cabinet member confirmed.)
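That last failure mode, insufficient damping, is easy to show in miniature. This is my own sketch, with an arbitrary “gain” standing in for how hard each side overcorrects:

```python
# Minimal sketch of over-correction: each move is a reaction to the last one,
# scaled by an arbitrary "gain". Below 1 the swings die out; above 1 they grow.

def swings(gain: float, steps: int = 12) -> list[float]:
    x, out = 1.0, []
    for _ in range(steps):
        x = -gain * x          # overreact to the previous deviation
        out.append(round(x, 2))
    return out

print("damped   (gain 0.8):", swings(0.8))
print("unstable (gain 1.2):", swings(1.2))
```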
But emergent systems do handle a great many problems very well, particularly the very large and very complex problems that the bureaucrats and statists tend to want to control centrally. And this is anathema to the bureaucrats and statists: in an emergent system, if something goes wrong, it is someone’s fault, and that person can be identified and punished. Bureaucracy and statism are all about avoiding personal responsibility by shifting the responsibility to “the system” or “the process” or “anyone but me”.