Monday, February 13, 2012

Unfortunate assumptions

These last few days at iConference and CSCW have exposed in me an assumption that has been core to the way I've worked with and around computers for the greater part of the past few decades: namely, that all this technological wonder around us was in fact designed under the guidance of some grand theory in the background that I had simply been too dense to discern, yet which operated well enough to create a self-sustaining ecosystem of diverse implementations of what we call "information technology".

And yet, I've been told in no uncertain terms several times at these two conferences (and at an "experimental software design" course a week before) that there is no central theory guiding how our information technologies are designed. While there is a plethora of local theories about why components in microcosm work well most of the time if we follow such-and-such a pattern, we do not understand why those patterns are the way they are, beyond resorting to the fallback of "complexity". As someone trained in a couple of the hard natural sciences, I was surprised to hear that the field of software design would be ecstatic if it could manage to get commercial implementation projects to fail no more than 75 per cent of the time.

I know there is a lot of trial and error in biology and chemistry to develop and optimize various tools and understandings of particular sub-systems within and among organisms, and that some very hard problems remain to be solved. But in those cases, many of the designs have already been supplied, and we can only tweak around the edges. I also know that in no other industry of scale would it be acceptable for a production design to have a 20% yield.

We sell information systems and their design as though they are consumer products suitable for all kinds of uses. A few sizes fit all, as it were.
If other branches of engineering failed to produce, from their designs, the kinds of ubiquitous products that information systems are supposed to be, the public would be right to ask some damning questions. Imagine if three quarters of houses or cars or electrical lines fell apart as they were being built.

One of the key apparent differences between roads and information systems is that roads serve an explicit policy purpose (ensuring that residents of a geography may engage in unspecified social and economic intercourse through physical mobility), while information systems appear to serve no policy purpose at all. I say "apparent differences" because policy people in governance rarely think about the rules embedded in software algorithms as a kind of policy, and designers of information systems rarely think of policy beyond business rules and access controls. Each of the two disciplinary infrastructures has a concept of policy, but those concepts are not shared, even though they are largely compatible conceptually.

Now, why is this a problem?

Consider that the formal and informal policy regimes surrounding the design of roads or cars or aircraft or medical devices are relatively well defined and constrain the universe of possible variation in design choices. As a consequence, public policy provides a theoretical framework to guide the design of roads toward the optimal outcome of effective (fast, cheap, good) transportation infrastructure. (In the same manner, the theory of electron orbitals in chemistry lets us design reactions in which we try to convert all of the starting materials into the desired products without producing waste. We cannot execute perfect reactions, but the theoretically best possible and most efficient outcome provides an unambiguous design goal.) There is no obvious analogous theoretical framework around which to design an information system.

There is no optimal or maximal condition of information replication or delivery toward which to strive in the design of an information system. Therefore, it's difficult to measure whether we make progress through revisions to our designs and implementations. And we must rely on crude indicators of policy effectiveness during and after implementation (well past the point of design and manufacture) to know whether or not the designed information system product is defective.

And there is the rub. It is difficult to evaluate the effectiveness of meatspace policies, but we can look at the degree of compliance, the costs, the outcomes, and so on, of any policy instrument. There are objective quantitative and qualitative measures that can tell us whether a policy is (likely to be) effective, and therefore how to design policies to achieve optimal or super-optimal outcomes. Policies designed and built into roads and infrastructure are beneficially constrained by competing but complementary theories of the public good, good governance, and the like. By contrast, policies designed and built into information systems and infrastructures are constrained only by the availability of starting resources (processing power, storage, and interconnectivity), without a supervisory social layer to keep long-term real-world considerations in mind, or to constrain design exploration to a small universe of conscientious possibilities.

Laying enough roads, and adding and removing them enough times, will eventually cause the system to encounter a good-enough design without discovering the underlying principles (the Romans' arches, for example), but there are better-guided approaches, such as traffic engineering and urban planning theories. The approach of hacking and re-hacking information system designs is (evidently) very good at stumbling onto good-enough designs. Shall we look for ways to be constrained in and by our design of information systems?
