Wednesday, February 29, 2012

Winter


The local bus stop.


Market square ice fort and art installation.


The drainage canal doesn't completely freeze over.


Snow on tree.

Fighting for Internet freedoms is hard, part 1131

As much as I find most forms of public protest ineffective as policy-making or policy-analysis tools, the public should be trusted to make such decisions for itself.

House Passes Bill That Will Make Protesting Illegal at Secret Service Covered Events: http://www.economicpolicyjournal.com/2012/02/houses-passes-new-bill-that-would-make.html

Although the text was public and listed on the usual open democracy and open government sites (e.g., http://www.opencongress.org/bill/112-h347/text), none of the usual civil liberties organizations appears to have said a thing about it (not that it's their fault: they lack adequate operational and theoretical tools to have intervened here).

The growing erosion of liberties, concurrent with the growing number and scale of computer-based democratic movements over the last three decades, suggests that we're not going to simply code, publish, or 'open' our way out of this problem.

Could we at least admit that the current approaches to rebuffing a broken system are themselves ineffective, so that such approaches might be more open to criticism and refinement?

Salmon and rice... casserole?



with berry sauce and hot pink salad.

Tuesday, February 14, 2012

What's new?

"What am I looking at here?" is a question I've asked repeatedly in the last two weeks of students, researchers, presenters, etc., proudly presenting interesting posters and presentations. Details covering everything from bribery in India, homeless use of twitter, surveillance cameras and federal legislation, ... While the empirical settings are endlessly fascinating and potentially impactful in the lives of many, the conclusions seem almost obvious.

Distractions distract, people sometimes struggle to discover and use information, different kinds of people collaborate differently on different kinds of tasks... One presenter sarcastically summed it up as "we show that different people are different".

I should have been asking the question "What's new here?" What does this empirical work tell us _that we didn't know before_ (or could reasonably guess) about humanity and how it interacts with itself and with technical artefacts? Surely, all of this individually fascinating work is telling us something about _how to avoid_ rediscovering the broad strokes for each future situation we encounter involving computers and humans, so that we can quickly localize and adapt to each situation.

I've seen (versions of) many of these studies and technical tricks countless times in art galleries and hacker spaces elsewhere. The specific combinations instantiated here may be of particular interest, but which features of the recombinations are relevant to the findings we are meant to absorb?

My personal challenge, not having a deep-rooted grounding in this field, is how to situate all these details in some frame of knowledge. How do all these empirically and operationally defined contexts and situations relate to each other, and to the several disciplines from which this CSCW endeavor draws?

Monday, February 13, 2012

Unfortunate assumptions

These last few days at iConference and CSCW have exposed an assumption of mine that has been core to the way I've worked with and around computers for the greater part of the past few decades: namely, that all this technological wonder around us was in fact designed under the guidance of some grand theory in the background that I had simply been too dense to discern, yet which operated well enough to create a self-sustaining ecosystem of diverse implementations of what we call "information technology".

And yet, I've been told in no uncertain terms several times at these two conferences (and at an "experimental software design" course a week before) that there is no central theory guiding how our information technologies are designed. While there is a plethora of local theories about why components in microcosm work well most of the time if we follow such-and-such a pattern, we do not understand why those patterns are the way they are, beyond resorting to the fallback of "complexity". As someone trained in a couple of the hard natural sciences, I was surprised to hear that the field of software design would be ecstatic if it could manage to get commercial implementation projects to not fail more than 75 per cent of the time.

I know there is a lot of trial and error in biology and chemistry to develop and optimize various tools and understandings of particular sub-systems within and among various organisms, and that some very hard problems remain to be solved. But in those cases, many of the designs have already been supplied, and we can only tweak around the edges. I also know that in no other industry of scale would it be acceptable for a production design to have a 20% yield.

We sell information systems and their design as though they are consumer products suitable for all kinds of uses. A few sizes fit all, as it were.
If other branches of engineering failed to produce, from their designs, the kinds of ubiquitous products that information systems are supposed to be, the public would be right to ask some damning questions. Imagine if 3/4 of houses or cars or electrical lines fell apart as they were being built.

One of the key apparent differences between roads and information systems is that roads serve an explicit policy purpose (relating to ensuring that residents of a geography may engage in unspecified social and economic intercourse through physical mobility), while information systems appear to serve no policy purpose at all. I say "apparent differences" because policy people from governance rarely think about the rules embedded in software algorithms as a kind of policy, and designers of information systems rarely think of policy outside of business rules and access controls. The two disciplinary infrastructures each have a concept of policy, but those concepts are not shared, even though they are largely compatible.

Now, why is this a problem?

Consider that the formal and informal policy regimes surrounding the design of roads or cars or aircraft or medical devices are relatively well defined and constrain the universes of possible variation in design choices. As a consequence, public policy provides a theoretical framework to guide the design of roads toward the optimal outcome of effective (fast, cheap, good) transportation infrastructure. (In the same manner, the theory of electron orbitals in chemistry lets us design reactions in which we try to convert all of the starting materials into the desired products without producing waste. We cannot execute perfect reactions, but the theoretical best possible and most efficient outcome provides an unambiguous design goal.) There is no obvious analogous theoretical framework around which to design an information system.
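To make the chemistry analogy concrete with a toy calculation (the figures below are invented for illustration), percent yield compares what a process actually produces against the theoretical maximum, which is exactly the kind of unambiguous target that software design lacks:

# Percent yield: actual output over the theoretical maximum.
# The numbers are hypothetical, purely to illustrate the formula.
actual_g = 7.2       # grams of product actually recovered
theoretical_g = 9.0  # grams the stoichiometry says are possible
print(f"yield = {100 * actual_g / theoretical_g:.0f}%")  # -> yield = 80%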

There is no optimal or maximal condition of information replication or delivery toward which to strive in the design of an information system. It is therefore difficult to measure whether we make progress through revisions to our designs and implementations, and we must rely on crude indicators of policy effectiveness during and after implementation (well past the point of design and manufacture) to know whether or not the designed information system product is defective.

And there is the rub. It is difficult to evaluate the effectiveness of meatspace policies, but we can look at the degree of compliance, the costs, the outcomes, etc., of any policy instrument. There are objective quantitative and qualitative measures that can tell us whether a policy is (likely to be) effective, and therefore how to design policies to achieve optimal or super-optimal outcomes. Policies designed and built into roads and infrastructure are beneficially constrained by competing but complementary theories of public good, good governance, and the like. By contrast, policies designed and built into information systems and infrastructures are constrained only by the availability of starting resources (processing power, storage, and interconnectivity), without a supervisory social layer to keep long-term real-world considerations in mind, or to constrain the activity of design exploration to a small universe of conscientious possibilities.

Laying enough roads, and adding and deleting them enough times, will eventually cause the system to encounter a good-enough design without discovering the underlying principles (the Romans' arches, for example), but there are better-guided approaches, such as traffic engineering and urban planning theories. The approach of hacking and re-hacking information systems designs is (evidently) very good at stumbling onto good-enough designs. Shall we look for ways to be constrained in and by our design of information systems?

Friday, February 3, 2012

echoes and portents

One day, each person will have information assistant devices so powerful that they will not rely on connections to a small number of web application servers for anything more than updated data to be processed locally. Instead of downloading basic Javascript and HTML5 code that must be interpreted on every platform and is optimised for the capabilities of none, an enterprising rebel will envision a way to gather the interpretation into an optimised thing that may be stored and retrieved locally at the time of running.

And then perhaps someone will devise a way to distribute such "optimised things of running" over some telecommunications network, so that not every person using the same kind of information assistant need repeat the same work of gathering all of the raw basic pieces and interpreting them. If we are so lucky, such packages may even be catalogued in repositories and given numbers indicating their order of production, so that old devices may continue to use old packages while newer devices with more capabilities use newer packages. As storage technology advances, it may become possible to distribute curated and described collections of packages relating to common interests and purposes, without waiting for the slow telecommunications network to transmit individual packages.
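Purely as an illustration of that cataloguing idea (all names here are hypothetical, not any real package manager's API), a minimal sketch might pick the newest catalogued package a given device can still use:

# Hypothetical repository entries: a name, an order-of-production
# number, and the capability level a device needs to run the package.
from dataclasses import dataclass

@dataclass
class Package:
    name: str
    version: int          # order-of-production number
    min_capability: int   # capability level the device must have

def pick_package(repo, name, device_capability):
    """Return the newest catalogued package this device can use."""
    candidates = [p for p in repo
                  if p.name == name and p.min_capability <= device_capability]
    return max(candidates, key=lambda p: p.version, default=None)

repo = [Package("renderer", 1, 0), Package("renderer", 2, 3)]
print(pick_package(repo, "renderer", 1).version)  # old device -> 1
print(pick_package(repo, "renderer", 5).version)  # newer device -> 2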

As the capabilities of each person's information assistant grow even further, some of the packages may even become so powerful that they can collect, store, and compute useful information without relying on connections to web servers, and then many may be free to assist the user while untethered. One such package might enable users of particularly powerful personal information assistants to experiment with operating a small version of the old and largely forgotten web application servers, thereby necessitating a suite of small utilities for the upkeep of such servers.
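(For a present-day taste of the hobbyist server imagined above, Python's standard library really does ship a minimal file server; the scenario wrapped around it is, of course, this post's fiction.)

# Serve the current directory's files on port 8000 using Python 3's
# built-in http.server module (standard library).
from http.server import HTTPServer, SimpleHTTPRequestHandler

HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()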

If many hobbyists and researchers start to offer information servers in this way, they will need a better way to discover and locate information than simple lists of Tweets and Likes. Recalling some popular fictional serial characters from childhood, someone may create a tool by the name of "Annikin", which someone else might follow with another tool by the name of "Jar Jar".
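One plausible shape for such a discovery tool (sketched here with invented names and data) is an inverted index mapping words to the servers that mention them:

# A toy inverted index: each word points to the set of servers whose
# listings mention it. The servers and listings are invented.
from collections import defaultdict

index = defaultdict(set)

def add_listing(server, description):
    for word in description.lower().split():
        index[word].add(server)

def locate(word):
    return sorted(index[word.lower()])

add_listing("hobbyist-a", "recipes for salmon casserole")
add_listing("hobbyist-b", "archive of federal legislation")
print(locate("salmon"))  # -> ['hobbyist-a']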

Meanwhile, information assistants will also inspire recreational uses, because their encephalographic interfaces will provide a far richer gaming experience than their visual predecessors. Encephalographic processing units will remain specialized despite all other APUs migrating onto the main SoC.

Realising that not everyone who has information to offer has the skills to design packages or operate servers, someone may create a standard by which to share information that does not need to change with each use.

This simple protocol, which may use labels to describe the title and major sections of information in general, along with some basic presentation suggestions, would be accompanied by one tool (perhaps "Asorty server", after its assorted origins) to serve such information from almost any personal information assistant owned by most people, and another tool to render such information on any similar personal computing assistant. The global-scale matrix of information enabled by these tools may be largely ignored outside research circles until the vendor of the most popular information service introduces the mainstream public to matrix locations, coining the phrase "the WWDC that never ended".
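As a toy sketch of such labels and a renderer (the syntax and names are invented for this post, not any real protocol), one could imagine:

# Invented label syntax: <title>...</title> and <section>...</section>.
# A crude renderer underlines titles and blank-lines between sections.
import re

DOCUMENT = """\
<title>Echoes and portents</title>
<section>Information assistants will grow powerful.</section>
<section>Packages will be catalogued and shared.</section>
"""

def render(labelled_text):
    for label, body in re.findall(r"<(title|section)>(.*?)</\1>",
                                  labelled_text, re.S):
        if label == "title":
            print(body)
            print("=" * len(body))
        else:
            print()
            print(body.strip())

render(DOCUMENT)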

And then, perhaps, someone decides to make labels that allow basic conditions, say to provide information in the language and format that best match those of the personal information assistant being serviced. Such labels will logically be extended to form a Turing-complete language, suitable for writing complete computing suites, on specialized information assistants designed to parse labels and render information. Eventually, specialized renderers will be developed at great expense and operating cost, housed in renderer villages.
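A sketch of that conditional-label step (again with invented syntax; the matching rule is hypothetical) might look like this:

# A <when lang="..."> label whose body is kept only when it matches
# the requesting assistant's declared language. All syntax is invented.
import re

DOCUMENT = """\
<when lang="en">Welcome!</when>
<when lang="fi">Tervetuloa!</when>
"""

def select(labelled_text, assistant):
    kept = []
    for lang, body in re.findall(r'<when lang="(\w+)">(.*?)</when>',
                                 labelled_text, re.S):
        if lang == assistant.get("lang"):
            kept.append(body)
    return "\n".join(kept)

print(select(DOCUMENT, {"lang": "fi"}))  # -> Tervetuloa!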

Epilogue: Such renderers will eventually take advantage of the n-dimensional output capabilities of EPUs, and perhaps attempt to use them to speed up some relatively simple output collapse functions for which EPUs are optimized, but for which the information assistant SoCs are not. EPU manufacturers will eventually realize that there are scientific and research markets for devices that are quick at accurately predicting the future, and manufacture special lines of products containing EPUs to forecast and enforce outcomes in complex information systems instead of rendering information for humans.