Thursday, November 25, 2004

Simplifying John Maeda

Today, I ran across an interview with MIT's John Maeda in The Economist. In this interview (actually just a springboard for the rest of the article), Maeda complains of the lack of usability in modern technology, proclaiming "that if he, of all people, cannot master the technology needed to use computers effectively, it is time to declare a crisis."

Of course, the article doesn't limit itself to the rant of one cranky academic, but provides supporting views and data. Ray Lane, a venture capitalist, asserts that, "Complexity is holding our industry back right now. A lot of what is bought and paid for doesn't get implemented because of complexity." Several studies are cited to bolster the assertion. One research group "found that 66% of all IT projects either fail outright or take much longer to install than expected because of their complexity. Among very big IT projects, those costing over $10m apiece, 98% fall short." These are strong numbers, and they likely match the experience of most people. That is, while not everyone is involved in large projects, most people are in some way impacted by them. Whether or not that impact is recognized, the basic assertion - that technology is often difficult and unwieldy - is easily proven. In fact, an illustration of this point begins the Maeda article.

I explored the issue of complexity a bit in previous blog entries here, here, and here. Much of my argument stemmed from the idea that we have been developing complexity - that there were two paths we could have taken, and we chose the one less navigable. However, there are other views on how we arrived at this juncture, how this Gordian Knot was tied.

One such alternate explanation was recently published in ACM Queue. While the authors of this article aim to "make it possible for people to express their ideas in the same way they think about them," they also examine where we came from. Underlying their solution are many ideas from User Interface design, such as direct manipulation. Basically, they view programming as "the process of transforming a mental plan in familiar terms into one compatible with the computer." The more direct this process is - that is, the fewer transformations needed to move from the "familiar terms" to "computer" terms (at least as perceived by the user) - the more effective it is. One could assert that herein lies yet another bastardization of the Second Law of Thermodynamics. Nonetheless, the authors see increased levels of transformation as negative, and as a characteristic of modern programming languages. Of course, they are discussing programming, which may influence usability paradigms but doesn't necessarily translate into usability of the end application itself.
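
To make this notion of transformation distance concrete, here is a rough sketch of my own (not an example from the Queue article), in Python: the same mental plan, "add up all the scores," expressed with more and with less translation into the computer's terms.

    scores = [70, 85, 92]

    # Indirect: the plan is recast in terms of indices, counters, and bounds.
    total = 0
    for i in range(len(scores)):
        total = total + scores[i]

    # More direct: the code reads much closer to the plan itself.
    total = sum(scores)

Neither version is difficult, but the first asks the writer to think about bookkeeping that has nothing to do with the original plan.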

The authors call the design process they promote "Natural Programming." The tenets are straightforward and have much in common with other formal processes for creativity. In fact, returning to my earlier arguments concerning the link between writing and coding, these tenets appear to be about the same as those in any Freshman Writing/Composition course. First, "[i]dentify the target audience and the domain" - or, the "who is the audience" from one's composition course. Then, "[u]nderstand the target audience" - what I'll call "what are the requirements/needs of this audience." Once one is ready to write/code, "[d]esign the new system based on this information." Finally, "[e]valuate the system to measure its success, and to understand any new problems the users have." In composition, one might refer to this as the cycle of drafting and revision.

It is this last step, revision, where the authors see many bug-related issues. Even in systems (such as Alice - an influence on Croquet) designed with ease of use and the premises of natural programming in mind, debugging "could benefit from being more natural." One of the key issues is how easily the developer/coder/writer can identify errors. The authors cite a study of Alice users which "found that 50% of all errors were due to programmers' false assumptions in the hypothesis they formed while debugging existing errors." In teaching composition, I found a similar issue when students revised their work - often, errors were introduced when they attempted to fix existing issues (or perceived existing issues). It is difficult to convey the scope and manner of such an issue, since any such exchange involves translation. We are given a set of symbols and rules to help define the problem (subject/verb agreement, singular/plural agreement, sentence fragments, etc. in writing; types, scope, exceptions, etc. in coding), but these anchor points seem ineffective.
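
A contrived sketch of my own (not drawn from the Alice study) may help illustrate how a fix built on a false hypothesis introduces a new error. Here, in Python, a reported average comes out too high because a grade was entered twice, but the "fix" guesses at the arithmetic instead of the data:

    grades = [88, 92, 92, 75]   # 92 was accidentally entered twice

    def average(values):
        # False hypothesis: "the divisor must be too small."
        return sum(values) / (len(values) + 1)   # new bug: every average is now too low

    print(average(grades))

The original formula, sum(values) / len(values), was never the problem; the duplicate entry was. One perceived issue has become two actual errors.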

Returning to the issue of making the process "natural," the choice of symbols used as anchor points is important. If we intend to use these words to direct attention to a particular issue, the path should be intuitive. Given the errors encountered in revision/bug-fixing, clearly this is not the case. An essay by Virginia Valian and Seana Coulson published in the Journal of Memory and Language provides some hints here. The authors found that in high-frequency dialects (where a given marker occurs about six times as often as a given content word), subjects "learned the structure of the language easily," whereas in low-frequency dialects (where markers occur only about 1.5 times as often), subjects "learned only superficial properties of the language" (71). What do these markers do? They specify, identify, and distinguish. For example, "for English, children will use extremely high-frequency morphemes like 'the' as anchor points, and observe what words co-occur with the anchor points" (72). This echoes an underlying tenet of Blending Theory: we need some cognitive anchor to begin to understand the new. And while we may identify an error within a specific context ("null pointer exception on line 205", "you have a problem with agreement here"), to the writer/developer the process of interpreting and resolving the problem is not completely intuitive.
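
To give a concrete (and again invented) Python example of an anchor point naming the symptom rather than the cause:

    def find_user(users, name):
        for user in users:
            if user["name"] == name:
                return user
        return None   # the real mistake: quietly returning None for a missing user

    def greet(users, name):
        user = find_user(users, name)
        print("Hello, " + user["email"])   # the traceback points at this line, not the one above

    greet([{"name": "Ada", "email": "ada@example.org"}], "Grace")

The error message anchors our attention on the line where the failure surfaced, a step removed from the decision that caused it; the reader is still left to make the translation back.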

So is there a way to bring these threads together? Is simplifying the technology a byproduct of making it more natural? It seems that the questions being raised are good ones, ones that will hopefully lead to a better working relationship with technology. While I have misgivings about several of Ray Kurzweil's and Vernor Vinge's tenets, such a merging underlies their concept of the singularity. It will be interesting to follow these attempts to see what fruit they bear.
