Monday, November 29, 2004

Pinochet's Children

It has been almost a year since I saw the documentary Pinochet's Children. The film had a tremendous impact on me, the audience, and the participants - but not the world. Of course, I had an advantage when I saw it: the director, Paula Rodriguez, introduced the film, and there were several Chileans in the audience. Needless to say, the discussion after the film was very emotional. The people around me had lived with the knowledge that friends and family - a sister in one case, a close childhood friend in another - had been "disappeared" and killed, often for being in the wrong place at the wrong time. I had only the antiseptic second-hand accounts of Pinochet as a villain.

Chile seems a remote place; indeed, it contains the southernmost point one can reach in the Americas. Yet as an American, it seems impossible that one could walk, or drive, from here to there. So is American isolationism a myth? Perhaps the public is aloof, but "America" the nation has been actively involved. It is no real secret that the CIA participated in the coup that put Pinochet in power, though international politeness downplays the extent of our involvement. It seems odd that a democracy would have such a bifurcation between populace and government. Still, I'm not intending to get on my political soapbox - I'm using Chile as an illustration of our innate selfishness.

I'm sure that many would protest and say that American apathy is endemic to our nation, but I think that it is really a human characteristic. One tribe does not really care about another; the American condition is a luxury, not a symptom of who we are. That isn't to say that such an attitude is right in any sense. I'm just stating that most people (i.e., a significant portion of a population) would exhibit the same characteristics given the same environment. In other words, people are more-or-less interchangeable.

It is our very interchangeability, our sameness, that makes a documentary such as Pinochet's Children powerful. We observe people and places that do not seem foreign in circumstances that are incomprehensible. The film, being a film, contains an element of make-believe - we "know" movies to be artificial. Thus, we engage with the feature on a personal level - we are not bound by limitations of propriety or issues of safety. Yet, as documentary, we realize that the film really isn't make-believe. We are uneasy. This uneasiness is likewise a universal human trait. As members of our family-tribes/clans, we may isolate ourselves, but we still recognize ourselves in each other. As observers, we are troubled. I cannot say what the perpetrators feel.

Thursday, November 25, 2004

Simplifying John Maeda

Today, I ran across an interview with MIT's John Maeda in The Economist. In this interview (actually just a springboard for the rest of the article), Maeda complains of the lack of usability in modern technology, proclaiming "that if he, of all people, cannot master the technology needed to use computers effectively, it is time to declare a crisis."

Of course, the article doesn't limit itself to the rant of one cranky academic, but provides supporting views and data. Ray Lane, a venture capitalist, asserts that, "Complexity is holding our industry back right now. A lot of what is bought and paid for doesn't get implemented because of complexity." Several studies are cited to bolster the assertion. One research group "found that 66% of all IT projects either fail outright or take much longer to install than expected because of their complexity. Among very big IT projects, those costing over $10m apiece, 98% fall short." These are strong numbers, but they likely match the experience of most people. That is, while not everyone is involved in large projects, most people are in some way impacted by them. Whether the impact is recognized or not, the basic assertion that technology is often difficult and unwieldy is easy to confirm. In fact, an illustration of this point begins the Maeda article.

I explored the issue of complexity a bit in previous blog entries here, here, and here. Much of my argument stemmed from the idea that we have been developing complexity - that there were two paths we could have chosen, and we chose the one less navigable. However, there are other views on how we arrived at this juncture, how this Gordian Knot was tied.

One such alternate explanation was recently published in ACM Queue. While the authors of this article look forward to "make it possible for people to express their ideas in the same way they think about them," they also examine where we came from. Underlying their solution are many ideas from user interface design, such as direct manipulation. Basically, they view programming as "the process of transforming a mental plan in familiar terms into one compatible with the computer." The more direct this process is - that is, the fewer transformations needed to move from "familiar" terms to "computer" terms (at least as perceived by the user) - the more effective it is. One could assert that herein lies yet another bastardization of the Second Law of Thermodynamics. Nonetheless, the authors see increased levels of transformation as negative, and as a characteristic of modern programming languages. Of course, they are discussing programming, which may influence usability paradigms, but doesn't necessarily translate into usability (of the end application) itself.
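To make the "transformation" idea concrete, here is a toy sketch of my own (in Python, and not anything taken from the Queue article): the same mental plan - add up the squares of the even numbers - written once with several intervening translation steps and once with almost none.

```python
# A toy illustration (mine, not the Queue authors'): one mental plan,
# expressed with more and with fewer translation steps.

numbers = [3, 8, 5, 12, 7, 4]

# More transformations: the plan is recast into index bookkeeping,
# explicit accumulation, and a modulus test before it resembles the intent.
total = 0
i = 0
while i < len(numbers):
    if numbers[i] % 2 == 0:
        total = total + numbers[i] * numbers[i]
    i = i + 1

# Fewer transformations: the expression reads close to the plan itself.
total_direct = sum(n * n for n in numbers if n % 2 == 0)

assert total == total_direct
print(total_direct)  # 224
```

Neither version is wrong; the difference is simply how far the text on the screen sits from the plan in one's head.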

The authors call the design process they promote "Natural Programming". The tenets are straightforward, and have much in common with other formal processes for creativity. In fact, returning to my earlier arguments concerning the link between writing and coding, these tenets appear to be about the same as those in any Freshman Writing/Composition course. First, "[i]dentify the target audience and the domain" - or, the "who is the audience" from one's composition course. Then, "[u]nderstand the target audience" - what I'll call "what are the requirements/needs of this audience." Once one is ready to write/code, "[d]esign the new system based on this information." Finally, "[e]valuate the system to measure its success, and to understand any new problems the users have." In composition, one might refer to this as the cycle of drafting and revision.

It is this last step, revision, where the authors see many bug-related issues. Even in systems (such as Alice - an influence on Croquet) designed with ease-of-use and the premises of natural programming in mind, debugging "could benefit from being more natural." One of the key issues is how easily the developer/coder/writer can identify errors. A study of Alice users is cited which "found that 50% of all errors were due to programmers' false assumptions in the hypothesis they formed while debugging existing errors." In teaching composition, I found a similar issue when students revised their work - often, errors were introduced when they attempted to fix existing issues (or perceived existing issues). It is difficult to convey the scope and nature of the problem to the writer or coder, since the exchange itself involves translation. We are given a set of symbols and rules to help define the problem (subject/verb agreement, singular/plural agreement, sentence fragment, etc. in writing; type, scope, exceptions, etc. in coding), but these anchor points seem ineffective.

Returning to the issue of making the process "natural," the choice of symbols used as anchor points is important. If we intend to use these words to direct attention to a particular issue, the path should be intuitive. Given the errors encountered in revision/bug-fixing, clearly this is not the case. An essay by Virginia Valian and Seana Coulson published in the Journal of Memory and Language provides some hints here. The authors found that in high-frequency dialects (a given marker occurs about six times as often as a given content word), subjects "learned the structure of the language easily," whereas in low-frequency dialects (markers occur about 1.5 times as often), subjects "learned only superficial properties of the language" (71). What do these markers do? They specify, identify, and distinguish. For example, "for English, children will use extremely high-frequency morphemes like 'the' as anchor points, and observe what words co-occur with the anchor points" (72). As Blending Theory also holds, we need some cognitive anchor in order to begin to understand the new. And while we may identify an error within a specific context ("null pointer exception on line 205", "you have a problem with agreement here"), to the writer/developer the process of interpreting and resolving the problem is not completely intuitive.
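A contrived example of my own may help show why these anchor points underperform: the message anchors on the line that blows up, not on the line where the mistake was actually made.

```python
# A contrived sketch (mine): the runtime error anchors on the line that
# fails, not on the line where the mistake was introduced.

def lookup_price(catalog, item):
    return catalog.get(item)      # the mistake: silently returns None
                                  # when the item is missing

def total_with_tax(catalog, item, rate=0.07):
    price = lookup_price(catalog, item)
    return price * (1 + rate)     # the anchor: TypeError is raised here

catalog = {"widget": 9.99}
print(total_with_tax(catalog, "gadget"))   # misspelled/missing item
```

The traceback dutifully names the multiplication, while the real problem - the silent None for a missing key - lives one function away. The anchor points somewhere; it just doesn't point at the mistake.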

So is there a way to bring these threads together? Is simplifying the technology a byproduct of making it more natural? It seems that the questions being raised are good ones, ones that will hopefully lead to a better working relationship with technology. While I have misgivings about several of Ray Kurzweil's and Vernor Vinge's tenets, such a merging underlies their concept of singularity. It will be interesting to follow these attempts to see what fruit they bear.

Tuesday, November 23, 2004

Corndogs

Perhaps it is because of the impending holiday season, but I've been thinking about food lately. Yesterday it was pistachios, today it is corndogs. However, while I will gladly consume pistachios, I'm not a big corndog fan. Rumor has it that an esteemed administrator at my fine workplace has a serious penchant for corndogs and Schlitz beer. I'm guessing that he must stockpile the beer, since it isn't produced in volume anymore, but corndogs are freely available.

For those interested in the origins of the corndog, I suggest checking out this blogger's entry on hot dogs. Of course, the halcyon days of the corndog aren't gone; we have a National Corndog Day Festival. I find it strange, however, that the festival does not fall under National Hot Dog Month, but I suppose that these are two different critters. If one is interested in honoring carb-intensive meat products on-a-stick, then perhaps it would be worthwhile to check out the Corndog Festival. One could listen along the way to some of the best rock albums of the 70s to accompany a corndog. What would be even cooler would be listening to these tunes as one cruised along in the Weinermobile, though the corn car would probably suffice.

While these festivals are fun, they don't address the need for immediate corndog gratification. That is where a home corndog kit comes in, though results using these are varied. Just use a decent recipe. For best results, I recommend a commercial unit. Otherwise, you may need to fly out to Disneyland to get your fix.

One thing to watch is one's corndog intake. These premium pups are even worse for you than hot dogs. One should not attempt power-eating these guys, though there are worse medical scenarios involving corndogs.

Perhaps, then, our inimitable leader chooses corndogs for their entertainment value. After all, corndogs can be a special friend. Moreover, the corn dog is complex, exhibiting a dark side as well as a need for heroes. Besides, with enough Schlitz beer, I suppose that anyone could find a true friend in a fritterized meat-cicle. And such effusive offerings of kind regard help to lift our University out of the mire of sociopathy into the world of unity. Perhaps Dr. Atkins was wrong after all.

Monday, November 22, 2004

Pistachios

Pistachios. I have a jar of pistachios in front of me right now. The green ones - I never really liked the red ones all that well. Why are there red ones and green ones? The green ones are the natural color. According to the California Pistachio Commission's website, the red color was used to draw attention to the delicious nutmeats and to disguise blemishes produced during harvesting. From the information the site provides, the nuts must be hulled and dried within a day; otherwise, the shell will be stained. However, modern processing techniques prevent such problems. According to these Nut-mongers, the small percentage of red nuts (at least from California) is dyed not out of necessity, but to appease customer preference. They must be referring to someone else.

Since, according to the California nut folks, pistachios originated in the Middle East and were purportedly a favorite delicacy of the Queen of Sheba, it is not surprising that many Middle Eastern nations still produce these nutritious nuts commercially. And, while California does a good job promoting our domestic munchable, the foreign produce is often top-drawer. In fact, one site maintains that a Greek variety of pistachio, Aegina, is considered one of the best in the world.

The pistachio and its relatives appear to have benefits beyond making me fat. Looking at the Wikipedia entry, I find that the pistachio tree is related to plants used to produce both varnish and turpentine. I won't get fat on that! I also find that at least one enterprising soul has made a drinking game involving everyone's favorite party snack. The inimitable Martha Stewart used pistachios for decorating, perhaps more in line with her Gothic tastes. Maybe she was influenced by the swank Seeds of the Bible display available online.

Moving further into the realm of "I can't eat it" is Mitsubishi's experiment in urban transportation. Of course, this car is found in Japan, so perhaps Godzilla will find tasty nut-meats inside once he peels away that troublesome shell. The ensuing imagery would form a nice basis for a sequel to William Trowbridge's Complete Book of Kong: Poems, an imaginative work in and of itself. I had the pleasure of hearing Trowbridge read from the book, and it seems that there is a poem or two that mentions Godzilla. Something about Kong running into the big lizard in the Universal Studios cafeteria. So I end with Godzilla in a studio cafeteria talking to Kong as he considers the bag of pistachios in front of him; priced at a buck, crunchy, salty, and in his native green color.

Saturday, November 20, 2004

Plan 9 on a Gamecube

I have been wondering lately about modifying my Nintendo Gamecube. Recently, a group of individuals managed to boot Linux on the Gamecube, and another managed to boot OS X on his. I have one of two things in mind:

1) Plan 9 cluster. Get Plan 9 to boot up on my Gamecube. Actually, this is a perfect solution, since one can remote boot. I'd like to get the initial bootloader on a memory card, then pull the secondary boot loader over the network connection. This wouldn't be the first weird implementation of this unusual OS.

2) Beowulf cluster. Years ago, I was involved in the now-defunct Wulfstation project, aimed at bringing Beowulf clustering to the Playstation 2. That project failed for a variety of reasons, most of which revolved around the fact that the Playstation 2 had not been released yet. Yet the underlying reasons for pursuing it are still valid. Others have modded game machines to run Linux/NetBSD/etc., but few have really pursued the potential these machines have. Bottom line: there is a lot of machine there for $100 or less.

In order to pursue either Gamecube modification, I will need to obtain the optional, and increasingly hard-to-find, Ethernet adapter - actually several, depending on how many nodes I wish to set up. I currently have two Gamecubes, but I may want more.

Another interesting idea would be porting Apple's XGrid client to the Gamecube. There has already been a Linux implementation (a reverse-engineering hack), so hope is on the horizon. Given a bizarre set of circumstances, my current XGrid consists of two 350 MHz G3 towers, one 400 MHz G4 tower, one 500 MHz G3 iBook, and one 800 MHz G4 iBook, all using 802.11b/g to communicate (depending upon whether the G3 iBook is online). With relatively low processing power and bandwidth limitations, processor-intensive tasks could benefit from having a Gamecube or two aboard - cheap additional processing, and a fantastic use for gaming consoles.

Friday, November 19, 2004

Translation. Immanuel Kant claimed that we understand new things in relation to what we already know (this is a rather extreme reduction of transcendental idealism). In more recent times, a similar argument was used as part of the underlying structure of Blending Theory. This process of integrating the new into the old is essentially a method of translation. At least, translation would seem to be the basic mechanism for understanding the new when one looks at the various "translations" provided for learning new things.

An example of this process of learning via translation was recently shown to me by a coworker. The translation at hand here is the article Python for Lisp Programmers by Peter Norvig. In fact, according to Norvig, this really isn't translation so much as a recognition that Lisp and Python are (arguably) essentially the same. To illustrate his point, he provides side-by-side examples of data types and other features in the two languages. One might play Devil's Advocate with his article by claiming that two spoken languages are the same because they have the same number of adjectives to describe cat hair. Yet, I think that he has a good point, if for no other reason than newer languages tend to borrow ideas (either deliberately or not) from older ones, and Lisp has been influential for about a half-century.
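For flavor, here are a few pairings of the sort Norvig tabulates - my own quick gloss from memory, so his article remains the authority on the details.

```python
# A few Lisp/Python correspondences, written as Python with the rough
# Lisp equivalents in comments. My own gloss, not Norvig's table.

# (defun square (x) (* x x))
def square(x):
    return x * x

# (mapcar #'square '(1 2 3 4))
squares = list(map(square, [1, 2, 3, 4]))

# ((lambda (x) (+ x 1)) 41)
add_one = (lambda x: x + 1)(41)

# (first lst) and (rest lst)
lst = [10, 20, 30]
head, tail = lst[0], lst[1:]

print(squares, add_one, head, tail)   # [1, 4, 9, 16] 42 10 [20, 30]
```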

Another "translation" guide that I recently encountered was a Unix to Plan 9 command translation cheat-sheet. Granted, many of the same people worked on both operating systems, but, aside from that, it seems that here we see translation at play not only in learning but design. But how much difference is there between learning and building. According to Marvin Minsky, not much.

Perhaps, if we acknowledged that we learn best by actively relating new information to old, then we could accelerate our learning. We could move past the ten-year learning curve. We could find relevance for our knowledge, tie it together. Or, perhaps, we could simply avoid feeling like we are "starting over" when we embark upon something new.

Thursday, November 18, 2004

Amazon Reviewer Ranking

I ran across one of my reviews today while passing through links on Amazon. I noted my rank (12856), and wondered just what it "takes" to be a top reviewer. Curious, I clicked on the top reviewer listed in the sidebar (entitled "Top Reviewers") and found a listing of the 8207 reviews written by a certain Harriet Klausner since November 22, 1999. Clicking on her "About Harriet Klausner" link, I found that she claims to be a former acquisitions librarian who reads two books per day. Yet, somehow, she manages to post at least four or five book reviews per day on Amazon...

The vast majority of the books reviewed by Mrs. Klausner are mass-marketed fiction. Interspersed, one can find (only) slightly higher-brow fiction writers such as Dean R. Koontz. I find it noteworthy that nearly every book receives either four or five stars. Given that she reads two books a day and posts reviews of at least double that number, I guess that she must not read any bad books.

I also noticed that many of the books in her recent entries are not even available for sale yet. This lady wouldn't work for a publishing house, would she? My suspicions are up....

Anyway, in other news today, there is a new source of information for the research-oriented, Google's academic search engine. I wonder what impact this new search tool will have on ERIC, EBSCO, and other respected tools. Also, given recent allegations that Google is not shy about filtering content (the other side), what is the impact upon academia? Or will it even be used? Will marketing win over quality? Ah, the suspense!

Oh well, enough ranting for today.....

Wednesday, November 17, 2004

Right brain. Left brain. According to some, in order to develop one side, one must deprive the other. Yet, there are individuals such as Jaron Lanier, Ray Kurzweil, and Eduardo Kac - individuals who develop the two sides simultaneously. In fact, Kurzweil's definition of singularity would seem to allow that both would be pushed forward through the natural acceleration of our species. Kac, in fact, produces his own singularity through transgenic art such as Alba, the genetically altered bunny.

Of course, these people aren't necessarily in agreement. At times, Lanier takes aim at many of the ideas that Kurzweil promotes in The Age of Spiritual Machines. Likewise, I would suspect that both Lanier and Kurzweil may take issue with some of Kac's work. To those who are stuck in right/left brain paradigms, however, it probably all looks Greek. It reminds me of part of P.D. Ouspensky's argument in Tertium Organum - additional dimensions can exist simultaneously; they just may not be perceived. Moreover, what is seen is interpreted differently according to this perception.

Take a close look at these individuals and what they are saying. They are debating our future. A future where right and left brain may be fused, or relegated to history, through the possibilities of the machine. Whereas individuals such as Alan Kay, Alan Turing, and John McCarthy represent part of the past of human/machine interaction, Lanier, Kurzweil, and Kac represent part of the future.

This isn't science fiction, it just looks like it.....

Tuesday, November 16, 2004

I like to listen to music when I work. I like music, period, but perhaps that has something to do with being a musician. I like it all, from The Carter Family to Slayer, Pachelbel to Coltrane. What I have in my deck right now is the Deftones' "Adrenaline" and Monster Magnet's 1992 release, "Superjudge." While the former is decent, if not the best Deftones out there, the latter makes me think a bit. Fitting within the realm of sludge/stoner/doom rock, Monster Magnet has been about the only band, outside of the genre's seminal band, Black Sabbath, to achieve semi-stardom. Yet, one finds them alongside other greats such as The Bevis Frond, Electric Wizard, and Abdullah at my favorite music site.

Superjudge is a cool album, mixing punk, metal, doom, and world music in seamless fashion. Later bands such as System of a Down, perhaps more authentic, do so with equal capacity - but a decade later. For doom/stoner jam sequences, the title track can't be beat. The song has the sort of "bludgeoned with a side of beef" lilting tone later (psychedelically) explored by Abdullah in "Path to Enlightenment."

Perhaps what is most striking about Monster Magnet is the lyrics. The music is trippy-heavy, but there is an assertion of power in the lyrics. When the disc is spinning, the world begins and ends within the confines of the syrup-heavy rhythms these guys pump out. If God's voice is a stentorian hammer cutting through the fog of a confused world, then these guys have found the equivalent within contemporary music.

My recommendation: turn the lights down, crank the stereo, put the disc on, and contemplate your place in the universe.

Monday, November 15, 2004

Today I read an essay available from Sun Microsystems entitled "A Note on Distributed Computing" (SMLI TR-94-29). This is an older document, written by Jim Waldo, Geoff Wyant, Ann Wollrath, and Sam Kendall ten years ago, bearing a publication date of November 1994. In this essay, the authors note the difference between local and remote resources, and what bearing this has on developing distributed applications.

Specifically, there are three areas of difference: latency, memory access, and concurrency. The authors argue that one cannot treat local and remote resources in a generic manner, and that the above areas of difference are the thorns in the side of unified resource models. Moreover, the issue isn't something that can be papered over through language and implementation, but lies far deeper within the architecture, and, as I would add, in the very nature of language itself. Before you think that I've fallen off my rocker, let me introduce, briefly, Ferdinand de Saussure, and, of particular note, how he is relevant to semiotics. After all, isn't the juncture of language and computer science the nature of the symbol?

In an extremely abbreviated synopsis/reduction of Saussure, let's just say that he viewed language as bifurcated into meaning and symbol. Meaning is the mutually understood concept represented by a word or symbol. In other words, in order to understand a stanza, or even a mote, of a dialog, a common protocol must be implemented. Both parties must be able to recognize the objects passed, and be able to parse them in more-or-less similar fashion.
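A deliberately trivial sketch of my own (an analogy, nothing more) of what "agreed-upon meaning" amounts to: the symbols can be exchanged perfectly and the dialog still fails if the two lexicons diverge.

```python
# A minimal sketch (mine, not Saussure's): a symbol only carries meaning
# if both parties hold more-or-less the same mapping from symbol to concept.

sender_lexicon = {"GRN": "proceed", "RED": "stop"}
receiver_lexicon = {"GRN": "proceed", "RED": "stop"}      # shared protocol
mismatched_lexicon = {"GRN": "stop", "RED": "proceed"}    # same symbols, different meanings

message = "GRN"

print(receiver_lexicon[message])    # "proceed" - the dialog works
print(mismatched_lexicon[message])  # "stop" - the parse succeeds, the meaning is lost
```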

The authors of the essay in question see communications and language as split. Likewise, one could assert that language and communication among people are two separate subjects, but such an argument relies especially upon post-Chomsky views of language. However, Saussure's initial bifurcation of language is useful in propping up an analogy between natural language and distributed computing. Within the limits of natural language, then, when the authors write, "Just making the communications paradigm the same as the language paradigm is insufficient to make programming distributed programs easier, because communicating between the parts of a distributed application is not the difficult part of that application" (5), we are looking at something more intrinsic than transport - i.e., the communicative dialog itself - as the trouble spot.

One thing of interest when comparing computer science to natural language is the inversion of order: in the OSI seven-layer model, language would reside at the top, while in human communication, language sits near the bottom - at least when linguists such as Chomsky and Bickerton are taken into account. The idea of a proto-language underlying our own communications, if viewed as a part of language itself (a la Bickerton) rather than simply part of the underlying infrastructure, would seem to either a) produce somewhat of an inversion of the OSI model, or b) mandate further divisions within the layers. Of course, there is a quite valid apples-and-oranges criticism that could be levied here.

Returning to the essay, a fundamental premise of the authors' point is that local and remote resources differ in regards to memory address space and latency. Thus, they write, "Ignoring the difference between the performance of local and remote invocations can lead to designs whose implementations are virtually assured of having performance problems because the design requires a large amount of communication between components that are in different address spaces and on different machines" (5). Yet, it seems that a simpler solution is possible through our linguistic model: to perceive the local and remote as the same through a language-based equivalency. Therefore, when they suggest, "Whether or not it will ever become possible to mask the efficiency difference between a local object invocation and a distributed object invocation is not answerable a priori" (6), it would seem that the very question is not particularly useful.
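To put a rough number on the authors' warning, here is a small sketch of my own, with a sleep standing in for network round-trips (no real remote service is assumed): the interface is identical in both cases, but a design that chats across the boundary a thousand times pays dearly for it.

```python
# A sketch of the authors' point, not their code: the same interface,
# wildly different cost per call once a (simulated) network sits between
# the caller and the callee.

import time

class LocalStore:
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value

class RemoteStoreStandIn(LocalStore):
    def put(self, key, value):
        time.sleep(0.005)            # pretend round-trip latency (~5 ms)
        super().put(key, value)

def load(store, n=1000):
    start = time.time()
    for i in range(n):
        store.put(i, i * i)          # one invocation per item
    return time.time() - start

print("local :", round(load(LocalStore()), 3), "s")
print("remote:", round(load(RemoteStoreStandIn()), 3), "s")   # roughly 5 s for 1000 calls
```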

Oddly, the authors seem to realize the wrong-headedness of a model based upon unified treatment of local and remote resources. Regarding local versus remote calls, they write, "it would be unwise to construct a programming paradigm that treated the two calls as essentially similar" (6). There are really two sides to my criticism here: 1) the model of local versus remote as related to a model of thought versus communicative language, and 2) messages across entities as analogous to human communication itself. The former reinforces the Saussurian analogy in that language, existing as an exchange of symbols, has certain constraints that thought lacks: meaning between entities must be agreed upon.

Also along the lines of the first side of my criticism is the authors' statement regarding memory access. They write, "A more fundamental (but still obvious) difference between local and remote computing concerns the access to memory in the two cases - specifically in the use of pointers" (6). Pointers would, of course, exist beneath a linguistic model rooted in Saussurian theory, and have more to do with access to information. In fact, pointers may be more relevant, paradoxically, in a higher-level framework such as Blending Theory (aka Conceptual Integration).

One of the main issues with pointers, though, is that, like meaning, the internal representation cannot be shared, only agreed-upon translations of the symbols. In similar fashion, the authors write, "In requiring that programmers learn such a language [one which does not use address-space-relative pointers], moreover, one gives up the complete transparency between local and distributed computing" (6), again a rather neo-Saussurian assertion.

Further, "Even if one were to provide a language that did not allow obtaining address-space-relative pointers to objects (or returned an object reference whenever such a pointer was requested), one would need to provide an equivalent way of making cross-address space reference to entities other than objects" (6-7). Here we can perhaps map objects to symbols and address-space-relative pointers to meaning. Therefore, the issue begs Saussure’s entire model! That cross-address space reference would be agreed-upon meaning, or some intermediate framework for parsing the language structures.

The authors still grapple with the bifurcation, however, adding, "The danger lies in promoting the myth that ‘remote access and local access are exactly the same’ and not enforcing the myth" (7). However, I would argue that this situation would violate Saussure's definition of language, and that, since meaning is not agreed upon, the two systems would be engaged in a dialog of gibberish. That is to say, I believe the authors are noting the symptom, not the problem.

Turning to the possibility of an intermediate framework for parsing, the authors note that "[t]he alternative [to making local and remote access the same] is to explain the difference between local and remote access....by using an interface definition language" (7). This approach sidesteps the issue, though. The authors continue, "By not masking the difference, the programmer is able to learn when to use one method of access and when to use the other" - something that essentially boils down to speaking two tongues when a pidgin or creole would satisfy all involved parties.

However, a creole seems to be outside the realm of possibility for the authors. From their standpoint, "[w]hen we turn to problems introduced to distributed computing by partial failure and concurrency, however, it is not clear that such a unification [between local and remote memory access on one hand, and latency issues on the other] is even conceptually possible" (7). As a possible example to refute this supposition, I would propose Plan 9 from Bell Labs. Here is an operating system that DOES treat local and remote resources identically, since it was built, from the ground up, to work in a distributed environment. Granted, this paper was published in 1994, the same year (as I recall) that Plan 9 was opened to the public, so the authors may not have been privy to its possibilities.
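By way of analogy only (this is not Plan 9 code, just an illustration of the idea): once a remote resource has been bound into the file namespace, which is what Plan 9's import and bind do, the consuming code has no local/remote distinction left to make.

```python
# An analogy, not Plan 9 code: the reading code is identical whether the
# path is backed by a local disk or by an imported, remote file tree.

import os

def word_count(path):
    with open(path) as f:
        return len(f.read().split())

# "/n/remote/notes.txt" is a hypothetical mount point for an imported file
# tree; locally we create a stand-in file so the sketch runs anywhere.
paths = ["/tmp/notes.txt", "/n/remote/notes.txt"]
with open("/tmp/notes.txt", "w") as f:
    f.write("the quick brown fox")

for p in paths:
    if os.path.exists(p):              # the remote path won't exist here
        print(p, word_count(p))        # same function either way
```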

Returning to the authors' assessment of the situation, it is clear that they do not consider the type of distributed model used by Plan 9. "Not only is the failure of the distributed components independent, but there is no common agent that is able to determine what component has failed and inform the other components of that failure, no global status that can be examined that allows determination of exactly what error has occurred" (7). Perhaps the Bell Labs approach falls decidedly outside a linguistic model such as Saussure's. Yet, perhaps it is merely a question of where one draws the boundaries between self and group. The above statement by the authors does seem to allow the commonality between software and language to be upheld. The difference is that software, in this model, wants/needs to know what broke down in communications, whereas in spoken discourse such a requirement would be seen as obnoxious. If I cannot listen further to the conversation, or do not understand a stanza of the dialog, then I need go no further than propriety demands - and the rules of propriety can be less rigid and formalized. Yet the difference is not on the "client" side, but on the part of the "speaker." Again we see the boundaries between self and group/other as a major criterion for the model.

Concluding the essay, the authors note that "[o]ne of two paths must be chosen if one is going to have a unified model.... The first path is to treat all objects as if they were local and design all interfaces as if the objects calling them, and being called by them, were local.... The other path is to design all interfaces as if they were remote" (8). Further, "Accepting the fundamental difference between local and remote objects does not mean that either sort of object will require its interface to be defined differently" (11). This seems to be a bit ambiguous, perhaps a function of a seeming contradiction. Yet, applying the linguistic model again, we can perhaps see the local/remote paradox as similar to our own unconscious shift from thought to language.

Overall, the essay suffers from a bit of redundancy. Further, there are paradoxes that are not drawn out and dealt with adequately. Yet, the basic problems are well identified, and the argument that memory, latency, and concurrency are the major cruxes of the issue seems convincing. For an essay a decade old, it is surprising just how relevant the issues remain today. For my part, I think that the comparison to Saussurian linguistics provides an interesting and useful way to approach the issues these authors identify.



Sunday, November 14, 2004

Forever Came Today Review

Since I am still writing a response to an essay I read today, I am going to cheat on my blog entry today. I really don't feel too bad since I just put up a new review on Amazon this morning.

In short, Graham Lewis' new book of poems is recommended. He introduces the reader, vicariously, to various "freaks" existing on the fringe of society. Yet, as a reader, one becomes self-conscious of the bonds that are typically forged with the characters. In other words, it is as if our own dirty laundry is being aired.

Stay tuned......

Saturday, November 13, 2004

In a previous blog entry, I suggested a dichotomy between the approaches of Kay and Booch. Yet, perhaps this is unfair, since both are working towards simplicity via high-level approaches. They indeed converge to some degree in the area of metaprogramming. However, it seems to me that Kay (again, more in the metonymic sense) represents the idea of a single, simple solution, whereas Booch represents an approach where high-level processes trickle down to the lower layers. Where I am most unfair is in vilifying Booch for the "crimes" of a new software development model. Maybe I'm a paradoxical techno-luddite.

Speaking of models, in the October 2004 edition of Dr. Dobb's Journal, Gregory Wilson discusses development models in his Programmer's Bookshelf column. He points to a tool shift over the past 20+ years, implies a paradigm shift, and glosses over the increased complexity - somewhat hidden behind the ease with which tools and applications can be plugged into each other. His own background, he says, is the C->Emacs->Make->UNIX command line->CVS->character streams model, whereas the "new" one is Java->Eclipse->Ant->JUnit->Subversion->reflection->XML. Perhaps influenced by modern concepts - going back to what I previously attributed to the Booch-ites - he sees software development paradigms as analogous to the Standard Model in physics: everything from quarks and leptons to cosmology falls under this roof (personally, I think that the Theory of Everything may have been a better analogy).

Inverting Wilson's analogy, one could claim that, like software development, physics has become rather too complex as of late. It reminds me of the shift away from Ptolemy, whose geocentric model could account for planetary motion but was overly complex and unwieldy. Copernicus dropped in with a simple model, and suddenly the layman could understand the system.

Back to Wilson's article, however: he reviews a book, Java Open Source Programming (JOSP), within the context of this new Standard Model. In the book, the authors demonstrate how to build "yet another online pet store," but also demonstrate how to do so with available open source tools. Wilson notes the chain of tools brought together (apparently seamlessly) before "paus[ing] to describe how they communicate via" another set of tools. "And we're not even at page 200 yet..." Hmmm... I take this as a bit of a hint that something is askew. Further, he adds, "The second half of the book goes back over the application, replacing the simple throwaway prototypes of the first half with versions that could carry their weight in the real world." I am playing Devil's Advocate a bit here, but this does seem to illustrate the point that we need simplification. Sure, there is a benefit in separating the various components of the process: one can choose the application that one favors (Maven vs. Make, for example). So is it just a preference if one wants loosely coupled rather than tightly coupled applications and processes?

On a side note, I think that Wilson is taken in by the "pragmatic approach" to technology. He sees integration as an advance, rather than a prerequisite. Thus, when reviewing another book, Effective Software Test Automation (ESTA), he gets excited about the discovery that one can use Excel as a user interface. He enjoys the inclusion of real-world applications within the examples in JOSP, is a bit critical of the low-level details in Coder to Developer, and loves the "authors' explanation of how to build it" in ESTA. There is a trend toward "what do I need to know to get my job done" that is probably reflective of the typical DDJ reader. Developers don't have time to ponder the philosophical implications of the architecture's ontology, but rather must spend their time implementing durable designs within the framework they are handed. Yet, it seems that we are in need of another Copernicus.

Friday, November 12, 2004

I've been dredging up a lot of the seminal works lately - going back to Vannevar Bush's 1945 Atlantic Monthly article, "As We May Think," through a lot of the Xerox PARC papers, Licklider's papers, etc. It just seems to me that a lot of the promises have been side-railed. That is, computing was supposed to become easier, more personal, and more integrated. While that is somewhat true at the cursory, application level, it has become more fractured and difficult at the development level. This assertion is obviously arguable. However, it seems that there is a chasm between a) rudimentary programming (using word processors, spreadsheets, web browsers) and b) what is now considered "programming" (hand-coding HTML; scripting; using graphical IDEs; hand-coding and building make files; full-blown project development using Apache software (Maven, Cocoon, Avalon, etc.); integrating work into a team project using UML; etc. - these are increasingly complex, but are closer together in terms of "learning curve" than the move from a to b).

Earlier systems, such as the early Xerox machines (Alto, Star, etc.) and Lisp machines (LMI, Symbolics, TI Explorer), enabled one to move from a to b (somewhat) seamlessly and intuitively. The current direction has been to segment the development process in order to "tune" each component, then re-evaluate the process in general through additional tools - high-level programming and process description. Thus, we start seeing things like BPEL, Agile, ESB, MDA, and OCL/UML. While I respect players like Grady Booch, I think that the focus needs to move back towards integration. Essentially (to reduce the issue and provide "avatars" to represent the sides), we have Grady Booch pushing components and Alan Kay pushing integration. I think that integration is the right way, and Smalltalk is/was an attempt at it.

Thursday, November 11, 2004

I recently ran across a weblog discussing the demise of the hobbyist/enthusiast computer book. The average tome seems geared toward techies - or wannabes, in many cases. The blogger's point resonated with much of what I've been dismayed with for years. Put simply, it seems that programming - working with computers - has become harder, not easier over time, and that this is the grand failure of the technology. While I've felt this way for years, my recent readings of and about Alan Kay have reinforced my dismay.

It seems that there is a disconnect between "using" a computer and "programming" a computer; not in function, but in difficulty. Ordinary people do very complicated things every day (i.e., perform a search on Google), yet moving beyond the "basics" can be extremely challenging. Moreover, productivity tools with flashy GUIs look like they would make things easier for the would-be developer, but instead may add to the levels of difficulty. A recent article in Dr. Dobb's Journal discusses a plug-in for Eclipse that simplifies using the application, since the complex sets of toolbars and buttons presented to the user can be scarier than a blank UNIX command prompt. Personally, I don't know that a command prompt is daunting - one just needs to know what to say.

The unfamiliar is daunting. How we interact with a computer today is limited by how we did so yesterday. In essence, we build a box. This reminds me of reading Dryden, Milton, Chaucer, etc. One must study the first 20 or 30 lines before one gets the "hang of" the language. Read more the next day and one can get "into" it right from the start. Of course, stay away from it and there will again be a brief session of "getting to know one another again." Yet, the second time around, or when one moves to other poetry from a similar time, in a similar genre, there is an advantage: experience. With a background knowledge, one can quickly begin building internal and external references, criticism, etc. Once one has read a few of Blake's poems, "The Marriage of Heaven and Hell", "Visions of the Daughters of Albion", and "The Book of Urizen", then one can more easily attempt a work such as "The Four Zoas".

These texts are complex and rich. Not necessarily kid's stuff. Developing an application using Apache Software Foundation products such as Maven, Cocoon, and/or Avalon is likewise complex, rich, and "not kid's stuff." However, a quick look at Squeak or VisualWorks, derivatives of Smalltalk - a language, as the name implies, written to be a David in a world of Goliaths; a language easy enough, as Alan Kay hoped, for children to develop relatively complex applications in - demonstrates that complex and rich need not be opposed to ease of use.

Part of the solution lies with the developers and the evangelists. Those books showing eager kids writing software, not just playing games or using the web, could make a comeback.

Wednesday, November 10, 2004

I have been thinking about languages a lot lately. Not English, French, and Hindi, but rather Lisp, Smalltalk, and Java. The former two are considered by some to be "dead languages"; the latter, the "holy grail." Yet, new development takes place both in and on these "dead" languages. Interesting.

Recently, I ran across a coding contest where the vast majority of entries were in Lisp. A coworker was startled by this, wondering why all these people were working in this ancient tongue. This is really the first question that strikes people, given that the language is dead - right?

If one assumes that computer languages are a type of language (as the term is commonly viewed), then one could ponder the mortality of computer code within the more familiar world of linguistics. Linear B is a dead language. Algol is a dead language.

What makes a language dead? Is it when the language ceases to exist? This definition really doesn't work, since written languages (which would include both Linear B and Algol) still exist; they just aren't used. Well, they aren't used except by academics who are trying to understand them within a historical context: what are their influences, what were their ancestors, how extensive was their use and influence, etc. If we narrow the definition, then, to state that a language is dead when it is no longer used for the purpose for which it traditionally existed, this seems to work... though certain stipulations should be drawn out further. The real problem with this updated definition is that living languages change, and dead languages don't. Ahh... we seem to be getting closer.

Are languages dead when they cease to change? We already noted that Lisp and Smalltalk are actively developed, just not widely used. On the other hand, many widely used languages are fairly static. C really hasn't changed much in the past 10 years - unless you count the derivatives: C++, Objective-C, Java, C#, etc.

Native speakers? I guess that the parallel here is primary or preferred language. In many circles Perl is preferred - a sort of pidgin moving toward creole with the advent of Perl 6 and Parrot. At one point, most of my work was in shell scripts. At another point, C. Another point, Java. Another, Perl. Given my druthers, I'd work in Lisp (on a Lisp machine - LMI, Symbolics, etc.). I cut my Lisp teeth in 1985 with Interlisp65 on an Atari 800XL. It would be years before I returned, but when I did... the language had changed. Now we have, among others, Common Lisp, and I can use objects; integrate with the web; and, using OpenMCL on my Mac, call advanced components written in Objective-C (the Cocoa frameworks). Hey, this is cutting-edge stuff for a dead language. :)

And Smalltalk. If one thinks that it is dead, check out Croquet - www.opencroquet.org - cool stuff. I use Squeak for my Smalltalk environment. Croquet is somewhat experimental and runs on top of Squeak. I am also trying out VisualWorks, an "industrial-strength" Smalltalk environment that has a new-ish release.

Yet the question of dead languages still remains. The link between computer languages and written/spoken languages is tenuous (even after you read my poetry in Java). Yet there is something compelling about these older languages, and the impetus for the compulsion seems to stem from their original intent, carried forward by their communities. But more on that later...