 Published on The O'Reilly Network (http://www.oreillynet.com/)
 http://www.oreillynet.com/pub/wlg/9280

Thoughts on Complexity

by Kurt Cagle
Mar. 7, 2006
URL: http://www.understandingxml.com

Over the years, I've noticed that in programming, as in other systems, there seems to be a fairly invariant rule out there:
You can never eliminate complexity from a system, you can only move it from place to place.

or, put another way,
In any program, someone will have to deal with the mess when it hits the fan.

This is part of the reason why I have become convinced that the profession of programmer will always be needed. You can create boxes that abstract away the complexities of dealing with repeated processes ... this is the whole idea of component development ... but by doing so, you are also intrinsically subscribing to the limitations those components impose on how you can model your own environment. In some cases the component's author was capable of remarkable foresight, making the components more flexible, but that flexibility in turn also increases the complexity of the applications using them.


Additionally, components are typically designed to solve a given problem which is usually almost, but not quite, the problem that you need to solve. It is this impedance mismatch that forces programmers to apply some of their own brain power, and the resulting solutions introduce complexity into the system again. A good software designer knows how to ensure that the complexity stays on the programmer's side of the divide, not the user's, but someone still has to pay the price.


I've been thinking a lot lately about abstraction. Abstraction is a complexity management tool. It replaces a block of code with some form of interface that hides the intricate details of that code behind something more manageable, though typically at a dual cost: it introduces an overhead penalty that is paid every time the abstraction is invoked (meaning you have to make that performance up elsewhere), and it reduces the level of control that people using the abstraction have over the underlying actions performed by the code. Admittedly, there are times when these costs are well worth it. A prime example that I've had to deal with recently has been the use of the XPathEvaluator interface within the Mozilla Javascript space. Its normal operation is an enormously complex undertaking, comparatively speaking:

// var elt = some Element
// var xpath = some XPath expression string
var xpe = new XPathEvaluator();
var nodeArray = [];
// 4 == XPathResult.UNORDERED_NODE_ITERATOR_TYPE
var result = xpe.evaluate(xpath, elt, xpe.createNSResolver(elt), 4, null);
var res;
while (res = result.iterateNext()){
    nodeArray.push(res);
}

This takes an element and an XPath expression, performs the evaluation, then pushes the results into an array (nodeArray). If I have to do such an expression evaluation twice, I've needed to add sixteen lines of code; thrice, twenty-four lines; and so forth. It seems a good candidate for abstraction, first of all to reduce its inherent complexity:
Element.prototype.getNodes = function(xpath){
    var xpe = new XPathEvaluator();
    var nodeArray = [];
    // 4 == XPathResult.UNORDERED_NODE_ITERATOR_TYPE
    var result = xpe.evaluate(xpath, this, xpe.createNSResolver(this), 4, null);
    var res;
    while (res = result.iterateNext()){
        nodeArray.push(res);
    }
    return nodeArray;
}

This does a number of things. First, by associating it with the prototype for an XML Element, I make the getNodes() method available automatically to any XML element:
var pNodes = document.documentElement.getNodes("//div");

This returns all <div> elements in an XHTML document. Additionally, I can also use it to extend other objects, such as the document node itself:
Document.prototype.getNodes = Element.prototype.getNodes;

(which neatly eliminates the need for type-checking within the expression itself - only those elements for which the prototype has been extended will have the method in the first place).
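Once the Document prototype has been extended this way, the same call can be made directly on the document object - a trivial usage sketch (the XPath expression here is just an illustration):
// With Document.prototype extended, the document itself exposes getNodes().
var headings = document.getNodes("//h2");   // every <h2> element in the page
alert(headings.length + " headings found");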


The question here is what you've lost by this. In this particular case, you've lost the ability to specify the types of objects returned by the evaluator, which means that while you can get a set of nodes back, you can't get a set of text string values or numbers from the getNodes() method. You also lose the XPathResult object, which has a few additional properties that are useful for creating data snapshots. There's a tradeoff here, in that you lose a certain level of control with the abstraction. You also add to the overhead of DOM usage, since every document and element now carries an additional pointer (and supporting infrastructure via the prototype mechanism) to this particular function.
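If you did need strings or numbers back, one option - purely a sketch of my own, not part of the helper above - would be a companion method that exposes the XPathResult type to the caller:
// A hypothetical companion to getNodes(); the name evaluateXPath and its
// shape are assumptions for illustration, not an existing library method.
Element.prototype.evaluateXPath = function(xpath, resultType){
    var xpe = new XPathEvaluator();
    var result = xpe.evaluate(xpath, this, xpe.createNSResolver(this),
        resultType, null);
    switch (resultType){
        case XPathResult.NUMBER_TYPE:  return result.numberValue;
        case XPathResult.STRING_TYPE:  return result.stringValue;
        case XPathResult.BOOLEAN_TYPE: return result.booleanValue;
        default: return result;   // otherwise hand back the raw XPathResult
    }
}

// For example, counting paragraphs as a number rather than a node array:
var pCount = document.documentElement.evaluateXPath("count(//p)",
    XPathResult.NUMBER_TYPE);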


What's more, you now have to document this addition, make it available as part of a general library, and maintain any changes to it. This is where things get a little more complicated. You've traded a programmer inconvenience for an administrative inconvenience. The complexity has moved, and what's more, it's moved out of the code and into the realm of the coder. Again, in this case the trade-off is generally worth it - if you have the process already in place, the addition of this code into a project code base will take a comparatively small amount of time, and once done, maintenance is not much of an issue.


However, even here you run into other problems - perhaps someone has solved this problem in a different way. Perhaps someone is using the older selectNodes() method from Internet Explorer, which returns a node list rather than an array, so if you want to support common code you'll need to come to an agreement with them about what the final interfaces should be. The point here is that such abstractions usually add to the complexity of the process in other ways, and the role of both programmer and program manager should be to determine at what stage the abstractions are worth taking.
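To make that negotiation concrete, here is one way such an agreement might be expressed - a rough cross-browser sketch, assuming an MSXML-style document where selectNodes() is actually available; the function name and the fallback details are mine, not from any existing library:
// Prefer the W3C XPathEvaluator where it exists, otherwise fall back to
// Internet Explorer's selectNodes(); either way, return a plain array.
function getNodesFrom(contextNode, xpath){
    var nodeArray = [];
    if (typeof XPathEvaluator != "undefined"){
        var xpe = new XPathEvaluator();
        // 4 == XPathResult.UNORDERED_NODE_ITERATOR_TYPE
        var result = xpe.evaluate(xpath, contextNode,
            xpe.createNSResolver(contextNode), 4, null);
        var res;
        while (res = result.iterateNext()){
            nodeArray.push(res);
        }
    } else if (contextNode.selectNodes){
        // IE/MSXML path: copy the returned node list into a true array
        var list = contextNode.selectNodes(xpath);
        for (var i = 0; i < list.length; i++){
            nodeArray.push(list[i]);
        }
    }
    return nodeArray;
}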


One reason I've been thinking about this comes from recent examinations of software methodologies. Many older software methodologies tend to look upon software development as being largely a top-down process of establishing formal objects and then building the APIs that support those objects. Yet if you look at the model above, there is a strong push-back pressure that emerges from this abstraction process, something that typically becomes evident only when the level of complexity forces the need to develop that abstraction in the first place. It can of course be argued that good design would reduce or eliminate this abstraction, but to me this assumes that programmers are in fact perfectly capable of anticipating a wide variety of unintended consequences, not only from their own code but from the code that other people create, which ripples through and interacts with theirs.


To me, this is frankly an absurd assumption, especially in those situations (which are increasingly the norm, not the exception) where people do not have even marginal control over the code not immediately produced by them personally. Indeed, far from working within clearly defined silos, I believe that a big part of the reason for the sheer level of complexity today is precisely that programming is more like an odd form of field theory, where the software extends well beyond its immediate state of execution and may in fact have repercussions and consequences far removed from its initial domain. One could almost think of millions of small sub-etheric particles called Codeons, whipping back and forth through the quantum sea of ideas before impacting, wave-like, upon the developer's world, affecting everything that he or she does in the realm of writing code.


Understanding this gremlin of abstraction - the fact that such abstractions tend to be pushed up the stack as much as they tend to be imposed from on high - will, I believe, be essential if we are to move forward into ever larger and more networked domains. The current fashion has largely been to deprecate these bottom-up abstractions, to see them as "bad practice" because they distract from the cookie-cutter view of programming that seems so very evident in the textbooks of the field, and yet I suspect that what this really is is a manifestation of the Complexity Rule at work. I as a programmer working with your APIs will try to simplify my task as much as possible, while you as the API developer will attempt to do the same on your end. Learning how to balance the two, learning to provide a give and take that moves the programming contract away from one unilaterally imposed by the API provider toward a more equitable relationship, will, I suspect, prove to be the catalyst for the next generation of programming.


Kurt Cagle is an author, software developer and incorrigible blogger. He lives in Victoria, British Columbia, Canada, where he writes white papers, books, articles and fortune cookie sayings (the last being his most lucrative sideline).

oreillynet.com Copyright © 2006 O'Reilly Media, Inc.