Where is the Cost of Complexity in Program Design?
Recent threads on the Higher Order Perl mailing list discussed a particular task for which Perl's functional programming techniques yield a simpler solution than the obvious object-oriented approach.
Syntactically, Perl 5 isn't always great at either: there's definitely a bit of line noise in functional Perl 5 code, and the boilerplate of setting up objects and classes gets tiresome after a while. (Fortunately, Perl 6 corrects both.)
When the idea of a functional versus an OO approach came up, someone offered a Python solution to the problem. Because Python's functional programming support is, despite the protestations of overzealous snake-handlers everywhere, somewhat less than complete (and no, don't send me mail on this -- read-only, single-line closures don't count), the example took the OO approach.
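To illustrate the "read-only closures" complaint (this counter is my own toy example, not the task from the mailing list): a nested Python function can read an enclosing function's variables, but assigning to one rebinds it as a local, so mutable state has to be smuggled in through a container instead.

```python
def make_counter_broken():
    count = 0
    def increment():
        # Assignment makes 'count' local to increment(), so the
        # closed-over value is unreachable: calling this raises
        # UnboundLocalError.
        count += 1
        return count
    return increment

def make_counter():
    count = [0]  # the usual workaround: close over a mutable container
    def increment():
        count[0] += 1  # mutating, not rebinding, so this is allowed
        return count[0]
    return increment

counter = make_counter()
print(counter())  # 1
print(counter())  # 2
```

(Python 3's `nonlocal` keyword eventually addressed this, but the container idiom was the standard workaround at the time of this discussion.)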
Then the question of maintainability came up.
To understand the closure-based approach, a programmer needs to understand lexical variables, closures, and first-class functions. He also has to be able to read the code. To understand the OO-based approach, a programmer needs to understand classes, instance variables, and objects. He also has to be able to read the code.
There seems to be rough agreement on the list that the functional approach takes significantly less code. In that case, using the functional approach seems to me a win for maintainability (and it's nice to have a language that supports it).
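The mailing-list task itself isn't quoted here, so here's a deliberately tiny stand-in that shows the size difference in Python terms: an "adder" written once as a class with state, an initializer, and a method, and once as a single closure.

```python
# OO approach: a class, an instance variable, an initializer, a method.
class Adder:
    def __init__(self, n):
        self.n = n
    def __call__(self, x):
        return x + self.n

# Functional approach: one closure captures n and does the same job.
def make_adder(n):
    return lambda x: x + n

print(Adder(3)(4))       # 7
print(make_adder(3)(4))  # 7
```

Both require the reader to understand something -- classes and instances in one case, lexical capture in the other -- but the closure version has noticeably fewer moving parts.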
One often-stated objection to using so-called advanced language features (or languages that provide more than one way to solve a problem) is that novices may have difficulty reading code written by experienced programmers. My theory is different:
I believe there's a cost for each significant unit in code, whether syntactic or semantic. If the right approach for a problem means I pay for semantic complexity in order to avoid a higher cost elsewhere, fine.
That means that my successor and co-workers either have to be good programmers themselves (as I hope I'm a good programmer) or at least trainable. Fortunately, that's pretty much my minimum requirement for writing good software anyway.
I wonder if framing the issue in terms of the cost of complexity helps make certain decisions clearer.
Here's where you try to convince me that Python's lexicals really aren't broken and that no one really needs to write to closed over variables while ignoring the actual point of my post.
what matters is the number of program elements
I tend to agree with your point that the fewer elements (as defined in http://www.paulgraham.com/power.html as a metric for comparing languages) a program has, the more maintainable it should be. Any supposed readability tradeoff is just a side effect of using "bigger", more effective elements -- using one real closure, say, in place of one or more objects in a design pattern, with state variables and initializations and methods and interfaces and so forth. Whenever you're doing something moderately advanced, just comment it for novices. Of course, that really goes for any code that does something "tricky".
Re: Commented Complexity
Heck, whenever I do something tricky, I comment it for myself - so that when I come back to that piece of code sometime later, I can simply read what it does (and more importantly: why) instead of having to reverse-engineer it.