The Other Side of the Hundred-Year Language

by Schuyler Erle

An article entitled "The Hundred-Year Language" recently appeared on Slashdot, in which the author, Paul Graham, speculates about what form programming languages might take in another hundred years, and what lessons we can draw from contemporary languages in theorizing about the shape of computer programming in the next century.



Before continuing, I feel compelled to offer the disclaimer that I am, in fact, a professional Perl hacker. I say this because the first thing I took away from Graham's article was that I could tell he was a LISP hacker well before I reached the end -- and, unsurprisingly, he turns out to be one. This quote was the real giveaway, of course:



I have a hunch that the main branches of the evolutionary tree pass
through the languages that have the smallest, cleanest cores. The more of
a language you can write in itself, the better.
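
What Graham seems to have in mind is the Lisp macro, which lets new control structures be built from the language's own material. As a rough sketch of the idea in Perl terms (my example, not Graham's), even Perl 5 can gesture in this direction with subroutine prototypes, which let a user-defined sub take a bare block the way the built-ins grep and map do:

    use strict;
    use warnings;

    # The (&$) prototype tells the parser that the first argument is
    # a bare block and the second a scalar, so the call below needs
    # no "sub" keyword and no comma after the block.
    sub apply_unless (&$) {
        my ($block, $condition) = @_;
        $block->() unless $condition;
    }

    # Reads almost like a native control structure:
    apply_unless { print "the core holds\n" } 0;

It's a pale shadow of a real macro system, of course, since the parser, not the programmer, still owns the grammar; that gap is arguably Graham's point.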



The quote sounds a lot like LISP, doesn't it? Now, Graham's notion that "cruft breeds cruft," as he puts it, is a valid critique of Perl 5, for example. Granted, the Byzantine nature of Perl 5's core is well known; it is arguably the source of some of the problems we see in larger Perl-based applications, and it is also the big thing that most Python lovers seem to loathe about Perl. But with that simple assertion, I think Graham glosses over the essential lesson of Perl, which is that expression via computer programming languages is less a mathematical skill and more a linguistic one. And, let's face it, human languages are crufty, and our brains are wired to deal with that complexity and to leverage the richness of expression that comes along with it.
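
To pick a trivial illustration of my own, and not one Graham discusses: Perl famously lets the same thought be phrased in several registers, much the way natural language does:

    use strict;
    use warnings;

    my @files = ('notes.txt', 'todo.txt', 'ideas.txt');

    # The explicit, textbook phrasing:
    for my $file (@files) {
        print "$file\n";
    }

    # The statement modifier, which reads almost like English:
    print "$_\n" for @files;

    # The functional idiom:
    print map { "$_\n" } @files;

None of these is the one right way to say it; the redundancy is the point, just as it is in speech.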



I'd say that the brilliance of Perl, and the reason people write more big applications in it than in LISP, has a lot to do with how each language maps to the way people think about problems. I think Perl, for all its cruftiness, in some ways takes better advantage of our innate linguistic and cognitive faculties than a language with a "purer" syntax like LISP. Indeed, if we were to take Graham's thesis to its logical conclusion, we would expect all programs in the 22nd century to be written for virtual Turing machines, whose core is as small as a core can get, and that seems really unlikely.



As such, any "language of the future" really needs to take both of these competing needs -- clarity of abstraction versus richness of expression -- into account in order to be truly successful. Some people have expressed concern about the apparently burgeoning complexity of Perl 6: that it will, like the C++ Standard Template Library, become host to such a wide array of elegant solutions that no one in their right mind will be able to make heads or tails of it. Any language that seeks to make the trade-off between clarity and expressiveness will have to do so by trying to optimize the distribution of load between the language syntax and its set of standard functions, a notion that chromatic refers to as the Waterbed Theory: push the complexity down in one place, and it pops back up somewhere else.



If we were to try to come up with a short list of modern contenders for the Hundred-Year Language, I'd probably start with Python, Ruby, and maybe Perl 6, when it comes out. Each of these languages makes a noble attempt, in its own way, to bridge the divide between clarity and expressiveness by making certain trade-offs. But though Graham is absolutely right that the core of such a language needs to be as simple as possible, it seems we need to take a closer look at the other half of the picture.