April 22, 2009

Review: Let Over Lambda

So after a lot of not reading Doug Hoyte's Let Over Lambda, I finally did manage to read it all the way through.

My overall impression first: Hoyte styles the book as a successor to PG's On Lisp; I think Let Over Lambda falls short of that goal, although it contains enough interesting material to be worthwhile.

The best parts of the book are the chapter on read macros and the subsection on sorting networks. Great, practical examples and illustrations of good programming style.
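To give a flavor of the former: a read macro lets you hook into the Lisp reader itself. Here is a minimal sketch (my own, not taken from the book) that makes [a b c] read as (list a b c):

(set-macro-character #\] (get-macro-character #\)))

(set-macro-character #\[
  (lambda (stream char)
    (declare (ignore char))
    ;; read forms up to the matching ] and wrap them in LIST
    (cons 'list (read-delimited-list #\] stream t))))

After these two forms, typing [1 2 3] at the REPL reads as (list 1 2 3) and evaluates to (1 2 3).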

The worst parts of the book are the chapter on implementing a Forth interpreter in Lisp and the "caddar accessor generator" (it's ok for me to say this because the name of this blog is ironic).

The chapter on anaphoric macros has finally made me change my mind about those things. It's ok, use them when you need them. All the stuff about "sub-lexical scoping" (i.e., various interesting ways of using macros to capture free variables) didn't really make a deep impression on me - maybe I'm just too dull to see any good uses for it.
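For reference, the canonical anaphoric macro is aif from On Lisp, which deliberately captures the variable it to hold the result of the test form:

(defmacro aif (test then &optional else)
  `(let ((it ,test))
     (if it ,then ,else)))

;; example use (GETHASH result bound to IT, so the lookup isn't repeated;
;; PROCESS, KEY, and TABLE are stand-ins):
;; (aif (gethash key table) (process it) (error "key not found"))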

As pointed out in other reviews, the book could have used a lot more proofreading (especially of the code) and editing. Hoyte chose to self-publish the book, which I think was a mistake (and just today Tim Ferriss blogged about some other reasons why it's not a good idea).

To cap the review, don't read Let Over Lambda until you've read On Lisp. It's a fun book about some interesting Common Lisp programming techniques, but it could have been shorter.

April 15, 2009

Problem-solving is hard, let's go write XML configuration files

Previously I blogged about why the premise behind software frameworks turns out to be a logical fallacy. This blog post will continue my attack on the framework cult by examining the reasons why people continue to believe in the myth of increased productivity through frameworks.

There is one word that can summarize my argument: comfort.

Consider carpentry tools. How do you use them? What do you use them to accomplish? Now consider a toy construction set such as an Erector set. The construction set offers you a path you can follow to arrive at some cool artifact, even if you don't know exactly what you want to do. It can offer this security because it limits both what you can accomplish and how you can accomplish it.

According to Buddha, ignorance is the root of all evil. According to Christian tradition, people are inherently evil. It is then no surprise that most of the time most people do not know what they want to do. Finding out "what" is hard, requiring a lot of time and learning. In software development, this search is the focus of methodologies such as domain-driven design. It is a lot easier to write XML configuration files instead.

There is a misguided comfort that comes from knowing that you did a day's worth of honest work. It doesn't matter if what you are doing is leading you down the wrong path, because with a framework you are at least accomplishing something, even if that something will not bring business value. As a project manager who decides to use a particular framework, you are in effect acting as a proxy sales agent for the party promoting that framework - selling your customer a solution that may not be in line with their goals.

Many logical fallacies go into the typical software development decision-making process. Frameworks in particular are notoriously prone to the bandwagon effect (how many "enterprise" web-development frameworks for Java have come and gone, and with what ridiculous frequency?). Anyone held responsible for the consequences of a decision is prone to post-purchase rationalization (remember that before people try to sell a methodology to their organization, they must first have been sold on it themselves), so the bandwagon effect is frequently used as an argument for adopting a particular framework. In addition, nebulous terms such as "enterprise" (which, because of its strong association with frameworks, has almost universally come to mean "verbose junk" in the software development world) somehow end up in place of well thought-out arguments and empirical evidence.

Obviously, there is some circumstantial evidence that frameworks do enhance productivity - people become experts at particular frameworks and can develop applications efficiently. But this is not because they are using a particular framework, or because they are using a framework at all; it is because they have become experts at it.

April 9, 2009

Closure-oriented metaprogramming via dynamically-scoped functions

Today I came across this post by Avi Bryant on the ll1 mailing list (almost seven years old now, found via Patrick Collison's blog), explaining how Smalltalk's message-based dispatch permits a type of metaprogramming with closures, as an alternative to macros.

Of course if you've read Pascal Costanza's Dynamically Scoped Functions as the Essence of AOP (and if you haven't, click the link and do it now; it's one of my favorite CS papers), you will realize that there is no need for message-based dispatch or any kind of object-oriented programming to do that. All we need are dynamically-scoped functions.

Here is how I approached the problem:

(defpackage "BAR"
(:use "COMMON-LISP")
(:shadow #:=))

(in-package "BAR")

(defmacro dflet1 ((fname &rest def) &body body)
(let ((old-f-def (gensym)))
`(let ((,old-f-def (symbol-function ',fname)))
(unwind-protect (progn (setf (symbol-function ',fname) (lambda ,@def))
,@body)
(setf (symbol-function ',fname) ,old-f-def)))))

(defmacro dflet* ((&rest decls) &body body)
(if decls
`(dflet1 ,(car decls)
(dflet* ,(cdr decls)
,@body))
`(progn ,@body)))

(defun first-name (x) (gnarly-accessor1 x))
(defun address-city (x) (gnarly-accessor2 x))
(defun = (&rest args) (apply 'common-lisp:= args))
(defmacro & (a b) `(block-and (lambda () ,a) (lambda () ,b)))
(defun block-and (a b) (when (funcall a) (funcall b)))

(defun some-predicate (x)
(& (= (first-name x) "John") (= (address-city x) "Austin")))

(defun make-parse-tree-from-predicate (predicate-thunk)
(dflet* ((first-name (x) '#:|firstName|)
(address-city (x) '#:|addressCity|)
(= (a b) `(= ,a ,b))
(block-and (a b) `(& ,(funcall a) ,(funcall b))))
(funcall predicate-thunk nil)))


Then (make-parse-tree-from-predicate #'some-predicate) yields (& (= #:|firstName| "John") (= #:|addressCity| "Austin")), which we can manipulate and then pass to a SQL query printer.

Here I implemented dynamically-scoped functions using unwind-protect, which is not as powerful (nor, possibly, as efficient) as the implementation presented in Costanza's paper, but is simpler. (I used the same trick to implement dynamically-scoped variables in Parenscript.)
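For illustration, the same trick applied to a variable looks something like this (a sketch with a made-up name, not the actual Parenscript implementation):

;; VAR must name a global variable; its value is swapped for the dynamic
;; extent of BODY and restored afterwards, even on a non-local exit.
(defmacro dvlet1 ((var val) &body body)
  (let ((old-val (gensym)))
    `(let ((,old-val ,var))
       (unwind-protect (progn (setq ,var ,val)
                              ,@body)
         (setq ,var ,old-val)))))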

The ability of the same Lisp code to mean different things in different contexts is what Doug Hoyte calls duality of syntax in his excellent book Let Over Lambda (almost finished reading it; I promise to write a review soon). Lisp offers this property both at run-time (via late binding and closures) and at macro-expansion time (via homoiconicity and the macro-expansion process itself).

Another technique from Let Over Lambda illustrated in the above code is the recursive macro (dflet* expands into nested dflet1 forms). This one is a personal favorite of mine; I find that the iterative simplification recursive macros express makes for very clean and maintainable code.

This code also provides examples of the two problems that the closure-oriented metaprogramming approach encounters in Common Lisp:

The first is the fact that we had to shadow = in our package. Common Lisp forbids the redefinition of functions, macros, and special forms defined in the standard, so we have to go out of our way if we want to achieve that effect. Barry Margolin provided a rationale for this in a comp.lang.lisp post.

The second is the fact that Common Lisp has so many special forms and macros - and AND just happens to be one of them, which is why the code above defines & instead. Smalltalk avoids this problem by doing virtually everything via message passing and closures. In Common Lisp we don't have that straitjacket, but we also don't have the luxury of assuming that everything is an object or a closure.

Another Common Lisp feature that might break this example is function inlining (then again, I did just write about the benefits of late-binding...).
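One way to guard against that is a NOTINLINE declamation, which forbids the compiler from inlining these functions and so forces every call to go through the global function bindings that dflet* swaps out (assuming the definitions above):

(declaim (notinline first-name address-city = block-and))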

April 8, 2009

Masterminds of Programming

Today my copy of Masterminds of Programming arrived in the mail from O'Reilly; a small reward for giving a lightning talk at ILC.

The book is a series of interviews with programming language designers. Along with some expected atrocities like Stroustrup on C++ and Gosling on Java, and bizarre ones such as a 50-page interview with Jacobson, Rumbaugh and Booch on UML, there are interviews with Adin Falkoff on APL, Moore on Forth, Wall on Perl, and a few others. The functional camp is well-represented with SPJ, Hudak, Wadler, and Hughes interviewed about Haskell, and Milner giving an interview about ML.

It is telling that right in the preface the book starts off with an urban legend: "children can learn foreign languages much more easily than adults."

Some of the interviews are very revealing. The discussions present an entertaining window on the cavalier attitudes and biases of many programming language designers, which helps explain some of the dysfunction of the software world today. I don't think this is what the editors intended, but it makes for hilarious reading material.

Some of the interviews can be frustrating to read (every third question in Falkoff's APL interview seems to boil down to "lolz funny syntax"); thankfully this is balanced out by absolutely delightful ones such as Moore on Forth (IMO, the highlight of the book), and the Objective-C interview with Brad Cox and Tom Love. Overall the quality of the interviews varies widely, but not surprisingly mostly seems to correspond to the quality of the language being discussed.

Ultimately Masterminds of Programming is worthwhile not for its insights into programming language design (most of which unsurprisingly boil down to "oops I made a bunch of mistakes because I didn't start with a good model/think/know any better/know enough math"), but into programming and computing history in general.

To finish this review where it started off, here is another unintentionally amusing bit of insight from the preface:

Imagine that you are studying a foreign language and you don't know the name of an object. You can describe it with the words that you know, hoping someone will understand what you mean. Isn't this what we do every day with software?

It comes as no surprise that there is not a single entry for either "macro" or "metaprogramming" in the book's index (although Wadler does make a passing mention of Lisp macros in the Haskell interview).