October 13, 2007

October Calgary Lisp user's group meeting.

The next meeting of the Calgary Lisp user's group will take place on Wednesday, October 17th at 6:00pm at the University of Calgary in room Social Sciences 008 (map of the campus: http://www.ucalgary.ca/map/). Warren Wilkinson will talk about building web applications using continuations. In his own words: "It's my hope that by stepping you through what I've built, I can save you months of self-study. If I succeed, you'll find building web sites without continuations will seem hopelessly backwards, like starting a fire with two sticks."

October 8, 2007

Language features don't matter.

My personal philosophy on programming (what, you don't have one?) has lately veered towards the view that programming language features are completely irrelevant. The view doesn't stop there: I think features are irrelevant only when you have a language with enough meta-/reflection/self-description facilities to let you implement those features yourself. In my opinion, such a language has a quality I like to call "not getting in the way of what you want to do," which I think is the only nice thing to be said about programming, programming languages, and computers in general.
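
To make this concrete, here's the classic example: Common Lisp has no built-in while loop, and doesn't need one, since a three-line macro produces something indistinguishable from a native feature:

    (defmacro while (test &body body)
      "Loop over BODY as long as TEST remains true."
      `(loop (unless ,test (return))
             ,@body))

    ;; Reads exactly as if the language had shipped with it:
    (let ((n 0))
      (while (< n 3)
        (print n)
        (incf n)))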

That being said, programming language features sure are convenient. They are most convenient when they have developed as a result of real programmer experience. I think the pinnacle of such "features derived from sweat and tears" (programmers don't often bleed) is Common Lisp. The people who contributed to X3J13 (you can find a list in the HyperSpec: http://www.lisp.org/HyperSpec/Body/chap-0a.html) include some of the best programmers of their time, and collectively represent decades if not centuries of programming experience, drawing on systems like Maclisp, InterLisp, and Lisp Machine Lisp, and coming from institutions like the MIT AI Lab and Xerox PARC. So I was very glad to find this page by Abhishek Reddy this morning, which does a very good job of describing some of the many features that come with Common Lisp. I think it's a pretty good overview for beginners and those interested in Lisp.

September 30, 2007

Learning to Lisp

Answering Peter Seibel's appeal for a Google bombing, here are two links: one for a Lisp tutorial, the other for an illustrated Lisp tutorial. I am not providing these just for the sake of altruism, though. It's part of my insidious campaign of making Lisp programmers instead of trying to find them. Due to a particular set of circumstances that has developed recently, I'm putting together a team of web application developers that will do fun things involving Hunchentoot and Parenscript. If you're in Calgary and would like to get involved, get in touch. If things go according to plan, we will be looking for people outside of Calgary in a few months as well.

September 19, 2007

Coworking in the East Village.

One of the things I did while in New York was check out the cooperBricolage coworking space in the East Village. The organizers arranged a deal with Cafe Fuego to add WiFi and power outlets, and to turn down the music weekdays from 9-5. I spent most of the day there on Tuesday, September 11th, feverishly trying to get my talk about ParenScript done for that evening's LispNYC meeting. The food was pretty good, and the coffee decent. The space itself wasn't bad for working either, but I don't think I'd go there on a regular basis. Daniel Gackle and I worked out of cafes for about a month this summer, and long-term the noise and distractions really ground us down. Still, it's a nice place to drop in if you're visiting New York and looking for a place to get some work done. I met two other coworkers(?) while there, Jennifer Hall and Robert Sosinski, and both turned out to be doing web stuff.

September 17, 2007

Lisping in the NYC

I gave a presentation about Parenscript to CLUDG, and then two weeks later gave it again to LispNYC. An audio recording of the NY talk and a tarball of the presentation code can now be found here.

August 25, 2007

August Calgary Lisp user's group meeting.

The next Calgary Lisp user's group meeting is taking place on Tuesday the 28th at 6:30pm. The gracious folks at Arcurve have lent the use of their offices for the meeting. The address is 708 11 Ave SW, on the corner of 11th Ave and 6th St. I will be giving a talk about ParenScript and developing Lisp-powered "Rich Internet Applications".

The building where the office is located is locked down after-hours, so the plan is to convene in the lobby between 6:15 and 6:30pm (call 242-4361 if you arrive earlier or later so I can let you in). For those driving, there is a parking lot behind the building on 10th Ave.

August 18, 2007

Memory doubts

Patrick Logan recently posted a note expressing misgivings about the promise of transactional memory:

I can see someone making the argument in some domain I don't usually work in that shared memory threads are better than shared nothing message passing for performance reasons. Some hard-real time scenario (I think I was real tired when I wrote that. Must have been thinking back to the days when I'd get into "Is automatic garbage collection practical?" debates. Nobody has those anymore, do they? Because I've got some real good arguments for GC.), some huge number crunching scenario, etc. where every byte and every cycle has to count in the extreme. But the irony then is that STM seems far more suited to a language like Haskell, which is also unlikely to be suited for these performance scenarios.

My only fear is that for the masses including myself, we need *simple* mechanisms, and the fewer of those, the better. Shared nothing messages seem to scale up, out, and down. STM seems more complicated, and an incomplete solution, and only a solution for shared memory. No thanks, at least until bigger brains than my own can show me why I should change my mind. They seem sidetracked on this one.

With regards to transactional memory, the "performance argument" doesn't even need to be brought up. The second biggest objection to transactional memory (the first being that it is different from what's currently being used) is that even in the best case of no contention, it imposes a large overhead penalty compared to code using manual synchronization. And in the worst case, you have to throw out whole transactions' worth of work and start again.

When it comes to scalability, transactional memory is just as limited as any other kind of transactional system. Published benchmarks already show a performance plateau on some of the larger Sun servers, and those have fewer cores and probably a much lower ratio of CPU speed to memory latency (and bandwidth) than any upcoming 80-core Intel chip.

If anything, out of all the available approaches, message passing admits by far the fastest implementations. Cache-coherency protocols introduce delays and seriously limit scaling in the number of processors in an SMP system, and even atomic instructions are expensive enough that, from what I've heard, some of the lock-free data structures currently being worked on don't perform well compared to the lock-based ones they are supposed to replace.

However, I would not by any means call transactional memory complicated. In fact it is the easiest approach to concurrency available for those used to traditional imperative programming. This is by no means a good thing, since without needing to understand concurrency, people will inevitably write "concurrent" programs that won't scale, or worse (it's not hard to imagine a transactional system where performance suddenly drops off a cliff when you add processors, because transactions are now being aborted all the time).
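
To illustrate that last failure mode, here's a toy optimistic-concurrency sketch in Common Lisp (my own illustration, using the bordeaux-threads library - this is nothing like a production STM, which has to track read and write sets across many variables): a transaction snapshots a versioned cell, does its work, and commits only if nobody else committed in the meantime. Under contention, that retry loop is exactly where the cliff comes from.

    (defstruct tvar value (version 0))

    (defvar *commit-lock* (bt:make-lock))

    (defmacro atomically ((var tvar) &body body)
      "Bind VAR to TVAR's current value, run BODY to compute a new value,
    and commit it -- unless someone else committed first, in which case all
    of BODY's work is discarded and the whole transaction retries."
      (let ((r (gensym)) (v (gensym)) (new (gensym)))
        `(loop
           (let* ((,r ,tvar)
                  (,v (tvar-version ,r))      ; snapshot the version
                  (,var (tvar-value ,r))
                  (,new (progn ,@body)))      ; optimistic work
             (bt:with-lock-held (*commit-lock*)
               (when (= ,v (tvar-version ,r)) ; nobody committed meanwhile?
                 (setf (tvar-value ,r) ,new)
                 (incf (tvar-version ,r))
                 (return ,new)))))))          ; conflict: loop retries from scratch

    ;; E.g. (atomically (x *counter*) (1+ x)) -- with enough threads
    ;; hammering a hypothetical *counter*, most loop iterations become
    ;; wasted, re-executed work.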

A final comment: going back to objection #1 against transactional memory. While the state of the art in distributed systems research is advancing well, the state of experience in actually implementing such systems lags much further behind, to the point where many people have a very misguided view of what's hard and what's not in programming distributed systems. This applies even to the researchers themselves, since most distributed systems researchers simply don't have the time to do anything but research (it's a very exciting time for the field, and large rewards will go to those who publish first). This was illustrated in an incident a while ago where a research professor in the field, whose work and talents I otherwise highly respect, told a class that message passing was too difficult for programmers, that programmers would rather use locks and semaphores, and asked how those could be implemented in terms of message passing. Of course I had to object. I suspect there are and will be a lot of similar arguments repeated over and over, and the familiarity card will be brought out again and again.

If someone plays that card on you, point out that out of all software, only a very small amount is written with lock-based concurrency, and of those programs, only an insignificant percentage is actually written as real parallel software (as opposed to multi-threaded GUIs and network servers). The familiarity card is really worth far less than it seems.

August 8, 2007

Radix trees for Common Lisp.

At the beginning of the year I wanted to do some toy coding to break out of a Lisp dry spell (the unfortunate result of too much schoolwork - at least I now have that behind me), and, having recently found out about radix trees (also known as Patricia trees), I decided to write an implementation for Common Lisp. I just decided to clean it up and put it online - you can find the code here (BSD license). The data structure itself works for any subtype of sequence. I haven't implemented a version for integers (and unless there is demand, I'm probably not going to), so you'll have to use bit-vectors if you want to index things by binary keys.
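
For anyone who hasn't seen the data structure, here's a minimal sketch of the idea (a toy illustration written for this post, not the API of the released code): each edge carries a whole subsequence as its label, and inserting a key that diverges partway along an edge splits the edge at the point of divergence.

    (defstruct rnode
      (edges nil)      ; alist of (label . child); no two labels share a first element
      (value nil)
      (terminal nil))

    (defun prefix-length (a b)
      "Length of the longest common prefix of sequences A and B."
      (or (mismatch a b) (length a)))

    (defun radix-lookup (node key)
      "Return the value stored under KEY, and whether KEY is present."
      (if (zerop (length key))
          (values (rnode-value node) (rnode-terminal node))
          (loop for (label . child) in (rnode-edges node)
                for k = (prefix-length label key)
                when (= k (length label))   ; this edge's label fully matches
                  do (return (radix-lookup child (subseq key k)))
                finally (return (values nil nil)))))

    (defun radix-insert (node key value)
      "Destructively add KEY (any sequence) to the tree rooted at NODE."
      (if (zerop (length key))
          (setf (rnode-value node) value (rnode-terminal node) t)
          (let ((edge (assoc key (rnode-edges node)
                             :test (lambda (k label)
                                     (plusp (prefix-length k label))))))
            (if (null edge)
                ;; No edge shares a prefix with KEY: add a fresh leaf.
                (push (cons (copy-seq key)
                            (make-rnode :value value :terminal t))
                      (rnode-edges node))
                (let ((k (prefix-length (car edge) key)))
                  (when (< k (length (car edge)))
                    ;; KEY diverges inside the edge label: split the edge.
                    (let ((mid (make-rnode
                                :edges (list (cons (subseq (car edge) k)
                                                   (cdr edge))))))
                      (setf (car edge) (subseq (car edge) 0 k)
                            (cdr edge) mid)))
                  (radix-insert (cdr edge) (subseq key k) value)))))
      node)

    ;; (defvar *r* (make-rnode))
    ;; (radix-insert *r* "test" 1) (radix-insert *r* "team" 2)
    ;; (radix-lookup *r* "team") => 2, T

Since only generic sequence functions (length, mismatch, subseq) are used, the same code works unchanged on bit-vector keys.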

July 28, 2007

Network analysis, class divisions, paranoia

This is a fascinating piece of sociological research by danah boyd which, besides being valuable in and of itself, foreshadows immense changes in the way sociology research will be conducted in the future.

What's the paradigm shift? Social networking sites force people to reify their social relations. Better yet, they put all of that data into one accessible, easy-to-mine place. If that's not a sociologist's idea of a goldmine, I don't know what is. Of course, with this come all sorts of privacy concerns, which is the part that I'm not really interested in.

The other important issue danah's paper raises is that despite the discomfort, denial, and decades of rhetoric to the contrary, it is time to face the fact that out of all the nations claiming democracy, the US has one of the most pervasive and clearly manifest socio-economic class divisions in the modern world.

On a related note, I regularly keep going back to the thought of using network analysis to reveal the real power structure of governments - which business leaders met which government leaders for what reason when? Not that business and political leaders will start to reveal their lives in Facebook profiles (although I heard rumors that a recent strategy change by a local startup involves doing "LinkedIn meets Facebook" - the only image that comes to mind is executives posting photos of themselves drunk at the company picnic).

This issue has become more critical with the rise of special interest groups and lobbies (masked by the benign euphemism of "civil society" by their originators and eagerly picked up by those sucking at the NGOs' tits). After all, network analysis is already being successfully used by governments to track the activities of their enemies.

There is a fascinating interview with Valdis Krebs of orgnet.com about mining publicly available information on the 9/11 hijackers using network analysis. Why not use these tools for the real benefit of civil society? Of course, some people may argue that the actions of the "Coalition of the Willing" in Iraq already qualify it as a terrorist network.

The more paranoid will point out the connection between Accel Partners, the VC firm that initially invested in Facebook, and the CIA (I wonder if this connection was discovered using network analysis?), but that seems more coincidence than conspiracy.

July 20, 2007

New ParenScript release.

I just finished putting up the tarball of the new ParenScript release, which can be downloaded here. Red Daly has been working on a new package/namespace mechanism for ParenScript, which will entail some major changes to the codebase, and this will be the last release before his work is integrated into the main ParenScript source tree.

July 17, 2007

Software Transactional Memory

I promised in an earlier post to discuss software transactional memory, so today I'm presenting a paper written by Andrew Seniuk and me for Lisa Higham's distributed algorithms course. The paper contains an overview of STM, discusses the implementation approaches of a few current systems, provides proofs of several properties of the DSTM implementation, and highlights possible pitfalls that can occur in transactional code. While Andrew and I were working on the paper, there was a lot of excitement in the blogosphere about the potential of STM, and I think it is important to highlight the problems that will be encountered as use of software transactional memory becomes more widespread - both to help people avoid them, and to deflate some of the "silver bullet" hype that I think STM has received. Enough rambling; here's the link to the paper:

http://vsedach.googlepages.com/stm561.pdf

June 19, 2007

ParenScript gets its own c-l.net project page, maintainer

For those who don't know, ParenScript is a compiler that takes a Lisp-like language and translates it to readable, efficient Javascript. It is a surprisingly powerful tool, considering that ParenScript doesn't really do much more than provide an S-expression syntax for Javascript (and HTML and CSS generation, although I use different tools for that). Anyway, I think ParenScript (combined with Hunchentoot) is one of the best tools out there right now for developing standards-compliant rich Internet applications. So it was unfortunate that as of a couple of weeks ago, ParenScript was without an active maintainer or resources of its own (mailing list, repository, etc.). I've decided to step in as maintainer and put up a project page for ParenScript on common-lisp.net. The project is less than a week old, but there have already been code contributions from new developers, which is certainly encouraging. Daniel Gackle and I are using ParenScript to develop the UI for our startup product, Skysheet (more on that in upcoming posts), so I get to "eat my own dog food," which was a big motivation for me to step in.
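
To give a flavor of what that S-expression syntax buys you, here's roughly what a trivial translation looks like (the entry-point macro has gone by js in older releases and ps in newer ones, and the exact output formatting varies between versions):

    (ps (defun greet (name)
          (alert (+ "Hello, " name))))
    ;; => "function greet(name) {
    ;;       alert('Hello, ' + name);
    ;;     }"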

June 10, 2007

Sunday afternoon reading.

Dan Creswell recently put up a "reading list" of recommended articles and papers on distributed systems, which you can find here. Some interesting stuff there. The articles about the architecture of MySpace are what finally convinced me to get rid of my MySpace account.

June 3, 2007

How to write parallel programs

I recently finished reading through Nicholas Carriero and David Gelernter's How to write parallel programs (available free online), a hidden gem on parallel computing published in 1990. Most books about distributed and/or parallel computing that I've come across are banal junk (no, no one needs another book about MPI) - How to write parallel programs contrasts sharply.

I suspect a major reason why parallel programming is considered difficult today is that most people don't have any conceptual feel for the space of possible parallel problem-solving approaches. Most educational material presents a fragmented view of models and paradigms; the vast majority of books focus on a single system or approach. The majority of programmers, not having delved into the breadth of the parallel computing literature, simply are not aware that there exist alternatives that may make their parallel problem-solving tasks much easier.

In light of this, the key insight of How to write parallel programs is distilling parallel programming into three distinct conceptual paradigms for coordination in parallel problem-solving. It may come as a surprise (I promise I won't provide any more spoilers) that DCOM, CORBA, RMI, and other object-oriented "remote method call" junk is not one of them. Which brings me to a tangent where I rant about the whole idea of "remote method invocation" being beyond stupid: it manages to get both message passing in object-oriented programming and the basic assumptions about distributed systems completely wrong. More people should listen to Deutsch and Kay.

How to write parallel programs is intended to be a textbook, and that means algorithm analysis, which frequently leads to surprisingly practical insights about distributed systems, such as load-balancing considerations, dealing with IO bottlenecks, and even using time complexity analysis to (inadvertently) find faulty nodes. Examples are written in C-Linda (an implementation of Gelernter's tuplespaces; a toy sketch of the idea follows below). Following the arguments and doing the exercises is well worth the time.
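
For those who haven't seen Linda: the entire coordination model is a handful of operations on a shared bag of tuples. Here's a deliberately naive Common Lisp sketch of the idea (my own toy using bordeaux-threads - real implementations are distributed, and much smarter about matching and wakeups):

    (defvar *space* '())                        ; the shared bag of tuples
    (defvar *space-lock* (bt:make-lock))
    (defvar *space-cv* (bt:make-condition-variable))

    (defun out (&rest tuple)
      "Deposit TUPLE into the space."
      (bt:with-lock-held (*space-lock*)
        (push tuple *space*)
        (bt:condition-notify *space-cv*)))

    (defun match-p (pattern tuple)
      "A ? field matches anything; other fields must be EQUAL."
      (and (= (length pattern) (length tuple))
           (every (lambda (p x) (or (eq p '?) (equal p x)))
                  pattern tuple)))

    (defun in (&rest pattern)
      "Withdraw a tuple matching PATTERN, blocking until one appears."
      (bt:with-lock-held (*space-lock*)
        (loop
          (let ((tuple (find-if (lambda (tup) (match-p pattern tup))
                                *space*)))
            (if tuple
                (progn (setf *space* (delete tuple *space* :test #'eq))
                       (return tuple))
                (bt:condition-wait *space-cv* *space-lock*))))))

    ;; Master: (dotimes (i 10) (out 'task i))
    ;; Worker: (destructuring-bind (tag i) (in 'task '?)
    ;;           (declare (ignore tag))
    ;;           (out 'result i (crunch i)))   ; CRUNCH is a stand-in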

I'm going to end this post with a request for help from the readers: I am trying to track down a copy of a paper referenced in the book: R. Abarbanel and A. Janin. Distributed object management with Linda. Research report, Apple Computer, Cupertino, CA, September 1989. If someone has a copy or can obtain one (any old-school Apple employees reading this?), please get in touch.

May 21, 2007

The ontology of XP

I just finished reading Adrian Mackenzie's Cutting Code: Software And Sociality. The book attempts to explore the meaning of software in its many incarnations using the tools of contemporary literary analysis. Although I didn't like most of it (many assertions I disagreed with, others I could not make sense of, despite the conspicuous absence of poststructuralism in the book), one chapter that stood out was about eXtreme Programming. It's the best explanation of XP I have encountered, and well worth reading for everybody.

April 25, 2007

Calgary Coworking

So yesterday I pitched an idea for a Calgary coworking space (if you don't know what those are, follow the link below and read around) at DemoCamp, and got some very enthusiastic responses. If you're interested, drop me an email, or put your name down at the Calgary coworking wiki page I've started:

http://wiki.coworking.info/CoworkingCalgary

April 8, 2007

Calgary Lisp user's group announcements.

I've put up a group page for the Calgary Lisp user's group on upcoming.org (apparently it's all the rage with the web2.0 kids):

http://upcoming.org/group/2946/

It's got an RSS feed plus email and SMS reminders. This should make it easier for people to keep track of meetings and for me to announce them.

One more thing: I think we need a spiffy acronym (or how else will I tag my blog posts?). I suggest CLUDG, for Calgary Lisp User's and Developer's Group. I think phonetically this may be treading on the Calgary Linux user's group (CLUG) turf, but hey, Lisp was here first.

April 7, 2007

Next Calgary Lisp user's group meeting.

The next Calgary Lisp user's group meeting will take place this upcoming Wednesday, April 11th, at the usual place (Saigon Y2K on Crowchild near McMahon Stadium) starting at 6:30pm. I will talk about my time at ILC07. There will possibly be one or two new people showing up. See you there.

More ILC stuff.

Since no one else in the Lisp blogosphere has yet stepped up to the plate, I feel I should post some more of the interesting happenings that occurred at ILC.

First off, to what must be many people's disappointment, unless Duane gave private MACL demos to other attendees, I don't think anyone is going to blog much about it. The whole MACL "demo" at Duane's tutorial consisted of opening and scrolling around a .cl file (to which MACL was largely incidental). The impressive part was that there was a snag in the demo setup and he had to rebuild ACL first, which was done in well under 5 minutes.

Next year marks Lisp's 50th birthday. It looks like the current plan is to have another International Lisp Conference in Cambridge, this time Massachusetts, in 2008 to celebrate. Herbert Stoyan talked about getting all the (really) old-school Lispers together for the thing. After this year's experience (thank you Nick and all of the others who helped put it together!) I'll definitely do everything I can to attend. There will also be a contest associated with the conference. The details have been revealed, and they are exactly: "Lisp 1.5" (yes, that's all there is to it - do something with/to/on Lisp 1.5).

Thanks again, and see you at ILC 2008!

April 4, 2007

ILC impressions.

I just got off the plane a little over two hours ago - unfortunately time constraints prevented me from sitting down and blogging before then (they also prevented me from attending the last 1 1/2 days of the conference - there were some talks I was really looking forward to). I wasn't going to blog right at this moment, but Zach's post must be creating some expectations.

Hands down, in terms of presentation quality, Charlotte Herzeel's talk on the HALO temporal-logic-selected-pointcuts AOP tool was my favorite. I should have read the paper first, though, as much of it went over my head initially. A surprising development of Charlotte's talk was the revelation that JonL White is really into AOP, or in any case knows how to ask a lot of thoughtful questions about it. For some reason this is not something I imagined old-school Lispers were interested in.

Christian Queinnec made the most persuasive argument in favor of computer-assisted teaching I have ever seen. A surprising result (surprises seemed to be a theme of the conference for me...) is that it turns out that teaching Scheme to undergraduates helps more women succeed in computer science. As this issue is right now in a pretty catastrophic state (based on hard enrollment numbers, an outside observer could only conclude that women are being actively excluded from CS education), why not accomplish two wonderful things at once and teach more Scheme?

Hannes and Andreas's talk on Dylan macros for destructuring and manipulating binary data was good (too much code-reading for my tastes, though). The system they built (Network Night Vision) looks pretty damn awesome (if you are interested in what you can do with Dylan and DWIM user interfaces, it is also very much worth looking at). Speaking of user interfaces, Peter Herth's talk on LTk had some really neat demos.

One of the themes on Monday was computer vision (whether this was intentional, I don't know). Chris Connolly gave a talk about SRI's FREEDIUS system, which is used for geospatial (i.e., surveillance) CV, and in the afternoon Cyrus Harmon talked about the image processing system he developed to categorize Drosophila melanogaster (it's a kind of fly) imaginal discs (undeveloped limbs/protruding squishy things), where different genes in different discs at different stages were colored and photographed, producing a whole bunch of images that needed to be accurately categorized. Cyrus developed the ch-image image processing library to help in this regard. In both FREEDIUS and Cyrus's PhD system, all of the low-level image manipulation and feature detection code is written entirely in Lisp (well, Chris said that in FREEDIUS Gaussian smoothing is done in either C or Lisp, and that they're probably going to drop the C function and do it from Lisp because the performance difference isn't that large). So there you go - if you didn't want to believe Didier Verna's benchmarks, now you have proof that there is absolutely no reason not to do all of your performance-intensive array processing tasks in Lisp.

One more surprise at the conference was Michael Sperber's talk on the R6RS standardization effort. I wasn't aware of any of the new things R6RS will (hopefully) introduce before Michael's talk, but combined with Olin Shivers's unveiling of the Scheme loop macro at Danfest, I think there is a good possibility that I might get converted back into the Scheme fold in the not too distant future.

April 1, 2007

Sunday's tutorials.

(Photo: edi_tutorial, originally uploaded by vsedach.)

Today was tutorials day at ILC. I went to Edi Weitz's Hunchentoot and CL-WHO tutorial in the morning, and Duane Rettig's Allegro optimization tutorial in the afternoon. Interesting things to ponder. One surprise that Duane revealed was MACL - a new Mac OS X UI for Allegro Common Lisp, developed by Clozure Associates. This explains what is going on (or rather, not) with MCL. Fun tidbit: MCL was called MACL originally.

March 31, 2007

Updates from ILC

Photos!

March 29, 2007

Going to the ILC

All arrangements have been made and I'm flying out to London tomorrow. I plan to blog from the conference. If you're there on Saturday, drop by Tishka's cafe (where Nick Levine is planning to host a breakfast) - I'll be there for brunch, possibly late breakfast, and maybe I'll drop by for lunch as well. I'll also be attending the afternoon tour.

March 17, 2007

ILC 2007!

After Nick Levine sent me an email stating that the ILC 2007 programming contest judges would like to give me a prize - too bad I couldn't come to Cambridge (nice marketing tactic :)) - I finally broke down and booked a flight. I'm going to ILC 2007! I'm going to be totally broke!

March 10, 2007

Complete computing system in 20,000 lines of code

I just finished watching Ian Piumarta's talk on the project at Viewpoints Research Institute to build a complete (metal up) personal computer system in 20,000 lines of code. While the easy way to do this would be to use APL, Ian, Alan Kay, Dan Ingalls, and the others working on the project have instead opted to build metacircular/self-describing systems. Of course, in modern parlance this is called "reflection," a very crappy version of which can be found in the Java language and runtime system. Among other fascinating programming language and compiler topics, Ian discusses Schorre's Meta-II compiler-compiler (developed in 1962), which had a self-describing grammar. Fascinating stuff, and in my opinion critically important, since the only proven way to reduce software complexity has been to, well, reduce software complexity - fewer lines of source code, written in a simpler way, make for better, more maintainable software, and this is exactly what the project aims to do.
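
If you haven't seen what "metacircular" buys you, here's a toy illustration (mine, not VPRI's code): an evaluator for a tiny Lisp, written in a few lines of Lisp, which is the kind of seed from which such self-describing systems are bootstrapped:

    (defun toy-eval (form env)
      "Evaluate FORM in ENV, an alist of variable bindings."
      (cond ((symbolp form) (cdr (assoc form env)))
            ((atom form) form)                       ; self-evaluating literals
            (t (case (first form)
                 ((quote)  (second form))
                 ((if)     (if (toy-eval (second form) env)
                               (toy-eval (third form) env)
                               (toy-eval (fourth form) env)))
                 ((lambda) (list :closure (second form) (third form) env))
                 (t        (toy-apply
                            (toy-eval (first form) env)
                            (mapcar (lambda (a) (toy-eval a env))
                                    (rest form))))))))

    (defun toy-apply (fn args)
      (if (functionp fn)                             ; primitives are host functions
          (apply fn args)
          (destructuring-bind (tag params body env) fn
            (declare (ignore tag))
            (toy-eval body (pairlis params args env)))))

    ;; (toy-eval '((lambda (x) (+ x 1)) 41) (list (cons '+ #'+)))  => 42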

Here is the link to the video:

http://stanford-online.stanford.edu/courses/ee380/070214-ee380-300.asx

March 8, 2007

Upcoming Lisp user's group meeting

The next Calgary Lisp user's group meeting will take place at 6:30pm on Wednesday, March 14th, at the Saigon Y2K restaurant (2110 Crowchild Trail NW), in the strip mall right near the Banff Trail C-Train station and across from McMahon Stadium.

March 6, 2007

Nested Transactions

Here is another post in my series about distributed systems.

This one is about J. Eliot B. Moss's 1985 PhD dissertation on nested transactions, Nested Transactions: An Approach to Reliable Distributed Computing, published by the MIT Press. In it, Moss explains the design of his distributed transaction system supporting nested transactions.

Although Moss's is not the first distributed system to support nested transactions, the dissertation introduces the novel combination of two-phase locking and deadlock detection as the mechanism for guaranteeing serializability as a correctness condition.
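
The two-phase locking discipline itself fits in a few lines (a toy sketch using bordeaux-threads; note that I'm dodging Moss's actual contribution, deadlock detection, by acquiring locks in a fixed global order, which is the lazy way to avoid deadlocks altogether):

    (defstruct (tlock (:constructor make-tlock (id)))
      id
      (mutex (bt:make-lock)))

    (defun call-with-2pl (locks thunk)
      "Growing phase: acquire every lock THUNK will need, in a fixed global
    order. Shrinking phase: release them only after the work is done, so the
    transaction's reads and writes serialize as one atomic step."
      (let ((ordered (sort (copy-list locks) #'< :key #'tlock-id)))
        (dolist (l ordered) (bt:acquire-lock (tlock-mutex l)))  ; growing
        (unwind-protect (funcall thunk)
          (dolist (l (reverse ordered))                         ; shrinking
            (bt:release-lock (tlock-mutex l))))))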

A contrasting approach to implementing nested transactions is to use multi-version timestamping (introduced in David P. Reed's 1978 PhD dissertation, mentioned in a previous post). This approach is free of deadlock as a result of object timestamping; however, it is vulnerable to livelock (slow, long-running transactions retrying repeatedly due to changes committed by quickly running ones). For the same reason, multi-version timestamping does not work well in the presence of a high degree of contention over an object by many transactions. Aspnes et alia's 1988 paper on applying multi-version timestamping to nested transactions provides an extension of Reed's original algorithm that somewhat alleviates this problem.

Nested transactions, while not strictly necessary for many systems, become critical when nondeterminism is needed - such as the 'orElse' construct introduced in Harris et alia's 2005 paper on composable memory transactions and implemented in the GHC software transactional memory system (to be discussed in an upcoming post).

February 16, 2007

Croquet system overview

Since distributed and parallel systems and concurrency are on my mind a lot, I thought I would start sharing what I consider to be the "things worth reading" about those topics through this blog (as much as for your benefit, my numerous readers, as it is to help me form my thoughts by committing them (pun fully intended) to blog posts).

To start off is some light reading about the design of Croquet: http://www.croquetconsortium.org/index.php/System_Overview

I think the key things to note about Croquet are how Reed's multi-version timestamping (I plan to make David P. Reed's PhD dissertation a future topic) is applied to messages and extended with the notion of pseudo-real time, how replication of computation is seen not only as not bad but as essential to the system, and how actions are performed through a sort of delayed message passing (where the programmer uses the above-mentioned pseudo-real time scheme to specify the delay), which is called "temporal tail recursion." Notice that none of the above features are really orthogonal - none of them would work very well (or at all) by themselves. This is probably the most coherent and well-thought-out model for handling time in an interactive distributed system out there right now.
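
Here's a toy sketch of what temporal tail recursion amounts to (my own illustration of the idea, not Croquet code): an object never loops; it finishes its method by sending a message to its future self at a later pseudo-time, so the scheduler always regains control between activations.

    (defvar *queue* '())   ; pending sends: a list of (pseudo-time function), sorted

    (defun future-send (time fn)
      "Schedule FN to run at pseudo-time TIME."
      (setf *queue* (merge 'list (list (list time fn)) *queue* #'< :key #'first)))

    (defun run-until (end)
      "Dispatch queued messages in pseudo-time order, up to time END."
      (loop while (and *queue* (<= (first (first *queue*)) end))
            do (destructuring-bind (time fn) (pop *queue*)
                 (funcall fn time))))

    ;; A "blinker" that never iterates: each activation re-sends to itself
    ;; 100 pseudo-milliseconds later, so it cannot monopolize the scheduler.
    (defun blink (now)
      (format t "~&blink at ~a" now)
      (future-send (+ now 100) #'blink))

    ;; (future-send 0 #'blink) followed by (run-until 500) prints six blinks.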

Another lesson from Croquet is that security in an open distributed system with mobile code can be handled elegantly at the language instead of the virtual machine level. A powerful example is the fact that temporal tail recursion guarantees that malicious objects cannot go into infinite loops that suck up all computational resources. In Smalltalk all iteration is expressed explicitly as message passing, so there is nothing lost by restricting iteration to temporal tail recursion. This is a demonstration of both the benefits of powerful, uniform paradigms in programming languages, and that of the actor/functional programming paradigm specifically.

February 10, 2007

Alternative computing technologies.

I finally got around to putting online the slides for a presentation I gave in 2004, in Martha Whitney Langford's excellent Science, Technology and Society class, on how computing systems can fit E. F. Schumacher's concept of appropriate/alternative technologies. Get them here (pdf, 6MB).

One particularly novel insight offered is that Free Software seems to fit Schumacher's criteria for alternative technology, yet is experiencing its greatest success in a context that is the antithesis of where Schumacher thought alternative technology would be most appropriate.

February 3, 2007

ILC 2007 Programming Contest!

It has begun!

January 18, 2007

Next Calgary Lisp user's group meeting!

The next meeting of the Calgary Lisp user's group will take place on Wednesday, Jan. 24th at 6:30pm at Saigon Y2K in the north-west (2110 Crowchild Trail NW, right near the Banff Trail LRT station and across from McMahon Stadium). Hope to see you there!