Subject: Re: How Common Lisp sucks
From: (Rob Warnock)
Date: Mon, 01 May 2006 04:11:00 -0500
Newsgroups: comp.lang.lisp
Message-ID: <>
Tagore Smith <> wrote:
| Rob Warnock wrote:
| > Ron Garret  <> wrote:
| > +---------------
| > | Sockets.  Database connectivity.  Threads.
| > +---------------
| >
| > Which is why I love Lisp!! For my very first serious production
| > Common Lisp app[1] I whipped up a persistent application server
| > behind Apache, connecting with a "mod_lisp"-like protocol via
| > Unix-domain sockets to CMUCL -- creating a new CMUCL thread for
| > each HTTP request
| This, honestly, would make me nervous -- that's not necessarily a
| criticism, though -- it could just be that I'm overly conservative about
| things like this. How many page views do you serve in a day?

Dunno; probably a peak of 2-3/s, which would be ~200K/day,
except that such peaks tend to last only a few minutes
at most, so it's probably more like a few K/day on average.
"Very light", in other words.  ;-}  ;-}

| I am developing a new Lisp-based system like this and have chosen to
| use LispWorks, partially because $1000.00 (x a few servers) is a small
| price to pay to have someone to complain to if things go wrong ;).

That's your decision. I was doing the work pro bono for a non-profit
that didn't have any money at all to spend on the project, so free
tools were a consideration. And I'd already been using CMUCL for some
time on other stuff, so I was reasonably comfortable with it.

| I am a bit wary of CMUCL's threading -- I just don't want to be the
| first person to exercise it really heavily.

Again, that's certainly your prerogative, though as part of my
development I stressed it fairly hard (~80 HTTP req/s) with no
issues that I could see. And I'm still using a slow development
hack that fork/execs a small C-based CGI trampoline per request.

Using real "mod_lisp", people have reported *much* better performance
than that -- e.g., 500 req/s or more [including dynamic HTML
construction but excluding database accesses].

| I have to serve at least half a million page views a day through
| this system, and maybe as many as a million, but they are not evenly
| distributed -- 75% of the page views will come between 1 PM and 8 PM EST,
| with the bulk of that falling between 2 PM and 5 PM.

That's not too bad. Assuming half your traffic is during the peak
three hours, that's 0.5 Mreq in 10800 s, or ~46 req/s. 
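[A quick REPL sanity check of that arithmetic, nothing more:]

```lisp
;; 0.5M requests spread over the three peak hours:
(/ 500000.0 (* 3 60 60))   ; => ~46.3 req/s
```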

| That would be a pretty light load if we were serving flat HTML, but
| some of this is inherently dynamic, and requires a database call
| (I am also using postgres) for each request. No matter what else I
| do, I will have to pool connections.

Clearly!  On the server I used for the above, starting up a new
SQL connection takes ~250 ms [PostgreSQL 7.3.x], so even the
modest performance needs mentioned above required at least minimal
connection pooling, in this case re-using the same connection for
all SQL accesses during a single HTTP request. [Yes, the first
cut didn't even do *that*!  But I was still learning SQL... ;-} ]
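Just to sketch what I mean by per-request reuse -- this is illustrative
only, with DB-CONNECT, DB-EXEC, and DB-DISCONNECT standing in for
whatever your client library actually provides, *not* the real Pg API:

```lisp
(defvar *request-connection* nil
  "Bound to the current HTTP request's database connection, if any.")

(defmacro with-request-connection (() &body body)
  "Open one connection at the start of an HTTP request and reuse it
for every SQL access made while handling that request, closing it
on the way out even if BODY throws."
  `(let ((*request-connection* (db-connect)))
     (unwind-protect (progn ,@body)
       (db-disconnect *request-connection*))))

(defun query (sql)
  "All SQL during a request goes through the one shared connection."
  (db-exec *request-connection* sql))
```

Since special bindings are per-thread under CMUCL's MP, each request
thread sees its own connection with no further bookkeeping.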

| I think I may have as much experience as anyone else in the world
| when it comes to dealing with really high load on a postgres
| installation. My experience is with 7.x, so it may be moot with
| the 8.x release, but it tells me that making connections to postgres
| is really expensive -- if you don't pool connections, and you are
| using postgres, you should probably not worry about optimizing
| anything else (unless you are doing something ridiculously expensive
| in your code). 

Based on my limited experience [see above], I agree completely,
though I've not yet been forced to work very hard at pooling.
Paying a single 250 ms SQL connection cost for an HTTP request
that does at most a dozen SQL accesses was acceptable in the
above application, though it clearly would not be in yours.

| I don't take a lot of the criticisms of CL made in this thread very
| seriously -- many of them seem to me to be comparable to the difficulties
| that you would have with any language/implementation. But it would be
| nice to see a really good DB library with connection pooling.

I used Eric Marsden's "Pg" library <>, which
speaks the PostgreSQL socket protocol directly, and found it very 
straightforward to use... and to understand, the once or twice I
had to look "under the covers".  You might find that adding the
pooling you need to it would be simple enough. I would have added
cross-request pooling myself already, except that the web site's
performance was "good enough" without it.
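For what it's worth, a cross-request pool needn't be fancy -- a
lock-protected free list would do. Again, DB-CONNECT and DB-DISCONNECT
are stand-ins for the library's actual entry points, and the locking
shown is CMUCL's MP flavor [other implementations spell it differently]:

```lisp
(defvar *pool* '()
  "Free list of idle database connections.")
(defvar *pool-lock* (mp:make-lock "db-pool"))

(defun acquire-connection ()
  "Grab a pooled connection if one is free; otherwise pay the
~250 ms cost of opening a fresh one."
  (or (mp:with-lock-held (*pool-lock*)
        (pop *pool*))
      (db-connect)))

(defun release-connection (conn)
  "Return a connection to the pool for the next request to reuse."
  (mp:with-lock-held (*pool-lock*)
    (push conn *pool*)))
```

A real pool would also want a cap on idle connections and a liveness
check before reuse, but the above captures the basic idea.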


Rob Warnock			<>
627 26th Avenue			<URL:>
San Mateo, CA 94403		(650)572-2607