Subject: Re: New to Lisp question: advantages of Lisp syntax?
From: Erik Naggum <erik@naggum.no>
Date: 1999/06/04
Newsgroups: comp.lang.lisp
Message-ID: <3137456283455984@naggum.no>

* "Jonathan" <jonathan@meanwhile.freeserve.co.uk>
| Perhaps we have a different definition of slow.

  no, but we do have different definitions of performance.  I want total
  system performance: short response times to actual requests and never a
  failure to perform as required.  you want fast individual function
  execution.  and never the twain shall meet...

| Those benchmarks I referred to showed Allegro 3 running at a quarter of
| C++/STL speed.

  since you aren't going to get hold of Allegro 3, why do you base anything
  on such ridiculously out-dated statistics?  has it occurred to you that
  those who create and publish unfavorable benchmarks have an axe to grind?

| To me that's very, very slow.

  _why_?  this doesn't make sense at all.  what would you say if I took one
  of my examples and showed that the simple, direct, heavily optimized C
  version is a whopping 16 times _slower_ than the simple, direct, heavily
  optimized Common Lisp version of the same _problem_?  (not at all the
  same solution, of course, but what good is comparing _languages_ if you
  don't do the most natural thing as well as possible in each of them?)

  don't take this personally, Jonathan, but benchmarks are only good for
  measuring the performance changes of a _single_ test subject under
  various conditions, and those who believe otherwise are generally fools.
  (you've discovered Lisp, so you're exempted.)  benchmarks are good at
  stuff like running the same code on a Pentium II and a Pentium III at the
  same clock rate, or a Pentium II at 400 and at 450 MHz, with 66 MHz or
  100 MHz bus and memory, etc.  or like the same C code given to umpteen
  different compilers at their maximal optimization.  or like the same
  operating system and hardware with different network cards, etc.
  benchmarks are only good under carefully controlled conditions.  lose
  that control, and you can prove just about anything.
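
  (for illustration only, here is a minimal sketch in Common Lisp of timing
  a single subject so that one condition at a time can be varied; any
  serious benchmark would also repeat runs and control for GC and warm-up.
  the function name and arguments in the usage comment are made up.)

  (defun time-run (function &rest args)
    "Return the elapsed real time, in seconds, of a single call."
    (let ((start (get-internal-real-time)))
      (apply function args)
      (/ (- (get-internal-real-time) start)
         (float internal-time-units-per-second))))

  ;; usage: same subject, one condition varied at a time, e.g.
  ;;   (time-run #'my-parser small-input)
  ;;   (time-run #'my-parser large-input)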

| I make my living optimizing code and I can usually beat the library and
| compiler combination in question by more than an order of magnitude -
| sometimes several.

  what a magnificent waste.  why don't you go work for a Lisp vendor?

| And execution speed matters in my markets - an awful lot.

  having heard this line from people who subsequently could neither defend
  their actual speed need nor prove to be doing anything serious about it
  (which means sacrificing something else to get it), I have very serious
  doubts about this claim.  in fact, I think it's a myth, and that those
  who perpetuate the speed myths are incompetent at anything but optimizing
  individual operations, and the problem is: that's damn easy.  it's also
  very, very rewarding once you set your mind on that track, like a drug
  addiction, not least because you can always shave off one more cycle.

  recently, I cut the CPU expenditure on one piece of functionality in my
  application by a factor of 13,000.  one of the speedups meant that all
  but the fastest cases took a pretty high constant delay, but the overall
  result was a factor 90 performance improvement.  this would not have been
  a smart move unless I knew that the complex memoization technique I used
  would pay off.  in the general case, it's wasteful and costly, and not
  the kind of thing a C programmer would think about doing.  for my needs,
  the total cost of the function in C is twenty times higher than the total
  cost of the function in Common Lisp, even though the first call is about
  3.5 times more expensive than the C version.  and the more clients we
  connect, the more the Common Lisp version will outpace the C version.
  the funny thing about this is that I probably wouldn't have chosen to use
  a complex memoization technique if the code had been "fast enough"...
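
  (for the curious, here is a minimal sketch of plain memoization in Common
  Lisp -- illustration only, and far simpler than the technique described
  above: results are cached in a hash table keyed on the argument list, so
  repeated calls with equal arguments skip the computation entirely.)

  (defun memoize (function &key (test #'equal))
    "Return a memoizing wrapper around FUNCTION, caching by argument list."
    (let ((cache (make-hash-table :test test)))
      (lambda (&rest args)
        (multiple-value-bind (value found) (gethash args cache)
          (if found
              value
              (setf (gethash args cache) (apply function args)))))))

  ;; usage (the function name is made up):
  ;;   (setf (symbol-function 'expensive-lookup)
  ;;         (memoize #'expensive-lookup))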

#:Erik
-- 
@1999-07-22T00:37:33Z -- pi billion seconds since the turn of the century