From: Erik Naggum
Subject: Re: New to Lisp question: advantages of Lisp syntax?
Date: 1999/06/04
Message-ID: <3137456283455984@naggum.no>
X-Deja-AN: 485522463
References: <7j1lm3$bph$1@news8.svr.pol.co.uk> <3137271818084864@naggum.no> <7j375m$90q$1@news8.svr.pol.co.uk> <3137329919324438@naggum.no> <7j4dge$467$1@news6.svr.pol.co.uk> <3137374233008473@naggum.no> <7j71po$tii$1@news6.svr.pol.co.uk>
Mail-Copies-To: never
Organization: Naggum Software; +47 8800 8879; http://www.naggum.no
Newsgroups: comp.lang.lisp

* "Jonathan"
| Perhaps we have a different definition of slow.

  no, but we do have different definitions of performance.  I want total
  system performance: short response times to actual requests and never a
  failure to perform as required.  you want fast individual function
  execution.  and never the twain shall meet...

| Those benchmarks I referred to showed Allegro 3 running at a quarter of
| C++/STL speed.

  since you aren't going to get hold of Allegro 3, why do you base anything
  on such ridiculously out-dated statistics?  has it occurred to you that
  those who create and publish unfavorable benchmarks have an axe to grind?

| To me that's very, very slow.

  _why_?  this doesn't make sense at all.  what would you say if I took one
  of my examples and showed that the simple, direct, heavily optimized C
  version is a whopping 16 times _slower_ than the simple, direct, heavily
  optimized Common Lisp version of the same _problem_?  (not at all the
  same solution, of course, but what good is comparing _languages_ if you
  don't do the most natural thing as well as possible in each?)  don't take
  this personally, Jonathan, but benchmarks are only good for measuring the
  performance changes of a _single_ test subject under various conditions,
  and those who believe otherwise are generally fools.  (you've discovered
  Lisp, so you're exempted.)
  benchmarks are good at stuff like the same code given to a Pentium II and
  a Pentium III at the same clock rate, or a Pentium II at 400 and at 450
  MHz, with 66 MHz bus and memory or with 100 MHz bus and memory, etc.  or
  the same C code given to umpteen different compilers at their maximal
  optimization.  or the same operating system and hardware and the
  performance of different network cards, etc.  benchmarks are only good
  under carefully controlled conditions.  lose that control, and you can
  prove just about anything.

| I make my living optimizing code and I can usually beat the library and
| compiler combination in question by more than an order of magnitude -
| sometimes several.

  what a magnificent waste.  why don't you go work for a Lisp vendor?

| And execution speed matters in my markets - an awful lot.

  having heard this line from people who subsequently could neither defend
  their actual speed needs nor prove to be doing anything serious about
  them (which means sacrificing something else to get speed), I have very
  serious doubts about this claim.  in fact, I think it's a myth, and that
  those who perpetuate the speed myths are incompetent at anything but
  optimizing individual operations, and the problem is: that's damn easy.
  it's also very, very rewarding once you set your mind on that track, like
  a drug addiction, not least because you can always shave off one more
  cycle.

  recently, I cut the CPU expenditure on one piece of functionality in my
  application by a factor of 13,000.  one of the speedups meant that all
  but the fastest cases took on a fairly high constant delay, but the
  overall result was a factor-90 performance improvement.  this would not
  have been a smart move unless I knew that the complex memoization
  technique I used would pay off.  in the general case, it's wasteful and
  costly, and not the kind of thing a C programmer would think about doing.
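  (for readers unfamiliar with the technique: here is a minimal sketch of
  memoization in Common Lisp.  the MEMOIZE wrapper and its hash-table cache
  are purely illustrative -- this is not the application code under
  discussion, which used a considerably more complex scheme.)

```lisp
;; illustrative sketch only -- not the actual application code.
;; MEMOIZE wraps a one-argument function so that the first call for a
;; given argument pays the full cost, and every later call with the same
;; argument is just a hash-table lookup.
(defun memoize (fn)
  (let ((cache (make-hash-table :test #'equal)))
    (lambda (arg)
      (multiple-value-bind (value found-p) (gethash arg cache)
        (if found-p
            value
            (setf (gethash arg cache) (funcall fn arg)))))))

;; example: an expensive function becomes cheap on repeated calls.
(defparameter *slow-square*
  (memoize (lambda (x) (sleep 0.1) (* x x))))

(funcall *slow-square* 12)   ; => 144, slow the first time
(funcall *slow-square* 12)   ; => 144, a lookup thereafter
```

  the point of the amortized arithmetic below follows directly: the first
  call is more expensive than a straight computation, every subsequent call
  for the same argument is nearly free, so the technique only wins when the
  same requests recur -- which is exactly the total-system-performance view,
  not the individual-function-speed view.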
  for my needs, the total cost of the function in C is twenty times higher
  than the total cost of the function in Common Lisp, even though the first
  call is about 3.5 times as expensive as the C version.  and the more
  clients we connect, the more the Common Lisp version will outpace the C
  version.  the funny thing about this is that I probably wouldn't have
  chosen to use a complex memoization technique if the code had been "fast
  enough"...

#:Erik
-- 
  @1999-07-22T00:37:33Z -- pi billion seconds since the turn of the century