Subject: Re: C++ briar patch (Was: Object IDs are bad)
From: Erik Naggum <erik@naggum.no>
Date: 1997/05/18
Newsgroups: comp.lang.scheme,comp.lang.lisp,comp.lang.misc,comp.lang.functional,comp.lang.c++
Message-ID: <3072942706778740@naggum.no>


* Harley Davis
| My prima facie evidence is that Ilog rewrote several of its Lisp
| libraries into C++ back in 92-93.  We consistently added features during
| the transition, and the result was always faster and smaller code and
| dynamic memory use, usually by several factors.  These were the same
| programmers, mind you, all highly trained in Lisp who moved to C++ very
| rapidly.

I have done my share of developing in Lisp and deploying in C.  there's no
doubt that once I understand a problem well enough to avoid backtracking
while programming, I also write much more compact code.  this is not unique to C,
however.  several times, I have rewritten a set of functions many times in
the course of a project, and they always get better with time.  I view
programming as an exploration of the solution space, and this means that I
know a lot more about the problem when I have working code than when I set
out to solve the problem, no matter how much design work I did up front.
C++ seems to be good at implementing the _final_ version of something.

however, when I wrote "original code" in C, my first shots were much more
buggy and "efficient" in the wrong places than the first shots are in Lisp.
I very seldom debug my Lisp code, but I also do a lot of exploratory work
in the Lisp listener.

there is also a question of how much time was spent writing the libraries
you refer to in C++ and Lisp.  I find myself writing Lisp code 5 times
faster than I ever did C.  (I don't do C++.)  somehow, I stop writing when
the code fits some idea of "complete", which is very different for C and
for Lisp.  if I spent as much time on the Lisp code as I do on
the C code, I can assure you that the Lisp code would be very efficient
indeed, profiled for time and space and tweaked where necessary, etc.
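
to be concrete about what "profiled and tweaked" means here, a minimal
Common Lisp sketch of the progression I have in mind follows; the function
name and the integer-summing task are invented purely for illustration:

  ;; first shot: simple-minded, wasteful of memory (it conses an
  ;; intermediate list), but correct, and written in a minute.
  (defun sum-of-squares (numbers)
    (reduce #'+ (mapcar (lambda (x) (* x x)) numbers)))

  ;; measure before tweaking; TIME reports elapsed time and, in most
  ;; implementations, how much memory was consed.
  ;; (time (sum-of-squares some-large-list))

  ;; later, if profiling shows it matters: same interface to all
  ;; callers, but no intermediate list and a declaration or two.
  (defun sum-of-squares (numbers)
    (declare (optimize (speed 3) (safety 1)))
    (let ((sum 0))
      (dolist (x numbers sum)
        (incf sum (* x x)))))

the callers never see the difference; the tweaking is local and happens
only where measurement says it is needed.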

| What metric would you propose for the opposite point of view?

I thought my point was well communicated with "toss a coin" and "unfounded
conjecture".  you're the one offering suggestions that cry out for metrics.
I don't see any from you, though.  what you provide is an excellent example
of a very different point.  if your libraries had to be grown from nothing
into their present stage by a _new_ set of people who had to make all the
design choices and all the mistakes that your legacy Lisp code required to
get to where it got, we would have a relevant comparison for your claim
that C++ applications would be a factor of 2.5 larger if written in Lisp,
but that is still an unfounded conjecture on your part.  all you have
evidence of is that something that grew in Lisp could be rewritten smaller
in C++ after the fact.  I claim that this is true of _any_ rewriting,
almost no matter which languages are involved.  as designs solidify, they
can almost always be expressed much more compactly.  at the end of the
journey lies simplicity achieved.

| Perhaps you feel that I abused the word "temptation", so let me restate
| my point: The path of least resistance (ie what code is easiest to write
| based on built-in functionality and datatypes) in Lisp often leads to
| slow code which allocates lots of unnecessary memory.  The equivalent
| path of least resistance in C++, while far from optimal, is somewhat
| better in this regard.

that's what I'm saying, too.  but once on the path of least resistance, it
is important to realize that the resulting C code is buggy, and while it may
require little memory or CPU time, it requires all the more time for
debugging and correcting, and possibly rewriting to remove design
deficiencies.  the Lisp code, on the other hand, requires much less time to
write, and albeit simple-minded and wasteful in memory and CPU, it works
correctly and can be optimized when there is time and need.  also, from all
of my experience with C and Lisp, the argument list in a C function goes
through several changes that affect all callers, while the lambda list in
Lisp stays put and is merely adorned with optional or keyword arguments
added with suitable default values, affecting only the callers that need
the changes.  data structures grow, too, and in C they require recompiles
of all user code.  changes in Lisp structures don't require recompiles,
unless the code is very heavily optimized, which there is no need to
request until you actually need it.
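
to make the lambda list and structure points concrete, here is a minimal
Common Lisp sketch; the function, variable, and slot names are all invented
for illustration:

  (defstruct user
    name)

  (defvar *users* '())

  ;; version 1: existing callers say (lookup-user "erik").
  (defun lookup-user (name)
    (find name *users* :key #'user-name :test #'string=))

  ;; version 2: a keyword argument with a suitable default value is
  ;; added.  old callers are untouched; only the callers that need the
  ;; new behavior pass :case-sensitive nil.
  (defun lookup-user (name &key (case-sensitive t))
    (find name *users*
          :key #'user-name
          :test (if case-sensitive #'string= #'string-equal)))

  ;; likewise, the structure grows a slot with a default value; code
  ;; that uses only the old slot keeps working, subject to the caveat
  ;; above about very heavily optimized (inlined) accessors.
  (defstruct user
    name
    (last-login nil))

contrast this with growing the argument list or the struct in C, where
every caller and every file that includes the header has to be touched or
at least recompiled.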

however, and this is the important human factor of programming, if a Lisp
programmer spent as much time on his code as a C programmer does on his
path of least resistance, it wouldn't be the path of least resistance in
Lisp, anymore, it would be approaching production quality code.

| It's too bad that someone as passionate in their beliefs and as
| articulate in their expression as you should be deceived on this point.

I don't think I'm deceived, of course, and you've done nothing to show me I
am, so: many thanks for the compliments!

| However, I think it is an inescapable conclusion: In its average
| incarnations, Lisp, for all its other benefits as a productive tool, is
| simply not a shining example of an easy way to produce efficient
| applications for most programmers.

if we allow ourselves to focus on a single aspect of a language and its
use, it is possible to argue that any and all languages suck big time.  you
seem to have chosen "efficiency" of the code (regardless of its quality).
I choose efficiency of the programmer (which includes "correctness" of the
unoptimized code).  I won't argue that the first shot at a function,
written in Lisp by a programmer who has made it to the "average" mark for C
programmers, would suck _more_ in Lisp than in C (because it would not be
useful later, and could only be a horizontal building block, not a vertical
one, all other differences between C and Lisp programmers being equal), but
I strongly disagree that one should argue from this to the merits of
programming languages.

I have used C since 1981, but I first saw Lisp in 1978.  the influence of
The Little Lisper has been remarkable, although I had limited access to
Lisp systems.  when I finally got a computer on which I could use Lisp for
real (after a harsh encounter with C++, I decided "there must be a better
way than this" and found it), I had a long transition period during which I
cared far too much about the performance of my code.  many of my questions
to this
group (and to my Lisp vendor) have been concerned with the efficiency of
the compiled code.  I used to be willing to spend days keeping some piece
of code as fast as it used to be while fixing design flaws in it, while it
obviously would take billions of runs to recover _my_ time in keeping it
fast.  I would be more than willing to wait several minutes for a compile
so I could pride myself on having shaved less time off the execution of an
_interactive_ program than an extra context switch or page fault would
cost.  this just wasn't smart.  the problem with the obsession with local
"efficiency" that so easily afflicts C and C++ programmers is that it was
once valid, back when small local improvements in the source code led to
linear improvements in performance.  this is no longer true.  achieving
optimal execution conditions is a very different task these days from what
it used to be, and it is counter-productive to argue for programmer control
over CPU
registers and "I know what I'm doing" because, in fact, programmers _don't_
know what the computer is doing when running a process, and the kinds of
issues that _matter_ to performance can only be handled by the compiler or
the run-time system.

| > | Few programmers are up to the task.  For many years, Ilog was the
| > | largest Lisp vendor in Europe, and believe me, I saw mounds of
| > | absolutely horrendous Lisp code written by supposedly well-educated
| > | and knowledgeable programmers, the cream of the crop of European
| > | programmers.
| > 
| > yeah, funny how 350 million people can produce so few good programmers
| > when 250 million people can produce so many, isn't it?
| 
| Now perhaps it's time for *you* to propose a metric, or stop insulting
| European programmers (of whom I'm not one, but for whom I've a great deal
| of respect).

well, actually, I thought _you_ were insulting all European programmers:
if the cream of the crop of European programmers exposed you to mounds of
absolutely horrendous Lisp code, what is one to believe of the rest?

BTW, Europe is suffering from more than its many countries and languages.  people
who don't know this continent might be appalled to learn that research goes
on in a whole bunch of languages and that only when something major is done
is it published in today's lingua franca of science, English.  all the
grunt work is repeated in all the languages, fruitlessly if nobody else in
the same language group reads it.  the cream of the crop of European
scientists (and programmers) often learn English well and move to the US
because of sheer exhaustion with this system.  now that the European Union
is going more protectionist than any bloc in recent human history, even my
cat has to eat inferior food because the EU can't import foodstuffs from the
US anymore.  the manufacturer had to set up a "European Headquarters" with a
brand new plant, all of European make, and has to make food according to
European regulations, which are _way_ below the scientific results my pet
food manufacturer has discovered on its own and uses as its forté compared
to other brands.  hooray for Europe!  let's give communism a second chance!

#\Erik
-- 
if we work harder, will obsolescence be farther ahead or closer?