Subject: Re: Core Lisp (was Re: cautios question (about languages))
From: Erik Naggum <erik@naggum.no>
Date: 1999/07/29
Newsgroups: comp.lang.lisp
Message-ID: <3142248319960082@naggum.no>

* Rainer Joswig
| How big is a working Franz Allegro lisp image?

  on my system, the free system memory decreases by 740K when I start up
  the second Allegro CL 5.0.1.  the similar process with CLISP requires
  1108K of fresh memory.  it is very hard on my system to measure the exact
  memory consumption of a process except for the fresh memory it grabs.
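  the before/after comparison described above can be sketched as a small
  shell script.  this is a hypothetical sketch, not the actual procedure
  used: the path to the image is made up, the free-memory figure is read
  from /proc/meminfo (Linux assumed), and the two sample readings are
  stand-ins chosen to reproduce the 740K figure.

```shell
# sketch: estimate the fresh memory a process grabs by sampling free
# system memory before and after starting it.
# free_kb reads the MemFree field (in kilobytes) from /proc/meminfo.
free_kb() { awk '/MemFree/ {print $2}' /proc/meminfo; }

before=64000      # e.g. before=$(free_kb), sampled before starting the image
# /usr/local/acl5/lisp &   # hypothetical path to the image under test
# sleep 5                  # let it reach its prompt
after=63260       # e.g. after=$(free_kb), sampled once the image is up
echo "fresh memory: $((before - after))K"
```

  with the stand-in readings above this prints "fresh memory: 740K".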

| Last time I looked, ACL generated really large code on RISC machines (has
| this changed?).

  it's impossible to tell since you don't give any clue when that last time
  was, what "really large code" means, or which RISC processor you're
  talking about.  I _could_ say "yes, it's much, much better now" and you
  wouldn't know what I had answered, but anyone careless enough to believe
  your question was meaningful would probably believe my answer, too.  that
  is to say, I don't believe people are actually interested in performance
  information from others in a free forum -- even if people are honest,
  they are way too sloppy to produce meaningful data to base business
  decisions on, and anyone with an agenda will "win" by being selectively
  dishonest, as much comparative "marketing" and campaigning already shows.

| Reasons for a Core Lisp are:
| 
| - small footprint is still an advantage on various devices
|   (imagine placing a Lisp system in any arm of a robot)

  as others have indicated, ROM is cheaper than RAM.

| - it's much easier to understand

  this is actually not so.  a Core Lisp would need more macrology and more
  higher-level code to support good programming styles, and would suffer
  like Scheme when small communities of people develop incompatible
  "libraries" of extensions.  agreeing on a large base serves a larger
  community.  we cannot afford to let a thousand flowers bloom when the
  soil only supports a hundred -- we'll all vanish and the weeds will take
  over completely.

| - it's much easier to port

  you don't port the Lisp code, you port the compiler and run-time system.
  if you're even moderately smart, the run-time system is written largely
  in Lisp and what you really need is a proto-Lisp, not a Core Lisp, but
  you wouldn't want anyone to actually program in the proto-Lisp besides
  the engineers who boot up a Common Lisp system.

| - it's much easier to experiment with extensions and changes

  this is wrong -- tweaking something is easier than building from scratch.

| - faster startup time

  this is wrong -- startup time is unaffected by total system size.

| - small means also that the kernel fits into current cache sizes
|   (I guess the boost MCL has got from the PowerPC G3 processor
|   is especially because it has a small footprint and the G3 has a
|   nice second level cache architecture)

  what use is this when you also need cache space for all the user code
  that makes the system worth having, again?

| - you might be able to experiment with different GC strategies

  unless by "Core Lisp" you mean a proto-Lisp that lives below the real
  Lisp, this does not seem to be a valid argument.

| - it might be a good teaching vehicle

  we've been there before.  some students prefer to know that as they learn
  more and more, a lot of work has already been done for them, while other
  students prefer to be able to learn it all in a short time and go on to
  build stuff from scratch.  e.g., you could easily teach medicine in a
  year if you wanted to produce doctors faster, but they would still need
  seven years to be trustable in any critical situation where they would be
  called upon.  society would have to respond to one-year doctors with a
  lot more bureaucracy, and each doctor's skills would need to be charted
  with much more detail than we do today.  so you would get more doctors
  through the system, at phenomenal increases in total system costs.  the
  same is true of any other complex system that is taught in stages.
  
| - it could be the base for an OS kernel

  nothing prevents you from doing this already.  you don't need somebody
  else to define a Core Lisp for you first, in other words.  just do it.

| - you should be able to run thousands of Lisp threads on a single machine
|   (-> a web server, file server, ...)

  this does not relate to system size, only _incremental_ process size.  a
  bigger base system will generally have smaller incremental sizes than a
  small base system, where each thread needs to set up its own environment.

  I wonder which agenda John Mallery actually has -- what he says doesn't
  seem to be terribly consistent.  neither do your arguments, Rainer.  in
  brief, it looks like you guys want to destabilize the agreement that has
  produced what we have today, for no good reason except that insufficient
  catering to individual egos has taken place up to this point.

  haven't various people tried to produce a Core English already?  how well
  did those projects go?  more importantly, why isn't there anything
  available in Core English except the designer's teaching materials?  I'd
  say the evidence is clear: people don't want to be artificially limited
  by some minimalist committee.

  Core Lisp is a mistake, and it will be a serious drain on the resources
  available in the Common Lisp community.  language designer wannabes and
  language redesigners should go elsewhere.

#:Erik
-- 
  suppose we blasted all politicians into space.
  would the SETI project find even one of them?