Subject: Re: Core Lisp (was Re: cautios question (about languages))
From: Erik Naggum <erik@naggum.no>
Date: 1999/07/30
Newsgroups: comp.lang.lisp
Message-ID: <3142319226936032@naggum.no>

* Rainer Joswig
| How big is a working Franz Allegro lisp image?

* Erik Naggum
| on my system, the free system memory decreases by 740K when I start up
| the second Allegro CL 5.0.1.  the similar process with CLISP requires
| 1108K of fresh memory.  it is very hard on my system to measure the exact
| memory consumption of a process except for the fresh memory it grabs.

* Rainer Joswig
| You have not answered my question.

  I think what I wrote suggests that I'm aware of that.  your question is
  like a child asking "daddy, why is the water blue?" and to avoid getting
  into a lot of detail because the child doesn't understand his question,
  it is sometimes useful to answer "because they put blue stuff in it".
  [this is an actual conversation between child and father.]

  what I have written above is what the cost in RAM would be in a ROM-based
  implementation.  I regret that you asked an unanswerable question.

| Unfortunately it's easy for me to put my software in RAM, not in ROM.

  so now "easy for me" has moved up to the top of the list of requirements?

  you're not really trying to solve any problems, are you, Rainer?  this
  _is_ just another stupid exercise in showing how Common Lisp is Big and
  Bloated and how bad that is, isn't it?  let's create more problems so
  we're sure we can't solve any of them!  that's how we keep academics out
  of the unemployment statistics when they have ceased to be useful, but it
  is still not a smart way to use their brainpower.

| A user would use the usual CL on top of that [the Core Lisp].

  this means that you actually confuse a proto-Lisp with a Core Lisp.  I'm
  frankly amazed that this is possible.  people build proto-Lisps in order
  to make booting the whole system easy -- it is not what people should use
  to program anything, and it's so implementation-dependent that there is
  no point at all in standardizing it, especially not by people who don't
  actually know how to boot a Common Lisp system.

  a Core Lisp that is just like a proto-Lisp upon which everything else in
  Common Lisp is built is a waste of time and effort -- it would be like
  defining some primitives in C and, instead of accepting that as a
  necessity, making a whole lot of stink about how others need to define
  the same primitives in C in _their_ Common Lisps so that some other
  Common Lisp which has exactly the same external definition can be
  retargeted to another Core Lisp.  why would anyone ever think of wasting
  time on this?  sheesh.
  
| Tweaking something small should be easier than tweaking something large.

  this has never been the case.  what makes you think it suddenly became
  the case?  why _should_ it be easier, when it clearly isn't?

| This is wrong.  Sure startup time is affected by total system size.  You
| need to be careful about that at initialization time and load time.  Is
| the code still in cache, etc.  Many systems now have very fast cache
| systems (for example the backside cache of the G3), taking advantage of
| that is not unreasonable.  You might have to deal with non-locality of
| code and data, ...

  you're just bickering now, Rainer.  you do understand that this stuff is
  completely tangential to the issue of total system size.

  the usually _relevant_ costs of startup are related to how much cannot
  be done prior to startup.  this includes dynamic linking (with shared
  libraries), initialization of memory pools, and any preparations
  necessary for graceful error handling and exit.  this is stuff that does
  not take much time to do, and the likelihood that it is in cache is
  directly related to how often you do it, not to how big the total system
  is.  if it
  ever would be important to reduce the cache misses at startup time,
  compile the startup code specially, and earn exactly nothing except that
  you might win a stupid startup-time contest arranged by people who have
  no clue about what makes a whole Common Lisp system useful.

| The use of that is that a large part of your program might uses routines
| from your kernel.  Additionally runtime services like GC would surely
| benefit if they could stay in cache.

  how would all of this wonderful stuff of yours fit in the _same_ cache
  that can't hold the full system today, when it has to be at least as big
  after it has been slopped onto the Core Lisp?  whatever made you believe
  that the cache could hold more just because the core is smaller?  geez.

  but _are_ we really defining a Core Lisp with the strongest requirement
  that it fit in today's processor caches?  is that what this exercise is
  _really_ about?  no, I don't think so.  the cache argument is bogus,
  the "easy for me" argument is bogus.  this is all about Common Lisp being
  too big and bloated and someone wanting so desperately to prove it just
  to annoy other people.

| > I wonder which agenda John Mallery actually has -- what he says doesn't
| > seem to be terribly consistent.  neither do your arguments, Rainer.  in
| > brief, it looks like you guys want to destabilize the agreement that has
| > produced what we have today, for no good reason except that insufficient
| > catering to individual egos have taken place up to this point.
| 
| Sure, go on Erik.  Make fool out of yourself by blaming other people.
| It's a well known tactic by you to mix in your personal attacks.

  yet, what is really amusing is that you answer in _much_ worse kind.  the
  obvious conclusion is that I must have hit on some real truth and bruised
  some of the already very fragile egos.

| >   haven't various people tried to produce a Core English already?
| 
| What has this to do with the topic I was discussing?

  it shows that you don't learn from history and available experience.

| > Core Lisp is a mistake, but it will be a serious drain on the resources
| > available in the Common Lisp community.  language designer wannabes and
| > language redesigners should go elsewhere.
| 
| Erik, you finally made it in my kill file.

  again, it is very obvious that truth hurts: this is a waste and you guys
  know it, but at least the effort will stand a chance of being remembered.

| Not that I expect you to care, but I don't feel that urgent a need to
| read your pointless rantings anymore.  Better try to impress beginners
| with your excurses in Lisp and how the world is according to you.

  I'm sorry, Rainer, I'm not used to being exposed to such reeking envy,
  but I feel profoundly sorry for you, yet I'm happy you won't respond, now
  that I am in your kill file: the vileness of your response suggests that
  there is no limit to what disgusting level you could sink to in order to
  prove I'm a fool you should not have to listen to, even though you do
  know I speak the truth about your useless waste of time.

  a Core Lisp would be good if we wanted to encourage more implementations
  in the free software world, or wanted to encourage the retargetability of
  existing implementations.  already, however, whoever wants to reimplement
  Common Lisp is better off buying or reusing existing code for the
  exterior of the language (unless the argument from McCarthy and Joswig is
  that the existing implementations aren't as good as they would have made
  them, had they been allowed to do them) -- any smarter approach to
  implementation would also be different from the past, and so any Core
  Lisp would therefore make better implementations _less_ likely.

  but what better way to respond to "it's a waste of time, stupid!" than to
  acknowledge it by responding "I'm not listening to you!".  this would
  have been _so_ amusing had it not been for the enormous waste of effort
  and derailing of community efforts that will now take place in spite of
  the obvious.

  I also thought this kind of idiocy was what we had the Scheme community
  to learn from, so we would not have to repeat it ourselves.  I guess I
  was wrong: some people just have to make their very own mistakes before
  they learn.

  now, even with the obvious futility of this project, there is a need for
  Core Lisps in the plural: how you define them is so dependent on the
  specific application needs that each project will want its own definition
  -- and not just because they have unique needs, but because it takes more
  time to evaluate the existing alternatives than to roll one's own.  so
  it's better to let each project learn from others via the literature and
  define its own Core Lisp, than to standardize it for all to use.

  the history of programming has shown us that subsetting languages does
  not work, neither in the definition phase where you try to define which
  components some component depends on (people then agree on which subset
  to use and the full definition is lost), nor in the shaking of the
  component tree so unneeded components fall out (programmers will want to
  use features without having to remember their component, and will hate to
  reimplement things only because they don't want a whole component).  yet,
  every Lisp implementor has to grapple with the "cold load order problem"
  -- from which substrate the first few definitions can be executed.  every
  Lisp system that is loading needs to have a few pieces in place, but in a
  natively compiled Lisp, what's necessary for loading the system is not
  what is necessary for running the system once fully loaded.  and which
  primitives to port and base others on depends on how the compiler is
  built and used, and cannot be ported to another compiler unless that
  compiler has demands put on it that it is meaningless to demand from a
  competitor in a free market.
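
  the cold-load ordering can be sketched concretely.  the fragment below
  is a hypothetical illustration in Common Lisp -- every PROTO- name in it
  is invented for this sketch and comes from no actual implementation's
  boot sequence:

```lisp
;; hypothetical cold-load sketch; PROTO-DEFUN, PROTO-CADR and PROTO-CADDR
;; are invented names, not taken from any real implementation.
;; during a cold load, the familiar defining macros do not exist yet, so
;; the first few definitions must be installed with whatever primitive
;; the substrate provides -- here, a raw SETF of SYMBOL-FUNCTION:
(setf (symbol-function 'proto-cadr)
      #'(lambda (x) (car (cdr x))))

;; only once enough substrate is in place can a defining macro itself be
;; bootstrapped, after which later files load in the familiar style:
(defmacro proto-defun (name args &body body)
  `(setf (symbol-function ',name)
         #'(lambda ,args ,@body)))

(proto-defun proto-caddr (x) (proto-cadr (cdr x)))
```

  which primitives such a substrate must supply, and in what order the
  files that use them may be loaded, is exactly the implementation-
  dependent part -- which is why it resists standardization.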

  I can only assume that the people who want to define a subset are not
  familiar with the boot problem in the environments they use all day, or
  ignore it for some higher agenda.  e.g., it is "educational" to try to
  dump a random package with Emacs.  effectively, the packages that are
  dumped with Emacs are written in a much smaller Emacs Lisp than the
  packages that are loaded during normal execution.  now, what does the
  smart programmer do when he tries to load a new package that needs some
  functionality from another package that should not be loaded?  why, he
  moves the definitions around so they fit!  he does not, and I repeat: he
  does not, sit down to define a standard for which functions are needed in
  the "Core Emacs Lisp" and try to get all the packages in the system to
  be expressible in that language.  but such is what Core Lisp would be to
  Common Lisp.

#:Erik
-- 
  suppose we blasted all politicians into space.
  would the SETI project find even one of them?