Subject: Re: Tail recursion & CL
From: Erik Naggum <>
Date: Mon, 08 Oct 2001 22:53:59 GMT
Newsgroups: comp.lang.lisp
Message-ID: <>

* Juliusz Chroboczek <>
| JS> If you run your (loop) example on a laptop running on batteries
| JS> you will see in not so long time that even (loop) is doomed to
| JS> "terminate" because some resource is exhausted.
| I see.  It's actually useless to try to write reliable programs,
| because my machine might very well run out of batteries.

  Juliusz, I find your approach in this thread to be irrational, and I
  suspect that it has some philosophical underpinnings that I cannot quite
  get a grip on.  You have removed the real world from your reasoning, and
  when people reintroduce it, your arguments fall apart, as if they
  depended on a reality different from the one we normally experience.
  There is something wrong
  with a philosophy when that happens.  In this case, your probably half-
  facetious response betrays, I think, a failure to differentiate between
  the normal and the exceptional.  This is akin to what we find in politics
  when people make no distinction between normal and emergency ethics --
  under normal circumstances, no person's life is in danger and endangering
  a person's life is wrong, but in an emergency, a person's life is in
  danger and it may be ethical to endanger another person's life, such as
  the attacker of the first person.  If you attempt to apply normal ethics
  to the emergency situation, you _will_ endanger someone's life, and that
  is wrong whichever way you look at it; there is no right answer anymore,
  which means your ethics has failed.  But you cannot argue that your
  normal ethics has failed across the board -- it has only failed in an
  emergency.  Likewise with software: if your theory assumes a normal
  course of execution and termination, the theory _fails_ if you apply it to
  exceptional termination, but that does not mean the theory is wrong --
  only that it has been abused for exceptional situations.

  This issue is of course getting more complex in software where the normal
  course of execution includes a large number of exceptional situations,
  but running out of resources is clearly such an emergency that no theory
  of reliable _software_ can deal with it.  The hardware exists.  It cannot
  be abstracted away while the theories are still expected to have their
  usual predictive power.  Regardless of how reliable your software is, if
  you are running it on a laptop computer in, say, Kabul, it is incredibly
  stupid to argue that the _software_ failed when the hardware evaporates,
  or that it is useless to write reliable software because the machine
  might literally blow up on you.  On the other hand, we do have
  people from vendors in this newsgroup who argue that just because you
  need sockets, which are not standardized, you might as well disobey some
  other parts of the standard because the standard is not perfect in its
  power to encompass the needs of the world.  I think the same bogus idea
  underlies both of your attitudes toward perfection and normalcy: if you
  define a perfection that is inherently unattainable, you can go on
  griping about irrelevant flaws forever.

  As far as I can tell, Common Lisp is good enough that it attracts people
  who can conjure up a desire for an irrationally perfect world and thus
  never be satisfied with what they can actually get in real life.
  For instance, you will never get a guarantee of tail-call merging in the
  standard, and it is _completely_ useless to ask for one, or to pretend
  that you can get anything more than actual vendor support for what you
  want.
  The standard is not at fault for not giving you the irrationally perfect.
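  To make the point concrete, here is a small sketch (the function names
  are my own, invented for illustration; no particular implementation is
  assumed):

```lisp
;; Tail-recursive sum of 1..N.  Whether this runs in constant stack
;; space is entirely up to the implementation and its optimization
;; settings -- the standard makes no promise of tail-call merging,
;; so a large N may exhaust the stack on some implementations.
(defun sum-to-recursive (n &optional (acc 0))
  (if (zerop n)
      acc
      (sum-to-recursive (1- n) (+ acc n))))

;; The portable way to get constant stack space is plain iteration,
;; whose behavior the standard does define.  DO steps I and ACC in
;; parallel until I reaches zero, then returns ACC.
(defun sum-to-iterative (n)
  (do ((i n (1- i))
       (acc 0 (+ acc i)))
      ((zerop i) acc)))
```

  In practice, many implementations do merge tail calls under settings
  like (optimize (speed 3) (debug 0)), but that is vendor support, not
  something a conforming program has a right to rely on.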

  My hero, George W. Bush, has taught me how to deal with people.  "Make no
  mistake", he has said about 2500 times in the past three weeks, and those
  who make mistakes now feel his infinite wrath, or was that enduring care?