Subject: Re: Time
From: Erik Naggum <>
Date: 1999/11/23
Newsgroups: comp.lang.lisp
Message-ID: <>

* Devin Currie <>
| I am hoping that someone would have such a list because it will take
| me a while to compile such a list myself after enough experience and
| testing.

  that is a reasonable thing to hope for, but unfortunately, there is no
  substitute for experience and testing.  the experience you get varies
  from version to version of each implementation and may be completely
  useless from implementation to implementation.  the best you can hope for
  is to understand the major factor for the time and space consumption of a
  function.  e.g., the function = (with n > 2 arguments) will return false
  as soon as two neighboring arguments are unequal, which means the running
  time is proportional to n, while the function /= (with n > 2 arguments)
  will return false as soon as one argument is equal to any other argument,
  which means the running time of each test is proportional to n, and of
  the whole function proportional to n*(n/2), or n², for brevity.
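  the pairwise semantics can be sketched as follows; MY-= and MY-/= are
  made-up names for illustration, not standard functions, but they mirror
  what the standard = and /= do with n arguments:

```lisp
;; sketch only: MY-= and MY-/= are hypothetical names that mirror the
;; pairwise semantics of the standard = and /= on numbers.

(defun my-= (first &rest rest)
  ;; true iff all arguments are numerically equal; gives up at the
  ;; first mismatch, so the running time is proportional to n.
  (loop for x in rest always (= first x)))

(defun my-/= (&rest args)
  ;; true iff no two arguments are numerically equal; each argument is
  ;; tested against every later one, so the running time of the whole
  ;; function is proportional to n*(n/2).
  (loop for (x . more) on args
        never (member x more :test #'=)))
```

  note that (my-= 1 1 2) returns false after a single comparison, while
  (my-/= 1 2 1) finds the duplicate only by testing the first argument
  against everything that follows it.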

  however, just how much time is involved can vary tremendously.  suppose
  the numbers are all fixnums and declared as such.  you would probably
  expect the complexity of = to be such that the compiler can inline the
  tests completely and produce very efficient machine code.  the complexity
  of /=, however, is such that you would face quadratic expansion of the
  generated machine code were it done inline, so it would be a function
  call away.  that function would most probably be more generic (not in the
  CLOS sense) and not be able to benefit from the declaration of the
  fixnums, so it would do type tests or more generic number comparisons.
  so even though = and /= would differ by n/2 in theoretical execution
  time, you could very easily wind up with a large constant factor on top
  of that difference that for most practical purposes (reasonably small
  values of n) would dwarf n and even n².
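  to see what your compiler actually makes of such declarations, a pair of
  small test functions and DISASSEMBLE are enough; the resulting machine
  code is entirely implementation-dependent, so this is a probe, not a
  prediction:

```lisp
(defun three-fixnums-= (a b c)
  ;; with these declarations a compiler may open-code = as a couple of
  ;; machine-level compares ...
  (declare (fixnum a b c)
           (optimize (speed 3) (safety 1)))
  (= a b c))

(defun three-fixnums-/= (a b c)
  ;; ... while /= may well remain a call into a more generic
  ;; comparison routine, declarations or not.
  (declare (fixnum a b c)
           (optimize (speed 3) (safety 1)))
  (/= a b c))

;; (disassemble #'three-fixnums-=) and (disassemble #'three-fixnums-/=)
;; show the difference, if any, in your implementation.
```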

  obviously, the thresholds and conditions for when such factors play a
  bigger role than the theoretical performance values simply cannot be
  communicated exhaustively, and most of us have only sketchy knowledge of
  the vast space of interacting conditions that we face daily, anyway.
  that's why profiling the execution-time behavior of the code on real-life
  data under real-life conditions is such a big win.  understanding the
  results of profiling is a challenge in itself.
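  the standard TIME macro is the cheapest such probe; full profilers are
  implementation-specific, so consult your vendor's documentation for
  those.  e.g. (assuming your implementation's CALL-ARGUMENTS-LIMIT
  permits APPLY over this many arguments):

```lisp
;; TIME is standard Common Lisp: it reports at least run time, and most
;; implementations also report bytes consed.
(defparameter *numbers* (loop for i below 1000 collect i))

;; = gives up after the first comparison here (0 is not 1), while /=
;; must compare each argument against all later ones before concluding
;; they are all distinct.  assumes CALL-ARGUMENTS-LIMIT > 1000.
(time (apply #'= *numbers*))
(time (apply #'/= *numbers*))
```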

  the only good news here, relative to your hopes, is that you don't need
  a general list, which would overwhelm you if you got it, anyway; what
  you need is experience with whatever you're actually trying to do.  with
  probes into the solution space, you will also be able to vary the way you
  do these things radically and achieve very different optimization results
  -- and Common Lisp is the best language ever to let you fiddle with the
  many varying algorithms that could solve a problem, not just fiddle with
  a single implementation because it was so hard to write that it's too
  hard to discard it.

  so the interesting question becomes: how much time and space does it take
  to write efficient Common Lisp programs?  despite numerous efforts to
  profile the _programmers_, interpreting the results that have come back
  is far from trivial.  e.g., most programmers spend a lot of CPU time in a
  function of their brain that returns true for the input question "is this
  really supposed to work?" despite mounting evidence to the contrary, yet
  there is evidence that programmers whose interrupt handlers for the often
  non-maskable interrupt "there's _got_ to be a better way than this!"
  remain enabled are more able to return false from the above function and
  thus waste less time on preconceived notions of what constitutes speed
  and correct code.
  unfortunately, there's no substitute for experience with this, either.