Subject: Re: data hygiene [Re: Why is Scheme not a Lisp?]
From: Erik Naggum <erik@naggum.net>
Date: Sat, 23 Mar 2002 07:53:28 GMT
Newsgroups: comp.lang.lisp
Message-ID: <3225858821399981@naggum.net>

* "Paul F. Dietz"
| Mr. Naggum, no, it is not just plain wrong.  *You* are just plain wrong.
| *Study* that algorithm I posted.  Build the dependency graph of the
| loads.  You will see that the maximum path length in the alist graph is
| N+O(1) vs. 2N+O(1) for the plist.

  Mr. Dietz, you are anti-productive.  There is more to this than the
  "maximum path length".  Nobody has _ever_ disputed these claims, but you
  continue to neglect the _additional_ costs: things like cache lines and
  the register requirements of your algorithm.  What you offer us is a
  _complication_ that has absolutely no value at the level you offer it --
  it has significant value in understanding _some_ issues, but when you
  refuse to listen to the measurable facts, something is just plain wrong.
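
  For those who have not followed the whole exchange, here is a bare-bones
  sketch of the two searches in question -- my own illustration, not the
  code either of us actually timed -- just to make explicit the load chains
  being counted:

    (defun alist-search (key alist)
      ;; The chain that limits this search is the successive CDRs down
      ;; the spine, one dependent load per element; the CAR of each cell
      ;; and the CAR of the pair hang off that chain and can overlap
      ;; with the next CDR.  Hence the N+O(1) figure.
      (loop for pair in alist
            when (eq (car pair) key)
              return (cdr pair)))

    (defun plist-search (key plist)
      ;; Here every step needs two loads in sequence, CDR and then CDR
      ;; again, before the next key is even reachable.  Hence 2N+O(1).
      (loop for (k v) on plist by #'cddr
            when (eq k key)
              return v))

  Nobody disputes what those counts say about an idealized machine; the
  dispute is over what happens when those loads meet a real cache and a
  real register file.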

  Optimization is about making things execute more efficiently, faster in
  this case.  When your code runs significantly slower than mine (93 ns vs
  71 ns on my machine), there is something _wrong_ in your premises for
  optimization.  Is this so hard to understand and accept?  A 31% increase
  in the time spent on scanning an alist is not optimization in my book.
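
  For the record, the kind of measurement I mean is nothing more mysterious
  than this -- a sketch, with an arbitrary iteration count, not the exact
  harness that produced the numbers above:

    (defun ns-per-call (fn &optional (iterations 1000000))
      ;; Call FN enough times that loop overhead and clock granularity
      ;; wash out, then report elapsed real time per call in nanoseconds.
      ;; To time something as small as a single list probe one would
      ;; inline the search in the loop instead of funcalling it, but the
      ;; principle is the same.
      (let ((start (get-internal-real-time)))
        (dotimes (i iterations)
          (funcall fn))
        (float (/ (* (- (get-internal-real-time) start) 1000000000)
                  (* internal-time-units-per-second iterations)))))

  Run both pieces of code through the same harness, on the same data, on
  the same machine, and compare.  Anyone can repeat it.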

| I've presented a trivial proof that you cannot (it's right in front of
| you there.)

  What is right in front of me is not just proofs, but also measurements.
  Since you diss my measurements, I must assume that you are either unable
  to deal with the real world or have a personal problem, since you could
  very easily _repeat_ these measurements in a straightforward scientific
  way, yet you cling to your proofs.  I find this _extremely_ annoying.

  We are, or at least I am, discussing optimization on a real machine with
  real registers and real memory.  It is very useful to be able to talk
  about some abstract machine with abstract registers and abstract memory,
  but optimization on real machines obeys many _additional_ constraints.
  A proof that considers only one set of constraints and completely ignores
  all the others is not useful when discussing which algorithm will run
  faster on actual hardware.  And let us not forget that some of these
  optimizations give you a lower cost for some things at the expense of
  higher costs for others.  Optimizing access into an alist that is many
  thousands of key-value pairs long is precisely the kind of stupid thing
  that people who learn that Lisp is for List Processing would do.  Real
  programmers would find that not only is there no benefit to your more
  complex algorithm, it also has far higher costs than savings.
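
  And if anyone really does have thousands of key-value pairs to search,
  the answer is not a smarter list scan at all.  A sketch of the obvious
  alternative:

    ;; Keys that number in the thousands belong in a hash table, where a
    ;; lookup costs about the same whether there are ten entries or ten
    ;; thousand.
    (defvar *properties* (make-hash-table :test #'eq))

    (setf (gethash 'color *properties*) 'red)
    (gethash 'color *properties*)       ; => RED, T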

  If I have "mistaken" this discussion for being some academic exercise in
  theoretical "optimization" apart from implementation, please let me know.
  I have sort of based my entire line of reasoning on the kinds of design
  choices people were inclined to make because "alists are faster than
  plists".  Theoretical optimization with a large number of registers is
  fine -- as long as it actually runs faster, too.  A _potential_ to run
  faster given a more perfect universe does not impress me when I want raw
  speed.

| How about you show me an algorithm that does plist search in better than
| 2N+O(1) load latency?

  So you want others to compete with you on your home ground and refuse
  adamantly to consider any other premises.  How interesting.

  I have tried to make you realize that I have discussed actual hardware.
  There has been absolutely no secrecy around what I have been doing, but I
  have continually found that you want to discuss something else that does
  not yield better results.  I am profoundly sorry to see that you are
  completely oblivious to hardware and think a _more_ optimal algorithm can
  reasonably be expected to run _slower_ than a less optimal algorithm.  I
  consider an algorithm to be faster when it _is_ faster, as _measured_.

| Funny, I was wondering the same thing.

  Yeah, another case of the mirror game is brewing.  Sheesh, some people.

  But obviously we have different purposes: You want to "prove" algorithmic
  optimization on an abstract machine with an infinite number of registers
  and constant memory access costs.  I actually want to obtain the fastest
  implementation of an algorithm on a given machine (repeated for more
  machines as necessary), including its variable memory costs due to
  multi-level caching, its register shortage, etc., and I consider more
  complex algorithms that run slower to be going in the wrong direction.

  It is time to realize that "optimization" is both a mathematical and an
  engineering discipline.  I had sort of hoped you could have figured out
  what _I_ have been demonstrating, when you are so bloody eager to show me
  stuff that does not answer any question anybody had asked before you came
  along.  So, you have given us a nice proof that does not matter, code
  that runs slower than my _far_ simpler code (which even checks for nil in
  the alist, which you skimped on!), and you still huff and puff.  What
  makes you tick?  I am frankly seriously annoyed by people who refuse to
  listen to simple, repeatable, measurable facts and prefer their theories.
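
  For those wondering what check I mean: the standard explicitly permits a
  bare nil to appear in an alist in place of a pair, and assoc is required
  to skip it, so a hand-rolled search has to guard the cell before looking
  at its key.  A sketch, with my own names:

    (defun careful-alist-search (key alist)
      ;; Like ASSOC: a NIL sitting where a pair should be is ignored
      ;; rather than treated as a pair whose car is NIL.
      (loop for pair in alist
            when (and pair (eq (car pair) key))
              return pair))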

///
-- 
  In a fight against something, the fight has value, victory has none.
  In a fight for something, the fight is a loss, victory merely relief.