Subject: Re: dealing with errors (was: "Programming is FUN again")
From: Erik Naggum <>
Date: 1998/03/29
Newsgroups: comp.lang.lisp
Message-ID: <>

* Rob Warnock
| Not as "precise", but certainly as "correct", yes?  You won't deny, I
| hope, that incorrect specifications usually lead to incorrect functioning
| of the total system, *especially* when the code is proven to implement
| the specification!

  no, I won't deny that, but there is an important difference between what
  constitutes a correct program and a correct specification: the latter
  must not contain inconsistencies or conflicts.  a program must be allowed
  to contain inconsistencies and conflicts because it is impossible to do
  everything at once.  since a specification is a statement of the static
  properties of a program, and a program's execution is a dynamic process,
  the types of incorrectness that can occur are vastly different in nature.
  this all leads to simpler (to express and implement, anyway) requirements
  on specifications than on programs.  since the program should now be
  derived from the specification, we have removed a tremendous fraction of
  the randomness in the way humans think and process information.

  writing specifications, however, is much harder than writing programs,
  but at least you can always know whether it is internally consistent or
  not.

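  the distinction above can be sketched in code (a hypothetical
  illustration, not anything from the original exchange): the
  specification is a static predicate relating input to output, while the
  program is the dynamic process that is supposed to satisfy it.

```python
def satisfies_spec(inp, out):
    """the specification: a static property relating input to output.
    here: out is ordered and is a permutation of inp."""
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    permutation = sorted(inp) == sorted(out)
    return ordered and permutation

def insertion_sort(xs):
    """the program: a dynamic process that should satisfy the spec."""
    result = []
    for x in xs:
        i = len(result)
        while i > 0 and result[i - 1] > x:
            i -= 1
        result.insert(i, x)
    return result

# checking the program against the specification on one input
assert satisfies_spec([3, 1, 2], insertion_sort([3, 1, 2]))
```

  note that checking the spec for internal consistency is a far simpler
  problem than checking the program on every input it might see.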
| The problem I was noting in that postscript was that in attempting to
| prove *total system* correctness [as opposed to proving correctness of an
| encapsulated library component, which is often fairly straightforward]
| one eventually must regress (step back) to the initial human desires that
| led to the specification -- whereupon one runs smack bad into the
| "DWIS/DWIM" problem ("Don't do what I *said*, do what I *meant*!"), which
| at its root contains the conundrum that much of the time we humans don't
| actually *know* exactly what we want!

  oh, yes.  total system correctness is often meaningless even if it can be
  proven just for this reason.  I see that we don't disagree on much, but I
  have become wary of the many people who argue against proving correctness
  of components because the human factors in "satisfiability" overshadow
  any correctness properties of a system at some point close to the users.

| We violently agree.  However, I was trying to warn that that *still*
| isn't enough to prevent disasters, since the best you'll ever get with
| the best methodologies is code whose behavior meets the originally stated

  we violently agree, indeed.

| Yet I think we also agree that the truth of this point is no excuse
| for not using the best methodologies we have access to *anyway*...

  precisely, and to wrap this up: I think the report from the Ariane 5
  failure was incredibly intelligent and honest about the issues they were
  involved in.  would that similar efforts were undertaken when less
  critical software also fails.  there is a lot to learn from mistakes
  that we probably never will learn from until we get rid of the obvious
  mistakes that we _believe_ we know how to handle.  I'm reminded of a
  "definition" of insanity that might apply to programming: to keep doing
  the same thing over and over while expecting different results.

  the irony of this whole verifiable programming situation is that we are
  moving towards the point where we can prove that human beings should not
  write software to begin with, and we let computers program computers.
  however, as long as C++ and similar repeatedly-unlearned-from mistakes
  hang around, programmers will still be highly paid and disasters will
  come as certainly as sunrise.

  religious cult update in light of new scientific discoveries:
  "when we cannot go to the comet, the comet must come to us."