From: Erik Naggum
Subject: Re: dealing with errors (was: "Programming is FUN again")
Date: 1998/03/29
Message-ID: <3100169086878095@naggum.no>
References: <351BC8A3.7CE38A98@badlands.nodak.edu> <6fihvl$7bdrd@fido.asd.sgi.com> <3100091181606046@naggum.no> <6fkjln$79u8n@fido.asd.sgi.com>
Mail-Copies-To: never
Organization: Naggum Software; +47 8800 8879; http://www.naggum.no
Newsgroups: comp.lang.lisp

* Rob Warnock
| Not as "precise", but certainly as "correct", yes?  You won't deny, I
| hope, that incorrect specifications usually lead to incorrect functioning
| of the total system, *especially* when the code is proven to implement
| the specification!

  no, I won't deny that, but there is an important difference between what
  constitutes a correct program and a correct specification: the latter
  must not contain inconsistencies or conflicts.  a program must be allowed
  to contain inconsistencies and conflicts, because it is impossible to do
  everything at once.  since a specification is a statement of the static
  properties of a program, and a program's execution is a dynamic process,
  the kinds of incorrectness that can occur are vastly different in nature.
  all this leads to simpler requirements on specifications (simpler to
  express and implement, anyway) than on programs.  since the program
  should now be derived from the specification, we have removed a
  tremendous fraction of the randomness in the way humans think and
  process information.  writing specifications, however, is much harder
  than writing programs, but at least you can always know whether one is
  internally consistent or not.
| The problem I was noting in that postscript was that in attempting to
| prove *total system* correctness [as opposed to proving correctness of an
| encapsulated library component, which is often fairly straightforward]
| one eventually must regress (step back) to the initial human desires that
| led to the specification -- whereupon one runs smack dab into the
| "DWIS/DWIM" problem ("Don't do what I *said*, do what I *meant*!"), which
| at its root contains the conundrum that much of the time we humans don't
| actually *know* exactly what we want!

  oh, yes.  total system correctness is often meaningless for just this
  reason, even if it can be proven.  I see that we don't disagree on much,
  but I have become wary of the many people who argue against proving
  correctness of components because the human factors in "satisfiability"
  overshadow any correctness properties of a system at some point close to
  the users.

| We violently agree.  However, I was trying to warn that that *still*
| isn't enough to prevent disasters, since the best you'll ever get with
| the best methodologies is code whose behavior meets the originally stated
| goals... WHICH MAY HAVE BEEN WRONG.

  we violently agree, indeed.

| Yet I think we also agree that the truth of this point is no excuse
| for not using the best methodologies we have access to *anyway*...

  precisely, and to wrap this up: I think the report on the Ariane 5
  failure was incredibly intelligent and honest about the issues involved.
  would that similar efforts were undertaken when less critical software
  fails, too.  there is a lot to learn from mistakes, but we will probably
  never learn it until we get rid of the obvious mistakes that we
  _believe_ we know how to handle.  I'm reminded of a "definition" of
  insanity that might apply to programming: to keep doing the same thing
  over and over while expecting different results.
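  for readers who haven't seen that report: the inquiry board traced the
  Ariane 5 Flight 501 loss to an unhandled operand error when a 64-bit
  floating-point horizontal-bias value was converted to a 16-bit signed
  integer.  a minimal sketch of that failure mode (in Python, not the
  original Ada, and with illustrative names of my own invention) contrasts
  an unchecked conversion with a defensive one:

```python
# Sketch of the Ariane-5-style failure mode: converting a wide float
# into a 16-bit signed integer.  Names and structure are illustrative,
# not taken from the actual flight software.

INT16_MIN, INT16_MAX = -32768, 32767

def to_int16_unchecked(x: float) -> int:
    """Conversion written on the assumption that x always fits.
    When the assumption fails, an exception propagates -- the
    unhandled-exception path the inquiry report describes."""
    n = int(x)
    if not INT16_MIN <= n <= INT16_MAX:
        raise OverflowError(f"{x} does not fit in a signed 16-bit word")
    return n

def to_int16_saturating(x: float) -> int:
    """A defensive alternative: clamp out-of-range values instead
    of raising, trading precision for continued operation."""
    return max(INT16_MIN, min(INT16_MAX, int(x)))

# In-range values behave identically; large values diverge.
print(to_int16_unchecked(1234.5))    # both paths agree here
print(to_int16_saturating(1.0e9))    # clamps to INT16_MAX
```

  which of the two is "correct" is exactly a specification question: the
  Ariane 4 envelope made the unchecked assumption valid, and the Ariane 5
  trajectory silently invalidated it.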
  the irony of this whole verifiable-programming situation is that we are
  moving towards the point where we can prove that human beings should not
  write software to begin with, and should instead let computers program
  computers.  however, as long as C++ and similar repeatedly-unlearned-from
  mistakes hang around, programmers will still be highly paid, and
  disasters will come as certainly as sunrise.

#:Erik
-- 
  religious cult update in light of new scientific discoveries:
  "when we cannot go to the comet, the comet must come to us."