Kent M Pitman <email@example.com> wrote:
| Mathematics gives us tools for abstracting information that allows us
| to know important things that transcend the layers, like "information
| is persistent" or "storage is randomly accessible" or "this data
| structure is linked". And sometimes those terms are useful as
| descriptions. But to say that these form laws of behavior is what the
| problem is. Different abstractions can be substituted without
| violating physics, and "abstraction theory" (if there is such a
| field), not "physics", is what tells us what's doable and not.
This is in fact what Dijkstra's "A Discipline of Programming"
or David Gries's "The Science of Programming" is all about:
a theory of abstraction which allows you to prove useful lemmas
about *each* elementary programming construct, about each allowed
*composition* of elementary programming constructs, and thus
grow a program and its proof together -- which *is* possible
[and those books show how] -- rather than trying to take an
already-completed ball of mud and prove something about it --
which we know, thanks to Turing, is impossible in general
["The Halting Problem"].
For any serious professional programmer who has not read these,
I strongly recommend doing so. Like learning Lisp itself, it will
at least expose you to a *very* different view of constructing
software, one which will illuminate & inform, *even if* you do
not choose to follow the suggested methodology rigorously in your
own coding afterwards [as I confess I seldom do]. Those not familiar
with Dijkstra's other works [and his sometimes terse or even dense
style] may find the Gries book more approachable; conversely, for
those who wish to jump directly into the deep end, the Dijkstra book
begins directly: Chapter 1, "Executional Abstraction".
Their formalism is based heavily on the notion of "predicate
transformers", computational operations which transform a
post-condition on the total system state *after* the operation
into a pre-condition on the total system state *before* the
operation, and in particular, the "weakest pre-condition" induced
by an operation onto a given post-condition. A simple example:
Suppose the predicate R(x) = x > 3 [in CL, (defun r (x) (> x 3))].
Given the operation S(x) = x := x - 1 [in CL, (decf x)], what is
the *weakest* [most liberal] pre-condition P(x) such that if P(x)
is true *before* an instance of S, then R(x) will be true *after*
the instance of S? That is:
(assert (p x)) ; What is the weakest P such that...
(assert (r x)) ; ...*this* ASSERT never fires?
Of course, the answer is simply P(x) = x > 4 [or (defun p (x) (> x 4))].
Thus the common computational operation of decrementing a number
has been reinterpreted as a "predicate transformer" that converts
one predicate into another, sort of a macro, if you will, that rewrites
any post-condition into its corresponding weakest pre-condition.
wp(S,R) ==> P
or in our example:
wp((decf x), (> x y)) ==> (> x (1+ y))
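In fact, in CL one can sketch this rewriting directly [a toy of my
own, *not* an example from either book]: if predicates are
one-argument functions and a (deterministic) statement S is a
function on the state, then wp(S,R) is just the composition of R
with S:

```lisp
;; Toy sketch [mine, not from the books]: predicates and statements
;; as one-argument functions; for a deterministic statement S,
;; wp(S,R)(x) is simply R(S(x)).
(defun wp (s r)
  (lambda (x) (funcall r (funcall s x))))

(let* ((s (lambda (x) (1- x)))       ; the operation x := x - 1
       (r (lambda (x) (> x 3)))      ; post-condition R(x) = x > 3
       (p (wp s r)))                 ; derived pre-condition P
  (list (funcall p 5)                ; => T   [5 > 4, so 4 > 3 after DECF]
        (funcall p 4)))              ; => NIL [4 > 4 fails]
```

Note that sequencing falls out for free: wp(S1;S2, R) = wp(S1, wp(S2, R)),
which in this toy encoding is just (wp s1 (wp s2 r)).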
If that's as far as it went, it would be useless. But they go on
to develop a "library" of such transformers, an "algebra" of them,
that allows you to start at the end of your program [or the section
of program you're working on] with the desired outcome [the post-
condition] and back your way into whatever starting system state
you happened to be handed initially [the pre-condition]. Included
of course are the classical "structured programming" constructs
of sequencing (concatenation), alternation (IFs & CASEs), and
looping (WHILE). Along the way one learns a completely new way
to look at guaranteeing loop termination, and a correspondingly
new way of choosing the definitions (and semantics) of your loop
variables.
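To make that concrete [again a toy of my own devising, not an example
from either book], here is a loop that carries both its invariant and
its termination "variant" [a non-negative quantity that strictly
decreases on every iteration, so the loop *must* stop] as explicit
assertions:

```lisp
;; Toy illustration [mine]: quotient & remainder by repeated
;; subtraction, with pre-condition, invariant, and post-condition
;; checked at runtime. The variant is R itself: it decreases by B
;; each iteration and stays >= 0, guaranteeing termination.
(defun div-mod (a b)
  (assert (and (>= a 0) (> b 0)))           ; pre-condition
  (let ((q 0) (r a))
    (loop while (>= r b)
          do (assert (= a (+ (* q b) r)))   ; invariant: a = q*b + r
             (decf r b)                     ; variant R strictly decreases
             (incf q))
    (assert (and (= a (+ (* q b) r))        ; post-condition: invariant...
                 (< r b)))                  ; ...plus the loop's exit test
    (values q r)))

;; (div-mod 17 5) => 3, 2
```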
| Perhaps this is the dividing line between engineering and science,
| and perhaps all I'm saying is that computer science is an engineering
| discipline at heart, not a science. That doesn't mean it's not bound
| by the laws of science, but rather that it seeks to overcome the
| apparent restrictions and exploit the powers... in the magical ways
| Clarke suggests. Science's goal should not be to tell people what
| they can't do, nor that they did it the wrong way if they ignored
| Science and got something done; rather, it should be a tool that
| explains what is done (or else its predictive nature is lost) and
| enables people who are otherwise at a deadlock (because it offers
| structural insight that might otherwise not be apparent).
I believe that the above-referenced works do this (or at least try).
| Computer Science should be offering to help people solve
| impossible problems...
Here, however, I must respectfully disagree. Computer Science
should be providing tools to allow people to know whether their
problems are *truly* impossible, in which case they can then stop
wasting their time on them, or merely difficult [albeit possibly
very, *very* difficult!], in which case Computer Science should,
as you say, offer techniques for "breaking intellectual logjams".
Dijkstra said [in his 1972 Turing Award acceptance lecture
"The Humble Programmer"] that:
A study of program structure had revealed that programs -- even
alternative programs for the same task and with the same mathematical
content -- can differ tremendously in their intellectual manageability.
A number of rules have been discovered, violation of which will
either seriously impair or totally destroy the intellectual
manageability of the program. These rules are of two kinds. Those
of the first kind are easily imposed mechanically... For those of the
second kind I at least ... see no way of imposing them mechanically,
as it seems to need some sort of automatic theorem prover for which
I have no existence proof. Therefore, for the time being and perhaps
forever, the rules of the second kind present themselves as elements
of discipline required from the programmer. Some of the rules I have
in mind are so clear that they can be taught and that there never
needs to be an argument as to whether a given program violates them
or not. Examples are the requirements that no loop should be written
down without providing a proof for termination nor without stating
the relation whose invariance will not be destroyed by the execution
of the repeatable statement.
I now suggest that we confine ourselves to the design and
implementation of intellectually manageable programs. If someone
fears that this restriction is so severe that we cannot live with it,
I can reassure him: the class of intellectually manageable programs
is still sufficiently rich to contain many very realistic programs
for any problem capable of algorithmic solution. We must not forget
that it is not our business to make programs, it is our business to
design classes of computations that will display a desired behaviour.
Those who want really reliable software will discover that they
must find means of avoiding the majority of bugs to start with,
and as a result the programming process will become cheaper. If
you want more effective programmers, you will discover that they
should not waste their time debugging, they should not introduce
the bugs to start with. In other words: both goals point to the
same change.
It is at these goals that the two books I mentioned at the top of
this article are aimed.
p.s. By the way, to bring this back a little closer to home,
it was in that same lecture that Dijkstra said:
The third project I would not like to leave unmentioned is LISP,
a fascinating enterprise of a completely different nature. With
a few very basic principles at its foundation, it has shown a
remarkable stability. Besides that, LISP has been the carrier
for a considerable number of in a sense our most sophisticated
computer applications. LISP has jokingly been described as
"the most intelligent way to misuse a computer". I think that
description a great compliment because it transmits the full
flavour of liberation: it has assisted a number of our most
gifted fellow humans in thinking previously impossible thoughts.
Rob Warnock <firstname.lastname@example.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607