John Thingstad <firstname.lastname@example.org> wrote:
| Rob Warnock <email@example.com> wrote:
| > The good news, such as it is, is that the Dijkstra proof methodology
| > is one of the few I've seen ... which handles imperative constructs
| > ... almost as conveniently as purely functional constructs.
| The bad news is that finding loop invariants is fairly difficult with
| the imperative constructs (Hoare logic).
Quite true, in the general case. Fortunately, almost all the loops
one tends to write in "normal" code have a small set of loop-invariant
patterns that work well. ["Normal" loops being those more-or-less-simple
traversals or convergences where mostly all you have to worry about
are getting the boundary conditions right (avoiding "fencepost" errors),
where writing the loop invariant is a matter of seconds or a few
minutes at most. "Hairy" loops, then, would be those occasional monsters
where finding a suitable loop invariant is a matter of hours or days.]
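To make the "normal" case concrete, here's a small sketch of my own (not
from the original exchange): a simple traversal whose invariant takes
seconds to state, with the invariant checked as an assertion on each pass.

```python
# A "normal" loop: summing a list. The invariant is the obvious one:
# after processing the first i elements, total holds their sum.
def sum_list(xs):
    """Return the sum of xs, checking the loop invariant as we go."""
    total = 0
    i = 0
    # Invariant: total == sum(xs[:i]) and 0 <= i <= len(xs)
    while i < len(xs):
        assert total == sum(xs[:i])   # invariant holds on loop entry
        total += xs[i]
        i += 1
    # Invariant + negated guard gives the postcondition:
    assert i == len(xs) and total == sum(xs)
    return total
```

The boundary conditions (start at 0, stop at len(xs)) are exactly the
"fencepost" details the invariant helps you get right.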
| You might be better off specifying the system using Generator Induction
| logic and then, if necessary, transforming it to an imperative design.
Haven't run across Generator Induction Logic yet; I'll look it up.
 I am reminded by this of the case of Shannon's Coding Theorem,
which *proves* that for any channel, no matter how noisy, there
*does* exist a block code which allows you to transmit data
as close to the channel capacity as you like with as low
an error rate as you like, *BUT*... not being a "constructive"
proof, gives you absolutely *no* hints as to how to find said code.
Integral calculus sometimes feels like this, too.
 Note that the "channel capacity" is a function of *both* the
bandwidth (or bit rate) and the error rate, and decreases
with the error rate, thus this is not as impossible a promise
as it may sound initially.
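As a small illustration (mine, not part of the original post), take the
simplest case, a binary symmetric channel with crossover probability p:
its capacity is C = 1 - H2(p), which falls as the error rate rises
(up to p = 1/2, where the channel carries nothing).

```python
import math

def h2(p):
    """Binary entropy H2(p) in bits; H2(0) = H2(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity in bits per channel use of a binary symmetric
    channel with crossover (error) probability p."""
    return 1.0 - h2(p)
```

A noiseless channel (p = 0) gives 1 bit per use; p = 0.5 gives zero,
which is why the theorem's promise is less impossible than it sounds.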
Rob Warnock <firstname.lastname@example.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607