
Dijkstra on the cruelty of really teaching computing science (1988) - gnosis
http://userweb.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/EWD1036.html
======
patio11
A desirable property of techniques to improve task success is that they are
actually implementable by the people carrying out the tasks. Formally proving
the correctness of CRUD apps leaves something to be desired here. (To say
nothing about formally proving the correctness of the browser and underlying
OS, which is probably a necessary prerequisite of proving the correctness of
the CRUD app.)

One of the reasons statistics took Japanese manufacturing by storm is that you
can, and Toyota does, teach everything you need to know to justify pushing the
Big Red Button to someone with a high school math background. ("Here's a graph
for the stat of interest. For each production run, test one and put an X on
this graph. If you see three Xs above this line, the process is out of
control. Anyone in the factory who sees this, even the cleaning lady, should
immediately walk over and push the Big Red Button, costing us hundreds of
millions of yen. You will not get in trouble, indeed, we will praise you for
diligence. Why does this work? _Stats 101 lecture._ ")
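
A minimal sketch of that rule in Python (the 3-sigma limit, the numbers, and
the reading of "three Xs" as three consecutive points are illustrative
assumptions, not Toyota's actual procedure):

    def out_of_control(samples, mean, sigma, limit_sigmas=3, run_length=3):
        """True if `run_length` consecutive samples exceed the upper
        control limit (mean + limit_sigmas * sigma)."""
        upper_limit = mean + limit_sigmas * sigma
        consecutive = 0
        for x in samples:
            consecutive = consecutive + 1 if x > upper_limit else 0
            if consecutive >= run_length:
                return True  # push the Big Red Button
        return False

    # A drifting process trips the rule:
    print(out_of_control([10.1, 9.8, 13.2, 13.5, 13.9], mean=10.0, sigma=1.0))  # True

The whole decision procedure fits on an index card, which is the point.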

------
luu
I disagree with Dijkstra on the big picture, but it's amazing how many nuggets
of wisdom this essay contains.

For instance, this passage gets across the main idea of Joel's leaky
abstractions essay in just one short paragraph, almost two decades before Joel
wrote his essay:

 _It is possible, and even tempting, to view a program as an abstract
mechanism, as a device of some sort. To do so, however, is highly dangerous:
the analogy is too shallow because a program is, as a mechanism, totally
different from all the familiar analogue devices we grew up with. Like all
digitally encoded information, it has unavoidably the uncomfortable property
that the smallest possible perturbations —i.e. changes of a single bit— can
have the most drastic consequences. In the discrete world of computing, there
is no meaningful metric in which "small" changes and "small" effects go hand
in hand, and there never will be._
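
A toy Python illustration of the single-bit point (mine, not from the essay):
flipping one bit of a double's representation, the smallest possible change,
can be arbitrarily drastic, while a "small" analogue nudge stays small.

    import struct

    def flip_bit(x, bit):
        # Reinterpret the float as 64 raw bits, flip one, reinterpret back.
        (bits,) = struct.unpack("<Q", struct.pack("<d", x))
        (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
        return y

    print(flip_bit(1.0, 62))  # flips the top exponent bit: 1.0 -> inf
    print(1.0 + 1e-9)         # the continuous analogue stays close to 1.0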

~~~
tome
_In the discrete world of computing, there is no meaningful metric in which
"small" changes and "small" effects go hand in hand, and there never will be._

This is a fascinating quote. What's the justification for it though? Is it not
possible that one day we will invent a computer language that supports writing
very "robust" (in the sense of source code sensitivity) programs?

~~~
luu
Isn't that, in many ways, the opposite of what people want out of a
programming language? It seems to me that expressiveness and robustness (in
the sense that you define it) are inversely related.

Whenever a language comparison pops up, there's always a large contingent of
people who argue for the superiority of a language because ideas can be
expressed more concisely in 'their' language, but that necessarily means that
some small change will have a relatively large effect.

There's also robustness in the sense that, e.g., BitC was trying to be robust,
and that's something that I think is promising.

~~~
Hexstream
However, a concise, expressive, very high-level domain-specific language can
make it impossible to express many idioms that would break the semantics of
the domain.

As an analogy (I hesitate to use one because of the article), you could have a
high-level representation of HTML that would make it totally impossible to
generate <HTML></NOTHTML>, which is not the case if you use a general-purpose
system. (not a foolproof analogy but anyway)
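
A toy Python version of the analogy (names are mine, purely illustrative):
because a single function call emits both the opening and the closing tag, a
mismatched pair like <HTML></NOTHTML> is simply unrepresentable.

    from html import escape

    def tag(name, *children):
        # The open and close tags come from the same `name`, by construction.
        return "<{0}>{1}</{0}>".format(name, "".join(children))

    def text(s):
        return escape(s)  # stray markup in content is escaped, never emitted

    print(tag("html", tag("body", tag("p", text("hello & goodbye")))))
    # <html><body><p>hello &amp; goodbye</p></body></html>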

------
lifeisstillgood
I think luu is underestimating Mr Dijkstra - this one essay has more nuggets
than McDonald's...

 _I prefer to describe it the other way round: the program is an abstract
symbol manipulator, which can be turned into a concrete one by supplying a
computer to it._

 _if we wish to count lines of code, we should not regard them as "lines
produced" but as "lines spent"_

 _As economics is known as "The Miserable Science", software engineering
should be known as "The Doomed Discipline"_

and so on.

You can happily read most of the current problems we face in the Doomed
Discipline laid out 30 years ago.

In the end, reading honest thinking from people orders of magnitude more
intelligent than yourself always inspires something.

------
limist
Worth reading just to receive the wisdom of this change of perspective - I'm
going to stop calling my errors "bugs":

 _We could, for instance, begin with cleaning up our language by no longer
calling a bug a bug but by calling it an error. It is much more honest because
it squarely puts the blame where it belongs, viz. with the programmer who made
the error. The animistic metaphor of the bug that maliciously sneaked in while
the programmer was not looking is intellectually dishonest as it disguises
that the error is the programmer's own creation. The nice thing of this simple
change of vocabulary is that it has such a profound effect: while, before, a
program with only one bug used to be "almost correct", afterwards a program
with an error is just "wrong" (because in error)._

~~~
yafujifide
> while, before, a program with only one bug used to be "almost correct",
> afterwards a program with an error is just "wrong" (because in error).

So almost every program is "wrong" then. This is pretty pessimistic. Every
glass is empty unless it is 100% full.

I think the better approach is just to not fool ourselves. If a large
application works, but has a few bugs, then it's OK to say that it's 90%
working (or 10% broken, take your pick).

~~~
kiujygtyujik
That's the difference between maths and engineering.

A proof that has a single exception is wrong; a bridge that stays up with a
broken rivet is correct.

~~~
sqrt17
A bridge that depends on all rivets being perfect is a bad thing. A proof that
no longer holds outside of its domain of application is still useful.

The trick is to recognize when you need one and when you'd rather use the
other.

~~~
kiujygtyujik
Same in software - an application that fulfils a business purpose but has bugs
or "errors" is correct.

An error-free, perfect piece of software that doesn't is useless.

------
helmut_hed
When he says this:

... _the subculture of the compulsive programmer, whose ethics prescribe that
one silly idea and a month of frantic coding should suffice to make him a
life-long millionaire_

I feel like he's describing many a startup today.

------
ShardPhoenix
But is he yet, or still, right? Do formal methods really matter? Are they even
possible for realistic problems?

~~~
diN0bot
for real formal methods it's a trade-off that only those who really need
security or reliability might practically make, e.g. banks and space agencies.

however, i think there is a continuum of methods, from TDD to proving
correctness. it is often practical in the real world to do up front design
before hacking out some code. i believe super smart hackers actually make lots
of smart design decisions when "hacking" out code, whether that is realizing
the appropriate loop invariants to make an algorithm work or figuring out the
right abstractions for reusable code. there's lots of ways that good foresight
produces correct code faster. that's what _intelligence_ is: thinking, not
brute force.
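
as a tiny illustration of the loop-invariant point (my own python sketch,
nothing fancy): once the invariant is stated, each branch is a small local
argument instead of a mental trace of example runs.

    def binary_search(a, key):
        """index of key in sorted list a, or -1 if absent."""
        lo, hi = 0, len(a)
        # invariant: if key is in a, it lies within a[lo:hi]
        while lo < hi:
            mid = (lo + hi) // 2
            if a[mid] < key:
                lo = mid + 1  # a[0:mid+1] are all < key; invariant holds
            elif a[mid] > key:
                hi = mid      # a[mid:] are all > key; invariant holds
            else:
                return mid
        return -1             # a[lo:hi] is empty, so key is absent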

the smart coder, especially after gaining a variety of experiences and
learning from different mentors, takes pieces of different methods in order to
simultaneously make headway on different goals, not just correctness and
extensibility, but refactorability, fast-rampup for new team members, fun,
etc.

~~~
loup-vaillant
The trade-off you speak of relies on the unspoken assumptions that (0)
applying formal methods costs more than not applying them, and (1) the
astonishing complexity of our programs is actually needed.

I highly doubt (0), at least when you take into account the costs of errors:
crashes, wrong results, security breaches… These costs impact the _user_
instead of the programmer, but they are costs nonetheless (plus, letting
customers pay for these strikes me as not nice).

I highly doubt (1), at least when you take into account our overuse of
low-level programming languages, and of course anthropomorphic thinking.

------
awakeasleep
Could anyone explain what Dijkstra meant by

> It is misleading in the sense that it suggests that we can adequately cope
with the unfamiliar discrete in terms of the familiar continuous, i.e.
ourselves, quod non. It is paralyzing in the sense that, because persons exist
and act in time, its adoption effectively prevents a departure from
operational semantics and thus forces people to think about programs in terms
of computational behaviours, based on an underlying computational model. This
is bad, because operational reasoning is a tremendous waste of mental effort.

I don't get the part where thinking in terms of computational behaviors is
supposed to be a waste of mental effort (and I'm hazy on the rest, too).

------
limist
Where and how can one effectively learn the kind of formal reasoning he
describes, and that he mentions having taught to college freshmen? Could a
knowledgeable HN'er please give follow-up resources? Thanks!

~~~
jhck
I was introduced to formal reasoning about programs in an introductory course
on algorithms and data structures in March. Most of the curriculum was from
CLRS,[1] which presents proofs of correctness for many algorithms but doesn't
explain how to develop such proofs. The curriculum for the last two weeks of
the course, however, was a handout titled _Transition Systems_ ,[2] which I
found to be an excellent introduction to the subject.

Since that time I've been interested in the idea of proving programs correct.
On weekends I'm currently making my way through David Gries's _The Science of
Programming_. Similar books include

\- _A Discipline of Programming_ by Dijkstra

\- _The Evolution of Programs_ by N. Dershowitz

[1] <http://mitpress.mit.edu/algorithms/>

[2] <http://www.cs.au.dk/dADS1/daimi-fn64.pdf>
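
To give a flavour of the style those books teach (my own toy Python example,
not taken from Gries), a proof hangs on a loop invariant that, combined with
the negated loop guard, yields the postcondition:

    def sum_upto(a):
        """Postcondition: result == a[0] + ... + a[len(a)-1]."""
        s, i = 0, 0
        # Invariant: s == sum(a[0:i]) and 0 <= i <= len(a).
        while i < len(a):
            s += a[i]  # re-establishes the invariant for i+1
            i += 1
        # Guard is false: i == len(a), hence s == sum(a[0:len(a)]).
        return s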

~~~
drothlis
In a similar vein is Elements of Programming by Alexander Stepanov (designer
of the C++ STL) and Paul McJones:
<http://www.elementsofprogramming.com/book.html>

~~~
limist
@jhck, drothlis: Thank you very much to both of you, those suggestions are
exactly what was asked for. I'd upvote you more if I could. :)

Both the Gries book and Stepanov's book have really impressive reviews on
Amazon; I'm looking forward to diving into them.

[http://www.amazon.com/Science-Programming-Monographs-Compute...](http://www.amazon.com/Science-Programming-Monographs-Computer/dp/0387964800)

[http://www.amazon.com/Elements-Programming-Alexander-Stepano...](http://www.amazon.com/Elements-Programming-Alexander-Stepanov/dp/032163537X/)

------
stuhacking
I enjoy Dijkstra's essays.

On one hand, they're wonderfully blunt and unforgiving. I see in their
contents statements that I agree with, but also very strict ideas that I know
I've broken in the past. I suppose then that reading the EWDs is usually a
guilty pleasure.

On the other hand, it would be very difficult to actually employ a lot of his
best practice ideas due to the complexities of working in team environments
with differently minded colleagues against tight deadlines. Trade-offs have to
be made, unfortunately.

------
konad
He's right. One should never anthropomorphise computers; they hate that!

