Most of Dijkstra's writing is overly melodramatic, and this is no exception. Now that computing is even more ubiquitous, his elitist attitude about programmers is even less relevant. Driving a car does not require a degree in mechanical engineering and thermodynamics; similarly, writing programs should not require a degree in mathematics and logic. Also, reasoning by analogy and intuition is one of the better ways to solve problems. If it is good enough for people like Einstein and Feynman, then it is good enough for me, and Dijkstra is just plain wrong on this point.
> Carl Friedrich Gauss, the Prince of Mathematicians but also somewhat of a coward, was certainly aware of the fate of Galileo —and could probably have predicted the calumniation of Einstein— when he decided to suppress his discovery of non-Euclidean geometry, thus leaving it to Bolyai and Lobatchewsky to receive the flak.
There's a long history of mathematicians and scientists suppressing their results.
Copernicus didn't publish about heliocentrism until he was on his deathbed.
Newton started developing calculus in 1665 but didn't publish his theories and methods for calculus until 1693.
Darwin didn't publish until 1858, when Alfred Russel Wallace had independently begun to arrive at the same ideas. Darwin had already been theorizing about natural selection, privately and among naturalist friends, since he returned from the Beagle in 1836.
I am not a physicist, but my understanding is that Einstein's formulas showed him that the universe was expanding. He didn't trust what his own formulas, which were radical at the time, were showing him, so he modified his equations by adding a cosmological constant so that the universe would be static. Observations by astronomers like Hubble later showed him that his original formulas were correct (my understanding of the field equations is shallow, so I may have mischaracterized this).
> Einstein's formulas showed him that the universe was expanding.
More precisely, they showed him that there was no solution describing a static universe; the universe would have to be either expanding or contracting. Nothing in the math gave any reason to favor expansion over contraction; it's just that, since we now know the universe is expanding, that's the possibility we always focus on when talking about this episode.
> He didn't trust what his own formulas, which were radical at the time, were showing him
I don't think it was that he didn't trust the formulas; he had already explored other solutions that were giving correct predictions. It was just that he thought the universe should be static, so he didn't like the fact that there wasn't a static solution to his original equations.
> Observations by astronomers like Hubble later showed him that his original formulas were correct
More precisely, they showed that the universe was expanding, so there was no need to insist on having a static solution to the equations. But, as FreeFull pointed out, we now know that there is a nonzero cosmological constant, so Einstein's original equations, without that constant, weren't quite correct anyway.
And then we found that the expansion of the universe is accelerating, and the cosmological constant starts looking relevant again. What Einstein called his biggest blunder might not have been a blunder at all.
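For reference, the modification being discussed is the extra Lambda term in the field equations (standard textbook form, not tied to any source in this thread):

    R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}} T_{\mu\nu}

Setting Lambda = 0 recovers Einstein's original equations; a small positive Lambda is what the observed acceleration now seems to call for.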
"It is probably more illuminating to go a little bit further back, to the Middle Ages. One of its characteristics was that 'reasoning by analogy' was rampant; another characteristic was almost total intellectual stagnation, and we now see why the two go together. A reason for mentioning this is to point out that, by developing a keen ear for unwarranted analogies, one can detect a lot of medieval thinking today."
This speech was so quintessentially Dijkstra. Here's where it got super-Dijkstra:
> Right from the beginning, and all through the course, we stress that the programmer's task is not just to write down a program, but that his main task is to give a formal proof that the program he proposes meets the equally formal functional specification.
And so we produce programmers who are helpless without formal functional specifications, and are therefore incapable of writing any software in the real world, with conflicting, underspecified, and post-hoc requirements.
But more importantly: this sucks all of the joy out of it! Who here got into programming out of a passion for translating formal functional specifications into formal proofs? How can any of the magic of software survive such a mindset?
> And so we produce programmers who are helpless without formal functional specifications, and are therefore incapable of writing any software in the real world, with conflicting, underspecified, and post-hoc requirements.
That … doesn't seem like a problem that exists in the world as it exists (but see the caveat below on my qualification to make such statements). It seems like the real world of programming is so full of people who are so used to conflicting, underspecified, and post hoc requirements that it doesn't occur to them that there is any other way, and so we live in a world in which it is simply assumed that software doesn't actually work as it is supposed to do, even if only in edge cases. Should all software be formally specified? No, and I'm sure that, despite his passion for overstatement, even Dijkstra didn't think so—but I think that the programming world would probably be a better place if more programmers were aware of the potential and possibility of formalisation, and so could introduce and take advantage of it as appropriate.
> But more importantly: this sucks all of the joy out of it! Who here got into programming out of a passion for translating formal functional specifications into formal proofs?
Me! I think it's not fair to assume that everyone enjoys programming for the same reason that you do, any more than it would be for Dijkstra to assume that others do or should for the same reason that he did. To be fair, though, I'm a mathematician, not a professional programmer.
Honest question: isn't this a historical bias? Maybe in his day, programming projects were devoted to huge, complex, and costly problems that were always pretty strictly defined, unlike in the late-'70s Apple/microcomputer days?
> But more importantly: this sucks all of the joy out of it! Who here got into programming out of a passion for translating formal functional specifications into formal proofs? How can any of the magic of software survive such a mindset?
Sure, but I do like the bliss of looking at a tiny piece of code and knowing it's sound by induction.
> maybe in his days programming projects were devoted to huge, complex and costly problems that were always pretty strictly defined, unlike the late 70s ~Apple/Microcomputer days ?
Huge? Not by today's standards. Complex? Again, not by today's standards, but they didn't have today's libraries to build on. Costly? Definitely.
Pretty strictly defined? They tried to be, but the gaps always showed. Certainly they were not strictly defined by Dijkstra's standards.
If you can get past Dijkstra's arrogance and abrasive presentation, he actually has some useful things to teach.
If you look at one of the books where he and his colleagues teach how to do this (for example, A Method of Programming, by Dijkstra and Feijen), it's pretty clear that the programmer him(her)self is supposed to come up with the formal specification. Much of the book is about how to write the specifications.
The formal specifications are just assertions - they are Boolean expressions that express properties of the solution.
For example, the formal specification for a square root function might be abs(r*r - x) < d, where x is the number, r is the root, and d is an error tolerance.
In general these formal specifications cannot be mechanically compiled into an executable program - they are nonconstructive - like the square root example above. So you still have to find or invent an algorithm.
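To make that concrete, here's a minimal sketch in Python (my own code and names, not anything from the book): the specification is just a Boolean expression, and it says nothing about which algorithm to use, so one still has to be invented separately.

    # The square-root specification from above, written as a checkable assertion.
    def meets_spec(x: float, r: float, d: float) -> bool:
        return abs(r * r - x) < d

    # One algorithm (of many) that happens to satisfy the spec for x >= 0.
    # Nothing in the spec suggests bisection; that choice is a separate invention.
    def sqrt_by_bisection(x: float, d: float) -> float:
        assert x >= 0 and d > 0
        lo, hi = 0.0, max(1.0, x)          # sqrt(x) lies in [lo, hi]
        mid = (lo + hi) / 2.0
        while not meets_spec(x, mid, d):   # invariant: lo <= sqrt(x) <= hi
            if mid * mid < x:
                lo = mid
            else:
                hi = mid
            mid = (lo + hi) / 2.0
        return mid                         # postcondition: meets_spec(x, mid, d)

    # Terminates for any reasonable tolerance d (a d near machine epsilon
    # could stall on floating-point rounding).
    r = sqrt_by_bisection(2.0, 1e-9)
    assert meets_spec(2.0, r, 1e-9)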
An interesting feature of Dijkstra's method is that he suggests some heuristics for getting from the assertions to the code, which can help you discover an algorithm. They remove the mystery surrounding how some non-obvious algorithms were discovered.
Dijkstra's approach is actually a lot like test-driven development - the idea is, you start with some assertions that describe what the code is supposed to do, then you check the assertions as you go -- instead of writing a bunch of code and checking it at the end. The difference is just how you check the assertions - by running tests or by attempting a proof. Test cases are formal specifications too - they are just incomplete - somebody said they are 'specification by example'.
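Reusing the hypothetical meets_spec / sqrt_by_bisection sketch from above, the contrast looks roughly like this: an example-based test samples the specification at a few points, while an assertion check states the property itself.

    # 'Specification by example': a finite sample of the spec.
    def test_sqrt_examples():
        for x, expected in [(1.0, 1.0), (4.0, 2.0), (9.0, 3.0), (2.0, 1.4142135)]:
            assert abs(sqrt_by_bisection(x, 1e-6) - expected) < 1e-5

    # The specification itself, checked over many inputs. A proof would
    # establish it for all inputs instead of a sample.
    def test_sqrt_property():
        for i in range(200):
            x = i / 7.0
            assert meets_spec(x, sqrt_by_bisection(x, 1e-6), 1e-6)

    test_sqrt_examples()
    test_sqrt_property()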
If you don't have a formal functional specification that adequately characterizes the program's behavior, then Dijkstra's approach is useless.
If you do have a formal functional specification that adequately characterizes the program's behavior, then you have already have source code, and Dijkstra's approach is irrelevant.
Or at least something formally equivalent to source code, even if there is not a "compiler" for it. But then the programmer becomes merely a human compiler, doing a provably-formally-equivalent transform to a formal specification.
Even if Dijkstra's approach is not irrelevant, it is inhuman - it makes the programmer into a machine. (Note that the first "computers" were human machines.)
I think Dijkstra's viewpoint is a bit "elitist" here. He talks about what are the best things students should learn (formal methods etc.) instead of talking about what would be the best ways to teach those to them. He takes the viewpoint of the professor, unsurprisingly, not the viewpoint of the student.
> I think Dijkstra's viewpoint is a bit "elitist" here.
I don't think that there's any need for scare quotes; Dijkstra was consistently and unapologetically elitist, and probably would (and maybe even did?) say as much himself.
If you have a hammer, everything looks like a nail. If you're a mathematician, everything looks like a math problem.
We're not really that special.
> The programmer is in the unique position that his is the only discipline and profession in which such a gigantic ratio, which totally baffles our imagination, has to be bridged by a single technology. He has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.
I disagree.
A city planner does not need to know or care how to make bricks. An application developer does not need to know or care how HashMap<> internals work (well, not after they pass the interview at least).
An identical program can run on 1MB of data or 1TB of data, without changing it or adding layers of abstraction. A city with 4x as many people is not just 4 1x cities stuck together.
Complexity and the number of abstraction levels do not necessarily scale with system size in elementary particles.
> To this I should add that, to the extent that we view ourselves as mechanisms, we view ourselves primarily as analogue devices: if we push a little harder we expect to do a little better. Very often the behaviour is not only a continuous but even a monotonic function: [...]
Interestingly, while this view may be popular, I think it's becoming apparent that it's not entirely accurate. Besides well-known effects like overtraining, there's also reduced productivity with too-high work hours, reduced creativity in the face of excessive pressure, etc. And it's not just the "monotonic" part that's questionable -- things like burnout, the traditional "mid-life crisis", and mental breakdowns hint that we're not even particularly continuous.
> In the discrete world of computing, there is no meaningful metric in which "small" changes and "small" effects go hand in hand, and there never will be.
This also applies to complex mechanical systems, and to anything with DNA. Whatever mutations cause cancer can't be that big a change, or they wouldn't happen so often.
> The nice thing of this simple change of vocabulary is that it has such a profound effect: while, before, a program with only one bug used to be "almost correct", afterwards a program with an error is just "wrong" (because in error).
This is very silly. Not all bugs/errors render a program wholly useless.
> [Don't anthropomorphise programs/computers.] ... ...deal with all computations possible under control of a given program by ignoring them and working with the program. We must learn to work with program texts while (temporarily) ignoring that they admit the interpretation of executable code.
This depends entirely on what you're doing. Are you trying to study computation using some program as an example, or are you trying to accomplish some real-world effect with that program?
> A programming language, with its formal syntax and with the proof rules that define its semantics, is a formal system for which program execution provides only a model. It is well-known that formal systems should be dealt with in their own right, and not in terms of a specific model. And, again, the corollary is that we should reason about programs without even mentioning their possible "behaviours".
Since the current context is the evils of anthropomorphization, for what assumed use is "formal system" a better (more useful) model than "intentional actor"? Insistence on formality becomes especially weak when the system can't be formally defined, either because the behavior depends too much on (unknown) input data or because the formal spec is intractable.
> Right from the beginning, and all through the course, we stress that the programmer's task is not just to write down a program, but that his main task is to give a formal proof that the program he proposes meets the equally formal functional specification.
We've since discovered that all this will accomplish is to pass the buck to the people who wrote the invariably-buggy (I mean, erroneous) spec. Which since it's fully formal, could just be mechanically translated into a program anyway.
I think some of the things I learned were pretty good (and in a pretty good order): use of mathematical induction to prove that recursive algorithms work (introductory); complexity analysis (same introductory course); lambda calculus and compilers (4xx level). That's certainly better than trying to focus on formal methods, when all they can accomplish is to move the informality. I'd have also benefited from something on creating and refining models, or turning a problem into a spec (because really, building the right thing is always harder than building the thing right). For CS rather than CompEng, I'd also suggest more depth on discrete math than I took, and whatever kind of math is appropriate for distributed systems (queues and graphs and flow rates and stuff).
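As a toy version of that introductory induction exercise (the example is mine, not from any particular course), the proof can be written right next to the recursion it justifies:

    def maximum(xs: list[int]) -> int:
        """Largest element of a non-empty list.

        Correctness by induction on len(xs):
          Base case: len(xs) == 1, so xs[0] is trivially the maximum.
          Step: assume the recursive call returns the maximum of xs[1:]
          (a strictly shorter list). Every element of xs is either xs[0]
          or in xs[1:], so the larger of those two candidates is the maximum.
          Termination: each call shrinks the list by one element.
        """
        if len(xs) == 1:
            return xs[0]
        rest = maximum(xs[1:])
        return xs[0] if xs[0] > rest else rest

    assert maximum([3, 1, 4, 1, 5, 9, 2, 6]) == 9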
> This depends entirely on what you're doing. Are you trying to study computation using some program as an example, or are you trying to accomplish some real-world effect with that program?
I don't think Dijkstra was actually interested in producing programs that accomplished some real-world effect.
> That's certainly better than trying to focus on formal methods, when all they can accomplish is to move the informality.
Correct, and very insightful. (Roger Penrose made the same point in "The Emperor's New Mind" - at some point, it all starts with an informal idea of what the program is supposed to do.)
https://news.ycombinator.com/item?id=8909529
https://news.ycombinator.com/item?id=8872085
https://news.ycombinator.com/item?id=2122826
https://news.ycombinator.com/item?id=8368788
https://news.ycombinator.com/item?id=6701607
https://news.ycombinator.com/item?id=6838434
https://news.ycombinator.com/item?id=3553983
https://news.ycombinator.com/item?id=1666445
https://news.ycombinator.com/item?id=43978
https://news.ycombinator.com/item?id=2090256
https://news.ycombinator.com/item?id=1989473