I tried to read "A Discipline of Programming", where he explains his approach, but it was barely understandable, and it takes him 300 pages to get to the point of developing simple algorithms of the type you meet in the first chapter of an algorithms textbook. It could have been the translation (I didn't read the English original), but I doubt it, because I have never read anything technical from him that I actually found interesting. I am afraid his essays are liked because of the general sentiment for "more rigour" in programming, whatever that means, and not because of any understanding of what precisely he advocates or of the merits of his techniques. Living proof is a comment in this thread about how Dijkstra sheds insight into the value of TDD...
So, if you upvote his articles, what precisely have you learned from Dijkstra?
More precisely, he was always a champion of the idea of having programs express things in a direct and understandable way, which makes comprehension straightforward, and of getting there by removing unneeded features from languages.
This was a necessary overcorrection to now-forgotten excesses. But today it is clear that he went too far. As an example, consider Go, a language heavily influenced by Dijkstra's ideas on the value of simplicity in language design. Yet the internal loop exit implied by break exists. Multiple return values are used to separate data (the result of a calculation) from metadata (the presence and details of errors). We use features beyond what Dijkstra considered wise.
I think it's a bit of a dilution of his message to say he was a champion of comprehensibility. He was more interested in correctness, and his dream was that formal methods would be foundational to software development. That clearly didn't happen.
You have to understand some mathematics and mathematical logic to really understand Dijkstra. You cannot look at him from a pure software engineering background, because you will only take away platitudes from what he says without actually understanding him. If you have some background, you can read about his real technical ideas:
This is the area Dijkstra worked on most during his research career. These ideas actually go back centuries before anyone ever thought about structured programming. For hundreds of years there has been a fundamental debate about whether all kinds of reasoning can be reduced to some kind of calculus with well-specified rules, which would be equivalent to being able to construct an (at least theoretical) machine for automating it. This is where computers come from; it goes back to Charles Babbage, and to the works of Leibniz, Boole, Turing, etc.
Dijkstra is in the same historical tradition. The program I understand Dijkstra to have had for computer science is similar to the program Hilbert had for mathematics with what is now called formalism, from which the work of Gödel and Turing sprang. When programming was still mainly done by mathematicians, this was a lively research area: various systems for proving correctness were invented, along with various ways of deriving programs, and there was a very heated debate about when proofs are needed and to what extent ordinary programmers have to master them. You can find loads of books and monographs written in the 70s and 80s about this. As far as research goes, program derivation now seems to be an almost dead topic, but there are still people today championing it in some alternative form. Richard Bird wrote a book called "Pearls of Functional Algorithm Design" where he derives algorithms using algebraic properties:
Alexander Stepanov has been advocating something similar and wrote a book called "Elements of Programming":
If that's the case then you need to read more closely. This discussion is sparked by http://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/EW.... I'll focus on just two statements from this one which demonstrate the theme that I said was there:
Finally, although the subject is not a pleasant one, I must mention PL/1, a programming language for which the defining documentation is of a frightening size and complexity. Using PL/1 must be like flying a plane with 7000 buttons, switches and handles to manipulate in the cockpit. I absolutely fail to see how we can keep our growing programs firmly within our intellectual grip when by its sheer baroqueness the programming language —our basic tool, mind you!— already escapes our intellectual control.
It would be hard to find a clearer advocacy of simplicity in programming languages.
But a clearer statement of his priorities - and why they are priorities - can be found here:
A study of program structure had revealed that programs —even alternative programs for the same task and with the same mathematical content— can differ tremendously in their intellectual manageability. A number of rules have been discovered, violation of which will either seriously impair or totally destroy the intellectual manageability of the program. These rules are of two kinds. Those of the first kind are easily imposed mechanically, viz. by a suitably chosen programming language. Examples are the exclusion of goto-statements and of procedures with more than one output parameter. For those of the second kind I at least —but that may be due to lack of competence on my side— see no way of imposing them mechanically, as it seems to need some sort of automatic theorem prover for which I have no existence proof. Therefore, for the time being and perhaps forever, the rules of the second kind present themselves as elements of discipline required from the programmer...
Your complaint is essentially that I focused on rules of the first kind described, when Dijkstra did a lot of work on rules of the second kind. If I were truly doing that then I'd be failing to understand him exactly as badly as you are in saying he was only interested in rules of the second kind while ignoring the fact that his largest concrete impact came from http://www.u.arizona.edu/~rubinson/copyright_violations/Go_T....
I recognize and value his work in both areas. However the question that was asked was about his impact on current programming practice. And there is no question that his ideas on structured programming have had more impact than his ideas on provably correct software.
What I actually hoped for is someone who really learned, to some extent, the way of writing programs Dijkstra advocated, and who could share their experience with it. I can't say I really understand what those derivations look like in practice.
In other words you're missing the plain meaning of the sections that I quoted because you try to consider it in the light of http://www.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/EW... which was written a decade and a half later? Pardon me, but I won't be emulating your example.
The focus of Dijkstra's research career was how to make correct software. As he himself would claim, there are two halves to this process. The first is to limit ourselves to forms of writing code which are easy to reason about. The second is to actually perform that reasoning.
The part of his proposal that actually had an impact is the part which says that we need to focus on methods of expressing ourselves that are easy to reason about. The part that did not have a direct impact is the part which says that we need to perform that formal reasoning. The fundamental reason why not is that Dijkstra's reasoning assumed the existence of a consistent, unchanging specification. The real world does not work that way - computers exist to do what humans ask. And humans do not always ask for things that make sense.
This is not to say that this is the impact that he wanted to have - it is clearly not - but it is the impact that he did have.
However some of his other ideas have indeed found their way into practice, albeit in a muted way that he would have objected to. For example take unit tests, since you brought them up. He was against tests as part of including QA as an integral part of the programming process - if the programmer performed properly then that should not be needed. (Nice theory, fails in practice.) However today, well-designed unit tests do serve as a limited form of specifying exactly what a given piece of code is supposed to do, and verifying that it in fact does that. (I'm sure that he would say limited and inadequate. But it is better than nothing.)
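To make the "unit test as a limited specification" point concrete, here is a minimal sketch; the function and test names are hypothetical, invented purely for illustration, not anything from Dijkstra or from the thread.

```python
import unittest

def leap_year(year):
    """Return True if `year` is a leap year in the Gregorian calendar."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearSpec(unittest.TestCase):
    """Each test names one clause of the informal specification,
    so the suite doubles as a (partial) statement of intent."""

    def test_divisible_by_four_is_leap(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_quadricentennial_is_leap(self):
        self.assertTrue(leap_year(2000))

# Run with: python -m unittest thismodule
```

Dijkstra would presumably object that three sample years prove nothing about the infinitely many others; that is exactly the "limited and inadequate, but better than nothing" point.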
This clearly shows you haven't actually studied his writings, because he had been raising the same points for many years in almost every EWD. The OP is EWD340, so here is a sample from before this period:
For example from EWD317:
If we take the existence of the impressive body of Mathematics as the experimental evidence for the opinion that for the human mind the mathematical method is indeed the most effective way to come to grips with complexity, we have no choice any longer: we should reshape our field of programming in such a way that the mathematician's methods become equally applicable to our programming problems, for there are no other means. It is my personal hope and expectation that in the years to come programming will become more and more an activity of mathematical nature.
For a programming language to be simple in the sense of being able to prove things about it is a completely different thing than for it to be simple in the sense of being easy to understand informally. Yes, bastardized versions of his ideas did make it into the mainstream; I doubt that was always even because of his direct influence, and that's what the wiki article I posted three comments earlier is about.
That's impossible; A Discipline of Programming has only 217 pages of text. Surely you've been so enthralled by your reading that you didn't notice skipping through the covers to the book lying below!
You need to understand that Dijkstra's primary research focus was algorithms. Mathematically deriving algorithms from formal specifications isn't a weird idea at all. Sure, it feels cumbersome when the algorithm you're deriving is a commonly known one, like a mergesort or a binary search. The idea is that you can derive other, more specific algorithms in the same way, and you'll have proven their correctness.
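As an illustration of what "deriving an algorithm together with its correctness argument" can look like, here is a minimal sketch of binary search developed around an invariant. This is my own toy example in Python, not Dijkstra's guarded-command notation, and the reasoning is recorded in comments rather than formally checked.

```python
def binary_search(a, x):
    """Find an index i with a[i] == x in sorted list a, or return -1.

    Invariant: if x occurs in a at all, it occurs in a[lo:hi].
    - Initially lo, hi = 0, len(a), so the invariant holds trivially.
    - Each branch below narrows [lo, hi) while preserving it.
    - On exit lo == hi, the candidate range is empty, so x is absent.
    """
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] == x:
            return mid
        elif a[mid] < x:
            lo = mid + 1   # x, if present, lies strictly right of mid
        else:
            hi = mid       # x, if present, lies strictly left of mid
    return -1
```

The point of the method is that the invariant is chosen first and the code is then forced by it; for a less familiar algorithm the same discipline yields both the program and its proof.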
These days, algorithms are not a major part of most computer systems. Trying to apply Dijkstra's approach to a user interface or a JSON parser makes little sense. But this does not invalidate the method for its intended domain. It also does not make Dijkstra's ramblings misguided nonsense.
I disagree... While GUI developers don't worry about O(n^2) versus O(n log n) performance, sites like Google wouldn't respond in real time without a focus on the best underlying algorithms.
I agree with the rest of what you say.
On a serious note, you do have a good point. I haven't read much of Dijkstra's work myself.
One morning I was shopping in Amsterdam with my young fiancée, and tired, we sat down on the café terrace to drink a cup of coffee and I was just thinking about whether I could do this, and I then designed the algorithm for the shortest path. As I said, it was a twenty-minute invention. In fact, it was published in '59, three years later. The publication is still readable, it is, in fact, quite nice. One of the reasons that it is so nice was that I designed it without pencil and paper. I learned later that one of the advantages of designing without pencil and paper is that you are almost forced to avoid all avoidable complexities. Eventually that algorithm became, to my great amazement, one of the cornerstones of my fame. I found it in the early '60s in a German book on management science: "Das Dijkstra'sche Verfahren." Suddenly, there was a method named after me.
Test driven development, 1972.
Today a usual technique is to make a program and then to test it. But: program testing can be a very effective way to show the presence of bugs, but is hopelessly inadequate for showing their absence. The only effective way to raise the confidence level of a program significantly is to give a convincing proof of its correctness. But one should not first make the program and then prove its correctness, because then the requirement of providing the proof would only increase the poor programmer’s burden. On the contrary: the programmer should let correctness proof and program grow hand in hand.
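A small sketch of what "correctness proof and program grow hand in hand" might look like in practice: fast exponentiation, with the loop invariant that drives the design stated up front and checked at runtime via an assert. The example and its annotations are mine, not Dijkstra's; he would have written the proof symbolically rather than as executable assertions.

```python
def power(base, n):
    """Compute base**n for integer n >= 0 by repeated squaring.

    Invariant: result * b**e == base**n.
    It holds initially (1 * base**n), each iteration preserves it,
    and on exit e == 0, so the invariant reduces to result == base**n.
    """
    result, b, e = 1, base, n
    while e > 0:
        assert result * b**e == base**n  # the proof obligation, checked live
        if e % 2 == 1:
            result *= b   # fold one factor of b into result
        b *= b            # square the base...
        e //= 2           # ...and halve the exponent
    return result
```

The program was not written first and proven afterwards: the invariant dictates which updates are legal, which is precisely the methodology the quote advocates.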
For loops have brain damaged us.
Another lesson we should have learned from the recent past is that the development of "richer" or "more powerful" programming languages was a mistake in the sense that these baroque monstrosities, these conglomerations of idiosyncrasies, are really unmanageable, both mechanically and mentally. I see a great future for very systematic and very modest programming languages. When I say "modest", I mean that, for instance, not only ALGOL 60's "for clause", but even FORTRAN's "DO loop" may find themselves thrown out as being too baroque. I have run a little programming experiment with really experienced volunteers, but something quite unintended and quite unexpected turned up. None of my volunteers found the obvious and most elegant solution. Upon closer analysis this turned out to have a common source: their notion of repetition was so tightly connected to the idea of an associated controlled variable to be stepped up, that they were mentally blocked from seeing the obvious. Their solutions were less efficient, needlessly hard to understand, and it took them a very long time to find them.
Does anyone know what these test problems and solutions were? Or have similar examples?
The "answer" is the classic Dining Philosophers problem, which Dijkstra himself invented to illustrate the issues of concurrent programming. It isn't clear that this is actually what he meant, but it's probably the best we are going to get now that he is gone.
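For readers who have not met it, a minimal sketch of the Dining Philosophers setup follows, using one standard deadlock-avoidance strategy: impose a global order on the forks and always acquire the lower-numbered one first, which breaks the circular wait. This is an illustration in Python threads, not Dijkstra's original semaphore formulation.

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]  # one fork between each pair
meals = [0] * N

def philosopher(i):
    # Philosopher i needs forks i and (i+1) % N. Acquiring them in
    # global index order prevents the circular wait that causes deadlock.
    first, second = sorted((i, (i + 1) % N))
    for _ in range(100):
        with forks[first]:
            with forks[second]:
                meals[i] += 1  # "eating" while holding both forks

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If instead every philosopher grabbed the left fork first, all five could hold one fork and wait forever for the other; the resource ordering is what makes this version terminate.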
while (*dest++ = *src++);
"...their notion of repetition was so tightly connected to the idea of an associated controlled variable to be stepped up, that they were mentally blocked from seeing the obvious."
I was expecting an example where a loop worked, but a simpler mathematical solution exists without a loop.
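Something along the lines the commenter expected, a loop that works next to a closed-form solution that makes it unnecessary, can be sketched in a few lines; this is my own example, not one of Dijkstra's test problems.

```python
def sum_loop(n):
    # Repetition with the "associated controlled variable to be stepped up".
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed(n):
    # The simpler mathematical solution: Gauss's formula, no loop at all.
    return n * (n + 1) // 2
```

Both compute 1 + 2 + ... + n, but only the second is trivially correct by a one-line argument (pair the terms k and n+1-k).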
Conceiving simple tests to prove that a simple piece of programming works keeps it simple. Simple is powerful.
> program testing can be a very effective way to show the presence of bugs, but is hopelessly inadequate for showing their absence
But proofs are hard. Therefore he advised that the program and its proofs (he didn't say tests) be created together so that the program could be constructed in such a way as to make the proofs easier.
If software development were to continue to be the same clumsy and expensive process as it is now, things would get completely out of balance. You cannot expect society to accept this, and therefore we must learn to program an order of magnitude more effectively.
For example, processing power, communication speed, and storage have improved by orders of magnitude, but battery capacity is perhaps better by one factor of ten. Much of the advance in battery life comes from reduced power consumption by other components. Much of the advance in software development has come from creating inefficient abstractions that let us build things faster but squander CPU, network, and battery life.
Most companies today are willing to burn up some hardware performance in order to reduce software development time. If they're burning too much, they'll fix it or they'll go out of business. Premature optimization is still not a good idea.
Since then, programs are shared and run on many more machines; the cost is amortized through portability and writing general-purpose tools. Dijkstra was right to say that something had to change, but he didn't anticipate the efficiency gains from sharing portable software, at least not in this essay. Instead he seems to have thought that programmers should learn to write programs faster. We've made some progress on that, but the great efficiency gains happened at a higher level.
It was the Turing Award Lecture in 1972 - it's over 40 years old. Printed in "Classics in Software Engineering" by Yourdon Press, 1979, ISBN 0917072146.
Here are some of the previous submissions here on HN:
https://news.ycombinator.com/item?id=1799296 <- 3 comments
https://news.ycombinator.com/item?id=1894784 <- 8 comments
https://news.ycombinator.com/item?id=6112467 (This item)
I spend about an hour on HN a day, in small bouts. Most of the time I will skim the titles to see if there is anything interesting; if there is, and it's a long read (like this one), I will save it to "Pocket" to read later. I will probably get around to reading this a few weeks or even a month from now.
I suspect very few people will read it on the spot, which is why even interesting submissions like this don't get a lot of discussion.
Also, although the author carries immense weight, and I'm sure the ideas put forth in the article are fantastic, what benefit will I get out of the article other than "hmmm, well that was interesting"? It's not a tutorial on a tool I can put into practice, or necessarily any sort of technique I can use in my day-to-day life (at least before reading, it seems this way). It looks like a philosophical article about the art of programming and being a programmer.
I love these kinds of articles, but their return on the large time investment required is ultimately not very large. This is in my relatively limited experience, of course.
They represent a philosophy that is referenced often, but until now always as an unintentional consequence of our internal laziness, a laziness that should be fought against. Here, however, it is presented as a conscious choice. Indeed a right, proper, and good choice.
The commenters seem to be arguing that reading is only worth the time if the content has been distilled to its basic facts, and further that the facts need to be immediately actionable. Have we no room for soul? Do we lack the energy to take general concepts and apply them to new areas in new ways?
When we break a larger piece of writing down and extract just the main theses, we make it easier and quicker to understand, but we also neuter and even change its meaning. Sometimes what we learn or what we experience is subtle. Sometimes writing doesn't give us a todo list, but instead ever-so-gently shades and nudges all our todo lists.
You are reading too much into it. I (and probably the person you replied to) have nothing against long-form articles, as a matter of fact I prefer them.
On a typical day, I will spend 1-2 hours on a book, I will read many smaller articles here on HN and on Reddit, I will check my RSS reader, I will read work-related email, I will do actual work, I will train for my marathon (alternating days of running and weights) which takes about 2 hours, spend time with my family, socialize, and hopefully get some sleep too. It's all about how you manage your time, not about distilling long-forms into bite-size chunks.
If given a choice between reading a long-form article online or reading a book, I will read a book.
There is only so much time in a day, and so much to do. I save long articles like this to read on my lazy or slow days.
As for the second, there's nothing wrong with coming to HN for a certain kind of article and overlooking other kinds. Do I have to read every article, on the chance that it affects me in a positive way, in order to avoid criticism?
Unfortunate, but true. I prefer to batch up my online reading time so that I can get through the denser articles and still have time to read the five minute blog posts.
But I suspect that the main reason why it's hard for most people to talk about his work is that they lack the historical context to understand what he's talking about. For someone who has never studied or used FORTRAN, ALGOL 60, or any of the macro languages he alludes to, his criticisms seem very abstract. For someone who has seen those languages, they're concrete and visceral.
Looking back twenty years ago, I now understand that one of the highest-value courses in my CS degree was the "Survey of Programming Languages" class. We spent two to three weeks studying and working with each of LISP, FORTRAN, ALGOL-60, Smalltalk, and a couple others I don't remember. I enjoyed the class but at the time I wasn't developed enough to think more than "man, people sure have come up with some strange ways of doing things; ok, back to C, I love my filesystems class."
Now I recognize how precious that exposure was. It's kind of like traveling; exposure to different cultures teaches you as much about your own culture as about theirs. You don't always realize what assumptions you're making until you see other people making different assumptions.
This is the value of formal CS education, i.e. Dijkstra's life work. Some people ask whether a CS degree is worthwhile when the Internet makes it so easy to learn how to program. It's the difference between university and vocational training; if you only want to fix cars, you just need some skills classes and time spent apprenticing in a mechanic's shop. If you want to be an automotive engineer, you need an engineering education. If you just want a job, you don't need college; if you want to participate in the core of what our civilization has to offer, you need an education (self-study or formal, it doesn't matter).
(Tip for the kids in school today: If you're at university and you find yourself frequently saying "Why do I have to learn this crap? I'll never use it!" then you might just be wasting your time and money. Do yourself a favor and drop out, unless someone else is paying for your play time. But if you learn for the sake of learning, if you recognize that learning how to learn, to prepare for a lifetime of learning, is the point of education, then stay in school, since you'll reap the rewards many times over.)
It will be taught again in the fall. He covers three languages (SML, Racket, and Ruby) in ten weeks, hitting three of the four quadrants on strongly typed/dynamic and functional/object oriented.
It's interesting how terminology has changed. In the 1970s you might have been able to say that Dijkstra was part of a field called "software engineering", in that he was writing about methodology for developing software. But in the 1980s it became a fairly different field, focused more on how to develop effective processes for large-scale development (modeling, testing, code review, team structure, tooling, etc.). By 1988 Dijkstra was vehemently against that version of software engineering.
Here's his article on that subject: http://www.cs.utexas.edu/~EWD/transcriptions/EWD10xx/EWD1036...
A number of these phenomena have been bundled under the name "Software Engineering". As economics is known as "The Miserable Science", software engineering should be known as "The Doomed Discipline", doomed because it cannot even approach its goal since its goal is self-contradictory. Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot.".
And since there's a lack of comments on many of the threads, I'll add my opinion to this one. This was written by a man who's got a long-term love affair with programming. I recognize this because I am the same way, and it's actually rarer than you might think.
Most of the people who were my colleagues have gone on to management, or more likely switched to other careers. I've been in management repeatedly (even C-level) and have had to talk my way back into software architecture and development. I wrote my first program 41 years ago and it's still my favorite past-time.
I think the problem with Dijkstra is that A) he views every program as one huge algorithm, whereas 90% of programming now is doing IO and user interfaces, and B) he doesn't take into consideration that people can have different brain wiring than he does and be effective with techniques other than the ones he advocates.
"[LISP] has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts.", that's pretty profound ;-)
I only know of one programmer who did - one of the best hackers I've ever met, who eventually dropped out of UT to program full time. I never asked about his opinions of the professor, and anything I could write would be second hand.
I had a professor out of UT for my programming languages class. When asked, "How do you go about debugging interpreters?" he responded, "I don't debug. I prove every line of code correct before I type it."
Was Dijkstra inspiring? Or just a pedantic fuddy-duddy?
But in reality it wasn't that long ago and a lot of the early computer pioneers are still alive. Computing is still a really young field!
If it is too long for you, then it is just as well that you don't waste your precious little time on this planet reading it.
Take some time, this one is worth stepping back from the Internet firehose of information and sipping the lemonade of prose.