"Chemistry is accepted as a science: we hate it for its pollution and our dependency on its products, but we no longer blame it for not trying to make gold. Medicine has been accepted as a science: we hate it for the overpopulation and the soaring medical bill but no longer blame it for not producing the Elixir that gives eternal youth. Astronomy has been accepted as a science; we hate it for removing the earth from the center of the universe but no longer blame it for not realizing the astrologer’s dream of accurate prediction of the future. They are accepted as sciences; at the same time, the flourishing business in Healing Gems and Crystals, the horoscopes in otherwise respectable magazines, and governments relying on astrologers are a healthy reminder that Science as such remains rejected and that the old dreams linger on. Finally, one general remark about how sciences emerge: sciences become respectable by confining themselves to the feasible and successful by allowing themselves to be opportunity-driven rather than mission-oriented. (This, by the way, is why managers hate successful science: because it is not mission-oriented, they cannot manage it.)"
Dijkstra was not always right and incontestably he had a taste for polemic, but damn the old man was wise.
There is a school of thought in the community where I work that the alchemists get a pretty bad rap. Probably the clearest statement of this is in one of the chapters of Cyril Stanley Smith's "A Search for Structure", particularly Chapter 5, "Matter and Materials".
The thing that the alchemists were on to was that the same ingredients can have different properties if you process them differently.
There was a lot they got wrong -- they were wrong about gold being a "property" of base matter that could be elicited by suitable processing, and they had no coherent model, and the field was rich with what we now understand to be superfluous mysticism. But, the argument goes, a clear-headed understanding of the situation would rank alchemy as a valid and legitimate precursor of modern materials science, alongside phlogiston and caloric theory. Wrong, to be sure, but far from stupid.
Dijkstra's critique is evidently, by his own analogy, along the lines of criticizing alchemy for being wrong and assuming that, in its wrongness, it therefore has no value at all.
But, taking the analogy more seriously, one can be more charitable. Modern programming is indeed alchemical in character: it captures important insights about what is possible at scale and produces products that are useful. It lacks a model; there is no theoretical bridge yet between the foundations of logic and Dijkstra's Very Large Scale Applied Logic. It is instead a grab-bag of tools, and doubtless the domain of some superfluous mysticism.
A model is desirable, and putting programming practice on a sound logical footing is a fine idea. But we also should not let our prejudice for reductive solutions blind us to the utility of what has been achieved by the people forging ahead in their absence.
Dijkstra had a lot of influence for some years on the UTCS undergraduate program, whose first undergraduate course for much of the 1990s was taught in Haskell and was a brutal weed-out, at ~70% attrition for the first-year courses.
The UTCS undergrad program has been "nerfed" twice since the 1990s - in 2001 and in 2014. Various political interests - during the first tech bubble, to produce more graduates, and later, to have more under-represented minorities - have dramatically reduced what Dijkstra pushed for.
I took that course in '96, as a lost liberal arts major, with no background in CS. If you paid attention & didn't get behind in course work, it was an easy A. Most of the people failing that class never bothered to do the course, or claimed to already know how to program. The lecturer I had was Richardson, and I used to model my courses in grad school after his; he was a gifted educator.
This was the key fact. They knew how to program in Pascal from AP Computer Science, got 5's on the AP exams, and came into UT Austin with abundant confidence, but when presented with hardcore mathematical logic, data structures, and algorithm analysis in functional programming starting week 1 of college, they got the rug pulled out from under them. Of course, this was real computer science, not "infantile" imperative/OOP programming.
Ham "Haskell" Richardson was a total fanboy of Dijkstra!
I will note that the current first UTCS course is a serious joke - before 2014, CS 312 was a remedial course called CS 305J that didn't count toward the major. It's an embarrassment compared to Berkeley's CS61A.
A 70% attrition rate is awful. How many of those kids could become fine engineers if they had more runway to fail and learn without being weeded out?
All this tells me is that if CS concepts didn't click for you instantly, you were out - even though that is a terrible heuristic for who would make a good engineer (or good computer scientist, whatever the goal is).
Hm, it depends how "attrition" is defined. I read it the way it is done in German universities, meaning 70% and above indeed fail the weed-out courses (usually math) on the first try, but you had 3 tries and could still attend the next semester without passing.
But for quite a few, it was indeed eye-opening and they left for something else. It is a bit brutal, but effective, and it gets the message across. If you don't want to struggle to learn the basics, you are in the wrong place in computer science.
That doesn't mean you cannot become a programmer; there is another formal path for doing so. But attending a university does mean playing at another level. (Or well, it should mean that; I got to know too many who just learned to play bullshit bingo.)
> How many of those kids could become fine engineers if they had more runway to fail and learn without being weeded out?
This question is worth taking seriously. I've seen a study of a slower-paced course which nevertheless found no difference between the start and the end in the number of students who could understand basic variable assignment; either they "got it" straight away or not at all.
I bet after that initial 70% attrition rate, the subsequent attrition rate was close to zero. As opposed to the current state of affairs, where people are slowly and painfully abraded away over the course of years (or graduate at a standard that may not have been acceptable, say, 20 years ago).
Perhaps, but also, UTCS = University of Texas Computer Science department, and universities don't make engineers; they make scientists and researchers...
Computer Science != programming. This memo by Dijkstra is pretty much agreeing with that (though complaining about the existence of non-CS programming).
UT's CS degree has been continually nerfed since 2014, with several reductions in the number of required science/math courses and an increase in the number of required "culture flag" (I forget the exact euphemism) courses, encompassing precisely the types of X diaspora/Y-ism studies courses you might expect. Source: graduated UTCS 2017, saw this happening in real time
> Science is hated because its mastery requires too much hard work, and, by the same token, its practitioners, the scientists, are hated because of the power they derive from it.
> Mathematical elegance, conceptual simplicity, and their companion, brevity of an unambiguous reference manual, are a condition sine qua non for any [software] product reliable enough to attain stability.
> Needless to say, this sober message is unacceptable. Simplicity requires hard work to be obtained and education for its appreciation, and complexity sells much better.
This is some hard-hitting stuff. Very interesting, thanks!
In the end it is the difference between science and applied engineering.
For a product, in the end, it has to work in the real world, given available time, resources and requirements (which often tolerate some failure rate).
Strict science is important for progress; engineering can bring it to application.
More interesting to me than the analogizing of computer science to alchemy is the power dynamic he highlights in the applied sciences - namely the tendency for the craft to acquire and distribute power outside of (and even contrary to) the interests of those doing the investing.
I think with machine learning and especially the LLMs, we're getting pretty close to alchemy.
There's this quote that any sufficiently advanced tech is indistinguishable from magic. Whoever coined it probably didn't realize this was also true for the creators of that tech.
> I have taken extensive experiments with CS faculty members from all over the world, and the vast majority of them —I mean about 95 % of them— cannot program a Binary Search.
Knuth’s The Art of Computer Programming, Volume 3, has some more on this. Off the top of my head, he writes that it took something like a decade from the first published binary search to the first version correct for all inputs.
This is a bar that you would expect a professor to be able to jump over, but it goes to show that a simple-sounding algorithm can be exceedingly tricky to get right. In my experience, some kind of Hoare-triple-derived methodology with invariants is the only reliable way to code a correct binary search from scratch, as opposed to simply having memorized a solution. Here[1] is what, at a glance, appears to be a readable and correct example of how one might do that.
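For what it's worth, here is a minimal sketch of that invariant-driven style in Python (my own illustration, not taken from the linked example): state the invariant once, then check that each branch preserves it.

    def binary_search(xs, target):
        """Return an index of target in ascending-sorted xs, or -1 if absent."""
        lo, hi = 0, len(xs)
        # Invariant: xs[:lo] < target and xs[hi:] >= target, so the first
        # occurrence of target, if any, lies in xs[lo:hi].
        while lo < hi:
            mid = (lo + hi) // 2      # lo <= mid < hi, so the range shrinks
            if xs[mid] < target:
                lo = mid + 1          # xs[:mid+1] < target: invariant preserved
            else:
                hi = mid              # xs[mid:] >= target: invariant preserved
        # Now lo == hi: target, if present, can only be at index lo.
        return lo if lo < len(xs) and xs[lo] == target else -1

The point of the discipline is that termination (hi - lo strictly decreases) and correctness (the invariant plus the exit condition) are argued explicitly, instead of hoping the off-by-one cases work out.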
For Dijkstra programming meant to think about a problem, then sit for a few minutes at your desk and write with pen on paper "in one go" the perfect algorithm/program, no bugs, no errors, no corrections, from the first try, and also, in beautiful handwriting. Some kind of idealized way in which Mozart would allegedly write his compositions. Whereas "Beethoven was a doubter and a struggler who started writing before he finished the composition and then glued corrections on the paper. In one place he did this nine times. When they peeled them, the last version proved identical to the first one. This iterative method of programming is somehow a very Anglo-Saxon custom. British education is pervaded by it. People learn, when they write, not to try to get it right the first time. Just write what's on your mind and then rewrite repeatedly to get the product you want. That's partly why word processors are marketed so aggressively and partly why they have been so successful there." [1]
> For Dijkstra programming meant to think about a problem, then sit for a few minutes at your desk and write with pen on paper "in one go" the perfect algorithm/program, no bugs, no errors, no corrections, from the first try, and also, in beautiful handwriting.
An approach that works best if you get to pick your own, narrowly defined problems, and do not deal with externally imposed, let alone changing, requirements. It utterly does not scale.
I'm not entirely sure when the last time was that Dijkstra actually typed a line of code into a computer (or punched it into a card, as the case may have been), but it appears to have been around the early 1970s. So, like the Pope's pronouncements on sex, Dijkstra's pronouncements on programming derive a lot of their clear-cut purity from coming from a non-practitioner.
Interestingly enough Dijkstra anticipated your response and for me at least adequately refuted it in some of his other EWDs.
Are you aware that among the “narrowly defined problems, and do not deal with externally imposed, let alone changing, requirements” that Dijkstra solved are such doozies as distributed consensus, synchronization, and OS level hardware interrupt handling?
And so these are now no longer problems? Or do they remain thorny matters in practice, precisely because the versions that Dijkstra solved are narrowly defined and without awkward external requirements?
Good software engineering largely rests on taking whatever "awkward external requirements" you're dealing with and decomposing them by concerns until you have a set of "narrowly defined" problems that you can effectively reason about, formally or informally. Dijkstra was a master of that kind of decomposition and that's how he got the interesting "theoretical" problems he solved: they came up in the course of breaking down practical problems!
The point then is that with the concerns having been adequately separated, rigorous, disciplined, and mathematical reasoning is a better approach to thorny "real-world" problems than spitballing and hoping it works.
> Good software engineering largely rests on taking whatever "awkward external requirements" you're dealing with and decomposing them by concerns until you have a set of "narrowly defined" problems that you can effectively reason about, formally or informally. Dijkstra was a master of that kind of decomposition and that's how he got the interesting "theoretical" problems he solved: they came up in the course of breaking down practical problems!
Every good software engineer breaks down the problems they have to solve, and usually you find that they're, say, 20% interesting theoretical problem and 80% messy practical drudgery. Unfortunately most of us don't have the luxury of choosing to only solve the nice 20%.
> The point then is that with the concerns having been adequately separated, rigorous, disciplined, and mathematical reasoning is a better approach to thorny "real-world" problems than spitballing and hoping it works.
What's the evidence for this? Because if Dijkstra published a paper "solving" the problem, but the problem was not actually solved in the real world, that to me suggests that his approach is decidedly ineffective.
> What's the evidence for this? Because if Dijkstra published a paper "solving" the problem, but the problem was not actually solved in the real world, that to me suggests that his approach is decidedly ineffective.
Another possibility is that the field is dominated by coders who are mathematically illiterate and unwilling to remedy that deficiency while hiding behind rhetorical hand waving about the “real world.” You yourself appear to stand as evidence for that explanation.
Fortunately we are seeing some real progress in the field. For example tools like TLA+ are slowly but surely getting wider and wider usage. Lamport is very much in the same school of thought as Dijkstra by the way. Even TDD represents a major step toward a more disciplined approach to programming.
> Another possibility is that the field is dominated by coders who are mathematically illiterate and unwilling to remedy that deficiency while hiding behind rhetorical hand waving about the “real world.” You yourself appear to stand as evidence for that explanation.
I'll take the gratuitous personal attack as a sign that you have no real argument. "coders who are mathematically illiterate" doesn't explain anything. Even in the research mathematics community it's acknowledged that if you haven't communicated your proof to others then you haven't proved the theorem (see all the fuss around the ABC conjecture).
I'm all for mathematical rigour in programming (and, not that it matters, I'm as experienced/qualified as a non-research mathematician can get). But judging how effective something is by how rigorous it is is putting the cart before the horse.
> Even TDD represents a major step toward a more disciplined approach to programming.
A step that Dijkstra would have abhorred; he used variations of "Program testing can be used to show the presence of bugs, but never to show their absence!" in at least a dozen different EWD notes.
Yes, Dijkstra invented the semaphore, but that "solved" synchronization to about the extent that John McCarthy inventing the linked list "solved" data structures (or FAANG hiring, for that matter).
While I am not one to condone Hagiography, your comment is pure Hubris.
If you think solving problems by constant trial-and-error is the norm and Dijkstra's idealized mathematically disciplined systematic approach methods can well be ignored, then I fear you have not understood the difference between Science, Engineering and Craftsmanship/Tradesmanship (without formal training).
>I'm not entirely sure when the last time was that Dijkstra actually typed a line of code into a computer ... coming from a non-practitioner.
This is laughable. By this logic every code monkey today (including myself) is better than Dijkstra because we simply have written a larger volume of code and that too in many different languages!
One needs to practice Humility when approaching the Words/Works of Masters and try to understand them instead of being dismissive. I keep the following quote by Arthur Conan Doyle from the novel The Valley of Fear in mind when studying the Greats:
Mediocrity knows nothing higher than itself, but Talent instantly recognizes Genius.
You might find the following two papers relevant:
1) This paper gives a retrospective on an earlier seminal paper by the author himself (what he got wrong and what he got right): Retrospective: An Axiomatic Basis For Computer Programming by C.A.R. Hoare - https://cacm.acm.org/magazines/2009/10/42360-retrospective-a...
This is what impresses me the most about Dijkstra's accomplishments. It's not that he was extremely smart, although he was. Or that he discovered many fundamentally important algorithms, although he did. It's that he chose to develop and share how he did what he did. It's that you can read through one of his papers and it makes you feel like you could have applied his methods and made the same discovery. Of course that's not necessarily true, but that it's even sometimes true is remarkable.
And speaking of his papers, the EWD archive the submission is from is an absolute treasure-trove. While I'd recommend one of his books for someone interested in learning the methods he and his colleagues used, the more technical EWDs are very interesting. I personally enjoy the more polemical ones too, but I understand why many people are put off by them. Whenever someone gets around to writing the comprehensive multi-volume history of the development of Comput(er|ing) Science they will be an invaluable resource. Here's just one that I find notable[1].
>It's that he chose to develop and share how he did what he did. It's that you can read through one of his papers and it makes you feel like you could have applied his methods and made the same discovery. Of course that's not necessarily true, but that it's even sometimes true is remarkable.
Exactly Right!
This is the reason I worship at the "Altar of Dijkstra :-)": he shows by example how to apply mathematical logic to develop a step-by-step solution to seemingly complicated problems. I am more interested in understanding the methodology of thinking involved in the problem-solving process (the essence of his genius, which cannot be taught directly but can only be demonstrated via examples) than the details of the actual examples themselves.
To borrow Dr. Watson's words about Sherlock Holmes (from the story "The Red-Headed League"):
I trust that I am not more dense than my neighbors, but I was always oppressed with a sense of my own stupidity in my dealings with [Edsger Dijkstra]. Here I had heard what he had heard, I had seen what he had seen, and yet from his words it was evident that he saw clearly not only what had happened but what was about to happen, while to me the whole business was still confused and grotesque.
> If you think solving problems by constant trial-and-error is the norm
News flash: It is.
> Dijkstra's idealized mathematically disciplined systematic approach methods can well be ignored
I never said that. I think they have their place, and I use them myself in some situations. It's Dijkstra and his acolytes who think iterative approaches can be ignored.
> One needs to practice Humility when approaching the Words/Works of Masters and try to understand them instead of being dismissive.
I've read a lot of Dijkstra's writings. They are not without merit. But he himself could have done with practicing some humility when engaging with computing as it is actually practiced in the physical world, as opposed to in his mathematical models.
Knuth is not any less theoretically sound than Dijkstra. But he engaged with practical computing issues and wrote software that is still in wide use. Wirth is not any less disdainful of commercial computing hype than Dijkstra. But he built actual hardware and software systems. Those are the masters I look up to.
I don't think you have understood the nature of Dijkstra's work nor the genius behind it. It is different from the genius of Niklaus Wirth (simplicity of OS, Programming Language implementations) or Donald Knuth (efficiency of Algorithm implementations) in that it is at a more "meta level" bringing together the fields of Computation, Languages and Mathematics.
I have already pointed out the laughable ridiculousness of your measuring Dijkstra's contributions by the amount of code written. He pioneered, defined and established a great many aspects of the Computing Science field when it was still in its infancy. Here is a non-exhaustive list (from https://en.wikipedia.org/wiki/Edsger_W._Dijkstra and other sources):
1) He designed/described an ISA/Assembly Language to interface with one of the first Computers ever built (his PhD thesis named "Communication with an Automatic Computer").
3) He designed/co-wrote one of the very first Algol 60 compilers named the "Dijkstra-Zonneveld Algol 60 compiler" - https://en.wikipedia.org/wiki/ALGOL_60
4) Instrumental in establishing "Structured Programming" constructs in language design itself which we all take for granted today.
5) Instrumental in establishing Mathematical Rigour in Computer Programming via Mathematical Logic/Predicate Calculus, akin to the writing of Mathematical Proofs, to show "Correctness by Design and Construction". The importance of this cannot be overstated.
6) Instrumental in establishing aspects of Concurrent Programming (eg. CSP), Synchronization (eg. Mutex, Semaphores) etc.
7) Instrumental in devising several non-trivial algorithms for both "normal" and Concurrent programs, e.g. Shortest-Path, Dining Philosophers etc.
As you should now be able to appreciate, Dijkstra did write a great many "Programs" (in addition to other writings), all of which were path-breaking, which few can claim about their own life's work.
1-3 are from his time as a practitioner in the field. There was an exchange in Knuth's interview in Peter Seibel's "Coders at Work" that I thought really insightful:
> Seibel: It seems a lot of the people I've talked to had direct access to a machine when they were starting out. Yet Dijkstra has a paper I'm sure you're familiar with, where he basically says we shouldn't let computer-science students touch a machine for the first few years of their training; they should spend all their time manipulating symbols.
> Knuth: But that's not the way he learned either. He said a lot of really great things and inspirational things, but he's not always right. Neither am I, but my take on it is this: Take a scientist in any field. The scientist gets older and says, "Oh, yes, some of the things that I've been doing have a really great payoff and other things, I'm not using anymore. I'm not going to have my students waste time on the stuff that doesn't make giant steps. I'm not going to talk about low-level stuff at all. These theoretical concepts are really so powerful - that's the whole story. Forget about how I got to this point."
> I think that's a fundamental error made by scientists in every field. They don't realize that when you're learning something you've got to see something at all levels. You've got to see the floor before you build the ceiling. That all goes into the brain and gets shoved down to the point where the older people forget that they needed it.
(4) He certainly promoted them. I don't think he invented them; I believe the one he was most involved in was guarded commands, which he promoted as an alternative to if statements. For better or worse, this never caught on.
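To make "guarded commands" concrete for readers who haven't met them: each branch is a boolean guard paired with a statement, more than one guard may be true (nondeterminism), and if none is true the construct aborts. Here's a rough Python rendering of Dijkstra's two-guard maximum, with his notation in the comment (gc_max and the dispatch scaffolding are my own illustration, not a standard library):

    import random

    # Dijkstra's guarded if, in his notation:
    #   if x >= y -> m := x
    #   [] y >= x -> m := y
    #   fi
    # When x == y both guards hold and either branch is a correct choice.
    def gc_max(x, y):
        branches = [(x >= y, lambda: x),
                    (y >= x, lambda: y)]
        enabled = [body for guard, body in branches if guard]
        if not enabled:
            raise RuntimeError("abort: no guard enabled")  # GCL 'if' aborts
        return random.choice(enabled)()   # nondeterministic choice

The deliberate nondeterminism is the part conventional if/else lost: the program text stays symmetric, and each guard has to justify its branch on its own.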
(5) I think it is, in fact, vastly overstated. Mathematical rigor has its uses in some areas (progress conditions in loops, e.g., and getting binary searches right is vastly easier by applying some mathematical thinking). The kind of "end to end" correctness reasoning he promoted has found only a few adopters (avionics, etc).
If you disagree, how many of your last 10 projects have you proven correct? How many of them have been found to be defect-free so far?
6-7 yes, sure. I'm not denying that he made many important contributions to algorithmic thinking. But that does not translate into expertise in how programming ought to be done. Notably, none of his accomplishments in 1-3 were attained according to the methodology he advocated later in life as the only possible one, nor did he try to reproduce them using that methodology.
Whenever somebody says something, one has to consider their Competency, Context and Time to read between the lines and try and get at the gist of what is being intended rather than the mere words themselves.
This is orders of magnitude more important when we try to understand "The Greats" (in any field) because they have a certain insight into things, a huge knowledge base to draw upon and are aware of myriads of nuances involved in their domain of expertise all of which we poor students have to struggle to grasp.
It is with the above points in mind that one should interpret Knuth's nuanced quote above. He had the highest regard/respect for Dijkstra (see his detailed review (pdf) of Dijkstra's book "Structured Programming" at http://infolab.stanford.edu/pub/cstr/reports/cs/tr/73/371/CS...). Given that Donald Knuth is the "Patron Saint of Yak Shaves" (https://yakshav.es/the-patron-saint-of-yakshaves/), his perspective did not allow a fully mathematical abstraction without any reference to a machine (real or virtual). Note that he did not deny Dijkstra's approach but merely wasn't agreeing with Dijkstra's strident promotion of it. That is a viewpoint and not a value judgement.
Coming back to the points above:
[1-3] - One has to remember that when Dijkstra did these almost nothing that we take for granted today existed. Everything had to be built from ground-up. You can imagine the difficulty when you have nothing/nobody to learn from and are basically inventing stuff as you go along. You only had Mathematics as a stable bedrock.
[4] No, he pioneered it. Wikipedia actually credits him with coining the very term "Structured Programming" (https://en.wikipedia.org/wiki/Structured_programming#cite_no...) Regardless, he wrote EWD249 "Notes on Structured Programming" in 1969. Also see EWD1308 "What led to Notes on Structured Programming".
[5] Most people misunderstand what is meant by "Correctness" in a Computer Program. Perhaps Bertrand Meyer's synonym of a "Contract" is better suited to explain that what is meant is "Correctness w.r.t. Specification". Obviously this is fundamental to Programming itself. To get the benefit of this approach you don't have to go for 100% rigour (rather difficult) but can train yourself to think in these terms, using appropriate pre/post conditions. A good example can be seen in the use of Design-by-Contract syntax in Eiffel. Here is Dijkstra from EWD288 "Concern for Correctness as a Guiding Principle for Program Composition":
My thesis is, that a helpful programming methodology should be closely tied to correctness concerns. I am perfectly willing to admit that I myself may be the most complete victim of my own propaganda, but that will not prevent me from preaching my gospel, which is as follows. When correctness concerns come as an afterthought and correctness proofs have to be given once the program is already completed, the programmer can indeed expect severe troubles. If, however, he adheres to the discipline to produce the correctness proofs as he programs along, he will produce program and proof with less effort than programming alone would have taken.
Finally, a word or two about a wide-spread superstition, viz. that correctness proofs can only be given if you know exactly what your program has to do, that in real life it is often not completely known what the program has to do and that, therefore, in real life correctness proofs are impractical. The fallacy in this argument is to be found in the confusion between "exact" and "complete": although the program requirements may still be "incomplete", a certain number of broad characteristics will be "exactly" known. The abstract program can see to it that these broad specifications are exactly met, while more detailed aspects of the problem specification are catered for in the lower levels.
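To make the pre/post-condition idea concrete, here is a minimal Python sketch of the Design-by-Contract style (Eiffel has first-class require/ensure syntax; the assertions below are a rough stand-in, and sqrt_floor is my own illustrative example, not from Dijkstra):

    def sqrt_floor(n: int) -> int:
        """Integer square root: the largest r with r*r <= n."""
        assert n >= 0, "precondition: n must be non-negative"
        r = 0
        while (r + 1) * (r + 1) <= n:
            r += 1
        # The postcondition is exactly the specification the caller relies on.
        assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
        return r

Even this diluted form illustrates the point in the quote above: the broad characteristics (the contract) are stated exactly, while the implementation between the assertions is free to change.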
[6-7] Your comment "Notably, none of his accomplishments in 1-3 were attained according to the methodology he advocated later in life as the only possible one" is exactly the wrong inference to draw! It is precisely due to this formative experience that he was such a strident proponent of the use of Mathematics in Programming. As pointed out above, he had to invent a lot of this stuff as he went along and had nothing other than mathematical logic to aid him in his endeavour. Dijkstra actually said he was "slow" to understand the works of Floyd/Hoare/Others on Program Correctness techniques. But of course once he got involved, he took it to the next level.
In summary, read Dijkstra, try and get at what he intended to convey rather than the tone/words (which are irrelevant in the broader scheme of things) and put forth effort to understand what has been said/written down. The rewards are great.
> [...] using appropriate pre/post conditions. A good example can be seen in the use of Design-by-Contract syntax in Eiffel.
I agree that these have some promise, but there is a rather wide gap between those and correctness proofs, and I'm not convinced that's a gap generally worth bridging.
> One has to remember that when Dijkstra did these almost nothing that we take for granted today existed
And yet at that time, he was capable of writing an ALGOL 60 compiler in assembly language within a few months. Two decades later, armed with his complete arsenal of program correctness techniques, he spent weeks writing a toy expression parser in a high-level language, abandoned his original objective along the way, and ended up with an absolute dog's breakfast of impenetrable prose and code: https://www.cs.utexas.edu/users/EWD/transcriptions/EWD05xx/E...
You have now shifted goalposts and are arguing about things different from the ones you started with. There doesn't seem to be any more to discuss, and so I will end this with the following notes.
Your original contention that Dijkstra had not written much code (thus insinuating that he was mostly "talk") I have proven false. Incidentally, he wrote about 1318 EWDs totaling over 7700 pages, in addition to his other books and papers (most of them have some "program pseudo-code")! This is indeed some impressive output.
I had also pointed out that much of his work is at a "meta-level", trying to get at the nature of programming and trying to establish that discipline on a sound mathematical basis. Thus it cannot be compared directly to conventional programming, but one has to understand the intent behind his writings and then try to practice them honestly in one's own work. He did not claim that his methods were "easy", only that they were a necessity to write "correct" programs. The degree to which they are necessary and applicable by "ordinary programmers" (i.e. w/o too much mathematical maturity) can be tuned (hence my example of DbC from Eiffel). From Mathematics we all know the difficulty of writing and reading proofs. So your expectation that Dijkstra's usage of it should be easy to grasp is fundamentally wrong. It is the nature of the subject itself that demands effort and there is no getting around it. Incidentally, while I am somewhat familiar with Wirth's compiler book (written for students), I have not read EWD550 (it seems to be written as a challenge response and quite intricate/non-trivial). It seems that they have very different purposes and audiences in mind, and so a comparison between them is not fair.
> Your original contention that Dijkstra had not written much code (thus insinuating that he was mostly "talk")
I could have been clearer about this, yes. I know that he was quite skilled at writing code in his younger years. However, he largely seemed to have given up on the practice by the 1970s, and the less he was coding, the more certain he seemed to become how it ought to be done and taught.
> From Mathematics we all know the difficulty of writing and reading proofs. So your expectation that Dijkstra's usage of it should be easy to grasp is fundamentally wrong.
So the supposedly superior methodology is neither easier to use, nor does it produce easier to maintain programs. It's almost like its superiority consists purely of gatekeeping the programming profession to those conversant with the methodology's shibboleths.
> I have not read EWD550
I should have led with that. As far as I know, it's the longest published program to which Dijkstra tried to apply his methodology, so, to me, the clearest evidence that it does not scale.
You now seem to be trying to argue the inarguable and also trying to play "gotcha" games.
To take the latter first, because EWD550 is intricate and unfinished, is it your contention that Dijkstra was wrong about his methodology which he had already demonstrated in other papers and books? This is not an argument but only calls into question one's own current competency and effort put forth to understand what has been written down. I am sure that if one sits down with it for concentrated study for some time that one can understand it. People lose interest in many things that they take up. This is even more true of geniuses since they have so much going on that once they have understood the gist of something they move on and leave the rest as "an exercise for the reader". They are not obligated to spell out everything for "the reader"; whether "The Reader" likes it or not is quite another matter. So to me it seems your inference on EWD550 is wrong.
>So the supposedly superior methodology is neither easier to use, nor does it produce easier to maintain programs. It's almost like its superiority consists purely of gatekeeping the programming profession to those conversant with the methodology's shibboleths.
Again your inference is wrong. The "superiority" lies in "proving correctness w.r.t. specification" and not in "ease of comprehension". There is no gatekeeping or any other malicious intent involved, but merely insistence on a methodology. Mathematical Proofs are by definition intricate, tedious and require concentrated study. That is the nature of mathematical logic and there are no shortcuts (the famous example of Russell and Whitehead taking more than 300 pages to prove 1+1=2 comes to mind here). If one wants to follow Dijkstra's method to the letter, then one has to "up one's game". If one doesn't want to do that but still get the same benefits (to a certain degree), then use a simpler approach like DbC, based on the same principles but much more "user friendly".
To paraphrase a famous quote: "There is no Royal Road to understanding Dijkstra".
Well, don't tell me, tell Dijkstra. I commit after every few lines; on that alone he would think I am an absolute moron, not to be allowed near keyboards.
Interesting that in all these comments, nobody referred to the (un)reality of creating proofs. The fact that the proof has to be longer than the program - ignored completely 40 years ago - undermines the generality of the "purely scientific" approach marketed by Dijkstra. While his critique of the software business is fundamentally sound - today even more than in the '80s - his idealistic view of programming as a scientific activity and his negation of the importance of (not to say contempt for) software engineering weakens his message, even if we keep in mind the creation date of his philippic.
It took me a while, but by reading a lot of AI research papers, I finally realized that Computer Science is primarily an experimental subject. You can see that we are running a lot of experiments in these papers. I don't think we understand what is going on inside Deep Learning networks, for example. Sure, there are quite a few studies, but overall I think this is accurate: we simply do not know what is happening inside. The main difficulty is high dimensionality. We simply do not have mathematical frameworks to analyze structures in the 100s of dimensions. With the advent of special relativity in the 1900s we developed some understanding of 4-dimensional objects, and of somewhat higher dimensions in String Theory. But 100s or 1000s of dimensions - we simply have no mathematical tools for that. The real CS theory is sorely lacking. For some reason, theoretical computer science seems to contribute quite little to the practical deep learning world, while primarily concerning itself with complexity theory and computability questions. Fun stuff, but... Somebody needs to do this. Calling all theoretical physicists looking for hard but immediately useful problems to solve!
> The main difficulty is high dimensionality. We simply do not have mathematical frameworks to analyze structures in the 100s of dimensions.
The main difficulty in deep learning is the non-convexity of the optimization problem. We can handle simpler problems in high dimensions just fine. The oracle complexity bounds for projected gradient descent in convex optimization even hold for infinite-dimensional problems - see the work of Nesterov.
Most of the hard questions about deep learning remain hard even for neural networks with low-dimensional inputs, outputs, and hidden layers. Also, some of the more fruitful approaches in deep learning theory involve taking the limit as the width of one network layer goes to infinity.
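For concreteness, here is a toy projected-gradient-descent run on a convex problem, sketched in Python/NumPy (purely illustrative; the problem, sizes, and step-size choice are my own, not from Nesterov's analysis):

    import numpy as np

    # Minimize the convex objective ||Ax - b||^2 subject to ||x|| <= 1.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(20, 5))
    b = rng.normal(size=20)

    def project_unit_ball(x):
        n = np.linalg.norm(x)
        return x if n <= 1.0 else x / n

    x = np.zeros(5)
    L = 2 * np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    for _ in range(500):
        grad = 2 * A.T @ (A @ x - b)    # gradient of ||Ax - b||^2
        x = project_unit_ball(x - grad / L)

    print("constrained least-squares solution:", x)

Convexity is what makes the 1/L step size come with a convergence guarantee; swap the objective for a neural-network loss and that guarantee evaporates, which is the non-convexity point above.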
> For some reason, theoretical computer science seems to contribute quite little to the practical deep learning world, while primarily concerning itself with complexity theory and computability questions. Fun stuff, but... Somebody needs to do this.
Lots of theoretical researchers are trying to figure out why deep learning works. Check out the work of Jason Lee, Simon Du, Sebastien Bubeck, etc. Most of these researchers have a CS background.
I am not sure we are disagreeing on much here. Yes, non-convexity is a major problem for optimization. And yes, very simple problems can be analyzed in high dimensions. But many problems are not simple, including understanding the general structure of the information/data flow through the network, as well as non-convex optimization itself. The work on infinite-width networks is very interesting and is yielding novel insights. Many theoretical CS researchers are working on understanding deep neural networks. I should have rephrased the sentence from "Somebody needs to do this" to "It would be nice if we could make major progress in this direction".
> For instance, the well-documented decline of productivity of the American white-collar worker has convincingly been linked to over-automation of the American office, but the “negative” outcome of this study has been totally ignored, in fact so much that many people involved in computing cannot believe it.
I would guess something on the productivity paradox (https://en.wikipedia.org/wiki/Productivity_paradox), so more that office productivity didn’t improve with automation in the same manner it did in manufacturing.
One of my pet theories is that IT is a net loss for most businesses and that they would be better off with one designated computer for Excel batch jobs. Beyond that, they should use cabinets, folders and paper mail.
I think this comment is perhaps a romanticization of the past. In Germany, especially in its government, but also in smaller, very conservative companies with old bosses, there are still some holdouts working purely paper-based.
The amount of work it takes to keep a paper-based environment organised seems insane. The amount of work it takes to share information in a paper-based one is crazy. Maybe not so much if it's just a small business, but anything bigger and you need to run your own postal service, with people sending letters to each other all the time, complemented by dedicated personnel for "bulk deliveries" (like bellboys, they carry heaps of folders from room to room) and dedicated personnel to organise the archive and retrieval. There's an insane latency when you're not in the same building, and even when you are, deliveries can still take a day or two. I only got a peek into the inner workings of courts here when they were still very, very paper-based (that changed over the last 5-10 years), and it felt like being at some postal logistics centre. People running around with luggage containers filled with files.
I have never seen this in a smaller setting, I imagine it's easier because you can just speak to each other.
I mean, obviously I don't actually know if my "theory" is valid or not. I did not work at that time.
My feeling is just that there is no order anymore.
When I visited my dad (DoD bureaucrat) or mum (dentist) as a child there was so much order, with secretaries and file cabinet rooms.
The cost of sending letters or filing documents worked as a filter, I guess. No data format was ever invalid; just put the paper in the folder. There was no "computer says no" for the clerks. Also, there was a limit to how convoluted your processes could practically be without computer programs.
The main thing was probably that you had to have secretaries keeping order. They are gone now.
I have never experienced that order where I have worked (I am 34). But I see it in remnants of late-90s/early-00s documentation and in old file cabinets from prior projects.
Nowadays everything just disappears into some network-folder black hole and is lost to employee attrition.
Jevons Paradox definitely applies - the cheaper something is, the more you spend on it. There's no will to prune unnecessarily complex processes and reporting when you can just automate them. But not to the extent of it actually becoming a negative; you forget how much all those clerks and secretaries used to cost.
Question:
> "For reasons I don’t quite understand —partly, probably lack of instrumentation— the science of chemistry was very slow to emerge..."
Answer?
> "..the alchemists, who, in their effort to make gold from cheap base materials, at least attack a problem of the highest social relevance, viz. enable the government to finance its wars without inflicting poverty on the people."
Supposedly the Roman Emperor Diocletian persecuted the alchemists of that era because he feared they'd be able to make gold and raise armies to attack the Roman Empire (and/or debase the Roman currency). This likely set back the state of chemical knowledge, possibly for centuries.
P.S. Nuclear transmutation of elements is now quite common; unfortunately nobody has figured out how to get just the desired isotopic species of a single element from a nuclear reaction instead of a pile of unstable isotopes of mixed elemental identity.
"Nuclear transmutation of elements is now quite common; unfortunately nobody has figured out how to get just the desired isotopic species of an single element from nuclear reaction instead of a pile of unstable isotopes of mixed elemental identity."
I think centrifugation would get the job done, but it is just not worth the effort at those small quantities.
A mass spectrometer can separate perfectly any isotopes from any mixture or chemical substance (by deflecting the ions in electromagnetic fields), but the rate at which this is done is too small for producing usable quantities in a reasonable time and with an acceptable energy consumption.
Imagine if someone released a programming language or system that made it possible and tractable to specify, implement, and prove the correctness of, say, ten different classes of things that most medium-to-large-scale programs have to deal with. Also imagine that this system had no complexity beyond the intrinsic kind.
A lot of people would perhaps be initially impressed. But then they would grow to dislike the fact that the system encourages things like proofs of correctness.[1] They would dislike the fact, or the idea, that they would have to adapt to project conventions and processes that involved that kind of work in order to get their changes merged. That’s friction. And you have to deal with the intrinsic complexity more or less up-front.
Then some time passes and an alternative system comes out. But this system is easier to use. It’s easier to use because it helps you 90% of the way to write and implement those same ten different things. So maybe there are no specifications, and there are certainly no proofs, and certainly no guarantees that X will always be the case or that Y will never be the case. This lack of guarantees gets lauded as “pragmatic”.
In order to try to make up for the missing 10%, fuzzing libraries are written, CI pipelines are set up, everyone is encouraged to write more tests, and more process is added to code reviews. In part because “You are supposed to always do Z, anyway”.
A downside is that you went from having firm guarantees (even with asterisks since they would point you to things like trusted kernels) to having no guarantees. The only guarantee that you have is right after you have run your somewhat lean six minute test suite just before committing: you know that your changes are good modulo those tests since you just ran them. Then you submit a PR and the remote CI reruns the tests.
In the end, with all those code reviews and unit tests and whatnot, you look around and see only blobs of imperative code that could in principle do anything. So you add more sandboxes and security layers. Because, the low-level programmer interjects, “it’s all imperative code under the hood [anyway]”.
[1] Only “encourages” since you can e.g. write stringly-typed programs even in Idris.
Good suggestion; I just finished Dyson's Turing's Cathedral, and it certainly suggests that von Neumann was as much (or more) into (what would become) systems as (what would become) theory.
> "All stable processes we shall predict. All unstable processes we shall control." — JvN