A Bridge Too Far: E.W. Dijkstra and Logic (vanemden.wordpress.com)



There are some good lecture videos with Dijkstra online that give a better introduction to his way of thinking than anything he wrote:

"Reasoning about programs": https://www.youtube.com/watch?v=GX3URhx6i2E

"The power of counting arguments": https://www.youtube.com/watch?v=0kXjl2e6qD0

"Structured programming": https://www.youtube.com/watch?v=72RA6Dc7rMQ

"NU lecture": https://www.youtube.com/watch?v=qNCAFcAbSTg


For those interested in Dijkstra, this website has a lot to offer: http://www.dijkstrascry.com/

I also followed some lectures given by the guy that runs that website. Apparently Dijkstra was not an easy person to get along with; there are stories of him being bored during some lecture, walking to the front, and lying down for a nap. (Sorry, I can't provide a source; it's basically a rumour among academic staff here in Belgium.)


> David Gries (Cornell University) was one of the recipients of "on-line" coaching during a lecture:

> My own experience with lecturing before Edsger took place in Marktoberdorf. It rests on the fact that in some languages (notably Fortran), the equality symbol and the assignment symbol are the same, and many people say "x equals e" when they mean "store the value of e in x," or "x becomes e."

> I was lecturing along, when I said "x equals e" meaning an assignment of e to x. From the back of the room came a loud "becomes," and then a stunned silence. Finally, I gathered my wits and said, "Thank you, Edsger, for correcting me. If I make the same mistake again, stop me." Twenty minutes later, I made the same mistake, and again from the back of the room came "becomes." "Thanks, I won't make the mistake again," I said, and to this day I haven't!

(https://history.computer.org/pioneers/dijkstra.html)


One of my drinking buddies, who has no affinity with computers whatsoever, recently casually remarked how he had often met Dijkstra. Apparently he was friends with one of the Dijkstra kids and used to visit him at their home in Nuenen. "The father was supposed to be some hotshot computer professor."

Needless to say, I nearly swooned when he told me that. I couldn't help pumping him for details, none of which I feel at liberty to pass on to you, except that he thought Dijkstra was eccentric but kind.


Perhaps not completely related to this topic, but interesting nonetheless: a small glimpse into Dijkstra's life from Dutch tv (subtitled in English): https://www.youtube.com/watch?v=RCCigccBzIU


Interesting video, thanks.

The subtitles at around 4:05 are out of sync. The quote reads

> Je moet de wereld niet geven waarom ze vraagt, maar wat ze nodig heeft.

In English:

> You must not give the world what it asks, but what it needs.


Edgar Daylight (the guy that runs that website) writes interesting books/booklets interviewing persons like Barbara Liskov and Donald Knuth. Here is a list of them: http://www.lonelyscholar.com/

I have read them all and recommend them. The one with Michael Jackson is particularly interesting for its dialectic qualities.


Another typical part of the lecture style was the “Dijkstra Font”. Some of the lectures in my CS program were given by people who worked with Dijkstra, like Feijen, and they all used the exact same style of beautiful handwriting. Perhaps the font was a result of really taking the time to think things over to make sure it was correct. Those lectures could move real slow when it was time to write some predicate logic or pseudo-code on a blackboard, but they hardly used the eraser.


There is an old joke on the Internet about measuring arrogance in nanoDijkstra, but I don't know how much of it is about real arrogance and how much is about him telling things as they are.


The two are mutually exclusive? Let me have a talk with your kids about life, death, Santa and the Easter bunny. I won't be a condescending dick, I'll just be telling them how things really are.


It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration. - Dijkstra

Pretty arrogant gatekeeping. And obviously wrong.


Taken from EWD 498, which is easily recognizable as satire unless you present it out of context.

https://www.cs.utexas.edu/users/EWD/transcriptions/EWD04xx/E...


I'm not sure it's satire, though it's certainly hyperbolic and exaggerated (regarding the impact of learning BASIC). It's also important to keep in mind the sort of language BASIC was at the time and Dijkstra's aversion to goto statements and unstructured programming. It had some of the elements needed for structured programming, but they could easily be bypassed. `GOSUB`, for instance, allowed for calling subroutines and stored a return address that `RETURN` would use. But `GOSUB` just jumped to a line number:

  10 GOSUB 100
  20 END
  100 <DO STUFF>
  ...
  200 RETURN
It was entirely possible to jump anywhere inside the subroutine, and commonly done. For instance, the first time you call it maybe you want to initialize some variables but then keep them the same later:

  10 GOSUB 100   'first call: runs the init code, then the rest
  20 GOSUB 130   'later calls: jump straight into the middle
  30 END
  100 <RUN ONCE CODE>
  110 <MORE RUN ONCE>
  120 <LAST RUN ONCE>
  130 <THE REST>
  200 RETURN
The logic behind doing this isn't actually expressed in the code. To an extent he's right: there's a great deal of unlearning that has to happen after learning to program this way.
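For contrast, here's a minimal sketch (in Python rather than BASIC, with made-up helper names) of how the same run-once initialization is usually written in a structured language, so the intent lives in the code instead of in the choice of GOSUB target:

  _initialized = False

  def subroutine():
      # The run-once logic is explicit, instead of depending on
      # whether the caller jumped to line 100 or line 130.
      global _initialized
      if not _initialized:
          run_once_setup()
          _initialized = True
      the_rest()

  def run_once_setup():
      print("run once code")

  def the_rest():
      print("the rest")

  subroutine()   # first call: setup, then the rest
  subroutine()   # later calls: just the rest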

There are certain patterns of programming that people pick up based on their early experience which can be hard for them to overcome later ("The determined Real Programmer can write FORTRAN programs in any language.").


Yes, but this is also how it works in assembler, and nobody in their right mind would say that knowing assembler makes you a worse programmer. We can all agree that BASIC is a crappy language by modern standards, but it did have a REPL and was interpreted, which gave it a quick turnaround, ideal for experimenting and learning.

Dijkstra on the other hand believed programming students should be forced to prove the correctness of their programs.


You agree that "BASIC is a crappy language by modern standards". Why is Dijkstra not allowed to make that point?

> Dijkstra on the other hand believed programming students should be forced to prove the correctness of their programs.

I did that in my CS homework like many others and I consider being able to reason about the properties of a program/algorithm a core competence of a CS graduate. Why do you think it's a problem?


> You agree that "BASIC is a crappy language by modern standards". Why is Dijkstra not allowed to make that point?

But...that is not the point he is making. He is not saying BASIC is a crappy language - he is saying that programmers who have learnt BASIC are bad programmers, even if they learn other languages and techniques. That is a pretty arrogant and not particularly clever thing to say.


Have you ever tried to teach the right way of doing something to someone who has been doing it the wrong way for a long time and wants to argue that there's no difference?


Sure. I usually approach it by showing why I think the better way is better, rather than just calling them idiots. It has also often happened that I have reconsidered something I had considered "the right way" for many years, due to a convincing counterargument or example.

For example I considered Dijkstra-style single-entry single-exit the right way for years, until I realized that early exits in many cases made functions a lot simpler and more straightforward.


> Dijkstra on the other hand believed programming students should be forced to prove the correctness of their programs.

This is why Dijkstra wrote his newsletter by hand - he couldn't find a word processor that had been proven correct, nor a proven correct OS to run it on.

I'm kidding, but it illustrates the problem with Dijkstra's approach. If not just students but actual working programmers had to prove the correctness of their programs, we'd have much less software - say, 5% as much. But that other 95% does some actually useful things, even though it can still break. Would you rather have a word processor that crashes every once in a while? Or would you prefer a typewriter?


And that core statement being made, that points to the "truth that might hurt", is the difference between satire and humour. You provided one reason for why that EWD should be considered satire (because it criticizes a status quo through hyperbole and exaggeration), while stating at the beginning that you don't consider it satire. That's a form of satire by itself. Smiley.


I am unable to read that presentation as satire. And as somebody that grew up using BASIC, he is much more correct than the overall "consensus" of the time. (Read "consensus" as "successful propaganda".)


It was impossible for you to learn good programming?


It took many years of unlearning. But that was on my own; I can't be sure whether a teaching environment would have made it faster.


Yeah, quite the opposite of "telling things as they are".


One I’ve heard is that at a conference someone asked him “what do you do?” to which he replied “I cast false pearls before real swine.”

I don’t know if it really happened, but it’s clear that a lot of people didn’t like him.


And also clear that the feeling was mutual.


He was rather famous at UT Austin for asking whether the colors on speakers' slides meant anything. I believe it was some form of hazing for prospective faculty. Also, there was some discussion of the difference between a stack data structure and a networking stack.

On the other hand, everyone I personally know who took one of his classes really liked it, and him.


I'm fighting through a book on formal logic at the moment. I never did a computer science degree; is it on the syllabus? It feels like this should have been the first thing I ever studied. I kind of got to it in reverse: data shuffling => everything looks like sql/pandas => everything is relational logic (read about minikanren, datalog, prolog) => lots of things are a surprisingly thin layer over formal logic.

I haven't explored the idea but I read somewhere that type systems are close to unification (rust's typesystem is apparently very close ??).

Interesting to see this article and realise that Dijkstra (God) was interested in logic too.

Has anyone explored the idea that you write your program largely in logic, and that's your spec/typesystem, then you write the performance-relevant parts in a more procedural language, with either generative tests or some kind of proof system showing their equivalence?


Yeah, that's the basic idea of refinement based program verification. An example of that is: https://github.com/seL4/l4v

There are many programs which can be nicely specified with logic, and which could thus be treated this way. That we haven't built up the infrastructure yet to do this cheaply and routinely for those kinds of programs / libraries has various reasons. For example software engineers never doing a computer science degree ;-)


> is [logic] on the syllabus?

It used to be, but these days, most normal universities have de-mathematised their core CS curriculum, and logic is rarely taught in any depth. If you are lucky, it's a few short lectures in some introductory mathematics-for-computer-science Year 1 course.

> this should have been the first thing I ever studied.

In my experience as a CS prof at university, teaching logic at the start of a CS course will baffle 99% of students, and they will not see the point. At the same time, the top 1% love it. Note that as of January 2019, most programming jobs, including top paying FAANG jobs, don't require a substantial grounding in logic.

Note that logic is deeply engrained in human thinking: we intuitively understand the meaning of terms like not, and, or, exists etc. from early childhood on. The Kantian hypothesis here is that logic is part of the very fabric of our perception. So on some level, we don't need to learn logic as adults. What a logic course does is tell us how to formalise logic.

Moreover, by the Curry-Howard correspondence [4], (constructive) logic and programming are essentially the same thing, so one might argue that a computer science degree is basically a long logic course, but using a novel approach to formalising logic.
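As a toy illustration of that correspondence (my own sketch, nothing official): function composition is a program whose type is exactly a proof that implication is transitive, i.e. that (A -> B) and (B -> C) together give (A -> C).

  from typing import Callable, TypeVar

  A = TypeVar("A")
  B = TypeVar("B")
  C = TypeVar("C")

  # Writing a term of this type is a constructive proof of
  # ((A -> B) and (B -> C)) -> (A -> C).
  def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
      return lambda a: g(f(a))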

> type systems are close to unification

Yes, in the sense that type inference does unification. The most famous paper on type inference [1] is explicitly based on Robinson's famous unification algorithm [2]. User "dkarapetyan" in [3] even quipped: "Any sufficiently advanced type system is an ad-hoc re-implementation of Prolog. Once you have unification and backtracking you basically have Prolog and can implement whatever Turing machine you want with the type system." Relatedly, compilers for languages with advanced typing systems are sometimes quite slow (e.g. the old Scala compiler), and the cost of unification plays a big role here.

As an aside, I recommend implementing the core type-inference algorithm in [1] for a toy language. It's simple, only ~100 LoCs in a high-level language, but it will really help you understand typing systems, and hence modern programming languages. Few things amplify your understanding of programming languages so much, for so little effort.
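To give a concrete flavour of the unification underneath it, here is a minimal Robinson-style sketch in Python (a toy encoding of my own: variables are strings, everything else is a (constructor, arguments) pair; not the presentation used in [1] or [2]):

  def unify(t1, t2, subst=None):
      """Return a substitution (dict) making t1 and t2 equal, or None."""
      if subst is None:
          subst = {}
      t1, t2 = walk(t1, subst), walk(t2, subst)
      if t1 == t2:
          return subst
      if isinstance(t1, str):           # t1 is a variable
          return bind(t1, t2, subst)
      if isinstance(t2, str):           # t2 is a variable
          return bind(t2, t1, subst)
      (f, args1), (g, args2) = t1, t2   # both are applications
      if f != g or len(args1) != len(args2):
          return None
      for a, b in zip(args1, args2):
          subst = unify(a, b, subst)
          if subst is None:
              return None
      return subst

  def walk(t, subst):
      while isinstance(t, str) and t in subst:
          t = subst[t]
      return t

  def bind(var, t, subst):
      if occurs(var, t, subst):         # occurs check
          return None
      return {**subst, var: t}

  def occurs(var, t, subst):
      t = walk(t, subst)
      if t == var:
          return True
      if isinstance(t, tuple):
          return any(occurs(var, a, subst) for a in t[1])
      return False

  # unifying a -> b with int -> c binds a to int and b to c
  print(unify(("->", ["a", "b"]), ("->", [("int", []), "c"])))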

> proof system showing their equivalence?

Proving equivalence between programs is not computable (by easy reduction to the halting problem), and infeasibly expensive to do manually for non-trivial programs. Barring unexpected breakthroughs in algorithms (e.g. a constructive proof that P=NP), this is likely not to change in the next decade or two.

[1] L. Damas, R. Milner, Principal type-schemes for functional programs.

[2] J. A. Robinson, A Machine-Oriented Logic Based on the Resolution Principle.

[3] https://news.ycombinator.com/item?id=13843288

[4] https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...


> Proving equivalence between programs is not computable (by easy reduction to the halting problem), and infeasibly expensive to do manually for non-trivial programs. Barring unexpected breakthroughs in algorithms (e.g. a constructive proof that P=NP), this is likely not to change in the next decade or two.

There is an assumption implicit in your statement that the language in which the programs are written admits uncontrolled general recursion. Since you mention the Curry-Howard correspondence, you must be aware of the concept of total functional programming languages, in which every program has a normal form. The halting problem for such languages is trivial, and proving equivalence between programs in such languages is feasible.

Of course, no mainstream programming language is total. However, such languages are not one or two decades away. For example, the Coq proof assistant implements a dependently typed total functional programming language, and is turning 30 this year.

To my mind, the interesting question is whether we can productively use such languages for solving practical problems. This is an active research area. Turner [1] teaches us how to write total programs that run indefinitely by making use of codata. Idris [2] is a dependently typed language that supports codata and is explicitly aimed at practical programming.

[1] D. Turner (2004) “Total functional programming”, https://github.com/mietek/total-functional-programming/blob/...

[2] https://www.idris-lang.org


> The halting problem for such languages is trivial, and proving equivalence between programs in such languages is feasible.

This is a common misconception. Program equivalence for those languages (and even halting) is just as (in)feasible as for general Turing machines; i.e., they do not make deciding any interesting property any easier, despite the fact that they always halt. The reason is that unlike what people erroneously believe, hardness of verification is not due to the "classical" halting theorem most people learn, but due to what is sometimes known as the "bounded halting" theorem, a step in the proof of what is arguably the most important theorem in computer science to date -- the time hierarchy theorem.

According to bounded halting, deciding any non-trivial property can be done in time linear in the number of states of the program, and no faster (so this is a generalization, or strengthening, of the classical halting theorem -- if the number of states is infinite, then the worst case is infinite time, i.e. undecidable). The proof holds unchanged even for total programs. I.e., while the property of halting (in some finite time) becomes trivial, the question of whether the program halts in, say, a trillion years, cannot be decided in less than a trillion years. I.e., deciding properties of programs in total languages is not feasible, even in principle.

To become convinced that totality cannot help verification, consider that every program in any language could be automatically changed to a total program without changing its behavior: you just add counters initialized to, say, 2^1000, to every loop/recursion. Therefore, if totality were of any help, we would be allowed to assume all programs are total, without loss of generality (because, we only care about the correctness of programs until the end of the universe, and not beyond).
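A small sketch in Python (with a made-up example function) just to make that concrete: bolting a huge but finite fuel counter onto a recursion makes it total by construction, yet tells you nothing new about what it does within any realistic time bound.

  FUEL = 2 ** 1000   # astronomically large, but finite

  def collatz_steps(n, fuel=FUEL):
      """Total by construction: fuel strictly decreases on every call."""
      if fuel == 0:
          return None                    # gave up: "out of time"
      if n == 1:
          return 0
      nxt = n // 2 if n % 2 == 0 else 3 * n + 1
      rest = collatz_steps(nxt, fuel - 1)
      return None if rest is None else rest + 1

  print(collatz_steps(27))               # 111 steps to reach 1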

On the other hand, in some particular cases, verification is possible, but for those cases it makes no real difference whether they are in a total language or not. Totality is necessary for the use of the lambda calculus in proof assistants for proving general mathematical properties; without it, the logic is unsound. It does not assist in verifying computer programs.

For more results on the real reasons verification is hard, see my post: https://pron.github.io/posts/correctness-and-complexity


> proving equivalence between programs in such languages is feasible

Not in the sense of worst-case time complexity.

I'm quite aware of Coq, and Idris, and of Turner's work. They can be used productively if (1) the problem has a good and stable specification, and (2) is inherently sequential, and indeed functional, and (3) doesn't need performance. There are quite a few situations where this is the case, but (1) is almost always a problem in industry. See also the agile/waterfall distinction.


> Not in the sense of worst-case time complexity.

Fair enough.


I'm a bit skeptical about logic being engrained in human thinking. Sure, we might understand "or" from childhood, but what about the distinction between inclusive and exclusive or? It seems this is naturally glossed over, and making a clear distinction has to be taught.

I also wonder about nesting and scopes. I'm reminded of a blog post I read [1] where subtleties of generic and existential types in Rust are explained using the scopes of the for-all and there-exists operators in the equivalent logic. On the one hand, this is a helpful clarification. But understanding nested logical operators doesn't itself seem all that intuitive or natural?

[1] https://varkor.github.io/blog/2018/07/03/existential-types-i...


   Intuitive everyday logic does work a bit different
This is true.

Intuitive everyday logic does work a bit differently in most people from propositional and first-order logic, in particular with regard to or and implies. For example, most people's or is the exclusive or you mention. But each logic can explain the other. For example, not, and, and for all are pretty similar in formal and informal logic, and, as you know, those three are a basis for all of (first-order) logic.
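For what it's worth, here is the propositional part of that basis spelled out as a quick sketch (Python, purely for illustration): or, implies and the "everyday" exclusive or all fall out of not and and, checked over every truth assignment.

  def NOT(p): return not p
  def AND(p, q): return p and q

  def OR(p, q):      return NOT(AND(NOT(p), NOT(q)))       # De Morgan
  def IMPLIES(p, q): return NOT(AND(p, NOT(q)))
  def XOR(p, q):     return AND(OR(p, q), NOT(AND(p, q)))  # "everyday" or

  for p in (False, True):
      for q in (False, True):
          assert OR(p, q) == (p or q)
          assert IMPLIES(p, q) == ((not p) or q)
          assert XOR(p, q) == (p != q)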


> Proving equivalence between programs is not computable (by easy reduction to the halting problem)

It's decidedly not computable for Turing machines (because the input/output space is infinitely large), but you can theoretically compute it for any linear bounded automaton.

Certainly not practical for a very large potential input space like the working memory of a PC. But if you can write a program simple enough to verify manually that constrains the memory of another program and ensures determinism, you can treat the constrained program as a small LBA and guarantee equivalence, to the extent that you verified the constraint checker.
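A toy version of that idea in Python (illustration only): once the "machine" is constrained so that its whole input space can be enumerated, equivalence really is checkable by brute force, and what you have to trust by hand shrinks to the checker itself.

  def equivalent(f, g, bits=8):
      """Compare f and g on every input representable in `bits` bits."""
      for x in range(2 ** bits):
          if f(x) != g(x):
              return False, x            # counterexample found
      return True, None

  # two candidate implementations of "x is a power of two", 8-bit inputs
  f = lambda x: x != 0 and (x & (x - 1)) == 0
  g = lambda x: x in {1, 2, 4, 8, 16, 32, 64, 128}

  print(equivalent(f, g))                # (True, None)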


> So on some level, we don't need to learn logic as adults. What a logic course does is tell us how to formalise logic.

Unless it's a course in dialectical logic, or actually any other non-classical logic.


I'm not sure what you mean. All formalisations of non-classical logic operate in the framework of first-order logic, in the sense that the informal meta-language in which the non-classical logics are explained is traditional first-order logic; see e.g. [1].

I don't know what "dialectical logic" is.

[1] Graham Priest, An Introduction to Non-Classical Logic


> All formalisations of non-classical logic operate in the framework of first-order logic, in the sense that the informal meta-language in which the non-classical logics are explained is traditional first-order logic

What gave you this idea? It is frequently useful to work over different base logics to obtain models of non-classical logics. E.g., when you work in categorical logic you usually work over an intuitionistic logic which can then be interpreted in an arbitrary topos. This gives you the ability to internalize any constructions you do "on the outside".

There is a large community of mathematicians fixated on classical first-order logic, but this is just because of tradition. For models of classical first-order set theory it just doesn't make much of a difference. This is not true of "all formalizations of non-classical logic".


> work over an intuitionistic logic

Whoops, yes, you are right.

That said, a lot of the development of category theory that I'm aware of is taking place within set theory. Certainly the two books [1, 2] that I learnt category theory from do. But that was a while back, and it might be the case that this is different now; I have not kept up with this research direction. For example, is there a full, unrestricted formalisation of category theory in HoTT?

[1] J. Adamek, H. Herrlich, G. E. Strecker, Abstract and Concrete Categories: The Joy of Cats.

[2] S. Mac Lane, Categories for the Working Mathematician.


Textbooks are usually supposed to be accessible to a wide audience so it makes sense when discussing foundations to start from a (hopefully) familiar set theory. It's usually a trade-off, since you end up repeating yourself when it comes to "internalized" constructions. "Sketches of an Elephant" is a good example of a textbook that pretty much presents everything twice. Once in an ambient set theory and once internally.

What I meant specifically is work such as the following:

  Internal Universes in Models of Homotopy Type Theory - https://arxiv.org/abs/1801.07664
which explicitly works in an extensional type theory with some axioms to simplify and generalize a complicated model construction.

> For example is there a full, unrestricted formalisation of category theory in HoTT?

You can formalize category theory in HoTT as presented in the book. This has some advantages over a presentation in extensional type theory or set theory (being able to work up-to equivalence) and some disadvantages (universes in categories have to be constructed explicitly, since the ambient universe is not truncated). In my opinion, it's not the case that one is strictly superior - in the end it always depends on what you want to do.


Dialectical logic is a name given to describe the non-formal method apparent in Hegel's work, in which two opposing poles confront each other and progress towards a positive outcome.

https://plato.stanford.edu/entries/hegel-dialectics/


The word logic has seen a marked shift (restriction) since Hegel's times: what Hegel meant by logic is no longer what we mean today by the term.


I'd question whether the people who talk about logic in analytic philosophy departments and mathematics have a monopoly on the usage of an ancient term and concept which has found immense value in non-formal contexts too.


They do not have a monopoly, but they do have a right to a technical usage, which in this case is just short for formal logic.

As far as I know there is no formal equivalent of Hegel's logic. This is not a criticism (my favourite thinker is Jung) but is still a relevant distinction.


Of course, though my point was about the term "logic" in general rather than "formal logic". I am happy to say that Hegel's logic isn't a formal logic, but it's not exactly what we know to be "informal logic" either (i.e arguments and fallacies). Funnily enough, a Boolean opposition between formal and informal logic doesn't seem to include the full range of possible values...


Nobody has a monopoly of meaning.

Ultimately meaning closely tracks use. But if you want to maximise the probability of being understood, then it's wise to use a term in the way that the audience you are talking to is using it. And on a comp-sci centric site like Hacker News, that means to understand "logic" in the sense of formally valid truth-preserving inferences. All the more so since the post I was replying to, and my own clearly were arguing about formal logic.

As an aside, there have been attempts at formalising logical approaches a little more in the tradition of Hegel [1], maybe started in earnest by Paul Lorenzen. But this only got real traction in the 1990s with Girard's linear logic [2] and game semantics of logic [3]. The tradition of game semantics is firmly within the Fregean tradition of logic as formal logic.

[1] https://plato.stanford.edu/entries/logic-dialogical/

[2] https://en.wikipedia.org/wiki/Linear_logic

[3] https://en.wikipedia.org/wiki/Game_semantics


I've made myself understood using "logic" (as in a logic) a few times on Hacker News, for instance "the logic of capital" - nobody objected to this kind of usage. You made a link between understanding and the lack of the need to "learn" logic later in life - which ties in very well with what you might not have expected - Hegel's logic, which aims to deal with the intersection between logic and the movement of thought very closely. Implicitly there was the idea that classical logic represents how we use logic internally, which is a notion also firmly in the camp of continental logic theorists. So it's not relevant that you were talking about formal logic - after all, this site is for interesting things, not formal things.

I also think it's worth asking whether the formalization of Hegel's logic (or one similar to his, even superficially) can preserve the meaning. Hegel was famously against what he saw as the rise of mathematical formalism and its precursor, Greek and Latin terms, in new philosophical texts - because he didn't view our consciousness as being "at home" in such concepts. This is of course entangled with the critique of positivism and the reduction of concepts and scientific knowledge to formal logically linked facts. Already this definition of "logic", which you use to include strictly only Frege and Russell's descendants, shifts the conversation to a world in which the totality of things can be explained by this family of logics - after all, anything outside that would be "illogical". I'm not saying it's a deliberate move.

If the browsers of Hacker News are only able to understand a very limited definition of "logic" then perhaps it is good to broaden their minds.

Thank you for the first link in particular, even though it seems dialogical logic doesn't seem to capture some key Hegelian features, such as the uniqueness of the negation of the negation.


> "the logic of capital"

There is no logic of capital. The context "of capital" disambiguates the phrase and makes it clear that the discussion is about a politics, and has about as much to do with logic as the holy roman empire was holy, roman, or an empire.

> Hacker News are only able to understand a very limited definition of "logic"

Why underestimate HN? Why not ask: how does using a word in a non-standard meaning help people's understanding?

As an aside, there's quite a few working logicians on HN, including two who have published on features of Hegel's conception of (what he called) logic, albeit filtered through Sellars and Brandom's Hegel reading.


Somewhat relevant: William Lawvere, Some Thoughts on the Future of Category Theory (1991). (The main points of the paper are summarized at https://ncatlab.org/nlab/show/Some+Thoughts+on+the+Future+of... )


Beautiful comment. Care to recommend a textbook on formal type systems?


Depends on your background. For students who can program but are not mathematical, I recommend TAPL:

   https://www.cis.upenn.edu/~bcpierce/tapl/
I recommend implementing every single typing system in the book. If you come from a mathematical background, it might be too slow-paced though.

I'm not sure I have a good recommendation for somebody who starts out from a strong maths background.



Yes.

IIRC Software Foundations grew out of a course the author of TAPL has been teaching at U Penn.


I just checked, and my former university seems to still have the same opportunities to learn about logic as it did during the mid-90's.


I'm pretty sure we did propositional and predicate logic in first year - UK CS course in the 1980s. Program proving and synthesis was second-year stuff, of a four-year course.


Back in high school (early '60s), I studied Susanne Langer's Introduction to Symbolic Logic, and the fluency that gave me in Boolean algebra and bit-twiddling had me in good shape for many years.


Can someone explain how Dijkstra's predicate logic differs from mainstream predicate logic? The section that is supposed to explain this goes on to say "If you are interested in boolean term logic, look elsewhere." and then mentions something about functions mirroring predicates, which is something I don't recall having trouble with when reading EWDs.


I'm interested in this, too. I don't know what is meant by "Boolean term logic."

On the other hand, it reminds me of a comment by EWD discussing the meaning of "1 = 2"; according to him, his mother would say it's meaningless ("not well formed?"), while he, or someone, would say it's simply false.


The referenced interview:

An Interview With Edsger W. Dijkstra by Thomas J. Misa (DOI:http://doi.acm.org/10.1145/1787234.1787249)

https://cacm.acm.org/magazines/2010/8/96632-an-interview-wit...


“7.1. Don’t go for homebrew logics. D&S is not the only example; in general computer people seem to have a penchant for whipping up homebrew logics. See E.F. Codd’s Relational Calculus [12], an obvious mess.”

An interesting take, and perhaps technically valid from a purely theoretical perspective; however, the field seems to have found a few practical uses for it: just ask Jim Gray and Mike Stonebraker, among others.

(keywords to search: System R, SQL, RDBMS, INGRES, Postgres)


That line stood out to me too, since relational theory has been such a success. I was surprised that [12] is just Codd's paper "Relational Completeness of Data Base Sublanguages," where he introduces first relational algebra and then relational calculus. I read that paper just last year, and it didn't seem like a mess to me. Perhaps he was thinking of three-valued logic (i.e. NULLs), which came later? I'd like to learn more about how the relational calculus is an obvious mess.
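For readers who haven't met it, the three-valued logic that SQL inherited from NULLs is easy to write down; this is Kleene's strong three-valued logic, sketched here in Python with None standing in for UNKNOWN (it is not something from Codd's paper):

  # truth values: True, False, None (None plays the role of UNKNOWN / NULL)
  def and3(p, q):
      if p is False or q is False:
          return False
      return None if (p is None or q is None) else True

  def or3(p, q):
      if p is True or q is True:
          return True
      return None if (p is None or q is None) else False

  def not3(p):
      return None if p is None else not p

  # why "WHERE x = NULL OR x <> NULL" still matches nothing:
  print(or3(None, not3(None)))   # None, not True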


EWD has his own idea of the "right" ways to program, think, and do math. If your ideas don't fit in with his approach, then your ideas are obviously garbage (to EWD, and he will not be shy about making his view known), no matter how formally correct or useful they are.


Okay, but this is the author's opinion here, not EWD's. It was quite a glib dismissal, and footnoting it seems a bit disingenuous even, hinting to casual readers that there is some substance to his "obvious mess", when really the footnote is just to Codd's own paper. (You find these cunning footnotes everywhere btw.) Did EWD feel the same way about the relational calculus? What are the criticisms? I know plenty of ways SQL doesn't live up to relational theory, but this is about relational theory itself: the paper suggests that there is something unsound about it, and that Codd could have invented something better if he had gotten help from a professional logician.


I find it hard to believe that the author is citing their own opinion here; I assumed the opinion to be Dijkstra's, for two reasons. First, to my mind it sounds like Dijkstra. Second, I would expect the author of such an article to be mostly presenting Dijkstra's ideas rather than their own.


Really? He is saying that Dijkstra's own book Predicate Calculus and Program Semantics ("D&S") suffered because "in general computer people seem to have a penchant for whipping up homebrew logics," and Codd is another example. Anyway if he were giving Dijkstra's opinion that makes the lack of citation even more egregious.

But anyway the "who" is boring. I'm more interested to know what makes relational calculus an "obvious mess"?


Ah, you are correct, and my (GP) post deserves more downvotes than it got.

As to your question: I don't know. I don't really understand the formal definition of all the terms in play. But it sounds like maybe he wants relational calculus to be just a formal logic, whereas it might be something more (an algebra, or even a "calculus", though I don't really understand what makes something a calculus in these areas).


None of the downvotes were mine btw. :-)

He praises Prolog because it was influenced by a logician, so I'm curious what he thinks of Datalog. That is something on my radar to read about, but so far I know nothing about it. But I believe it is something like "Prolog for databases." I know Datomic uses it, but it has a long history.

Personally I think relational theory is one of the greatest success stories in all Computer Science, but that doesn't mean it is problem-free. I'd be happy to hear more about its shortcomings, especially as a formal logic.

Anyway, thanks for the conversation! :-)


Relational theory could be less than problem-free from a formal logic perspective and still be perfectly fine for programming. I think that the article was written, if not from EWD's viewpoint, at least with some of his perspective, that programming should be a formal (and formally verified) process, and any deviation from that is rebuked in scathing terms, no matter how pragmatically useful it is.

I think that viewpoint is quite short-sighted. Things can be formally flawed (by some standard), and still be perfectly pragmatically useful in a wide variety of situations.


Let's see, we have "classical" logic (propositional, 1st order predicate, and higher order), intuitionistic, constructive, and natural logic. There's naive set theory and the various fixes to it. And so on.

I'm not sure what "mainstream logic" refers to, since most mathematicians don't use formal methods.


Just because those things are useful doesn't mean that they wouldn't be more useful/tractable if they'd stuck closer to logic. How do you know that their usefulness isn't in large part due to their closeness to logic, and limited where they deviate, rather than the reverse?



