
A Bridge Too Far: E.W. Dijkstra and Logic - akkartik
https://vanemden.wordpress.com/2018/07/21/dijkstra-and-logic
======
stiff
There are some good lecture videos with Dijkstra online that give a better
introduction to his way of thinking than anything he wrote:

"Reasoning about programs":
[https://www.youtube.com/watch?v=GX3URhx6i2E](https://www.youtube.com/watch?v=GX3URhx6i2E)

"The power of counting arguments":
[https://www.youtube.com/watch?v=0kXjl2e6qD0](https://www.youtube.com/watch?v=0kXjl2e6qD0)

"Structured programming":
[https://www.youtube.com/watch?v=72RA6Dc7rMQ](https://www.youtube.com/watch?v=72RA6Dc7rMQ)

"NU lecture":
[https://www.youtube.com/watch?v=qNCAFcAbSTg](https://www.youtube.com/watch?v=qNCAFcAbSTg)

------
evertheylen
For those interested in Dijkstra, this website has a lot to offer:
[http://www.dijkstrascry.com/](http://www.dijkstrascry.com/)

I also followed some lectures given by the guy who runs that website.
Apparently Dijkstra was not an easy person to get along with; there are
stories of him getting bored during a lecture, walking to the front, and
lying down for a nap. (Sorry I can't provide a source; it's basically a
rumour among academic staff here in Belgium.)

~~~
marcosdumay
There is an old joke on the Internet about measuring arrogance in
nanoDijkstras, but I don't know how much of it reflects real arrogance and
how much is just him telling things as they are.

~~~
goto11
_It is practically impossible to teach good programming to students that have
had a prior exposure to BASIC: as potential programmers they are mentally
mutilated beyond hope of regeneration._ \- Dijkstra

Pretty arrogant gatekeeping. And obviously wrong.

~~~
okl
Taken from EWD 498, which is easily recognizable as satire unless you present
it out of context.

[https://www.cs.utexas.edu/users/EWD/transcriptions/EWD04xx/E...](https://www.cs.utexas.edu/users/EWD/transcriptions/EWD04xx/EWD498.html)

~~~
Jtsummers
I'm not sure it's satire, though it's certainly hyperbolic and exaggerated
(regarding the impact of learning BASIC). It's also important to keep in mind
the sort of language BASIC was at the time and Dijkstra's aversion to goto
statements and unstructured programming. It had some of the elements needed
for structured programming, but they could easily be bypassed. `GOSUB`, for
instance, allowed for calling subroutines and stored a return address that
`RETURN` would use. But `GOSUB` simply jumped to a line number:

    
    
      10 GOSUB 100
      20 END
      100 <DO STUFF>
      ...
      200 RETURN
    

It was entirely possible to jump anywhere inside the subroutine, and commonly
done. For instance, the _first_ time you call it maybe you want to initialize
some variables but then keep them the same later:

    
    
      10 GOSUB 100
      20 GOSUB 130
      30 END
      100 <RUN ONCE CODE>
      110 <MORE RUN ONCE>
      120 <LAST RUN ONCE>
      130 <THE REST>
      200 RETURN
    

The logic behind doing this isn't actually expressed in the code. To an
extent he's right: there's a great deal of unlearning that has to happen
after learning to program this way.
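The structured alternative makes that run-once intent explicit in the code
rather than in the choice of jump target. A rough Python sketch (the names
are made up for illustration):

```python
calls = []  # records what ran, just to make the behaviour visible

_initialized = False

def subroutine():
    global _initialized
    if not _initialized:
        calls.append("setup")   # the run-once lines 100-120 above
        _initialized = True
    calls.append("rest")        # the shared lines 130-200 above

subroutine()  # first call: setup, then the rest
subroutine()  # later calls: the rest only
```

Here the "call at 100 the first time, at 130 afterwards" decision becomes a
flag any reader can see, instead of knowledge hidden in the caller's line
numbers.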

There are certain patterns of programming that people pick up based on their
early experience which can be hard for them to overcome later ("The determined
Real Programmer can write FORTRAN programs in any language.").

~~~
goto11
Yes, but this is also how it works in assembler, and nobody in their right
mind would say knowing assembler makes you a worse programmer. We can all
agree that BASIC is a crappy language by modern standards, but it did have a
REPL and was interpreted, which gave it a quick turnaround, ideal for
experimenting and learning.

Dijkstra on the other hand believed programming students should be forced to
_prove_ the correctness of their programs.

~~~
okl
You agree that "BASIC is a crappy language by modern standard". Why is
Dijkstra not allowed to make that point?

> Dijkstra on the other hand believed programming students should be forced to
> prove the correctness of their programs.

I did that in my CS homework like many others and I consider being able to
reason about the properties of a program/algorithm a core competence of a CS
graduate. Why do you think it's a problem?

~~~
goto11
> You agree that "BASIC is a crappy language by modern standard". Why is
> Dijkstra not allowed to make that point?

But... that is not the point he is making. He is not saying BASIC is a crappy
language; he is saying _programmers_ who have learnt BASIC are _bad
programmers_, even if they go on to learn other languages and techniques.
That is a pretty arrogant and not particularly clever thing to say.

------
zimablue
I'm fighting through a book on formal logic at the moment. I never did a
computer science degree; is logic on the syllabus? It feels like this should
have been the first thing I ever studied. I kind of got to it in reverse:
data shuffling => everything looks like SQL/pandas => everything is
relational logic (read about miniKanren, Datalog, Prolog) => lots of things
are a surprisingly thin layer over formal logic.

I haven't explored the idea, but I read somewhere that type systems are close
to unification (Rust's type system is apparently very close?).

Interesting to see this article and realise that Dijkstra (God) was interested
in logic too.

Has anyone explored the idea that you write your program largely in logic, and
that's your spec/typesystem, then you write the performance-relevant parts in
a more procedural language, with either generative tests or some kind of proof
system showing their equivalence?

~~~
YorkshireSeason
> _is [logic] on the syllabus?_

It used to be, but these days most normal universities have de-mathematised
their core CS curriculum, and logic is rarely taught in any depth. If you are
lucky, it's a few lectures in an introductory Year 1 mathematics-for-
computer-science course.

> _this should have been the first thing I ever studied._

In my experience as a CS prof at university, teaching logic at the start of a
CS course will baffle 99% of students, and they will not see the point. At the
same time, the top 1% love it. Note that as of January 2019, most programming
jobs, including top paying FAANG jobs, don't require a substantial grounding
in logic.

Note that logic is _deeply_ ingrained in human thinking: we intuitively
understand the meaning of terms like _not_, _and_, _or_, _exists_ etc. from
early childhood on. The Kantian hypothesis here is that logic is part of the
very fabric of our perception. So on some level, we don't need to learn logic
as adults. What a logic course does is tell us how to formalise logic.

Moreover, by the Curry-Howard correspondence [4], (constructive) logic and
programming are essentially the same thing, so one might argue that a computer
science degree is basically a long logic course, but using a novel approach to
_formalising_ logic.

> _type systems are close to unification_

Yes, in the sense that type _inference_ does unification. _The_ most famous
paper on type inference [1] is explicitly based on Robinson's famous
unification algorithm [2]. User "dkarapetyan" in [3] even quipped: _"Any
sufficiently advanced type system is an ad-hoc re-implementation of Prolog.
Once you have unification and backtracking you basically have Prolog and can
implement whatever Turing machine you want with the type system."_ Relatedly,
compilers for languages with advanced type systems are sometimes quite slow
(e.g. the old Scala compiler), and the cost of unification plays a big role
here.

As an aside, I recommend implementing the core type-inference algorithm in [1]
for a toy language. It's simple, only ~100 LoCs in a high-level language, but
it will really help you understand typing systems, and hence modern
programming languages. Few things amplify your understanding of programming
languages so much, for so little effort.
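Seconding the recommendation: the heart of [1] is Robinson-style unification
[2], and it is tiny. A sketch in Python (the term representation, `?`-prefixed
strings for variables and tuples for compound terms, is my own choice, not
anything from the papers):

```python
def is_var(t):
    """Variables are strings starting with '?'; everything else is a
    constant or a compound term (a tuple of functor and arguments)."""
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    """Follow variable bindings in the substitution until a non-bound term."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear inside term t?"""
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t)
    return False

def bind(v, t, subst):
    if occurs(v, t, subst):
        return None  # reject circular bindings like ?x = f(?x)
    s = dict(subst)
    s[v] = t
    return s

def unify(x, y, subst=None):
    """Return a substitution making x and y equal, or None on failure."""
    if subst is None:
        subst = {}
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return bind(x, y, subst)
    if is_var(y):
        return bind(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# f(?x, b) unifies with f(a, ?y) under {?x: a, ?y: b}
print(unify(("f", "?x", "b"), ("f", "a", "?y")))
```

Wrap this in a walker over the syntax tree of a toy lambda calculus and you
have essentially the algorithm of [1].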

> _proof system showing their equivalence?_

Proving equivalence between programs is not computable (by easy reduction to
the halting problem), and infeasibly expensive to do manually for non-trivial
programs. Barring unexpected breakthroughs in algorithms (e.g. a constructive
proof that P=NP), this is likely not to change in the next decade or two.
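The reduction is short enough to sketch. Given a hypothetical oracle
`equivalent(p, q)` for zero-input programs (no such function can exist; that
is the point), one could decide halting:

```python
def halts(p, equivalent):
    """Reduce the halting problem to program equivalence.

    `equivalent` is a hypothetical oracle deciding whether two
    zero-argument programs have identical behaviour.
    """
    def run_p_then_zero():
        p()          # if p loops forever, this program never returns
        return 0

    def zero():
        return 0

    # The two programs behave identically exactly when p halts.
    return equivalent(run_p_then_zero, zero)
```

So a decider for equivalence would yield a decider for halting, contradicting
Turing's result.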

[1] L. Damas, R. Milner, Principal type-schemes for functional programs.

[2] J. A. Robinson, A Machine-Oriented Logic Based on the Resolution
Principle.

[3]
[https://news.ycombinator.com/item?id=13843288](https://news.ycombinator.com/item?id=13843288)

[4]
[https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...](https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence)

~~~
mietek
_> Proving equivalence between programs is not computable (by easy reduction
to the halting problem), and infeasibly expensive to do manually for non-
trivial programs. Barring unexpected breakthroughs in algorithms (e.g. a
constructive proof that P=NP), this is likely not to change in the next decade
or two._

There is an assumption implicit in your statement: that the language in which
the programs are written admits uncontrolled general recursion. Since you
mention the Curry-Howard correspondence, you must be aware of the concept of
total functional programming languages, in which every program has a normal
form. The halting problem for such languages is trivial, and proving
equivalence between programs in such languages is feasible.

Of course, no mainstream programming language is total. However, such
languages are not one or two decades away. For example, the Coq proof
assistant implements a dependently typed total functional programming
language, and is turning 30 this year.

To my mind, the interesting question is whether we can productively use such
languages for solving practical problems. This is an active research area.
Turner [1] teaches us how to write total programs that run indefinitely by
making use of codata. Idris [2] is a dependently typed language that supports
codata and is explicitly aimed at practical programming.
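Turner's codata idea has a familiar shadow even in mainstream languages: an
infinite stream is acceptable as long as it is _productive_, i.e. every
element arrives after finitely many steps. A Python sketch (generators are,
of course, not checked for productivity the way Idris checks codata):

```python
from itertools import islice

def nats():
    # A productive infinite stream: the generator never terminates,
    # but each element is produced in finite time.
    n = 0
    while True:
        yield n
        n += 1

print(list(islice(nats(), 5)))  # [0, 1, 2, 3, 4]
```

The totality checker's job is to verify mechanically that every `yield` is
reached in finite time, which Python does not attempt.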

[1] D. Turner (2004) “Total functional programming”,
[https://github.com/mietek/total-functional-
programming/blob/...](https://github.com/mietek/total-functional-
programming/blob/master/doc/pdf/turner-2004.pdf)

[2] [https://www.idris-lang.org](https://www.idris-lang.org)

~~~
YorkshireSeason
> _proving equivalence between programs in such languages is feasible_

Not in the sense of worst-case time complexity.

I'm quite aware of Coq, of Idris, and of Turner's work. They can be used
productively if (1) the problem has a good and stable specification, (2) it
is inherently sequential, and indeed functional, and (3) it doesn't need
performance. There are quite a few situations where this is the case, but (1)
is almost always a problem in industry. See also the agile/waterfall
distinction.

~~~
mietek
_> Not in the sense of worst-case time complexity._

Fair enough.

------
iamcurious
Can someone explain how Dijkstra's predicate logic differs from mainstream
predicate logic? The section that is supposed to explain this goes on to say
"If you are interested in boolean term logic, look elsewhere." and then
mentions something about functions mirroring predicates, which is not
something I recall having trouble with when working through the EWDs.

~~~
mcguire
I'm interested in this, too. I don't know what is meant by "Boolean term
logic."

On the other hand, it reminds me of a comment by EWD discussing the meaning of
"1 = 2"; according to him, his mother would say it's meaningless ("not well
formed?"), while he, or someone, would say it's simply false.

------
boshomi
The referenced interview:

An Interview With Edsger W. Dijkstra by Thomas J. Misa
(DOI:[http://doi.acm.org/10.1145/1787234.1787249](http://doi.acm.org/10.1145/1787234.1787249))

[https://cacm.acm.org/magazines/2010/8/96632-an-interview-
wit...](https://cacm.acm.org/magazines/2010/8/96632-an-interview-with-edsger-
w-dijkstra/fulltext)

------
amacbride
“7.1. Don’t go for homebrew logics. D&S is not the only example; in general
computer people seem to have a penchant for whipping up homebrew logics. See
E.F. Codd’s Relational Calculus [12], an obvious mess.”

An interesting perspective, and perhaps technically valid from a purely
theoretical perspective; however, the field seems to have found a _few_
practical uses for it: just ask Jim Gray and Mike Stonebraker, among others.

(keywords to search: System R, SQL, RDBMS, INGRES, Postgres)

~~~
pjungwir
That line stood out to me too, since relational theory has been such a
success. I was surprised that [12] is just Codd's paper "Relational
Completeness of Data Base Sublanguages," where he introduces first relational
algebra and then relational calculus. I read that paper just last year, and it
didn't seem like a mess to me. Perhaps he was thinking of three-valued logic
(i.e. NULLs), which came later? I'd like to learn more about how the
relational calculus is an obvious mess.

~~~
AnimalMuppet
EWD has his own idea of the "right" ways to program, think, and do math. If
your ideas don't fit in with his approach, then your ideas are obviously
garbage (to EWD, and he will not be shy about making his view known), no
matter how formally correct or useful they are.

~~~
pjungwir
Okay, but this is the author's opinion here, not EWD's. It was quite a glib
dismissal, and even footnoting it seems a bit disingenuous, hinting to casual
readers that there is some substance to his "obvious mess", when really the
footnote is just to Codd's own paper. (You find these cunning footnotes
_everywhere_, btw.) Did EWD feel the same way about the relational calculus?
What are the criticisms? I know plenty of ways SQL doesn't live up to
relational theory, but this is about relational theory itself: the paper
suggests that there is something unsound about it, and that Codd could have
invented something better if he had gotten help from a professional logician.

~~~
AnimalMuppet
I find it hard to believe that the author is citing their own opinion here; I
assumed the opinion to be Dijkstra's, for two reasons. First, to my mind it
sounds like Dijkstra. Second, I would expect the author of such an article to
be mostly presenting Dijkstra's ideas rather than their own.

~~~
pjungwir
Really? He is saying that Dijkstra's own book _Predicate Calculus and Program
Semantics_ ("D&S") suffered because "in general computer people seem to have
a penchant for whipping up homebrew logics," and that Codd is another
example. Anyway, if he _were_ giving Dijkstra's opinion, that would make the
lack of citation even more egregious.

But anyway, the "who" is boring. I'm more interested to know what makes
relational calculus an "obvious mess".

~~~
AnimalMuppet
Ah, you are correct, and my (GP) post deserves more downvotes than it got.

As to your question: I don't know. I don't really understand the formal
definition of all the terms in play. But it sounds like maybe he wants
relational calculus to be _just_ a formal logic, whereas it might be something
more (an algebra, or even a "calculus", though I don't really understand what
makes something a calculus in these areas).

~~~
pjungwir
None of the downvotes were mine btw. :-)

He praises Prolog because it was influenced by a logician, so I'm curious
what he thinks of Datalog. That is something on my radar to read about, but
so far I know nothing beyond the impression that it is something like "Prolog
for databases." I know Datomic uses it, but it has a long history.

Personally I think relational theory is one of the greatest success stories in
all Computer Science, but that doesn't mean it is problem-free. I'd be happy
to hear more about its shortcomings, especially as a formal logic.

Anyway, thanks for the conversation! :-)

~~~
AnimalMuppet
Relational theory could be less than problem-free from a formal-logic
perspective and still be perfectly fine for programming. I think the article
was written, if not from EWD's viewpoint, at least with some of his
perspective: that programming should be a formal (and formally verified)
process, and that any deviation from that deserves to be rebuked in scathing
terms, no matter how pragmatically useful it is.

I think that viewpoint is quite short-sighted. Things can be formally flawed
(by some standard), and still be perfectly pragmatically useful in a wide
variety of situations.

