"Reasoning about programs":
"The power of counting arguments":
I also followed some lectures given by the guy that runs that website. Apparently Dijkstra was not an easy person to get along with: there are stories of him being bored during some lecture, walking to the front, and lying down for a nap. (Sorry I can't provide a source, it's basically a rumour among academic staff here in Belgium.)
> My own experience with lecturing before Edsger took place in Marktoberdorf. It rests on the fact that in some languages (notably Fortran), the equality symbol and the assignment symbol are the same, and many people say "x equals e" when they mean "store the value of e in x," or "x becomes e."
> I was lecturing along, when I said "x equals e" meaning an assignment of e to x. From the back of the room came a loud "becomes," and then a stunned silence. Finally, I gathered my wits and said, "Thank you, Edsger, for correcting me. If I make the same mistake again, stop me." Twenty minutes later, I made the same mistake, and again from the back of the room came "becomes." "Thanks, I won't make the mistake again," I said, and to this day I haven't!
Needless to say, I nearly swooned when he told me that. I couldn't help pumping him for details, none of which I feel at liberty to pass on to you, except that he thought Dijkstra was eccentric but kind.
The subtitles at around 4:05 are out of sync. The quote reads
> Je moet de wereld niet geven waarom ze vraagt, maar wat ze nodig heeft.
> You must not give the world what it asks, but what it needs.
I have read them all and recommend them. The one with Michael Jackson is particularly interesting for its dialectical qualities.
Pretty arrogant gatekeeping. And obviously wrong.
10 GOSUB 100
20 END
100 <DO STUFF>
110 RETURN
10 GOSUB 100
20 GOSUB 130
30 END
100 <RUN ONCE CODE>
110 <MORE RUN ONCE>
120 <LAST RUN ONCE>
130 <THE REST>
140 RETURN
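In a structured language the GOSUB-into-the-middle trick above becomes two functions, with the run-once part falling through into a shared tail. A rough Python sketch (names and the `log` list are mine, purely for illustration):

```python
def run_once(log):
    # Corresponds to lines 100-120: initialisation that should happen once,
    # which then falls through into the shared tail.
    log.append("init")
    the_rest(log)

def the_rest(log):
    # Corresponds to line 130: the shared tail, also callable on its own.
    log.append("work")

log = []
run_once(log)   # like GOSUB 100: init, then the shared tail
the_rest(log)   # like GOSUB 130: skip straight to the shared tail
# log is now ["init", "work", "work"]
```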
There are certain patterns of programming that people pick up based on their early experience which can be hard for them to overcome later ("The determined Real Programmer can write FORTRAN programs in any language.").
Dijkstra on the other hand believed programming students should be forced to prove the correctness of their programs.
> Dijkstra on the other hand believed programming students should be forced to prove the correctness of their programs.
I did that in my CS homework like many others and I consider being able to reason about the properties of a program/algorithm a core competence of a CS graduate. Why do you think it's a problem?
But... that is not the point he is making. He is not saying BASIC is a crappy language - he is saying programmers who have learnt BASIC are bad programmers, even if they later learn other languages and techniques. That is a pretty arrogant and not particularly clever thing to say.
For example I considered Dijkstra-style single-entry single-exit the right way for years, until I realized that early exits in many cases made functions a lot simpler and more straightforward.
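The contrast can be made concrete with a contrived Python example (the shipping-cost logic is invented for illustration):

```python
def shipping_cost_single_exit(order):
    # Single-entry single-exit style: one return, nested conditions.
    cost = None
    if order is not None:
        if order["items"]:
            if order["express"]:
                cost = 15
            else:
                cost = 5
    return cost

def shipping_cost_early_exit(order):
    # Guard-clause style: bail out early, so the happy path reads flat.
    if order is None:
        return None
    if not order["items"]:
        return None
    return 15 if order["express"] else 5
```

Both compute the same thing; the second just spends less of the reader's attention on bookkeeping.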
This is why Dijkstra wrote his newsletter by hand - he couldn't find a word processor that had been proven correct, nor a proven correct OS to run it on.
I'm kidding, but it illustrates the problem with Dijkstra's approach. If not just students but actual working programmers had to prove the correctness of their programs, we'd have much less software - say, 5% as much. But that other 95% does some actually useful things, even though it can still break. Would you rather have a word processor that crashes every once in a while? Or would you prefer a typewriter?
I don’t know if it really happened, but it’s clear that a lot of people didn’t like him.
On the other hand, everyone who I personally know, who took one of his classes, really liked it and him.
I haven't explored the idea, but I read somewhere that type systems are closely related to unification (Rust's type system is apparently very close?).
Interesting to see this article and realise that Dijkstra (God) was interested in logic too.
Has anyone explored the idea that you write your program largely in logic, and that's your spec/typesystem, then you write the performance-relevant parts in a more procedural language, with either generative tests or some kind of proof system showing their equivalence?
There are many programs which can be nicely specified with logic, and which could thus be treated this way. That we haven't built up the infrastructure yet to do this cheaply and routinely for those kinds of programs/libraries has various reasons. For example, software engineers never doing a computer science degree ;-)
It used to be, but these days most normal universities have de-mathematised their core CS curriculum, and logic is rarely taught in any depth. If you are lucky, it's a few short lectures in some introductory mathematics-for-computer-science Year 1 course.
> this should have been the first thing I ever studied.
In my experience as a CS prof at university, teaching logic at the start of a CS course will baffle 99% of students, and they will not see the point. At the same time, the top 1% love it. Note that as of January 2019, most programming jobs, including top paying FAANG jobs, don't require a substantial grounding in logic.
Note that logic is deeply ingrained in human thinking: we intuitively understand the meaning of terms like "not", "and", "or", "exists" etc. from early childhood on. The Kantian hypothesis here is that logic is part of the very fabric of our perception. So on some level, we don't need to learn logic as adults. What a logic course does is tell us how to formalise logic.
Moreover, by the Curry-Howard correspondence, (constructive) logic and programming are essentially the same thing, so one might argue that a computer science degree is basically a long logic course, but using a novel approach to formalising logic.
> type systems are close to unification
Yes, in the sense that type inference does unification. The most famous paper on type inference [1] is explicitly based on Robinson's famous unification algorithm [2]. User "dkarapetyan" once quipped:
> Any sufficiently advanced type system is an ad-hoc re-implementation of Prolog. Once you have unification and backtracking you basically have Prolog and can implement whatever Turing machine you want with the type system.
Relatedly, compilers for languages with advanced typing systems are sometimes quite slow (e.g. the old Scala compiler), and the cost of unification plays a big role here.
As an aside, I recommend implementing the core type-inference algorithm in [1] for a toy language. It's simple, only ~100 LoC in a high-level language, but it will really help you understand typing systems, and hence modern programming languages. Few things amplify your understanding of programming languages so much, for so little effort.
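To give a flavour of the core, here is a minimal sketch of Robinson-style syntactic unification, the engine inside Damas-Milner inference. The representation is my own choice: variables are bare strings, and terms are tuples of the form ("constructor", arg1, ...).

```python
def walk(t, subst):
    # Chase variable bindings until we reach a non-variable or an unbound one.
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # Occurs check: does variable v appear inside term t?
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t[1:])
    return False

def unify(a, b, subst=None):
    # Return a substitution making a and b equal, or None on failure.
    subst = {} if subst is None else subst
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str):                      # a is an unbound variable
        return None if occurs(a, b, subst) else {**subst, a: b}
    if isinstance(b, str):
        return unify(b, a, subst)
    if a[0] == b[0] and len(a) == len(b):       # same constructor, same arity
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None
```

For example, unifying the function types `a -> int` and `bool -> b` yields the substitution `{a: bool, b: int}`; unifying `int` with `bool` fails; and the occurs check rejects `x = x -> int`.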
> proof system showing their equivalence?
Proving equivalence between programs is not computable (by an easy reduction to the halting problem), and infeasibly expensive to do manually for non-trivial programs. Barring unexpected breakthroughs in algorithms (e.g. a constructive proof that P = NP), this is unlikely to change in the next decade or two.
[1] L. Damas, R. Milner, Principal type-schemes for functional programs.
[2] J. A. Robinson, A Machine-Oriented Logic Based on the Resolution Principle.
There is an assumption implicit in your statement: that the language in which the programs are written admits uncontrolled general recursion. Since you mention the Curry-Howard correspondence, you must be aware of the concept of total functional programming languages, in which every program has a normal form. The halting problem for such languages is trivial, and proving equivalence between programs in such languages is feasible.
Of course, no mainstream programming language is total. However, such languages are not one or two decades away. For example, the Coq proof assistant implements a dependently typed total functional programming language, and is turning 30 this year.
To my mind, the interesting question is whether we can productively use such languages for solving practical problems. This is an active research area. Turner [1] teaches us how to write total programs that run indefinitely by making use of codata. Idris is a dependently typed language that supports codata and is explicitly aimed at practical programming.
[1] D. Turner (2004), "Total functional programming", https://github.com/mietek/total-functional-programming/blob/...
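Python generators are only a loose analogy for Turner's codata, but they capture the productivity idea: each step of the stream does a finite amount of work, even though the stream as a whole never terminates.

```python
def nats():
    # A productive stream of all naturals: never ends, but every step is finite.
    n = 0
    while True:
        yield n
        n += 1

def take(k, stream):
    # Observing any finite prefix of the codata is a terminating computation.
    return [next(stream) for _ in range(k)]

take(5, nats())  # → [0, 1, 2, 3, 4]
```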
This is a common misconception. Program equivalence for those languages (and even halting) is just as (in)feasible as for general Turing machines; i.e., they do not make deciding any interesting property any easier, despite the fact that they always halt. The reason is that unlike what people erroneously believe, hardness of verification is not due to the "classical" halting theorem most people learn, but due to what is sometimes known as the "bounded halting" theorem, a step in the proof of what is arguably the most important theorem in computer science to date -- the time hierarchy theorem.
According to bounded halting, deciding any non-trivial property can be done in time linear in the number of states of the program, and no faster (so this is a generalization, or strengthening, of the classical halting theorem -- if the number of states is infinite, then the worst case is infinite time, i.e. undecidable). The proof holds unchanged even for total programs. I.e., while the property of halting (in some finite time) becomes trivial, the question of whether the program halts in, say, a trillion years, cannot be decided in less than a trillion years. I.e., deciding properties of programs in total languages is not feasible, even in principle.
To become convinced that totality cannot help verification, consider that every program in any language could be automatically changed to a total program without changing its behavior: you just add counters initialized to, say, 2^1000, to every loop/recursion. Therefore, if totality were of any help, we would be allowed to assume all programs are total, without loss of generality (because, we only care about the correctness of programs until the end of the universe, and not beyond).
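The counter construction can be sketched in a few lines of Python. Here the loop is the Collatz iteration, whose termination is famously unknown; the fuel counter stands in for the 2^1000 above and makes the function trivially total without changing its behaviour within the bound.

```python
FUEL = 10**6  # stands in for the 2**1000 in the argument above

def collatz_steps(n, fuel=FUEL):
    # Bounded version of a loop with unknown termination: the fuel counter
    # guarantees termination; within the bound, behaviour is unchanged.
    steps = 0
    while n != 1:
        if fuel == 0:
            return None  # fuel exhausted; the unbounded loop might not halt
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
        fuel -= 1
    return steps
```

`collatz_steps(6)` returns 8; with `fuel=2` it gives up and returns `None`, exactly as a total language would force you to account for.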
On the other hand, in some particular cases, verification is possible, but for those cases it makes no real difference whether they are in a total language or not. Totality is necessary for the use of the lambda calculus in proof assistants for proving general mathematical properties; without it, the logic is unsound. It does not assist in verifying computer programs.
For more results on the real reasons verification is hard, see my post: https://pron.github.io/posts/correctness-and-complexity
Not in the sense of worst-case time complexity.
I'm quite aware of Coq, and Idris, and of Turner's work. They can be used productively if (1) the problem has a good and stable specification, (2) is inherently sequential, and indeed functional, and (3) doesn't need performance. There are quite a few situations where this is the case, but (1) is almost always a problem in industry. See also the agile/waterfall distinction.
I also wonder about nesting and scopes. I'm reminded of a blog post I read where subtleties of generic and existential types in Rust are explained using the scopes of the for-all and there-exists operators in the equivalent logic. On the one hand, this is a helpful clarification. On the other, understanding nested logical operators doesn't itself seem all that intuitive or natural.
Intuitive everyday logic works a bit differently in most people from propositional and first-order logic, in particular with regard to "or" and "implies". For example, most people's "or" is the exclusive or you mention. But each logic can explain the other. For example, "not" and "and", as well as "for all", are pretty similar in formal and informal logic, and, as you know, those three are a basis for all of (first-order) logic.
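The two readings of "or" can be compared mechanically; they disagree in exactly one row of the truth table, when both disjuncts hold:

```python
# (p, q, inclusive or, exclusive or) for every truth-value combination.
rows = [(p, q, p or q, p != q)
        for p in (False, True) for q in (False, True)]

# The readings only disagree when both disjuncts are true.
disagreements = [(p, q) for (p, q, inc, exc) in rows if inc != exc]
# disagreements == [(True, True)]
```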
It's decidedly not computable for Turing machines (because the input/output space is infinite), but you can in principle compute it for any linear bounded automaton.
Certainly not practical for a very large input space like the working memory of a PC. But if you can write a trivial enough program that constrains the memory of another and ensures determinism, and that checker is itself cheap to verify manually, you can treat the programs as small LBAs and guarantee their equivalence to the extent that you verified the constraint checker.
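Equivalence by exhaustion looks like this in miniature; it is feasible exactly when the whole input space can be enumerated (the LBA situation), and hopeless for a PC-sized memory. The two "double modulo 256" implementations are made up for the example:

```python
def equivalent_on(domain, f, g):
    # Brute-force equivalence check over a finite, enumerable input space.
    return all(f(x) == g(x) for x in domain)

# Two implementations of "double, modulo 256", checked over all 8-bit inputs:
double_a = lambda x: (x * 2) % 256
double_b = lambda x: (x << 1) & 0xFF
equivalent_on(range(256), double_a, double_b)  # → True
```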
Unless it's a course in dialectical logic, or actually any other non-classical logic.
I don't know what "dialectical logic" is.
 Graham Priest, An Introduction to Non-Classical Logic
What gave you this idea? It is frequently useful to work over different base logics to obtain models of non-classical logics. E.g., when you work in categorical logic you usually work over an intuitionistic logic which can then be interpreted in an arbitrary topos. This gives you the ability to internalize any constructions you do "on the outside".
There is a large community of mathematicians fixated on classical first-order logic, but this is just because of tradition. For models of classical first-order set theory it just doesn't make much of a difference. This is not true of "all formalizations of non-classical logic".
Whoops, yes, you are right.
That said, a lot of the development of category theory that I'm aware of takes place within set theory. Certainly the two books [1, 2] that I learnt category theory from do. But that was a while back, and it might be the case that this is different now; I have not kept up with this research direction. For example, is there a full, unrestricted formalisation of category theory in HoTT?
 J. Adamek, H. Herrlich, G. E. Strecker, Abstract and Concrete Categories: The Joy of Cats.
 S. Mac Lane, Categories for the Working Mathematician.
What I meant specifically is work such as the following:
Internal Universes in Models of Homotopy Type Theory - https://arxiv.org/abs/1801.07664
> For example is there a full, unrestricted formalisation of category theory in HoTT?
You can formalize category theory in HoTT as presented in the book. This has some advantages over a presentation in extensional type theory or set theory (being able to work up-to equivalence) and some disadvantages (universes in categories have to be constructed explicitly, since the ambient universe is not truncated). In my opinion, it's not the case that one is strictly superior - in the end it always depends on what you want to do.
As far as I know there is no formal equivalent of Hegel's logic. This is not a criticism (my favourite thinker is Jung) but is still a relevant distinction.
Ultimately meaning closely tracks use. But if you want to maximise the probability of being understood, then it's wise to use a term in the way that the audience you are talking to is using it. And on a comp-sci centric site like Hacker News, that means to understand "logic" in the sense of formally valid truth-preserving inferences. All the more so since the post I was replying to, and my own clearly were arguing about formal logic.
As an aside, there have been attempts at formalising logical approaches a little more in the tradition of Hegel, maybe started in earnest by Paul Lorenzen. But this only got real traction in the 1990s with Girard's linear logic and the game semantics of logic. The tradition of game semantics is firmly within the Fregean tradition of logic as formal logic.
I also think it's worth asking whether the formalization of Hegel's logic (or one even superficially similar to his) can preserve its meaning. Hegel was famously against what he saw as the rise of mathematical formalism, and of its precursor, Greek and Latin terms, in new philosophical texts - because he didn't view our consciousness as being "at home" in such concepts. This is of course entangled with the critique of positivism and the reduction of concepts and scientific knowledge to formal logically linked facts. Already this definition of "logic", which you use to strictly only include Frege and Russell's descendants, shifts the conversation to a world in which the totality of things can be explained by this family of logics - after all, anything outside it would be "illogical". I'm not saying it's a deliberate move.
If the browsers of Hacker News are only able to understand a very limited definition of "logic" then perhaps it is good to broaden their minds.
Thank you for the first link in particular, even though it seems dialogical logic doesn't seem to capture some key Hegelian features, such as the uniqueness of the negation of the negation.
There is no logic of capital. The context "of capital" disambiguates the phrase and makes it clear that the discussion is about politics, and has about as much to do with logic as the Holy Roman Empire was holy, Roman, or an empire.
> Hacker News are only able to understand a very limited definition of "logic"
Why underestimate HN? Why not ask: how is using a word in non-standard meaning helping people's understanding?
As an aside, there are quite a few working logicians on HN, including two who have published on features of Hegel's conception of (what he called) logic, albeit filtered through Sellars and Brandom's Hegel.
I'm not sure I have a good recommendation for somebody who starts out from a strong maths background.
IIRC, Software Foundations grew out of a course the author of TAPL has been teaching at U Penn.
On the other hand, it reminds me of a comment by EWD discussing the meaning of "1 = 2"; according to him, his mother would say it's meaningless ("not well formed?"), while he, or someone, would say it's simply false.
An Interview With Edsger W. Dijkstra by Thomas J. Misa (DOI:http://doi.acm.org/10.1145/1787234.1787249)
An interesting perspective, and perhaps technically valid from a purely theoretical perspective; however, the field seems to have found a few practical uses for it: just ask Jim Gray and Mike Stonebraker, among others.
(keywords to search: System R, SQL, RDBMS, INGRES, Postgres)
But anyway the "who" is boring. I'm more interested to know what makes relational calculus an "obvious mess"?
As to your question: I don't know. I don't really understand the formal definition of all the terms in play. But it sounds like maybe he wants relational calculus to be just a formal logic, whereas it might be something more (an algebra, or even a "calculus", though I don't really understand what makes something a calculus in these areas).
He praises Prolog because it was influenced by a logician, so I'm curious what he thinks of Datalog. That is something on my radar to read about, but so far I know nothing about it. But I believe it is something like "Prolog for databases." I know Datomic uses it, but it has a long history.
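For a taste of what "Prolog for databases" means, here is a naive bottom-up evaluation of the classic ancestor rules; the facts are invented for illustration. Unlike Prolog, Datalog evaluation always reaches a fixed point:

```python
# Rules:
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
parent = {("alice", "bob"), ("bob", "carol")}

ancestor = set(parent)
changed = True
while changed:  # iterate to a fixed point; guaranteed to terminate
    new = {(x, z)
           for (x, y) in parent
           for (y2, z) in ancestor if y == y2}
    changed = not new <= ancestor
    ancestor |= new

# ancestor now additionally contains ("alice", "carol").
```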
Personally I think relational theory is one of the greatest success stories in all Computer Science, but that doesn't mean it is problem-free. I'd be happy to hear more about its shortcomings, especially as a formal logic.
Anyway, thanks for the conversation! :-)
I think that viewpoint is quite short-sighted. Things can be formally flawed (by some standard), and still be perfectly pragmatically useful in a wide variety of situations.
I'm not sure what "mainstream logic" refers to, since most mathematicians don't use formal methods.