It's great because of how accessible it makes compilers to average programmers, and, as a bonus, it also throws shade at the overly dense compiler textbooks:
"Classic compiler books read like fawning hagiographies of these heroes and their tools. The cover of Compilers: Principles, Techniques, and Tools literally has a dragon labeled “complexity of compiler design” being slain by a knight bearing a sword and shield branded “LALR parser generator” and “syntax directed translation”. They laid it on thick." (chapter 6)
Having read Compilers for a compilers course in college, I found Crafting Interpreters a fun read on many levels.
I don't think that is in the spirit of this list at all, no matter how good it may be. This is about seminal papers advancing the state of the art in the theory of programming languages.
"Notes on Structured Concurrency, or: Go Statement Considered Harmful" [1] by Nathaniel J. Smith.
I believe that structured concurrency will (slowly) cause a massive paradigm shift with regard to concurrency, in the same way that structured programming [2] did, and just like "Go To Considered Harmful" kicked off structured programming, "Go Statement Considered Harmful" will be looked to as the work that kicked off structured concurrency. (It's also fun that they have similar titles!)
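To make the paradigm concrete, here is a minimal sketch of the structured style, using Rust's std::thread::scope as one mainstream embodiment (my example, not from the paper): every task spawned inside the scope is guaranteed to finish before the scope exits, so concurrency nests lexically the way blocks nest in structured code.

```rust
use std::thread;

fn main() {
    let data = vec![1, 2, 3, 4, 5, 6];
    let (left, right) = data.split_at(data.len() / 2);

    // Every thread spawned inside the scope is joined before `scope`
    // returns: the tasks' lifetimes nest inside the caller's, the way
    // a block nests inside a function.
    let (a, b) = thread::scope(|s| {
        let h1 = s.spawn(|| left.iter().sum::<i32>());
        let h2 = s.spawn(|| right.iter().sum::<i32>());
        (h1.join().unwrap(), h2.join().unwrap())
    });

    assert_eq!(a + b, 21);
    println!("sum = {}", a + b);
}
```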
In fact, I think that someday, someone will prove a "Structured Concurrency Theorem" that is to concurrency what the Structured Program Theorem [3] is to programming in general; I believe the Structured Concurrency Theorem will say something like:
> Any concurrent program can be rewritten to an equivalent program that uses structured concurrency.
I could be wrong, but I would bet heavily on this. In fact, I am betting heavily on it by making a language based on structured concurrency that would compete with Rust. I'm so confident in it that I would bet that Rust either adopts structured concurrency or will be replaced by languages that do. This is despite Rust's current popularity.
Also, I've been exploring ways to prove the Structured Concurrency Theorem. I'd love to be the one to prove it.
Being able to write the same program in both paradigms, perhaps even with the same parallel speedup, isn't that difficult (as soon as an appropriately rigorous definition is available, a proof probably won't take long). The difficulty is in getting the same performance from structured concurrency as from non-structured -- a difficulty that plagues modern languages with respect to goto as well.
I agree with everything you have said, but I think it won't matter, for three reasons:
1) The switch to structured programming still happened, even with the performance hit. The advantages were so large that the performance hit didn't matter. I think the same will happen with structured concurrency because I think the advantages are also that much larger.
2) The performance of structured concurrency can be improved with many techniques, including thread pools (to reuse threads; see the sketch after this list).
3) If structured concurrency starts to take off (and I argue that it is), then system software will be optimized for that case. For example, Linux's clone() syscall might be optimized for the "create thread" use case. I think other OSes would do the same.
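To illustrate reason 2, here is a hand-rolled sketch of the thread-pool idea (the Pool/execute names are mine, not from any particular library): worker threads are created once and reused, so each task avoids the cost of a fresh spawn.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// A job is any one-shot closure that can be sent to another thread.
type Job = Box<dyn FnOnce() + Send + 'static>;

struct Pool {
    tx: mpsc::Sender<Job>,
}

impl Pool {
    fn new(workers: usize) -> Pool {
        let (tx, rx) = mpsc::channel::<Job>();
        let rx = Arc::new(Mutex::new(rx));
        for _ in 0..workers {
            let rx = Arc::clone(&rx);
            // Each worker is spawned once and then loops, pulling jobs.
            thread::spawn(move || loop {
                // Take one job; the lock is released before running it.
                let job = match rx.lock().unwrap().recv() {
                    Ok(job) => job,
                    Err(_) => break, // channel closed: the pool was dropped
                };
                job();
            });
        }
        Pool { tx }
    }

    fn execute<F: FnOnce() + Send + 'static>(&self, f: F) {
        self.tx.send(Box::new(f)).unwrap();
    }
}

fn main() {
    let pool = Pool::new(4);
    for i in 0..8 {
        pool.execute(move || println!("task {i} ran on a reused thread"));
    }
    // Crude drain for the demo; a real pool would join its workers on Drop.
    thread::sleep(std::time::Duration::from_millis(100));
}
```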
I’d say there is a large-ish list that could be written using the criteria Pierce used (i.e., amount of influence as determined by citations). However, the field has taken a much more compartmentalized path in the years since this list was written, so a 2022 version of the list would look far more fragmented. This list was compiled two years after Pierce published ‘Types and Programming Languages’, and almost the whole list is cited somewhere in that work.
The weird part about programming is that it's possible to never attend a course, read a book (or especially an academic paper), and still be a very productive programmer.
Inasmuch as programming is a means of expressing your thoughts to solve a problem precisely, your "productive programmer" would be quite limited in what he can actually do, due to a lack of knowledge of what has already been systematized before. If you don't know something already exists, you quite obviously cannot use it where it is most needed.
Yeah, I think some a priori knowledge of data structures, algorithms, big O, pass by reference, pass by value, etc. is going to be of strong benefit. I was once working on a legacy Java code base where the prior programmer employed an ArrayList for all of his data structure needs when other data structures would have been much more suitable. That was painful. That said, the long list of academic references originally posted may be of little practical value for most software developers.
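As a toy illustration of that anecdote's cost (in Rust rather than Java, since the point is language-agnostic): membership tests on a growable array scan the whole thing, O(n), while a hash set chosen for the access pattern is O(1) on average.

```rust
use std::collections::HashSet;

fn main() {
    let ids: Vec<u32> = (0..100_000).collect();

    // ArrayList-style: each lookup scans the vector -- O(n).
    let slow = ids.contains(&99_999);

    // A structure chosen for the access pattern -- O(1) average lookup.
    let set: HashSet<u32> = ids.iter().copied().collect();
    let fast = set.contains(&99_999);

    assert!(slow && fast);
}
```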
> That said, the long list of academic references originally posted may be of little practical value for most software developers.
In many cases because what they contain has been integrated (in whole or in part) into the normal order of business.
Some of the big threads:
Structured programming. Even C and C++ constrain their goto statements to the same function. Most other languages have similar constraints or limit them to things like named loops/breaks. So you'll learn structured programming just by learning anything but assembly or old-school BASIC and FORTRAN (where there is some structured programming, but also a lot of unstructured programming).
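For example, the named-loop escape hatch looks like this in Rust; a labeled break can only jump out of an enclosing loop, so the control flow stays structured:

```rust
fn main() {
    let grid = [[1, 2, 3], [4, 5, 6], [7, 8, 9]];
    let target = 5;
    let mut found = None;

    // `break 'search` can only exit the labeled enclosing loop --
    // a constrained, structured replacement for goto.
    'search: for (r, row) in grid.iter().enumerate() {
        for (c, &cell) in row.iter().enumerate() {
            if cell == target {
                found = Some((r, c));
                break 'search;
            }
        }
    }

    assert_eq!(found, Some((1, 1)));
}
```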
Concurrency: Milner's and Hoare's works have influenced many modern languages. CSP influenced, or formed the basis of, many modern concurrency systems. Go and Erlang/Elixir (anything on BEAM) are fairly mainstream languages that draw on it at least in part (though reading the original work gives a purer understanding of it, because implementations aren't pure implementations of it, for whatever reasons).
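A minimal sketch of the CSP flavor those languages inherit, shown here with Rust's std::sync::mpsc channels (Go's goroutines and BEAM's mailboxes are the mainstream embodiments): independent workers share nothing and communicate only by message passing.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // A separate "process" that communicates only over the channel,
    // never through shared mutable state -- the core CSP discipline.
    let producer = thread::spawn(move || {
        for i in 0..5 {
            tx.send(i * i).unwrap();
        }
        // `tx` is dropped here, which closes the channel.
    });

    // The receive loop ends when the sender side is closed.
    for msg in rx {
        println!("received {msg}");
    }
    producer.join().unwrap();
}
```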
Even much of the type systems stuff. Type inference is used by many modern languages, and generics/polymorphism is used by most modern statically typed languages. Even Go finally got on the generics train.
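Both of those show up in a few lines of any modern statically typed language; e.g., in the Rust sketch below, none of the bindings carry type annotations (inference), and one generic definition is reused at two different types (parametric polymorphism):

```rust
// Parametric polymorphism: one definition, type-checked once,
// reused at many element types.
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut max = items[0];
    for &item in &items[1..] {
        if item > max {
            max = item;
        }
    }
    max
}

fn main() {
    // Type inference: no annotations needed on these bindings.
    let ints = vec![3, 7, 2];
    let words = vec!["pear", "apple", "plum"];
    assert_eq!(largest(&ints), 7);
    assert_eq!(largest(&words), "plum");
}
```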
So yeah, you don't need to study these papers to benefit from them; many (most?) modern languages are directly or indirectly influenced by them.
> the long list of academic references originally posted may be of little practical value for most software developers.
Oh No! That is quite the wrong way of looking at it.
One of my favourite quotes (attributed to Mark Twain) goes: "I have never let my schooling interfere with my education."
It is with that sentiment in mind that one should "read" the above list of papers. They contain the distilled essence of somebody's lifework. One does not have to read and completely understand every bit of what has been written in a paper; indeed, without the proper background, that might be quite difficult. But one should be able to browse and pick out the main ideas from the paper (this is what I try to do with all the papers/books that I read).

Keywords, jargon, and things-one-might-not-understand are filed away at the back of the mind, so that later on, when one comes across the same or a related concept, one can go "aha" and gain a deeper understanding of the subject matter. This is the essence of gathering and assimilating knowledge.
I don't get how the Mark Twain quote is apropos here though. MT had almost no formal education and started working on a river boat as a teenager. Another Mark Twain quote comes to mind here, “A classic is something that everybody wants to have read and nobody wants to read.”
This is true of almost every field. Keep in mind that these papers are about studying programming languages themselves in a meta sense, not how to be the programmer. Like the difference between a literature degree and a creative writing degree.
You can be a writer without a literature degree.
You can be a painter without an art history degree.
You can be a scientist without a degree in the philosophy of science.
> You can be a scientist without a degree in the philosophy of science
Scientists generally have a Ph.D. degree in science. Although the Ph. in Ph.D. stands for philosophy, 99% of scientists don't learn philosophy, or even philosophy of science.
That said, a scientist without a Ph.D. is a rare bird.
I don't mean PhDs in science or degrees in "natural philosophy", but actual philosophy of science.
Philosophy of science studies what it means to do science in a meta sense. A PhD in science is how to actually be a scientist. This is a critical difference. The papers in the linked article are akin to the former not the latter.
And thereby missing pieces that you seldom need, and being far less productive than you could be. All the while being considered very productive, since everybody else does the same...
Also, even if one does not read books (or especially papers), he will still ask friends and colleagues about programming-related things. And those friends and colleagues may be avid readers of books (or especially academic papers). Even others' code can be considered derived from books (or especially academic papers).
So this uneducated yet productive programmer is unable to escape reading literature by proxy.
Programmers who don’t have the curiosity to go beyond scratching the surface write the worst codebases and make for terribly stressful coworkers, to be honest.
That is not true. I have worked with many programmers who were productive and never read any scientific papers. It is enough that somebody on the team (me) reads them and provides the corresponding perspective.
Well, what can we do? It's your experience versus mine, and you're in no position to tell me that my experience isn't true.
Also, the comment I responded to only mentioned books and courses, so I'm not sure that your reply is relevant. I certainly do not read white papers myself -- I don't feel smart or motivated enough to do that -- but I do read plenty of reference books and take courses on CS theory, which derive their content from white papers anyway, except that the information is presented in a more readable and pedagogical manner.
That is not how it works. A single counter-example is enough to refute your point.
For simplicity (just the essence), imagine you said "I have never seen a white crow" (your experience) and I said "I saw a white crow" (my experience). The conclusion that combines both: there are white crows.
I think this is the case for most fields: self-learning, as compared to university, is always an option, but much harder and more prone to error. One advantage of programming is that buggy code tends to be less consequential than a buggy bridge or a buggy surgery, and the cost of production is incredibly low. You can test and play around as much as you like.
Necula's work on proof-carrying code was more or less transformed into CCured, which is nothing but memory management.
It's from 2004. A lot of the interesting stuff with separation logic was just getting started at that point in time. This branch led to Facebook's Infer and bi-abduction.
That's the important note. Memory management is in no way more special than other things. But it is fairly hard to get the type system to handle it. Rust has started on some parts, but it's just the beginning.
Per Martin-Löf: Constructive Mathematics and Computer Programming
https://www.cs.tufts.edu/~nr/cs257/archive/per-martin-lof/co...