Without even learning all the theory, you can get pretty far (IMHO) in building your own programming language (if that's the goal). The two resources that I've found invaluable are the programming languages course by Dan Grossman and PLAI by Shriram. Lastly, there's also a whole bunch of interesting university courses that you can refer to - https://github.com/prakhar1989/awesome-courses#programming-l...
 - http://www.cs.columbia.edu/~sedwards/classes/2016/4115-sprin...
 - https://www.coursera.org/course/proglang
 - http://cs.brown.edu/courses/cs173/2012/book/
Honestly, "programming language theory" is a bit of a misnomer. It's less a theory for programming languages and more a theory of CS from a language perspective. (Where theoretical CS is a theory of CS from a computational point of view.)
Programming language theory is interesting in and of itself and is fairly distinct from the sorts of things you'd learn in a normal programming languages course or by implementing your own language. I'm not saying either of those is useless—I'm a big fan of doing both!—just that they accomplish fundamentally different things. Your advice is good if you just want to implement a programming language; the linked repository is good if you want to explore PL theory.
PL theory shares a lot with formal logic and the foundations of mathematics. The main areas are things like type theory (a special approach to formal logic) and semantics (akin to proof theory or model theory).
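For a flavour of what that looks like, here's the standard function-application typing rule from the simply typed lambda calculus (textbook notation, as in TAPL):

    \frac{\Gamma \vdash e_1 : \tau \to \tau' \qquad \Gamma \vdash e_2 : \tau}
         {\Gamma \vdash e_1\; e_2 : \tau'}

Reading: if e_1 has a function type and e_2 has the matching argument type, then the application has the result type. Much of type theory is stating and proving properties of systems built from rules like this.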
What you'll learn by reading the books linked above is almost completely disjoint from what the class you linked covers.
My advice to younger folks looking at this field would be to turn back and instead solve some practical problem by creative programming. Git or BitTorrent would be good examples to emulate. Even a smaller project could have more impact than the whole field of PL research combined.
That said, for die-hard-pragmatists, it seems reasonable to assume that learning PLT might make one a better programmer.
Why is this a 'theory'? What makes it different from documentation of different design patterns in programming languages?
Of course, all of these are overly broad generalizations, and it's not as if there aren't points of intersection between these different ways of looking at programming languages.
But the kind of stuff you'll see in a book like Types and Programming Languages is super-different in approach from the kind of stuff you'd see in, say, the Gang of Four book, both of which are different again from Bret Victor's approach.
PLT has become its own thing, and work in the area is not necessarily related to PL. If you scan current papers at POPL, you'll see a trend of applying language-oriented theory to problem X where X has nothing to do with implementing/designing a programming language.
We can apply theory to PL, but many of the factors in a PL design today (or yesterday) are not rooted in theory.
> But the kind of stuff you'll see in a book like Types and Programming Languages is super-different in approach from the kind of stuff you'd see in, say, the Gang of Four book, both of which are different again from Bret Victor's approach.
It is not even just the approach that is different, but the goals and outcomes.
I'm in a university so I have access to most journals.
I'm very interested in learning more about this.
I.e.: implement a language, use that as an example, and improve it.
This method offers a sound approach to development, rather than ad-hoc implementation. It allows properties of the method to be proved with absolute certainty.
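As a concrete sketch of the starting point (in Haskell; Expr and eval are just illustrative names, not from any particular course): you begin with a tiny interpreter and grow it feature by feature, proving properties about each extension:

    -- a minimal expression language
    data Expr = Lit Int
              | Add Expr Expr
              | Mul Expr Expr

    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b
    eval (Mul a b) = eval a * eval b

    -- eval (Add (Lit 1) (Mul (Lit 2) (Lit 3)))  ==>  7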
In hindsight, after you have "done" it, everything seems like it was doable.
Monads, for example, proved to be very useful in functional languages, and as Phil Wadler puts it in his now-famous paper "Monads for Functional Programming": "it is doubtful that the structuring methods presented here would have been discovered without the insight afforded by category theory. But once discovered they are easily expressed without any reference to things categorical. No knowledge of category theory is required to read these notes."
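To illustrate Wadler's point (a minimal sketch, not from his paper): you can put a monad to work with zero category theory. Here Maybe chains computations that may each fail, without nested case analysis:

    import Text.Read (readMaybe)

    -- parse two numbers and divide; any failure short-circuits
    safeDiv :: String -> String -> Maybe Int
    safeDiv sx sy = do
      x <- readMaybe sx
      y <- readMaybe sy
      if y == 0 then Nothing else Just (x `div` y)

    -- safeDiv "10" "2"  ==> Just 5
    -- safeDiv "10" "0"  ==> Nothing
    -- safeDiv "ten" "2" ==> Nothing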
But maybe monads are too exotic to provide a good example. Let's talk about higher-order functions instead. Closures are common to virtually all programming languages these days, and they existed long before we had the first computers, thanks to the lambda calculus. It would be nice, then, to look at the lambda calculus to understand the use and ramifications of such constructs: what they are good for, whether they have limitations, etc.
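For instance (a small sketch in Haskell; makeAdder and twice are just illustrative names):

    -- a closure: the returned function captures the binding of n
    makeAdder :: Int -> (Int -> Int)
    makeAdder n = \x -> x + n

    -- a higher-order function: takes a function, returns a function
    twice :: (a -> a) -> (a -> a)
    twice f = f . f

    -- twice (makeAdder 3) 10  ==>  16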
More importantly, there seems to be a corresponding logic to the useful type theories we come up with (or perhaps discover). One famous example is the Curry-Howard correspondence. So it might be useful to understand what came decades, sometimes centuries, before, rather than reinventing the wheel.
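Curry-Howard in miniature (a standard illustration, sketched in Haskell): a total program of a given type is a proof of the corresponding proposition, with pair types as conjunction and function arrows as implication:

    -- a proof that (A and B) implies A
    proj1 :: (a, b) -> a
    proj1 (x, _) = x

    -- modus ponens is just function application
    mp :: (a -> b) -> a -> b
    mp f x = f x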
Nah, it's fine.
But a monad is just part of an abstraction. Which also means that they have been used, without that specific name, for ages.
The use of category theory just allows one to think about things out of context, but it is neither necessary nor evident that it "always" leads to interesting results for the everyday programmer.
It is like the difference between applied and pure mathematics.
The concept of a monad in Haskell was introduced to solve a specific problem, though (structuring I/O, if I remember correctly). And it was not uncontroversial. But people who know better could chime in.
If I wanted to see what was being said by the field, are there any seminal works to 'ingest'?
In terms of seminal work, I admit I am quite ignorant. I would check with SIAM.
A path to enlightenment.
I've read a lot on these subjects already, but I am certainly saving this for two reasons:
* Including papers and sections of books in my weekly reading
* I find it quicker to understand if I can grab a handful of explanations and critique them (often rewriting the confusing parts of several explanations once understood)
If you want an easy list, or a minimal list, this isn't it - and I am glad.
Is that what is taught these days? No wonder people think that's all there is. :-(
Looking at mpweiher's profile it looks like he might be interested in things like metaprogramming and object protocols - again they're seen as either part of the software engineering or systems disciplines rather than PLT.
Metaprogramming also has a rich history in traditional PL venues like POPL and ICFP, such as the Scheme/Racket work on macros, Template Haskell and friends, and staged-computation work à la MacroML.
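To make the flavour concrete (a minimal sketch; incExpr is a hypothetical name): in Template Haskell, a quotation builds an AST at compile time, and a splice, used from another module, inserts it into the program:

    {-# LANGUAGE TemplateHaskell #-}
    module Inc where

    import Language.Haskell.TH (Q, Exp)

    -- [| ... |] quotes Haskell syntax as a Q Exp (an AST computation)
    incExpr :: Q Exp
    incExpr = [| \x -> x + (1 :: Int) |]

    -- elsewhere:  $(incExpr) 41  ==>  42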
I say that I'm a systems person to signify that I want to talk about ASTs, IRs and compiler output rather than core language models and formal semantics.
But you could argue about definitions all day.
The relevant observation is that this list focuses on formalisms like type theory, and on models we can reason about formally (like FP), precisely because that is what we mean by PLT.
First, I don't have specific things that were left out (well, maybe a few); it was more a sense that there is so much more, especially things I would want to know, and feel I should know, when creating a computer language. The important bits.
First and foremost, programming languages are for people and for building systems. Except for CTM, that seems to be almost completely missing (and I had initially missed that it was actually on the list, so my bad).
So anyway, if you want to have theory of programming languages, you have to have something about how people think and work and how that affects programming language design. As an example, I found John Pane's HANDS system very illuminating, or maybe the "Smalltalk: Bits of History, Words of Advice", which talks about iterating on language usability, or Henrik Gedenryd's thesis "How Designers Work". Not saying these are it, but just the flavour I'd be looking for.
On the systems front, I think I'd want to see how problems building systems lead to features in programming languages, what the tradeoffs are etc. Again, without this, what's the point of a theory of programming language? And I do hope features in programming languages are trying to address actual problems building systems. Otherwise, it would be like having theories of engineering without talking about real world objects that we are trying to build such as airplanes, bridges or buildings.
As a tiny and obvious example: what are the tradeoffs between more compile-time checking (with longer compile times and more inscrutable errors) and a more dynamic approach (with much faster cycle times but less checking)? And yes, there are tradeoffs.
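A minimal sketch of one side of that tradeoff (in Haskell; half is just an illustrative name): the checker rejects this before the program ever runs, whereas a dynamically checked language would only fail when the call executes:

    half :: Int -> Int
    half x = x / 2
    -- rejected at compile time: (/) requires a Fractional type,
    -- and Int isn't one; the working version is x `div` 2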
When you look at Hennessy/Patterson for Computer Architecture, you see (well-understood?) tradeoffs that matter in terms of application. Yes, more registers are good, but then you have to encode them in the ISA, your path lengths increase, and you need to save/restore more on a context switch, etc.
One thing that should also definitely be on there is HOPL. Histories of people actually designing programming languages, trying to solve specific problems and then figuring out what worked, what didn't and why. Without some sort of reflection like this (=evidence), again, what's the point of even beginning to talk about a "theory" of programming language? Unless it's a definition of the word "theory" that I am not aware of.
In all seriousness, though, I'm also just speaking from personal experience. That said, if you are working in type theory, chances are that you will end up skimming almost everything on this list...
The problem is that all of those textbooks are great for an introduction (that's why they're textbooks), so what is usually more productive is:
1. ask your supervisor or a senior student or postdoc for a 10-30 minute explanation / intro to a topic
1a. skim the relevant chapter(s) from one of the textbooks to make sure you got what they said
2. go read research papers
and frequently, you can just skip straight to 2.
This is the opposite of that.