Crash Course on Notation in Programming Language Theory 133 points by bshanks on Jan 10, 2014 | hide | past | favorite | 14 comments

 Excellent, I think if I reread this before turning every page of Pierce's Types and Programming Languages I might understand it a little.
 Exactly! I found it hard to break into TaPL, and this is just the thing I needed.
 I was just about to ask how well this would work as a companion to tapl. I'm still very early in the book so the notation isn't indecipherable, but I'm sure it'll get there soon.
 Can anyone recommend one of those newfangled MOOC/open courses that cover this material? I find myself often frustrated by working through the notation in PL articles, and it's probably because I've never taken any CS math past basic undergraduate combinatorics/graph theory.
 This is by the fellow who wrote the awesome book Concepts, Techniques, and Models of Computer Programming. It'll probably teach different models of computation via the Oz programming language, but I doubt it will go into the kind of theory you'd find in Types and Programming Languages.
 The Coursera compilers class covers some of it, and is interesting in its own right. Disclaimer: due to having a newborn, I did a lot of watching, but not (yet) any coding. :-)
 Very nice. A (personal) quirk: it's the first time (I recall) seeing the natural numbers defined as the non-negative integers (including zero) rather than just the positive integers. That definition makes sense, since zero has many useful counterparts, such as the empty set. But it makes the article's choice of indexing elements from one stand out a bit, in contrast to its examples (0, 1, 2) -- not (1, 2, 3)...
 There is no universally agreed-upon convention on whether the natural numbers include zero. There is a long tradition of excluding zero, because non-computer scientists find counting from one more intuitive. On the other hand, there are good reasons for starting at zero even within pure mathematics itself, e.g. so that the natural numbers can represent the cardinalities of finite sets. Zero is quite natural in this context ;-)
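 One way to make "zero as the cardinality of a finite set" concrete is the von Neumann construction, in which each natural number is the set of all smaller naturals, so the number n literally has n elements and zero is the empty set. A minimal Python sketch (function names are mine):

```python
# Von Neumann naturals: 0 is the empty set, and n + 1 is n ∪ {n}.
def zero():
    return frozenset()

def successor(n):
    # n + 1 contains every element of n, plus n itself.
    return n | {n}

def von_neumann(k):
    """Build the von Neumann encoding of the natural number k."""
    n = zero()
    for _ in range(k):
        n = successor(n)
    return n

# Each encoded natural k has exactly k elements, so the naturals
# (zero included) are precisely the cardinalities of finite sets.
for k in range(5):
    assert len(von_neumann(k)) == k
```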
 Another reason some mathematicians like to start at 1 is that you can then write 1/n for any natural number n without needing to handle the possibility of division by 0. In the classes I've taken, we tend to use ℕ = {0, 1, 2, ...} in algebra and ℕ = {1, 2, 3, ...} in analysis, where one uses 1/n a lot.
 Interesting. Having 1/n defined for every element of the set can certainly be convenient. So division and indexing (in the article here, indexing starts at 1) are two arguments for the positive integers, while half-open ranges and the cardinality of the empty set are arguments for the non-negative integers.
 Papa Dijkstra wants to have a word with you:
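 (Presumably a reference to Dijkstra's note "Why numbering should start at zero," which argues for half-open, zero-based ranges 0 ≤ i < n.) Python happens to adopt exactly that convention, so the argument can be sketched there (example mine):

```python
# Dijkstra's convention: half-open ranges [lo, hi) with zero-based indexing.
def split_at(seq, mid):
    # With half-open ranges the two halves abut with no overlap and no gap,
    # and each range's length is simply hi - lo.
    return seq[0:mid], seq[mid:len(seq)]

xs = [10, 20, 30, 40, 50]
left, right = split_at(xs, 2)
assert left == [10, 20]
assert right == [30, 40, 50]
assert len(left) + len(right) == len(xs)

# A sequence of n elements is indexed by 0 <= i < n, so range(n)
# enumerates exactly the valid indices and has length n.
assert list(range(3)) == [0, 1, 2]
```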
 I found another article by the same guy linked on the right of the page, written in July 2012: http://siek.blogspot.co.uk/2012/07/the-semantics-of-familiar...... Looks interesting, but it looks like I'll have to read this crash course first!
 Thank you for sharing. Very good for bringing back some old (positive) memories on learning this the first time around.
