If I could add one thing to these two posts, it would be to learn Prolog.
We write inference rules top-down ("from these premises, we draw this conclusion"), but we often read them bottom-up ("given this goal, what facts could have led to it?").
This double-vision thing, where you simultaneously view a set of inference rules in both a deductive and goal-directed fashion, is IMO one of the main stumbling blocks for people new to type theory. Luckily[1], however, it's fundamentally the same intuition that Prolog programmers develop -- and learning new programming languages is usually easy for programmers.
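To make the analogy concrete, here is a minimal sketch (the predicate type_of/3 and the term encoding with var/1, lam/2, app/2 and arrow/2 are made up for illustration) of the simply typed lambda calculus written as Prolog clauses. Each clause "Head :- Body" is an inference rule read deductively ("from the premises in Body, conclude Head"), while posing a query makes Prolog run the very same rules goal-directed, searching for premises that would establish your goal:

    % Var: a variable has whatever type the context assigns it.
    type_of(Ctx, var(X), T) :-
        member(X-T, Ctx).
    % Abs: if E has type T2 under Ctx extended with X:T1,
    %      then lam(X, E) has type T1 -> T2.
    type_of(Ctx, lam(X, E), arrow(T1, T2)) :-
        type_of([X-T1 | Ctx], E, T2).
    % App: if E1 has type T2 -> T and E2 has type T2,
    %      then app(E1, E2) has type T.
    type_of(Ctx, app(E1, E2), T) :-
        type_of(Ctx, E1, arrow(T2, T)),
        type_of(Ctx, E2, T2).

Querying ?- type_of([], lam(x, var(x)), T). yields an answer of the form T = arrow(A, A) for a fresh type variable A, i.e. the identity function gets type A -> A, found purely by the goal-directed, bottom-up reading of the same three rules.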
[1] It's not really by luck, of course: Prolog is called a logic programming language for a reason...
Do you have any recommendations on learning Prolog with the goal of writing practical/real type systems? Ideally a book that has runnable example programs?
I had bookmarked this a while ago but haven't gone through it:
He gives a history of metalanguage notations (for both language semantics and syntax, including regexes and CFGs). And then he criticizes the inconsistency that has cropped up in these notations over the years.
The irony is that people who are formalizing precise semantics of programming languages do not agree on the syntax/semantics of the metalanguage.
I guess this is what you notice when you read hundreds of programming language papers, as I'm sure Guy Steele has.
-----
Also, does anyone know if what he refers to as "progress and preservation" are the same / related to "liveness and safety" in distributed algorithms? It sounds like the same kind of idea.
> Also, does anyone know if what he refers to as "progress and preservation" are the same / related to "liveness and safety" in distributed algorithms? It sounds like the same kind of idea.
Progress and preservation are used to prove type safety, which (like the name says) is a safety property. Informally, safety means "a bad thing never happens", and liveness means "a good thing will eventually happen".
Informally, progress means "a typed program will never immediately exhibit undefined behaviour", and preservation means "a typed program always stays well-typed". Together, these two properties mean "a typed program never exhibits undefined behaviour".
This has the shape of a safety property. All this can be made much more formal, but that's the intuition.
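Spelled out for a small-step operational semantics, the standard statements of the two theorems read:

    Progress:      if  |- e : T  then either e is a value or e --> e' for some e'
    Preservation:  if  |- e : T  and  e --> e'  then  |- e' : T

Safety then follows by induction on the length of the evaluation sequence: preservation keeps every intermediate term well-typed, and progress guarantees that a well-typed term is never stuck, so evaluation can only ever stop at a value.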
The inconsistency may be more of a problem for newcomers to the field, since it can leave them with the impression that they are failing to follow what the author is saying. With more experience, it becomes easier to skip over the inconsistencies without really noticing them.
http://siek.blogspot.co.uk/2012/07/crash-course-on-notation-...