
Developing a Statically Typed Programming Language (2017) - proxybop
https://blog.mgechev.com/2017/08/05/typed-lambda-calculus-create-type-checker-transpiler-compiler-javascript/
======
shpongled
Pierce's _Types and Programming Languages_ is a phenomenal textbook. I'm
almost done with my implementation of System F-omega (polymorphic lambda
calculus with higher-kinded types and type operators), featuring a fully
handwritten lexer/parser with helpful diagnostics.

My end goal is to use it as one phase of IR for a functional language
compiler.
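
The checker at the heart of such a project can start remarkably small. Here is a minimal sketch in Python of a simply typed lambda calculus checker (the fragment TAPL builds up from; the tuple representation and names are my own illustration, not the commenter's code):

```python
# Minimal simply typed lambda calculus checker (an illustrative sketch).
# Types: ('base', name) | ('arrow', dom, cod)
# Terms: ('var', x) | ('lam', x, ty, body) | ('app', f, a)

def typecheck(term, env):
    tag = term[0]
    if tag == 'var':                      # look the variable up in the context
        return env[term[1]]
    if tag == 'lam':                      # extend the context with the binder
        _, x, ty, body = term
        return ('arrow', ty, typecheck(body, {**env, x: ty}))
    if tag == 'app':                      # domain must match the argument type
        f_ty = typecheck(term[1], env)
        a_ty = typecheck(term[2], env)
        if f_ty[0] != 'arrow' or f_ty[1] != a_ty:
            raise TypeError(f"cannot apply {f_ty} to {a_ty}")
        return f_ty[2]
    raise ValueError(f"unknown term: {tag}")

# identity on Nat:  \x:Nat. x   has type   Nat -> Nat
ident = ('lam', 'x', ('base', 'Nat'), ('var', 'x'))
assert typecheck(ident, {}) == ('arrow', ('base', 'Nat'), ('base', 'Nat'))
```

System F-omega adds type abstraction, type application, and a kinding judgment on top of exactly this shape of recursion.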

~~~
proxybop
Congrats!! I’ve been trying to study the basics of type theory, but without a
more formal background in the mathematics/proofs I’m a little in over my head
with those books. The red dragon book seems to have simpler, less mathematical
explanations of basic type stuff so far.

~~~
birthdaywizard
I would recommend "Software Foundations", also by Pierce. It covers a lot of
the same material, but in Coq. I found it much more approachable, not having as
much of a formal background. Having proofs machine-checked, with error
messages, is a godsend for your sanity when you don't have a professor there to
validate you.

------
AdieuToLogic
Cool article.

While I cannot find the exact quote from Martin Odersky[0], I do believe he
once said something along the lines of, "it takes about ten years to make a
complete typed language."

If anyone also recalls this and has a link to the quote, I would much
appreciate the pointer to it.

0 -
[https://en.wikipedia.org/wiki/Martin_Odersky](https://en.wikipedia.org/wiki/Martin_Odersky)

------
Scramblejams
Suggest removing the anchor from the link, as it drops you into the middle of
the piece. That, or add “Type Rules” to the title.

------
hhas01
Curious: from what I can see, a type system is really just an embedded
declarative DSL for doing set algebra.

So is there a technical reason why education and implementations always
intertwine it with a larger client language, rather than treating it as a
complete, self-contained entity in its own right? Or is that lack of
decomposition just oversight?

~~~
tomp
You can count on one hand the number of type systems that are "logically OK"
(the formal terms are _sound_ (doesn't admit wrong programs, unlike e.g.
Java), _complete_ (admits all correct programs, for some definition of
"correct", unlike e.g. Scala), and _decidable_ (type checking always
terminates)). The rest amount to various kinds of hacks. Common ones include
requiring function parameter types to be annotated so that type inference can
work, escape hatches (like Object in Java and interface{} in Go), and generics
that are incorrect or limited to various degrees. Subtyping is tremendously
hard. The specific hacks implemented in different programming languages are
often implementation-specific (and arbitrary), and often result in weird
interactions between different language features.
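
The escape-hatch point translates directly to gradually typed Python, where `typing.Any` plays roughly the role Object plays in Java and interface{} plays in Go: a static checker such as mypy accepts the call below, and the lie only surfaces at runtime. A small sketch for illustration (the function and names are invented):

```python
from typing import Any

def greet(name: str) -> str:
    return "hello, " + name

x: Any = 42          # Any silences the checker, like Object / interface{}

try:
    greet(x)         # statically fine, but blows up at runtime
except TypeError:
    print("unsound: the annotation lied about x")
```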

Basically, type systems == science (figuring out new things), programming
languages == engineering (making new findings useful).

~~~
TheAsprngHacker
> You can count the number of type systems that are "logically OK" (the formal
> terms are _sound_ (doesn't admit wrong programs, unlike e.g. Java),
> _complete_ (can admit all correct programs (for some definition of
> "correct"), unlike e.g. Scala)... on one hand. The rest amount to various
> amounts of hacks

Huh? Rice's Theorem states that any static semantic analysis must either
reject valid programs or accept invalid programs. This says nothing about
whether a type system is rigorous or hacky.

> common ones include specifying function parameter types to allow type
> inference to function

It's true that type inference becomes undecidable without annotations for
things like dependent types, and there's nothing hacky about that.

~~~
ImprobableTruth
Rice's theorem only applies to Turing-complete languages, so if you're "weak
enough" there's no problem being both sound and complete. I think it's easy to
see why people would think of such systems as "nicer" (even though the others
are of course still rigorous).

> It's true that type inference becomes undecidable without annotations for
> things like dependent types, and there's nothing hacky about that.

But it's not just the more advanced type systems: type inference is
undecidable for basically any system more complex than HM typing. So in
practice, virtually all programming languages with type systems that go beyond
the absolute basics can't fulfill the "promise" of type inference, which is
that you don't have to explicitly annotate types. I think if you take into
consideration that even relatively "good" type systems like Haskell's
occasionally require you to annotate not just functions but 'random' terms,
you might see why some people might consider it "hacky".
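
The core of HM inference that these systems outgrow is first-order unification over type terms. A toy unifier in Python (the representation and names are my own sketch, not from any of the languages discussed):

```python
# Toy Hindley-Milner-style unification (an illustrative sketch).
# Types: ('var', name) | ('con', name) | ('fun', dom, cod)

def resolve(t, subst):
    """Follow substitution links until we hit a non-bound type."""
    while t[0] == 'var' and t[1] in subst:
        t = subst[t[1]]
    return t

def occurs(v, t, subst):
    """Does variable v occur in t? Guards against infinite types."""
    t = resolve(t, subst)
    if t[0] == 'var':
        return t[1] == v
    if t[0] == 'fun':
        return occurs(v, t[1], subst) or occurs(v, t[2], subst)
    return False

def unify(t1, t2, subst):
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    for a, b in ((t1, t2), (t2, t1)):
        if a[0] == 'var':
            if occurs(a[1], b, subst):          # reject infinite types
                raise TypeError("occurs check failed")
            return {**subst, a[1]: b}
    if t1[0] == 'fun' and t2[0] == 'fun':       # unify domains, then codomains
        return unify(t1[2], t2[2], unify(t1[1], t2[1], subst))
    raise TypeError(f"cannot unify {t1} with {t2}")

# unifying  a -> Int  with  Bool -> b  forces  a = Bool, b = Int
s = unify(('fun', ('var', 'a'), ('con', 'Int')),
          ('fun', ('con', 'Bool'), ('var', 'b')), {})
assert s == {'a': ('con', 'Bool'), 'b': ('con', 'Int')}
```

Everything past this (rank-n polymorphism, GADTs, dependent types) is where decidability breaks down and annotations creep back in.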

------
LessDmesg
I started reading but stopped when I saw "succ n" and "prev n". Unary numbers
are so academic and disconnected from reality that I lose interest in any
paper that uses them. Lambda calculus makes me yawn too. Guess I'll be making
a programming language on my own to see how far I get without reading TAPL or
any CS papers :-)
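
(For anyone curious what the dismissed `succ n` encoding actually is: these are Church numerals, where the number n is the function that applies f to x n times. A few lines of Python make it concrete; this sketch is mine, not from the article.)

```python
# Church numerals: n is represented as "apply f to x n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# the famously tricky predecessor (Kleene's encoding)
pred = lambda n: lambda f: lambda x: \
    n(lambda g: lambda h: h(g(f)))(lambda _: x)(lambda u: u)

def to_int(n):                  # decode by counting applications
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
assert to_int(three) == 3
assert to_int(pred(three)) == 2
```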

~~~
quickthrower2
I found TAPL too hard so I know where you are coming from. I knocked up this
programming language in a few hours.
[https://github.com/mcapodici/badlanguage](https://github.com/mcapodici/badlanguage)

~~~
LessDmesg
It's not that the material is hard so much as that it's hard to read because
of the formalism. Instead of introducing lambda calculus and the fraction-like
inference rules with Greek letters, CS researchers should just use Python and
write an implementation on the fly.

------
macintux
2017

