
Type inference - luord
https://eli.thegreenplace.net/2018/type-inference/
======
georgewfraser
For anyone interested in this subject, there's a great minimalistic
implementation in OCaml here:

[http://okmij.org/ftp/ML/generalization.html](http://okmij.org/ftp/ML/generalization.html)

The author starts with a naive implementation similar to the one linked here,
then implements a series of optimizations that make it more efficient.
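One of the efficiency tricks the article covers can be sketched in a few lines of Python (a toy illustration, not the linked OCaml code): instead of threading an immutable substitution map through unification, represent each type variable as a mutable cell that gets destructively linked to its solution, union-find style, with path compression in the "find" step.

```python
# Toy destructive unification with union-find-style links.
# (Hedged sketch; the occurs check is omitted for brevity.)

class TVar:
    _count = 0
    def __init__(self):
        TVar._count += 1
        self.name = f"t{TVar._count}"
        self.link = None          # None = unbound; otherwise points at a type

class TArrow:                     # function type: arg -> res
    def __init__(self, arg, res):
        self.arg, self.res = arg, res

class TCon:                       # base type constructor, e.g. int, bool
    def __init__(self, name):
        self.name = name

def prune(t):
    # Chase links to the representative, compressing the path as we go.
    if isinstance(t, TVar) and t.link is not None:
        t.link = prune(t.link)
        return t.link
    return t

def unify(t1, t2):
    t1, t2 = prune(t1), prune(t2)
    if t1 is t2:
        return
    if isinstance(t1, TVar):
        t1.link = t2              # destructive update: no substitution map
        return
    if isinstance(t2, TVar):
        unify(t2, t1)
        return
    if isinstance(t1, TArrow) and isinstance(t2, TArrow):
        unify(t1.arg, t2.arg)
        unify(t1.res, t2.res)
        return
    if isinstance(t1, TCon) and isinstance(t2, TCon) and t1.name == t2.name:
        return
    raise TypeError("type mismatch")
```

For example, unifying `'a -> int` with `bool -> 'b` links `'a` to `bool` and `'b` to `int` in place, with no substitution map to copy or compose.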

------
skybrian
That's how it works, but I'll point out that just because you _can_ leave out
type declarations in a Hindley-Milner language doesn't mean you _should_.

Compilers are good at deducing types from a function's implementation, but
humans will find them easier to understand if they know what the function's
inputs and outputs are supposed to be before attempting to figure out how the
implementation does it. Error messages will be better, too.

~~~
lou1306
On the other hand, from my limited experience in F# (but I guess OCaml would
be similar) I found that, when you let the type inference algorithm do its
thing, it can help you realize that your function is more general than you
thought. For instance, you write a function for lists of strings and in the
end it can actually work on any list.

Maybe it's trivial, but I found it helped me think about refactoring/reusing my
code in contexts I didn't expect.

~~~
hinkley
Is it that hard to figure out that your method is generic?

And what is this “end” you speak of? You don’t know what next year’s
requirements are.

Don’t confuse the implementation with the contract. That’s a variant of duck
typing. If you need to do stringy things to the data later don’t advertise
genericity. People will start using your function and then you’ll have to make
a new method because they own the contract now.

~~~
lou1306
> If you need to do stringy things to the data later don’t advertise
> genericity. [...] People will start using your function and then you’ll have
> to make a new method because they own the contract now.

I agree that my point doesn't really apply to functions that are part of some
kind of API.

Still, type inference is not duck typing: the former happens at compile time,
unlike the latter.

> Is it that hard to figure out that your method is generic?

Well, it is not hard if you decide it should be generic from the start. What I
was saying is that sometimes you start with a "concrete" implementation, but
type inference helps you think about refactoring it to maximize reuse.
Something like "hmmm, right now this method has signature 'a -> int -> 'b... I
wonder if I can turn it into 'a -> ('b -> 'c) -> 'c and remove this _other_
method that looks pretty similar..."
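That kind of refactor can be rendered in Python roughly as follows (all names here are invented for illustration): two near-duplicate helpers with concrete result types collapse into one generic function once the signature is loosened from "take an index, return a value" to "take an index and a continuation".

```python
from typing import Callable, TypeVar

A = TypeVar("A")
C = TypeVar("C")

# Before: roughly 'a -> int -> 'b, specialized twice.
def nth_length(words: list[str], i: int) -> int:
    return len(words[i])

def nth_upper(words: list[str], i: int) -> str:
    return words[i].upper()

# After: roughly 'a -> ('b -> 'c) -> 'c — pass the operation in as a
# continuation, and both helpers become one generic function.
def with_nth(words: list[A], i: int, k: Callable[[A], C]) -> C:
    return k(words[i])
```

Now `with_nth(["ab", "cde"], 1, len)` covers the first helper and `with_nth(["ab"], 0, str.upper)` the second, and the checker infers the result type from the callback.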

------
muglug
This is awesome. Type inference is an endlessly fascinating road to venture
down, and each programming language presents its own set of challenges.

I've built a type inference system[0] for PHP and it's incredibly satisfying
when it does what you expect.

[0] [https://getpsalm.org](https://getpsalm.org)

------
legulere
The problem with advanced type inference is that the exact algorithm needs to
be part of the standard for compatibility reasons (not just between compilers
but also between compiler versions).

So for me it makes more sense to have everything beyond the basics in
something like an LSP language server/IDE.

~~~
gergoerdi
> The problem with advanced type inference is that the exact algorithm needs
> to be part of the standard for compatibility reasons

I don't think that's true -- if you have principal types, you can just say in
your language spec that the principal type is inferred.

~~~
lifthrasiir
It can be really tedious to list the cases where principal types do not exist.
Haskell has a separate section for ambiguous overloading [1] for that reason,
and more ad-hoc or advanced type systems will be harder to describe.

[1]
[https://www.haskell.org/onlinereport/haskell2010/haskellch4.html#x10-790004.3.4](https://www.haskell.org/onlinereport/haskell2010/haskellch4.html#x10-790004.3.4)

------
abeppu
I think he's using the term "bi-directional" in a potentially confusing and
nonstandard way. As I understand it, in this area that term typically
describes a style of type checking and inference which arose later, and which
involves both checking rules and synthesis rules (the two directions). It can
be made to support some quite expressive type systems, but generally requires
programs to have a sprinkling of explicit annotations.

On that topic, I found these notes to be very helpful:
[https://www.cs.cmu.edu/~fp/courses/15312-f04/handouts/15-bidirectional.pdf](https://www.cs.cmu.edu/~fp/courses/15312-f04/handouts/15-bidirectional.pdf)

But as I understand it, the concept started here:
[http://www.cis.upenn.edu/~bcpierce/papers/lti-toplas.pdf](http://www.cis.upenn.edu/~bcpierce/papers/lti-toplas.pdf)

~~~
vilhelm_s
Also, I think one should not call it Hindley-Milner. A key clever part of the
HM algorithm is how it infers types for polymorphic functions, which this code
doesn't do.

For just the idea in this post, I suggest calling it something like
"unification-based type inference", or "type inference using equational
constraints".

------
teajunky
Every article on Eli Bendersky's website is just great. If he ever writes a
book I would immediately buy one.

------
piinbinary
In case anyone finds it useful, I wrote an explanation of how type inference
with let-polymorphism works:

[http://jeremymikkola.com/posts/2018_03_25_understanding_algorithm_w.html](http://jeremymikkola.com/posts/2018_03_25_understanding_algorithm_w.html)

------
renox
I don't know much about this topic, but I've heard that Hindley-Milner type
inference also has drawbacks, as it doesn't work well with partial compilation
and subtyping.

So do the benefits outweigh the drawbacks?

~~~
Drup
HM type inference works perfectly fine with separate compilation and/or
partial files (whichever you mean). See the OCaml tooling for a concrete
demonstration.

Subtyping is a fairly large domain, and it really depends what you mean. There
are cases where it works fine (again, see OCaml) and others where it's more
problematic (see Scala). In any case it's not really HM anymore; it needs to
be extended.

------
always_good
The upside of type inference is that making small changes to types upstream
doesn't necessarily incur changes to code downstream.

But the downside of type inference is that it makes reading code harder and
you may depend on an IDE to know what your intermediate types are. Especially
wrt generics and higher-order types where intermediate function calls don't
have a concrete type in their signature.

Nothing demonstrates this better than any time you've written code yourself
only to immediately invoke the IDE tooling to inspect what the type is. If you
didn't know, good luck to the person reading it in git diffs. I've had to git
clone someone's Rust project recently just to follow the type transformation
across a bunch of future/stream chains.

This isn't an argument against static typing but rather an argument against
inference excess. There's a nice middle ground where asserting the concrete
type in critical places makes the code more readable but also checks your own
assumptions. Like any time you received Maybe<Maybe<Int>> where you wanted
Maybe<Int>.

~~~
Klathmon
A general rule of thumb I follow is to explicitly type your interfaces, and
infer private internals.

(I mean "interfaces" not referring to any specific language feature, but to
the concept in general).

~~~
steveklabnik
This is sorta where we landed with Rust; you have to write out the types in
function signatures, but not inside function bodies. It's not based on
public/private lines, but it's a similar idea.

~~~
kjeetgill
Agreed! That's where Java 10 landed with var: only local variables can have
their types inferred, but function signatures and fields need explicit types.
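The same convention reads naturally in Python with a checker like mypy (a sketch; the function name is invented): annotate the signature, which is the contract, and let the locals inside the body be inferred.

```python
# Signature annotated explicitly: this is the interface other code sees.
def average_word_length(words: list[str]) -> float:
    # Locals need no annotations; a checker infers `total: int` on its own.
    total = sum(len(w) for w in words)
    return total / len(words) if words else 0.0
```

Callers and reviewers get the input/output types up front, while the body stays free of annotation noise.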

