Gamma |- e => t
(synthesis: the type t is inferred "forwards" from the expression e)
The other direction also holds:
Gamma |- e <= t
(checking: the expression e is checked "backwards" against a known type t)
Consider, for example, a function:
func foo(a) b
When foo is applied to an argument e, the parameter type a is already known, so e only needs to be checked against it:
e <= a
In effect, you perform two passes: forward inference, and then backward checking.
P.S.: I'm not actually a PL person. This is from my casual reading, so my understanding may well be wrong.
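To make the two judgments above concrete, here's a minimal sketch of a bidirectional checker for a toy lambda calculus (my own toy AST, not taken from any particular paper): `infer` implements Gamma |- e => t and `check` implements Gamma |- e <= t, and the two call into each other.

```haskell
import qualified Data.Map as M

data Ty = TInt | TFun Ty Ty deriving (Eq, Show)

data Expr
  = Var String
  | Lit Int
  | Lam String Expr   -- unannotated lambda: can only be checked, not inferred
  | App Expr Expr
  | Ann Expr Ty       -- an annotation switches from checking back to inference

type Ctx = M.Map String Ty

-- Gamma |- e => t : synthesize a type ("forwards")
infer :: Ctx -> Expr -> Maybe Ty
infer ctx (Var x)   = M.lookup x ctx
infer _   (Lit _)   = Just TInt
infer ctx (Ann e t) = if check ctx e t then Just t else Nothing
infer ctx (App f a) = case infer ctx f of
  Just (TFun dom cod) | check ctx a dom -> Just cod
  _                                     -> Nothing
infer _   (Lam _ _) = Nothing  -- a bare lambda's type can't be synthesized

-- Gamma |- e <= t : check against a known type ("backwards")
check :: Ctx -> Expr -> Ty -> Bool
check ctx (Lam x body) (TFun dom cod) = check (M.insert x dom ctx) body cod
check ctx e t                         = infer ctx e == Just t
```

For example, `infer M.empty (App (Ann (Lam "x" (Var "x")) (TFun TInt TInt)) (Lit 1))` synthesizes `Just TInt`: the annotation lets the lambda be checked, and the application then checks its argument against the known domain, exactly the `e <= a` step.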
Bidirectional propagation of type information allows the types of parameters of anonymous functions to be inferred. When an anonymous function appears as an argument to another function, the expected domain type is used as the expected type for the anonymous abstraction, allowing the type annotations on its parameters to be omitted. A similar, but even simpler, technique infers type annotations on local variable bindings.
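A concrete instance of what the quoted passage describes (sketched in Haskell; GHC gets there by unification rather than purely local propagation, but the visible effect, omitting the lambda's parameter annotation, is the same):

```haskell
-- The element type of the list fixes the expected argument type of
-- map's function, so the lambda's parameter x needs no annotation.
doubled :: [Int]
doubled = map (\x -> x * 2) [1, 2, 3]

main :: IO ()
main = print doubled
```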
From this I gather it means syntactically bidirectional. (But I don't really understand this, so someone should correct me :P)
edit: but it’s incomplete: it never gets around to discussing subtyping and thus helping justify its existence. I think the main point is that due to subtyping and other complexities, we may not be able to synthesize the type of an expression just by looking at it, but if its type has been determined by other constraints, we can still check that that type is valid for the expression.
Inferring types of function parameters and results isn't as useful. Combined with generics, it gets really complicated. It seems better to have those declared explicitly. Programmers need to be able to read function signatures when programming. They're an essential part of the documentation. Also, whole-program type inference and separate compilation inherently conflict.
That's pretty much where things seem to be settling out. Even Python now has optional type declarations for function signatures.
Note that this is quite different to, and simpler than, HM inference. It doesn't require unification like the article describes and is something all statically typed languages practically have to do anyway: even with full annotations, validating `T x = f()` (or equivalent) requires type checking the `f()` expression to see that it actually is a T. And thus, this style of "inference" can be gained by simply not requiring the T type to be written and just taking the result of type checking `f()` as gospel, and is what C++ and Go do. (There's a bunch of subtlety/complexity for this in C++, but that's due to C++, not this form of "inference".)
Proper HM (or similar) type inference will use information from how variables are manipulated after their declaration to decide on the type, and so is more than what's already done for the purposes of type checking. This is what Rust does, breaking the supposed trend.
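A tiny Haskell illustration of that: at its binding site `v` has no determined type at all; a later use is what pins it down.

```haskell
import Data.Maybe (fromMaybe)

main :: IO ()
main = do
  let v = Nothing                  -- nothing here says what v's type is
  print (fromMaybe (5 :: Int) v)   -- this later use forces v :: Maybe Int
```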
I wonder why you think so? The inferred types in a language like Haskell or OCaml are almost always exactly what the programmer would have written. So long as a language has principal type inference, the worst case is that the inferred type is a bit more general than you expected.
> Also, whole-program type inference and separate compilation inherently conflict.
There is no conflict here, actually. All type variables in definitions exported from a module are generalizable. As such, no uses in other modules will further refine those types.
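For instance (a Haskell sketch with a hypothetical `ident`): the exported definition is generalized once, at its definition site, and every use elsewhere only instantiates that scheme.

```haskell
-- With no signature, inference generalizes ident to forall a. a -> a
-- when this module is compiled; importing modules merely instantiate
-- that scheme and can never refine it further.
ident x = x

main :: IO ()
main = do
  print (ident (1 :: Int))  -- instantiated at Int
  putStrLn (ident "ok")     -- instantiated at String
```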
I have no experience with ML-using languages but in Scala, which has inferior but still useful type inference, it is recommended to add explicit types to public functions/methods because otherwise, implementation details or errors might alter the signature.
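The same hazard exists in Haskell when top-level signatures are omitted (a hypothetical `scale` for illustration): an implementation change silently changes the inferred, public type.

```haskell
-- Inferred as Num a => a -> a. If the implementation later became
-- scale x = x / 2, the inferred type would silently shift to
-- Fractional a => a -> a: an API change no signature ever recorded.
scale x = x * 2

main :: IO ()
main = print (scale (3 :: Int))
```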
Take for example:
data Tree = Node Tree Tree | Leaf Integer
makeEmptyTree x = Leaf x  -- inferred: makeEmptyTree :: Integer -> Tree
C++ most certainly did not start this. It had been in ML (originally developed by Robin Milner) for decades. It has been a feature of every ML-derived language, and Scala had it back in 2004.
> forward inference of types within functions, while declaring them in function definitions.
part, not type inference in general.
I'd agree if we're talking about top-level only, though.
val x = fn()
int x = fn()
A lot of times I'll find myself creating an extra local variable just to have the type info in the variable name since otherwise it's invisible from just reading the code. Essentially adding an extra unchecked type system layer I have to manually maintain:
val listOfThings = functionThatHopefullyReturnsThings()
D also has this.
Does any actual editor do that? Few text editors for programs are even aware of definitions in other modules.
It works with many text editors, like Emacs or vim.
edit: the trend today is to work with the "language server protocol":
Haskell people are writing new tooling based on LSP to supersede ghc-mod:
Might I suggest taking a fresh look at IDEs? It sounds like you might be stuck in text-editor land, wondering how green the grass is on the other side (or if they even have grass). IDEs do indeed provide grass, and a significant portion of the time it'll even trim and weed itself when you look at it and wonder if it's time to mow.
A trend has emerged recently: editor-agnostic language oracles. These tools expose an IPC interface for querying, and in some cases even mutating (i.e. refactoring), source code. They often incorporate a compiler, so the type information you get is perfect, and exactly what the compiler sees. Integration with the programmable text editor of your choice (vim, Emacs, Atom, Sublime Text, VSCode…) is then a simple matter of making the right IPC calls.
I think this is the sweet spot. I would hate to be locked into an IDE's poor text editor or buggy interface just because they offer some convenient tools.
Some examples of editor-agnostic tooling:
* CIDER for Clojure and ClojureScript (via weasel or figwheel). The primary editor plugin is for Emacs, but vim and Sublime Text support exists outside the main repository.
* rtags and ycmd for C and C++
* racer and RLS for Rust
* eclipse.jdt.ls for Java
* guru for Go
Microsoft has even created a protocol for these daemons to follow, which makes editor integration even simpler: https://github.com/Microsoft/language-server-protocol/wiki/P...
But that's probably still a year or two away at least for any given language for basic support, much less reaching parity with mature IDEs. In the meantime...
Incidentally, things have moved on in the functional programming world from vanilla Hindley-Milner types, and there's a lot of interest now in dependent types: types that can depend on run-time values. For example, if you're drawing on a window, the x-coordinate's type could be "number from 0 to the window's width", where the width might change while the program is running. This adds an extra layer of type safety, eliminates a further class of errors, and allows the compiler to safely remove some run-time checks.
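Full dependent types need a language like Idris or Agda, but a rough flavour is expressible in GHC Haskell with GADTs (a sketch of length-indexed vectors, not the window example from the comment): the vector's length lives in its type, so `vhead` needs no run-time emptiness check at all.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- A list whose length n is part of its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Only non-empty vectors are accepted; the "list is empty" failure
-- case is ruled out at compile time, so no run-time check remains.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

main :: IO ()
main = print (vhead (VCons (10 :: Int) VNil))
```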
(It also does that for C++ and C# and whatnot for inferred types, but I thought I've seen other editors do it too...)
MkPair :: a -> b -> Pair a b
not :: Bool -> Bool
succ :: Int -> Int
foo x = MkPair (succ x) (not x)
So next time you're implementing vanilla HM, maybe consider a compositional type system instead, which can give the following, much more informative error message:
Cannot unify 'Int' with 'Bool' when unifying 'x':
    Cannot unify 'Int' with 'Bool' in the following context:
        MkPair (succ x)            (not x)
        :: Bool -> Pair Int Bool   :: Bool
        x :: Int                   x :: Bool
But also thanks to that detour, this article is a nice read on OCaml in particular: http://okmij.org/ftp/ML/generalization.html
I had to rewrite mine (https://github.com/chewxy/hm) twice, finally removing typeclasses to make it fast enough for an inference system I wrote.