
Let's not confuse Hindley-Milner type inference with Haskell's (or ML's) type system.

While Haskell's type system can be complex, depending on the exact language extensions in play, Hindley-Milner type inference is simple. It's just good UI — you almost never need to write explicit type declarations, because the compiler (or typechecker) can almost always figure out what you mean. That's it!
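
For instance (ordinary Haskell, no signatures anywhere; the types in the comments are what GHC reports, modulo type-variable naming):

    -- No type annotations; the compiler infers the most general types.
    swap (x, y) = (y, x)
    -- inferred: swap :: (a, b) -> (b, a)

    compose f g x = f (g x)
    -- inferred: compose :: (b -> c) -> (a -> b) -> a -> c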

Using advanced Haskell extensions can occasionally require annotating a non-top-level value with a type, in order to resolve an ambiguity. The compiler will point out which value has an ambiguous type.
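
The same shape of problem, and the same shape of fix, already exists in plain Haskell without any extensions: an ambiguous class constraint, which GHC reports as an ambiguous type variable and which a local annotation resolves. A small sketch (hypothetical names):

    -- Rejected: nothing pins down which type `read` should produce,
    -- so GHC reports an ambiguous type variable at the use of `read`.
    roundTrip s = show (read s)

    -- Accepted: annotate the one ambiguous value.
    roundTrip' s = show (read s :: Int)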

Also, if you've made a mistake and your program doesn't typecheck, the compiler may complain about different parts of the program depending on whether explicit type declarations have been provided. Adding type annotations can help pinpoint the exact location of the mistake. This isn't a big deal in practice, and it's more of an issue in SML and OCaml than in Haskell, because those languages don't encourage annotating top-level values with types.
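
A sketch of what that looks like in Haskell (hypothetical names, but the behaviour described in the comments is GHC's usual one):

    half x = x / 2
    -- Without a signature, GHC generalises this to
    --   half :: Fractional a => a -> a
    -- and a mistaken call such as
    --   report n = show (half (n :: Int))
    -- is rejected at the call site (no Fractional instance for Int),
    -- far from the actual mistake.

    -- half :: Int -> Int
    -- With this signature, the complaint moves into half's own body,
    -- at the use of (/), which is where the real error lives.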

There is never any need to simulate any part of the Hindley-Milner type inference algorithm in your head. The algorithm's only failure mode is a request for more information, and the requested information should already be obvious to you, the programmer, since programming in an ML-derived language means thinking in terms of typed values.

The warts I had in mind were problems with the Haskell ecosystem, such as Cabal hell, or the Haskell community, such as a tendency towards academese — not anything to do with its type system, or type inference mechanism.

In stark contrast, Scala's "type inference" is not Hindley-Milner, and is barely able to infer the existence of its own arse with both hands, failing in even the simplest of cases.

Example 1. Unable to infer a useful type for `optBar`, because it gets `None`'s own singleton type rather than `Option[Int]`:

    scala> class Foo {
         |   def setBar(bar: Int) = optBar = Some(bar)
         |   def getBar = optBar.get
         |   var optBar = None
         | }
    <console>:8: error: type mismatch;
     found   : Some[Int]
     required: object None
             def setBar(bar: Int) = optBar = Some(bar)
                                                 ^
Workaround:

    scala> class Foo {
         |   def setBar(bar: Int) = optBar = Some(bar)
         |   def getBar = optBar.get
         |   var optBar: Option[Int] = None
         | }
    defined class Foo
    
Example 2. Unable to infer the parameter type of `foobar` when partially applying an uncurried `foo`:

    scala> def foo(bar: Int, baz: Int) = bar + baz
    foo: (bar: Int, baz: Int)Int

    scala> val foobar = foo(1, _)
    <console>:8: error: missing parameter type for expanded function ((x$1) => foo(1, x$1))
           val foobar = foo(1, _)
                               ^
Workaround:

    scala> val foobar = foo(1, _: Int)
    foobar: (Int) => Int = <function1>
Workaround, with curried `foo`:

    scala> def foo(bar: Int)(baz: Int) = bar + baz
    foo: (bar: Int)(baz: Int)Int

    scala> val foobar = foo(1)_
    foobar: (Int) => Int = <function1>

Which languages have "basic unidirectional type inference"? I'm not familiar with the term.



Scala […] is barely able to infer the existence of its own arse with both hands, failing in even the simplest of cases.

Thanks for this. Not only did I lol, but this confirmed my suspicion that Scala was "not the language I was looking for" to use for an upcoming project (a compiler that, for business/politics reasons, must run on the JVM). You've no doubt spared future me regret.


Check out ermine[1], a language for the JVM that was written for exactly that reason. It's being used at S&P Capital IQ. You can look at the user guide[2] for more information.

[1]: https://github.com/ermine-language

[2]: https://launchpad.net/ermine-user-guide


Thanks for pointing it out!



