I sense a fallacy lurking here. Sure, a type system like Haskell's or even Scala's is magic (and can be used without understanding it) as long as it can "Do What You Mean," but when it can't, you have to understand its workings. We can't sweep that fact under the rug by calling any failure to "Do What You Mean" a bug, a wart to be fixed -- a deviation from correct DWYM behavior. There is no actual DWYM type system, or at least no one's ever come up with one.

What type system doesn't have warts? Haskell's probably has both more and fewer than most, depending on how you count them, because so much emphasis is put on the type system.

When you use any language long enough, you end up needing to simulate pretty much every observable aspect of it yourself -- not including, for example, the garbage collector, the JVM bytecode verifier, or the GCC code optimizer, which are supposed to be transparent, but including the type inferencer, the JVM threading model, and the JavaScript semicolon insertion rules, which are not. Some of these things you can steer clear of for a long time, or even avoid forever by staying on the beaten path, but they lurk, waiting, as the thousands of people who have run into bugs or compiler errors and needed to understand them can attest.

I don't know HM too well, but it seems to have more in common with the dataflow analysis algorithms in optimizers and verifiers -- which programmers don't usually have to understand -- than with the basic unidirectional type inference that already serves pretty well to reduce redundancy in the source code.




Let's not confuse Hindley-Milner type inference with Haskell's (or ML's) type system.

While Haskell's type system can be complex, depending on the exact language extensions in play, Hindley-Milner type inference is simple. It's just good UI — you almost never need to write explicit type declarations, because the compiler (or typechecker) can almost always figure out what you mean. That's it!
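
For instance, in plain Haskell with no extensions and no annotations anywhere, the compiler infers fully general types for all of the following; the types in the comments are what GHC reports (modulo type variable names):

    -- swap :: (a, b) -> (b, a)
    swap (x, y) = (y, x)

    -- doubleAll :: Num a => [a] -> [a]
    doubleAll xs = map (* 2) xs

    -- compose :: (b -> c) -> (a -> b) -> a -> c
    compose f g x = f (g x)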

Using advanced Haskell extensions can occasionally require annotating a non-top-level value with a type, in order to resolve an ambiguity. The compiler will point out which value has an ambiguous type.
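
Even without extensions, class polymorphism alone can trigger the same failure mode; the classic case is `read` followed by `show`, sketched here as a toy example:

    -- Rejected: the value produced by 'read' and consumed by 'show' never
    -- appears in the result type, so its type is ambiguous. GHC reports an
    -- ambiguous type variable arising from a use of 'read'.
    --   badRoundTrip s = show (read s)

    -- Annotating the inner value resolves the ambiguity:
    roundTrip :: String -> String
    roundTrip s = show (read s :: Int)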

Also, if you've made a mistake and your program doesn't typecheck, the compiler may complain about different parts of the program depending on which explicit type declarations have been provided. Adding type annotations can help pinpoint the exact location of the mistake. This isn't much of a problem in practice, and it's more of an issue in SML and OCaml than in Haskell, because those languages don't encourage annotating top-level values with types.
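
A small sketch of how this plays out (the names are made up; error text abridged):

    -- The actual mistake: we meant integer division, 'div', but wrote '/'.
    -- Without an annotation this line still typechecks, inferring
    --   half :: Fractional a => a -> a
    half x = x / 2

    mid :: [Int] -> Int
    mid xs = half (length xs)
    -- ...so the complaint shows up here instead:
    --   No instance for (Fractional Int) arising from a use of 'half'

    -- Annotating 'half' with the type we actually intended,
    --   half :: Int -> Int
    -- moves the error onto the '/' in its body, right where the mistake is.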

There is never any need to simulate any part of the Hindley-Milner type inference algorithm in your head. The algorithm's only failure mode is a request for more information, and the requested information should already be obvious to you, the programmer, since programming in an ML-derived language means thinking in terms of typed values.

The warts I had in mind were problems with the Haskell ecosystem, such as Cabal hell, or with the Haskell community, such as a tendency towards academese — not anything to do with its type system or type inference mechanism.

In stark contrast, Scala's "type inference" is not Hindley-Milner, and is barely able to infer the existence of its own arse with both hands, failing in even the simplest of cases.

Example 1. Unable to usefully infer the type of `optBar`: the compiler fixes it to the singleton type of the `None` object rather than widening to `Option[Int]`:

    scala> class Foo {
         |   def setBar(bar: Int) = optBar = Some(bar)
         |   def getBar = optBar.get
         |   var optBar = None
         | }
    <console>:8: error: type mismatch;
     found   : Some[Int]
     required: object None
             def setBar(bar: Int) = optBar = Some(bar)
                                                 ^
Workaround:

    scala> class Foo {
         |   def setBar(bar: Int) = optBar = Some(bar)
         |   def getBar = optBar.get
         |   var optBar: Option[Int] = None
         | }
    defined class Foo
    
Example 2. Unable to infer the parameter type when partially applying the uncurried `foo`:

    scala> def foo(bar: Int, baz: Int) = bar + baz
    foo: (bar: Int, baz: Int)Int

    scala> val foobar = foo(1, _)
    <console>:8: error: missing parameter type for expanded function ((x$1) => foo(1, x$1))
           val foobar = foo(1, _)
                               ^
Workaround:

    scala> val foobar = foo(1, _: Int)
    foobar: (Int) => Int = <function1>
Workaround, with curried `foo`:

    scala> def foo(bar: Int)(baz: Int) = bar + baz
    foo: (bar: Int)(baz: Int)Int

    scala> val foobar = foo(1)_
    foobar: (Int) => Int = <function1>

Which languages have "basic unidirectional type inference"? I'm not familiar with the term.


> Scala […] is barely able to infer the existence of its own arse with both hands, failing in even the simplest of cases.

Thanks for this. Not only did I lol, but this confirmed my suspicion that Scala was "not the language I was looking for" to use for an upcoming project (a compiler that, for business/politics reasons, must run on the JVM). You've no doubt spared future me regret.


Check out Ermine[1], a language for the JVM that was written for exactly that reason. It's being used at S&P Capital IQ. You can look at the user guide[2] for more information.

[1]: https://github.com/ermine-language

[2]: https://launchpad.net/ermine-user-guide


Thanks for pointing it out!



