
My feeling is that there's a relatively small subset of programmers who adapt well to unification inference, and they think the way you do - they grok the logical problem the inference engine is trying to solve, and don't worry at all about how it does it.

There is a larger set of programmers who try to understand it and run into, well, stuff like this:

http://en.wikipedia.org/wiki/Hindley%E2%80%93Milner

I hope you're not seriously suggesting that this content is on the same intellectual level as, say, a good RFC.

A good RFC in fact works very hard to be as stupid as possible, and one source of irritation from the RFC-producing world toward the PDF-producing world - so to speak - is that the PDF-producing world doesn't seem to even understand that it has this job, much less make any particular effort to do it.

The way people should understand the Hoon type inference engine is to think about the concrete reality of what it's doing, not the abstract mathematical problem it's trying to solve. This is simply because human beings aren't very good as a species at mathematics. Hate that that's true, but that it is true I don't think anyone can deny.

-----

But you don't need to understand any of the H-M algorithm to understand how to use a language with an H-M type system. You just need to understand that the compiler must be able to statically assign every expression in your program a non-union, non-dependent type.

If an expression can't be assigned such a type - e.g. it's sometimes an integer and sometimes a string in the same function call, or its type depends on a runtime value, as with a list that must be non-empty - then your program won't typecheck. Otherwise it will.

Importantly: it doesn't matter how you come to this conclusion; the compiler will use the H-M algorithm, but that doesn't mean you need to. You can just look at your "if" expression and realize that you're returning an integer in one branch and a string in the other and that that violates the "non-union non-dependent" requirement above.

(This, BTW, is an example of the difference between a spec (what I outlined above) and an implementation (what the H-M algorithm says to do).)
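
To make the if/else example concrete, here's a minimal sketch in Haskell (used here as a representative H-M language; the function names are invented for illustration):

    -- Rejected: the branches have different types (Int vs String), so no
    -- single non-union type can be assigned to the whole expression:
    bad :: Bool -> Int
    bad b = if b then 1 else "one"    -- type error: got [Char], expected Int

    -- Accepted: both branches are Strings:
    good :: Bool -> String
    good b = if b then "1" else "one"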

-----


This is an excellent point, and we're in vehement agreement. Or almost vehement agreement.

The thing is, as a programmer, I have to understand what result the algorithm will produce. I have to understand what type my variable will be. If I have to look and see what the compiler came up with, I have way too much work to do.

There are two ways to solve this problem. One is to build an abstract mathematical model which you can emulate in your head - not using the same methods, maybe, but getting the same results. The other is to build a concrete, mechanical model which you don't emulate, but rather simulate. Your brain computes not the goal of the algorithm, but the mechanism by which the algorithm is implemented. Same result - you know what type the variable will be.

From my perspective, it's one problem, it has to get solved, and the right way to solve it is the way that's the easiest for the people who will actually use the language. For my kind of people, I feel math guys are far too enamored of doing it the emulation way instead of the simulation way.

But this is basically an aesthetic opinion about user interface design, so we may be at the end of Hume's is-ought question here...

-----


As a side note, you should check out Erlang's static checker, Dialyzer. It implements a complete (i.e. no-false-positives) type system, as opposed to a sound (i.e. no-false-negatives) type system (e.g. H-M and friends). It pretty much eliminates the need to make any predictions about what the type checker will say, as it will only complain if your program can provably crash due to a type error.

Of course this still permits the possibility of runtime type errors, but in practice it works fairly well, and it does not constrain the programmer.

-----


Dialyzer is decent, given the constraints for which it was designed, but nowhere near as helpful as an ML-derived static type system.

For example, Dialyzer won't reject a function declared to return values of type A and actually returning values of type B along an execution path which isn't currently traveled.
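
To see the contrast from the ML side, here's a minimal Haskell sketch (`classify` is an invented example; the Dialyzer behavior is as described above for the Erlang analogue):

    -- A sound checker (GHC) rejects this at compile time, even though the
    -- String-returning branch may never be reached at runtime:
    classify :: Int -> Int
    classify n = if n >= 0 then n else "negative"   -- type error

    -- A success-typing tool like Dialyzer, by contrast, only complains when
    -- it can prove a crash along a path it considers reachable.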

-----


> One is to build an abstract mathematical model which you can emulate in your head

The ability to emulate your abstract or concrete programming model in your head cannot be overemphasized. I used to program the original 8-bit black-and-white Game Boy, a machine so simple and small that I could literally emulate it at the clock-cycle level while I was writing the code for it.

If you can fit your model into your cache (brain) you can do amazing things using just your wetware.

-----


In practice, using a language with Hindley-Milner type inference is massively simpler than you appear to think it is. There are countless other warts, but this ain't one of them.

-----


I sense a fallacy lurking here. Sure, a type system like Haskell's or even Scala's is magic (and can be used without understanding it) as long as it can "Do What You Mean," but when it can't, you have to understand its workings. We can't sweep that fact under the rug by calling any failure to "Do What You Mean" a bug, a wart to be fixed -- a deviation from correct DWYM behavior. There is no actual DWYM type system, or at least no one's ever come up with one.

What type system doesn't have warts? Haskell's probably has both more and fewer than most, depending on how you count them, because so much emphasis is put on the type system.

When you use any language long enough, you end up needing to simulate pretty much every observable aspect of it yourself -- not including, for example, the garbage collector, the JVM bytecode verifier, or the GCC code optimizer, which are supposed to be transparent, but including the type inferencer, the JVM threading model, and the JavaScript semicolon insertion rules, which are not. Some of these things you can steer clear of for a long time or even avoid forever by staying on the beaten path, but they lurk, waiting for the thousands of people who have run into bugs or compiler errors and needed to understand them.

I don't know HM too well, but it seems to have more in common with the dataflow analysis algorithms in optimizers and verifiers -- which programmers don't usually have to understand -- than with the basic unidirectional type inference that already serves pretty well to reduce redundancy in the source code.

-----


Let's not confuse Hindley-Milner type inference with Haskell's (or ML's) type system.

While Haskell's type system can be complex, depending on the exact language extensions in play, Hindley-Milner type inference is simple. It's just good UI — you almost never need to write explicit type declarations, because the compiler (or typechecker) can almost always figure out what you mean. That's it!
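
For instance, a minimal sketch of what that looks like in practice (the definitions are invented for illustration):

    -- No type signatures anywhere; the compiler infers the most general types:
    compose f g x = f (g x)   -- inferred: (b -> c) -> (a -> b) -> a -> c
    swap (a, b) = (b, a)      -- inferred: (a, b) -> (b, a)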

Using advanced Haskell extensions can occasionally require annotating a non-top-level value with a type, in order to resolve an ambiguity. The compiler will point out which value has an ambiguous type.
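
A classic instance (a minimal sketch; `roundTrip` is an invented name):

    -- Ambiguous: nothing pins down the type produced by 'read' and consumed
    -- by 'show', so the compiler asks for an annotation:
    roundTrip :: String -> String
    roundTrip s = show (read s)           -- error: ambiguous type variable

    -- Resolved by annotating the intermediate value:
    roundTrip' :: String -> String
    roundTrip' s = show (read s :: Int)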

Also, if you've made a mistake and your program doesn't typecheck, depending on whether explicit type declarations have been provided or not, the compiler may complain about different parts of the program. Adding type annotations may help pinpoint the exact location of the mistake. This isn't really a problem, and it's more of an issue in SML and OCaml than Haskell, because these languages don't encourage annotating top-level values with types.

There is never any need to simulate any part of the Hindley-Milner type inference algorithm in your head. The algorithm's only failure mode is a request for more information, and the requested information should already be obvious to you, the programmer, since programming in an ML-derived language means thinking in terms of typed values.

The warts I had in mind were problems with the Haskell ecosystem, such as Cabal hell, or the Haskell community, such as a tendency towards academese — not anything to do with its type system, or type inference mechanism.

In stark contrast, Scala's "type inference" is not Hindley-Milner, and is barely able to infer the existence of its own arse with both hands, failing in even the simplest of cases.

Example 1. Unable to infer the type of `optBar`, because the object `None` has its own singleton type, which is what gets inferred:

    scala> class Foo {
         |   def setBar(bar: Int) = optBar = Some(bar)
         |   def getBar = optBar.get
         |   var optBar = None
         | }
    <console>:8: error: type mismatch;
     found   : Some[Int]
     required: object None
             def setBar(bar: Int) = optBar = Some(bar)
                                                 ^

Workaround:

    scala> class Foo {
         |   def setBar(bar: Int) = optBar = Some(bar)
         |   def getBar = optBar.get
         |   var optBar: Option[Int] = None
         | }
    defined class Foo
    
Example 2. Unable to infer type of `foobar`, with uncurried `foo`:

    scala> def foo(bar: Int, baz: Int) = bar + baz
    foo: (bar: Int, baz: Int)Int

    scala> val foobar = foo(1, _)
    <console>:8: error: missing parameter type for expanded function ((x$1) => foo(1, x$1))
           val foobar = foo(1, _)
                               ^

Workaround:

    scala> val foobar = foo(1, _: Int)
    foobar: (Int) => Int = <function1>

Workaround, with curried `foo`:

    scala> def foo(bar: Int)(baz: Int) = bar + baz
    foo: (bar: Int)(baz: Int)Int

    scala> val foobar = foo(1)_
    foobar: (Int) => Int = <function1>
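
For comparison, the same definitions under actual Hindley-Milner inference need no parameter-type annotation at all (a Haskell sketch; the signature on `foo` is optional):

    foo :: Int -> Int -> Int
    foo bar baz = bar + baz   -- functions are curried by default

    foobar = foo 1            -- inferred: Int -> Int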

Which languages have "basic unidirectional type inference"? I'm not familiar with the term.

-----


> Scala […] is barely able to infer the existence of its own arse with both hands, failing in even the simplest of cases.

Thanks for this. Not only did I lol, but this confirmed my suspicion that Scala was "not the language I was looking for" to use for an upcoming project (a compiler that, for business/politics reasons, must run on the JVM). You've no doubt spared future me regret.

-----


Check out ermine[1], a language for the JVM that was written for exactly that reason. It's being used at S&P Capital IQ. You can look at the user guide[2] for more information.

[1]: https://github.com/ermine-language

[2]: https://launchpad.net/ermine-user-guide

-----


Thanks for pointing it out!

-----


I think you're just projecting: your familiarity with things like RFCs makes them feel somehow more intuitive. People have plenty of problems with the network stack and its associated abstractions! TCP/IP, or even just HTTP, leads to plenty of confusion. In a vacuum, I am not convinced the RFC would be any easier than a paper on Hindley-Milner. (Wikipedia is not a good replacement for either.)

"Intellectual level" isn't exactly a well-defined term. It inevitably conflates the inherent complexity of something with the background needed to quickly understand it. The difference in the two articles is not in how accessible they are to somebody with no relevant background--both the RFCs and H-M would make no sense to a layperson!--but in the fact that they target different audiences.

Both are very stupid in the same way: they are very explicit about every little detail. It's just that where an RFC uses technical terminology mixed with legalese-like language (SHOULD and MUST and so on), H-M uses mathematical notation and inference rules. I do not believe that one of these is so much more inherently difficult to understand than the other; it's much more a matter of experience and background knowledge. Without that, reading either is going to be relatively slow and laborious.

Having gone through both RFCs and type theory, I've found both similarly slow in terms of content; the main difference is that the mathematical notation happens to be much denser. So perhaps it takes me five times longer to get through a page of type theory than a page of an RFC, but that single page contains the same amount of information, just encoded differently.

Also, I don't buy that you have to understand how the inference works to use it, any more than you have to understand or think about the OSI model to write networking code. You can use both perfectly well--and most people do--without needing to think about or understand everything. Moreover, you won't be able to learn either without a good deal of work and the relevant background. I suspect that it just happens that the background for one matches much more closely with yours than the background for the other.

Sure, humans as a species are not very good at mathematics. But they aren't very good at engineering either! They are really bad at keeping a bunch of details in memory and running even simple algorithms in their heads. All told, it's better to avoid relying on either.

-----


For the source of my bias, I have actually spent a considerable amount of time working with people in the "RFC world." For example, I wrote the kernel of the Openwave WAP browser, which shipped about a billion units, even if it sucked and no one used it. I have spent just about no time in the PDF world, and even when I was in grad school I was an OS guy. (Here's a fun old paper of mine: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.7...). So, zero time in the "PL theory" world.

I can tell you how this world sees the "PL theory" world. They say: look, you have your own way of formalizing computing; there are infinitely many ways of formalizing computing; the one we use seems to work just fine; so why should we learn yours? Whether it's that they won't, they can't, or they're too pigheaded to - the bottom line is that they aren't going to.

I don't view language design as an opportunity to change the world to this extent. And I think many, many good ideas are getting lost in the PL theory world. So my goal was to replicate those ideas, as best I can, for programmers who think about programming the way I do.

-----


My instinct is to agree with urbit here but I just wanted to thank you for the sudden enlightenment that many intelligent people probably find legalese, spec-ese and RFC-speak (or as I refer to it, "plain English prose") as awkward and unnecessarily painful to read as I do conventionally written math papers.

It sounds trivial and self-evident written down like that but most long overdue revelations probably do.

-----



