
"""Its self-compiling kernel, 7000 lines of code, specifies Hoon unambiguously; there is no Hoon spec."""

You realize there's such a thing as over-specifying something, right? You realize that using source code as a spec does this?

If you don't realize, consider that non-data aspects of a program, such as runtime and memory usage and bug compatibility, can sometimes form part of a specification, and sometimes don't. A specification says as much about what is not required of an implementation as about what is required. By claiming your one true implementation is the spec, you leave no means of ascertaining which behavior is incidental and which is actually required and can be relied upon.

I'm not normally an XKCD fan, but this strip sums it up perfectly: http://xkcd.com/1172/

.

"""Hoon can be classified as a pure, strict higher-order static type-inferred functional language, with co/contra/bivariance and genericity. However, Hoon does not use lambda calculus,"""

A higher-order functional language does not "use" the mathematical definition of higher-order functions? In that case, what do you mean by "higher-order functional language"?

EDIT: I was going to make an analogy to the hypothetical claim that language X's arithmetic system doesn't use Church numerals (i.e. the mathematical formulation of natural numbers)… but Hoon is actually implemented using Church arithmetic. Go figure.

.

"""unification,"""

By what mechanism does Hoon implement type inference, then? Honestly, saying you don't use unification is kind of like saying you don't use, say, queues. Great, but why?

.

"""or other constructs from “PL theory.”"""

Disdainful much? I'm really curious what's the background behind the scare quotes here.

.

EDIT: Just to be clear, I'm not claiming your project is uninteresting or not worth your while. I just feel you're being overly dismissive of the, let's say "traditional" schools of thought, with invalid justification.




As for the logical definition of a "spec," you are welcome to find any inconsistencies or imprecisions in the Nock axioms. Actually someone reported a (cosmetic) bug in the Nock spec this morning, which is quite cool.

There's a very important point here: Nock specifies not what a computer computes, but what it has computed, if it terminates. In general, practical systems have to interrupt long-running computations, so they must always compute some subset of that definition. But it can be, and is, a strict subset. I find this a meaningful and accurate statement.

Sorry, I didn't mean "PL theory" as scare quotes - I just didn't want to exclude anyone for whom the term doesn't instantly bring to mind a body of work.

The truth, though, is that the "PL theory" we have is just one theory of programming. It's a convention, it's not a piece of reality like pi. Church-Turing equivalence tells us that we have infinitely many such representations.

Eg, the Turing Machine is equivalent in power to lambda, but much less useful as a programming framework. My feeling is that, as a programming model, lambda suffers from the fact that it was originally designed, by mathematicians, to do mathematics.

Just from a UI basis, programmers are not mathematicians. The skill is very similar, but mathematicians work best in geometric and logical abstractions, whereas programmers are more comfortable with mechanical representations.

For instance, the general lambda approach is to represent data as code (Church numerals), a very sensible approach from the math side, whereas the Nock approach is to represent code as data - which feels more natural to the programmer.
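
(For readers who haven't seen Church numerals, here's a minimal illustrative sketch in Haskell of that data-as-code idea; the names are just for illustration.)

    -- Church numerals: "data as code". A numeral stores no digits; it is a
    -- program that applies a function n times.
    type Church a = (a -> a) -> a -> a

    zero, one, two :: Church a
    zero _ x = x
    one  f x = f x
    two  f x = f (f x)

    -- Addition just chains the two "apply n times" programs together.
    plus :: Church a -> Church a -> Church a
    plus m n f x = m f (n f x)

    -- Reading a Church numeral back out as ordinary data.
    toInt :: Church Int -> Int
    toInt n = n (+ 1) 0

    main :: IO ()
    main = print (toInt (plus two two))   -- prints 4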

So, nothing can tarnish the very powerful and elegant model of computing that is "PL theory." But it is not "the" theory of computing - just one of infinitely many possible theories.

-----


"PL theory", the field of study, contains many, many theories of programming. There is not a lambda calculus, there are several (e.g. the untyped lambda calculus, the simply typed lambda calculus, System F, Martin-Löf's dependent type theory, among others) although not all of them are Turing-equivalent (the lack of which can be a useful feature), as well as other systems which are not based on lambda (e.g. concatenative calculi, which correspond roughly to things like Forth.) In fact, from a quick perusal, it seems that Nock is a minor variation on a graph rewriting system with the graphs restricted to DAGs, so... a studied and established theory of computation.

Similarly, there are techniques for representing code as data that are quite widely known and understood in programming language theory, e.g. through Danvy's defunctionalization or the use of arrows to make explicit the structure of computation or stage-separation and things like Lisp macros. These also are not ignored in the field of PL theory.

Effectively, my criticism is that your practical understanding of "PL theory" amounts to a mostly-remembered reading of "Types and Programming Languages," and so you are accidentally reinventing things poorly and describing them in an ad-hoc way. Now, Nock and Hoon and the others are quite interesting and impressive, and I plan to keep following them in the future, but I still assert that, rather than standing on the shoulders of giants as you could be, you've dismissed them as windmills and are trying to climb your own way.

-----


Arguably there are a lot of folks reinventing things poorly in ad-hoc ways; it's doubtful that will ever change. What's more concerning is the flagrant anti-theory attitude that, at least to me, seems to be on the rise. In the context of discussions around Haskell or Agda it's sadly common to see "mathematical" bandied around as a synonym for "impractical".

-----


As for the logical definition of a "spec," you are welcome to find any inconsistencies or imprecisions in the Nock axioms.

Not my point. The "spec" can be perfect, but you're not saying what a Nock implementation needn't do. If I write NockJIT that optimizes increment-loops into O(1) addition, does that violate the spec? You might reply "no", but someone else who relied on such loops for hardware timing obviously would say "yes".

-----


The spec defines the semantics, not the performance. I think that's a pretty concrete distinction.

But in practice, at a certain level some kind of informative or even quasi-normative convention will have to creep in if you want to define the question "computer X can run program Y reasonably well." It's a qualitatively different problem, but it remains a real problem. Not one we have in the early days of the system, though.

I am happy to solve most but not all of any problem...

-----


The spec defines the semantics, not the performance. I think that's a pretty concrete distinction.

But it's a distinction you have to make. Concrete ≠ obvious.

Other things that are sometimes in specs and sometimes not:

Is the intermediate state of a running Hoon program specified by the code? (It matters for someone writing a debugger!)

Is how the compiled program handles data at runtime specified? (It matters for a cryptographer!)

Are the errors produced by the compiler specified? (It matters for an IDE!)

Are there compiler limitations that an implementation is allowed to relax, and is an implementation allowed to introduce limitations where the reference compiler has none? (It matters for anyone trying to make a faster compiler!)

-----


I think the source of the confusion is that I'm carving off a stricter, and smaller, definition of the word "spec" than what we're used to. I agree. The word is not quite right.

In general the answer to your questions is "yes and no." Well, really it's no - except that as a fairly common case, for example when we want to catch compiler errors without breaking out of the system, we virtualize Nock within itself. This is also serviceable when it comes to adding an extra operator, 11, that dereferences the global namespace. But the fact that "virtual Nock" is just a virtualization stack within one Nock interpreter is not semantically detectable.

-----


As for the type inference, you don't need unification to do type inference. Hoon infers only forward, not backward. Of course you could say it's a subset of unification, as it is.

I find Hindley-Milner inference systems too powerful. The programmer has to model what the inference system is doing in his or her head, and it's a source of significant cognitive load. Hoon's inference algorithm is slightly too smart in some cases, and requires slightly too much typing in others, but I think it's generally well matched to human capacities - especially for those of us who are not natural mathematicians.

-----


What? Why do you have to think about what the type inference is doing? I've never needed that with H-M in practice, and I've spent a fair amount of time with both Haskell and OCaml.

When I use a good inference system like that, I simply don't think about the typechecker most of the time. It just doesn't come up. The types simply work out, largely because the inference is sufficiently good.

The only times I've had to model the inference in my head is when it is insufficiently powerful, like with OCaml objects or certain typeclass patterns in Haskell. That is, it's only a problem when the inference breaks unexpectedly.

Type inference just works, until it doesn't. And that's the only place I've had difficulties: when it doesn't.

The solution to a problem like that is not to make it break more often! That just exacerbates things.

A less capable system (say Scala or, god forbid, Go) just makes life more difficult, even if it's easier to run the algorithm in your head. Especially then, in fact: you shouldn't be thinking about it in the first place.

-----


My feeling is that there's a relatively small subset of programmers who adapt well to unification inference, and they think the way you do - they grok the logical problem the inference engine is trying to solve, and don't worry at all about how it does it.

There is a larger set of programmers who try to understand it and run into, well, stuff like this:

http://en.wikipedia.org/wiki/Hindley%E2%80%93Milner

I hope you're not seriously suggesting that this content is on the same intellectual level as, say, a good RFC.

A good RFC in fact works very hard to be as stupid as possible, and one source of irritation from the RFC-producing world toward the PDF-producing world - so to speak - is that the PDF-producing world doesn't even seem to understand that it has this job, much less make any particular effort to do it.

The way people should understand the Hoon type inference engine is to think about the concrete reality of what it's doing, not the abstract mathematical problem it's trying to solve. This is simply because human beings aren't very good as a species at mathematics. Hate that that's true, but that it is true I don't think anyone can deny.

-----


But you don't need to understand any of the H-M algorithm to understand how to use a language with a H-M type system. You just need to understand that every expression in your program must be able to be statically assigned a non-union non-dependent type.

If an expression can't be assigned such a type (e.g. it's sometimes an integer and sometimes a string (in the same function call); or it's a non-empty list, etc.) then your program won't type. Otherwise it will.

Importantly: it doesn't matter how you come to this conclusion; the compiler will use the H-M algorithm, but that doesn't mean you need to. You can just look at your "if" expression and realize that you're returning an integer in one branch and a string in the other and that that violates the "non-union non-dependent" requirement above.

(This, BTW, is an example of the difference between a spec (what I outlined above) and an implementation (what the H-M algorithm says to do).)
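
To make the `if` example concrete, here's a minimal sketch in Haskell (the names are illustrative):

    -- This version violates the rule: the `if` would be an Int on one
    -- branch and a String on the other, so no single non-union type fits.
    --
    --   describe n = if n > 0 then n else "non-positive"   -- won't typecheck
    --
    -- You can reach that conclusion by inspection, without running any
    -- inference algorithm. Give both branches the same type and it's fine:
    describe :: Int -> String
    describe n = if n > 0 then show n else "non-positive"

    main :: IO ()
    main = putStrLn (describe 3)   -- prints "3"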

-----


This is an excellent point, and we're in vehement agreement. Or almost vehement agreement.

The thing is, as a programmer, I have to understand what result the algorithm will produce. I have to understand what type my variable will be. If I have to look and see what the compiler came up with, I have way too much work to do.

There are two ways to solve this problem. One is to build an abstract mathematical model which you can emulate in your head - not using the same methods, maybe, but getting the same results. The other is to build a concrete, mechanical model which you don't emulate, but rather simulate. Your brain computes not the goal of the algorithm, but the mechanism by which the algorithm is implemented. Same result - you know what type the variable will be.

From my perspective, it's one problem, it has to get solved, and the right way to solve it is the way that's the easiest for the people who will actually use the language. For my kind of people, I feel math guys are far too enamored of doing it the emulation way instead of the simulation way.

But this is basically an aesthetic opinion about user interface design, so we may be at the end of Hume's is-ought question here...

-----


As a side note, you should check out Erlang's static checker, Dialyzer. It implements a complete (i.e. no-false-positives) type system, as opposed to a sound (i.e. no-false-negatives) type system (e.g. H-M and friends). It pretty much eliminates the need to make any predictions about what the type checker will say, as it will only complain if your program can provably crash due to a type error.

Of course this still permits the possibility of type errors, but in practice it works fairly well, and does not constrain the programmer.

-----


Dialyzer is decent, given the constraints for which it was designed, but nowhere near as helpful as a ML-derived static type system.

For example, Dialyzer won't reject a function declared to return values of type A and actually returning values of type B along an execution path which isn't currently traveled.

-----


> One is to build an abstract mathematical model which you can emulate in your head

The ability to emulate your abstract or concrete programming model in your head cannot be overemphasized. I used to program the original 8-bit B&W Gameboy, which was a machine so simple and small I could literally emulate it at the clock cycle level while I was writing the code for it.

If you can fit your model into your cache (brain) you can do amazing things using just your wetware.

-----


In practice, using a language with Hindley-Milner type inference is massively simpler than you appear to think it is. There are countless other warts, but this ain't it.

-----


I sense a fallacy lurking here. Sure, a type system like Haskell's or even Scala's is magic (and can be used without understanding it) as long as it can "Do What You Mean," but when it can't, you have to understand its workings. We can't sweep that fact under the rug by calling any failure to "Do What You Mean" a bug, a wart to be fixed -- a deviation from correct DWYM behavior. There is no actual DWYM type system, or at least no one's ever come up with one.

What type system doesn't have warts? Haskell's probably has both more and fewer than most, depending on how you count them, because so much emphasis is put on the type system.

When you use any language long enough, you end up needing to simulate pretty much every observable aspect of it yourself -- not including, for example, the garbage collector, the JVM bytecode verifier, or the GCC code optimizer, which are supposed to be transparent, but including the type inferencer, the JVM threading model, and the JavaScript semicolon insertion rules, which are not. Some of these things you can steer clear of for a long time or even avoid forever by staying on the beaten path, but they lurk, waiting for the thousands of people who have run into bugs or compiler errors and needed to understand them.

I don't know HM too well, but it seems to have more in common with the dataflow analysis algorithms in optimizers and verifiers -- which programmers don't usually have to understand -- than the basic unidirectional type inference that already serves pretty well to reduce redundancy in the source code.

-----


Let's not confuse Hindley-Milner type inference with Haskell's (or ML's) type system.

While Haskell's type system can be complex, depending on the exact language extensions in play, Hindley-Milner type inference is simple. It's just good UI — you almost never need to write explicit type declarations, because the compiler (or typechecker) can almost always figure out what you mean. That's it!
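
As a small illustrative sketch (the names are mine, not from any particular codebase): nothing below carries a type declaration except `main`, yet GHC infers the fully general type `swapPairs :: [(a, b)] -> [(b, a)]` on its own.

    -- No declarations on swapPairs or swap; the compiler works out
    -- swapPairs :: [(a, b)] -> [(b, a)] by itself.
    swapPairs pairs = map swap pairs
      where
        swap (x, y) = (y, x)

    main :: IO ()
    main = print (swapPairs [(1 :: Int, "one"), (2, "two")])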

Using advanced Haskell extensions can occasionally require annotating a non-top-level value with a type, in order to resolve an ambiguity. The compiler will point out which value has an ambiguous type.

Also, if you've made a mistake and your program doesn't typecheck, depending on whether explicit type declarations have been provided or not, the compiler may complain about different parts of the program. Adding type annotations may help pinpoint the exact location of the mistake. This isn't really a problem, and it's more of an issue in SML and OCaml than Haskell, because these languages don't encourage annotating top-level values with types.

There is never any need to simulate any part of the Hindley-Milner type inference algorithm in your head. The algorithm's only failure mode is a request for more information, and the requested information should already be obvious to you, the programmer, as programming in a ML-derived language means thinking in terms of typed values.
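
A small hypothetical illustration of that failure mode: in the definition below, the intermediate result of `read s` is constrained only to be Read-able and Show-able, so the compiler stops and asks for an annotation rather than guessing.

    -- Rejected as written: GHC asks for more information.
    --
    --   roundTrip s = show (read s)   -- ambiguous type variable
    --
    -- Supplying the requested annotation resolves it; nothing else changes.
    roundTrip :: String -> String
    roundTrip s = show (read s :: Int)

    main :: IO ()
    main = putStrLn (roundTrip "42")   -- prints "42"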

The warts I had in mind were problems with the Haskell ecosystem, such as Cabal hell, or the Haskell community, such as a tendency towards academese — not anything to do with its type system, or type inference mechanism.

In stark contrast, Scala's "type inference" is not Hindley-Milner, and is barely able to infer the existence of its own arse with both hands, failing in even the simplest of cases.

Example 1. Unable to infer type of `optBar`, because `None` is also a type:

    scala> class Foo {
         |   def setBar(bar: Int) = optBar = Some(bar)
         |   def getBar = optBar.get
         |   var optBar = None
         | }
    <console>:8: error: type mismatch;
     found   : Some[Int]
     required: object None
             def setBar(bar: Int) = optBar = Some(bar)
                                                 ^
Workaround:

    scala> class Foo {
         |   def setBar(bar: Int) = optBar = Some(bar)
         |   def getBar = optBar.get
         |   var optBar: Option[Int] = None
         | }
    defined class Foo
    
Example 2. Unable to infer type of `foobar`, with uncurried `foo`:

    scala> def foo(bar: Int, baz: Int) = bar + baz
    foo: (bar: Int, baz: Int)Int

    scala> val foobar = foo(1, _)
    <console>:8: error: missing parameter type for expanded function ((x$1) => foo(1, x$1))
           val foobar = foo(1, _)
                               ^
Workaround:

    scala> val foobar = foo(1, _: Int)
    foobar: (Int) => Int = <function1>
Workaround, with curried `foo`:

    scala> def foo(bar: Int)(baz: Int) = bar + baz
    foo: (bar: Int)(baz: Int)Int

    scala> val foobar = foo(1)_
    foobar: (Int) => Int = <function1>
Which languages have "basic unidirectional type inference"? I'm not familiar with the term.

-----


Scala […] is barely able to infer the existence of its own arse with both hands, failing in even the simplest of cases.

Thanks for this. Not only did I lol, but this confirmed my suspicion that Scala was "not the language I was looking for" to use for an upcoming project (a compiler that, for business/politics reasons, must run on the JVM). You've no doubt spared future me regret.

-----


Check out ermine[1], a language for the JVM that was written for exactly that reason. It's being used at S&P Capital IQ. You can look at the user guide[2] for more information.

[1]: https://github.com/ermine-language

[2]: https://launchpad.net/ermine-user-guide

-----


Thanks for pointing it out!

-----


I think you're just projecting your familiarity with things like RFCs as somehow more intuitive. People have plenty of problems with the network stack and its associated abstractions! TCP/IP or even just HTTP lead to plenty of confusion. In a vacuum, I am not convinced the RFC would be any easier than a paper on Hindley Milner. (Wikipedia is not a good replacement for either.)

"Intellectual level" isn't exactly a well-defined term. It inevitably conflates the inherent complexity of something with the background needed to quickly understand it. The difference in the two articles is not in how accessible they are to somebody with no relevant background--both the RFCs and H-M would make no sense to a layperson!--but in the fact that they target different audiences.

Both are very stupid in the same way: they are very explicit about every little detail. It's just that where an RFC uses technical terminology mixed with legalese-like language (SHOULD and MUST and so on), H-M uses mathematical notation and inference rules. I do not believe that one of these is so much more inherently difficult to understand than the other; it's much more a matter of experience and background knowledge. Without that, reading either is going to be relatively slow and laborious. Having gone through both RFCs and type theory, I've found both similarly slow in terms of content; the main difference is that the mathematical notation happens to be much more dense. So perhaps it takes me five times longer to get through a page about type theory than a page of an RFC, but that single page contains the same amount of information, just encoded differently.

Also, I don't buy that you have to understand how the inference works to use it, any more than you have to understand or think about the OSI model to write networking code. You can use both perfectly well--and most people do--without needing to think about or understand everything. Moreover, you won't be able to learn either without a good deal of work and the relevant background. I suspect that it just happens that the background for one matches much more closely with yours than the background for the other.

Sure, humans as a species are not very good at mathematics. But they aren't very good at engineering either! They are really bad at keeping a bunch of details in memory and running (even simple) algorithms in their heads. All told, it's better to avoid it.

-----


For the source of my bias, I have actually spent a considerable amount of time working with people in the "RFC world." For example, I wrote the kernel of the Openwave WAP browser, which shipped about a billion units, even if it sucked and no one used it. I have spent just about no time in the PDF world, and even when I was in grad school I was an OS guy. (Here's a fun old paper of mine: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.7...). So, zero time in the "PL theory" world.

I can tell you how this world sees the "PL theory" world. They say: look, you have your own way of formalizing computing, there are infinitely many ways of formalizing computing, the one we use seems to work just fine, so why should we learn yours? Whether they won't, or they can't, or they are too pigheaded to - the answer is, they aren't.

I don't view language design as an opportunity to change the world to this extent. And I think many, many good ideas are getting lost in the PL theory world. So, my goal was to replicate these ideas, as best as I can, for programmers who think about programming the way I do.

-----


My instinct is to agree with urbit here but I just wanted to thank you for the sudden enlightenment that many intelligent people probably find legalese, spec-ese and RFC-speak (or as I refer to it, "plain English prose") as awkward and unnecessarily painful to read as I do conventionally written math papers.

It sounds trivial and self-evident written down like that but most long overdue revelations probably do.

-----


I've actually found knowledge of type inference quite useful for understanding error messages. In OCaml, you typically allow the compiler to infer all types for functions, and only have them compared to your type signatures later, when the containing module is ascribed to a signature. This means that you can see type errors in a perfectly-correct definition of a function -- if you used it incorrectly earlier in the file. This confused the heck out of me until I thought for a few seconds about how it worked.

Haskell mostly avoids this problem by encouraging writing type signatures and checking them immediately against definitions, turning the global analysis that is OCaml's type inference into a local problem. However, when using local definitions, I do still sometimes have to think about whether a type error comes from a definition or a use.
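
A tiny hypothetical sketch of that locality: with the signature present, any mismatch in the body of `shout` is reported at this definition; without it, the compiler would simply infer whatever type the body has, and the mismatch would surface only at a later use site.

    import Data.Char (toUpper)

    -- Checked locally against its signature, so a mistake in the body is
    -- reported at this definition rather than at a distant call site.
    shout :: String -> String
    shout s = map toUpper s ++ "!"

    main :: IO ()
    main = putStrLn (shout "hello")   -- prints "HELLO!"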

-----


You can solve this, to some extent, in OCaml, by just type-hinting everything. But then what fun is type inference?

-----


> The programmer has to model what the inference system is doing in his or her head, and it's a source of significant cognitive load.

Huh? I don't know what your type-inferred language of choice is, but personally I never have to go down to the level of solving constraint sets and unifying types in my head when working in any of the ML dialects. Even when writing my own inference algorithms, once I'm able to establish confluence and progress, I rarely give the main inference constraint solver a second thought when writing code.

-----


See my response to tikhonj. No, of course not, because you're naturally good at PL theory.

Or maybe artificially good at it. Whatever the reason, you're good at it. But I believe it's clear that most people, even most programmers, aren't naturally good at it. And a lot of effort has been invested in trying to make them artificially good at it, without much result as far as I can see.

The number of extremely smart people I've met, who nonetheless think Haskell is too smart for them, is considerable. This is quite simply a UI problem. If you have a UI problem and you either don't try to solve the problem, or do and fail, you get an adoption problem. Haskell has an adoption problem, n'est-ce pas? So where does my reasoning go wrong?

-----


> Haskell has an adoption problem, n'est ce pas? So where does my reasoning go wrong?

The problem is thinking that this comes from unification-based inference algorithms, which are an implementation detail that's fairly far removed from day-to-day programmer concerns. Most Haskell programmers are not type theorists and don't implement HMF algorithms. The only difference between a Haskell programmer and "most programmers" is that they've taken the time to learn the language. In my experience many people turn away from Haskell because they can't immediately transfer their existing knowledge into it and don't like feeling like a beginner again, so they stop. I've met many /more/ people who think that programming in Java is too smart for them as well; it's just a matter of putting the time in to learn.

-----


My experience is that there are two very different kinds of Haskell programmers: those who really understand the math, like you, and those who treat it as a black box (the "Learn You A Haskell" contingent).

As a black-box language, I think Haskell is of nontrivial value, but the black box is very deep and weird. This is a UI problem. If you expect people to understand the theory, this is also a UI problem in a different way.

So, two UI problems add up to a big UI problem, which is my theory about the adoption issues. I'm curious as to what your theory is.

-----


> If you expect people to understand the theory, this is also a UI problem in a different way.

I don't really understand what you mean by "theory", you don't need to know /any/ PL theory to write Haskell and you certainly don't need to know mathematics. What specific ideas are you referring to that require advanced knowledge as a prerequisite for doing day-in-day-out programming tasks in Haskell?

-----


Again, you have to be able to perform the same computation as the type inference algorithm - for any inference engine, any type system, any language, you can't use it if you don't know what it's going to do.

There are a lot of different ways of handling this problem in Haskell - some involve knowing the notations and results of the branch of math called "PL theory," some don't.

As UIs, they all have drawbacks - and we know this, because you and I both know all the ways Haskell is awesome and rocks. We also know what people say when they complain about Haskell. They're complaining about exactly this material - so why not take the customer at his word?

-----


> for any inference engine, any type system, any language, you can't use it if you don't know what it's going to do.

My whole argument is that you don't need to know anything about the implementation of the inferencer to use it; you don't need to predict its behavior any more than you need to model the compiler's CPU instruction selection in your head. Most of the time you can safely program at a level of abstraction that doesn't involve the implementation of the language, and in Haskell this "normal level" is well above the level that involves low-level things like the lambda calculus.

I think we're just at an impasse; the meme that you need mathematics or PL theory to program in Haskell is one that puzzles me, and I don't understand where it comes from. Best of luck on your project.

-----


I'm not sure it's realistic to say someone could make a career in a programming language without being able to look at a function and deduce the static type of a value without asking the computer.

As I wrote elsewhere in the comments:

When you use any language long enough, you end up needing to simulate pretty much every observable aspect of it yourself -- not including, for example, the garbage collector, the JVM bytecode verifier, or the GCC code optimizer, which are supposed to be transparent, but including the type inferencer, the JVM threading model, and the JavaScript semicolon insertion rules, which are not. Some of these things you can steer clear of for a long time or even avoid forever by staying on the beaten path, but they lurk, waiting for the thousands of people who have run into bugs or compiler errors and needed to understand them.

I don't know HM too well, but it seems to have more in common with the dataflow analysis algorithms in optimizers and verifiers -- which programmers don't usually have to understand -- than the basic unidirectional type inference that already serves pretty well to reduce redundancy in the source code.

I could imagine citing C++ as a counterargument -- no one understands the type system, but people use the language anyway -- but it's still not an abstraction you don't have to understand to use, like a CPU.

-----


> My experience is that there are two very different kinds of Haskell programmers: those who really understand the math, like you, and those who treat it as a black box (the "Learn You A Haskell" contingent).

I don't think that LYAHFGG treats Haskell as a black box, not for the level at which it aims to teach Haskell - it makes it very clear that all functions are curried, emphasizes that a functor should be viewed more as a "computational context" than a "box" (for example, function composition), and so on. I've found it to be as precise and rigorous as other tutorials on Haskell, for the level it aims at.

-----


Of course you could say it's a subset of unification, as it is.

It definitely is, and a limiting one at that. How does Hoon infer the type error in whatever is the Hoon equivalent of the, say, JavaScript expression (a == b && b == 5 && a == "foo"), where a and b are function parameters?

-----


But my previous reply is a little content-free, as what you're really asking for is how we solve the same problem that typeclasses solve in Haskell.

Very briefly, there are two ways of asking whether a caller can change the type of a parameter. The normal way is to say: are all the possible values of the caller's type in the set of nouns that the parameter defines? This is what I call "geometric polymorphism."

But there's also "generic polymorphism," in which we ask: will the Nock code that was generated for the parameter type, actually work with the type we're calling it with? A really stupid way to ask this is to recompile the function, with the parameters exchanged for the arguments. The Hoon approach is a little bit smarter than this but not much.

So, with these two approaches, you basically get variance and genericity, which is all of polymorphism.

-----


It's difficult to have a conversation with someone using their own terms for well-established concepts.

If I understand you correctly, what you're calling "geometric polymorphism" is usually called structural typing or subtyping, and what you're calling "generic polymorphism" is usually called duck typing.

Duck typing isn't amenable to static typechecking, so it's not comparable to Haskell's typeclasses.

-----


Looks like I'm behind on my programming language theory research! Typed Racket and soon, Typed Clojure, are actually adding optional static typing to duck-typed languages. Are you doing something similar?

-----


Equality is entirely untyped in Hoon. You're just comparing the nouns - "S-expressions without the S." So there is no type error at all. You'd say

&(=(a b) =(5 b) =("foo" a))

and it'd work just fine.

-----


That's a bit underwhelming.

-----


why, would you prefer some other answer than 'false'?

now, the real question is whether Hoon thinks that (= 5 "5"), or (= 100 "1e2"), or (= 4 "four"), like some languages helpfully provide.

-----


The question posed by colanderman's expression is valid if and only if there is a type containing both 5 and "foo".

Here's Haskell's answer:

    Prelude> \ a b -> a == b && b == 5 && a == "foo"

    <interactive>:2:25:
        No instance for (Num [Char]) arising from the literal `5'
        Possible fix: add an instance declaration for (Num [Char])
        In the second argument of `(==)', namely `5'
        In the first argument of `(&&)', namely `b == 5'
        In the second argument of `(&&)', namely `b == 5 && a == "foo"'
Here, the typechecker took the most general type of 5, namely some type belonging to the Num typeclass, and discovered that the type of "foo", which is [Char], isn't an instance of the Num typeclass, or, in Java parlance, doesn't implement the Num interface. This means there is no type containing both 5 and "foo", hence — type error.

You're asking about implicit conversions, which are an undisputable evil. However, attempting to provide answers to invalid questions also isn't very good.

You may have already seen the rather entertaining Wat talk, by Gary Bernhardt: https://www.destroyallsoftware.com/talks/wat

The central theme of this talk is not just implicit conversions — it's surprising answers to invalid questions. A much better way to handle an invalid question is to reject it as early as possible — Fail Fast. With a static type system, this can be very early indeed.

-----



