Confession of a Haskell Hacker (r6.ca)
163 points by malloc47 1307 days ago | 124 comments



This post is representative of everything that I both love and hate about Haskell:

1) It is certainly true, and I've had the experience many times, that "if it compiles, it's correct" applies frequently to Haskell. Even better, I've frequently written code for the first time, hit the compile button, and had it work first time - something that almost never happens to me with Java or C.

2) The huge, HUGE caveat to that is that it's only true for pure and sufficiently polymorphic code. If you're writing anything in the IO monad, or if you're using specific data types (e.g. Int, []) rather than writing polymorphic code, then all bets are off.

There's only one way (ignoring ⊥) to write the forward pipe operator (|>) :: a -> (a -> b) -> b in Haskell. If your code compiles, it's guaranteed to be correct.
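For concreteness, that sole implementation (the one the types force on you) is:

    (|>) :: a -> (a -> b) -> b
    x |> f = f x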

There are many, many ways to write the isPrime :: Integer -> Bool function, which differ widely in correctness, understandability and efficiency. Once you move from type variables to concrete types, you open yourself up to many more potential errors.
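For instance, here is one naive trial-division sketch; faster sieves and subtly wrong versions all share the same type:

    isPrime :: Integer -> Bool
    isPrime n = n > 1 && all (\d -> n `mod` d /= 0) [2 .. n - 1]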

There are even more ways to write the function deleteFile :: FilePath -> IO (), including but not limited to (i) deleting the file and doing nothing else, (ii) deleting the file and its containing folder, and (iii) wiping your entire filesystem. All of these would type check. Sure, you probably wouldn't make these mistakes, but the point is that the type system won't help you here any more than it would in <insert untyped language here>.

I think Haskell is an incredible language. It's certainly the language I have the most fun with, and I find its approach to parallelism, concurrency, I/O and state to be natural and appealing. But there's a real danger of overstating what Haskell is capable of, and turning off newcomers to the language in the process.

-----


Your example of deleting files not being protected by the type system is a result of side effects at runtime.

The point of the OP was discussing the safety that can be achieved from pure code.

-----


I feel like I addressed this fully in my comment. I agree with the OP - you can do great things with the Haskell type system, if you're writing pure code and if you're writing sufficiently polymorphic code. I think this is awesome!

However, aside from a few libraries and a smattering of very heavily theoretical work, no code satisfies both of those predicates. If you want to write a 'real world' application in Haskell, you're going to have to get your hands dirty with IO and with concrete types, which means that the ability of the type system to prevent you from shooting yourself in the foot is severely curtailed.

-----


I find this happens a lot with Haskell. I can write hundreds of lines of code and have them work perfectly as soon as I get an error-free and warning-free compilation.

The type system is definitely a big part of this, but almost as important are algebraic data types and compiler checks for exhaustivity when scrutinising values of such types.

-----


This just screams "bad development practice", IMO.

Maybe I've only worked with amazing super hero developers (not likely), but the types of bugs avoided by Haskell's type system just aren't a big problem in any of the projects I've ever worked on.

The difficult bugs are almost always mistakes in the requirements and design errors, which become even more difficult to fix when you have a bunch of code.

On the other hand, I guess I could see type errors being a problem if you often write hundreds of lines of code before trying to compile and run it...

-----


Bad requirements and design errors can't be fixed by a perfect programming language.

Those are certainly the biggest problems in building software, but not the types of problems this particular article is talking about overcoming.

(I think we're both in agreement on this)

-----


Haskell's type system can stop a ton of errors that would not be caught by a normal type system. I've certainly run into these sorts of errors in the wild.

For example, the type system can differentiate mutable code from immutable code. This means you will never get a mutable data structure when you don't expect it. I've seen several rather subtle and hard-to-catch bugs in some Python code I used to work with because lists were being mutated in the wrong place.
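A minimal sketch of that separation using ST from the standard library: the mutation is local, and the types keep mutable references from escaping:

    import Control.Monad.ST (runST)
    import Data.STRef (modifySTRef', newSTRef, readSTRef)

    -- Pure from the caller's point of view, despite internal mutation.
    sumST :: [Int] -> Int
    sumST xs = runST (do
      acc <- newSTRef 0
      mapM_ (\x -> modifySTRef' acc (+ x)) xs
      readSTRef acc)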

Haskell also makes it much easier to create lightweight new types (with the appropriately named newtype keyword :)). This means that there is no reason not to create different types for semantically different numbers--you can tag all your specialized numbers with a special type. Now, other languages let you do this as well, but in Haskell there is neither a runtime nor a syntactic overhead: numeric literals still work (they're polymorphic), standard numeric functions still work and the runtime representation remains exactly the same. But it will ensure your functions always get the right sort of number. As a practical example, you could differentiate between px and em for CSS measurements (I'm not sure if any library does this, but it is possible).
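A sketch of the px/em idea with hypothetical types (not from any particular library):

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}

    newtype Px = Px Double deriving (Eq, Show, Num)
    newtype Em = Em Double deriving (Eq, Show, Num)

    doublePx :: Px -> Px
    doublePx x = 2 * x  -- the literal 2 is polymorphic, so this just works

    -- doublePx (Em 1.5) is a type error, yet at runtime both Px and Em
    -- are plain Doubles: no boxing, no overhead.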

The type system also makes sure that you cover all the possible cases when you consume (e.g. pattern match on) some data type. I've certainly had bugs crop up when I forget some weird edge-case. Haskell warns about this possibility at compile time.
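A minimal illustration (hypothetical type; compile with -Wall to get the warning):

    data Payment = Cash | Card | Voucher

    describe :: Payment -> String
    describe Cash = "paid in cash"
    describe Card = "paid by card"
    -- With -Wall, GHC warns that the Voucher case is not matched.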

For constraints that cannot easily be expressed in the type system, there is a pattern called a "smart constructor". This is basically a wrapper function that verifies your invariant; by hiding the raw constructor, any function that depends on the invariant forces you to have gone through the wrapper. This way, if the invariant is violated, the code fails immediately and visibly. This also makes sure that anybody using those functions actually considered the invariant.
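A minimal sketch (hypothetical Percentage type): the raw constructor is not exported, so the validating function is the only way in:

    module Percentage (Percentage, mkPercentage, getPercentage) where

    newtype Percentage = Percentage Double

    -- The sole way to build a Percentage; the invariant is checked here.
    mkPercentage :: Double -> Maybe Percentage
    mkPercentage p
        | p >= 0 && p <= 100 = Just (Percentage p)
        | otherwise          = Nothing

    getPercentage :: Percentage -> Double
    getPercentage (Percentage p) = p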

There are a bunch of other uses I haven't listed. For a web development example, I believe that Yesod uses the type system to ensure that all your internal links point to valid locations in your web app.

This is not an exhaustive list in the least. The important point is that the Haskell type system can do quite a bit more than that of more popular statically typed languages like Java or C#, and, just as critically, makes using these more extensive features easier. There is much less overhead to creating a new type in Haskell than there is in Java, which makes it more likely that programmers will use them more extensively.

Now, of course no type system can possibly protect you from design or specification problems. However, there are still a whole bunch of subtle bugs that can sneak into your code but would have been caught by Haskell's type system. Most of them would probably also be caught by a good test suite, but the type system gives you most of them effectively for free. This also makes writing tests simpler because you do not have to test anything proved by the type system.

-----


This is supported by the parametricity theorem. That is a big word, but it boils down to this: let's say you wrote an identity function of type "a -> a", and it passed the type checker. Then it is correct: you simply can't do much with a value of type "a", because you don't know anything about it.

If an identity function is too simple, consider "compose :: (a -> b) -> (c -> a) -> (c -> b)". I think it can be proven that if you write an implementation that passes the type checker for this signature, the implementation is necessarily correct.
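Indeed, ignoring ⊥, the types leave essentially one candidate:

    compose :: (a -> b) -> (c -> a) -> (c -> b)
    compose f g = \x -> f (g x)  -- i.e. compose = (.)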

-----


That's not quite true: it must be either correct or bottom.

I can write:

  id :: a -> a
  id = id

-----


This is why Haskell is evil and you should use ML.

-----


This is why Turing-complete languages are evil and you should use Agda.

-----


Yes in ML you'd have to eta-expand first:

    val id : 'a -> 'a
    fun id x = id x

-----


You can still write x=_|_ in ML, you just have to be subtler about it.

-----


    id x = id x

-----


For certain definitions of correct...

    import System.IO.Unsafe (unsafePerformIO)

    id :: a -> a
    id a = case (a,unsafePerformIO (print "rm -rf *")) of (x,()) -> a

-----


Note the "Safety Inferred" seal of approval in the upper right-hand corner of the Hackage package page http://hackage.haskell.org/packages/archive/lens-family-core... This means it survives a recursive check of dependencies that precludes that kind of nonsense.

-----


I hadn't thought of that. Types serve as a good safety net, but they're not foolproof. Expecting something like that to be correct because it passed through the type checker is like expecting to be served at a restaurant dressed in a gorilla suit because you got through the door.

-----


And of course it is fun to think about total functions that are "effectively" bottom...

    id :: a -> a
    id a = case (a,ack 4 3) of
        (x,0) -> x
        (x,_) -> x
        where
            ack 0 n = n+1
            ack m 0 = ack (m-1) 1 
            ack m n = ack (m-1) (ack m (n-1))

-----


Hi. I make a living writing code. Also, I'm bad at math other than bit arithmetic and calculating tips. (Edit: I guess I sort of understand sets and basic set operations)

What in the hell are you two talking about and why shouldn't I take this as a sign that I should avoid Haskell at all costs?

-----


You can prove properties in Haskell using the type system (provided that you can show that the expression you are proving properties about actually terminates).

I was merely pointing out that last part in the most succinct way I knew how.

You should definitely not avoid Haskell at all costs. I would characterize programming as the process of constructing systems and managing complexity. If you have a greater exposure to languages and paradigms then you have more tools at your disposal for doing your job. A programmer may be able to get by without knowing anything about Haskell or functional programming in general, but that's only because they don't realize when they are kludging something that would be simple if only it were thought about differently.

-----


I'm a grumpy old clisper. I know well the benefits of functional programming. (And Perlis languages in general. If you don't know the c2 page for this, you should feel deep shame.)

Thing is, my fellow lispers, and our scheming siblings too, spent most of our time talking about programming or actual code. The former being the means or mode to produce the latter.

Why don't Haskellers ever seem to be talking about programming or actual code when I encounter them in the wild?

Haskell seems to be some mad science project that came about after some grad students happened across an ML manual and a bottle of scotch. (At least you guys didn't make that 31-bit integer mistake. Oof, tagged pointers.)

The emphasis on the type system seems to be a bit revisionist to me. The original (pre-Haskell '98) emphasis was on the laziness really. That turned out to be a huge mistake outside of some cute demonstrations for the purpose of understanding the runtime behavior of your code, so you're choosing to ignore this monstrous mistake and instead promulgate this type safety nonsense as if static analysis has gone anywhere in the last few decades.

So back to my original questions, why don't I ever seem to find you people talking about actual code?

When I encounter fellow Python programmers, they're usually talking about cool libraries, solving interesting problems, neat tools they've discovered or made, best practices, etc.

With Haskell it's more like I've bumped into a rather unpleasant evangelical minister or Amway salesman bred with Doc Brown from Back to the Future. It's like the Haskell community has two modes of operation: proselytization and mathematical gibberish.

-----


I agree that laziness turned out to be a bad default for the reasons you mentioned (other Haskellers may disagree with me on that point), but I don't think that means that it was a mistake to pursue it. It forced the designers to build in purity which turned out to be immensely valuable. Having laziness as a default is also not the kind of decision you can easily revert.

You're right to say that the field of static analysis has been very active. What you're forgetting is that this has been in no small part due to type theory.

I think the reason you don't find people talking about actual code, as you put it, is because actual code and so-called mathematical gibberish are one and the same thing.

Lisp is based on the idea that code and data are equivalent; Haskell is based on the idea that code and mathematical gibberish are equivalent.

I didn't intend to imply that you didn't know the benefits of functional programming. I could have worded that better. Do you know the benefits of typed functional programming?

Having recently delved into lisp, I do miss the static analysis that goes on before the program starts. I realized that I used types to plan out the program before I started coding and I miss having the type-checker picking holes in my plans. I'm determined to stick at it though so that I can see what I'm missing in terms of meta-programming and other dynamic features.

-----


>Do you know the benefits of typed functional programming?

Yes, that's how I know what ML and the tragedies of tagged pointers are.

SBCL supports type annotations, and Racket has a Typed Racket dialect that does type checking insofar as you've provided the types.

Part of the problem that Lispers and Schemers have is that they can add the static analysis to their programs anytime they want through the macro system (hence typed racket). They don't have to hack up the compiler or anything.

The other problem is cultural. Lispers resent implementation-level obligation to a set of rules that were made ignorant of any given problem they may be solving. What if you need to resolve something at runtime? What if you need to resolve something at compile-time? Lisp lets you choose where each should happen.

I feel similarly about purity. I think it makes for a nice default, but I think the degree to which immutability is forced on you in Clojure and Haskell is pretty damned awkward for expressing a variety of problems, and not just ones related to IO. It seems especially silly since a lot of code in Common Lisp will by default be pure because it'll just return results based on inputs absent mutation.

Regarding your delvings into Lisp, since you're not unfamiliar with the benefits of functional programming I'd like to recommend a book that will help you break ground with a dynamically typed, multiple-dispatch Lisp with an object system. Get "The Art of the Metaobject Protocol". Hopelessly specific to Lisp but glorious nonetheless.

The book doesn't get as deep as I'd like, but it's considered one of the best books on the subject nevertheless.

-----


One of the things I don't like about Haskell is that it has a really complicated type system because it isn't (and couldn't realistically become) dependently typed. This is one of the things that drew me to lisp, you can choose which type system you use.

I must disagree with you on purity though. Purity allows you to apply equational reasoning, and it allows compilers to apply optimizations that wouldn't otherwise be possible. It makes multi-threading, if not outright possible, then substantially easier. You do get some of these advantages from having purity by default, but if it's not enforced, you can't get most of them. For me, I see this as an acceptable compromise (actually I don't see it as a compromise at all; if all I got from purity was equational reasoning I'd still use it), but as you point out, the culture amongst lispers is different.

Thanks for the book recommendation, I must admit that I already have a number of books to get through before I'll be able to look into it (including LiSP and "Compiling with continuations"), but I surely will do.

-----


My dream programming language is actually a Lisp with static-by-default typing that was powerful and useful. If you end up implementing something, PLEASE post it on HN or let me know, I'd love to check it out. :)

This is how I know about Typed Racket despite not being that keen on Scheme dialects (other than call/cc), I was very excited to see how compile-time typing could work in a Lisp/Scheme.

-----


> If you don't know the c2 page for this, you should feel deep shame.

Consider me shamed; mind sharing a link? I know the quote from Alan Perlis you're talking about, but the Google search results for "perlis languages" are dominated by Fogus's blog post with the same title, and your comment is the first result for "perlis languages c2", followed by a few other wiki pages.

-----


http://c2.com/cgi/wiki?EpigramsInProgramming

-----


Types are propositions, and programs are proofs. If your program satisfies the type, then it is a proof of the proposition.

-----


That is true, but the proposition is one that is encoded in the type. There are many derivations of certain proofs. Consider:

    map :: (a -> b) -> [a] -> [b]
And [a] is shorthand for a recursive type: \forall a . \mu l . (a * l) + ().

And the translation of types to propositions means that the sum type -- either I have () or I have a list, translates to a logical connective of "or". The product type -- I have an a and the rest of the list -- translates to "and" .

The proposition is therefore that, for all a, for all b, if a implies b, then true or (a and list of a) implies true or (b and list of b).

There are several proofs of this proposition, and some of them are useless.

The standard one:

    map f []     = []
    map f (x:xs) = f x : map f xs
Is what we want.

There's another one, though:

    map f []     = []
    map f (x:xs) = [f x]
And many others that aren't so useful as the standard one.

Anyway, this has been a long and roundabout way of saying that, while a good type system give you a lot of nice static properties about a program, you simply cannot ignore dynamic semantics and rely on the types. Well-typed programs are correct in that they will not reach a state where evaluation is undefined / get stuck ("I'm supposed to do what? These are Strings, not Ints!"), but they do not imply correctness of the program, even though they do provide a proof of a proposition!

-----


In Haskell all types are inhabited, so every proposition has a proof. This makes Haskell a pretty useless logic.

-----


Some of them are only inhabited by bottom, no? I cannot write down a function f :: a -> b besides:

    f :: a -> b
    f = undefined

-----


There are a lot of other Haskell features you can use. Term-level recursion works:

  f :: a -> b
  f = f
Type-level recursion works, even without explicit term-level recursion:

  data T a b = C (T a b -> (a -> b))
  
  q :: T a b -> (T a b -> (a -> b))
  q (C f) = f

  w :: T a b -> (a -> b)
  w x = (q x) x

  f :: a -> b
  f = w (C w)
This is basically encoding Russell's paradox at the type level. You can write out f explicitly, just so that it doesn't look like you might somehow still be applying w to itself recursively:

  g :: a -> b
  g = (\x -> ((\(C f) -> f) x) x) (C (\x -> ((\(C f) -> f) x) x))
There are even ways of doing this using highly impredicative definitions involving GADTs and type families, without involving any explicit recursion at all:

http://okmij.org/ftp/Haskell/impredicativity-bites.html

The GADT example no longer seems to work, so changes to GADT type checking might have eliminated it, but the type family one still does.

-----


It doesn't affect your objection to MaleKitten, but not all types are inhabited in Haskell. You are thinking of kind *. By contrast,

    {-#LANGUAGE DataKinds#-}
    a :: 'Nothing
    a = a
and

    {-#LANGUAGE MagicHash#-}
    import GHC.Prim
    a :: Int#
    a = a
don't typecheck, though I can't say I understand the ins and outs of the latter.

-----


Find the type of:

fix x = x x

-----


From the article: "The types are so polymorphic that I conjecture that there is only one way to write functions matching the required types such that all parameters are used non-trivially and recursion is not used."

He is not arguing "it works if it compiles" in general. He is arguing "it works if it compiles" because the types are so polymorphic. The polymorphic part is actually important. If you don't get that part, you are missing the point.

-----


Ah, a Haskell circle-jerk waiting to happen and a confession that's really a way to act superior about Haskell's type system. Listen, it's a great language and I've spent a few years deep in it, but I find this post arrogant and not even accurate.

The libraries he is talking about are for data types, something that the type system (almost by definition) is perfectly adequate for testing. So yes, he's tested it by compiling it. But for any real world Haskell program the type system alone is not enough.

-----


Wow, you are quick to accuse the community of being a "circle-jerk" even before most people commented. Looking through the other comments on this thread, most are either against the practice or qualify it. And the qualification is completely reasonable: given sufficiently general types, the space of possible implementations becomes very small, which means your code is increasingly likely to be correct simply because it typechecks.

The article never claims you can always use the type system in place of tests. It doesn't even claim that it is a reasonable practice. The only thing it says is that the author did rely on the type system and that he thinks it is reasonable given sufficiently polymorphic types.

In short, you're knocking down a straw man and using it to accuse the entire community of being an arrogant circle-jerk.

-----


I think your reply is inappropriately negative ("circle-jerk", "real world"), and that the negativity of your post detracts from what value might be derived from the article.

The author of the post provided an amusing example of a case where he posits the type system is sufficient and valuable. He didn't claim that the type system would be sufficient for a complete real-world application.

-----


Well, I guess I'm just old enough and ugly enough to call things like I see them. Haskell is a neat language, but it is frustrating to listen to these sorts of brag-posts about the language because they're filled with dangerous hubris. The type system is amazing for certain kinds of problems. Hey, if you're writing a compiler then Haskell is probably the best language out there. Or in-memory data structures. Hm. Or parsers... It's a great language for CS research.

The term "real world" is not meant to slur, it's meant to be accurate. I've shipped code written in Haskell and I deeply understand the huge pain in the arse it is to use in production for applications that are not toys.

-----


Sorry to burst your bubble, but I wonder if your experience is clouded by the (past) inadequacy of your skill set and learning level. We use Haskell in production across a number of domains and it has produced solid results. So do a number of other very serious companies out there (as an example, most banks have varying degrees of Haskell in production for critical systems).

Also, how are parsers or in-memory data structures NOT real-world tasks? You never had to parse some bit-stream or stage a sophisticated-enough computation pipeline that required lots of different, intermediate data types?

I'm not sure why you're seemingly dying to make negative blanket statements based on how "it was a pain in the arse" for YOU.

-----


"Dangerous hubris" is an extremely condescending phrase. Do you think the author does not realize that the type system is only sufficient in certain, very specific cases? That's exactly what he mentions at the very end--the reason he trusted the type system so much is only because he believes there is only one non-trivial, non-recursive implementation with the requisite types. He qualifies exactly what he means and, moreover, does not even defend his behavior. And that is "dangerous hubris"?

Also, there is certainly plenty of real-world code I would not like to write with Haskell--drivers, for example. However, it seems great for web-apps, which are very "real world" and, naturally, very important on HN.

In my personal experience, it's also surprisingly good for quick command-line scripts in place of Bash or Perl. Writing small scripts for automation and command-line clients for my existing tools was as easy or easier in Haskell than in any other language I've used.

-----


I don't think you realize just how condescending and full of arrogance your posts are.

You are implying that computer science is both useless and only consists of implementing toys. What makes your problems "real world" and the problems that haskell is good for not "real world"? What makes something "not a toy"? Parsers and compilers are toys?

If you want to start a discussion about things that make haskell difficult to work with on a daily basis, please do so. Otherwise, kindly stop throwing around insults.

-----


Can you share some examples where you've found it to be a huge pain in the arse to use in production?

-----


If you're using the term “real world”, it is extremely unlikely---like sha1 hash collision unlikely---that you are being accurate. In all cases where I can remember its being used, it's being used as a way to avoid a concrete and accurate description of fact, and to add a touch of disdain for something else.

-----


>I've shipped code written in Haskell and I deeply understand the huge pain in the arse it is to use in production for applications that are not toys.

And yet plenty of people (myself included) are using haskell for real world, production, non-toy applications all the time. So either all of us people using it are mentally ill and just imagining up non-existent production haskell code, or your experience reflects your ability, not the language's capabilities.

-----


> not even accurate.

What is not accurate in that post?

-----


I explained that in the second paragraph.

-----


The author didn't claim that the system is alone sufficient "for any real world Haskell program", so your refutation appears rather spurious.

-----


I explained that he _did_ test the program because he was writing a type. The type system by definition checks types.

-----


I see what you mean, but it's rather bizarre to classify type checking as a form of testing, even if the type system is very strong. Types and tests are different ways of specifying behaviour: tests are existential guarantees (output of the code sometimes has some properties), types are universal guarantees (output of the code always has some properties).

In an ideal world, tests would be subsumed by types, but in reality tests can cover things that types do not.

-----


> In an ideal world, tests would be subsumed by types

It would be literally impossible for this to happen in a Turing-complete language, though, because a hypothetical 'perfect' type system would be able to solve the halting problem.

The Haskell 'Bottom' type alludes to this in its documentation, actually: http://www.haskell.org/haskellwiki/Bottom

(Essentially, Bottom is a member of every type, or else the compiler would need to distinguish between a halting program and a non-halting program before accepting the input program).

-----


Some people argue that in an ideal world we wouldn't be using Turing-complete languages. Nothing like a total functional programming language to make life better :).

-----


Some people argue that totality isn't enough, you really want a feasible (polytime) language.

http://www.di.unito.it/~gaboardi/papers/BaillotGaboardiMogbi...

;-)

-----


I thought about total functional programming, like Agda. But even if you have Turing completeness, you can add time constraints to tests and everything will be decidable.

-----


What are functional lenses?

http://www.haskellforall.com/2012/01/haskell-for-mainstream-... http://stackoverflow.com/questions/8307370/functional-lenses

-----


I think the interesting point here is that Haskell provides enough static assurance out of the box that what you write is correct for this to happen. You'd never hear of a Ruby programmer releasing a gem without trying it.

-----


With the serious caveat that the program you are writing must only be vulnerable to the kinds of errors which Hindley-Milner can show aren't present, i.e. no IO, no non-deterministic concurrency, no non-total functions, etc.

-----


I wish this caveat were attached every time someone mentioned the "once it compiled it worked first time" bit. Not because there's no truth in the works-if-compiles idea, but because getting smug about it is sure to bite you in the ass sooner rather than later.

Now that I've said that, I'll state what should be obvious (and probably is to many/most): failure to compile due to type errors is a big clue that I am thinking about something incorrectly or incompletely. It's not like I just twiddle stuff until it compiles. I look at it and say, "Duh! That was dumb of me!" It's like using unit analysis when working out a long physics problem... errors in your thinking about it are very likely to show up in the unit analysis. The chance of you coming up with a bogus equation that passes unit analysis is pretty small. When it doesn't pass, you have a good look and find where you went wrong rather than merely tweak.

-----


That little connection sent me on a Google quest.

Interesting paper along those lines:

"Types for Units of Measure: Theory and Practice": http://research.microsoft.com/en-us/um/people/akenn/units/CE...

-----


Oh! I am so glad you looked that up. Thank you!

-----


You forgot the bug: "doesn't do what you want, but does it perfectly".

-----


In addition to that, it's often difficult to predict the runtime properties of a Haskell program. Due to its laziness, one can often run into space leaks where thunks are not evaluated (yet):

http://blog.ezyang.com/2011/05/space-leak-zoo/
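The textbook instance (a standard example, not taken from the linked post) is the lazy left fold:

    import Data.List (foldl')

    -- Builds ten million nested (+) thunks before forcing any of them:
    leaky :: Integer
    leaky = foldl (+) 0 [1 .. 10000000]

    -- foldl' forces the accumulator at each step and runs in constant space:
    strict :: Integer
    strict = foldl' (+) 0 [1 .. 10000000]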

-----


That's still a significant portion of the code-base though. In almost every other language you'd have to write tests to accomplish the same thing. It's certainly possible to compose functions that return something of the correct type but not the correct value (in fact, that's trivial to do). Those should be easier to spot/debug/test than the errors the type system checks for you.

-----


it's a matter of how much you put into the type system:

  add :: Integer -> Integer -> Integer
  add x y = x * y
in any moderately complicated code, you need to test. types only go so far.

-----


To supplement "it's a matter of how much you put into the type system":

This error can be caught if you put units of measure into the type system. F# can do this: the addition function will have type int<a> -> int<a> -> int<a>, while multiplication will have type int<a> -> int<b> -> int<a * b>; for example, int<m/s> -> int<s> -> int<m> might be a function taking a velocity and a time and returning a distance. I think Haskell has libraries for this.
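As a rough sketch of how the same idea can look in Haskell (a toy phantom-type encoding, not a real units library; packages such as dimensional do this properly):

    {-# LANGUAGE EmptyDataDecls #-}

    newtype Quantity unit = Quantity Double deriving Show

    data Metre
    data Second

    -- Addition requires both operands to carry the same unit.
    addQ :: Quantity u -> Quantity u -> Quantity u
    addQ (Quantity x) (Quantity y) = Quantity (x + y)

    -- addQ (Quantity 1 :: Quantity Metre) (Quantity 1 :: Quantity Second)
    -- is rejected at compile time.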

Similarly, add x y = x - y would be detected if (-) required a group, while (+) worked for monoids only.

However, it would be rather hard to create a type system detecting a typo in:

quadruple x = 3 * x

-----


> it's a matter of how much you put into the type system:

It's also a matter of how much polymorphism you have. There are lots of functions `Integer -> Integer`; there is only one total function `a -> a`. (This is the parametricity theorem mentioned by sanxiyn in http://news.ycombinator.com/item?id=4215055 .)

-----


There are lots of different functions `a -> a`. One could take a few cycles to complete, whereas others could take a day or week to complete. One could use a few bytes of space, whereas others could use megabytes or gigabytes.

-----


I think that this is a different meaning of 'different' from the one usually recognised in computer science; it's my understanding that what usually matters is extensional equality (are the same outputs (eventually) produced for the same inputs?) rather than intensional equality (are those outputs produced in the same way?). Otherwise, all compiler transformations would be illegal.

(You could argue that a (correct) transformation that sends space usage way up may actually change even extensional equality, in the sense that it could result in the transformed algorithm failing on a given piece of hardware when the original algorithm would have succeeded. However, my understanding is that the state of 'realistic' computer science, where hardware limitations are part of the reasoning, is in a much more primitive state than 'idealised' computer science.)

-----


Opaque types like "Integer" don't give a lot of assurance, so either you get a stronger type system (Agda-style) or you have to test.

But complicated code may work with non-opaque types, and so it could still be reliable without tests.

-----


I have some open source projects that I've released on multiple occasions without having tested them. I think people assume that things with adequate documentation, comments, support tickets, etc. that indicate use have been tested before release. That is simply untrue. You get what you get. That is true in the world of both paid and unpaid, open-source and closed-source software, as well as life in general.

Something that helps in this regard is travis-ci. It's a free CI server and if you have at least some level of testing, you have some level of confidence.

-----


Why did you write it if you have never run it? Was it just for people who read your article and might want to run the code?

-----


Probably as an intellectual exercise.

-----


But what point is there to an intellectual exercise if you don't know whether you've done it right in the end?

-----


"Beware of bugs in the above code; I have only proved it correct, not tried it." -- Knuth

-----


Touché. I see your point.

-----


That's the point of the article. The claim is that he knows he has done it right because the declarative types are simple enough to understand as a whole, and they enforce and ensure correctness, verified through compilation.

-----


Until the code has actually been tested, he doesn't know that it works, he just thinks that it does. The above point is that he has not completed the exercise, by verifying what he thinks.

-----


> Until the code has actually been tested, he doesn't know that it works, he just thinks that it does.

After the code has been tested, he still doesn't know that it works, only that it sometimes produces the expected output. (As a mathematician, it's a lot easier for me to trust a proof than an empirical verification; but I agree that what the HM type system proves is not always what one wants it to prove.)

-----


I agree. I claim that he knows that it works under the circumstance that he has tested, and can be more confident that it will work in circumstances similar to what he has tested.

-----


True. But with very general code, like this library, it would be difficult for unit tests to cover more than a small fraction of the domain.

-----


The code was tested by the compiler, reducing the failure space to whether or not his declarative types are an adequate Proof.

It's an amusing anecdote. Even when using Haskell, there are some obvious constraints to relying on the compiler as the sole validation of correctness.

-----


That's not good enough; you're still testing a proxy behavior, not the behavior under question. The hypothesis is, "Haskell's type system is robust enough that one can code an application through type checking alone." Doing the hypothesis is not testing the hypothesis; assuming the hypothesis is correct to demonstrate that the hypothesis is correct is begging the question.

In particular, the kinds of errors that still may exist are ones where the implementer has a fundamental misunderstanding of what needs to happen. The code may correctly implement what he wanted it to do, but what he wanted it to do is wrong.

If you still object, consider: I'm describing the scientific method.

-----


Consider that his Haskell program (ostensibly) relies on mathematical proofs, a chain of them. The compiler consumes and validates this chain of proofs and checks whether or not the program is "true." It's hard to imagine why you'd use the scientific method to verify that 1 + 1 + 1 == 3; basic math and logic--- something computers are good at--- are sufficient.

As he and others have pointed out, the author was pretty up-front about the scale (relatively small; looks like a few hundred LOC) and nature (implementing an extant, well-defined API) of the problem. It's not an application as you suggest, and the author doesn't suggest it's a good idea for applications, either.

The conclusion isn't "use Haskell & never test your code!". After all, he explicitly offers no conclusion. :) It's just an example of how, sometimes, you can be reasonably certain that your Haskell program is correct when it compiles.

Myself, I'd still run it, or at least QuickCheck it. I don't trust myself not to write logically sound, completely incorrect code.

-----


As an exercise, I think it's great. But my point is that he has not completed the exercise.

-----


Once you've figured out on paper that 1 + 1 = 2, you don't necessarily have to put one apple next to one apple and count two apples to “complete the exercise.”

-----


No, you don't. But software is not that simple.

-----


I object because you're attempting to extrapolate an axiom from an anecdote merely intended to demonstrate the value and power of the type system.

-----


My point is we have not yet verified if the assumed conclusion of the anecdote is correct.

-----


I actually encounter this a lot with Haskell. Once you taste the power, it isn't good enough. You want more. Dependent types, etc... But the truth is, it is still a large step up from where we were before in my opinion.

-----


The compiler can only test what he writes - not what he intends to accomplish.

-----


Ah yes, the old question of "what is knowledge?".

-----


Well, unless he has a proof that the code works. I think the idea is that, in this case, the types are so polymorphic that they do constitute a proof.

-----


The insight behind Knuth's glib "I have only proved the program correct" quote is that proofs can still contain our untested assumptions. It's possible for someone to carry over an untested and wrong assumption from the implementation to the proof without ever finding out it is incorrect.

-----


Of course, the same is true for tests. If you prove the wrong thing, your code obviously won't be correct. If you test the wrong thing, your code obviously won't be correct.

However, if you prove the right thing, your code is correct. If you test the right thing, you code is correct in the cases you tested.

And yet, for whatever reason, only having tests is fine but only having proof isn't.

-----


There's no way to prove that you proved the right thing, unfortunately. And, of course, your proof may not be correct. (Proof checkers can help, and I would have confidence in the correctness of a type-check as proof.)

Only having tests is "fine" for most circumstances because, unfortunately, proofs are intractable for most systems. During a software engineering talk I recently attended, an interview candidate stated that the largest proved-correct code base is on the order of 10,000 lines of code. Most people don't have the resources to do even that.

But, philosophically, I always want tests for the same reason that I run experiments: I want empirical evidence, not just reasoning. Empirical evidence increases my confidence that we have accounted for what we think we have.

-----


That's not very surprising; Haskell's type system is really that ridiculously powerful. But I suspect many hackers working with other languages would be able to state the equivalent: "I released a library which I only unit-tested, without writing any helper project that actually uses it".

-----


Except that they had to write unit tests, whereas the compiler/type system automatically checked the type safety of the code this guy wrote.

(Not that writing unit tests for Haskell code is a bad idea - QuickCheck is WAY more powerful than JUnit, for example)

-----


QuickCheck and JUnit have different purposes. QuickCheck is not a unit testing framework.

-----


> QuickCheck and JUnit have different purposes. QuickCheck is not a unit testing framework.

Indeed. What I meant was that basically:

    P(HaskellCodeCorrect|Compile+QC+UnitTest) =~= P(OtherCodeCorrect|Compile+UnitTest)
while the distribution of effort between compilation (if any) and unit tests in other languages is quite different than in Haskell, i.e. skewed towards tests in the former case and towards getting code to compile in the latter.

-----


And that's something he's proud of?

If I never ran my code, it wouldn't have any bugs either, no matter what language I was using.

I guess it does help explain all the half-assed packages on Hackage.

-----


Hindley-Milner is powerful, but there are usually multiple ways to construct a value of the correct type. Only one of those ways is the correct way, and I'd want to test my functional code to make sure that this was the way that I chose.

-----


As sanxiyn points out in http://news.ycombinator.com/item?id=4215055, although there are certainly many ways to produce a value of any concrete type, sufficient polymorphism will often guarantee (via the parametricity theorem) that there is only one total function of a specified type.

-----




Minor aside, is "I conjecture" valid English?

I'm not criticising, just curious and in the pub.

-----


'Conjecture' is a verb as well as a noun. But I'll admit it sounds weird — by French inflection one would expect "I conject", which seems to be obsolete...

-----


Yep. Dictionary.com calls it obsolete. Webster wants me to pay, and OED can't find it.

conject late 14c., obs. verb replaced by conjecture (v.). Also in form congette.

-----


my concise oxford dictionary (dead tree, c 1984) lists conjecture as a verb (not marked as obsolete).

-----


It's the word "conject" that's obsolete, not "conjecture".

-----


oh, sorry, mis-followed the conversation.

-----


I believe so.

-----


Very common in mathematical circles. Actually, the entire first page of Google's results for "we conjecture that" appears to be math.

-----


As someone who is interested in writing more Haskell, I've heard all the "if I get it to compile it works perfectly the first time", and I wonder if people are missing out on what proper TDD can bring them.

At first TDD becomes a way to assert your code does what you expect. After a few years you begin to use it as a design technique, and the correctness argument becomes less and less of a reason for using it. It's more of an (albeit nice) side effect at that point. I can write code that I test after the fact and still get the same correctness benefits, but the feedback I get from testing my design is gone.

Maybe I'm missing something, but I don't know if just having an excellent type checker is enough to provide the same quality feedback loop as well executed TDD does.

-----


You can use the types the same way--define your types and write type signatures for your functions before implementing them.

I've found this particularly useful when writing really confusing code (like a Prolog interpreter, for example). Coming up with the types on a function clarifies what it has to do and shrinks the space of possible solutions significantly.
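For instance, a sketch of that workflow with hypothetical names (toy types, not the actual interpreter): stub the bodies with undefined so the module typechecks while the design settles:

    data Term = Var String | Compound String [Term]
    type Subst = [(String, Term)]

    -- Signatures first: the compiler checks that the pieces fit together
    -- before any body is written.
    unify :: Term -> Term -> Maybe Subst
    unify = undefined

    applySubst :: Subst -> Term -> Term
    applySubst = undefined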

Types are a symbolic way to reason about code. Using types, you can get additional insight into how your function can be written just by following some simple rules. A good example of this is realizing that a function you're working on is actually a specific version of a more general function--you can see this if the type signatures look similar. This lets you reuse very high-level, generic code easily.

Going back to the Prolog example, I was having some trouble figuring out how the resolution algorithm should work. Then I realized that the sub-part I was having problems with was just a special type of fold. This helped me get a very concise version of the function by reusing an existing library function.

So types can provide exactly the same sort of benefits as TDD.

-----


This is why Hackage is full of junk and Haskell is unapproachable for real-world projects. Everyone's throwaway toy file is published as a package, and the useful well-documented packages get lost in the mix.

-----


Ignoring the baseless FUD in the first part of your post, your last point is actually valid: it is hard to find good, maintained packages on Hackage. Right now the standard solution seems to be just asking somebody with more experience, but this is clearly not scalable.

Happily, a new version of Hackage creatively called Hackage2 is being worked on which should ameliorate this problem (along with some other improvements). Improvement is just around the corner.

-----


Hackage2 has been around the corner for years. The "public testing instance" doesn't exist. http://hackage.haskell.org/trac/hackage/wiki/HackageDB/2.0

If someone's actually working on it again, that would be delightful.

-----


As somebody who knows Haskell only superficially, this line of code and the author's statement make me wonder if people can read their own or somebody else's code after some time.

-----


Yes, you can. I'm not a Haskell guru, and I haven't played with it in some time. I can read that line of code just fine. I don't actually know what it's doing, because I don't know the functions involved, etc., but the form of the line is easy enough to grok. Like any other language, it looks mysterious if you haven't read and written enough code in it, or in a language with a similar style.

-----


"I don't actually know what it's doing"

Sorry, but that means you couldn't read it.

-----


In a very real sense, that's true. But...I don't know what it's doing but I know how it's doing it, so to speak. With languages in which I'm quite fluent (Haskell not so much), I can sometimes spot problems in the code of someone coming to me for help without knowing anything about the libraries, et al, because even though I "couldn't read it" in your sense I could read it in the other sense.

-----


No it doesn't:

    foo(bar(x))
Can you read that code? Yes? But you don't know what it is doing, because you don't know what the functions foo and bar do. Same deal with the Haskell code in question. Just because you don't know what Compose and getCompose do doesn't mean you can't read the code.

-----


I don't feel it's so clear cut. Haskell fuzzies the difference between syntactic and semantic elements. What appears as parentheses in your piece of code might very well be defined as functions in Haskell. For instance, the ($) and (.) functions; I would also count StateT and perhaps Compose and WrapMonad (while not knowing what they do) among them.

So, I still feel that understanding a piece of Haskell means that you will need to understand the individual functions at least superficially.

-----


You feel wrong. There is no such fuzziness. StateT, Compose and WrapMonad are functions. They are just like foo and bar in my example. Just as in a C-style language you need to know that parens mean function application, in Haskell you need to know that spaces and ($) are function application, and (.) is function composition. The code is perfectly readable even if you don't know what the functions being used do.

-----


Makes perfect sense to me.

-----



