The Four Flaws of Haskell (neilmitchell.blogspot.com)
108 points by kccqzy on Aug 15, 2016 | 75 comments



One flaw I'd point out after working with PureScript a lot is that Haskell's "IO" is very broad, to the point of not meaning anything.

printing to screen? IO

network connections? IO

launching the nukes? IO

You get this weird effect of having pure code, and then _all_ your effectful code inhabiting the same type. While this pushes you towards writing more pure code, when you do write effectful code it's not nearly as safe as one might want.

PureScript's effect system is a nicer, more granular mechanism.


While I agree that wrapping IO in newtypes for various purposes would be useful, from a somewhat more 'pure' standpoint it's impossible to guarantee. Writing to a file could be hitting the network, it could be outputting to a screen somewhere, it could be launching the nukes. As far as I'm aware, it's essentially impossible to tell in many cases, so I think there's an argument for being 'honest' about this and not making type guarantees for things that aren't necessarily true.


> it's essentially impossible to tell in many cases, so I think there's an argument for being 'honest' about this and not making type guarantees for things that aren't necessarily true

I see what you mean, e.g. an innocent-looking message to stdout might be piped into a file, on a FUSE filesystem which could do anything. But I don't think that's really Haskell's problem; it's a case of Haskell conforming to an external interface, and anything plugged into that interface is the concern of the user/sysadmin.

The main reason that Haskell's IO is monolithic is that there's traditionally not been a nice way to combine monads (monad transformers get the job done, but there's always been a desire for something better).

Effect systems are becoming popular because they are more composable, and hence newer languages (PureScript, Idris, etc.) are favouring those over monads.


Right, it's not about what "really" happens, but more about what is ideally happening (I mean you can put IO effects in pure values through `unsafe` too).

My main gripe with monad transformers is that they don't really let you capture a "sum" of effects. Combining happens through a monad stack.

The effect systems let you describe "this and that" instead of "this in that", or "this, then that". For a lot of IO, "this and that" is much more useful.
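
To illustrate with a rough Haskell sketch (Config, AppState and doThing are made-up names; the constraint style needs the mtl package):

    import Control.Monad.IO.Class (MonadIO)
    import Control.Monad.Reader (MonadReader, ReaderT)
    import Control.Monad.State (MonadState, StateT)

    data Config   = Config
    data AppState = AppState

    -- A transformer stack fixes one concrete nesting: "this in that, in that".
    type App = ReaderT Config (StateT AppState IO)

    -- mtl-style constraints read more like a set: "this and that and that".
    doThing :: (MonadReader Config m, MonadState AppState m, MonadIO m) => m ()
    doThing = pure ()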


You can build more granular effect systems on top of IO if you want, and many people have.


True, this is maybe more about the ecosystem than the language itself.


You can also build IO on top of the effect system https://github.com/slamdata/purescript-io :)


At least half the value of "IO" is that it marks pure functions as being pure. If Haskell had a "Pure" typeclass instead of IO it would be much the same (except with the wrong default, and you'd have to type more!).

I like Purescript, and it might be worth making IO more granular as described. But my opinion is this would be focusing a lot of effort on the less valuable half of IO.


One way around this is to have typeclasses for all the different kinds of effects you'd like with instances for IO. Then for testing you can use a different instance that produces a value of a free monad.
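
A minimal sketch of that pattern (the class and function names are made up, and I've used Writer rather than a free monad for brevity):

    {-# LANGUAGE FlexibleInstances #-}

    import Control.Monad.Writer (Writer, execWriter, tell)

    -- One class per kind of effect, with an IO instance for production...
    class Monad m => MonadLog m where
      logMsg :: String -> m ()

    instance MonadLog IO where
      logMsg = putStrLn

    -- ...and a pure instance for tests, which merely records the messages.
    instance MonadLog (Writer [String]) where
      logMsg msg = tell [msg]

    greet :: MonadLog m => String -> m ()
    greet name = logMsg ("hello, " ++ name)

    -- In a test: execWriter (greet "world") == ["hello, world"]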


That's what we attempted to remedy with `capabilities`[1].

[1] https://hackage.haskell.org/package/Capabilities


It's good practice to wrap your IO in newtypes.
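
For instance (a toy sketch with made-up names):

    -- Newtype wrappers give otherwise-identical IO actions distinct types,
    -- so a function can advertise which kind of effect it performs.
    newtype FileIO a    = FileIO    (IO a)
    newtype NetworkIO a = NetworkIO (IO a)

    readConfig :: FilePath -> FileIO String
    readConfig path = FileIO (readFile path)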


It's almost as if programs are useless if they don't perform some IO, and therefore trying to encode this kind of side effect in the type system is theoretically interesting but practically inconvenient.


Yeah, but that's at the uppermost level. Having more granularity, at least for library functions, between "does nothing" and "can do anything" makes a lot of sense, considering the observable actions a function can take are often part of the documentation of an API.


> It's almost as if programs are useless if they don't perform some IO

You seem to have missed the point. At the moment, a CLI calculator has type "IO ()", a filesystem scanner has type "IO ()", an FTP library has type "IO ()", a random password generator has type "IO ()", etc.

It's very convenient that GHC points out when I try to, for example, perform arithmetic on lists, or insert unsanitised strings into templates. It would also be convenient if it pointed out when my CLI calculator is trying to perform network requests, when my filesystem scanner is trying to generate random numbers, when my random password generator is scanning the filesystem, etc.


In trying to advocate for fine-grained effect systems, I hope Haskell isn't presented as 'unsafe' according to an extremely high standard usually reserved for theorem-proving languages. Pure functional programming is still fighting the more basic battle that side effects should be separated at all. I feel presenting Haskell's IO as unsafe throws the baby out with the bathwater. Others have not reached the peaks you look down on.


are you saying they should check their haskell safety privilege? :^)


Readability: although I'm not proficient in Haskell, it seems to lend itself towards Perl-like clever one-liners.


I share this feeling, esp. when each third-party lib wants to introduce some variant of <$> or <>.

But I've found some of the more common symbols end up being how you think about your code when you become more versed.

The big one is <$> for fmap. If you have some functor/monad/thingy, you'll often be like "I have this pure function that I want to apply to the inner value of my functor", and <$> is the perfect analog to $.

The high abstraction level of Haskell makes reaching for these really tempting and easy. And the alternatives are verbose and harder to read in many situations.

compare "capitalise <$> readStrLn" to

    do val <- readStrLn
       return (capitalise val)

(though "fmap capitalize readStrLn" isn't too bad)


This post explores the differences in those two styles: http://www.yesodweb.com/blog/2015/10/beginner-friendly-code-...

> It seems like my progression as a Haskeller results in forcing myself to write in a harder-to-parse style to make my code shorter, to satisfy some base need for "better" code, even though by most measurements I just made, the longer/explicit/pattern matching code is in fact better.


To be fair, "<$>" is a synonym for "fmap", so "fmap capitalise readStrLn" is the go-to alternative.

The "do" example is not only more verbose, but less general (it only applies to monads rather than all functors), and relies on special syntax (despite its awkwardness, "<$>" is still just a function call).

You could also do "capitalise `fmap` readStrLn", which is kind of intermediate between the two :)


With `ApplicativeDo`[1], do-notation works with Functors.

[1] https://downloads.haskell.org/~ghc/master/users-guide/glasgo...
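
Roughly (reusing the hypothetical readStrLn and capitalise from above):

    {-# LANGUAGE ApplicativeDo #-}

    -- With ApplicativeDo this desugars to capitalise <$> readStrLn,
    -- so it requires only a Functor constraint rather than Monad.
    example = do
      val <- readStrLn
      pure (capitalise val)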


Interesting, readability is something I would put on a list of upsides of Haskell.

I agree that the abundant use of operators can make things difficult in some cases, but this is largely naivety on the part of the reader rather than Perl-style complexity.

Perl's obfuscated one-liners are vanilla things that every Perl programmer will understand, while Haskell's weird lines are generally package-specific operators that one needs to become familiar with.

Once familiar (unlike in Perl), a seasoned Haskeller will be able to grok a lot of what is going on very quickly.

As a further point, the strictness of it all makes understanding the entire system a much easier experience than in many other languages.


The article is about things that impact professionals (though they also affect beginners). For better or worse, experienced Haskell users can read the one-liners just fine.


Sure, but you've also nuked a developer's ability to easily onboard and understand the codebase with "clever" one liners. Good, maintainable code tends to trend toward lower cognitive load more often than compact solutions.


I'm generally of this opinion, but with Haskell I think more often than not the "clever" one liners are actually not clever -- they're just completely standard usages of fundamental language constructs, and they only appear clever to people who haven't used the language.

It's very common to see newcomers or people who've never used Haskell at all mention <$> and friends in this regard. Then someone will say that if you just used do notation explicitly it'd be a lot clearer. And then someone will point out that do notation might look familiar, but semantically it can be spectacularly different from what one might expect from truly imperative languages (e.g., when using the nondeterminism of the list monad). So by encouraging overuse of do notation, you're making the code feel familiar, but you're not actually making it more understandable; if anything, you're making it harder to understand, because you're making it look like something that it's not.

Additionally, while do notation is very generic and broadly powerful, <$> does exactly one thing: it applies a function to a wrapped value, producing a new wrapped value. Once you've used Haskell a fair bit, you very strongly internalize this and tend to immediately understand code involving <$>. If all this code were expanded out to use do notation, you'd have to spend a lot of extra time reading through the do notation to realize that, oh, this is just a re-implementation of fmap, over and over again.
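
Its type says as much:

    (<$>) :: Functor f => (a -> b) -> f a -> f b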


Is that then a limitation of the language? I've only briefly toyed with Haskell (and most of the time I use PHP, so my opinion is almost moot). But from what you've written, it sounds like it's hard to write Haskell that's expressive for novices and experts at the same time.


> Is that then a limitation of the language?

How exactly is it a limitation that the language has greater expressive power than most?

Haskell does have serious limitations: laziness is a horrible default, modularity is a joke, reasoning about performance is very difficult, and the correctness of most basic abstractions is conditional on the user never ever using a partial function, not even accidentally - or else hilarity ensues.

But functors, applicatives and monads are a godsend.


If you think reading X loc/min is the norm, and then hit a language where you read much slower, you might think "this is hard to read". But if you're reading the same density of _logic_ at the same speed, then that's not a drawback. If the Haskell is, say, 3x as dense in logic, then reading at 1/3 the speed isn't a drawback at all...


Personally I don't think it's a limitation of the language so much as it is an artifact of Haskell's relative uniqueness. Most programmers have never encountered a type system stronger than that of, say, Java, so they have little to no precedent for this sort of language. Consequently, the very idea of a generic fmap (i.e., <$>) is foreign and new.


> it's hard to write Haskell that's expressive for novices and experts at the same time.

That is definitely a fair analysis. Though I suppose that is a limitation, it is not necessarily a severe one, or one that should matter to most people putting code in production.

Haskell in general is extremely unfriendly to novices; but what you lose in accessibility you gain tenfold in maintainability.

Haskell in general requires far, far more upfront problem solving than something like PHP or JavaScript. It's easy to take for granted the implicit coercion, smart defaults, and overlapping scopes that let you work on a problem iteratively in PHP.

In Haskell you are more likely to need a pretty good idea of what the final solution will look like before you even get the code compiling the first time, unlike PHP/JS where you can write one or two lines, check the output, rinse and repeat.


Cognitive load has nothing to do with your ability to "easily onboard".

If you can't understand German, would you say that usage of German increases one's cognitive load? No, that would be preposterous. If you need to speak in German, then you hire Germans or people willing to learn the language. Onboarding might be a challenge, but it wouldn't have anything to do with the performance of those that do know German.

I myself speak English as a second language, because I had to learn it in order to do my job. Seeing you're from NYC, there's a high probability that English was imposed on you from birth. Good for you given our profession, but a lingua franca is context sensitive, temporary and English isn't even the second most spoken language.

And do you know what language is more universal than all natural languages and that doesn't change much? Math. And math uses symbols, not English words. And Haskell is much closer to math than all mainstream languages.


There are much bigger fish to fry than "clever" one-liners when onboarding a new Haskell programmer.


My problem with the one-liners is not the introduction of new symbols so much as the application of many functions on the same line.

Experienced devs may have some talent for reading more complicated expressions. That doesn't remove the fact that the expression requires many mental steps to visualize, and that at each of these steps even an experienced dev may slip.

An example of this is function composition. When you combine elusive function naming (because we like math, right?) with composing a dozen functions to define a new function, you definitely have a readability problem.


I'm not sure whether I would say I can read them "just fine", but I would certainly say I don't want to have to!


The fewer decisions you make through your programming day, the more you achieve and the less stressful the job is.

Naming variables is a decision (I am looking at you, function arguments in most mainstream languages). Naming variables once and for all is a big and heavy decision (and now it is time for class members to be looked at).

For one operator in Haskell I often have to invent names for three-to-five variables in C#.
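
A toy comparison of the two styles in Haskell:

    -- Point-free: the pipeline needs no intermediate names.
    countWords :: String -> Int
    countWords = length . words

    -- Pointful: every intermediate step has to be named.
    countWords' :: String -> Int
    countWords' str =
      let ws = words str
      in length ws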


That's a great way of putting it.

Sure, sometimes an intermediate name helps—that's when I reach for `where` or `let`. But much of the time the extra name would just be extra noise that obscures what the code is doing and makes it harder to read at a glance.


I suspect that there's a grab bag of things that happen in those one liners, and whether they are a good idea or not depends on the details. For instance, <$> seems pretty helpful, but I shudder when I see flip. I'd go out on a limb and speculate that even for experienced readers, flip is going to be a bit difficult to parse. But I'll have to come back to that when I have more experience.
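
For reference, flip just swaps a function's first two arguments:

    flip :: (a -> b -> c) -> b -> a -> c
    -- e.g. flip (-) 2 10 == 10 - 2 == 8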



PureScript is a language that compiles to JS; it is heavily inspired by Haskell, yet fixes some of its "flaws".

Namely, it (1) has better record syntax, (2) employs strict evaluation instead of lazy evaluation by default, (3) has a better class hierarchy for "numbers", and (4) has a better class hierarchy for control classes[1] (like Monad, Applicative, Functor, etc.). There are more, but to me those are the main improvements.

Compared to Haskell, PureScript fixes some of the "flaws", though these are mostly different from the ones mentioned in the article. Maybe one day PureScript can be compiled to native ...<searching internet>... which is already being worked on[2].

[1] https://github.com/purescript/purescript-prelude/tree/master...

[2] https://github.com/andyarvanitis/purescript-native


Is Haskell's default lazy evaluation really a flaw that needs to be fixed? I personally like it, I find that most of the time lazy evaluation is what I need instead of strict.


I like Haskell's laziness by default myself. But it turns out that it is hard to reason about for those new to the language, and hard to efficiently cross-compile to a non-lazy language.

Perhaps I should clarify: I do not think laziness is a flaw, but it is probably a large hurdle to adoption.


If you'd like strictness by default, there's a pragma for that: https://ghc.haskell.org/trac/ghc/wiki/StrictPragma
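
A minimal example of the module-level form (the Strict extension landed in GHC 8.0):

    {-# LANGUAGE Strict #-}
    module Example where

    -- With Strict, bindings in this module are strict by default,
    -- as if each one carried a bang pattern.
    mean :: [Double] -> Double
    mean xs = total / count
      where
        total = sum xs                    -- forced, not left as a thunk
        count = fromIntegral (length xs)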


This is another flaw that PureScript fixes: No language pragmas.


I don't think this is a flaw; in fact, I think it neatly solves the problem of how to extend the language without committing to a design too eagerly, and it provides a platform for experimentation.

There are pragmas that GHC implements that are now widely held to be problematic, but we wouldn't know this without an implementation, and using pragmas, we are realistically able to deprecate and remove these extensions.


Very few extensions have been deprecated: http://hackage.haskell.org/package/Cabal-1.24.0.0/docs/src/L...

Convenient, widely-used extensions that have been around for a long time aren't yet part of the language standard: https://prime.haskell.org/ticket/67


That's nowhere near a complete list of deprecated extensions, and there are others that have been completely removed from the language[1]. It's unfortunate that ScopedTypeVariables hasn't been standardised, but there hasn't been a new standard since 2010, so it's not hugely surprising.

[1] https://ghc.haskell.org/trac/ghc/wiki/LanguagePragmaHistory


Thanks for that link! I hadn't seen that wiki page before. Even so, only 4 extensions have been removed. I know others are de facto deprecated, like Rank2Types.

I picked scoped type variables as an example because it's been in GHC since version 6.4. That was released in March 2005, so it had plenty of time to make it into the Haskell2010 language standard.


I'd expect either Rank2Types or RankNTypes to be included in Haskell'; you may be thinking of ImpredicativeTypes.

Wow, I did not realize ScopedTypeVariables had been around for so long.


Sorry, I meant that you pretty much always want RankNTypes instead of Rank2Types. My fault for not specifying.


Does PureScript have some alternative approach to growing the language? Or is your comment suffixed with an implicit "for now"? :)


My impression is that PureScript aims to not be configurable. For example, the compiler does not allow you to disable warnings. Other tools, like `psa`, can do that, but the base language is always the same.

Specifically related to language extensions, there is this wiki page: https://github.com/purescript/purescript/wiki/Differences-fr...

> The PureScript compiler does not support GHC-like language extensions.


That doesn't stop other people's code from being lazy, and such laziness isn't reflected in the type.


PureScript also compiles to readable C++. I haven't used it, but it is compelling.

https://github.com/andyarvanitis/purescript-native


Tools and ecosystem coherence, basically. We have infrastructure technical debt and...factions.



FYI, it's possible to use the Nix package manager outside of NixOS. I'm not quite convinced that some of the choices the NixOS distro has made are good (or to my liking), but the package manager seems much better thought out.

I was able to quickly install Nix on a not quite up-to-date Linux distro to get my hands on specific versions of some programs that are too big to conveniently build from source.

I'm not sure it solves the package management problem for Haskell completely, but it should help with managing GHC versions and external dependencies (i.e. native libs required by Haskell libs), which cabal/stack aren't great at.

Note: I was a bit disappointed that installing Nix required root access to create /nix store. I would have liked to put it in my home directory instead.


Unfortunately, because Nix derivations embed full paths to all their dependencies, moving /nix elsewhere means you don't get access to the binary cache and wind up having to build everything from source. If you're happy with that, you can compile Nix manually to put the store in your home directory: https://nixos.org/wiki/How_to_install_nix_in_home_%28on_anot...


Nix (and NixOS) are certainly nice to use with Haskell, but Stack is good. The post even says so:

> Stackage solved the consistent set of packages, and Stack made it even easier. I now consider Haskell package management a strength for large projects, not a risk.


I often wonder what it is that keeps Haskell out of the mainstream as a development language. There seems to have been enormous popular support for Haskell by experts over quite some time, yet relatively newer languages like Golang are coming along and eating Haskell's lunch.


One point might be that Haskell thinks big. Its strengths might not be too noticeable in smaller projects, and its steeper learning curve can be a problem there. The inertia is huge: imperative programming has a lot of momentum, because it reads more or less like text, not maths.

A steep learning curve means effectively more power, and that means responsibility. If modularization means separation of concerns, and thus less responsibility, the lack of power is largely mitigated, but it puts the responsibility on the software architects.

Just like LISP's strength supposedly is revealed when it's the whole OS. And as they say, any involved program will contain an ad-hoc implementation of LISP :)

C, e.g., also resembles the hardware architecture more closely. With concurrent programming, though, we see more functional influences creep into the staple languages. It's the other way around with Haskell's do notation.

Anyway, as a research project, it is rather academic. Therefore it is often not backwards compatible, which is an immediate show stopper and also shows that it's far from finished, due to its bigger goals.


I think it's because the needs of industry and academia (or hobbyists?) are rather different. Onboarding and hiring are much more important in industry, and both are much harder with Haskell because most people aren't nearly as familiar with functional programming as they are with imperative programming.

Most somewhat experienced developers could pick up Golang in a day or two, having come from C++/Java, but learning Haskell is much more daunting, at least in my personal experience. I don't feel like Golang really brings anything new to the table, it just takes away stuff, which isn't a bad thing.

However, learning Haskell was really eye opening to me, and yet, even a year later, I feel somewhat comfortable writing code but I am only beginning to try to learn performance analysis and think about laziness. I mean, imagine hiring someone who's only used imperative languages and telling them to start writing code that doesn't use for loops or if statements which don't have corresponding else statements.


Probably the whole business of needing to learn an entire part of English that many didn't even know existed before.

A language really shouldn't require its language-specific glossary to be mandatory reading, just to understand anyone speaking about it. I could just learn any other language in half the time!

From the comments above:

> "I have this pure function that I want to apply to the inner value of my functor"

"Pure function" I know well enough; that is the same for any language. Functors, though... is that a function object that can be applied to something? Then why does it already have a value?

I can take terminologies from C to Python or C# and feel a-okay, but Haskell kinda feels like it went and shut itself away from the world, and its terminology developed differently because of that, I guess.


Haskell terminology is based on the mathematical underpinnings of its type system. It's because of this that those unfamiliar with the maths often have trouble grokking the terminology at first. It's not that Haskell shut itself off; it just draws its vocabulary from a different source than the other languages.
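
For example, a functor is just any context you can map a plain function over:

    fmap (+1) (Just 2)    -- Just 3
    fmap (+1) [1, 2, 3]   -- [2, 3, 4]
    fmap reverse getLine  -- an IO String whose result is reversed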


I am aware it didn't invent the words; it was more of a "feels like" statement, as opposed to a literal one.


Yeah, this is why I didn't become a surgeon. They have so many invented words for all the different parts of the body and diseases. Seriously, why call something a metatarsal when "foot bone" works just as well?


Actually, "middle toe bone"


J is a very functional language used a lot for mathematics and statistical analysis, and compared with Haskell it uses more familiar terms, such as 'verb', 'noun', 'adverb', and 'conjunction'. J can be very terse, and people don't take kindly to the ASCII character set being used as symbols, à la APL, to perform function composition. It's called line noise in defamatory circles, a charge also directed at its cousins K and Kona and others.

Haskell uses a lot of terms specifically from Category Theory. People in the J community are very mathematically inclined, and usually work on different fields of mathematics. I am working on geometric algebra (Clifford Algebra) in J.

I find the J documentation much friendlier than Haskell's docs.

And J has several IDEs that come bundled with it: JQT, a Qt GUI with syntax highlighting, a help section, and a minimalist project structure; JHS, a web-based IDE with hooks to D3 (vs. the standard plot in JQT); and a Python-based IDE (JQide).

There are labs that walk you through chapter-based examples in the interpreter's REPL. One of the newer labs uses videos that go along with the lab in the console, all part of the J ecosystem.

I personally think that Haskell will maintain that purity of purpose, and not get diluted enough to be picked up by the average coder. Even J remains obscure after decades too, but I am hoping that all of the array hoopla (GPUs, big data, machine learning) will bring it to the forefront as a more natural fit, since its native unit and structure is based upon arrays.


Golang is simple, and most graduates would be familiar with the style (C with some extras).

Haskell is radically different for most people.


The overuse of weird operators is just a deal-breaker for me. Haskell's ideas are good, but I prefer explicit keywords rather than ->, <$> and co. everywhere. For instance, pattern matching that looks like switch statements would be more readable than lines of |.


You can just write something like

    foo x = case x of
              0 -> "zero"
              _ -> "something else"


I would add pragmas.

People like to say there is just one implementation of Haskell and it's GHC, but there are actually n different ones, with n being the number of combinations of pragmas that exist.

Each combination you use in your source is a different version of Haskell.


While I see how this at first appears to be a problem (when I started with Haskell I sure thought of it as one), it turns out that language-pragma-filled code can be mixed with vanilla code without problems. The pragmas usually just expose more of the inner workings of GHC (I'm thinking of things like FlexibleInstances, TypeApplications and RankNTypes, not the pathologically broken and perhaps logically unsound ImpredicativeTypes). So it really is just GHC you are shipping (and not GHC + whichever extensions are enabled).


Eh, I think in practice it's more like there's one large language, but you have to enable some of the features with pragmas.


Regarding space leaks: recent GHC versions allow an entire module to be strict rather than lazy. Would this prevent space leaks (and perhaps other performance-sensitive problems)?
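
For context, a sketch of the classic leak in question, with a lazy versus a banged accumulator:

    {-# LANGUAGE BangPatterns #-}

    -- Lazy accumulator: go builds a long chain of (+) thunks, i.e. a space leak.
    sumLazy :: [Int] -> Int
    sumLazy = go 0
      where
        go acc []     = acc
        go acc (x:xs) = go (acc + x) xs

    -- A bang pattern (or the module-wide Strict pragma) forces acc at each step.
    sumStrict :: [Int] -> Int
    sumStrict = go 0
      where
        go !acc []     = acc
        go !acc (x:xs) = go (acc + x) xs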


Atom (ghcmod + the various Haskell plugins) is good enough for me, honestly.

SublimeHaskell (hsdev, etc.) was good as well.




