printing to screen? IO
network connections? IO
launching the nukes? IO
You get this odd split: your pure code on one side, and then _all_ your effectful code inhabiting the same type on the other. While this pushes you to write more pure code, when you do write effectful code it's not nearly as safe as one might want.
PureScript's effect system is a nicer, more granular mechanism.
I see what you mean, e.g. an innocent-looking message to stdout might be piped into a file, on a FUSE filesystem which could do anything. But I don't think that's really Haskell's problem; it's a case of Haskell conforming to an external interface, and anything plugged into that interface is the concern of the user/sysadmin.
The main reason that Haskell's IO is monolithic is that there's traditionally not been a nice way to combine monads (monad transformers get the job done, but there's always been a desire for something better).
Effect systems are becoming popular because they are more composable, and hence newer languages (PureScript, Idris, etc.) are favouring those over monads.
My main gripe with monad transformers is that they don't really let you capture a "sum" of effects. Combining happens through a monad stack.
The effect systems let you describe "this and that" instead of "this in that", or "this, then that". For a lot of IO, "this and that" is much more useful.
I like Purescript, and it might be worth making IO more granular as described. But my opinion is this would be focusing a lot of effort on the less valuable half of IO.
You seem to have missed the point. At the moment, a CLI calculator has type "IO ()", a filesystem scanner has type "IO ()", an FTP library has type "IO ()", a random password generator has type "IO ()", etc.
It's very convenient that GHC points out when I try to, for example, perform arithmetic on lists, or insert unsanitised strings into templates. It would also be convenient if it pointed out when my CLI calculator tries to perform network requests, when my filesystem scanner tries to generate random numbers, when my random password generator scans the filesystem, etc.
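One hedged sketch of what that granularity can look like in today's Haskell, using plain mtl-style type classes (the class and function names here are invented for illustration, not from any particular library):

```haskell
-- Each capability gets its own class, so a function's type
-- advertises exactly which effects it may use.
class Monad m => HasRandom m where
  randomInt :: m Int

class Monad m => HasNetwork m where
  httpGet :: String -> m String

-- A password generator may only draw random numbers; calling
-- httpGet in its body would be a compile-time type error.
genPassword :: HasRandom m => m String
genPassword = do
  n <- randomInt
  return (show n)
```

Real effect systems (PureScript's row-typed effects, freer monads, etc.) make composing such capabilities less boilerplate-heavy, but the type-level bookkeeping is the same idea.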
But I've found that, once you become more versed, some of the more common symbols end up being how you think about your code.
The big one is <$> for fmap. If you have some functor/monad/thingy, you'll often be like "I have this pure function that I want to apply to the inner value of my functor", and <$> is the perfect analog to $.
The high abstraction level of Haskell makes reaching for these really tempting and easy. And the alternatives are verbose and harder to read in many situations.
compare "capitalise <$> readStrLn" to
do val <- readStrLn
   return (capitalise val)
> It seems like my progression as a Haskeller results in forcing myself to write in a harder-to-parse style to make my code shorter, to satisfy some base need for "better" code, even though by most measurements I just made, the longer/explicit/pattern matching code is in fact better.
The "do" example is not only more verbose, but less general (it only applies to monads rather than all functors), and relies on special syntax (despite its awkwardness, "<$>" is still just a function call).
You could also do "capitalise `fmap` readStrLn", which is kind of intermediate between the two :)
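For the record, the three spellings really are interchangeable; a minimal sketch, using Maybe instead of IO so it needs no input (`capitalise` is defined here since it isn't a standard function):

```haskell
import Data.Char (toUpper)

capitalise :: String -> String
capitalise []     = []
capitalise (c:cs) = toUpper c : cs

viaOp, viaFmap, viaDo :: Maybe String
viaOp   = capitalise <$> Just "hello"       -- operator form
viaFmap = capitalise `fmap` Just "hello"    -- named-function form
viaDo   = do val <- Just "hello"            -- do-notation form
             return (capitalise val)
-- all three evaluate to Just "Hello"
```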
I agree that the abundant use of operators can make things difficult in some cases, but this is largely naivety on the part of the reader rather than inherent complexity, as in Perl.
Perl's obfuscated one-liners are just vanilla things that every Perl programmer will understand, while Haskell's weird lines are generally package-specific operators that one needs to become familiar with.
Once familiar with them, a seasoned Haskeller (unlike in Perl) will be able to grok a lot of what is going on very quickly.
As a further point, the strictness of the type system makes understanding the entire system a much easier experience than in many other languages.
It's very common to see newcomers or people who've never used Haskell at all mention <$> and friends in this regard. Then someone will say that if you just used do notation explicitly it'd be a lot clearer. And then someone will point out that do notation might look familiar, but semantically it can be spectacularly different from what one might expect from truly imperative languages (e.g., when using the nondeterminism of the list monad). So by encouraging overuse of do notation, you're making the code feel familiar, but you're not actually making it more understandable; if anything, you're making it harder to understand, because you're making it look like something that it's not.
Additionally, while do notation is very generic and broadly powerful, <$> does exactly one thing: it applies a function to a wrapped value, producing a new wrapped value. Once you've used Haskell a fair bit, you very strongly internalize this and tend to immediately understand code involving <$>. If all this code were expanded out to use do notation, you'd have to spend a lot of extra time reading through the do notation to realize that, oh, this is just a re-implementation of fmap, over and over again.
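A small example of that semantic gap, assuming nothing beyond the Prelude: the same do notation that looks like imperative sequencing is, over the list monad, nondeterministic choice:

```haskell
-- Reads like "assign x, then y, then return (x, y)", but actually
-- runs the body once for every combination drawn from the two lists.
pairs :: [(Int, Int)]
pairs = do
  x <- [1, 2]
  y <- [10, 20]
  return (x, y)
-- pairs == [(1,10),(1,20),(2,10),(2,20)]
```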
How exactly is it a limitation that the language has greater expressive power than most?
Haskell does have serious limitations: laziness is a horrible default, modularity is a joke, reasoning about performance is very difficult, and the correctness of most basic abstractions is conditional on the user never ever using a partial function, not even accidentally - or else hilarity ensues.
But functors, applicatives and monads are a godsend.
That is definitely a fair analysis. Though I suppose that, while it is a limitation, it is not necessarily a severe one, or one that should matter to most people putting code into production.
Haskell in general is extremely unfriendly to novices; but what you lose in accessibility you gain tenfold in maintainability.
In Haskell you are more likely to need a pretty good idea of what the final solution will look like before you even get the code compiling for the first time, unlike PHP/JS where you can write one or two lines, check the output, rinse and repeat.
If you can't understand German, would you say that usage of German increases one's cognitive load? No, that would be preposterous. If you need to speak in German, then you hire Germans or people willing to learn the language. Onboarding might be a challenge, but it wouldn't have anything to do with the performance of those that do know German.
I myself speak English as a second language, because I had to learn it in order to do my job. Seeing you're from NYC, there's a high probability that English was imposed on you from birth. Good for you given our profession, but a lingua franca is context sensitive, temporary and English isn't even the second most spoken language.
And do you know what language is more universal than all natural languages and that doesn't change much? Math. And math uses symbols, not English words. And Haskell is much closer to math than all mainstream languages.
Experienced devs may have some talent for reading more complicated expressions. That doesn't remove the fact that the expression requires many mental steps to visualize, and that at each of those steps even an experienced dev may slip.
An example of this is function composition. When you combine elusive function naming (because we like math, right?) with composing a dozen functions to define a new one, you definitely have a readability problem.
Naming variables is a decision (I am looking at you, function arguments in most mainstream languages). Naming variables once and for all is a big and heavy decision (and now it is time for class members to be looked at).
For one operator in Haskell I often have to invent names for three-to-five variables in C#.
Sure, sometimes an intermediate name helps—that's when I reach for `where` or `let`. But much of the time the extra name would just be extra noise that obscures what the code is doing and makes it harder to read at a glance.
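To illustrate the trade-off (the function and its names are made up for this sketch): the same word-frequency function written once as an unnamed pipeline, and once with every stage named in a `where` clause:

```haskell
import Data.Char (toLower)
import Data.List (group, sort)

-- Pipeline form: no intermediate names at all.
wordFreqs :: String -> [(String, Int)]
wordFreqs = map (\ws -> (head ws, length ws)) . group . sort . words . map toLower

-- Named form: each stage gets a name, whether or not that helps.
wordFreqs' :: String -> [(String, Int)]
wordFreqs' text = map count grouped
  where
    lowered  = map toLower text
    grouped  = group (sort (words lowered))
    count ws = (head ws, length ws)
```

Both compute the same result; whether `lowered` and `grouped` clarify anything or just add noise is exactly the judgment call discussed above.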
Compared to Haskell, PureScript fixes some of the "flaws" (mostly different flaws from the ones mentioned in the article). Namely, it (1) has better record syntax, (2) employs strict evaluation instead of lazy evaluation by default, (3) has a better class hierarchy for "numbers", and (4) has a better class hierarchy for control classes (like Monad, Applicative, Functor, etc.). There are more, but to me those are the main improvements. Maybe one day PureScript can be compiled to native ...<searching internet>... which is already being worked on.
Perhaps I should clarify that I do not think laziness is a flaw, though it is probably a large hurdle to adoption.
There are pragmas that GHC implements that are now widely held to be problematic, but we wouldn't have known this without an implementation, and because extensions are opt-in pragmas, we can realistically deprecate and remove them.
Convenient, widely-used extensions that have been around for a long time aren't yet part of the language standard: https://prime.haskell.org/ticket/67
I picked scoped type variables as an example because it's been in GHC since version 6.4. That was released in March 2005, so it had plenty of time to make it into the Haskell2010 language standard.
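For anyone unfamiliar, a minimal sketch of what the extension does (the function names are invented for illustration): with an explicit `forall`, the type variable from the outer signature stays in scope in local signatures, which plain Haskell2010 does not allow:

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

-- Without the pragma, the `a` in dup's local signature would be a
-- fresh type variable, and this module would fail to type-check.
pairUp :: forall a. a -> (a, a)
pairUp x = dup
  where
    dup :: (a, a)
    dup = (x, x)
```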
Wow, I did not realize ScopedTypeVariables had been around for so long.
Specifically related to language extensions, there is this wiki page: https://github.com/purescript/purescript/wiki/Differences-fr...
> The PureScript compiler does not support GHC-like language extensions.
I was able to quickly install Nix on a not quite up-to-date Linux distro to get my hands on specific versions of some programs that are too big to conveniently build from source.
I'm not sure it solves the package management problem for Haskell completely, but it should help with managing GHC versions and external dependencies (i.e. native libs required by Haskell libs), which cabal/stack aren't great at.
Note: I was a bit disappointed that installing Nix required root access to create the /nix store. I would have liked to put it in my home directory instead.
> Stackage solved the consistent set of packages, and Stack made it even easier. I now consider Haskell package management a strength for large projects, not a risk.
A steep learning curve effectively means more power, and that means responsibility. If modularization means separation of concerns, and thus less responsibility, the lack of power is largely mitigated, but it puts the responsibility on the software architects.
Just like LISP's strength is supposedly revealed when it's the whole OS. And, as they say, any sufficiently involved program will contain an ad-hoc implementation of LISP :)
C, e.g., also resembles the hardware architecture more closely. With concurrent programming, though, we see more functional influences creep into the staple languages. With Haskell's do notation, it's the other way around.
Anyway, as a research project it is rather academic. Hence it is often not backwards compatible, which is an immediate show-stopper, and which also shows that, given its bigger goals, it's far from finished.
Most somewhat experienced developers could pick up Golang in a day or two, having come from C++/Java, but learning Haskell is much more daunting, at least in my personal experience. I don't feel like Golang really brings anything new to the table, it just takes away stuff, which isn't a bad thing.
However, learning Haskell was really eye-opening to me, and yet, even a year later, I feel somewhat comfortable writing code but am only beginning to learn performance analysis and think about laziness. I mean, imagine hiring someone who's only used imperative languages and telling them to start writing code that doesn't use for loops, or if statements without corresponding else statements.
A language really shouldn't require its language-specific glossary to be mandatory reading, just to understand anyone speaking about it. I could just learn any other language in half the time!
from the above comments
> "I have this pure function that I want to apply to the inner value of my functor"
pure function I know well enough, that is the same for any language, functors though... is that a function object that can be applied to something? then why does it already have a value?
I can take terminology from C to Python or C#, and feel a-okay, but Haskell kinda feels like it went and shut itself away from the world, and its terminology developed differently because of that, I guess.
Haskell uses a lot of terms specifically from Category Theory.
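Concretely, a Functor in Haskell is not C++'s "function object"; it's any container-like type you can map a function over, which is why it "already has a value". A minimal sketch with an invented Box type:

```haskell
-- The Prelude defines:  class Functor f where
--                         fmap :: (a -> b) -> f a -> f b
-- "Applying a pure function to the inner value" means exactly this:
data Box a = Box a deriving (Eq, Show)

instance Functor Box where
  fmap f (Box x) = Box (f x)
-- fmap (+1) (Box 2) == Box 3
```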
People in the J community are very mathematically inclined, and usually work on different fields of mathematics. I am working on geometric algebra (Clifford Algebra) in J.
I find the J documentation much friendlier than Haskell's docs.
And J has several IDEs that come bundled with it: JQT, a Qt GUI with syntax highlighting, a help section, and a minimalist project structure; JHS, a web-based IDE with hooks to D3 (vs. the standard plot in JQT); and there is a Python-based IDE (JQide).
There are labs that walk you through chapter-based examples in the interpreter's REPL. One of the newer labs uses videos that go along with the lab in the console, all part of the J ecosystem.
I personally think that Haskell will maintain that purity of purpose, and not get diluted enough to be picked up by the average coder. Even J remains obscure after decades too, but I am hoping that all of the array hoopla (GPUs, big data, machine learning) will bring it to the forefront as a more natural fit, since its native unit and structure is based upon arrays.
Haskell is radically different for people coming from mainstream languages; even basic pattern matching looks foreign at first:
foo x = case x of
pattern1 -> ...
People like to say there is just one implementation of Haskell and it's ghc, but there's actually n different ones, with n being the number of combinations of pragmas that exist.
Each combination you use in your source is a different version of Haskell.
SublimeHaskell (hsdev, etc.) was good as well.