Haskell is a language for experimenting; success is not its goal, and chasing it would divert resources from actual research.
They had promises and async/await in 1995. They're trailblazers that other languages can follow (like rust did).
I would add that, after stack, tooling is not a problem anymore. It's not as good and polished as rust's cargo but it's ahead of several other languages.
Still, as a language, Haskell is not ideal for teaching and productivity.
There are too many different ways of doing things (eg. strings, records); compiler errors need improvement; the prelude has too many exception-throwing functions (eg. head); exception handling is not something I want in a pure language; ghc extensions change the language so much that using some extensions almost feels like having to learn another language.
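To make the strings point concrete for anyone who hasn't hit it, here's a minimal sketch of the three most common string types a project ends up juggling (there are also lazy variants of the last two, not shown):

```haskell
import Data.Text (Text)
import qualified Data.Text as T
import Data.ByteString (ByteString)
import qualified Data.ByteString.Char8 as BS

greetString :: String      -- the Prelude default: a linked list of Char
greetString = "hello"

greetText :: Text          -- packed Unicode text, from the text package
greetText = T.pack greetString

greetBytes :: ByteString   -- raw bytes, from the bytestring package
greetBytes = BS.pack greetString

main :: IO ()
main = putStrLn greetString
```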
On documentation, I can't say I feel the need for it, but I understand some developers may be used to programming against documentation and feel lost without it.
I think that Haskell is a great language to prototype pure business logic because of the type system and focus on purity, but it has several warts, because haskellers focus more on language research than DX.
The reason I stopped using Haskell is because I was bit by exception handling (which is a feature shared by many other languages, incidentally!) and by GC spikes.
I still like Haskell, it's closer to my "ideal" language than any other, but for production Rust is more usable (albeit a bit uglier)
> Haskell is a language for experimenting; success is not its goal, and chasing it would divert resources from actual research.
You rightfully point to Rust, which took a lot of inspiration from Haskell, but I think it's worth emphasizing just how much of the progress in programming languages in the last two decades was inspired by functional programming (many features were not invented in Haskell, but some like type inference were popularized by it).
For example: proper type inference, algebraic data types (enums in Rust) and consequently option types, pattern matching, property-based testing, immutability by default, parametric polymorphism (generics), ad-hoc polymorphism (type classes/traits/...), first class functions (very old idea but only recently common in mainstream languages), ...
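For readers who haven't seen these in their original habitat, a small sketch of how a few of those features look in Haskell (the Shape type and names are made up for illustration):

```haskell
-- An algebraic data type (what Rust calls an enum), consumed by pattern matching.
data Shape
  = Circle Double
  | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

-- Option types fall out of the same mechanism: Maybe is just another ADT,
-- and the compiler can warn if a match forgets the Nothing case.
describe :: Maybe Shape -> String
describe Nothing  = "no shape"
describe (Just s) = "area: " ++ show (area s)

main :: IO ()
main = putStrLn (describe (Just (Circle 1)))
```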
It's kind of alluded to, but, while FP languages as a broad category were the progenitor of a lot of different PL ideas, Haskell ended up implementing many of them in a single place. It's fair to say "we were influenced by (this other language that originated an idea)", but it's also fair to say "and Haskell also has that feature". Do it enough times, and you start to see why people claim that Haskell is important: being a research language, it's able to have all of these ideas implemented in it in pursuit of its design goals, where other FP languages pick and choose in pursuit of different ones.
I think it would be best to create a spin-off of Haskell. A small subset with only the good parts of Haskell, a few hand-selected extensions, a new prelude, a focus on performance, very easy tooling/build chain, and use in production for businesses: "Production Haskell Lite".
I learned type inference in Caml Light, Haskell was still Miranda back then, and I bet any language designer of all major languages has similar backgrounds in regards to ML type inference.
Even type class ideas can be found in CLU, ML, Objective-C, before the paper that gave origin to their adoption in Haskell.
AFAIK, that was the basis of type inference in many of the non-mainstream languages before they branched off into their own more powerful type systems.
> I think that Haskell is a great language to prototype pure business logic because of the type system and focus on purity, but it has several warts, because haskellers focus more on language research than DX.
This is very true. We recently started https://github.com/digitallyinduced/haskell-ux to keep track of haskell DX improvements. Some of our suggestions are already issues on the GHC (haskell compiler) bug tracker.
Exactly. It's a research vehicle. Not everything in Haskell is useful for business or productivity.
The reason I stopped using it is because of performance. Despite having state-of-the-art optimizations, it's still too slow for my needs, and writing fast code is way too hard compared to C++ or C.
I was also quite disappointed to learn that a lot of useful concepts (Monad transformers/stacks) have a runtime performance impact when it looked to me like I was just playing with types.
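For context, a minimal sketch of the kind of stack I mean, using the mtl package (the App type and names are invented for illustration). Every bind in `App` has to dispatch through each layer's instance dictionaries, which is where the runtime cost can creep in unless GHC manages to specialise and inline it all away:

```haskell
import Control.Monad.Reader (ReaderT, runReaderT, ask)
import Control.Monad.State (StateT, evalStateT, get, modify')
import Control.Monad.IO.Class (liftIO)

-- A small transformer stack: read-only config over a mutable counter over IO.
type App = ReaderT String (StateT Int IO)

step :: App ()
step = do
  name <- ask                  -- MonadReader: served by the ReaderT layer
  modify' (+ 1)                -- MonadState: lifted through to the StateT layer
  n <- get
  liftIO (putStrLn (name ++ ": step " ++ show n))

main :: IO ()
main = evalStateT (runReaderT (step >> step) "demo") 0
```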
Well, it's also one that happens to make people lots and lots of money. Standard Chartered, for example, uses it extensively to earn lots of money. Facebook uses it for spam filtering logic. Niche, perhaps, but painting it as purely for research is just incorrect.
Yes, it's not going to be great if all you're writing is integrations with various vendors using SOAP or just legacy or odd protocols. Those have libraries or code generators on the JVM/.Net/etc. platforms... not so much in Haskell. However, this has nothing to do with the language, it's just a matter of people actually doing the work to support those things.
Everything else, though... you're golden. It has a learning curve, but there's a reason that Scala is moving ever closer to monads, adopting proper syntax for type classes, etc.
The language doesn’t have to be good to make money, in fact it can be quite bad.
Oddly, while modern development embraces agility, many things often benefit from small changes, and a bad language has small change built-in.
Why? Well, if the language is bad, you have to pay your developers well to retain them, since there are few that want to program in that language. The developer comes aboard because of the money and the challenge. Once they join the company and become a developer in a bad language, there are fewer alternatives for the developer to find another job in that bad language. This means that they have to stay around and get to know that bad language better, making it even harder for the business to hire others to help or replace them. So, the development doesn’t suffer as much from team scaling problems, and change can’t happen as quickly.
This isn’t what you want for everything of course. Especially when talking with a VC about a startup.
But JS, an ok language, has gotten to a similar level of nonsense through the difficulty and complexity of its rapidly changing ecosystem and changing browsers.
And few languages have really avoided unnecessary difficulty over time in their ecosystems.
I think we in the IT community should really strive to be better at nuance when discussing these things.
I mean, as much as I've seen "Haskell sucks" posts, I've also seen quite a lot of "Haskell solves all your problems!" posts. That's not how anything works in the Real World(TM) when solving concrete business problems -- whatever that business domain may be. Rather, it's trade-offs all the way down. It does get tiresome to read these re-hashes of debates which should have been over already. (EDIT: For anyone wondering, the answer is: It depends.)
EDIT: Final edit while I can. We see this a lot with "lol PHP" style comments... and all those achieve is a sense of elitism and making people who actually get lots of important work done in PHP feel bad. I don't want that world.
> The language doesn’t have to be good to make money, in fact it can be quite bad.
In fact, I think that there's an inverse relationship between how good a language is, and how much money has been made with it, on the condition that you have heard of it.
Why? Well if you've heard of a language, it's likely because it is a language that has proliferated through the community. The worse a language is the more it must have made people money to 'survive' and proliferate.
Think of the worst/ugliest languages you can think of (coming to my mind is Visual BASIC, C++, PHP, Javascript). These are languages that made an exceptional amount of money, and this allows them to survive despite being so bad.
I am a professional software developer and my full time job is currently writing software in Haskell for a bank.
I also stream projects and open source work I do in Haskell once a week.
I am not a researcher. I don’t have a horse in that race. Although I am thankful for the research that is done as things like STM and type system extensions benefit my work greatly.
I really wish Haskell could shed this meme that it’s a “research” language and that “nobody uses it.”
I’m Nobody. I use Haskell every day. It’s a practical language for industrial strength programming in a pure functional language.
The fact that Haskell is a good industrial strength language is a byproduct of its quest for language design excellence.
“Avoid success at all costs” implies an unwillingness to sacrifice on design to please corporate needs.
I don't think it's a "meme", there really is a focus on research over being production ready. Monads, Arrows, Dependent Types - Haskell is where a lot of language research happens (and the language chosen by a lot of researchers).
Sure, you can ship Haskell in production if you're happy to fill in the gaps. I did some of that for a non-Haskell company; it wasn't as easy to deploy as apps written in other languages, and community-provided solutions for logging / monitoring were lacking.
That said, I'm always happy to hear about people using Haskell in production and wish you the best
> I would add that, after stack, tooling is not a problem anymore. It's not as good and polished as rust's cargo but it's ahead of several other languages.
In a way Cabal 3 is even ahead of cargo, by being able to share built libraries between different projects while still having a sandbox-like build for each project.
> There are too many different ways of doing things (eg. strings, records), compiler errors need improvement, prelude has unsafe functions (eg. head), exceptions handling is not something I want in a pure language, ghc extensions change the language so much that using some extensions almost feel like having to learn another language.
For such a self-proclaimed safe language, some things are almost comedic, like getting exceptions for uninitialised record fields or unhandled alternatives in case expressions.
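For anyone who hasn't seen the record case, a minimal example: this compiles (with only a -Wmissing-fields warning) and then blows up at runtime:

```haskell
data User = User { name :: String, age :: Int }

-- GHC warns here, but it still compiles:
u :: User
u = User { name = "alice" }

main :: IO ()
main = print (age u)   -- runtime exception: missing field in record construction
```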
But Haskell still has a place in my heart and I'm still following its development. But for my side projects Rust has replaced it, by being even safer in some respects, and above all safe in the places where it's most important for me, while also being quite a bit more pragmatic. For me Rust combines the best parts of Haskell and C++.
Multiprocessing and parallel programming is a different thing from async/await, which primarily has to do with green threads and coroutines. You're right that the ideas go way back (hell, Knuth was writing about coroutines in TAoCP in the 70s!), but this does not qualify.
Unix was originally green threads, at least in kernel mode: it was a non-preemptable kernel running on a uniprocessor. This means that kernel code ran until hitting a voluntary context switch. User space was preemptible.
(User space being preemptible doesn't really make a semantic difference to the shell & and wait examples, unless some of the commands contain lengthy CPU-bound loops.)
Async/await is specialised syntax for the more general concept of monads, and Haskell was the language in which this was most heavily researched.
Specifically, the continuation monad describes asynchronous computation (promises) and was one of the motivating examples in the early 90s, going back all the way to Moggi's original "Notions of computation and monads" paper from 1991.
http://homepages.inf.ed.ac.uk/wadler/topics/monads.html
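A toy sketch of that correspondence, using mtl's Control.Monad.Cont (the `step` function is invented for illustration): the do-block reads like an async/await chain, and the callback passed to runCont plays the role of the promise's continuation:

```haskell
import Control.Monad.Cont (Cont, cont, runCont)

-- A toy "async" step in the continuation monad: instead of returning a
-- value, it hands the result to whatever callback comes next, which is
-- essentially what a promise/then chain (or an await point) does.
step :: Int -> Cont r Int
step x = cont $ \k -> k (x + 1)

main :: IO ()
main = runCont (do a <- step 1      -- reads like async/await...
                   b <- step a
                   return (a + b))
               print                -- ...the final callback prints 5
```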
Async/await and LINQ were the brainchild of Erik Meijer, a haskell researcher.
But F# doesn't have async/await.
As you write, it has "computation expressions", which are inspired by Haskell's "do-notation", but are more generalised and powerful (than both Haskell's "do-notation" or Scala's "for-comprehension", let alone async/await).
But since industries used Haskell, now they are concerned with backward compatibility, which AFAIK makes some parts of Haskell ugly. I wish there were something like the RHEL/Fedora model, rather than industry dictating terms to Simon Peyton Jones by pushing for backward compatibility.
`head` is not unsafe, it is a partial function. An unsafe function can lead to u.b.; the behavior of `head` on an empty list is very much defined. Haskell is not a total language.
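To illustrate the distinction: `head` on `[]` throws a defined exception, and a total alternative simply moves the empty case into the type:

```haskell
-- head is partial: its behaviour on [] is a defined exception, not u.b.
-- A total alternative pushes the empty case into the type instead:
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

main :: IO ()
main = print (safeHead ([] :: [Int]))   -- Nothing, no exception
```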
As an aside -- Rust's definition of "unsafe" (e.g. "can lead to u.b.") is not the only definition of unsafe one can use for a programming language, which can have different safety guarantees.
As a motivating example, in many languages converting a reference to an integer containing a memory address is unsafe (e.g. Java[1]/golang[2]/C#[3]/Haskell[4]), but this is considered safe in Rust.[5] All these languages literally use the word "unsafe" for it.
I think in all these cases unsafe actually means 'leads to UB'.
Converting a pointer to an integer address is problematic if you have a garbage collector due to compacting. Technically it's still the dereferencing that actually causes the UB, but I don't think this is too much of a leap.
C# marking all pointers unsafe also makes sense in the same way, because "passing pointers between methods can cause undefined behavior."
I agree, I'm just pointing out that not all languages have the same safety model, even if they are similar.
Converting a pointer to an integer address and never dereferencing it (e.g. to print it) does not lead to U.B., but it does leak ASLR information. Some languages consider that safe (Rust) and others do not. I think that is in an important distinction.
I think it's worth pointing out that your values[0] may not align with Haskell's values. That's absolutely fine, but it doesn't mean that Haskell is "bad" in some objective sense.
AFAICT, most of your 'complaints' would apply equally to, say, Java or C#.
[0] I can't recall which exact presentation it was, but Bryan Cantrill had a brilliant segment on this in one of them. Perhaps others around here can remember?
> I can't recall which exact presentation it was, but Bryan Cantrill had a brilliant segment on this in one of them. Perhaps others around here can remember?
They didn’t argue that it was “bad in some objective sense”, they pointed out a feature in Haskell that they specifically don’t like. Unless you consider Haskell to be a perfect language, discarding an opinion that was labeled as exactly that seems silly to me. Especially with such a broad strokes “love it or leave it” reply...
Those complaints apply to those languages too, yes. What’s your point here?
Haskell has plenty of legitimate criticisms for use as a production language.
However this article isn't it. Please don't read this and repeat the opinions posted here. A lot of what the author says is just plain wrong and shows a misunderstanding of basic functional idioms.
For example: "Functors are basically an Object with a internal state changing method in typical OOP terms." Uh, no. Functors are stateless, with a well defined semantic. This is nothing like an object with internal mutable state, with instance methods that mutate it in god-knows-what-way.
There are a lot of mischaracterizations like this in the article.
> programming languages are meant to ease the task of creating computer programs as opposed to writing assembly by hand
This! The haskell ecosystem is missing a certain kind of pragmatism. There's a lot of beautiful type abstractions, talking about monads, etc., but not enough builders doing actual application development. In my opinion it's not the language that is bad, but the ecosystem. Missing documentation, missing tooling and infrastructure, no focus on actually building applications.
We're trying to fix that with IHP, a new haskell framework with a focus on actually building applications. Imagine the productivity of rails combined with the typesafety of haskell.
6 months after its release it's already the second biggest haskell framework, and we just had a new record of weekly active users last week. To me this shows that by fixing the ecosystem haskell can reach a lot more people than it currently does.
This indeed seems to be the main/only point of criticism in this post that is valid: however, it is not that Haskell has no "pragmatic" libraries to get stuff done (e.g. WAI/Warp, Yesod, Servant, ... are top notch, practical libraries if you are writing network/HTTP services). They do seem to drown in the sea of libraries/blog posts that are focused on the academic/abstract stuff. The end result does not feel consistent, and reminds me of the horrors we had with the early C++ metaprogramming efforts: it looks cool, but you end up fighting the language and produce unreadable code.
For Haskell to become a successful "industrial" language, I think most of the dependent-typing stuff should probably go (to Idris/Agda etc.), so that a clear and consistent Haskell subset can be defined.
The other arguments in the article are just weak. The rant about data not having a type is missing the point. Sure, often you receive data that you need to inspect to know what it is. You can easily do this in Haskell (just label it with an UnknownData type) and have a function that inspects it and returns, depending on the contents, the right type. The big advantage is that you don't have to keep on doing this same check.
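A minimal sketch of that pattern (the Payload type and its wire format are made up for illustration): inspect the unknown data once, and from then on the type tells you what you're holding:

```haskell
import Text.Read (readMaybe)

newtype UnknownData = UnknownData String   -- raw, uninspected input

data Payload = Order Int | Refund Int | Ping   -- hypothetical domain type
  deriving Show

-- Inspect once; every later consumer works with Payload, not raw strings.
classify :: UnknownData -> Maybe Payload
classify (UnknownData s) = case words s of
  ["order",  n] -> Order  <$> readMaybe n
  ["refund", n] -> Refund <$> readMaybe n
  ["ping"]      -> Just Ping
  _             -> Nothing

main :: IO ()
main = print (classify (UnknownData "order 42"))   -- Just (Order 42)
```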
Types being the cause of difficulty in refactoring when business requirements change, is the opposite of my experience. In large dynamically typed codebases, being sure that a large refactor caught everything, is very hard / costly in test coverage. I have seen this go wrong many times. Having the compiler point out what you have missed, based on the types, is very helpful.
While I think the arguments are a bit weak, I do agree that it is at least unclear if Haskell is a sound choice as a production language at this time. Fighting against an ecosystem is not something you want to be doing while building your product. But in contrast to the author, I do think this is fixable, and see steps happening in the right direction (e.g. with IHP, but also with the efforts around the Haskell Foundation and the Haskell Language Server)
> but not enough builders doing actual application development.
It's simply because the community is small.
People compare Haskell to other languages like Java or JS like there's an equal amount of manpower, and therefore, if not everything has a library, it must be because haskellers are slacking on useless stuff. It is not.
Other small languages communities have exactly the same issues, it's not specific to haskell. The only thing that is specific is outsiders feeling entitled to an ecosystem and blaming "these damn fp ivory tower types" for not providing it.
No, it's really not. Haskell is a pain in the ass to program in. It's a language developed for mathematicians, not the real world. It makes the same mistakes as OpenCyc and expert systems.
I know assembly, C, C++, Scheme, Common Lisp, Python, MATLAB, C#, some D, some Javascript, some Mathematica. Programming in Haskell looks promising initially, but just drains all the joy and productivity out of my programming.
> [Haskell] just drains all the joy and productivity out of my programming.
It's fine that you don't like the language. I mean, not everyone has to enjoy a language. On the other hand, there's a difference between voicing an opinion, which is personal, and making claims like:
> Haskell is a pain in the ass to program in. It's a language developed for mathematicians, not the real world.
Because then, you are discarding other people's experience about the language, while you claim your own experience is relevant.
> > [Haskell] just drains all the joy and productivity out of my programming.
> It's fine that you don't like the language. I mean, not everyone has to enjoy a language. On the other hand, there's a difference between voicing an opinion, which is personal, and making claims like ...
Indeed. Different people find different things fun.
I don't think that quote is complete enough though. If you really believe that programming languages are just tools for building programs (in particular "useful" programs that make money) then yes, Haskell has not much to offer you over PHP or Ruby and other languages in that vein.
On the other hand if you are interested in research, exploring the extent of the design space for programming languages and what you can (or can't) express in them, then Haskell is an excellent language for you. Both for the community of like-minded researchers and for the flexibility of the language/compiler itself.
On a wider scale, I've always liked https://josephg.com/blog/3-tribes/ as an explanation for why this topic comes up again and again and again. The writer of the article is more of a "type 3" programmer. He wants to build a house but does not care much which tools he uses as long as they don't get in the way. It is the end result they are interested in, not the process. Haskell is, by design as a research language, more process oriented than result oriented. And yes, you can still get results with it as IHP and SC and Jane Street have shown. But they seem to be the exceptions that show the norm.
- You can change features of your product more easily thanks to the type system. Doing a big refactoring with rails always adds many bugs. With haskell you spend less time on these kinds of changes because the compiler tells you what needs to be changed.
> With haskell you spend less time on these kinds of changes because the compiler tells you what needs to be changed.
This. I think this is what all the other folks in this thread are missing, the ones just throwing shade on Haskell after struggling through a tutorial (admittedly, it can be pedagogically difficult to introduce its concepts compared to Python or Ruby or JS, etc.).
Perhaps Haskell will never beat C++ & friends (Rust, C, maybe Swift?) at performance, but what it will beat is the myriad other web apps written in Python and Ruby, which I bet a lot of folks on this site use. All that it lacks are the same levels of batteries-included frameworks like Django and Rails.
I can't even begin to mentally enumerate the number of times I've seen Python (and, much less often, Ruby) codebases at the places I've worked that were critical to the infrastructure (often having been the outgrowth of the initial POC that got the product started) that were mysterious Monoliths, which if you tried to change would be some Hydra situation: for every bug fixed, another 2 or 3 or 10 are created.
Perhaps a team of really well-disciplined Python professionals can go 0-60 very quickly due to the language's flexibility, but that counts on the discipline being maintained going forward in the code, and on training new hires or hiring only experts to work on the project.
Haskell is basically like encoding this discipline into the compiler itself. You don't have to spend time praying that your test suite is adequate (although tests are still needed) or picking over the minutiae of some pull request trying to remember all the things the `|` operator may have been overloaded with by that "coding ninja" who decided to put esoteric things in your Python codebase before he got poached by some other company and peaced out without documenting anything.
For all of the pitfalls that Haskell has from a language perspective (which can often be avoided by using an alternate prelude), the advantages when compared to other modern languages made for "moving fast and breaking things" are very prominent: I no longer have to wonder what the `data_object` parameter of my `update_callback_param_cache` method is supposed to be and waste minutes of my life desperately grepping through the code or, if possible, `pdb`ing my way through a live version and trying to trigger that code path.
Your example is live updating html in a browser, NOT live updating the type system. In other words, your evidence is trivial, unrelated to your main claim, and unconvincing.
> If you really believe that programming languages are just tools for building programs (in particular "useful" programs that make money) then yes, Haskell has not much to offer you over PHP or Ruby and other languages in that vein.
I'd agree with this generally. During grad school I loved hacking my research on haskell.
But this doesn't seem to be a consistent claim from the community. It'd be helpful if the core maintainers published this somewhere. Because I consistently get haskellers telling me that I'd be more productive as an engineer and my programs would have fewer errors if only I worked in Haskell.
Some other languages do have this sort of vision statement (or, at least a fairly well implied one) that makes it clear what they are offering and what they aren't. C++ wants to give you speed and flexibility without overhead for things you aren't using. That's been consistent for decades.
> Because I consistently get haskellers telling me that I'd be more productive as an engineer and my programs would have fewer errors if only I worked in Haskell.
i still think it's true
i have worked in haskell and i currently get paid to write more mainstream languages.
the mainstream languages are literal wastes of my time in comparison. not my money tho, so i'm fine with my employer paying me to waste time.
> programming languages are meant to ease the task of creating computer programs as opposed to writing assembly by hand.
is wrong: Assembly is certainly a programming language, which the author acknowledges when writing `writing assembly`.
As mentioned elsewhere, the whole blog post appears to be a troll post and should definitely not be taken as a part of a sober discussion on programming languages.
He should have written “as opposed to writing object code by hand,” because assembly does save some effort there. Some architectures were even designed around making assembly code more readable, rather than making it the compiler’s problem.
Not sure if you are the right person to talk to but there's a small mistake on the front page. Missing "language" in functional programming language (at least on my phone)
The line which sprung out most to me was the one claiming that "Clojure took the world by storm". This is simply not the case. JavaScript has taken the world by storm, as has Python. Ruby, PHP and others have taken the world by storm before and are slowly fading. Clojure has never taken the world by storm and (IMO) never will. Neither will Haskell btw.
To me, Clojure is very much on the same level as Haskell: an extremely niche programming language with a small (sub-1% in TIOBE) but dedicated following that is unlikely to grow much beyond what it already is.
Rest of the article: meh. It presupposes a lot of what it thinks Haskell should aspire to without investigating whether those things are actually what it is aspiring to be.
Yeah, I think Clojure people have a lot of insecurity around Haskell and it’s rarely a good look. Clojure is a language of sensible compromises without any real philosophical slam dunks, which doesn’t seem to satiate engineers’ natural desire to win internet points.
> Clojure people have a lot of insecurity around Haskell
I beg to differ. I think it's the other way around. I rarely see Clojure folk bashing on Haskell - they usually admit Haskell's strong points or any other language. They very often borrow ideas from other languages, libraries, and tools.
I've heard they are figuring out interop with Python and R. They've built Clojure-like Lisp dialects that work on Golang, Erlang, compile to Lua, etc.
Haskellites, however, get sad and defensive anytime someone mentions that it's not so widely used in the real world.
That’s a nice thought, but I haven’t seen examples of that myself. And your claim of real-world use seems to be the opposite of what I found with a quick Google search:
Obviously this is far from scientific, but Clojure being a new, hyped language, it wouldn’t surprise me if that skewed people’s perception of how much it was used in production code.
I feel like at this point, it's neither new (coming up on 12 years old) nor hyped (it's been used in anger a lot by now).
I'm not sure how you read real-world utility from looking at graphs of SO and GitHub. It's like claiming that garbage trucks have no real-world utility because they are vastly less popular than cars.
Clojure is hovering around the same space as Rust, and beating WASM and Cuda. All of these languages occupy a useful niche, and a more-than-cursory search will reveal a lot of companies deploying them where they can play to their respective strengths.
I think you are ignoring the facts: Clojure at the moment is the most widely used programming language with a strong FP emphasis.
Clojure gathers more conferences and meetups, has more active podcasts, books, jobs, etc. Many companies are using Clojure in production: Apple, Walmart Labs, Cisco, Pandora, CircleCI, Roamresearch, Grammarly, Pitch - are just a few that come to mind. And they're not using it for the small stuff; for example, Cisco has built its entire security platform with it.
Clojure is the third most popular JVM lang. To be fair though, this is mainly due to Kotlin taking over Scala. Also, Clojure probably is the second most popular lisp dialect on GitHub (after Emacs-lisp).
So I think the claim is correct. At least within the FP world - Clojure currently dominates over Haskell, OCaml, F#, Scala, Elixir, Erlang, etc.
Are you not just moving the goalposts? The original article was "took the world by storm", not "took the FP world by storm". As part of the wider programming world FP languages are a tiny minority, regardless of how many impressive companies we rattle off. JS and Python absolutely rule the roost at the moment.
I read it as an emotional outburst by the author, and I don't think it actually contains any information relevant to the rest of the article.
If we wanted to treat it as an argument, we'd have to nail down the parameters for what "taking the world by storm" means objectively, or the author would have to clarify what they meant subjectively.
You claim that they’re ignoring facts, but provide no facts of your own beyond “some companies use it in production,” which is undoubtedly also true of Haskell.
I've maintained an active interest in FP languages over the past several years. I'm not arguing here about the merits of choosing Haskell over Clojure, etc., or about other blogpost opinions (I'm afraid I have to disagree with many of them). I'm merely sharing what I know about the current state of FP in the industry.
Too bad the tone is so vitriolic, because there is an argument hidden in there somewhere.
Type systems allow us to add theorems to our programs about how they operate and have the compiler check their consistency.
We fight with bugs: programs behaving unlike we planned and specified. We encode plans and specifications as types, and now have formal proof that a certain class of bugs is eliminated. Excited, we want to use this strategy to eliminate more bugs.
A larger part of the programming task is moved over to this higher order type layer. As it becomes less trivial, bugs and antipatterns start to manifest.
We do not have any answers about the objectively best division of labor between static and dynamic aspects. This is a research project that's bigger than any single person, community or language, and I'd argue we need more of Haskell, Rust, Idris and their ilk rather than less. To merely argue that they did not result in a dramatically better OS or a word processor is beside the point.
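A tiny example of the "encode the plan as a type" move, using base's Data.List.NonEmpty: the type is the theorem that the list has at least one element, so this head cannot fail, and the compiler checks the proof at every call site:

```haskell
import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.List.NonEmpty as NE

-- NonEmpty a encodes "at least one element"; NE.head is total.
firstChoice :: NonEmpty a -> a
firstChoice = NE.head

main :: IO ()
main = print (firstChoice (1 :| [2, 3]))   -- 1
```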
I for one upvoted not because I agreed with the article but for what I hope is a constructive discussion.
Personally, and without wishing to sound entitled, I would love for it to be a lot easier to dip one’s toe in the water. A great start would be a web dev tutorial that starts from scratch with a sane Prelude that hides away anything I don’t need to know now (or perhaps ever). With of course a database, json, and html.
Coming from Flask where I was well served in that regard (no pun intended).
Aside: Thought ocaml might get there first but I’m not seeing that either
I agree, the benefit of this, is that it creates a frontier where math can be applied more rigorously to programs. There is a related xkcd comic about this.
We’re in a “not there yet” kind of situation.
The author likely comes from a very mainstream product centric domain, as most do. In this context one has to agree with the criticism and maybe also admit that this is not what the language is for?
This is a good point. But I also want to add that as a freelance developer, I've done quite a number of medium-sized projects for business use cases in Haskell singlehandedly. Empirically, the results were very low in bugs and remain in use at the companies in question without requiring much maintenance.
So this kind of thing is something that is well served by Haskell, in my opinion. I wouldn't seriously recommend someone else that the best path to doing this kind of work is to go and learn Haskell but I also feel this is something I wouldn't have been able to do as well using many other more popular languages.
I don't have the experience or knowledge necessary to judge this post, but I’m glad it exists and got posted to Hacker News because I believe the discussion will end up being insightful.
I am not generally a functional programming person. I grew up on C and C++ and went on to learn Python, JS, etc. Eventually I finally wrote actual code in Scala and thought it was fine. Didn’t love the syntax, not great at “thinking” functionally, but loved sum types and pattern matching.
I feel like Rust is the perfect mix for me because it gives me the aspects I like about functional programming languages (sum types, pattern matching, strong type inference) combined with composition based polymorphism and an almost C++ like attitude towards meta programming (though obviously with a clean slate.) I do think that learning to “think functionally” better would be good; maybe I should listen to the 4chan meme and read SICP after all. :)
I’ll be interested to see if this generates thoughtful discussion. It feels like a flame-y, ignorant piece of writing and my gut response is to say flame-y, marginally less ignorant things in response. Which I won’t, because that would be unhelpful and make everyone’s life who read it slightly less pleasant for no reason.
In my experience so far, SML hit a really sweet spot between ease of learning and features. I never had a "what would I use this for?" moment. Kotlin comes close, but is less elegant and especially lacks full pattern matching.
Unfortunately, sml is used approximately nowhere. Not sure why, but I also haven't used ocaml yet. Maybe one day. My hosting provider (hcoop.net)'s domtool is written in SML, and I hope to hack on it at some point.
I am personally not a huge fan, but like many statically-typed functional languages, Haskell is great for
- creating new programming languages and DSLs
- problems in finance, energy, medicine, etc., that are more correctness-sensitive than performance-sensitive (it is widely used in Bitcoin brokerages / exchanges / etc, and in general blockchain technologies)
- highly-concurrent scenarios that would otherwise involve a lot of painful management of threads/locks/etc (see the STM sketch below)
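On that last point, a minimal sketch of what the stm package buys you: composable transactions instead of hand-rolled lock ordering (account balances as TVars are my invented example):

```haskell
import Control.Concurrent.STM

-- Move money between accounts with no locks; if the balance check fails,
-- the transaction blocks and automatically retries when `from` changes.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 40)
  readTVarIO a >>= print   -- 60
```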
I'm not a big fan of DSLs. Yes, they allow for convenient notation to express things within specific domains, but they also create more barriers. It is both more impressive and more useful if one is able to create abstractions in mechanical sympathy with idiomatic uses of existing languages. Of course, this is harder to do.
I think DSLs are a bit of a lazy cop-out.
This is not a new idea. The myth of the tower of Babel and the observation that by creating lots and lots of communication barriers and interfaces, you lose the ability to coordinate major efforts is indeed very old.
I disagree. DSLs give you a new set of primitives which is more suited to the problem domain and makes it harder to make mistakes by accidentally encoding things which don't exist in the problem domain.
Of course, just making a DSL isn't enough. Some DSLs are badly designed and inelegant, or their features do not compose well with each other. But a well designed DSL can be a thing of both beauty and clarity.
I think this shouldn't be surprising as even modern programming languages are DSLs of a kind when compared to machine code.
>DSLs give you a new set of primitives which is more suited to the problem domain and makes it harder to make mistakes by accidentally encoding things which don't exist in the problem domain.
But domains are interrelated. If there is no common set of primitives upon which various different domain abstractions are built, then you have to relearn absolutely everything for each domain, even things that are not domain specific at all.
You end up with a huge number of special cases. Mistakes are made because only very few people will be able to remember all the semantics of all the DSLs they need for their work.
I agree, and that's why DSLs are not magical pixie dust and have to be used with taste and judiciously. But in cases where they do work, they are strictly better than free form code, IMO.
Designing good DSLs is hard. If you recommend that people construct DSLs to solve their problems they will. Most of the time these DSLs will not be good and they will present either barriers or bottlenecks.
The reason DSLs often become a problem is that people often do not think about the cost of maturing and maintaining them.
I've seen this happen quite a few times. Someone gets the idea that "we need a DSL for this". It is implemented, but the resources and time to do a proper job isn't there so the documentation, tooling and roadmap is severely lacking or entirely absent.
More requirements are uncovered and development becomes hampered by what the DSL can express or how it is implemented. If you are really unlucky, the original architect leaves or loses interest. I've seen entire products grind to a halt for 6 months because the only person with deep understanding of a DSL on which everything hinges has left the company.
DSLs, like any programming language in general, can be useful. But they are expensive pieces of software to write and maintain and not the "easy solution" that people tend to be misled into thinking that they are. And if expense is spared in their making, it has to be paid for later in troubles encountered further down the road.
Haxl is actually an example of "the more impressive and more useful" approach: it is an abstraction embedded within a Haskell library, that allows you to express your "intent" in simple, standard Haskell code, decoupled from the way this intent will be reached (i.e. the execution is determined/optimized by the library)
Haskell allows this to be done in a way that makes it very hard for the user to break this abstraction (the type system will prevent you from doing this), while still allowing the full language power of Haskell to be used.
Many of those communication barriers and interfaces are a feature, not an obstacle.
For example, in a typical enterprise application it's good to know that SQL queries and web resource access control (two typical DSL uses) cannot interfere with one another accidentally.
SQL is a DSL that is fairly heavy in that a lot has been invested in it over multiple decades. (It even has extensions or outgrowths that turn it into more of a general purpose language)
This results in ample tooling, lots of documentation, multiple implementations, some of which are really good and a lot of community knowledge.
One may like or dislike SQL, but it is undeniable that it is useful to and mastered by a great many people.
Most DSLs have few or none of these properties. In some environments DSLs tend to be presented as something you can create casually. That it is a lightweight solution. And what starts as a small solution can often grow - usually to a point where not having put in a lot of work in the basic design will make the language unsound.
I don't think DSLs are a lightweight solution at all. I think that giving inexperienced programmers the idea that DSLs are something they ought to be designing probably isn't helpful.
It means that the type system is flexible enough to encode specific domain logic in a way that can be verified by the compiler (and therefore always be correct at runtime unless there is a bug in the compiler). Likewise, it means that plainly incorrect domain logic (eg due to a typo) is far less likely to occur than in a Python program. Some of this can be encoded by statically typed languages like C (“the average of an array of floats is a float”) but not all of it (“given a function that maps floats to floats, applying that function to an array of floats returns an array of floats”).
Python of course has none of this (“this average function actually returns the string ‘fix me!’ on some inputs because the programmer forgot to fill in an else branch”). Mypy is a useful tool but it doesn’t stop type-incorrect code from being executed and isn’t as powerful as Haskell. Likewise, the Haskell compiler certainly doesn’t catch everything, but it does completely eliminate many common Python bugs.
There is obviously a “spectrum” since no general purpose language has a truly rigorous type system.
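To pin the parent's array-of-floats example down in code (the average function is the one from the parent, reconstructed here):

```haskell
-- "given a function that maps floats to floats, applying it to an array of
-- floats returns an array of floats" is literally map's (specialised) type:
--   map :: (a -> b) -> [a] -> [b]

average :: [Float] -> Float
average xs = sum xs / fromIntegral (length xs)

averages :: [[Float]] -> [Float]
averages = map average   -- cannot return "fix me!" on some inputs

main :: IO ()
main = print (averages [[1, 2, 3], [4, 5]])   -- [2.0,4.5]
```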
The compiler and type system allows you to more robustly encode and enforce correctness. Even as someone who's written relatively little Haskell compared to Python, knowing that I can run it and it's not going to break in some weird, unexpected way is a fantastic feeling. If I had to write something that needed to be logically "bulletproof" and correct, I'd feel orders-of-magnitude more comfortable writing it in Haskell/Rust than Python. Python has too much magic, too many ways to do something unchecked, too many ways to work-around some issue. It's too easy to write poor, difficult-to-comprehend/maintain code in Python, at least with Haskell/Rust I can re-factor something and _know_ that I didn't break anything or change any behaviour - the latter especially is straight up not a guarantee I could make with Python in my experience.
1. You can afford the costs of a garbage-collected system.
2. Your collaborators know it or are eager to learn it.
3. You intend to maintain your system over an extended period of time while adding features.
4. You don't need close integration with the platform GUI.
That last one isn't impossible to do in Haskell, but it's painful enough that it overrides the benefits in a lot of cases.
What product niches does this leave? Servers and command-line tools, primarily, though there is room for GUI applications that use various cross-platform UI toolkits and games that build their entire UIs from scratch. (Yes, both of those exist. Even games. Unity uses C#, it's not like garbage collection is incompatible with games - it just has overhead you'll have to live with and work around.)
Honestly, condition 2 above is the biggest limit. A lot of people who like to talk about judging things on their merits refuse to learn Haskell because it's not yet another shallow skin over the same programming concepts.
My personal experience is that any time you think Ruby would be a good choice for a long-term project, Haskell is a far better option. It supports the same joy of green-field development but is less of a tarpit when the green field is 5 years behind you.
Seems like you'd be better off using Rust for most of these. A lot of the same correctness benefits of Haskell, but a much more practical language with a more complete ecosystem of libraries.
Amen. The real benefit of Haskell is not some hyper performance optimization (when you actually truly need C++ or Rust), but for business critical applications that need to perform fairly well and, most critically, need to not be filled with bugs as it continues to evolve.
Whatever amount of money someone might lose by using Haskell compared to Rust or C++ (e.g. in hiring trainers to train your engineers, the overhead in terms of your PaaS bill due to GC or whatever else) is very small compared to the savings:
- Compensating a customer for an SLA violation due to some inadequately tested code path that caused an outage
- Wasted developer time trying to act like a Human Compiler (e.g. including a bunch of extra code to check type expectations and handle violations gracefully....at runtime)
- Wasted developer time trying to understand the dynamic behavior of some code in a PR
- Wasted developer time trying to understand old crufty parts of the codebase when you refactor
Perhaps it's slow(er) to compile than you might like or not as optimal as C++, but unless you can afford to hire the absolute best Python / Ruby developers available and have some airtight culture of documentation and best practices, I would venture that it's better off to stake one's intellectual property on something that can survive employee churn without that knowledge walking out of the building.
Depends, I would certainly not pick Rust over C++ for graphics or GPGPU programming, as I don't plan to be the one doing the groundwork to build something that has the industry acceptance of SYCL, CUDA, Qt, COM/UWP, MSL.
Ironically C++ is closer to Haskell in regards to expressiveness.
Rather than merely lamenting that no one's mentioned OCaml, how about providing something specific about OCaml you think is relevant to contribute to the discussion?
For example, pointers about how OCaml compares to Haskell weak spots:
- Basic features (e.g. records, strings, portability)
- Undocumented, fragmented and/or half baked libraries
- Bizarre and inconvenient "advanced" syntax
- Extravagant memory usage
Rust is only an option if one needs deployment scenarios where having a GC is not an option.
For everything else you are better off with a language that supports automatic memory management, and now Haskell is even supporting linear types anyway.
That depends on whether you find having to manage memory more of an impediment, or whether you find not having easy access to imperative constructs more of an impediment.
People keep forgetting that GC-enabled languages like F#, Haskell, D, C#, OCaml also provide mechanisms to fine-tune memory allocation and its placement, on the stack and in global memory segments besides the GC heap, and even on the native heap.
Additionally some of them, like Haskell being discussed here, are also extending their type systems to provide Rust like guarantees for resource management, for the 1% uses cases where it really matters.
As for my short experiments doing graphics programming with Rust: it is naturally an impediment, which proves the point that Rust's place is as a systems programming language filling the same role as C++ does on Android, UWP or Apple platforms, i.e. the low-level drivers, the compiler toolchain, the window system compositor.
It is still something that everyone using the language has to deal with, regardless of how little it is.
Whereas with automatic memory management languages, if and only when it becomes a hurdle, the performance expert can pick up a profiler and only optimise what is required.
Using .NET as example, then one can think about struct vs class, new vs stackalloc vs Marshal.AllocHGlobal, direct arrays vs Span<>.
Or based on experience, already pick the right data structure and memory location right from the start.
My point is that there are other things in other languages that you similarly always have to deal with. For example, in Haskell writing imperative code is a pain. Not impossible, but difficult. In C# there are no sum types, so expressing an "or" data structure is difficult (sketched below).
Which of these things you find to be more of an impediment is somewhat subjective.
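For comparison, the "or" data structure mentioned above, in Haskell (the names are invented for illustration):

```haskell
-- One declaration; the compiler can warn if a case is ever forgotten.
data PaymentMethod
  = Card String     -- card number
  | Invoice Int     -- invoice id
  | Cash

fee :: PaymentMethod -> Double
fee (Card _)    = 0.02
fee (Invoice _) = 0.00
fee Cash        = 0.00

main :: IO ()
main = print (fee (Card "4242"))
```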
Except that .NET is a polyglot runtime: F# is also an option, with the same low-level programming capabilities as C# for doing C-like low-level programming, and it has sum types.
I just gave one concrete example, among many possible ones.
Haskell is pretty nice for building web applications.
Thanks to its type system you can build much more stable web apps in less time. Usually later in the application life cycle you have a hard time refactoring stuff when working with e.g. JS or rails. Without tests you will definitely break stuff. With haskell you can confidently refactor your code without worrying about breaking stuff, the compiler will tell you.
The way data structures are declared in haskell is also very nice for domain modeling. You don't have as much boilerplate as e.g. when using PHP with doctrine.
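For example, a hypothetical domain model, with printing and equality derived for free and no ORM mapping boilerplate:

```haskell
data OrderStatus = Pending | Shipped | Cancelled
  deriving (Show, Eq)

data Order = Order
  { orderId  :: Int
  , customer :: String
  , status   :: OrderStatus
  } deriving (Show, Eq)

main :: IO ()
main = print (Order 1 "alice" Pending)
```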
The performance is also pretty good compared to python or rails.
Can you use it to write code that will be used (compiled) both server-side and client-side?
Because these days this is my bar for a language that is "nice for building web applications". I've gotten so much mileage out of Clojure+ClojureScript just because of this, it's not even funny.
You can, but it's... a bit messy unless you're using Nix as the build platform. Hopefully there will be a WASM backend for GHC... I mean, it's bound to happen right? :)
If you want statically checked types, I'd probably say that Scala is better at the server+client game (i.e. when you want to have both). Of course, Scala has its own drawbacks wrt. Haskell: lack of typed effects is a big one for me.
(Just for context: I work on a website+SPA written entirely in Scala which has been around for years and years. I also have quite a lot of experience in Haskell.)
Oh, I know about them. In fact, I'm trying to introduce ZIO in my Scala-mostly company. The problem is that a stray UUID.randomUUID() can destroy any and all guarantees. (We're already using Monix.)
That truly is the singular reason that I still prefer Haskell over Scala. I can get over syntax awkwardness, etc. etc. The impure-in-pure is... difficult when you can't just grep for unsafe*, etc.
EDIT: Fwiw, ZIO is definitely better than Haskell's IO. Better than RIO? Perhaps. Is it better than polysemy, tho? I don't think so. Btw, I know about polysemy's issues as well... hopefully lexi-lambda can get her GHC runtime changes merged so that we can have a true "free" effect system backed by a tailored runtime. Interestingly, Project Loom is also heading in a similar direction (first class continuations) on the JVM. Interesting times!
I'd pick elm for the frontend as it's kind of similar to haskell syntax-wise but more optimized for the frontend context. Here's a great tutorial for this: https://driftercode.com/blog/ihp-with-elm/
Especially since you can autogenerate Elm decoders based on Haskell types to reduce boilerplate and duplication between frontend and backend, while mostly retaining a strong sense of type safety.
You can also use Elm as an introduction language to new developers, without all of the complexity of haskell’s higher order abstractions, and then introduce them to haskell when they’re comfortable with elm.
(I did this with Servant as a backend, but IHP is wonderful as well!)
The value of type safety is more that you're more productive because you can change things faster (as the compiler tells you what needs to be changed). This allows for faster iteration on your product.
Correctness is indeed not the most important thing, it's kind of a nice side effect.
You cannot create web applications without JavaScript these days. It does not matter what is your server side programming language, you need to send to the client JavaScript code to be executed in browser. And that in itself is a shitshow. End of story.
From another perspective you could say that JS is usually just a compile target these days, in many cases the source language is modern version of JS but in many other cases it's ClojureScript, TypeScript or something else.
Empirically the evidence suggests all those languages without Haskell's type system are doing fine, considering that the vast majority of the software that is actually running is not written in Haskell.
Frankly hiring is just a way harder problem than building a Web app, and fixing JS, Java, Python or whatever engineers is significantly easier than finding reasonable Haskell engineers.
> the vast majority of the software that is actually running is not written in Haskell.
The vast majority of software has horrible domain and security bugs! I am not one of those people who complains about the bloated state of websites or app stores, but the vast majority of software is not “fine,” especially software written in Python or JavaScript.
Utter nonsense to imply that Haskell would suddenly fix bad security.
Haskell code can have security issues too, hidden inside some too-clever-for-its-own-good, language-extension-ridden Haskell code.
To be honest, I think there is likely only a handful of people like Ed Kmett who actually grok and write effective Haskell code, and guess what, he'd also write good C++ code as well.
I didn’t say Haskell would fix bad security! My point is that it’s nonsense to suggest that existing languages are “just fine.”
But many security bugs in C or C++ come down to sloppy types, sloppy pointers, or sloppy concurrency, all of which are almost impossible to do in Haskell. Haskell is not a replacement for C or C++ but its ideas are (and should be) influencing systems programmers.
I don’t actually like Haskell. But there is a reason why major organizations are considering Rust over C and C++: those languages are simply not sufficient for writing secure and robust software in the 21st century, and Rust has taken many of the “best parts” of Haskell to improve systems programming.
Where correctness really matters, you might find Haskell. That makes it a very niche language, as most developers don't really care about correctness.
But you do find Haskell in places where complexity is high and correctness matters. Mostly banking, infrastructure, defense, and research applications.
Everyone makes this point about correctness, but I have never seen an easy to understand illustration of this. Now I am by no means a leet programmer but I have been reading about programming for more than a decade and every article about this point feels confusing. I even tried writing small stuff in haskell, still don't get this point.
Correctness is only an interesting problem when it's not easy to understand. Easy, bloggable examples aren't complex enough to be correctness challenges.
Closest I can think of is Purely Functional Data Structure (Chris Okasaki) or Category Theory for Programmers (Bartosz Milewski), but they're not exactly what you're after.
The truth is that Haskell is still very academically focused, so you're likely to see papers of substance much more often than books or blog posts.
When you hire people to write Haskell there's two groups that always show up: young enthusiasts who are frustrated with imperative programming in their (usually first or second) day job, and academics who got transplanted into industry to work on hard problems. I've interviewed people doing formal verification of CPU circuits at Intel, people who work on compilers, people who work on verifying termination of programs (for missile guidance), and people who work on financial institution backends.
What I haven't seen is too many experienced, pragmatic engineers (rather than computer scientists) who have spent their career writing Haskell.
The thing is correctness should not be viewed as the role of the language alone. You have to look at the whole ecosystem and organizational engineering practices.
Obviously people were writing life-critical code in C or ASM. You wouldn't expect a car manufacturer to write an ECU in the same way a game dev studio writes a game, but C and ASM can accommodate both.
From my experience Haskell seems to make easy things hard. It's the same mistake as a small startup thinking they need to do whatever Google / Facebook / Amazon are doing, when they're operating at one millionth the scale.
EDIT> By not focusing on correctness as the role of the language alone, you have a smooth path from quickly iterating in a non-safety-critical environment to whatever level of safety you need. In engineering safety is empirical, anyway. You don't just build a system and then assume it will work absent tests, so the idea of getting the code perfect is a bit of a red herring. You're going to have to test it rigorously anyway before anyone will get on your airplane.
There's a large body of academic research on this, and many, many failures to learn from, including many that cost lives or cost hundreds of millions of dollars. In practice, there are lots and lots of bugs in any C program of any complexity. That's why flight system software is written in Ada, not C.
In the military where you need to verify correctness of security controls or answer questions like "does my missile guidance program terminate", you can't use C or Assembly.
To achieve it with a dynamic language like Python...well, you can't (and shouldn't even attempt it).
The trouble is, nobody has the budget for that. Not even most military groups. So instead of relying on human process, we leverage machine-driven verification through the use of type systems, provers, SAT solvers, contracts, randomized testing and other academically sound techniques for quality assurance.
> From my experience Haskell seems to make easy things hard.
Haskell isn't designed to make undergraduate programming exercises easy. It's designed to make hard programming problems tractable. If you have an easy problem you shouldn't use Haskell. Hell, if you have a hard problem you probably shouldn't use Haskell.
Agree with most of what you've written -- thanks for the thoughtful response. Was going to respond that I recall the F-35 uses C++, but that's probably not a good argument ...
Regarding choice of language, I've reached a level of general competence where I don't really find any intractable problems that I can't handle with my toolset. So part of this may be just personal preference where there's no justification for slogging through the cost of learning yet another language, especially if I don't find it fun to program in. I used to invest a lot of time in learning new languages but I no longer find this a good use of my time.
I own a handful of Haskell books, and I was originally interested in Haskell for abstraction / DSLs. One idea was to go directly from a human-readable binary file-format description to an importer/exporter for that file type.
Questions on a couple of themes:
1) I have gone back to uni to study biophysics and learning biology deeply as well as previous machine-learning experience is making me think that fuzzy / probabilistic / redundant is actually the way to go for reliable complex systems. What we consider to be complex computer systems are actually ridiculously simple compared to biological mechanisms.
2) Are we in a transitional phase such that inventing better programming languages is an inefficient (though of course interesting and possibly instructive) path? Do we get machine-programmers soon enough that programming-language design stops mattering? By analogy think of all the effort still being spent on machine-readable data formats when we're close to having machines that read. At that point anything human-readable becomes machine-readable and your schema doesn't really matter.
> Was going to respond that I recall the F-35 uses C++
AFAIK its critical flight systems are still in Ada, but it's a beast of a software platform. My company joined the effort last year and found that even our tiny area was an incredible mess of different technologies.
> Regarding choice of language, I've reached a level of general competence where I don't really find any intractable problems that I can't handle with my toolset. So part of this may be just personal preference where there's no justification for slogging through the cost of learning yet another language, especially if I don't find it fun to program in. I used to invest a lot of time in learning new languages but I no longer find this a good use of my time.
Personally I wouldn't learn new tools unless your existing toolset proved inadequate. But I'm pretty jaded at this point. Learning new techniques, on the other hand, I would never stop doing. Everything from design patterns to compiler design, from dynamic programming to category theory - it all has made me a better programmer regardless of the tools I have at my disposal.
> 1) & 2)
I agree on 1, to a certain extent. The trouble with that approach is the development time of solutions is pretty excessive. Ironically, probabilistic programming suffers from this same issue. I would be satisfied if software engineering were just actual engineering, which it isn't.
I don't think we'll achieve the transition of software into an engineering discipline until we have better methods of communicating to both the human and computer in parallel. My guess is we might be 50 to 100 years away still.
Just to clarify, when Haskellers say that Haskell is good for "correctness" they don't mean it is good for "verification"[1]. "Verification" means something like "formally proving the behaviour of a system". It is very hard. "Correctness" to Haskellers means modest things like "the compiler tells me when I passed a NULL (Nothing) to a function that didn't expect it" or "the compiler tells me when I mixed a part name for a part serial number".
Modest, but nonetheless Haskell does this in an ergonomic way that is very effective for writing quality software.
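To illustrate, a minimal sketch of that modest kind of correctness (the names are made up): Maybe makes "no value" explicit, so the compiler rejects passing a possibly-missing value where a real one is required.

    import Data.Maybe (fromMaybe)

    greet :: String -> String
    greet name = "hello, " ++ name

    main :: IO ()
    main = do
      let maybeName = Nothing :: Maybe String
      -- greet maybeName  -- rejected at compile time: Maybe String is not String
      putStrLn (greet (fromMaybe "stranger" maybeName))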
Haskell is fantastic for programming systems where types are not just a tool but central to the goals of the program, e.g. a program for converting markup files from one format to another, like pandoc [1], which indeed is written in Haskell.
In my experience the worst errors are those of broken referential integrity in distributed systems, misinterpreted product specifications, and plain old "this doesn't do what I thought it did" code.
"This doesn't do what I thought it did" should have a strong connection to needing to rely on reading documentation made from comments. In some sense that form of literate programming is about limiting the scope of each individual misunderstanding without eliminating any of the misunderstandings.
The software engineering research community has almost no idea how to measure "ease of writing correct programs". So it becomes very difficult to make meaningful claims that Haskell programs will contain fewer bugs.
As far as I understand, Haskell’s original intent was to be like a testing ground for research into functional language design, which I think it really succeeded at.
Additionally, I do know of more than a few production deployments of Haskell in finance, but that doesn’t necessarily mean that that is what Haskell excels at.
As someone who has extensive experience in both Haskell and Clojure, I can say that the latter is definitely better suited for “transformation of data”.
I think it does, as the OP was asking what Haskell is great at. Clojure actively positions itself as a language great for data transformations, as does the community, where Haskell doesn’t particularly tailor towards this. In fact this is the first time I ever heard someone say that Haskell is a great choice for that.
I hear many more Haskellers argue it’s a great language for writing parsers, and with that I very much agree.
But positioning as such doesn't necessarily make it true. Just because Haskell isn't advertised as the "data transformation" language doesn't mean it does better or worse than any other language at it.
In my personal experience of using Clojure, Haskell and Python for data transformation and parsers, Haskell does the best job for both. So this is the second time you hear it ;) Anyway, we are just throwing anecdata at each other. Personal stories are still useful for programming experiences since "which is better" isn't that easy to measure. I would be happy to hear more about your experiences with Clojure.
Haskell is great for screwing with undergraduates' minds. Box proofs anyone? I did it more than two decades ago and for some reason haven't decided to pick it up again.
The thing with Haskell is that one often doesn't use in-place updates, and the language is designed so that this can be more efficient than in many other languages, though OCaml probably does this better with its incremental garbage collection strategy.
In re: Haskell's goodness or badness, compare and contrast with, say, PHP (crap language with wild success.) Or Prolog (a stately Elven language with deep but obscure success.) Haskell is what it is.
In re: types and data, FP is good for that. See e.g. "Domain Modeling Made Functional" by Scott Wlaschin ( https://www.youtube.com/watch?v=Up7LcbGZFuo ) it's about F# but the concepts apply cross-language.
In re: FP PLs "done right" I submit Elm lang. I've been using Elm recently and it gets the job done. It's weird though: on the one hand, as an experienced professional it feels like a toy. The error messages feel almost insulting, like I'm being patronized. On the other hand, once I got over that (silly) reaction, they're awesome. Changing code is a breeze, because Elm leverages the crap out of the type system, and the structure of the code and runtime prevent whole vast categories of errors.
Combine that with the sort of Domain-Driven development that Wlaschin is talking about and "baby, you got a stew going!"
I was introduced to OCaml through Elm - a deeply-opinionated language with strict guard rails. In Elm, there is either a happy path or there is no path. As a newcomer, you're not overwhelmed or paralyzed with a plethora of choices on how to get things done simply because Elm limits your choices. Each tool in your toolbox is documented with simple examples how to use that tool.
After finally jumping the fence and exploring/devving with other languages in the ML family (specifically F#), I still come back to Elm to see how it and the community do things: namely their best practices and explanations of FP concepts.
> A very clear indication of this is how Haskell treats programming terms. Instead of explaining Monads like all other design patterns out there, they insist on using some obscure definition from category theory to explain it.
The author is bashing people for explaining a concept from Category Theory with... dum dum dum... Category Theory?! Just because OP was looking for articles explaining monads as design patterns doesn't mean there aren't other people who, God forbid, are looking for theoretical articles about CT/Monads explained with Haskell.
Yes, there is value in pragmatism. Rant posts, on the other hand, have little value IMHO. Haskell has shortcomings like any other programming language. That absolutely does not make it a bad programming language.
Explaning monads in terms of category theory is like explaining regular expressions in terms of finite automata. It's a good idea if you're writing a textbook, but maybe not so much if you're writing documentation for users of a programming language.
> Rant posts, on the other hand, have little value IMHO.
Sure ;=)
But Haskell's explanations of Monads are, in my experience, really not adequate. Category Theory is the underlying theory, but do you need to know about mechanics and gears to drive a car?
There are explanations of Monads which are much much easier to understand for most people (people not having much to do with Category Theory).
E.g. by now the concept of map/flat_map is fairly widespread in most programming languages, and you can teach about Monads in terms of that fairly easily. And then add the additional abstraction layers used in the context of Monads, like abstracting over the "external world state" (IO) and "combining computation descriptions".
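A minimal sketch of that teaching path, using Maybe (halve is a made-up example): fmap is the familiar map, and (>>=) plays the role of flat_map.

    halve :: Int -> Maybe Int
    halve n = if even n then Just (n `div` 2) else Nothing

    main :: IO ()
    main = do
      print (fmap (+ 1) (Just 4))         -- Just 5 : the familiar map
      print (Just 8 >>= halve)            -- Just 4 : flat_map
      print (Just 8 >>= halve >>= halve)  -- Just 2 : chained computations
      print (Just 7 >>= halve >>= halve)  -- Nothing: failure short-circuits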
> But Haskell's explanations of Monads are, in my experience, really not adequate. Category Theory is the underlying theory, but do you need to know about mechanics and gears to drive a car?
Monads are part of the underlying category theory. So asking for a full explanation of Monads you are asking to explain part of the "mechanics and gears" in your car analogy.
> E.g. by now the concept of map, flat_map is fairly wide spread in most programming languages and you can teach about Monads in terms of that fairly easy. And then add the additional abstraction layer used in context of Monads like abstracting over the "external world state" (IO) and "combining computation descriptions".
I think I agree with your main point here that teaching the internal details first is often not the optimal method. You do not need to know how Monads work to use them to great effect in Haskell and other languages.
> but do you need to know about mechanics and gears to drive a car?
Depends on what you are trying to do. You don't need that knowledge in order to drive a car, but there are drivers that absolutely should know about mechanics and gears.
I think that 99% of programmers can get approximately all the value of monads by reading an article explaining how promises, options and lists all have this pattern in common, without any mention of monoids or endofunctors. I enjoyed the courses I did in category theory very much, but the benefit to my code has been zero.
You either think the astronomical number of bugs in delivered software is a problem or you don't (and good luck with that). The use of Haskell is a huge win on this metric and demonstrably so. You don't have bugs in any of the Haskell programmed OSes you actually use, nor your editor written in Haskell, your Haskell mp3 player nor your Haskell time machine and Haskell en-truthenator.
Pandoc is good. Some like the xmonad window manager; git-annex has some fans. There's probably 4 or 5 more too! Mostly centred around parsing.
And for all that you absolutely should learn Haskell. You'll enjoy it and it will enable you to think about programming in new and powerful ways. Just don't fall so deep you expect to actually ship anything you write.
The only Haskell software I regularly interact with is Hasura, and it has plenty of bugs (even ones around nulls and other things Haskell is supposed to magic away).
I wonder if there is any good research/data for this claimed correlation between bugs and the language used. I know there is some for development practices, but that's independent of the language used.
An Empirical Study on the Impact of Static Typing on Software Maintainability. Stefan Hanenberg, Sebastian Kleinschmager, Romain Robbes, Éric Tanter, Andreas Stefik. Empirical Software Engineering (2013-12-11). DOI: 10.1007/s10664-013-9289-1.
An Empirical Investigation of the Effects of Type Systems and Code Completion on API Usability using TypeScript and JavaScript in MS Visual Studio. Lars Fischer, Stefan Hanenberg. Proceedings of the 11th Symposium on Dynamic Languages (154-167), 2015.
A large-scale study of programming languages and code quality in GitHub. Ray et al., 2014
The TL;DR is: typing matters, but so does tooling. Programmers in dynamic languages are slightly slower and appear to produce more defects. There is a measurable benefit to static typing, but it's small.
Economics of Software Quality has function point to language conversion data, and function point to quality charts, so you could possibly infer from that.
> You either think the astronomical number of bugs in delivered software is a problem or you don't (and good luck with that).
We don't need luck. Competition has already given us the answer that most bugs are ok. The vast majority of the software that creates literally trillions of dollars of economic activity, and pays most of our bills, is not mission critical. Software fails all the time with the only significant consequence being a developer has to spend some time fixing it. Sure sometimes money is lost. So is money lost when factory equipment needs repair.
Some software does deserve to be bug free when it might put lives at risk, like flight software or medical software. Perhaps even operating systems. But the vast majority of the software I use on a day to day basis does not fit that category. What significant harm does a bug in VS Code or Slack or even my OS cause me? None.
If bug free software gave a significant economic competitive advantage, smart folks would start writing it and win big in the marketplace. Considering this has had decades to happen, and has not, it's very unlikely that bug free is the winning competitive advantage when it comes to software. I'd guess the winning advantage is that the software is useful. Much like a car with many small problems is still useful.
The truth is that much of the software that exists today simply would not be worth building bug free and would never be profitable.
You can extend this outside of software development to see that it's true in a more general sense. Most of the manufactured products we buy are not perfect and do not last forever. Some even have flaws from the day you buy them, but flaws that can be worked around. Once on vacation I bought a screwdriver at a dollar store. It was poorly manufactured and does have "bugs" compared to something I would have paid 10x the price for. But years later I still have it and it's good enough for some jobs.
You could potentially build a car that doesn't fail for any reason for a few hundred years. Only Bezos and friends could afford it. With that said, I'd like to see the negative externalities of pollution and waste included in the true cost of things we buy, so that we don't produce so many disposable things that society pays for in the long run. But that's a different discussion.
Please don't take this to mean that I don't take great pride in writing quality software that is as bug free as possible. I do. I also take great pride in meeting budget goals and deadlines. All successful businesses understand that competing goals must be balanced against each other.
A well thought out reply, thank you. Haskellers would typically agree with you. Unfortunately harry8 who you were replying to isn't one. He was being sarcastic.
I definitely agree with the documentation side of things, particularly lack of concrete examples.
I found that overall however, learning Haskell made me a better programmer, even if it's more useful as more of an academic than practical language.
Dealing with pure functions, with no access to loops, makes recursion paramount, and some of the tail recursion and pattern matching stuff is rather beautiful and useful.
The Haskell quicksort is the classic example of this:
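Presumably the familiar where-clause version, roughly:

    quicksort :: Ord a => [a] -> [a]
    quicksort [] = []
    quicksort (p:xs) = quicksort less ++ [p] ++ quicksort more
      where less = filter (<= p) xs
            more = filter (> p) xs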
> The Haskell quicksort is the classic example of this:
The Haskell quicksort is also a classic example of something that is small, beautiful, and still misses the point. Yes, it contains the core idea of quicksort (partition the list and divide and conquer) but it completely fails on the quick part, because Haskell lists are a leaky abstraction of real computer memory.
A real quicksort in Haskell is much more convoluted.
What's a real quicksort anyway? You could argue that a reasonably fast implementation of quicksort is much more convoluted, which it most certainly is, but that doesn't make this implementation any less real.
A key aspect of quicksort is that it sorts the list in-place. If you don't sort in-place, you don't have quicksort, and if you don't need an in-place sort, then quicksort is the wrong choice anyway.
Of course, canonical Haskell does not have a concept of in-place, which makes showing quicksort in Haskell also a questionable idea.
> You could argue that a reasonably fast implementation of quicksort is much more convoluted, which it most certainly is, but that doesn't make this implementation any less real.
A reasonably fast implementation of quicksort is straight-forward in any language that has arrays/vectors with destructive updates. This implementation will have issues with pathological cases, but that's a problem of the quicksort algorithm, not of the implementation (whereas the Haskell one shown above has problems in the implementation).
One can do this in Python, or any other language for that matter:
    def quicksort(xs):
        if len(xs) == 0:
            return []
        else:
            less = [x for x in xs[1:] if x <= xs[0]]
            more = [x for x in xs[1:] if x > xs[0]]
            return less + [xs[0]] + more
True, although you forgot the recursion. The Haskell filter expression is much nicer as well. You could perhaps be terser by using ternary conditionals:
    def quicksort(xs):
        return [] if len(xs) == 0 else quicksort([x for x in xs[1:] if x <= xs[0]]) + [xs[0]] + quicksort([x for x in xs[1:] if x > xs[0]])
And IMO, that python version is one million % more readable. In particular, because you defined more and less before using them, vs after in the Haskell example. I don't know if there's a way that could be achieved in Haskell too though.
I used to think so as well, until I realised at some point that defining things like this means you focus on the actual "business logic" up front, but the applicable definitions are never far (visually, spatially and logically). In my opinion it lets you get to grips on the overall logic before hassling you with certain specifics.
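That said, if you do want the definitions to appear before their use, a let-based variant reads much the same (a minimal sketch):

    quicksort :: Ord a => [a] -> [a]
    quicksort [] = []
    quicksort (p:xs) =
      let less = filter (<= p) xs
          more = filter (> p) xs
      in quicksort less ++ [p] ++ quicksort more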
That is quite elegant for that variant, which is a great fit for Haskell's features. How easy is it to write a different variant, like using a different pivot, or sorting in place?
Semi-recently I was interested in learning Datalog so I wanted to kick the tires on Datomic (written in Clojure). I ended up getting stuck on some things; posted questions to StackOverflow and didn't get a response. I then realized the Clojure tag is very low volume + engagement.
Then someone on Twitter told me that I'd be better off asking questions in the Clojurians Slack channel. So I go to sign up and... the signup link is broken. Had to flag down someone on Twitter or IRC to fix it.
As an outsider I felt a whiff of decay from the Clojure community (no offense)
Most probably, yeah; at least there are some informal uses. Whenever you make some diagram docs, you'll end up aligning with UML 'vocab'.
I helped a European project on UML graph versioning between various industrial UML applications, but I'd consider these niche.
IBM was heavy on UML since they bought the Rational suite... but I know IBM didn't use Rational themselves (at least not my department), though surely Rational users did.
I think heavily regulated sectors are the most prevalent users of UML: they like having a standard, having a lot of documentation, etc.
Yeah, it’s kind of mind-blowing to me that this post is so highly rated when to me it just seems like a bunch of unsubstantiated claims like this, as well as blatant misunderstandings of fundamental functional programming concepts.
I guess Haskell appears rarely enough on HN for FP posts to get a boost just for that. The content is not worth spending much time on, IMO. To each his own; if the guy really suffers with Haskell then so be it, may he have a lot of fun with Perl or JS.
If you try to set a google alert for "Clojure jobs" and "Haskell jobs", or just go through "HN: who's hiring" of the recent years and compare search results for Clojure and Haskell, you'd see that it's not "just slightly above". Clojure currently is the most widely used FP lang.
Doesn't Scala beat Clojure in industry? That would be my experience and it has higher number of jobs on Who's Hiring. My ranking for adoption goes: Scala > F# > Elixir > Clojure > Haskell > OCaml. F# doesn't make much of a showing on Who's Hiring but I think it has stronger adoption in industry than Clojure.
Note that I'm not bashing or defending any of the PLs mentioned. It is merely the fact - today, Clojure is the most popular FP language being utilized in the industry. Check the number of podcasts dedicated to Clojure https://www.fpcasts.com; the number of conferences: https://purelyfunctional.tv/functional-programming-conferenc... list of companies using it, job listings, etc.
It doesn't mean that this all makes the language better or worse. Also, the overall share of languages with strong FP semantics is still way too small compared to the use of imperative PLs. That fact doesn't make OOP better than FP and vice-versa.
Wow, I honestly wonder if this was caused by mixing up “functor” in the C++ and Haskell senses…
To expand on that, for those in the audience:
In Haskell, a functor is a type constructor (like “list”, “optional”, “future”, “I/O request”, &c.) with a way to map a function over it, covariantly, in a way that preserves its structure—i.e. without changing the shape of the container or structure of the action represented by the constructor, just the contained elements or produced result.
This is based on the more general notion of a functor in mathematics, which is a mapping between categories. The Haskell version is much more constrained, though: it only maps between Haskell types, and it's parametric (in other words, completely generic), not just any old mapping.
While in C++, a functor is a completely different thing: an object that can be called like a function. It’s thus equivalent to a closure, where the object fields are the captured values. And that sounds like the description being used here.
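To make the Haskell sense concrete, a minimal sketch: the same fmap maps over two different structures without changing their shape.

    main :: IO ()
    main = do
      print (fmap (+ 1) [1, 2, 3])  -- [2,3,4] : list shape preserved
      print (fmap (+ 1) (Just 41))  -- Just 42 : Maybe structure preserved
      print (fmap (+ 1) Nothing)    -- Nothing : still nothing to map over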
If the author was using the description to explain Functors to someone who only knew OOP, it's a reasonable start. But I got the impression the author was implying that is basically all you need to understand Functors, which is not the case.
Functor is a typeclass, which is the equivalent of an interface in Java, it's very basic, providing the ability to lift a function and execute it in the context of the functor (whatever that is, this is the interface, remember), a generalization of map.
So a lot of types have an implementation of Functor. In theory one of those implementations could be guilty of using hidden state and all that, but in practice all of them are just straightforward functions transforming values into a new value, not mutating them.
In short, not a single word of the description is correct.
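For reference, the entire interface is tiny; it is essentially just this, with the laws living in comments because the compiler doesn't enforce them:

    class Functor f where
      fmap :: (a -> b) -> f a -> f b

    -- Instances are expected to obey:
    --   fmap id      == id                 (identity)
    --   fmap (g . h) == fmap g . fmap h    (composition)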
Now I'm more confused. If there is no hidden internal state what is the difference between a function and a Functor? If it's just taking input and giving output without any internal state, that's just a function isn't it?
Edit: ok I did some more refreshing of memory. So Functor is an interface with some properties like identity[1] and distributive morphism (I think I'm wording that right). That's just an interface. I can implement that in Java or F# if I want. How is haskell helping here?
You can't define the interface in either language.
Implementations of Functor consist, in part, of type-level functions. In Haskell terms, these are "higher-kinded types". The standard example is the list type "[]" which, as a type-level function, takes an element type and gives back the type of lists whose elements are drawn from that element type.
In Java and F#, the only way to talk about the List type is in its fully applied context, where you've attached the element type. So maybe you've got "List<Int>", or you've got "List<String>" or you may have a generic "List<A>". What you don't have is the type-level function that's not been applied to anything. So there's no equivalent to the Haskell Functor implementation:
    instance Functor [] where
      fmap _ []     = []
      fmap f (x:xs) = f x : fmap f xs
This is barely half the story. What makes this useful in Haskell is the typeclass overloading, which makes it effortless to write functions that abstract over arbitrary Functors, and use "fmap" multiple times locally for different Functor instances, letting the type system figure out what implementation is needed to map over the particular type you're working with. And in such abstract code, where you may know very little about the Functor instance you're working with, it's extremely important that they all be absolutely law-abiding: in many cases, the laws are all you have to work with.
These two features, higher-kinding and typeclass polymorphism, make it worth talking about Functors, and I don't think you can appreciate Functors in Haskell without seeing the interaction of these features and just how much it impacts the code style of the average Haskeller.
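A tiny example of what the abstraction buys (doubleAll is a made-up name): one definition that works over any law-abiding Functor.

    doubleAll :: (Functor f, Num a) => f a -> f a
    doubleAll = fmap (* 2)

    main :: IO ()
    main = do
      print (doubleAll [1, 2, 3])  -- [2,4,6]
      print (doubleAll (Just 21))  -- Just 42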
Man, nothing against your dedication in explaining this to me, but every time I talk about Haskell it feels like a jargon salad. I very, very humbly ask you: so what? You wrote a short essay on this, and I still can't grok even in the slightest why this matters. With every other language I talk about, someone can at least tell me why a certain feature is helpful, even if I don't get it. What I got from this is that it allows abstracting mapping over types. But what does that give you?
Think about it this way. If the primary importance of something is only apparent from the big picture how is a user supposed to decide whether to use it or not? The big picture is rarely available to most programmers.
> Think about it this way. If the primary importance of something is only apparent from the big picture how is a user supposed to decide whether to use it or not? The big picture is rarely available to most programmers.
Maybe a user isn't supposed to. I believe that's the premise of Paul Graham's Blub Paradox [1]. I certainly didn't learn, say, Common Lisp, because I was doing a feature comparison. I had no idea what a lexical closure was at the time, and a couple of toy examples wouldn't have convinced me of their worth. I'd been programming quite happily without them for some time by then.
That is an excellent point. Haskell is such a fundamental shift in thinking, perhaps the only way to learn real application is to make something with it. But I would still maintain every language at least has a highly simplistic example of why certain features work well. In fact, my curiosity reignited by this discussion, I found a video series[1] on YouTube which somehow made it 100x more clear where Functors, Applicatives and Monads are to be used. The tree example is pretty abstract, but it helped me connect these features to my work. I still don't know how to get the most use out of it, but I think I get the USP. Still have the question about how Haskell is helping here, because I can write a hidden state changing function in Julia if I want. But one step at a time.
> Functors are basically an Object with a internal state changing method in typical OOP terms.
If I was writing a functional language, in an oop language, I could implement functors at least partially with an Object with an internal state changing method. I could not model/implement an Object with an internal state changing method in a fp language via a functor.
The main issue with the author's statement is that it makes a claim of approximate equivalence and does not back it up with additional evidence or examples.
>> I could not model/implement an Object with an internal state changing method in a fp language via a functor.
> I don't think anyone claimed that.
I was not trying to refute the opposite claim. I was giving info on the differences of functors and the referenced oop feature. My points run somewhat counter to the authors claim "Functors are basically an Object with a internal state changing method in typical OOP terms." or at least what I think some reads walk away with. Hard to say what is in the author's head with the provided text.
> Now we are getting somewhere. You said partially, what features are being left out?
I did not mean to imply features would be left out but rather I would use more than the one oop feature, "an Object with an internal state changing method", to implement functors.
I find the "I'm more productive with Python" claim in the post to be specious.
Maybe I've been abused (and abusive) by bad programming practices with python in the past, but it seems you really need to lint and exercise python code to have any kind of confidence that something you think is correct won't blow up at runtime.
False confidence that something is "ready to go", is the worst, and it can cost a lot of money, and sometimes human life.
Erlang has this problem too. It's so late binding you can make spelling mistakes and it won't be caught until runtime. It's actually the basis of some very powerful features, but you really have to know that coding in Erlang is not like coding in Rust, Haskell, or something else strongly typed.
So while I think the author has a point that concretions can get you into an inflexible, hard to refactor mess over time, I think sometimes those concretions don't have to be as bad as they seem.
Consider Go. Interfaces are a form of concretion too, but the advice is to keep them small. An interface of exactly one function can be a beautiful thing. I think it's better to have a type implement many interfaces than to have a type implement one huge interface.
> One, types are a concretion. If you’re looking for higher level of abstractions to get flexible behaviour, you’re ultimately going to have a world of pain
I don't see why types should be in the way of flexible behaviour. Frameworks like Spring in Java use types to direct dependency injection and it works well.
Also this:
> Types wrap data and treat it like a black box whereas schema describes the shape and content of data.
Types can be made abstract and blackboxy, and sometimes that's what you want. But doesn't a record type give info about its fields? Doesn't a sum type give info about possible alternatives in the values?
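A minimal sketch of the point (made-up types): the definitions read like a schema.

    -- The record spells out the shape of the data...
    data User = User
      { userName :: String
      , userAge  :: Int
      }

    -- ...and the sum type spells out the possible alternatives.
    data Payment
      = Card String  -- card number
      | Invoice      -- no extra data

    main :: IO ()
    main = print (userAge (User "Ada" 36))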
> As such haskell ultimately suffers a lot when they have to interact with the real world. Suddenly they are left reeling as they find out that the real world is, in fact, dynamic.
The trick is to know what we are really modelling. If we are deserializing domain objects from JSON, it makes sense to have types for the domain objects. If we are writing a tool like, say, jq, perhaps we should merely have a datatype for the JSON tree itself: http://hackage.haskell.org/package/aeson-1.5.5.1/docs/Data-A...
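A minimal sketch of that approach, assuming the aeson package: no domain types at all, the whole document is just one dynamic Value tree.

    {-# LANGUAGE OverloadedStrings #-}
    import Data.Aeson (Value, decode)

    main :: IO ()
    main = do
      let v = decode "{\"name\":\"jq\",\"stars\":1}" :: Maybe Value
      print v  -- Just (Object ...): untyped JSON, parsed but not interpreted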
> Bottom up design is something we’ve learnt collectively as a good way to be much more flexible in responding to change.
Even if you want to design bottom-up, the moment you want to add anything to your program, you need a little top-down thinking, if only at the micro-level. You want to create something new that isn't there, and then think how to accomplish that with the tools you have.
I discovered Haskell in a comparative programming languages course I took at university last semester, and it completely changed how I think about programming. I can't speak for an industrial use case, but for a hobbyist writing open source software and personal projects, programming in Haskell has been an absolute joy and has reinvigorated my love for programming.
Tools like IHP (https://ihp.digitallyinduced.com/) are a great example of not only the beauty of the language, but when combined with Nix and the IHP IDE, a better development experience than I ever had with Rails or any other language.
If you are pragmatically minded, you can get stuff done in Haskell. In my experience there's a lack of online resources for this kind of work in Haskell, but that's what I and others especially in the IHP world are working on. If you just want to experiment, Haskell is great for that too.
I have coded in Haskell, although only to get a sense of it, and quite a long time ago. I'm not going to pontificate about it here, but I would love to be able to filter these comments based on the length and depth of experience of the commenter, both in Haskell and other language of choice.
There's a lot being said here that's true, and a lot that's personal opinion. I'd love to know the personal experiences that drive those opinions.
Alas, I suspect we will get the opinions, but no way to assess how accurate they might be.
I have been playing around with Haskell for many years, and wrote a short Haskell book. That said, I still consider myself a newbie at the language and have only used it for customer work one time.
Many years ago, I set aside learning Scala for a while, dug into Haskell (again), and for reasons that I don’t even understand myself, I then had an easier time using Scala. Same comment for Clojure (not my favorite language by far, but I have used it often professionally).
One thing resonated in the article “Very senior Haskellers calls for “type oriented programming” which goes like this: Write types and interfaces for the types and fill in the blanks.” I prefer just using simple types, and one thing I like about Clojure is just using the built in simple data structures.
In any case, I think it is very worthwhile using a wide variety of languages but I understand it when other people prefer to not spend the resources experimenting.
Given the opinion expressed in this blog post I wonder how the author intends to process data that they don’t know the type of :P
Yes, Haskell is born out of academia. That’s where its strengths come from but also its weaknesses. Many new languages and language features are directly inspired by Haskell’s strong typing and type safety and the world of programming languages is better for it.
I do question whether Haskell will ever be ready for the mainstream - probably not. But hopefully some other more practical ML derivative will gain enough momentum to truly become mainstream. Arguably Scala is that language, but Scala feels like the C++ of ML-derivatives - the language is too big and the syntax irks me.
I think an aspect of many paradigms and then the multiparadigm languages is that people don't see the original two camps of CS pioneers.
Haskell is from math, brevity, correctness.
Perl is a linguist's take on applied math.
I would prefer to write a proof in haskell and a manifesto that expresses what I think, however irrational, in perl.
("If I had more time I would have written a shorter letter." and general sentiment towards first language vs math education imply that perl will be the easier and more comfortable language for most if that is really the primary goal. But the slight irrational thinking as we add more layers means eventual collapse.)
Haskell has it’s fair share of problems, the major one being pointed out in this article, that in spite of all of SPJ’s noble efforts it’s still primarily stuck in the “ivory tower”, bogged down by tons of libs and docs that are full of academic or theoretical jargon, and unfortunately plagued in some sub communities, by people that have intellectual superiority complexes, which makes it alienating to a large number of potential users and leads to bitter posts like this.
But in terms of the language itself, I really don’t see the issues raised here and the author of the article comes off as having not understood the language very well. Libraries like Euterpea show that the language is insanely expressive and that they’ve, IMO, created a language that is very solid from a language design perspective. Sure there are wrinkles like records, but otherwise it’s generally pretty great from an expressive power standpoint.
Communication only works when people are speaking the same language. Of course you are going to struggle when that language is different from what you are used to (lazy, functional, pure), but if you spend time in language classes and eventually learn it, you'll find that you can probably communicate your ideas just as well in this language too. If you decide you don't like the sound of the language then don't communicate in it, but that's your choice, not a knock on the language itself. This article feels like it was written by someone who struggled to learn Haskell and gave up and ranted about it, rather than persevering or simply deciding it wasn't for them and moving on. If you are comparing functors to objects in OOP, I don't think you actually grokked it.
Yeah, this article was bizarre for me to read. My understanding is that Haskell doesn't aim to be big and popular (Rust and Go are nice counterexamples of languages that do); its intended use is in academia by CS researchers, and its users seem to be quite satisfied with it. The fact that it gets any sort of traction at all outside this suggests that it has actually exceeded expectations of its success.
> It is as if Haskell doesn’t want “normal” people to understand it
Well, yeah kind of. I don't think it's as hostile as that though, the language isn't actively against "normal" people (I guess, that means your common or garden OOP programmer) it just doesn't actively court them.
The comments about how the community treat newcomers are quite interesting though and if they are true then that's pretty shitty behaviour. It's one thing if the language isn't aimed at a given demographic of programmers, but it's another thing for people to use it as an excuse to beat down and berate others.
I know many academics that exclusively use Haskell or even Idris and Agda which are even more obscure but more correct.
Agda is so “correct” that the language is total and not Turing complete with the exception of a “partiality monad” that allows for partial and Turing complete computations — similarly to how Haskell isolates effects on the world from the main language and encapsulates them into a special type, Agda isolates partial computations.
And this language is very much used within specific academic fields.
The one valid definition of success for a programming language is that it looks pretty and thus the most successful programming language is Piet ( http://www.dangermouse.net/esoteric/piet.html )
I mean, joking aside, it's always fascinating when people can't wrap their minds around the idea that their criteria for success or quality or goodness are subjective and not universal.
When people talk about the beauty of Haskell due to it’s definition they are making a qualitative claim.
When people say Haskell sucks because of low adoption rate they are making a quantitative claim.
As a research language there is real value in Haskell not getting broad adoption, since that would increase the demands for maintaining backwards compatibility.
As an industry language that philosophy is a huge detriment because it introduces risk and extra costs in keeping up with an ever changing language.
These things are not a one dimensional structure, otherwise Java and C are pure perfection due to adoption and Rust and Haskell should be forcefully shutdown or whatever.
One niche I'm curious about is the use of Haskell for blockchains and smart contracts. Cardano is one such project: https://docs.cardano.org. It's still very experimental and won't have smart contracts before March/April of this year (likely later). The blockchain and its smart contract DSLs are implemented in Haskell; it will be interesting to see how that compares to what Ethereum is doing with Solidity.
I really like Haskell, and I enjoy Rust as well. I really am not a fan of OOP languages, largely because the data flow can be anything you want it to be. When the data flow is linear I find code far easier to understand.
One thing I dont like about Haskell:
GHC doesn't seem to be aware of some of its escape hatches (allowing for imperative code) and performs incorrect optimizations. I had some trouble working with IORef recently and had to write some hacky code to get it to work properly.
This has nothing to do with the programming language in question. The anti-abstraction, everything must be concrete argument leads to madness. You can do type oriented design with python3 or typescript.
Since writing this, I've discovered other systems with the same general philosophy:
* Avro IDL
* Fuchsia/Zircon and the associated IDL (FIDL)
You can agree or disagree about the design decisions made by people contributing to the Haskell language, but you have to admit that it's a pretty clever bunch of people. Haskell has contributed more to PL research in the past few decades than any other language. That alone makes it a language every programmer should learn. Not because they should use it daily, but to learn what functional programming is in its purest form.
I have been giving Python a go for writing a programme which concurrently scrapes and collates data from multiple sources.
I chose to figure out how to do this in Python instead of my usual Haskell because I wanted to see how productive I would be compared to how I did similar things in Haskell. I wanted to get away from what I call type-wankery, which the author seems to be complaining about as well.
I’ve been using the Python asyncio library and have been thinking that Python could certainly do with monads in this context. Then I remembered what a ball-ache stacking monads is. In any case, I can’t help seeing programmes in an algebraic context after using Haskell.
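For anyone who hasn't run into it, the kind of stacking I mean looks roughly like this (mtl-style, with made-up names):

    import Control.Monad.Reader (ReaderT, runReaderT, ask)
    import Control.Monad.Except (ExceptT, runExceptT, throwError)
    import Control.Monad.IO.Class (liftIO)

    newtype Config = Config { baseUrl :: String }

    -- Reader for configuration, Except for failures, IO underneath.
    type App a = ReaderT Config (ExceptT String IO) a

    fetch :: String -> App ()
    fetch path = do
      cfg <- ask
      if null path
        then throwError "empty path"
        else liftIO (putStrLn ("GET " ++ baseUrl cfg ++ path))

    main :: IO ()
    main = do
      r <- runExceptT (runReaderT (fetch "/feed") (Config "http://example.com"))
      print r  -- Right () on success, Left "empty path" on failure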
The author, and many people who complain about types, seem to forget that types and interfaces exist whether or not they are explicitly defined.
Is the problem with such an attractive type system that it leads people to define interfaces before they are fully explored? In that case, maybe it could be argued that a less attractive type system would keep people away from so much type-wankery.
It’s hard to know where the best balance is. I am considering F# now. It also seems to have working IDEs, which is nice.
Rust would certainly not have existed in its current form without Haskell.
There are many such "progenitor languages" that focus primarily on theoretical elegance, such as Smalltalk, Scheme, and Haskell, that might not see huge adoption but do leave a rather lasting influence on many languages that grew far larger than they did.
I'm not sure why the article hates on types that much. I mean:
- schemas are types
- types do not require encapsulation ("even" in Java, "plain old data" types which do not give any encapsulation are a thing)
- many popular languages without static type ascription have by now added optional type ascription, or there exists a derivative which does and which is recommended for larger projects (e.g. TypeScript and Python 3 type hints)
- I can understand why people might not like nominal type systems, or too many explicit type ascriptions
- In my experience, when maintaining & changing software, having a proper type system makes the task easier and faster (if the type system is not abused; idk how common type system abuse is in Haskell). And in my experience even a non-nominal type system as seen in TypeScript is already helpful, making changes easier than in JavaScript. But it's still harder to change (larger) projects in TypeScript than e.g. in Rust (assuming the type system was not abused!!).
- To be fair, abusing a type system is in my experience often worse than not having a nice type system.
I used to hate on types, largely because I had mostly learned to program in Java, where you had to write the name of the type twice just to initialize an object, and because of the general tendency in Java to rely on explicitly specified types everywhere.
I felt that the value gained from all this extra work was minimal, so when I got into Python it felt like being unshackled from utterly pointless chains.
After around 10 years of developing in Java and Python/php, I learned Haskell, and realized that types are only shackles in languages that refuse to utilize them in an efficient and useful manner (i.e. type inference instead of endless type specification or type casting at every point of mutation. And don’t even get me started on the destructive horrors of automatic/implicit type coercion..)
In Haskell, types feel like the nubs on jigsaw puzzle pieces, guiding you to what (potentially) fits together, which drastically reduces the amount of programming detail you need to keep in your head, so you can focus on the actual, fuzzy, real-life semantics of the system you're developing.
This is where they really lose me. No, they don't. Only bad abstractions leak.
How did real limitations with statistics ("all models are wrong, some are useful") mutate into pessimism about designed, not discovered, systems? When did we all become such mindless empiricists?
About Haskell missing pragmatism, ever tried IHP? I don't think the development environment sucks, but you are free to disagree. In my opinion, the developer experience is ultraslick, it's easy for beginners to get started with and there is very little type theory to getting started.
Yes, Haskell has been bad at teaching beginners, forcing people to learn theory before building something, but it has really been catching on for the better recently.
I think a "bad" and "good" programming language can be subjective, but I have never tried more enjoyable languages than Elm and Haskell. I feel confident using these languages because of the static types.
Some will probably say that the usefulness of types is an illusion and I can live with that, but I don't think it is.
In my opinion the problem with Haskell is that terminology from programming language research (e.g. type theory) bleeds into the language, which, combined with the sometimes apparent "elitism", makes it just not very appealing for anyone not deep into the relevant parts of computer science research.
I mean I don't need to know how engineers speak about the internals of my car to drive it (but it can be helpful), but Haskell kinda expects that.
Also it's not uncommon in Haskell to see functionality named not after what it does but after the theoretical computer science properties it has. That might describe how you can combine it at the type system level, but it isn't that useful for easily writing programs, and it makes any "types as documentation" much worse.
I suspect this is because Haskell is proudly a "research"/bleeding-edge language. It's distinctly _not_ a Java or a .Net and that's ok, as it's not trying to fulfill the same role.
Things in Haskell have names from theoretical computer science, because people are using it to express and experiment with those ideas - they don't need or want to spend undue time describing a monad or a functor for the 15-millionth time, they want to use them in some new cutting-edge way.
I once saw (on this website) Haskell described as "the primordial soup of programming language features". Wild and weird ideas get tried-out in Haskell, and then 5-10 years later someone takes the lessons learned from the evolution of some feature in Haskell and brings it to mainstream programming in some elegant, distilled form. And I think that's super cool.
Please keep in mind, when Haskell introduced some of these features they decided to take existing terminology from theory rather than make something up. The big benefit from this is that people already with the theoretical terminology have an intuition to start from, plus, users of the language also build some intuition for the theory. Win win. Given that they often broke ground on the features and couldn't borrow from other languages can you say this was a bad decision?
There are languages more suited to program in, and there are languages more suited to move the field of programming itself forward. I guess Haskell belongs to the latter category and when people try to ram it into the former set, they get frustrated.
> "When you’re reading this article, or talking to someone, do you claim that you’re saying a string, a number, a Text, or something else. No! Data is simply data, and data is inherently dynamic"
I think I disagree with this one. Yes, data is inherently dynamic, but data usually has a type associated with it. For me a type is like a rule. E.g. u8 is an 8-bit unsigned int covering 0-255: just a rule.
In English we have grammar, which is a rule, or another type so to speak. a-zA-Z etc. doesn't make meaning on its own, but a combination of rules makes it meaningful and beautiful.
The article is generally imprecise, and I completely agree with you. Stating that `... data is inherently dynamic` in this context is plainly, deductively wrong. If we cannot assign a type to data, we can't even interpret it as a bit-string, because, hey, that's a type! (the type of a string of 1's and 0's)
Types are mostly used to reason about a program internally. But sometimes we need to consider the real world. When doing so we need to interpret data. In the Haskell sense this means assigning types to it.
A language like Elm has exemplified how the internal / external data barrier can be implemented for strictly typed languages. Haskell just considers everything a string and lets it be up to the developer to parse it.
Any project I have built with Haskell, I could have built with any other language - maybe even faster (for some definitions of fast), but not with as little personal labor. I've found that no matter the scale, Haskell allows me to solve the problem without effort, because I'm not jerry-rigging commands to do what I want. I just eventually solve the problem.
I say this as a computer engineer, so "how the computer works" is my training. But god it's nice to build things without the inherent manual labor of imperative programming.
What? I've worked in a whole bunch of languages and I tend to find that Haskell is the easiest on this front. Hackage has all the definitions for packages cross linking to each other.
At least on the types-are-documentation front, I find that really helpful - just see what you have, what you want, and then find a way from one to the other. In JavaScript on the other hand it's hard to tell what the structure of some object needs to be.
I think we can find many analogies between the history of "Haskell/FP vs mainstream languages/OOP" and that of "RISC vs x86". The article says Haskell is hard to learn/use and not appealing for business. A similar article could be written about RISC architectures in 2007, when x86 ruled. Then Steve Jobs built the iPhone and with it an empire based on a RISC architecture. And now Apple Silicon is threatening desktops and servers, the last stronghold of x86.
Haskell has a lot of problems, but this article is not good, and does not describe them well. It reads like it was written by someone who has tried Haskell, and has sort of written some code in it, but has mostly been put off by bad and confusing tutorials, and has never really grokked the language. Indeed, look at the author's GitHub - of their 50 projects, only 1 is Haskell, and that is a handful of commits before giving up.
My main complaint with Haskell is that it's actually a great language, wrapped up in (1) hilariously beginner-unfriendly syntax and tooling, and (2) a library ecosystem that does not quite understand the value of friendly documentation (although this is actually improving a lot). There are also some weird warts around the language itself, but those are mostly tolerable and not so much worse than the weird warts in all other languages.
My specific complaints about beginner-unfriendliness in syntax and tooling:
- The compiler will infer the types of names as generally as possible, and will run this inference both forwards ("this variable was assigned a string, so must be a string later") and backwards ("this variable is used as a string, and therefore must have been a string earlier") depending on what information is available for inference. This means that type errors are hit-or-miss; about 60% of the time, the type error will point you to your actual mistake, and the rest of the time it will point you to where the type inference algorithm got stuck in unification (this may not be the location you made your actual mistake, because the algorithm will try its best to infer concrete types working under the assumption that you didn't make a mistake, so it sometimes gets stuck trying to unify types at an earlier/later point than your actual mistake).
- The syntax is VERY minimal, and prefers minimalism over readability. In particular, Haskell has no explicit parentheses for function calls, and has implicit currying! This means that it is very easy to accidentally get the arity wrong in a function call (for example, `foo (bar baz) quux` and `foo bar baz quux` mean very different things!), and create an extraordinarily confusing type error (in the best case, an error about applying things to the wrong number of arguments; in the worst case, errors about constructing infinite types). (There's a small sketch of this pitfall after this list.)
- This minimal syntax also means that you can sometimes type something that is _almost_ correct, but is actually valid syntax that means something totally different! Fortunately, your almost-correct expression will not type-check (so you won't ever mean something else without getting an error), but it's still a head-scratcher in the beginning to figure out exactly what the compiler is taking your code to mean. In some of these cases, you may even type something that is not syntactically valid vanilla Haskell, but _is_ valid Haskell-with-some-extension, and the compiler will unhelpfully suggest that you turn the extension on even though it may not be what you want (and also provides no explanation of the extension).
- The compiler tells you what expression has a type error, but never actually tells you the exact term that has a type error. You have to sort of guess at this by understanding the rough high-level of how the type inference works. It's something you have to really pick up, and is hilariously unfriendly compared to e.g. `rustc --explain`.
- Print-debugging works differently than what you'd expect. Nobody ever teaches you that `trace` is a thing (seriously, this should be thing #2 that is taught), and nobody ever teaches you that you're getting trace messages in a different order than you expect because Haskell is evaluating expressions with graph reduction, not in the order of the lines of code you wrote. (There's a trace sketch after this list, too.)
Really this boils down to: even though Haskell will do a very good job at preventing you from holding it wrong, you really do have to have a decent grasp on what you're doing to write a program. As a complete beginner, you can't just put something you think will mostly work and then rely on simple compiler errors to tell you where to spot fix (unlike e.g. Go, Java, etc.). Fortunately, this gets much better over time - you learn how to interpret compiler errors better as you get a grasp on the language, and you learn tricks for building incrementally correct programs up using `undefined` and friends.
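As promised above, a minimal sketch of the arity pitfall (add3 is a made-up name):

    add3 :: Int -> Int -> Int -> Int
    add3 a b c = a + b + c

    main :: IO ()
    main = do
      print (add3 1 2 3)       -- fine: 6
      let inc = add3 1 2       -- implicit currying: a partially applied function
      print (inc 10)           -- 13
      -- print (add3 (1 2) 3)  -- type error: (1 2) applies 1 to 2, producing
      --                       -- exactly the confusing errors described above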
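And a minimal trace sketch: under lazy evaluation, messages appear when values are forced, not in source order.

    import Debug.Trace (trace)

    half :: Int -> Int
    half x = trace ("halving " ++ show x) (x `div` 2)

    main :: IO ()
    main = do
      let xs = map half [84, 10]
      print (head xs)  -- prints "halving 84", then 42; "halving 10" never
                       -- appears, because the second element is never forced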
My specific complaints about documentation:
- Contrary to this article, types actually ARE decent documentation in Haskell. The constraints you can do with types in a pure language are much, much stronger than in other languages you've used before. Trust me - this is a Blub paradox thing. It's just a qualitatively different kind of type safety.
- Unfortunately, although types do a great job at preventing me from plugging the pipes together wrong, they still don't tell me anything about the _semantics_ of a function. Lots of older libraries do a very bad job at documenting semantics, but this is getting a lot better.
- Arguments to Haskell functions are not named (or at least, their names are not exposed in documentation). This makes for some confusion: what am I supposed to pass as arguments to this `foo :: String -> String -> String` function? (One common mitigation is sketched after this list.)
- Lastly, a lot of libraries document very academically. Here's an example: https://hackage.haskell.org/package/logict-0.7.0.3/docs/Cont.... The documentation here is written assuming you understand how Logic computation works, provides no examples, and you're supposed to read the module documentation and actually go and _read the paper_ (in fairness, it's pretty short) that they've linked. This is a far cry from the NPM world (which, for all its faults, has really embraced accessibility in documentation), where everybody has a quick start.
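On the unnamed-arguments point, one common mitigation (a sketch, with made-up names) is to wrap positional arguments in newtypes, so the signature itself says what goes where:

    newtype FirstName = FirstName String
    newtype LastName  = LastName  String

    -- compare: fullName :: String -> String -> String
    fullName :: FirstName -> LastName -> String
    fullName (FirstName f) (LastName l) = f ++ " " ++ l

The wrappers are erased at compile time, so this costs nothing at runtime, and swapping the arguments becomes a type error instead of a silent bug.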
Overall though, it's a really good language. Once you get monads (and a HUGE part of the difficulty here is not the idea of monads itself, but Haskell's really unfriendly syntax and error messages; IMO the _real_ lightbulb moment is grokking the difference between data, types, data constructors, type constructors, and typeclasses), a lot of things make a lot more sense. The testing story is amazing, the refactoring experience is amazing, the incremental development and REPL stories are great, and there are some truly astounding abstractions you can use that are just impossible to express in other languages. It's my regular side project driver, and my team maintains a Haskell codebase in production.
Lastly, a word of advice to anyone looking to actually make a good-faith evaluation of Haskell: try it, make something substantive with it, and stick with it until you actually grok it. Ask for help from the FP Slack! Seriously, don't just pick it up, get stumped by error messages, and give up. The tooling is very beginner-unfriendly, but fortunately the humans over at the Functional Programming Slack (https://fpchat-invite.herokuapp.com/) are VERY friendly. As a first project, I'd recommend doing the Advent of Code, or building a small web application (IMO with Scotty, which feels like Express, rather than Yesod, which feels like Rails. Yesod is almost certainly better in production, but has a different and IMO less instructive learning curve). If anyone needs help, follow along with my AoC 2020 (https://github.com/liftM/advent2020), although be aware that some things I do there are a bit fancier than necessary.
I find that articles like this, which complain about things like type abstractness or the mathy concept names, really are not representative of the actual pain points with Haskell, and are more FUD than useful criticism. Yes, the tooling is beginner-unfriendly. Yes, you will probably need to ask some questions on the FP Slack on your path to learning, about things a friendlier language would have documented. But this mystical aura of "ooh, there are types and they are very abstract!" is not grounded in the reality of writing Haskell, and IMO it misleads people genuinely trying to assess the language.
> Abstractions with types is a bad type of abstraction because it ignores the basic fact that programs deal with data, and data has no types.
Or in the conventional Haskell notation:
`data HasNoTypes = BasicFact`
Yet everyone is busy adding typing to their favourite dynamic languages. Even languages like JavaScript, whose design shows open contempt for the idea of type safety, are now getting support for static typing (TypeScript/Flow).
IMHO, Haskell is a bridge between practical programming and mathematics. That's why they use "monad" instead of "flatMappable", for example. It is connecting two different languages that cover the same concepts.
Regarding types, there is a talk by Gilad Bracha titled "Types are antimodular" that makes a similar argument to the article's.
Does it explain why I got truly fascinated by Haskell and at the same time I did not feel like writing any useful program in it? (not that I ever write useful programs, just saying).
I would say that because it is a hard language to learn, it holds itself back. Otherwise it would thrive: people would be interested in it, and libraries would flourish.
I feel Rust is somewhat in the same place: it has a rather steep learning curve, and partly because of that there are not enough mature libraries out there.
I'm very sorry OP didn't stay the course, and opted to write an angry post instead.
> Every few months some newbie comes along to the Haskell reddit and asks why is Haskell documentation so confusing, and the post get destroyed by people telling them to get good or just pointing at academic papers.
The Haskell subreddit has a monthly 'Hask Anything', and is generally a friendly forum.
> “Haskell doesn’t suck, the development environment does”
There are certainly people who would agree with this. However, our language server is pretty complete nowadays, we have amazing static analysis tools and a variety of formatters, and I can't think of any tooling I desperately miss. Then again, I'm an editor person, not an IDE person...
> Write types and interfaces for the types and fill in the blanks. Sounds familiar? Because that’s we’ve been doing with Java and the like using UML
Ouch. It's very rare to hear anyone complain about having a type system with strong guarantees. UML... Well I guess I'm not a fan of specifying your design outside of your source code...
> Professional developers find themselves describing the system in such concrete ways using types that when the requirements change suddenly their precious castle is reduced to dust
Hmm, so generally the better at Haskell you are, the more you express requirements in terms of typeclass constraints rather than concrete types. It's a style called `Tagless Final`; I recommend taking a look (a sketch below). It gives you strong guarantees without breaking any castles.
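A minimal sketch of the style (the class and its methods are made up): programs are written against a typeclass constraint instead of a concrete type, so changing the backing implementation doesn't break callers.

    class Monad m => KeyValueStore m where
      getKey :: String -> m (Maybe String)
      putKey :: String -> String -> m ()

    -- business logic depends only on the constraint, not on any
    -- particular store (Redis, in-memory map, test stub, ...)
    copyKey :: KeyValueStore m => String -> String -> m ()
    copyKey from to = do
      mv <- getKey from
      case mv of
        Just v  -> putKey to v
        Nothing -> pure ()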
> Somehow Haskellers think that they are more productive with Haskell when the reality in the real world is that only a few languages can proudly make that claim, such as Python and Lisp. These are languages battle tested in actual software products
As other people have pointed out, there are companies that use all three of these. I think it's safe to assume OP didn't measure the productivity of any developers before writing this.
> Abstractions with types is a bad type of abstraction because it ignores the basic fact that programs deal with data, and data has no types
Have you heard of datatypes?
> In most cases people are looking for schemas, not types
Schemas describe the structure of data. Types describe the structure of data.
> Types wrap data and treat it like a black box whereas schema describes the shape and content of data
The former is actually more of a slight against encapsulation, which is something you can do in Haskell, but which is probably more of a go-to recommendation in OOP languages. It can be pretty useful to expose only the interfaces you want and hide the implementation (a sketch below).
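For what it's worth, here's roughly what that looks like in Haskell (a sketch; the module is hypothetical): export the type but not its constructor, so the only way in is through the functions you expose.

    module Email (Email, mkEmail, emailText) where

    newtype Email = Email String   -- the Email constructor is NOT exported

    -- the only way to build an Email is through validation
    mkEmail :: String -> Maybe Email
    mkEmail s
      | '@' `elem` s = Just (Email s)
      | otherwise    = Nothing

    emailText :: Email -> String
    emailText (Email s) = s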
> most haskellers would agree that purescript is a better choice compared to GHCJS. No man is an island, and yet Haskellers generally convinced themselves that other languages need to learn from Haskell
Are you saying Purescript didn't learn from Haskell? Have you seen PureScript?
> Instead of explaining Monads like all other design patterns out there, they insist on using some obscure definition from category theory to explain it
Pick a monad explanation that suits you. There are plenty of posts / tutorials online.
Some people like burritos, some people like mathematics.
I recommend looking at the type of `(>>=)`, as in the end that's the only operation Monad adds (above Applicative)...
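For reference, that type, plus a sketch of what it means for Maybe:

    -- the one operation Monad adds over Applicative:
    -- (>>=) :: Monad m => m a -> (a -> m b) -> m b

    -- for Maybe it just means "keep going if there's a value":
    example1, example2 :: Maybe Int
    example1 = Just 3  >>= \x -> Just (x + 1)   -- Just 4
    example2 = Nothing >>= \x -> Just (x + 1)   -- Nothing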
> Look, when monads are used without the infix notation, the result is horrendous and it looks like yet another callback chain
I think you mean when they're used without `do notation`.
This point seems moot, seeing as we have do notation.
Explicit binds can be readable too, if you're going pointfree (compare the sketch below).
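For comparison, the same computation three ways (a sketch):

    withDo :: Maybe Int
    withDo = do
      x <- Just 3
      y <- Just 4
      pure (x + y)

    -- the same thing desugared to explicit binds:
    withBind :: Maybe Int
    withBind = Just 3 >>= \x -> Just 4 >>= \y -> pure (x + y)

    -- applicative style, since the steps don't depend on each other:
    withAp :: Maybe Int
    withAp = (+) <$> Just 3 <*> Just 4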
> I tried it myself. Of course, there are very smart people who do understand Monads, but most people would just tell you to “get an intuition for it”, which to be honest is total BS
You'll get there. I recommend leaving do-notation behind for a while and writing some functions over Maybe, [], and Writer (a starting point is sketched below).
Don't give up on the learning process, there's still time to write a `How I learned Haskell` blog post, or maybe you can write the umpteenth Monad tutorial :)
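One way to build that intuition (a sketch): implement Maybe's bind yourself and chain a couple of computations that can fail; there's no magic in it.

    bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
    bindMaybe Nothing  _ = Nothing   -- failure short-circuits
    bindMaybe (Just x) f = f x       -- success feeds the next step

    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    calc :: Maybe Int
    calc = safeDiv 10 2 `bindMaybe` \a -> safeDiv a 0   -- Nothing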
I feel like being Anti-Haskell is becoming a ridiculous meme.
The author makes a lot of statements, but offers no arguments, no evidence, and no explanations.
Also, the post is riddled with "The Community does X" memes, such as:
> Funnily enough everyone in the Haskell community blamed everyone else instead of [...]
> [Senior Developers tell you to just] write types and interfaces for the types and fill in the blanks
> Look, even most haskellers would agree that purescript is a better choice compared to GHCJS
> [Haskellers are] using some obscure definition from category theory to explain [Monads]
> Haskell proponents try to claim that type signatures are somehow documentation
> Every few months some newbie comes along to the Haskell reddit and asks why is Haskell documentation so confusing, and the post get destroyed by people telling them to get good or just pointing at academic papers. Really?
> suddenly their precious castle [of type declarations] is reduced to dust
and the author does not give a single link or concrete example: not one piece of evidence that any of this is actually the case. In the author's defense, I would also believe the Haskell community behaves exactly like that if I took every HN comment about Haskell as correct and truthful.
One very absurd claim by the author that I'd like to point out is:
> programs deal with data, and data has no types
All data has a type (structure, schema, intent), otherwise it wouldn't be data. Instead, it would be electrical noise.
> Types wrap data and treat it like a black box whereas schema describes the shape and content of data.
I don't understand why the author makes a distinction between "types" and "schema". I get the feeling that the author makes the distinction only to bash on Haskell because Haskellers will talk of "data types" and not of "data schemas".
> Suddenly they are left reeling as they find out that the real world is, in fact, dynamic.
That's why Haskell has `Maybe`, duh. I don't buy the claim that "the real world is dynamic". It's just as slogan-y as "the real world is object-oriented": it seems trivially true, BUT programming is not the real world. One programmer has a data type that they send over the wire. It would be weird if another program could handle that data correctly without expecting it to have a certain shape. Programming is not dynamic; programming is guessing "what the fuck did the other programmer try to send me" and defining a type for that, either implicitly (by just accessing the fields) or explicitly (by defining the type and parsing the data; see the sketch below).
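To make that concrete, a sketch using the aeson package (the `User` type and the payloads are made up): the "dynamic" input is parsed at the boundary into a typed value, or rejected.

    {-# LANGUAGE DeriveGeneric #-}
    import Data.Aeson (FromJSON, decode)
    import GHC.Generics (Generic)
    import qualified Data.ByteString.Lazy.Char8 as BL

    data User = User { name :: String, age :: Int }
      deriving (Show, Generic)

    instance FromJSON User   -- parser derived generically

    main :: IO ()
    main = do
      print (decode (BL.pack "{\"name\":\"Ada\",\"age\":36}") :: Maybe User)
      -- Just (User {name = "Ada", age = 36})
      print (decode (BL.pack "{\"unexpected\":true}") :: Maybe User)
      -- Nothing: our guess about the wire format didn't hold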
---
All in all, I feel like this post adds nothing of value, and I am very annoyed that it was ranked #1 on HN, but that is consistent with how I personally perceive the broader HN community.
I find the circle jerk around programming languages kind of tired. I realise this is a deeply ironic statement. But personally, once I realised how much of the rest of the world is out there to be explored, intellectually and physically, going back to arguing about programming languages made me feel a bit sick.
I don't think the same is true of other niche interests. There's something uniquely tiresome, insignificant, and wanky about programming language discourse that makes it especially sickening to me.
I've gotta say...the font (Magnetic Pro) used on this site might be one of the worst-designed typefaces I've ever seen. It honestly looks like a random mixture of two bad typefaces.
Functional zealotry seems to be at the core of most problems with Haskell. It’s unfortunate that these concepts keep being shoehorned into numerous other languages.
That's not about Haskell but about the paradigm, though. Rather, it's pure functional programming that hooks people. I understand why: once it feels natural, it is very painful to go back.