Haskell in Production (felixmulder.com)
200 points by allenleein 8 months ago | 237 comments



All the interesting code is in the next post, http://felixmulder.com/writing/2019/10/05/Designing-testable....

I'm not sure why they use a generic monad rather than ST; they don't need continuations for this.

The Reader monad with a big record is standard Haskell, it's basically what GHC uses: https://github.com/ghc/ghc/blob/1219f8e8a3d1b58263bea7682232...
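For readers unfamiliar with the pattern, here's a minimal sketch (the Env fields are made up for illustration, not taken from GHC or the article):

```haskell
-- Sketch of the Reader-with-a-big-record pattern; field names are illustrative.
import Control.Monad.Reader

data Env = Env
  { envLogLevel :: Int
  , envDbUrl    :: String
  }

-- Real services usually use ReaderT Env IO; plain Reader keeps the sketch pure.
type App a = Reader Env a

describeEnv :: App String
describeEnv = do
  url <- asks envDbUrl      -- pull individual fields out of the shared record
  lvl <- asks envLogLevel
  pure ("db=" ++ url ++ " logLevel=" ++ show lvl)
```

Everything in the application runs in `App`, and each function `asks` only for the fields it cares about.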

data-has is less standard, it only seems to have been used seriously by one project, which has a bug on file to stop using it: https://github.com/myfreeweb/magicbane/issues/20

But overall it's interesting, we'll see where the series goes.


The capability pattern that Data Has implements is definitely used in other projects[1], and notably is endorsed by RIO[2].

[1]https://github.com/input-output-hk/cardano-sl

[2]https://github.com/commercialhaskell/rio#monads

Elaboration: https://www.parsonsmatt.org/2018/03/22/three_layer_haskell_c...


The GitHub issue I linked is basically "switch from data-has to RIO"; the issues with data-has come from its use of multi-parameter type classes. RIO avoids the issues by using single-parameter classes. AFAICT multi-parameter type classes are the subject of numerous GHC bugs/misfeatures and are best avoided in production.


MagicBane already uses RIO. RIO doesn't specify anything about the capabilities model; it's just a pattern. As noted in the ticket, the nice thing about Data Has is that it already has instances for tuples. This is really handy in - for example - init code where you may want to have logging and config capabilities before the rest of your env is set up. But I agree it's probably not worth the cost and the single-parameter pattern is better.
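For context, the multi-parameter pattern under discussion looks roughly like this (a loose sketch in the style of data-has; these names are illustrative, not the library's actual API):

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}
-- Loose sketch of the multi-parameter "Has" capability pattern.

newtype Logger = Logger (String -> IO ())
newtype Config = Config [(String, String)]

class Has a env where
  obtain :: env -> a

-- Tuple instances are the convenience mentioned above: a plain
-- (Logger, Config) pair already works as an environment during init.
instance Has Logger (Logger, Config) where obtain = fst
instance Has Config (Logger, Config) where obtain = snd

-- Functions ask only for the capabilities they need.
logMsg :: Has Logger env => env -> String -> IO ()
logMsg env msg = case obtain env of Logger f -> f msg
```

The RIO-style alternative replaces the two-parameter `Has a env` with single-parameter classes like `HasLogger env`, which sidesteps the multi-parameter type class issues mentioned above.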


It's always good to see examples of how a language is actually used, rather than just assertions of its superiority.

That said, if Haskell programmers truly consider this a superior and readable way to write code, it would tend to reinforce my preconception of them as an insular community elevating arcane jargon to a virtue (not unlike the APL programmers of yore).


It's strange how code written in unfamiliar languages is ... unfamiliar? The Haskell in that post is bog-standard. The definition of "arcane" is not "I don't know it".


I've worked in a pretty wide range of languages. I think I know "unfamiliar" and I can tell it apart from "arcane".

OP claims that "this article will emphasize writing easy to grok, maintainable code". The definition of "easy to grok" is not "once you've bought into the entire mindset, this will be obvious to you".


> I've worked in a pretty wide range of languages

Which languages are those? If the answer is "N flavours of imperative/OO languages" (like C/C++/Java/C#/Ruby/Javascript) then you really haven't worked in languages different enough for you to truly learn anything new. The rule is simple: if learning a new language is easy, then there isn't much new in the language for you to learn (except a different flavour of syntax or whatever).


True, most of my work was in imperative/OO languages, though the imperativeness ranged from "global variables for everything" to "No global state, predominantly immutable objects, predominantly single assignment".

But I've also worked in Prolog and SQL, and dabbled in languages like UNITY: https://en.wikipedia.org/wiki/UNITY_(programming_language), so I have some degree of openness to different paradigms.


> so I have some degree of openness to different paradigms.

That's great! I truly believe that learning new hard stuff is how a software developer goes from average to great. If you learn a little bit every day then after 5 years you will be SO much better than the average guy/gal sitting next to you.


"Easy to grok" definitely means "for experienced Haskell programmers". That is who the article is written for. It doesn't matter how many other languages you've used - they are mostly the same as each other and Haskell is really quite different. Your lack of familiarity is not evidence of a defect.


> "Easy to grok" definitely means "for experienced Haskell programmers"

Which is pretty much the definition of "arcane": "known to a small circle of initiates"


> reinforce my preconception of them as an insular community elevating arcane jargon to a virtue

I highly recommend not assuming that all "Haskell programmers" agree on what is "superior and readable" code. I also recommend trying to explain "arcane jargon" such as "inheritance", "encapsulation", "method invocation", "recursion", "dependency injection" etc. to people who are not already familiar with such jargon. They might perhaps start believing that you are "an insular person elevating arcane jargon to a virtue"?


Let's compare how Wikipedia starts the definition of a "monad":

"a monad [...] is an endofunctor (a functor mapping a category to itself), together with two natural transformations required to fulfill certain coherence conditions."

vs. "encapsulation":

"Encapsulation [...] is the bundling of data with the methods that operate on that data, or the restricting of direct access to some of an object's components."

Are you seriously going to argue that those concepts are equally arcane to somebody not familiar with the underlying paradigm?


You looked up the wrong definition.

"In functional programming, a monad is a design pattern[1] that allows structuring programs generically while automating away boilerplate code needed by the program logic."


> Are you seriously going to argue that those concepts are equally arcane to somebody not familiar with the underlying paradigm?

Most definitely! I have tried to teach C# to a non-programmer friend of mine and his brain exploded when I tried to teach him about classes. However I have taught a few programmer friends what a Monad is in a few minutes. The problem with the Wikipedia page is that it is correct but targeted at somebody who understands category theory. Which is useless if you just want to learn how to use it in (say) Haskell.
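For what it's worth, the usual few-minutes introduction uses Maybe: the monad is just the pattern that automates away manual Nothing-checking. A small sketch (names made up for illustration):

```haskell
-- The Maybe monad as "automated boilerplate": chained lookups.
import qualified Data.Map as Map

-- Without the monad: the Nothing-checking plumbing is written out by hand.
zipExplicit :: Map.Map String String -> Map.Map String Int -> String -> Maybe Int
zipExplicit cities zips person =
  case Map.lookup person cities of
    Nothing   -> Nothing
    Just city -> Map.lookup city zips

-- With the monad: same logic; (>>=) behind the do-notation
-- short-circuits on Nothing for us.
zipMonadic :: Map.Map String String -> Map.Map String Int -> String -> Maybe Int
zipMonadic cities zips person = do
  city <- Map.lookup person cities
  Map.lookup city zips
```

No category theory required to use it, which is exactly the point being made about the Wikipedia definitions.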


> Haskell is great for business and great in production

I disagree. It's a beautiful language but it lacks a lot of features to make it useable at scale. And in my experience Haskell engineers are extremely smart but the environment/culture they create makes it difficult to foster team spirit.

I've been in 2 companies in the last 4 years who initially used Haskell in production. One has transitioned to Go and the other is rewriting the codebase in Rust.


I’ve met some pretty damn solid engineers who started on Haskell and, even at a junior level in other languages, produce an elegant solution far more easily than a senior engineer in that language. You probably wouldn’t put the code in production verbatim but you can very easily see what’s going on and it isn’t haunted by spectre of early abstraction, which IMO is the biggest flaw of OOP at scale.

People think you want to see clean architecture in the form of lots of indirection and state and then you don’t see any real programming chops, just organisation or filing.

Not at all related to Haskell specifically, but damn, it does cut through the boilerplate. Although I can’t comment on any sort of Haskell at scale and I imagine they have the same issues when it comes to excessive point-free/applicative style.

I mean, from my naive perspective it’s easy to make classes out of everything, and to hold state and put side-effects everywhere, but you don’t want to deal with the trouble of a monad until you need it. So you have an automatic inclination towards cleaner code when you start functional and move on.


Haskell gives one plenty of rope to hang himself on complexity.

So much so that developers develop an aversion to it as deep as fear. It's unavoidable; the ones who didn't develop it are still buried in the workings of their first Rube Goldberg machine, and unavailable.

You'll see plenty of blogs about a Haskell feature that end with "See? It's organized and safe. It works. Is it worth the complexity? No way! You'll never catch me using this thing that I just invented!"


Hi, I find that everything people here are complaining about (and they're valid complaints) has also been true of C++. C++ developed a lot of its complexity (particularly 15-20 yrs ago in the template space) after it got popular, so people were already wed to it.

I've used both quite a bit. Haskell's easier to learn to do well than C++ -- with the caveat that you can write java-style C++ productively without using the language very far. C++ in the "modern" style is more complex than Haskell. There are good reasons for that complexity, but it's still there.

I've seen plenty of projects where people have dug themselves into deep C++ holes -- everything is a template, all the files are headers, and the code is unreadable. These are business critical systems still in use in production worldwide.

The C++ community's really gotten good in the last 5 years or so about reining in the bad impulses and getting people to write clean, clear, efficient code that has reasonable expressiveness.

Coming into Haskell from C++, I have the same instincts. Haskell's been a pure pleasure. The benefits are really there, and they're easy to get. You just have to think of the trade-offs.


> Haskell gives one plenty of rope to hang himself on complexity.

Interesting. And C++/Java/Javascript doesn't?

> So much so that developers develop an aversion to it as deep as fear. It's unavoidable; the ones who didn't develop it are still buried in the workings of their first Rube Goldberg machine, and unavailable.

Wow that sounds like there would be lots of examples you can mention. Care to mention any example we can verify?

> You'll see plenty of blogs about a Haskell feature that end with "See? It's organized and safe. It works. Is it worth the complexity? No way! You'll never catch me using this thing that I just invented!"

I have never seen such articles. Please show us a few links?


My experience is upper management will scapegoat Haskell the moment they get the chance. Outside-hired leaders who doubt it from the start. It's a weird unpopular language - an easy target.

If you just get over it & use Haskell, things will be fine. You'll get huge gains thanks to a lot of Haskell features & libraries, and you might have to write some more fundamental libraries yourself. Haskell makes that pleasant anyways. Worst-case, using Haskell could end up being a wash vs other languages.

Rewrites aren't always indicative of failures of language or the engineers writing it. They're also a useful mechanism for solidifying control for a VPE-type at a nascent but quickly-growing startup. Especially if said VPE-type wants to push out personnel that were there before them.


I totally agree. When a project using Java fails, people would never think to blame the choice of Java as a contributing factor. When a project using Haskell fails, people will consider Haskell to be contributing to its failure even if the root cause lies elsewhere (bad management, bad culture, etc).


Maybe not Java proper, but I've heard copious abuse heaped upon Hibernate.

Now, we shouldn't blame the tool, but the performance was bad, and the induced race conditions were a dagger to the heart for the project.


Hey you. I'm pretty sure I know who you are :) I hope everything's going well!

I definitely agree that there are many examples of ineffective Haskell culture. As a Haskell contractor I get to see how a lot of different shops do things. One thing I can't really tease apart is that most startups have a lot of engineering culture problems, and picking Haskell doesn't inoculate you from that effect.

The main cultural issue I see specific to Haskell shops is that frequently rabbit holes aren't discouraged. By that, I mean there is too much of a tinkerer mentality where engineers are allowed to or even encouraged to experiment with untested ideas instead of known effective solutions. That is a habit I had to deprogram myself from too. It's very fun! In your spare time.

I hope we get to work together again! But I'm usually only called in if there's trouble, so maybe I shouldn't :^)


This is pretty sound advice for any language. You always want most of your code to be dumb. You save high abstraction for things that have a sensible interface and are common enough to need DRYing. Haskell has a higher abstraction ceiling than most languages and there is less trodden ground, so there may be more opportunity to get lost down the rabbit hole. However Haskell2010 is a beautifully simple language that can solve all your problems (maybe with a bit more code than enabling 20 extensions will afford you).


I had a similar experience with a different language at a startup where one programmer particularly would produce the most horrid code, use new features and libraries where they weren't needed, generally overcomplicate everything. He was smart as hell and made that smartness a liability for everyone else. The bottom line in a business is the bottom line; making money. He didn't understand that at all. No level of abstraction was too high, even trivial stuff. Dumb code that did the job well... dream on.


> It's a beautiful language but it lacks a lot of features to make it useable at scale.

Can you expand on this? What features is it missing?


> it lacks a lot of features to make it useable at scale.

Can you elaborate on this?


in my (limited) experience, Haskell projects (and, to a slightly lesser extent, functional programming projects) work best when planned out in thorough whiteboard/spec sessions and then implemented by a couple of gurus responsible for the code, who work almost exclusively in functional languages day to day. there seems to be a need for way more thought-per-line-of-code in Haskell/FP projects.

many engineers (and businesses!), culturally, come from the opposite angle (legions of generally pluggable engineers on java/python/golang plowing through tickets/features). that's fine, but it isn't super amenable to the Haskell world, because IME non-imperative setups require a different mode of thinking about design, so it's harder to drop in and crush a ticket. i wish i could explain this more, but i'm struggling to articulate it with examples.

i think the thorough types really help you get up to speed on data structures, but unless you are in a functional mindset pretty much exclusively, your (my?) brain spends a lot of time un-imperative-ing itself. also, laziness can be a factor, so you need some expertise there.

it also helps to have conventions e.g. old school java (are we point free, or not? etc. etc.)

all in all i think it requires a lot of discipline that can easily break down, whereas some of the popular imperative languages you can still sort of plug along (=> punt the technical debt) despite that.


> all in all i think it requires a lot of discipline that can easily break down, whereas some of the popular imperative languages you can still sort of plug along (=> punt the technical debt) despite that.

Interestingly I have the exact opposite perspective. Writing imperative or OOP code requires me to be excessively disciplined. It is extremely easy to build un-maintainable spaghetti. There is a whole cottage industry of methods for your discipline of choice: Clean, SOLID, TDD, etc. All these disciplines seem to boil down to the same systemic result: push effects to the edges of your program so you can more easily test, evolve and maintain it. Functional programming (of the typed variety) tends to let me write even garbage code that is testable and can be easily evolved and maintained, because the paradigm encourages me to be a good actor.

I've refactored production code in imperative languages and typed functional languages and only one of them allowed me to make HUGE sweeping changes with ease and high confidence.
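A tiny illustration of "effects at the edges" in Haskell (function names are made up for illustration; just a sketch):

```haskell
-- Pure core: parsing and arithmetic, no IO anywhere,
-- trivially testable and safe to refactor.
parseTotals :: String -> Either String Int
parseTotals = fmap sum . traverse parseLine . lines
  where
    parseLine l = case reads l of
      [(n, "")] -> Right (n :: Int)
      _         -> Left ("bad line: " ++ l)

-- Thin effectful shell at the program's edge.
runReport :: IO ()
runReport = getContents >>= either putStrLn print . parseTotals
```

The type system makes the pure/effectful split visible: anything without `IO` in its type can be tested and rearranged with confidence.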


> There is a whole cottage industry of methods for your discipline of choice: Clean, SOLID, TDD, etc.

I'm not a big OOP fan, but I'm pretty sure Clean and SOLID are principles (or sets of principles), while TDD alone is a practice. I'm also not very familiar with Clean or SOLID, but I'm pretty sure they apply to Haskell as well, and I would expect that Haskell enthusiasts would ordinarily boast that Haskell allows (or encourages) them to write code that is more SOLID with less effort than other languages.

My criticism of Haskell is that there tends to be an obsession with writing the most SOLID/DRY/etc code possible at the expense of productivity. It's a code golf / ego culture.

Yes, this is cultural and not "built into the language" (quotes because I doubt there's a clear distinction between a programming language's culture and the language itself, but that's a debate for another time), but you can't unplug from the culture, because you need the ecosystem to solve any nontrivial problem (so you still have to interface with overly clever code).

Further, even if you could unplug from the culture (perhaps by writing everything in-house under your own improved culture), there are still no 'rails' that encourage naive, flat-footed, dumb code, and that's the kind of code you want 99% of the time. As far as I'm aware, there isn't even a clear standard for writing clear, naive code in Haskell.


SOLID applies only to OOP. Not using classes trivially satisfies the requirements of every letter. Hell, most design patterns are trivial if you use functional programming.


Funny, because I see FP as TDD^2 plus IO on the side (no pun intended), thus bringing all the benefits you like from OOP. The only difference, I think, is a few decades of culture using these by default, leading to terse syntax and idioms.


> in my (limited) experience, Haskell projects (and to a slightly lesser extent, functional programming projects) work best when thoroughly planned out in thorough whiteboard/spec sessions

I think you need more experience then! I'd rather prototype in Haskell than the other language that I use day-to-day, that people say is great for "exploratory coding" (Python).


My experience is quite the opposite. I can throw together Haskell code and refactor it much quicker than equivalent C++/Java code. And I have a lot more experience with C++/Java than Haskell.


> […] it lacks a lot of features to make it useable at scale.

Can you give an example or two?


> it lacks a lot of features to make it useable at scale

Facebook (and other companies) run Haskell in production, at scale. So I wonder what those "lots of missing features" you mention are? With "lots of them" I am sure you can mention at least 5+?


We are a YC company doing very well, all our back end code is written in Haskell. We have produced a lot of functionality with a relatively small team. I would say we are existence proof that Haskell is good for business.


What happens when you need to scale up or if one or more of the gurus quits? Do you think Haskell will still be primarily used when you grow?


Ask Facebook. They are running Haskell at massive scale.


For what business units? For which use cases?

Also, they invest a lot into recruiting. My suspicion is that they give a considerable degree of freedom (often to the detriment of the organization in ways such as information silos) to engineering teams to help with recruiting.


For the problem you are solving and at your scale. Things start to change when you need to hire n+1 teams or engineers quickly.


They... might? I worked at a Haskell start-up for a bit. Hiring wasn't easy. But it doesn't seem to be easier at the mostly-JS start-up I'm working at now.



That doesn't speak to Haskell. You can say that's because there's not a lot of Haskell and that's likely at least partly true. But the difficulty hiring for a position depends on the number of people qualified and the number of other roles competing for those people. The fact that both of those numbers are smaller for Haskell than (say) Java doesn't tell us whether it'll be easier to hire for Haskell or Java.


True. Let's check the numbers. 90,000 developers took the survey.

Java: 41.1% 36990 developers

JavaScript: 67.8% 61020 developers

Haskell: 0% 0 developers

Jobs:

Java: 73,447 jobs

JavaScript: 59,647 jobs

Haskell: 492 jobs

https://www.indeed.com

Given the numbers, as a hiring manager, I will never suggest that any company try to use Haskell or hire Haskell devs. I know many developers who use it, though. They're successful with what they are doing: turning business problems into Haskell problems and solving those. Sometimes they patch the compiler, sometimes they write a completely new one. As a tech leader I do not want to have these problems, even if I could hire enough people for projects (which I can't). I do like to read blogs about what is going on, though. It satisfies my scientific curiosity, but that is it.


Those are numbers. I'm not sure what they say, ultimately. If we believe them, then we can conclude: there are substantially more Java programmers, but substantially more unfilled Java positions, but a much bigger (... infinite) ratio of Haskell job to Haskell programmer, etc, etc. I'm not sure which of those wind up being most important.

I can say what I've said - that my experience of trying to hire the next couple Haskell programmers has not been harder than my experience of trying to hire the next couple JavaScript programmers.

There's also a big question of the quality of the survey, and how representative it's likely to be of your company's overall hiring pool - you say you have many Haskell programmers in your network, and your network is probably substantially fewer than 90k individuals, so something seems amiss.

I'm not interested in getting drawn into the rest of your ranting.


Yes indeed, but not the way you think. There are many highly experienced developers out there who would love to use Haskell in their day job but currently can't. So your problem will be processing all the highly qualified CVs you will receive.


I cannot quantify “more than you think”.


Who determines that you are a very good company?


Who claimed they are "a very good company"?

The post simply says they are "doing very well" (financially). That is fairly objective.


Ok


This is a great guide and sound advice. And hopefully in a year we'll see the community converge on Polysemy (or fused-effects for performance) to make this strategy even more natural.


Polysemy still appears mysterious to me.

I would absolutely love an article that translates the “Designing Testable Components”[1] part of this article into its Polysemy equivalent.

[1] http://felixmulder.com/writing/2019/10/05/Designing-testable...


I've been out of the Haskell game for a while, so I hadn't seen Polysemy. Another thing on my reading list...


Always nice to see Haskell at work. I haven't learned the language myself yet, but I absolutely love postgREST, which is implemented in Haskell.


I'm not sure why this post is getting so much hate, it's well written and I really enjoyed reading it. Thanks!


People seem to hate Haskell because it's not [popular-language-that-they-use]. To think that Haskell is used, liked, and works, seems to upset people who don't like, use, or understand Haskell. Or, so it seems.


How is such an obscure language - with associated difficulty in finding talented engineers - ever going to be a better choice than a more mainstream language, which probably has many of the same features?


> with associated difficulty in finding talented engineers

This is a myth. There are now many experienced developers who currently can't use Haskell in their day job but would love to. If I decided to start another business, picking Haskell would give me access to a lot of top talent who would otherwise not be interested.


It's actually not that hard to find talented engineers, especially if you are a distributed/remote team. I've interviewed & given a thumbs up to a constant stream of Haskell candidates (either with Haskell experience or interest in learning).

That said, I've also seen a lot of Haskell engineers go through the interview pipeline & get thumbs up from all interviewers _except_ a single person in leadership who would later push to move away from Haskell for "hiring reasons" :eyeroll:


How long does it take "new programmers who have not worked with Haskell before" to learn and understand all the ASCII art in the example code? How $ differs from <$, from &, and from (.).

On top of trying to remember that <> is string concatenation and that >>= and <- have something to do with monads.
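For reference (not a defense of the density), each of those operators is an ordinary function exported by base, not special syntax:

```haskell
-- A quick tour of the operators in question.
import Data.Function ((&))

examples :: [String]
examples =
  [ show (negate $ 3 + 1)                 -- ($)  low-precedence application
  , show (0 <$ Just 5)                    -- (<$) replace contents, keep shape
  , show (4 & negate)                     -- (&)  reverse application ("pipe")
  , show ((negate . succ) 4)              -- (.)  function composition
  , "foo" <> "bar"                        -- (<>) Semigroup append (concat)
  , show (Just 3 >>= \x -> Just (x + 1))  -- (>>=) monadic bind
  ]
```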


I still can't understand why Perl is easily dismissed as "LOL line noise" but Haskell's apparently fine. At least dipping into Perl you can take on the line noise slowly so you have time to absorb each new technique, and you can write it just fine with hardly any of that if you want, and it won't hurt a thing—Haskell seems to dump all that on you up front, and it seems to require it, idiomatically. It's very off-putting.


Well, I had no idea what (&) was, but you just place the cursor on it in emacs and it will say `(&) :: a -> (a -> b) -> b`; that, after you learn the language, is enough to understand that `v & f` runs the function `f` with the parameter `v`.

If I were trying to discover what function does that, I'd type the `a -> (a -> b) -> b` string (yes, it's that obvious) into Hoogle, and it would show me the operator.

I have never got that experience with Perl. Every new symbol requires a query into perldoc in a new window, every time you don't know what symbol to use, you keep not knowing, because it's impossible to discover.

Besides, Haskell has a consistent value + operator + value structure, while Perl has a mess.


It's because Haskell's type system at least helps you avoid getting the line noise wrong. If you have to refactor some complicated line noise then the compiler holds your hand.


What would an error look like if I misused $ or &?

And yes, compilers hold a user’s hand in most languages (including PHP with type hints :) ).


    • Couldn't match expected type ‘([Integer] -> [Integer]) -> [b]’
                  with actual type ‘[Integer]’
    • Possible cause: ‘map’ is applied to too many arguments
      In the second argument of ‘(&)’, namely ‘map (* 2) [1, 2, 3]’
      In the second argument of ‘($)’, namely
        ‘map (+ 2) & map (* 2) [1, 2, 3]’
      In the expression: map (+ 2) $ map (+ 2) & map (* 2) [1, 2, 3]


In my opinion this doesn’t help much.


Of course not. You have to learn how the type system works to get it.


ghci is my friend. :i and :t go a long way as a backup for my memory. If Perl (or APL, another language criticized for having a vast array, you should pardon the expression, of unfamiliar operators) has something similar, I've not heard of it.


A couple of weeks in my case. It's not any more difficult than remembering that the -> is the cache-miss operator in C++.

You also don't have to learn the entire universe of Haskell before being productive in it. I still have no idea what profunctors, comonads, and other things are and I'm shipping production Haskell code.


If you follow the link to the second article of the series, you'll see that the author does dump it all upfront, hence the question.

A couple of weeks to understand the terse operators (ab)used everywhere in Haskell code is a horrendously long time, if you ask me.


Two weeks, just as much as for any other new language.

After two weeks a new member can introduce simple features. After a month, relatively complex ones. Full development speed is achieved after half a year and is roughly equal across languages, because all that time people are learning the problem domain(s) and their mapping to the code.

For example, if I ask you to optimize Kaldi's HCLG-based decoder, you won't get any interesting results even after two months, EVEN if you know C++ well. Just to tip you off: you can shave about 10% of execution time just by presorting the HCLG graph by BFS, because it is scale-free and there are nodes the decoder will visit more often than others. To do this, you have to look at the graph, and you have to know about scale-free graphs and their properties. Which are way outside of programming language scope.


> Two weeks, Just as much as for any other new language.

What?!! :) I was introducing new features within a day after switching from Java to C#.

New features within less than a week when switching to Erlang (having had little FP experience).

etc. etc.

> For example, if I ask you to optimize Kaldi's HCLG-based decoder

Why would you ask me that? Is it such a common problem?

How about: if I ask you to implement a streaming Google Dataflow job getting data from a Google PubSub and writing data to BigQuery? It’s a much more common request.


Oh, don't get me started on the differences between Java and C#. You most probably won't write correct code in C# after a day-long introduction. Just unlearning the habit of typing String with a capital letter takes more than a day.

With Erlang you have to master OTP, which is part of the language and, frankly, the only reason to choose it. The supervisor tree alone is worth a week to see how it works with different failure cases.

To quote "Real Programmers Don't Use PASCAL":

My first task in the Real World was to read and understand a 200,000-line FORTRAN program, then speed it up by a factor of two.

https://web.mit.edu/humor/Computers/real.programmers

It is humor, but every joke contains some truth. Optimizing Kaldi's decoder is a critical task if you recognize several thousand voice queries per minute.

Again, to take an analogy from the outside world: interceptors mostly cruise at sub-Mach speeds. That is their common problem. But they are designed to intercept other aircraft that fly at Mach-N speeds and to avoid missiles that fly at Mach-N speeds. The difference between sub- and supersonic speeds is critical: the wings of a subsonic airplane would flutter at Mach 1+ and destroy the plane itself (see the MiG-15's wings). The wings of an interceptor work just fine at subsonic speeds.

Common problems are easy to solve if you are ready for the uncommon and critical ones, which often lie outside the programming-language realm. They can be solved with almost any language, but Haskell provides better ways to solve them (EDSLs, for example).


Operators are just functions. Learning the type system is the key. You can easily check any function's type signature in the docs once you understand how the type system works.
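To illustrate: an operator is defined exactly like a function, just with a symbolic name (this `|>` is made up here, not a standard operator):

```haskell
-- Defining an operator is no different from defining a function;
-- you can query its type with :t in ghci like any other binding.
(|>) :: a -> (a -> b) -> b
x |> f = f x

-- Used infix, it reads like a pipeline; (|>) 3 negate also works prefix.
pipeline :: Int -> Int
pipeline n = n |> (+ 1) |> (* 2)
```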


Or, it's worth mentioning, you can do `:t (<$>)` in the repl to quickly get the type.


The exact same time it takes a programmer who has never seen Java or its close siblings like C++ to learn and understand all the keywords like "public" or "static" or "class" or "interface" or "extends" and how they work.

The only difference is that a random programmer you pick has probably already spent the time understanding Java and OOP, but not Haskell.


I really wish Java had associated types and static methods as parts of interfaces (i.e. constructors as part of the interface). Those two features make complex architectures much easier to design and use.


Interfaces can have static methods.


I believe they mean static methods as part of the interface signature, e.g. to define constructor/factory methods.

AFAIK there are issues with how Java handles dynamic dispatch that prevent this.


What would calling such a method look like?


Imagine that Collection had a static method `empty` that created an empty collection of the appropriate type depending on the type you are assigning it to, e.g. `List list = Collection.empty();` would create a list, `Set set = Collection.empty();` would create a set, and so on. This would be fairly similar to how things work in Haskell.
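The Haskell mechanism being alluded to is return-type polymorphism: a class method can dispatch on the result type. A minimal sketch (the `Container` class here is hypothetical, not a standard library class):

```haskell
-- A class method dispatching on the *return* type, which is what the
-- imagined Collection.empty would need.
import qualified Data.Set as Set

class Container f where
  empty :: f a

instance Container [] where
  empty = []

instance Container Set.Set where
  empty = Set.empty

-- `empty :: [Int]` picks the list instance; `empty :: Set.Set Int`
-- picks the set instance, purely from the requested result type.
```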


If you know you’re creating a List (it’s there on the LHS) why not write List.empty()?


Here's how it works in Rust. You can imagine this working in Java with reified generics: https://gist.github.com/rust-play/214adf5bd15b18ec2d5c63d97f....


Fascinating! What are the uses of something like this? Is this at all related to concepts or metaprogramming?


You know in this specific example, but not always. You could also do it in generic method where the collection type is parameterized.


If I remember correctly, Co-Star, the horoscope app, runs Haskell on the backend.


Boy those guys must be laughing at their customers.


we're hiring --> costarastrology.com/jobs ;)



This is an incredibly good series. Just what I've been looking for. I'm writing a Servant API myself and this really explains a lot of the fundamental concepts in plain English.


> We tried a lot of different patterns - readers, handlers, MTL, and tagless final.

Is there any books or long form articles digging into these different approaches? I know basic Haskell and heard of these different patterns a few times. I'm curious how it looks in a larger Haskell project and what the pros/cons are of each, and general popularity.

Edit: I see the next article in the series has more real code examples


The more I see it in use, the more I want to learn Haskell, but I can't think of a practical reason to do so.


I don't know about you, but sometimes I do things just because I want to.


I think the reason should/could be abstraction. If you find yourself doing roughly the same thing over and over but don't see how to extract the repetitive part in a scalable way in a language with weaker type support, then there may be a good use case for Haskell.


Learning things is how you keep your brain sharp.


I need to get halfway fluent in JS first, but good point.


Once you get there I recommend then learning Typescript (Javascript with types) and compare the experience with Javascript.


Compilers


The amount of knowledge required just to get a clean design for the simplest of things, i.e. a few functions that add and delete users, is very large; almost all of Haskell's important parts need to be known.

Is that good? For me, it's an obstacle. I can't expect all the developers that come and go to have a perfect understanding of monads in order to use/debug/extend such code.

And what does Haskell actually buy us here? My apologies for being skeptical, but it doesn't seem to buy us anything. In other languages, creating a user, deleting a user, and logging actions require a lot less boilerplate and complexity.


Everyone in the comments falls into one of two groups. One group doesn't like Haskell because it's hard, or different, or niche, or whatever, so they think Haskell is bad for production even though they don't code in it and haven't worked at a place that used it extensively.

The other group of people in the comments are people who love Haskell and are happy to have validation from the OP.

But I'm not really any wiser. I like Haskell, but I work in DevOps and the language simply isn't relevant there.


Everyone... except for the top comment?


It's interesting to watch the wonderfully meandering train of thought(s) triggered by a success report of using a non-mainstream programming language. So let's meander some more...

The strong reactions are understandable. New programming languages are added to the zoo faster than ever before. And within each language there's a steady influx of new frameworks (angular1, angular2, react, vue, ...) each with a learning curve. It's a challenging environment, and it feels like it's getting exponentially harder as time progresses. One must resist the trap of following every silly new idea.

So why bother spending even an afternoon with Haskell? Spend an afternoon playing with Python and you'll be writing code the next morning. Spend a day with Haskell and you'll have more questions than answers.

The fundamental problem is, most likely, the new paradigm. Functional programming takes some time getting used to. Sure, the functional style is creeping into mainstream languages, so most of us have seen and used it already. But pure functional programming is another level. Broadly speaking, everything is a function, a function in the mathematical sense: Same input, same output. Everything is immutable, it's all about chaining pure functions. Crazy, right? Why would you subject yourself to such rigour?

I'll argue that pure functional programming has something to offer that is more relevant than ever. 1. Pure functions are easy to test. Same input, same output, that is a strong guarantee. It'll still be a challenge to thoroughly test every single function, but at least each test is meaningful because there are no interactions with anything outside the function. 2. Pure functions are easy to compose and parallelize. Single-thread performance is hardly ever a relevant benchmark anymore. Sure, C is faster, but good luck trying to parallelize a complex C program. (The worst thing I can imagine is having to debug messed-up, multi-threaded C++ code.) 3. There is structure in how to compose pure functions, and it's structure in the mathematical sense, not in the fuzzy Gang of Four pattern sense. Much of the 'arcane', 'academic' jibber jabber is exactly about these mathematical structures. The power of having a rigorous method for structuring programs is immense. It really is. The hurdle is not as high as it may seem (unless you have a low affinity for maths - in which case you probably shouldn't be programming computers anyway). 4. Pure functions are easy to refactor. (See below.)

I'll also argue that strong typing has a lot to offer and will become more mainstream. 1. Make the compiler your friend, not your enemy. Strong types glue the functions together in just the right way and prevent me from making silly mistakes. 2. Business logic can be encoded in a compact and transparent way. Changes to the business logic (and it always does change, doesn't it?) are also compact and transparent. 3. Refactoring is just ... wonderful. Strong types combined with pure functions, that is a magical combination. Change the type and let the compiler tell you what the implications are. As there are no side effects, code can be refactored with confidence. It is a vastly different experience from any OO language.
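A tiny illustration of points 1 and 4 above: a pure function needs no mocks or setup to test, its test is just an equation, and any change to its type makes the compiler flag every call site. The function name and numbers here are invented:

```haskell
-- Pure: same input, same output, so a test is a plain equality check.
-- Rational avoids floating-point surprises in the comparison.
applyDiscount :: Rational -> Rational -> Rational
applyDiscount rate price = price * (1 - rate)

main :: IO ()
main = print (applyDiscount 0.25 100 == 75)  -- True
```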

Now, finally, to Haskell: It can be characterized as a strongly typed, lazily evaluated, garbage-collected, pure functional programming language. (Lazy evaluation hasn't been mentioned yet. It's something that needs to be understood and considered in order to write efficient code.) In "language design space", something like Haskell must exist and will always exist. (I say 'always' because maths has something eternal about it, and pure functional programming and strong typing have solid mathematical underpinnings.) And in this "language design space" Haskell dominates. Contrast this with the compiled, imperative, low-level languages like C, C++, Golang, Rust. I have a hard time picking amongst those, but I believe everyone should know at least one of them.

So is it worth learning Haskell? I tried to argue that it's not a fad and the concepts are fundamental and increasingly important. To me, personally, it looks like an investment I can benefit from for decades to come.

As a business, do the advantages outweigh the costs and risks? We all know that corporate culture often stands in the way of change. One root cause is the asymmetric payoff: A thank you if it goes well, and job-loss if it goes badly. In such an environment inertia is the rational choice. - But if your company truly believes to be an innovator, or leader in the field, or seeks to gain an edge over the competition, then Haskell should be on the list of things to try.


Advice for people using Haskell in "production": Don't. Save not only yourself the disappointment, but every other stakeholder around you. Use something tried and tested.


Let me guess: you have never programmed in Haskell?


I have programmed in Haskell. I have programmed in assembly. I have programmed in just about every language in between. Haskell is, at best, a sophisticated toy suited to a small subdomain of problems.

Have you ever tried to roll Haskell into production, when compared with something like C# or Java?


What was your experience like rolling Haskell into production?


Cabal is probably the packaging system least suited to production software. It has no useful test integration or freezing ability. Tried to change profiling settings? You're in for a bad time. What about deployment? Want git dependencies? Nope, you can't. Want to build artifacts? What is this, TypeScript?


> Haskell is great for business and great in production.

Hot take: no it isn't. It is extremely hard to learn, has an extremely confusing + needlessly complicated syntax and I question the payoff immensely. I question the well-being of anybody who subjects themselves to the pain and torture that is Haskell.

If I stood up in a corporate business boardroom meeting for tech analysis on a new project and said "I want to write it in Haskell", I'd get laughed + kicked out.


Sounds like you've barely programmed in Haskell and don't know what you're talking about.

> It is extremely hard to learn

Haskell was the first language I learned. I didn't think this at all and I still don't. It doesn't strike me as any more difficult than learning Java or something.

You may think this because Haskell is a different paradigm than what you're used to, so while you may be able to get quickly started with Rust coming from a C++ background, Haskell will take more work because what you already know doesn't intersect with what you need to know quite as well. You may misconstrue this mismatch as Haskell being more difficult to learn.

> extremely confusing + needlessly complicated syntax

I think Haskell syntax is less confusing/complicated than many other languages'. Unlike a lot of other languages, Haskell doesn't have a lot of things baked in. As an example, a lot of oft-used functions are "user-space" functions that anyone could define themselves. Like `($)` or `otherwise`.
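Both really are ordinary definitions. The sketch below redefines them locally (shadowing the Prelude versions) just to show there is no magic involved:

```haskell
import Prelude hiding (($), otherwise)

-- `$` is just low-precedence function application...
infixr 0 $
($) :: (a -> b) -> a -> b
f $ x = f x

-- ...and `otherwise` is just True, for readable final guards.
otherwise :: Bool
otherwise = True

classify :: Int -> String
classify n
  | n > 0     = "positive"
  | otherwise = "non-positive"

main :: IO ()
main = putStrLn $ classify 3  -- prints "positive"
```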

> If I stood up in a corporate business boardroom meeting for tech analysis on a new project and said "I want to write it in Haskell", I'd get laughed + kicked out.

This is such a stupid argument: what relevance is the opinion of boardroom meeting attendees on the suitability of Haskell for a project?

It sounds like you're hating on Haskell out of ignorance and some personal encounter with the language that went poorly. Instead of actually being informed.


I have taught both Haskell and Java and there is no comparison in difficulty. Teaching basically all of the Java language can be done in a couple of days (excepting generics, which take another day or two), and the language bugs are always "shallow".

Haskell on the other hand creates crazy errors which confuse students, and often require extensive teaching to understand what is going on.

I'm happy to accept the possibility that expert Haskell programmers will be more productive in the longer term, but the learning is MUCH harder.

For example, here's one "simple" haskell error:

Prelude> print 2 + 3

<interactive>:9:1: error: • No instance for (Num (IO ())) arising from a use of ‘+’

Now, it's not too hard to figure out what's gone wrong, but when teaching beginners, explaining what (Num (IO ())) is isn't something I want to be doing. No Java error gets that complicated.
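For what it's worth, the confusion is a precedence issue: function application binds tighter than `(+)`, so `print 2 + 3` parses as `(print 2) + 3`, and GHC goes looking for a `Num` instance for the IO action. The intended expression is:

```haskell
-- print 2 + 3 parses as (print 2) + 3; parenthesize to add first.
main :: IO ()
main = print (2 + 3)  -- prints 5
```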


That error message is easy to understand.

Explaining why a language can't handle print 2 + 3 is priceless.

It seems like an interesting language to learn for the sake of learning but introducing this to a beginner is not fair.


> I have taught both Haskell and Java and there is no comparision in difficulty.

Interesting. How much real world experience did you have with both languages before teaching it?


What kind of error would you like to see here instead ?


Maybe just one extra line like this?

    Prelude> print 2 + 3

    <interactive>:3:1: error:
        • No instance for (Num (IO ())) arising from a use of ‘+’
        • The expression `print 2` :: IO ()
        • In the expression: print 2 + 3
          In an equation for ‘it’: it = print 2 + 3
Just adding the one extra line telling you what the `IO ()` was is helpful. ghci obviously knows what it was here.


This suggestion is a straightforward, easy to implement improvement. Many compilers have serious UX problems because their authors don't take UX seriously. Elm and Rust are two languages getting this right.


The Rust compiler does a great job of making suggestions for how you can fix your code, which is tremendously helpful for the majority of cases where your mistake was simple. There is no reason (in theory) that GHC can't offer the same level of guidance.


Counterpoint: learning your first language and learning languages later in your career are very different processes, and learning Haskell with a background in OOP languages may be harder than learning it as your first.

> Arguments over programming languages often become emotional and skip past many of the practical business concerns that make programming languages valuable in the first place


> Sounds like you've barely programmed in Haskell and don't know what you're talking about.

You're right. I do not subject myself to Haskell because I can't fathom a world where JavaScript is less efficient than Haskell.


> I can't fathom a world where JavaScript is less efficient than Haskell.

You can't fathom a world where a compiler can more easily produce efficient machine code from a statically typed language with very few semantic corners than from a dynamically typed language with lots of sharp corners plus reflection? I think you may need to meditate on the task of compilation for a bit.

Unless you mean efficient for the programmer, in which case you are mistaking the up front effort of getting things to type check for additional effort. Usually it spares you the same number of iterations in debugging and testing, and the type checker is a lot faster than running unit tests. Plus the first time you make a major refactor to a program by making the change and then fixing everywhere the type checker complains, and that turns out to be it, it all works, it feels like you just mutated into a superhero.


Efficient use of time I think the parent poster was referring to.

You could have the project finished quicker in Javascript. Would you disagree? Does choosing Haskell increase development time (ignoring major rewrites)?


I'd say it depends heavily on what you're writing. If it involves graphics, Javascript is going to be substantially less annoying, at least initially. However, the version you write in Haskell will be much more reliable, and much easier to progressively refactor and scale up, whereas in JS, the weird edge cases and lack of static guarantees will gradually pile up and make the project a chore to maintain.

For something non graphics related that you're building by yourself, if you already know Haskell, it's a no brainer, due to the high expressivity ceiling. Once a team is involved, the right choice (as always) basically just depends on what people already know.


> You could have the project finished quickier in Javascript. Would you disagree?

That very much depends on the project (I have a lot of experience with Javascript, Typescript and Haskell).


maybe you shouldn't be spouting answers when you should be asking questions then


Then keep quiet.

I agree completely with @foopdoopfoop.


You can't do this here. We've had to ask you several times to follow the site guidelines. Would you please review https://news.ycombinator.com/newsguidelines.html and take the spirit of this site more to heart? Note the most important one: "Be kind." That doesn't vary depending on how annoying or wrong other comments are.


> Hot take: no it isn't.

Odd. I did some Haskell in production (hardware control and Unix daemons) and it was delightful.

> It is extremely hard to learn, has an extremely confusing + needlessly complicated syntax and I question the payoff immensely.

It is unfamiliar if you have only worked in the C family of languages. The syntax is quite similar to the rest of the ML family, which dates back to 1973. It's basically as old as C. I had the good fortune of learning Standard ML around the same time as C, SQL, and Perl. Standard ML and SQL were by far the most straightforward to learn.

> If I stood up in a corporate business boardroom meeting for tech analysis on a new project and said "I want to write it in Haskell", I'd get laughed + kicked out.

Yes, you would, because why are you talking about programming languages in a boardroom? If you mean a more technical review committee, I wouldn't be so sure of that.


> The syntax is quite similar to the rest of the ML family, which dates back to 1973.

How close is it to, say, something like F# then? I've toyed with F# a bit and was able to pick it up pretty decently, but a lot of Haskell still looks foreign to me.


If you know F# you are going to feel a lot more comfortable than coming from say C/C++/C#.

The largest difference imo between F#/Ocaml and Haskell is the evaluation model where Haskell is lazy by default.


IMHO the largest difference is purity.

Not being able to mix IO and non-IO functions like this

    print "Enter name:"
    print ("Hello " ++ readStdinLine ++ "!")
in Haskell means you have to learn a new way of writing IO code.


Not really. You can mix them in Haskell too. Think of F# as Haskell except you are always working in the IO monad. Haskell doesn't require a "new way of writing IO code" at all. It just allows you to separate out the IO from the pure in a way that can be enforced by the compiler. But there is nothing preventing you from writing all Haskell code in the IO monad if you were so inclined - which is essentially OCaml/F#.


> Not really. You can mix them in Haskell too.

True, but it's not particularly syntactically convenient.


The entire point of monadic (do) syntax in Haskell is to make this convenient. I'm not following you exactly.


runeks mentioned

    print "Enter name:"
    print ("Hello " ++ readStdinLine ++ "!")
which can't be written like that with do notation. It's also pretty confusing code to write in any language, so maybe that's not a problem.


True, do notation does require you to add a line -

  main = do
      print "Enter name:"
      name <- getLine
      print ("Hello " ++ name ++ "!")
But I only said that it makes this convenient not that it allowed you to write any statement you want. It still makes it easy to intersperse pure and non-pure code.


The surface similarities between haskell and F# are fairly clear - emphasis on pure functions, no parens and commas for function calls, etc.

Alexis King wrote an interesting comment on one reason haskell can be difficult to pick up, which might be one cause of the foreignness you've noticed: https://www.reddit.com/r/haskell/comments/ddsvbk/you_are_alr...


Really close. :: between variable name and type instead of : and similarly superficial things. As the sibling comment pointed out, the semantics can take some time to wrap your brain around.


One of my colleagues was tasked (by me) with extending a microcontroller model for new controllers, including new commands, different bus widths, etc. The model was written in Haskell and parametrized by different widths - program counter width, address space width, etc. As you can imagine, it was heavy on type-level code, had monads and a hierarchy of type classes, and had to produce C and Java code for different modeling tools.

My colleague was not familiar with Haskell before; he had barely touched it. He was also recovering from a hard divorce and, as he said later, was not willing to work at all.

After a month he delivered changes needed, with tests and ready to use.

So my experience (there are several other anecdotes) directly contradicts what you say.

Now about RTS and language itself.

Haskell has greener threads than Erlang and/or Go. Go's thread stack size is 4K-8K; in Haskell it is about 1K-1.5K. In Haskell it is also possible to have a scheduler as a library (http://hackage.haskell.org/package/meta-par) for finer-grained scheduling, which is not possible with Erlang and Go.
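A minimal sketch of those green threads, using only `forkIO` and `MVar` from base (the thread count here is arbitrary):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM_, replicateM)

main :: IO ()
main = do
  -- Spawning 10,000 lightweight threads is cheap; each one reports
  -- its result back through its own MVar.
  dones <- replicateM 10000 newEmptyMVar
  forM_ (zip [1 :: Int ..] dones) $ \(i, done) ->
    forkIO (putMVar done (i * 2))
  results <- mapM takeMVar dones
  print (sum results)  -- 2 * (1 + 2 + ... + 10000) = 100010000
```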

So I question the well-being of anybody who subjects themselves to the pain and torture that is not Haskell.


Are you speaking from experience? The article is about the author’s experience from a year of using Haskell.

I don’t think it’s fair or constructive to disagree with his findings unless you also have experience using the language, and it doesn’t sound like that is the case.


So if somebody asserted that INTERCAL-72 is the proper language to use for production code, you would consider it unfair to contradict them until you've used INTERCAL in production code yourself?

Personally, I would not necessarily disbelieve somebody promoting Haskell as a production-ready language. However, I'm fairly confident that it would be thoroughly unsuited to any areas of work I regularly operate in.

I have tried a number of times to learn Haskell (from a variety of tutorials and text books). I find myself nodding along sympathetically at the fundamentals (functional purity, non-strict evaluation). I can follow the initial chapters. And somewhere between "Functor" and "Applicative", I tend to lose the plot.

Then I look at supposedly real world code like the URL shortener in Allen/Moronuki's "Haskell Programming from First Principles", and I just can't make heads or tails of it: https://github.com/bitemyapp/shawty-prime/blob/master/app/Ma...

   app :: R.Connection
       -> ScottyM ()
   app rConn = do
     get "/" $ do
       uri <- param "uri"
       let parsedUri :: Maybe URI
           parsedUri = parseURI (TL.unpack uri)
       case parsedUri of
         Just _  -> do
           shawty <- liftIO shortyGen
           let shorty = BC.pack shawty
               uri' = encodeUtf8 (TL.toStrict uri)
           resp <- liftIO (saveURI rConn shorty uri')
           html (shortyCreated resp shawty)
         Nothing -> text (shortyAintUri uri)
If that's supposed to be the way to write production code in the future, count me out!


What exactly is wrong with it? I've never even used the Scotty library or monad being used and I can understand it.


1. I have a vague idea what the second "do" is for. I don't have the slightest idea what the first "do" does.

2. I have some idea what "$" is, but I have no idea what it does in this context.

3. param "uri" is presumably extracting a parameter, presumably from the result of an incoming HTTP request. How is that request passed into "app"? How is it passed into the "get"? How is it passed to "param"?

4. Why does "parsedURI" need a type signature?

5. "Just/Nothing" more or less makes sense, but "Left/Right" (used outside the excerpt I quoted) is a notation that makes about as much sense to communicate meaning as "car/cdr" does in Lisp.

6. A response is generated. How does that response get back to the client?

7. What does liftIO do?

8. The overall syntax is just a pile of wrongness. There are left pointing arrows and right pointing arrows, which have completely different (and entirely non-symmetric) functions. There is an equals sign, which has a somewhat similar role to a left pointing arrow, but for some reason needs a completely different operator. There is a right pointing arrow in the function signature which is completely unrelated to the arrow in the case statement. There is an empty pair of parentheses doing Curry knows what. There are return values, which you can identify by following the peaks and valleys of the indentation.

So, TL;DR, this is an incomprehensible control and data flow, wrapped in an ugly syntax.


At the risk of proving your point (this part of the code is indeed a bit of a hash), but in the hope that this will help a little with learning, here are some answers:

1. The first do in your snippet does nothing - it's redundant. In the actual code you linked, though, it sequences the `get "/" ...` and the `get "/:short" ...` expressions into a single application initializer. When the app starts up (i.e. when `app` is called), this outer "do" runs through your endpoint-defining expressions and registers each endpoint in turn with the HTTP server.

2. That `get` function takes two arguments: the URI pattern and the action to run if the pattern matches. The whole rest of your snippet _is_ the second argument! Because of the low precedence of the "$" operator, using "$" means you don't have to enclose all that code in parentheses. It is a tiny bit of syntactic sugar that is wildly popular in Haskell-land because Heaven forfend any of your code ends up looking like LISP. ;-)

3. The answer is in the monad being used in this case to sequence these steps together (`ActionT Text IO <some return type>`). Think of `param "uri"` not as an expression returning a string, but a partially-constructed function that takes an HTTP request as an additional argument. The monad takes care of threading the request into this function when the request is actually handled, and `uri` will be bound to the resulting value.

4. I'm guessing it doesn't. Many type signatures in Haskell are optional, and are added if either 1) there's a genuine ambiguity in the code that needs to be clarified, 2) the author thinks it will make things clearer, or 3) the author hit a bug where the compiler inferred some crazy generic type that the author didn't intend or understand, and they put in some type signatures as sanity checks and constraints on what type is actually expected.


Thanks for your explanations! 2&4 make a lot of sense. 1, and even more so 3, lead to more questions than they answer, in my opinion. Monads appear to be an absolutely indispensable aspect of most substantial Haskell programs, but to me, they just look incomprehensible. I understand the advantages of a pure programming model for formal reasoning about programs, but in terms of communicating intent with a program, I can't help thinking that the cure offered by Monads is far worse than the disease of mutable state.


The advantage of monads is that they whisk away a bunch of boring, repetitive boilerplate code that nobody wants to write and that people will even go so far in other languages as to create a bunch of global/thread-local/hidden-context variables to avoid writing!

The downside is that the abstraction and its applications, maybe a bit like pointers, do take a while to really master. Particularly when you start composing monad types to make other monads, as Scotty does with IO.
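A concrete version of the "boring boilerplate" point, using Maybe: the monad threads the failure checks that would otherwise be nested ifs. The lookup maps and function name below are invented for illustration:

```haskell
import qualified Data.Map as Map

-- Without the monad, each step needs an explicit "did it fail?" check;
-- Maybe's bind (via do-notation) does that plumbing for us.
cityOf :: Map.Map String String -> Map.Map String String -> String -> Maybe String
cityOf addresses cities person = do
  addr <- Map.lookup person addresses  -- short-circuits to Nothing if missing
  Map.lookup addr cities               -- ditto

main :: IO ()
main = do
  let addresses = Map.fromList [("ada", "12 Elm St")]
      cities    = Map.fromList [("12 Elm St", "London")]
  print (cityOf addresses cities "ada")     -- Just "London"
  print (cityOf addresses cities "turing")  -- Nothing
```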


Look, I sympathize with you because Haskell is not easy to learn by any means so I understand someone not liking it. If I get time later I'll go item-by-item and explain. There's a lot of questions there but it's actually just a few fundamental concepts.

But I've seen engineers with only C/C++ experience get onboarded and write code with the same concepts & more complicated ones as the link shortener above. The key was onboarding by someone who knew Haskell well. Haskell companies need a culture of learning _and_ teaching to truly thrive and counteract downsides of the language like hiring pool and learning curve.


> I don't have the slightest idea what the first "do" does.

We know from the type signature that it returns a `ScottyM ()`, whatever that means. We don't know what that entails without having looked at the documentation. I don't think it's fair to criticize it on this basis, though, it seems basically the same as when you see an unfamiliar function in some python code and have to look up what it does.

However when we look at the docs: http://hackage.haskell.org/package/scotty-0.11.5/docs/Web-Sc...

There's literally no documentation for this type. Imo this is emblematic of a pretty dysfunctional culture in haskell of not bothering to document things properly.

By ctrl-f-ing "ScottyM" and looking around the page we can see that it's returned by the `get` and `post` functions, and get a vague idea that it's the type used to progressively build up a scotty web application. Then we see that the `scotty` function takes a `ScottyM ()` and returns an IO () that we can run as the main function of our program.

It can definitely be figured out, but the docs sure don't help much.

> this is an incomprehensible control and data flow, wrapped in an ugly syntax.

Parts of it are pretty bad, but I don't think the control flow being quite implicit here is actually much of a problem. Scotty is based heavily on Sinatra, a minimal ruby framework, and it kind of reminds me of Flask as well. In all of these, it falls to the programmer to decide what to return on a given route, and the sending back of the data is all handled implicitly and invisibly by the framework. So, I don't see this as an issue, and, if it is one, it isn't Haskell specific.


We can quickly find out that ScottyM is a Functor/Applicative/Monad though. And in this code (and using this library in general), that's all we need to know! FAM is a universal interface.


Knowing you can <$> <*> >>= doesn't mean you know what something does/what it's for though.


But it typically tells you how to compose such a thing, especially if it's 100% abstract to begin with (like Scotty.) It just becomes type Tetris.


As to the rest:

5. Completely agreed, especially since, semantically, it's almost always used as "Error/Success". Silly naming aside, though, it's extremely similar to Maybe, except that the failure case can contain a value (an error message, for example).
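To make the Left/Right convention concrete (the function name below is invented; by convention `Left` carries the failure):

```haskell
-- Like Maybe, but the failure case carries a value (here, an error message).
safeDivide :: Int -> Int -> Either String Int
safeDivide _ 0 = Left "division by zero"
safeDivide x y = Right (x `div` y)

main :: IO ()
main = do
  print (safeDivide 10 2)  -- Right 5
  print (safeDivide 10 0)  -- Left "division by zero"
```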

6. Think of your entire "action" (the second argument to get) as a function that takes a request and returns a response. Scotty takes care of invoking your action as necessary when a request comes in, and it also handles writing the response out for you.

7. liftIO takes an `IO String` computation as input, and transforms it into an `ActionT Text IO String` computation. This is done about the stupidest way possible: it returns an action that ignores the input request, generates a string, and "returns" it. liftIO is a heavily overloaded function that is defined for lots of monads that can easily incorporate IO computations, usually because they themselves are nothing more than dressed-up IO computations.

8. Yes, -> and <- are very different, and -> is overloaded. = and <- are very different because the latter relies on a monad to define how a value is extracted from the right-hand side and fed to the next part of the computation. The empty pair of parentheses is like void in C - a function with a return type of () doesn't return any value. In the case of our HTML actions, an action of type ActionT Text IO () may have a side effect of writing a lovely HTML response, but it doesn't produce a value like `param "uri"` does - it just does the side effect and it's done. Yes, indentation is important in Haskell, it's less regular (or you might say "more expressive") than in other languages, and in the code you linked it looks like it got jumbled somewhere along the way.


Could be a fun game. I'll guess. No haskell knowledge. PHP background.

1. Connect to remote connection. DO - wait..

2. $ Loop-over

3. It comes from a config file with a hardcoded value and is globally included

4. Because it could be reused to handle a different types.

6. Text is printing to standard output.

7. Creating and Repointing the response to the resp var


The only thing I see bad about that code is some bad variable naming, especially shorty/shawty, and possibly a couple bits of needless and unhelpful verbosity. Neither are inherent to Haskell.


> If that's supposed to be the way to write production code in the future, count me out!

Eh? Sorry but the code is really clear to me? Which parts do you find difficult to read?


This is roughly the same structure as the code you’d write in Python flask. There’s nothing unusual about it beyond Haskell’s syntax.


> It is extremely hard to learn

I picked up Haskell on my own while still getting my degree. My only training was in C. The payoff was huge within a couple months of starting (I didn't even finish LYAH), especially when writing concurrent programs.

> I question the well-being of anybody who subjects themselves to the pain and torture that is Haskell.

Haskell is so effortless once you get over the learning curve though. I've used less effort & focus programming professionally in Haskell than I ever did in other languages. The types & way programs are structured make so many tasks 100% mechanical, with your only thought being where to direct the mechanics.

> If I stood up in a corporate business boardroom meeting for tech analysis on a new project and said "I want to write it in Haskell", I'd get laughed + kicked out.

Don't care about those people's opinions in the first place. Never will :) Should be viable for me to write professional software for years to come in Haskell.


> It is extremely hard to learn, has an extremely confusing + needlessly complicated syntax and

I agree with your other points but what's wrong with the syntax? I actually really like the syntax for the most part


I've been using Haskell on and off for almost ten years, and the "front end" of the language, including its syntax, is pretty much the only thing that I absolutely hate about it.

The endless operators, all the language extensions required in 2019, the madness called Template Haskell, the absurdity of point free style, the multiple ways of achieving things by cleverly using . and $, the uncanny valley of do-notation (why does it look different? F# gets it right). It's too easy to be clever. It's too easy to take shortcuts that the next person won't understand.

I'm completely fine with the underlying semantics! Lazy evaluation, the majestic typechecker, concepts like monads, functors, lenses. Even when it comes to syntax, I love how functions are declared, the way parentheses work, currying, all the "simple stuff". I adore all those concepts and apply them to other languages. I long for a Haskell-like language without the stuff I mentioned above.

I think the problem is that in Haskell everything feels like an afterthought, everything feels "bolt on" rather than "built in". 99% of the hot stuff is implemented using libraries. That's similar to Lisp and C++, other languages I hate (I also love Lisp parens, but I hate macros). Now that I think about it, I prefer less extensible languages that require adding code to the compiler to extend it: C#, F#, Go, JavaScript.


If you can solve the problem at hand better with a library, you will solve it faster; the turnaround time for a library is much shorter than for a compiler.

So if a language provides a way to solve something with a library instead of the compiler, it offers a faster path to a solution.


>but what's wrong with the syntax?

I'll just link to this blog post about readable Haskell. If everyone followed his advice, it would be much better.

http://www.haskellforall.com/2015/09/how-to-make-your-haskel...


Thanks. That's a good blog. The $ operator is definitely confusing in Haskell. F# uses <| to indicate a backward pipe, which I find much more intuitive. For example:

    print <| even <| 4 * 2
This means: Take the result of 4 * 2 and send it to `even`, then take that result and send it to `print`. Or you can write it with forward pipes to make it even clearer:

    4 * 2 |> even |> print
The thing about backward pipes is that they're often used to avoid parentheses, which is IMHO a good idea. It's just that thinking of them as "function application" is confusing. F#'s operators makes the data flow clear.

[Side note: The mathematical symbol for function composition is the circle operator, ∘, which is also confusing for the same reason. Does (g ∘ f)(x) mean g(f(x)) or f(g(x))? Personally, I can never remember. If math used F#'s >> and << composition operators instead, there'd be no doubt that f >> g sends the output of f to g, so (f >> g)(x) means g(f(x)).]


> The $ operator is definitely confusing in Haskell

I generally think of it as a sort of open-ended parentheses that encloses the rest of the line. Instead of a backward pipe. It means "evaluate everything after this character first then apply the preceding function to it" just like parentheses would.

    print $ even $ 4 * 2
is equivalent to

    print ( even ( 4 * 2 ))


You're right, and that's why it's usually called "function application" instead of "backward pipe". However, using $ as an operator is confusing because the associativity is ambiguous if you're not already comfortable with it.
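Concretely, ($) is declared infixr 0 -- right-associative with the lowest possible precedence -- which is what resolves the ambiguity:

```haskell
-- Because ($) is infixr 0, the first line parses exactly as the second.
main :: IO ()
main = do
  print $ even $ 4 * 2   -- True
  print (even (4 * 2))   -- identical parse, also True
```

Once you know it's infixr 0, "everything to the right is one argument" falls out of the fixity declaration rather than being a special rule.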


> Does (g ∘ f)(x) mean g(f(x)) or f(g(x))?

The function composition operator can be vocalized as, in this case, "g after f". That is, apply the f and then g. Translating '∘' into 'after' is the easiest way to understand it.


Alternatively, I learnt it as "of" (from a mathematical standpoint). "(g ∘ f)(x) " is "g of f of x", so g(f(x))


Haskell provides the (g . f) notation as the standard, and also has (f >>> g) notation buried in one of its more advanced libraries if anyone finds the right-to-left reading too disorienting.
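(>>>) lives in Control.Arrow, which ships with base; a tiny sketch of the two reading directions, with function names of my own choosing:

```haskell
import Control.Arrow ((>>>))

double, increment :: Int -> Int
double    = (* 2)
increment = (+ 1)

main :: IO ()
main = do
  print ((increment . double) 4)    -- right-to-left: increment (double 4) = 9
  print ((double >>> increment) 4)  -- left-to-right: the same pipeline, also 9
```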


I don't see the above better than

    print(even(4*2))


Yeah, that's a small example, so it's fine either way. But larger examples with lots of parentheses can be overwhelming (see: Lisp).


Note that this is to make Haskell more readable to non-Haskell programmers (which is a bit of a weird goal IMO). It is not for readable Haskell in general. Advice like "not using $" just makes code less readable if you're familiar with Haskell and is quite frankly just bizarre. I do not think you'll find many programmers, Haskell or otherwise, who find lots of nested parentheses to be the paragon of readability.¹

If the goal is to show snippets of cool Haskell code to other people this article is… fine I guess? But I don't think it's good for much more. Not even for teaching Haskell, since people should probably learn the language as it is used instead of some arbitrary "more readable to non-Haskell programmers" version. Using do-notation is probably even counterproductive for teaching since it obscures what's actually going on.

Honestly this sentiment from the Haskell community that Haskell is somehow bizarre and impenetrable to outsiders and needs to be somehow watered down so that normal people can understand it just feels extremely elitist and if anything only scares people away from Haskell.

¹ Lisp programmers aside, but even then in Lisp you're basically just drawing a tree and so the parentheses kind of fade away, but in languages with infix syntax and operator precedence you can't ignore them.


>Note that this is to make Haskell more readable to non-Haskell programmers (which is a bit of a weird goal IMO). It is not for readable Haskell in general. Advice like "not using $" just makes code less readable if you're familiar with Haskell and is quite frankly just bizarre

This is why no one likes Haskell programmers and why the community just sucks.

You're telling me that it's _more difficult_ for a Haskell programmer to understand parentheses than for a non-haskell programmer to understand functors?

I can't think of a single haskell programmer that doesn't also know at least 2 C-style languages.

Meanwhile, there are many C-style programmers that don't know Haskell at all.

If the goal is to remain an impenetrable fortress / secluded monastery, the Haskell community is a safe bet for the Benedict Option[1]. But if the goal is to become popular and get others to understand the benefits of Haskell, it seems to be a culture problem within Haskell, more than anything else.

[1]: https://www.amazon.com/Benedict-Option-Strategy-Christians-P...


> This is why no one likes Haskell programmers and why the community just sucks.

I like most Haskell programmers and I don't think the community sucks more than any other language community. So I guess that proves that assertion wrong.

> If the goal is to remain an impenetrable fortress / secluded monastery

I am pretty sure there is no such goal :D

> if the goal is to become popular and get others to understand the benefits of Haskell

I am pretty sure that is a goal for some Haskell people and not for others. For me personally I had a lot of fun learning Haskell. It taught me to become a better developer in any programming language.


> Honestly this sentiment from the Haskell community that Haskell is somehow bizarre and impenetrable to outsiders

They’re not inventing that from thin air. I can understand Haskell with a great deal of effort, but it certainly requires a great deal of effort.

My wife is a non-programmer. She knows nothing about code. I could explain what the Go code I write does in a few minutes and she could follow it with a minimum of hand-waving. I don't think I could explain any non-trivial Haskell code to even professional programmers in hours.


I have explained Haskell code in a few minutes to professional programmers and they got it. It took a bit longer to explain monads, but it wasn't hours.


I have tried to teach C# to a smart non-programmer friend. That was really, really hard. The idea of OO/classes just blew his mind.


I'd quibble with this, but that's probably largely because I used Hudak's "Haskell School of Expression" book as my road into Haskell, which seems to generally aim for readability at the expense of making everything into a monad. I gather it would be considered a bit of an outdated style now, but in retrospect I think it eased me into the pool.


> Honestly this sentiment from the Haskell community that Haskell is somehow bizarre and impenetrable to outsiders and needs to be somehow watered down so that normal people can understand it just feels extremely elitist and if anything only scares people away from Haskell.

What? The thread we're in is descended from a post about how terrible the syntax is!


Awesome advice! Thank you for the link.

Sadly, the author of "Haskell in production" breaks many of the rules in part 2, and the code reads like ASCII art :(


Most people that have worked as developers/programmers/etc. that don't have a lot of exposure to or training in fp find it painfully hard to learn. This could be one of those cases.


To them, I would suggest that they remember back to when they learned their first imperative language like Java or C. For me, that was really difficult to wrap my mind around at the time, but now those languages are second nature to me. I kind of suspect that if everyone were exposed to a functional language first, there would be a lot fewer people who think they are extremely hard to learn.


I fondly remember being confused by “return” for weeks (granted, I was 9). What are you returning? And to where? And why was it only one thing? And where did it go when it got there?

It’s easy to forget how hard something was to learn once you already know it.


I remember when I started with Java at University.

Like, "hello, what is a method? What the hell is a class? Come on, stop kidding me, objects have methods???? How do I call them? Wait, how do I _write_ a method?"


What I kept bouncing off of was all the stupid as shit "sally sends a BUY message to Tommy's bagel shop" crap. I just needed about two to four pages worth of text & diagrams demonstrating how objects and structs are pretty much the same thing, and what the OO sugar inheritance and keywords and dynamic dispatch and such do under the hood, which ain't that complicated. Every single book I picked up (back in the '90s, when if you didn't know someone to personally recommend a book it was much harder to figure out which ones were decent and which sucked) wanted to use those awful, cutesy analogies to introduce OO and they just confused the hell out of me, and convinced me something dead-simple was hard.


There are many reasons Haskell is difficult to learn, and the functional programming aspects are only part of it. I'd venture a guess that it's vastly easier to go from say, python to clojure, than from python to haskell. Even neglecting clojure's impure escape hatches.


If you are learning something new and it isn't hard to learn then you are not learning anything truly new. Just variations/flavours of what you already know. If it is really hard then you are truly learning something new. No pain no gain.


Haskell syntax is great except for the indentation/layout rules. There is no simple definition of the grammar with regards to the layout. It's only specified as a headache-inducing operation of a state machine [1], and the implementation is quite hard to read. This is fine for writing Haskell, but quite painful if you want to parse it.

[1]https://www.haskell.org/onlinereport/haskell2010/haskellch10...
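To illustrate: the layout algorithm ultimately just inserts the braces and semicolons you could have written yourself, so these two definitions of mine parse identically. The pain is entirely on the parser's side, in deciding where those insertions go:

```haskell
-- Layout-sensitive version: the parser inserts { ; } from the indentation.
withLayout :: Int
withLayout =
  let x = 1
      y = 2
  in  x + y

-- Explicit-brace version: what the layout rule desugars to.
withBraces :: Int
withBraces = let { x = 1 ; y = 2 } in x + y

main :: IO ()
main = print (withLayout, withBraces)  -- (3,3)
```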


I'm not the OP but I would agree and my problem with the syntax is the number of symbols that don't make the semantics clear at all. The Lens library for example, but also a lot of standard stuff like the bind operator, it's extremely complex. Many people also seem to go overboard on the dot notation which can be very hard to read.


++ >> >>> >>= `concat` .
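In case that list reads as line noise: each of those really does do something different. A quick sketch using only base (plus (>>>) from Control.Arrow):

```haskell
import Control.Arrow ((>>>))

main :: IO ()
main = do
  print ([1, 2] ++ [3])           -- list append: [1,2,3]
  print (concat [[1], [2, 3]])    -- flatten one level: [1,2,3]
  print (((+ 1) . (* 2)) 4)       -- compose right-to-left: 9
  print (((* 2) >>> (+ 1)) 4)     -- compose left-to-right: 9
  putStrLn "a" >> putStrLn "b"    -- sequence actions, discard first result
  return "bound" >>= putStrLn     -- bind: feed one action's result to the next
```

Whether that zoo is a feature or a bug is exactly what this thread is arguing about.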


Hard disagree on the syntax. ML languages to me have the best syntax and their ease of currying is extremely attractive.


Syntax is the absolute best thing about Haskell and its ML relatives, especially coming from C++ and Java as I did. It's succinct and very easy to read, without braces, parens, and type annotations scattered everywhere. For example, defining a function is as simple as "let f x = ..x.."

The succinctness of Haskell really matters when dealing with non-trivial code or types. Just compare the type signature of "SelectMany" in C# to an equivalent in Haskell.


"Complicated" and "weird syntax" are pretty subjective measures to define how good a programming language is. C++ is weird and complicated to me.


> I question the well-being of anybody who subjects themselves to the pain and torture that is Haskell.

I question the well-being of anybody who refuses to learn anything that is hard to learn. Learning is like bodybuilding. No pain, no gain.


That's a really strong opinion to present with absolutely no evidence.


I will probably get stomped on for this but to me it's a giant elephant in the room.

When that one purist on the team proposes mandating having 100%, 80%, or any hard number of unit test coverage, most sane people tend to disagree and understand its wild impracticality and counter-productivity.

However - when one (ie, a Haskeller) says that the code must universally pass 100% strong static type verification coverage -- arguably SO MUCH worse cost/benefit than unit testing -- and you say "no that's impractical" it's suddenly a different game.

The elephant in the room is this. Haskell and strongly typed languages are highly impractical, academic pursuits. They do not at all respect the dynamism of the real world. You can certainly build production systems in any language including Haskell but by picking Haskell you are throwing sticks between your legs unnecessarily.


Just because an expressive static type system is available, doesn't mean you have to use all of its expressive power. There's nothing stopping you from writing your system using little more than String, Int, lists, IO etc. Of course, you probably have a higher chance of bugs, but that might be the right tradeoff for your usecase.

I agree that haskellers have a tendency to disregard the cost of using sophisticated type machinery. Which is unfortunate because haskell's greatest strength (IMO) is its ability to opt in to strong static verification for the 10% of code where it provides the most value.


> Just because an expressive static type system is available, doesn't mean you have to use all of its expressive power. There's nothing stopping you from writing your system using little more than String, Int, lists, IO etc. Of course, you probably have a higher chance of bugs, but that might be the right tradeoff for your usecase.

This is not accurate, or prove me wrong. Opting out of Haskell's type system may be "do-able", but only in a sense. And even so, the programming ergonomics will be terrible vs. a language designed with typelessness in mind.


> And even so the programming ergonomics will be terrible vs. a language designed with typelessness in mind.

Not my experience at all. I switched from Javascript to Typescript and I am way more productive now. So it is definitely not in general true that "the programming ergonomics will be terrible vs. a language designed with typelessness in mind".


This is opinion. Haskellers have opinions too. Whatever you like and whatever gets the job done: bravo


Nope. Not everything is "every way is just as good as the other".

>Whatever you like and whatever gets the job done: bravo

"The job" is this: it's an optimization problem of maximizing development throughput. That's the job in the business world, at least.

There can and should be a conversation about doing this job well.

Mandating strong static typing across the board is exactly what Haskell does. My claim is this is death to productivity/throughput when you compare it to alternatives. The death part can be argued -- the best leg Haskellers can stand on, afaict, is that overall throughput is increased due to decreased attention to the kinds of bugs prevented by strong typing. This needs to be proven by them.

Otherwise -- by definition -- strong type systems are asking you to take special care to meet a type proof that is not necessary to do to implement correctness in other PLs.

Every time I've implemented a solution in a strongly typed system I always run into the type prover complaining about something that Might be but that I can prove is not the case. Here is the limit of the type checker - it is simply not aware enough to understand what we're doing. So it puts needless requirements left & right. This is hostile to productivity.

Now you go.


I believe you're exaggerating the frequency that the compiler rejects valid programs. It happens, but no more frequently than random runtime type errors occur in dynamic code.

In any case, one can just as easily argue that the value of static types come not just when the code is first written, but under the legion of modifications that need to occur. Not to mention how self documenting it is which also aids modification.

Some people just don't like static typing, which is fine, but making the statement that it's categorically worse is hard to defend.


This is not an opinion thing. If you’re saying that type checkers can reject code at no worse than the same rate as dynamic runtimes then you are just plain wrong. In any case the burden is on you to show this with the slightest sketch of a proof to how this is possible. And what is this magic type system that can do this, bc it sounds like some super AI.

The reason I’m confident in my objections is because this is what I get from static typing fundamentalists: never a concession as to its cons and costs. Is it that static typers can’t see the relative costs bc they are not using the full power of dynamic languages?


> the full power of dynamic languages?

There is nothing you can do in a dynamic language you can't do in a typed language. There is no "special power" that only dynamic languages have. If you truly believe that dynamic languages have "special powers" that typed languages can't have then I recommend studying a bit of basic computer science. Starting with the Turing Machine or Lambda Calculus.


> There is nothing you can do in a dynamic language you can't do in a typed language.

There is nothing you can do in a higher-level PL that you can't do in assembly.

The argument is not what one can or can't do.

The argument is that one world (dynamism) allows for higher throughput than a static-checked environment. This can be discussed but the "can do" argument is of no interest.

You "can do" anything given enough time. One doesn't even need a programming language. You can manually execute instructions by hand. But this is not the discussion.


> The argument is that one world (dynamism) allows for higher throughput than a static-checked environment.

I have programmed in both dynamic languages and typed languages for 35+ years. And I am most definitely more productive using a typed language. Otherwise I wouldn't. So I wonder why our experiences are so different?


> This is not an opinion thing.

Of course it is an opinion thing.

> never a concession as to its cons and costs.

Pretty much what I get all the time from people who don't know how to use type systems productively, and then think that because they can't nobody else can. If types doesn't work for you, don't use them. They work for me so I use them.


I can use strong typing productively. I can use dynamic systems _way_ more productively.

When all things are equal, the impedance is not from lack of skill. It is staring at you straight in the face: it's a rules-based type prover that -- by definition -- significantly restricts the set of valid programs.

This is the definition of a type prover.

The argument should be that the value of this restriction outweighs the cost. But no one is yet making this claim. Which should make one wonder.


> But no one is yet making this claim. Which should make one wonder.

I for one am most definitely making that claim, based on 35+ years of programming in both unityped and typed programming languages. But clearly your experience is different.


> My claim is this is death to productivity/throughput when you compare it to alternatives.

A claim with no evidence to back it up. My experience (for example) is exactly the opposite.


If it happens every time then it should be easy for you to give a small example. Please do! It’s very hard to understand what you mean otherwise.


Pattern matching for one. I can prove that 3 cases won’t be met but I have to ‘fill in’ them anyway.

ADTs with non-Maybe fields. I can prove that all is fine if this ‘slot’ isn’t filled in.

Trying to pass an opaque reference down a function call chain.

There’s 3.


Sure, thanks, I know where Haskell's type system will ask you to try again if you haven't provided all the information it wants. I'm asking for examples where it genuinely wasn't sensible to change your code to match what the compiler wants.

> Pattern matching for one. I can prove that 3 cases won’t be met but I have to ‘fill in’ them anyway.

This is a funny one because, no, GHC will not ask you to fill them in! You have to turn on a warning for it to complain. The fact that the warning is not on by default is one of my pet peeves with GHC!
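For reference, the flag in question is -Wincomplete-patterns (enabled by -Wall, but not in GHC's default warning set). Without it, a partial match like this made-up example compiles without complaint:

```haskell
-- Without -Wincomplete-patterns this compiles silently; calling
-- describe Nothing would throw a pattern-match failure at runtime.
describe :: Maybe Int -> String
describe (Just n) = "got " ++ show n
-- the Nothing case is deliberately omitted

main :: IO ()
main = putStrLn (describe (Just 3))  -- got 3
```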

> ADTs with non-Maybe fields. I can prove that all is fine if this ‘slot’ isn’t filled in.

Another funny one, because you can just use `undefined` or `error` and GHC won't complain! Personally I'd probably prefer that it did.

> Trying to pass an opaque reference down a function call chain.

I don't even know what you mean by that.


> Pattern matching for one. I can prove that 3 cases won’t be met but I have to ‘fill in’ them anyway.

>> This is a funny one because, no, GHC will not ask you to fill them in! You have to turn on a warning for it to complain. The fact that the warning is not on by default is one of my pet peeves with GHC!

Okay, so let's do the inverse then. I've got a pattern matched case that the type prover proves will never occur (so it throws type error) but I want it to occur.

The answer I know is obvious: I have to add it to the type union. But it should be obvious at this point that the moves I am making are not driven by the outcome I want but by the rules of the type checker. At some point you have to say, Why am I playing these games? Do they really yield tangible, business-facing value?

> ADTs with non-Maybe fields. I can prove that all is fine if this ‘slot’ isn’t filled in.

>>Another funny one, because you can just use `undefined` or `error` and GHC won't complain! Personally I'd probably prefer that it did.

But I don't want an error. I can prove that the absence of the field is OK/not-an-error.

If you say then make it a Maybe, I will tell you sorry I don't have the time to update my entire code-base to contend w/ this novelty to the type checker (I say to the type checker, but this is never a novelty to me and my dynamic code).

> Trying to pass an opaque reference down a function call chain.

>> I don't even know what you mean by that.

my value --(flows at runtime through)--> [some framework, say a web framework code to a, say, a Label that has no idea about my value's type domain] --(flows through pixels to)--> user's eyes


It's really interesting because I hear these kinds of claims a lot from Clojure programmers. How did they arise in the community? I find it hard to believe that so many Clojure programmers have actually tried Haskell and come to exactly the same conclusion.

Does Rich Hickey repeat these points somewhere? If so could you point me to where? I'd love to see them. I'm a big fan of Are We There Yet, Simple Made Easy etc. and find Haskell a great implementation of them.

> I have to add it to the type union. But it should be obvious at this point that the moves I am making are not driven by the outcome I want but by the rules of the type checker.

That's not how I see it at all. I wish I could understand why you do see it that way!

> But I don't want an error. I can prove that the absence of the field is OK/not-an-error.

You misunderstand. If you know that a field of a data type is going to be unused you can fill it in with an `error` call that you know will never be hit. You can even omit the field entirely! That's considered bad form but if you like it then knock yourself out.

    data Foo = Foo { bar  :: Int
                   , baz  :: String
                   , quux :: Bool
                   }
    
    doSomething :: Foo -> IO ()
    doSomething foo = do
      print (bar foo)
      print (baz foo)
    
    example1 = Foo { bar  = 42
                   , baz  = "Hello"
                   , quux = error "quux will never be inspected"
                   }
    
    -- > doSomething example1
    -- 42
    -- "Hello"
    
    example2 = Foo { bar  = 42
                   , baz  = "Hello"
                   }
    
    -- > doSomething example2
    -- 42
    -- "Hello"

> my value --(flows at runtime through)--> [some framework, say a web framework code to a, say, a Label that has no idea about my value's type domain] --(flows through pixels to)--> user's eyes

What's wrong with a polymorphic parameter?
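I.e., something like this sketch (the function names are made up): the intermediate "framework" code never inspects the value, it just carries it to a sink, so it needs no knowledge of your value's type domain:

```haskell
-- A hypothetical framework hook: the type variable 'a' is fully opaque
-- here, so the framework cannot depend on anything about the value.
passThrough :: a -> (a -> IO ()) -> IO ()
passThrough x sink = sink x

main :: IO ()
main = do
  passThrough (42 :: Int) print     -- the "label" renders 42
  passThrough "hello"     putStrLn  -- works unchanged for any other type
```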


What I find crazy about these conversations is that I will say “I don’t like that I have to do X...”

And the response is always “But that’s easy you just do X...”

The argument is not that you can’t do some task in Haskell.

With enough time you can do anything in any language.

That’s the given.

The argument is I want throughput.

If I’m doing X I want to do it because it is necessary to the problem solution.

This is why you hear these laments from Clojure programmers. Because when we code we are doing the minimal amount of work to produce correct programs.

Strongly typed systems add fanfare that Clojure programmers have found no need for.


Thank you for your response.

I'm afraid I don't understand. Your point seems backwards. On the specific point in question you said (paraphrasing) you don't like filling in a field of an ADT if you can prove it's not used. I pointed out that in Haskell you indeed need not fill it in. You said (paraphrasing) you don't like matching on a constructor if you can prove it's not used. I pointed out that in Haskell you indeed need not match on it. You don't want to do X. I'm telling you that you don't need to do X!

Anyway, I'm more interested in the general point. Where do Clojurians get these ideas about Haskell? Are there specific blog posts or videos you can point me too? I'd rather confront the general case than specific instances buried deep down in forum threads.


Thanks for your reply, as well.

But there’s no indoctrination going on. It’s right there:

> If you know that a field of a data type is going to be unused you can fill it in with an `error`

I don’t want to fill anything in.

I don’t want to have to tell the checker what I know and the world already knows just because it thinks it needs to know it.

There’s no indoctrination. We don’t have to “fill in” anything in Clojure.

I wonder sometimes if this is a thing where you can’t see the air because it’s all around you?

In static-type space you are (by definition) a slave to the type checker. Sometimes static-type enthusiasts will say they lean on the checker, it helps them. But I have a weird sense that this is a kind of Stockholm Syndrome.

Sorry I’m a bit caustic sometimes, but it’s all good natured I promise - the hope is it draws out a point that will prove me wrong.

You clearly know your domain well, and I appreciate the back & forth. But I do stand by my position (until knocked out of it.)


> Thanks for your reply, as well.

I'm glad we're still going deep down in this comment thread after everyone else has gone home.

> > If you know that a field of a data type is going to be unused you can fill it in with an `error`

> I don’t want to fill anything in.

You don't have to! I said as much a couple of comments back "You can even omit the field entirely!". I gave a code example demonstrating that.

Anyway, (what I assume to be) your larger point still stands, i.e. that you don't even want to have to define the data type upfront.

I'm a huge fan of Rich Hickey's talks "Simple Made Easy", "Are We There Yet?", etc. Haskell is such a good implementation of his philosophy, yet he doesn't seem to realise it. Clojure fans are some of the most vociferously opposed to Haskell.


Sorry for misreading your prior point about omitting the field entirely.

But let's set that aside and focus on the pre-declaring of your ADTs, as you suggest.

This is no trivial matter. In fact this is the crux of the issue if not a large part of the issue.

Pre-declaring what knowledge is "allowed" to flow through your system -- even if you throw all other objections away -- this one is still an enormous blocker for the Clojurist. (Well, I should only speak for myself at least, but I suspect.)

Clojurists rely almost universally on ad hoc knowledge (ie dynamic maps), with zero ceremony required before getting to introduce and capitalize on that knowledge, and Clojure is the perfect 'glove' for handling these ad hoc packets of knowledge; it was explicitly designed, afaict, around this exact way of handling data.

So if Haskell is not oriented around this same construct then I don't think your read that Hickey's philosophy is much-realized by Haskell is accurate.

You can take this further and make the very obvious point that in real life we never ever have to go through any pre-declarations or ceremony like this.

And in my experience, if productivity in coding is a function of anything, it is _how close the programming model is to your natural way of thinking about the world_. When that cognitive impedance is reduced, massive creativity is unleashed.


> Sorry to mishear your prior point about omitting the field entirely..

That's OK. Forum discussions are not the highest fidelity communication medium.

What I mean about Rich Hickey's philosophy and Haskell are things like the quotations below. I think they're great insights and they apply really well to Haskell, at least to Haskell written in the way that a significant subset of us like to write it. On the other hand I'm not much convinced by arguments about the "natural way of thinking about the world". Firstly, imperative programming proponents are convinced that the world is imperative and it is natural for humans to think imperatively. Secondly, I became a much better programmer using Haskell than using Python, despite Python naturally "fitting my brain" (a popular, and valid, quote amongst Python programmers) and me having to "fit my brain to" Haskell.

Anyway, that's mostly just my opinion. On the other hand, an objective technical innovation that could bring a huge amount of ergonomic benefit to Haskell is a type system feature called "row types". That would allow us to have anonymous records which we could add fields to and delete fields from at will, just like in an untyped language, whilst also providing the type system guarantees that we're used to. I think that would be hugely beneficial but for some reason that doesn't seem to be on the horizon.
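For a concrete flavor of what anonymous, extensible records could look like, here is a rough sketch borrowing the operators of the third-party `row-types` Haskell library (an approximation of row types in today's GHC, not a language feature; the exact syntax is quoted from memory and may differ in detail):

```haskell
{-# LANGUAGE OverloadedLabels #-}
import Data.Row.Records

-- An anonymous record: the type tracks the exact set of fields.
person = #name .== "Ada" .+ #age .== (36 :: Integer)

-- Extend it with a new field...
withEmail = person .+ #email .== "ada@example.com"

-- ...or delete one; the field set in the type changes accordingly.
anonymous = person .- #name
```

Accessing a field a record does not have is a compile-time error, so you keep something of the ad-hoc-map feel while retaining the usual static guarantees.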

====

Are We There Yet?

* "Someone who's building houses out of bricks does not have to worry about the insides of the bricks"

* "The best units for that are the pure functions. Takes immutable values, does something with them, has no effect on the outside world. Returns another immutable value."

* "Mutable state - it makes no sense. Mutable objects - they make no sense."

The Value of Values

* "Place itself has no role in an information model. It is only an implementation detail."

* "Values can be shared freely and the way you share them is by aliasing them."

* "Reproducible results are another great benefit of values."

* "Easy to fabricate"

* "Values make the best interface"

* "Values aggregate ("compose") PLOP doesn't compose"

The Language Of The System

* "What's wrong with SQL is that there's no machine interface to SQL. They only designed a human interface to SQL." (foreshadowing embedded domain-specific languages)

* "We want to seek out and build libraries and languages that are like instruments in all the ways I just talked about, in particular the simplicity aspect."

====


I take your point that all of these Hickey comments resonate with ideals of Haskell.

The last piece of the puzzle would be to discuss why Hickey -- in addition to these points -- is also overtly against static typing and the kinds of idioms that show up in static-type-oriented programming (like ADTs and Maybes etc. -- he rails against these as well; see the "Effective Programs" talk for one citation).

So we may be at the point where we're both like "Wow, Hickey is so right about all these points" but where you depart from him on this static typing question. (Which should be somewhat interesting: it is such a big/fundamental thing, and his experience has led him to all these great points, and yet he is making a huge misstep in his opinion about types? This is no point to 'win' the argument on or anything, but it should be of interest.)

Anyway -- so we have these problems with ADTs and Maybes and such and then perhaps we can get past those and say well what about "row types"?

If we squeeze the toothpaste tube down to this point where we can say, yes, no ADTs, just row types... then we're not far off. But it should also be noted that at this point the delta between statically typed and not is small enough that the comparison is of little interest. It's pretty easy at this point (it seems to me) to be like, cool, this type system can tell me if I try to access an "age" property as a String.

Okay I guess that's kinda cool but I think the benefit at this point is not earth shattering.

And I also think at this point that I, for one, would get tired of having to explicitly do type coercions when indeed I did want a nil or I did want a String.

I would at this point just keep squeezing the tube until I had Clojure.

In other words, I think Rich Hickey didn't just get those other points right and this one wrong; I think he really did factor in the whole picture. I think he made the right choice, and he stands by it, of course, after years of Clojure's existence. And he is certainly one to keep revisiting a problem when he thinks it's not quite right. I also agree: I think his aversion to types is correct from a practitioner's perspective.

Obviously this stuff has some subjectivity to it (in a sense), but I also participate in these conversations so fiercely b/c it's not just subjective. There is a real consequence in terms of hours-to-delivery that I really do think matters. And I think you can build stable, correct programs (this is demonstrably true) w/o having to contend with a type checker. And no matter which way you slice it, you do have to pay for the type checker.

The kinds of things that static typers usually say they're getting from the type checker, I have just never found a need for. And when I've been forced to use a type checker, my productivity increases only insofar as I can find a way to work around the type checker. All of this AND I'm running production, serving customers with demonstrably stable/correct software. So what is the type checker going to give me?

To the point of the cognitive impedance between programming and thinking -- I don't really care when people claim that XYZ is more conducive to thinking. To say we think imperatively (and I know people say that) is just wrong. I'm not talking about how novice programmers want to program. I'm talking about how we as people in the real world think -- how we talk, how our brains work in general. We think very declaratively; imperative thinking slows down most if not all human brains. Also, we think in terms of vocabularies (words with semantics)... we do not think in ADTs and taxonomies of types. Whenever I'm forced to think in ADTs in a code base my creativity sinks to the floor, because all an ADT is is someone's conception of what some idea was at a given (static) point in time. So when I think of a creative use for a concept, I often can't pursue it if an entire code base is predicated on a static conceptualization.

The static typer will claim: "Look, you can change it and everything will break and the compiler will tell you where to fix it." This is looked at as a feature. It is not a feature. In my dynamic code base these changes are severely localized, so that a creative decision over here does not cause a production capability over there to fall over.

Sorry if this is a bit abstract. Wondering if any of it resonates or if it is coming across as too abstract. To me it's very tangible in practice. I want to be able to make ruthless, localized changes in my code bases without the entirety of it breaking. This latter thing is usually what happens in programs that are "wholly proved" and where every piece of code is connected to most other pieces via... types.


At this point I wonder if we're at the end of our journey, at least for now. I get so much value out of Haskell's type system that I can't ever imagine giving it up. Row types would be the cherry on top (if they're ever implemented). It seems that you get so much value out of the lack of a type system that you can't imagine ever giving it up. I have no basis on which to understand this point of view; it just leaves me dumbfounded. Perhaps it doesn't help that my most familiar point of reference for untyped programming is Python. I guess Clojure is very different.

> In my dynamic code base these changes are severely localized so that a creative decision over here does not cause a production capability over there to fall over.

This is a very specific claim which it might be helpful to dig into further. How can you ensure that removing a field from a record at the start of a pipeline will not cause a function at the end of the pipeline to fall over, since it was expecting to see the field but didn't (and the intermediate parts of the pipeline were completely agnostic to it)?
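In Haskell terms the scenario looks like this (a minimal sketch with hypothetical names):

```haskell
-- Hypothetical record flowing through the start of the pipeline.
data Order = Order { orderId :: Int, discount :: Double }

-- The function at the end of the pipeline depends on `discount`.
total :: Order -> Double
total o = 100 * (1 - discount o)

-- If someone deletes `discount` from Order, `total` stops compiling,
-- even though every intermediate stage is agnostic to the field.
```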


Firstly, I think there's some sort of time limit on posting on HN stories. If we get cut off then please email me because it would be interesting to finish off the discussion!

http://web.jaguarpaw.co.uk/~tom/contact/


Right. And when another programmer changes the ADT so that this is no longer true, your dynamic language will now crash in production instead of telling you right away, before you put it into production? Or are you telling me that your code always has 100% test coverage, checking all possible execution paths for all pattern matches?
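A minimal sketch of the Haskell side of this, with a hypothetical sum type (the warning flag is a real GHC option):

```haskell
{-# OPTIONS_GHC -Wincomplete-patterns #-}

-- Hypothetical sum type.
data PaymentMethod = Card | Invoice

describe :: PaymentMethod -> String
describe Card    = "card"
describe Invoice = "invoice"

-- If another programmer later adds a `Crypto` constructor, GHC warns
-- (or errors, under -Werror) at this match and every other one,
-- before anything reaches production.
```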


Here's another that kills me every time: I want to compose where I know types are mismatched and I am okay with a runtime fault downstream.

This is hugely powerful for moving very fast in, e.g., a REPL-driven development environment where you build and verify pieces of code incrementally.

Not being able to do this is an enormous productivity killer, relative to, say, a dynamic language like Clojure.


> I want to compose where I know types are mismatched and I am okay with a runtime fault downstream.

Sounds like another good candidate for `error` or `undefined`.
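For instance, you can defer a known mismatch with an explicit stub, or defer all type errors wholesale with a GHC flag (both are real GHC features; the code is a sketch):

```haskell
-- Stub out the piece you know doesn't fit yet; it compiles and only
-- crashes if the stub is actually forced at runtime.
parse :: String -> Int
parse = undefined

-- Alternatively, -fdefer-type-errors turns *all* type errors into
-- runtime faults, which is close to "compose now, fail downstream":
--   ghci -fdefer-type-errors Scratch.hs
```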


And these are insanely expensive to deal with over time when you compare to the alternative.


> insanely expensive

Wow that sounds really expensive! How did you actually measure that in the real world?


I spent too many years programming in all kinds of dynamically typed and statically typed systems and I kept a clear head and didn't cling to a blind religious persuasion. I've also watched the performance of static type enthusiasts closely.

I thought that when I updated a type, all of my running around fixing type errors had to do with me missing something, but alas that was not the case. This is how the static enthusiasts do their work, all the while claiming that they are being more productive. When I call a static typer out on this, the answer is always the same: "But you'd have to go update those case statements/Maybe matches/type signatures anyway!" Meanwhile I'm over in a dynamic system not having to do any of that. Refactors are minimal, not cross-the-entire-codebase.

Static type systems are training wheels. When a static type enthusiast takes his training wheels off, the bike falls over and s/he screams "See! I told you. I'm more productive with types!" And then they put the training wheels back on and think that they have proven that static verification is superior. What they don't know is that there is enormous missed opportunity once you learn how to ride a bike w/o training wheels.

The truth is you can produce much faster without a rules-based static type prover between you and your runtime environment, if you know what you're doing. This should be obvious b/c in one world you have a filter/prover that you must pass to ship; in the other world you do not. If the filter/prover were worth its weight, then you should be able to clearly see the win. But in business systems the kinds of bugs found by filter/provers are of the "null pointer" variety -- the fastest and simplest bugs to fix when they are found; having a heavyweight filter/prover save me from these simplest of bugs is simply not worth it.


Look I have been programming for 35+ years in both dynamic and typed languages. And my experience is that I am more productive when using a typed language. It sounds as if your experience is the opposite. I wonder why our experiences are so different? It would be interesting to find out.


If you have a place where you emit sum types, and you increase the set of possible cases you can emit then you do have to cross-the-entire-codebase to update all consumers of that sum type to handle the new case.

If you have a place where you consume product types, and you increase the set of fields that you inspect on that product type, then you do have to cross-the-entire-codebase to update all producers of that product type to emit the new field.

I don't see how it could be any different. How would you handle this in Clojure?


astuary is not suggesting opting out of Haskell’s type system. He/she is suggesting using constructs whose interaction with the type system is straightforward.


> SO MUCH worse cost/benefit than unit testing

My experience is exactly the opposite. Using the type system is less effort, guides your development more and, in a way, provides more consistent guarantees than spending time on a wide unit test coverage.

Obviously, these things are not totally mutually exclusive though.


If you are truly doing an apples-to-apples comparison here, I suspect you're using a strict subset of your dynamic language. Otherwise I can't see how the addition of a rule-based static-check pass/filter makes you faster.

What are the two static & dynamic languages you’re comparing by the way?


For example, right now I'm building a Servant API in Haskell. It's basically a strongly typed API that you don't need to validate with tests. You only need to test the handlers that handle business logic.

In other languages with less sophisticated type systems I would need to build validation functions manually and obviously test them. Hell, an interpreted language might even boot up with functions that have faulty code, ultimately producing runtime errors when invoked. Haskell generally would not (because of the type system). Adding type constraints basically limits the range of possible inputs/outputs, requiring you to write far fewer tests.
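For reference, a Servant route looks roughly like this (hypothetical `User` type; serialization instances omitted):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeOperators #-}

import Servant

-- Hypothetical payload type; a ToJSON instance would be derived in practice.
data User = User { userName :: String }

-- GET /users/:id, returning JSON.
type API = "users" :> Capture "id" Int :> Get '[JSON] User

-- A handler that ignores the captured Int, or returns the wrong type,
-- simply doesn't compile, so the routing/decoding layer needs no
-- separate validation tests.
```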


My examples would be Javascript vs Typescript. I am SO much more productive in Typescript than in Javascript, and the number of bugs has dropped substantially.


> by picking Haskell you are throwing sticks between your legs unnecessarily.

You clearly have no idea how types work and how to use them to make you more productive as a developer. Being able to switch from Javascript to Typescript made me so much more productive.



