Haskell is our first choice for building production software systems (foxhound.systems)
269 points by Albert_Camus 14 days ago | 291 comments



> Many programmers encounter statically typed languages like Java or C++ and find that the compiler feels like an annoyance. By contrast, Haskell’s static type system, in conjunction with compile-time type checking, acts as an invaluable pair-programming buddy that gives instantaneous feedback during development.

the reason they "find that the compiler feels like an annoyance" is because their first exposure to Java / C++ is in school where they have an assignment due for tonight and the compiler won't stop banging pages of errors about std::__1::basic_string<char, std::__1::allocator<char>> and what the fuck is that shit I just want to make games !!11!1!

In contrast Haskell is often self-taught which gives a very different set of incentives and motivations.

As a mostly C++ programmer, making sure that I get compiler errors as often as possible by encoding most preconditions in the type system is one of the most important parts of my job, and it makes the language very easy to use when you have an IDE that lets you click on an error and go to the right place in the code.


Annoyance about C++ errors isn't only about the error occurring. With me, it is predominantly about the utter unusability of the error messages. C++ has postprocessors you can use to get your 20-page STL errors down to a few lines, just by reversing the expansion the compiler did, to show a mere mortal something that you might recognize as your code instead of template-Cthulhu.

Haskell has such situations as well, but usually far less verbose. Failing to typecheck because you wrote down something incompatible or uninferable still sucks. But far less than in C++.


In contrast, Rust's compiler sometimes gives suggestions for changes which can often just be copied.

I'm always puzzled by people who see this as somehow a good thing.

If the Rust compiler can figure out what the type should be, why doesn't it just do the cross-function inference, and leave the complicated nested implications which only obscure the intent and effect out of it?

If having the programmer specify types is an important check on the correctness of the code that is written, how is blindly copying some 60+ character type specification string from an error message, without understanding it, going to help demonstrate correctness? All it does is make two sections "consistent". It isn't something the programmer understands or specifies as a type check.


> why doesn't it just do the cross-function inference

It could! This is an explicit design choice. There are a few different reasons. They're all sort of connected...

In general, Rust takes the position that the type signature is the contract. If you inferred the types on function signatures, changing the body of your function could change the signature, which means that breaking changes are harder to detect. It also leads to "spooky action at a distance" errors; I could change a line of code in function A, but then the compiler complains about the body of some unrelated code in a totally different part of the codebase, because that changed the signature of function A, which changed the signature of function B, which is called in function C. My error shows C is wrong, but I made a mistake in the body of A. That's confusing. Much nicer to say "Hey you said the signature of A is X but the body is Y, something is wrong here."
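
To make that concrete with a sketch (in Haskell, since it actually lets you omit top-level signatures; the names are made up):

    -- no signature, so GHC infers the most general type: Num a => a -> a
    double x = x + x

    -- a caller far away in the codebase
    useIt :: Int -> Int
    useIt n = double n

    -- now change only the body of double:
    --   double x = x ++ x    -- inferred type silently becomes [a] -> [a]
    -- and the error shows up at useIt, far from the edit that caused it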

I am gonna handwave this one slightly because I don't fully remember all of the details, but full program inference and subtyping is undecidable. Rust doesn't have subtyping in general for this and other reasons, but lifetimes do have subtyping. I am sure this would get ugly.

Speaking of getting ugly, Rust is already criticized often for compile times. Full program inference would make this much, much worse. Again with that changing signatures issue, cascading signature change would cause even more of your program to need to be recompiled, which means that bad error message is gonna take even longer to appear in the first place.

I think there might be more but those are the biggest ones off the top of my head.


> I am gonna handwave this one slightly because I don't fully remember all of the details, but full program inference and subtyping is undecidable. Rust doesn't have subtyping in general for this and other reasons, but lifetimes do have subtyping. I am sure this would get ugly.

Correct. Haskell is the only language I know of with globally decidable type inference, and it uses a Hindley-Milner method similar to Rust's... but no doubt some of Rust's language features can break global inference. In Haskell, many common language extensions can also break global inference.

I think if Haskell were designed today they probably wouldn't pick global inference as a goal; Haskell "best practice" annotates function boundaries in the same way that Rust enforces.


SML and OCaml also have global inference, and in fact Haskell initially got it from there.

A lot of the "weirder" parts of Haskell are there because early on Haskell was pretty much "LazyML" and then it started growing into something different.


Because the computer being right 90% of the time when something is ambiguous doesn't preclude the programmer from understanding it (while still providing them a best guess at their intent for those times they don't), but DOES mean that the program doesn't assume the wrong thing the remaining 10% of the time.

I am not sure where you got the idea that Rust error message suggestions lead to blindly copying 60+ character type specifications. They tend to be much more localized and understandable in my experience.

I got the idea by following the Rust tutorials and then making 'simple' programs, seeing the errors, going to Rust resources and getting the advice 'just cut and paste the expected type'. The expected types generally had 6-8 ':'s, and three to four deep nested type specifiers.

If a significant part of those types was just something like `std::collections::` or similar then I'm not sure I see the problem.

Suggestions should probably trim redundant prefixes like that, but recognizing standard library namespaces shouldn't be a big obstacle to understanding either.


I haven't been back to rust since, so I don't have the specifics. But it was clear to me that this was not a helpful way to program.

It was also crystal clear that, like C++, Rust puts many barriers to true abstraction. You have to know many, many details of how a specific type is implemented, sometimes several levels deep, to correctly use it at a high level. The cognitive overhead is enormous.


In Rust's target domain(s), those details often turn out to be quite relevant.

Hiding them basically amounts to targeting a different domain. This is very different from being irrelevant and noisy.


That's not entirely true. It gives suggestions in the cases where the C++ compiler does too. There are more than a few very cryptic errors you can encounter with Rust. I like it though, just needs more work.

Depends on the compiler. In my very limited experience I've found that Clang is far superior to GCC in this matter (but rustc is better still, apart from iterator errors)

I am curious about which GCC/G++ version you have in mind. Clang definitely took the lead, but GCC caught up, and I prefer GCC's errors to Clang's. But even I am behind the bleeding edge quite a bit, so I'm not sure how things stand now.

It's been a while since I used C++ but I have never seen suggestions of the same quality that the rust compiler produces. Do you have a link that shows the error messages you're talking about?

For sure though, not every error has a great message and some can be cryptic. But those cases are relatively rare these days IMO


I don't agree here. As someone who's only ever dabbled with Haskell, I found its error-messages pretty cryptic, no better than C++. Perhaps they make more sense to someone with significant Haskell experience.

For what it's worth, I believe modern C++ compilers give much better error messages than older ones.


Haskell error messages are far less verbose, but I find them hard to read. What makes it worse is that often the error message involves a lot of deeply nested types that were hidden from me before by library writers using an alias. So I’ve had a lot of “where did that come from?” moments.

I had this problem when being onboarded into a TypeScript codebase recently. Unfortunately I don't think there's a technical solution to it: the type system simply won't stop you from making types that are way too complicated (unless you use a language like Go where the type system is intentionally lobotomized, which obviously comes with its own problems).

We're used to avoiding complexity when it comes to logic, but maybe there's less awareness when it comes to types. Maybe what we need is to have a bigger conversation around that so people realize it's something that needs to be on their radar.


The thing is, these types aren't usually complex in the way that logic is. Complex logic generally means lots of loops and conditions and such, maybe shared mutable state. Linear code that just calls a series of independent functions one after another is not generally considered complex, even though it may ultimately result in the execution of lots of code. What generally leads to these confusing errors is just simple types composed end-to-end. The problem is that while compilers are good at picking out the specific part of a function that's causing problems, they generally can't do the same thing with types, so it's kind of like if every time you got an exception, the entire module source was pinpointed as the problem.

TypeScript needs some work on displaying type error messages; often the last line is all that needs to be read.

Type systems are really good for detecting contradictions; they're much less good at figuring out which piece should be different. For any given type error, there may be half a dozen different changes to the code that would resolve it in different ways. Type systems often have no real way of guessing the user-intent there. There are loose heuristics, like... an explicit return type is more likely to be intentional (and therefore correct) than the type of the local value that's being returned. But I'm not aware of any formalism around this "ranking" of which types are most likely to be unintentional. Maybe we need one.

The only immediate solution I can see is to keep types simple enough that the user can fit the entire relevant type-space in their head (and in the IDE dialog!), so that they themselves can determine which part is actually "wrong" (as opposed to just contradictory).
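
A minimal sketch of that ambiguity (in Haskell here, but any checker has the same problem); this deliberately does not compile:

    -- GHC reports something like: Couldn't match type 'Int' with '[Char]'
    f :: String
    f = length "hello"

Is the fix to change the signature to Int, or to wrap the body in 'show'? Both resolve the contradiction; the checker has no way to rank which one was intended.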


Not only do modern compilers give better error messages (with room for improvement still), it is possible to make use of enable_if and if constexpr/static_assert to give proper error messages for templates, and when C++20 gets widespread enough, concepts.

Modern clang (usually, there are edge cases) gives pretty good error messages overall. And like you point out, if you use constexpr you get even better messages. Hell, I watched some cppcon videos over the weekend and learned that clang can even detect out-of-scope access of temporaries when in a constexpr (but not otherwise, sadly), and also that in many cases, a lot more of a program can probably be constexpr than we might think.

The talks were by Jason Turner, who has an ARM emulator implemented entirely as constexprs and a test suite that runs at compile time (so if it compiled, the tests passed). Obviously for actually interacting with it, it's not running at compile time, but the logic has the ability to run at compile time, which is pretty cool.


Jason Turner has very cool talks, check the C64 one if you still haven't watched it.

I haven’t watched that one yet, thanks for the recommendation! I’ll give it a look tonight.

I personally call it "old school" vs "new school".

"Old school" is asically the programming languages that originated in the 70s-80-90s. An incorrect but an illustrative way to describe error messages for them is "The programmer needs to suffer". They are any combination of cryptic, terse, complex, exposing internal machinery of the compilers and linkers etc. There are many reasons for this: computers were not powerful enough to afford better code analysis, parsing, backtracking etc; the users of the tools also knew the tools and could tinker with them etc.

"New school" is from late 2000s on. I usually say it started with Elm. Clear messages pinpointing the exact problem, solutions to problems inside error messages, error clarity as one of the priorities in language/compiler/tool design.


Some of it was caused by too much reliance on tools like yacc.

Category grammars were much better but more resource-demanding, just like building proper ASTs instead of compiling as one goes.

Ironically, there were already some attempts at Smalltalk-like tooling for C++ back then, but again, as you mention, too many resource constraints.


Actually, the reason why I found static typing annoying in the past (and why I felt more productive in Python) was that the types were really low level (missing basic things like tuples) and there was a lack of type inference. You have to repeat the type information, a lot. And you also have to declare a lot of intermediate data structures.

In Python, this became easier and one could focus on the data transformations, thinking about the code a level higher.

But then I learned a bit of Haskell 5 years ago, and with type inference, this problem goes away. So it convinced me back to the benefits of static typing. (Although I still feel the most productive in Python; its library APIs are IMHO unmatched in any language. But Haskell is catching up.)
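
For instance (a small sketch), nothing below is annotated, yet it's all statically checked:

    pairUp xs ys = zip xs ys          -- GHC infers: [a] -> [b] -> [(a, b)]

    keepTrues = map fst . filter snd  -- GHC infers: [(a, Bool)] -> [a]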


I've joined several companies, and getting into a Python code base was the most difficult one, because of the lack of typed method parameters. It was easier with Java and better still with Scala - same for Typescript vs. Javascript. I would be interested, is your Python experience green field or joining a large project? (just wanting to know, not implying anything on your part).

I had the exact same experience: being a new employee at companies A and B, with large Python and Scala codebases respectively. At company B it was far easier to get on board and start writing code, and it has remained easy. Trying to make sense of Python was truly demoralizing, and a significant part of why I left company A.

> getting into a Python code base was the most difficult one, because of the lack of typed method parameters

Yup, this is probably my main issue with Python. I love writing it when I'm working on personal projects, having to read massive Python code bases at work feels unnecessarily tedious due to how much work you have to do sometimes just to find out the type of something.


It was from experience on my personal projects, circa 20 years ago. That's when I started using Python, and it really felt more powerful than C++ (and Java) back then. Then around 2008 I became interested in Common Lisp, and later I made the choice to learn Haskell instead of Clojure. It took me a while before actually getting where the improvement in Haskell and functional programming is. This was all for my own personal projects, which are small programs.

I would still recommend against using Python on a big project. Funny thing: I remember talking to my boss (a mainframe programmer) in 2010, when I solved something with a script in Python, saying that I wouldn't use it in production, but it's great for small things. Fast-forward to 2020, and plenty of people use it in products. Maybe Haskell will be the same; you also have many people today saying "well, this is good for experiments but I wouldn't write a product in it".


I bounced off of Python a few times before finally taking to it (at a time when I had no choice). I honestly believe that this situation has improved since Python 3.5 or so, purely on the basis of improved standards of documentation, including many more libraries adding type hints to their public interfaces.

I've recently been back to Clojure, and it's the same old aggravation again. It seems that, oftentimes, the only way to know what arguments a function is prepared to accept is to already know what arguments a function is prepared to accept.

I don't want to come across as being too down on dynamic typing - I'm currently in the process of trying to get my company over to Python by any means necessary. What I really want to challenge is the idea, popular in many dynamic typing circles, that static types just get in the way. They can also serve to communicate essential information. If you aren't communicating that information through type annotations, then it's absolutely essential that you do it by some other means.


> I don't want to come across as being too down on dynamic typing - I'm currently in the process of trying to get my company over to Python by any means necessary.

Curious, from what and why? ML?


I worked on a large-ish C++/Python codebase and touching the Python parts was always extremely frustrating for me.

The lack of .h files alone is a huge grievance for me - I had to scroll past definitions even to know names and arities of class methods, read constructors to know what's in class members.

SciPy was nice enough for turning data into a picture, though.


> The lack of .h files alone is a huge grievance for me

That's an interesting point... Besides C and its derivatives, and OCaml, do other languages have separate definition files? It seems like newer languages, even statically typed, normally don't.

I suspect the reason is that you have to duplicate all definitions, which seems like rote work. It also feels less necessary with IDE tooling: IDEs I know have a view for all definitions in a file.


They don't, but, at least in object-oriented languages, much of overall experience is easy enough to replicate with formal interfaces and coding standards.

I thought the nice thing about Modula-2 was that you could browse through interface files to understand some code and then go into implementations.

It's true, but there are so many ways to skin that cat. Smalltalk, for example, gives an even more fluid way to browse at a high level and then drill in, and it doesn't (necessarily) have source code files in the first place.

> getting into a Python code base was the most difficult one, because of the lack of typed method parameters

One place I worked used to be a Python shop, but had migrated most of its services to Java. Chatting with one of the engineering leads for a large Python system that had a lot of business logic, he said where Python actually fails to scale is lines of code and developers because lack of types makes it harder to reason about and harder to make changes safely. This obviously changed now that type annotations are a thing.


For new python work that might grow, I highly recommend pytype annotations. It’s not perfect but it’s as close to the best of both worlds that I have ever seen in production.

There are controlled studies supporting the idea that function parameters not having types slows down productivity.

It would be helpful if you linked them ...

I appreciate that you wrote that you "feel" more productive with python. I 100% agree with this feeling at most scales of code size, and the feeling matches reality at the small scale. However, I've found that this feeling of productivity doesn't match reality in the large. I use Haskell for programming in the large despite this feeling of unproductivity, because in fact, I am way more productive.

Just to clarify, I agree that Haskell can be more productive than Python; I was talking about the productivity of Python (or, for that matter, Lisp) compared to C-like languages of the era. But with Haskell's type inference, Python's advantage is lost (if we forget, for a moment, Python's extremely well-designed standard library with its focus on convenience).

> in the large

At what point do you feel like this line is crossed? "In the large" can mean different things to different people.


100k+ lines

> the types are really low level (missing basic things like tuples) and lack of type inference

How long ago was this? C++ has officially had tuples and type inference for ten+ years now - gcc 4.4 had them in 2009.


C++ cannot have non-local type inference due to, you know, the object-oriented part of it.

This means that you cannot say something like this:

  auto sepulka;
  auto bubuka = zagizuka(sepulka);
Because if zagizuka's parameter is a structure or a class, you have a selection of parents to choose from. Conversely, for bubuka you have a selection of descendants of the result type of zagizuka(), each having its own copy or assignment constructor.

[1] http://lucacardelli.name/Talks/2007-08-02%20An%20Accidental%...

[2] https://en.wikipedia.org/wiki/Intuitionistic_type_theory#Mar...

[1] shows how hard it is to make right type system with inheritance. I believe these slides mention 25 years of collaborative effort. Compare that to [2] where it took 8 years to design intuitionistic type theory (1971-1979) by mostly single person.
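
By contrast, a Haskell rendition of the same example (a sketch; the signature for zagizuka is invented) has exactly one answer, because unification has no parents or descendants to choose between:

    zagizuka :: [Int] -> Int     -- invented for the example
    zagizuka = sum

    sepulka :: [Int]
    sepulka = [1, 2, 3]

    bubuka = zagizuka sepulka    -- inferred: bubuka :: Int, no ambiguity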


C++ doesn't have type inference at all, which I understand requires constraint solving.

It has a much simpler type deduction system where the type of an object is deduced from its initializing expression, i.e. deduction always flows in one direction.

It is nowhere as powerful, but it does cover a lot of use cases.

One advantage (in addition to the ease of implementation) is that, except for circular definitions, there are no undecidable cases and it is not necessary to restrict the type system to prevent them.


Not that I'm advocating it per se, but couldn't you deduce the type based on what `zagizuka()` does with `sepulka`?

for example

  def sepulka(zagizuga)
    zagizuga.doSomething()
    zagizuga.doSomethingElse()
would infer the type of zagizuga as some object that implements the two methods `doSomething()` and `doSomethingElse()`... I think that should be doable (and possibly extremely slow), right?

Maybe I missed something...


Yes, it is doable, but what if there are several memory-layout-incompatible classes which implement both methods?

E.g., A implements a virtual doSomething(), and B and C inherit from A, add some different fields, and both implement doSomethingElse(), which they should overload for their inheritance from class Z.


Good question, I guess it depends on the language... multiple inheritance without namespacing would result in either method getting chosen randomly, or a build error...

For example, in Swift you can't even inherit from two protocols that have default implementations... and I think in C++ you also can't call the method without specifying the namespace...

So, I suppose, if you wanted to go all the way, you could even do namespace inference:

  def sepulka(zagizuga)
    zagizuga::Something.doSomething()
    zagizuga.doSomethingElse()
so zagizuga is inferred to be some type that inherits from the `Something` namespace and expects that namespace to define a `doSomething()` function, in addition to providing `doSomethingElse()`

though that seems like it'd get a bit fragile IRL, maybe...


There's also the culture around the language to fold in.

A culture of writing code assuming inference and structural typing is quite different from those features merely being available.


I have zero problems using type inference, tuples, etc. in my code. Other developers I deal with who use C++ have no problems using those "novel" concepts either. So I am completely at a loss about what type of culture you are talking about here. It looks like a grasping-at-straws type of argument to me.

I'm talking about all the libraries and books written from the language's development up until the early '10s; and all of those teams, libraries, and codebases which are legacy.

Haskell has never had a decades-long history of 'compiler-oriented programming', ie., excessive declarations, and so on.

The idea that C++ has a Haskellish culture is patently absurd, even if the vanguard regards itself as tending in that direction.


I am in no way implying that "C++ has a Haskellish culture". Nor would I consider that an advantage. All I said is that modern C++ programmers have no problems using the concepts. There are plenty of them used in gobs of libraries as well. Sure, old libraries don't have them, but so what?

So, OCaml or something?

Or has Haskell added structural typing?


Haskell is structurally typed...

Hmm, what do you mean? Haskell is generally considered nominally typed (or rather types introduced by its newtype and data declarations are ...). "Structural typing" typically refers to things like polymorphic row types and polymorphic variants.
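
A tiny sketch of the nominal behavior:

    newtype Meters = Meters Double
    newtype Feet   = Feet Double

    -- structurally identical, nominally distinct: using a Feet value
    -- where Meters is expected is a compile error, which is the point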

Sure, my mistake. I meant something looser.

Only that the types can be analysed structurally (i.e., pattern matched).

In C++, etc., there's a "radical nominalism" in which the type is very opaque, i.e., encapsulated.


It is complicated. Templates do allow some form of structural typing and the new-in-C++17 structured bindings do allow for decomposing and inspecting types (although this being C++ it is kind of awkward). Structured bindings are expected to evolve into full pattern matching in the future.

> You have to repeat the type information, a lot.

Nope you don't, that's what typedefs are for. They're underrated for sure though. People don't use them nearly as much as they should. They're incredibly valuable for avoiding precisely this problem.


That doesn't stop you having to type e.g.

String foo = "bar"; String baz = foo;

The `String`s can be completely avoided in languages with type inference because it's obvious that a string literal is a string.


I thought the complaint was about the logic duplication, not the extra keystrokes. If you want type inference you already have auto. If you want to minimize your keystrokes, you're using the wrong language to begin with, whether there's type inference or not. C++ is designed for writing software robustly, not quickly. (<-- This is not a trivial or obvious statement btw. It took me several years to grasp this. And I viewed C++ from an entirely different perspective when it finally sunk in for me that I would appreciate C++ much more if I decided to make minimizing keystrokes a non-goal.)

> If you want type inference you already have auto

or auto&&. Did you really intend to make a copy?


You probably did intend it to be a copy if you're binding it to a variable and need it to be non-const (like in the example)!

Ah, but you knew that because that's how the code was written!

If it was instead

    String foo = "bar";
    auto baz = foo;
you don't know for sure. But the code compiles, so it's obviously ok!

No, I'm saying even in that case you know it was intended to be a copy. If you wanted that to be a reference then you'd either (a) just do the obvious thing which is to just use the original variable name instead of creating a new variable out of the blue for no reason, (b) leave a comment explaining why you're not doing the aforementioned obvious thing, or (c) use a self-explanatory variable name to provide the explanation instead of a comment.

Is it really that painful to write "String" each time? You spend at least a fraction of a second anyway to verify that you're writing the right thing, to reconsider whether you should use an object or constant or refactor the function to work with a Boolean instead of a naked string; why is writing out the type such a big deal every time this topic comes up?

I remember my first attempts at programming and being annoyed that I can't add a string and an int; ever since that little bit of housekeeping of using types made sense to me and I can clearly see how it eliminates entire classes of errors.


> Is it really that painful to write "String" each time?

I find it more painful to read code that has too many type annotations. I also find it painful to read code that has too few, so I'd argue there's a bit of an art to it.

But languages that have type inference but allow type annotations at least allow you to try to hit that balance.


> I remember my first attempts at programming and being annoyed that I can't add a string and an int; ever since that little bit of housekeeping of using types made sense to me and I can clearly see how it eliminates entire classes of errors.

Type inference doesn't make these errors go away.

And about your other point, it's unfair to look at just a simple case of writing "string" or not as the only thing inference provides. Although I'd argue that leaving out types where possible helps readability-- it's really the more elaborate cases or intermediate steps during a longer transformation that inference helps with. Not to mention the fact that inference in closures is also really nice.


To me it's not the trivial cases like this that make type inference useful. It's when you get longer types like `Arc<Mutex<HashMap<String, String>>>`. Granted, that could be solved with a `type` declaration (or `typedef` in C++) but it's still convenient to be able to say: `let mut x = Arc::new(Mutex::new(HashMap::new()));` and let the compiler figure out the rest based on usage.

Java now has (limited) type inference, var. As does C++, auto. They're limited but they remove a lot of tedium.

And it's my experience that that is only a benefit to the person who wrote the code, and only for a short time.

Generally, I prefer being able to read a line of code and understanding exactly what it does. If I need an IDE and have to repeatedly try to find the definition of something then, in my opinion, that's wasting my time.

C++'s 'auto' is really useful but it's over-used IMO. I think that there's a belief that if you're not using 'auto' everywhere then you're not writing 'modern' C++. Just because your code compiles doesn't necessarily mean it's correct.


+1 for auto being overused. I always felt I was shouting into the void (hah) by saying the same thing... it's nice to see someone else agrees.

I find that types can reduce readability as well as enhance it. They add noise and make it harder to concentrate on the variable names which are often much more important than the types which are often (but certainly not always) obvious from context.

Interesting point.

I'd never have believed it myself, but find myself using acronyms instead of variable names when the type allows it.

    void foo(MyType mt, const MyOtherType& mot);
It's the variable names that are the noise, types are everything. And no, it's not Hungarian notation either in case anyone suggests it!

However, it maybe doesn't work that well with things like class member names. YMMV


I guess it partly depends on how varied your types are. In some domains you can find yourself working with 10 variables that are all strings, or all floats/integers. At that point the type isn't that helpful for distinguishing which variable is which.

I should have added that fairly liberal use of type aliases also helps.

    using B64String = std::string;

    B64String encode(const std::string& text);
    std::string decode(const B64String b64);

Code that specifies types instead of using auto is, barring compiler bugs, usually less correct. The compiler knows better than you what the type really is.

Shifting topics a bit, typedefs don't allow me to write generic code. Templates do, but templates bring in their own problems, in addition to not being expressive in the right ways: I can have an array of T, but I can't specify that T is Numeric?

The fact C++ doesn't have Numeric but instead has int and long and unsigned and long long and float and double all off on their own is another problem: The compiler knows enough about them to have complex promotion rules but doesn't know enough to allow me to refer to all of them under one name in my code.


You're asking for the impossible. What you want is precisely what templates are, but you also want them to not be "templates" for... some bizarre reason.

> I can have an array of T, but I can't specify that T is Numeric?

Sure you can. If you have C++20 concepts:

  template<class T>
  concept Numeric = std::integral<T> || std::floating_point<T>;
  
  template<Numeric T>
  T twice(T x)
  { return x + x; }
Or if you're on a C++11 compiler:

  template<class T>
  typename std::enable_if<
    std::is_arithmetic<T>::value,
    T>::type twice(T x)
  { return x + x; }

the C++20 version can be simplified a bit to:

    Numeric auto twice(Numeric auto x)
    { return x + x; }

Probably not a good idea since the caller won't know what the return type is at that point, and the return type would become dependent on the implementation, which breaks function abstraction.

And imagine what would happen when you get a few more 'auto' variables in the return expression. Suddenly your return type will depend on the implementation of your callees. And the code can then quickly become impossible to understand.

auto is overused.


> Suddenly your return type will depend on the implementation of your callees

Why would that be a problem ? It's super common in templates and has never troubled me the least


It might be common practice but it shouldn't be. There are lots of reasons this is a bad idea; here's just a sampling:

1. "My return type is whatever I happen to return" circumvents the ability of the type checker to ensure correctness.

2. More generally, the purpose of a specification (a function declaration in this case) is to declare what is required of a compliant implementation, and to provide a way to check the validity of that implementation. But when you make the types all become auto-deduced, you're basically reducing the specification to a ~ shoulder shrug "it does whatever it does" ~.

3. Moreover, as I alluded to in the comment, it quickly becomes near-impossible to meaningfully separate the definition from the declaration, whether that's because you want to hide it or because you want to compile it separately. Simply put, you lose modularity. It seems like a minor thing when (as in the example) the return value doesn't depend on types inferred from other callees' return values, but as soon as that ceases to be true, you suddenly tie together the implementations of multiple functions. At that point, your functions lose much of their power to abstract away anything, since as soon as you change the return expression for one function, it has the potential to break code (up to and including causing compilation errors) in the entire chain of callers. (!)

4. Templates end up getting re-instantiated far more often than they need to be (which can slow down both the compilation and the runtime efficiency). You almost certainly don't want '0' and '(size_t)0' to result in duplicate instantiations when dealing with sizes, for instance.

5. Issue #4 can also result in warnings/errors/bugs, since now you have a function that returns a different concrete type than you likely intended, which can result in everything from spurious warnings (signed/unsigned casts, for instance) to actual bugs (later truncation of other variables whose types were inferred incorrectly as a result).

6. The code becomes difficult for a human to read too. You now no longer have any idea what types some variables are supposed to be. Not only does this hamper your ability to cross-check the correctness of the implementation itself (just as with the declarations, in #1) but unless your function is trivial, this quickly makes it harder to even understand what the code is doing in the first place, never mind what it's supposed to do.

7. Proxy types become impossible to implement, since they won't undergo the intended conversions anymore.

All this just to reduce keystrokes might be a common trade-off, but a poor one. I can come up with more reasons, but hopefully this gets the point across.


Bah. Leave it to the latest versions of C++ to show me up.

That's the point though, isn't it? To improve on areas that were lacking in older versions. It can be hard to keep up, though!

It's not just the latest version though? C++11 could already do what you wanted.

edit: I see dataflow beat me to it. I'll leave this here anyway.

> I can have an array of T, but I can't specify that T is Numeric?

This is what type-traits and 'concepts' are for, right?

> The compiler knows enough about them to have complex promotion rules but doesn't know enough to allow me to refer to all of them under one name in my code.

This is what std::is_integral gives you.

https://en.cppreference.com/w/cpp/types/is_integral


> the reason they "find that the compiler feels like an annoyance" is because their first exposure to Java / C++ is in school where they have an assignment due for tonight and the compiler won't stop banging pages of errors about std::__1::basic_string<char, std::__1::allocator<char>> and what the fuck is that shit I just want to make games !!11!1!

Well, I can't imagine how much more annoyed they'd be when using an interpreted language which lets the code run just fine but then fails at runtime in mysterious and subtle ways, requiring hours of manually scanning through code and print statements, when the compiler would have caught a decent subset of those errors with helpful messages about the exact line they need to fix.


I feel like it probably isn’t worthwhile to litigate the merits of static typing every time there is a HN post that’s vaguely adjacent to the topic.

For the amount people care about it, there isn’t much evidence in either direction. And most studies that do exist are limited to small programs typically written by novices. Yale’s Singapore campus is going to be running two instances of the same course in parallel soon, one in Python and one in OCaml. Perhaps that will provide a datapoint about learning the languages, but maybe there will just be a lot of selection bias or library or environment or teacher differences. And how easy it is to learn a language probably isn’t the main datapoint to care about anyway.


> there isn’t much evidence in either direction

Oh, I think there is great evidence, such as: Dialyzer, or Ruby 3 and Python 3 shifting towards type signatures everywhere and gradual typing, or Racket's recent focus on Typed Racket. Oh, and the rise of TypeScript, of course.

I mean, I abandoned Python years ago, and I was quite surprised when I discovered Python people are adding type annotations everywhere.

Sure, gradual typing is not as strictly enforced as in statically typed languages, but it seems people agree that modularity and abstraction without type signatures is painful in sufficiently large programs.


What you have described is an anecdote rather than any study that attempts to be impartial, which is what I really meant when I wrote ‘evidence.’ I’m sorry for not being clear enough.

I don’t feel like this anecdote is evidence because I don’t think it’s inconsistent with the trend towards more static types over the last 5-10 years or so. For this anecdote to be convincing I would need to think that programming language design happens because of carefully thought out and researched decisions and quick feedback as good languages are used and bad languages are dropped, but I don’t believe this.


This is not an anecdote by definition, since these languages represent quite a huge market share (and most of the remaining languages are already statically typed).

Yes, this is not a rigorous empirical proof; it's an observation (i.e., unlike the anecdotal case, you can measure the market share: how many use Dialyzer or TypeScript, etc.). We can't simply ignore any observations that are not scientifically rigorous, otherwise the whole edifice of philosophy, or even some natural sciences, would simply perish.

I don't think you can just omit that. I'm a fan of Lakatos here: if I have some observation, I think one needs at least as convincing an observation, or a more rigorous one, to prove otherwise.

> because of carefully thought out and researched decisions

I think it's better evidence precisely because it's what language users are asking for, and what a large chunk of language users choose to use when they get a choice. This shows that quite a big share, if not the majority, of programmers value type annotations.


> I think it's a better evidence exactly because it's what language users are asking about, and what large chunk of language users choose to use when they got a choice. This shows that quite a big share if not majority of programmers value type annotations.

To be clear, this is where I disagree. I don’t want to claim that people don’t think hard about language design or that users aren’t asking for these features as I think both of those statements are true. But I strongly disagree that languages doing things (and those languages being popular) is good evidence that those things are good.

I think a lot of language design is driven by fashion (i.e., keeping up with what similar languages are doing) and I claim that this is a more convincing explanation for Python having some gradual typing.

I don't think a large number of people is moving to/from Python over gradual typing in aggregate, and I don't think it's happening on the margin either. I think any wise decisions about languages are more likely to be driven by practical considerations (what do people know, what are they used to, what can people be hired for, what libraries are available, what platforms are supported, what performance is required, and so on).

Just because Python has a large market share, it doesn't mean its users are supporters of gradual typing; it just means that they thought Python was a good idea when they first started using it and they haven't justified the cost of changing to something else. The users didn't choose gradual typing, they just chose "upgrade the language to the next version".

Even if I agreed with your claim that many users asked for gradual typing, I don’t agree that in aggregate users ask for things that will be good for them or good in general. Maybe users are just trying to figure out a way to say “we want our programs to be less buggy” and think this might help. I think there are better examples in programming language design of what can happen if you keep giving users what they are asking for.


It would if python actually encouraged runtime coding.

In my opinion dynamic programmers need to embrace the runtime environment and use it as part of their development methodology. Unfortunately most popular dynamic languages have woeful runtime environments.


Yep, I was going to say the same thing. The biggest issue with dynamic vs static is maybe that people are missing the point:

Dynamic languages (like Smalltalk) were designed for live coding where the results are immediate. When that is broken and coding is done in a "dead" environment, of course dynamic typing will cause problems that aren't caught until runtime... but the original idea was that one shouldn't have had to wait until runtime in the first place!


So, Common Lisp and Smalltalk?

Any others?


I'd imagine some Scheme environments are good as well, like Dr Racket.

Unfortunately the same amount of effort that has been applied to static type checking has not been applied to dynamic language environments.

It would have been very interesting to see what could be possible with more advancement.


Factor for sure

> Well I can't imagine how much more annoyed they'd be when using an interpreted language which lets the code run just fine but then fails at runtime in mysterious and subtle ways requiring hours of manually scanning though code and print statements when the compiler would have caught a decent subset of those errors with helpful messages about the exact line they need to fix.

You cannot get mad at errors you don't know about.

Also letting the user find and report the error allows you to mark your tickets "done" and move on, which makes management happy.


In my experience, you quickly develop an intuition for where things are going wrong with interpreted languages.

Ex: "Oh, cannot access property x of undefined? Something must be going wrong in y object"

Python definitely feels a lot more helpful than JS though. Can't speak for other interpreted languages like Ruby.


The thing is, though, that you mostly only get errors for code that is actually executed. So your program is only fully type checked when all code paths are executed. In the case of Python one can ameliorate this situation a bit by using mypy. At my job I very often see code being broken because, e.g., the signature of a function was changed but not in all places, and so on. Now somebody will say that the IDE can solve that, but these colleagues who are regularly breaking the code are actually using IDEs and it somehow still does not help. I have come to think that code that is not compiled and/or otherwise type checked is just not very serious, and certainly not worthy of production environments.
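
A toy Haskell sketch of the contrast (this deliberately fails to compile): the bad branch may never execute under any test run, yet it is still rejected:

    report :: Bool -> String
    report verbose =
      if verbose
        then "all good"
        else 42 + "oops"   -- dead in practice, still a compile-time error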

> your program is only fully type checked when all code paths are executed.

The solution to this is to make sure that, during testing, all code paths are executed. And that’s something you should be doing anyway, to find bugs that aren’t type errors.


I've worked with Python codebases that had very extensive test suites and I've still encountered many cases of bugs slipping through that a static type check would have caught. It's really hard to make sure tests are fully comprehensive. About the best you can do is generative property-based tests, but then the feedback loop is not great, as it may take minutes, hours, days or weeks for a particular problem case to be generated, while the static check would have caught it at compile time or even interactively in your IDE.

I don't hate dynamic languages, but this is a pretty major weak spot for them, in my personal opinion, and one that's bitten me a number of times.


That is impossible with almost all non-trivial software. Testing proves only the presence of errors not their absence.

> Testing proves only the presence of errors not their absence.

I've never thought of it that way, but that totally makes sense.


You can think of the compiler for a statically typed language as doing exactly that, at compile time, for a subset of potential errors -- the type errors. Some people claim that they never make them, and they may well be correct. I do commit such errors, so the compiler is a friend, but a pedantic friend.

I am all for high test coverage. One should not underestimate the effort in that, though. Some time ago I covered some two thousand lines of code completely in tests. As in, all code paths, checking all side effects. Kind of an effort in the spirit of "Working Effectively with Legacy Code" by Michael Feathers. They were quite non-trivial and it took me about two months. Doing such a thing may not always be feasible. Also, if code is written by others, they may have covered fewer code paths in tests than one would have liked. A type checker will check all code paths, but the tests that your predecessor failed to write are not checking anything.

Having just spent an hour over VS Code Live Share with a student who's learning JavaScript, I have a pretty good idea.

But I've worked with people who saw all compiler errors as things of the devil and wanted to defer as much as possible to runtime.


faculty helps students debug their code?? where?

I'm in France, haha. Why wouldn't you help a student who asks you kindly?

Parent is probably referring to situations where faculty are too busy researching, writing grants, or just plain don't feel like it, and tell students something like

"go play with it"

"look it up"

"google it"

"read the fudging manual"

Of course, there are many great faculty who _do_ care greatly about teaching and always help students.


At my Uni we had students from higher years do volunteer time during lab sessions for lower year students. You would just wander around the lab helping random people who were stuck.

Sometimes there would be some professors there too to help you out.


Amen to that. I love dearly both C++ and Haskell, but I remember those times when I wanted to cry because the C++ error message broke the OS clipboard when I was trying to copy it to a text editor so that I could write a program to analyze it and find which "const" didn't match in the jungle of type names. I have never had that situation with Haskell.

Oh dear :o

> As a mostly C++ programmer making sure that I get compiler errors as often as possible by encoding most preconditions in the type system ..

When selecting a language for a recent project that needed to run correctly without extensive debugging (which would be hard to simulate: too many states and interactions), I had a couple of important criteria:

0) checked static typing

1) ADTs (that are reasonably easy to use, read and write)

2) pattern matching (no way I'll get all the if/else right)

3) reasonably easy to write static const (pure functional) code

4) memory-safe

I've considered Rust, but settled on Haskell, as I needed it fast and I know Haskell. While technically any Turing-complete language would work, I don't think C++ would be a fit for must-work code, even disregarding (4).

While I haven't used C++ in a while, it seems to me, encoding the constraints would be 3-10x as much code, or even more, with many checks/cracks left, and a lot of readability gone.

Clean and correct functional haskell code took a bit longer to write than say happy-path imperative python, but after fixing 2 or so bugs that manifested on pretty much the first (partial) use (like incorrect "<" vs. ">" and a bad constant), it has been running happily ever since. I didn't even bother simulating a full system configuration before a real-world customer acceptance test, because components worked on 1-2 inputs I tried, setting up a system would take a couple of hours and I couldn't think of reasonable failure scenarios. I haven't experienced similar correctness in other languages.
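
As a sketch of what criteria 1 and 2 look like in practice (made-up domain; with -Wall, GHC warns about any missed case):

    -- criterion 1: an ADT makes the possible states explicit
    data DoorState = Open | Closed | Locked

    -- criterion 2: pattern matching instead of if/else chains; the
    -- compiler checks that no constructor is left unhandled
    describe :: DoorState -> String
    describe Open   = "open"
    describe Closed = "closed"
    describe Locked = "locked"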


I learned both Haskell and Rust self-taught, and I still find the latter's type system a bit of a cage for its lack of higher-kinded types, frankly.

I know not much of Java, but my sentiments concerning C++ are even worse.

I do not regularly program in Haskell and far more often in Rust.
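
To illustrate what higher-kinded types buy (a minimal sketch): abstracting over the container itself, which stable Rust generics can't express:

    -- f ranges over type constructors (kind * -> *), not concrete types
    incrementAll :: Functor f => f Int -> f Int
    incrementAll = fmap (+ 1)

    -- incrementAll [1, 2, 3] == [2, 3, 4]
    -- incrementAll (Just 5)  == Just 6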


"I don't know much about this thing but I don't like it, and I know even less about this other thing and I like it even less!"

For what it's worth, C++ has HKTs in the form of template template parameters, making it possible to write, e.g., monad transformers, which cannot be done in Rust, last I checked. Now as for whether you'd actually want to do this in a production codebase...


Rust metadata (not just types) has a habit of getting in your way.

It is all for good reason: you can't ditch the GC and have control over the memory structure without the compiler complaining about details here and there. But fixing those string-versus-slice and iterator type mistakes is really annoying.


I find the compiler an annoyance in Haskell just as much as C++ - it just forces me to write more code (more liability) and leads to overly rigid type system designs, like modeling behavior with classes or traits/type classes. I’ve only found these ways of writing software to be universally worse than simple module-oriented programming, writing C-like code in languages like Python or Ruby and only selectively using C extensions for isolated cases where speed provably matters.

Compilers do not offer compensating benefits, like catching bugs or ensuring behavioral correctness, that justify all the extra rigidity, slowness, and especially liability of all the extra code (even in Haskell).


Strong point! As a non-CS engineering student learning C++ for the first time, the compiler all but turned me off from wanting to be a programmer. No one explained why the compiler was even there, it just seemed like an annoying hindrance stopping me from getting good grades on the homework.

Fast forward a decade and I'm evangelizing statically typed FP at conferences. The value of the compiler is redeemed after self-teaching and learning the "Whys".


I run into the same thing with Java.

Code that always upsets me is something like `Map<String, Object>` where a concrete type would work so much better (and faster).

Using a statically typed language, the most important thing you can do is USE IT. Let the type system save you from problems. Encode whatever you can in the types.

Bypassing it by casting always causes headaches.


"acts as an invaluable pair-programming buddy that gives instantaneous feedback during development."

This is the key bit. This is called static analysis; you don't need a type system in your language to do this, and you don't need to force doing it at compile time.

Most developers appear to conflate the two; uncoupling static analysis and type systems would benefit most workflows.


> you don't need to force doing it at compile time

What assembly instructions should the compiler emit if you write sum ["foo", "bar"] ?
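
There's no sensible answer, which is why GHC rejects it at compile time with an error along the lines of "No instance for (Num String) arising from a use of 'sum'":

    -- sum :: (Foldable t, Num a) => t a -> a, so this would need a
    -- Num instance for String, and none exists; deliberately rejected
    oops = sum ["foo", "bar"]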


Unfortunately, the Haskell ecosystem has been ruined (well, almost) by unnecessary, redundant abstractions and the narcissistic idiots who push them.

I recently tried to compile haskell-language-server and stack from sources. 157 and 168 (or something) dependencies, full of redundant esoteric bullshit, compat packages, lifted crap, etc. It is even worse than J2EE where it was the same redundant wrapping and indirection, but brain-dead straightforward verbose crap.

To use Haskell correctly, like the classic xmonad and similar projects, requires discipline, knowledge, and good taste for just-right abstractions, like the Go stdlib or the Scala 3 standard library.

Yes, it doubles development time, which must be spent on understanding anyway, but fast-food FP code, full of redundant abstractions, is the worst nightmare to maintain.


This article might be a bit overeager and overzealous, but how you use haskell is up to you -- don't use those underlying packages that disgust you if you don't like them. Haskell offers benefit at every level of abstraction. There are many ways to write Haskell and you do not need a bunch of the higher level stuff. 99% of the time you are just fine with the data modeling (simple Algebraic Data Types) and type classes, along with a cursory understanding of monads via "do" notation.
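
A sketch of that simple style: one ADT, pattern matching, and do notation, nothing fancier:

    data Greeting = Formal | Casual

    render :: Greeting -> String
    render Formal = "Good day"
    render Casual = "hey"

    main :: IO ()
    main = do
      name <- getLine
      putStrLn (render Casual ++ ", " ++ name)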

This comment reads like someone seeing the worst of J2EE and going back to C++. I'd characterize Haskell as having the type system that Java wishes it did.

Why are you trying to judge how Haskell should be written for your use case by looking at haskell-language-server, stack, and xmonad? Those are the domains of Haskell experts -- one is a language server, another is one of the pre-eminent build tools, and the third is a tiling window manager.... Are any of those your use case?

There are real problems with Haskell, and forcing you into complexity is not one of them -- a steep learning curve (for certain concepts), hard-to-debug space leaks, and a relatively small ecosystem are the biggest issues.


> how you use haskell is up to you -- don't use those underlying packages that disgust you if you don't like them

These kinds of arguments are particularly lazy. Of course, Haskell's ecosystem is not so large that it's trivial to find a well-maintained, high-quality version of a library that meets one's other criteria. Programmers of a particular language are at the mercy of that language's ecosystem.

This line of reasoning reminds me of how C++ programmers would deflect criticisms of problematic features by arguing that one could use only the features that one wanted (thereby effectually creating or curating their own sub-language) and only choosing dependencies that were equally written in that sub-language. So easy!


I think saying "packages" was an error on my part, because how this is different from the usual C++/other language example is that Haskell is an ML language, and gives you a lot of abstractions (rather than packages per se) to use. Generally using packages in Haskell is easy because you don't care what abstraction they were written in (almost always IO is there somewhere, or they offer a doTheThingIO function which is the lowest common denominator), and if they're completely functional libraries then it really, really doesn't matter.

The commenter was railing against the abstractions the codebases used, not the underlying packages, actually. I don't want to get into explaining it, but it is very easy to write production-ready simple Haskell, and also very easy to spend hours building abstractions in the type system (normally) to build yourself a straitjacket. OK, so if I explain it a little bit: there are at least four approaches that have surfaced in recent times (let's say the last 5 years) for structuring large effectful Haskell codebases (which is most software you'd want to write):

- Everything in IO (just do everything in the IO monad)

- monad stacks & transformers

- The ReaderT pattern

- Effects (free/freer, polysemy, fused-effects, etc)

All of these approaches have high quality libraries to support their use, but you could stay at that first one (everything in simple IO) and be very happy for a very long time.
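For concreteness, here's a minimal sketch of the ReaderT pattern -- the Env type, its field, and the greeting are hypothetical:

  import Control.Monad.IO.Class (liftIO)
  import Control.Monad.Reader (ReaderT, asks, runReaderT)

  data Env = Env { envGreeting :: String }

  type App a = ReaderT Env IO a

  greet :: App ()
  greet = do
    g <- asks envGreeting   -- read from the shared environment
    liftIO (putStrLn g)     -- plain IO, lifted into the stack

  main :: IO ()
  main = runReaderT greet (Env "hello")

The whole trick is that one environment value gets threaded to every handler for free, while the types stay almost as simple as plain IO.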

I'd argue that C++'s problematic features are of a different nature -- when one of them goes wrong you normally have a much more disastrous outcome (whether at runtime or when you're trying to grok the code). For example, compare C++'s templating system to Haskell's support for generics: one is infinitely safer to approach and easier to understand than the other for the simple case, due to how the languages are built (i.e. no inheritance). In Haskell it's more up to you to strangle yourself with the complexity -- the necessary abstractions are pretty simple (typeclasses, and monads are simple in use, if not in concept), but the unnecessary abstractions basically scale up to PhD level.

I will absolutely concede that Haskell does encourage you to reach for higher and higher levels of abstraction for diminishing returns. But I will take Haskell's abstractions over Java's abstractions any day of the week, even though Haskell's can be more inscrutable.


Well said; thanks for the explanation.

No problem, I am not an expert in the space, but these are just my feelings on Haskell.

Let's see if FoxHound is around in 1-3 years :)


> To use Haskell correctly, like the classic xmonad and similar projects, requires discipline, knowledge and good taste for just right abstractions

Hum... Knowledge and an acquired taste, yes. You'll need those. Discipline, no. Discipline is exactly what Haskell doesn't require.


No, sorry, it has very little to do with technology. There are at least a dozen languages you could have chosen, and that others have chosen for any given use case. Within some bounds, it makes very little difference.[0]

The reason you've chosen Haskell and, by the way, also the TLD ".systems", is that you've constructed your identity in such a way that "advanced language with a steep learning curve" is something that fits.

There's absolutely nothing wrong with that. It's a bit emotional, but those exist for a reason. If you consider that idea libellous, you can always cite PR motives for plausible deniability and point at this HN story as evidence.

[0]: Elm, by the way, strikes me as borderline with regards to the bounds of reasonableness, considering the state that community is in. As such, it's more evidence your left (right?) metaphorical hemisphere may have had a finger on the scale.


> The reason you've chosen Haskell and, by the way, also the TLD ".system", is that you've constructed your identity in such a way that "advanced language with a steep learning curve" is something that fits.

I think you're projecting too much on them. They found Haskell performant and are promoting it, I don't see any problem with it. How is any different from all the Rust evangelism HN sees all the time?


I think his point is that there are N languages that are performant and they could have probably chosen any of them to achieve their goal, so the choice is primarily aesthetic.

The article doesn't say they have exhausted all languages, just that it is their first choice. Not their only choice but first choice.

I really don't have a problem with their choice–I wasn't being ironic. It's perfectly acceptable to make "I like it" or "it works for us" choices.

I do believe, very very mildly, that there's a strain of thinking among the tech crowd that glorifies this Spock-like emotional detachment and I'm-so-rational mindset. Two issues, actually:

First, such a mindset is neither possible nor would it do much good. There are stroke victims who survive with full mental capabilities with regards to logical reasoning but entirely devoid of emotions. These patients can still ace the SAT, but they fail spectacularly at daily life. As it turns out, you just cannot decide on a doctor's appointment without emotions. They'll spend hours vacillating between two good choices. Emotions are incredibly well-tuned heuristics that cut down your mental load to manageable levels. As with any part of being human, they are sometimes ill-fitting for modern times: there's absolutely no reason to make me flinch when I spill hot coffee over my hand. But mostly they just work.

Second, it's slightly annoying when people deny that they are subject to emotions, and it gets up to Ryanair-levels of discomfort when they announce that it makes them superior to all those emotional social science majors, illogical politicians, women "throwing a fit", superficial designers etc. If I got a KDE theme every time someone accused Apple users of being blinded by eye candy, I'd still be left with only half of what Kubuntu ships.

But Rust is cool.


I completely agree about your thought about emotions and the Spock-like emotional detachment and the importance of emotions in decision making and every day life.

I disagree that this is related to the originally posted article about a company that has chosen Haskell as their first language. I do wonder how your responses would differ if they had chosen Elm originally, and perhaps both you and I should evaluate whether our emotions are why we even commented...


I think these are very keen observations of reality, and they're very well put.

I've long-admitted that my heuristics for choosing a tech stack are very similar to what you describe. I analyze the requirements and give each technology a pass/fail grade. Then, amongst those that pass, I simply choose the one that I find the coolest.


It’s believed (by some) that emotion is a necessary prerequisite for intelligence.

Something needs to be a driver for action, and emotions fit the bill nicely.


> There's at least a dozen languages you could have chosen, and that others have chosen for any given use case. within some bounds, it makes very little difference.

It's about quality of life and picking the right tool for the job. There are some problems I can solve in Haskell, that I simply could not solve in Java, it would be too hard and too much work. Java is a simple language and therefore it's much easier to reason about the performance and space usage. There are thus many problems where Java would be a better fit.


It's much easier to reason about performance and space usage in Java because it's a language with strict evaluation. Most programming languages use strict evaluation, including OCaml, F# and Scala.

Author here. Having written a lot of Haskell, I don't find lazy evaluation to be an issue nearly as often as it is a benefit. Yes, it can be difficult to reason about at times, but more often it ends up leading to simpler code. I would say that if you're consistently highly concerned with evaluation order, then yes, Haskell might not be the language for you.

But I would also say that you shouldn't be concerned with evaluation order when writing Haskell, in the same way you usually shouldn't be concerned with what the query optimizer is doing when writing SQL.


> because it's a language with strict evaluation

But that is not the only reason. Haskell does much more aggressive optimisations than Java (and Scala, OCaml, F#). A large part of Haskell's space-usage reasoning issues come from the combination of lazy evaluation and these aggressive optimisations. Java and Scala code can contain plenty of deferred evaluation too, for example iterators, but the compilers simply don't do (and cannot do) such aggressive optimisations. Of course, this also means that much pure functional Scala/Java code can be poorly performing.


I have no horse in this race. Reading the posted article I have a hard time understanding why you posted this comment. The article has reasoned arguments related to software development. Your comment is pure emotion which seems to be your main claim against the article.

I had the same feelings as the parent - this article did not make good arguments for using Haskell specifically in building production systems, it instead argues that Haskell is a good programming language yet many of the benefits listed can be found in other languages.

Even if Haskell was objectively the 'best' language, it'd likely be a poor choice for most teams simply due to familiarity and developer speed.

I'd be extremely hesitant to hire the services of this company entirely because they use Haskell and it'd be a nightmare to maintain after their contract.


I think the article was easy enough to pass by, as I agree it didn't make GOOD arguments specifically about building production systems. I also cut the author some slack, because an article addressing the entire development cycle all the way up to production systems would be quite heavy.

From my reading the article makes no claim that Haskell is the best language. The purpose as I read it is to explain why Haskell is their first choice while addressing an audience that has a passing knowledge of Haskell.

You being extremely hesitant to hire the services because of their technology stack is great and fine. There are many stacks and services I completely avoid in dealing with so I agree there.

"It'd be a nightmare to maintain after their contract" is something someone would likely say to any language that is not their preference.


Why don't you argue against the points made in favor of Haskell in the article? You can probably tell more about why they use Haskell that way. It's certainly a better indication than your ridiculous TLD divination.

> fmap renderPost postList

for(var post: postList) renderPost(post);

Or

postList.stream().map(this::renderPost).collect(Collectors.toList())


Why Haskell is our first choice for building production software systems: a rationalization of our excitement to get to write Haskell in production

Here, I've fixed the title


Would you say that you've engaged in good faith with the point the author was trying to make?

Eh, I kind of agree with the parent comment. The author didn't bring up any compelling points that couldn't be found in other modern languages (granted these likely borrowed from Haskell). As an outsider to Haskell I was hoping for some more concrete use cases for picking the language.

Honestly there are no compelling arguments for picking Haskell over other languages in the same domain.

Limited open-source to leverage, incredibly limited and costly hiring opportunities, not that great tooling and integrations.

Any upside you can sell from a pure programming point of view (there are some very valid ones) pale in comparison to the negatives it brings to your overall business.


I found the article's arguments incorrect -- what they describe as Haskell's features (and more) are easily available in other languages as well. Strictly enforcing functional style, on the other hand, looks to me like an anti-feature. From my long experience, strictly enforcing any particular paradigm/style in programming amounts to plugging round holes with square pegs.

I'm a partisan, of course, but I always found ocaml to be the practical choice for ML-style functional languages. All the functional type-y goodness without being shackled to it, you can let yourself off the hook if you need to, you can just write a dang loop if that's what needs doing, compiles down to native, just a good practical mix.

Other than Jane Street, nobody much uses it though, so that's that.


Absolutely

Good luck scaling this to an organization of 100+ engineers. You will soon learn the tradeoff between writing and reading code, and the stark realities of the dev hiring market and the thing called a learning curve.

They won't scale to 100+. From the looks of it, it's a boutique dev shop that likes doing Haskell, and they are going to stay that way, +/- a few people here and there depending on how successful their projects are.

This may or may not be the right business strategy; however, if everyone went with the notion of only using popular languages because it's easier to hire for them, by now we'd be using JS as a backend language. Oh, wait...


You're off by probably an order of magnitude. You can absolutely scale this organization to 100+ engineers. I've worked at a company with ~30 Haskell developers and I was involved in hiring most of them without lowering our hiring bar. If we really needed more it would have been quite easy to lower the bar a little and increase the numbers significantly. Here's another example of a company that has a large Haskell team:

https://iohk.io/en/team/#team=development

Now if you needed to hire 1000 developers, then you'd have more of a problem, and perhaps Haskell wouldn't be the right approach. But in my experience, Haskell engineers get a multiplier effect over the average non-Haskell engineer because the code is more concise and the average engineer skill is higher. I don't know what the multiplier is, but it's definitely non-zero and positive. This not only pays the obvious direct benefits, but also reduces the communication overhead of your team and the number of managers you need to hire.


Someone has to take the first step to solve the chicken and egg problem. If there are jobs requiring Haskell, it might get more users.

I write Haskell professionally, and I can confidently say that there are plenty of jobs. Some large tech companies (eg. Facebook, GitHub, Twitter), quite a few banks, a lot of consultancy companies, and plenty of random companies I'd never heard of.

It is quite difficult to get a first Haskell job though, because they mostly require production Haskell experience, so there's your chicken and egg problem.


Working with Haskell professionally sounds like a very interesting career path. Could you point to some resources to get better at the production Haskell skills that these companies are looking for? Perhaps gaining experience with projects which use Haskell in a similar way to the companies you mention could help with finding that first Haskell job.

Sure thing.

I'm not sure what your current level is, but I can give some general advice for people that happen upon this:

---

Haskellers are generally expected to understand most of the typeclassopedia (https://wiki.haskell.org/Typeclassopedia), don't worry about learning it all in one go. I had to read this page many times before I grokked most of it.

---

Avoid tutorials that overuse analogies. A Monad only adds one operation to Applicative:

  class Applicative m => Monad m where
    (>>=)  :: m a -> (a -> m b) -> m b
This reads as: `m` is a monad if, given an `m a`, and an `a -> m b`, you can construct an `m b`.
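For a concrete instance, this is (modulo some detail) how Maybe implements it:

  instance Monad Maybe where
    Nothing >>= _ = Nothing   -- failure short-circuits
    Just x  >>= f = f x       -- success feeds the value onward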

---

It's important to be really good at using Monads that support multiple effects, to create little DSLs. If I want a component of my program to support throwing errors, creating a log, and reading an environment, (all purely), I'd use something like this:

  type MyDSL
    = ReaderT Environment
        (WriterT [String]
          (Except ErrorType))
These are monad transformers from the mtl library.
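To run a computation in this stack you peel the layers off from the outside in; roughly (using runReaderT, runWriterT, and runExcept from the same libraries):

  runMyDSL :: Environment -> MyDSL a -> Either ErrorType (a, [String])
  runMyDSL env action =
    runExcept (runWriterT (runReaderT action env))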

Where I work we use free monads instead of monad transformers, but that's just an implementation detail, it's used the same as a transformer stack.

---

Create a cool project, Haskell people like languages. When I was interviewing I showed off a tiny lisp-like language implemented in Haskell (https://github.com/414owen/phage). This was my first non-trivial Haskell project so don't judge it too harshly.

---

Read Haskell Weekly (https://haskellweekly.news/newsletter.html). It's a great source of ideas and knowledge.

---

A lot of Haskell shops use, or are migrating towards using, nix (https://nixos.org/).

---

Apply! The Haskell market seems to favor the interviewee. In the end, I had more than one offer, even for my first Haskell job.

Good luck!


That's very insightful, thank you!

I don't think that it's wise to sabotage your own future and productivity as a company just so you can pave the way for some language to become more popular.

It isn't, of course. But the crowd of other people want you to do that.

It's the role of applause (and in your case, downvotes). The crowd throws cheap adulation at individuals who act against their own interests.


The peanut gallery :-))

I don't think that it's wise to try to optimize for some kind of speculative long-term success at the expense of higher early-stage costs that reduce your odds of getting there. This is similar to companies that choose their initial technology with scalability in mind before they're even remotely close to needing to scale. I've actually done this only to discover that scalability has a very definite cost and when you're small it has an outsized impact on your burn rate. If you have success, you're going to figure out a way to make the changes you need. Case in point: Facebook. They successfully grew a PHP codebase into one of the most popular apps in the world. It definitely cost a lot more money for them to make PHP work, but when they got to that point they were a lot less cost-sensitive.

Planning for that far down the road is the least of your worries. And any plans you make along those lines are not likely to be very accurate anyway. You're much better off optimizing for the near to mid term. Based on the hosting costs described by the OP they are already reaping a tangible value here.


I think at worst, Haskell is a minor productivity cost for a company vs a mainstream language, and if it is, it's hard to pin it on the language.

So given the upside to paying people to using Haskell (they get to learn it for life, many join + grow the community, they enjoy working for your company more), I think it's worth that kind of harm to a corporation.

I'll keep trying to sap corporate resources into Haskell I take with me for life at least :)


Using "boring" Haskell (less higher level magic) alleviates this to some extent - without being boring. The organzation should decide which features and code styles to use. I agree that the summer intern's multi-layer home-built monad transformer stack and custom operators can be a pain.

I feel like this isn't discussed enough. I can't comment on the technical merits of Haskell but growing an organization and replacing engineers is so much more difficult when you're using tools that aren't mainstream.

The market works somewhat differently for small companies there. Yes, there are fewer people with relatively niche skills, on the other hand you have an easier time to attract the few you need. Not every company wants to become large.

> on the other hand you have an easier time to attract the few you need

How is it easier to find a Haskell developer vs finding a Java/Python/PHP developer?


I've never hired a Haskell developer, but anecdotally from my friends and associates who have, if you put out an advertisement for a Java/Python/PHP developer you get 500 applications from average candidates. If you put out an advertisement for a Haskell developer you get 5 applications from good candidates.

You probably got 500 applications for Java/Python/PHP, of those 450 average and let's say 50 good.

With Haskell you just get 5 good ones. You probably don't start with Haskell as your first language but rather move into it after you are a senior in another language. If you are lousy in Java, you probably won't go and learn Haskell or some other niche language.


That sounds like a reasonable assessment.

For reference, that got called the "Python paradox" back when Google was exploiting it. Of course, Python is now mainstream, so it doesn't have this effect anymore.

I hear far more complaints about how difficult it is to find good people from companies hiring for mainstream languages than from those using more niche stuff. I would assume this is mostly due to larger competition among employers for the former (and in part better community access for small shops in niche languages, and self-selection of who learns the niche languages).

> I hear far more complaints about how difficult it is to find good people from companies hiring for mainstream languages than from those using more niche stuff

I would assume that:

(1) Those using niche stuff are less likely to be hiring under the impression that the main measure of skill is years of experience with a language, and

(2) those using niche stuff are, on average, doing more interesting work that attracts more intellectually curious candidates.

As a result, the mainstream firms get worse candidates, and try to compensate by asking for even more years of experience, and asking for years of experience not just with the language but with specific libraries and other tools, hoping that will get them more skilled candidates, at least for their specific toolchains. But doubling down on that just gets them candidates that are less capable, because even to the extent years of experience are useful, there are diminishing returns, and people who have spent a huge amount of time with the same stack are also likely to be in the “1 year of experience, repeated N times” category, rather than N years of learning and compounding knowledge. (Also, because at a certain point you start making impossible demands, increasing the degree to which the hiring process filters for dishonesty.)


I believe it's another example of Paul Graham's Python Paradox, just repeated more than 15 years later

http://www.paulgraham.com/pypar.html


> I hear far more complaints about how difficult it is to find good people from companies hiring for mainstream languages than from those using more niche stuff

Of course, there are just more of them in the first place. The other effects that you describe might also be true, but keep in mind what the base rates are.


Optimizing for worker fungibility, in a vacuum, seems like a -EV "playing not to lose" strategy. It's understandable that this line of thinking is common though.

I also believe that to be the case as an employee. I.e., being a generalist is probably -EV for your career, but it feels safer, so it's kind of a contrarian position to say "be a specialist".


> Optimizing for worker fungibility, in a vacuum, seems like a -EV "playing not to lose" strategy.

That's why you don't optimise for it in a vacuum. You weigh the potential benefits of switching to Haskell versus the additional cost of maintaining/growing a Haskell team.


Not sure, but how big are the pools of Haskell devs over at Galois, FP Complete, IOHK, Facebook, Tsuru Capital, Type Safe, ...?

Probably under 100, but not sure though.

I don't think this learning curve/hiring thing makes sense. Haskell can help attract smart devs, and it makes refactoring so easy that the increased learning curve is offset.

I'd say it's harder to outsource, and there may not be as many libraries. Compared to Ruby/Rails for the web, or Python for ML, you have to write more by yourself in Haskell.


Thanks for sharing, but honestly, what does your comment add to a discussion of the post, which is about what works for them? You're making a rude comment that assumes they don't already know these things. Have you considered the possibility that the tiny slice of the world you've experienced is just that?

Because what works for one company may not work for another. It's useful to point to any potential cons of a particular approach.

I didn't find the comment rude.


Their point is valid, though the comment reads as condescending to me. Maybe it reads as condescending because I don't agree with it, but I believe it could have been phrased in a less adversarial way:

«Haskell does not scale to organizations of 100+ engineers. There is a trade-off between writing and reading code, which Haskell does not fare well in. Also, given the realities of the dev hiring market, steep learning curves would be very detrimental to the success of the company»

What would have been your reaction if I had replied "look up the word condescension in the dictionary" to you? It would lead to bad discussion, regardless of whether my point about condescension was right or not.


Is it actually hard to hire for Haskell? You can’t just tell some random to learn it (because his head will explode), but my impression is that you’ll have candidates coming out of the woodwork who could never get away with using it before but always wanted to.

Quite a few universities in the UK teach haskell as a way to start everyone on a level playing field and to introduce various concepts.

Similar situation in Poland. Also my anecdotal experience is that Haskell classes are not considered to be the most difficult, rather they are somewhere in the middle.

The University I went to in Sweden used Haskell for its intro classes. Intermediate classes were taught in Java.

Single datapoint: The last time I posted a job ad (I was the hiring manager) I got ~40 good applicants within a week. I expected much less and got pretty overwhelmed.

In my experience, it's easy to hire the next Haskell programmer and hard to hire the next ten.

My experience with onboarding non-Haskellers has been pretty good, though I would certainly admit the argument that they were unusually talented individuals.


That's just untrue, and Paul Graham wrote a great essay on that (the Python paradox, mentioned in another comment in this tree). I have been working with Elixir for the last 5 years, so I guess I can give some first-hand opinion.

1. When your company uses an esoteric language, your job offers usually say "we can teach you the language" instead of "we require x years of experience in the language and y, z libraries". That is a really great filter for people you want to work with, as they have to accept they will be learning from the start.

2. There is a smaller number of engineers qualified, but the number of companies they can apply to is even more reduced. I think all in all you're winning in this equation as a hiring manager.

3. Your hiring process gets cheaper, since it moves from having to filter candidates heavily to focusing on matching the right person to the team. People who are interested in non-mainstream languages already fit some of the criteria you'd otherwise have to select for.

4. As the number of people you can easily hire is smaller, and onboarding even a talented person without experience in the tech takes time, you get much better at selecting problems you want to tackle, and you grow at a more reasonable pace. That has great benefits for your organisation.

I know for sure that if I am ever starting my own startup, I will use no mainstream technology. If I want JVM, I will use Clojure instead of Java, if I want web, I will use Elixir instead of Rails/Django, if I go low level, it's Rust, no C++ etc.

It's counterintuitive, but it works better. People think about the size of the whole market, but they should think about the percentage of applying candidates who are fit for the job.


I've found that code readability decreases (for me) with the generous use of <$> and $.

Edit: removed ` characters.
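(For anyone reading along: <$> is just infix fmap, and $ is low-precedence function application, so each commented pair below is equivalent. A tiny illustration:)

  main :: IO ()
  main = do
    n <- length <$> getLine   -- same as: n <- fmap length getLine
    print $ n * 2             -- same as: print (n * 2)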


Ever heard of Pandoc? With your logic, python is the only reasonable language.

I don't think "have you heard of this project with <5 main contributors" is a useful response to questioning if it scales to large groups.

Haskell is an excellent language, and you are free to choose not to drown yourself in the most complex uses of it.

Haskell has the type system Java wishes it did, and half of the reason languages like Rust are interesting is because they've learned from Haskell (which is the point of Haskell, a research language, though it happens to also be a pretty darn good language for building practical things). Simple basic data types like `Maybe t` and `Either l r` are such a revelation that you wonder how you lived without them.

I've shared this anecdote before, but Option<T> in Java is an example of the blub paradox[0], and discovering Haskell and finding out about Algebraic Data Types (ADTs) and the Maybe type cured my blub. The crux was this: Option<T>s seem to "infect" any codebase you use them in, because you realize that anything can fail and be null. Living in Java land made Option<T> seem out of place, but it's actually Option<T> that is right -- if you allow nullable types in your code base, or you do operations that can fail, properly representing that failure is the right decision.
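As a small illustration, Data.Map's lookup returns a Maybe, so the compiler forces callers to handle the missing case (the function and names here are made up):

  import qualified Data.Map as Map

  describeAge :: String -> Map.Map String Int -> String
  describeAge name ages =
    case Map.lookup name ages of   -- lookup :: Ord k => k -> Map k a -> Maybe a
      Nothing  -> name ++ ": unknown"
      Just age -> name ++ ": " ++ show age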

Without overstating things, some of the best features of Haskell are:

- Compile time type checking (this cannot be overstated) and non-nullable types

- Expressive and simple data type creation via `data`, `type`

- An excellent system for attaching functionality and composing functionality to data types via `typeclass`es and `Constraint`s.

- An emphasis on errors as values (unfortunately exceptions are in the language too, but you can't really stop them from existing)

- Forced delineation between code with side-effects and code without (this results in some complexity if you come from a world with side-effects everywhere and no control)

- Fantastic runtime system with good support for concurrency, parallelism and shared memory management.

- Very easy refactoring (if you're not adding any complexity/abstraction) because you can just change what you want and let the compiler guide you the rest of the way.

Haskell has its warts (hard-to-debug space leaks, a relatively small ecosystem, the ability to drown yourself and your team in abstraction), but it's just about the most production-ready research language I've seen.

Whether or not you like it, the likelihood it's already improved your life in whatever language you're using is very high.


You dropped this:

[0] http://paulgraham.com/avg.html

(Scroll to "The Blub Paradox", about a third of the way down.)


If you want to get a feel of the productivity using Haskell in production start with a simple CRUD app and use IHP (https://ihp.digitallyinduced.com/) to build it. You will have something usable within a day - GUI and all. Then move further down the rabbit hole from there.

How about servant, and some popular js framework on top of that? IHP might be putting a lot of effort into the project, but the code generation part... I (personally) don't like that at all. And it's not common practice in haskell.

Second this -- Servant is one of the best examples of server-side haskell there is, and from what I understand IHP is relatively new in comparison (correct me if I'm wrong).

Servant is one of the best, if not the best, examples of how Haskell's higher-level abstractions can benefit practical bread-and-butter programming tasks (which making APIs is these days), and of where type safety is a huge benefit.

Writing servant handlers can also feel mostly imperative depending on how much you use `do`.
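For a taste, here's a minimal sketch of a servant API; the User type and the route are made up:

  {-# LANGUAGE DataKinds, DeriveGeneric, TypeOperators #-}

  import Data.Aeson (ToJSON)
  import GHC.Generics (Generic)
  import Servant

  data User = User { userId :: Int, userName :: String }
    deriving Generic

  instance ToJSON User

  -- the API is a type: GET /users/:id, returning JSON
  type UserAPI = "users" :> Capture "id" Int :> Get '[JSON] User

  server :: Server UserAPI
  server uid = pure (User uid "example")   -- runs in servant's Handler monad

The type checker then guarantees the handlers match the declared routes.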


IHP is very opinionated in how it approaches building web applications (code gen, project structure, naming, ...). But exactly this kind of opinionated design makes it possible to be very productive, compared to doing everything yourself.

IHP is opinionated in the sense that it keeps state on the server side and uses (something similar to) Turbolinks to make it look fast. But generating additional types for SomeDataType, like ViewSomeDataType etc. -- my gut feeling tells me these should be implemented through type classes instead. New data types shouldn't be generated for cases like this.

Disclaimer: I've only looked at the docs of IHP, but this was what it looked like it was doing.


I find it extremely frustrating that I have to use Nix to use this framework. We don't use Nix at my company, and we likely never will. I can easily incorporate packages from hackage into our codebase. I really don't want to have to vendor this project myself. Why make a great web framework and then create such a large constraint on who can use it reasonably?

We use nix in production at my company. It’s horrible. It’s only good for ecosystems that rely heavily on context or libraries installed by operating-system package managers. Languages that have self-contained package managers, like JavaScript, Haskell or Python, don’t benefit. In fact, in those cases nix actually makes things worse, by making almost everything way more complex and error-prone than necessary, as literally nothing is designed with nix in mind.

Haskell is nice and all, but I'm not a huge fan of this take. I can't help but think that many of the arguments boil down to something like 'you can write types so that the compiler checks things for you' (not a quote), whilst the author disregarded the Java/C++ compiler as "an annoyance" (a quote). The rest of the article is mostly a comparison between Haskell and PHP/Python/JavaScript, and most laid out benefits boil down to static typing.

Sure, Haskell's type system is nicer, and the error messages are, I'm sure, more helpful (although the Java/C++ ones make sense when you learn what they mean).

There is an example of domain modelling in Haskell:

    type Dollars = Int

    data CustomerInvoice = CustomerInvoice
        { invoiceNumber :: Int
        , amountDue     :: Dollars
        , tax           :: Dollars
        , billableItems :: [String]
        , status        :: InvoiceStatus
        , createdAt     :: UTCTime
        , dueDate       :: Day
        }

    data InvoiceStatus
        = Issued
        | Paid
        | Canceled
The syntax is nice (ish, CustomerInvoice is a bit ugly), and terse. But I've seen this a million times in Java, and that works fine.

Quote:

  Modeling domain rules in the type system like this (e.g. the status of an invoice is either Issued, Paid, or Canceled) results in these rules getting enforced at compile time, as described in the earlier section on static typing. This is a much stronger set of guarantees than encoding similar rules in class methods, as one might do in an object oriented language that does not have sum types. With the type above, it becomes impossible to define CustomerInvoice that doesn’t have an amount due, for example. It’s also impossible to define a InvoiceStatus that is anything other than one of the three aforementioned values.
All of this is table stakes in Java/C++ too.

Other brief rebuttals:

  Haskell has a large number of mature, high-quality libraries
No way this beats Java. I don't know the C++ ecosystem well, but I assume C++ wins too.

  Haskell enables domain-specific languages, which foster expressiveness and reduce boilerplate
Be careful what you wish for.

  Haskell has a large community filled with smart and friendly people
I think at the end of the day Haskell just feels fun to write, if you're the sort of person that likes it. That's fine. But I don't think going all-in on Haskell is the right call for most companies.

> But, I've seen this a million times in Java

Perhaps when Java gets record types, sealed classes, pattern matching and other features. But right now, Domain Modelling in Java (and C++) is really painful compared to a higher-level language like Haskell.


> the author disregarded the Java/C++ compiler as "an annoyance" (a quote).

To be fair, the context of that quote is:

> Many programmers encounter statically typed languages like Java or C++ and find that the compiler feels like an annoyance.

I think this is a fair statement, although it would also be fair to say "many Java and C++ programmers find their compiler errors useful". I'd guess these two camps would remain mostly the same when using Haskell.

You're right that most of the article is roughly comparing a good example of static types (Haskell) against a bad example of dynamic types (PHP).

> I've seen this a million times in Java, and that works fine.

My biggest problem with Java (and the JVM) is the existence of `null`: it completely undermines type signatures. In the above Haskell example we "know" (see caveat below) that a `myInvoice :: CustomerInvoice` is a `CustomerInvoice`, whilst in Java a `CustomerInvoice myInvoice` might be a `CustomerInvoice` or it might be `null`; likewise `myInvoice.billableItems` is a `[String]` in Haskell, whilst in Java it might be a `List<String>` or it might be `null`; in the former case, each element might be a `String` or it might be `null`.

Caveat: Haskell values are lazy by default, so errors may only get triggered when inspecting some deeply nested value; in that sense we might say that a Haskell expression of type `T` might be a `T` or might be an error (known as "bottom"). We certainly need to keep that in mind, but one nice thing about bottom is that it can't affect the behaviour of a pure function (we can't branch on it). In that sense returning a value containing errors, which are later triggered, is practically equivalent to triggering the error up-front (pure expressions have no inherent notion of "time", unlike imperative sequences of instructions). The interesting difference is that we can also use such values without triggering the errors, iff the erroneous part is irrelevant to our result ;)
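A tiny example of that last point:

  -- the error inside the pair is never forced, so this prints 1
  pair :: (Int, Int)
  pair = (1, error "boom")

  main :: IO ()
  main = print (fst pair)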

Having all types nullable by default makes 'proper' null-checking incredibly verbose, not to mention tricky; the alternative is to cross our fingers and hope our assumptions are right. What makes this frustrating is that such checks are exactly the sort of thing that computers can help us with, and type systems are particularly well suited for! Hence the presence of `null` cripples Java's type system in a way which can't be worked around (without essentially layering a separate, null-less type system on top to check for nulls!).

Also note that the presence of null causes every domain model to collapse. Let's say we want to write a conversion method, e.g. from `CustomerInvoice` to `Document`, and we don't want to worry so much about `null`: hence we write in our javadoc that as long as the given CustomerInvoice contains no null values, this method will never return null; let's say we throw a NullPointerException in those invalid cases. Great, our users now have fewer edge-cases to worry about; they don't have to check for null, and they don't have to catch NullPointerException if their input is correct.

Except, once we start implementing our method we find it needs to call some other helper method, e.g. `statusToTable`; if that method returns a null result, we would be unable to construct the `Document` value that we promised. What can we do in that case? We promised we wouldn't return `null`, so maybe we throw a NullPointerException? If we do that, those calling our method might get a NullPointerException even if they gave valid input! We might throw a different exception instead, like AssertionError, but the effect would be the same. Hence we can't guarantee to our callers that we don't return null (or some equivalent that they must deal with, like NullPointerException or AssertionError); that, in turn, means they can't provide such guarantees to their callers, and so on. At any point, we might get a null (or equivalent exception), and the whole house of cards comes crashing down.

Maybe we trust that helper method doesn't return null, but how can we know? Maybe we check its documentation or source code to see whether it might return null; but we find that it calls other methods, so we have to check those, and so on. If we do this, we would also have to pin our requirements to the precise versions of the libraries that we checked. In case you couldn't tell, that process is essentially manual type checking (for a very simple system with two types: 'Null' and 'AnythingElse').

Of course, this is sometimes inherent to the problem, e.g. if a HashMap doesn't contain the entry we need then there's nothing we can do. However, most code doesn't have such constraints (except perhaps out-of-memory), but there's no way to tell that to Java (in mathematical language, Java weakens every statement to admit trivial proofs).


The claims in the article sound weak, because they communicate in an informal and natural way.

"Haskell's type system is more expressive than X and Y" is a strong claim and can be proven by showing that X and Y need to compose run-time workarounds for a given property that can be checked statically in Haskell.

"Functional Programming reduces the surface area for Bugs" is a strong claim and can be proven by showing that a single mutable reference strictly introduces a set of possible bugs that were not possible before and that these bugs cannot be checked in language X.

It is kind of annoying that these discussions often seem superficial, cultural or partisan, when in fact they could be much more rigorous.

Now, If we assume or find these claims to be true we can finally proceed with the real discussion: What are the costs and benefits of these properties in a given setting?


> "Functional Programming reduces the surface area for Bugs" is a strong claim and can be proven

Is there really a formal proof or Software Engineering paper that proves this?

I was told this in my FP class in university, but it was pretty much sold to me as gospel.

In practice I agree with the statement - I certainly feel there's an inherent "cleanliness" to FP.

But I also feel that the argument is not only about program correctness; many people ultimately conflate it with developer productivity. And here's where I feel that things fall apart a bit: I feel as if sometimes it's much quicker to do things with state, so maybe the time you save debugging is time you add elsewhere?


Intuitively you can derive that this is true informally past just an overall feeling.

It’s simple really. Functional programming is just imperative programming without one feature: mutability. Thus, if functional programming is just regular programming with a reduced feature set, it has the same error surface area as regular programming minus the surface area of errors caused by mutability. Hence the error surface area is smaller.

Now think of all the errors caused by initializing a variable as null and changing it later, rather than immediately initializing an immutable variable with the correct value, and you can intuit just how big that error surface area actually is.


>Is there really a formal proof or Software Engineering paper that proves this?

Yes (page 7):

https://arxiv.org/pdf/1901.10220.pdf


> Is there really a formal proof or Software Engineering paper that proves this?

I wonder if there is one such formal proof as well.

Intuitively it is trivial: functions are a subset of all procedures, state can introduce unique bugs, these bugs are not found in functions, so you're dealing with a subset of all possible bugs.

Another intuition is this: by introducing state you increase complexity. A procedure in isolation is not necessarily referentially transparent, but a function is. You cannot replace the procedure with its evaluation at any given point in time, because it is 'connected' to the surrounding program via that state. Now you'd have to show that increased complexity introduces unique bugs.
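To make the referential-transparency contrast concrete, a small sketch:

  import Data.IORef (modifyIORef', newIORef, readIORef)

  square :: Int -> Int
  square x = x * x   -- pure: `square 3` is always replaceable by 9

  main :: IO ()
  main = do
    counter <- newIORef (0 :: Int)
    let next = modifyIORef' counter (+1) >> readIORef counter
    a <- next
    b <- next
    print (square 3, a, b)   -- (9,1,2): the same `next` gave two results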

I'm simply not equipped (yet) to make such claims, but I'd love to hear from experts on these matters. I know that you can formally verify stateful programs, so it is likely not an issue of what is possible. But I damn sure know it is much easier to reason informally about functions than about procedures, except if the procedure merely has local state.

> And here's where I feel that things fall apart a bit: I feel as if sometimes it's much quicker to do things with state, so maybe the time you save debugging is time you add elsewhere?

I can only speak for myself here, but yes, certain algorithms are more intuitive if implemented imperatively. But I found that the set of these algorithms shrinks over time as you get used to FP. Vice versa, there are also algorithms that are much more easily written with functions. Then there is the core idiom of the language you're using. If it is imperative OO, then writing functional programs can sometimes feel cumbersome and less readable.

There are many factors that may or may not apply as well. For example functions are easier to compose and decompose, since they are by definition simpler. However imperative procedures are sometimes easier to read "from top to bottom", because they enable a more real-world-y mechanical/visual mental model.


> certain algorithms are more intuitive if implemented imperatively.

FWIW, "Dancing Links" is a wonderful technique that would be worse than pointless in an immutable language.

https://arxiv.org/pdf/cs/0011047.pdf


It is absolute nonsense to say that using Haskell will improve productivity or maintainability. There are problems like bad libraries, complicated performance profiles, virtually no developer adoption, limited ghc build targets, package management, lack of tooling, slow compile times. Choosing Haskell is likely a terrible choice despite the type system, which seems to be its only advantage.

I think most programmers nowadays face no interesting problems to solve. They crave a mental challenge, but instead of looking for a job that requires solving hard engineering problems, they believe they can satisfy their mental needs by coding in a “somewhat” hard language.

I think you make a valid point in general about coding professionally at most jobs, and I know I've fallen into this desire myself while working on endless CRUD apps over the years. That said, I do think the article brings up some good points about domain modeling. After becoming somewhat proficient in Scala I've found these same features (ADTs) mentioned in the article helpful for the important part of these boring CRUD apps: modeling data at the various application boundaries (API, domain layer, database layer, etc). I now find using weaker type systems and/or imperative code to be either more error prone or more verbose (due to validation + extra tests).

Of course there are other parts of Scala, Haskell and similar that require more mental gymnastics than I'd like, such as composing asynchronous operations; flatMap and monad transformers may be "elegant" once you really understand them but damn is async/await easier to just write and move on with your life.


As a front-end developer whose job is to write configurations (so not even actual code) for a form library, I picked up Rust for this specific reason. Could've been any other language, but this one scratches my personal itch.

I simply don't get the hate this article is getting; are some HN readers really that bad at reading comprehension? The authors clearly say it is "our first choice" and then they go on to present their findings with great clarity. Nowhere do they evangelize Haskell the way other languages, like Rust, often get evangelized. I never see such comments on threads about other languages, even though some of the articles posted are of subpar quality.

In the end the insecurities and failures of snarky commentators don't matter to others who are in the arena solving real problems in production with an unsexy language.


Quote: "GHC, the most commonly used Haskell compiler, produces extremely fast executables, especially when compared against other languages commonly used for application development, such as PHP or Python"

Really? You're comparing apples with oranges? Why not, if you're at the step of comparing compiled versus interpreted languages, compare it with Java too?

Now, do the same comparison versus C++, let's see who wins when talking about speed.


Has anyone had a look or knows of production systems made with a Haskell-like language named Curry? (https://curry-lang.org/)

Sounds a lot like Haskell with Prolog...

“ Curry is a declarative multi-paradigm programming language which combines in a seamless way features from functional programming (nested expressions, higher-order functions, strong typing, lazy evaluation) and logic programming (non-determinism, built-in search, free variables, partial data structures). Compared to the single programming paradigms, Curry provides additional features, like optimal evaluation for logic-oriented computations and flexible, non-deterministic pattern matching with user-defined functions.”


I've played with it, using the kics2 implementation. I made a rough package for Nix, which might be useful if you have problems installing it:

http://chriswarbo.net/git/warbo-packages/git/branches/master...

You might like the Mercury language too: https://mercurylang.org/


You can already have Prolog embedded in Haskell, using LogicT.

Here is the paper by Kiselyov: http://okmij.org/ftp/papers/LogicT.pdf
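As a small taste (this uses the logict package, and it's only backtracking search, not full Prolog unification):

  import Control.Monad (guard)
  import Control.Monad.Logic (Logic, observeAll)
  import Data.Foldable (asum)

  choose :: [a] -> Logic a
  choose = asum . map pure   -- non-deterministic choice

  -- all Pythagorean triples with sides up to 20, found by backtracking
  triples :: [(Int, Int, Int)]
  triples = observeAll $ do
    a <- choose [1..20]
    b <- choose [a..20]
    c <- choose [b..20]
    guard (a*a + b*b == c*c)
    pure (a, b, c)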


Tangentially, did you actually manage to make an HTTPS connection to that site? I can only manage HTTP.

Okay, so this is admittedly snarky but we've seen this sort of blog post so much that it has practically become an Onion article: Why Haskell Is Our Secret Weapon, by Startup You've Never Heard Of.

And then when someone points out that nobody knows who they are or what they've built, we get commenters talking about how company X, Y, and Z are also using Haskell. And those claims also come up short... most of them can't say where or how it is being used, because they don't know... just that at some point in the past, emails were exchanged between someone@bignamecompany.com and someone@haskell.org, and now there is a piece of copy on the Haskell website that disingenuously claims BigNameCompany is powered by Haskell! Who cares how pervasively it is used... if someone writes a config parser with Haskell, all of a sudden we can claim that BigNameCompany would fall over on its face if Haskell wasn't there protecting it.

Come on. Nobody cares that Haskell is your secret weapon if you've never overcome an opponent with it. Or built an entire profitable company on its back. All these types of posts do is fake an authority so you can jump straight to your fallacious argument from authority.

If you want to argue the merits of your favorite language, then do it. Don't make us sit through an argument about how your language makes you special when you aren't even noteworthy enough for a 10-sentence Wikipedia blurb. There are a lot of valid and powerful technical arguments in this article, but they're ruined by framing them all around the premise that we care about how it makes you and your startup special.


I'm not really a haskell fan, but the lion's share of that effect more than likely comes from the small sample size of companies using haskell. Even if it were somehow superior, there just aren't enough people trying it to come up with a unicorn startup or two.

Now, the lack of skilled haskell programmers on the other hand, that's a pretty scary proposition if you're starting a company and may find yourself riding on a rocket, needing as many able hands as you can possibly find.


> Okay, so this is admittedly snarky but we've seen this sort of blog post so much that it has practically become an Onion article: Why Haskell Is Our Secret Weapon, by Startup You've Never Heard Of.

But occasionally it pays off in a really big way.

Like WhatsApp cashing out for $19 billion, on a product they never could have scaled with so few engineers without Erlang.

Like Viaweb and Common Lisp, where Paul Graham says the language allowed them to move much faster than their competitors. One anecdote was about talking on the phone to a customer reporting a bug, and actually fixing it on the live system and asking the customer to try again, and the customer was shocked to find it now worked.

Like ITA, who created the best in class flight search system in Lisp and then sold to Google.

Every once in a while, an unpopular but powerful technology really is the secret sauce for a winning product.


WhatsApp was sold to a company that made even more money off the back of a PHP stack.

At least they don't have exactly one company that has had some decent success with The Language in production, and were it not for the evangelism of The Language Community, you wouldn't know who the company was. And if you ask, "how does one do multithreading in The Language?", The Community will shout at you about how you don't really need multithreading and The Company has been super successful and never once needed multithreading and by the way multithreading will be a feature in The Language Very Soon. :)

I don't think the article warrants such a harsh reaction. Was it the word "our" in the title? They're just writing from their perspective.

I'm at a technology research company that primarily uses C++ as the main company tool. The development team is small, and the code is the result of 22 years of constant revision by PhDs. One of our Never To Be Violated Rules, simply because the number of hidden landmines is far too numerous, is no template programming. When generic programming is required we use a web language like PHP, where the type of something is contextual and one can be free and loose and sloppy if they want. Between the two extremes of our formal-as-fuck C++ base and the anything-goes generic web languages, we maintain surprisingly high levels of productivity, with very low bug counts. Having a tiny team helps, as we all know the code base inside out.

There is a serious risk in using a programming language that is too different from the majority of languages. Like Haskell, Erlang, or Lisp variants.

It requires a lot of effort to learn, development tools are scarce, and you can't easily hire a new worker simply because it's a minor language.

Eventually, the original developers leave the company one way or another, leaving behind code and half-baked documents that only they fully understood. Good luck maintaining that software. You can't. It's either abandoned or replaced.


The biggest problem I had with Haskell was that once you know the language, you also need to learn a pile of extensions that any serious project uses. Also there is a tendency in the community to always look for the "best" (abstract) solution. It makes the whole ecosystem fast-changing.

I would prefer a more stable platform designed for engineers, something like Clojure but with types. OCaml has a small community, and Scala brings unnecessary complexity with its support for OOP.


Haskell does not actually get rid of side effects in practice. I find that tons of Haskell code involves do notation, which is basically code that embraces monadic side effects, and that sort of defeats the purpose the article describes of pushing side effects to the edge.

Really, in order to “push side effects to the edge”, people need to avoid using monadic composition as much as possible, which I rarely see Haskell programmers doing in practice.


Monadic composition is used everywhere from simple failure (Maybe or Either) through to genuine side effects such as IO. The presence of monads does not necessarily mean side effects.

Haskell never claimed to "get rid of side effects", the idea is make them explicit in the types and to be able to reason about them.


Right, but the article says that Haskell pushes these side effects to the edge. I am not saying haskell claims this, I'm saying the article claims this. Haskell actually doesn't do this in practice as tons of people use do notation and state monads. The more people use do notation, the more they are embracing side effects. Literally I've seen haskell code where all function definitions had some form of do notation which is basically against the claim made by this article.

To push side effects to the edge you have to only use do notation and monads when you absolutely have no choice, which is not done in practice with haskell.

>The presence of monads does not necessarily mean side effects

The presence of a functor does not mean side effects. The presence of a monad implies composition and binding, which does imply a side effect. Even composed Maybe monads have side effects that can produce output the function itself could never produce on its own.

For example, let's say I have a function in the Maybe monad that will never produce "Nothing" on its own:

   b :: Int -> Maybe Int
   b x = Just x
but I can produce a side effect by binding it with Nothing.

  Nothing >>= b
The above yields "Nothing," even though it is not part of the definition of b. It is a contextual side effect passed on through monadic composition. Normal composition outside of monadic composition usually does not have this property.

You seem to be lumping all side-effects together as equally bad? I don't think you can expect to push everything out to the edges, for example partiality. I take your point that a lot of imperative programming is done in Haskell (e.g. State monads). However, I think what most Haskellers mean when they talk about pushing effects to the edge is pushing IO and other less benign effects.
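In code that usually looks something like this (the names and the file are hypothetical):

  -- pure core: the type guarantees no IO happens here
  summarize :: [Int] -> String
  summarize xs = "total: " ++ show (sum xs)

  -- imperative shell: IO confined to the outer layer
  main :: IO ()
  main = do
    contents <- readFile "numbers.txt"
    putStrLn (summarize (map read (lines contents)))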

>You seem to be lumping all side-effects together as equally bad?

Never implied anything was bad or good. Just saying that Haskell style programming does not push side effects to the edge.

>I don't think you can expect to push everything out to the edges, for example partiality.

Of course you can't push everything to the edge, but Haskell-style programming doesn't even attempt to. It embraces the side effects, and no one actually pushes anything to the edge. Partiality was just an example; the point is that the bind operator can have a side effect on b, so you can no longer treat the output of b as a pure black box. People who use Haskell use the bind operator all the time, indicating that their code is littered with side effects. Which, again, isn't necessarily bad; it just is what it is.

>However, I think what most Haskellers mean when they talk about pushing effects to the edge, is pushing IO and other less benign effects.

But my argument is that this is not often done. I've seen tons of giant IO functions wrapped in do notation. Generally, no real attempt is made to segregate IO or side effects from pure logic. Everyone just writes a monad and starts using do notation.

Again, if you avoid using monads as much as possible in Haskell, you are pushing side effects to the edge. If you don't do this, which is basically what most Haskell programmers end up doing, then you are not pushing side effects to the edge.
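For reference, a minimal sketch (hypothetical file name and function names) of the segregated style being debated, with IO confined to the outermost layer:

  -- pure core: all the logic lives in ordinary functions
  summarize :: [Int] -> String
  summarize xs = "total: " ++ show (sum xs)

  -- imperative shell: IO only at the boundary
  main :: IO ()
  main = do
    contents <- readFile "numbers.txt"
    putStrLn (summarize (map read (lines contents)))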


> But my argument is that this is not often done. I've seen tons of giant IO functions wrapped in do notation. Generally, no real attempt is made to segregate IO or side effects from pure logic. Everyone just writes a monad and starts using do notation.

Yeah, "functional core/imperative shell" or "pushing IO to the edges" is a weird myth. Really the strength of Haskell is "functional core/IO code carefully threaded through functional core".

What's a good descriptive slogan for that? "Functional pipework/imperative reactants", invoking chemical engineering?

To cycle back to your point, I don't think the failure of this slogan actually points to any weakness in Haskell.


>To cycle back to your point, I don't think the failure of this slogan actually points to any weakness in Haskell.

Yeah, agreed. It's just a style of programming within the functional paradigm, not necessarily bad or good.


Well, I don't know why, but this submission has attracted particularly misinformed, blatantly wrong, or hyper-emotive responses. Maybe Haskell killed people's pets or something.

This company looks to be a team of three (probably very smart) engineers who build cool custom software for likely relatively small clients. The software is likely only supported and modified by them. In that scenario, something like Haskell makes sense. However, once you need to scale your engineering footprint beyond 5-10 people, it becomes virtually impossible to justify using a niche language like Haskell.

What is that claim based on? Plenty of companies use Haskell to great success, scaling well beyond 10 people. It may be a niche language, but in a remote-working world, with a truly massive number of developers in the workforce, there is still a sizeable pool of professional Haskell developers available to most companies.

Also unclear how long they've been doing this... I'm unable to find any info about previous projects other than very high-level blog-style stuff.

What I'd like to understand: when/why does one choose Haskell over other functional languages, e.g. F# or OCaml?

It looks to me like they would satisfy the same points that the article makes.

Edit: just did a quick comparison of the last 2 SO developer surveys, and it looks like Haskell "replaced" F# in their popularity ranking last year.


Haskell's laziness and purity can make it a bit trickier to use constructs that are common in ML-like (or Scheme-like) languages, such as mutable variables. This nudges Haskell libraries in a slightly different direction, e.g. making more use of control structures like monads, arrows, continuations, etc., which authors in other languages wouldn't reach for so readily. This has an effect on the ecosystem, since people want their systems and libraries to be compatible with each other's APIs.
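For example, a minimal sketch of a mutable counter, which in Haskell has to live in IO (via IORef) rather than being an ordinary ML-style ref cell:

  import Data.IORef

  main :: IO ()
  main = do
    counter <- newIORef (0 :: Int)   -- allocate a mutable cell
    modifyIORef counter (+ 1)        -- mutation is an IO action
    readIORef counter >>= print      -- prints 1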

The result is that "the Haskell way" can seem a little more intimidating than the more "pragmatic" approach of MLs.

(I write this as someone who writes a lot of Haskell, and dabbles in StandardML!)


All of them are equally competent languages: F# with corporate support and a giant ecosystem, OCaml being very fast and portable, and Haskell being very... special and pure.

Thanks for your reply!

I see the corporate aspect of F#, but can you elaborate on what you mean by "special" about Haskell?

