Mastering Time-to-Market with Haskell (fpcomplete.com)
142 points by Tehnix on Nov 22, 2016 | 116 comments



IME Haskell development has a sort of bell-curve to it. Initially, you're spending a lot of time prototyping, fumbling around trying to find the right abstractions. Here Haskell mostly gets in the way: you have to declare up-front which functions do I/O, etc.

But once the core abstractions are settled, you start to reap its power. The type system catches tons of potential errors. Combinators allow for enormous expressiveness. Here you're rolling: Haskell is in its zone!

But then you hit a wall. Laziness makes for brutal debugging. Singly linked lists actually suck. Performance optimization is a black art. You find yourself longing for a language with simple semantics and mechanical sympathy. Now Haskell is bumping up against the real world.

Haskell has its sweet spot somewhere between "bang this out by 5pm" and "ship this to a million users". (No surprise it's popular in academia.)


> Initially, you're spending a lot of time prototyping, fumbling around trying to find the right abstractions. Here Haskell mostly gets in the way

The opposite is true: here Haskell helps to find the right data structures/abstractions incredibly fast. The ability to quickly construct and reconstruct a high-level skeleton of your program without actually having to implement anything makes prototyping much easier and faster than in comparable dynamic languages.

> But then you hit a wall. Laziness makes for brutal debugging.

Laziness can sometimes surprise you in bad ways, but it's not like it's impossible to debug.

> Singly linked lists actually suck.

Just replace them with the right data structure then? Even for big codebases and fundamental program abstractions that's a 15min job, because of types.
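As a sketch of what that swap looks like (hypothetical names): replacing an association list with a `Map` from containers changes the type, and the compiler then flags every call site that still assumes a list.

```haskell
import qualified Data.Map.Strict as Map

-- Before: lookupUser :: [(Int, String)] -> Int -> Maybe String
-- After the swap only the type and this body change; the compiler
-- points at every other site that needs updating.
lookupUser :: Map.Map Int String -> Int -> Maybe String
lookupUser users uid = Map.lookup uid users
```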

> Performance optimization is a black art.

I agree. But then again - you don't have to write everything in Haskell. Writing the critical 1% in C, testing the shit out of it and calling it from Haskell is absolutely fine.
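A minimal sketch of what calling into C looks like, using the C library's `sqrt` as a stand-in for a hand-tuned hot path:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

-- Bind a C function directly; GHC generates the call with essentially
-- no wrapper overhead. 'unsafe' skips safe-call bookkeeping, which is
-- fine for short, non-blocking C functions like this one.
foreign import ccall unsafe "math.h sqrt"
  c_sqrt :: Double -> Double
```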


Why, in your opinion, is it faster to construct a "high-level skeleton" than in dynamic languages?

> Laziness can sometimes surprise you in bad ways, but it's not like it's impossible to debug.

The last message I got was something like "Array.Array: Index out of bounds". The program crashed. No idea where the error originated. The answer I got was "yeah, in GHC 8 there will be some support for stack traces".


Because of types, basically. You can construct the core data structures of your program and the types of the functions operating on those data structures without having to actually implement said functions. When the types fit together, i.e. your program is logically consistent, you can start filling in the actual code.

This is much faster than writing a large chunk of code, playing around in the REPL thinking everything is ok, only to realize one week later that you didn't think through some core aspect of your design and then having to refactor a sizeable piece of your codebase.
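A toy illustration of that workflow (the domain names are made up): declare the data and the signatures, stub the bodies with `undefined`, and let the compiler check that the design is consistent before writing any real code.

```haskell
-- Hypothetical domain: the skeleton typechecks even while bodies are stubs.
data User  = User  { userName :: String }
data Order = Order { buyer :: User, total :: Int }

placeOrder :: User -> Int -> Order
placeOrder = Order            -- this one has been filled in

invoice :: Order -> String
invoice = undefined           -- still a stub; the program compiles anyway
```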

> The last message I get was something like "Array.Array: Index out of bounds". Program crashed. No idea where the error originated.

It is an unfortunate historical artefact that there are still partial functions (i.e. functions that may crash) in the Haskell base libraries. It is generally advised to avoid such functions, but this is not very clearly communicated to newcomers to Haskell.
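For instance, `head` from the Prelude crashes on an empty list; the usual advice is to use (or define) a total variant that makes the failure case explicit in the type:

```haskell
-- head :: [a] -> a          -- partial: crashes on []
-- A total replacement forces callers to handle the empty case:
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x
```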

For example, Elm, a language largely inspired by Haskell, doesn't have partial functions. This is why it prides itself on "not having any runtime errors in production" (excluding things like compiler bugs or unsafe JavaScript libraries, of course).


The error was my own. Only I couldn't figure out where exactly.

Such errors can't be avoided. I need arrays for performance.

If performance is no concern I can use Maps, but I still need indexing ("partial functions") for architectural reasons: separation of concerns. I can't make "safe" graphs where everything holds a reference to everything "related".

I just don't buy the idea that partial functions can be avoided.


E.g. look at how PureScript does it [0]. Haskell could do the same.

[0] https://pursuit.purescript.org/packages/purescript-arrays/3....


Sure, I can use a Maybe variant, but the consequence would be that I wrap the whole program in a layer of maybes for cases that I "know" shouldn't happen but happened -- because I do make mistakes.

Or in other words, an "AssertionMonad". No thanks, I'm perfectly fine with assertion errors (i.e. "Index out of bounds"). I just need help with finding my own errors. Stacktraces are a good thing.


I agree, stack traces are a good thing.

For the time being, make a helper function:

    ixWithString :: String -> Int -> Array Int a -> a
    ixWithString msg i arr
      | inRange (bounds arr) i = arr ! i
      | otherwise = error ("ixWithString (" ++ msg ++ "): index "
                           ++ show i ++ " out of bounds")

It checks the bounds and errors out with a custom message when the index is outside the array bounds.


> For example, Elm, a language largely inspired by Haskell, doesn't have partial functions.

I may be wrong, but I don't think this is technically true. I don't think Elm has only total functions, but it does avoid a lot of common, error-prone partial functions (e.g. accessing elements from a collection).


I am not following Elm closely and may be wrong, but I'd be surprised. The creator (Evan) is really picky about having unsafe features in the language.

It's even impossible to upload a package to the official repository containing JavaScript code without prior approval, because the native code may crash!


It is faster for large programs. For small programs I think Python is great, but you also have to consider that there is no such thing as prototype code: a small script (usually, but not always) balloons into something more involved than was originally intended.

THAT is where Haskell shines, the ability to refactor and expand fearlessly and aggressively.

I think for large or complex programs Haskell is much faster for constructing a skeleton because it's usually not the logical operations that trip us programmers up, it's the data flow and correct alignment of types, which are implicit in a dynamically typed language and explicit in Haskell.

So it's kinda-sorta like writing your ideas down in a structured language which enables you to "see" the abstractions in a rigorous light; then the ability to refactor aggressively and safely kicks in, so once you can see your abstractions you can quickly adapt toward better ones.

So I think the initial work is made quicker in that it forces you to think things through; then, because your ideas are in some concrete form, you can see them clearly and take advantage of the quick turnaround time in refactoring.


> But then you hit a wall. Laziness makes for brutal debugging. Singly linked lists actually suck. Performance optimization is a black art

To be fair, these issues are more or less solved by newer functional languages, such as OCaml or Rust.


Practically speaking, OCaml is not exactly a drop-in replacement for Haskell. For instance, Haskell has world-class support for STM, which makes it arguably one of the best languages out there for single-machine multicore concurrency.
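For a taste of what that looks like (a toy account-transfer sketch, not production code): a transaction either commits atomically or retries, with no manual locking.

```haskell
import Control.Concurrent.STM

-- Move funds between two shared balances. 'check' blocks (retries the
-- transaction) until the source balance is sufficient; the whole block
-- commits atomically or not at all.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)
```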

Also, OCaml is already two decades old. I find it very bizarre that you attempted to group it with Rust as a "new" language.


Technically you are correct (the best kind of correct), but from where we sit today, Haskell and OCaml appear as peers. They probably have some 4-5 years between them, and probably less if you consider ML.


You mean non-purely functional languages.


I assume you mean strict (the opposite of lazy) instead of non-pure (which means imperative).

Well, even then, Rust gives you not only control over execution order, but also tight control over memory management, while keeping it safe.

Haskell allows for good reasoning about correctness. In OCaml, you can additionally reason about execution order, but memory management and garbage collector are still somewhat "unpredictable" (i.e. hard to reason about). On top of that, in Rust, you can reason clearly about memory usage and aliasing.

These new, modern languages follow a clear path: keeping quality and correctness (hence safety) while gaining more and more control over performance, without going back to a low level where you would lose clarity about correctness and safety.

(Note, however, that the distinction between correctness and performance is over-simplified here, because performance can also be part of correctness, e.g. for real-time systems, or in systems where scalability is an important requirement.)


> pure (the opposite of imperative).

Pure is not quite the opposite of imperative; declarative is the opposite of imperative, although in practice it is true that pure functional languages tend to be more declarative than imperative functional ones.

You can have pure (AKA referentially transparent) imperative languages if they are synchronous[1]. The synchronous style is especially well suited to reactive/interactive applications, and quite popular in hard realtime applications.

Also, we don't really have exact definitions for any of these terms (imperative, pure, functional, declarative). Here is my attempt at approximate, problematic definitions:

* Functional -- a language that models most/all computations as (possibly partial) mathematical functions, and constructs a program by assembling those functions.

* Imperative -- a language that models computation as state transitions, specifying what state the computation should have at each step.

* Declarative -- a language that describes what result the program should give (as a function, a relation or a behavior) rather than how to arrive at it step by step.

* Pure -- a language where the semantic value of the composition of syntactic terms can be determined by the semantic value of each of the component terms and no others.

[1]: https://en.wikipedia.org/wiki/Synchronous_programming_langua...


You've just defined “pure” as “can be given a denotational semantics”. I have news for you: even ALGOL can be given a denotational semantics.

---

@pron

Nowadays I prefer the terms “effect-free” and “effectful”, rather than “pure” and “impure”, since I don't want to suggest that effects are somehow “wrong” or “evil”. By definition, values are effect-free. An effect is anything that invalidates some equational law that holds for values.

For example, integer equality is decidable, and `x == x` evaluates to `true` when `x` is any integer value. However, if we substitute `x` with the expression `foo()`, where:

    int counter = 0; // or any other initial value
    int foo() { return counter++; }
Then `foo() == foo()` doesn't evaluate to `true` anymore. Hence `foo` is an effectful procedure.

Of course, nontermination is an effect too. If we had defined `foo` as:

    int foo() { while(true); return 0; }
Then `foo() == foo()` would similarly fail to evaluate to `true`.


Yeah, I know my definitions are problematic. Remember that the term "referential transparency" was introduced to computer science by Strachey in his Fundamental Concepts in Programming Languages as a property of procedural, imperative languages. How would you define pure?

---

> An effect is anything that invalidates some equational law that holds for values.

Ah, I can certainly accept that as a definition, except that I'm not sure how different it is from mine, because the equational law you refer to applies to syntactic terms (`foo` in your example) and the language's equality operator (`==` in your example). It's easy to come up with an equivalence relation that would hold for `foo`, but not at the syntax level (e.g., an equality that takes `foo` to mean `foo`'s definition plus the content of the heap, where the value of `foo` is not its return value but its behavior -- i.e., the behavior of `foo` for equal heap contents is always the same).


> except that I'm not sure how different it is from mine, because the equational law you refer to applies to syntactic terms

An effectful language can be given a denotational semantics. At least from what I've seen, when most people say “pure”, they mean “effect-free”.

> It's easy to come up with an equivalence relation that would hold for `foo`, but not at the syntax level [emphasis mine]

Yep, indeed, that's the whole point.


> when most people say “pure”, they mean “effect-free”

But saying "effect free" doesn't mean much unless you define what an effect is, and what constitutes an effect depends on the language. I meant to define the same thing without referring to another vague definition.


I already defined above what an effect is. My definition is technically precise and language-independent.


OK, so you define effect-freedom using the language's equality operator. I agree it's a better definition than mine, but I think assembly language and BASIC would still qualify as effect-free (provided there's no concurrency) even though some people would not consider them pure.


> OK, so you define effect-freedom using the language's equality operator.

Not the runtime equality testing operator. Rather, the language's static notion of contextual equivalence.

It just happens to be the case that most high-level languages provide a built-in equality testing operator that works on primitive types, returning `true` iff its operands are contextually equivalent values.

> I think assembly language and BASIC would still qualify as effect-free

I don't even know what a good notion of contextual equivalence for any assembly language would be.

As for BASIC, I haven't used the old ones, so I can't comment on them. I've used Visual Basic, which most certainly has effectful procedures.


> Not the runtime equality testing operator. Rather, the language's static notion of contextual equivalence.

How is that different from denotational semantics?

> The ones that can be effectful or effect-free are specific computations, not whole languages.

Well, we could define an effect-free language as one where all programs are effect-free. But anyway, what is the "static notion of contextual equivalence" in assembly (or BASIC), and how can you write an effectful program in such a language?


Reasoning about correctness in Standard ML is easier than in either Haskell or OCaml, since induction on strictly positive datatypes actually works as a reasoning principle. You don't need to manually assert silly preconditions such as “this list is finite” (Haskell) or “this list isn't cyclic” (OCaml).
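Concretely, Haskell's list type also includes infinite values, so list induction silently assumes finiteness:

```haskell
-- This definition typechecks: [a] in Haskell includes infinite lists.
ones :: [Int]
ones = 1 : ones

-- Lazy consumption is fine; 'length ones' would diverge.
firstThree :: [Int]
firstThree = take 3 ones
```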

On the other hand, neither Haskell nor the MLs tackle the problem of safely managing ephemeral resources. Rust does this, but it sacrifices the ability to conveniently share immutable data structures (Rc and Arc exist, but they aren't very convenient). The good news is that this doesn't have to be a dichotomy. You can have GC for in-memory data structures and deterministic destruction for everything else: https://news.ycombinator.com/item?id=12766781


> The good news is that this doesn't have to be a dichotomy. You can have GC for in-memory data structures and deterministic destruction for everything else

That's very interesting. Please correct me if I'm wrong, but I think this is not the final solution, either, because you still have the GC running, which prevents strict proofs about performance e.g. for real-time systems. Here, the Rust approach has benefits as it enables you to get very far without a GC.


Sure.


Purity is orthogonal to laziness. Languages like Idris, Purescript, and Elm are pure but also eagerly evaluated.


Except that laziness requires purity.


Only if you insist on determinism :-)


>But once the core abstractions are settled, you start to reap its power

This is so true. Once you get the types settled for your project, then it becomes a breeze to quickly add things and refactor over and over again, because maybe the spec or idea changed.

I've recently experienced this in a Haskell project (using the Yesod framework), and I'm currently working on an Elm project where I very much feel this strength as well.

I'll have to agree with you on the debugging though - it is somewhat of an art, and not one I've completely mastered yet, since I haven't had that much need for it so far (haven't done that many production Haskell projects).


> IME Haskell development has a sort of bell-curve to it. Initially, you're spending a lot of time prototyping, fumbling around trying to find the right abstractions. Here Haskell mostly gets in the way: you have to declare up-front which functions do I/O, etc

There's some evidence Haskell is pretty good at rapid prototyping.

Are you familiar with this '94 paper on rapid prototyping [ http://www.cs.yale.edu/publications/techreports/tr1049.pdf ] by Paul Hudak (RIP), sponsored by the Office of Naval Research & ARPA? The paper is a report on an experimental comparison of programming languages for prototyping, and it's pretty cool because they were asked to implement a portion of a (heavily simplified) AEGIS weapons system! Unfortunately the paper and experiment are dated and have a number of methodological shortcomings (results were self-reported; the examining panel didn't look at running code; requirements were very vague and mostly determined by each team, which makes the comparison difficult), but the results are pretty interesting regardless:

- Of all the teams, only the two Haskell teams delivered working code (though the examiners didn't look at working programs). This must be stressed: no other team delivered running code at all, much less on time!

- One of the Haskell prototypes delivered more features than the initial requirements asked for.

- The programmer of one of the Haskell teams had NO experience with Haskell before the experiment.

- Both Haskell prototypes were also the shortest in line count. Way shorter than the C++ prototype, which wasn't even finished or working in time!

I'd like to see the same experiment repeated under more rigorous conditions and with more modern languages, but the results certainly hint at Haskell being more than adequate at prototyping.


1994 was well before the IO monad became the standard IO paradigm for Haskell, and a year before there was even a proposal on the table for monadic IO for Haskell. It makes me wonder what the language was like back then and whether the results would be repeatable with today's language.


Sure! I think Haskell would fare pretty well today, because the essential aspects haven't changed. It's still very suitable for prototyping and exploratory programming.

Of course, today it would have to go toe to toe against modern dynamic languages. In this case it'd be essential to demonstrate running code, which is arguably a huge advantage of static typing over dynamic typing; if you're only comparing source code, dynamic languages can get away with being terse and claiming "trust me, it works".

I imagine Haskell would still leave modern C++ coughing dust at the start line :)


There's a reason Real World OCaml has a section dedicated to the run-time system and no book on Haskell will ever go near it.


> You find yourself longing for a language with simple semantics and mechanical sympathy.

“Simple semantics” as measured how? The only good metric I can think of is the size of a formal specification.


Warning: be warned before you commit to Haskell. Not all is rosy about Haskell. You may find yourself in a quagmire if you don't know for sure what you are going to get from Haskell, especially from the libraries. Although this is true for other languages too, the library support for Haskell is still far from satisfactory compared to the library support found for Python/Java. The Haskell community seems to have been divided over it.

Not so long ago there was some discussion about "batteries" included with Haskell. [1] It compared the situation of Haskell with that of Python/Java etc.; worth reading if you are about to go the Haskell route.

It seems the priorities (academic, commercial, library support and so on) of the members of the Haskell community are at a crossroads, and they cannot seem to resolve them very well, IMHO.

My take: Haskell is good for learning some really deep concepts, but maybe not so good when it comes to commercial projects, unless you are a Haskell veteran and also have an army of Haskell veterans with you.

[1] http://osdir.com/ml/haskell-cafe@haskell.org/2016-10/msg0001...


There's also a document maintained on GitHub that gives a rating of Haskell's support for different domains:

https://github.com/Gabriel439/post-rfc/blob/master/sotu.md

It backs up your statements; many of the sections under "Common Programming Needs" are rated immature: Databases, Debugging, IDEs...


It sounds quite unsurprising that Haskell (or for that matter, any language that isn't java/python/c) doesn't have an ecosystem comparable to one of the most successful industrial languages around.

I'm not saying everything is great with Haskell, but so far, I haven't run into major holes for what I'm doing, library-wise.


You would think that a really rigorous language like Haskell would foster a better ecosystem of libraries than ad hoc languages like Python or JS but that has not been the case at all so far. I'm not sure exactly why but it's one of the main reasons I'm taking a wait and see attitude to Haskell.


> a really rigorous language

Is it rigorous in theory or in practice?

An academic veteran might want to better express his ideas in code, while a professional veteran might want more readable code with better test coverage. And that's just one axis where the two viewpoints might diverge or even come into conflict.


I'm trying to imagine what "rigorous in practice but not theory" might look like. It seems to me that rigor derives from theory and can then be applied in practice. Haskell is rigorous in both.


I'm not talking about Haskell "the language", but about Haskell "the ecosystem".

Industrial rigor regarding code means:

* consistency in using a coding style

* having adequately named modules, functions and variables

* having adequate comments

* having an adequate level of code coverage through automated tests

* having performance and regression tests

* having good release notes

* etc.

Many of those things, required for high quality libraries, are often skipped for the academic projects Haskell is known for.

So that's why Python or even PHP or Javascript, as less "rigorous" theoretical languages, can have more "rigorous" libraries in practice.


So, like anything, the available libraries vary, but the important ones score pretty highly. Haddock documentation (like Doxygen) is considered a basic part of the job. "cabal test" will run embedded test harnesses. Stackage contains a set of libraries considered to be stable and of sufficient quality that most projects don't need to have qualms about using them. See for instance https://www.stackage.org/haddock/lts-7.10/http-client-0.4.31...


> In summary we've seen that: Haskell decreases development time...

Have we actually seen that or have you just asserted that? Is this really true, and if it is, by how much? Haskell has been around for a couple of decades now, and has had at least two hype cycles (I remember that when I was in university in the late '90s, Haskell was the next big thing). It does not seem to expand significantly even within organizations that have tried it (and that's a very negative signal), with at least one notable case where the language has been abandoned by a company that was among its flagship adopters.

In general, we know that linguistic abstractions that seem like a good idea in theory -- or even seem to work nicely in small programs -- often don't end up having a significant effect on the bottom line when larger software is concerned. People say that scientific evidence of actual contribution is hard to collect, but we don't even have well-researched anecdotes. Not only do we not have strong evidence in favor of this hypothesis, but there aren't even promising hints. All we do have is people who really like Haskell based on its aesthetics and really wish that the nice theoretical arguments translated to significant bottom-line gains.

This blog post by Dan Ghica, a PL researcher, really addresses this point: there is nothing to suggest that aesthetically nice theory translates to actual software development gains, and wishful thinking (or personal affinity) simply cannot replace gathering of data: http://danghica.blogspot.com/2016/09/what-else-are-we-gettin...


It's hard to measure these types of things. But if you are interested, there is a related paper regarding that: http://haskell.cs.yale.edu/wp-content/uploads/2011/03/Haskel...

Although the study in the paper isn't very practical, it's still an interesting experiment.


> It's hard to measure these types of things.

I'll settle for well-researched case studies.

> there is a related paper regarding that

That's a step in the right direction, but the paper doesn't discuss software development, but prototyping. We know that "theoretically aesthetic" languages do well in specification and prototyping.


>I'll settle for well-researched case studies

Don't know if it helps, but they have some case studies here https://www.fpcomplete.com/case-studies.


That would have helped a lot if those really were case studies. Unfortunately, they're just marketing material for FP Complete (with statements like "In Haskell, Acme Inc. found the perfect solution!"). There's nothing wrong with marketing material, but that's not what I meant by case studies (I meant actual technical reports).


You appear to be misrepresenting the blog post. His example of an aesthetically motivated language is Python, not Haskell. Haskell's design was not motivated by aesthetics or superficial ease-of-use, which could well be its downfall. Haskell was and still is motivated by enabling software development gains, e.g. abstraction, composition, concurrency. You may disagree that it has achieved these goals, but to that I would say don't knock it until you have tried it.


> Haskell was and still is motivated by enabling software development gains

The same can be said about nearly every language. Haskell is first and foremost a research language, motivated by answering a research question: what is it like to program using a certain mathematical formalism that is aesthetically appealing to some? Its designers didn't set out to find the best abstractions etc., but to test the validity of a specific theoretical approach.

> but to that I would say don't knock it until you have tried it.

I'm not knocking it. I'm saying that given how expensive it is to try, perhaps some actual information rather than marketing would do a better job in persuading people to use it. Saying that I'm not persuaded by virtually zero information is no knocking anything, except perhaps the advocacy effort.


>> Haskell was and still is motivated by enabling software development gains

> The same can be said about nearly every language.

This clearly isn't true. For example many languages optimise for ease of use, shallow learning curve, programming in the small etc. Some optimise for efficiency and runtime determinism. Features like type-systems are great for software development (programming in the large), but do negatively impact ease-of-use.

> Haskell is first and foremost a research language motivated by answering a research questions

This isn't true. The Haskell committee contains many people from industry. Haskell just happens to have come from academia and is still heavily used in academia.

> Saying that I'm not persuaded by virtually zero information is no knocking anything

That's a huge exaggeration to suggest there is zero information. There are many experience reports published all the time at conferences and in academic journals, going back decades. None however will give you the quantitative metrics you seem to be asking for, because it is almost impossible to do such a comparison.


Nobody knows if Haskell's design is great for software development, and if it is, by how much and whether it justifies the costs. That's precisely why we need data.

> Haskell just happens to have come from academia and is still heavily used in academia.

Haskell isn't heavily used in academia at all. My guess is that most CS professors -- like most developers -- have hardly even heard of it, let alone used it. It is heavily used among researchers whose job it is to study the design of languages like Haskell. Languages that are heavily used in academia are C, Python, Java and maybe a few more.

> There are many experience reports published all the time at conferences and in academic journals,

Where? I've looked. If you want to see what a good report looks like, take a look at this one[1]. It's got some background, and then lots of analysis of pros and cons, with plenty of numbers. I found it so convincing that I tried the language it advocated, and ended up adopting it. I'd seen similar (even more comprehensive) reports for Java back in the early '00s before deciding our team should adopt it. I haven't seen anything similar for Haskell. And again -- the more costly the adoption, the more information required before trying.

> going back decades. None however will give you the quantitative metrics you seem to be asking for, because it is almost impossible to do such a comparison.

I don't want a comparison, and what I'm asking for has been done at some point for nearly every language that has gained widespread adoption. I just want to know how much time and effort has gone into a single project, a breakdown of costs, and a subjective analysis of pros and cons. You see such reports being written from time to time about other languages (although usually with far more adoption than Haskell).

[1]: Slides: http://tla2012.loria.fr/contributed/newcombe-slides.pdf

Report 1: http://glat.info/pdf/formal-methods-amazon-2014-11.pdf

Report 2: http://link.springer.com/chapter/10.1007/978-3-662-43652-3_3


> Nobody knows if Haskell's design is great for software development

We can certainly say that uncontrolled side-effects, manual memory management and dynamic types are bad for software development in the large.

I'll stop here because I don't wish to argue the semantics of what does or does not constitute significant use or significant research.


> We can certainly say that uncontrolled side-effects

We most certainly cannot say that. There's no indication or even a hint that uncontrolled side-effects are a cause of expensive bugs. As a Haskell developer, however, I assume that you consider mutation to be a side-effect (because in pure-FP mutation is a side effect), and it is true that uncontrolled, non-transactional global-state mutation is not such a great idea, but most developers know not to do that in any language, and if you want language-level enforcement on mutation, you can get that for a fraction of the price of Haskell. There's also nothing to suggest that Haskell's approach is a good one (as opposed to, say, Clojure's or Erlang's) or even that Haskell's cure isn't worse than the disease.

> manual memory-management

Absolutely, and you can get automatic memory management for a fraction of the cost.

> and dynamic types

You get static types for a fraction of the cost, too.

> argue the semantics of what does or does not constitute significant use

It's fine for a language to not have wide adoption. That doesn't mean it's not good or even not great. It's also fine -- and I do that, too -- for people to say, I really like that language, I find it elegant, and I feel it helps me develop. What isn't fine is endless hype with claims of significant bottom line benefit but little effort to collect even anecdotal evidence (and I don't mean evidence that Haskell programs can run, or that you really like the language; I mean evidence of the extent of the claimed benefit).


> There's no indication or even a hint that uncontrolled side-effects are a cause of expensive bugs.

Uncontrolled side-effects destroy the ability to reason about software and when one can no longer reason effectively, bugs will happen. This is obvious to anyone who has worked on large systems. Whether it matters or not, depends on the domain. In finance, it is absolutely critical that we can reproduce a calculation and that for example, the current date is not read from the system deep inside the code somewhere. Having the compiler guarantee this for us is a huge win.
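A minimal sketch of what that compiler guarantee looks like (the function names and the 1.02 factor are invented for illustration):

```haskell
import Data.Time (UTCTime, getCurrentTime)

-- A pure valuation function (hypothetical): its type says it cannot read
-- the clock, a file, or any other ambient state. The "as of" date must be
-- passed in explicitly, so the result is reproducible by construction.
value :: UTCTime -> Double -> Double
value _asOf notional = notional * 1.02  -- stand-in for a real calculation

-- Anything that does read the system clock is forced into IO, so the
-- impurity is visible in the type at the boundary:
valueNow :: Double -> IO Double
valueNow notional = do
  now <- getCurrentTime
  pure (value now notional)
```

A call to `getCurrentTime` deep inside `value` simply would not type-check, which is exactly the guarantee being described.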

> Absolutely, and you can get automatic memory management for a fraction of the cost.

> You get static types for a fraction of the cost, too.

What do you actually know about the cost of Haskell adoption? If you want to accuse me of making unsubstantiated claims, your position would be more tenable without making them yourself!


> Uncontrolled side-effects destroy the ability to reason about software and when one can no longer reason effectively, bugs will happen. This is obvious to anyone who has worked on large systems.

I have worked on systems comprising 10MLOCs over a career of ~20 years, both as a developer and as a manager, and this isn't obvious to me at all. But I separate mutation from side-effects, because mutation is better defined, and what side-effects mean varies by language. Global mutation is important to control, but there are lots of ways to do that. Other side-effects (and especially IO)? Not so much.

> Having the compiler guarantee this for us is a huge win.

It is almost as trivial to guarantee that in Java, although maybe not during compilation.

> What do you actually know about the cost of Haskell adoption? If you want to accuse me of making unsubstantiated claims, your position would be more tenable without making them yourself!

Oh, while I haven't written much more than Hello, World programs in Haskell (but I have programmed in SML a bit years ago, and a fair bit in Scheme and Clojure), I am no stranger to the theory of pure-FP (simply typed LC w/ polymorphism, functional semantics, monadic composition etc.; I'm less familiar with call-by-need semantics, though). It is pretty radically different from most other styles of programming, including imperative-functional, both in terms of coding as well as tools such as debuggers and profilers.


I also have ~20 years experience developing software. I have worked on 10+ MLOC Java and C++ systems at UBS, Deutsche Bank, Morgan Stanley, Lehman Brothers and others. I came to Haskell precisely because of my experience working on such large scale systems with mainstream languages. I have spent the last 4 years as a professional Haskell developer, first at Barclays, now at SCB.

While I could probably stomach giving up purity in a small project, you'd only be able to take away sum types and pattern matching from my cold dead hands.
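For anyone who hasn't met those two features, a small sketch (the order type here is made up):

```haskell
-- A sum type spells out every shape a value can take, and pattern
-- matching forces each shape to be handled. Illustrative names only.
data Order
  = Market Int          -- quantity
  | Limit  Int Double   -- quantity, limit price
  | Cancel Int          -- order id

describe :: Order -> String
describe (Market q)   = "market " ++ show q
describe (Limit q px) = "limit " ++ show q ++ " @ " ++ show px
describe (Cancel i)   = "cancel " ++ show i
-- Delete any clause and GHC's -Wincomplete-patterns reports the missing
-- case at compile time instead of leaving a runtime crash.
```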


Personally, I am a bit skeptical of language-level features in general (as opposed to runtime features) to provide significant benefit compared to their high cost[1], and doubly so in general purpose languages, but that is precisely why I am interested in good technical reports about the pros and cons of languages, and those are sorely missing. At this point, whatever public information we do have seems to suggest that no company is significantly better than most others when it comes to delivering complex large software. There are a few exceptions where specific attributes are concerned. For example, Altran UK (Praxis) do seem to be rather exceptional in their ability to provide large verified software (they use Z for specification and Ada SPARK for coding), and they do have technical reports that are not completely devoid of useful information.

[1]: This isn't even about a particular language (although different languages do have different cost of adoption). Any language that doesn't have excellent interop with existing libraries and legacy code has a very high cost of adoption (and yeah, we've tried Scala years ago, and the experience was far from good).


>Have we actually seen that or have you just asserted that

I'd say they have probably seen it. The author of the post himself works at FPComplete, and they use Haskell for their consulting business. That and there are some case studies on their site (I'm fairly sure they all have experience in other languages too).


There aren't any case studies. Under the headline "case studies" they only have some marketing material.


Not the author, and I've done little besides dabbling in Haskell, but I'd bet that quote holds true, though only if the engineer is already an intermediate-to-advanced Haskeller, which probably translates to at least a couple of years of daily Haskell work.


> but I'm sure that quote holds true

How can you be sure? Have you seen Haskell shops churn out software significantly faster than others?


We use Haskell for "rapid application development" in the front office at Standard Chartered bank. We have excellent typed APIs over most of the bank's systems and can assemble applications in hours or days, rather than weeks or months as is typical. The language is as succinct as Python, but thanks to types, purity and many standardised abstractions, we get better code reuse and maintenance.


Have you written a well-researched case study? If not, please do. It's very hard to assess such statements otherwise. After at least six years of Haskell at SC, is it spreading to larger software in the bank? If so, how fast? If not, why not?

Are other banks really spending orders of magnitude more on that kind of software? I've heard that SC adopted Haskell when Lennart Augustsson -- who wrote the first Haskell compiler -- moved to SC from Credit Suisse. When he left CS, they stopped using Haskell. If Haskell gives an order-of-magnitude difference in productivity as you claim, why did CS drop it after Augustsson had left?


AFAIK, Credit Suisse only used Haskell for one particular project and that project came to a natural end. CS did end up adopting functional programming and F# became their strategic choice.

I never claimed "orders of magnitude" productivity from a language alone. Haskell has every weakly-typed mainstream imperative language as a valid subset. You need good people, teams and processes to get the best out of the language. This means it is difficult to do case studies without getting dismissed as anecdotal.

Most other banks have settled on Python for similar use cases. I hear on the grapevine that they have significant maintenance problems (unsurprisingly). Haskell is spreading to larger software in the bank and we are currently hiring.


First, anecdotal case studies are much harder to dismiss than the nothing we have now, and as case studies accumulate they become harder and harder to dismiss. Also, dismissal is not the point. People and organizations are free to make their own considerations and choose their tools. I'm not talking about a marketing campaign. It's about information. If I don't have information, choosing Haskell would be far less rational than not choosing it (in the absence of information, it is more rational to stick with what you have rather than pay heavily for something that may have a positive, neutral or even negative effect).

Second, that Haskell "has every weakly typed mainstream imperative language as a valid subset" (which I don't think is accurate, because projects are not built from scratch, and libraries match a certain programming style) does not mean that Haskell is worth it. For the sake of argument, suppose Haskell has a 5% improvement in maintenance cost over Python but a 10% higher cost of adoption, then it's not worth it.

> CS did end up adopting functional programming and F# became their strategic choice.

That's great, but doesn't have much to do with Haskell. The investment required for a pure-FP language like Haskell seems to be much greater than the investment in an imperative functional language like F# (or Clojure). It seems like CS, after having evaluated Haskell, decided that it may not be worth it.


> People and organizations are free to make their own considerations and choose their tools.

Not true. If enough FUD is spread about Haskell, for example it being a failure at CS, then it becomes difficult to convince management.

> If I don't have information, choosing Haskell would be far less rational than not choosing it

No one is suggesting choices be made blindly. Managers should hire good people and then delegate those decisions to those people. If they want to use Haskell, don't stop them due to lack of case studies.

> For the sake of argument, suppose Haskell has a 5% improvement in maintenance cost over Python but a 10% higher cost of adoption, then it's not worth it.

I don't subscribe to nineteenth-century management science. My point was that software development is more about people than technologies. You will never be able to compare Haskell and Python like that without comparing the people behind the code. What we can say for sure is that many good developers are asking to develop in Haskell. The best way for other developers to understand why is to try it.

> That's great, but doesn't have much to do with Haskell.

You were implying that Haskell had failed somehow because CS had stopped using it. Decisions within large organisations are rarely made on purely technical grounds. I can assure you there are many Haskell advocates still there.


> If enough FUD is spread about Haskell, for example it being a failure at CS, then it becomes difficult to convince management.

Haskell is not yet at an adoption level where anything said about it would qualify as FUD. If I told you I have a great cold remedy and all of my friends really like it, you saying that you may want some more carefully researched information first doesn't qualify as FUD. I don't see why anyone can be expected to make a rational decision based on nothing but marketing from enthusiasts. Haskell is not even at the point where management is the part of the organization that needs convincing.

> If they want to use Haskell, don't stop them due to lack of case studies.

That is precisely the job of responsible management: to stop technical teams from using expensive technologies that are not only unproven but have hardly any actual anecdotal information behind them at all. The precise difference between management and developers is that management sees the bigger picture. As a manager I wouldn't mind technical people experimenting with Haskell or any other unproven technology on side projects, though.

> The best way for other developers to understand why, is to try it.

Yes, but when you come out with an experimental, very expensive drug, you don't say, "the best way for people to see if it works is to try it". Instead, some people try it, and then they research and report on their experience. After all, you're not asking me to try a new sorting routine, but a whole new language ecosystem. I understand why some early adopters and PL enthusiasts would be happy to try, but most organizations don't and shouldn't act in this way. There are tons of promising technologies. Is every company supposed to just try each one? Do we all need to try Elixir, Kotlin, Clojure, Scala, Haskell, F#, and Go? Now, it may not be fair, but the fact is that the more expensive the experiment, the more data we'd want before we'd allocate resources to conducting it. A language like Kotlin is nearly free for Java shops to try. Haskell? Not so much. The best way for other developers to understand is, therefore, for the few early adopters to research and report on their experience, with some actual data (cost, duration, size etc.).

> You were implying that Haskell had failed somehow because CS had stopped using it.

Yes, that is a very negative signal. I don't see how choosing F# changes things.

> Decisions within large organisations are rarely made on purely technical grounds.

True, but in general, organizations shouldn't make decisions on "technical grounds", but "bottom line grounds". Even that doesn't always happen, but it generally does, and it's reasonable to guess that if Haskell had had a huge advantage, it wouldn't have been abandoned. Also, remember that the converse is also true. If abandoning a language isn't conclusive evidence that it isn't cost-effective, adopting it isn't conclusive evidence that it is. That's why we need some data. It can be anecdotal data, but it must be actual data, and not a marketing lecture.

> I can assure you there are many Haskell advocates still there.

I'm sure there are. Unfortunately, the number or enthusiasm of advocates is never a substitute for evidence. It's perfectly fine that some early adopters are swayed by the advocates, but it's also perfectly rational not to be swayed by it. I absolutely believe that some people truly enjoy Haskell and feel that it makes them more productive, but without some data, it would simply be irresponsible to pay so much based on marketing alone.


> That is precisely the job of responsible management: to stop technical teams from using expensive technologies, that are not only unproven, but hardly have any actual anecdotal information at all. The precise difference between management and developers is that management sees the bigger picture.

In most cases management isn't qualified to make technical decisions. Managers rarely see the bigger picture.

I've worked on a Haskell project that worked, never crashed, was lightning fast and was meeting deadlines for new features without a problem. Then higher management decided to throw away probably around 7-10 man-years and rewrite the whole thing in Java, quite literally "just because Java". We even had a consultant from FP Complete come over to take a look at our codebase and talk in front of management about the advantages of a mature codebase in Haskell. Management didn't even come.

Abandoning Haskell, in my case, had absolutely nothing to do with its technical merits and everything to do with incompetent management. When I left one year after they started rewriting the project in Java (I was staying to help maintain the Haskell code while the transition was happening) everything was buggy as hell, slow and unmaintainable. Not to mention that half the Java guys they hired had already left again.


First, given the large effort involved and how much you wanted Haskell to succeed, I assume you carefully collected lots of metrics. By publishing them you could actually do Haskell and the software community at large some actual good other than just complain about incompetent management (and I've whined about management, too). You may have missed the chance in that company, but your well-researched report is sure to convince some others, so that's not a total waste at all. Second, such incompetence often works in the other direction, too, usually in companies where incompetence takes a different form, which is why a little actual information would be nice. Your technical report about a 7-10 man-year Haskell-and-then-Java project would make a great contribution! For an experiment of such non-trivial effort, and given that the project was later re-written in Java, you may even find a prestigious publication to publish your report.


Back then I would have loved to do it because of the exact reasons you mention (i.e. it would have been a perfect case study of the same project implemented in two different languages), but unfortunately I wasn't at liberty to disclose specifics.

All I could do was collect metrics like lines of code and intentionally vague experience reports from former colleagues who are still there, but I guess that's hardly gonna convince anybody.


> Haskell is not yet at an adoption level where anything said about it would qualify as FUD. If I told you I have a great cold remedy and all of my friends really like it, you saying that you may want some more carefully researched information first doesn't qualify as FUD. I don't see why anyone can be expected to make a rational decision based on nothing but marketing from enthusiasts. Haskell is not even at the point where management is the part of the organization that needs convincing.

Maybe "FUD" isn't precisely the right term, but Haskell does seem to attract a special level of attacks that isn't applied to other languages at a similar level of maturity.

> Is every company supposed to just try each one? Do we all need to try Elixir, Kotlin, Clojure, Scala, Haskell, F#, and Go?

Well, companies have to make the choice one way or another, and they don't seem willing to pay what it would cost to study these questions to a scientific level. And the decision to stick with "industry best practice" (probably currently Java) is not a safe option either - anyone who does that runs the risk of being outcompeted by those who pick a better language.

> A language like Kotlin is nearly free for Java shops to try.

Oh? To my mind a language that young, backed by a relatively small company, and whose popularity is heavily based on marketing (including a history of repeatedly announcing features before they were actually implemented) would come with a high risk factor.

> The best way for other developers to understand is, therefore, for the few early adopters to research and report on their experience, with some actual data (cost, duration, size etc.).

Sure - but the business incentives don't seem to be there for those early-adopter businesses to publish that kind of report (whether the outcome is positive or negative). How can we make that happen?


> Maybe "FUD" isn't precisely the right term, but Haskell does seem to attract a special level of attacks that isn't applied to other languages at a similar level of maturity.

It's a level of attack reserved for a special level of unsupported hype. I have absolutely no problem with those who say they really like Haskell, it fits well with how they think, and they feel more productive with it. I even have no problem with those who say that they believe the language has objective benefits in some domains and circumstances and that it deserves a closer look. But if you go around claiming that we can significantly reduce the cost of software development using your language (or any tool), I think an itsy-bitsy amount of even anecdotal evidence is required, especially after two decades.

You may not believe me, but I truly admire Haskell's elegance (even though I believe pure-FP is not the right path for most kinds of software; I have my own pet hypotheses). I just don't like the religious ecstasy that surrounds it.

> and they don't seem willing to pay what it would cost to study these questions to a scientific level.

Because in order for a business to invest in a costly experiment, there should first be some evidence that there's a good chance for the experiment to pay off significantly. Businesses aren't research institutions, and they can't afford to invest in researching every new technology that somebody really likes.

> anyone who does that runs the risk of being outcompeted by those who pick a better lanugage.

As soon as the competition pulls ahead, people will be quick to flock to the new language. That's what happened with late Java adopters.

> would come with a high risk factor.

You're right that the risk isn't nonexistent, but it's relatively low (and Kotlin is backed by a larger company than the ones backing most new languages, with the exceptions of Go, Swift and maybe Rust). There's automatic translation and seamless interop, and gradual adoption is very easy. You can pick a class at random and translate it to Kotlin to test if build/debug/profile/deploy etc. works well, and then translate more classes. I'm not sure such a tool exists, but it should be fairly trivial to translate Kotlin back to very readable Java if the need arises to revert for whatever reason.

> but the business incentives don't seem to be there for those early-adopter businesses to publish that kind of report

I think that the business interests are there. If you invest heavily in a language, you'd want it to gain wide adoption so you get more maintenance, more libraries, easier recruiting, more tools and lower risk. The business interests would not be there if Haskell provided an incredible boost in productivity, that is so large that the competitive advantage offsets the downsides of low adoption. I don't think anyone is even making that claim.


> carefully researched information first doesn't qualify as FUD

How carefully researched was your very public assessment of what happened at CS?

> Do we all need to try Elixir, Kotlin, Clojure, Scala, Haskell, F#, and Go?

Of course not. Those interested in learning more about pure functional programming should try Haskell, the others have different niches they are trying to fill.

> If Haskell had had a huge advantage, it wouldn't have been abandoned

No one is claiming Haskell is some sort of magic bullet for all use cases. If you require ease-of-use, are writing disposable software, or high performance software, then it may not be the best choice.

Nobody is arguing against further studies, but I suspect that most of our industry isn't really interested. As I have said elsewhere, Haskell does not optimise for ease-of-use or aesthetics. It has a growing niche and will continue to be successful in that niche, with or without further productivity studies.


> How carefully researched was your very public assessment of what happened at CS?

It wasn't. I simply stated that it's a negative signal; that's not FUD. It is a negative signal whether or not I mention it. Mentioning it gives Haskell advocates a chance to address it. You don't have to, but whatever you do, the signal is there. But it's just one signal. I don't think anyone should make a decision based on just one signal.

> Those interested in learning more about pure functional programming should try Haskell, the others have different niches they are trying to fill.

What should those interested in increasing productivity learn? This entire discussion isn't about learning a new programming style, but about achieving a goal of reducing development costs.

> No one is claiming Haskell is some sort of magic bullet for all use cases.

So what is the claim, and is there actual evidence (even anecdotal) to support it?

> I suspect that most of our industry isn't really interested.

The industry is extremely interested in anything that can significantly reduce the cost of developing software. We don't care if it's called Haskell or an omniscient debugger, or a static analysis tool.

> and will continue to be successful in that niche, with or without further productivity studies.

That's terrific, but don't blame the industry for being short-sighted and investing heavily in something that has little evidence of a great promise. You can only blame the industry for ignoring evidence, not for ignoring hype.


> What should those interested in increasing productivity learn?

Productivity for whom? And over what timescales? As I have been trying to say, it is not that simple, you are oversimplifying.

> So what is the claim, and is there actual evidence (even anecdotal) to support it?

It sounds like you are completely satisfied with existing approaches, in which case there is little point in me trying to convince you.

> The industry is extremely interested in anything that can significantly reduce the cost of developing software.

Industry as a whole does not have a good track record of adopting even good ideas in a timely manner, cf. garbage collection. So again, I think you are oversimplifying.


> Productivity for whom? And over what timescales? As I have been trying to say, it is not that simple, you are oversimplifying.

I'm only simplifying as much as the claims in the article. It's just pretty hard to understand what the claims about Haskell are given all of the vague hype. The article we're discussing says, "Haskell decreases development time by narrowing the scope of development to your domain." Obviously, I understand there are caveats, and it's unclear how big the savings are. I just want reports with some meat.

> It sounds like you are completely satisfied with existing approaches, in which case there is little point in me trying to convince you.

The question of being "completely satisfied" is irrelevant. I'm not a researcher and my goal isn't to dream up better approaches, nor do I have the resources to invest in experimenting with every new tool. I am, however, very interested if someone says they have a tool that can provide significant savings. If there were only some more actionable information, convincing me would be quite easy. I want to be convinced, but throwing general promises around serves to confuse more than enlighten.


> and can assemble applications in hours or days, rather than weeks or months as is typical.

So you are building in hours and days applications that other banks are spending weeks and months on.

I'd love to see a few case studies!!


It does take significant investment to get to this point though. SCB have been building APIs and components in Haskell for 8 years. We can now glue these parts together very easily and can be confident it will work, thanks to strong types. With less disciplined languages and/or teams, 8 years of development can yield a codebase that most will want to throw away and start again.
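Roughly what that type-checked glue looks like, in toy form (every name below is invented; the real APIs are internal):

```haskell
-- newtype wrappers stop different kinds of values from being mixed up.
newtype TradeId = TradeId Int
newtype Usd     = Usd Double

-- Hypothetical wrappers over two internal systems:
notionalOf :: TradeId -> Usd
notionalOf (TradeId n) = Usd (fromIntegral n * 1000)

bookingMsg :: Usd -> String
bookingMsg (Usd x) = "booked " ++ show x ++ " USD"

-- The glue: feeding a raw Int, or the wrong wrapper, into either
-- function is a compile error rather than a production incident.
main :: IO ()
main = putStrLn (bookingMsg (notionalOf (TradeId 42)))
```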


The article assumes an existing team, which is a bit problematic when talking about time to market. If you start the analysis earlier (two people discussing some ideas in a coffee shop), then I'd argue that a language like Haskell can be problematic if your metric is time to market. You might very well make that up later by having a more robust code base or reaping any of the other asserted benefits, but the existing gallery of premade and tested building blocks in other languages seems to be richer. It's probably also going to be harder to add people to your team (on average).

I would have liked to see a comparison to other functional languages (say Elixir or OCaml) and not just Java and C#. I'd also argue that picking Java rather than a more agile environment (there are some cool lightweight Java frameworks, but most people will associate Java with the rather heavy enterprise stack) is an odd baseline for a time-to-market comparison. Granted, I'm mostly thinking about webapps (but the article mentions Yesod).

Still a nice article (my post sounds overly negative upon rereading).


> The article assumes an existing team which is a bit problematic when talking about time to market. If you start the analysis earlier (two people discussing some ideas in a coffee shop) then I'd argue that a language like Haskell can be problematic if your metric is time to market.

Well, the best language for an early-stage business is the one the founder knows, but that's always going to be true. There's no reason that language can't be Haskell.


Although I'm still very fresh to the Haskell world myself, I tend to agree with the author that, when I know what I am doing, my Haskell code usually takes less time to write and has fewer bugs.

Having said that, time-to-market is only partially influenced by the code I write; the biggest part is the code that I don't have to write, i.e. third-party libraries.

In my Haskell adventures I am having trouble finding third-party libraries for even the most popular things, e.g. Cassandra. As far as I can tell there are two libraries, `cassandra-cql` and `cql-io`; the first hasn't been updated for a year now, and the second has only 3 stars, which makes me uneasy.

So, although I can see where the author is coming from, I don't think you can beat Java, Ruby, JS or Python in that sense. Unless of course your code/project doesn't have a lot of dependencies.


I've found Scala to be a really good compromise: it has most of the functional features of Haskell, a wide array of existing libraries, and access to Java libraries when you need them. Plus implicits/typeclasses make it easy to extend otherwise annoying Java libraries to make them easier to use.


Yes, when working in Haskell, expect to do some yak shaving.

It's some incredibly productive yak shaving, it's even fun, but it's still yak shaving.

But I'll point out that Haskell also has many wonderful libraries, for stuff you won't see in other communities. The libraries that are available are of unmatched quality. Odds are a library marked as experimental on Hackage is more reliable than a mature one on the Python Package Index.


Mine has a single star but you may find it useful. https://github.com/eklavya/hascas


Thanks. I love that you have a fair-sized concrete example for usage. I am giving it a try now.


Every couple of months, I think to myself that I ought to buckle down and make an effort at really learning Haskell. And I go through some books and some toy projects, and then I hit the wall. The problem is, I can't actually use Haskell in anything production, because nobody else that I work with is going to be able to figure out how it works, and if I move on to some other job, my employer is never going to find somebody with Haskell experience in this area - at least for what they'd be willing to pay.

F# looks like a more pragmatic choice, and I've been looking hard for a good place to bite off to use it going forward. But again, it'll be bad for the bus factor.

So the best option I've found is to just start using more functional concepts and patterns in Java/C#. C# in particular seems to be leaning in this direction with the features added in the latest language versions.


I think this thread which I started this last weekend about Haskell productivity vs Rust might be relevant here:

https://m.reddit.com/r/rust/comments/5dtfp2/haskell_more_pro...

(Tried to ask on HN first, which once again proves the value of subreddits, as small focused communities: https://news.ycombinator.com/item?id=12989041)


> Haskell developers are self-selecting

This is not going to change.

I love Haskell, and I appreciate the effort many are making to evangelize the language, but I am experienced (ie. cynical) enough to believe that it's never going to become truly mainstream.


That's my experience as well. As a Haskell user I once felt that I had a secret: that it's so much better than the rest. Now, years later, it seems many pretty much can't tell the difference between a good and a bad language. Or rather, many pick a language by its spread rather than its properties.


Haskell would be a lot easier to manage in production environments if resource usage were more predictable - meaning strict by default.
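The textbook example of that unpredictability, assuming GHC's default lazy evaluation:

```haskell
import Data.List (foldl')

-- The lazy foldl builds a chain of a million unevaluated thunks before
-- doing a single addition, so peak memory depends on evaluation order
-- rather than on the code's apparent shape. foldl' forces each step
-- and runs in constant space.
lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0  -- thunk buildup; can exhaust memory on big inputs
strictSum = foldl' (+) 0  -- strict accumulator

main :: IO ()
main = print (strictSum [1 .. 1000000])
```

Both compute the same answer; only the space behaviour differs, and nothing in the source makes that visible.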


And I believe that, based on the listed properties/features, Elixir may qualify pretty well too. What is more, Elixir may be a better choice with regard to time-to-market + developer happiness.


Yes, but that's a dynamically typed language and everything changes when comparing it. There are so many tradeoffs.

Clojure is quite quick to write and time to market can be very fast, as with most lisps, but you pay the price at debug time. We recently shipped a production app that had a small typo that the compiler would have easily caught, but instead it crashed the site when this one particular task was run.


Your experience with Clojure might not in fact generalize to "most lisps", which have very good compilers that catch all kinds of errors. Certainly the "low hanging fruit" ones like references to unbound variables or functions and such.

What was the typo?

> it crashed the site when this one particular task was run.

Might that also be because that was the first time the task was run at all, since the code was written or altered?

Code that compiles (such that we are confident it is probably free of silly typos, other than those somewhat rare typos which play out in some way that still compiles), might not be correct; it still requires testing.


The typo had to do with accidentally naming a variable in a let binding to the same name as a built-in function. It happily compiled but when that code actually ran in one particular use-case, it crashed due to the way the binding and the scoping worked in that case.


> accidentally naming a variable in a let binding to the same name as a built-in function.

Not even a problem in a Lisp-2 like ANSI CL!

  (let ((list '(1 2)))
    (list list list)) -> ((1 2) (1 2))
Facepalm. We could argue you were burned by the stupid Lisp-1 namespacing. Under the separate function and variable namespace of a Lisp-2, you would have to bind a local function in order to shadow a global one.

Even without a compiler, we can implement a warning for this. The code walker which expands macros is aware of lexical environments and can issue diagnostics when suspicious-looking shadowing is going on, or unbound variables are referenced and such.

Not only is this not the fault of the language being dynamic, but the problem could exist in a static language, like, oh, C:

   #define DECLARE_MY_PRINTF int (*printf)(const char *, ...) = my_printf;

   int my_printf(const char *, ...);  // the shadowing target

   void f(void)
   {
     DECLARE_MY_PRINTF  // no colon: the macro already ends in ';'
     printf("hello, %s\n", "world");  // goes to my_printf via shadowing local var
   }
ISO C does not require a diagnostic for this. Gcc has -Wshadow. ISTR -Wshadow is not turned on by -Wall or -Wextra; you have to use it explicitly.

Of course, the above depends on the local printf pointer actually having the right type so that the call is well-formed. If we just have "int printf" or whatever, the type system will catch it.


I guess it depends on what you value. My colleagues and I have found that Elixir is a massive pain when it comes to confusing, non-local run-time error messages. I also feel pretty out in the cold in terms of having a reliable type system that I can use for domain modelling... and yes, I've tried Dialyzer on multiple occasions.


Why are comparisons to C#/Java mentioned so many times, but there's not a single mention of F#?

Maybe with F# the author would see a decrease in development time compared to Haskell?


Or OCaml, which is kind of the original language F# was copied from.


If you were a developer with 10 yoe looking for something new to get into, what would you choose at this moment and thinking about the near future: Haskell, Scala or F#?


Near future: Scala.

There is more professional work being done in Scala, and it makes it more palatable to managers that it runs on the JVM and allows them to leverage some of the existing knowledge of their Java programmers.

In the financial world, however, F# use is on the rise.

I currently work in F# but not in finance, and the lack of mature libraries for common things (like JSON serialization!) makes it a major pain in the behind.

Were I told to select a functional language, "quick!", I would select Clojure when caffeinated, and Javascript (ES7) when not.

FWIW: I subscribe to the notion that dynamic typing with unit testing will get you a correct and working program faster than static typing. In the case of Clojure, you now have clojure.spec, which means you can write conformance criteria as functions and generate tests and test data.

http://games.greggman.com/game/dynamic-typing-static-typing/

https://news.ycombinator.com/item?id=10933524


Haskell forces you to think in a certain way. So if you want to learn about functional programming, then when you're searching for programming examples all those examples will be in a functional style. You can then apply those styles to Scala or F#. Personally I'd lean towards Scala (on the back of Spark) - there's the Coursera Scala Functional Programming course [1] which will give you a good start.

  [1]: https://www.coursera.org/specializations/scala


I've always liked OCaml and with Facebook putting some real momentum behind it and the OCaml/JS story getting better and better all the time I'd say now is a good time to pick it up. It's more pragmatic than Haskell, simpler than Scala, and not tied to the .Net ecosystem like F#.


It depends what you are looking for. If you are looking for work, then Scala beats the other two hands down. Then come F# and Haskell, somewhere at the end.

If you're looking for enlightenment, pick up Haskell, maybe OCaml and skip F#.


Well, F# beats both OCaml and Haskell in industrial support and tooling, especially if we take .NET libraries into account.


(For me Scala is the present - I've been doing it full-time for about 6 years now.)

Short answer: whichever you can get a job in. That's more important than the design differences between them, which are relatively small.

Long answer: I wouldn't use F# because, nice as it is, you're eventually going to want HKT. Between Scala and Haskell I'd pick based on your learning style - Scala lets you write familiar comfortable code and incrementally adopt functional features, Haskell forces you to dive straight in and do things the Right Way from the start.
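For anyone unfamiliar with the acronym: HKT (higher-kinded types) means abstracting over the type constructor itself, not just over concrete types. A minimal sketch of what that buys you in Haskell:

```haskell
-- doubleAll works for *any* shape f that has a Functor instance;
-- the variable f stands for a type constructor (kind * -> *),
-- which is the abstraction F#'s type system cannot express directly.
doubleAll :: Functor f => f Int -> f Int
doubleAll = fmap (* 2)

main :: IO ()
main = do
  print (doubleAll [1, 2, 3])   -- the list Functor
  print (doubleAll (Just 21))   -- the Maybe Functor
```

Without HKT you typically end up writing a separate version of `doubleAll` for each container type.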


I would recommend none-of-the-above.

Having gone down the Haskell rathole, it's clear to me that the advertising is better than the product. It's complex and difficult to learn, and the "fewer errors because of typing" mantra turns into "how in the world am I supposed to find my error?". In practice, I very rarely run into errors caused by type problems.

I know nothing about F#, but that's actually my criticism of it. The F# community is small. I occasionally hear about people playing with F#, but even that is rare. (Although it could be that I just don't know enough about F#, and its community is awesome.)

Scala has a reasonably sized community and is being used in production plenty. I personally don't see any benefit beyond the fact that some people like the syntax better than Java and can write less boilerplate (which, if you're not paying attention, could lead to higher resource consumption).

If I were looking for something new, I'd look into Go and Rust. Both have strong followings, a lot of active development to learn from, and strong job markets moving forward (IMO). Or alternatively reverse-expand back into C and start playing with embedded RTOSs - which has the benefit of both being very interesting and an area where senior people could make a significant impact.

... Now, if you really wanted one of the three - I would choose Haskell for learning, Scala for job market, F# if you are already a .net developer.


Thanks Steeeve, really informative answer.


I'd recommend Scala, even if you are not a JVM lover. The language itself is very un-Java-ish, but you can hook it up to Java libraries if needed, which is very handy.


As far as functional languages go, Erlang, Elixir and Scala would get you further career-wise, and if you want to play in the sorta-functional world, Swift is also a high-in-demand language that borrows a lot from Rust and Haskell.

However, Haskell and OCaml are the more classic functional, statically typed languages, and while Haskell has more job opportunities than OCaml, it probably has the steepest learning curve and highest time commitment of all these languages.


F# and Scala, given that we use both JVM and .NET stacks.


If you don't know Haskell or Lisp, I do recommend that you learn them. Now, I don't think they will open any immediate job stream for you (at least not now - Haskell is improving in that direction), but if you want to improve as a developer, you should know those two.


Many thanks for all these answers (I've upvoted all of them). Super useful. Really.


The take-away for Haskell tends to be that it works when there is some kind of a commitment at some "core" level to it (i.e. the long-term members of the tech team, CTO, etc.). If there is none and people move in and out of the company, then it's more practical to use a more mainstream language, which is easier and less far-out to learn.



