Friend of mine is always trying to convert me. Asked me to read this yesterday evening. This is my take on the article:
Most of my daily job goes into gluing services (API endpoints to databases or other services, some business logic in the middle). I don't need to see yet another exposition of how to do algorithmic tasks; haven't seen one of those since doing my BSc. Show me the tools available to write a daemon, an HTTP server, API endpoints, ORM-type things, and you will have provided me with tools to tackle what I do. I'll never write a binary tree, a search, or a linked list at work.
If you want to convince me, show me what I need to know to do what I do.
I wasn't really trying to convince anyone to use Haskell at their day job: I am just a college student, after all, so I would have no idea what I was talking about!
I wrote the article a while ago after being frustrated using a bunch of Go and Python at an internship. Often I really wanted simple algebraic data types and pattern-matching, but when I looked up why Go didn't have them I saw a lot of justifications that amounted to "functional features are too complex and we're making a simple language. Haskell is notoriously complex". In my opinion, the `res, err := fun(); if err != nil` (for example) pattern was much more complex than the alternative with pattern-matching. So I wanted to write an article demonstrating that, while Haskell has a lot of out-there stuff in it, there's a bunch of simple ideas which really shouldn't be missing from any modern general-purpose language.
As to why I used a binary tree as the example, I thought it was pretty self-contained, and I find skew heaps quite interesting.
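In case anyone's curious, the core of a skew heap is small enough to sketch. The article's code was Haskell; this is my own rough Rust translation of the idea, not code from the article:

```rust
#[derive(Debug)]
enum Heap {
    Leaf,
    Node(Box<Heap>, i32, Box<Heap>), // left child, value, right child
}

// Merging is the whole data structure: the root with the smaller value
// wins, its right subtree is merged with the other heap, and the two
// children are swapped.
fn merge(a: Heap, b: Heap) -> Heap {
    match (a, b) {
        (Heap::Leaf, h) | (h, Heap::Leaf) => h,
        (Heap::Node(l1, x, r1), Heap::Node(l2, y, r2)) => {
            if x <= y {
                Heap::Node(Box::new(merge(Heap::Node(l2, y, r2), *r1)), x, l1)
            } else {
                Heap::Node(Box::new(merge(Heap::Node(l1, x, r1), *r2)), y, l2)
            }
        }
    }
}

// Insert and delete-min both fall out of merge.
fn insert(h: Heap, x: i32) -> Heap {
    merge(h, Heap::Node(Box::new(Heap::Leaf), x, Box::new(Heap::Leaf)))
}

fn pop_min(h: Heap) -> Option<(i32, Heap)> {
    match h {
        Heap::Leaf => None,
        Heap::Node(l, x, r) => Some((x, merge(*l, *r))),
    }
}

fn main() {
    let mut h = Heap::Leaf;
    for &x in [3, 1, 4, 1, 5].iter() {
        h = insert(h, x);
    }
    let mut sorted = Vec::new();
    while let Some((x, rest)) = pop_min(h) {
        sorted.push(x);
        h = rest;
    }
    println!("{:?}", sorted); // prints [1, 1, 3, 4, 5]
}
```

Note how the entire structure is one ADT and one pattern match; that self-containedness is why I picked it for the article.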
> > functional features are too complex and we're making a simple language. Haskell is notoriously complex
This is a true statement. (Opinion yada objective yada experience yada)
> In my opinion, the `res, err := fun(); if err != nil` (for example) pattern was much more complex than the alternative with pattern-matching.
This is also a true statement. (yada yada)
The insight I think you're missing is this piece right here: `we're making a simple language`. Their goal is not necessarily to make simple application code. That's your job, and you start that process by selecting your tools.
For certain tasks, pattern matching is a godsend. I'm usually very happy to have it available to me when it is. And I do often curse not having it available in other languages to be honest.
But Go users typically have different criteria for what makes simple/reliable/maintainable/debuggable/"good" code than Haskell users have. Which is why the two languages are selected by different groups of people handling different tasks. You're making a tradeoff between the features and limitations of various languages.
And the language designers have yet another set of criteria for those things. In this case, adding pattern matching would absolutely make the language itself more complex, and they apparently don't believe that language complexity is worth the benefits of pattern matching. I think that's a perfectly reasonable stance to take.
I'm not sure if I understand you: the `res, err := fun(); if err != nil` pattern shows up everywhere in most Go code, and I think that pattern-matching would be a better fit for it. Swift does it pretty well, as does Rust, both of which occupy a similar space to Go.
I get that there's a tradeoff with including certain features, I suppose I disagree that the tradeoff is a negative one when it comes to things as simple as pattern-matching, and I think it should be included in languages like Go.
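To make the comparison concrete, here's roughly the contrast I have in mind, with the pattern-matching side sketched in Rust (`parse_port` is a made-up example function, not from any of the languages' standard libraries):

```rust
// A made-up fallible function, for illustration only.
fn parse_port(s: &str) -> Result<u16, String> {
    s.parse::<u16>().map_err(|e| e.to_string())
}

fn main() {
    // The Go idiom is roughly:
    //     port, err := parsePort("8080")
    //     if err != nil { /* handle */ }
    //     // both `port` and `err` stay in scope either way
    //
    // With an ADT and pattern matching, success and failure are one
    // expression, and the compiler checks that both cases are handled:
    match parse_port("8080") {
        Ok(port) => println!("listening on {}", port),
        Err(msg) => eprintln!("bad port: {}", msg),
    }
}
```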
I'm not arguing against pattern matching. Like I said, I prefer it where possible. I'm also not arguing in favor of multiple return with mandatory checked err values. (Though I prefer either over the collective insanity that went into making exceptions the default approach to handling errors in most languages.) I'm just pointing out that I think you're missing a key word in the stance of the go language developers.
They're not saying that `if err != nil` is better or worse, simpler or more complex, etc... than pattern matching for application code.
They're saying that supporting pattern matching makes the language itself more complex, and they're not in favor of that tradeoff. You're focusing on application complexity, and that's a very different thing.
Both the Go language authors and the kind of developers that choose to use Go think of the relative simplicity of the language itself as a feature, even if it causes the application code to be slightly more complex. It's just another dimension that can be used when comparing programming languages, and one that this group tends to value more than other groups do.
Oh OK, I understand. I have to say, though, I don't really buy the idea that Go is a simple language. A lot of Go's design choices read (to me) as needlessly complex, as if features were added to address corner cases one by one instead of implementing the fundamental feature in the first place: "multiple return values" instead of tuples; several weird variations on the C-style for-loop; special-cased nil values instead of `Maybe` or `Optional`; `interface {}` and special built-in types instead of generics; etc. ADTs and pattern-matching would obviate "multiple return values" and nils, and greatly simplify error handling.
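For what it's worth, here's what I mean about tuples and optionals being ordinary, first-class values rather than special cases (a sketch in Rust, since Go has neither):

```rust
use std::collections::HashMap;

// A tuple is an ordinary type: it can be returned, stored, or destructured,
// so no special "multiple return values" mechanism is needed.
fn div_mod(a: i32, b: i32) -> (i32, i32) {
    (a / b, a % b)
}

fn main() {
    let (q, r) = div_mod(17, 5);
    println!("{} {}", q, r); // prints 3 2

    let mut m = HashMap::new();
    m.insert("x", 1);
    // Absence is an Option, not a special-cased nil: the caller is forced
    // to consider both cases before using the value.
    match m.get("y") {
        Some(v) => println!("found {}", v),
        None => println!("missing"),
    }
}
```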
A very instructive exercise for anyone who is or intends to be a software developer is to write some sort of interpreter and/or compiler. (As well as a virtual machine and/or emulator) Depending on your approach this can take a weekend, a few months, or the rest of your life.
For instance, and amusingly enough written in golang, one of the most respected recent books on this topic is `Writing an Interpreter in Go` and its sequel `Writing a Compiler in Go`. https://interpreterbook.com/ and https://compilerbook.com/ Both of these books are reasonably short, and have the reader make meaningful progress within a weekend.
Going through the motions of actually making your own programming language (or reimplementing an existing one) teaches you a lot of things you wouldn't otherwise expect about how to write general code, how to use existing languages effectively, and how things work under the hood. It's also one of the best ways to really get a practical feel for how to approach unit testing.
It's an exercise I'd recommend if you haven't gone through it already. It might make it really click for you why some features that seem like a no brainer and should be in every language aren't, and why some undesirable "features" are so prevalent.
> It might make it really click for you why some features that seem like a no brainer and should be in every language aren't, and why some undesirable "features" are so prevalent.
I hate this kind of "I have secret knowledge, why don't you spend T amount of your time on some big project to maybe come to the same secret insights I mean". If you have an opinion on why pattern matching is so complex and undesirable, just come out and say it please. Otherwise I'll just call you out as not really having an argument.
> I have secret knowledge, why don't you spend T amount of your time on some big project to maybe come to the same secret insights I mean
Alternate interpretation: I learned something valuable from doing this thing; perhaps you'd be interested in doing so as well, since the book that took months or years to write will do a better job teaching it than I will in a five-minute break while typing on HN.
It's always impressive when freely sharing knowledge and tips is somehow taken as being insular and exclusive.
> If you have an opinion on why pattern matching is so complex and undesirable
Where did I say pattern matching is undesirable? It sounds more like you just want a fight here.
Remember the HN guidelines:
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
> It's always impressive when freely sharing knowledge and tips is somehow taken as being insular and exclusive.
But you didn't share knowledge. You suggested that you had knowledge that was pertinent to the topic at hand. But you didn't share it. You did share tips for resources where one can learn more, and that's great. But you didn't add something like "... and that's where I learned that pattern matching is undesirable because <technical reason>".
> Where did I say pattern matching is undesirable?
This whole thread was about you saying that pattern matching was undesirable from the point of view of Go's designers or implementors due to their design goal of simplicity. Then you mentioned those compiler resources. The only reasonable interpretation for me is that you wanted to say that you did indeed know concrete technical reasons why pattern matching in Go would be complicated and therefore undesirable.
> This whole thread was about you saying that pattern matching was undesirable from the point of view of Go's designers or implementors due to their design goal of simplicity.
The only use of "undesirable" in any of my comments was in regard to features that are prevalent across languages today. If you must know I was thinking of inheritance and exceptions specifically.
As far as pattern matching goes, I was making no arguments except to say that I like it, adding a feature like pattern matching adds some non-zero amount of complexity, and that the go authors are apparently uncomfortable with that complexity. As I am not a go author, I am unsure of their exact reasoning and would not think to say why they believe that. My implication was not that I have an exact concrete reason for why the go authors feel the way they do. It was merely that I don't inherently disbelieve them when they say they have a reason.
In fact my exact wording was "I think that's a perfectly reasonable stance to take", which does not imply agreement, only a lack of strong disagreement. In other words I don't think they're ignorant of the matter or misrepresenting the situation.
> But you didn't share knowledge. You suggested that you had knowledge that was pertinent to the topic at hand. But you didn't share it. You did share tips for resources where one can learn more, and that's great. But you didn't add something like "... and that's where I learned that pattern matching is undesirable because <technical reason>".
The comment that appears to have gotten you riled up came after the person I was talking to said they understood. After a discussion about language complexity, I thought it would be appropriate to suggest some resources on a "quick" project that can help build an intuition about that topic. And to be honest, it's a project I like to find excuses to suggest. I find people tend to be surprised at how easy and fun it can be to make some meaningful progress.
I understand that you would like for me to somehow short circuit that process, but I don't believe I am capable of building someone else's intuition by posting a throwaway comment on HN. Intuitions are typically built on experience and tinkering, not reading someone else's experiences.
That you view that project suggestion as a continued argument is unfortunate; I can assure you that was not my intent. Again referencing the HN guidelines, I encourage you in the future to try to read people's posts first with the assumption that they are being genuine, and only fall back to an assumption of malice when you absolutely have to. Long, drawn-out arguments over semantics don't help anyone.
> A very instructive exercise for anyone who is or intends to be a software developer is to write some sort of interpreter and/or compiler.
Another exercise, perhaps less demanding in this regard, is to explore using Free Monads[0] to implement an EDSL[1] for a problem domain. Of course, the approachability of this varies based on the person involved.
> For instance, and amusingly enough written in golang, one of the most respected recent books on this topic is `Writing an Interpreter in Go` and its sequel `Writing a Compiler in Go`.
Yeah I'm definitely not saying anything bad about the Dragon book here.
But I know there's a recency bias when people are evaluating tech books, so if there's a good book from the last five years I'll recommend that over a great book from the last 15, just so there's a higher chance of the recommendation actually being used.
If anyone is curious about an updated resource, I've found `Modern Compiler Design` much more approachable than the Dragon Book. Published in 2012, it includes chapters on designing compilers for object-oriented, functional, and logic languages.
I remember having some bugs in Python due to one-element tuples; I don't think I would have had the same issue if Python had multiple return values instead.
You keep missing the point entirely. Go was created to solve a very specific Google scenario: offer a valid alternative to C++ and Java for whatever they do at Google. It's not a language created to make college students or language hippies happy; if you are looking for that, look somewhere else. Go can be picked up by any dev with minimal experience in C/C++/Java in 1-2 weeks, and that was one of the main design targets. Another was fast compile times; adding all those nice features you'd like would also make the language more complex to parse and compile. I think you can talk about how much you like Haskell all day long, but if you keep using Go as a comparison you simply show you have no clue what you are talking about. It's literally apples to oranges.
Maybe I am missing the point! It certainly wouldn't be the first time in an argument about programming languages.
I do understand, though, that the purpose of Go is not necessarily to push the boundaries of language design. I also understand that it's important the language is easy to pick up, compiles quickly, etc.
I think that some of Go's design decisions are bad, even with those stated goals in mind. Again, I don't want to overstate my experience or knowledge of language design (although I do know a little about Google's attitude towards Go, since that's where I spent my internship learning it), but some features (like "multiple return values" instead of tuples) seem to me to be just bad. Tuples are more familiar to a broader range of programmers, aren't a strange special case, are extremely useful, and have a straightforward implementation. Also, I don't want a bunch of fancy features added to Go: ideally several features would be removed, in favour of simpler, more straightforward ones.
I do agree, I would prefer tuples to multiple return in go.
Perhaps they find it easier to teach to users coming from languages with little or no type inference? Java and C++ programmers in my experience don't tend to be familiar with tuples, despite there being a tuple type in the C++ stdlib. My purely uninformed guess is that it's because of how verbose the declarations can get in Java, or in C++ without auto/decltype from C++11.
My best advice is: do not try to learn functional programming via Haskell. It has done more to turn people off of functional programming than just about anything.
If you want to learn statically typed functional programming learn Elm (which takes only a few days), then one of F# or OCaml.
If you want to learn dynamically typed Functional Programming, learn Clojure or Racket/Scheme.
The amount of investment it takes to see a return on learning Haskell makes it a terrible introductory language, and every proponent of it glosses over this part. It has some advanced concepts, but it's not an introductory language.
There is so much benefit to your coding to be had from learning FP that you should pick a language that lets you see and judge that value proposition on your own quickly, not one that requires you to invest so much time becoming Haskell-proficient before you get the return on your learning.
Or go all in and actually expose yourself to an entirely different programming paradigm, there is so much more to "FP" that you can only find in Haskell and beyond.
> It has done more to turn people off of functional programming than just about anything.
The only thing worse in my book is FP evangelists. FP has some cool ideas and is an interestingly different way to do things compared with imperative programming, but enough exposure to the “FP is obviously the best paradigm and anybody who isn’t a fanatic is clearly just unenlightened” crowd will sour anybody on it.
That's because you already knew Haskell and knew what you were missing out on. Elm's less expressive type system makes it a much better introduction to concepts like ADTs and pattern matching, higher-order functions, and enforced immutability, because there's less to trip you up.
a) useful utilities for actually writing applications
b) decent documentation. Not just pages of type signatures, but demonstrations of the libraries' usage.
Everything you need is here: JSON, config, units of measure, streams, testing, validation, a really nice JDBC wrapper (https://tpolecat.github.io/doobie/)
Throw in https://http4s.org/ and you've got yourself a rock-solid, purely functional stack with sensible, documented APIs, a more readable syntax and better tooling support.
I urge anyone who learnt some Haskell, thought "man this shit sucks, I'm never going to write something useful in this" to at least give FP Scala a chance. Here's a useful service template to start hacking with: https://github.com/http4s/http4s.g8.
I feel conflicted about this, having programmed in both OO-heavy and pure FP Scala. On the one hand, sure, if you want to write pure FP in Scala, some of the tooling is better than Haskell's. Most notably, the IDE situation with IntelliJ's Scala plugin has made leaps and bounds of progress in the last few years and mostly handles pure FP Scala code just fine. And having access to the JVM ecosystem is an absolute godsend and a huge boost to productivity. This is true even outside of library dependencies when coding. If you try deploying to production with Haskell, there's often a large gap in production tooling (monitoring, diagnostics, GC tuning, etc.) when compared to the JVM.
On the other hand Scala has its own share of infelicities when it comes to pure FP. I've mentioned this elsewhere, but the core language of Scala, that is the language that is left after you desugar everything, is OO and the FP part is really just a lot of syntax sugar mixed in with standard library data structure choices. That means if you're coming from a pure FP background a lot of things will seem like hacks (methods vs functions/automatic-eta-expansion, comparatively weak type inference, the final yield of a for-comprehension, subtyping to influence prioritization of implicits for typeclasses, monomorphic-only values vs. polymorphic classes, etc.). Treating Scala as an OO language side steps a lot of these warts.
And then there's the social factors; the Scala community is split on how much it embraces pure FP and pure FP is a (significant) minority within the community. This carries over to the library ecosystem where things are basically split into Typelevel/Typelevel-using and non-Typelevel libraries. Many workplaces have a fear (well-placed and not so well-placed) of the Typelevel ecosystem. Years ago Scalaz was something akin to the bogeyman in some places. Cats has a bit of a softer image, but still comes off as an "advanced" library in the Scala ecosystem.
Most of the social weight I feel is behind the non-pure-FP parts of Scala. Sure some of the libraries in pure FP Scala have good documentation (but I don't actually think the situation here is far better than Haskell's once you leave the core Typelevel libraries). The ones with excellent documentation though live outside the land of pure FP (Akka, Play, Spark, Slick, etc.).
You know, I really agree with a lot of this. It's not an unreasonable argument that OO-heavy or Akka-ish Scala might be a better investment for a lot of people, considering the weight and momentum of the communities maintaining those libraries and the expressiveness of the language in those domains.
But if we're only talking about pure FP, or at least something close to it, I think you're still getting a better deal than with Haskell, even in spite of all those quite legitimate downsides you mentioned. My own personal biases mean that I will always prefer pure FP to anything else (I personally didn't love Akka and Play when I used them briefly), but that's an argument for another day.
Perhaps much of what you describe can be attributed to Scala being a multi-paradigm language. As with other programming languages of this nature, supporting multiple paradigms can be both a strength and a weakness, depending on those who use it and their expectations.
Whether this is right/correct or wrong/incorrect is left as an exercise for the reader ;-).
It's not really "equally" multi-paradigm though. All of Scala's FP parts can be desugared into OO. The reverse is not true. This has persisted in Dotty, where proposals to e.g. make typeclasses an atomic entity have been superseded by proposals to continue to encode typeclasses with separate mechanisms. That's the annoying part when doing pure FP in Scala. You can sometimes feel like you're fighting against the grain in a way that isn't true when on the OO side.
> It's not really "equally" multi-paradigm though.
True. Most multi-paradigm languages are not equally strong in each paradigm they support.
> All of Scala's FP parts can be desugared into OO. The reverse is not true.
This is a bit of a strawman argument, as any functional programming environment can be implemented by, or "desugared into", an object system. Contrast this with the fact that mutable OOP systems cannot guarantee Referential Transparency[0] and your second assertion is proven.
However, this simply proves that Scala supports more than one programming style. Whether a given code base employs FP, OOP, imperative programming, or some mixture therein, is a decision left for the system authors and not the language. It is left as an exercise for the reader to determine if that is good or bad.
> This has persisted in Dotty ...
AFAIK, Dotty is intended to be a new experimental language. I do not follow its development nor progress.
> That's the annoying part when doing pure FP in Scala. You can sometimes feel like you're fighting against the grain in a way that isn't true when on the OO side.
I respect what you identify but do not agree with your annoyance. But that's just me.
I don't think it's a strawman. Every language that has syntax sugar gets split by the community into the "core" language and the "sugar" on top. This is a very different comment than saying every programming paradigm can be implemented in terms of another. There are examples of languages where OO is a syntax sugar layer and FP is the core abstraction that OO desugars into. There are not many since for a variety of reasons people do not like doing this, but there's some.
A more fleshed out version is O'Haskell (which unfortunately died out in the early 2000s).
Mutability can be built (or more accurately faked) on top of referential transparency as well through syntax sugar in a very similar fashion to how Scala builds FP on top of OO. Indeed this was the original impetus behind do-notation in Haskell, but it stopped short of trying to make do-notation look like ordinary Haskell code. If you had syntax sugar that elided the difference between do notation and ordinary equality and unified IO with normal types then you'd have mutability in your language implemented through syntax sugar on top of an immutable core language (you could call it automatic IO expansion to be cheeky that automatically inserts a call to pure in front of any non-IO code used in an IO context). In fact I could see a rather reasonable case to be made for such a construct.
Scala similarly "fakes" (not necessarily a bad thing!) a lot of stuff. This is how automatic eta expansion and special function instantiation syntax (the arrow as opposed to new Function1(...)) elide the difference between methods and functions in Scala and let the language pretend e.g. that methods are first-class entities that you can pass to another method (which is not true, they must be wrapped in a class first just like Java) and let you pretend that methods have the same type signatures as functions (when in fact methods have a special method type that can be polymorphic whereas functions are always monomorphic. In fact you cannot write the true type of a method in a first-class way in Scala; it is a special type that exists outside of the normal type hierarchy that is referred to as a "method type" in the Scala spec). This is leaving aside the encodings of typeclasses, ADTs, fully polymorphic functions (FunctionK in cats), etc.
In all these examples it is the FP concept that is "faked" (higher-order methods and polymorphic functions respectively in the two examples) and the OO concept (method taking an ordinary instance of a class and generics in methods) which is fundamental.
Dotty is explicitly blessed as Scala 3. I would highly recommend keeping high-level tabs on it if you're a Scala programmer. You don't need to know the specific details of it, but note that Scala 2.14 will be built specifically with Dotty in mind. It is the future of Scala (https://www.scala-lang.org/blog/2018/04/19/scala-3.html). And it comes with a lot of goodies! I'm really excited about it. More importantly for this discussion the current encoding of typeclasses in Scala 2 still desugars to implicits + OO classes instead of the other way around (which is perfectly possible where typeclasses are the core abstraction and implicits and OO classes are built using typeclasses).
Another great resource for doing FP in Scala is "Functional Programming in Scala"[0]. It's a very well written book and goes far in introducing key FP concepts IMHO.
So, I love me some ML-style languages, including Haskell, but I've also come to think that Rich Hickey is right about the real problems of business programming not being well solved by digging in on things like static typing.
For example, pattern matching against static types is cool, but pattern matching directly against data, Clojure-style, is even cooler. One makes the code a bit more concise and readable, but not necessarily a whole lot more maintainable. The other takes one of the more annoying and error-prone portions of my (say) Java code and renders it far more manageable.
I don't know how you can claim that static typing & pattern matching don't help with maintainability. In languages with exhaustiveness checking and good pattern matching you can often make a change in one spot and literally just follow the compiler errors to implement a new feature. They help massively in refactoring, and that's important for maintainability.
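A small illustration of what "follow the compiler errors" means in practice (a Rust sketch; `Payment` is an invented example type):

```rust
// An invented example: adding a variant here (say, `Voucher`) turns every
// match below that doesn't handle it into a compile error, so the compiler
// itself walks you through the refactor.
#[derive(Debug)]
enum Payment {
    Card { last4: u16 },
    Cash,
}

fn describe(p: &Payment) -> String {
    // This match is checked for exhaustiveness at compile time.
    match p {
        Payment::Card { last4 } => format!("card ending {:04}", last4),
        Payment::Cash => "cash".to_string(),
    }
}

fn main() {
    println!("{}", describe(&Payment::Card { last4: 1234 }));
    println!("{}", describe(&Payment::Cash));
}
```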
I find they don't help with maintainability in most of my business applications because they're solving the wrong problem. It's sort of like when someone spends a lot of time carefully optimizing a piece of code that has nothing to do with the application's actual performance bottlenecks, just because that's the part that's more fun to optimize.
The part of the business applications I work on that's a problem is dealing with the outside world. The data is messy. It's inconsistent. The protocols I'm using to communicate are invariably something horrendously loosey-goosey like JSON or XML. Stuff like that. And so, an inordinate amount of time I spend doing business applications in static languages ends up being spent on taking the messy, messy outside world and trying to create a clean, well-typed, rigorous façade for it so that I can operate on it inside my blissfully statically typed fantasy world. And all that static typing never seems to save me in practice, because the software quality problems I run into almost never crop up in the bits that I can operate on in a Haskell-friendly way. It's invariably in some mismatch between the outside world and my domain model that I failed to deal with accurately, which means that it's in my mapping code. Worse, oftentimes it's because of my mapping code, because my statically typed domain model ends up accidentally placing requirements on the input that aren't strictly needed by the business logic; I just unwittingly introduced them in the course of my efforts to get the types to line up.
In practice, this means you would rather nils permeate throughout your system rather than being caught at a system boundary, i.e., where you parse and validate that loosey-goosey outside world data.
In most of the systems I work on, null permeates both the input and the output, and can often even have its own semantic value. I.e., "there is a value for this key, and that value is null" might actually be semantically distinct from "there is no value for this key". It's gross, but it happens, and whether or not it happens is often outside my control.
And I am beginning to suspect that it's actually safer, and even simpler, to fully live in and be fully aware of the reality of the business domain I'm working in, than it is to try and live in a bubble and pretend that life isn't complicated. And I'm also beginning to think that it might be wise to mind my own business. . . which includes not worrying about whether or not a value is null unless and until I find that I need to care whether or not it is null. Rejecting input because a value wasn't set when I had no intention of even looking at it is just such a grave violation of Postel's law. If I find that I'm only doing it because I need to satisfy the type checker. . . seems like a foolish consistency to me.
Perhaps if I could live in an alternate reality where things like JSON and MongoDB hadn't happened, and we instead decided that clean and consistent data is every bit as important when sitting on magnetic disks or traveling through fiber optic cables as it is when bouncing around in silicon. Oh, that would be wonderful. I dream to live in that world. But that doesn't seem to be the reality I occupy.
> [null] can often even have its own semantic value. i.e., "there is a value for this key, and that value is null" might actually be semantically distinct from "there is no value for this key"
That happens. Surely, if you know about this in advance, you can use a type along the lines of
`data Field = Field Int | Null | Empty`
And if you don’t have this kind of knowledge, well... That’s just a problem waiting for the right time to surface, whether you are using Haskell, Clojure or whatever else.
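The same three-way distinction, sketched in Rust terms (variant names invented for illustration):

```rust
// Three explicit cases: a value, a present-but-null value, and a missing key.
#[derive(Debug, PartialEq)]
enum Field {
    Value(i64),
    Null,   // key present, value is null
    Absent, // key not present at all
}

fn describe(f: &Field) -> &'static str {
    // Exhaustive: forgetting one of the three cases is a compile error.
    match f {
        Field::Value(_) => "has a value",
        Field::Null => "explicitly null",
        Field::Absent => "missing",
    }
}

fn main() {
    println!("{}", describe(&Field::Null)); // prints "explicitly null"
}
```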
> And I am beginning to suspect that it's actually safer, and even simpler, to fully live in and be fully aware of the reality of the business domain I'm working in, than it is to try and live in a bubble and pretend that life isn't complicated.
I wouldn’t say Haskell forces you to live in the bubble. Haskell forces you to think, in advance, about the relations between fields and types, sure. It doesn’t force you to use only simple, bubble-y types, though; the types can be something general, or some abomination (like the one above). I’m not aware of a use case where I wouldn’t be able to say “this will always be something”.
The only major distinction is, I would say, the place in code where you deal with the types.
Clojure: In the functions all over the place (-), and for some fields, never (+).
Haskell: Always in the topmost layer of your app (+), but you have to deal with all of them (-).
That’s the basic tradeoff between those two languages. Which pros and cons are more important depends heavily on your use case.
From the example you gave, it seems like we're talking past each other.
The systems I write also have to deal with JSON with nullable fields, and with fields I ignore while parsing. Aeson for instance gives you complete control over how strict or lenient you wish to be when trying to parse data.
The idea I was trying to convey was that if you care about marshalling data into types a la Haskell, then you can code less defensively when writing code for the data you actually care about. You do that defensive validation in just one place, as opposed to sprinkling nil checks all throughout your system.
If I'm understanding your system correctly, it sounds like you have unreliable input, and you want the same output, only with some of the fields updated if they're there. Haskell readily lets you do this too. The lens-aeson library is perfect for this.
Also, I am not sure how you differentiate between a null value and no value, but whatever mechanism you're using you could also use to model those two different types of null as actual types in Haskell.
> Oh, that would be wonderful. I dream to live in that world. But that doesn't seem to be the reality I occupy.
It's hard[er] to discern tone through the medium of text, but it sounds like you're suggesting Haskellers live in some fantasy world where all the data is perfect and everything is pure.
I don't really understand that perspective. I make 100% of my income from writing online business software in Haskell. I employ other programmers to write Haskell for me too. We live in the same world you live in, but we might just approach it differently.
> And I am beginning to suspect that it's actually safer, and even simpler, to fully live in and be fully aware of the reality of the business domain I'm working in, than it is to try and live in a bubble and pretend that life isn't complicated.
I agree.
Of course, if you can't describe the values in a business domain in types, I'd argue you aren't fully aware of its reality. And when you've done that modelling, you aren't only aware of the reality: you've encoded it so that the knowledge is preserved not only for operational use in the program, but also for anyone who reads the program.
I see roughly the type of issue you face. For cases where there's a good value/effort ratio for inventing internal representations, Haskell's strength at parsing tasks makes it a good fit. When that ratio is less clear, I've found lenses, albeit a bit foreign to typical Haskell code, to be a really good addition for extract/read tasks (for me it's akin to a new language, like `jq` for arbitrary in-memory structures).
Anecdotal evidence: I recently had to turbo-build a tool to generate statistics over five-digit numbers of very complex business objects (including dates, strings, IDs, booleans encoded as arbitrary strings, ill-named columns) scattered across a number of systems (some web APIs plus some CSV extracts from you-don't-really-know-where). Using "raw" structures like Aeson's JSON AST with lenses was more than good enough; lenses essentially solved the "dynamic/cumbersome shape" problem. Then I had to produce a CSV extract with some 50 boolean fields, and reaching for intermediate techniques like type-level symbols let me very cheaply ensure I wasn't mixing up two lines when adding a new column. I could even reach for Haxl from Facebook to write a querying layer over my many heterogeneous sources that automatically makes fetches concurrent and cached.
The main difficulty in this setup is to keep the RAM usage under control because of two things. On the one hand, AST representations are costly. On the other hand, concurrency and caching means more work done in the same memory space.
Overall, I got the data on time at relatively low effort (really low compared to the previous attempt, to the point that some people with mileage at my company thought it would be impossible to build in time). Pretty good experience; would recommend to a friend.
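The lens style from the anecdote above can be sketched in a few lines: drilling into untyped JSON with lens-aeson instead of defining full record types first. The JSON shape and helper names here are made up for illustration.

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Control.Lens (_Just, (^..), (^?))
import Data.Aeson (Value, decode)
import Data.Aeson.Lens (key, nth, values, _String)
import qualified Data.ByteString.Lazy.Char8 as BL
import qualified Data.Text as T

sampleDoc :: Maybe Value
sampleDoc = decode (BL.pack "{\"rows\":[{\"id\":\"a1\",\"active\":\"true\"},{\"id\":\"a2\"}]}")

-- The first row's id, if the document happens to have that shape:
firstId :: Maybe Value -> Maybe T.Text
firstId doc = doc ^? _Just . key "rows" . nth 0 . key "id" . _String

-- Every "active" flag, silently tolerating rows that lack one:
activeFlags :: Maybe Value -> [T.Text]
activeFlags doc = doc ^.. _Just . key "rows" . values . key "active" . _String
```

Nothing here commits to a schema up front, which is the `jq`-like quality: queries either match or quietly yield nothing, rather than forcing a full parse of every field.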
Rich is a smart person, but I think he has some gaps in his knowledge of type theory and in his experience working with Haskell.
In one of his recent talks he made the claim that `Either a b` is not associative, which... well, it is, since it's provably isomorphic to logical disjunction, which _is_ associative.
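For what it's worth, the isomorphism is short enough to write out as a base-only sketch (function names are mine):

```haskell
-- The two nestings of Either carry exactly the same information,
-- so Either is associative up to isomorphism.
assoc :: Either a (Either b c) -> Either (Either a b) c
assoc (Left a)          = Left (Left a)
assoc (Right (Left b))  = Left (Right b)
assoc (Right (Right c)) = Right c

unassoc :: Either (Either a b) c -> Either a (Either b c)
unassoc (Left (Left a))  = Left a
unassoc (Left (Right b)) = Right (Left b)
unassoc (Right c)        = Right (Right c)
```

Each function is total and they invert one another, which is all the isomorphism requires.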
I thought what he might be looking for are _variant_ types which are possible to implement in Haskell but are a bit complicated for reasons. There are libraries for it or you can try languages like Purescript or dependently typed languages like Idris, Agda, or Lean.
Regardless, I don't find his particular brand of vitriol appealing. If he doesn't really have a lot of experience working with Haskell-like type systems, why does he feel the need to have an opinion about them?
To be fair I used to have a lot of the opinions pointed out in the article and reflected in many of the comments here. An old blog post of mine [0] muses on the utility of static types. I was seriously into Common Lisp at the time.
The problem with past me then was that I hadn't taken the time to learn and understand Haskell to form an opinion. I had learned Common Lisp out of frustration, to win an argument that it was an old, crusty language that nobody used or needed anymore... and lost. I hadn't done the same yet for Haskell and would join the chorus of people repeating things like, "Haskell is an academic language but is not pragmatic for real-world use." It's embarrassing looking back on it.
I've learned enough Haskell in recent years to ship a feature into a production environment and teach a small group of people to hack on it. It's pretty great and I much prefer working with it than I do with weakly-typed or dynamically typed languages. The amount of work I can do with the amount of effort I have to put in has a great ratio in Haskell. The initial learning curve to get there is hard but it's worth it in the end.
What's the difference between 'business programming' and other types of programming? I don't really know what distinction people are trying to make here.
Dealing with the messy real world with exceptions to rules and evolving shape of data vs writing compilers and other internally consistent closed systems.
Sure, but the idea is that the idiomatic way of working in the language accommodates passing around data that is not necessarily closed in shape. I.e., intermediate functions will by default also pass along attributes that they have no knowledge of, for example. And checking data-shape conformance is customizable (via the "spec" system).
Ok, let's not do the runtime vs compile checks thing here. I was just pointing out there are options that solve similar problems. There are other ways to deal with sub-functions not needing access to the whole structure as well. But let's not expect Haskell and clojure to have exactly the same features.
If you want to use clojure, then go for it. Use what you want to use.
I think that, at least insofar as I understand the problem I was trying to speak to, it's so deeply entangled with the runtime vs compile checks thing that it's impossible to have a coherent discussion without dealing with the subject head-on.
Here's where I come down on it:
There are some kinds of projects where you can cut off most potential problems at the pass with compile time checks. In those cases, yes, you absolutely want to statically render as many errors as possible impossible. Compilers come to mind as a shining example here.
There are others where the nastiest bits invariably happen at run time, though. And, for a significant number of those, the grottiest bits fall under the general category of "type checking" - not checking types in the structure of the code itself, per se, but checking types in the actual data you're being fed. And, since you don't get fed data until run time, that means all that type checking has to be done at run time. There's no sooner time at which it's possible. There's some tipping point where that becomes such a large portion of your data integrity concerns that it's just easier to delay all your type checking until run time, so that you are dealing with these things in a single, clear, consistent way. If you try to handle it in two places, there's always going to be a crack between them for things to slip through.
I am sorry that Haskell and clojure people have to fight. You don't see me telling Clojure folks when and where to use the tools they enjoy working with.
I think Haskell is an excellent language for servers and APIs. It really excels as a backend language. So, I'm sorry you think Haskell is only good for compilers, but I think the range of use cases it's good at is much broader than "Compilers".
Haskell is best thought of as a better Java. I wouldn't select it for every problem, but server APIs and backend work are a really good fit.
Also I think Clojure is great. We can both co-exist in this world though. It is possible.
It's unfortunate the OP picked on python - it's not the style of post that I would write.
I am sorry that you somehow think this has become a Haskell vs Clojure fight. Me, I actually use Haskell a lot more than I use Clojure, and generally think it's a great language with a lot to offer.
But I also believe another very important thing: There is no silver bullet.
Because I believe that, I am able to recognize that even the things I love have some limitations. And I don't believe that this should be a fight, and that is why I think I should be able to articulate what I have found to be the limitations of a tool, and acknowledge that some other tool that other people like might have something to offer in this area. Without being perceived as a hater for doing so.
Others may see it differently but I see it as the kind of programming where:
* Bugs are typically caused by misunderstandings of requirements or something odd about the interactions between different systems, and rarely about internal logic.
* Where quick is often better than proven correct.
* Where requirements are in constant flux and where a lot of code is tossed out because it was the result of a failed experiment or an unwanted feature.
Accidental vs essential complexity is a similar concept:
> Brooks distinguishes between two different types of complexity: accidental complexity and essential complexity. Accidental complexity relates to problems which engineers create and can fix; for example, the details of writing and optimizing assembly code or the delays caused by batch processing. Essential complexity is caused by the problem to be solved, and nothing can remove it; if users want a program to do 30 different things, then those 30 things are essential and the program must do those 30 different things.
The difference between solving a problem, and solving a problem for someone else for money.
If I'm solving a problem for myself, if it breaks in 'prod' then "Ooops", I try and avoid that, if I'm expecting other people to use it I will document public interfaces and write unit tests, but my focus is on scratching my own itch, not getting paid.
If I'm writing code that thousands of people's livelihoods, or millions of users' buying decisions, are being made on, the stakes are higher. I might decide to use a more rigid language like Java, because the chance that I'm going to be given the freedom to replace rather than repair classes is slim. Similarly, if I persuade a client that microservices are the way forward, I'm going to spend significant time making sure we have a monorepo so each service has the same timeline, an automatic deployment pipeline, and I'm going to want to be able to defend my technical choices with economically sound data... and that's where Hickey kicks in. Many of the economically sound data sets are actually vapour.
As someone who takes sublime pleasure in writing types around a domain, Rich Hickey's rightness on this issue makes me profoundly sad.
Here's an article I read a while back (from the same author it turns out!) which nearly converted me to the dynamic-types camp: https://lispcast.com/clojure-and-types/
For me, that article and the lecture it talks about planted the seed, and then a year or two of doing data engineering type work watered and fertilized it.
That observation on `Maybe` really hit home in a big way. Not at first. Eventually. I used to think that banning null and using `Maybe` instead was the best idea ever. I still love the basic idea, and wish I could always work that way... but nowadays I'm so frequently working in the limit case, where everything is either optional, or used to be optional, or will be optional in the future, or is officially required but somebody didn't get the memo. And it's like in some Zen parable where the student keeps getting hit with a stick until they're enlightened. Bruised, bloodied, and enlightened. You either have it or you don't.
For me it was the part about typing JSON. I work at a company with a Common Lisp back-end and Let Me Tell You. Trying to enforce the JSON structure it generates using any kind of front-end JS types, so that nil-punning doesn't crash the UI every other day, is an exercise in attrition. Unfortunately JS is not Clojure and so can't as elegantly digest inconsistent data, but I certainly have learned to appreciate the appeal of that philosophy.
> pattern matching against static types is cool, but, compared to pattern matching directly against data, Clojure-style, is even cooler
Maybe it's cool, but I think static types can be "cool" too. When you take the fashion statement out of the question, what are you left with? One is polymorphism at runtime and the other is at compile time. I'll take the compile time one.
The tools are available; they can make things like CLI apps and microservices trivial. However, if you have never used an ML-family language before, you will have a steep learning curve, as it is very different from C-style languages.
I was once of the opinion that Haskell is academic: what can you use it for in the real world? Then I studied with it, played with it (admittedly on and off) over 1-2 years, and hit hurdles where I had to think, as it's so different from what I'd learnt before. Eventually it clicked, and it's very hard and frustrating now in my day job using typical enterprise or popular languages. It's not about convincing; it's about having an open mind and wanting to learn something different.
"Packages exist for doing this" isn't the same thing as "this is a good ecosystem for doing this kind of work" though.
I'm fairly convinced that Haskell is good at preventing the kinds of bugs which you might run into writing, say, a parser or other kinds of very complex logical code, but I'm less convinced that the nature of the language helps with the kinds of issues you get hooking together APIs, databases, etc.
Each of these things is well-explained by the documentation of the respective libraries. I'm not sure why you feel like you need someone to write you a long-form story in order to learn how to do these things; convince yourself of the merits that others already see.
I disagree. Look at the difference in documentation for Haskell's Amazonka versus Clojure's Amazonica. There are no simple code examples to get you going. It took me forever as a Haskell newbie to get DynamoDB integration working. In Clojure I just copy an example and play with it.
The situation seems to have improved a bit since last I looked, but I still think it needs a basic howto about how to do stuff. I know you can figure most things out by looking at the types, but that's exactly where newbies lack experience, and you have to search quite a bit for the information here. But thanks for the link. It's definitely better than it was.
Yes, there is definitely no single through-line from "i know nothing" to "I can now program a microcontroller in Haskell" or whatever. It's a language which grew out of academia and still has a whiff of self-learning about it.
Those look really promising. My boss is big on TDD and open to developing cutting edge technologies. I'm going to have a look and see if I can propose a project using these tools.
FP Complete is a consulting company. They promote Haskell, and they boast about getting hired to help a company analyze their code base to fix a space leak. I wouldn't trust my business to a language that requires hiring a consulting firm to do fundamental debugging. I've written memory leaks, but never needed to hire consultants to debug them.
Consulting is rarely about problems that are too hard for a good developer to fix. It's often about problems that the customer doesn't want to put (or hire) good developers on.
Haskell is lazy by default, and sometimes builds up large unevaluated expression trees that need to be forced. Other languages are eager, but litter the code with abstractions like Callables and Futures and channels just to not do something.
Of course. It uses the Text [0] and Scientific [1] packages under the hood. The internal ADT representing JSON is actually very simple (as JSON itself is very simple):
data Value
= Object (HashMap Text Value)
| Array (Vector Value)
| String Text
| Number Scientific
| Bool Bool
| Null
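Because `Value` is an ordinary sum type, consuming it is just pattern matching; a toy function of my own to illustrate:

```haskell
import Data.Aeson (Value (..))

-- Exhaustive handling of every JSON shape in one place; with
-- -Wincomplete-patterns the compiler warns if a case is forgotten.
describe :: Value -> String
describe (Object _) = "an object"
describe (Array xs) = "an array of " ++ show (length xs) ++ " elements"
describe (String s) = "the string " ++ show s
describe (Number _) = "a number"
describe (Bool _)   = "a boolean"
describe Null       = "null"
```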
Sum types are infinitely useful in the regular, non-algorithmic code you find every day. Look at something like Redux and Redux actions: those are essentially sum types and would benefit greatly from a pattern-matching syntax. The benefits of this stuff show up literally everywhere.
Until you’ve done Windows programming for a while, you may think that Win32 is just a library, like any other library, you’ll read the book and learn it and call it when you need to. You might think that basic programming, say, your expert C++ skills, are the 90% and all the APIs are the 10% fluff you can catch up on in a few weeks. To these people I humbly suggest: times have changed. The ratio has reversed.
Read the whole article, it's pretty amazing. Parts of it are just as relevant to mobile development today as to desktop development back then.
You might be interested in checking out https://typeclasses.com/. You have to subscribe, but they have some free content, including this nice section: https://typeclasses.com/phrasebook, which gets right into printing to the console, and working with state, multi-threading and mutation.
When I first studied compilers, I was amazed that writing a compiler used every other subfield of computer science I'd studied. It's the acid test of language design. A language that can easily be used to write compilers can do almost anything well.
And every non-trivial program I've worked on is 90% of a compiler. (You could describe compilers as just "some business logic in the middle", too, if you were in Architecture Astronaut mode.) You don't think HTTP servers use "pattern matching"? You don't think API endpoints would benefit from "ridiculously easy" testing? This is your bread and butter.
This article is showing how to implement if-statements and null-checks in Haskell, and reduce your code size by half. I bet you have some of those in your software. I'm not sure how this could be much more relevant, without reducing its usefulness by being overly specific.
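As a base-only sketch of what that folding-away of null checks looks like (the function is mine, not from the article): each step can fail, and `Maybe`'s `Monad` instance threads the checks, so there is no explicit error branch after every call.

```haskell
import Text.Read (readMaybe)

-- Parse two numbers and divide them; bad input and division by
-- zero both short-circuit to Nothing, handled in one place.
safeDiv :: String -> String -> Maybe Int
safeDiv xs ys = do
  x <- readMaybe xs
  y <- readMaybe ys
  if y == 0 then Nothing else Just (x `div` y)
```

Compare that with writing an `if err != nil`-style guard after each of the two parses and before the division.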
You are conflating computer science (compiler writing) with software application development.
In the real world, software has a CS "core", plus 95% boring data copying and gluing APIs together, where availability of libs and tools is far more important than theoretical correctness properties and the most general abstraction possible. This is why Haskell looks so nice in blog posts but terrible in a production system.
In Haskell, updating fields in records is still an active area of research.
Doesn't the presence of "general abstraction" allow us to write less of the "boring" parts? I thought that's the whole reason for it.
Isn't "correctness" a useful property, even for a web service? I think that being able to eliminate entire categories of bugs is terrific, in any situation.
Sure, it's always nicer to have good libs/tools than to have to write your own (though that distinction is much less important when your language has great abstraction capabilities). Are there any libs/tools you're missing in Haskell? The way your comment is phrased makes it sound like good old fashioned FUD.
I'm not sure what you mean by the last sentence. It seems to still be an active area of research all over the place. Look at Swift/Rust/Go/Java, or Postgres/Mongo/Datomic, or ext4/zfs/btrfs/ntfs. Everybody updates records in very different ways. It's not like all Algol-derived languages have the same data model.
> In Haskell, updating fields in records is still an active area of research.
In Haskell, everything is still an active area of research. Mutable data already exists in Haskell and has sensible semantics.
But the goal of Haskell (so far as I can see, at least) is good, reliable, maintainable code which produces good, reliable, maintainable programs. If this is not what you look for in a programming language then it might not be for you.
I wrote a library for that. Hooray you can version your types and seamlessly upgrade them and the compiler will never let you cross streams by accident.
It's all there to be used, it's just unfortunately Haskell proponents seem not to talk about it as much. Most of the discussion is around "interesting" stuff.
My company does everything in Haskell, but it's almost all just boring plumbing code. APIs, JSON, databases, HTTP stuff, HTML templating, etc. It works great.
Exactly this. I've been meaning to write a blog post about what my team does with Haskell for the last 3 years, and while I think there are some gems of information we could provide to the world, at the end of the day it's incredibly boring stuff that works well for us and isn't really worth mentioning because it's not that different from what everyone else is doing with their own favorite language and tooling.
No system requires monad transformers or extensible effects. In some cases they become useful, particularly when dealing with unusual computational contexts; but most of the time you can use IO, and sacrifice the sharpest edge of type-safety for an easier job of plumbing.
Yes, but as I was saying logging, errors and so on can all be handled directly inside of IO. Given that you can't do any of these things in a pure function anyway, the only loss of dropping into the wider context of IO is type safety.
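For instance, logging and error handling directly in IO, with no transformer stack, might look like this (a minimal sketch; the names are illustrative):

```haskell
import Control.Exception (IOException, try)
import System.IO (hPutStrLn, stderr)

-- Read a file; on failure, log a warning to stderr and fall back
-- to a default. Plain exceptions and plain prints, no mtl stack.
readWithDefault :: FilePath -> String -> IO String
readWithDefault path def = do
  r <- try (readFile path) :: IO (Either IOException String)
  case r of
    Left err -> do
      hPutStrLn stderr ("warning: " ++ show err)
      pure def
    Right contents -> pure contents
```

The trade is exactly the one described above: everything lives in IO, so the types no longer distinguish code that logs from code that doesn't.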
I learned Haskell and it's a giant waste of time. I'm not the only one who regrets functional programming either. Since then I vowed never to take things seriously that have zero connection to real world results. It's not possible to regret mastering x86 assembly for example, because even if that skill is relatively useless, it makes you better at many tangential things which can be decidedly useful; C and OS programming for example. Functional programming doesn't have this potential because it (proudly) exists as an abstract model with no connection to the machine or any physical processes for that matter. It's the string theory of computer science.
If you like math/category theory, go deep on the math itself. Your knowledge will be transferable to more than just some man-made story (like a programming language).
> Since then I vowed never to take things seriously that have zero connection to real world results. It's not possible to regret mastering x86 assembly for example, because even if that skill is relatively useless, it makes you better at many tangential things which can be decidedly useful; C and OS programming for example. Functional programming doesn't have this potential because it (proudly) exists as an abstract model with no connection to the machine or any physical processes for that matter
Hm interestingly I actually felt like learning Haskell had a huge benefit to my day job, probably much more than I imagine learning x86 assembly would. (Though admittedly I have learned some assembly in the past and that was also helpful).
I feel like Haskell forced me to write better code by forcing me to think about side effects. I don't know that I would actually use it in a production project because unfortunately real projects often rely a lot on state, even if constrained to a subsystem, and I still find it difficult to reason about the performance of a Haskell program.
Not trying to invalidate your point; perhaps you were already very good at what I learned via Haskell =] I admit I also find Haskell much more enjoyable to program in for the most part.
> I feel like Haskell forced me to write better code by forcing me to think about side effects.
I see, but wouldn't that be possible by "forcing" yourself to think about side effects in the language you were already using?
I mean, at the end of the day, it's learning the functional programming concepts that is supposedly going to make you a better programmer. Why not just start learning those in Java, Python, JS, etc.?
One thing I hear a lot from people who've learnt Haskell is they admit they're probably never going to use it in a real-world project (for so many reasons). Then, isn't it inefficient and a waste of time to learn parts of Haskell that are only found in Haskell?!
If I were to learn FP (which I will soon), I'd choose to learn it in the language I'm using now. It's not only more efficient, but also I'd enjoy being able to put what I've learnt in practical use.
> I mean, at the end of the day, it's learning the functional programming concepts that is supposedly going to make you a better programmer. Why not just start learning those in Java, Python, JS, etc.?
That could totally be possible for many, but for me, I need something more concrete. I had read about Haskell and the benefits of immutability and agreed from a high level, but until I actually used it, I didn't feel like I understood it.
> I mean, at the end of the day, it's learning the functional programming concepts that is supposedly going to make you a better programmer. Why not just start learning those in Java, Python, JS, etc.?
Because every OOP/imperative programmer I've known over the last 18 years takes the easy way out and doesn't think about side effects or immutability; they never proactively reach for them.
Granted my bubble is not representative of the world but this trend is nevertheless quite telling. I also never proactively reached for FP techniques in OOP/imperative language until I learned my first FP language.
This is a very narrow-sighted perspective. Most application-level programming these days happens in languages that are "abstract model[s] with [little] connection to the machine or any physical processes". You just happen to be a type 2 programmer: https://josephg.com/blog/3-tribes/
One thing I've found interesting in my very limited experience of Haskell is the connection to formally verifiable properties. Programs written in C or assembly very often have correctness and safety problems—among many others, memory corruption problems. Haskell and many related systems help provide tools to check programmatically that various large classes of error condition can't be present in a code base.
You might think that this isn't worth the mathematical rigamarole that comes with it and that it's grown up with, but as people have seen in a number of other HN threads, formal methods are having a renaissance now and the tools we have that engage with them can get us a lot further than they could in the 1960s.
I have written many bugs, of many different kinds, in other languages that could have been detected automatically by a type system like Haskell's. I'm not suggesting that that makes the other languages inherently bad, or that other programmers (or I) couldn't adopt other methods that would also help avoid these errors, but I think the ease with which Haskell's approach can do it is something interesting to consider.
This is what attracted me to FP to begin with, the "if it compiles it works" meme.
Program verification and functional programming are separate things. Ada predates Haskell by at least a decade, and it's not functional at all. Rust is kind of a revival of that in proving memory safety via borrowing; not functional either.
But the set of programs which can be formally proven is smaller than the set of all programs, so I'd rather not miss out by only making formally proven software. (The entire field of deep learning is a good example of useful code which can't be formally proven.)
I totally disagree. Functional programming teaches practices that are helpful regardless of what paradigm you use. Pure functions by definition adhere to dependency injection and the single responsibility principle. While most programming languages don't enforce immutability, being aware of mutation is a generally helpful skill. There's a reason that lambdas have become table stakes for new programming languages, and that's because composing functionality is a generally useful feature. I have never been paid to write in a functional language, but learning from functional languages has always improved my general programming skills.
Doing FP makes you a better programmer because you're writing programs. I'm taking the null hypothesis that after a year of writing functional code you would have improved just as much as you would have writing non-functional code. I have literally never seen anything to demonstrate otherwise other than hand waving and personal anecdotes.
> Pure functions by definition adhere to dependency injection and single responsibility principle.
Dependency injection and SRP are other man-made constructs with dubious utility in the same vein as functional programming.
> There's a reason that lambda have become table stakes for new programming languages, and that's because composing functionality is a generally useful feature
Lambda in functional programming is supposed to be a primitive you use to do everything ahem y combinators ahem. In Java 8/C++11/Swift, the lambdas you speak of are used only as embedded subroutines.
Functional programming selects for better than average programmers to begin with--the "programmer's programmer" that writes code for fun and probably visits this site. You're unlikely to convince to learn Haskell the person who writes enterprise .NET, never used anything but Windows, and who never opened a text editor after work. The functional programmers were already good before they became functional programmers. Then you get the cargo cult type of intrigue. "X writes really good code. X also uses $FP_LANG! Be more like X!"
Yeah, this more or less completely misses the point of what a working programmer would need to be doing. I honestly can't remember the last time I had to implement a data structure for real, and not just for the hell of it because I was bored. The closest I come is generally some sort of domain-specific shell that composites some data together.
I feel like the F# community tends to be more grounded in reality. Or at least I'm more exposed to the side of it that is trying to popularize it in Microsoft, as a useful tool for Domain Driven Design and the like.
I'm relatively new to Haskell, but Yesod has a good reputation for web application logic, and moreover comes with a free book introducing both itself and Haskell: https://www.yesodweb.com/
I've been curious: what does it take to find a job in software that isn't just gluing things together? Where do you have to go to write interesting algorithms at work? In our golden age of open-source, does that stuff only ever happen at FAANG and research labs?
Those jobs exist, but are few and far between, since cool ideas tend to be packaged up into reusable libraries, and don't need to be redeveloped, while everyone else uses these reusable libraries differently, so you have a constant need for "sanitation engineers". Tensorflow or OpenCV or UnrealEngine get written once, then everyone uses them to build whatever they're building. You write some cool glue around these, that's the interesting code, then spend most of your time wiring everything together into a larger package that actually does something useful.
Once you've wired everything together, now you need to track versions, build artifacts, manage release processes, test them and qualify them, etc., so you do a lot of QA and DevOps work.
QA, DevOps and sanitation engineering is so product specific that it can't be packaged up, it's a craftsmanship position, and that's why all of us spend most of our time doing this kind of work.
Yeah, that's what I figured. It's a bit depressing when you think about it. I guess for most of us that itch just has to be scratched by pleasure-programming.
I don't know how long you've been doing this, but I've been continuously at it for almost 30 years now, and no matter where, there are challenging problems to solve, particularly outside the "web app with a backend" family of products. Doing optical recognition of malformed chicken patties as they whiz by at 60mph on a conveyor belt sounds mundane, but it's a lot more fun than writing REST APIs to a SQL backend.
In your career, you will go through being new and excited, then sort of bored, and then you will find a niche that's both interesting to you and that you're good at--you become an expert in something. At that point, you do as much of that as you can, and the sanitation engineering doesn't seem so bad when it's directed at something interesting to you, especially when you work with colleagues who are better than you at something and challenge you professionally.
I look at this the other way around: if you’re only writing CRUD applications and relatively simple user interfaces for them, you tend to end up doing join-the-dots programming, but for almost any other field I’ve ever worked in, things get more interesting.
For example, we have a client at the moment who makes a type of device with a lot of user-configurable behaviour. An embedded web server allows access to its UI from a browser, and we were originally brought in to build that web UI for them. On the face of it, this is a substantial but straightforward SPA development, just one where the back end happens to talk to APIs that communicate with various physical components in the device rather than a traditional database.
However, it turns out that the way users view that device and how they want it to behave is very different to the way you have to program the various physical components to make useful things happen. That means even in this superficially simple project, we have some interesting algorithms in the front-end to present application-specific visualisations of the current state of the system, and we have a much more involved algorithm sitting behind that UI that converts the user’s tidy, intuitive view of the world into the very untidy and often counter-intuitive data formats required internally to program the hardware components.
To make this comment slightly more topical, I’ll also mention that the behind-the-scenes algorithm is essentially a form of compiler, taking a serialised version of the user’s world as input and running through a pipeline that systematically converts the data into the required internal formats. The first generation was written in Python, and has proven to be reasonably successful, but we are always a bit nervous about maintaining it just because of the number of edge cases and interactions inherent in the world we’re operating in. For the second generation, we made a big decision to go with Haskell instead, and for this sort of work, there were very welcome benefits including greater expressive power when writing data-transforming code and a strong type system that prevents mistakes like accidentally forwarding data from one pipeline step to the next without applying an appropriate transformation.
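The "can't forward data without applying the transformation" guarantee is easy to sketch. In this hypothetical example (the stage names `UserConfig`, `Normalized`, and `DeviceFormat` are invented for illustration, not from the actual project), each pipeline stage gets its own type, so composing the stages out of order, or skipping one, is a compile error rather than a runtime surprise:

```haskell
import Data.Char (toLower)

-- One distinct type per pipeline stage.
newtype UserConfig   = UserConfig String          -- what the user wrote
newtype Normalized   = Normalized String          -- after cleanup
newtype DeviceFormat = DeviceFormat String        -- what the hardware wants
  deriving Show

-- Stage 1: tidy up the user's input.
normalize :: UserConfig -> Normalized
normalize (UserConfig s) = Normalized (map toLower s)

-- Stage 2: encode for the device.
encode :: Normalized -> DeviceFormat
encode (Normalized s) = DeviceFormat ("DEV:" ++ s)

-- The whole pipeline is just composition of the stages.
compile :: UserConfig -> DeviceFormat
compile = encode . normalize

-- encode (UserConfig "x")   -- rejected by the type checker: stage skipped
```

Because `newtype` has no runtime cost in GHC, the extra safety is free; the wrappers exist only at compile time.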
I agree with Beltiras’s point in the GP comment, and I probably wouldn’t choose Haskell to implement the kinds of software mentioned in that comment for much the same reasons. However, it definitely has real value in the sort of situation I described above, where we have both integrations with other systems but also substantial data crunching to do.
Everywhere! 3rd-gen factory robotics and manipulator control. Movie CG renderers and asset-transformation pipelines (heck, get busy and finish Inkscape!). EHR systems. ERP systems. Risk management systems. Decision support systems. Trading systems. Field service support systems. Scheduling, packing, routing systems.
Now everything has a UI, and data innies and outies, but gosh! Those are just connectors to the diamond!
What I'm about to say is a criticism of you, but please don't take it personally. It's also a criticism of me, because my day job is essentially the same as yours.
In general terms, almost everything you and I do is either a CRUD app or something overcomplicated that would be better implemented as a CRUD app. There's no technological advancement happening here. And usually you're not even writing a new CRUD app; you're just reimplementing an earlier CRUD app with better CSS and JS and a different marketing team to tout it. If there's an innovation in your company, it's not the CRUD-app developer who's innovating. We're just reinventing the wheel over and over, because the other wheel implementations are closed source and owned by a competitor.
If you want to innovate, you have to take on harder problems that aren't CRUD apps. That's where languages like Haskell shine. Haskell doesn't shine because it's better, it shines because it's different, and suited for different tasks. The tasks for which Haskell is suited haven't been saturated yet, so there's still room for innovating on the technical side of things.
So yeah, I can't show you how to do what you do with Haskell--the reason you'd want to use Haskell in the first place is to do something different from what you (and most other developers, myself included) are doing. The reason you'd want to write Haskell is to solve technical problems which haven't already been solved.
You're right to bring up binary search trees and linked lists as criticisms of the Haskell community, because those are pretty well-solved areas: touting binary search and linked lists as the powers of Haskell completely undersells it. Haskell learning materials fall into two basic categories: complete beginner stuff, and Ph.D. theses written in an alien language. This is, unfortunately, part of why there's still innovation to be had here: the lack of mid-level learning material creates a barrier to entry that keeps people from reaching a level where they could innovate. The same is true of the communities of many less-widely-used languages.
This has been a rather cynical post, I realize. I'm not sure there are any recommendations here: I'm certainly not going all-in on learning Haskell and using it to innovate, myself. CRUD apps pay the bills, and innovation is risky. I find interest and novelty in other areas of my life.