Most of my daily job goes into gluing services (API endpoints to databases or other services, some business logic in the middle). I don't need to see yet another exposition of how to do algorithmic tasks. Haven't seen one of those since doing my BSc. Show me the tools available to write a daemon, an http server, API endpoints, ORM-type things and you will have provided me with tools to tackle what I do. I'll never write a binary tree or search or a linked list at work.
If you want to convince me, show me what I need to know to do what I do.
I wrote the article a while ago after being frustrated using a bunch of Go and Python at an internship. Often I really wanted simple algebraic data types and pattern-matching, but when I looked up why Go didn't have them I saw a lot of justifications that amounted to "functional features are too complex and we're making a simple language. Haskell is notoriously complex". In my opinion, the `res, err := fun(); if err != nil` (for example) pattern was much more complex than the alternative with pattern-matching. So I wanted to write an article demonstrating that, while Haskell has a lot of out-there stuff in it, there's a bunch of simple ideas which really shouldn't be missing from any modern general-purpose language.
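For comparison, here's roughly what that shape looks like with a sum type and pattern matching. This is my own made-up example (`parsePort` isn't from the article), just to show the pattern:

```haskell
-- A hypothetical fallible operation: the error case and the success case
-- are both in the return type, and the compiler checks you handle both.
parsePort :: String -> Either String Int
parsePort s = case reads s of
  [(n, "")] | n > 0 && n < 65536 -> Right n
  _                              -> Left ("invalid port: " ++ s)

describe :: String -> String
describe s = case parsePort s of
  Left err   -> "error: " ++ err
  Right port -> "listening on " ++ show port

main :: IO ()
main = do
  putStrLn (describe "8080")   -- listening on 8080
  putStrLn (describe "oops")   -- error: invalid port: oops
```

Unlike `res, err := fun()`, there's no way to accidentally use `res` while forgetting to check `err`: the only way to get at the port is through the `Right` branch.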
As to why I used a binary tree as the example, I thought it was pretty self-contained, and I find skew heaps quite interesting.
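For anyone unfamiliar, a skew heap is pleasantly tiny. This is my own sketch of the idea (not the article's code):

```haskell
-- A minimal skew heap: a binary tree where merge unconditionally swaps
-- children on the way down, giving amortised O(log n) insert and pop.
data Heap a = Empty | Node a (Heap a) (Heap a)

merge :: Ord a => Heap a -> Heap a -> Heap a
merge Empty h = h
merge h Empty = h
merge h1@(Node x _ r1) h2@(Node y _ _)
  | x <= y    = case h1 of Node _ l _ -> Node x (merge r1 h2) l
  | otherwise = merge h2 h1

insert :: Ord a => a -> Heap a -> Heap a
insert x = merge (Node x Empty Empty)

popMin :: Ord a => Heap a -> Maybe (a, Heap a)
popMin Empty        = Nothing
popMin (Node x l r) = Just (x, merge l r)

toSortedList :: Ord a => Heap a -> [a]
toSortedList h = case popMin h of
  Nothing      -> []
  Just (x, h') -> x : toSortedList h'

main :: IO ()
main = print (toSortedList (foldr insert Empty [5, 1, 4, 2, 3]))   -- [1,2,3,4,5]
```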
This is a true statement. (Opinion yada objective yada experience yada)
> In my opinion, the `res, err := fun(); if err != nil` (for example) pattern was much more complex than the alternative with pattern-matching.
This is also a true statement. (yada yada)
The insight I think you're missing is this piece right here: `we're making a simple language`. Their goal is not necessarily to make simple application code. That's your job, and you start that process by selecting your tools.
For certain tasks, pattern matching is a godsend. I'm usually very happy to have it available to me when it is. And I do often curse not having it available in other languages to be honest.
But Go users typically have different criteria for what makes simple/reliable/maintainable/debuggable/"good" code than Haskell users have. Which is why the two languages are selected by different groups of people handling different tasks. You're making a tradeoff between features and limitations of various languages.
And the language designers have yet another set of criteria for those things. In this case, adding pattern matching would absolutely make the language itself more complex, and they apparently don't believe that language complexity is worth the benefits of pattern matching. I think that's a perfectly reasonable stance to take.
I get that there's a tradeoff with including certain features, I suppose I disagree that the tradeoff is a negative one when it comes to things as simple as pattern-matching, and I think it should be included in languages like Go.
They're not saying that `if err != nil` is better or worse, simpler or more complex, etc., than pattern matching for application code.
They're saying that supporting pattern matching makes the language itself more complex, and they're not in favor of that tradeoff. You're focusing on application complexity, and that's a very different thing.
Both the Go language authors and the kind of developers who choose to use Go think of the relative simplicity of the language itself as a feature, even if it causes the application code to be slightly more complex. It's just another dimension that can be used when comparing programming languages, and one that group tends to value more than other groups.
For instance, and amusingly enough written in golang, one of the most respected recent books on this topic is `Writing an Interpreter in Go` and its sequel `Writing a Compiler in Go`. https://interpreterbook.com/ and https://compilerbook.com/ Both of these books are reasonably short, and have the reader make meaningful progress within a weekend.
Going through the motions of actually making your own programming language (or reimplementing an existing one) teaches you a lot of things you wouldn't otherwise expect about how to write general code, how to use existing languages effectively, and how things work under the hood. It's also one of the best ways to really get a practical feel for how to approach unit testing.
It's an exercise I'd recommend if you haven't gone through it already. It might make it really click for you why some features that seem like a no brainer and should be in every language aren't, and why some undesirable "features" are so prevalent.
I hate this kind of "I have secret knowledge, why don't you spend T amount of your time on some big project to maybe come to the same secret insights" move. If you have an opinion on why pattern matching is so complex and undesirable, just come out and say it, please. Otherwise I'll just call you out as not really having an argument.
Alternate interpretation: I learned something valuable from doing this thing; perhaps you'd be interested in doing so as well, since a book that took months or years to write will do a better job teaching it than I will in a five-minute break while typing on HN.
It's always impressive when freely sharing knowledge and tips is somehow taken as being insular and exclusive.
> If you have an opinion on why pattern matching is so complex and undesirable
Where did I say pattern matching is undesirable? It sounds more like you just want a fight here.
Remember the HN guidelines:
> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.
But you didn't share knowledge. You suggested that you had knowledge that was pertinent to the topic at hand. But you didn't share it. You did share tips for resources where one can learn more, and that's great. But you didn't add something like "... and that's where I learned that pattern matching is undesirable because <technical reason>".
> Where did I say pattern matching is undesirable?
This whole thread was about you saying that pattern matching was undesirable from the point of view of Go's designers or implementors due to their design goal of simplicity. Then you mentioned those compiler resources. The only reasonable interpretation for me is that you wanted to say that you did indeed know concrete technical reasons why pattern matching in Go would be complicated and therefore undesirable.
The only use of "undesirable" in any of my comments was in regard to features that are prevalent across languages today. If you must know I was thinking of inheritance and exceptions specifically.
As far as pattern matching goes, I was making no arguments except to say that I like it, adding a feature like pattern matching adds some non-zero amount of complexity, and that the go authors are apparently uncomfortable with that complexity. As I am not a go author, I am unsure of their exact reasoning and would not think to say why they believe that. My implication was not that I have an exact concrete reason for why the go authors feel the way they do. It was merely that I don't inherently disbelieve them when they say they have a reason.
In fact my exact wording was "I think that's a perfectly reasonable stance to take", which does not imply agreement, only a lack of strong disagreement. In other words I don't think they're ignorant of the matter or misrepresenting the situation.
> But you didn't share knowledge. You suggested that you had knowledge that was pertinent to the topic at hand. But you didn't share it. You did share tips for resources where one can learn more, and that's great. But you didn't add something like "... and that's where I learned that pattern matching is undesirable because <technical reason>".
The comment that appears to have gotten you riled up came after the person I was talking to said they understood. After a discussion about language complexity I thought that it would be appropriate to suggest some resources on a "quick" project that can help build an intuition on that topic. And to be honest, it's a project I like to find excuses to suggest. I find people tend to be surprised at how easy and fun it can be to make some meaningful progress.
I understand that you would like for me to somehow short circuit that process, but I don't believe I am capable of building someone else's intuition by posting a throwaway comment on HN. Intuitions are typically built on experience and tinkering, not reading someone else's experiences.
That you view that project suggestion as a continued argument is unfortunate; I can assure you that was not my intent. Again referencing the HN guidelines, I encourage you in future to try to read people's posts first with the assumption that they are being genuine and only fall back to an assumption of malice when you absolutely have to. Long drawn-out arguments over semantics don't help anyone.
Another exercise, perhaps less demanding in this regard, is to explore using Free Monads to implement an EDSL for a problem domain. Of course, the approachability of this varies based on the person involved.
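For anyone curious what that looks like, here's a minimal sketch with `Free` defined by hand (the `free` package provides a production version) and a made-up key-value store as the problem domain:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- The Free monad: a program is either a finished value or one
-- instruction wrapping the rest of the program.
data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Free fa) = Free (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Free fg <*> x = Free (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  Pure a  >>= k = k a
  Free fa >>= k = Free (fmap (>>= k) fa)

-- The instructions of a hypothetical key-value EDSL.
data KV next
  = Put String Int next
  | Get String (Maybe Int -> next)
  deriving Functor

put :: String -> Int -> Free KV ()
put k v = Free (Put k v (Pure ()))

get :: String -> Free KV (Maybe Int)
get k = Free (Get k Pure)

-- A program in the EDSL, with no commitment to how it runs.
prog :: Free KV (Maybe Int)
prog = do
  put "x" 1
  put "y" 2
  mx <- get "x"
  case mx of
    Just x  -> put "x" (x + 10)
    Nothing -> pure ()
  get "x"

-- One possible interpreter: pure, over an association list. You could
-- swap in an IO-backed or test-logging interpreter without touching prog.
runPure :: [(String, Int)] -> Free KV a -> a
runPure _   (Pure a)           = a
runPure env (Free (Put k v n)) = runPure ((k, v) : env) n
runPure env (Free (Get k n))   = runPure env (n (lookup k env))

main :: IO ()
main = print (runPure [] prog)   -- Just 11
```

The separation of program from interpreter is the whole point: the same `prog` value can be executed, pretty-printed, or tested, depending on which interpreter you hand it to.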
> For instance, and amusingly enough written in golang, one of the most respected recent books on this topic is `Writing an Interpreter in Go` and its sequel `Writing a Compiler in Go`.
Cue obligatory reference to "the dragon book":
Compilers: Principles, Techniques, and Tools
1 - https://www.quora.com/What-is-an-embedded-domain-specific-la...
2 - https://suif.stanford.edu/dragonbook/
But I know there's a recency bias when people are evaluating tech books, so if there's a good book from the last five years I'll recommend that over a great book from the last 15, just so there's a higher chance of the recommendation actually being used.
I mentioned the dragon book by obligation, not in comparison to the works you referenced.
Is 'recent' the key word here? ;) Because that is a very bold claim to make.
I remember having some bugs in Python due to one-element tuples; I don't think I would have had the same issue if Python had multiple return values instead.
I do understand, though, that the purpose of Go is not necessarily to push the boundaries of language design. I also understand that it's important the language is easy to pick up, compiles quickly, etc.
I think that some of Go's design decisions are bad, even with those stated goals in mind. Again, I don't want to overstate my experience or knowledge of language design (although I do know a little about Google's attitude towards Go, since that's where I spent my internship learning it), but some features (like "multiple return values" instead of tuples) seem to me to be just bad. Tuples are more familiar to a broader range of programmers, aren't a strange special case, are extremely useful, and have a straightforward implementation. Also, I don't want a bunch of fancy features added to Go: ideally several features would be removed, in favour of simpler, more straightforward ones.
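To illustrate the tuple point with a sketch of my own: where tuples are ordinary first-class values, a multi-valued result composes with everything else in the language, which Go's multiple-return special case can't do.

```haskell
-- A two-valued result as a plain tuple (a user-defined stand-in for
-- the Prelude's divMod, for illustration).
divMod' :: Int -> Int -> (Int, Int)
divMod' n d = (n `div` d, n `mod` d)

main :: IO ()
main = do
  -- A tuple result can be mapped over a list, stored, or passed along
  -- wholesale; a Go multi-return must be unpacked at the call site.
  print (map (divMod' 17) [2, 3, 5])   -- [(8,1),(5,2),(3,2)]
  let (q, r) = divMod' 17 5            -- ...but destructuring is still easy
  print (q * 5 + r)                    -- 17
```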
Perhaps they find it easier to teach to users coming from languages with little or no type inference? Java and C++ programmers in my experience don't tend to be familiar with tuples, despite there being a tuple in the C++ stdlib. My purely uninformed guess is that it's because of how verbose declarations can get in Java, or in C++ without auto/decltype from C++11.
If you want to learn statically typed functional programming learn Elm (which takes only a few days), then one of F# or OCaml.
If you want to learn dynamically typed Functional Programming, learn Clojure or Racket/Scheme.
The amount of investment it takes to see a return on learning Haskell makes it terrible for an introductory language. And every proponent of it glosses over this part. It has some advanced concepts, but it's not an introductory language.
There is so much benefit to your coding to be had from learning FP that you should pick a language that lets you see and judge that value prop on your own quickly, not one where you have to invest so much time becoming Haskell-proficient before you get the return on your learning.
The only thing worse in my book is FP evangelists. FP has some cool ideas and is an interestingly different way to do things compared with imperative programming, but enough exposure to the “FP is obviously the best paradigm and anybody who isn’t a fanatic is clearly just unenlightened” crowd will sour anybody on it.
The pure FP Scala community understands your complaints more than the Haskell community.
https://typelevel.org/ is chock full of
a) useful utilities for actually writing applications
b) decent documentation. Not just pages of type signatures, but demonstrations of the libs' usage.
Everything you need is here: JSON, config, units of measure, streams, testing, validation, a really nice JDBC wrapper (https://tpolecat.github.io/doobie/)
Throw in https://http4s.org/ and you've got yourself a rock-solid, purely functional stack with sensible, documented APIs, a more readable syntax and better tooling support.
I urge anyone who learnt some Haskell, thought "man this shit sucks, I'm never going to write something useful in this" to at least give FP Scala a chance. Here's a useful service template to start hacking with: https://github.com/http4s/http4s.g8.
On the other hand Scala has its own share of infelicities when it comes to pure FP. I've mentioned this elsewhere, but the core language of Scala, that is the language that is left after you desugar everything, is OO and the FP part is really just a lot of syntax sugar mixed in with standard library data structure choices. That means if you're coming from a pure FP background a lot of things will seem like hacks (methods vs functions/automatic-eta-expansion, comparatively weak type inference, the final yield of a for-comprehension, subtyping to influence prioritization of implicits for typeclasses, monomorphic-only values vs. polymorphic classes, etc.). Treating Scala as an OO language side steps a lot of these warts.
And then there's the social factors; the Scala community is split on how much it embraces pure FP and pure FP is a (significant) minority within the community. This carries over to the library ecosystem where things are basically split into Typelevel/Typelevel-using and non-Typelevel libraries. Many workplaces have a fear (well-placed and not so well-placed) of the Typelevel ecosystem. Years ago Scalaz was something akin to the bogeyman in some places. Cats has a bit of a softer image, but still comes off as an "advanced" library in the Scala ecosystem.
Most of the social weight I feel is behind the non-pure-FP parts of Scala. Sure, some of the libraries in pure FP Scala have good documentation (but I don't actually think the situation here is far better than Haskell's once you leave the core Typelevel libraries). The ones with excellent documentation, though, live outside the land of pure FP (Akka, Play, Spark, Slick, etc.).
But if we're only talking about pure FP, or at least something close to it, I think you're still getting a better deal than Haskell, even in spite of all those quite legitimate downsides you mentioned. My own personal biases mean that I will always prefer pure FP to anything else (I personally didn't love Akka and Play when I used them briefly), but that's an argument for another day.
Whether this is right/correct or wrong/incorrect is left as an exercise for the reader ;-).
True. Most multi-paradigm languages are not equally strong in each paradigm they support.
> All of Scala's FP parts can be desugared into OO. The reverse is not true.
This is a bit of a strawman argument, as any functional programming environment can be implemented by, or "desugared into", an object system. Contrast this with the fact that mutable OOP systems cannot guarantee Referential Transparency and your second assertion is proven.
However, this simply proves that Scala supports more than one programming style. Whether a given code base employs FP, OOP, imperative programming, or some mixture therein, is a decision left for the system authors and not the language. It is left as an exercise for the reader to determine if that is good or bad.
> This has persisted in Dotty ...
AFAIK, Dotty is intended to be a new experimental language. I do not follow its development nor progress.
> That's the annoying part when doing pure FP in Scala. You can sometimes feel like you're fighting against the grain in a way that isn't true when on the OO side.
I respect what you identify but do not agree with your annoyance. But that's just me.
0 - https://softwareengineering.stackexchange.com/questions/2543...
https://programming.tobiasdammers.nl/blog/2017-10-17-object-... is an example from first principles in Haskell.
A more fleshed out version is O'Haskell (which unfortunately died out in the early 2000s).
Mutability can be built (or more accurately faked) on top of referential transparency as well through syntax sugar in a very similar fashion to how Scala builds FP on top of OO. Indeed this was the original impetus behind do-notation in Haskell, but it stopped short of trying to make do-notation look like ordinary Haskell code. If you had syntax sugar that elided the difference between do notation and ordinary equality and unified IO with normal types then you'd have mutability in your language implemented through syntax sugar on top of an immutable core language (you could call it automatic IO expansion to be cheeky that automatically inserts a call to pure in front of any non-IO code used in an IO context). In fact I could see a rather reasonable case to be made for such a construct.
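To make the do-notation point concrete, here's a small example of my own showing the sugar and the `(>>=)` chain it desugars to, in the Maybe monad:

```haskell
-- Halve a number twice, failing (Nothing) if either input is odd.
halve :: Int -> Maybe Int
halve k = if even k then Just (k `div` 2) else Nothing

-- Written with do-notation...
halveTwiceSugar :: Int -> Maybe Int
halveTwiceSugar n = do
  m <- halve n
  halve m

-- ...and the explicit bind it desugars to. Same program, no sugar.
halveTwiceDesugared :: Int -> Maybe Int
halveTwiceDesugared n = halve n >>= halve

main :: IO ()
main = print (halveTwiceSugar 12, halveTwiceDesugared 12)   -- (Just 3,Just 3)
```

The sugar makes a chain of binds read like imperative statements, which is exactly the sense in which effects and (faked) mutability can be grafted onto an immutable core language.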
Scala similarly "fakes" (not necessarily a bad thing!) a lot of stuff. This is how automatic eta expansion and special function instantiation syntax (the arrow as opposed to new Function1(...)) elide the difference between methods and functions in Scala and let the language pretend e.g. that methods are first-class entities that you can pass to another method (which is not true, they must be wrapped in a class first just like Java) and let you pretend that methods have the same type signatures as functions (when in fact methods have a special method type that can be polymorphic whereas functions are always monomorphic; in fact you cannot write the true type of a method in a first-class way in Scala, as it is a special type that exists outside of the normal type hierarchy that is referred to as a "method type" in the Scala spec). This is leaving aside the encodings of typeclasses, ADTs, fully polymorphic functions (FunctionK in cats), etc.
In all these examples it is the FP concept that is "faked" (higher-order methods and polymorphic functions respectively in the two examples) and the OO concept (method taking an ordinary instance of a class and generics in methods) which is fundamental.
Dotty is explicitly blessed as Scala 3. I would highly recommend keeping high-level tabs on it if you're a Scala programmer. You don't need to know the specific details of it, but note that Scala 2.14 will be built specifically with Dotty in mind. It is the future of Scala (https://www.scala-lang.org/blog/2018/04/19/scala-3.html). And it comes with a lot of goodies! I'm really excited about it. More importantly for this discussion the current encoding of typeclasses in Scala 2 still desugars to implicits + OO classes instead of the other way around (which is perfectly possible where typeclasses are the core abstraction and implicits and OO classes are built using typeclasses).
0 - https://www.manning.com/books/functional-programming-in-scal...
For example, pattern matching against static types is cool, but, compared to pattern matching directly against data, Clojure-style, is even cooler. One makes the code a bit more concise and readable, but not necessarily a whole lot more maintainable. The other takes one of the more annoying and error-prone portions of my (say) Java code and renders it far more manageable.
There's a recent LispCast that talks about this a bit: https://lispcast.com/what-is-data-orientation/
The part of the business applications I work on that's a problem is dealing with the outside world. The data is messy. It's inconsistent. The protocols I'm using to communicate are invariably something horrendously loosey-goosey like JSON or XML. Stuff like that. And so, an inordinate amount of time I spend doing business applications in static languages ends up being spent on taking the messy, messy outside world and trying to create a clean, well-typed, rigorous façade for it so that I can operate on it inside my blissfully statically typed fantasy world. And all that static typing never seems to save me in practice, because the software quality problems I run into almost never crop up in the bits that I can operate on in a Haskell-friendly way. It's invariably in some mismatch between the outside world and my domain model that I failed to deal with accurately, which means that it's in my mapping code. Worse, oftentimes it's because of my mapping code, because my statically typed domain model ends up accidentally placing requirements on the input that aren't strictly needed by the business logic; I just unwittingly introduced them in the course of my efforts to get the types to line up.
In most of the systems I work in, null permeates both the input and the output, and can often even have its own semantic value. i.e., "there is a value for this key, and that value is null" might actually be semantically distinct from "there is no value for this key". . . it's gross, but it happens, and whether or not it happens is often outside my control.
And I am beginning to suspect that it's actually safer, and even simpler, to fully live in and be fully aware of the reality of the business domain I'm working in, than it is to try and live in a bubble and pretend that life isn't complicated. And I'm also beginning to think that it might be wise to mind my own business. . . which includes not worrying about whether or not a value is null unless and until I find that I need to care whether or not it is null. Rejecting input because a value wasn't set when I had no intention of even looking at it is just such a grave violation of Postel's law. If I find that I'm only doing it because I need to satisfy the type checker. . . seems like a foolish consistency to me.
Perhaps if I could live in an alternate reality where things like JSON and MongoDB hadn't happened, and we instead decided that clean and consistent data is every bit as important when sitting on magnetic disks or traveling through fiber optic cables as it is when bouncing around in silicon. Oh, that would be wonderful. I dream to live in that world. But that doesn't seem to be the reality I occupy.
That happens. Surely, if you know about this in advance, you can use a type along the lines of
data Field = Field Int | Null | Empty
And if you don’t have this kind of knowledge, well... That’s just a problem waiting for the right time to surface, whether you are using Haskell, Clojure or whatever else.
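To flesh that out slightly (my sketch, reusing the `Field` type above): the payoff is that pattern matching makes the compiler insist every case is handled somewhere, exactly once.

```haskell
-- The type from the comment above, with derived instances for testing.
data Field = Field Int | Null | Empty
  deriving (Eq, Show)

-- Consuming it: omit any of the three cases and GHC (with -Wall)
-- warns about a non-exhaustive match.
render :: Field -> String
render (Field n) = show n
render Null      = "null"
render Empty     = "<absent>"

main :: IO ()
main = mapM_ (putStrLn . render) [Field 3, Null, Empty]
```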
> And I am beginning to suspect that it's actually safer, and even simpler, to fully live in and be fully aware of the reality of the business domain I'm working in, than it is to try and live in a bubble and pretend that life isn't complicated.
I wouldn't say Haskell forces you to live in the bubble. Haskell forces you to think, in advance, about the relations between fields and types, sure. It doesn't force you to use only simple, bubble-y types, though; the types can be something general, or some abomination (like the one above). I'm not aware of a use case where I wouldn't be able to say "this will always be something".
The only major distinction is, I would say, the place in code where you deal with the types.
Clojure: In the functions all over the place (-), and for some fields, never (+).
Haskell: Always in the topmost layer of your app (+), but you have to deal with all of them (-).
That’s the basic tradeoff between those two languages. Which pros and cons are more important depends heavily on your use case.
The systems I write also have to deal with JSON with nullable fields, and with fields I ignore while parsing. Aeson for instance gives you complete control over how strict or lenient you wish to be when trying to parse data.
The idea I was trying to convey was that if you care about marshalling data into types a la Haskell, then you can code less defensively when writing code for the data you actually care about. You do that defensive validation in just one place, as opposed to sprinkling nil checks all throughout your system.
If I'm understanding your system correctly, it sounds like you have unreliable input, and you want the same output, only with some of the fields updated if they're there. Haskell readily lets you do this too. The lens-aeson library is perfect for this.
Lots of examples for that here: https://www.snoyman.com/blog/2017/05/playing-with-lens-aeson
Also, I am not sure how you differentiate between a null value and no value, but whatever mechanism you're using you could also use to model those two different types of null as actual types in Haskell.
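The usual trick for that is nesting `Maybe`: the outer layer records whether the key is present, the inner one whether its value is null. A base-only sketch of the idea (my example, with an association list standing in for a JSON object; in real code this distinction would live in your Aeson decoding):

```haskell
-- Nothing        = field missing entirely
-- Just Nothing   = field present, value null
-- Just (Just v)  = field present with value v
lookupField :: String -> [(String, Maybe Int)] -> Maybe (Maybe Int)
lookupField = lookup

describeField :: String -> [(String, Maybe Int)] -> String
describeField k obj = case lookupField k obj of
  Nothing       -> k ++ ": missing"
  Just Nothing  -> k ++ ": explicitly null"
  Just (Just v) -> k ++ ": " ++ show v

main :: IO ()
main = do
  let obj = [("age", Just 42), ("nickname", Nothing)]
  putStrLn (describeField "age" obj)        -- age: 42
  putStrLn (describeField "nickname" obj)   -- nickname: explicitly null
  putStrLn (describeField "email" obj)      -- email: missing
```

The point being: "null is semantically distinct from absent" is itself a fact about the domain, and it can be stated in the types rather than worked around.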
> Oh, that would be wonderful. I dream to live in that world. But that doesn't seem to be the reality I occupy.
It's hard[er] to discern tone through the medium of text, but it sounds like you're suggesting Haskellers live in some fantasy world where all the data is perfect and everything is pure.
I don't really understand that perspective. I make 100% of my income from writing online business software in Haskell. I employ other programmers to write Haskell for me too. We live in the same world you live in, but we might just approach it differently.
Of course, if you can't describe the values in a business domain in types, I'd argue you aren't fully aware of its reality. And when you've done that modelling, you aren't only aware of the reality, but you've encoded it so that the knowledge is preserved not only for operational use in the program, but also for anyone who reads the program.
Anecdotal evidence: I recently had to turbo-build a tool to generate statistics over five-digit numbers of very complex business objects (including dates, strings, IDs, booleans as whatever string, ill-named columns) scattered across a number of systems (some web APIs plus some CSV extracts from you-don't-really-know-where). Using 'raw' structures like Aeson's JSON AST with lenses was more than good enough; lenses essentially solved the "dynamic / cumbersome shape" problem. Then I had to create a CSV extract with like 50 boolean fields, and reaching for intermediate techniques like type-level symbols allowed me to really cheaply ensure I was not mixing two lines when adding a new column. I could even reach for Haxl from Facebook to write a layer of "querying" over my many heterogeneous sources that automatically makes concurrent and cached fetches.
The main difficulty in this setup is keeping the RAM usage under control, for two reasons. On the one hand, AST representations are costly. On the other hand, concurrency and caching mean more work done in the same memory space.
Overall, got the data on time at relatively low effort (really low compared to previous attempt - to a point that some people with mileage at my company thought it would be impossible to build timely). Pretty good experience, would recommend to a friend.
In one of his recent talks he made the claim that `Either a b` is not associative, which... well, it is, since it's provably isomorphic to logical disjunction, which _is_ associative.
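Concretely, the isomorphism is easy to exhibit (my sketch):

```haskell
-- Either is associative up to isomorphism: these two functions are
-- mutually inverse, witnessing Either a (Either b c) ≅ Either (Either a b) c.
assocL :: Either a (Either b c) -> Either (Either a b) c
assocL (Left a)          = Left (Left a)
assocL (Right (Left b))  = Left (Right b)
assocL (Right (Right c)) = Right c

assocR :: Either (Either a b) c -> Either a (Either b c)
assocR (Left (Left a))  = Left a
assocR (Left (Right b)) = Right (Left b)
assocR (Right c)        = Right (Right c)

main :: IO ()
main = do
  let xs = [Left 'a', Right (Left 'b'), Right (Right 'c')]
             :: [Either Char (Either Char Char)]
  print (map (assocR . assocL) xs == xs)   -- True: the round trip is identity
```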
I thought what he might be looking for are _variant_ types which are possible to implement in Haskell but are a bit complicated for reasons. There are libraries for it or you can try languages like Purescript or dependently typed languages like Idris, Agda, or Lean.
Regardless I don't find his particular brand of vitriol appealing. If he doesn't really have a lot of experience working with Haskell like type systems, why does he feel the need to have an opinion about them?
To be fair I used to have a lot of the opinions pointed out in the article and reflected in many of the comments here. An old blog post of mine muses on the utility of static types. I was seriously into Common Lisp at the time.
The problem with past me then was that I hadn't taken the time to learn and understand Haskell to form an opinion. I had learned Common Lisp out of frustration, to win an argument that it was an old, crusty language that nobody used or needed anymore... and lost. I hadn't done the same yet for Haskell and would join the chorus of people repeating things like, "Haskell is an academic language but is not pragmatic for real-world use." It's embarrassing looking back on it.
I've learned enough Haskell in recent years to ship a feature into a production environment and teach a small group of people to hack on it. It's pretty great and I much prefer working with it than I do with weakly-typed or dynamically typed languages. The amount of work I can do with the amount of effort I have to put in has a great ratio in Haskell. The initial learning curve to get there is hard but it's worth it in the end.
If you want to use clojure, then go for it. Use what you want to use.
Here's where I come down on it:
There are some kinds of projects where you can cut off most potential problems at the pass with compile time checks. In those cases, yes, you absolutely want to statically render as many errors as possible impossible. Compilers come to mind as a shining example here.
There are others where the nastiest bits invariably happen at run time, though. And, for a significant number of those, the grottiest bits fall under the general category of "type checking" - not checking types in the structure of the code itself, per se, but checking types in the actual data you're being fed. And, since you don't get fed data until run time, that means all that type checking has to be done at run time. There's no sooner time at which it's possible. There's some tipping point where that becomes such a large portion of your data integrity concerns that it's just easier to delay all your type checking until run time, so that you are dealing with these things in a single, clear, consistent way. If you try to handle it in two places, there's always going to be a crack between them for things to slip through.
I think Haskell is an excellent language for servers and API's. It really excels as a backend language. So, I'm sorry you think Haskell is only good for compilers, but I think the range of use cases it's good at is much broader than "Compilers".
Haskell is best thought of as a better Java. I wouldn't select it for every problem, but server API's and backend work is a really good fit.
Also I think Clojure is great. We can both co-exist in this world though. It is possible.
It's unfortunate the OP picked on python - it's not the style of post that I would write.
But I also believe another very important thing: There is no silver bullet.
Because I believe that, I am able to recognize that even the things I love have some limitations. And I don't believe that this should be a fight, and that is why I think I should be able to articulate what I have found to be the limitations of a tool, and acknowledge that some other tool that other people like might have something to offer in this area. Without being perceived as a hater for doing so.
* Bugs are typically caused by misunderstandings of requirements or something odd about the interactions between different systems, and rarely about internal logic.
* Where quick is often better than proven correct.
* Where requirements are in constant flux and where a lot of code is tossed out because it was the result of a failed experiment or an unwanted feature.
> Brooks distinguishes between two different types of complexity: accidental complexity and essential complexity. Accidental complexity relates to problems which engineers create and can fix; for example, the details of writing and optimizing assembly code or the delays caused by batch processing. Essential complexity is caused by the problem to be solved, and nothing can remove it; if users want a program to do 30 different things, then those 30 things are essential and the program must do those 30 different things.
If I'm solving a problem for myself and it breaks in 'prod', then "Ooops". I try to avoid that, and if I'm expecting other people to use it I will document public interfaces and write unit tests, but my focus is on scratching my own itch, not getting paid.
If I'm writing code that thousands of people's livelihoods, or millions of users' buying decisions, depend on, the stakes are higher. I might decide to use a more rigid language like Java, because the chances that I'm going to be given the freedom to replace rather than repair classes are slim. Similarly, if I persuade a client that microservices are the way forward, I'm going to spend significant time making sure we have a monorepo so each service has the same timeline, an automatic deployment pipeline, and I'm going to want to be able to defend my technical choices with economically sound data... and that's where Hickey kicks in. Many of the economically sound data sets are actually vapour.
Here's an article I read a while back (from the same author it turns out!) which nearly converted me to the dynamic-types camp: https://lispcast.com/clojure-and-types/
That observation on `Maybe` really hit home in a big way. Not at first. Eventually. I used to think that banning null and using `Maybe` instead was the best idea ever. I still love the basic idea, and wish I could always work that way... but nowadays I'm so frequently working in the limit case, where everything is either optional, or used to be optional, or will be optional in the future, or is officially required but somebody didn't get the memo. And it's like in some Zen parable where the student keeps getting hit with a stick until they're enlightened. Bruised, bloodied, and enlightened. You either have it or you don't.
Cycorp? Do you need more Lisp developers? I'm trying to switch jobs.
Maybe it's cool, but I think static types can be "cool" too. When you take the fashion statement out of the question, what are you left with? One is polymorphism at runtime and the other is at compile time. I'll take the compile time one.
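To make the compile-time/runtime distinction concrete, here's a minimal sketch (the `Shape` class and types are made up for illustration) of how Haskell resolves polymorphism at compile time via type classes:

```haskell
-- Compile-time polymorphism via type classes: for concrete types the
-- compiler picks the right `area` implementation statically, rather
-- than dispatching on a type tag at runtime.
data Circle = Circle Double
data Square = Square Double

class Shape a where
  area :: a -> Double

instance Shape Circle where
  area (Circle r) = pi * r * r

instance Shape Square where
  area (Square s) = s * s

-- A homogeneous list: the instance is resolved once, at compile time.
totalArea :: Shape a => [a] -> Double
totalArea = sum . map area
```

Calling `area` on a type with no `Shape` instance is rejected before the program ever runs, which is the trade being argued for here.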
http://hackage.haskell.org/package/optparse-applicative for small cli apps
http://hackage.haskell.org/package/envparse For any Docker microservice
http://hackage.haskell.org/package/aeson-18.104.22.168/docs/Data-A... For all JSON work. Sometimes I’ll use it with lenses which is massively powerful but a rabbit hole
I’ll use http://hackage.haskell.org/package/stm when dealing with parallel execution
https://github.com/brendanhay/amazonka For anything dealing with AWS
https://github.com/haskell-works?tab=repositories Projects for Kafka and avro
http://hackage.haskell.org/package/warp For trivial micro services or Scotty if more than a few endpoints
http://hackage.haskell.org/package/persistent For dealing with Postgres
http://hackage.haskell.org/package/parsec For dealing with any text parsing.
The tools are available, and they can make things like CLI apps and microservices trivial. However, if you have never used an ML-family language before, you will face a steep learning curve, as it is very different from C-style languages.
I was once of the opinion that Haskell is academic: what can you use it for in the real world? Then I studied with it and played with it, admittedly on and off, over 1-2 years, hitting hurdles where I had to think differently from everything I'd learnt before. Eventually it clicked, and now it's very hard and frustrating in my day job using typical enterprise or popular languages. It's not about convincing, it's about having an open mind and wanting to learn something different.
I'm fairly convinced that Haskell is good at preventing the kinds of bugs which you might run into writing, say, a parser or other kinds of very complex logical code, but I'm less convinced that the nature of the language helps with the kinds of issues you get hooking together APIs, databases, etc.
That's the only occurrence I can find. I think the GGP did a good job of answering that.
I'm not going to walk through the differences, as each of those would take many hours just to explain.
Instead, the next high-profile HN Haskell link will be the thousandth demo/intro/tutorial that yet again implements some core data structure.
Rather than showing how Haskell is great for working with databases.
Or working with Kafka.
Or integrating with AWS.
Or parsing text.
Or running microservices.
Or any of the other hundred things that I'm going to do a dozen times before I need to reimplement binary trees.
data Value
  = Object (HashMap Text Value)
  | Array (Vector Value)
  | String Text
  | Number Scientific
  | Bool Bool
  | Null
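To show why a sum type like this is pleasant to work with, here's a rough sketch using a simplified stand-in for aeson's `Value` (plain lists instead of HashMap/Vector/Scientific, so it needs nothing beyond base; the `render` function is made up for illustration):

```haskell
import Data.List (intercalate)

-- Simplified stand-in for aeson's Value, so the sketch is self-contained.
data Value
  = Object [(String, Value)]
  | Array [Value]
  | String String
  | Number Double
  | Bool Bool
  | Null

-- Pattern matching turns a JSON renderer into a direct transcription of
-- the data type: one equation per constructor, and the compiler can warn
-- if a constructor is left unhandled.
render :: Value -> String
render (Object kvs) = "{" ++ intercalate "," [show k ++ ":" ++ render v | (k, v) <- kvs] ++ "}"
render (Array vs)   = "[" ++ intercalate "," (map render vs) ++ "]"
render (String s)   = show s
render (Number n)   = show n
render (Bool b)     = if b then "true" else "false"
render Null         = "null"
```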
And also to point to this excellent article: https://pchiusano.github.io/2017-01-20/why-not-haskell.html
Until you’ve done Windows programming for a while, you may think that Win32 is just a library, like any other library, you’ll read the book and learn it and call it when you need to. You might think that basic programming, say, your expert C++ skills, are the 90% and all the APIs are the 10% fluff you can catch up on in a few weeks. To these people I humbly suggest: times have changed. The ratio has reversed.
Read the whole article, it's pretty amazing. Parts of it are just as relevant to mobile development today as to desktop development back then.
And every non-trivial program I've worked on is 90% of a compiler. (You could describe compilers as just "some business logic in the middle", too, if you were in Architecture Astronaut mode.) You don't think HTTP servers use "pattern matching"? You don't think API endpoints would benefit from "ridiculously easy" testing? This is your bread and butter.
This article is showing how to implement if-statements and null-checks in Haskell, and reduce your code size by half. I bet you have some of those in your software. I'm not sure how this could be much more relevant, without reducing its usefulness by being overly specific.
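For concreteness, a minimal sketch of the null-check reduction being described (the names `safeDiv` and `describe` are made up for illustration): failure is an ordinary value you pattern-match on, rather than a separate error channel you must remember to check.

```haskell
-- Division that can fail returns a Maybe instead of a sentinel value.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- The caller is forced by the type to handle both outcomes; there is
-- no way to "forget the nil check".
describe :: Int -> Int -> String
describe x y = case safeDiv x y of
  Nothing -> "division by zero"
  Just q  -> "quotient is " ++ show q
```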
In the real world, software has a CS "core", plus 95% boring data copying and gluing APIs together, where availability of libs and tools is far more important than theoretical correctness properties and the most general abstraction possible. This is why Haskell looks so nice in blog posts but terrible in a production system.
In Haskell, updating fields in records is still an active area of research.
Isn't "correctness" a useful property, even for a web service? I think that being able to eliminate entire categories of bugs is terrific, in any situation.
Sure, it's always nicer to have good libs/tools than to have to write your own (though that distinction is much less important when your language has great abstraction capabilities). Are there any libs/tools you're missing in Haskell? The way your comment is phrased makes it sound like good old fashioned FUD.
I'm not sure what you mean by the last sentence. It seems to still be an active area of research all over the place. Look at Swift/Rust/Go/Java, or Postgres/Mongo/Datomic, or ext4/zfs/btrfs/ntfs. Everybody updates records in very different ways. It's not like all Algol-derived languages have the same data model.
In Haskell, everything is still an active area of research. Mutable data already exists in Haskell and has sensible semantics.
But the goal of Haskell (so far as I can see, at least) is good, reliable, maintainable code which produces good, reliable, maintainable programs. If this is not what you look for in a programming language then it might not be for you.
My company does everything in Haskell, but it's almost all just boring plumbing code. APIs, JSON, databases, HTTP stuff, HTML templating, etc. It works great.
As for libraries we use heavily (in no particular order): Yesod, Persistent, Esqueleto, Lens, Lens-Aeson, Hedis, STM, Wreq, Shakespeare, Hspec.
All of which tend to introduce monads.
When you have multiple monads on the go, you need some way of combining them.
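The standard answer is monad transformers. A rough sketch (using the transformers package that ships with GHC; the `tick` function is made up for illustration) of layering State over IO:

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.State (StateT, evalStateT, get, put)

-- One common way to combine monads: a transformer stack. Here State
-- (a counter) is layered over IO (printing); `lift` moves an IO action
-- into the combined monad.
tick :: StateT Int IO Int
tick = do
  n <- get
  put (n + 1)
  lift (putStrLn ("tick " ++ show n))
  return n
```

Running `evalStateT (tick >> tick) 0` threads the counter through both calls while still being allowed to print.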
I feel like the F# community tends to be more grounded in reality. Or at least I'm more exposed to the side of it that is trying to popularize it in Microsoft, as a useful tool for Domain Driven Design and the like.
Once you've wired everything together, you need to track versions, build artifacts, manage release processes, and test and qualify them, so you do a lot of QA and DevOps work.
QA, DevOps and sanitation engineering is so product specific that it can't be packaged up, it's a craftsmanship position, and that's why all of us spend most of our time doing this kind of work.
In your career, you will go through new and excited, then sort of bored, then you will find a niche that's both interesting to you and that you're good at: you become an expert in something. At that point, you do as much of that as you can, and the sanitation engineering doesn't seem so bad when it's directed at something interesting to you, especially when you work with colleagues who are better than you at something and challenge you professionally.
Much less than 30 years :-)
Thanks for the insight/encouragement!
For example, we have a client at the moment who makes a type of device with a lot of user-configurable behaviour. An embedded web server allows access to its UI from a browser, and we were originally brought in to build that web UI for them. On the face of it, this is a substantial but straightforward SPA development, just one where the back end happens to talk to APIs that communicate with various physical components in the device rather than a traditional database.
However, it turns out that the way users view that device and how they want it to behave is very different to the way you have to program the various physical components to make useful things happen. That means even in this superficially simple project, we have some interesting algorithms in the front-end to present application-specific visualisations of the current state of the system, and we have a much more involved algorithm sitting behind that UI that converts the user’s tidy, intuitive view of the world into the very untidy and often counter-intuitive data formats required internally to program the hardware components.
To make this comment slightly more topical, I’ll also mention that the behind-the-scenes algorithm is essentially a form of compiler, taking a serialised version of the user’s world as input and running through a pipeline that systematically converts the data into the required internal formats. The first generation was written in Python, and has proven to be reasonably successful, but we are always a bit nervous about maintaining it just because of the number of edge cases and interactions inherent in the world we’re operating in. For the second generation, we made a big decision to go with Haskell instead, and for this sort of work, there were very welcome benefits including greater expressive power when writing data-transforming code and a strong type system that prevents mistakes like accidentally forwarding data from one pipeline step to the next without applying an appropriate transformation.
I agree with Beltiras’s point in the GP comment, and I probably wouldn’t choose Haskell to implement the kinds of software mentioned in that comment for much the same reasons. However, it definitely has real value in the sort of situation I described above, where we have both integrations with other systems but also substantial data crunching to do.
Now everything has a UI, and data innies and outies, but gosh! That's just connectors to the diamond!
In general terms, almost everything you and I do is either a CRUD app or something overcomplicated that would be better implemented as a CRUD app. There's no technological advancement happening here. And usually you're not even doing a new CRUD app; you're just reimplementing an earlier CRUD app with better CSS and JS and a different marketing team to tout it. If there's an innovation in your company, it's not the CRUD app developer who's innovating. We're just reimplementing the wheel over and over, because the other wheel implementations are closed source and owned by a competitor.
If you want to innovate, you have to take on harder problems that aren't CRUD apps. That's where languages like Haskell shine. Haskell doesn't shine because it's better, it shines because it's different, and suited for different tasks. The tasks for which Haskell is suited haven't been saturated yet, so there's still room for innovating on the technical side of things.
So yeah, I can't show you how to do what you do with Haskell--the reason you'd want to use Haskell in the first place is to do something different from what you (and most other developers, myself included) are doing. The reason you'd want to write Haskell is to solve technical problems which haven't already been solved.
You're right to bring up binary search trees and linked lists as criticisms of the Haskell community, because those are also pretty solved areas: touting binary search and linked lists as the powers of Haskell completely undersells Haskell. Haskell learning materials fall into two basic categories: complete beginner stuff, and Ph.D. theses written in an alien language. This is, unfortunately, part of why there's any innovation to be had here: having very little mid-level learning material creates a barrier for entry that keeps people from reaching a level where they could innovate. The same is true for the communities of many less-widely-used languages.
This has been a rather cynical post, I realize. I'm not sure there's any recommendations here: I'm certainly not going all-in on learning Haskell and using it to innovate, myself. CRUD apps pay the bills and innovation is risky. I find interest and novelty in other areas of my life.
If you like math/category theory, go deep on the math itself. Your knowledge will be transferable to more than just some man-made story (like a programming language).
Hm interestingly I actually felt like learning Haskell had a huge benefit to my day job, probably much more than I imagine learning x86 assembly would. (Though admittedly I have learned some assembly in the past and that was also helpful).
I feel like Haskell forced me to write better code by forcing me to think about side effects. I don't know that I would actually use it in a production project because unfortunately real projects often rely a lot on state, even if constrained to a subsystem, and I still find it difficult to reason about the performance of a Haskell program.
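A small sketch of what "forced to think about side effects" cashes out to in practice (the function names are made up for illustration): the type signature itself separates pure logic from effectful code.

```haskell
-- The signature says `summarize` cannot perform IO; the compiler
-- enforces that, so the logic is trivially testable in isolation.
summarize :: [Int] -> String
summarize xs = "count=" ++ show (length xs) ++ " sum=" ++ show (sum xs)

-- Only functions whose types mention IO may have effects, which keeps
-- the stateful surface of a program explicit and small.
report :: [Int] -> IO ()
report xs = putStrLn (summarize xs)
```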
Not trying to invalidate your point; perhaps you were already very good at what I learned via Haskell =] I admit I also find Haskell much more enjoyable to program in for the most part.
I see, but wouldn't that be possible by "forcing" yourself to think about side effects in the language you were already using?
I mean, at the end of the day, it's learning the functional programming concepts that is supposedly going to make you a better programmer. Why not just start learning those in Java, Python, JS, etc.?
One thing I hear a lot from people who've learnt Haskell is they admit they're probably never going to use it in a real-world project (for so many reasons). Then, isn't it inefficient and a waste of time to learn parts of Haskell that are only found in Haskell?!
If I were to learn FP (which I will soon), I'd choose to learn it in the language I'm using now. It's not only more efficient, but also I'd enjoy being able to put what I've learnt in practical use.
That could totally be possible for many, but for me, I need something more concrete. I had read about Haskell and the benefits of immutability and agreed from a high level, but until I actually used it, I didn't feel like I understood it.
Because every OOP/imperative programmer I've known over 18 years takes the easy way out of not thinking about side effects or immutability, and they never proactively reach for them.
Granted my bubble is not representative of the world but this trend is nevertheless quite telling. I also never proactively reached for FP techniques in OOP/imperative language until I learned my first FP language.
You might think that this isn't worth the mathematical rigamarole that comes with it and that it's grown up with, but as people have seen in a number of other HN threads, formal methods are having a renaissance now and the tools we have that engage with them can get us a lot further than they could in the 1960s.
I have written many bugs, of many different kinds, in other languages that could have been detected automatically by a type system like Haskell's. I'm not suggesting that that makes the other languages inherently bad, or that other programmers (or I) couldn't adopt other methods that would also help avoid these errors, but I think the ease with which Haskell's approach can do it is something interesting to consider.
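One hypothetical example of such a bug class (the types here are invented for illustration): two quantities that share the same representation can be silently swapped at a call site, but giving each a zero-cost newtype wrapper turns the swap into a compile error.

```haskell
-- Both are Doubles underneath, but the newtypes make them distinct
-- types, so `lineTotal (Quantity q) (Price p)` will not compile.
newtype Price    = Price Double
newtype Quantity = Quantity Double

lineTotal :: Price -> Quantity -> Double
lineTotal (Price p) (Quantity q) = p * q
```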
Program verification and functional programming are separate things. Ada predates Haskell by at least a decade, and it's not functional at all. Rust is kind of a revival of that in proving memory safety via borrowing; not functional either.
But the set of programs which can be formally proven is smaller than the set of all programs, so I'd rather not miss out by only making formally proven software. (The entire field of deep learning is a good example of useful code which can't be formally proven.)
> Pure functions by definition adhere to dependency injection and single responsibility principle.
Dependency injection and SRP are other man-made constructs with dubious utility in the same vein as functional programming.
> There's a reason that lambda have become table stakes for new programming languages, and that's because composing functionality is a generally useful feature
Lambda in functional programming is supposed to be a primitive you use to do everything ahem y combinators ahem. In Java 8/C++11/Swift, the lambdas you speak of are used only as embedded subroutines.
Functional programming selects for better-than-average programmers to begin with: the "programmer's programmer" who writes code for fun and probably visits this site. You're unlikely to convince the person who writes enterprise .NET, has never used anything but Windows, and never opens a text editor after work to learn Haskell. The functional programmers were already good before they became functional programmers. Then you get the cargo cult type of intrigue. "X writes really good code. X also uses $FP_LANG! Be more like X!"