
What Is Good About Haskell? - nuriaion
https://doisinkidney.com/posts/2019-10-02-what-is-good-about-haskell.html
======
Beltiras
A friend of mine is always trying to convert me, and asked me to read this
yesterday evening. This is my take on the article:

Most of my daily job goes into gluing services (API endpoints to databases or
other services, some business logic in the middle). I don't need to see yet
another exposition of how to do algorithmic tasks. Haven't seen one of those
since doing my BSc. Show me the tools available to write a daemon, an http
server, API endpoints, ORM-type things and you will have provided me with
tools to tackle what I do. I'll never write a binary tree or search or a
linked list at work.

If you want to convince me, show me what I need to know to do what I do.

~~~
mumblemumble
So, I love me some ML-style languages, including Haskell, but I've also come
to think that Rich Hickey is right about the real problems of business
programming not being well solved by digging in on things like static typing.

For example, pattern matching against static types is cool, but pattern
matching directly against data, Clojure-style, is even cooler. One makes the
code a bit more concise and readable, but not necessarily a whole lot more
maintainable. The other takes one of the more annoying and error-prone
portions of my (say) Java code and renders it _far_ more manageable.

There's a recent LispCast that talks about this a bit:
[https://lispcast.com/what-is-data-orientation/](https://lispcast.com/what-is-data-orientation/)

~~~
chrisseaton
What's the difference between 'business programming' and other types of
programming? I don't really know what distinction people are trying to make
here.

~~~
fulafel
Dealing with the messy real world with exceptions to rules and evolving shape
of data vs writing compilers and other internally consistent closed systems.

~~~
platz
If you have messy/dirty data then you just use an associative data structure
like a Map in Haskell, just like any other language.

~~~
fulafel
Sure, but the idea is that the idiomatic way of working in the language
accommodates passing around data that is not necessarily closed in shape.
I.e. intermediate functions will by default also pass along attributes that
they have no knowledge of, for example. And checking data shape conformance
is customizable (via the "spec" system).
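
A minimal Python sketch of that open-map idea (the `add_total` helper and the
order shape are hypothetical): the function enriches a record while passing
along keys it knows nothing about.

```python
# An intermediate function that adds one derived attribute and passes
# every other key through untouched -- it needs no closed schema.
def add_total(order):
    return {**order, "total": sum(order["prices"])}

order = {"id": 42, "prices": [3, 4], "coupon": "XYZ"}  # "coupon" unknown to add_total
enriched = add_total(order)
# enriched still carries "coupon", even though add_total never heard of it
print(enriched["total"], enriched["coupon"])  # 7 XYZ
```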

~~~
platz
OK, let's not do the runtime vs compile checks thing here. I was just
pointing out that there are options that solve similar problems. There are
other ways to deal with sub-functions not needing access to the whole
structure as well. But let's not expect Haskell and Clojure to have exactly
the same features.

If you want to use clojure, then go for it. Use what you want to use.

~~~
mumblemumble
I think that, at least insofar as I understand the problem I was trying to
speak to, it's so deeply entangled with the runtime vs compile checks thing
that it's impossible to have a coherent discussion without dealing with the
subject head-on.

Here's where I come down on it:

There are some kinds of projects where you can cut off most potential problems
at the pass with compile time checks. In those cases, yes, you absolutely want
to statically render as many errors as possible impossible. Compilers come to
mind as a shining example here.

There are others where the nastiest bits invariably happen at run time,
though. And, for a significant number of those, the grottiest bits fall under
the general category of "type checking" - not checking types in the structure
of the code itself, per se, but checking types in the actual data you're being
fed. And, since you don't get fed data until run time, that means all that
type checking has to be done at run time. There's no sooner time at which it's
possible. There's some tipping point where that becomes such a large portion
of your data integrity concerns that it's just easier to delay all your type
checking until run time, so that you are dealing with these things in a
single, clear, consistent way. If you try to handle it in two places, there's
always going to be a crack between them for things to slip through.
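
As a concrete sketch of that point (the `validate_user` function and its
field names are made up): the shape of externally supplied data can only be
checked once it arrives, regardless of how the program itself is typed.

```python
# Data fed to us at run time (e.g. parsed JSON) has to be shape-checked
# at run time; no compiler has ever seen it.
def validate_user(raw):
    errors = []
    if not isinstance(raw.get("name"), str):
        errors.append("name must be a string")
    if not isinstance(raw.get("age"), int):
        errors.append("age must be an integer")
    return errors

print(validate_user({"name": "Ada", "age": 36}))  # []
print(validate_user({"name": 7}))                 # both checks fail
```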

~~~
platz
I am sorry that Haskell and Clojure people have to fight. You don't see me
telling Clojure folks when and where to use the tools they enjoy working with.

I think Haskell is an excellent language for servers and APIs. It really
excels as a backend language. So, I'm sorry you think Haskell is only good
for compilers, but I think the range of use cases it's good at is much
broader than "compilers".

Haskell is best thought of as a better Java. I wouldn't select it for every
problem, but server APIs and backend work are a really good fit.

Also I think Clojure is great. We can both co-exist in this world though. It
is possible.

It's unfortunate the OP picked on Python - it's not the style of post that I
would write.

~~~
mumblemumble
I am sorry that you somehow think this has become a Haskell vs Clojure fight.
Me, I actually use Haskell a lot more than I use Clojure, and generally think
it's a great language with a lot to offer.

But I also believe another very important thing: There is no silver bullet.

Because I believe that, I am able to recognize that even the things I love
have limitations. I don't believe this should be a fight, which is why I
think I should be able to articulate what I have found to be the limitations
of a tool, and acknowledge that some other tool that other people like might
have something to offer in this area - _without_ being perceived as a hater
for doing so.

------
gabipurcaru
I've been using Haskell for quite a bit in production. My personal take, as an
engineer who is generally skeptical of fancy language features:

Plus:

- The type system. It can make your life a huge pain, but in 99% of cases,
if the code compiles, it works. I find writing tests in Haskell somewhat
pointless - the only place where they still have value is in gnarly business
logic. But the vast majority of production code is just gluing stuff
together.

- Building DSLs is extremely quick and efficient. This makes it easy to
define the business problem as a language and work with that. If you get it
right, the code will be WAY more readable than in most other languages, and
safer as well.

- It's pretty efficient.

Minus:

- The tooling is extremely bad. Compile times are horrendous. Don't even get
me started on Stack/Cabal or whatever the new hotness might be.

- Sometimes people get overly excited about avoiding do notation, and the
code looks very messy as a result.

- There are so many ways of doing something that a lot of the time it becomes
unclear how the code should look. But this is true in a lot of languages.

~~~
atoav
I never really understood why I would want a type system until I learned Rust
and was forced to learn one. Now I don't understand what I was thinking
before.

~~~
mehrdadn
What were you using before?

~~~
atoav
Mostly Python and C. With Python I didn't really have a type system, and with
C I never truly realized what the type system could give me.

~~~
fulafel
I suspect one would learn an appreciation of limited type systems from the
other direction, by using a typeless language.

~~~
leshow
I wouldn't. Why would you want to take errors that happen at compile time and
turn them into runtime errors? In dynamic languages you still have type
invariants; it's just that they are invisible and can break your code at
runtime.
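
A small illustration of such an invisible invariant in Python (the `total`
function is a made-up example):

```python
# The invariant "prices holds numbers" exists whether or not it is
# written down; here it stays invisible until bad data reaches sum().
def total(prices):
    return sum(prices)

print(total([1, 2, 3]))  # 6 -- fine
try:
    total([1, "2", 3])   # the broken invariant only surfaces at run time
except TypeError as exc:
    print("runtime failure:", exc)
```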

~~~
cogman10
A few things come to mind.

For a dynamically typed language, a REPL ends up being essential. In a lot of
ways, REPLs can be a superior form of programming. They are often much harder
to get working with statically typed languages (often too much ceremony
around the types).

The other thing that comes up is that sometimes those compile-time errors are
somewhat pointless. For example, in many cases the difference between an int
and a long is completely inconsequential. Further, whether your type is a Foo
with a name field, a Bar with a name field, or a Named interface simply does
not matter; you just want something with a name field. While static typing
would catch the case of passing in something without the name field, it
unnecessarily complicates things when you want to talk about "all things with
a name field" (think of wanting to move Foo, rename Bar, etc.).
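
For what it's worth, some static systems handle the "all things with a name
field" case structurally; a sketch using Python's `typing.Protocol` (all
names here are made up, and the check is performed by external tools like
mypy rather than at run time):

```python
from typing import Protocol

class HasName(Protocol):
    name: str

def greet(x: HasName) -> str:
    # Accepts any object with a `name` field; Foo and Bar need not share
    # a nominal interface for a type checker to accept both.
    return f"hello, {x.name}"

class Foo:
    def __init__(self):
        self.name = "foo"

class Bar:
    def __init__(self):
        self.name = "bar"

print(greet(Foo()), greet(Bar()))  # hello, foo hello, bar
```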

Then there are the new concepts you need to learn. With dynamic typing, you
can write "add(a, b) { return a + b; }". But how do you do that with static
typing? Well, now you need to talk about generics. But what if you want to
catch instances where things are strictly "addable"? Now you are looking at
constrained generics. But what if you want a specialized implementation? Now
you are potentially talking about method overloading. What if you want a
different style of adding? Now you might be talking about plugging in traits.
Typing and type theory have a tendency to require not only that you learn a
whole bunch of concepts, but also that you learn how to correctly use those
concepts.

It is no wonder dynamic typing has its appeal. Dynamic languages are
generally low on ceremony and cognitive burden.

I say all this as someone who likes static typing. I just want to point out
that dynamic typing has its appeal. Obviously, the big drawback is when you
come back to a dynamically typed language and you want to fix things. It can
be insidiously hard to figure out how things are tied together, and you get
no aid from the language.

~~~
gmfawcett
> Then there is the new concepts you need to learn. With dynamic typing, you
> can write "add(a, b) { return a + b; }". But how do you do that with Static
> typing? Well, now you need to talk about generics. But what if you want to
> catch instances where things are strictly "addable?" now you are looking at
> constrained generics...

You lost me here. All of those "what if's" seem to apply equally to
dynamically typed languages. If I want "a + b" to work with two Python classes
that I just wrote, I'm probably going to have to implement __add__ methods on
both classes, and possibly with non-trivial implementations. It's not like
dynamic typing makes everything magically addable, with no burden on the
developer. Wouldn't you agree?
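
To make that concrete, a sketch of the work dynamic typing still demands (the
`Meters` class is hypothetical):

```python
# Two freshly written values only become "addable" once __add__ is
# implemented by hand -- dynamic typing does not conjure it up.
class Meters:
    def __init__(self, n):
        self.n = n
    def __add__(self, other):
        if not isinstance(other, Meters):
            return NotImplemented  # refuse nonsense like Meters + str
        return Meters(self.n + other.n)

print((Meters(2) + Meters(3)).n)  # 5
```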

Not to mention that languages such as Haskell and OCaml have REPLs too. They
are not as robust as, say, Common Lisp's -- but REPL-driven development is
hardly a stranger in the statically typed camp.

I agree that both camps have their appeal, though!

------
tom_mellior
This is a good discussion of a problem that suits Haskell well, but it's
unfair to Python in some respects.

For example: I haven't read or written a lot of Python in a while, but would
Python programmers really want to implement mutation in such a class by
copying dicts around? The hand-wringing about "oh no, I wrote the condition as
_not_ is_node" is silly since one could just define an is_leaf method that can
be used without negation. And "changing heapToList to return a lazy sequence
makes it no longer return a list, oh no!" is just as silly, since one would of
course not do that but define a separate heapToSequence (and probably base
heapToList on that).
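
In Python terms, that separation might look something like this sketch (a
tuple-based skew heap in the article's non-mutating style; all the names are
hypothetical):

```python
# A tuple-based heap: (value, left, right), with None as the empty heap.
def merge(lhs, rhs):
    if lhs is None: return rhs
    if rhs is None: return lhs
    if lhs[0] <= rhs[0]: return (lhs[0], merge(rhs, lhs[2]), lhs[1])
    return (rhs[0], merge(lhs, rhs[2]), rhs[1])

def insert(heap, x):
    return merge(heap, (x, None, None))

def heap_to_sequence(heap):
    # Lazy: yields elements on demand instead of building a list.
    while heap is not None:
        value, left, right = heap
        yield value
        heap = merge(left, right)

def heap_to_list(heap):
    # The eager version keeps its list-returning contract.
    return list(heap_to_sequence(heap))

h = None
for x in [5, 1, 4, 2]:
    h = insert(h, x)
print(heap_to_list(h))  # [1, 2, 4, 5]
```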

Also: "pattern matching and algebraic data types which have massive, clear
benefits and few (if any) downsides". They have downsides whenever your data
is not tree-shaped. Yes, a _lot_ of data is tree-shaped, but then a lot
isn't. I work in compilers, which is often touted as a prime application of
ML-family languages, and this is very true, but not 100%. If you can have
sharing in your abstract syntax tree (like a goto node that might refer to a
target node somewhere else in the tree), you start to have difficulties. And
you get even more difficulties when you try to model control flow graphs and
the like. Nothing insurmountable, but still things where it's suddenly good
to have tools other than only algebraic datatypes. OCaml is good in this
regard.

~~~
platz
You are free to use a Map in Haskell whenever it suits you. Nothing requires
you to use ADTs when it would not make sense to do so.

~~~
tom_mellior
Sure, I said the issue was not insurmountable. But (and yes, I know Haskell
programmers won't necessarily agree) there are contexts in which following a
direct, mutable reference is superior to indirecting through a map.

~~~
platz
Sure, desktop GUI programming tends to be one of those contexts. But I think
those contexts are much rarer in server APIs, and the internet has shifted
the focus of programming from GUIs to APIs.

I wouldn't recommend Haskell for a desktop GUI, but it's excellent in the API
world. It's more of a backend language.

~~~
leshow
There are some really cool things in haskell-gi. Haskell has the ability to
make some wonderful DSLs. Have a look: [https://haskell-at-work.com/episodes/2018-11-13-gtk-programm...](https://haskell-at-work.com/episodes/2018-11-13-gtk-programming-with-haskell.html)

------
h91wka
Haskell is a beautiful research language. Its use case is to supply subjects
for PhDs and MScs, which it fulfills perfectly. Also, it's extremely fun to
learn and play with.

I would never bring it to production though, for the following reasons:

1) Production code should be understandable by an on-call person at 4 am. If
the business logic is buried under layers of lenses, monad transformers and
arrows, good luck troubleshooting it under stress. And real systems do break,
type safety notwithstanding.

2) It's a research language, and a big part of the research is about control
flow. Haskell therefore has _way too many_ ways to combine things: monad
transformers of different flavors, applicative functors, arrows, iteratees,
you name it. And the libraries you find on Hackage choose _different_ ways to
combine things. In business code you probably want to combine multiple
libraries, and you inevitably end up with an unholy mess of all these
approaches. Dealing with it takes more time than writing the business logic.

3) Developers look at these fancy research papers and try to reproduce them.
As a result, very basic things become extremely hard and brittle. I saw a
real-life example where applying a transform to all fields of a record took a
team 2 days of discussion in Slack, because "writing this manually?! it won't
scale to a record with 100 fields".

4) Architecture is extremely sensitive to initial choices, due to the
isolation of side effects. If you suddenly need to do something as simple as
logging or reading a config in a place where it wasn't originally
anticipated, you're in for a bumpy ride.

~~~
rishav_sharan
Agree with #1. I think Elm is probably the best functional language in this
regard. It contains all the important bits yet is small and opinionated
enough to be easily readable.

If only it weren't web-only and run as a private project. An Elm-like
language with an LLVM backend would be amazing.

~~~
platz
Elm is like the Go of the FP world. It really works for some people, but
others find working in a kneecapped language infuriating.

~~~
rishav_sharan
IMO it's the best language for absolute beginners to FP.

------
whateveracct
I've worked at multiple Haskell shops. They've all had problems. None of the
problems were actually attributable to Haskell, but Haskell was an easy
target for blame by management. I'd say that's Haskell's biggest problem in a
professional setting.

I know Haskell and Go about equally. Meaning I know all the language features
& common libs & have a grip on their runtimes. Go on day 1 was way easier to
write and understand than Haskell on day 1. Now that I've normalized their
learning curves, Go didn't get much easier to work with. Haskell did - I use
my brain way less when writing Haskell than when writing Go.

But even then, Haskell or Go. It's all just the same stuff.

I've pretty much given up talking to people on the Internet about Haskell,
and arguing against some of the points in this thread about how Haskell isn't
good for production.

I'll continue to write Haskell for pay for the foreseeable future. If I'm
lucky, I'll do it the rest of my career. I don't see any reason why not.

~~~
proc0
> ..but Haskell was an easy target for blame by management. I'd say that's
> Haskell's biggest problem in a professional setting.

I would say that's a software company's biggest problem instead. I really
don't understand how someone with little or no coding experience can be in a
leadership position with other programmers who are supposed to be problem
solvers and have all the details of how to build something. The bigger the gap
in knowledge, the more communication breaks down and the hierarchical
structure becomes useless.

~~~
whateveracct
> someone with little or no coding experience can be in a leadership position

I had one where the leader in question had coding experience but didn't know
Haskell at all.

Despite this [1], they tried to read some code (that the Haskellers had no
issues with) and couldn't, so they deemed the codebase unreadable and
eventually called for a rewrite. Worse, during the debate about the rewrite,
there was constant discussion about how the code was unreadable and bad, but
it was never sourced to this leader. Instead it was asserted with weasel
words only.

[1] Maybe it was because of this... this leader was driven by confidence in
their experience and seniority.

------
lidHanteyk
Python may be terrible, but this poster doesn't know Python at all. Here's how
to fix the first snippet:

    
    
        leaf = object()
    

Huh, that's funny, it's _shorter_ than the Haskell? Why is that? Let's keep
going.

    
    
        def merge(lhs, rhs):
            if lhs is leaf: return rhs
            if rhs is leaf: return lhs
            if lhs[0] <= rhs[0]: return lhs[0], merge(rhs, lhs[2]), lhs[1]
            return rhs[0], merge(lhs, rhs[2]), rhs[1]
    

Ugh. My mouth tastes funny. Livecoding on this site is always disorienting. I
need to sit down for a bit. Exercise for the reader: Continue on in this style
and figure out whether the Haskell really deserves its reputation for
terseness and directness.

Edit: I kept reading and was immediately sick. __dict__ abuse is a real
problem in our society, folks. It's not okay.

    
    
        def insert(tree, elt): return merge(tree, (elt, leaf, leaf))
    

The Haskell memes are growing stronger as I delve deeper into this jungle. The
pop-minimum function here is a true abomination, breaking all SOLID principles
at once. I can only imagine what it might look like in a less eldritch
setting:

    
    
        def popMin(tree):
            if tree is leaf: raise IndexError("popMin from leaf")
            return tree[0], merge(tree[1], tree[2])
    

We continue to clean up the monstrous camp.

    
    
        def listToHeap(elts):
            rv = leaf
            for elt in elts:
                rv = insert(rv, elt)
            return rv
    

The monster...they knew! They could have done something better and chose not
to. They left notes suggesting an alternative implementation:

    
    
        def listToHeap(elts): return reduce(insert, elts, leaf)
    

Similarly, if we look before we leap:

    
    
        def heapToList(tree):
            rv = []
            while tree is not leaf:
                datum, tree = popMin(tree)
                rv.append(datum)
            return rv
    

And again, the monster left plans, using one of the forbidden tools. We will
shun the forbidden tools even here and now. We will instead remind folks that
Hypothesis [0] is a thing.

Haskell's an alright language. Python's an alright language. They're about the
same age. If one is going to write good Haskell, one might as well write good
Python, too.

[0] [https://hypothesis.works/](https://hypothesis.works/)

~~~
oisdk
I really wasn't trying to compare Python to Haskell, rather I was trying to
show a few example features in Haskell with the Python code as a reference for
the "standard" way to do a binary tree type thing. Other than the (admittedly
awful) `__dict__` stuff, the rest of it is pretty standard. In contrast, the
code you've written here is non-mutating, and uses tuples to represent a tree.
If you were to google, say, "BST in Python", I'd wager almost none of the
implementations would follow that style. If I were to write a skew heap in
Python (one that I intended to use), I would likely do it in a non-mutating
way (although I certainly wouldn't use tuples and `leaf = object()`).

The point of the post was really to argue that simple features like pattern
matching, ADTs, and so on, should be in languages like Python and Go. Also I
wanted to make the point that functional non-mutating APIs could be simple and
tend to compose well: the `unfoldr` example was all about that. In that vein,
it was important that I compare the Haskell code to an imperative version.

For instance, with your `reduce` improvement: I _agree_ that the `reduce`
version is better! It's simpler, cleaner, and easier to read. But Python these
days is moving away from that sort of thing: `reduce` has been removed from
the top-level available functions, and you're discouraged from using it as
much as possible. The point I was making is that I think that move is a _bad_
one.

Finally, while the Python code here is shorter, you still don't get any of the
benefits of pattern-matching and ADTs.

* You can only deal with 2 cases cleanly (what if you wanted a separate case for the singleton tree?).

* You are not prevented from accessing unavailable fields.

* You don't get any exhaustiveness checking.

~~~
lidHanteyk
Python has some basic pattern matching. ADTs are alright, but if you notice
that MLs implement them by tagged unions, then really this is a request for
syntax and ergonomics, not semantics.

Python is untyped. This fundamental separation between Python and Haskell is
non-trivial, and can't be papered over. Your complaints about exhaustiveness,
field existence, and case analysis are _all_ ultimately about the fact that
Python's type system is open for modification, while Haskell's is closed; in
Haskell, we can put our foot down and _insist_ that whatever we see is an
instance of something that we've heard of, but in Python, this is simply not
possible.
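
A tiny illustration of that openness (hypothetical `Config` class):

```python
# In Python the set of fields an object has is open: anything can be
# bolted on at run time, so "whatever we see" can never be pinned down
# the way a closed type system pins it down.
class Config:
    pass

c = Config()
c.retries = 3              # a field nobody declared
setattr(c, "timeout", 30)  # even the field *name* can be computed at run time
print(c.retries, c.timeout)  # 3 30
```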

I agree, when it comes to Python's moves. I am about ready to leave Python 2,
but I'm not going to Python 3.

~~~
oisdk
While I am all for stronger type systems, I don't agree that you need one to
do sum types. We can already do one half of ADTs (classes ~= product types);
I just want the other half!

In my mind, the syntax would be something like this:

    
    
        sum_class Tree:
            case Leaf:
                pass
            case Node:
                data: Any
                left: Tree
                right: Tree
    
        def size(tree):
            case(Tree) tree of:
                Leaf:
                    return 0
                Node(_, left, right):
                    return 1 + size(left) + size(right)
    

A combination of data classes and pattern matching.

------
fistOfKross
I got curious about Haskell some years ago because of a career's worth of
disappointment with other languages. I've put out some semi-big apps in this
language that have been working well in production. I code for fun, and
Haskell is the most fun. Sure, there have been a couple of problems with
tooling, and it's hard to find best practices, but I seem to be able to live
with that. I get a little depressed when having to work with other languages.
For me it's the end game.

------
ekimekim
I think Python was a poor comparison here. The claim "Python doesn't have
algebraic data types" doesn't really make sense. Algebraic data types let
something be an X or a Y. In Python, everything can be an X, or a Y, or a Z,
or anything else. In the tree example, an idiomatic tree structure in Python
would just be a 3-tuple (left, right, value). There's no need to define
anything.

Don't get me wrong, I'm all for algebraic data types and think every
statically typed language should have them, but it's nonsensical to talk about
them in the context of a dynamically typed language.

The author commented elsewhere on this page that the article was mostly a
response to missing these features in golang - I think that would've been a
much clearer comparison that showed what you were really missing.

------
zimbatm
One thing that makes programming difficult is how much context needs to be
held in the live part of the brain. Past N items (let's say 4), the brain has
to swap those values in and out, and that makes programming much, much
slower.

Most of the article is spent explaining this sideways: with Haskell you need
to hold fewer aspects in your head, because they are either eliminated
entirely (purity) or can be deferred to the compiler (types). What we learn
is that Haskell makes it easier to implement tree-like structures.

I think this is what most language proselytes are trying to convey in their
articles, but they don't talk about it explicitly. Inevitably they will pick
a task that is easy to achieve in the language, because the developer
environment aligns well for that use case, and then let the reader infer that
this applies to all programming tasks.

Basically we are trying to benchmark humans, without building a proper model
of how humans work. The next evolution in programming will be done by properly
understanding how we work and interact with the computer.

------
nicoburns
My Haskell isn't good enough to translate, but I'd love to see these examples
in Rust. I believe Rust has all of the Haskell features mentioned in this
article, but with a much more familiar syntax.

The tree data type in Rust could be:

    
    
        enum Tree<A> {
            Leaf(A),
            Node(A, Box<Tree<A>>, Box<Tree<A>>),
        }

~~~
sampo
Why should

    
    
        enum Tree<A> {
            Leaf(A),
            Node(A, Box<Tree<A>>, Box<Tree<A>>),
        }
    

be more "familiar" syntax than

    
    
        data Tree a
          = Leaf
          | Node a (Tree a) (Tree a)
    
    ?

~~~
tiborsaas
I know what enum is, but data for me is just 1s and 0s.

Are we assigning here with the =? What does the pipe symbol mean? Why a pipe
instead of another =? Why the weird formatting?

To me Haskell's look is too off-putting. With the Rust example I have a good
guess at what the resulting object will look like.

But I know it's just a learning experience. Once I knew Haskell, your example
would look more elegant to me. I just don't get it from looking at it :)

~~~
the_af
The formatting is no weirder than Python's.

You could ask many of the same questions of Python's non-C/non-Java like
syntax:

- What is "def"?

- Why the weird formatting? (And unlike Haskell's, Python's tends to be
stricter!)

- Why do I need to write ":" after some lines but not others? It doesn't work
like the semicolon in C-like languages!

- What's with the "if __name__ == '__main__'" weirdness I see in some Python
programs?

- What's this weird [f(x) for x in ...] syntax? It doesn't look like anything
in C. What's with the brackets anyway?

Etc.

Yet Python with its "weird" syntax and constructs is a hugely popular
language...

~~~
tiborsaas
I tend to agree, but Python is much easier to google:

[https://www.google.com/search?q=haskell+pipe+operator&oq=has...](https://www.google.com/search?q=haskell+pipe+operator&oq=haskell+pipe+operator)

[https://www.google.com/search?q=__name__+%3D%3D+%27__main__&...](https://www.google.com/search?q=__name__+%3D%3D+%27__main__&oq=__name__+%3D%3D+%27__main__)

~~~
the_af
Python is more popular than Haskell, which is why it's easier to google.

Do note Haskell tutorials and communities abound, and you have excellent
online tools such as Hoogle (in which you write the type of what you think you
want and it responds with "these are functions with a similar type signature,
with their documentation"). It's easy to google Haskell things, just not as
easy as googling Python things :)

Do note the type definitions from the example are Haskell 101 and will be
covered very early in almost every tutorial, for example Learn You a Haskell.

PS: it's not a "pipe operator" you're looking for. This isn't an operator at
all! The "|" you're looking for is in a definition, and it means a union of
alternatives (this type can be "this" or "that" or "this other thing"). If
you think about it, this "union-or" is written the same as the bitwise-or
from more popular languages :)

------
contravariant
Wait, why do the leaves contain no data? What's the point of even adding them
then?

With data in the leaves you could easily do something like:

    
    
        from dataclasses import dataclass

        @dataclass
        class Node:
            data: "Any"

        @dataclass
        class Tree(Node):
            left:  Node
            right: Node
    

using the new dataclasses module. You could also add a type parameter to the
above if you really wanted to.

The pattern matching will need to be done manually though, following Python's
philosophy of duck typing.

Edit: An alternative involves abusing the pattern matching that Python _does_
have, to write things like:

    
    
        data,*subnodes = myTree
        for node in subnodes:
           # etc
    

but whether that's really a good idea is debatable.

~~~
tom_mellior
> Wait, why do the leaves contain no data? What's the point of even adding
> them then?

You need something to represent a completely empty tree. You also need
something to represent, in a tree node, that "there is no child here". It
makes sense to use the same thing for both. In Python and other languages
with nullable references you can just use None (or null, or ...) for this.
But ML-family languages have no nullable references. You could use option
types instead, but that would look pretty complex, something like (my Haskell
is rusty):

    
    
        type Tree a = Maybe (TreeStructure a)
    
        data TreeStructure a =
          TreeNode a (Maybe (TreeStructure a)) (Maybe (TreeStructure a))
    

(You should be able to use Tree a in the definition of TreeNode, but I wanted
to show the "real" structure, which you would also have to care about when
pattern matching.)

It's easier to have an empty leaf instead. Personally I would possibly call it
EmptyLeaf or maybe NoTree instead of just Leaf.

~~~
contravariant
At this point we're basically discussing what kind of tree you want,
including whether you truly need an empty tree (which is _maybe_ necessary if
you want to do Monad-like operations that don't return a result, but doing
that on a tree is pretty weird in the first place, and I'm not entirely sure
what you'd do if a node in the middle of the tree returned Nothing - do you
just throw away its children?). The best design will depend on what you need
a tree for, as well as on what language you will implement it in.

But yeah, if you want an empty tree I'd recommend just using None in Python.
You'll just need checks like 'if node' where you'd otherwise have pattern
matching; you could improve QoL a little by defining an iterator over the
existing children. If you really want a separate object then you run into all
kinds of annoying stuff, including the fact that by default Python will
allocate separate objects for all of them, which is 1) slow and 2) bad for
memory usage.

~~~
tom_mellior
> if you truly need an empty tree

I'm sure in Python you regularly have uses for empty lists, dicts, and
strings. (Probably not empty tuples, but what do I know.) Why would empty
trees be particularly strange or exotic?

~~~
contravariant
For what it's worth I prefer to use empty tuples as a cheap empty iterator
(empty lists are mutable and not unique, so the empty tuple is a bit nicer).

The concept of an empty tree is not too exotic, but I'm struggling a bit to
find a use for it (which is not to say that there isn't one). What makes it
different in my mind is that with lists/dicts/strings it makes perfect sense
to filter them, which is a bit weirder with trees. It's easy to imagine a
scenario where you filter a list and end up with no items; I'm struggling a
bit to figure out what happens if you filter a tree. Do you just throw out
the entire subtree if one of the parents is filtered out? It seems to me that
the answer depends strongly on what you want to use the tree for, and if you
know that, you will likely also know whether you need an empty tree or not.

Just to illustrate why the 'remove the subtree if the parent is filtered away'
option is not obviously the correct one, consider that it also makes sense to
just remove the node and return a list of disjoint trees. In that case the
'empty' case is just an empty list of trees.
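
Both semantics are easy to sketch side by side; here is a hypothetical
rose-tree `Node` in Python (none of these names come from the thread's gist):

```python
class Node:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def filter_prune(node, keep):
    """Option 1: a filtered-out node takes its whole subtree with it.
    Returns a new tree, or None when the root itself is removed."""
    if not keep(node.value):
        return None
    kept = [t for t in (filter_prune(c, keep) for c in node.children)
            if t is not None]
    return Node(node.value, kept)

def filter_forest(node, keep):
    """Option 2: a removed node splices its surviving children upward,
    so the result is a list of disjoint trees; 'empty' is just []."""
    survivors = []
    for c in node.children:
        survivors.extend(filter_forest(c, keep))
    if keep(node.value):
        return [Node(node.value, survivors)]
    return survivors  # node removed: its filtered children become roots
```

For `keep = lambda v: v != 2` on the tree 1 -> [2 -> [4, 5], 3], pruning
yields 1 -> [3], while splicing yields 1 -> [4, 5, 3].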

~~~
tom_mellior
Filtering any data structure means building up a new filtered copy, not
removing from the current one, no?

    
    
        >>> x = [1, 2, 3, 4]
        >>> y = list(filter(lambda n: n < 3, x))
        >>> x, y
        ([1, 2, 3, 4], [1, 2])
    

Same with trees: No removal is needed for filtering, you could implement it
something like:

    
    
        def filter_tree(f, tree):
            # new_empty_tree() and .add() stand in for whatever tree API you
            # have; the point is building a fresh tree, not mutating `tree`.
            filtered_tree = new_empty_tree()  # it's useful if this is *not* None!
            for element in tree:              # assumes the tree is iterable
                if f(element):
                    filtered_tree.add(element)
            return filtered_tree
    

Regardless, removal of individual, even internal, nodes from trees is of
course possible without removing entire subtrees. The details depend very much
on the actual kind of tree, but you can start at
[https://en.wikipedia.org/wiki/Binary_tree#Deletion](https://en.wikipedia.org/wiki/Binary_tree#Deletion)

------
axilmar
To play the devil's advocate here: no post speaks about any serious
disadvantages of Haskell.

So, for a programming language that has so much to offer, why isn't it adopted
more?

Could it be that it doesn't actually increase productivity to such a degree to
justify the cost of change?

Legit question, I am not trying to be a troll.

~~~
reikonomusha
I can’t say for Haskell, but I can say for Lisp, which similarly gets these
“look how great” articles but also similarly doesn’t see a huge up-tick in
usage.

Whether you’re a student fresh out of school, or you’ve been unemployed for 10
years as a sysadmin, or you’re an expert programmer already, Common Lisp is
accessible to you. Those examples are real; they’re backgrounds of folks I
either used to or currently work with. Being paid helps immensely.

So it’s not that the language is unlearnable, unreadable, or out-of-reach.
(The pot-shots that random commenters in forums like these take on the
language are usually shallow or even outright wrong.) In some cases, it’s even
demonstrated to be asymptotically more productive.

So what’s the deal? I personally think it’s just that productivity isn’t in
and of itself incentivizing enough. You know Python and C++, you’re relatively
proficient at them, you know how to get the job done with them, why learn
something new? Haskell/Lisp won’t get you a job (necessarily), it won’t allow
you to do something with a computer that you fundamentally couldn’t do before,
and it’ll suck up a lot of your time to figure out what’s going on with them.
Moreover, there’s no big organization behind it (like Mozilla or Facebook or
Microsoft or ...) so where’s the credibility? A bunch of researchers at some
university? A bunch of open-source hackers?

I think one has to be personally invested in becoming a broadly more
knowledgeable and more skilled programmer, just for the sake of it, and (IME)
most people aren’t like that. I think one has to also have a penchant for
finding simpler or more fundamental ways of solving problems, and that has to
turn into an exploratory drive. Even if one _is_ like that, learning Haskell
is one of a hundred possible paths to improve oneself.

My comment shouldn’t be misconstrued as supporting a mindset of it being OK to
just know a couple languages well. I think the hallmark of an excellent
programmer is precisely this broad knowledge of not just tools, but ways of
thinking, in order to solve the gnarly problems that come up in software.

------
cannabis_sam
It’s hilarious to read all kinds of rationalizations for why haskell is not
useful.

Servant + Aeson beats any api backend in any language, period. If you combine
that with Elm on the frontend you’ve got a great onboarding path for new devs
to learn enough haskell to work on the backend.

Of course, for production, ignore lenses, monad transformers, free monads,
effect systems, etc. They’re awesome, but the complexity is not worth it in
practice at this time.

------
acroback
This article reminds me of my coworker, who touts Haskell all the time but
ends up writing shitty Java code which is difficult to understand and performs
like molasses.

Real world is not perfect, it is immutable with plenty of side effects. Using
Haskell for day-to-day messy work is not trivial and should not be considered
IMO, unless you have Haskell gurus all around.

I would rather take a dumb language like Go or Java over Haskell for work
code.

~~~
mbo
> Real world is not perfect, it is immutable with plenty of side effects

Perhaps we should use a language that is immutable and can reason about side-
effects. Like Haskell?

------
TheSmoke
offtopic: if anyone's interested -- Standard Chartered Poland is looking for
Haskell hackers:
[https://twitter.com/MikolajKonarski/status/11782723158152192...](https://twitter.com/MikolajKonarski/status/1178272315815219200)

------
chungus
tldr: I am NEVER nervous about refactoring some Haskell code.

Good:

After working in a variety of organizations using both typed and dynamic
languages, I'm now writing all my back-end code in Haskell. I'm becoming more
and more convinced that for multi-year, multi-programmer applications (a
language like) Haskell is the only way to keep development sustainable while
still being able to add features.

Stephen Diehl has a great writeup on what he wishes he knew when he was
learning Haskell:
[http://dev.stephendiehl.com/hask/](http://dev.stephendiehl.com/hask/)

It's difficult to say to someone "Just go read books for a couple of months,
because you need to understand purity, laziness, cross compilation, monad
transformers (go read The Book of Monads), 20+ language pragmas, etc."

It does however feel like I'm learning useful stuff, and it's a lot of fun to
get an executable that runs FAST.

~~~
gnud
ML seems to be a much more developer-friendly approach to me. Eager
evaluation, no problem "escaping" to imperative code, no "purity". All the
benefits of functional programming, a good type system, and a great module
system.

I'm constantly sad that Standard ML is so outdated. No good tooling, no real
unicode support, etc.

At least we have ocaml and F#.

~~~
toastal
PureScript is a strict ML too, and even closer to Haskell than F# or OCaml:
it doesn't have the object-oriented bits and is pure (it doesn't allow the
same level of side effects, like mutability, that those two do, without very
explicit 'unsafe' functions).

------
MadWombat
This is about as far as I got with this...

"While it solves the problem of methods, and the mutation problem, it has a
serious bug. We can’t have None as an element in the tree!"

Uhm... what?! Why not? You check whether your left and right are None: if
they are, it is a leaf, and if they are not, it is not. What your data value
is doesn't matter, and you can have as many None values in the tree as you
like. Your tree doesn't need leaves to define where it ends; it ends when
there are no more branches.

~~~
oisdk
What about the singleton tree with just None in it? As in, what's the
difference between `node(None, leaf(), leaf())` and `leaf()`?

Or the following:

    
    
                 x
                / \
               /   \
              /     \
             y       None
            / \      / \
           /   \    /   \
        Leaf  Leaf Leaf Leaf
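
The collision is easy to make concrete. In a hypothetical Python encoding
where "value is None" marks an empty node, a tree meant to *store* None
becomes indistinguishable from the empty tree:

```python
class Node:
    def __init__(self, value=None, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def is_empty(node):
    # The problematic convention: a None value marks an empty node/leaf.
    return node.value is None

empty = Node()               # meant as the empty tree
singleton_none = Node(None)  # meant as a one-element tree storing None

assert is_empty(empty)
assert is_empty(singleton_none)  # oops: the two cases collapse into one
```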

~~~
MadWombat
This

    
    
                 x
                / \
               /   \
              /     \
             y       None
            / \      / \
           /   \    /   \
        Leaf  Leaf Leaf Leaf
    

Looks like this.

    
    
                 x
                / \
               /   \
              /     \
             y       None
    

There is no reason to explicitly define leaves.

~~~
oisdk
But then wouldn’t the right node with None in it be the same as a leaf?

Could you specify what you mean in code, maybe? If you don't explicitly
define leaves, then how can you represent the empty tree?

~~~
MadWombat
here, as I said, there is no need to represent an empty tree

[https://gist.github.com/MadWombat/798c6d993a7d2ac4ac74d6624a...](https://gist.github.com/MadWombat/798c6d993a7d2ac4ac74d6624a9a56b4)

~~~
oisdk
So you just throw an error if you try and sort the empty list?

Anyway, I'm pretty sure what you've said here was addressed in the article.
The version you've presented here is exactly the first alternative I showed,
which isn't as good as a version written with ADTs because you can't represent
an empty tree.

~~~
MadWombat
Doesn't have to throw an error; it could just return an empty list, doesn't
matter. All I am saying is that the problem the article describes, where the
lack of algebraic data types in Python supposedly makes it impossible to
describe the tree structure, doesn't exist. There is no need to explicitly
represent an empty tree. It is enough to know where your tree ends. Yes, it is
more elegant with ADTs, but it is not impossible without them.

Also, if I really wanted to implement a Leaf type, I would probably do it via
__new__ and a bit of metaprogramming.

------
zapzupnz
For those who were wondering about seeing this in Swift. You probably weren't,
and I'm aware I could've leaned a bit harder on things that are built in, but
I was basically trying to meet in the middle between representing the Haskell
code as written in the article and something vaguely Swifty.

[https://gitlab.com/snippets/1900852](https://gitlab.com/snippets/1900852)

------
euske
I think Haskell is conceptually cool, but I just can't stand its symbols and
grammars. To me, "=>" means comparison and "|" means an OR, and I'm cringed to
see $s and \s in a program code that makes it look like LaTeX. I also prefer
the boundary of terms and expressions to be consistently marked up with
parenthesis. I know this is just a matter of taste, but sometimes it affects
your motivation a lot.

~~~
jolmg
> "=>" means comparison

In what language? Did you mean ">=" or "<="? They mean the exact same thing in
Haskell.

> "|" means an OR

It also means the same thing in Haskell.

    
    
      data Maybe a = Nothing | Just a
    

"data of type `Maybe a` is either `Nothing` OR `Just a`."

    
    
      max a b | a > b = a
              | otherwise = b
    

"max of a and b is either a when a > b OR b otherwise."

    
    
      odd x || x >= 5
    

"x is odd OR x is greater than or equal to 5"

A cool thing about Haskell is that it lets you define new operators. In the
parsec package, you get the operator <|>, which keeps the meaning of OR but
works with parsers.

    
    
      csvCell = csvQuotedCell <|> csvNonQuotedCell
    

"a CSV cell is either a quoted cell OR a non-quoted cell".

> I'm cringed to see $s and \s in a program code that makes it look like LaTeX

I don't think they're so common that the comparison is valid. $s implies
Template Haskell, which should be used sparingly. \s implies lambdas, which
are nowhere near as common as the use of backslashes in LaTeX.

> I also prefer the boundary of terms and expressions to be consistently
> marked up with parenthesis.

The way you wrote this makes me think you prefer to write 2 + 5 * 2 as 2 + (5
* 2). You can do that, just like in any other language. I don't think it's
common to add parentheses redundantly in any language, though.

If you actually meant something like preferring f x y to be written as f(x,
y), it's not just a matter of taste. It's so the syntax makes sense with
partial application. You can do

    
    
      f x y = ...
      g = f x
      h = g y
    

and it would be more confusing to write

    
    
      f(x, y) = ...
      g = f(x)
      h = g(y)

~~~
tome
> $s implies Template Haskell

I think it's more likely to mean function application ...

~~~
jolmg
At least, I've never seen someone use the `$` operator and put the right
operand without any spacing in between. Not only would it cause errors when
Template Haskell is activated, because GHC would interpret it as a TH
splice[1], but it would also be weird to read, because one tends to use the
`$` operator on a multi-term expression one would otherwise have to
parenthesize. For example, take a look at this line in Yesod[2]:

    
    
      $logInfo $ pack $ show (a, b, c)
    

The one without a space is a Template Haskell splice and the ones with a space
are using the function application operator. If logInfo didn't need to be
expanded by Template Haskell and we deactivate that extension, we could write:

    
    
      logInfo $pack $show (a, b, c)
    

But I doubt anyone would because the implied parentheses around those
operators are:

    
    
      (logInfo) $((pack) $((show) (a, b, c)))
    

[1]
[https://downloads.haskell.org/~ghc/7.8.4/docs/html/users_gui...](https://downloads.haskell.org/~ghc/7.8.4/docs/html/users_guide/template-
haskell.html)

[2]
[https://github.com/yesodweb/yesod/blob/c8aeb61ace568cdc2bc81...](https://github.com/yesodweb/yesod/blob/c8aeb61ace568cdc2bc81d54f75c1a7d40ad3324/yesod-
core/helloworld.hs#L40)

~~~
tome
Yes, but I suspect euske, being unfamiliar with Haskell, just used an
unfamiliar form.

------
tu7001
I think we can code ADTs in Python: [https://github.com/lion137/Functional---
Python/blob/master/t...](https://github.com/lion137/Functional---
Python/blob/master/trees_as_class.py)

------
haolez
Haskell seems cool, but if I’m going to invest time in a different programming
paradigm, I’m more curious about APL/K/J. They seem more useful as well (to
me).

~~~
_bxg1
Practically speaking, what I've heard is that the value of learning Haskell
isn't so that you can then go use Haskell; it's so that you can then go write
code in other languages as if it were Haskell.

~~~
haolez
I think that’s unfair. Haskell is used in the real world to solve really hard
issues, like anti-spam at Facebook.

I’m simply more impressed by array-oriented languages :)

------
chowells
I have to disagree with the thesis here in its entirety. ADTs and pattern
matching are not what makes Haskell good. Every article like this one just
serves to show people the non-compelling bits of Haskell. The parts you can
use in every other language, if you want to.

The parts of Haskell that make it a good language aren't the things that you
can just write a tutorial for. They're about software engineering, not code
snippets.

Purity and immutability remove entire classes of bugs caused by spooky action
at a distance. When you assert this, people claim "I don't have those bugs",
forgetting about the time someone else changed a function they wrote to mutate
one of its arguments, breaking code three steps up the call chain.
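
That failure mode is easy to reproduce in any language with pervasive
mutation; a small Python sketch (all names invented for illustration):

```python
def normalize(prices):
    """Originally returned a cleaned-up copy; someone later changed it
    to sort in place, mutating the caller's list."""
    prices.sort()  # the spooky action: mutates the argument
    return prices

def report(prices):
    cheapest_first = normalize(prices)
    # Further up the call chain, `prices` was assumed to be untouched,
    # but it has been silently reordered.
    return cheapest_first, prices

quotes = [30, 10, 20]
report(quotes)
# The caller's data changed behind its back: quotes is now sorted.
```

In Haskell the equivalent list simply cannot be changed, so this class of bug
is unrepresentable.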

Parametric polymorphism documents what information a function cannot use
within its definition. If you point this out, people ask what good that
serves. There's no way to explain how much easier it is to get things done
when you can write a function and know that no matter what values are passed
in, there _cannot_ be special cases that trip you up.

I see people try to explain why the `Maybe` type is better than null values,
and have their explanations rejected with "You still have to check for it. All
you're doing is changing the syntax of the check." I've seen variations on
that theme in maybe 10 different HN threads over the last 6 years. When all
you talk about is ADTs and pattern matching, why would anyone ever look at the
bigger impact of the type system? The relevant detail here is that an
`Integer` can never be null, not that you use `Maybe Integer` to talk about
potentially missing values.

Further in that same direction, I see people say that the `IO` type just
complicated things because your program has to do I/O anyway, so it always
needs to be in `IO`. This is exactly the same as the `Maybe` problem, but that
similarity is even further from being addressed by articles about pattern
matching and ADTs. No, the similarity isn't "monads". Anyone who talks about
them here has missed the point entirely. The point is that parametric
polymorphism completely prevents distinguishing IO values from non-IO values,
so code that's not written to work with IO values cannot do IO accidentally.

There are a lot more cases, especially when you get into more sophisticated
things possible in the type system using ghc extensions like generalized
algebraic data types or higher-rank types.

But all of the reasons you should be using Haskell in reality come down to
practical large-scale software design concerns. The language lacks features
that make several common classes of bugs possible. It makes several other
common classes of bugs take a lot more work to implement than the non-buggy
way to solve the same problem. These aren't things you can just write a short
article about. They're things that require years of experience and
introspection to see are even problems, and a willingness to accept that a lot
of the problem is the ecosystem, not an individual failure to execute. None of
that fits in an article.

I think articles about how great pattern matching and ADTs are make the
language look worse, because anyone with some experience can look at what's
actually happening and say "I can do that in <other-language>, Haskell
clearly doesn't have anything to offer." In other words: stop writing these
articles. They drive people away from Haskell rather than encouraging them to
look at the good parts.

~~~
eli_gottlieb
Honestly, I think it's worth bragging about just how much code re-use and
modularity one can get out of combining parametric polymorphism with ad-hoc
polymorphism (type-classes).

~~~
chowells
I agree with that. Foldable and Traversable are complete marvels of usability.

~~~
eli_gottlieb
I literally, just this morning, found myself writing some annoying repeated
glue-code, and realized I could shorten it up where it mattered by taking the
glue and turning it into a type-class.

IMHO, people brag about type-classes too much with respect to the particular
type-classes that embody category-theoretic constructions, and not enough
about their original application to _ad-hoc_ polymorphic overloading. If I can
think of a Task X which I have to do for a variety of somewhat different types
in somewhat different ways, but which is _used_ in a polymorphic way, then I
can absolutely make a type-class out of that.
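
Other languages have rough analogs of that move; e.g. Python's
`functools.singledispatch` gives one-method, ad-hoc overloading in the same
spirit (a toy "render" Task X, names invented):

```python
from functools import singledispatch

@singledispatch
def render(value) -> str:
    """Task X: turn a value into display text. Each type registers its
    own 'instance', but call sites stay polymorphic."""
    raise TypeError(f"no render instance for {type(value).__name__}")

@render.register
def _(value: int) -> str:
    return f"{value:,}"

@render.register
def _(value: list) -> str:
    return "[" + ", ".join(render(v) for v in value) + "]"
```

Unlike a real type class there's no compile-time check that an instance
exists, but the shape (an open set of per-type implementations behind one
polymorphic interface) is the same.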

It's like how people think the big secret to object-oriented programming is
inheritance, but actual OOP experts tell you to prefer interfaces (which are
almost-but-not-quite just like type-classes!) and abstain from building large
inheritance hierarchies.

------
unixhero
I wouldn't know. It was impossible to learn.

~~~
ur-whale
>I wouldn't know. It was impossible to learn.

People shouldn't downvote you: the sarcasm bit is funny, and it does actually
point to one of the major reasons Haskell is not gaining more acceptance in
spite of its many great features:

    
    
         . the syntax is completely alien (to the point of turning quasi APL-ish in some cases) and scares away potential users.
    
         . the community is so focused on esoteric stuff that the "how do I do X that's super easy to do in traditional languages" is completely missing from the conversation.

~~~
jonsen
the syntax is completely alien (to the point of turning quasi APL-ish in some
cases) and scares away potential users.

the community is so focused on esoteric stuff that the "how do I do X that's
super easy to do in traditional languages" is completely missing from the
conversation.

------
patientplatypus
The problem with Haskell is that it's smarter than most developers. You can
study really hard and maybe you'll be good at Haskell. And then you'll find
that there are no common libraries for the things you want to do because all
the other developers were writing them in Python.

Such is life.

------
functionalias
Being a good person is a goal in itself.

In the same way, Haskell is a good language, it's a moral language. Of course
this causes some pain, but it's worth it.

