
Haskell People - allenleein
http://argumatronic.com/posts/2017-09-27-haskell-is-useless.html
======
quickthrower2
Nice post.

A monoid is an interesting and underrated "pattern" for objects where you can
have a "nothing" and an operator that smashes two values together. The
operator may be 'lossy' as in 3 + 4 = 7, or it might preserve the originals to
some extent as in "3" + "4" = "34".

I realized the power of this when working with a new library and thinking "how
do I concatenate these?" All I had to do was check that the data was a Monoid
and use mappend. There is something very satisfying about that.

Plus if you build some utilities on monoids, they'll work with lots of
distinct objects, from parsers to numbers to lists to all sorts of other
things.
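That cross-domain reuse can be sketched in a few lines. (combineAll is a name I
made up for illustration; it is just the standard mconcat.)

```haskell
import Data.Monoid (Sum(..))

-- A utility written once against the Monoid interface works for
-- any instance: strings, numbers-under-addition, lists, ...
combineAll :: Monoid a => [a] -> a
combineAll = foldr mappend mempty  -- equivalent to the standard mconcat

main :: IO ()
main = do
  print (combineAll ["3", "4"])               -- "34": both originals preserved
  print (getSum (combineAll [Sum 3, Sum 4]))  -- 7: 'lossy' addition
  print (combineAll [[1, 2], [3]] :: [Int])   -- [1,2,3]
```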

What Haskell lacks in the community is people blogging about how to get more
mundane things done like setting up a CRUD website, API, or desktop app. I
have a site in my profile that attempts to fill that gap.

~~~
diminish
>> algebraic structure I talk about the most passionately is the monoid

Can you ELI5 monoid? what problem does it solve which isn't easy in other
languages without it?

edit: thx everyone.

~~~
mightybyte
A monoid is something that allows you to take two of the same kind of thing
and smash them together to be one of that same kind of thing. It also requires
you to have an "empty" thing that can be smashed together with any other thing
without changing the result.

In code:

    
    
        smash :: Monoid a => a -> a -> a
        empty :: Monoid a => a
    

A monoid is a property of things, not languages, so the idea of "a language
without a monoid" doesn't really make sense.

~~~
catnaroek
Ah, so this is a monoid:

    
    
        instance Monoid Int where
            smash 25 x = x
            smash x 25 = x
            smash x y = 2*x^2 - 3*x*y
            empty = 25
    

Thanks for explaining!

~~~
mjhoy
There are also the monoid "laws" (not checked in Haskell):

    
    
        empty `smash` x = x -- left identity
        x `smash` empty = x -- right identity
        (x `smash` y) `smash` z = x `smash` (y `smash` z) -- associativity
    

Your definition doesn't fit any of these. (Edit: you've rewritten to meet the
first two; but it is not associative.)
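Concretely, taking that definition as plain functions on Int (the real Monoid
class names these mappend and mempty), the two identity laws hold but
associativity does not:

```haskell
-- The instance under discussion, written as plain functions on Int
smash :: Int -> Int -> Int
smash 25 x = x
smash x 25 = x
smash x y  = 2*x^2 - 3*x*y

main :: IO ()
main = do
  print (smash 25 7, smash 7 25)   -- (7,7): both identity laws hold
  print ((1 `smash` 2) `smash` 3)  -- 68
  print (1 `smash` (2 `smash` 3))  -- 32, so smash is not associative
```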

~~~
catnaroek
mightybyte didn't state the associativity law in his original definition. As
for the left and right identity laws, he did state them, and I did intend my
instance to satisfy them (otherwise I wouldn't have special-cased 25, of
course), but I messed up. I've fixed my post since.

~~~
conistonwater
Just for reference, the laws are stated in full in Haskell's docs:
[https://hackage.haskell.org/package/base-4.10.0.0/docs/Data-Monoid.html](https://hackage.haskell.org/package/base-4.10.0.0/docs/Data-Monoid.html)

~~~
catnaroek
I was deliberately “acting stupid” to criticize an insufficiently precise
definition.

---

> That is usually a terrible idea.

Not at all. Think about the structure of a proof of negation. You make a
“silly” assumption and “play along” until you reach a contradiction.

~~~
conistonwater
That is usually a terrible idea.

------
dmitriid
This entire post is so true.

As a personal (or not-so-personal) anecdote: the PureScript[1] book[2] claims
"No prior knowledge of functional programming is required, but it certainly
won’t hurt. New ideas will be accompanied by practical examples, so you should
be able to form an intuition for the concepts from functional programming that
we will use."

And then it goes through a rather gentle introduction of various concepts. And
then this is the first time ever that `map` is mentioned:

> Fortunately, the Prelude module provides a way to do this. The map operator
> can be used to lift a function over an appropriate type constructor like
> Maybe (we’ll see more on this function, and others like it, later in the
> book, when we talk about functors)

WTF.

"lifting over typeclasses wat". The whole concept of lifting isn't discussed
until _six chapters later_.
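For anyone left with the same "wat": in Haskell terms, all that sentence means
is this (a minimal illustration of my own, not taken from the book):

```haskell
half :: Int -> Int
half x = x `div` 2

main :: IO ()
main = do
  -- "Lifting" half over Maybe: fmap applies the function inside a
  -- Just and leaves Nothing alone. That is the whole idea.
  print (fmap half (Just 10))  -- Just 5
  print (fmap half Nothing)    -- Nothing
```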

[1] A Haskell-derived/Haskell-based language that compiles to Javascript,
[http://www.purescript.org](http://www.purescript.org)

[2] [https://leanpub.com/purescript/read](https://leanpub.com/purescript/read)

------
hellofunk
I like Haskell a lot, but one aspect of it has always bothered me. A lot of
incidental complexity is added by the prevalence of custom
functions/operators with arbitrary associativity and fixity, on top of the
normal precedence rules. Perhaps it is because I got spoiled by writing lisp
(Clojure), where such concerns never exist in so uniform a language. But
writing Haskell can lead you to produce lines of code that don't always read in
the order you'd expect. I was always having to look up the fixity and
associativity of things just to know what was going on in a single otherwise
straightforward line of code.
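A small example of the kind of thing that bites here ((#) is a made-up
operator; the fixity declaration is the point):

```haskell
(#) :: Int -> Int -> Int
x # y = x - y
infixl 6 #  -- declared left-associative; with infixr 6 # instead,
            -- the expression below would mean 10 # (5 # 2) = 7

main :: IO ()
main = print (10 # 5 # 2)  -- (10 # 5) # 2 = 3
```

The same characters on the page mean two different things depending on a
fixity declaration that may live in another module entirely.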

~~~
kmicklas
Agreed. Though this is really a problem with text based editing (which
obscures the tree structure), not Haskell.

~~~
chris_st
I remember hearing this said about Lisp programming in the early 1980's.
Specifically, there was a "tree-based" editor for InterLisp on the Xerox Lisp
machines... I only ever met one person (hi Marty!) who even basically
understood it.

I really hate the "we've never had one so we'll never have one" argument about
anything, but in this case it may well be true: a lot of people have tried to
make "tree based" editors, and I've never heard of anyone making one that got
any kind of traction.

Even the guy who could use the Xerox tree editor preferred emacs (on other
machines).

~~~
kmicklas
The problem with tree editors has always been the lack of standardization, not
any kind of conceptual or UI problem. Generally, authors take one of two routes:

1) Provide a tree based editing UI layer on top of text files since text still
rules. In my opinion this is almost like the worst of both worlds, though it
can be successful. Paredit is probably the best example of this (and many
lispers swear by it).

2) Create a Grand Integrated Vision of How to Fix Every Programming Problem
Ever Created. This can make a nice demo, but of course it never turns into a
practical product.

Until we have a standardized tree or graph interchange format that is designed
to satisfy the common denominator of the full range of languages, protocols,
etc., as text does today, all structured editors will be fighting an uphill
battle.

------
Tade0
Interesting title.

In my experience, Haskell is for Haskellers more than just a language. I have
found that every Haskeller I talk to is, in the specified order:

    
    
      1. A Haskeller.
      2. A programmer.
      3. A human being.
      4. A person of a specific gender.
    

They really are "Haskell People". What's even more bizarre is that for some
other languages this hierarchy seems to be reversed.

That being said I'm glad that the word "sucks" was finally removed from this
article:

[https://wiki.haskell.org/The_JavaScript_Problem](https://wiki.haskell.org/The_JavaScript_Problem)

~~~
Roboprog
Thanks for the article link.

I love that "late binding" is listed as a flaw. Some would call that a
feature. Those of us in the "some" camp must be wired a bit differently than
the "early binding" crowd, I guess :-)

~~~
runeks
How can deferring checks of correctness to runtime be considered a feature?
Haskell has an option to defer type errors to runtime, so I don’t see why
early binding wouldn’t be preferable, since you can have it as you like.

My life quality has increased substantially since switching from Python to
Haskell, and no longer getting runtime crashes saying “AttributeError: <x> has
no attribute <y>”.
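A minimal sketch of the difference (the User type and the quoted error text
are my own illustration): the Haskell analogue of that AttributeError is
rejected before the program ever runs.

```haskell
data User = User { name :: String }

main :: IO ()
main = do
  let u = User { name = "someone" }
  putStrLn (name u)
  -- The equivalent of Python's AttributeError never reaches runtime;
  -- uncommenting the next line fails at compile time with (roughly)
  -- "Variable not in scope: email :: User -> String":
  -- putStrLn (email u)
```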

~~~
zzzcpan
"How can deferring checks of correctness to runtime be considered a feature?"

Correctness is not something that matters in the real world. Even defects
don't really matter. What matters is the problems that defects cause. It's a
subtle difference, but a very important one. You can either limit the scope of
those problems at runtime, or try to make sure there are as few defects as
possible to cause problems in the first place. The first approach doesn't
force you to specify anything to check correctness against, and is flexible,
productive, and prepared for the real world. The second approach is much less
flexible, much less productive, and doesn't work well in the real world, where
things fail not only because of defects but by nature. So, this is how it can
be considered a feature.

(While writing this I realized that Roboprog probably meant late binding as a
feature from OOP, where it is considered necessary for OOP to even exist, not
reliability.)

~~~
runeks
> Correctness is not something that matters in the real world. Even defects
> don't really matter. What matters is problems that defects cause. It's a
> subtle difference, but very important one.

What, exactly, do you consider the difference between these two?

To me, saying ”correctness doesn’t matter” is equivalent to saying “it doesn’t
matter whether your app does what you want it to do”, which makes no sense to
me.

------
aaron-lebo
There's probably some truth to this. My mathy friends have a deep appreciation
of Haskell. I always try to get into it for a few weeks, marvel at it, grok
some new things, then inevitably end up in more "practical" languages (only
because there are almost no domains, save parsers and compilers, where Haskell
is obviously the "best" fit).

This may describe why: Haskell is built for its own sake, its coherence, its
design. It is first and foremost designed to do things the Haskell way, which
may be conceptually pure and "right" but isn't necessarily the most
straightforward. In contrast, most other languages exist to solve a problem:
C++ (performance and control), PHP (a templating language for the web), Python
(web, data science), JS (browser), etc. This being my
understanding/paraphrasing of what the author is saying.

I think the answer is somewhere in the middle: the good parts of Haskell,
selected for pragmatism (strict instead of lazy, like most languages), with an
emphasis on being approachable to users. There are a lot of languages that are
almost there. Elm gets the ease of use but strips too many features. OCaml is
practical but has a split standard library, multicore issues, and a kind of
crufty feel. Go
is arguably too simple, Rust is too low-level, Kotlin is almost there but the
JVM can't replace languages that can produce straight binaries for the most
part. Nim is very easy to write in a functional style and is maybe the most
practical language I've ever used. It lacks the support, libraries, and
mindshare of the others, though.

What kind of worries me is that there is a strong incentive for languages to
promulgate themselves into domains they aren't especially suited for, so it's
more likely that one of these flawed languages will be Frankensteined into a
domain where it doesn't really belong, and we'll be hacking around those
disadvantages in a couple of decades. Maybe the real answer is to take a step
back and consider what is a middle ground that gets 90-95% of what anybody
needs. I'm not convinced that something like Nim or a new language couldn't be
99%, with the 1% of the time being necessary to drop into domain specific
languages. If one of the big corps put support and funding into the "right"
language, it's not really a difficult job, but it is those damn incentives.
How do you fix them?

~~~
kobeya
Actually the same could be said about Python and the Python way, pythonic
programs, etc.

~~~
aaron-lebo
The thing about the Python way is it is dogmatic but it is also very
practical: it's dogmatic about being practical. For loops and list
comprehensions are more familiar or easily explainable to many programmers, so
Guido kind of hid map/filter/reduce instead of leaning on them or having
alternatives like other languages (Clojure or JS).

I've got friends who I'm not sure can read that are able to understand Python.
Haskell isn't like that.

~~~
kobeya
Haskell aims for simplicity. That is, favoring the reader/maintainer of code
over the writer. It’s a different trade-off that is not obviously worse.

~~~
orf
Two of the core tenets of Python are: simple is better than complex, and code
is read much more often than it is written.
~~~
kmicklas
It amazes me that someone could write down that principle and then design an
untyped programming language.

~~~
Roboprog
Dynamic types can be more problematic to modify, but many of us find them
easier to read: assume that the code actually worked and does something
reasonable, now skim for the gist of it (without having to see a bunch of
extra detail).

Assembler is untyped (just bytes and words). I'm not big into Python, but I'm
pretty sure it has types; they are just late/runtime bound.

Dynamic types are probably not a good choice for an army of idiots, but if
dynamic types were so completely unworkable, you would think that they would
disappear, eh?

That said, I'd rather see avionics written in Ada than Python, but not every
problem needs that level of scrutiny and pain.

~~~
yen223
> assume that the code actually worked and does something reasonable

That's almost never the case if you're in a situation where you're reading
code.

------
zengid
The podcast that she links to is really interesting [1], talking about what
it's like to learn Haskell as a first language. It's really interesting to me
because I'm getting into Elm, and their approach is to use really concrete
ideas and names instead of Monads and such. She even mentions that Elm is a
good way to start approaching Haskell concepts (around 13 minutes in).

[1]
[https://twitter.com/thefrontside/status/912327851386470400](https://twitter.com/thefrontside/status/912327851386470400)

------
georgewsinger
The experiment to bring "Haskell people" into the world of VR has so far gone
well
([https://github.com/SimulaVR/Simula](https://github.com/SimulaVR/Simula)),
but I have sometimes wondered if Haskell is too off-putting of a language to
bring traditional graphics programmers into this project.

------
sidlls
The content of the article makes it abundantly clear the author isn't talking
about "understand thing," but rather specifically "understand Haskell." These
aren't identical.

~~~
trattodet
I didn't take it as that. I think his argument translates to other functional
languages to some degree. I was working on a Scala project where the lead
developer exhibited the exact behavior the author describes. Scala is a bit
more "useful" than Haskell because it borrows from Java, and it's not too much
of a leap to write something you can use. Opening a file in Scala and
processing it is much less of an understanding task than it is in Haskell.

The "understand first, then use" aspect of many functional languages is what
kills systems in the crib when you're trying to build something that provides
business value.

~~~
sidlls
> The "understand first, then use" aspect of many functional languages is what
> kills systems in the crib when you're trying to build something that
> provides business value.

This notion that one must "understand first, then use" is limited strictly to
the particulars of the language. It's equally true of other languages: one
must understand at some level how the language works to build anything with
it.

It also isn't clear how having to understand Haskell and having to "understand
thing" are the same thing, as the author asserts.

An uncharitable reading of the article is that it's a sophomoric insult to
people who don't use Haskell, as it apparently dismisses them all as people
who don't care to understand things but only to more or less mindlessly and
practically randomly build stuff. I surely hope that wasn't the intent.

