
Can't for the life of me figure out why people equate FP with the abomination that Haskell is.

Erlang is FP. Javascript is FP. Ocaml is FP.

Type classes, or anything else with "types" in the name, such as dependent types or liquid types, are not FP; they are type systems. Type systems that found their way into a couple of FP languages.




I don't think many people (and not the parent) "equate" FP with Haskell, but Haskell is in a way the most functional mainstream language because it is the only lazy one. You can have a purely functional language without laziness in principle, but, as Simon Peyton Jones has said, laziness, despite having serious costs, keeps a language designer honest by making it impossible to add side effects. The non-lazy languages all have side effects. A language like OCaml can still be relatively functional because of the way it is typically used, while JavaScript is not traditionally used in a functional way but can be.

http://www.cs.nott.ac.uk/~gmh/appsem-slides/peytonjones.ppt


You’re mixing up laziness and purity. Purity makes side effects impossible, so to speak, not laziness.

Haskell is the most functional language because a Haskell function is a mathematical function, which can only transform its arguments into a value. Everything is a constant in Haskell, and functions transform one or more constants into a single, new constant.


No, he/she isn't. The argument (as advanced by SPJ) is that it is so awkward for a programmer to use side-effects in a lazy language that it is (practically, not theoretically) impossible for a language designer to add them.


I don't know Haskell but how is laziness related to FP?


The only way I can think to relate them is (a) FP tends to highlight the importance of value-semantics over all others, (b) non-termination is an effect, (c) FP also, subsequent to a, tends to emphasize control of side effects, (d) in a terminating lambda calculus all evaluation strategies are confluent/equal under the value-semantics, thus (e) laziness is particularly _available_ in a FP language.


Attempt at translation from CS-speak:

Laziness matters less in an FP language because, if we consider non-termination an effect (impure), all pure functions should behave exactly the same regardless of whether they're "lazy" or not. In FP, sameness is defined as having the same values ("value semantics"), unlike OO, where every object has a unique identity different from all others; and in the absence of side effects, the order of evaluation doesn't matter.

Of course reality is different, and laziness has very visible and important effects when Haskell programs run on today's computers :)


Ha, thank you.


> Haskell is in a way the most functional mainstream language

1. Is it the most functional language because it's lazy? No

2. Is it the most mainstream language because it's lazy? No

3. Is it the most functional mainstream language because it's lazy? No + no = no

Laziness does not a functional language make.

> You can have a purely functional language

What would be the purpose of a pure FP? Oh. There would be no purpose.

> laziness ... keeps a language designer honest by making it impossible to add side effects

wat

Laziness is delayed execution. That's it. There's _nothing_ stopping you from delaying a side effect.


> Laziness is delayed execution. That's it. There's _nothing_ stopping you from delaying a side effect.

Laziness is about more than just delaying execution.

I think "evaluation" or "reduction" would be better words than "execution" here. Laziness (call by need) is an evaluation strategy for (beta-)reducing expressions, which has two nice properties:

- If an expression can be reduced without diverging by some evaluation strategy, then it can be reduced without diverging using call by need.

- Efficiency, in the sense that no duplicated work is performed.

The other common evaluation strategies are call by name and call by value. Call by name has the first property, but not the second; so there are cases when it's exponentially slower than call by need. Call by value has the second property, but not the first, so there are cases when it diverges unnecessarily.

This 'unnecessary divergence' is a major reason why most programming languages end up overly complicated to understand (at least, mathematically). For example, consider something like a pair `(cons x y)`, and its projection functions `car` and `cdr`. We might want to describe their behaviour like this:

    ∀x. ∀y. (car (cons x y)) = x
    ∀x. ∀y. (cdr (cons x y)) = y
This is perfectly correct if we're using call by name or call by need, but it's wrong if we're using call by value. Why? Because under call by value `(car (cons x y))` and `(cdr (cons x y))` will diverge if either `x` or `y` diverges. Since the right-hand-sides only contain one variable each, they don't care whether or not the other diverges.

This is why Haskell programs can focus on constructing and destructing data, whilst most other languages must concern themselves with control flow at every point (branching, looping, divergence, delaying, forcing, etc.).
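
To make the point concrete, here's the same idea in Haskell notation, as a minimal sketch you can run with GHC (`loop` is just an illustrative name for a diverging expression):

  -- Under call by need, projecting one component of a pair never
  -- forces the other, so a diverging component is harmless.
  main :: IO ()
  main = print (fst (42 :: Int, loop))  -- prints 42; `loop` is never touched
    where
      loop :: Int
      loop = loop  -- diverges; under call by value this program would hang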


Thank you! I clean forgot about call-by-need vs. call-by-value


>Laziness is delayed execution. That's it. There's _nothing_ stopping you from delaying a side effect.

This is true, but unconstrained side effects are too difficult to reason about in a lazy language to be practical. So in practice, very few lazy languages have unconstrained side effects.
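
To illustrate, here is a small sketch using GHC's Debug.Trace escape hatch as the "unconstrained side effect":

  import Debug.Trace (trace)

  -- Whether and how many times the message prints depends entirely on
  -- the evaluation strategy, which is what makes this hard to reason about.
  main :: IO ()
  main = do
    let x = trace "evaluated!" (21 :: Int)
    print (x + x)  -- under call by need, "evaluated!" appears once, not twice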


What definition of FP are you using? Because if it's just first-class functions, even Visual Basic has this now.


Exactly.

Let's take Wikipedia:

-- start quote --

In computer science, functional programming is a programming paradigm—a style of building the structure and elements of computer programs—that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions[1] or declarations[2] instead of statements. In functional code, the output value of a function depends only on the arguments that are passed to the function, so calling a function f twice with the same value for an argument x will produce the same result f(x) each time.

-- end quote --


Only Haskell strictly conforms to that definition. None of your other examples do.


No. It only means that other languages support multiple paradigms.

Haskell doesn't strictly conform either, because it allows side effects (even though it's "only through escape hatches"). There's no such thing as a "pure functional programming language".

Let's take it from the top:

"In computer science, functional programming is a programming paradigm—a style of building the structure and elements of computer programs—that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data."

All of the languages I listed support this style. The moment you write an IO monad inside a function in Haskell, you break the illusion of Haskell's strict conformance to the definition.


Now you have the condescending tone! However, it appears you don't understand the IO monad. Programming using the IO monad is pure functional programming with full referential transparency. Please read Phil Wadler's paper; you will not understand this from that JavaScript snippet. The only backdoors are escape hatches like unsafePerformIO, which are used for low-level libraries and FFI; they can be disabled with pragmas and/or compiler switches.

It is much harder to stay true to that definition using the other languages, multi-paradigm or not. That is why Haskell is so often mentioned in the context of FP.


As soon as you have side-effects (an IO is a side-effect) you can throw your assumptions about "strict conformance to definition" out of the window:

--- treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data. ---

> It is much harder to stay true to that definition using the other languages, multi-paradigm or not.

It doesn't matter if it's "hard". Your argument was that "Only Haskell strictly conforms to that definition."

No. Haskell conforms more strictly. It doesn't mean that the other languages are not FP, or cannot be used to program in a functional style.

> That is why Haskell is so often mentioned in the context of FP.

Yes. And the problem is that people treat the features that Haskell has as requirements to be considered a functional programming language.

- Typeclasses/Liquid types/Dependent types are not FP

- Pattern matching is not FP

- Laziness is not FP

- A whole bunch of anything else is not FP


> As soon as you have side-effects (an IO is a side-effect) you can throw your assumptions about "strict conformance to definition" out of the window

Again, you are unfortunately quite wrong. You do not understand the IO Monad. No IO is ever performed in any code written inside the IO monad (unless using unsafePerformIO).

Please take some time to fully understand Haskell before criticising it so openly on a public forum. Perhaps then it might not all seem rather pointless.


So, you're telling me that Haskell never does any output and never reads any input.

In this case (and only in this case) would it strictly conform to the FP definition.


Haskell is used to wire up the IO actions, which are then delivered to the runtime via the main definition. The runtime performs the IO actions.


Indeed Haskell the language doesn't do any IO. Instead, it creates a tuple containing

* a description of the action to execute (e.g. read line from STDIN)

* a function that will take the result of the previous action (e.g. the line) and return the next action to execute (e.g. writeLine)

  (readLine, \line -> writeLine line)
This main action is just a description. The runtime takes it, does the actual IO described in the first part of the tuple, evaluates the second (function) part of the tuple with the result of that IO, receives a new pair of action/function, does that action's IO, evaluates the new function, and so on until it gets a nil (the end of the "linked list", so to speak).

You could imagine this as a linked list of actions, where the "link" is actually a pure function you call with the result of executing the first part to get the rest of the list (or nil to terminate). This is still pure because the action itself doesn't do anything. If you return an action from a function, it doesn't actually execute; it's just a value to be interpreted.
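
Here's a toy sketch of that "linked list of actions" (the names Action, echo, and run are made up for illustration; this is not GHC's real IO type):

  data Action
    = Done                          -- nil: end of the list
    | ReadLine (String -> Action)   -- ask for a line, continue with the result
    | WriteLine String Action       -- ask to print, then continue

  echo :: Action                    -- a pure value *describing* IO
  echo = ReadLine (\line -> WriteLine line Done)

  run :: Action -> IO ()            -- the "runtime" that actually performs it
  run Done               = pure ()
  run (ReadLine k)       = getLine >>= run . k
  run (WriteLine s next) = putStrLn s >> run next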

Does that make a real difference or is it just theoretical "purity"? Yes it does make a difference.

For example, if you create two action values, you haven't actually run anything yet, just created descriptions of actions. You could put them in a list and create an action description that will tell the runtime to run them in parallel, for example using: https://hackage.haskell.org/package/parallel-io-0.3.3/docs/C... ... or in sequence https://hackage.haskell.org/package/base-4.10.0.0/docs/Prelu...

For refactoring, it means that you can still take functions that return actions and substitute them with their values, e.g. if you have

  readItems :: Int -> IO [String]
  readItems 0 = return []
  readItems n = do
    x <- getLine              -- getLine is the Prelude's "readLine"
    y <- readItems (n - 1)
    return (x : y)

  main :: IO ()
  main = do
    items <- readItems 3
    putStrLn ("The first item is " ++ head items)

you could pull (readItems 3) out of main!

  read3Items = readItems 3
  
  main :: IO ()
  main = do
    items <- read3Items
    putStrLn ...
and everything is exactly the same, since all you've pulled out is a description of an action. Equational reasoning (you can substitute an equation with its result) still works - which is great for refactoring!


Basically, the moment you have

  readFile  :: FilePath -> IO String
  writeFile :: FilePath -> String -> IO ()
and any function invoking those two (and any functions invoking those functions, etc.), your "Haskell strictly conforms to the FP definition" flies out of the window.

And yes: I used the term "IO monad" incorrectly.


These functions do not perform IO, they return IO actions that can be further composed. They are invoked from other functions that also return IO actions, again no IO is actually performed until the top-level final composite action is run by the runtime.

All composition of IO actions is performed with full referential transparency and adherence to the Wikipedia definition.
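
For instance, composing the two real Prelude functions above just builds a bigger IO value (a sketch; the file names are placeholders):

  copyFile' :: FilePath -> FilePath -> IO ()
  copyFile' src dst = readFile src >>= writeFile dst

  main :: IO ()
  main = copyFile' "in.txt" "out.txt"  -- nothing runs until the runtime executes main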


Potato potato. In the end, when the program is run, functions will be run, and side-effects will be executed.

Moreover, if you invoke these functions multiple times they will produce different results.

Hence, no strict adherence to FP principles.


> Moreover, if you invoke these functions multiple times they will produce different results

No this isn't true! Again, you clearly do not yet understand Monads. Please read Phil Wadler's paper.

What you have successfully demonstrated, is that the JavaScript snippets you have been advocating are not sufficient to understand Monads.


> if you invoke these functions multiple times they will produce different results.

the io monad is a pure function that produces impure code for the haskell runtime to execute (this isn't exactly accurate, but i think is an okay way to think about it). It will always produce the same impure code, and so is itself pure.


Riight. The "Platonic ideal" again. "It's not the language it's the runtime!"


You're making some good points here. Neither category-theory-informed typeclasses nor laziness are required for pure functional programming (henceforth PFP). Up until recently, Haskell was the only PFP language in common use, so all those things got conflated in a lot of people's minds. Now we have PureScript (which doesn't have laziness) and Elm (which doesn't have laziness or typeclasses).

That said, you really should listen to @willtim about the `IO` type. Purity really is a big deal, and `IO` doesn't break it at all.


> As soon as you have side-effects (an IO is a side-effect) you can through your assumptions about "strict conformation to definition" out of the window

You misunderstand what an IO action is. An IO action in Haskell is a value which describes a side effect. For example, an IO Int is a value that describes a side effect which, at runtime, will produce a value of type Int inside the IO monad.

The difference is that the IO Int is a value: a constant which describes how to perform something at runtime (also called a promise in some languages). It's like a callback function, which takes a value as an argument that will be available when it's called (at runtime).
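
A minimal sketch of that, assuming only the Prelude (askLine is a made-up name):

  askLine :: IO String
  askLine = putStrLn "Say something:" >> getLine

  main :: IO ()
  main = do
    let action = askLine  -- still just a value; no prompt has appeared yet
    a <- action           -- the runtime performs it here...
    b <- action           -- ...and again here, possibly reading different input
    putStrLn (a ++ " / " ++ b)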


What makes you say Haskell is an abomination?


Short version: You need a PhD in type theory to get anywhere.

Basically every obscure overly complicated concept that Haskell throws at you (all the while pretending to be the only true FP language out there) can be explained in 5 to 10 lines of Javascript: https://github.com/hemanth/functional-programming-jargon

Compare and contrast.

- Monad explained in Javascript: https://github.com/hemanth/functional-programming-jargon#mon...

- Timeline (sic!) of monad tutorials for Haskell: https://wiki.haskell.org/Monad_tutorials_timeline

The worst crime against humanity though is Haskell crap seeping into other languages (such as ramda, for instance: http://ramdajs.com)


> You need a PhD in type theory to get anywhere.

That is extremely false. Haskell isn't even a good playground for academic type theory -- you'd want Agda etc. for that. The development of the language over the last few years has been characterized by pragmatism and a focus on backwards-compatibility, which is why you can take code from something like ten years ago and have it run without issues on modern versions of the Haskell compiler with little to no modifications. (Let's not talk about how long code written in "modern" JS lasts.)

And I'd really like to see type class constraint resolution with functional dependencies, or Hindley-Milner type checking, or something of that sort implemented in "5-10 lines" of JS.

"There he goes again with his mumbo-jumbo," you say. That's right, you don't need to care about those things to write Haskell. What you meant is implementations of typeclasses ("interfaces") like Monad, Functor, and so on: they don't take much more code in Haskell.

  Array.prototype.chain = function (f) {
    return this.reduce((acc, it) => acc.concat(f(it)), [])
  }

  instance Monad [] where
    xs >>= f = concat (map f xs)
    return = pure
And we didn't even have to go the "this" route! Notice that your 5 - 10 lines of JS don't let you write code that works in any monad, whereas I can easily write

  whenM :: Monad m => m Bool -> m () -> m ()
  whenM cond action = do
    condition <- cond
    if condition then action else pure ()
In Elm, you'd have List.whenM, Array.whenM, Maybe.whenM, ... or a straight-up false type signature like their Eq ones, and in JS, a bunch of prototype methods with no unifying threads.
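
And a hypothetical use of it, just to show the payoff (the condition and message are made up):

  main :: IO ()
  main = whenM ((== "y") <$> getLine) (putStrLn "confirmed")

The same whenM works unchanged in Maybe, State, parsers, and so on, because it's written against the Monad interface alone.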

--

As for an example of why I think Haskell has the right ideas (few of us will say it's the "best language evar"):

I'd really like to see a JS version of the Servant library, which takes an API spec for a server and actually generates a fully functional server from that. Here's a description:

https://news.ycombinator.com/item?id=14149200

Does this strike you as idle theoretical self-enjoyment?


> This is extremely false [referring to "You need a PhD in type theory to get anywhere."]

almost immediately followed by

> And I'd really like to see type class constraint resolution, or Hindley-Milner type checking

You don't even see the irony in that, do you?

> they don't take much more code in Haskell:

riiight. I won't even go into the number of things that need to be explained there before you even start explaining what the code does.


I hadn't really finished editing my comment then. (I was eating at the time, haha.) I just saw your reply now.


So, you could say your comment was ... lazy?

(huehuehue, lame joke, I know :) )


Commenting on the edited comment :)

> Notice that your 5 - 10 lines of JS don't let you write code that works in any monad, whereas I can easily write

Maybe, maybe not. Depends on your requirements, really. The core language might never get this, but these 5-10 lines of code do some very important things:

- they explain monads faster and more clearly than any of the countless monad tutorials that exist for Haskell

- they demystify monads and show that: hey, you've probably been writing monads all along (and re-implemented them yourself countless times, no doubt)

- they (by necessity) dumb down the jargon-heavy lingo for easy consumption by average Joes like me :)

Edit: that page in particular has also shown me that I have used easily half of Haskell's things (functors of all flavors, monads, comonads, etc. etc. etc.) countless times over the years in Javascript and Erlang. I didn't even know I did, because no one scared me off with the theory, and strange explanations and names :)


Is it fair to say that your argument here is that "this resource was extremely valuable to me for understanding certain concepts in ways that Haskell-oriented resources in the past have not been"?

I think that's a totally fair criticism. I also believe that the Haskell resources can provide further value to you (and others in your position) over time if you choose to study them. Similarly, studying category theory or type theory or logic could.

Are these practical things to do? It depends upon your goals.


Sure! I don't disagree: Haskell learning materials are a far cry from adequate, and we definitely need to learn from, e.g. the Rust/Elixir/Elm communities here. For now, this is worth trying:

http://haskellbook.com/

Also, #haskell on IRC has been, without a doubt, one of the friendliest learning environments I've ever seen. Drop by sometime if the mood strikes you. :)


This is not really a definition of a Monad. For example, there's no mention of the Monad laws.

Monads are a very general and powerful abstraction that are not adequately described by your example. My advice to anyone is to read Phil Wadler's seminal paper, it is very easy to read.


> Does this strike you as idle theoretical self-enjoyment?

It does.

How many PhDs does one require to understand/correct/debug all the :> and :<|> etc.?


Speaking from experience, zero.

> debug

But the whole point of a good compiler is that it tells you when you're wrong! (Instead of having to write hundreds of tests (and thousands of Node test runners).)

> :> and :<|>

You can just treat them as syntax, like the largest proportion of every other language, but with the opportunity of actually being able to write things like that yourself later.

Observe:

    type API = "polls"                                           :> Get  '[JSON] [Poll]
          :<|> "polls" :> Capture "question_id" Int              :> Get  '[JSON]  Poll
          :<|> "polls" :> Capture "question_id" Int :> "results" :> Get  '[JSON]  PollResults

The :> operator separates parts of a path, and the :<|> separates different URL patterns. This is the equivalent of this API from the Django documentation:

    urlpatterns = [
        # ex: /polls/
        url(r'^$', views.index, name='index'),
        # ex: /polls/5/
        url(r'^(?P<question_id>[0-9]+)/$', views.detail, name='detail'),
        # ex: /polls/5/results/
        url(r'^(?P<question_id>[0-9]+)/results/$', views.results, name='results')
    ]
The only difference is it has fewer regexes in it, is capable of being checked for nonsense by a compiler much smarter than me, and gives you the aforementioned "server for free". I have had URL pattern-match errors with Django in the past, and having your compiler check that there aren't any is excellent.

Easier to maintain? Check.

Easier to read? Check. (If nothing else, because of the lack of regexes.)

Defines the response type too? Check.

Easy to refactor? Check! Tired of typing "polls" at the beginning? Just lift it out: turn

    type API = "polls"                                           :> Get  '[JSON] [Poll]
          :<|> "polls" :> Capture "question_id" Int              :> Get  '[JSON]  Poll
          :<|> "polls" :> Capture "question_id" Int :> "results" :> Get  '[JSON]  PollResults
into

    type API = "polls" :> 
             (                                            Get  '[JSON] [Poll]
          :<|>  Capture "question_id" Int              :> Get  '[JSON]  Poll
          :<|>  Capture "question_id" Int :> "results" :> Get  '[JSON]  PollResults
             )
Types are first-class :)

> How many PhDs does one require to understand/correct/debug all the :> and :<|> etc.?

Definitely fewer than it takes to become comfortable with the quirks of literally everything in JS. Perhaps you should give something an honest shot before telling people who have derived real-world benefits from using it in production that it's useless?


> Speaking from experience, zero. You can just treat them as syntax, like the largest proportion of every other language, but with the opportunity of actually being able to write things like that yourself later.

So, basically, "learn this thing without understanding what it does" :-\

Reminds me of teaching Java to newbies: "oh, just type this syntax, you have to memorize it, don't worry about it".

> Definitely less than it takes to become comfortable with the quirks of literally everything in JS: perhaps you should give something an honest shot before telling people who have derived real-world benefits from using it in production that it's useless?

A real app is not just "hey, memorize this DSL and type it". I've found Haskell unapproachable on multiple occasions. And yes, I've completed my obligatory "Haskell from first principles" and "Learn You a Haskell for Great Good!" :)


Well, Servant isn't introductory Haskell material. My point was to show that the advanced type system features of modern Haskell are useful in the real world, for, e.g. building webapps that real people use and ship to production.

If you've picked up the stuff in LYAH, you're ready to learn what type operators are (that's where :> and friends come from, they're just things like Maybe but written infix, so they're probably defined as

  data a :> b = ColonPointyThing a b

or something like that.) Servant then pattern-matches on these types, essentially. For instance, if I can handle an API that serves endpoint A, and one that serves endpoint B, I can handle an API that serves both:

    instance (Handler A, Handler B) => Handler (A :<|> B) where
      handle req = ...
That's the idea.

You'd hardly expect a beginner to pick up, I dunno, using React and Redux on a Webpack hot-reloadable setup on day 1 of "Javascript 101", but React is one of the best ways to sell modern web development (at least when I've been buying).


The problem is basically how to arrive at the "same level" in Haskell and in JS.

Basically: what does it take to define (or even to just understand) a type-level DSL in Haskell, and a sufficiently advanced library in Javascript.

I can take apart almost any JS library and see/understand how it works. How much type-foo do I need to understand Servant? Or any other sufficiently complex Haskell library (a friend of mine has struggled mightily with Yesod and gave up after a month or so).


I'm pretty sure you'll be able to follow the Servant docs, which are quite nice:

https://haskell-servant.readthedocs.io/en/stable/

There are definite advantages, and if you're willing to put in as much work as one, say, unconsciously puts in while setting up JS frameworks, understanding libraries like this is definitely doable. (Without the dissertation.)


The docs are nice, but they complement the code :)

Basically to actually understand what's going on, I'll need to learn DataKinds (at the very least).

And just to implement something as simple as BasicAuth, this is what I have to start with (https://haskell-servant.readthedocs.io/en/stable/tutorial/Au...):

  {-# LANGUAGE DataKinds             #-}
  {-# LANGUAGE DeriveGeneric         #-}
  {-# LANGUAGE FlexibleContexts      #-}
  {-# LANGUAGE FlexibleInstances     #-}
  {-# LANGUAGE MultiParamTypeClasses #-}
  {-# LANGUAGE OverloadedStrings     #-}
  {-# LANGUAGE ScopedTypeVariables   #-}
  {-# LANGUAGE TypeFamilies          #-}
  {-# LANGUAGE TypeOperators         #-}
  {-# LANGUAGE UndecidableInstances  #-}
You rarely need anything beyond regular JS to understand the inner workings of most libraries.


Servant isn't the "simplest" way to get BasicAuth: you get what you pay for.

You can just use one of the raw HTTP servers, like wai or something. They expose an interface not unlike the ones you'll find in, say, Go or C++ or Node or something.

Also:

  {-# LANGUAGE FlexibleContexts      #-}
  {-# LANGUAGE FlexibleInstances     #-}
  {-# LANGUAGE MultiParamTypeClasses #-}
  {-# LANGUAGE TypeOperators         #-}
  {-# LANGUAGE UndecidableInstances  #-}
These extensions only serve to lift a couple of restrictions that the compiler imposes because the original Haskell Report did. DataKinds and TypeFamilies are the real "new ideas" that you need to pick up after a book at the level of LYAH to understand Servant.
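
For a taste of DataKinds specifically, here's a minimal sketch (Path and Polls are made-up names): string literals get promoted to the type level, which is how Servant can put path segments like "polls" into an API type.

  {-# LANGUAGE DataKinds      #-}
  {-# LANGUAGE KindSignatures #-}

  import GHC.TypeLits (Symbol)

  data Path (segment :: Symbol)  -- a type indexed by a type-level string

  type Polls = Path "polls"      -- "polls" here is a type, not a value

  main :: IO ()
  main = putStrLn "compiles: the path segment lives in the types"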


> Servant isn't the "simplest" way to get BasicAuth: you get what you pay for.

But that's not what I complained about ;)

It's funny how in a separate thread someone insists that I have to understand the whole concept of a monad and its three laws, read Wadler's paper, etc.

In this thread, however, it boils down to: oh, just memorise these lines of code, and just blindly copy-paste them wherever.

So, my complaint is: to actually understand what's going on in the "simplest way to do BasicAuth", I need a whole lot more than just a look at the code.

I need to understand why I need no less than ten (!) language extensions before I even begin to implement BasicAuth. And five of them just to work around some Haskell limitation based on some report from 1998 (I'm guessing)? What happens when I move on to OAuth? What will I need then?

Also, "oh, you don't really need PhDs in type theory to use Servant" slowly descends to "oh, here are two things regarding types that you'll have to learn". On of them has no useable documentation except some comment on StackOverflow. The other one requires you to be well-versed in type theory.

Hence my complaint about "you need a PhD in type theory to work with Haskell".


> "simplest way to do BasicAuth"

The simplest way to do BasicAuth in Haskell isn't Servant. I feel like you're intentionally misinterpreting what I'm saying.

As for resources, here's a great intro to type-level Haskell:

http://www.parsonsmatt.org/2017/04/26/basic_type_level_progr...

> For some reason, functions that operate on types are called type families.

That's the kind of thing you need to know.

I doubt the average JS user knows how V8 optimizes execution of code, or whatever. That's what your insistent complaints about type theory amount to: none of the articles I'm linking to mention inference rules or type judgments or what-have-you. That's what all the research papers about Haskell are for, which you do not need to read to use this language. Even advanced Haskellers don't do type theory (category theory, maybe, not type theory). Only a couple of people working on the compiler do.

More DataKinds:

https://stackoverflow.com/questions/20558648/what-is-the-dat...

http://ponies.io/posts/2014-07-30-typelits.html

https://www.schoolofhaskell.com/user/k_bx/playing-with-datak...


> The simplest way to do BasicAuth in Haskell isn't Servant. I feel like you're intentionally misinterpreting what I'm saying.

True, I re-read what you wrote, and argh. I need to learn to read.

> As for resources, here's a great intro to type-level Haskell:

> That's the kind of thing you need to know.

So, we're basically returning to the root of my complaints

> I doubt the average JS user knows how V8 optimizes execution of code

Wait. Are you telling me now that writing a BasicAuth implementation in a "simple library that doesn't require you to have a PhD in type theory" is on the same level of complexity as knowing the inner workings of an advanced JavaScript VM?

> That's what your insistent complaints about type theory amounts to: none of the articles I'm linking to mention inference rules or type judgments or what-have-you.

No, they don't.

I'm looking at Servant and its examples. In order to write an extremely simple and basic piece of code, I, as a programmer:

- have to pull in no less than 10 language extensions

- five of those extensions are just workarounds for some obscure Haskell rules (?)

- in order to understand just the basics of what's going on in there, I need to know why, when, and where these extensions are used, how they work, etc.

What happens the moment I step outside the bare necessities of the extremely simple BasicAuth implementation, for instance?

> More DataKinds:

This is exactly what I wrote: as soon as you step out into the real world of Haskell, it's "here's a list of increasingly obscure things you need to know. Maybe two people on StackOverflow know about them. For the rest, please proceed to your nearest university to obtain a PhD or two".


Your argument seems to be that "Real World Haskell" uses obscure features that you and many others don't understand, thus Haskell is complex. This is true only if (a) it is impossible to write "Real World" Haskell without using these features and (b) these features are truly complex and not just unfamiliar.

An alternative hypothesis to (a) is that Real World problems can be solved by simple Haskell, but more sophisticated Haskell features pay their way often enough that skilled practitioners choose to use them nearly always. I don't know if I completely buy this, but I also don't know that I completely buy that there aren't examples of Real World Haskell that are simple.

Of course, (b) is easy to criticize and painful to defend, since it ultimately becomes the indefensible argument of "if only you knew what I know, then you'd agree with me", which I think is stupid. Unfamiliarity is a cost, since it forces investment on all who would learn it; languages which avoid unfamiliarity are faster and more valuable tools for avoiding forcing that investment.

The only counterargument is a global one: if these techniques _are_ worth the investment then they will over time have an increasingly large impact on the culture of programming at large. Already this is coming true with first class functions, immutable data, preference for stronger typing, option/maybe types. Your personal investment into learning further ideas may be worth it if they pay out over a longer time period either by preparing you for where things are going (speculative) or by diversifying your thought process immediately (less speculative).

So you get people encouraging folks to learn Haskell because they personally have made the judgement that learning these things is great. If you're unconvinced that's a totally reasonable position to take. OTOH, learning new things can be fun and there's at least a small hill of anecdotal evidence that these things can pay their way at times.


> OTOH, learning new things can be fun and there's at least a small hill of anecdotal evidence that these things can pay their way at times.

Life is finite; the number of things to learn is near-infinite.

Do I have the lifetime to learn 10 Haskell language extensions to understand how the most basic piece of code works?


Why are you asking me?

If it's to imply that the answer is "obviously not", so as to project it to other readers, then why are you trying to answer for them?

Clearly others have decided that what you suggest is either unnecessary or worth it. But it's your choice.


> I doubt the average JS user knows how V8 optimizes execution of code, or whatever. That's what your insistent complaints about type theory amounts to: [...] Only a couple of people working on the compiler [use type theory].

> Wait. Are you telling me now is that writing a BasicAuth implementation in a "simple library that doesn't require you to have a PhD in type theory" is on the same level of complexity as knowing the inner workings of an advanced Javascript VM?

I rest my case. You're not arguing in good faith here, although you seemed to be doing a pretty good job at times. I'm saying that knowing enough to be able to add big new features to the Haskell compiler, like the computer scientists who drive GHC development, is similar to knowing how V8 works. :)


> I'm saying that knowing enough to be able to add big new features to the Haskell compiler, like the computer scientists who drive GHC development, is similar to knowing how V8 works. :)

Once again: THIS IS NOT WHAT I'M COMPLAINING ABOUT

I wonder if you have enough good faith to even see what I'm talking about.


>> Speaking from experience, zero. You can just treat them as syntax, like the largest proportion of every other language, but with the opportunity of actually being able to write things like that yourself later.

> So, basically, "learn this thing without understanding what it does" :-\

> Reminds me of teaching Java to newbies: "oh, just type this syntax, you have to memorize it, don't worry about it".

I don't think what's recommended is the same.


Let's consider this quote:

--- start quote ---

> :> and :<|> You can just treat them as syntax, like the largest proportion of every other language, but with the opportunity of actually being able to write things like that yourself later.

--- end quote ---

This is exactly what's recommended: just blindly type these things; your understanding is not required.

It becomes worse. Link: https://news.ycombinator.com/item?id=14890937

You need 10 language extensions to implement the simplest things. The answer to that complaint?

Oh, five of them are just workarounds [so, just blindly copy-paste them]


You certainly would need a PhD to fully understand Monads from that small JavaScript snippet. The Haskell link you gave gives Phil Wadler's original paper as the first link. It is easy to read, explains everything beautifully, and is full of examples. Learn some basic Haskell for no other reason than to read seminal papers such as these. To favour some random JavaScript hacker on the internet and steer others away from the original work is anti-intellectualism.


Ah, here comes the condescending tone I've so come to appreciate from the Haskell programmers.

"Go and read", "anti-intellectualism".


Wadler's paper is an excellent piece of exposition that's at the level of an upper-year undergraduate textbook. There's nothing condescending about referring a professional to a relevant paper in their discipline, but it is troubling when a professional won't even read over a paper.


It's troubling when people assume there's only one paper that a professional should read. Or that a professional cannot choose between papers to read, etc.


How did anyone imply this? A single free, reputable resource was offered, but many more exist.


You are misquoting me. I said steering others away from the original source of the work to an inferior source (incomplete at best) is anti-intellectualism.

I do not mean to be condescending, but I feel very strongly about this.


In order not to copy-paste, I'll link to my reply in another thread: https://news.ycombinator.com/item?id=14890766


Wait... you're tone policing haskell users after referring to even the _adaptation_ of functional techniques as a "crime against humanity?"

Please rethink this approach. It is a bad approach. It fails to capture what (I think) your argument is, and it antagonizes people needlessly. And quite frankly, a lot of people are being VERY nice by not following in the tradition of absolutely burying JavaScript for its nonsensical primitive type semantics.


> Basically every obscure overly complicated concept that Haskell throws at you (all the while pretending to be the only true FP language out there) can be explained in 5 to 10 lines of JavaScript

So presumably the same concepts can be explained in 5 to 10 lines of Haskell too.

I think you're confusing the refinement and polishing of ideas that's taken place in Haskell over the last two decades with the succinct presentation of those ideas once they've been worked out.


But they can't, can they. Or we wouldn't have the bazillion monad tutorials.

The funny thing is, this same trouble plagues other Haskell-inspired work (such as PureScript).


It's obvious that the concept of monad can either be explained in 5-10 lines in both JavaScript and Haskell, or neither. Which are you claiming?


It's not obvious.

I'm claiming that:

- the concept of monad can be explained in 5-10 lines in Javascript (demonstrable)

- the concept of a monad requires multiple years and tens of tutorials in Haskell (also demonstrable)


I just want to reiterate: point 1 is totally false. That description you linked is incredibly incorrect, captures almost nothing of the spirit of what a monad is, and is somewhat disingenuous.

Lots of people get excited about "monads" and then rush out to write tutorials to try and capture whatever mental model they're using. These mental models may arrive at correct results most of the time, but they're often not really transferrable to another human.

Learn You A Haskell takes a different approach, in which you arrive at creating monads because you naturally derive them as a way to deal with the tedium of functional code w/out such mechanisms.

"Monad tutorials" are becoming much less frequent now that such approaches are offered. Everyone just says, "Go read this chapter or two of this freely available book and you're good to go."

You know, just like any major feature in javascript.


Demonstrate a pair of tutorials aimed at the same level of reader, one in JS, and one in Haskell, where you end up with a better understanding of the idea of a monad reading the former (as opposed to the implementation of the Monad instance for a list).


TIL there's some "idea of a monad".

Basically this is (in my mind) what's wrong with Haskell: it's overly concerned with the Platonic ideal.

Meanwhile, that one page on jargon has shown me that I effortlessly implement any and all of those things daily (and understand what I'm doing) without the need to understand "an idea". I just use the tool that solves the problem. If someone insists on calling this "monadic composition", or "lifting over typeclasses", or "zygohistomorphic prepromorphisms", so be it.


It's code reuse for concepts. Wouldn't you agree that reusing intuition about things is good? It's the same as knowing what big-O is instead of just memorizing "bubble sort is slower than insertion sort, insertion sort is sometimes faster than quicksort but usually not", or knowing what concurrency is instead of memorizing the API of a library in your favorite language.


The entire "concept" of a monad fits in that description I linked to. It can be easily reused (which I've done numerous times with it, and with other concepts on that page).

Haskell for some reason insists that I should only go for "The concept of a monad, which arises from category theory, has been applied by Moggi to structure the denotational semantics of programming languages" and

  A monad is a triple (M,unit,⋆) consisting of a type constructor M and two operations of the given 
  polymorphic types. These operations must satisfy three laws given in Section 3.
  
  We will often write expressions in the form
    m ⋆ λa. n
Should I? Really?


Yes, you should care about the laws!

They allow you to edit code without the aforementioned released-last-week test runner having to check all your code after a big refactor. I mean, you can't call just anything with a "flatMap" method and a "return" method a monad! There are tons of nonsensical definitions that fit that description which are going to become very unpleasant to use quickly.
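
For reference, the laws in question, written with >>= (the "flatMap"):

  -- left identity:   return a >>= f   is the same as   f a
  -- right identity:  m >>= return     is the same as   m
  -- associativity:   (m >>= f) >>= g  is the same as   m >>= (\x -> f x >>= g)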


> They allow you to edit code without the aforementioned released-last-week test runner having to check all your code after a big refactor.

Oh wow. How did I ever live with big refactorings before?

> There are tons of nonsensical definitions that fit that which are going to become very unpleasant to use quickly.

I recently learned that, apparently, I've been using "monads" and "monadic composition" for years now, never knowing what it is. Can't remember any unpleasantness that would "quickly arise".

There are other things than blind following after a Platonic ideal.


The unpleasantness arises when writing Monad instances that don't follow the laws. Setting your disingenuousness aside, try using

  xs >>= f = concat (reverse (map f xs))
or

  Array.prototype.chain = function (f) {
    return reverse(this.reduce((acc, it) => acc.concat(f(it)), []))
  }

(I don't know how JS works, but you get it) instead of

    xs >>= f = concat (map f xs)
and enjoy refactoring your code, blissfully and consciously ignorant of the laws that make a monad a monad. The laws aren't just supposed to enrich the "life of the mind".


So, what problems will I experience while refactoring this code (if I ever wrote it)?


It won't satisfy the properties you've come to expect from the implementations you know for lists, optionals (Maybe), and so on.

Suppose you're going over some code, where you have something of the form

   xs >>= someFunction
(where xs is a list) and you change someFunction to "return":

   xs >>= return
Then you'd expect, from experience, that you can rewrite this to just

   xs
(which you can check works with all the monads you know about) but that doesn't hold for this wonky definition of the Monad instance for lists. Indeed, note that return for lists has the definition

   return x = [x]
so, with our bad >>=,

   [1, 2, 3] >>= return
   = concat (reverse (map (\x -> [x]) [1, 2, 3]))
   = concat (reverse ([[1], [2], [3]]))
   = concat [[3], [2], [1]]
   = [3, 2, 1]
which is not the same as [1, 2, 3].

https://wiki.haskell.org/Monad_laws


> Basically this is (in my mind) what's wrong with Haskell: it's overly concerned with the Platonic ideal.

I read this and I think what's got you rustled here is that Monad is such a generic concept. It's quite high-level, and so you can do novel things like write functions that don't know how they're executing, just that they are.

As an example:

    -- Config is a typeclass that enables getting a
    -- keyval from a config. The MonadIO constraint is
    -- there because reading the config might need IO.
    -- (Assuming grabKeyVal yields Strings here.)
    loadTarget :: (MonadIO m, Config a) => a -> m (String, String)
    loadTarget config = do
        v <- grabKeyVal "host" config
        w <- grabKeyVal "port" config
        return (v, w)
What does that code do? The answer (if it's written carefully) is that it depends on what the underlying monad is! And that's a good thing, in many cases. If the monad is Maybe + IO, then you have a conditional loader.

But if the monad is an array and IO then you can specify many hosts and many ports and this code enumerates them all. If that's passed to a ping function like so:

    loadTarget config >>= pingHostPort

    -- alternatively

    doPings config = do
      hostPort <- loadTarget config
      pingHostPort hostPort
Well then your code will do the right thing, but a different thing, based entirely on the types alone! And you can generalize this out to even more powerful types. For example, you could write a web app whose server side could scan local network ports for you (why? you're a malicious hacker, of course!). In that case it might make sense to use the Continuation monad.

tl;dr and finally:

You say this is stupid abstract stuff, but the folks delivering features to you in the Javascript world disagree. You have generators now, which are a much more real and fair explanation of how to model monads in Javascript than that silly code snippet you posted that doesn't capture the spirit of them at all.

What's more, careful application of these concepts leads to libraries which are just better than anything you can have without appealing to generators. A great example of this is purescript-config. Here is an actual (redacted) sample of some code I use at work in an AWS Lambda function to read the environment:

https://gist.github.com/KirinDave/9af0fc90d005164743198692f3...

So I have complete error reporting (full sets of missing keys) just by describing how to fetch the values from the environment. I can transparently switch to any other file type under the covers by swapping out "fromEnv". I only know that there is a key-value store in there.

Doing this in OO is really, really hard to get right, because imperative OO code cannot easily parameterize the execution strategy hierarchically without appealing to generics and ad-hoc polymorphism. That's hard.

The applicative style adopted here is very simple to re-interpret, because we can parameterize code on monads and applicatives (which explain the execution strategy of this code beyond its coarse structure) as we see fit.

You can do that with generators but it's frustratingly hard. Doing it in a truly generic way? Even harder. For this approach, the results are free (and Free, but hey).

I can give you other examples of how Purescript makes certain difficult aspects of javascript simply vanish, if you'd like.


Thank you for examples.

I have one complaint though:

> Doing this in OO is really, really hard to get right,

Why do you equate FP with monads? Moreover, why do you equate FP with Haskell/Purescript (statically typed FP with monads)?


The use of monads is a side-effect (ha!) of committing to purity throughout a language, and that's what FP is being equated to: pure statically-typed FP.

(You can argue about how justified that is, of course. I'm not going into that, but you might want to see what John Carmack has to say[0]. No, he doesn't end by saying "we should all convert to the church of Haskell now", but he does talk about how large-scale game programming refactors are made easier when you're working with no (or very disciplined) side effects.)

Monads are not the only way to deal with effects while keeping purity, although they were discovered first: algebraic effect systems as in Koka[1] (and as simulated in Idris or PureScript) are another alternative. Koka infers effects, so it's probably easier for a C-family programmer to pick up (I know little about it, though).

[0]: https://www.youtube.com/watch?v=1PhArSujR_A

[1]: https://www.microsoft.com/en-us/research/project/koka/



