
Currying is not idiomatic in JavaScript - fagnerbrack
http://2ality.com/2017/11/currying-in-js.html
======
tolmasky
Some of these points are true of currying in general -- IMO most of the time
you can't tell what's a function call and what's a curry (Haskell people would
say "it's the same thing!"). However, separating these concepts leads to a lot
of super interesting results. I wrote about this in this blog post:
[http://tolmasky.com/2016/03/24/generalizing-jsx/](http://tolmasky.com/2016/03/24/generalizing-jsx/).
I argue that the JSX syntax makes for an awesome syntax for currying.

Edit: Some follow up articles:

1\. Using this scheme, you can actually have default parameters in currying:
[https://runkit.com/tolmasky/default-parameters-with-generic-jsx](https://runkit.com/tolmasky/default-parameters-with-generic-jsx)

2\. An example of how to apply this to Babel syntax trees:
[http://runkit.com/tolmasky/generic-jsx-for-babel-javascript-ast](http://runkit.com/tolmasky/generic-jsx-for-babel-javascript-ast)

~~~
gizmo686
>Haskell people would say "it's the same thing!"

Haskell can get away with this because it is a pure language. If you have `f a
b`, you do not really care when f gets evaluated (or, at least, when you do
care currying is generally not what makes figuring it out difficult). Even
without currying, Haskell makes it difficult to know when a function is
evaluated.

------
agentultra
It _can_ be idiomatic in Javascript if you choose a functional style.

I think Javascript has become a _kitchen-sink_ language. It is, as you say, a
language that affords many styles. If you pick a subset of the language that
is functional, you can see that currying is quite idiomatic in that context.

That being said, I think library support is a necessity when choosing a more
functional approach in Javascript, as the "core" experience in JS caters to an
imperative, C-derived language (as per many of the examples). If you work with
Ramda you can get most of what you need today to make working with currying,
and its compositional capabilities, pleasant.

Consider _R.curryN_ :

    
    
        const add = R.curryN(2, (a, b) => a + b)
    

You can now call this:

    
    
        add(1)(2)
        add(1)
        add(1, 2)
    

And you have options in terms of order of parameters:

    
    
        const sillyAdd2 = add(R.__, 2) // a silly example
        sillyAdd2(3) // => 5
    

And if you want even more flexibility there is Fantasyland[0] and Sanctuary[1]
among others.
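
For intuition, here is a minimal sketch of how an n-ary curry helper in the
spirit of _R.curryN_ could be written in plain JS. This is illustrative only,
not Ramda's actual implementation (which also supports the `R.__`
placeholder):

```javascript
// Minimal sketch of an n-ary curry helper (illustrative only).
// Collects arguments across calls until `n` of them have arrived,
// then invokes the underlying function.
function curryN(n, fn) {
  return function curried(...args) {
    if (args.length >= n) return fn(...args);
    return (...rest) => curried(...args, ...rest);
  };
}

const add = curryN(2, (a, b) => a + b);
add(1)(2); // 3
add(1, 2); // 3
```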

Javascript is fertile ground, in my experience, for getting developers to
experiment with and adopt functional programming paradigms.

[0] [https://github.com/fantasyland/fantasy-land](https://github.com/fantasyland/fantasy-land)

[1] [https://github.com/sanctuary-js/sanctuary](https://github.com/sanctuary-js/sanctuary)

------
mattbierner
In my experience, currying using helper libraries also makes debugging more
difficult, as you have to step through higher-order functions and cannot
always easily tell how the argument values were calculated. This can be
mitigated somewhat, and the tools are improving, but I generally found that
the cleaner I made my JS code using functional programming concepts, the more
difficult it was to debug when everything went wrong. And without a type
system, I had to debug much more than in a language like Scala.

~~~
taeric
It is funny, because this is the argument against most use of macros in lisp.
To the point that everyone knows that macros make it harder to debug code.

The lost part is that few people actually "debug" in the "interactive
debugger" sense of the word anymore, such that many things that were frowned
upon in lisp and related languages are now getting a lot of exposure. And I
feel it is truly sad how few people know how to step through a program
nowadays. (Obviously projecting some. Maybe I'm in an odd corner where nobody
uses an interactive debugger, but it has been a long time since I found
coworkers that were used to using one. And, in this, I'm including REPL-based
workflows. Yes, I know they are strictly different. I just feel they are in
the same family for this discussion.)

~~~
barrkel
When the abstraction stack gets high enough, stepping turns into a bit of a
minefield - if you step into where you should have stepped over, you can get
lost in irrelevant weeds. Microsoft and some others have tried to put step
barriers in code that prevent accidental step-into for this reason ("Just My
Code").

But it's also a change in coding conceptualisation. Personally I understand
things from the ground up, from composition of very low level components. I
find it hard to reason about a system's performance and failure modes without
doing so. But increasingly developers have only a surface area knowledge about
the things they use, and instead know a lot in terms of breadth instead of
depth. This approach is more effective for building something quickly by duct
taping disparate things together, and this is the majority of modern
commercial coding. These people don't need debuggers: it gives them too much
information. They need examples, and they code by idiom and analogy instead.

Things like event dispatchers and async also greatly diminish the power of the
debugger.

~~~
taeric
Agreed. And I actually fully agree with this being a strong reason not to use
macros in lisp. Made a bit more important there, because stepping through a
macro involves not just being in other code, but jumping between phases.

That said, my main point is that where many folks used to advise caution, such
that instead of promoting lisp, they would create new languages where they
could hide some of the magic, it seems that people are gung-ho on many of the
higher abstractions now, regardless of the mental costs they bring.

------
RA_Fisher
I'm not sure that idiomatic is a very rigorous idea when it comes to
programming languages. Anyone have good reads in that vein?

~~~
sethrin
It's not a rigorous idea, but I'm not sure what you might be wanting to read
about, exactly. However, given that most of the language features that the
article described are relatively new to the language, or are proposed language
features, I think the author has a strong claim.

------
seniorsassycat
Has anyone read the semantics of the linked partial application proposal [0]?

> Note that this also means that more involved references are captured in
> their entirety and should be stored in a local variable if they may have
> unintended side-effects should the partially applied function result be
> called more than once
    
    
            const a = [{ c: x => x + 1 }, { c: x => x + 2 }];
            let b = 0;
            const g = a[b++].c(?);
            b; // 0
            g(1); // 2
            g(1); // 3
            b; // 2
    

`a[b++].c` is not evaluated until the partially applied function `g` is
called. Even `a[b++]` and `b++` are not evaluated.

It is unclear to me when `const g = Math.random().toFixed(?)` would evaluate
`Math.random()`.
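
One way to read the quoted semantics is that the whole callee expression is
re-evaluated on every call of the partially applied function. A rough
desugaring of the array example into a plain closure (this is my reading, not
the proposal's normative semantics):

```javascript
const a = [{ c: x => x + 1 }, { c: x => x + 2 }];
let b = 0;

// Desugaring of `const g = a[b++].c(?)` under this reading: the callee
// expression `a[b++].c` runs again each time g is called.
const g = x => a[b++].c(x);

const r1 = g(1); // 2 (evaluates a[0].c; b becomes 1)
const r2 = g(1); // 3 (evaluates a[1].c; b becomes 2)
```

Under that reading, `Math.random()` would likewise be re-evaluated on every
call.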

[0]: [https://github.com/rbuckton/proposal-partial-application#semantics](https://github.com/rbuckton/proposal-partial-application#semantics)

------
sshine
tl;dr

Currying is nice when whitespace is function application, and ugly and
inconvenient when you have to write foo(x)(y) and the libraries you deal with
aren't consistent about this. Currying in Haskell is nice for exactly these
reasons.

Also, [https://www.amazon.de/Das-Curry-Buch-Funktional-programmieren-JavaScript/dp/386899369X](https://www.amazon.de/Das-Curry-Buch-Funktional-programmieren-JavaScript/dp/386899369X)

~~~
emodendroket
Did I miss something? I see no mention of white space in this article.

~~~
theoh
The article says

"Most functional programming languages with automatic currying have syntax
where there is no difference between add(1, 2) and add(1)(2)"

and goes on to talk about unidiomatic syntax.

In Haskell, the two options are "add (1,2)" and "add 1 2". The latter,
curried form, involving whitespace, works because Haskell parses it as an
application of add to 1, followed by an application of the resulting value to
2. This syntax is inspired by the lambda calculus, so it's not really
whitespace that's the concept; rather, juxtaposition of terms implies
application.

~~~
59nadir
> In Haskell, the two options are "add (1,2)" and "add 1 2" The latter,
> curried form, involving whitespace, works because Haskell parses it as an
> application of a to 1, followed by an application of the resulting value to
> 2.

This is a bit of a misrepresentation. There is nothing "uncurried" about using
these parens here. It's simply making a function call on two parameters take a
tuple of two values instead. There's no point to it at all and there's no real
value in doing it in your APIs. I have no idea why the Haskell wiki insists on
framing it like that.

Using the "curried form" has no effect on how the function call will perform
or behave at all. Partially applying functions can have performance
implications, though.

~~~
theoh
Consider, though, the Haskell functions "curry" and "uncurry", which
transform functions between these two styles. The two styles have different
types, so I don't see how it is a misrepresentation.

------
gizmo686
>Most functional programming languages with automatic currying have syntax
where there is no difference between add(1, 2) and add(1)(2).

This isn't true of any functional programming language I can think of.

What happens in most functional programming languages is that it is idiomatic
to make functions curried by default, and to only make non-curried functions
when you have a reason to.

For example, in Haskell, you could have either:

    
    
        add1 :: Int -> Int -> Int
        add1 x y = x + y
        
        add2 :: (Int, Int) -> Int
        add2 (x, y) = x + y
    

which would be called as:

    
    
        add1 2 3     --equivalent to (add1 2) 3
        add2(2,3)
    

There are even functions to convert between these:

    
    
        add1 = uncurry add2
        add2 = curry add1
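
For comparison, the same two converters are easy to write in JavaScript (an
illustrative sketch, not from any library):

```javascript
// JS analogues of Haskell's curry/uncurry for two-argument functions.
const curry = f => x => y => f(x, y);
const uncurry = f => (x, y) => f(x)(y);

const add2 = (x, y) => x + y; // "tupled" style
const add1 = curry(add2);     // curried style

add1(2)(3);          // 5
uncurry(add1)(2, 3); // 5
```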

~~~
biscarch
All functions in Haskell are curried by default

Your `add2` uses a tuple instead of currying whereas `add1` is already
curried. In ghci:

    
    
        Prelude> (+) 3 4 == (+ 3) 4
        True
    

So you could have an `add3` just by partially applying 3 (the section `(+ 3)`
is `\x -> x + 3`; for addition the argument order doesn't matter).

    
    
        Prelude> let add3 = (+ 3)
        Prelude> add3 5
        8
    

Hoogle describes uncurry as a function on pairs.
[https://www.haskell.org/hoogle/?hoogle=uncurry](https://www.haskell.org/hoogle/?hoogle=uncurry)

> uncurry converts a curried function to a function on pairs.

~~~
tome
> All functions in Haskell are curried by default

That's not really a meaningful statement. Functions are not really _anything_
"by default".

~~~
chriswarbo
Care to elaborate? I'd say functions in Haskell are many things "by default":

\- Curried

\- Partial

\- Lazy

\- Values

\- Recursive

\- etc.

~~~
tome
Let's elaborate on "curried" since that was what OP was interested in. How are
functions in Haskell "curried" by default? Sure, you can write

    
    
        f x y = 2 * x + y
    

and that will define a function that takes an Int, say, and returns a
function. But why is that any more "default" than

    
    
        g (x, y) = 2 * x + y
    

?

~~~
biscarch
All functions in Haskell are single-argument functions; multiple arguments
are syntactic sugar. The sugar can make it confusing to talk about, but if
you define an `add` function that takes two arguments x and y:

    
    
        add x y = x + y
    

it's sugar for multiple single-argument functions.

    
    
        add = \x -> \y -> x + y
    

When examining it this way, the `g` function above would translate to

    
    
        g = \(x, y) -> 2 * x + y
    

Which is a function taking a single tuple argument. It is still "curried by
default", but the argument being passed in is a single argument rather than
multiple, so we don't expand it to multiple single-arg functions. Perhaps it
is more illustrative to show the definition as the single argument it is,
rather than using Haskell's destructuring to pull x and y out of the tuple.

    
    
        g tuple = 2 * (fst tuple) + (snd tuple)
    

and a ghci session for completeness:

    
    
        Prelude> let g (x, y) = 2 * x + y
        Prelude> g (1,2)
        4
        Prelude> let y tuple = 2 * (fst tuple) + (snd tuple)
        Prelude> y (1,2)
        4
    

So when we say that Haskell functions are "curried by default", what we're
referring to is roughly the underlying single-argument nature of Haskell
functions.
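
The same desugaring can be transliterated to JavaScript, where the currying
has to be spelled out by hand (illustrative sketch):

```javascript
// A "two-argument" curried function is really nested single-argument
// functions, as in Haskell's \x -> \y -> x + y.
const add = x => y => x + y;

// The tupled version takes one argument that happens to be a pair.
const g = ([x, y]) => 2 * x + y;

add(1)(2); // 3
g([1, 2]); // 4
```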

------
always_good
I deal with stuff like [http://package.elm-lang.org/packages/elm-lang/core/5.1.1/Json-Decode#map7](http://package.elm-lang.org/packages/elm-lang/core/5.1.1/Json-Decode#map7)
and I'm not sure how "auto"-currying is ever worth it when it comes at the
expense of arity overloading and whatever else.

I'd rather have to explicitly curry functions at the callsite with anonymous
functions every time.

------
lerie82
I think currying is hot...

------
h2j24
Is currying useful other than as a shorthand for "value object" creation? You
can easily create the equivalent of a curried add function:

    
    
        class Adder {
            base: number;
            constructor(base: number) { this.base = base; }
            add(o: number): number { return this.base + o; }
        }
    

While slightly more verbose, I believe this form to be the more powerful of
the two because one has access to all of the features of imperative
programming and is easier to compose with other objects.
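
For comparison, the curried-closure version of the same thing is a one-liner
(plain JS sketch):

```javascript
// Closure-based equivalent of the Adder class: `base` is captured by the
// returned function instead of being stored on an object.
const adder = base => o => base + o;

const add5 = adder(5);
add5(3); // 8
```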

~~~
curun1r
Easier to compose with other objects, but harder to compose with other
functions. For example:

    
    
        const compose = (...f) => f.reduceRight((f, g) => (...x) => f(g(...x)));
        
        const add = x => y => y + x;
        const mul = x => y => y * x;
        const div = x => y => y / x;
        
        // (6x + 4) / 2
        const complexOp = compose(mul(6), add(4), div(2));
    
        // should return 50
        complexOp(16);
    

How would you accomplish this using classes?

~~~
h2j24

        class Answer {
            getAnswer(input: number) { return (input * 6 + 4) / 2; }
        }
    

All joking aside, you would achieve composition the same way, except using
interfaces.

    
    
        interface Operation<T> {
            apply(input: T): T;
        }
    
        class Addition implements Operation<number> {
            constructor(private n: number) {}
            apply(input: number): number { return input + this.n; }
        }
        class Multiplication implements Operation<number> {
            constructor(private n: number) {}
            apply(input: number): number { return input * this.n; }
        }
        class Division implements Operation<number> {
            constructor(private n: number) {}
            apply(input: number): number { return input / this.n; }
        }
    
        function compose<T>(operations: Operation<T>[], input: T): T {
            let last = input;
            for (const op of operations) {
                last = op.apply(last);
            }
            return last;
        }
    

Of course, the above code is slightly verbose, but even if you could remove
all the boilerplate it still doesn't look like good imperative code. This is
what confuses me about currying: when translated to the imperative
equivalent, it looks terrible.

------
megaman22
Currying is one of those things I haven't grokked completely. And, shit, I
just used grok as a past tense verb...

I'm not sure if I just haven't drunk enough koolaid or if curriers are beyond
the pale.

~~~
dmitriid
I find currying rarely usable in real life, because you rarely need a fixed
parameter to a function. And when you do, it's solved in a better way by
storing it in a state somewhere (even global variables will do in a pinch).

The closest real-world example I can think of right now is calling a web
service, which is usually something like:

    
    
       const baseUrl = 'https://some.tld'
       function callExternal(resourcePath, data) {
          fetch(baseUrl + resourcePath, data)...
       }
    
       // and then somewhere
    
       callExternal('/some/path', {some: data})
    

With currying you could do it like this:

    
    
       const baseFun = (baseUrl) => (resourcePath, data) => {
          fetch(baseUrl + resourcePath, data)...
       }
    
       const callExternal = baseFun('https://some.tld')
       // ^ you "pin" the first argument to always be 'https://some.tld'
    
    
       // and then somewhere
    
       callExternal('/some/path', {some: data})
    
    

I personally have had the need to write a curried function myself maybe twice
over the course of the past 17 years.

~~~
icebraining
Yes, closures can be used to achieve a similar effect to currying. That said,
they only work with functions you create yourself. If baseFun came from a
library, you'd have to write an extra function just to close over baseUrl.
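
For instance, if a hypothetical library exported only an un-curried
`baseFun`, the extra wrapper would look like this (the fetch call is replaced
by returning the URL so the sketch is self-contained):

```javascript
// Hypothetical library function we cannot modify.
const baseFun = (baseUrl, resourcePath) => baseUrl + resourcePath;

// Manual partial application via a closure.
const callExternal = resourcePath => baseFun('https://some.tld', resourcePath);

callExternal('/some/path'); // 'https://some.tld/some/path'
```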

Of course, one can wonder how many times one can usefully curry with external
functions, but I think that's partially a result of not having currying
syntax in the first place. For example, the fact that Ruby has blocks
massively influences the typical design of its libraries, when compared to
Python.

