Destructuring and Recursion in ES6 (raganwald.com)
115 points by StylifyYourBlog on Feb 2, 2015 | 53 comments



Hah, I'm tempted to go through the OCaml 99 problems (which were inspired by the Prolog version) using ES6 [1]. Turns out, with recursion and a head and tail function, you can do virtually anything with lists.
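For a taste, here's a sketch of problem 1 (find the last element of a list) in that style. `last` is my name for it, not something from the article:

```javascript
// OCaml 99-problems #1, via head/tail destructuring and recursion.
// Returns undefined for an empty list.
const last = ([first, ...rest]) =>
  rest.length === 0 ? first : last(rest);

last([1, 2, 3])
//=> 3
```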

I hope the popular JS engines optimize the hell out of the gathering operator, and add TCO (it's in the ES6 spec, evidently). Then we won't even need to cross our fingers when we describe JS as a functional language to our peers.

And with the increasing emphasis on immutability, as preached by the React crowd, with cross seeding from the Clojurescript-React connections, JS grows even closer to being a Scheme [2].

If ES7 would lock down 'use strict' mode against even more of JS's weak-typing bloopers, things would be better yet. I don't see why they shouldn't. The whole reason for 'use strict' is to give people who need backwards compatibility an out.

Incidentally, I'm also having great success using ES6 arrow functions at work, thanks to 6to5 [3]. I do a lot of d3 work, and arrow functions have made using d3, particularly the ubiquitous d3 accessor functions, so much nicer. Highly recommended.

For the first time since Node, I'm actually getting excited about Javascript's prospects. Now, if only they'd add types, and sum types...

1. http://ocaml.org/learn/tutorials/99problems.html

2. No, I don't actually believe that JS is a Scheme in disguise.

3. https://github.com/6to5/6to5


> If ES7 would lock down 'use strict' mode against even more of JS's weak-typing bloopers, things would be better yet. I don't see why they shouldn't. The whole reason for 'use strict' is to give people who need backwards compatibility an out.

Why shouldn't we? Because we (as in browser vendors) can't just break compatibility with all the websites that use strict mode today — that's a bad (*bad*) experience for our users, to have something they used every day in version x of our browser not work in version x+1. The "use strict" pragma won't ever be used for anything more than the ES5 strictness, and there's an increasing unwillingness to introduce further mode-switches. But never is a long time, and I think many of us would like to see something like TypeScript integrated into the language, and possibly a strictly typed mode introduced.


> Why shouldn't we? Because we (as in browser vendors) can't just break compatibility with all the websites that use strict mode today

OK, fair enough. That was admittedly a glib comment on my part. However, if JS ever wants to be more than a compilation/transpilation target in the long run, it will need to face this challenge and come up with a solution.

Static typing is a definite solution, but I'm skeptical that this will happen in ES7 (or perhaps I just don't want to get my hopes up).


Some form of gradual typing is almost certain to reach ES eventually. I don't think anyone is really against it. Whether that goes far enough to make everyone happy… well, it probably won't. But I'm somewhat hopeful that all the research being done around gradual typing, and around TypeScript at MSR in particular, will leave us with something truly useful that solves 99% of the use cases.


While I think the whole pattern matching + recursion thing is important to understand, once it's understood, I think it's usually better to move on to combinators like fold/reduce/map/flatmap and such. Plus Javascript still lacks the ability to branch on a pattern match.


> While I think the whole pattern matching + recursion thing is important to understand, once it's understood, I think it's usually better to move on to combinators like fold

Of course, `fold` is just another language for expressing pattern-matching and well-founded recursion, right? (Maybe I've got my theory mixed up, but do notice that I snuck the word "well-founded" in there.) Then, of course, the other combinators that you mention (`reduce` / `map` / `flatmap`) are all just folds.

On the other hand, this shouldn't be taken as a refutation of your argument; recognising and making prettier / more lightweight / even just more visible a common pattern is valuable, even if it does not, strictly speaking, increase computational power.


Yep, going from arbitrary recursion to `fold` to `reduce` to `flatMap` to `map`, you're actually discarding power and expressiveness. But the good news is that you're exchanging raw expressiveness for a tool that fits the job at hand more precisely.
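To make the trade concrete, here's `map` recovered from a fold in ES6 itself, using the built-in `reduce` (the name `map` here is mine, shadowing nothing from the article):

```javascript
// map expressed as a left fold: each step appends the transformed head.
const map = (fn, array) =>
  array.reduce((acc, x) => acc.concat([fn(x)]), []);

map(x => x * 2, [1, 2, 3])
//=> [2, 4, 6]
```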


I agree that they're not terribly practical ways to program (I wouldn't use them in production). Still fun to play with.


Any links to your work with arrow functions and d3?


I've only started using ES6 in anger over the past few weeks, so I don't have anything public at this time. However, if you're familiar with d3 at all, then you know this pattern well:

    someSelection
      .attr("r", function(d, i) { return i * 20; })
Well, with ES6, this becomes so much nicer to write:

    someSelection
      .attr('r', (d, i) => i * 20)
Nothing earth-shattering, but when you're writing dozens of these accessor functions in each function, it makes d3 much easier to work with.


It's hard to believe this is JavaScript. I write that with a lot of love. These are some of my favorite language features, but it's going to take some adjustment to be comfortable using and reading them in JS.


The more I see and learn about ES6 the more I think JS code is going to look completely different in a couple years than it looks today - and in a good way.


Wait till you see ES7 async and await :-D http://imgur.com/HvQbw48


How do you return a result (this function isn't doing so)? I'm guessing getJSON could be returning a Promise but there must be a way to do the same in an async function.

EDIT: here's the spec; looks like you can just return: https://github.com/lukehoban/ecmascript-asyncawait
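A sketch of that, with a stubbed-out `getJSON` (the real one would hit the network; mine just resolves immediately). An `async` function always returns a promise, and a plain `return` supplies its resolution value:

```javascript
// Stand-in for a real network call.
const getJSON = url => Promise.resolve({ url, ok: true });

async function fetchStatus(url) {
  const data = await getJSON(url);
  return data.ok; // a plain return: this becomes the resolved value
}

fetchStatus('/api/thing').then(ok => console.log(ok));
// logs: true
```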


Which can be used today via 6to5!


I had first seen async/await in traceur-compiler. I was using it with Node.js' fs module two years ago, I believe.


6to5 compiles ES7 too?



ES6 makes lemonade out of a pineapple.


ES6 transpiling has been a bit of a revelation to me lately. Object destructuring is also exceedingly useful.

On a bit of a tangent from the article, I'm interested by the use of const as the default for variable declaration. My background is such that I've always considered constants as things that are class level constants (thank you Java), but reading up on Swift has made me rethink this.

Clearly const has the potential to reduce runtime errors. I'd appreciate it if anyone could recommend any further reading on the subject.


I'm not sure about the best possible resource to read, but in the tradition of functional programming, immutability is preferred in nearly all cases. This is particularly important when it comes to data with long-lived scope (e.g. members of objects, data within the scope of closures, etc.) because the functions that operate on non-constant data lose purity and become more difficult to reason about and test. In other words, mutability leads to side effects, wherein operations on data in one part of a program are visible within other functions, without explicitly being passed in.

In many functional languages, mutation of variables is only allowed in special cases, and in some languages, it's not allowed at all. This can be weird if you learned programming in a tradition where modifying variables is one of the first things you were taught, but once you get used to not mutating your variables, the benefits really start to shine. As you mentioned, it often prevents entire categories of errors. For instance, I haven't written a for loop with an index variable in well over a year, in favor of using map/flatMap/filter/fold and friends, and for that reason, I can't remember my last fencepost error.
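A small illustration of that style (my example, not the parent's): no counters, no mutation, and the original array is untouched.

```javascript
const words = ['map', 'filter', 'fold'];

// The classic indexed loop, rewritten as a pipeline.
const shouting = words
  .filter(w => w.length > 3)   // keep the longer words
  .map(w => w.toUpperCase());  // transform what's left

// shouting is ['FILTER', 'FOLD']; words is unchanged.
```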

Like I said, I don't know the best resource, but learning Scala was really influential for me in immutable programming, and the Functional Programming Principles in Scala Coursera course [1] and the (unrelated) Functional Programming in Scala book taught me a lot of what I know on the topic.

[1] https://www.coursera.org/course/progfun

[2] http://manning.com/bjarnason/


In JavaScript const is not immutable.

    const foo = {};
    foo.bar = 'baz';
This is valid: you cannot reassign foo, but you can mutate it.

Native immutables in JavaScript would be great, but would be hard to pull off given prototypical inheritance.

EDIT: Now const + Object.freeze gets you close to immutable. I'm not sure what that means for inherited functions, can the parent prototypes not be mutated either?
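On that EDIT question: no. `Object.freeze` is shallow and applies only to the object's own properties, so nested objects and the prototype chain stay mutable unless you freeze each of them too. A quick check:

```javascript
const proto = { greet() { return 'hi'; } };
const obj = Object.freeze(
  Object.assign(Object.create(proto), { nested: { x: 1 } })
);

obj.nested.x = 2;                // works: the inner object was never frozen
proto.greet = () => 'hijacked';  // works: the prototype is not frozen

obj.nested.x   //=> 2
obj.greet()    //=> 'hijacked'
```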


Yeah, this is true of the `val` statement in Scala too. I suppose, though, that in Scala one would probably make all data members of classes `val` as well, whereas in JavaScript there's no in-line syntax for constant object properties (Object.freeze being a bit verbose to use everywhere).

But even that limited form of enforced immutability is helpful. I personally find it easier to understand a complex function when I can assume that no names are being rebound.


Double down on Rúnar's book! You have only learned half of Scala until the day that "FP in Scala" is trivial to you.


Ta muchly - I shall investigate.


I've never liked this corner of JS, and haven't kept up on ES6 or any of the modern dialects of JS, but it looks like `undefined` is still hanging around as a first-class thing?

    const length = ([first, ...rest]) =>
      first === undefined
        ? 0
        : 1 + length(rest);
Having worked in languages that support multiple (destructuring or not) function definitions, this doesn't look quite right.

Here's something I typed into chrome's JS console:

    > [1,2,3].length
    3
    > [undefined, 2, 3].length
    3
Wouldn't the above-defined ES6 `length` function return 0 for the latter array?

To mock up a similar syntax that would handle this more gracefully (and more sensibly):

    const length = case
        ([]) => 0,
        ([first, ...rest]) => 1 + length(rest);
Wouldn't it be better in general to be able to destructure arguments using syntax? If you can't use a case-like syntax for this, what do you do to be able to handle different structures?
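Lacking a `case`-style construct, the usual ES6 workaround is an explicit structural test before destructuring. A sketch, which handles `[undefined, 2, 3]` correctly:

```javascript
const length = (array) => {
  if (array.length === 0) return 0;  // the "empty" branch, tested explicitly
  const [, ...rest] = array;         // destructure only the non-empty case
  return 1 + length(rest);
};

length([undefined, 2, 3])
//=> 3
```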


From the footnote:

"Well, actually, this does not work for arrays that contain undefined as a value, but we are not going to see that in our examples. A more robust implementation would be (array) => array.length === 0, but we are doing backflips to keep this within a very small and contrived playground."

http://raganwald.com/2015/02/02/destructuring.html#fn:wellac...


ah, thanks for pointing that out, I came here and ctrl+f'ed for 'undefined' but did not do the same in the article.


Yes, the ES6 `length` function would return 0 for [undefined, 2, 3]. I hope it was only used as an example in the article and nobody is planning to really use this. (Edit: see sibling comment; the article is already aware of this issue.)

And yes, ES6 pattern matching is still quite limited.


I've been pondering a few style questions recently:

1. When to use 'function' vs '=>'?

Leaving aside hoisting and handling of 'this', 'function' and '=>' are usually interchangeable:

    function foo(x) {
        // do something
    }

    let foo = x => {
        // do something
    }
I prefer to keep using 'function' because it's immediately obvious to the reader that we are creating a function. With '=>', the reader doesn't know if foo is a function or not until reaching '=>', which is located after the argument list. There's also almost no terseness gain. A potential win for '=>' though would be the ability to declare the function as 'const'.

With closures, '=>' wins because the reader already expects a function, and the terseness gain is significant:

    somearray.map(function(x) {
        // do something
    });

    somearray.map(x => {
        // do something
    });
2. When to use {} around the function body?

    somearray.forEach(x => /* a single statement */);

    somearray.forEach(x => {/* a single statement */});
{} around the function body are optional if there's only a single statement. However, for clarity, I still use {} when I want to make it clear that we are not using the return value. I think that might be overkill though: the reader should already expect that 'map', 'filter' and friends will use the return value, while 'forEach' will not.

3. 'let' or 'const'?

This one is already mentioned in a sibling comment, so let's keep the discussion there.

I realize that these are tiny style decisions that will probably incite endless bikeshedding, but I'm still interested in hearing opinions from fellow HNers. :)


Haskell possesses similar syntax, save that one must prefix an anonymous function with \, like so:

    \x -> x * 42
It's possible as well to use it bare like that, but only in very particular places, so it's generally much clearer to use it wrapped in parens:

    (\x -> x * 42)
Perhaps some similar convention would make it clearer where a lambda is being used in .js?


I wish named arrow functions could make it into the spec, but until then, I'll personally stick with the full, named function declaration for all of my functions because it makes stack traces much more readable, which far outvalues the saved keystrokes that arrow functions provide when building non-trivial applications.


  const description = (nameAndOccupation) => {
    const [[first, last], occupation] = nameAndOccupation;  
    return `${first} is a ${occupation}`;
  }
  description([["Reginald", "Braithwaite"], "programmer"])
  //=> "Reginald is a programmer"
This is going to make code a lot more readable. I'm excited!


Me too. I'm hoping that V8 starts to support destructuring soon (https://code.google.com/p/v8/issues/detail?id=811) so I can optimize my io.js code.


Or does it? You are passing arrays of arrays, with a convention based on potentially arbitrary index positions, which makes no sense to me at first read.

Is that how you would pass generic person data around in real-world code?

Is that how you skip i18n for strings, and ignore occupations that start with vowels?

Destructuring is nice, like the other new features in ES6, but abusing these just because they are there... no, I don't think that's going to improve any code or any readability.


You can destructure objects as well.

So:

    const description = (nameAndOccupation) => {
      const { name: { first, last }, occupation } = nameAndOccupation;
      return `${first} is a ${occupation}`;
    }
    description({ name: { first: "Reginald", last: "Braithwaite" }, occupation: "programmer" });
And this last _is_ in fact how real-world code would pass person data around. Your i18n points stand, of course.


That's really nice. Is it possible to do the destructuring directly in the argument list?


Yes:

    const description = ({name: {first, last}, occupation}) => `${first} is a ${occupation}`;
Edit: just noticed that `last` is not used, so we can remove it from the destructuring:

    const description = ({name: {first}, occupation}) => `${first} is a ${occupation}`;


If you are passing JSON around, arrays may be better than key/value objects for streams of data... if you are passing tabular data from one service to another, you may very well pass an array that you then want to destructure into an object.

For that matter, maybe even MessagePack, protocol buffers, etc. over 0MQ or direct sockets... although straight JSON + gzip is pretty effective.


How is that more readable than:

    function describe(nameAndOccupation) {
        return `${nameAndOccupation.name.first} is a ${nameAndOccupation.occupation}`;
    }
?

When people use features purely for the sake of using those features, the result is not "more readable". Ever notice that pseudocode is almost always imperative?


I should have been a little more explicit: the string interpolation, though not the focus of the article, is part of what interests me w.r.t. readability.


Obligatory mention that 6to5[0] lets you use all this stuff, and more, today. I've been using it for three months now and it has totally transformed the way I write JS applications. It's awesome.

[0] https://6to5.org/


Also coming to ES6 is destructuring objects:

  function doSomething(aObj) {
    let { x, y } = aObj;
  }
  
  doSomething({ x: 1, y: 4 })


Yet another validation of Greenspun's tenth rule. Now if only ES6 would get proper tail recursion then we'd be all set!



In that case, I'm shocked that no Lispers have shown up to castigate raganwald for writing something like

  fn(first, foldWith(fn, terminalValue, rest));
With an accumulator foldWith could call itself directly! b^) Actually this indicates to me that ES6 should implement folds itself.
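A sketch of that accumulator version (keeping the article's name `foldWith`): note that it quietly becomes a left fold, and the self-call in the last line is now in tail position, which is exactly what TCO would make stack-safe. It shares the article's caveat about arrays that contain undefined:

```javascript
// Left fold with an explicit accumulator; the recursive call is a tail call.
// Caveat (as in the article): stops early on arrays containing undefined.
const foldWith = (fn, acc, [first, ...rest]) =>
  first === undefined
    ? acc
    : foldWith(fn, fn(acc, first), rest);

foldWith((a, b) => a + b, 0, [1, 2, 3, 4])
//=> 10
```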


So basically pattern matching, shorter syntax for lambdas, string manipulation à la Ruby (but with backquotes, why?) and… conflating lists with arrays EVERYWHERE in the language? No TCO? A `map` (sorry, `mapWith`) that creates then concatenates lists?

2015 sounds amazing!


That seems quite a negative view. The features mentioned in the article are only a small part of what is coming in ES6.[1]

The “fat arrow” syntax for defining anonymous functions is much shorter and neater than the cumbersome notation we’ve had in the past:

    items.map(function (item) { return do_something_with(item) })
becomes

    items.map(item => do_something_with(item))
and there are both expression and statement variations of fat arrow functions, so we don’t need the return in this sort of situation either. There are also some other significant differences, particularly in what “this” means within the anonymous function.

Combine that cleaner notation with the introduction of generators[2] and we can have lazy fluent interfaces for transforming data efficiently, along the lines of:

    items.map(item => item.interesting_property)
         .filter(item => is_wanted(item))
         .reduce((item1, item2) => combine_usefully(item1, item2), initial_value)
It’s not yet completely clear how we will take advantage of these possibilities in standardised ES6, because being backward compatible is a priority so you can’t just make sweeping changes like having all the existing Array.prototype functionality become lazy. This article[3] gives some ideas of how the kinds of utility library we’ve used in the past might look if they were written to take advantage of ES6’s new features, and presumably even if ES6 doesn’t include this kind of generator-based lazy functionality as standard it won’t take long before libraries are written to do it if they haven’t been already.

Incidentally, that emphasis on backward compatibility also explains the use of back-ticks for strings with substitution. You need a new syntax that doesn’t change any existing behaviour, so regular single- and double-quoted strings can’t be used.
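That backward-compatibility point in concrete form: the `${...}` syntax is inert inside ordinary quotes, so no existing string changes meaning. Only the new backtick literal interpolates:

```javascript
const n = 3;
const plain = 'total: ${n}';   // ordinary string: left exactly as written
const interp = `total: ${n}`;  // template literal: substitution happens

// plain is 'total: ${n}', interp is 'total: 3'
```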

There’s a lot more to ES6 besides just these areas. The article and this HN discussion are simply concentrating on some of the more interesting data-munging parts of it. For example, it also finally has a useful module system.[4] It really is quite a step up from today’s JavaScript in actual use; perhaps a good analogy would be the shift from Python 2 to Python 3, where the older version still does a decent job but a lot of little details are better in the newer one and the cumulative effect feels like a qualitative improvement in the language.

[1] In practice, these features are already available today with only a small amount of effort, because you can use a transpiler like 6to5 or Traceur to convert ES6 code into ES5 that will run quite happily in any modern browser. These transpilers are straightforward to install and literally a one-liner to run, similar to installing SASS to convert an SCSS stylesheet to CSS for use in the browser. They are already integrated nicely with most other popular tools like Gulp, Grunt and Browserify. At worst you need a polyfill for a few features, but for example 6to5 provides one polyfill that does everything, so again this is a very simple thing to incorporate into your project if you do need it.

[2] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

[3] http://aaronstacy.com/writings/if-underscore-was-written-in-...

[4] Very sensibly, the transpilers used to work with ES6 today can just turn those ES6 modules into AMD or CommonJS modules for compatibility with other existing tools. For example, you can use a 6to5 plug-in with Browserify, and it will convert ES6 modules into CommonJS ones that Browserify itself can then work with in the usual way. Again, this is yours for the price of installing the plug-in using npm and an extra command-line option to Browserify, or something similarly straightforward using other package managers/build tools.


    items.map(item => item.interesting_property)
         .filter(item => is_wanted(item))
         .reduce((item1, item2) => combine_usefully(item1, item2), initial_value)
I'm not a JS programmer, but isn't this just

    items.map(has_interesting_property)
         .filter(is_wanted)
         .reduce(combine_usefully, initial_value)

?


Sorry, you’re right, that wasn’t a very good example.

The use with `map` really is different: with a fat arrow, you can easily pick out a single property from each object in your container (not just whether it exists, which seems to be what you were getting at in your alternative).

But yes, the fat arrow is unnecessary in the trivial filter and reduce examples I gave. I should have provided a slightly more involved example, such as this:

    items.map(item => item.interesting_property)
         .filter(item => is_wanted(item, some_criteria))
         .reduce((item1, item2) => combine_usefully(some_context, item1, item2), initial_value)
Now you’ve got some extra arguments to supply, so you can’t just pass the functions in directly when calling `filter` and `reduce`.

In today’s JS (meaning ES5) you would generally need the older anonymous function notation. You might still have some shortcuts available, such as `Function.bind` if the order of parameters lends itself to convenient partial application, as in the `reduce` case here:

    items.map(function(item) { return item.interesting_property })
         .filter(function(item) { return is_wanted(item, some_criteria) })
         .reduce(combine_usefully.bind(this, some_context), initial_value)
Of course in ES6 you can just choose whichever representation you prefer:

    items.map(item => item.interesting_property)
         .filter(item => is_wanted(item, some_criteria))
         .reduce(combine_usefully.bind(this, some_context), initial_value)
The other thing to watch out for when jumping between how today’s mainstream JS handles these kinds of manipulations and the new ES6 tools is what happens with `this`. It’s one area where current anonymous functions and the new fat arrow notation do behave differently, and unlike the `bind` example above, it’s not necessarily explicit what `this` will be when you start passing functions into other functions.


Completely unrelated, but even assuming the author of the article posts here, are you from Columbus OH? I've gone to One-Line Coffee (the source of the first pic) myself a number of times!


Don't want to come off overly bitter, it's certainly interesting, but I'd like to see some practical uses of this, besides reimplementation of map() and other "low-level" functions...


How about multi-value return from functions?

    x = foo(); oh = x[0]; no = x[1];

vs

    [oh, yeah] = foo()
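A runnable version of that comparison (`minMax` is just an illustrative function of mine, not from the parent):

```javascript
// "Returning two values" as an array, unpacked in a single declaration.
const minMax = (array) => [Math.min(...array), Math.max(...array)];

const [lo, hi] = minMax([3, 1, 4, 1, 5]);
// lo is 1, hi is 5
```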



