ES6 Overview in Bullet Points (github.com)
146 points by bevacqua 689 days ago | 45 comments



Great writeup. I welcome most of the additions, but I somehow cannot get behind things like:

  var {foo} = pony      is equivalent to  var foo = pony.foo
  var {foo: baz} = pony is equivalent to  var baz = pony.foo

I am not sure why, but most languages get into a state where they seem to encourage non-readable code. What was wrong with 'var baz = pony.foo' to start with?


the core use case is

  var { foo } = pony
I agree this is kind of silly in isolation, but more often than not, it's used like this:

  var { foo, bar, baz } = pony
which is honestly not that hard to read, and is much better than

  var foo = pony.foo;
  var bar = pony.bar;
  var baz = pony.baz;
The destructured form is especially useful when you're referencing properties of pony a lot. Since you're going to see destructuring used like

  var { foo, bar, baz } = pony
typically, I think it's a very nice idea to always do your variable assignments like that, even when it's a little obtuse. I'd rather not see multiple styles for variable assignment.

  var { foo: f } = pony
is just there as an escape hatch in case foo already exists in the namespace. It is not something you would typically do.


It's not hard to read if you already know what it does. As it stands, it's a barrier to learning and a detriment to consistency in style.


That's a general argument against any added feature. The question is, is it useful enough? This particular feature seems valuable and cheap: I write code all the time that would be cleaner with it, and the meaning is natural enough that I independently invented essentially the same syntax years ago as a pair of Lisp macros (dealing in a-lists instead of hashtables).


> As it stands, it's a barrier to learning and a detriment to consistency in style.

Not true. It's actually super-easy to learn and meshes well with existing variable declaration styles.


I'm not sure why you spent two lines of text to say "I disagree."


I wish they'd let you return stuff this way, like:

    return {a, b, c} = foo;


What would be the point of that? Just return `foo` and destructure in the caller.
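
Roughly like this (getPoint is a made-up example):

    function getPoint() {
      return { x: 1, y: 2, z: 3 };   // just return the whole object...
    }
    const { x, y } = getPoint();     // ...and destructure at the call site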


That's definitely another option (and what I use now!), but I do what I posted in Elixir a lot so I guess I'm used to it.


But how is it different from just returning foo?


One difference is that you could return data to callers that you don't want them having access to.


Encapsulation/information hiding?


It's pretty handy when unpacking a JSON blob so you can manipulate individual fields:

  let { b64_img, height, width, filename } = JSON.parse(text)


I find it most useful for import statements:

  import {Button, Text} from 'react-native';

and in function declarations:

  function f({foo, bar})


However: import statements do not actually destructure the default export; they do something entirely different. They look up exported bindings.
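
A rough sketch of the difference (module and export names here are made up):

    // lib.js (hypothetical module)
    export default { a: 1 };
    export const b = 2;

    // main.js
    import { b } from './lib';    // binds the named export `b`; no destructuring happens
    import lib from './lib';      // grabs the default export object
    const { a } = lib;            // destructuring the default export is a separate, ordinary step
    // import { a } from './lib'; // would fail: `a` is not an exported binding
Named imports are resolved statically at module load time and remain live bindings into the module, which is part of why they aren't just sugar for destructuring.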


    var { email, full_name } = get_user();
    send_email(email, `Hi ${full_name}!`);


It's the new with.


Another crazy excellent (and free) ES6 reference is http://exploringjs.com/es6/ by Dr. Axel Rauschmayer - one of the ES6 committee members.

JS is a very complicated language, and the language spec is written in a way that, while I am sure is excellent for implementors and compatibility, is pretty much unreadable by language users.

Axel's book is quite authoritative and very readable. You may still go to the spec for finer points - but it sure is 10,000 times easier if you mostly understand the mechanism (and motivations) first.


For those looking for something more in-depth:

ES6 in Depth series @ Mozilla Hacks: https://hacks.mozilla.org/category/es6-in-depth/page/2/?utm_...

The first article in the above series (where you should probably start): https://hacks.mozilla.org/2015/04/es6-in-depth-an-introducti...

Another brief guide to ES6 features, with more code blocks and fewer bullet points:

https://github.com/lukehoban/es6features


Sweet! FWIW there's an in-depth guide on ponyfoo.com as well.

Starts here: https://ponyfoo.com/articles/a-brief-history-of-es6-tooling.

All articles: https://ponyfoo.com/articles/tagged/es6-in-depth


This is awesome, but can I just say foo/bar/baz drives me nuts. How about `{outerProperty: {innerProperty: 'innerValue'}}` ? That just seems so much easier to follow to me, especially in these deep-nested destructuring examples.
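
For instance, using the names above, a deep-nested destructuring reads like:

    const response = { outerProperty: { innerProperty: 'innerValue' } };
    const { outerProperty: { innerProperty } } = response;
    console.log(innerProperty); // 'innerValue'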

But anyway, seriously, this is awesome. The author is responsible for a large part of my ES6 knowledge.


Foo and bar drive me nuts. They feel antiquated and uninspired.

If we never see them again it will be too soon.


The advantage of foo/bar/baz is the cultural understanding that the contents aren't important. Its prevalence in programming is because programming languages have strict rules about where tokens are needed. We can't just wave them away, so we fill them with words that we all know mean "nothing important is meant by this".

Things get muddier when you need to say "the contents aren't important, just that it matches this other token over here".


If you're interested in using ES6 features, TypeScript provides a great way to get most of these features (and of course optional typing as well). Babel might be slightly more feature-complete when it comes to ES6 features, but TypeScript provides such a fantastic front-end dev experience that I definitely recommend it. (Biggest pain point is incorporating typings for libraries which are pure JS and not TS.)


> Temporal Dead Zone - Attempts to access or assign to foo within the TDZ (before the let foo statement is reached) result in an error

Can anybody explain why "let" variables are hoisted to the start of the block given that this Temporal Dead Zone exists? Is it an artifact of modern JS runtimes?

And given the TDZ what practical benefit does knowing that the variable is technically hoisted provide?


I've never liked the phrase 'variable hoisting'. It implies the compiler actively moves the variable declaration.

What's actually happening is lexical scoping: a variable declaration is associated with a lexical scope. For var declarations the lexical scope is the function; for let declarations, it's the block. When a variable is referenced, the lookup first checks the lexical scope of the reference, then walks up the chain of enclosing lexical scopes until it either finds the variable or hits the top scope, in which case the variable doesn't exist.

Lexical scoping is easy to reason about and fairly easy to calculate. The majority of languages used today use lexical scope (the only exception among popular languages I can think of is Perl, which lets you use either lexical or dynamic scope, though I'm sure there are others).

A consequence of this design is that you can't have multiple variables with the same name in the same lexical scope. Most languages will raise an error if you redeclare a variable - javascript's var is unusual in that it doesn't (redeclaring with let in the same scope does raise a SyntaxError).

Javascript is also unusual in that you can reference variables before they're declared or definitely assigned. It's these design choices, interacting with lexical scope, that give javascript such weird and notable behaviour that people have decided it needs a name - 'hoisting'.

So why are let variables 'hoisted' to the start of the block? Because that's how every other language does it. Because it's cheap and dead simple to reason about.

If it didn't, that would mean you could have multiple variables with the same name in the same block. That would be more difficult to keep track of, both for the compiler/runtime and for the programmer. It would also be largely pointless, because once execution gets to the code past the second variable declaration, you can't reference the variable created by the first declaration (unless you capture it with a closure).

    function example() {
        let a = false;
        console.log(a);
        let a = 10; // Past this point, I can't get to the first a anymore
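                    // (note: real ES6 rejects this redeclaration with a SyntaxError; it's written this way to illustrate the hypothetical)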
        console.log(a);
    }
The TDZ addresses the design decision of being able to reference a variable before it's declared.

    function example() {
        console.log(a); // Without a TDZ, this will print undefined. With TDZ, this will be an error.
        let a = 10;
    }
If you google around for examples of 'hoisting' gone bad and run through what would have happened if vars had TDZ, you'll see that all of them would be avoided.

Why is it useful to know both about block scoping and TDZ?

Block scoping lets you know that:

    function example(b) {
        let a = 10;
        if (b) {
            let a = 20;
            console.log(a); // This will print 20, because it refers to the variable in the if block
        }
        console.log(a); // While this will print 10, because it refers to the variable in the function block
    }
While TDZ lets you know that:

    function example() {
        console.log(a); // This will raise an error
        let a = 10;
    }


Thanks for the reply...I definitely get the advantage and popularity of lexical scoping. I'm not sure about your claim about every language doing hoisting though; take Java as an example:

    class Example {
      String f = "foo";
 
      void foo() {
        System.out.println(f); //foo
 
        {
          System.out.println(f); //foo
          String f = "bar";
          System.out.println(f); //bar
        }
 
        System.out.println(f); //foo
        String f = "baz";
        System.out.println(f); //baz
      }
    }
I understand that Java doesn't perfectly match var or let, in that the lexical scoping of its local variables is not the same as either. But Java really doesn't seem to be doing hoisting here, and it's certainly clearer what f refers to before the local variable is declared (though I like the explicit error the TDZ gives the best).


Java defines the scope of a local variable to be "the rest of the block in which the declaration appears". The variable declaration is 'hoisted' to the top of the scope, which is defined as where the variable declaration is. Tautological! The rest of your example then falls out of the fact that local variables can shadow member variables [1].

C# scopes local variables more similarly to javascript's "let".

    class Example {
        public void Foo() {
            Console.WriteLine(f); // Error: The name 'f' does not exist in the current context
        }
        
        public void Bar() {
            Console.WriteLine(f); // Error: Cannot use local variable 'f' before it is declared
            String f = "a";
        }
    }
In Bar, it knows about the variable f, evidenced by the fact the error message is different to Foo. In Javascript terminology, f was 'hoisted' to the top of Bar. In regular terminology, f is in the lexical scope created by Bar.

[1] Local variables can only shadow member variables, not other local variables. The justification the specification gives for shadowing is so that superclasses can introduce protected member variables without forcing subclasses to rename their local variables. You can still reference the member variable using "this.memberVariable".


[deleted]


I guess the point is that:

    // line 1
    { 
       // line 2
       let x = 5; // line 3
       // line 4
    }
    // line 5
What is the actual difference, for the end user of the language, between saying the scope of x is lines 2-4 but magic (the TDZ) makes it unavailable on line 2, vs. saying the scope is from line 3 to line 4 with the TDZ being an implementation detail?


One case where the distinction matters is this:

    function x() {
      console.log(a);
    }
    let a = 10;
    x();
If you thought 'a' was only in scope on lines 4 and 5 of the above snippet, you might assume that this snippet would return an error, but it doesn't; it prints 10. That's why it's a temporal dead zone, not a lexical dead zone.

That being said, I agree this is a weirdly overemphasized aspect of ES6. The TDZ is not some weird mysterious boogeyman. It's the thing that makes sense in nearly all cases.


Can you give me an example?


As someone who's been using ES6 for several months now, I was surprised to learn a few new things here! A huge wall of bullets, but a pretty useful one. Thanks!


Looking good and informative, but to be honest I am really interested in use cases. I think I understand how to use generators but I have no idea where I could use them in real-world scenarios. Same goes for WeakMaps, Proxies... Anyone care to give some examples?


Check out http://jlongster.com/A-Study-on-Solving-Callbacks-with-JavaS... and the follow up post. It's easy enough to start using generators alongside your existing async solution (callbacks or promises).

Proxies can be used for data-binding, similar to Object.observe.
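
For example, a minimal sketch of that idea (the listener list and names are my own, not any library's API):

    const listeners = [];
    const state = new Proxy({}, {
      set(target, key, value) {
        target[key] = value;                      // perform the actual write
        listeners.forEach(fn => fn(key, value));  // notify anyone watching
        return true;                              // report the assignment as successful
      }
    });

    listeners.push((key, value) => console.log(key + ' is now ' + value));
    state.count = 1; // logs "count is now 1"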

I've also yet to find a case where I need WeakMaps instead of an Object or Map. All the benefits seem to be a bit too theoretical, but I likely haven't dug into it hard enough.


I've used a WeakMap for a kind of memoization.

Assume an immutable object may undergo an expensive transform that returns an object, which is then stored in a WeakMap keyed to the immutable. Once there are no remaining references to the immutable, the garbage collector will clean up the cached entry; otherwise the expensive transform may be avoided with a lookup on the WeakMap.

It provides a pretty nice way to decouple your caching from the rest of the codebase.
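
A rough sketch of that pattern (doExpensiveWork is a stand-in for the expensive transform):

    const cache = new WeakMap();

    function transform(immutable) {
      if (cache.has(immutable)) {
        return cache.get(immutable);              // cheap lookup for objects we've already seen
      }
      const result = doExpensiveWork(immutable);  // stand-in for the expensive transform
      cache.set(immutable, result);               // weakly keyed: the entry can be collected with the key
      return result;
    }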


Check out Koa to see a great use case for generators. By yielding to asynchronous code (instead of using callbacks) you can write JS that looks like it's synchronous.
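
Not Koa's actual internals, just a minimal sketch of the idea: a tiny runner that drives a generator yielding promises (fetchUser/fetchPosts are hypothetical promise-returning functions):

    // drives a generator that yields promises; resumes it when each promise settles
    function run(genFn) {
      const gen = genFn();
      return new Promise((resolve, reject) => {
        function step(value) {
          let next;
          try { next = gen.next(value); } catch (err) { return reject(err); }
          if (next.done) return resolve(next.value);
          Promise.resolve(next.value).then(step, reject);
        }
        step();
      });
    }

    // reads like synchronous code
    run(function* () {
      const user = yield fetchUser(42);
      const posts = yield fetchPosts(user.id);
      console.log(posts);
    });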


What is the advantage of using generators versus promises?


http://stackoverflow.com/a/28032438 gives a great overview of the pros and cons of both approaches.


In this case, Koa uses generators and promises together to emulate the behavior of async/await style concurrency. You can see how Babel 5.x transforms ES7 async/await into generators and promises at https://github.com/babel/babel/blob/v5.8.34/packages/babel/s...


Too many use cases to list, but generally a nice way to for...of anything, for example a recursive binary tree traversal:

    *[Symbol.iterator]() {
      if (this.left) {
        yield* this.left[Symbol.iterator]();
      }
      yield this.value;
      if (this.right) {
        yield* this.right[Symbol.iterator]();
      }
    }
    ...
    for (const value of myTree) { ... }
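
A self-contained sketch of the same pattern, with a made-up Node class:

    class Node {
      constructor(value, left = null, right = null) {
        this.value = value;
        this.left = left;
        this.right = right;
      }
      *[Symbol.iterator]() {               // generator method: in-order traversal
        if (this.left) yield* this.left;   // child nodes are themselves iterable
        yield this.value;
        if (this.right) yield* this.right;
      }
    }

    const myTree = new Node(2, new Node(1), new Node(3));
    console.log([...myTree]); // [1, 2, 3]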


I'm currently using Proxy to transparently version data that is saved in an IndexedDB.


Does anyone have any idea why WeakMap and WeakSet aren't iterable?


If they were iterable, then they couldn't be weak, because you've effectively given an API to reach references unconditionally, thus meaning they are always reachable, thus meaning none of them can ever be GC'ed.

for (x of myWeakSet) <-- no object in myWeakSet is ever safe to GC

Now, if you were to say "well, make them weak and make iteration only happen on the items that still have external references", you've now created a really tricky situation where iteration essentially forces a GC pass because it needs to know what's reachable and what isn't at iteration time.


> If they were iterable, then they couldn't be weak

Of course they can be both weak and iterable. Java's WeakHashMap does it.

> because you've effectively given an API to reach references unconditionally

How so? The iteration would only return objects that hadn't been collected.

> you've now created a really tricky situation where iteration essentially forces a GC pass because it needs to know what's reachable and what isn't at iteration time

Why would it need to check what is reachable or not before iteration? WeakSet doesn't guarantee that it only contains otherwise live objects, does it? The Mozilla documentation just says that the references in a WeakMap or WeakSet 'do not prevent garbage collection'. It doesn't say a WeakMap 'only contains otherwise live objects'. It makes no guarantees like that at all.

However I'll answer my own question as I found some more documentation - it's apparently to reduce non-determinism, so that you can't observe the operation of the GC.


I imagine you are referencing this comment "One could make WeakSet implementations that are iterable, but those can lead to non-deterministic algorithms (depending on GC behaviour) if used in the wrong way, and therefore the ES committee decided not to make the contents available."

I suppose when I said "you've now created a really tricky situation where iteration essentially forces a GC pass because it needs to know what's reachable and what isn't at iteration time", I was directly responding to this behavior: if you have a GC pass before every iteration, it clearly would solve the non-determinism problem, as you would have an iteration with well-defined behavior. I took it as a given that an iteration that sometimes returns things with no external references and sometimes doesn't would be completely absurd, and thus the GC pass would be required.

If you are telling me that this is precisely how Java's works however, then I guess I should have known better and checked that a language like Java apparently does allow this ... interesting behavior. (I certainly don't know how it works there)



