Overview of JavaScript ES6 features (adrianmejia.com)
336 points by adriansky on Oct 24, 2016 | 235 comments

I'm surprised by the state of const/let nowadays.

The well-known good practice: use const by default; use let when it's needed. At the release of ES6, it was the way to go. But every day I notice libraries (some really famous ones) that use let everywhere in their docs, or some really influential developers from Google or Facebook sharing code samples on Twitter that use let when it's not needed [1]. I don't know why. It seems like most people now think that const is for declaring constants (in the traditional sense, like const API_URL) when it's just the normal way to declare variables that don't need to be reassigned (so basically most variables).
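The practice is cheap to follow; a minimal sketch (the names and URL are made up for illustration):

```javascript
// const by default: these bindings are never reassigned
const apiUrl = 'https://api.example.com';
const headers = {accept: 'application/json'};

// let only where reassignment actually happens
let retries = 0;
while (retries < 3) {
  retries += 1;
}
```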

Dan Abramov said: "some people say const is ugly" [2]. Well, if it's a matter of appearance...

[1] https://twitter.com/addyosmani/status/789126892402204673

[2] https://twitter.com/dan_abramov/status/783708858803978240

I resisted const for like 10 minutes. You get used to it really quickly and then when you see a 'let' in your code you just know something is happening with that variable afterwards.

When I have to go back to programming in a language without const declarations it feels bad.

I know I should use const. I lazily leave it until the end then try to shoe-horn it in. Then it ripples through the code until I give up. Then I hate myself.

I've started to use const for every variable, then if I get a linting error about modifying a const, I switch it to let.

It took about two weeks to train my hands/brain to type const before let/var.

I use https://www.npmjs.com/package/eslint-plugin-const-immutable for linting const usage. It even detects object props mutation!

This is what makes Eslint so great. If you extend Airbnb's (or write your own very strict) config, it will really enforce best practices for things like this.

I think running eslint --fix will even change your "let"s to "const"s where appropriate, but don't quote me on that.

>I think running eslint --fix will even change your "let"s to "const"s where appropriate, but don't quote me on that.

Yep, this is the relevant rule - http://eslint.org/docs/rules/no-var

Though it just replaces all var with let. I expected it to intelligently use either const or let depending on how the variable is used ¯\_(ツ)_/¯

You're looking for: http://eslint.org/docs/rules/prefer-const

It does exactly what you describe :)
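For anyone who wants both behaviors enforced, a minimal `.eslintrc.js` sketch enabling the two rules discussed above:

```javascript
// a minimal .eslintrc.js sketch enabling both rules
const config = {
  rules: {
    'no-var': 'error',        // forbid var entirely
    'prefer-const': 'error'   // require const for never-reassigned lets
  }
};
module.exports = config;
```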

--fix is a godsend.

Is it just me, or is Javascript (and more generally, all front end technology) more susceptible to these trivial holy wars?

While I agree that const/let is a useful convention for communicating mutability, it isn't nearly a big enough deal to warrant the attention it receives from the community.

It's not just const/let; I rarely make a front end PR that isn't bike-shedded to death over subjective styling choices, single quotes vs double quotes, etc. It often feels like conforming to flavor-of-the-week, subjective decrees from frontend trendsetters is more important than the substantive work that your code is doing.

The fact that a const can't be changed is great but in my mind the bigger thing is what the code communicates. Using const vs let tells future devs who work on the code that the value was meant to remain unchanged. If they need to change it, that's a signal that maybe there's more to it than they currently understand.

I agree that pull requests that mass change from let to const are largely a waste of time, but usage of const vs let is a legitimate discussion within your team.

> Is it just me, or is Javascript (and more generally, all front end technology) more susceptible to these trivial holy wars?

- Tabs vs spaces.

- Vi vs Emacs

- Weak vs strong typing

- where to place {} in block statements

- where to put commas

No, programming in general is susceptible to these trivial holy wars.

Weak vs strong typing is hardly "trivial"... it's the very foundation of a language.

Not trying to derail the convo or take sides, but one of these is not like the others ;)

Hardly anyone argues weak vs. strong types, the battle is primarily between dynamic vs static typing.

Vi vs Emacs? Weak vs strong typing?? tabs vs spaces??? where did you get those??

The ONLY holy war in JS is the semi-colon one

That's what OP is saying: programming has plenty of holy wars that predate JavaScript.

You are right, I misread his comment

Clearly you have never encountered the strange "comma first" brigade.

Au contraire, for a while I was one of them. But I think we were seen as too weird to even bother with a holy war

Though I don't see why this is a holy war in the first place. Clearly, no-semicolon is the only way to go.


[loads machine gun...]

We have more than just vi and emacs as competition these days; they're not two-sided wars anymore.

Sublime vs Atom if you prefer.

The point still stands

Atom is clearly better until you want to open a file larger than 64 bytes.

Presumably each project/organization/etc. has its own style guidelines (or at least unwritten conventions). If you're not following them, then it's not a surprise people are calling you on it. If, on the other hand, they don't exist then it would be weirder.

const vs let is an "immutable by default" vs "mutable by default" type of difference. It's not just a style difference; it can help you write stateless code if you assume immutability.

but yeah.
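A tiny illustration of that style, returning new values instead of mutating (the names are made up):

```javascript
// stateless style: functions return new values instead of mutating
const addItem = (list, item) => [...list, item];

const base = [1, 2];
const next = addItem(base, 3); // base is left untouched
```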

Primitive values in JavaScript are already immutable, as in you can not change them, only create new values. Const will not make your object immutable! Try this:

  const foo = {bar: 1}
  foo.bar = 2;
It will only avoid having the pointer re-pointed to another object. It's better to just try avoiding global variables, and use a naming convention like uppercase and/or underscore for constants and global variables.

Const is probably only meant for constants like the name suggests.

I think something like this is pretty safe from reassigning:

    let foo = 1
If you want more control over your properties, you can use the ES5 method `Object.defineProperty`, where you can set `writable: false`
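A quick sketch of that approach (the object and property names are just for illustration):

```javascript
// locking a single property with Object.defineProperty (ES5)
const config = {};
Object.defineProperty(config, 'port', {
  value: 8080,
  writable: false,   // assignments to config.port no longer take effect
  enumerable: true
});

try {
  config.port = 9090; // silently ignored in sloppy mode, TypeError in strict mode
} catch (e) {
  // only reached under 'use strict'
}
// config.port is still 8080
```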

You can call `Object.freeze()` if you want immutable properties.

    const foo = {bar: 1}
The only thing const about it is that the reference `foo` cannot be reassigned:

    foo = something_else; // error
Unlike `const` in C++, `const` in JavaScript is not very useful in my opinion. `let` is shorter and more readable.

Even that is only true for shallow objects

All true. But what I said was,

> it can help you write stateless code if you assume immutability

> const vs let is an "immutable by default" vs "mutable by default" type of difference

Sorta. I mean, you're right, but at the same time, if you're using const for an object or array it doesn't really matter without `Object.freeze()`: the value isn't immutable, only the reference is, and I don't think most JavaScript developers understand that (at least not most of the ones I've run into).

Note that Object.freeze only works for shallow objects
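For deep structures you have to recurse yourself; a minimal sketch (the `isFrozen` check avoids re-walking objects, but this is an illustration, not production-hardened code):

```javascript
// a minimal recursive freeze sketch; the isFrozen check avoids
// revisiting objects (and looping on cyclic references)
function deepFreeze(obj) {
  for (const key of Object.keys(obj)) {
    const value = obj[key];
    if (value && typeof value === 'object' && !Object.isFrozen(value)) {
      deepFreeze(value);
    }
  }
  return Object.freeze(obj);
}

const state = deepFreeze({y: {foo: 'bar'}});
try {
  state.y.foo = 'baz'; // ignored in sloppy mode, TypeError in strict mode
} catch (e) {}
// state.y.foo is still 'bar'
```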

> Is it just me, or is Javascript (and more generally, all front end technology) more susceptible to these trivial holy wars?

Holy wars tend to form around the topics that are the most accessible[1]

[1] https://en.wikipedia.org/wiki/Law_of_triviality

The arguments are so bitter because the stakes are so low.

It should have been "let"/"let mut", not "const"/"let" (or some other scheme that makes immutable bindings terser). I recall people warning of this outcome at the time of standardization.

I don't think it's a terminology problem. In fact, "let" and "const" are probably the right terms. Both are descriptive and exist in other programming languages.

`final` is a better term for real constants

I wish they had come up with something other than "const". I can't see it as anything other than "the traditional meaning like const API_URL". Those writing the spec should have foreseen this and used something else. I suspect that this is something like arrow functions. The concept of arrow functions is great, but using something that looks like "equal to or greater than" was poor judgement. I know that this was taken from other programming languages, and I can't help but suspect that using const to mean something other than its traditional meaning is also something borrowed from some other language.

How do you feel about `const` for variables that are not reassigned but are still mutated?

    const array = [1, 2, 3];
    array.push(100); // unexpected?
I prefer to use let for bindings to objects that get mutated, even if the variable is not re-bound.

Yeah, I wrestled with that briefly.

For anyone not in the know, declaring `const` on an object only means you can't re-assign the binding, but you can still mutate its 'members'.

Personally, I use `let` just to symbolize that it isn't a constant and does change even though I could declare it a `const`.

I think it makes the code more readable.

As far as I know, the "correct" approach is to act as if every object is deeply frozen (perhaps actively testing using Object.freeze... IMO https://github.com/ecomfe/babel-plugin-freeze-const has the right idea but is nothing more than an idea), and transform things into new objects functionally, using .map and friends.

The problem is that, while a lot of things can be expressed elegantly this way, sometimes the most maintainable thing, or the most efficient thing, is to have some side effects in a deep loop in a certain algorithm, and it's very difficult to do this with purely functional JS.

The real solution is to allow your const bindings to be mutated when you need to, and comment explicitly to make things perfectly clear!

It doesn't help that many variables are objects, and const means that you cannot reassign the variable, not that you cannot modify it.


    const x = {};
    x.foo = 'it works!'

This is the same as a const ptr in C: the pointer can't change but the value it points to can. While it may seem broken it has its uses. Doing deep watches to disallow object mutation would be impossibly expensive (I think?).

Not sure if this is valid C syntax, but in C++ you can do something like:

  type const * const
Which is a const pointer to a const type, making both the pointer and the data immutable.

Maybe in ES9 we will have

    immut x = {};
    x.foo = "it doesn't work! :)"
At the moment, we can use these libs to achieve it

- https://github.com/rtfeldman/seamless-immutable

- https://github.com/facebook/immutable-js

We don't need any libraries to solve this issue (please don't bring in libraries to do weird stuff like this; that's a dependency that you'll be stuck with forever over your entire codebase for essentially zero reason IMO).

Just use `const` + `Object.freeze()`; it'll get you 99.9995% of exactly what you want.

I'll give you two reasons: performance and ease of use. When you need a copy of a large object with a small change, performing the copy with native JS is going to be slower than doing it with a specialized data structure like a hash mapped trie[0] (which is what Immutable.js uses). Also, if you're trying to keep your data truly immutable, that copy operation is going to be a pain to write with the built-in tools, whereas it's super easy to return a copy of an object with a change to a single, deeply nested property with Immutable.js. I agree that it's premature to reach for a library before you need it, but let's not pretend there aren't rather large drawbacks to using Object.freeze and Object.assign.

[0]: https://en.wikipedia.org/wiki/Hash_array_mapped_trie
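For contrast, a copy-with-change in plain ES6 means shallow-copying every object on the path to the change (the data here is made up):

```javascript
// copy-with-change in plain ES6: shallow-copy each object on the
// path to the changed property
const state = {user: {name: 'alice', role: 'admin'}, items: [1, 2, 3]};

const next = Object.assign({}, state, {
  user: Object.assign({}, state.user, {name: 'bob'})
});
// untouched branches like `items` are shared, not copied
```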

> When you need a copy of a large object with a small change, performing the copy with native JS is going to be slower than doing it with a specialized data structure like a hash mapped trie[0] (which is what Immutable.js uses)

Fair enough though I'm not convinced you should be hitting this type of use case in your code (kinda inefficient and sounds awkward to make a small change to a large object and yet need both objects to continue to be in memory, separated). At least not typically / frequently.

> if you're trying to keep your data truly immutable, that copy operation is going to be a pain to write with the built-in tools, whereas it's super easy to return a copy of an object with a change to a single, deeply nested property with Immutable.js.

While it is a little bit of a pain, almost every framework and probably half of the libraries in existence on npm have a copy function. I'd like to think it's a rare use case, but when it's needed you likely don't have to install a new library to handle a copy operation.

> I agree that it's premature to reach for a library before you need it, but let's not pretend there aren't rather large drawbacks to using Object.freeze and Object.assign.

I'm not sure anyone was pretending anything of the sort here and I don't understand the assumption of such. There are plenty of drawbacks but I'm also not convinced it doesn't fit the 95% use case.

Sorry, that might have come off a bit harsh. I was just trying to make the point that Object.freeze doesn't give you nearly 99.9995%, nor even 95%, of what immutable data structures give you. I don't think the apps I'm working on are particularly complex, but if you're using a Flux architecture with something like React/Redux, this is a very common pattern, and it doesn't take long to get to a point where the gains in performance and usability far outweigh the cost of having a dependency.

> While it is a little bit of a pain, almost every framework and probably half of the libraries in existence on npm have a copy function.

The problem isn't just the usability of the copy operation (and even if it were, you still have the performance issue); there is also the problem of JavaScript not enforcing immutability of nested objects with Object.freeze. So if you want to enforce that, you're going to have to call Object.freeze on every child object, which becomes a nightmare to write and maintain.

> I was just trying to make the point that Object.freeze doesn't give you nearly 99.9995%, nor even 95%, of what immutable data structures give you.

Sure. That's why I made sure to say it'll give you 99% of what you want. Not 99% of immutable data structure capabilities :).

Though I'm sure folks may disagree with what I'm suggesting they "want" but I do not believe it's a good pattern to follow where a complex object needs to be immutable in JavaScript.

> JavaScript not enforcing immutability of nested objects with Object.freeze. So if you want to enforce that, you're going to have to call Object.freeze on every child object, which becomes a nightmare to write and maintain.

Which is why you should never ever do that. It's a bad pattern in a dynamic language to try and make complex structures completely immutable. It's best to simply find alternative ways of accessing them if you do not want to expose it to modification.

  const x = Object.freeze({
    y: {
      foo: 'bar'
    }
  })
  x.y.foo = 'baz'
  console.log(x.y.foo) //baz
immutablejs might be overkill, but not having, and using, a recursive freeze is going to bite a lot of people if the advice is just 'const + Object.freeze'.

> immutablejs might be overkill, but not having, and using, a recursive freeze is going to bite a lot of people if the advice is just 'const + Object.freeze'.

No one should be trying to use a recursive freeze (if they are I would argue their data structure is poorly suited to be immutable).

I'm not saying `const` + `Object.freeze()` gets you Immutablejs I'm saying it gets you, likely, what you want / need.

Our team was just hit with this:

  const x = Object.freeze([
    {id: 1, value: 'foo'},
    {id: 2, value: 'bar'},
    {id: 3, value: 'baz'},
  ])

  //Bug in the code
  x[0].value = 'test'
First comment on the bug report was:

'This shouldn't be possible as the array is const and frozen'.

> their data structure is poorly suited to be immutable

An array of objects seems completely reasonable. Even an array of objects, where those objects themselves have keys which are objects/arrays, doesn't seem unreasonable.

We'll just have to agree to disagree.

> First comment on the bug report was:
>
> 'This shouldn't be possible as the array is const and frozen'.

Yeah const and Object.freeze() are not exactly the most intuitive depending on your level of JavaScript internals knowledge (and even then I don't think const is very intuitive but I digress).

> We'll just have to agree to disagree.

Fair enough! I would just caution trying to make complex objects immutable; it can be handy if you're developing a library and don't trust the dev on the other side (to a degree, copying may be preferable depending on the context) but it can complicate things and lead to some performance and development pattern issues.

`const` means that the variable binding itself is immutable. It only affects the variable binding, not the value it points to.

If it affected the value it pointed to, what would happen in this type of situation?

    let x = {};
    const y = x;
    x.a = 5;

This is trivially testable in most browsers inspection tools.

The answer is it works fine. x.a === y.a === 5

This is because you are simply declaring the binding of y to the object bound to x constant. This does not impact your ability to rebind x or to alter the contents of the object, it simply prevents you from rebinding y.

    let x = {a:5}
    x = {}
    console.log(x.a) // undefined

    const y = {a:5}
    y = {} // Uncaught TypeError: Assignment to a constant variable.

    let x = {a:2}
    const y = x
    x.b = 12    
    x = {}
    x.b = 13
    console.log(y) // {a: 2, b:12}
    console.log(x) // {b: 13}

The question was rhetorical, to try to demonstrate that it might result in surprising behavior if const just manipulated its values to make them become immutable. If adding the `const y = x;` line made the object referenced in `x` be immutable, then the 3rd line would fail, which I don't think is a consequence desired even by people who thought const made things immutable.

I guess one alternative idea for how const would work could be an implementation where `x.a = 5` worked but `y.a = 5` failed. But then what happens if you pass `y` into a function which then tries to assign the `a` property on it? Is the const-ness part of the value passed into the function, or is it a property on variable bindings, and you could only pass `y` to functions that accepted a const variable? That kind of function type checking isn't something usual to javascript currently. And then is the const-ness deep? Is `y.foo.a = 5;` blocked? Mutable DOM elements are a big part of javascript. If the object happened to contain a reference to an element that needed to be manipulated, then you won't be able to do something like `y.foo.element.textContent = "new foo content";`. Going down this road it's now getting to be a pretty big feature that doesn't cleanly fit with the rest of the language or common tasks.

Maybe the naming is a little unfortunate: Javascript's `const` has more in common with Java's `final` than C's `const`.

Oh I didn't realize that. I assumed that const means immutable.

Now I am going back to change all the object declarations to const.

I heard an interesting argument against const; it went like this:

Const very rarely saves you from bugs, and the bugs it saves you from are very easy to find and fix. On the other hand, the time wasted thinking about whether to write const or let, and the time wasted switching consts to lets (and the other way around), outweighs the time saved by potentially preventing these easy-to-find bugs. To sum it up: if const doesn't really provide any extra value, why not make your life easier and just use let everywhere?

It was from a very senior C++ programmer, so I am not sure how well it translates to JavaScript.

That makes a lot of sense for c++ constness, which has virtually nothing to do with JavaScript's const. C++ constness is significantly more far-reaching and therefore more work to get right. I agree with his position.

JavaScript const is just a matter of typing 2 more characters and in exchange you make your code more readable (because you communicate "this stuff will never change from here on", which makes understanding the stuff that does mutate easier). There are no far-reaching consequences. Just type "const" everywhere.
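That reassignment error is easy to see for yourself; a quick sketch:

```javascript
// reassigning a const binding is a runtime TypeError,
// in sloppy and strict mode alike
const limit = 10;

let threw = false;
try {
  limit = 20; // TypeError: Assignment to constant variable.
} catch (e) {
  threw = true;
}
// limit is still 10
```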

> C++ constness is significantly more far-reaching and therefore more work to get right.

I'm not familiar with C++, can you explain more?

A const variable means that variable won't change. So `const int i = 6` means i will never change. But often you'll use pointers or references, these are C++'s way for a variable to point to data elsewhere. You can also make the pointers or references themselves const. Finally, you can make functions const too, which lets you turn C++ into half a haskell.

First, pointers and references. For simplicity I'm going to ignore references and focus on pointers - the difference is not very interesting in this context. In most languages you probably know, referring to objects is done implicitly: variables that "are" objects are actually references to objects located elsewhere, and variables that are primitive types (numbers, booleans, strings) are just right there. Because of this, in JavaScript you can do this:

    var a = {};
    var b = a;
    b.moo = 6;
    a.moo === 6; // true
In here, a and b point to some object that "is" neither a nor b - the object itself just floats around in memory and it'll exist until neither a, b, nor anyone else points to it anymore and the GC decides it has to go.

In C++, you'll need pointers or references for this, eg.

    somestruct a;
    somestruct* b = &a; //b now points to a
    a.moo = 6;
    b->moo == 6; // true
(-> is just C++ shorthand for "follow the pointer and then do a property lookup".)

Ok now const.

    somestruct const a;
    a.moo = 6; // error! can't modify a const.
Ok well how about

    somestruct a;
    somestruct* const b = &a;
    b->moo = 6;
    a.moo == 6; // true
This means the pointer is const. That works because we never change where b is pointing to. So how about:

    somestruct a;
    somestruct const* const b = &a;
    b->moo = 6; // error! can't modify a through a pointer-to-const

Oh damn. Crying baby. Const functions will have to wait for some other day.

If you didn't want a to be mutated, you'd make it const in the beginning :)

Rust makes it much harder to make these kinds of errors by making variable ownership, reference lifetime, and const-by-default core to the language design. https://doc.rust-lang.org/book/ownership.html ... You do end up spending a lot of time "fighting the compiler," even compared to template-heavy C++ codebases, but the systems you create are free from an entire class of errors.

I'm not sure if you meant to imply otherwise, but your last example won't (and shouldn't) compile with clang/gcc.

clang++: error: cannot assign to variable 'b' with const-qualified type 'const somestruct *const'

g++: error: assignment of member ‘somestruct::moo’ in read-only object

It also doesn't help that let is 3 letters versus const is 5; programmers if anything will default to the faster to type option.

This. I personally agree with everyone saying const > let, but I just don't find myself caring enough about it to add the inconvenience of typing 5 characters instead of 3 every time I need to create a variable.

You'd think after all these decades I'd no longer be surprised that programmers try so hard to minimize their typing, since it has no demonstrable positive impact on code quality. I guess premature optimization is in the blood of some people.

It's not about working hard to minimize typing, it's more about not wanting to switch from a state of little typing to a state of more typing. Also, it has nothing to do with optimization; I don't think there exists a single person who uses "let" over "const" because of performance reasons. I also don't think anyone uses "let" over "const" because they think it improves code quality.

The right path has to add more value than the path of least resistance.

True, which is why we drop any var/let/const for prototyping

It also doesn't help that in Swift, 'let' is equivalent to JavaScript's 'const', so if you write code using both languages, it's easy to forget that and just use 'let' everywhere.

The only difference between const and let is that const flags an error on reassignment, so that is its primary purpose. It can also be used to conspicuously note intent not to reassign, but it is not helpful to VMs or programmers to explicate whether or not each and every variable is reassignable. Maybe just get on board with those influential developers and use const to lock things down or to make a point - not because let is 'not needed'. I would rather const was used somewhat sparingly, giving it narrative impact.

It's sad that the longest of [var, let, const] is the most useful one. They should have made let behave like const.

The most interesting part about template strings was skipped over: tagged template literals. You can prefix a template string with a function which will get called with the list of string parts and the values passed in ${...} parts, and then it's up to the function to choose how to join the values up into the resulting string (or hell, you could make it return something besides a string if you want). The function can even access the raw version of the string containing any backslash escapes in it as-is. The default `String.raw` function is handy if you're writing something like an SQL query with a few \ characters that need to be in the final query. Both of these strings are the same here:

    const a = "SELECT id FROM foo WHERE name CONTAINS r'\\n'";
    const b = String.raw `SELECT id FROM foo WHERE name CONTAINS r'\n'`;
You could even assign `String.raw` to a variable first, and then make strings look like raw string literals of other languages:

    const r = String.raw;
    const s = r`SELECT id FROM foo WHERE name CONTAINS r'\n'`;
Another good use of template strings is automatic HTML encoding (with a small module of mine on npm): https://www.npmjs.com/package/auto-html
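The general idea of an HTML-encoding tag can be sketched in a few lines (this illustrates the technique only; it is not the auto-html package's actual implementation or API):

```javascript
// sketch of an HTML-encoding template tag: escape the interpolated
// values, leave the literal parts alone
function html(strings, ...values) {
  const escape = s => String(s)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
  // reduce with no seed starts at strings[0]; each step appends
  // one escaped value and the next literal part
  return strings.reduce((out, str, i) => out + escape(values[i - 1]) + str);
}

const unsafe = '<script>';
const out = html`<p>${unsafe} ok</p>`; // '<p>&lt;script&gt; ok</p>'
```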

SQL actually demonstrates a good use case for custom template handlers. It's not SQL, but we're using a template handler to make writing parameterized queries trivial for ArangoDB: https://github.com/arangodb/arangojs#aql


    var userCollection = db.collection('_users');
    var role = 'admin';
    db.query(aql`
        FOR user IN ${userCollection}
        FILTER user.role == ${role}
        RETURN user
    `);
The template handler returns an object with the parameterized query string and the parameter values, which the `query` method understands. Because collection parameters are syntactically distinct from regular parameters this also avoids accidentally passing in arbitrary strings as collection names -- and of course it completely avoids injection attacks as a category.

In my experience this is actually much more comfortable to use than a "fluent" API that tries to map the query language to the programming language (i.e. `select('id').from('foo')` and so on).

Full disclosure: I wrote that library.

How does this work? I can't find any examples and I also don't really understand what mechanism makes your npm module work.

A tagged template literal is just a special ES6 syntax for calling a function with a specific set of arguments. You could pass a function which returns all of the arguments as an array to see what's passed in. Try the following in the console of a modern browser:

    function log(...args) { return args; }
    log `abc ${'blah'} fooo\n ${12+3} bar`;
Or you could even shorten the above down to this:

    ((...args)=>args) `abc ${'blah'} fooo\n ${12+3} bar`;
The expression will evaluate to this:

    [["abc ", " fooo\n ", " bar"], "blah", 15]
The first argument is an array of the parts of the literal text, and then the rest of the arguments are the values that were passed in. Additionally, the first argument (the array of string parts) has the `raw` property set to point to an array of string parts with the backslash escapes uninterpreted.

Here's an example re-implementation of `String.raw`:

    function raw({raw}, ...values) {
      const parts = new Array(raw.length*2-1);
      parts[0] = raw[0];
      for (let i=0, len=values.length; i<len; i++) {
        parts[2*i+1] = values[i];
        parts[2*i+2] = raw[i+1];
      }
      return parts.join('');
    }

Tagged templates look interesting. It's an overlooked feature that not many people know about

For those using Babel already or want to use ES6+ on a set of supported browsers: we have started work on https://github.com/babel/babel-preset-env. Would appreciate more testing!

It allows passing in a set of supported browsers and transpiling what is required using the lowest common denominator env (uses https://github.com/kangax/compat-table for data).

  // .babelrc with preset options
  {
    "presets": [
      ["env", {
        "targets": {
          "chrome": 52,
          "ie": 11,
          "browsers": ["last 2 versions", "safari 7"]
        }
      }]
    ]
  }
If you have questions, ask below (or make an issue)

Wow, this is really excellent. Definitely will give this a shot!

I always turn to an ES6 article on babeljs.io when I check out ES6 features.


One of the features I like is default parameters. I'd like to add a usage that isn't mentioned in the article. It's really handy to pass optional parameters since default parameters can be nested.

    class Hoge {
      foo({active=true, collapsed=false, silent=false}={}) {
        // the options object is optional, and each property has a default
      }
    }
    let hoge = new Hoge();
    hoge.foo({collapsed: true});

"you can start using it right now" if you don't care about IE8, IE9, IE10, and many mobile browsers and are willing to ignore 20% of your customers.

e.g: http://caniuse.com/#search=let

const does not have block scope in those browsers either, but will work, adding to your debugging confusion.

template literals and multiline strings have no IE support at all (you need Edge: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...) so you can forget most enterprise clients

Similar things are missing for most features from the article.

I wonder how many JS devs have to answer to actual customers, because none of my clients in the last decade would have accepted a possible failure for 1/5 of their visitors. Are they all bloggers or working in hype startups?

We use Babel to compile ES6/ES7/ES8 to ES5.


It's not what the article is about. The article is obviously talking about native ES6 features, otherwise it would advertise ES7 features as well and advise using transpilers.

There's no reason why someone using babel needs to use ES2016+ features, they can stick to ES2015 just like this article.

I'd actually recommend avoiding the still-not-standardized features as you'll have less/no change when it lands in browsers.

I looked at the prospects of starting a React, Aurelia, or VueJS projects a few days ago and a year ago. Anyone who wants to wait to upgrade to these new things will be really happy to see how much nicer these things get every year. Tooling and starter kits just keep getting better, and best of all, you could probably wait a long time to switch and miss out on nothing important.

That's a good point, and one of the reasons I've been reluctant to use much ES6 until it's solidly baked into > 95% of the browsers / devices / etc. (delivery platforms? whichever descriptor) that I would need to support.

A transpiler just adds another layer of complexity that it's best to avoid as much as possible, in my opinion. It's not a perfect example, but if you've ever worked with CoffeeScript and run across a bug or unexpected side effect in the final rendered code . . . that should curb some of the transpiler enthusiasm.

Some of the ES6 features look handy enough, but I'd prefer waiting to see which ones shake out as actually useful over time, versus which are just momentary novelties.

> "20% of your customers"

Depends on the product.

Regardless, you can easily transpile a codebase to support those browsers.

If you aren't doing that already, you probably should be. Otherwise you're supporting a codebase that is quickly going to become very dated and (in comparison to a more modern codebase) much messier.

Yeah, but this is not the purpose of the article. The article suggests you don't need a transpiler and can use the features right away. Otherwise it would mention await, async, yield, etc. and say you can use those too.

On the web browser side, I don't recommend using ES6 yet, without any kind of fallback. Internet Explorer 11 is still used, as are devices on older iOS versions. (without counting people using the default browser on pre-Lollipop Android)

Also - for anyone writing code where performance matters, the question isn't when browsers support the syntax, it's when each JS engine's optimizations support it.

E.g.: until some months ago just putting "let foo" into a function would cause V8 to bailout (meaning the whole function gets executed slowly, even if the actual let statement gets removed as dead code).

Unfortunately I've never found any good references on the optimizability of ES5+ features, so I've been avoiding them so far.

This is a real concern, but it definitely carries the usual caveats about premature optimization and needing to measure regularly to confirm that it is a real concern and that the performance landscape hasn't shifted since the last time you measured it.

The best suite I've seen is https://kpdecker.github.io/six-speed/ which measures node and the various modern browsers which Sauce Labs supports and appears to be run semi-regularly.

You must be careful with the results on that page. It shows map-string as being slower in ES6 (using `new Map()`) compared to ES5 (using `{}`), and yet I found the opposite: ES6's Map is faster in my benchmark[1].

[1] https://gorhill.github.io/obj-vs-set-vs-map/

That is a great reference, but in general I don't find myself caring much about the raw performance of individual statements that way. My concern is that this or that new syntax will prevent a function from getting inlined, or prevent the engine from guessing type information it otherwise would have guessed, or whatever - just because those bits of the optimizing compiler are newer and less robust.

I agree that this is a valid concern. And I do not trust the six-speed test to do the right thing here. See for example https://github.com/kpdecker/six-speed/pull/42 where the test claims to be measuring the speed of destructuring, but in Firefox the result is entirely due to the effects of destructuring on the engine's ability to eliminate dead code. While that is relevant to performance, all it means in the end is that if you destructure something and then pointlessly throw away the result, that it will run much slower than using an ES5 assignment to pull out the field and then pointlessly throw it away. It says nothing about actual code that destructures and then uses the result vs ES5 code that pulls out the field and then uses the result. And that PR was closed because it's showing up an optimization gap, and kpdecker wants to force vendors to implement optimizations -- which is fine, except this is an optimization for something that is irrelevant to production code.

This might just be an isolated incident, but it shakes my confidence in the utility of the six-speed suite. I actually do want to know whether there's a speed difference between const { a } = obj vs const a = obj.a, and the suite does not test that. (Worse, it kind of claims that it does, but reports something else instead.)
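For reference, these are the two forms the parent wants compared: identical results, but possibly different paths through an engine's optimizer.

```javascript
// ES6 destructuring vs ES5 property access -- same outcome either way.
const obj = { a: 42 };
const { a } = obj; // ES6 destructuring
const a2 = obj.a;  // ES5 property access
```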

If 'let' prevented inlining, I would want to know, but I'd have to look very closely at the six-speed benchmarks to figure out whether it's detecting that. And the range of subtle reasons for deoptimization is vast, so despite working on a JS engine myself, I doubt I'd be able to tell whether a given microbenchmark is meaningful or not.

(Note that the Firefox devtools does have a "Show JIT Optimizations" that can tell you why things aren't getting optimized, but it's incredibly cryptic, undocumented, and scaremongering.)

I'd assume the author would be receptive to pull requests for things like the `let` deoptimization.

`let` doesn't deoptimize anymore - actually V8 has made great strides here, and a lot of new syntax will go through the optimizer. More generally though, I wouldn't think the stuff I'm worrying about would show up in microbenchmarks.


That's the planning doc for v8 optimization of ES2015+ and an interesting read.

This is an important distinction, and as far as I know, none of the new features are very optimized. If you really want to write performant JS without a build step, you basically have to write it ES3-5 style: use for loops instead of map and forEach, etc.

> none of the new features are very optimized

I think for-of is getting pretty good. It's a bit of a pain to optimize because the iteration protocol in ES6 is designed in such a way that you have to do heroics (scalar replacement) to have any hope of optimizing it well. But engines are getting there.

For anyone who wants to use ES6 in production, https://babeljs.io/ is amazing.

It's both amazing, and 600MB worth of dependencies. We use it for server-side code. It's high quality, and we only have a few issues with it, but I can't wait to be able to ditch it (pretty much when async/await lands in a stable node).

600MB worth of development dependencies, which don't affect the code being sent to the client. Just wanted to clarify.

I'm using it server-side, so yea, you're not wrong.


    du -ch ./babel*
from my `node_modules` directory yields

    3.2M	total
so I'm gonna need a citation on that 600MB claim.

Admittedly, if you're using a flat structure like NPM3 does, then everything else is at the same level :)

It caught those! Doing `du -ch ./babel` says it's only 20k; babel-core is 148k, and babel-cli is 104k.

537M here. babel* matches the following packages:

node_modules/babel node_modules/babel-core node_modules/babel-eslint node_modules/babel-plugin-array-includes node_modules/babel-plugin-transform-runtime node_modules/babel-preset-node5 node_modules/babel-register node_modules/babel-runtime

Of course you can just pretend I'm lying.

I believe that you're getting that number, but there might be something wrong with your install. I just installed all of those packages and ended up at 6MB.

I don't think it's such a stretch. From facebook's yarn announcement[1]:

> For example, updating a minor version of babel generated an 800,000-line commit that was difficult to land and triggered lint rules for invalid utf8 byte sequences, windows line endings, non png-crushed images, and more. Merging changes to node_modules would often take engineers an entire day.

I just did a fresh install in a new directory and ended up with 114M worth of dependencies, so I'm not entirely sure what the difference is.

My point is: 50MB, 114MB, or 500MB worth of JavaScript dependencies is a massive footprint. It works and I'm relatively happy with what it does, but I don't see this as a stable, long-term thing.

[1]: https://code.facebook.com/posts/1840075619545360

    mkdir babel-size-check
    cd babel-size-check
    babel-size-check yarn init -y
    yarn init v0.16.0
    warning The yes flag has been set. This will automatically answer yes to all questions which may have security implications.
    success Saved package.json
      Done in 0.05s.
    babel-size-check yarn add babel babel-core babel-eslint babel-plugin-array-includes babel-plugin-transform-runtime babel-preset-node5 babel-register babel-runtime
    yarn add v0.16.0
    info No lockfile found.
    [1/4]   Resolving packages...
    warning babel@6.5.2: Babel's CLI commands have been moved from the babel package to the babel-cli package
    [2/4]   Fetching packages...
    [3/4]   Linking dependencies...
    [4/4]   Building fresh packages...
    success Saved lockfile.
    success Saved 80 new dependencies.
      Done in 1.94s.
    babel-size-check du -ch -d1
    17M	./node_modules
    17M	.
    17M	total

600MB is a huge exaggeration. Looks like about 34MB for babel-cli, babel-preset-es2015, and babel-preset-stage-0.

Even once all the language features you want are in, Babel is amazing for other things such as optimizations, transforms, injecting assertions depending on environment, inlining stuff, static analysis, etc. If you use Flow it's used to strip out annotations, and if you use JSX (not just in React) it's great there too.

We get a lot of mileage out of it even when our target platforms support all the language features we need. Your mileage may vary.

buble is a good alternative - small and fast. Doesn't support all of ES2015, but it does support the examples in the article.


Have used it in production and it's amazing.

Have you tried Minify(Babili) by the way?

Indeed. Are there any 'good' strategies for this (loading ES6 for supported browsers)? I imagine you'd have to do UA sniffing on the server side, and I suppose it's kind of a moot point if you're using React/JSX.

There's no need unless you need runtime-only concepts, like new string methods. Use https://babeljs.io/ to turn nice es6 into es5. (I imagine you already know this, if you're using react/jsx)

Just try to eval a sample of the ES6 syntax you use, and if it throws an error, load the ES5 code. You'd need to feature-check all the features you use, to cover browser versions with incomplete ES6 support.
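A minimal sketch of that detection approach (the function name and the bundle-loading step are mine). Using the `Function` constructor checks that the sample parses without executing anything:

```javascript
// Feature-detect ES6 syntax support by compiling (not running) a sample.
// If the engine can't parse it, the constructor throws a SyntaxError.
function supportsEs6Syntax() {
  try {
    new Function(
      '"use strict";' +
      'const [a, ...rest] = [1, 2, 3];' + // const, destructuring, rest
      'let f = (x = 1) => `v${x}`;' +     // let, defaults, arrows, templates
      'class C {}'                        // classes
    );
    return true;
  } catch (e) {
    return false;
  }
}
// Then inject a <script> tag pointing at either your ES6 or ES5 bundle.
```

Only syntax is covered this way; new runtime features (Map, new String methods, etc.) still need their own object-presence checks, as the parent says.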

Working on https://github.com/babel/babel-preset-env right now which should help with this

You should be using babel to serve all your target browsers. But you can still use ES2015 during development; it's an absolute joy.

If you're developing a desktop-only app (or you're making a native app for the mobile frontend) ES6 works pretty well.

At least I haven't received any complaint yet, and if you can optimize early on without any short-term cost then it's the best thing you can do.

Thank god Chrome and Firefox dominate the market.

For developing browser extensions I think ES6 can currently be used safely (at least in Chrome).

While I'm a big believer in most of the ES6 changes (arrow functions! let/const! classes! generators!), I am not a big fan of many of the new destructuring features. They can actually make your code less approachable if you don't already know what's going on.

Exactly, and this is a big problem where I work. I believe code should be readable, even by those with only cursory knowledge of the language. Object shorthand is also a problem, I think. For example, I had a method like this:

  const getObj = (id, store) => {
    return {
      id: id,
      name: store.something.name
    };
  };

The linter gave an error on it because I used {id: id}. It was like the linter was trying to make my code harder to read.

I think es6 in the wrong hands quickly falls prey to the problems of ruby/scala where it can become incredibly terse and hard to parse unless you are used to the author's particular style.

You know instead of:

  const getObj = (id, store) => { return { id } }
You could write:

  const getObj = (id, store) =>  ({ id })

I'm pretty sure that's a good illustration of my point.

This is a matter of personal preference.

I find `return` to be quite distracting and annoying for a small function that spans part of a line, while you and some others may prefer the explicitness of the `return` keyword.

I would agree that nested destructuring should be used quite cautiously. Code legibility is incredibly important to the success of any serious project.

The deep matching gets too cluttered, but the simple case I find to be quite readable and useful:


I completely agree, and it's nice to see someone else with the same thoughts, because I've found a ton of opposition regarding this. I hate the destructuring syntax. While I understand how it works now, it took me too long to learn and feels very non-obvious, so I avoid it in my code unless it's impossible to. I don't find it intuitive, especially for newer developers.

I agree that object destructuring makes the code far less readable. Array destructuring however is easy for anyone to grok.
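For what it's worth, this is the array case being called easy to grok, including the swap idiom:

```javascript
// Array destructuring reads left to right; swapping needs no temp variable.
let [first, second, ...rest] = [10, 20, 30, 40];
[first, second] = [second, first]; // swap: first is now 20, second is 10
```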

It depends I think.

For example, in my opinion

    const Header = ({ children, iconName, iconSize, title }) => { ... };
is more readable than

    const Header = (props) => { ... };

    const Header = ({ children, iconName, iconSize, title }) => { ... };
Once you get used to the destructuring parameter idiom, sure. It also conveys more information.

But that statement is overloaded in that it makes use of implicit object shortcuts which has a bit of a learning curve for longtime ES5 users.

    const Header = ({
        children: children, 
        iconName: iconName, 
        iconSize: iconSize, 
        title: title }) => { ... };
When object destructuring is nested it can be confusing and more verbose.

  function thisIsBad ({
    someKey: renamedSomeKey,
    someOtherKey: renamedSomeOtherKey = 'otherDefaultValue',
    meta: { innerMetaKey = 'someInnerMetaKeyValue', otherInnerMetaKey, ...remainingMeta }
  } = {
    someKey: 'defaultValue',
    someOtherKey: SOME_CONSTANT,
    meta: {}
  }) {
    // ...
    return { someKey: renamedSomeKey }
  }
My favourite messed up part of the syntax is how `:` renames a property while `=` supplies a default value, so `{ someKey: renamed = 'x' }` does both at once. And also, the way you can destructure the inside of an object and automatically lose the original value. For example `meta` is not accessible within the function defined above.

This is syntax which can simplify your code if you apply it carefully, but will ruin your code if you over-use it.
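Stripped of the nesting, the two roles look like this:

```javascript
// ':' renames, '=' supplies a default, and the two can combine.
const obj = { a: 1 };
const { a: renamed } = obj;       // rename: renamed === 1
const { b = 42 } = obj;           // default: b === 42 (obj has no b)
const { c: renamedC = 7 } = obj;  // rename + default: renamedC === 7
```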

I had to debug code like that the other day. Before you debug it you have to mentally grok wtf is going on. What a mess.

And suddenly I get it. Very nice example.

I agree about the object destructuring. The examples set off loud alarm bells in my head. Very cute and very unreadable.

Like others are saying, though, the array destructuring seems fine.

> Because even though the if-block is not executed, the line 4 still redefines var x as undefined.

This is because of hoisting. Not quite right as described.
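To make the correction concrete, this is roughly what the article's `var` example does once you account for hoisting (the declaration is moved to the top of the function, shadowing the outer binding with `undefined`):

```javascript
var x = 'outer';
function test(inner) {
  // hoisting effectively inserts `var x;` here, shadowing the outer x
  if (inner) {
    var x = 'inner';
    return x;
  }
  return x; // undefined, not 'outer'
}
```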

Yeah, that example should be changed - the "before" version isn't code anyone would write on purpose, and JSHint/JSLint would flag it as a bug.

Appreciate the article, but this is relatively dated information. While I can see that many enjoyed the article, many have also been working in ES6 for over a year now. If you're ready to join, I highly suggest everyone make their way to https://babeljs.io/ and everything it has to offer resource-wise or tooling-wise.

You might as well use babel and then you can use async/await which makes code way more readable.

"best practices": "use class instead of manipulating prototype directly"

Who comes up with these "rules"? This is not a hard and fast rule. Manipulating the prototype of an object is not "dangerous".

I'm sick and tired of some loud mouth saying something is dangerous without explaining why. WHY? Why is it dangerous? Don't talk at me. Provide me a sound reason and case to justify what you say. Talk is cheap.

It makes me not trust anything else in this article when someone is flippant.


This has some warnings at the top as to why.

__proto__ !== prototype

One points to the parent of an object and the other points to the object's prototype (which in turn has its own `__proto__` property). Accessing `prototype` is just fine, but if you want to access `__proto__` then you should use `Object.create()` instead.

Ugh. All this "make Javascript like Java" stuff needs to stop.

Cool stuff: Tail call elimination. Arrow functions. Assignment spread for multiple return values and "rest" / default parameters. Proxying (combined with computed get/set on properties) for easy decoration.

Classes and let: meh.

Let is a game changer imo. In the world of tens of dependencies, knowing that your variables are scope restricted reduces your cognitive load. It's one less thing that can go wrong.

I agree with you on classes, but that's probably more because I've never been a big fan of OO in practice. It doesn't really seem to have seen much use in the greater js ecosystem though, unlike let/const.

The block which is long enough to need block scope is long enough to be a function.

Special bonus: couple destructuring assignment with an IIFE so that any variables/symbols which "escape" from the nested function are explicit (and the locals simply disappear with the nested function).

Const would be nice, if it went further - treating the const declared identifier as if it were deeply frozen, at least within the scope of the const identifier. Alas, that touches on what is wrong with most programming languages in common use today: mutable by default, but we should be requiring "groveling" to make something mutable.
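`const` only prevents rebinding the identifier; to approximate the deep immutability wished for above you have to freeze manually (a sketch; a true deep freeze would recurse into nested objects):

```javascript
// Object.freeze locks the object itself, not just the binding.
const config = Object.freeze({ retries: 3 });
let blocked = false;
try {
  config.retries = 5; // TypeError in strict mode, silently ignored otherwise
} catch (e) {
  blocked = true;
}
// Either way, config.retries is still 3.
```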

Honest question here ---

What is the difference between let and global variables? There are hundreds of articles written about the doom associated with PHP globals, but let appears to be universally lauded. I must be missing something, but I can't tell where.

It's spelled out pretty well in the article. But if you want another example, consider these two code blocks:

  var foo;
  var bar;
  {
    let foo = "hello";
    var bar = "world";
  }
  console.log(foo, bar);
This produces:

  undefined "world"

The reason being that the `let` statement restricted that variable to the block it was in (defined by the { and }). `var` declares the variable globally, allowing it to be accessed outside of the {}.

We prefer now to use `let` and `const` over `var` because it doesn't pollute the global namespace. With the asynchronous nature of Javascript, it's theoretically possible for you to declare a variable with `var`, assign it a value, then immediately use that value and find that it's different than what you expected because of another function using the same variable name. This isn't possible with `let`.

> `var` declares the variable globally

Not quite. var's are hoisted to the top of their most local function.

  function outer() {
    function inner() {
      var x = 123;
    }
    var y = 123;
    // Here, x is not defined, but y is
  }
  // Here, neither x nor y are defined
The above code essentially gets translated to the following:

  function outer() {
    var y;
    function inner() {
      var x;
      x = 123;
    }
    y = 123;
    // Here, x is not defined, but y is
  }
  // Here, neither x nor y are defined

From the article--

  let x = 'outer';

  function test(inner) {
    if (inner) {
      let x = 'inner';
      return x;
    }
    return x; // gets result from line 1 as expected
  }

  test(false); // outer
  test(true);  // inner
This makes it seem like let creates global variables. Why would you want to return a variable from outside the function? Doesn't that create massive overhead in terms of keeping track where variables are initially set? Easy to understand in this example, but what if let x = 'outer'; is defined at the top of a 5000 line script and this function appears near the bottom?

Edit: Turns out I don't know how to format code. This is in the first example of section 3.1 Block Scope Variables.

If you were to then reference 'x' from another block of code, say in another <script> element in the case of web development, a top-level 'let' does not become a property of 'window' the way a top-level 'var' does.

This is mostly just a case of 'let' restricting a variable to the block it is in, and the child blocks. In your example, `let x = 'outer';` is sort of acting like a global variable, but inside any function or block a 'let' cannot leak out the way a 'var' can.

Ehw. Yeah, I sort of assumed that you had a top level function / IIFE (in a .js file, FWIW) in which the var's were nested.

Somebody writing stuff, into the global namespace, directly in <script> tags, has bigger problems :-)

Ahh, I get it, Thank you! In the same way var scopes to window if defined outside of a function, let scopes it to the current script block. That is very neat.

I'll read more into uses of let over var. Function level scoping a la var feels like less mental overhead, but as I read more I'm sure my opinion will change.

Javascript (in browser) is async, but not concurrent (like say, threads). JS will run the code in an event handler to completion (or not - while(1){}) before starting the code for the next event. Thus, code from one execution path cannot update variables in another path "immediately".

You could use a variable in a callback for an event that was assigned on the line above, and it since has changed, but it's important to remember that the callback (or promise, etc) does not actually run until an arbitrary time "later".

"var" declares function scope, not global scope. If you really want global scope, declare the variable with no keyword preceding it (e.g. "foo = 3").

Close. Assigning "foo" without var will look up the lexical scope stack from most nested to global. If nothing exists, it will indeed make a new global. However, if there is a "foo" somewhere in that scope stack, it will update that variable.

    var a = function () {
      var foo
      var b = function () {
        foo = 3
      }
      b()
      console.log(foo)  // 3
    }
    a()
    console.log(typeof foo)  // "undefined" -- no global foo was created
In the code above, the "foo" in function a is set to 3, rather than creating a global. (changing the "vars" to "lets" would do the same thing, FWIW)

Also, "use strict" mode will not let you make a global that way. If in strict mode, you have to put the "var" outside of any function, then assign it (either on that line, or later) to make a global.
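A sketch of that strict-mode behavior; the function-level directive keeps the example self-contained:

```javascript
// In strict mode, assigning to an undeclared name throws instead of
// silently creating a global.
function tryImplicitGlobal() {
  'use strict';
  try {
    oops = 3; // ReferenceError: no declaration anywhere in scope
    return 'created a global';
  } catch (e) {
    return e instanceof ReferenceError ? 'blocked' : 'other error';
  }
}
```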

> With the asynchronous nature of Javascript, it's theoretically possible for you to declare a variable with `var`, assign it a value, then immediately use that value and find that it's different than what you expected

Can you give an example ? My understanding is that the closure freeze the variable in the time the function was called. It can happen if you do not use a closure (function) though.

Here's a post on Stackoverflow that can explain it better than I can: http://stackoverflow.com/questions/21363295/understanding-ja...

Here's how you should do it:

  for(var i=0; i<=3; i++) count(i);

Or a real world example:

  for(var i=0; i<texture.length; i++) createPattern(i);

  function createPattern(i) {
    texture[i].onload = function() {
      pattern[i] = ctx.createPattern(texture[i], 'repeat');
    };
  }
I've made a blog post about closures: http://www.webtigerteam.com/johan/en/blog/closure_en.htm

It would be cool to write like this (but you can't):

  pattern = texture.map(t => t.onload => ctx.createPattern(t, 'repeat'))
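The underlying gotcha, in a form you can run synchronously (no image loading): with `var` every callback closes over the same `i`, while the named-function trick above, or an ES6 `let`, gives each callback its own copy:

```javascript
// With var, all callbacks share one binding and see the final value.
const varCallbacks = [];
for (var i = 0; i < 3; i++) {
  varCallbacks.push(function () { return i; });
}
const varResults = varCallbacks.map(function (cb) { return cb(); }); // [3, 3, 3]

// With let, each iteration gets a fresh binding.
const letCallbacks = [];
for (let j = 0; j < 3; j++) {
  letCallbacks.push(function () { return j; });
}
const letResults = letCallbacks.map(function (cb) { return cb(); }); // [0, 1, 2]
```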

Javascript closures are like Ruby blocks and closures: you get to "touch", as well as "look" - and also see "touches" that happened in between times elsewhere.

I have a question:

What does `for (element of arr)` buy me over `arr.forEach(element => ...)`?

I don't find the for...of syntax particularly appealing or useful, but I might be missing something. Is it a matter of preference?

for...of is more flexible. While forEach is a method on Array.prototype, for...of is a consistent syntax that can be used in more places. For instance, iterables:

  function *myIterable (v) {
    while (--v) yield v
  }

  let launchCountdown = myIterable(60)
  for (let i of launchCountdown)
    console.log(`t minus ${i} seconds`)
So in effect, arrays are just one kind of iterable in ES6. IMHO it allows for behavior more consistent with iterators in other languages like Java and C++.

Ah, I see, good point. And how would you handle cases (that I end up using quite often) such as :

arr.map(...).filter(...).forEach(...) which allows me to iterate over the filtered result? One would assign the result of filter to a variable and call for...of on that?

EDIT: Also, I never saw any mention of for...of working for object literals (à la `for (let [key, value] of obj`), I suppose that's out of scope, correct?

> arr.map(...).filter(...).forEach(...) which allows me to iterate over the filtered result?

Just like that. It would be pretty nice to have generic filter/map that can work on arbitrary iterables, but we don't have that right now. :(

> Also, I never saw any mention of for...of working for object literals (à la `for (let [key, value] of obj`)

  for (let [key, value] of Object.entries(obj)) {
    console.log(key, value);
  }
it's not in "ES6"/ES2015, but it's in ES2017 drafts and is implemented in Chrome and Firefox but so far not elsewhere. The need to do Object.entries instead of obj.entries() is annoying, but needed in case your literal has a property named "entries"... There's also Object.keys() and Object.values(), of course.
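Until Object.entries lands everywhere, a sketch of the usual fallback via Object.keys, which is plain ES5:

```javascript
// Build [key, value] pairs manually, then destructure them in for-of.
const obj = { a: 1, b: 2 };
const entries = Object.keys(obj).map(k => [k, obj[k]]);
let summary = '';
for (const [key, value] of entries) {
  summary += `${key}=${value};`;
}
```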

Generics on iterables are super easy:

    function* map(iterator, mapper) {
      for (const elem of iterator) {
        yield mapper(elem);
      }
    }

    function* filter(iterator, filterer) {
      for (const elem of iterator) {
        if (filterer(elem)) {
          yield elem;
        }
      }
    }

    function forEach(iterator, eacher) {
      for (const elem of iterator) {
        eacher(elem);
      }
    }

    function reduce(iterator, reducer, initial) {
      let current = initial;
      for (const elem of iterator) {
        current = reducer(current, elem);
      }
      return current;
    }
Yes, but you have to have that boilerplate every time you want them instead of them just being around. It's not fatal, but is annoying.

It's also useful if you want to call break or continue.

for-of works with any iterable.

break/return/throw work as expected.

I imagine (eventually if not already) for-of will be slightly more performant, but that's just a hunch.

Personally I find the for-of syntax more readable.

Not only that, but in the future when async/await lands (or right now if you are willing to compile and can put up with potential breakage), for-of becomes a godsend.

The lack of inner functions means that you can await during part of a loop without a bunch of fuckery with the inner function with forEach.

for...of isn't only for arrays. Anything that implements the Symbol.iterator functionality can use for..of, which is pretty nifty for some custom classes and also includes things like the new Map and Set collections, see: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

However, even in just arrays there is arguably a benefit. Namely, an extra function being created and invoked with forEach. While in a JITing compiler, the performance difference between for...of and forEach might be negligible (or non existent), you can't always be sure what the JIT is going to do. For hotspots, I would probably prefer for..of

If you're not in a JIT, for-of performance is going to be terrible too. Instead of creating a single function (which may not happen _anyway_ in the non-jit case, depending on how it's implemented) you now have to create an iterator result object for every single thing you get out of the iterator.

In a JIT, forEach is actually _easier_ to optimize well (just need to inline the callback function, though of course that can fail for various reasons) than for-of (need to inline the calls to next() on the iterator, need to do escape analysis on the return value of next(), need to do scalar replacement on the return value of next; note that this is a strict superset of the work needed to optimize forEach).

for-of can be nicer to read, and can work on arbitrary iterables. Those are its key strengths. Performance just isn't, unfortunately.

Ah, you're actually answering my question from above. I was wondering if there was a way to emulate the behaviour I've gotten accustomed to in Ruby vis à vis `Enumerable#each` where the (key, value) are yielded to the block passed to each. It doesn't seem to be possible to do that with object literals in JS, but you mention that Map would do (and it actually does, see [1]). Nice.

[1] https://developer.mozilla.org/en/docs/Web/JavaScript/Referen...
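A minimal sketch of the Map iteration mentioned above, which is the closest JS analogue to Ruby's `Enumerable#each` over pairs:

```javascript
// for-of over a Map yields [key, value] pairs, in insertion order.
const scores = new Map([['alice', 1], ['bob', 2]]);
const seen = [];
for (const [name, score] of scores) {
  seen.push(`${name}:${score}`);
}
```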

I gave a more complete overview of ES6 (and ES2016 and so on) features at a user group last month. Slides:


The slides are mostly code examples and I tried to go for completeness rather than detail, so there are some mentions you don't normally see in these overviews (e.g. the article doesn't mention proxies).

Equally spacing the points on the timeline may make for an aesthetic image, but it fails to illustrate the author's point. "As you can see, there are gaps of 10 and 6 years between the ES3, ES5, and ES6. The new model is to make small incremental changes every year." No, we can't see. In that pretty graphic, ES5->ES6 is exactly the same distance as ES6->ES7.

If someone wants to dig more into ES5 vs ES6, check this presentation (use the side arrows for navigation): http://coenraets.org/present/es6/#1

This one is a useful reference, as well: http://es6-features.org/#Constants

Probably add pointers to where the images were taken from.



Great article!

It would be fantastic to make all the code snippets interactive with the Klipse plugin.


If you want to learn ES6 I highly recommend the Tower of Babel interactive tutorial: https://github.com/yosuke-furukawa/tower-of-babel

What about import from? (modules)

In my opinion it's the biggest improvement from ES6. However, you still need Babel to use this feature, even in Chrome/Firefox.

For those interested:

import: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

export: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

You can't use it in the browser yet. That's why it has been excluded. You need typescript or babel to use it

Some of these features are really nice, but JS is on a path to become as complex as C++.

That's the natural result of being unable to remove any features to maintain legacy compatibility.

Well, unlike the majority of C++ warts, JS is doing a very good job of "hiding" its ugly parts with things like "use strict".

Things like `with` and a bunch of other ugly parts were hidden with the introduction of "use strict", and I'm more than confident it will happen again in the future with something similar that lets you "opt in" to an even stricter JS that leaves behind the current warts of JavaScript in favor of much better ways.

Can someone smarter than me (not a high bar) explain why they didn't just fix var, rather than introducing let?

backward compatibility would be one reason.

Helpful article. Small typo on section 3.5. shouldn't 'construtor' == 'constructor' ?

Javascript's weak comparison operators aren't that bad.

Can anyone recommend books or online courses that teach ES6 for someone just starting out with JS?

The book from the "You Don't Know JS" series is very thorough: https://github.com/getify/You-Dont-Know-JS/tree/master/es6%2...

Learn ES5 first; you will need it (the majority of JS code out there is ES5). ES6 is just sugar over ES5, so knowing ES5 will make it easier to understand ES6.

What's interesting is that the author apparently has a QUERTY keyboard layout...

ha, I do

var's function scope is a feature! You don't have to place variable declarations in the header! (they are hoisted) Placing the var declarations where they are used makes the code easier to understand.

The point of constructor functions is not having to write new. So classes are nothing besides syntactic sugar over the prototype system, which actually makes it more complicated and the code harder to maintain.

Async programming is hard, but not because of callbacks. Promises are just a wrapper around callbacks, which adds complexity and more boilerplate. It will get better with async/await, but I will still prefer simple callbacks.

Arrow functions are very nice for one liners, but will ruin your code if you use them everywhere instead of named functions. You should always name your closures, then it's easier to debug and lift them out.

Arrow functions get auto-named in many browsers based on how they are used. This auto-naming is actually being somewhat standardized between browsers and means there is no difference between anonymous-style function () {} and arrow functions.
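A quick sketch of that name inference (the variable names here are arbitrary): both a function expression and an arrow pick up a `.name` from the binding they are assigned to.

```javascript
const regular = function () {};
const arrow = () => {};

// Both get a .name inferred from the variable binding:
console.log(regular.name); // 'regular'
console.log(arrow.name);   // 'arrow'
```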

Promises are more than just a wrapper around callbacks because they also standardize behavior between "stacks" of callbacks, by instead "chaining" them and creating a standard infrastructure for things like error propagation down a chain. That error propagation and the ability for simple action chaining is worth having over "simple" callbacks, even without the syntactic sugar of async/await.
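A small sketch of that error propagation (`mightFail` is a made-up example function): a rejection skips every intervening .then in the chain until a .catch handles it.

```javascript
// `mightFail` is a hypothetical async function for illustration
function mightFail(x) {
  return new Promise((resolve, reject) => {
    if (x > 0) resolve(x * 2);
    else reject(new Error('bad input'));
  });
}

mightFail(-1)
  .then(v => v + 1)   // skipped: the chain is already rejected
  .then(v => v * 10)  // skipped as well
  .catch(err => console.log(err.message)); // 'bad input'
```

With raw callbacks, every layer would have to remember to forward the error by hand.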

let/const have similar benefits to function hoisting in that you don't need to declare let/const variables until where you use them. In this case the runtime throws an exception if you try to use them before they are declared (in the so-called temporal dead zone).
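A quick illustration of that dead-zone behavior (`readEarly` and `value` are arbitrary names): the let declaration later in the scope makes the earlier access throw, instead of silently yielding undefined like var would.

```javascript
function readEarly() {
  try {
    return value; // `value` exists in this scope but is in the temporal dead zone
  } catch (e) {
    return e instanceof ReferenceError; // true
  }
  let value = 1; // never reached, but its mere presence creates the dead zone above
}

console.log(readEarly()); // true
```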

Promises spread like a virus, so that all asynchronous functions eventually end up returning a Promise. They are hard to understand and it's easy to forget to handle errors or return values. I think callbacks are much easier to understand and get right using named functions and closures.

If you want to call everything in serial and wait between each step, why not make it synchronous instead of .then chaining ?

There are other high level operations (combinators) with Promises beyond just .then() chains: Promise.all(), Promise.race() out of the box of the spec; others from various libraries. Doing the equivalent with raw callbacks is much more complicated.
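A small sketch of those two spec combinators (`delay` is a made-up helper):

```javascript
// `delay` is a hypothetical helper: resolve with `value` after `ms` milliseconds
const delay = (ms, value) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms));

// Promise.all: resolves when every input resolves, preserving input order
Promise.all([delay(20, 'a'), delay(10, 'b')])
  .then(values => console.log(values)); // ['a', 'b']

// Promise.race: settles with whichever input settles first
Promise.race([delay(10, 'fast'), delay(20, 'slow')])
  .then(winner => console.log(winner)); // 'fast'
```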

The "magic" that Promises bring to code is that they "just look serial" with .then() chains. That's part of the point of Promises and part of the infrastructure that powers the ability to write async/await code. Just because it looks nice and serial doesn't mean it is synchronous. (In functional programming speak, Promises are the Continuation monad, and the near isomorphism with synchronous code (especially async/await) is a wonderful product of standardizing the monad.)

Which is to say: tl;dr: Promises spreading like a virus is a feature, not a bug.

> var's function scope is a feature! You don't have to place variable declarations in the header! (they are hoisted) Placing the var declarations where they are used makes the code easier to understand.

For the vast majority of variables, you don’t need hoisting (i.e. you can declare them in the appropriate scope and only end up using them after the declaration).

> The point of constructor functions is not having to write new.

Um, what? This is completely wrong. You have to write new with constructor functions unless you specifically add code to check whether you’re in a constructor and re-call with new, which is bad practice.
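For reference, the guard pattern being described looks roughly like this (`Animal` is an illustrative name):

```javascript
// The instanceof guard: works, but is widely considered bad practice
function Animal(name) {
  if (!(this instanceof Animal)) {
    return new Animal(name); // re-invoke with new when called without it
  }
  this.name = name;
}

console.log(new Animal('Rex').name); // 'Rex'
console.log(Animal('Rex').name);     // 'Rex': the guard papered over the missing new
```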

> Async programming is hard, but not because of callbacks. Promises is just a wrap around callbacks, witch just adds complexity and more boilerplate. It will get better with async/await but I will still prefer simple callbacks.

No, async/await is just a wrapper around promises. Promises are pretty great overall, and if your Promise code looks like callbacks then you might be using them wrong. (I would always use bluebird[1] over native promises, though.)

I agree with you that large parts of ES6 add unnecessary complexity, but I don’t think your examples are well-chosen.

[1] http://bluebirdjs.com/

> Um, what?

Sorry for the confusion, the ES5 example looked like a "factory" function, so I assumed that was what he meant. You do not have to wrap the constructor function in another function! ...

  function Animal(name) {
    // This code is run when the object is created. No need to wrap another function around Animal
    var animal = this;
    animal.name = name;
  }

  Animal.prototype.speak = function speak() {
    var animal = this;
    console.log(animal.name + ' makes a noise.');
  };
Here is a "factory" function:

  function AnimalFactory(name) {
    var animalName = name;
    function speak() {
      console.log(animalName + ' makes a noise.');
    }
    return {speak: speak};
  }

  var animal = AnimalFactory('Animal'); // You can omit new
  animal.speak(); // Animal makes a noise.
  setTimeout(animal.speak, 1000); // you can also do this

>>> You don't have to place variable declarations in the header! (they are hoisted)

probably splitting hairs here but imo "hoisting" as in using variables before they are declared is generally a bad practice, especially considering that init assignments are not "hoisted".
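A minimal illustration of that point (names are arbitrary): the declaration moves to the top of the function, but the assignment does not.

```javascript
function hoistDemo() {
  console.log(flag); // undefined: the declaration was hoisted, the `= true` was not
  var flag = true;
  console.log(flag); // true
}

hoistDemo();
```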

>>>Arrow functions are very nice for one liners, but will ruin your code if you use them everywhere instead of named functions.

Based on arrow functions' relationship with "this", pretty sure the intent is to primarily use arrow functions instead of anonymous functions, especially when passing them around as params. Even with one-liners you have to be wary of what "this" is when dealing with an arrow function inside of an object, for example.
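A sketch of both sides of that (`counter`/`broken` are made-up objects): an arrow passed as a callback keeps the method's `this`, while an arrow used as the method itself is the trap being warned about.

```javascript
const counter = {
  count: 0,
  addAll(values) {
    // The arrow passed to forEach inherits `this` from addAll,
    // so `this` is still `counter` here.
    values.forEach(v => { this.count += v; });
  },
};

counter.addAll([1, 2, 3]);
console.log(counter.count); // 6

// By contrast, an arrow used as the method itself is a trap:
const broken = {
  count: 0,
  // `this` inside this arrow is the enclosing scope, NOT `broken`
  addAll: (values) => { /* this.count would not be broken.count here */ },
};
```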

Promises and callbacks have fundamentally different behavior. Promises are much more reliable (only executed a single time, always executed asynchronously vs possibly synchronously, etc.) and also have slightly different execution behavior in the event loop. People emphasize the syntax differences, but that's not the most useful thing about Promises imo. You Don't Know JS's async book has an excellent in-depth writeup on it.

The executor function passed to the Promise constructor is executed synchronously; the callback passed to `then` is always invoked asynchronously.

True, worded that poorly. The resolutions are always executed asynchronously, versus a callback approach where the code receiving the callback can inappropriately and unexpectedly choose to execute the callback synchronously.
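A minimal demonstration of that guarantee: even a promise that is already settled never runs its handler synchronously.

```javascript
const p = Promise.resolve('already settled');
const order = [];

order.push('before .then');
p.then(v => order.push('inside .then'));
order.push('after .then');

// The handler runs on a later tick, after the current stack unwinds:
setTimeout(() => console.log(order), 0);
// [ 'before .then', 'after .then', 'inside .then' ]
```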

> You should always name your closures

Yes. Why? Because you should have readable tracebacks. Why? Because you should receive all meaningful error reports from the devices.

Promises definitely do not just "wrap around" callbacks https://blog.domenic.me/youre-missing-the-point-of-promises/

How is having var be function scoped instead of block scoped a useful feature?

Parent literally said why in the rest of their comment. And if you still don't understand why, this probably goes back to understanding JavaScript fundamentals. Not understanding function scope vs block scope is the number one smell for me that someone did not learn JavaScript correctly.

I don't think anything in the rest of their comment has anything to do with block scoping vs function scoping.

I do understand the technical difference very well - I have a PhD in implementing programming languages with function scoping like JavaScript.

If you agree with the person I was replying to maybe could you humour me and explain why you think function-scoping var is a useful feature?

> I do understand the technical difference very well - I have a PhD in implementing programming languages like JavaScript.

I am happy for your PhD and that it is for implementing programming languages like JavaScript.

My assertion is that the advantage of having function scoping is apparent to those who understand function scoping. Block scoping-style programming in JavaScript was always, in my experience, shoe-horned in by people who wanted to make JavaScript more like Java. That is all there is to my point.

But I do understand function scoping. I understand what it is, what its semantics are, and how to implement and use it. And the advantage over block scoping isn't apparent to me.

But even if it should be apparent to me, why can't you explain the reason to me? What is this - some kind of argument that is impossible to comprehend unless you already agree with it?

I can understand your argument that block scoping was shoe-horned into JavaScript, post hoc, but that isn't a technical argument for the benefit of function scoping, is it?

I understand now that your question was more about function scoping versus block scoping from a language design perspective, and not specific to JavaScript. It wasn't clear from your comment that you were heading in that direction. Having var enables you to make use of function scoping, which is useful to JavaScript developers who have, up until this point in history, always had function scoping. As parent mentioned, using var allows you to define the variables where you use them. It makes the code more readable. As far as function scoping versus block scoping goes, I don't have much of a say, as I am not sure there are any meaningful advantages, merely minor ones, but none that would impact productivity if you were experienced in one or the other. Maybe you could convince me otherwise?

Even if block scope were better, it's not worth having three ways to declare a variable. Most JS programmers do not know about scope and they do not declare variables, but if they do, they most likely mean the variable to belong to the function, like in VBScript or PHP, rather than to the if statement. Explaining why you should declare variables is hard. Now I also have to explain block scope, and curly braces now make a huge difference!

  if (...) {      // add curly braces
    let foo = 1;  // declare the variable
  }
  if (foo) ...    // with let this is a ReferenceError; with var it worked

I really don't think that most JS programmers write code without declaring variables. Anyone using strict mode, ES6, or ESLint/JSLint/JSHint is already forced to declare variables appropriately.

Personally, I think the benefits of let are worth killing off var. I have honestly never seen an example where hoisting has been more clear than the alternative (declaring variables before they're used).

> My assertion is the advantage of having function scoping is apparent to those who understand function scoping.

I assert that if you can't say what the advantage is then you don't understand it yourself.

"You'd see the advantages if you understood this, and since you don't you must not understand it" is not a reasonable thing to say to people.

I understand it well and think relying on hoisting is a bad idea, as do most people. Declaring a var in a block and using it outside of that block (which is what hoisting enables) is the opposite of "Placing the var declarations where they are used".

Please provide an example of what you believe to be a good usage of hoisting.

uhhh Edge still doesn't have destructuring

[a, b] = [b, a];

Great !

Using const and `splice` breaks the rules:

const info = [1,2,3,4]

const newInfo = info.splice(2);

'info' has changed

The const keyword makes the object reference constant. It doesn't make the object's value constant. You can change the contents of 'info' (info.push(5) is fine). You just can't change which object the variable points to. (info = [] will throw).

If you know C/C++, the code 'const info = []' makes 'info' a constant pointer to a list, not a pointer to a constant list.

If you want to stop the object itself from being changed, use Object.freeze() - https://developer.mozilla.org/en/docs/Web/JavaScript/Referen...

This was what I came to comment on; when introducing ES6 features, this always trips people up. Thanks for "constant pointer to a list, not pointer to a constant list", I will use that next time I explain this.

This also means that objects defined with const can be mutated:

const x = { y: 2 }; x.y = 3;

Doesn't error. However with types that are immutable, like numbers and strings, const really is a constant: const HELLO = "hi" will never change.

Common misconception, const only disallows reassignment. For instance,

const number = 1337;

number = 10; // fails

But objects/arrays are not immutable in js, so you can do this:

const person = { name: 'Dude' };

person.name = 'Dudette';

Which is perfectly valid. If you want full immutability, I recommend you check out https://facebook.github.io/immutable-js/, been running it in production, a real pleasure to work with.

FWIW Immutable.js is not deeply immutable. Seamless-immutable however is.

> Seamless-immutable however is.

Only in development mode, by freezing the objects.

Any popular "deep freeze" utilities around?

    function deepFreeze(obj) {
      const out = Object.freeze(obj);
      Object.keys(out).forEach(key => {
        if (typeof out[key] === 'object') {
          out[key] = Object.freeze(out[key]);
        }
      });
      return out;
    }

I think you meant "out[ key ] = deepFreeze( out[ key ] )" in the innermost step :-)


The only language I've found / used so far that has these (expected) mechanics is Swift, which, when using a `let` for e.g. a list, will make the list itself immutable (in addition to the reference). It's really something that is confusing in every language that has some form of 'final', be it Java (which added immutable list implementations or wrappers to their existing collections), C++, JS, or what-have-you.

> The only language I've found / used so far that has these (expected) mechanics is Swift

Rust has similar semantics: https://is.gd/hsRjPh versus https://is.gd/ROgzXl

IIRC you can also declare const objects in C++, but constness is more complex and can be overridden there.

const only guards against changing the reference that it was assigned to. This won't break because info is still assigned to the same array. It doesn't matter that the array was mutated.
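Putting the thread's splice example together in one runnable sketch:

```javascript
const info = [1, 2, 3, 4];

const newInfo = info.splice(2); // splice mutates its receiver
console.log(newInfo); // [3, 4]
console.log(info);    // [1, 2]: mutated in place; const did not prevent this

try {
  info = []; // reassignment is the only thing const forbids
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```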

Destructuring is very useful and encourages good coding style.

Is it? Personally I'd say that was bad code. What's so wrong with using the original objects?

Putting aside the need to swap variables once a year or so, all the other examples look really confusing to me, and it's unclear what they're doing. The `Deep Matching` one especially.

It's really nice for when you have to do operations where you want a "tuple" in Javascript.

Consider the case of finding all companies along with their hires from the last week, but only if the company has a hire from the last week:

    companies.map(c => [c, employeeStorage.getEmployees(c)])
             .map(([company, employees]) => [company, employees.filter(e => e.hireDate > aWeekAgo)])
             .filter(([company, employees]) => employees.length > 0)

>Is it?

Yes, but not in the silly example showed.

Consider something like :

    zip(fooArray, barArray, bazArray).map(([foo, bar, baz]) =>  foo + bar + baz)
    zip(fooArray, barArray, bazArray).map((arr) => arr[0] + arr[1] + arr[2]);

    var lst = [[1,2], [3,4], [5,6]];
    for (var [a,b] of lst) { console.log(a + b); }
You could write a `zip` function to merge multiple iterables together and iterate over them. Or use something like `.enumerate()` to yield indices along with the values.
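For illustration, `zip` is not built into JavaScript; a minimal hand-rolled sketch might look like:

```javascript
// Hypothetical zip helper: pairs up the i-th elements of each input array,
// truncating to the shortest input.
function zip(...arrays) {
  const length = Math.min(...arrays.map(a => a.length));
  return Array.from({ length }, (_, i) => arrays.map(a => a[i]));
}

const sums = zip([1, 2], [10, 20], [100, 200]).map(([a, b, c]) => a + b + c);
console.log(sums); // [111, 222]
```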
