Ask HN: If you're targeting ES6 by default, how did you rationalize that choice?
61 points by tcrews 396 days ago | 60 comments



Might be a good chance to suggest using our new preset: https://github.com/babel/babel-preset-env/.

TL;DR - automate your Babel config options based on targets.

babel-preset-env: A Babel preset that can automatically determine the Babel plugins and polyfills you need based on your supported environments.

It takes the data from compat-table [1] to generate a mapping [2] between a Babel plugin and the first version that a browser/env supports. We calculate the least common denominator of your targeted envs to determine the final set of plugins to compile with.

(Feel free to ask questions; I help maintain Babel and the preset.) We just released a 1.0 a few weeks ago and are looking for more help and usage! In particular, we're looking for help with removing unnecessary polyfills and with determining plugins based on performance, via benchmarks of native vs. compiled code.

And yes, there is a lot of work that goes into making this work correctly (and into keeping it working with ES2015+ in the foreseeable future). Would appreciate any help moving forward. And maybe we should just recommend this preset in place of anything else, to fight the fatigue.

[1]: https://kangax.github.io/compat-table/es6/ [2]: https://github.com/babel/babel-preset-env/blob/master/data/p...
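
For example, a minimal .babelrc using the preset might look like this (the targets below are just illustrative):

    {
      "presets": [
        ["env", {
          "targets": {
            "browsers": ["last 2 versions", "ie >= 11"],
            "node": "current"
          }
        }]
      ]
    }

With that in place, Babel only applies the transforms that at least one of those targets actually needs.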


* It makes the language much easier to work with

* Most people familiar with JS will pick up on them very easily

* It makes "patterns" like immutable data and some straightforward async code MUCH easier (object spread, async/await, etc...)

* node.js supports it (with a slight speed penalty, but it hasn't bottlenecked us, so we aren't worrying about it yet)

* Most modern browsers support many of the features, and for those that don't, the polyfills and compilation are pretty much "drop in" with the rest of our build system.

* Dead code elimination has completely changed how we architect our projects for the better

* const gives us fewer bugs by preventing overwriting and scope confusion

* One of the main applications we are using it on only supports newer browsers that support ES6 natively anyway (due to needing certain browser APIs), so using the full extent of those browsers doesn't seem like it's going to cause any issues.

and finally...

* There doesn't seem to be any downside. We are only using stuff which is already in the spec, or close enough to being finalized that we are comfortable. If performance is an issue we can just not use it in performance critical code, and it's just nicer to work with.
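
A quick sketch of the immutable-data/async point above (the function name and endpoint are made up for illustration; object spread and async/await are newer-than-ES6 features):

    // Immutable-style update with object spread: `user` itself is never mutated.
    const user = { id: 1, name: 'Ada', prefs: { theme: 'dark' } };
    const updated = { ...user, prefs: { ...user.prefs, theme: 'light' } };

    // Async code that reads top-to-bottom instead of as nested callbacks/then-chains.
    async function fetchJson(url) {
      const res = await fetch(url);
      return res.json();
    }

    fetchJson('/api/users/1').then(console.log, console.error);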

I will say that it's not all daisies and roses, though. We are worried about what will happen if the module loading spec goes in a different direction than we expect, not to mention that interfacing with external ES6 libraries is either done through transpiled CommonJS code (which loses most of the benefits of ES6), or through "hacky" solutions like a nonstandard field in package.json that lets something like webpack load the ES6 version (and we have no idea which features that version uses, whether it needs transpiling, fixing, etc.).

Like anything, it's a bit of a gamble. But I'm confident that we will save more time and have fewer bugs by using the new features than we will lose fixing modules to be in line with the spec or dealing with incompatibility problems.


One interesting counterpoint to what you're saying: async/await are not ES6.

Although I'm 100% on board with ES6 adoption, there's a slightly worrying element of everyone having a different definition of what it is.

Some NPM packages have a "jsnext:main" attribute allowing you to redirect the "main" JS file to load, but it requires you to only use ES6 modules + features currently supported by Node. So you end up transpiling two versions from the original source. It's messy.
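
For reference, it usually looks something like this in a library's package.json (the file names are illustrative; the newer "module" field plays the same role in tools like Rollup and webpack 2):

    {
      "name": "some-es6-library",
      "main": "dist/index.cjs.js",
      "jsnext:main": "dist/index.es.js",
      "module": "dist/index.es.js"
    }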


Neither is object spread, but yeah I've stopped worrying about those labels. For the most part people treat "ES6" as "all new JS features", and that's not going to go away soon.

Though when I see ES2015, I tend to take it literally.

But moving forward it's going to be important to see what version of the language people are using in their libraries. I can already see github badges for "ES2015 only" or "ES2016+" or something.

Or maybe a plugin which will scan the sources you import into your project and give you warnings if a library is using a feature you aren't supporting/transpiling...


Broken windows theory: ES6 code is more elegant to look at, which encourages people to write prettier code.

Working on a team of experienced non-browser devs, ES6 syntax allowed them to take the language seriously and actually try to do the right thing rather than 'commit atrocities of Javascript' (actual statement).


I think the word "target" in the question is unclear, or at least has been misinterpreted. Many of the answers here seem to be about why you would write your source code in ES6, but at least to me, the "target" code is the code that actually ends up executing (whether on the server or in the user's browser or whatever).

I think it all depends on your context and use case. If you're running code on the server, there's no reason to compile down to ES5 if you know that your node version can run your code without modification. At my company, we write browser-based tools for scientists, so we have a bit more flexibility in asking users to use modern browsers. We still use babel and target ES5, but we may move off of it before normal consumer websites do.


> If you're running code on the server, there's no reason to compile down to ES5

Actually, ES6 is not running at full speed in Node yet, so there are still some performance benefits to transpiling.


The runtime performance lost using ES6 features is minuscule compared to the runtime performance lost by using Node.js in the first place. If the difference in performance for the ES6 features actually matters then you have made a grave mistake using Node in the first place.


Are you comparing node to C or Go or something else?

It's funny, my big a-ha moment on performance happened when I was working at a bank and could only code in VBA (i.e. Excel). All of a sudden, even on small data sets, O(n^2) was unacceptable and I needed to think about algorithms to get to O(n log n).


IMO, you should set up an analytics funnel to measure landing page conversion or (for web apps) usage metrics based on browser version.

Whenever the cost of supporting old, ES5-only browsers (measured by the amount of time spent debugging/adapting code * dev hourly rate) surpasses the income brought in by users on those browsers, it's probably time to make the switch.
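
As a back-of-the-envelope sketch of that comparison (every number below is made up for illustration):

    // Hypothetical figures, purely illustrative.
    const hoursPerMonthOnOldBrowserQuirks = 6;   // time spent debugging/adapting code
    const devHourlyRate = 80;                    // USD
    const supportCost = hoursPerMonthOnOldBrowserQuirks * devHourlyRate;  // 480

    const es5OnlyUsers = 1200;                   // from your analytics funnel
    const monthlyRevenuePerUser = 0.5;           // USD
    const es5Revenue = es5OnlyUsers * monthlyRevenuePerUser;              // 600

    // Keep supporting those browsers while they still pay for themselves.
    console.log(es5Revenue >= supportCost);      // true -> not yet time to drop them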

For the web app case, you obviously don't want to just cut off access for those users (that would be pretty bad customer service, and it can totally ruin your reputation), so you also need to factor in the costs of educating your users about the upgrade, giving them enough time, and/or providing an alternative means of using your app (e.g. an Electron-based app installer).

Considering how good the polyfills and transpilers currently are (assuming you already have those set up), the maintenance costs of supporting ES5 are very low. It makes very little financial sense not to support it for existing applications ATM.

If you're just starting an app that is going to launch in a year or more (assuming you're focused on a general audience, instead of a specialized market like corporate, where browsers are updated more slowly), then I'd probably start without worrying about ES5 at all.


Or you can charge extra to support those users - chances are good that they will either pay or, even better, it will force them to have the conversation about upgrading their systems and actually do so.


This is the best comment. Thank you for making it about the customer and the user instead of "code beauty".


I think people are making it about code beauty because ES6 is code, not a user concern. Right now you're pretty much transpiling down to ES5 for anything user facing.


The code that I write will still be used in 5 years.

I can write ES6, stick babel.js on it and my code will be both present-proof and future-proof.


As someone who writes A LOT of vanilla JS and doesn't really get into the newfangled frameworks, tools, transpiling...

Two things: template strings and arrow functions. Async is life-changing, but the lack of support means I can't use it yet. Greatest feature since XMLHttpRequest, IMO.
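
For anyone wondering what those two buy you, a trivial before/after (the data is made up):

    // Before: string concatenation and a function expression
    var greetOld = function (user) {
      return 'Hello, ' + user.name + '! You have ' + user.unread + ' new messages.';
    };

    // After: an arrow function and a template string
    const greet = user => `Hello, ${user.name}! You have ${user.unread} new messages.`;

    greet({ name: 'Ada', unread: 3 }); // 'Hello, Ada! You have 3 new messages.'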


Presumably you mean in the browser, rather than node.js? When you can compile and use polyfills with babel and webpack, then why not?


Babel and webpack make debugging with the V8 debugger a nightmare. How do you approach that and avoid using console.log everywhere? console.log can be great, and 90% of the time it's enough, but for the other 10% I'd rather my debugging observe the execution context instead of being part of it.


As long as sourcemaps are set up correctly, it's not bad at all.

That being said, setting up sourcemaps correctly can be a bear... But it's taken most of the pain away from our debugging issues.
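
Roughly, the pieces that have to line up look like this; a minimal sketch assuming webpack 2 with babel-loader and babel-preset-env (not necessarily what your setup uses):

    // webpack.config.js (sketch)
    const path = require('path');

    module.exports = {
      entry: './src/index.js',
      output: { filename: 'bundle.js', path: path.resolve(__dirname, 'dist') },
      devtool: 'source-map', // emit full source maps so the debugger shows the original ES6
      module: {
        rules: [
          {
            test: /\.js$/,
            exclude: /node_modules/,
            loader: 'babel-loader',
            options: { presets: ['env'] } // Babel's own source maps get chained into webpack's
          }
        ]
      }
    };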

And if that's not possible, as long as you aren't using some of the "heavy to transpile" features like async/await, looking at the translated code directly (and stepping through it) is generally close enough to get the gist of what's going on.

Edit: the bigger headache for me is code which loses its full stack trace, which is becoming more and more common when transpiling async/await, for reasons I haven't had time to look into.


In addition to sourcemaps, another option that I've been meaning to try is to skip most of the babel transforms for typical development builds. Chrome has had full ES2015 support for a while now and it looks like it now has async/await on stable, so it's getting to a point where it should be possible to just debug your ES2015/ES2016/whatever code within Chrome without needing source maps.

You'd still need to do some transpiling for imports/exports and JSX, though, depending on what you use. And running different code in development and production has its risks.
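
One way to structure that is an env-keyed Babel config selected via BABEL_ENV; a sketch (the env names and plugin list are just examples):

    {
      "plugins": ["transform-react-jsx"],
      "env": {
        "chromeDev": {
          "presets": [["env", { "targets": { "chrome": 58 } }]]
        },
        "production": {
          "presets": [["env", { "targets": { "browsers": ["last 2 versions", "ie 11"] } }]]
        }
      }
    }

With a recent Chrome target, preset-env skips most syntax transforms but still converts import/export to CommonJS by default, which covers the module caveat above.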


(I feel like I'm all over this thread...)

I actually tried this for a bit. I found that it really increased the complexity of the build system for not that much of a gain.

Instead of having a "dev" build and a "prod" build, we needed a "chromeDev", an "otherBrowsersDev", and a "prod" build.

So now you have 3 separate environments for your Babel configs which you need to keep in sync, and you run the risk of the semantics being slightly different natively vs. the transpiled version (which IMO isn't that bad when you go from transpiled to native, as the transpiled code is normally a subset of the "native" functionality, so you are less likely to hit issues).


I don't have any such issue.

What I've been doing is just writing in ES6 and testing on Chrome; then, once it's getting towards v1.0, I'll add webpack to the staging branch.

Very little work to switch from a bunch of script tags to a bundle towards the end of active development, much faster than bundling every single time I change the code a little bit.

That is, of course, unless I have to compile typescript, then I'll just target ES5 anyways.


We have been using [Bublé](https://buble.surge.sh) as our transpiler, and its output is extremely readable. Bublé only supports a subset of ES6, but I think that's a worthwhile tradeoff for better debugging. Sourcemaps are nice too, but sometimes you have to go under the covers to find the real problems.


Isn't this what SourceMaps are for?

Edit: What he said /\


By "targeting ES6" do you mean writing ES6 and transpiling to ES5, or actually shipping ES6 to browsers?


Exactly! Shipping ES6 code to browsers.


My code runs on Node >6.9.1 so it's really a no-brainer. The syntax is clearly superior.


You know what's weird? I don't like the arrow functions in ES6. And I'm a CoffeeScript guy. I just don't like that the syntax is conditional on so many different things: the number of parameters, line length, etc.

I just wrote this function and used `function` rather than `=>` because I couldn't get the syntax right using an arrow function:

    const showTable = ["email", "phone", "website"].reduce(
      function (previous, channel) { return previous || !!institution[channel] && !/^\s*$/.test(institution[channel]); },
      false
    );
Of course, the inconsistencies are what I don't like about js in general.


It's not that hard...

    ["email", "phone", "website"].reduce(
      (previous, channel) => previous || !!institution[channel] && !/^\s*$/.test(institution[channel]),
      false
    );
But I'd prefer the more readable:

    const hasNoEmail = /^\s*$/.test(institution.email || '');
    const hasNoPhone = /^\s*$/.test(institution.phone || '');
    const hasNoWebsite = /^\s*$/.test(institution.website || '');
    const hasAny = !(hasNoEmail && hasNoPhone && hasNoWebsite);


Well for one you're missing a closing } in that function. But you could just do

  const showTable = ["email", "phone", "website"]
    .reduce((previous, channel) => {
      return previous || !!institution[channel] && !/^\s*$/.test(institution[channel])
    }, false)

You can always just replace

  function() {
with

  () => {
and the only difference between the two would be how `this` is bound. Or you could do this:

  const showsTable = ["email", "phone", "website"]
    .reduce((previous, channel) => previous || !!institution[channel] && !/^\s*$/.test(institution[channel]), false)


But, that's kinda my point. You shouldn't need the { for a one line function.

Also, I tried the solution you gave and it doesn't work :(


FWIW, Array.prototype.some() would probably be more idiomatic here.
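
In other words, something like this (with an illustrative `institution` object):

    const institution = { email: '', phone: '555-0100', website: null }; // example data

    const showTable = ["email", "phone", "website"]
      .some(channel => !!institution[channel] && !/^\s*$/.test(institution[channel]));
    // true - at least one channel is present and non-blank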


Where were you on my code review :) Great suggestion.


Chrome Extensions! Writing Chrome Extensions is a great way to use all the latest and greatest javascript for a practical project.


I limit myself to ES6 features supported by Node LTS on the server-side.

I limit myself to ES5 code on the client side for now and probably at least next year too.

By writing code that is compatible with the platform it runs on I'm able to avoid having a slow, lengthy or complex build process to reach that state.


Not to nitpick, but the word "rationalize" usually has a negative connotation — to rationalize something is to attempt to justify/explain something that wasn't a rational choice to begin with [1]. In short, you're implying that people's choice to use ES6 wasn't based on conscious, analytical decision-making.

If that's not what you meant, perhaps "justify" would be a better word to use. Or maybe just: "Why?"

[1] https://www.merriam-webster.com/dictionary/rationalize


How would you rationalize not using ES6?


Polyfills add a ton of size.

https://www.youtube.com/watch?v=L3JJ8qSIg2k


https://github.com/babel/babel-preset-env/ should help with this by removing the polyfills for features that are already supported, and, with some future work, removing polyfills that are unused.


I'm working on something that will be solely an electron app, so I'm using typescript and targeting ES6. Besides the bundling, the resulting code looks very similar to the typescript code minus the types.
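
A minimal tsconfig sketch of that kind of setup (the exact options will vary by project):

    {
      "compilerOptions": {
        "target": "es6",
        "module": "commonjs",
        "sourceMap": true,
        "outDir": "dist"
      },
      "include": ["src/**/*.ts"]
    }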


If you want to attract the best talent and not lose the good engineers you already have, keeping up with the times is a must. ES6 is a major advancement for front-end developers, and they want to be part of it.


By not caring about IE11?

You simply have to ask yourself, who do you want to sell your stuff to.

If you have a rather technical audience, for example, chances are high that most of them are using current browsers. So you simply develop for them and get a very simple build process, which eliminates many potential bug sources.

If you want to sell to Siemens, Bosch, or Daimler, chances are high that they use IE11.

You can also start building for current browsers, and if too many people complain or you get the feeling that you're losing too many users by ignoring IE, you simply add Babel or something.


I’m about to start coding my site from scratch using ES6 module + async/await syntax. I’m doing it because it will improve readability, which in turn will make the code more maintainable.


I'll be waiting till async/await (ES2017) is available natively in browsers and Node.js, and until its error handling story is cleaned up. Right now in Node.js, if you're using streams2/3, it's already quite a mess to recover gracefully. The future of exception domains in Node.js isn't clear to me either. And browsers aren't there yet either (e.g. Fetch API error handling and aborting).
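
To make the error-handling point concrete, a small sketch (assumes native async/await and the Fetch API; the endpoint is made up):

    // fetch() only rejects on network failure, so HTTP-level errors still have to be
    // checked by hand, and there's no standard way to abort the request yet.
    async function loadUser(id) {
      try {
        const res = await fetch(`/api/users/${id}`);
        if (!res.ok) {
          throw new Error(`HTTP ${res.status}`);
        }
        return await res.json();
      } catch (err) {
        console.error('loadUser failed:', err);
        return null;
      }
    }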


In a nutshell, the js community is doing a great job of moving the language forward, and there is a great deal of cost to be borne by using the oldest version of the language that all browsers support. Browser vendors (in general) are the laggards, but there is an easy enough way to work around them.

ES6 also nudges the code toward a bit more uniformity that can lead to better medium term maintainability.


I'm going to use Typescript regardless of which flavor of ES/JS I expect to target, for type safety alone. Since I'm already using a build system, writing for the latest ES/JS spec supported by Typescript is entirely about convenience of the new specs themselves. I can use Typescript to downlevel as necessary for the expected runtime environments.


Am I the only one to not understand the question?


Not OP, but here's how I interpreted it:

"ES6" is what people used to call "ES2015". So whenever I see "ES6", I translate that to "JavaScript with a lot of features that JavaScript used to not have"

In the JavaScript world, some people "target" an older version (usually ES5) because for a long time, browsers and Node.JS didn't support "ES6" features. So while many have written "ES6", at least in the past they would have to "transpile" (read: compile) it, and the output is ES5 JavaScript (which many browsers can read).

But in recent times there has been an increase in people who aren't transpiling down to ES5; in other words, they give browsers their "ES6" code, and just hope the browsers understand it. OP is wondering why people would do that, and many responses here provide spectacular explanations.

EDIT: reading through the responses, it seems many people interpreted the title to mean "Why do you write JS using things that debuted in or after ES2015?". In other words, people ignore "targeting" and assume that everyone is still using a transpiler that targets ES5.


I'm not sure - which part is not clear?


Targeting ES6 by default, what does that mean? Rationalize the choice? Is he talking about the developers? The users? In the latter case, why not use Babel/TypeScript? I'm confused.

Please add some text to your title when you post on "Ask HN" (context, details, why you're asking the question, etc.).


Recently I've been advocating a move to distributing uncompiled ES6 packages: shipping ES6 to browsers that support it, and compiling with a single simple preset (ES2015 except modules).

There are several big reasons:

1) ES5 "classes" cannot extend ES6 classes (it's not possible to emulate a super() call in ES5), so:

1a) All the extendable built-ins are basically ES6 classes. If you want to extend Array, Map, Set, HTMLElement, Error, etc. you really should use ES6, and deal with emulating extending those classes in the compilation step.

1b) Really useful patterns, like ES6 mixins ( http://justinfagnani.com/2015/12/21/real-mixins-with-javascr... ), don't work when ES5 and ES6 versions are mixed in a prototype chain. That is, an ES6 mixin, compiled to ES5, can't be applied to an ES6 superclass. So if you distribute ES5, you're forcing everything above you on the prototype chain to be ES5. (A small sketch of the pattern follows this list.)

2) It's much easier to debug uncompiled code, and all major browsers (as of Firefox 51) and node support ES6 well. Shipping uncompiled ES6 is a dream to work with comparatively.

3) ES6 is a much better compilation target for things like async/await because of generators.

4) ES6 is easier to statically analyze, so tools like TernJS and TypeScript (which can do analysis of regular JS too) can give better completions, etc.

5) It makes the pipeline from source -> packaging -> depending -> building for deployment much simpler.
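
To make 1b concrete, here's a rough sketch of the mixin pattern in question (the class names are made up):

    // A mixin is a function from superclass to subclass.
    const Serializable = (superclass) => class extends superclass {
      serialize() {
        return JSON.stringify(this);
      }
    };

    class Model {}

    // This works when every link in the chain is real ES6. An ES5-compiled version of
    // Serializable couldn't extend the ES6 Model, because ES5 can't emulate super().
    class User extends Serializable(Model) {
      constructor(name) {
        super();
        this.name = name;
      }
    }

    new User('ada').serialize(); // '{"name":"ada"}'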

Packages shouldn't assume too much about their eventual environment. That used to mean not assuming that the environment had ES6 or things like Promises. But times change and now that all the current environments support ES6, packages shouldn't assume that environments _don't_ support it.

So packages shouldn't directly depend on polyfills that most current environments have (Promise, Object.assign, new Array methods, etc.). Instead they should target standard ES6 and let the app developer who knows what they're targeting choose the necessary polyfills and down compilation. There's really too much bloat from packages forcing the inclusion of multiple Promise polyfills, or versions of core-js.

Also, it used to be that compiling dependencies was a major pain. You'd likely have to write custom build rules that compile and stage each dependency, because each dependency might use different language features and polyfills. Now the packagers like WebPack and Rollup are so good at finding all dependencies statically, and we have a new stable plateau of language features in ES6, that the packager can compile everything it needs.

Of course if you use features beyond ES6, then those should be compiled to ES6. This implies a rule of thumb to move forward: Once all major browsers and node LTS have a feature, start assuming that feature and don't compile it out. For example, once Firefox and node LTS get the exponentiation operator, start distributing ES2016 (see http://kangax.github.io/compat-table/es2016plus/ ).


I'm using Typescript. The amount of bugs it catches (e.g. undefined/null errors and unexpected string/number bugs being the biggest ones) makes it more than worth it.


Next stop: Purescript :-)


Are there any good examples of things you can do in Purescript you can't do in Typescript? I'm finding the latter a good compromise of not going against the grain too much while getting a lot of the type safety I'd expect from a functional language.


You can't do anything extra, since PureScript compiles to JS anyway. However, you get all of the advantages of a powerful type system, and far more compile-time guarantees.


Do you have examples of the compile-time guarantees you mean?


Sure

* Your values won't be null. So if you have a type, say Object, and a value x of type Object, x will never be null. If in some cases you need values that can be something or nothing, there is a "Maybe a" type and an "Either a b" type (often used for errors, i.e. it is either an error of type a or a result of type b). This gives you a lot of control and expressiveness around things that might not have values.

* Functions have no side effects. You define a function and it has inputs and outputs, but it won't go off and do something else. You can be sure that your functions aren't modifying global state, navigating to new URLs, etc., even without inspecting the functions they call, or the functions those call.

How does anything get done? Well, the desired effects are returned as data, and you can tell from the type what kinds of effects it can produce and what it can't.

Now I spend a lot of time on C# legacy code thinking 'ah, a public property, great, who is calling that? What is this function calling? OK: a, b, c, d. Now what does a call? OK: a1, a2, a3, a4. They look OK, now what about b?' and so on. Lots of detective work that is not needed in Haskell/PureScript, because the type sigs tell you almost everything you need to know, from an architectural point of view, about what is going on, since there are no side effects of functions beyond producing data to be consumed.


On personal projects? babel-preset-stage-0 #yolo


Hello! I assume you are talking about web development.

As a developer, the new versions of ECMAScript include good features that make developing our applications easier. Also, ES6 is not a beta; it's a new version of the language. We are talking about the frontend, so the execution environment of our application is unknown. However, the benefits of using the new features are big.

For me, the most important one is the new methods on basic types. With old versions of ECMAScript, I needed to rely on third-party libraries to add some functionality to my project. With the new versions of ES, my code is more independent from external sources.
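
A few of the additions I mean (illustrative; some are ES2015, some later):

    const names = ['ada', 'grace', 'alan'];

    names.find(n => n.startsWith('a'));                 // 'ada'  (was _.find or a manual loop)
    names.includes('grace');                            // true   (ES2016; was indexOf(...) !== -1)
    Object.assign({}, { retries: 3 }, { retries: 5 });  // { retries: 5 } (was _.extend / $.extend)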

Also, Babel provides backward compatibility with old versions of the language. As I mentioned, we don't know the execution environment of our project, but TC39, the committee behind the decisions on new features, has thought about this. Adding backward compatibility is a requirement for new features in the language.


"With the new versions of ES, my code is more independent from external sources."

What about all of those external sources brought in to support building/transpiling/source map generation/whatever?


I consider Babel a trusted source. I prefer to have the code transpiled by Babel instead of using 15 libraries each developed by a single person. Also, I can remove individual transpilation plugins as soon as browsers implement the corresponding features.



