
ECMAScript 2017: the final feature set - obilgic
http://www.2ality.com/2016/02/ecmascript-2017.html
======
kybernetikos
The biggest problem with JS right now is the fragmented and inconsistent
collections APIs.

Collections in JS are modelled with Arrays, (Weak)Maps, (Weak)Sets,
iterables/iterators, strings, promises, NodeLists, typed arrays, objects
(where structs would be used in other languages), generators, and collections
provided by third-party libraries (e.g. immutable structures).

There is no convenient set of methods that works on all of these, and no
protocol that you could use in your third-party library to behave in a
consistent and expected way.

The closest we have is iterables, but you can do very little with an iterable
beyond iterating it. No map, no filter, no flatmap, no reduce. You can create
a new array out of it and then use the array methods (which don't exist on any
other collection), but that's pretty dumb too.
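
For instance (my own illustration, not from the comment): a Map is iterable,
but has none of the array combinators, so you end up round-tripping through an
Array:

```javascript
// A Map is iterable, but has no map/filter/reduce of its own, so the
// usual workaround is to materialise an Array first.
const scores = new Map([['ann', 3], ['bob', 7]]);

// scores.map(...)  -- TypeError: scores.map is not a function
const doubled = Array.from(scores.values()).map(n => n * 2);
console.log(doubled); // [ 6, 14 ]
```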

"We'll let libraries decide" is the usual cry, but it's impossible to write
functions that will appear only on iterables or iterators, since they have no
common prototype other than Object.

Ideally the language itself would provide a small, coherent protocol that all
collection and collection-like things could implement, and then all the good
stuff could be built on top.

Languages like clojure do a good job of this.

~~~
xamuel
>no map, no filter, no flatmap, no reduce

ES6 generators pretty much solve this, especially once you learn "yield*"
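
A sketch of what that looks like in practice (my own illustration): map and
filter written once as generator functions, usable on any iterable:

```javascript
// Generic lazy combinators over any iterable, built on generators.
function* map(iterable, fn) {
  for (const x of iterable) yield fn(x);
}

function* filter(iterable, pred) {
  for (const x of iterable) if (pred(x)) yield x;
}

const evensSquared = map(filter([1, 2, 3, 4], n => n % 2 === 0), n => n * n);
console.log([...evensSquared]); // [ 4, 16 ]
```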

~~~
WorldMaker
ES6 generators are a low-level construct. The concern is that you still need
to take a dependency on a library such as wu.js [1] for the higher-order
functions around generators/iterables.

(Compare with C#, where a laundry list of higher-order LINQ operators is
provided out of the box as extensions to IEnumerable. Or Python 2.x's
itertools module, much of which has filtered into the top level of the
standard library in Python 3.)

[1] [http://fitzgen.github.io/wu.js/](http://fitzgen.github.io/wu.js/)

------
algesten
Is it just me who thinks "JS" and "parallelism" are a bad idea together? I was
"stuck" in Java-land for over 10 years; along came NodeJS and saved me. The
absence of threads was no small part of that success.

During my Java days, I saw countless broken attempts at writing parallel code
(typically saved only by the Java VM not exploiting all the possibilities the
spec allows, such as only updating shared global state at the end of a
synchronized block).

Along came NodeJS and proved to us lost Java souls that with non-blocking
I/O you don't need hundreds of threads to write performant code. In fact, one
thread is enough, even for driving complex UI.

Now that we're given the wonderful new Atomics and SharedArrayBuffer, I bet
there will be an underlying expectation that to "keep up" you'd better start
driving complex UI in multiple web workers, because, performance.

But the complexity of coding parallel stuff hasn't gone away. Atomics and
run-to-completion functions aren't simple.
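
For reference, a minimal single-threaded sketch of the primitives in question
(in real use, the same Int32Array view would be shared with web workers):

```javascript
// SharedArrayBuffer gives workers a common block of memory; Atomics makes
// read-modify-write operations on it safe. A plain counter[0]++ is a
// read-modify-write and can lose updates under real concurrency.
const sab = new SharedArrayBuffer(4);   // 4 bytes = one Int32
const counter = new Int32Array(sab);

Atomics.add(counter, 0, 1);             // atomic increment
console.log(Atomics.load(counter, 0));  // 1
```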

I would not be surprised if, two years from now, I'm debugging broken
attempts at writing parallel code in something like react-parallel.

~~~
cel1ne
Look up communicating sequential processes and never have a problem with
multi-threading again.

~~~
camus2
> Look up communicating sequential processes and never have a problem with
> multi-threading again.

Go does that, yet it still needs mutexes and still has race conditions and
memory corruption.

~~~
cel1ne
"never have a problem with multi-threading again" was not meant literally.
Rather that grasping the concept of "sharing data by communicating" helps you
structure your programs.

------
riffraff
> Minor new features[...] String padding

well, that's certainly one way to avoid another `leftPad` mess :)

~~~
deathanatos
… does it work at a level higher than code units? From what I recall,
leftpad's implementation worked at the level of code units, i.e., it padded to
a particular number of code _units_ , which is bugged for just about any use
case one might apply it to.

That is, if you had the strings "a" and "é" (where the latter is "e\u0301")
padded "equally" to length 3, leftpad would pad them approximately thusly:

        __a
        _é
    
        > leftpad('e\u0301', 3, '_');
        <- "_é"

~~~
ygra
I very much doubt it. If it did, they would have to use grapheme clusters as
the measurement, which nearly no one does. The easier way out is to just say
»we have a trivial implementation that can only be used on strings where the
developer already knows what's in them«.
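
That is indeed how the ES2017 string padding methods behave: padStart/padEnd
measure the target length in code units (i.e. string .length), not grapheme
clusters:

```javascript
// "é" composed as "e" + U+0301 (combining acute) is two code units.
const s = 'e\u0301';
console.log(s.length);              // 2, not 1

console.log('a'.padStart(3, '_'));  // "__a"
console.log(s.padStart(3, '_'));    // "_é" -- only one pad char added
```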

------
pragueexpat
OK, some stuff is nice (async), but it seems like the standard keeps adding
syntactic sugar while not addressing more pressing daily challenges, e.g. deep
cloning (Object.assign does not go deep).

~~~
WorldMaker
It's a hack, but `JSON.parse(JSON.stringify(obj))` is a relatively performant
deep clone.

Very few languages that I know have deep cloning outside of some sort of
serialization hack anyway (C#'s deep cloning is a reuse of/leftover from
binary marshalling), so this is roughly par for the course, so far as I'm
aware.
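
A quick sketch of the hack and its main caveat (only JSON-representable
values survive the round trip):

```javascript
// Serialization-based deep clone: cheap to write, but lossy for anything
// that isn't plain JSON (Dates, functions, undefined, Maps, ...).
const obj = { a: 1, nested: { b: [2, 3] }, when: new Date(0) };
const clone = JSON.parse(JSON.stringify(obj));

clone.nested.b.push(4);
console.log(obj.nested.b.length);  // 3 -- the original is untouched
console.log(typeof clone.when);    // "string" -- the Date didn't round-trip
```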

~~~
paulddraper
> is a relatively performant deep clone

Relatively short, but IMO not relatively performant. All the JSON
encoding/decoding takes its toll.

~~~
WorldMaker
Relative to most purpose-built deep clone libraries I've tried and most
hand-rolled deep clones I've seen, which use some combination of DFS and
Object.assign.

Obviously YMMV, and your performance needs likely differ from my own.

(Personally, at this point I try to use immutability [with ImmutableJS or
friends] over deep cloning, but sometimes a deep clone is still handy.)

------
holydude
I wish there were a way to avoid dealing with JS on both backend and frontend.
There are many transpilers, yes, but the common language to write everything
in at many, many companies is either ES5 or ES6, rarely TypeScript or
CoffeeScript.

------
coltonv
The last two ECMA releases (2017 and 2016) have felt pretty dull, honestly. I
wonder if their approval process for features is too slow. In 2016 we got
almost nothing: the exponentiation operator and Array.prototype.includes(). In
2017 we finally got async functions, which had been in consideration for years
now, a few helper functions, and shared memory. Object rest/spread would
probably have made up for it, but now we'll have to wait at least another year
even though it's been stage 3 for quite some time.
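
For the record, the ES2016 additions mentioned above, plus an ES2017 async
function, in a few lines:

```javascript
// ES2016: exponentiation operator and Array.prototype.includes.
console.log(2 ** 10);                 // 1024
console.log([1, NaN].includes(NaN));  // true -- indexOf can't find NaN

// ES2017: async functions suspend at `await` without blocking the thread.
async function twice(getItem) {
  const a = await getItem();
  const b = await getItem();
  return [a, b];
}
twice(async () => 42).then(r => console.log(r)); // [ 42, 42 ]
```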

With the language moving this slowly compared to faster-releasing languages
like Python, Elixir, and Go, I wonder if Node.js will start to see a decline
in hype, since the language hasn't improved much in the last two years. What
do you guys think?

~~~
thomasfoster96
> The last 2 ECMA releases (2017 and 2016) have felt pretty dull honestly.

It’s not exactly surprising. ES6 came out almost six years after ES5, so there
were quite a few changes then. The yearly release cycle was always going to
mean fewer features being standardised each release.

While I do follow some of the work TC39 does from a distance, I can only
assume that most people involved have realised that having an orderly process
for feature proposals - and not trying to rush heaps of features into a single
release - is a much better way to do things.

> With the language moving this slow compared to faster releasing languages
> like Python, Elixir, and Go

Python, Elixir and Go have the advantage that they have one main/reference
implementation and can break backwards compatibility whenever they want
(although that hasn’t gone brilliantly for Python) by just incrementing a
version number. ECMAScript runtimes have to be able to run programs that were
written almost two decades ago - backwards compatibility can’t be broken - and
you have several major implementations (V8, Spidermonkey, etc.) to deal with.

The tail call feature from ES6 is already being revisited because it caused
problems [0].

> I wonder if Node.js will start to see a decline in hype since the language
> hasn't improved much in the last 2 years.

Tools like Babel mean that people who really want to use new or experimental
language features can do so. There are dozens of language features in the
proposals pipeline [1] and many (if not a majority) can be used via a compiler
like Babel or via a polyfill.

Node.js also has the advantage of having its own 'standard library' and an
enormous package ecosystem to use. I think it will be fine.

[0] [https://github.com/tc39/proposal-ptc-syntax](https://github.com/tc39/proposal-ptc-syntax)
[1] [https://github.com/tc39/proposals](https://github.com/tc39/proposals)

~~~
hajile
> The tail call feature from ES6 is already being revisited because it caused
> problems

Proper tail calls being in the spec doesn't cause problems. Safari shipped
them a while ago. Chrome/Node has them behind a flag. Google wants them to be
explicit, and everyone's bikeshedding about which bloated syntax is the
correct one (or whether it should even have its own syntax).

The actual debugging issues they talk about are non-issues in my book.
Catering to new devs who don't understand tail calls is stupid (by that
metric, we should get rid of a lot of other things too). We don't need stack
traces for tail calls that loop (just as we don't need them in a for loop).
A shadow stack works just fine for CPS (continuation-passing style) and is
still better than the debugging tools we have for stuff executing across the
event loop (aka nothing worth mentioning).

