

New-less JavaScript (2013) - voltagex_
http://jfire.io/blog/2013/03/20/newless-javascript/

======
Rygu
I use `delete object[key]` all the time to unset a property. I do think that in
the JS world the use of `new` is inconsistent, but avoiding the keyword does
not get rid of that fact. Your new-less libraries may be consistent with each
other, but not with the rest of the JS world, which does things its own way.
([http://xkcd.com/927/](http://xkcd.com/927/))

This seems like a problem the language should solve incrementally. Especially
with ES6 classes and modules!

~~~
maga
`delete` is harmful, consider setting property to `undefined` or `null`
instead.

~~~
kalms
Why is it harmful to delete a property that disappears forever?

~~~
aardvark179
It is going to depend partly on the internals of your JIT. Since JS objects
are so mutable, it's common to use their shape as the basic check for fast
method dispatch. So if you take a Foo and add a property x, it will become a
Foo(x). If you make lots of calls on objects of this shape, the method lookups
will make it into inline caches and be very quick; calls against an object of
shape Foo() or Foo(y) or Foo(x,y) will be treated separately. Inline caches
for method dispatch are generally very small, so you'll only get really fast
dispatch for a few different shapes at any particular call site; the more
consistent the shapes, the more likely you are to keep the JIT happy.
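
A sketch of the shape transitions described above (the shapes themselves are
VM-internal, so only the observable effects are shown; `Foo` here is just for
illustration):

```javascript
function Foo() {
  this.x = 1; // every Foo starts life with shape Foo(x)
}

const a = new Foo();
const b = new Foo(); // a and b share a shape, so call sites reading .x stay monomorphic

b.y = 2;    // b transitions to a new shape, Foo(x, y)
delete b.x; // delete typically demotes b to a slow "dictionary" mode

// Setting the property to undefined instead of deleting keeps the shape stable:
const c = new Foo();
c.x = undefined; // still shape Foo(x); the property still exists on c

console.log('x' in b, 'x' in c); // false true
```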

~~~
josteink
So your argument is that we should, now and in the future, constrain our
usage of language features based on the current state of language runtimes?

How are we going to keep pushing the boundaries then? Had JS started with that
sentiment, we would have a V8 JS runtime running at about a 100th of the speed
it does now for general-purpose code written with the aim of being as clear
and reusable as possible.

Personally I prefer good, clear and concise code over code written to appease
some secondary effects inside a black box which is not guaranteed to stay the
same.

This is clearly an optimization, and I'm not saying all optimization is bad
per se, but all optimization should be justified if it affects code-clarity.
Are you sure these optimizations are warranted? Have you profiled your code
and identified that this is a bottleneck for performance-critical sections of
your code?

A former Nvidia-engineer had a really good rant about what sort of things your
thinking leads to:

[http://www.gamedev.net/topic/666419-what-are-your-opinions-o...](http://www.gamedev.net/topic/666419-what-are-your-opinions-on-dx12vulkanmantle/#entry5215019)

Key quote: "So the game is guessing what the driver is doing, the driver is
guessing what the game is doing, and the whole mess could be avoided if the
drivers just wouldn't work so hard trying to protect us."

I agree it's not a direct analogue, but the same line of thinking still
applies.

~~~
aardvark179
My argument is that you should be aware of what is required to implement a
language (at least to some extent), and of what may go against implementation
assumptions. These things are always a tradeoff. If you mutate objects enough
over their lifespan, then maybe implementations will move towards looking at
the methods attached to an object and ignoring the shape of the data except
when accessing a property (the methods are already normally stored off to one
side, on the assumption that you won't change them). But that will either make
objects larger (a ref to the shape and to the methods must be stored per
object) or require an extra indirection (following from the shape to the
methods), so it may incur a slight overall performance penalty.

Yes, we are all in the same game as GL driver developers and game devs, and
that goes for all language implementations. The fine details of what gets
optimised best may change from VM to VM, and from release to release, but it's
important to understand the general assumptions that these systems are built
around, because they don't change quickly.

In general, dynamic languages that allow properties to be created and deleted
on individual objects are a complete PITA to optimise, and mutability of
classes makes things harder still. So even though you have these abilities in
JS and other languages, your code will often perform better if you limit your
use of them, especially in areas where performance is critical. This doesn't
mean you shouldn't use those mutability features, but you should have some
notion of the cost you might be incurring.
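
A minimal sketch of that advice: declare every property up front and "clear"
with null, so each object keeps one shape for its whole lifetime (`Point` and
its properties are made up for illustration):

```javascript
function Point(x, y) {
  this.x = x;
  this.y = y;
  this.label = null; // declared up front even though it's optional
}

const p = new Point(1, 2);
p.label = 'origin'; // no shape change: the slot already existed
p.label = null;     // "remove" it without leaving the fast path

console.log(p.x + p.y); // 3
```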

------
drinchev
One of the greatest things about preprocessing languages is that they usually
use optimized and hard-tested features of the specific language.

Let's examine CoffeeScript (which is one of the most popular):

    
    
        class Foo
          constructor: -> return 'bar';
    
        foo = new Foo();
    

will compile to:

    
    
        var Foo, foo;
    
        Foo = (function() {
          function Foo() {
            return 'bar';
          }
    
          return Foo;
    
        })();
    
        foo = new Foo();
    

Which, IMHO, follows JavaScript best practices. Since JS is such a flexible
language, if you don't have any coding standards your code can become an
unmaintainable hell very quickly.

I would advise against using the proposed "new-less JS", because it breaks the
general coding style guides that a lot of libraries use. Yes, it would be
possible to write it that way, but keep in mind that your code would be one
step less readable to random developers.

~~~
cromwellian
Is that really optimized/optimal? I'd be curious to see jsperf benchmarks of
this method of construction vs the straightforward approach.

The JS VMs tend to be kind of finicky and optimized towards certain types of
idiomatic Javascript, and my guess is that the further off the common path you
stray, the more likely you are to hit performance gotchas.

Your best bet is to transpile code that looks much like the benchmark suites
they run. :)

~~~
drinchev
I also agree with what you are saying. JS is so big in terms of usage that I
guess the developers behind JS compilers will have to optimize for "the most
popular way of solving that problem".

In any case, the reason I mentioned preprocessors is that there is a lot of
history behind every feature. To that point, there are a couple of GitHub
issues [1] that explain why this approach was used.

As for performance: it seems that `new` is way faster. [2]

[1] :
[https://github.com/jashkenas/coffeescript/issues/912](https://github.com/jashkenas/coffeescript/issues/912)
[2] : [https://jsperf.com/obj-create-vs-new/4](https://jsperf.com/obj-create-vs-new/4)

------
1day1day
"JavaScript borrowed the new and delete keywords from its less-dynamic
predecessor languages. They feel a bit out of place in a garbage collected
language and are a source of confusion for newbies "

They are? 'delete', maybe, due to language semantics - but not 'new'. I think
it's unfair to bundle them together in terms of newbie confusion.

Also, every time I see one of these types of blog posts or discussions in a
book, the workarounds are even worse.

JS is easy to understand; by the time you need 'new' it is not that difficult
an extension of your current knowledge. As for 'delete', even an advanced JS
developer could go a year without calling it.

~~~
shangxiao
'new' is a source of confusion for newbies and experienced devs alike, often
enough that they ask questions about it on SO:
[http://stackoverflow.com/questions/9468055/what-does-new-in-...](http://stackoverflow.com/questions/9468055/what-does-new-in-javascript-do-anyway)

Even I have to refresh my memory on these ambiguous parts of the language from
time to time.
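
For reference, what `new Ctor(...)` does can be sketched in a few lines (a
simplification that ignores some edge cases; `fakeNew` and `User` are made-up
names for illustration):

```javascript
function fakeNew(Ctor, ...args) {
  // 1. Create a fresh object whose prototype is Ctor.prototype.
  const obj = Object.create(Ctor.prototype);
  // 2. Run the constructor with `this` bound to that object.
  const result = Ctor.apply(obj, args);
  // 3. If the constructor explicitly returned an object, use it; else use obj.
  return (result !== null && typeof result === 'object') ? result : obj;
}

function User(name) {
  this.name = name;
}
User.prototype.greet = function () {
  return 'hi ' + this.name;
};

const u = fakeNew(User, 'ada');
console.log(u instanceof User, u.greet()); // true hi ada
```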

------
z3t4
The nice thing about JS is that it has so many "religions". But some people
think there should only be ONE religion. While One religion would be easier
for newbies, destroying all other religions would kill the language.

So can we just agree that many religions are nice? It's OK to try to convert
people, but it's not OK to force someone by changing the language itself.

With that said, I'm a bit worried about more and more religions entering the
JS language, like classes and block scope (ES6), and about how some new
features, both good and bad, are forever changing the language.

------
rtpg
I think a good complement to this concept is that of "this"-less Javascript,
relegating `this` only to things like mixins.

So many headaches are to be had if you're not careful; some decent functional
patterns go a long way.

~~~
AnkhMorporkian
I don't think there's anything inherently evil, bad, or even particularly
confusing about `this`. As long as you understand how it gets attached when a
function is called, there's very little confusion to be had. The problem is
that most of the resources for learning completely gloss over this incredibly
important aspect of the language.

~~~
insin
...and too few of the posts complaining about it mention strict mode or ES6
fat arrow syntax.
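
A quick sketch of the fat-arrow fix: an arrow function closes over the `this`
of its enclosing scope instead of receiving its own per call (`Counter` here
is a made-up example):

```javascript
function Counter() {
  this.count = 0;
  // With a plain `function` callback, `this` inside forEach would be
  // undefined (strict mode) or the global object; the arrows keep it
  // bound to this Counter instance.
  this.incrementAll = (items) => items.forEach(() => { this.count++; });
}

const c = new Counter();
c.incrementAll([1, 2, 3]);
console.log(c.count); // 3
```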

------
jbergens
The library StampIt uses factory methods instead of `new` to allow some extra
features. It works a lot like .extend() in other libraries, but with some
extra options.

[https://github.com/ericelliott/stampit](https://github.com/ericelliott/stampit)

------
EugeneOZ
It's a movement in the wrong direction. Being able to call a constructor
without "new" is an artifact of the time before ES6, when there was no such
thing as classes. Now we should just deprecate this artifact.

------
zimbatm
The best reason to have new-less constructors is because it's now possible to
use them as higher-order functions.

    
    
        [1,2,3].map(User)
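
One common way to make `User` callable without `new`, so that the snippet
above works, is an instanceof guard (a sketch, not necessarily the linked
article's exact implementation):

```javascript
// A constructor that detects being called without `new` and re-invokes
// itself properly, so it can be passed straight to map().
function User(id) {
  if (!(this instanceof User)) return new User(id);
  this.id = id;
}

const users = [1, 2, 3].map(User); // map's extra index/array args are ignored here
console.log(users.map(u => u.id)); // [ 1, 2, 3 ]
```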

~~~
insin
As long as the constructor doesn't take multiple arguments [1], otherwise:

    
    
        [1, 2, 3].map(id => User(id))
    

[1]
[http://speakingjs.com/es5/ch15.html#_pitfall_unexpected_opti...](http://speakingjs.com/es5/ch15.html#_pitfall_unexpected_optional_parameters)

~~~
elclanrs
I find it useful to have `unary` and `apply` around:

    
    
      [1,2,3].map(unary(User))
      [[1,2],[3,4],[5,6]].map(apply(User))
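
`unary` and `apply` could be tiny helpers like these (hypothetical sketches,
not taken from any particular library; `User` is assumed to be a new-less
constructor as discussed above):

```javascript
// unary drops map's extra index/array arguments; apply spreads an
// array of arguments into the function.
const unary = fn => x => fn(x);
const apply = fn => args => fn(...args);

function User(id, name) {
  if (!(this instanceof User)) return new User(id, name);
  this.id = id;
  this.name = name;
}

console.log([1, 2, 3].map(unary(User)).map(u => u.id));     // [ 1, 2, 3 ]
console.log([[1, 'a'], [2, 'b']].map(apply(User))[0].name); // a
```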

------
leichtgewicht
How is this news? It says 2013 in the title ...

------
simula67
I feel like I'm being bullied into learning Javascript to do web programming.
I wish there was a push towards supporting arbitrary languages for web
development.

~~~
ChrisAntaki
Well, that's possible with compile-to-JavaScript languages. Also you have the
option of plugins like Unity, if you're going for something graphically
adventurous.

That being said, JavaScript is a powerful tool, if wielded correctly. It's so
flexible. In a lot of ways, it's like a pizza. One man orders a pizza with
anchovies, another with green peppers. And on and on. Eventually it's like
they've got two completely different dinners.

~~~
simula67
> Well, that's possible with compile-to-JavaScript languages.

I have tried to read code that was translated to Javascript by some source-
to-source compilers, and it is very hard to understand. On the other hand, for
a language like C it is very easy to understand the output (assembly). Don't
get me wrong, understanding the intricacies of assembly for each processor
architecture is a difficult process, but the rules that define assembly
language itself are not: there is an opcode and some operands, and each
instruction performs an action on the operands or on state stored in various
registers. Javascript (at least for me) is too high-level to attempt to read
machine-translated output. Programmers need to know at least _one_ level below
the abstraction they are programming to.

~~~
jbergens
I would say the output of most compile-to-JS languages is easy, or at least
not hard, to read. It is usually the minification that causes problems, and
you don't have to minify in a development environment, which is where you
usually attach your debugger. It's usually also hard to debug a running
application written in a static, compiled language (unless you have symbol
tables and the like).

