JavaScript is Dead. Long Live JavaScript (michaux.ca)
160 points by peroo on June 26, 2011 | 66 comments



Great and thorough article. Here's a TLDR:

* Javascript has some warts, and here are some examples.

* It's going to take way too long to get these fixed to the point that we can actually start using new versions of Javascript because of slow browser adoption.

* To get around this problem, people are treating Javascript as a compilation target for other languages, the most prominent example being CoffeeScript.

* Discussion follows of features that would make Javascript better serve its increasing role as a compilation target.

* There is discussion of Harmony throughout.

Sometimes offering a TLDR implies the article isn't worth reading ("here are the author's points so you can skip it"). That's not the case here. For example, I learned about "Tennent's Correspondence Principle", which helps give a name to the different ways procs/blocks and methods/lambdas behave in Ruby, as explained in this article[1].

[1] http://www.robertsosinski.com/2008/12/21/understanding-ruby-...


What if the standard could be changed from a language to a runtime specification? For example, a subset of LLVM assembler could play the role of a substrate (very much like what NaCl is doing) and leave the language part open. Runtimes with sufficiently rich capabilities can evolve over a much longer period than languages. This could liberate client-side coding from being tied to a single language, just as it doesn't matter what language you use on the server side.

That might sound a lot like Java, but it could use some fresh thinking. I'm quite excited about NaCl on this front, for example.


This is a great idea, and I agree NaCl is the most exciting thing I've seen in a while. However, the modern JS engines already have JIT compilers for JS. As these improve more, JS effectively becomes your "bytecode". That's also an entirely reasonable direction that maintains backward compatibility.


"It's going to take way too long to get these fixed to the point that we can actually start using new versions of Javascript because of slow browser adoption."

Which isn't true, because you could compile newer versions of javascript to older versions. I see languages like coffeescript as interesting experiments in how the language can evolve, but eventually it needs to come back home to javascript and get deployed natively in browsers.


Seeing how a handful of ideas in CoffeeScript have been rumored to be included in the next JS release, it seems like individual experimental language design might be the way to push for change.


Yes but many of the ideas in CoffeeScript were likely slated for ES.next before CoffeeScript. So it has been working in both directions.


Good point. I hadn't thought of compiling old versions of javascript to new versions - that keeps everyone writing in the same language and seems like a better solution to me.


There was an ES4-to-ES3 compiler. The Traceur compiler takes many of the Harmony ideas and compiles to ES3.
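For a flavor of what such a compiler has to do, block-scoped bindings are typically lowered to closures. A rough illustrative sketch of the general technique (not Traceur's actual output; 'n' and 'use' are stand-ins):

  // Harmony-style input:
  //   for (let i = 0; i < n; i++) { use(i); }
  //
  // Possible ES3 output: capture each iteration's value in a closure.
  for (var i = 0; i < n; i++) {
      (function (i) {
          use(i);
      })(i);
  }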


Oops, meant to say "compiling new versions of javascript to old versions", not the other way around.


> If your application uses the let idiom, wouldn’t it be nice to have new syntax for it?

Mozilla introduced this back in Firefox 2, and you can replace var with it, so TFAA's

    for (var i=0, ilen=elements.length; i<ilen; i++) {
        var element = elements[i];
        let (num = i) {
            LIB_addEventListener(element, function(event) {
                alert('I was originally number ' + num);
            });
        };
    }
becomes:

    for (let i=0, ilen=elements.length; i<ilen; i++) {
        let element = elements[i];
        LIB_addEventListener(element, function(event) {
            alert('I was originally number ' + i);
        });
    }
Of course, since this does not quite seem to be a tight loop you should be using Array.prototype.forEach instead:

    elements.forEach(function (element, i) {
        LIB_addEventListener(element, function(event) {
            alert('I was originally number ' + i);
        });
    });
> The JavaScript idiom that potentially spans the most lines of your program may be the inheritance idiom.

That's not an idiom. And there are numerous implementations of classes on top of prototypes out there, if there's one javascript "problem" which does not need new syntax it's that one.


Your example shows how the author's conclusions are wrong. IMHO the biggest milestones in JavaScript's life have been its libraries and the creative uses we have seen around them. From XMLHttpRequest to jQuery (and Prototype, MooTools, etc.) and now Node.js, there have been highly disruptive innovations on top of this basically flawed language.

Maybe we should give more attention to the basic process behind this, creating libraries aka Programming Motherfucker.


Actually, the forEach version can end up faster than the let version... especially if you self-host forEach so that the JIT can do its job (e.g. inlining the LIB_addEventListener calls so there aren't actually any function invocations, etc).


> Actually, the forEach version can end up faster than the let version...

Theoretically maybe, but I've yet to see that. A regular `for(;;)` is generally 3-5 times faster than using `Array.prototype.forEach`.

Re. JIT, hand-rolling a foreach in javascript can be up to twice as fast as the native one, depending on the features you leave out.
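For instance, a hand-rolled version can skip the sparse-array (hole) checks and thisArg handling that the spec requires of the native Array.prototype.forEach. A minimal sketch:

  // No hole checks, no thisArg; length read once up front.
  function each(arr, fn) {
      for (var i = 0, len = arr.length; i < len; i++) {
          fn(arr[i], i);
      }
  }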


> Theoretically maybe, but I've yet to see that

Compare the let version to the code in https://bugzilla.mozilla.org/show_bug.cgi?id=602132


"Optional Parameters and Default Values"

Since JavaScript will still invoke a function even if the number of arguments doesn't match the parameter list, parameter defaults will not completely eliminate all the problems he lists.

  function(a = 1, b = "Smith", option = {}) { .. }
Will still break if you try to invoke it without an argument for 'b': the value intended for 'option' gets bound to 'b', while 'option' silently defaults to an empty object.
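For example (hypothetical call, naming the function above 'f'):

  f(1, { color: "red" }); // the object is bound to 'b'; 'option' defaults to {}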

Furthermore he argues against "option = option || {};" only to do the same thing in the function header: "function(a, b, option = {}) { .. }"? What difference does that make?

The whole section in my opinion is negated by passing object literals to functions. Stuff like:

  func({ a: 1, b: "Smith", option: { .. } });
That makes it easy to read and easier to handle since names are agreed upon by both the caller and the function handling the invocation:

  var opt = config.option || {};
This is a common practice in JavaScript already, and it beats the solutions he wished were added to the language.
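A minimal sketch of the receiving side of that pattern (hypothetical names):

  function createWidget(config) {
      config = config || {};
      var a = (config.a === undefined) ? 1 : config.a;
      var b = (config.b === undefined) ? "Smith" : config.b;
      var opt = config.option || {};
      // argument order no longer matters, and each default
      // is applied independently of the others
  }

  createWidget({ b: "Jones" }); // 'a' and 'option' get their defaults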


I'm very much in agreement with objects-as-named-params. Default params just mitigate the problem, ambiguous function calling still rears its head as soon as you have multiple optional arguments of the same type.

As to the "option = option || {}", he was referring to the fact that it assigns {} even when you passed a falsy value (0, "", false), which is different from not passing a value at all (undefined). That is resolved by "option = option === undefined ? {} : option", which is even longer and kinda ugly.


Ahh, I see. The thing is, I can't think of a single way to make {} be treated as false in the boolean context, so 1) I don't think you'd ever need anything more than "option = option || {}", 2) I think the example would have been better off using an integer or string where you actually might have to use the ugly long format.

EDIT: Unless of course you get some non-object value passed in and you want to be sure to keep it regardless of whether or not it is falsy. Not sure why you'd want to do that -- I am sure I would NOT like to see someone do that if I had to deal with their code.


His point is that if config.option is supposed to be an integer, not an object, then your proposed syntax falls down when trying to determine whether 0 was passed or whether nothing was passed.
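A quick illustration of the difference (hypothetical function):

  function delay(config) {
      // With ||, an explicit 0 is silently replaced by the default:
      var bad  = config.ms || 1000;                            // delay({ms: 0}) -> 1000
      // An explicit undefined check preserves the caller's 0:
      var good = (config.ms === undefined) ? 1000 : config.ms; // delay({ms: 0}) -> 0
  }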


"Do you want to maintain a compiler? Does anyone on your team have the skills to do that? That ulcer is going to get pretty big while you prepare to explain to the CEO that you now need to rewrite the UI in JavaScript."

GWT could be really strong. But it's this single-maintainer problem that I think prevents its penetration into the market.

Sadly, when I hear people talk about the "strengths" of GWT, debugging support and IDE integration are what I hear about most. If that is the core reason for using GWT, it seems shortsighted.

And for me personally, I find that using Java (a statically typed language) to compile down to JavaScript (a dynamic, function-oriented one) is too big a paradigm switch for developers to maintain. Other languages, specifically functional languages, are far better at preserving their paradigms when used as compiler source languages.


Why isn't the 'single maintainer problem' a concern with other programming languages? What language DOESN'T have a single organization leading its development? Even worse, some projects (incl. coffeescript) seem to be controlled by just a handful of individuals who can easily be hit by a bus, hired away to do something else, lose interest, etc.

I think the only way to defend against that is to have an open source project, and to have a sufficiently large user base such that someone is sure to take over if the owners drop the project.

I think the strength of GWT is that for some languages/teams, a statically typed language is more appropriate than a dynamic language. And isn't more choice a good thing? Avoid the whole 'this is a hammer, everything else is a nail' approach?

I do agree that some teams choose GWT for inappropriate reasons though.


Good questions/points.

I would only respond that GWT in particular is more than a compiler - it is as much a JRE as anything, so there is more to it. I personally don't see Google doing much public interaction with the development of the GWT compiler. If it's left to Google and its own timetable, then my point remains.

JavaScript, on the other hand, really doesn't have a single organization leading its development. Maybe this is an argument to be made about its perceived slow progress. Language design by committee is pretty slow.

More choice is good. Always. But right tool for the job should always win.


So... what you're saying is that having better tooling is a bad thing?

Disclaimer: I've done GWT and I liked it.

I did not find any Java<->Javascript paradigm shift problem at all. You simply forget about the Javascript and just write Java.

The big benefit of using GWT over writing 'native' javascript is that you get the same behaviour on all browsers.

The big benefit of using GWT over using other java based web frameworks is that GWT moves the client's state down to the client, where it belongs.

If you look at most other Java web frameworks like that (e.g. Struts and its direct competitors), they expend enormous effort porting data to and from the client, and trying to hack around the limitations of the HTML post operation - namely that blank values aren't returned (checkboxes!!) and that everything is passed as text (numbers!!). So what these other frameworks do is make you write all these model objects, and then try to simulate the client state. JSF is the worst of these as it takes it the furthest: they're not just emulating the model, but also the view, with their overly-complex de-hydration and re-hydration of components.

If you tell the client state to bugger off back to the client where it belongs, you save a good 30-40% of your typical web framework effort right there.

Eliminating Javascript browser/version idiosyncrasies saves you another 20-30% of your effort (more if the UI is very dynamic, less if it isn't).

-----

I think you're posting on the basis of assumptions you've made based on what you've heard about GWT, rather than experience with it.


He calls inheritance in Javascript a wart. It's one of its major strengths, IMO: the fact that you can directly override methods for a single object. Class inheritance, as he'd like to see, is not all it's cracked up to be. Just look at Java with its anonymous inner classes, for one incredibly ugly construct...

BTW, an easy way to produce traditional class inheritance in JavaScript is to use a single instance of the class you're inheriting from as the prototype. (To work properly that object should be "abstract": it should have no ties to any physical resources, like a browser widget, a database handle or a file handle.)
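A minimal sketch of that technique (hypothetical names):

  function Animal() {}
  Animal.prototype.speak = function () { return "..."; };

  function Dog() {}
  Dog.prototype = new Animal(); // one "abstract" instance as the prototype
  Dog.prototype.constructor = Dog;
  Dog.prototype.speak = function () { return "woof"; };

  var d = new Dog();
  d.speak();                                // "woof", found on Dog.prototype
  d.speak = function () { return "WOOF"; }; // per-object override, the strength mentioned above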


> Class inheritance, as he'd like to see, is not all it's cracked up to be. Just look at Java with its anonymous inner classes, for one incredibly ugly construct...

I'm not sure why you're implying java's anonymous inner classes have much relation with class inheritance.

The vast majority of class-based languages do not have (or need) anonymous inner classes (though some languages allow for creating unnamed classes on the fly by instantiating metaclasses).


Yeah, javascript's prototypal inheritance is flexible enough to simulate classical inheritance. I wouldn't call that a wart.

http://www.prototypejs.org/learn/class-inheritance

http://dojotoolkit.org/reference-guide/dojo/declare.html

Plus, I tend to prefer prototypal inheritance to classical inheritance. I do agree that the constructor pattern using the new keyword and the prototype object is a bit odd, but that's not the only way to have objects inherit from other objects in javascript.


I've used both methods of inheritance.

Prototypal inheritance in the Io language is a beautiful thing.

Not so much in Javascript.

Class-based inheritance in Smalltalk and its dynamic mind-children Python and Ruby is similarly elegant and flexible. Not so much in C++ or Java.

The problem isn't with prototypal inheritance, it's with the implementation of it in JavaScript.


The wart is that since classical inheritance isn't built into the language, we get:

1) Crummy syntax. 2) Everybody doing it a different way.


I see your point to some extent. I don't think the functions for mimicking classes provided by libraries like Prototype and Dojo have particularly crummy syntax. But since there isn't a standardized way to create classes, implementations are usually a little different.

Still, according to this logic, any language that doesn't support classical inheritance has warts. I'm not sure I agree with that. JavaScript doesn't have classical inheritance built in, but I haven't seen anything to make me think that it's really needed from a programming standpoint. Your point #2 is valid, but couldn't that be attributed to the programmers who use the language rather than the language itself?


He did mention the GWT Java to Javascript compiler. I have been using GWT (actually SmartGWT) a lot lately because one of my customers uses it. Except for long build times, the GWT Java to Javascript compiler has a lot going for it. Once a project is built and running in test + debugger mode, the client side development experience is pretty good.


Why not just have a library in Javascript, for example:

  Q.each(arr, function(i) {
     this.rocks();
     arr.pop(); // watch out doing this in the loop
     arr.push('foo'); // watch out doing this in the loop
  });

  Q.each(obj, function(k) {
     this.rocks();
     delete obj[k]; // should be safe maybe
  });

  var p = Q.pipe(crazy stuff involving pipes and callbacks)
  doSql("SELECT * FROM users", p.fill('users'));

  getUsers = Q.getter(getUsers); // turns it into a really smart getter

  getUsers = Q.batch(getUsers); // turns it into a batched getter
and lots of other stuff

Why must the language take care of everything for you when libraries can do it? That has been the case with C, C++ and other languages.


Features that can be added through libraries should be added that way. You cannot add new syntax which involves scoping of variables through libraries. How would you do destructuring assignments with a library and have it as succinct as it can be done with syntax?
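For example, compare the proposed syntax with the closest a library can get, which has to smuggle the new names in through a callback and therefore changes their scope ('destructure' is a hypothetical helper):

  // Harmony syntax (proposed):
  //   var [first, rest] = parse(input);

  // Library approximation:
  function destructure(arr, fn) { return fn.apply(null, arr); }

  destructure(parse(input), function (first, rest) {
      // 'first' and 'rest' exist only inside this callback
  });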


Yeah, some features can be pushed into the language, but I like languages which have few rules. Like C and like CHESS :)

My favorite languages are C, Java and Python though. So go figure...


If you like minimalistic languages, have you checked out Scheme? Because of its macros, the core language can stay small and the users can expand its syntax.


I tend to agree that compiling to JavaScript is the future of browser programming. CoffeeScript seems to be a nice language, but I bet there will be many more to come in the next year or two.


I hope someone writes a decompiler for one of these supra-JS languages (not sure if CoffeeScript is Turing-complete) to run against existing Javascript codebases. It could well speed up adoption if a mass-tangle of JS can be boiled down to a more readable and understandable structure.


  > not sure if CoffeeScript is Turing-complete
Wait, why wouldn't it be?


It is, though that's not very surprising. Here's a quick proof:

S-K combinators[1] are a Turing-complete subset of lambda calculus. Here's an implementation of them:

  k = (x) ->
    (y) -> x
  
  s = (x) ->
    (y) ->
      (z) -> x(z)(y(z))
1: http://en.wikipedia.org/wiki/SKI_combinator_calculus


Heck if I know, I'm not CS trained.

Point being, CoffeeScript and other compilers would get a big boost if someone came out with a way to convert existing JS codebases into these new, more readable, more maintainable (as the story goes) representations.


js2cs - https://github.com/mindynamics/js2cs

Though I've found doing the conversion manually is a better way to learn CoffeeScript. Plus, it is a point for optimization and refactoring since CoffeeScript allows for doing certain things more easily or in better ways. The automated conversion only does so much.


Not sure if that's what you meant, but there is already a JavaScript-to-CoffeeScript compiler: http://ricostacruz.com/js2coffee/


CoffeeScript allows loops, no? Therefore it is Turing complete (practically).


I was giving a simple metric by which to check Turing completeness. If the language allows you to write unbounded loops, e.g. while loops, then it is Turing complete.


Practically, you need some decision-making control structure (cond, if) and an emulation of a read/write tape (arrays).


Well, actually, unbounded recursion and unbounded memory are what is required for Turing equivalence. Conditionals can be present in a non-Turing-complete language. I added the "practically" because all the computers that language implementations run on are finite. It is easier to get Turing completeness than to ensure you haven't accidentally allowed it to sneak in - as evinced by C++ templates.

http://en.wikipedia.org/wiki/Primitive_recursive_function#Co...


What happened to the idea of standardising Mono (the open source .NET) bytecode as a new browser language?

Then you can use whatever java-like/basic-like/functional-like language you want, and have it compile to bytecode. Also, with sophisticated JITs for that style of bytecode already lying around, you could probably instantly beat javascript's performance even under the best engines, because of static typing etc. rather than relying on ridiculously complicated static analysis.

It'd have to be supported side-by-side with javascript for the foreseeable future, but the earlier you get a change like this in, the better...


1. JavaScript's speed is actually not far from Mono's now. And constantly getting closer.

2. I don't think anyone but Microsoft would support .NET bytecode in the browser, simply because Microsoft controls .NET and has patents on it. It would take a lot more to reassure other browser vendors than Microsoft's existing CPs.

3. Running dynamic languages on Mono is slow. Look at the speed of all the dynamic languages on Mono (or the JVM for that matter), and compare them to native implementations of dynamic languages, in particular JavaScript and Lua. The native implementations beat dynamic languages on Mono by a large margin, simply because .NET is a bytecode made for static languages. Of course you can say that you prefer to have static languages in the browser over dynamic ones, that's a legitimate opinion of course, but not everyone would agree.

4. Standardizing on bytecode can inhibit innovation. JavaScript doesn't have a bytecode standard, which let JS engines implement very different ways of running it (V8 doesn't have a bytecode interpreter at all, for example). Of course standardizing on syntax also inhibits innovation, just in other ways, it isn't clear which is better.

5. Static languages compiled to JavaScript (that is, that use JavaScript as their 'assembly') are getting quite fast. Some benchmarks I ran found them to be around 5X slower (on the development versions of SpiderMonkey and V8) than gcc -O3, compared to Mono which is 2.5X slower, http://syntensity.blogspot.com/2011/06/emscripten-13.html

6. There is already Silverlight/Moonlight which does .NET in browsers, and it hasn't been very successful. (Of course it is a chicken and egg thing, if it were bundled in browsers it might be more popular. But the failure of Silverlight is a disincentive to add Mono to browsers nonetheless.)

For all these reasons, I don't think Mono has much of a chance to be included in browsers. Most of the same arguments apply to other static-language bytecodes like NaCl.


>> 1. JavaScript's speed is actually not far from Mono's now. And constantly getting closer.

http://shootout.alioth.debian.org/u32/benchmark.php?test=all... <- That still looks sort of far to me.

>> 3. Running dynamic languages on Mono is slow. Look at the speed of all the dynamic languages on Mono (or the JVM for that matter), and compare them to native implementations of dynamic languages, in particular JavaScript and Lua. The native implementations beat dynamic languages on Mono by a large margin, simply because .NET is a bytecode made for static languages.

It's not fair to compare JavaScript, which is already approaching a limit of how fast it can go, with the speed it gets running in Mono. Why? The two implementations have a vast difference of amount of energy and resources thrown at them; had Google, Mozilla, and Microsoft wanted JS to run fast on Mono, it would run fast on Mono.

Also saying that .NET is a bytecode made for static languages is kind of iffy now that the "Dynamic Language Runtime" is a part of .NET


1. The Alioth results are not necessarily final - they compare a single JS engine, and we have several fast ones now (SpiderMonkey with type inference can be significantly faster on some benchmarks, for example). Even so, the median speed there is 2X, which is fairly close. Admittedly there are some bad cases though, in particular pidigits (badly written benchmark code? bug in v8?).

2. It is true that JS on Mono has had far less work done, and that the DLR exists. However, the fact remains that dynamic languages are a late addition to the JVM/.NET model. For example, one very important thing for dynamic language performance is PICs (polymorphic inline caches), and to my knowledge there is no good example of fast PIC performance on the JVM or CLR. In fact, we don't even have a good example of a generic virtual machine that can run multiple dynamic languages fast (Parrot exists, but is not that fast) - all the fast dynamic language implementations are language-specific, so it shouldn't surprise us that VMs built for static languages don't do that well either.


In my opinion it makes no difference that we have several fast engines where some are faster at some things than others. When executing in the browser you don't get to pick and choose how and where your application will be executed. If you run into performance problems on one of the engines you can: A) dismiss a subset of your users and their performance problems, telling them to use a browser with a faster engine (they won't), B) only allow certain functionality based on a user agent string, or C) limit your application's scope to one that runs suitably in the slowest of the engines you're willing to support. In essence, if the application runs great in browser A but chokes in browser B, are you willing to say bye-bye to your B users to take advantage of performance gains on A? I've been in this situation, and in my experience I've always had to look away from the faster browser rather than the user.

Outside the browser you probably have a little more freedom, but it's not like you get to pick and choose in the style of "Oh, I'll execute this function in V8 since it does this faster, and that function in SpiderMonkey since it's faster there". For this reason, I don't think the fact that Alioth only has measurements for one engine makes a significant difference in the overall comparison. You'd be, for the most part, gaining performance in one place by sacrificing it in another.

Anyway, in my personal experience, I've run into performance problems in JS a lot more often than with C#. I also have to go through a lot more tedious practices to ensure my JS code runs as fast as it can, whereas in C# Some.lookup.with.lots.of.dots.does.not.scare.me(). That's why your claim sort of surprised me. Then again, the last serious JS performance problem I had was 6 months ago (before FF4), so maybe a lot has happened in those 6 months.

By the way, I'm not too informed on how type inference is done in SpiderMonkey, so I may be completely wrong in mentioning this, but it sounds like they're trying to speed up a dynamic language by mimicking static typing. If that's how far they're going to improve performance, maybe soon enough JavaScript will in fact sit better in the Mono/.NET/JVM?


I agree with your point about multiple JS engines, indeed you can't pick and choose the best results. What I was trying to say is just that the best results we see are an indication of where things are going. But again, I agree, we are not there yet and right now, each user has just one JS engine, and problems on some benchmarks. Static languages have much more consistent performance.

About the last 6 months: Yes, a lot happened during that time, namely FF4's JaegerMonkey and Chrome's Crankshaft. Both are significant improvements.

About typing, yes, in a way that could let this code run faster inside the JVM or Mono. If you can figure out the types, you can generate fast statically typed code for those VMs. However, type analysis can be both static and dynamic, should integrate with the PICs and so forth. So even with that, I don't expect dynamic languages to be able to run very fast on static language VMs.


>>in particular pidigits<<

1) Always read the program source code!

Why the performance difference between the C# Mono #2 program

http://shootout.alioth.debian.org/u32/program.php?test=pidig...

and the C# Mono #3 program?

http://shootout.alioth.debian.org/u32/program.php?test=pidig...

2) Always read the program source code!

Why the fast fast fast V8 regex-dna performance?

http://shootout.alioth.debian.org/u32/program.php?test=regex...


What does "not fair" mean? (A fable)

They raced up, and down, and around and around and around, and forwards and backwards and sideways and upside-down.

Cheetah's friends said "it's not fair" - everyone knows Cheetah is the fastest creature but the races are too long and Cheetah gets tired!

Falcon's friends said "it's not fair" - everyone knows Falcon is the fastest creature but Falcon doesn't walk very well, he soars across the sky!

Horse's friends said "it's not fair" - everyone knows Horse is the fastest creature but this is only a yearling, you must stop the races until a stallion takes part!

Man's friends said "it's not fair" - everyone knows that in the "real world" Man would use a motorbike, you must wait until Man has fueled and warmed up the engine!

Snail's friends said "it's not fair" - everyone knows that a creature should leave a slime trail, all those other creatures are cheating!

Dalmatian's tail was banging on the ground. Dalmatian panted and between breaths said "Look at that beautiful mountain, let's race to the top!"


Ok, sorry for using the term "not fair"; I should probably have said that it's "unsound" to compare the speeds of current dynamic languages on Mono/.NET with the speeds of V8, SpiderMonkey, etc. The cause being that the speeds of the latter were fueled by very intense browser competition and a lot of resource investment. Dynamic languages on .NET got the benefit of neither of these, so it should not be surprising that they are slower than their native implementations (which also get better funding and bigger communities). The comparison would make more sense if Microsoft or other companies had thrown millions of dollars at the Iron* languages and still couldn't make them fast.


> which is already approaching a limit of how fast it can go

Do you have data to back this up? At least SpiderMonkey has projects in the works that give significant speedups on various workloads already, and lots of headroom left...

I would not be surprised to see another factor of 5 or so speedups on various code in the next few years in JS implementations.


"At Google I/O they mentioned V8 is about as fast as it will get." from: http://news.ycombinator.com/item?id=2669494

Yeah I know, it's not a proper citation, I tried to find where exactly that was said at Google IO but found nothing so far. Either way, I didn't question it at the time I read it because given how dynamic JavaScript is I'd imagine there's only so much you can do to speed it up. Then again, this was coming from Google, and for all anyone knows the cause might just be them focusing more on Native Client instead of V8 for apps that need performance.


Ah, interesting. Yeah, that sounds like they're just planning to stop optimizing V8 or something, since you can clearly do better than that. The type inference branch of Jaegermonkey is already faster than V8+Crankshaft on compute-heavy (as opposed to GC-heavy, where V8's better garbage collector gives it a big edge) workloads, and that's without LICM or smart register allocation or any of the other global optimizations that are still coming online.

It's unfortunate that Google is deciding to focus on Native Client, with its portability issues, if that's what's going on.


He's right: apart from pidigits, the V8 execution is within the same order of magnitude as a Mono execution.


"In 1997, I collected a list of languages with compilers to JavaScript. There were JavaScript extension languages: the now-defunct ECMAScript 4, Narrative JavaScript, and Objective-J. There were pre-existing languages: Scheme, Common Lisp, Smalltalk, Ruby, Python, Java, C#, Haskell, etc. There were even brand new languages HaXe, Milescript, Links, Flapjax that were designed to address web programming needs."

Am I reading this right? Objective-J (2008), C# (2001), HaXe (2005), etc having compilers to javascript in 1997?


You know how on January 10th, 2011 you were still writing 2010 by accident? I was off by a decade. Corrected.


The key here might not be implementation but adoption.


I may be wrong, but it seems that Caterwaul provides a macro system for JS (I've never tested it): http://caterwauljs.org


I like the prediction, but I think the biggest hurdle to compiled javascript will be mobile browser usage. The javascript interpreters in the mobile browsers right now are dog slow compared to their desktop cousins.


Android 2.2 and later uses Google's V8 JIT. iOS 4.3 and later uses Apple's Nitro JIT. Firefox 4 and Opera 11 also have JIT compilers enabled for ARM devices.

Mobile processors are still slower than desktop ones, of course, but JavaScript on mobile today is already faster than JavaScript on desktop just a few years ago. For example, the benchmarks on http://arewefastyet.com/ show V8 running only about 5x slower on an nVidia Tegra 250 ARM board than it does on an Intel Core 2 Duo Mac Mini.


I really hate when "X is dead; long live X!" is misused like this. The original phrase is "The king is dead; long live the king!", used when the OLD king died and the NEW king takes over.

Googling around, I see tons of misuse. Example of bad usage: "White Stripes are dead (long live White Stripes)".

Examples of proper usage: "Palm is dead; long live Palm!" (this would mean that Palm got acquired or reformed); "Paper is dead; long live paper!" (could be proper in an article about how paper for PRINTING is dead but paper lives on in other forms).

REF: http://en.wikipedia.org/wiki/The_King_is_dead._Long_live_the....


They didn't say "The King Henry VIII is dead. Long live the King Edward VI". In the JavaScript article, The "OLD king" is "JavaScript the source language". The "NEW king" is "JavaScript the compilation target". So the idea of old and new is in there. Sort of. :-)


Improvements to javascript would be great, but none are actually needed; the language is quite functional as is. Learn how to code, or use coffeescript, or develop your own new language that compiles to javascript to support whatever language crutches you need.



