But I agree, the hoops that had to be jumped through are a damned shame.
A big advantage of asm.js over PNaCl is that it's not LLVM.
On the other hand, PNaCl is very tightly coupled to LLVM. Any time I find myself checking whether some $feature is available in PNaCl, the discussion revolves around what LLVM can or can't do and how LLVM does it; see the linked bug for the most recent example.
I just can't see how PNaCl could be meaningfully reimplemented. Reimplementing it from scratch would require building something bug-for-bug compatible with LLVM, which is a gargantuan task. At the other end of the spectrum, why not just use the original implementation? Then it would work very much like any other plugin, and I don't think that's a good solution. The only remaining option seems to be implementing it on top of LLVM, but then you still share all the potential limitations (e.g. in portability) and all the security issues with every other LLVM-based implementation. That could end up being not that bad -- it means more work on LLVM itself, making it better -- but I'm not sure that would offset the other problems.

How many features that appeared first in Chrome or WebKit were then reimplemented elsewhere in a different, better way? Do we want to end up with only one browser? No: there are lots of things we wouldn't have if Firefox hadn't happened, and lots of things we wouldn't have if Chrome hadn't happened. Do we want to end up with only one PNaCl runtime?
It is definitely possible to implement a non-LLVM back end for PNaCl.
The bug you quote says quite the opposite of what you state: Mark was explaining how complex, and how prone to change, LLVM's representation of atomics is. Atomics need to express the full breadth of capabilities C11/C++11 has, but this has to be solved in a way that stays close to the standard and no more, with none of LLVM's quirks and legacy cruft (LLVM predates these standards; PNaCl as launched does not). The primitives also have to be packaged portably so that they can be expressed correctly on any hardware target (this happens to be tricky to get right, especially since Clang usually makes assumptions about the target). I think PNaCl got this one exactly right, but then again I'm the one who implemented it. :-)
I don't think it's fair to call LLVM a frontend for asm.js just because LLVM can compile to asm.js. By that logic LLVM is also a frontend for x86, which doesn't mean anything. Asm.js is a platform and LLVM is a compiler; it's expected that LLVM can target asm.js, and so can other compilers. Hopefully both asm.js and PNaCl will be targeted by multiple compilers not based on LLVM.
> It is definitely possible to implement a non-LLVM back end for PNaCl.
Anything's possible. Reimplementing PNaCl from scratch (custom everything) will obviously take much more work than implementing it on top of LLVM. However, I honestly think there would be a large chasm between PNaCl on top of LLVM and PNaCl on top of any other virtual machine. And that is a problem.
I think I'm too focused on LLVM, sorry about that. What I should have written but didn't: PNaCl is hard to reimplement, and not only because it's a large project. For example, IIRC Chrome doesn't explicitly support the asm.js pragma and doesn't try to be compliant, but it already performs many of the optimizations from the asm.js spec, and the Chrome team appears receptive to changes that improve this sort of support. If things continue that way, soon it won't matter whether Chrome supports the pragma or not. That is, right now Chrome supports (say) 10% of asm.js, and eventually it could support up to 90% with no problems. On the other hand, can Firefox implement PNaCl piecemeal? What would a 10% reimplementation of PNaCl look like? I think even a 90% implementation, or anything else short of a full one, will not be interoperable.
I don't agree. I think that LLVM bitcode was not really designed for this either. Google has not succeeded in eliminating all of the undefined behavior from LLVM. At least asm.js has fully defined semantics, as specified by ECMA-262. For any hope of interoperability between browsers, that's critical. Things like property iteration order were once left undefined by ECMA-262, but pages began to rely on Netscape's behavior and other browsers had to reverse engineer what Netscape did; eventually it was added to the standard. Likewise, without all the undefined behavior removed from PNaCl, anyone who wanted a separate implementation of PNaCl would have to reverse engineer what Chrome does.
There is also the issue of compilation speed: asm.js was designed to compile quickly by a backend that doesn't do much optimization. LLVM, on the other hand, was optimized for -O2, which compiles much less quickly. PNaCl at -O0 is far worse than either V8 or OdinMonkey. On the Web, compilation speed matters a lot.
I think an actually technically superior approach, if backwards compatibility were not a concern, would involve:
2. A mechanism for providing a C binding to WebIDL-based Web APIs. This would eliminate the necessity of Pepper and would mean that any new Web APIs would instantly be available to native code.
PNaCl, unfortunately, is neither of these. asm.js isn't either, but I think it's closer to this ideal than PNaCl is.
Undefined behavior: it's mostly gone by the time the pexe is created, and our intent is to remove all that's left (e.g. shift by larger than bit width is trivial to fix). I think what's left is mostly non-issues that can't be easily fixed without breakage, so honestly I don't think undefined behavior is any kind of a deal breaker at this point. asm.js still doesn't have canonical NaNs so it's not fully defined ;-)
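The shift fix mentioned there is simple to state: mask the shift count to the bit width. ECMA-262 already defines 32-bit shifts this way (the count is taken mod 32), which is one reason asm.js gets this behavior for free. A sketch (the helper name is made up, just to make the masking explicit):

```javascript
// ECMA-262 takes shift counts mod 32, so an "oversized" shift that
// would be undefined behavior in C is fully defined in JS (and asm.js).
// shl32 spells the masking out, the way a PNaCl-style fix would.
function shl32(x, n) {
  return (x << (n & 31)) | 0; // mask the count: a shift by 33 behaves like 1
}

console.log(shl32(1, 33)); // 2, same as 1 << 1
console.log(1 << 33);      // also 2: plain JS masks the count implicitly
```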
Note for other readers here: yes C/C++ have undefined behavior, but both PNaCl and asm.js settle on actual behavior before something gets shipped to the browser.
On compile speed: agreed, but I'll take the "it's getting better" approach here (hey, asm.js can do it for runtime!).
I'm not quite sure why having a non-SSA representation would be a requirement. I agree that there are advantages, but I think that's true of both approaches.
Extra bindings and non-Pepper: agreed.
My point wasn't to imply that PNaCl has done a bad job moving mountains to make LLVM into something stable and portable—in fact, the project has done an incredible job with it. I'm just saying that LLVM is not really any more of a natural fit than JS is. JS starts with compatibility and defined semantics and needs to add a low-level type system to become a portable native runtime. LLVM starts with a low-level type system and needs to add compatibility and a defined semantics to become a portable native runtime. In neither case was the system designed for it, as Dan's email shows.
> Undefined behavior: it's mostly gone by the time the pexe is created, and our intent is to remove all that's left (e.g. shift by larger than bit width is trivial to fix). I think what's left is mostly non-issues that can't be easily fixed without breakage, so honestly I don't think undefined behavior is any kind of a deal breaker at this point.
I just wish PNaCl hadn't shipped with any known undefined behavior. (I don't believe NaN canonicalization problems were known at the time asm.js shipped.)
Does PNaCl still have undefined behavior if you access outside a valid allocated region, as that document states ("accessing undefined/freed memory")? It is defined in asm.js, as the application ships its own malloc and there is no type system to tell the runtime what kind of pointer something is. Supposedly Microsoft got into trouble with their libc's malloc because they couldn't change it without breaking apps that relied on accessing freed memory in certain ways. I can see that being true on the Web as well…
I also wish there weren't any UB, but we did try to compile a thorough list, and I think we deliberately chose which issues to fix and which to punt on.
I see memory allocation (both for code and data) as well-defined randomness: saying that the allocated base addresses are random allows many interesting types of sandboxing to be used which couldn't be used otherwise (more on that in the future). I think this will lead to interesting perf gain in some cases, and interesting security gains in others. More pragmatically it currently allows ASLR while still providing memory access without indirection (i.e. reg-reg addressing) on x86-32 and ARM.
Yes, I'm associated with that group.
AFAIK V8 does not use a dominance-frontier-based algorithm for SSA construction. It just inserts phis eagerly at loop headers, and on forward edges it inserts phis when a merge is needed.
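To illustrate that eager strategy on a small loop (a sketch of the general idea, not V8's actual IR):

```
// Source:
//   var x = 0;
//   while (cond()) { x = x + 1; }
//   use(x);

x0 = 0
loop_header:
  x1 = phi(x0, x2)      // inserted eagerly: the back edge may redefine x
  if (!cond()) goto exit
  x2 = x1 + 1
  goto loop_header
exit:
  use(x1)               // straight-line forward flow: no extra merge needed
```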
You can do (and people do) similar things to sandbox JS, by running it in an iframe or a web worker, for example, plus some static analysis.
Also, the structure of asm.js ensures that the code inside it cannot reach outside except through a small number of statically analyzable entry points. You can likewise verify that no one modifies the asm.js data array by putting the entire codebase in a closure and doing a trivial static analysis to see that the array doesn't escape.
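A minimal sketch of that closure pattern (hand-written illustration; the module and its function names are invented):

```javascript
// The heap and the module live inside a closure; only the chosen entry
// points escape, so outside code cannot reach the backing ArrayBuffer.
var sandbox = (function () {
  var heap = new ArrayBuffer(0x10000); // never leaks out of the closure

  function Module(stdlib, foreign, buffer) {
    "use asm";
    var H32 = new stdlib.Int32Array(buffer);
    function store(i, v) { i = i | 0; v = v | 0; H32[(i << 2) >> 2] = v; }
    function load(i) { i = i | 0; return H32[(i << 2) >> 2] | 0; }
    return { store: store, load: load };
  }

  var m = Module(globalThis, {}, heap);
  return { store: m.store, load: m.load }; // the only entry points
})();

sandbox.store(3, 42);
console.log(sandbox.load(3)); // 42
```

A trivial static analysis can confirm that `heap` is referenced nowhere outside the closure.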
There are ways to do integrity checks manually, of course, but as far as I know the browser does not perform these checks automatically (and it would be difficult to do so: you'd need to implement JS signing, and you'd need PKI to get the right public keys to verify the JS signatures).
That's a pretty odd way to end up lumped with a technology.
I need to perform an integrity check on the JS to satisfy the second requirement, and the integrity check should not depend on the server that served the JS (since a compromised server could lie about its hash, for example). Moreover, the check needs to be automatic, and trustworthy. One solution is to check the hash of the JS against known-good hashes from the developers (i.e. get the JS from the website, and get the hash from a CA), and then cache the JS locally until I determine that a new version of the JS exists (I want to avoid re-downloading it over and over--that only gives MITM and MITS attackers more chances to serve me bad JS, and it's slow). Not an easy problem; otherwise we'd be doing it already :)
PNaCl offers the infrastructure to do this. I would use asm.js if it did so as well.
This is the key. With asm.js the worst case is slow, not totally broken.
Old problem: Downloading
New problem: Downloading. Large JS apps have to asynchronously load potentially very large codebases, every time the app is loaded
Old problem: Installation
New problem: Installation. What if the user has NoScript? AdBlock? Mobile Opera vs. Mobile Safari vs. Chromium vs. IE.
Old problem: Special User Permissions; gate keepers
New problem: Proxies. Websockets. Anti-virus software. On-machine firewalls. Corporate firewalls. "Special" toolbars and plugins that redirect the browser.
The web is not a "software distribution platform". It is not a "distribution platform", because code is retrieved on-demand and run, and there is no simple way to "cache" that locally and re-run it. (Chrome App Store does not count, because it is antithetical to the model you've described). It is not even software in the traditional sense that you mean, because everything is inherently client-server, so most of the app lives on the server or in the cloud.
I don't understand your "installation" issue. A user with NoScript knows how to turn it off for a web application they want to use, and generally does so; they just want to retain control over JS execution. AdBlock is not related to application delivery but to revenue generation, which is somewhat orthogonal.
Having to support cross-platform quirks is also something that isn't unique to the web. If you support multiple versions of iOS for your app, you will hit similar issues. Not to mention if you wanted to actually be cross-platform and work anywhere other than iOS with the same app.
Downloading: easily solved with browser caching. This is still faster than trying to, say, walk a very non-technical user through downloading and opening an installer.
Installation: not an issue for most users. NoScript/AdBlock are a small percentage of users for most sites and since most of the web breaks that way, many of them have learned to recognize when their local system broke something.
Permissions: the problem in both cases is the gatekeepers, and every single “new problem” you listed was also an old problem of equal or greater impact. If the gatekeepers are determined and unaccountable, they'll block everything either way. The difference is that the web side has a better security model and is more likely to be open because you're following a known precedent: Google, Amazon, Facebook, etc. have created enough demand that even the most fascist IT departments allow HTTPS. That is not the case for the traditional desktop model, where every local software install is considered independently.
Unless you use an old browser, this is not a problem anymore.
The browser, html/css/js, is the most ubiquitous common runtime that has ever existed.
A lot of services have an iPhone app, an Android app, and sometimes even a desktop program. But if you have a website, you reach all of those platforms at once.
Sure you have to do things differently, usually more slowly, but you can do whatever you do on any device with a browser.
Browsers are themselves native apps, and therefore web apps will always be less capable than native apps, because web apps are subject to the one-two punch of the limitations imposed by native apps and the browser.
To the point of paying the fee of app in app, you're right that there will always be a penalty. But "will browser apps ever run as fast as native" is not really the right question. The right question is "will browser apps ever be able to run fast enough."
Finally, a clarifying point. When i said "do whatever you do" I meant that a website does the same thing in every browser, not that websites currently can do the same thing as native apps.
Exactly. The humble hyperlink is the most amazing thing about the web. It's simple yet devastatingly powerful.
I'm not a snob about the programming languages of the web. I don't care what language I need to learn to create content for the web, or what tools I need to use. The language and the tools will always be a means to an end — that is, the radical ability to provide free and instant access to any content for everyone on the Internet.
(from a blog post I wrote "why I create for the web": http://blog.neave.com/post/64669185529/why-i-create-for-the-... )
As a content delivery platform, the web is unparalleled, ridiculously good. I have paid for content (for example, NSFWCorp), and will do so again in the future.
That may change in the future, but I doubt it: any program that's a sufficiently good web app can be rewritten as a desktop app with more capabilities. Ultimately the web may be the go-to place for trivial or gimmicky software, but the most powerful apps will be peers to the browser.
Everything has to go through the centralised, closely guarded app shops. You'll have to go through silly certification processes to get your app into the app shop and if the platform owners (or some minion working in certification) feels like it, they can just remove your app without warning.
Compare this to a web app. You deploy the stuff on a web server of your choice, the user clicks on a link. Done.
Java has gotten a bad rep lately due to some high-profile drive-by-malware bugs. But if the Java codebase had gotten the same intensive care that the WebKit codebase got, this would no longer be an issue.
Many people remember Java being sloooow. When I first came into contact with it in school, that was certainly the case, but for years now it has had a modern JIT that can easily rival native code.
Java applets are ugly, sure, but that is largely due to the decades old AWT, and the poor font support it used to have. With SWT, you can have native widgets (dunno if they work in Applets, but they are nice on the Desktop), and with antialiased drawing you can get the same results as with HTML5 canvas.
Java applets (and Flash, and Silverlight) died for marketing and political reasons; there were no insurmountable technical issues. The outcome is that we are stuck with "worse is better" for the foreseeable future: at best 50%, sometimes as little as 1%, of possible native performance, plus a bunch of restrictions whose implications we are only slowly realizing. No sockets, no signed applications, no anonymous/serverless mashups, less hardware access than we used to have, suboptimal caching, suboptimal tooling (languages, debuggers, content creation tools; I haven't seen anything that can replace Flash for simple vector animations yet), and so on.
The insurmountable issue is that it requires users to do installation work.
But! In the brave new world of HTML5 and so on, you still can't assume that everybody has all these features. Either they are stuck on older browsers (at work, or on my old laptop that I rarely use), or there are subtle implementation differences between browsers (although I have cutting-edge Android devices, the cool demonstrations often don't work nicely on them), or the browser is fine but the computer is too slow.
I only have one computer that can run all this newfangled WebGL stuff at decent speed, and it's my gaming PC at home.
On the point of older browsers:
1) The percentage of people using older browsers is trending down, whereas the number of people using modern browsers without Flash and/or Java is going up (i.e. mobile iOS).
2) Someone with an older browser expects certain parts of the web to be broken. Being broken in IE6 somewhat says "we're more modern than you; try again after you upgrade." I think people who see this are likely to come back at a future date, whereas someone who can't use a site because of Flash is unlikely to think their problem will go away in the future.
It is like saying, if I build a web-app, the user needs to first install the browser. True, but once installed, other web-apps have a zero cost of installation.
In other words, the Java installation step sits inside my conversion funnel for some x > 0% of the people I am targeting.
Additionally, the JVM security model has a much larger surface area than JS's, allowing all kinds of things JS doesn't, which, I'm going to guess, is why its security record has been worse.
However, Snap.svg (snapsvg.io), made by the Raphael creator and backed by Adobe, seems to be the best vector-animation replacement yet. It is really a mashup of Flash's vector style and Silverlight's declarative style, almost an iteration (though you still lose the compiled nature and the very good default compression, since it isn't SWF; the same goes for Silverlight). Still, nothing has all the features of Flash except maybe Unity, minus the vector part, and plugins are looked at in a worse light now.
So we are in a transitional stage where the new stuff is better but it takes much more work to make it behave the same across all platforms. WebGL, hardware-accelerated <canvas> and <svg>, and libs that glue those together nicely, like Three.js, the 2D EaselJS, and vector libs like Snap.svg, will see it through, or further iterations of them will. Once WebGL takes hold across all browsers, in a couple of years we will be in new, more capable hardware-accelerated lands. Flash, at the end, was a big software-rendering, CPU-hogging bummer.
Java in the browser is an abomination though. As a client side language...forget it. I love it as a server runtime though.
The first difference between asm.js and other assembly languages is that the only existing asm.js assemblers happen to assemble executables that are sandboxed by your browser. You can't do syscalls to use sockets or files or the Windows registry. You can't fork() or run non-asm.js executables. Still, if you can live with the "syscalls" that the browser gives you, you can run native code in any browser. The browser is your operating system.
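To make that concrete: the only "syscalls" an asm.js module has are whatever functions the embedding page passes in through the foreign-imports object. A hand-written sketch (module and function names are invented):

```javascript
function SandboxedModule(stdlib, foreign, heap) {
  "use asm";
  var print = foreign.print; // the module's entire "syscall table"
  function run(n) {
    n = n | 0;
    print(n | 0);            // the only way to reach outside the sandbox
    return 0;
  }
  return { run: run };
}

// The embedder decides what "print" actually does: console, DOM, nothing.
var output = [];
var m = SandboxedModule(globalThis,
                        { print: function (n) { output.push(n); } },
                        new ArrayBuffer(0x10000));
m.run(42);
console.log(output); // [ 42 ]
```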
It's like being able to write a novel where the text is both valid English and valid Spanish at the same time, and the plot is the same no matter which language you speak.
The browser as an operating system is currently worse than a Java runtime plugin in that the browser has less functionality. However, browsers are currently better in that they have more penetration and a higher velocity of improvement. Long term, the browser as a full-featured operating system makes more sense than as a mere scriptable document viewer slash plugin container.
Another problem with asm.js in practice is that it's used with Emscripten, which doesn't define the syscalls you mention. All the DOM/WebAudio/WebRTC/etc. APIs have to be redone in Emscripten's headers. GL was easy, because WebGL is so close to EGL and friends. But what about DOM manipulation? Emscripten can't really do that yet.
So maybe it's not an assembly language or a bytecode, but that's a pretty unimportant distinction. Those are all intermediate representations (IR). A good browser will JIT your asm.js IR to machine code in much the same way that a good JVM will JIT your bytecode. An old browser will interpret it more slowly, but it will still work.
A good browser will AOT asm.js, not JIT it.
(In other words: pre-compiled, not compiled on the fly. Consequence: immediately as fast as a JIT would have… eventually… made it, but initial pause while compiling.)
Sure, asm.js is encoded as text, but it's not encoding anything close to what assembly languages encode.
Modern assembly languages often sit at a slightly higher level than 1-to-1 with the instruction set, so it's tough to draw a hard line, but yes, generally speaking, you should be able to translate the majority of the language to machine code with opcode tables. It's also interesting that machine code isn't even the lowest-level representation of machine instructions on many architectures, which internally use microcode to implement architectural instructions.
My point was that LLVM bitcode and asm.js are both intermediate targets, but not exactly the same.
Not sure what your point is about microcode... That is an implementation detail which, even for, e.g., x86 LEA, where you think you are leveraging it, is not important or useful beyond trying to understand performance characteristics. What you get to work with is whole instructions.
According to the Emscripten FAQ, gzipped Emscripten output (which uses asm.js) is about the same size as gzipped native code.
Asm.js is just a subset of JS that, when used in blocks, is precompiled. In that sense, it's much more like writing C than assembly (or at least C-like subroutines to be embedded in larger scripts).
If asm.js is "native code," then what is an example of VM bytecode that is not "native"?
Native means the 'bytecode' of your hardware... not some VM?
So if that's what we're going for, we should forget asm.js and PNaCl and just use BF and expect very close to native speed.
But this won't in fact give native code speed, despite how easy BF is to JIT. Why do you suppose that is?
And you are seriously wrong about BF JITs: at the very least you have to merge runs of the same instruction. (Not to mention other common optimizations; c2bf has only some of them.) Of course, that only matters if you want BF to be fast...
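The run-merging pass mentioned here is straightforward to sketch (a hypothetical helper, not taken from any particular BF JIT):

```javascript
// Collapse consecutive identical BF instructions ('+', '-', '<', '>')
// into (op, count) pairs, so "+++++" compiles to one add of 5 instead
// of five separate increments. Loop and I/O ops are never merged.
function mergeRuns(src) {
  var out = [];
  for (var i = 0; i < src.length; i++) {
    var c = src[i];
    var last = out[out.length - 1];
    if ("+-<>".indexOf(c) !== -1 && last && last[0] === c) {
      last[1] += 1;     // extend the current run
    } else {
      out.push([c, 1]); // start a new run (or a non-mergeable op)
    }
  }
  return out;
}

console.log(mergeRuns("+++>>-")); // [ [ '+', 3 ], [ '>', 2 ], [ '-', 1 ] ]
```

A code generator then emits one `cell += count` or `ptr += count` per pair.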
The original claim in this thread was that asm.js "is native code." This annoyed me in the same way it would annoy an astronomer if you called an asteroid a planet through some chain of tortured logic.
Then OP and others started saying it's "basically the same as native code" because it's "very easy to JIT," but that's clearly not true either thanks to the BF comparison.
So now you're further qualifying by saying asm.js is close to native code because it's easy to JIT "with similar performance." But if I point out corner cases where native code is much faster because of vectorization or specialized instructions, you will have to qualify further.
All I am pointing out is that calling asm.js "native code" is inaccurate and misleading. Especially when this is used to imply that asm.js (or PNaCl for that matter) is the be-all end-all answer to giving native code performance. Sure it's a lot closer to native than JS is, but it's still pretty far from actually being native.
I think many people just say "stars" when they should say "planets", "asteroids", "stars", "galaxies", etc., even when they are well aware that stars are not the same as other celestial objects. Well, I think I made the required qualifications in my second comment as your point became clearer. (And I think many others make similar assumptions, but that's another story.)
> All I am pointing out is that calling asm.js "native code" is inaccurate and misleading.
The point of these questions is that the GGP's statements are absurd. There is no universe in which asm.js is "native code".
It's interesting to note that x86 and amd64 code isn't even truly "native". They're just bytecode IRs that are interpreted by a CISC virtual machine emulated in microcode running on a RISC cpu that you can't program directly. Everything is an IR. Python is an IR for the thoughts in my head. It's turtles all the way down.
So BF is "basically" native code too, since it can be JITted very easily? In fact, it's much easier to JIT BF than asm.js, so according to your definition it is even more "native" than asm.js.
The way you are using "native" takes away all of its meaning.
> Everything is an IR. Python is an IR for the thoughts in my head. It's turtles all the way down.
It's really not.
Yes, every executable representation is a representation (though we don't usually call representations "intermediate" if they are designed to be executed directly).
But the hand-wavy idea that because two things are both representations they are "basically the same thing" is so far from true that it is the opposite of insight. The truth is that the differences between representations are among the deepest concepts in compilers and VMs.
It's certainly not native code since no CPU exists that can execute asm.js code directly.
Interactive sites suck; if you want that, use a sandbox so I can easily discard it.
In many cases it just reveals a lack of understanding of why C is powerful. It lets you choose your implementation details very precisely (though not precisely enough, IMO [!]), to the point where few languages can do things it can't. That means comparing like-for-like code is naive: you should compare a C version of the 'faster' implementation against the implementation in the language being benchmarked. Due to design limitations it's basically impossible for Java to seriously compete with C...
I do generally agree that Java gets a serious bad rap for nearly nothing though...
AFAIK (I'm not a java developer), any language that can target the JVM can target Dalvik.
You can think of asm.js as a bytecode (like JVM bytecode) that all browsers can interpret. Some browsers are now getting JITs for this "bytecode".
You don't write asm.js, in the same way that you don't write assembly or JVM bytecode by hand. You write Java or Clojure or Scala or C or Haskell, and your compiler turns that into either JVM bytecode or asm.js. Your JVM or browser will then JIT that "bytecode" into native code.
The real issue now is that the JVM is currently a more capable platform, but the browser has broader penetration and arguably better security. Raw performance is becoming a non-issue with asm.js.
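That pipeline can be made concrete. For the C function `int square(int x) { return x * x; }`, a compiler like Emscripten emits asm.js along these lines (a hand-written approximation, not actual Emscripten output):

```javascript
function asmModule(stdlib) {
  "use asm";
  var imul = stdlib.Math.imul; // 32-bit integer multiply
  function square(x) {
    x = x | 0;                 // the int parameter, coerced to int32
    return imul(x, x) | 0;     // result annotated as int32
  }
  return { square: square };
}

var square = asmModule(globalThis).square;
console.log(square(7));  // 49
console.log(square(-3)); // 9
```

The `| 0` annotations are what let an asm.js-aware browser compile this to integer machine code ahead of time; a browser without that support just runs it as ordinary JS.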
C is massively more cross-platform than Java; a JVM implementation often relies on C, and C compilers are often among the first things implemented for new platforms. C is very close to pure native code, to the point where many equate C/C++ with native. All of Java can be beaten or matched performance-wise by C, because C can leverage the hardware in precisely the same way the JVM can, but without (so many of) the overheads that are designed in.
The web really isn't suited for app development at all, as the native mobile markets have demonstrated, while the viability of it as a document delivery platform diminishes every time the content gets hidden behind a massive layer of scripts.
2. It's nearly all open standards and open source, not proprietary closed source controlled by one company. OpenGL, EcmaScript, W3, Mozilla, Chromium, blink, webkit.
3. It works on mobile devices, flash doesn't.
4. The tools are out there. Check out appcelerator or unity's tools.
7. Flash avoids all kinds of privacy settings and plugins in your browser. Flash gives you less control.
8. Even Adobe has moved past Flash, these arguments are all done, why haven't you moved on?
> 3. It works on mobile devices, flash doesn't.
False. Many (120,000+) Flash apps are actively running via Adobe AIR on the App Store and Google Play. AIR is a technology for packaging SWF as a native app.
> 4. The tools are out there. Check out appcelerator or unity's tools.
Unity cannot export to HTML5 yet. Apparently they are working on it, but I don't think it's that easy. (As for the original article: I DO mind the initial load time. That will be a big problem for Unity as well.)
> Compiled other languages to ActionScript?
Please don't denounce Flash without knowledge of Flash. I'm no longer a Flash user myself (I currently develop with Unity and Cocos2d-x instead), but I feel I need to defend the Adobe folks from unfair bashing.
Flash was sandboxed (just as Java). You were strongly restricted in what you could access with ActionScript. Of course, the security was poorly executed. I blame it on the fact that it was developed in a different time, without the experience and tools we have today, and that it was closed-source. Still, native plugins don't have to be inherently insecure.
> 2. It's nearly all open standards and open source, not proprietary closed source controlled by one company. OpenGL, EcmaScript, W3, Mozilla, Chromium, blink, webkit.
True, but I'm starting to think "Open Standards" was a huge Trojan horse.
- It led to, or continued, a huge monopolization of platforms. Only a few large companies are able to maintain modern browsers (see the demise of Opera). This is bad, because they can push politically motivated restrictions on their users. (Firefox has a whitelist that only allows certain media codecs to play, although the system would support more. I could modify the source for myself, and I did, but it's of no use, because the people visiting my sites won't be able to. I don't have the marketing power of Mozilla or Google.)
- The "open" "web" platform limits what kinds of apps you can write. You can't really write apps without an (accountable!) central server. Without sockets, you have no Bittorrent, no P2P, no Tor, no Instant Messaging...
- I think it also pacifies people who would otherwise be worried about today's locked up platforms. "You can always write a web app if you don't get in the app store."
> 3. It works on mobile devices, flash doesn't.
Which was a political decision, not a technical one. There used to be thousands of free Flash games that would have run with minimal porting on mobile devices. But Apple couldn't control Flash, wouldn't get their 30% cut from Flash apps, and couldn't censor them. So they forbade Flash on iOS and crippled it on OS X, which was one of the main reasons for its demise.
Flash had support for native video playback before HTML. It had 2D acceleration for animations before there was canvas, and Shockwave Flash had 3D acceleration years before WebGL.
Free-slash-open-source programs are like banks in this way. In principle, a bank that fails can always be shut down rather than bailed out, and this is what justifies the existence of private-sector banks. In principle, an open-source program can always be forked if you can't persuade the maintainers to make the changes you want, and it has always been agreed that this is a central, essential requirement for a program to be considered free-slash-open.

But some programs are, in practise, TBTF: Too Big To Fork. A program can be "big" not only by having a large codebase but also through network effects, such as having vast amounts of client software tied to one of its interfaces. The big-boy Web browsers are TBTF in both these ways. So if, for example, you're insulted by Google's decision to knife MathML (as everyone should be), it's relatively easy to roll a Chromium with MathML inside, but you'd still effectively be just maintaining a branch, because you'd have no hope of maintaining "your" browser independently if Google took the whole Chromium codebase in a direction you didn't like. And more importantly, good luck getting users to use your browser, or developers to create MathML webpages to support it.
A second example of the phenomenon is the Gnome/KDE mess - part of the reason that the Linux desktop sucks is that, even if you have a clear idea of how it could be better, it's still a whole lot of man-hours to spin up an alternative implementation, get apps customised for it, and so on. In general, an area is the domain of TBTF to the extent that you have to win a political persuasion battle or spend a truckload of your own money before you can produce a viable implementation of your alternative idea.
The solution, to the extent that there is a solution, to these problems is a technical one: find a way to shrink large programs and/or break them up into small, reasonably independent ones. (Of course all social/political problems are technical problems in disguise just as all technical problems are social/political problems in disguise. ;) ) In the case of the web, this is why the vertically-integrated Web browser must go away https://news.ycombinator.com/item?id=6720793 .
Haxe compiles to the same thing ActionScript compiles to, which seems like the appropriate comparison.
http://en.wikipedia.org/wiki/Haxe (And looking it up, apparently it does compile to AS, though I don't think it typically goes via AS when generating a swf. I might be wrong though.)
In the interest of historical accuracy, I must point out that Adobe Alchemy predates Emscripten by years.
Right (and I'm also in part replying to my sibling replies here), can we stop talking about "sandboxing" as though it's something concrete and real?
Something being "sandboxed" doesn't really mean anything - or rather it does, but only in an abstract way. Everyone who uses the word "sandboxed" means something slightly different, and every time in history someone has implemented a "sandboxing" system, their idea of "sandboxing" is slightly different.
One person's "sandboxed" means "disables language access to the functions that could affect the system" where another person's "sandboxed" means "uses clever os features to isolate all execution into a separate container".
This is why you can endlessly debate whether x or y is or isn't "sandboxed".
Flash actually was sandboxed. Poorly, yes -- but so were JS VMs until very recently. It was only a matter of time.
While the Flash IDE itself was closed-source, the format itself was almost entirely open -- and third-party tools have been available for a long time to compile SWFs: http://en.wikipedia.org/wiki/Adobe_Flash#Open_Screen_Project
Entirely a political argument.
Mobile markets have also demonstrated tightly controlled walled gardens, where fast iteration, platform independence and openness have been replaced with strict rules and controls by the market controlling entities. Is that really a direction you want to move in?
The web has had quite a long time to get there, yet it is still easier to write a decent application on NEXTSTEP, the platform that spawned the web, than on the web itself. You would think we could at least make it as easy as HyperCard or Visual Basic 1.0, but we are still trying to push UI into a format that wasn't designed for it. I'm not sure what the ultimate solution is, but I get the feeling it will be whatever comes next and not more iterations of the same.
It may not be suited for app development, but people want to run applications in their browser. I'd rather use gmail than outlook, google docs than microsoft office. If I can have photoshop in the browser, I'd rather use it there than install it separately.
This is the new reality. Deal with it. HTML is no longer just a document markup language.
I see asm.js as the Revenge of Compiled Languages. Coupled with generic interfaces for accessing underlying graphics and audio hardware, we're just right back where we started with Java applets. Write your apps in whatever language; run in the browser.
One cool thing about JS is that you have runtimes for it in computers, tablets, phones, TVs and most current videogames so you can experiment and build stuff for a variety of hardware that no other language can reach as easily (of course you can reach anything with C but it is not easier).
Remember what could be done in JS two years ago versus all we can do now. Imagine what we'll be able to do by the end of next year.
I don't care if people are using LLVM to cross-compile other languages to the JS runtime, this kind of approach and research makes better runtimes and both camps benefit, the people who hate JS can use their fav language and those that enjoy JS end up with a better runtime.
As you say getting other languages to run is a good thing because those who don't like JS don't have to use it (or indirectly use it with stuff like Coffeescript).
IMHO, most people who do not enjoy JS approach it from the mindset of an OOP programmer. People try to treat it like Java, or approach it like a toy language for cooking up a quick jQuery script, and then they feel frustrated. I am not saying that JS is everyone's cup of tea, but some people like me enjoy it.
Each of those is good for some use cases, but none is good enough for everything. Just like we have many languages for native development and web servers and so forth.
Given your experience that "99% of the bugs are not emscripten specific", I'm very impressed with how emscripten can retain the semantics of the native code across the conversion and optimisation process. Obviously that's what any compiler does, but in this case the target environment seems to me to be much more complex than a physical CPU: threads, memory management, etc.
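To make the semantics-preservation concrete, here is a hand-written sketch of what asm.js-style output looks like for a trivial C function `int add(int a, int b)`. Real emscripten output differs in many details; the point is that the `|0` coercions pin every value to 32-bit integer arithmetic, reproducing C's wrapping behavior on top of JS doubles:

```javascript
// Hand-written illustration, not actual emscripten output.
function AddModule(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;           // parameter type annotation: int
    b = b | 0;
    return (a + b) | 0;  // 32-bit wrapping addition, like C int arithmetic
  }
  return { add: add };
}
```

Because it is plain JS, the same code runs (more slowly) in any engine: `AddModule(this).add(2147483647, 1)` wraps to -2147483648, just as C `int` overflow does on common hardware.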
And beyond that, we'd all like something that gives us safe native-speed rendering control in the "sandbox". Silverlight was meant to, as was Flash/Flex. But those were proprietary, and we didn't want one company to have "control of the web." HTML 5 hasn't been what we hoped for.
So basically, we're one big, divided bureaucracy that is not making rational decisions (what big divided bureaucracy does?).
I guess what might happen is something new will eventually come along (who knows how long it will be) that actually displaces the web as we know it. It will be an adoption-tsunami, similar to what the web itself was, and therefore it will be able to ignore this series of historical accidents that we're chained to today.
Bytecode doesn't mean smaller download or faster startup, or necessarily better performance. It might, but you need numbers to show that.
In practice, right now startup is faster on asm.js than PNaCl, and vice versa for execution speed, but in both cases the differences are not large. Compare for yourself:
Do not be fooled by the surface appearance and the name. Look at what it does. It is a bytecode, just one with a bizarre serialization. (And as a result of that serialization, the most literal interpretation of "bytecode" admittedly doesn't quite work, but it's a bytecode in every way except for a literal fixed-length code of bytes.)
That demo runs at 60 FPS in my browser, allowing me to casually spend more money on more video games. Your objections seem ideological, as opposed to perf benchmarks, distribution improvements or other quantifiable terms.
Can you expand on why it doesn't make sense?
I agree that the binary is probably relatively efficient, but a) only if your code deals with 32-bit floating point numbers, b) the binary (erm, js file) is probably going to be bigger (hence increased download times), and c) the compilation does not come for free.
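On point a), asm.js expresses C `float` by rounding intermediate results with `Math.fround`, since plain JS numbers are always 64-bit doubles. A quick sketch of the difference:

```javascript
// Plain JS arithmetic is 64-bit double precision; wrapping each
// intermediate result in Math.fround gives 32-bit float semantics.
var f = Math.fround;

var asDouble = 0.1 + 0.2;           // 64-bit arithmetic
var asFloat  = f(f(0.1) + f(0.2));  // 32-bit arithmetic, different result
```

Engines that recognize the asm.js pattern can compile the `fround` form down to genuine single-precision hardware instructions instead of doing the rounding at runtime.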
The article seems pretty doomy regarding asm.js - is it really going to take off and become an unwieldy/frozen standard? Or is it of interest only to people writing game engines in pure JS / vanilla browser technologies?
Personally I'd love for a really low-level language (statically typed, little if any magic, no GC, multi-threaded) to work across all browsers, but I don't think browser vendors can just sit back and say: here is the perfect cross-browser language that will make all of the net easy.
Perhaps the best strategy is to have a low-level, strict language on top of which you build easier-to-use constructs - which IMO describes both asm.js and PNaCl.
This could be a chance to skip JS altogether.
It's kind of hard to redesign an airplane in flight, and because of the way the web works, that's a problem that applies to browsers a lot more than some other pieces of software.
It is true that IE6 was in flight for 5 years. However, that has changed. Google replaces the Chrome plane every 6 weeks.
I really would like to see Dart in a Chrome stable release. Some time ago I thought that a pure VM, like the JVM, would be better. Now I think it is safer to leave scripts loaded as plain text, just like HTML and CSS. It is hard to guess how the whole web ecosystem would work with scripts loaded as bytecode.
I'm convinced that fear and lack of understanding are the genuine problems here...
Browsers and the web stack are fantastically shoddy and poorly engineered, though, and that is a serious obstacle... but it doesn't make the problem intractable or hard. 'Just' effort.
Create a package containing the native code.
Serve ASM.js and the native web library.
This package is then installed in the browser and the browser switches to it when it detects code that is about to be used that matches the installed web library signature.
If the browser supports native web libraries, it uses that.
Otherwise it falls back to ASM.js.
Either way, we gain performance and we can compile code natively AND to ASM.js.
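The dispatch step of that scheme can be sketched in a few lines. The `nativeLibraries` registry below is purely hypothetical - no browser exposes such an API - but it shows the intended decision: use the installed native build when its signature matches, otherwise fall back to the ASM.js version:

```javascript
// Hypothetical loader sketch: prefer an installed native web library,
// fall back to the portable asm.js build otherwise.
// `env.nativeLibraries` is an invented API for illustration only.
function chooseBuild(env, signature) {
  if (env.nativeLibraries && env.nativeLibraries.includes(signature)) {
    return "native:" + signature;  // browser-installed native code path
  }
  return "asmjs";                  // portable fallback, runs everywhere
}
```

So `chooseBuild({ nativeLibraries: ["physics-v1"] }, "physics-v1")` takes the native path, while the same call against a browser with no installed libraries returns the asm.js fallback.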
Please don't get me wrong, I was just looking for confirmation of the same error. I love Firefox; I never even switch to Chrome for common office work.
I know Microsoft will stop supporting XP, but hopefully Mozilla won't.
Unfortunately, in some third world companies (like where I work), IT guys prefer not to change what works until it is absolutely necessary.
Sorry for my English.
PS: I'm not that IT guy.
You really, really need to upgrade somehow.
It really is necessary.
If you can't upgrade Windows, you need to consider moving to a different operating system.
There are some proof-of-concept projects out there that do runtime generation of JS on the fly for things like recompilation and such in the browser right now. I believe Shumway does it (recompiling ActionScript VM bytecode to JS on the fly) and PyPy.js has an experimental JIT too. The runtime library for my compiler JSIL also does a significant amount of runtime code generation to improve performance and reduce executable sizes.
 http://jsil.org/ ; https://github.com/sq/JSIL/blob/master/Libraries/JSIL.Unsafe... and others
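The core trick those projects elaborate on is simple: build source text for a specialized function at runtime, then compile it with the `Function` constructor. A toy sketch (the real systems generate far larger, far more optimized code, but the mechanism is the same):

```javascript
// Minimal runtime code generation: bake a constant into freshly
// compiled source so the engine can optimize the specialized function.
function makeAdder(n) {
  return new Function("x", "return x + " + n + ";");
}

var add5 = makeAdder(5); // compiled at runtime from generated source
```

Because the generated function goes through the engine's normal compilation pipeline, it gets JIT-compiled like any hand-written code, which is what makes JS-in-JS recompilers viable at all.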
It doesn't require a plugin install like NaCl does.
That's because you didn't read the article.
Which among other things, already points to that blog post.
On a serious note, you can't really blame people for not supporting a browser that needs hacks for every single version.
I am well aware of the history and I hate IE < 10 too. I use FF as my main development browser, and among others of course Chrome and IE 11. The trouble comes when people only test their website with Chrome. I am using IE 11 as I have to at my workplace, and it has by far the best UX (IMHO). FF still uses only one process and cannot handle my habit of using dozens or even hundreds of tabs in an optimal way.
asm.js is not a faster jQuery. It's not for doing yellow-flash alerts, or for more quickly selecting and manipulating DOM elements.
asm.js is a way to ship traditionally desktop-type software, including software which was originally written in a language like C or C++ and compiled into asm.js from that source, to a web browser, to be run in the web browser.
I don't see this ever happening. They would in effect be eliminating themselves. They would have to find new jobs or even careers.
Once the VM is standardized, what about HTML/JS/CSS? Well, who the hell wants to use those slow-moving legacy technologies?
So the standardization now becomes for Python, for C#, for Scala and Lisp, etc. (and their associated UI frameworks) - not controlled by the W3C at all, thus their extinction.
It's more than this, though. The W3C has an agenda, and it is not to advance technology; it is to slow it down. They want everything moving so slowly that standards can be followed across the board. They want JS/CSS/HTML to be the end-all, not just in the browser but everywhere. I think this should be pretty clear if you follow their trail going back 10-15 years.
It is like a socialist government in a way. The promise is to keep everything stable and let everyone be on equal footing (equal here because the technology moves so slowly that nobody can be left behind by it). They have to kill and silence many revolutionists who want freedom along the way, but consider themselves justified in doing so. Meanwhile, in a neighboring free country with limited government, people flourish. They have more ups and downs, true, and mistakes are made along the way, but after 10 years the free country is wealthy and flourishing, while the socialist one is stagnant and poor.
Think of the mere opportunity of innovation that would exist if a language creator could sit down and create a new language and UI framework universally for browsers in a well established and supported way. This lack of freedom is stagnating innovation.
Let the people decide. Make a standardized VM alongside your HTML/JS/CSS stack and let people vote with their choice among the options that appear.
For what it's worth, we're already very close to the point where "a language creator can sit down and create a new language and UI framework universally for browsers".
Mozilla and Google already want a VM as evidenced by their work. It would not take any convincing to get them there.
if VMs can make the W3C irrelevant, then they will do so regardless of W3C's approval of any particular standard.
Now you're underestimating the power of an established standard. For something to really flourish, enterprise has to adopt it. Without a standard, this will not happen. This is a bit of a chicken-and-egg scenario.