Why Asm.js Bothers Me (mrale.ph)
274 points by espadrine on March 28, 2013 | 163 comments



> OdinMonkey is a hammer that will kill incentive to optimize JavaScript that humans write.

Eh, I'm not worried about this. Many apps will have no reason to migrate to asm.js, and for as long as some big and important apps are written in JavaScript, there will be incentive to optimize plain JavaScript.

JavaScript and other dynamically-typed languages exist because in many cases they are the best and most convenient way to write apps. But suppose this weren't true; suppose that in the long term developers started favoring statically-typed languages for web apps, either because of performance or because of usability (Eclipse-like IDE convenience, which is hard to provide for a dynamic language). Is the author saying that we should artificially prop up usage of dynamic languages by taking away some of the inherent performance benefits of static languages? That doesn't sound like the best way to achieve technical excellence in the long term.

In the marketplace of ideas and technologies, let things succeed or fail based on their demonstrated merit, rather than trying to pick winners and losers based on our preconceptions.

> Somebody might say that [my proposed bytecode] does not run everywhere. Nope. It does run everywhere: just take JavaScript and write a simple one pass translator from this bytecode to JavaScript.

You can think of asm.js as just that; a one-pass translator of an implicitly-defined bytecode to JavaScript. If you want to view/edit it as a more traditional-looking byte-code, you can easily implement a YourBytecode<->asm.js compiler. It just so happens that asm.js is a backward-compatible representation, so it works more conveniently as the standardized wire encoding.

(I have not studied the asm.js spec in detail, but I've seen pcwalton describe it as an alternate encoding of LLVM bitcode, so I suspect that the idea of implementing a YourBytecode<->asm.js compiler is actually reasonable given the asm.js definition).


I should qualify my statement with some caveats:

* asm.js has a very primitive type system, unlike LLVM bitcode. I think this is better for a distribution format; the complex types make sense for compiler IR optimizations but not so much for distribution after the optimizations are already done.

* asm.js doesn't have goto, unlike LLVM bitcode. I'm told that this doesn't seem to matter much in practice, as the Relooper is quite refined now. However, I'm certainly willing to believe that there are some oddly-shaped CFGs that this will hurt. Perhaps JS will need goto.

* asm.js doesn't have 64-bit types. This is unquestionably unfortunate, as it will need 64-bit types to achieve native parity. (Of course, JS needs 64-bit types anyway—it's important on the server, for instance.)

* asm.js doesn't have SIMD, unlike LLVM, which supports SIMD with its vector types. This will need to be added to be competitive on some workloads. This is a good opportunity for collaboration between Dart and JS, as Brendan pointed out.

Regarding the original post (and speaking only for myself, not my employer), I actually agree with mraleph to some degree. From an aesthetic point of view I dislike the subset approach as much as anyone. But if V8 doesn't implement asm.js AOT compilation, ignores the "use asm" directive, and achieves the same performance as Odin, then that's actually a good outcome in my mind. At least asm.js will have fulfilled the role of an informal standard that engines and compiler writers alike can agree to target for maximum performance.


"asm.js is untyped, unlike LLVM bitcode"

Is that really true? The way I read the spec was that asm.js was all about adding type annotations to JavaScript in a backwards-compatible way.

And just thinking logically, it's hard to imagine asm.js optimizing anything beyond what JS is already doing without explicit type annotations – and we already know asm.js gives insane speed increases with a compiler that understands the type annotations.


There are degrees of typed-ness. :) What I mean to say is that asm.js has a much weaker type system—in particular, asm.js does not have aggregate or structural types. The only types are numerics. This is in contrast to LLVM's type system, which features structs, arrays, and so forth. This gives the LLVM compiler much more information for important optimizations such as scalar replacement of aggregates, but those optimizations are typically done before asm.js emission.
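To make that concrete, here is a hedged sketch (mine, not from the spec) of what happens to a C struct like struct Point { int x; double y; } by the time Emscripten-style code reaches asm.js: no aggregate type survives, just numeric loads and stores at fixed offsets into one big heap, through int- and double-typed views. The layout here (y at offset 8) is an assumption for illustration:

  var buffer = new ArrayBuffer(1 << 16);   // the whole "address space"
  var HEAP32 = new Int32Array(buffer);     // int view of the heap
  var HEAPF64 = new Float64Array(buffer);  // double view of the same heap

  function getY(ptr) {
    ptr = ptr | 0;                         // "ptr is an int"
    return +HEAPF64[(ptr + 8) >> 3];       // p->y is just "double at offset 8"
  }

The point is that the compiler's struct knowledge is gone by the time the code hits the wire; only offsets remain.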


> in particular, asm.js does not have aggregate or structural types.

It's like BCPL!


This is a feature, not a drawback. Aggregate types for C-like languages tend to bloat the code for no real optimization benefit. See the comparison between LLVM IR and Emscripten-generated JavaScript here: http://mozakai.blogspot.com/2011/11/code-size-when-compiling...

(Of course, for GC'd languages, we will need aggregate types.)


Actually, aggregate types help optimization a lot in C-like languages, because they make it easy to disambiguate between accesses based on offsets. I.e., two random int pointers are not helpful; two accesses to fields through a pointer to a structure are helpful.

In non-pointer C-like languages, yes, they are mostly a burden.

The reason the .bc in the example given is large is because the toolchain does not try to optimize at all for intermediate .bc size. Final .bc size should be relatively sane.

I don't see that I can download the .bc files from that blog post, but I'm mildly curious whether they are before or after running opt, because if I had to guess, based on the fact that it gzips well, I'd guess it's before running opt on it.


Sorry, I should have been clearer: the types are definitely helpful for optimization. For distribution I'm not so sure. The IR-level optimizations where types really help (scalar replacement of aggregates, etc) are already run before the code hits the wire.


It depends. For distribution where you know you will never optimize at runtime, yes, it's completely worthless.

But part of the reason you may find it worthless is that none of these VMs really performs the strong alias analysis and load/store redundancy removal techniques you could perform at runtime.


> (Of course, for GC'd languages, we will need aggregate types.)

Maybe, maybe not. I've had parts of a precise, copying, compacting (commercial grade) garbage collector running under emscripten.

It "crashes" still, but seems viable.


Oh man, I'm so tempted to write a BCPL compiler for asm.js now.


Martin Richards' compiler is pretty easy to add a backend to. I got the distribution he covers in his BCPL for young people (Raspberry Pi) guide. I targeted VideoCore for some tinkering I was doing; asm.js should be fairly easy - just the relooping stuff would be the main effort.

Would be interesting to see Martin Richards' classic benchmark compiled to asm.js from BCPL and compared with the port to JS.


+1 BCPL! Like, the first language after it was B. Then C. So shouldn't C++ be P? :)


Ah, CFG being "control flow graph"? I don't think I've seen that usage of "CFG" before.


What other sense of CFG is there...


Context Free Grammar


Yeah, that one. It just didn't work in the context.


To address a few points:

I'm not sure why you think oddly shaped CFGs that have gotos would hurt it. At worst, you can always turn oddly shaped CFGs into sanely shaped ones at the cost of code duplication.

Very early on in the days of the tree-ssa project, Sebastian actually implemented goto elimination for GCC. It made zero performance difference in any real code (even for people building goto-heavy interpreters). So it was removed.

In any case, real, sparse, SSA-based optimizations don't care even about fully connected CFGs (there are plenty in GCC's testsuite). Yes, dataflow optimizations care, but if you aren't doing something sparsely, you should fix that. :)

As a complete aside:

I agree with the original poster that asm.js is a leaky abstraction, but disagree with everything else. All sane optimizing compilers do lowering of some sort (I'm aware of V8's direct code generation, as well as Go's; at least Go's normal compiler doesn't claim that this generates amazing native code, only reasonable native code). Even GCC has what is known as "high gimple", which looks like C, but is normalized a bit, and then "gimple", which looks like C, but is even more normalized.

(LLVM, by comparison, has something a lot less like C, but lately lots of metadata has been added to recover some of the losses.)

All asm.js does is expose the "GIMPLE" level. If the argument is "it's not sane for folks to write asm.js instead of JavaScript" (the equivalent of "it's not sane for folks to write GIMPLE instead of C"), this is kind of a truism. To the degree asm.js folks think people should be writing asm.js-formed code directly, they rightly deserve to be mocked :P. (Generating it at runtime, of course, is a different thing altogether.)

To the degree the original poster's argument is "optimizing asm.js takes away the desire to optimize anything not asm.js", this is, well, wrong. Performance benchmarks and apps are still written in normal JS. Just because GCC/OdinMonkey has GIMPLE/asm.js doesn't mean they don't try to optimize regular C/JavaScript. The whole point of having GIMPLE/asm.js is to make it easier to optimize testcases by normalizing them, so you can guarantee that if you perform this optimization, it will work just as well no matter how crazy the actual original input code is.

Yes, you can almost always extend everything to handle the unbounded set of the real language. You can see ways to directly do better codegen from JS without an intermediate form. In fact, for a long time there were source-to-source optimizing C and C++ translators.

One of the reasons they basically all died is because they were 100x harder to maintain and improve than all of the standard "high IR/mid IR/low IR" arrangement of optimizing compilers, and over time, the optimizing compilers won. By a large margin. It wasn't even close.

Heck, GCC did very well for many years with no normalized version of the high level language: It used to go from AST to very low level IR and work on that. Right up until about 1994, this worked well. Then it started losing to compilers with a normalized high level IR (ICC, PGI, XLC, everyone). By factors of 2-10.

If the world were all still Fortran, and we were using Fortran in the browser, maybe "I don't see the need for asm.js" would be a reasonable discussion.

With a language like javascript, it's not.


> I'm not sure why you think oddly shaped CFGs that have gotos would hurt it. At worst, you can always turn oddly shaped CFGs into sanely shaped ones at the cost of code duplication.

Very true, I hadn't thought about that. That's a great point. I believe that the Relooper falls back to a large switch statement for irreducible control flow (which is very rare), but it could do code duplication instead.
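For the curious, the switch fallback amounts to a label variable dispatched in a loop. A rough sketch (illustrative only, not actual Relooper output):

  function dispatch(x) {
    var label = 1;
    while (true) {
      switch (label) {
        case 1:                    // block A: two successors
          x = x - 1;
          label = x > 0 ? 2 : 3;
          break;
        case 2:                    // block B: edge back into A
          x = x * 2;
          label = 1;
          break;
        case 3:                    // exit block
          return x;
      }
    }
  }

Any CFG, reducible or not, can be encoded this way; the cost is that the engine sees one opaque loop instead of structured control flow.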


If you search gcc-patches, we quantified the cost of doing this at one point, though I have no recollection of the results.

Back in the day, I talked with Ken Zadeck (who used to be my office mate at IBM) and some others at IBM Research, and it turns out there is a bunch of good unpublished algorithms and research on both eliminating and analyzing irreducible control flow.

Sadly, this seems to be a case where this knowledge won't make it out of IBM :(


This challenge covers much of compilers research. For many heuristics and techniques that are used in practice in compilers (certainly in ours, and in GHC, OCaml, and other research ones I've talked with), we've all verbally shared solutions with one another. But we're all convinced we couldn't get a paper out of it (novel enough, but too hard to meet the "evaluation bar"), so it remains stuff we talk about during breaks at conferences and in "Limited" circles on G+...


> Perhaps JS will need goto

I was under the impression that labeled break/continue were the equivalent.
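A quick plain-JS refresher, since this feature is easy to forget (the loop bounds here are made up):

  outer:
  for (var i = 0; i < 10; i++) {
    for (var j = 0; j < 10; j++) {
      if (i * j > 50) break outer;   // jumps past both loops
      if (j === i) continue outer;   // next iteration of the outer loop
    }
  }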


Yes, they are equivalent (AFAICT). You can generate irreducible control flow that will still require breaks to exit loops.

However, it is possible to eliminate (even in the irreducible case) all gotos, switches, breaks, and continues from a structured program.

Gotos can always be transformed into conditional statements or loops.

break/continue can be eliminated by an extension of the control flow elimination, and use of variables + conditional flow.
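A minimal sketch of that transformation (my example, not from the linked patch): the break is replaced by a flag variable plus conditional flow:

  // before: while (cond()) { if (done()) break; work(); }
  function loopWithoutBreak(cond, done, work) {
    var running = true;
    while (running && cond()) {
      if (done()) {
        running = false;   // stands in for the break
      } else {
        work();
      }
    }
  }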

A real world implementation can be found here: http://gcc.gnu.org/ml/gcc-patches/2002-05/msg00109.html

(it was never merged, we decided we were fine with goto/switch/break/continue)


You can use labeled break and continue for a lot. That said, there is some irreducible control flow that the Relooper falls down on. It's quite rare in practice, though—with if, while, break, continue, labeled break, and labeled continue, the Relooper can reloop nearly all control structures seen in actual code. But, as I mentioned, I'm willing to believe you can come up with a benchmark where you'll need goto.


> there will be incentive to optimize plain JavaScript

I am not that worried about average JavaScript code. I am more concerned about computational cores that people will start rewriting to asm.js.

Or here is another question: how do you keep the incentive to optimize Emscripten-generated code so that eventually you can kill "use asm" and go full speed without it?

> You can think of asm.js as just that; a one-pass translator of an implicitly-defined bytecode to JavaScript.

No, when I talked about a one-pass translator, I meant it should run on the client. asm.js is not just that.


> I am not that worried about average JavaScript code. I am more concerned about computational cores that people will start rewriting to asm.js.

> Or here is another question: how do you keep the incentive to optimize Emscripten-generated code so that eventually you can kill "use asm" and go full speed without it?

I hope that some JS engine does no special asm.js optimizations, but at the same time achieves the same speed. That would be both an amazing technical achievement, and a very useful result (since presumably it would apply to more code)! :)

Btw, I actually was pushing for that direction in early asm.js-or-something-like-it talks. So I think I see where you are coming from on this matter, and most of the rest of your post as well - good points.

However, I strongly believe that despite the correctly-pointed-out downsides, asm.js is by far the best option the web has: We need backwards compatibility + near-native speed + something reasonably simple to implement + minimal standardization difficulties. I understand the appeal of the alternatives, but think asm.js is far closer to achieving all of those necessary goals.


> Or here is another question: how do you keep the incentive to optimize Emscripten-generated code so that eventually you can kill "use asm" and go full speed without it?

Why is that important? I don't get it.

Emscripten -> JS is lossy; it throws away information that the code generator could use to generate more efficient code. A clever JIT can re-discover that information by observing the code in action. And sure, that's a satisfying challenge from a VM implementor's perspective, but why is it a priori worse to just tunnel that information through via asm.js rather than throwing it away?

Even if the VM is perfect at this, there is a non-zero runtime cost that, I would expect, scales linearly in the size of the application.


> Why is that important? I don't get it.

For various reasons.

I believe it can be done, and because it can be done I don't see why it should not be done. Why keep two front-ends (two parsers, even!), two separate IR generators, etc. in the system if you can have one?

I also think that such code can occur, and does occur, in real-world applications. And I want any JS code to go as fast as it can, without requiring people to rewrite anything.

It is true that dynamic compilation incurs certain overhead and requires warm up. But it is also true that AOT compilation is not cheap either (that is why a special API to cache generated code is being suggested).


" If you want to view/edit it as a more traditional-looking byte-code, you can easily implement a YourBytecode<->asm.js compiler."

There are quite a few hurdles, and some of them will be resolved with some stuff Mozilla has proposed in ES6. But as it stands now, you can't use CLR IL bytecode or even JVM bytecode.


Sorry, when I said "your bytecode," I meant whatever bytecode the author had in mind that is trivially one-pass compilable to JavaScript, i.e. one that has compatible semantics. Mapping an arbitrary bytecode onto asm.js is not going to be trivial, since VMs have nontrivial differences in semantics surrounding garbage collection, concurrency, etc.


You couldn't (reasonably) compile the CLR or the JVM to asm.js? I'd assume you could, which would seem to point toward a translation of bytecodes (since that's what the asm.js-hosted CLR/JVM would be doing) being feasible…


You can't, because both are garbage collected, and asm.js has no access to a garbage collector (unless you wrote your own).

At the end of the day, asm.js uses some code patterns (like taking bitwise-or with 0, where the spec mandates that it behaves as if the data were coerced to a 32-bit integer) to optimize the performance of those operations. It is only capable of doing so when you strip away most of the features which make JS a really nice language to write code in.


But at that point the VM-atop-asm.js-atop-the-JavaScript-engine would be the thing doing the GC against its own asm.js-provided heap. (Right?) Perhaps performance would be awful, though I guess I don't immediately see why: if asm.js can get reasonably close to native speed (within 2x, I think I've read?) and .NET/Java can get within a comparable distance of native, then the two together should be slower, but not unusable for many things, I'd think.

Upon seeing your edit: much of the speedup also comes from the lack of dynamicity, too, from my understanding.


"VM-atop-asm.js-atop-the-JavaScript-engine would be the thing doing the GC"

"much of the speedup also comes from the lack of dynamicity, too."

In the language, JS gives you no control over its underlying GC. Every single assumption that asm.js leverages depends on determinism, and introducing a GC into the mix really messes with AOT optimizations.


But the JavaScript engine's GC isn't applicable here, is it? Because everything from the asm.js layer up is using a privately-managed heap implemented as a single (?) JavaScript typed array (ArrayBuffer) with no JavaScript-engine-level visibility of the distinct asm.js-layer values stored in it.

Edit: perhaps the thing I'm missing is that the discussion is about direct translation of CLR IL/JVM bytecodes into asm.js without any vestige of the CLR/JVM still running alongside. I was merely thinking of compiling the CLR or JVM to asm.js then hosting unmodified CLR IL/JVM bytecode atop that. Which seems feasible, though not as performant as a VM-less translation which I agree doesn't seem possible.
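For what it's worth, a minimal sketch of that arrangement (the function names are made up): the hosted VM gets one ArrayBuffer as its entire address space, and its GC would move words around inside that buffer, invisibly to the JS engine's own GC:

  function HostedHeap(stdlib, foreign, buffer) {
    "use asm";
    var HEAP32 = new stdlib.Int32Array(buffer);
    function poke(ptr, val) {
      ptr = ptr | 0;
      val = val | 0;
      HEAP32[ptr >> 2] = val;
    }
    function peek(ptr) {
      ptr = ptr | 0;
      return HEAP32[ptr >> 2] | 0;
    }
    return { poke: poke, peek: peek };
  }

  var heap = new ArrayBuffer(1 << 20);  // 1 MiB private heap
  var vm = HostedHeap(this, {}, heap);  // "this" is the global object here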


"the first danger that I sense: OdinMonkey is a hammer that will kill incentive to optimize JavaScript that humans write. Why waste time optimizing normal, dynamic, fluid like water JavaScript if you can just ask people that they write something more structured and easier for compiler to recognize? "

I'm eagerly awaiting the day I can write single-threaded Go or Java, have it turned into asm.js, and never have to write another line of yucky JavaScript again. I like this proposed alternate future.


If something like asm.js could possibly kill the incentive to optimize handwritten code, we'd still be writing everything in assembly.

Asm.js is aptly named, because it's targeted at small, tightly-looped, extremely performance-critical operations over large amounts of homogeneous data. That's one of the few things people still use assembly language for on the desktop. That's all asm.js really is: a compiler hint that you've gotten desperate for cycles, so here are some coding conventions that will make things easier to optimize.


In that sense, like a lot of the 'let's run absolutely everything in a browser' push, it seems like a waste of energy.

I'd much rather that time and effort were spent on optimizing the web infrastructure for the kind of heavyweight LOB applications that are a) currently being put on the web in a serious way and b) have performance issues, than on hypothetically enabling web-based CAD programs or AAA video games.

Things like figuring out a solution to the circular reference problem caused by having separate DOM and JS garbage collectors. An agreement on a performant client-side storage scheme. XSS protection for mere mortals. And so on.


I think you're completely missing the point of asm.js. It's not really possible to build small portions of your app with this (no JS object access, no strings, etc.), and it's not intended for that anyway. It's really just a way of compiling C and C++ applications to run in the browser efficiently.


Actually, you can build small portions of an app with it. That's pretty much all you can do with it, for the very reasons you stated: no JS object access, no strings, etc. All you can do is the same stuff you'd need assembly for anyway: super-tight inner loops that work over lots and lots of raw numeric data.

That's an important part of compiling C and C++ applications to run in the browser efficiently, but it's only one small part of the puzzle. C and C++ compilers put a lot of work into optimizing this same sort of code, and asm.js gives them a path to bring some of those optimizations to the browser. But no C/C++ application, even compiled with emscripten or the like, is ever going to be compiled solely to pure asm.js. It's just one part of a larger equation.


"never have to write another line of yucky Javascript again."

GWT existed since 2006: https://developers.google.com/web-toolkit/


I've used it and like it well enough (except for the horrible compile times, which perhaps asm.js can help with by simplifying code generation). I'd really like a Go version of GWT.


and now you have two problems.


writing yucky Java and still dealing with yucky JavaScript? :P


And developers have been fretting about serializing stuff you wouldn't think would ever need serializing, ever since.


Have you looked at Dart? It's not that far from Java.


Not really. It's a dynamically typed language that tries to fool the casual user into thinking it's statically typed.


But it has more normal types. I'm betting if asm.js takes off, dart2js could easily output code that uses it. Sure you can write fully untyped Dart code, but their js compiler already does a lot of type inference.


"Please keep in mind that I used to work on V8"

In theory V8 could optimize regular JS to be as fast as asm.js, but in practice OdinMonkey made this code 3x to 10x faster (or more) with a couple months of effort. And to boot, it is completely compatible with every JS engine, the generated code can be cached offline for fast startup (because of the linking step), and the FFI between JS and asm.js is trivial.

Basically this post is sour grapes. Mozilla came up with a better idea than the so-called geniuses at Google did.


That's a weird thing to say. Is the perception really that Mozilla doesn't have its share of geniuses?


He makes a solid point regarding polyfills for asm.js: if the specification is so rigorously defined, then the only fallback a current-generation browser has on offer when encountering a "v2 asm.js" is to treat it like regular JavaScript. That's a pretty nasty regression, and hopefully one that will be addressed.

Regardless I still prefer building on Javascript than any proposed alternative. No matter what happens next, any pristine standard is going to look like an overextended mess even in just a few years time. So why not pick the overextended mess we started with.


We've talked about versioning, and I think feature detection is probably the best approach. If the user agent publishes a list of supported asm.js features via a standard query mechanism (note that this is a feature list, not browser detection) -- e.g., ASM.SIMD === true -- then apps can negotiate content accordingly. This might entail compiling multiple versions of your codebase for various versions of asm.js, but that's an acceptable price to pay for VM evolution, IMO.
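A sketch of what that negotiation could look like on the client, using the hypothetical ASM.SIMD flag from above (no such object exists today; file names are made up):

  var src;
  if (typeof ASM !== 'undefined' && ASM.SIMD === true) {
    src = 'app.simd.asm.js';  // build that assumes asm.js-with-SIMD
  } else {
    src = 'app.v1.asm.js';    // plain v1 build, runs everywhere
  }
  var script = document.createElement('script');
  script.src = src;
  document.head.appendChild(script);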


Ouch, that's a great point about versioning.

The thing that appeals to me about asm.js is that it short-cuts the long, slow path of VM optimization and gives great performance now instead of five years down the road. Sure it's an ugly hack, but that doesn't matter to your users; all they see is a much faster user experience.

Now all we need is a SIMD API for JS and we can write some truly amazing apps.


"it short-cuts the long, slow path of VM optimization and gives great performance now instead of five years down the road."

Something tells me that, based purely on usage metrics, Chrome matters more than Firefox:

http://stats.wikimedia.org/archive/squid_reports/2013-02/Squ...

(and pretty much every other metric according to https://en.wikipedia.org/wiki/Usage_share_of_web_browsers#Hi... )


That's StatCounter numbers. Do you really think IE is below 30%? Chrome's numbers are inflated.


"So why not pick the overextended mess we started with."

What's the starting point? JavaScript or assembly? NaCl gets closer to the original starting point (which predates JavaScript by more than 20 years).


By my count JavaScript is around 5 years older than amd64. If we had gone with Native Client back then, we'd be stuck with pre-SSE x86, or maybe they'd have been forward-looking and gone with Alpha or Itanium with emulation for "legacy" systems.

Also, why the false choice between asm.js and NaCl anyway? The article makes a pretty compelling case for "neither".


> By my count JavaScript is around 5 years older than amd64. If we had gone with Native Client back then, we'd be stuck with pre-SSE x86, or maybe they'd have been forward-looking and gone with Alpha or Itanium with emulation for "legacy" systems.

That's not really how NaCl works. Any instruction that can't break out of the jail can be whitelisted; this includes all versions of SSE.

Additionally, PNaCl can be retargeted at as-yet-unknown architectures.

The result is that binaries would have been either shipped as PNaCl-only, or optimized for i386+SSE1 with fallback to PNaCl. As new hardware shipped, new binaries could be targeted appropriately.

This seems reasonable to me; default to portability, provide an option for best-case performance.


How do you plan to ensure that authors test and ship working and performant PNaCl binaries if they and their users are always using the native code?


How did Apple developers ship working PPC/m68k binaries? Or PPC/x86 after that, later PPC/x86/x86-64, and armv6/armv7/armv7s on the iPhone?


Totally different. In all of those cases, the developers have access to devices that run the hardware, and they have economic incentive to target all of those devices. Whereas with your suggestion, developers ship and test native builds for all of the CPUs on the market, plus one extra "portable" version for posterity, one that no immediate customers will ever use (because they'll be using the native versions for their architecture instead). I suspect developers wouldn't even ship the portable version (too much of a pain), browsers won't optimize it (because nobody will use it), and developers won't test it (because nobody will use it).

It's like saying that images are no problem for accessibility on the Web, because designers know to use alt text. Of course they know they should, but we all know what the reality is. A backup "portable" version of an app is like alt text in this way. Some developers would do the right thing; many won't, and the Web will be de facto locked in to x86 and ARM forever.


That's a bit ridiculously alarmist, isn't it? Heck, you could have led with that, rather than waiting for me to reply -- you seem to already know what you wanted to say.

Won't developers have access to PNaCl runtimes? How is this so different from Apple developers having access to the other target hardware?

It comes down to toolchains and the workflow they're optimized for. With PNaCl, the portable version is the primary target of the toolchain, from which one generates the other native binaries. You can still generate native binaries separately, but ideally, PNaCl would lead.

If the tools lead naturally in the right direction, then developers do follow. If you're feeling really worried about it, define a distribution format that mandates a PNaCl entry.


In the context of manipulating symbols on an infinitely long piece of tape, sure, but in the context of the web we don't "have" NaCl and there's sufficiently plausible technical and political resistance that we're probably never going to.

At various points he talks about how a fresh bytecode would address some of the issues raised, and certainly from a full-time compiler engineer's perspective, some pristine representation and a handful of tree-walks are obviously going to appeal to his academic side. I just doubt any such perfectly forward-compatible ideal representation will ever exist that can cater to all camps and survive the test of time, because to my knowledge such a thing has never existed.

So why abandon our current path for something that'll inevitably end up as ugly as JS in the long term, when we already have something as ugly as JS.


> ... there's sufficiently plausible technical and political resistance that we're probably never going to.

Resistance that you contribute to with almost every article that touches on NaCl, yes.

It's a self-fulfilling prophecy and a circular argument: "We can't adopt NaCl because nobody will ever adopt NaCl" -- say the people responsible for choosing whether to adopt NaCl.

> So why abandon our current path for something that'll inevitably end up as ugly as JS in the long term, when we already have something as ugly as JS.

Why is this the "inevitable" result?


Show me a counterexample, that's all I ask. I'd give all the rhetoric in the world for a single damned counterexample of a program format that's lasted since the 80s. That's what's at stake.

Instead we have a company that throws specs over the wall like they're teenaged nerds that have just discovered Pascal's pack function, a company that pushed SPDY on everyone despite it relying on a draft SSL feature that got rejected at the IETF (and in the space of 2/3 years we now have three HTTP variants - and why? Because 3 years ago it shaved 1ms off a maps.google.com request). Did you even take a look at the original WebSocket spec? It worked in terms of comparing the entire start of the HTTP request using memcmp()!

Regardless of my opinion of NaCl, this is stuff that should be getting defined slowly and in the open by bearded old men that have spent their lives in glacial, trusted industries like telecoms, or just about any other mature technology industry. Not a throng of Java graduates who think they're changing the world by continually dumping crap specs over the wall and paying for subsidized articles to promote them.


> Show me a counterexample, that's all I ask. I'd give all the rhetoric in the world for a single damned counterexample of a program format that's lasted since the 80s. That's what's at stake.

ELF was defined in the late 80s/early 90s as part of SVR4. COFF was defined as part of SVR3. Both still exist today.

The JVM bytecode was defined in 1995 and has been evolved compatibly.

I'm curious, though. Do you require the same longevity of web formats that were defined in the 90s? Things have changed quite a bit, and attempting to browse the modern web with a 90s browser is futile.

Given that, I don't really understand what you're arguing for.

> Regardless of my opinion of NaCl, this is stuff that should be getting defined slowly and in the open by bearded old men that have spent their lives in glacial, trusted industries like telecoms, or just about any other mature technology industry.

Us 'bearded old men' aren't on the web, and it's only improvements like SPDY, WebSockets, and NaCl that have made me the slightest bit interested in the web as an application platform.


> attempting to browse the modern web with a 90s browser is futile

I think the inverse is what we're talking about. Will we be able to browse NaCl sites in future browsers (possibly after an architecture change away from x86)? As it stands, we would probably be able to browse a '90s site on modern browsers.

It sounds like NaCl compiles things down to x86 assembly (or really AMD64)(?). If that's true, then it's possible for things not to be forward compatible. I'll admit that it doesn't seem likely, especially after what happened with Itanium and AMD64/x86-64, but we could see things move towards ARM.


> I think the inverse is what we're talking about. Will we be able to browse NaCl sites in future browsers (possibly after an architecture change away from x86)? As it stands, we would probably be able to browse a 90's site on modern browsers.

Given PNaCl, the answer should be 'yes'.


For future Google browsers, maybe.


> attempting to browse the modern web with a 90s browser is futile

And it's hard to imagine stronger evidence that the people designing the "modern web" are not the few who are willing and able to do a competent job of it.


"a throng of Java graduates who think they're changing the world by continually dumping crap specs over the wall and paying for subsidized articles to promote them."

Based on the media hype surrounding asm.js, it really sounds like Mozilla fits the description


>Based on the media hype surrounding asm.js, it really sounds like Mozilla fits the description

Wait, what? I'm not even sure what you're trying to say here. If you seriously think Mozilla has paid for articles about themselves, you've gone off the deep end.

I know that I personally became aware of asm.js because of an offhand comment Eich made on HN. I posted that link on both reddit and HN, which I guess was the start of the hype, so if you could let me know where to collect my Mozilla money I'd be obliged. :)


The actual arguments against NaCl have been reiterated many times, and go well beyond the circular reasoning you suggest is being displayed. Unless you want to argue for the fun of arguing, the polite thing to do is to consider the previous arguments as implicitly included in the current discussion, and interpret other comments as charitably as possible.

For the sake of completeness, I'll note one of the biggest detriments of original NaCl: a lack of platform independence.


"a lack of platform independence"

Platform independence is not an obvious goal for game developers. If it were, you would see the same game show up on every platform, but that's not what we see. Heck, only recently did we really see games show up for the Mac. For high-performance contexts, you want to be able to express algorithms in a way that takes full advantage of the hardware, and the lack of homogeneity in hardware demands a platform-dependent solution.


>Platform independence is not an obvious goal for game developers.

The web is not a platform solely for game developers.

Why run native game code inside a browser?

What exactly do you get over, say, running it natively as a desktop/mobile app and just distributing it over the internet and/or web?


zero installation.


With things like Valve and the Mac App Store, installation is a non-problem.

I have fewer issues downloading and updating an app like that than contending with the subpar experience that is web games, browser compatibility, et al.

Not to mention that with the speed of today's networks, even manual installation is a non-problem.

We can even put a nice facade on top of it: "just visit this page and click 'Install'", and you get an app running locally after some small download time. It will even be sandboxed and all-in-one, so it won't mess up any part of your system.


"just visit this page and click "Install"" That's one (and possible more) step than is necessary.


As if that matters one iota.

For one, it's merely one of around 20 steps you have to take in either case anyway (steps like: type the URL, press enter to go to the page, wait for the page to load, click through some license stuff, click to start playing, click for fullscreen if you want, wait for assets and loading, click game settings, suffer through the intro, read the help for the game controls, etc.). So, yes, you might get rid of one measly step -- all the while introducing other steps, lags, and inefficiencies.

Second, this step buys you freedom from having the program run at somebody else's mercy. Of course, for networked games and apps with central servers you cannot avoid it.

But I would sure as hell like to avoid it for any program that doesn't need a central server. I don't want to use "Photoshop in the browser" and lose all my stuff when Adobe decides to lock me out or kill the service a la Google Reader. Plus have it be susceptible to internet outages and slowdowns, and using technologies and speeds that are 2-3 generations behind a native app. Not all programs gain from running in the browser. A lot of them lose.


That's a lot of strawmen you've set up there. Want a lighter? We can burn them down together.

Yes the scenarios you describe are ridiculous and unacceptable. But thankfully we live in the real world where those extra steps you describe don't actually exist in apps that were programmed by actually good programmers.

As for whether it matters one iota: It matters if you want more people to play your game. You are a programmer, and so you don't perceive the actual difficulty level of software installation. On the other hand, for normal people with average skill levels, having to install software can present a significant barrier to entry.


>Yes the scenarios you describe are ridiculous and unacceptable. But thankfully we live in the real world where those extra steps you describe don't actually exist in apps that were programmed by actually good programmers.

I have not seen even ONE (ONE) online app or game where most of the above steps do not exist. You might be able to find one. I doubt you'll find two, much less five.

So, straw-men? Puh-lease.

>It matters if you want more people to play your game. You are a programmer, and so you don't perceive the actual difficulty level of software installation.

For one, you sidestepped all my arguments. I proposed systems like the Mac App Store and Valve -- and even more automated solutions, like a Java Web Start for the 21st century (click a button on a webpage, and it's installed and running locally).

And yet, you tell me that "for normal people with average skill levels, having to install software can present a significant barrier to entry", as if we were discussing manual installers and stuff.

You are responding not to what I said, but to the preconceived ideas you have about traditional installers. The stuff I discussed (MAS, etc) are not at all "a significant barrier to entry".

Heck, 5-year-olds to 70-year-olds can use them just fine on their Mac or their iPhone. I've seen that.


I wonder what effect it would have on people's ability to pirate it.


People's ability to pirate games is already broken - the more of a game's essential logic gets moved onto a server, the more useless the blob of code you hand over to clients becomes.

On the other hand, that also destroys a game's ability to be archived, creating a cultural "dark ages" if the approach picks up adoption. "Piracy" is essential to future historians.


NaCl is gated on PNaCl. NaCl can provide optimal performance; PNaCl can provide portability.


The author complains about having to write x = 0 in asm.js blocks, where it simply means "here I declare x to be a number and not undefined". He worries that "if you remove a single + or | the code above will be rejected by the validator." But that is good: if your goal is to have really fast code, and you declared the section as such, you don't want any construct that will make it significantly slower.

Finally, the author believes that the "JavaScript" code can be made "as fast" even without annotating it like in asm.js. He writes "I don't believe that anything like asm.js is needed to generate highly efficient native code" but of course he nowhere shows how something like that can be achieved. Because it can't. Unless you somehow specify your assumptions, in most of the cases you have to have run-time checks. Which are slow.

Observe just for example:

     c = a + b;
If you haven't written a = 0 and b = 0 beforehand, you have to assume that a and b can be strings, etc. You can have some code to take the int route, but you have to have runtime checks. And even if you wrote a = 0 and b = 0, and you're doing a loop in which you have a + b, you can't assume that the result will fit in 32 bits. Which you don't need, unless you really want to do 32-bit arithmetic -- there are algorithms where you get a performance advantage by using it.
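For comparison, here is roughly how asm.js states those assumptions up front (a small sketch in the style of the spec's coercions):

  function add(a, b) {
    a = a | 0;           // "a is an int" -- no string check needed
    b = b | 0;           // "b is an int"
    return (a + b) | 0;  // result truncated to 32 bits, so int math is safe
  }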

The asm.js annotations are a really clever way to tell the JIT what you (the programmer) expect. The sections allow you to get nice, clean pieces of native code, free of a lot of run-time checks. Speed and much more efficient memory use. Pure goodness.


Correct me if I'm wrong, but wouldn't the hypothetical techniques needed to enable something like "asm.js without the compiler hint" also come rather close to solving the halting problem?


The techniques needed to enable something like "asm.js without the compiler hint" exist and are in use today in JIT compilation (at least to some degree). Since the source code does not generally change but the generated instructions may, you can make assumptions first and later rewrite the instructions if your assumptions don't hold. At least that's how I understand it. If you were to do AOT compilation and ship the instructions, then you may be right about the halting problem.
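Conceptually (no engine emits literally this), the specialized code is guarded, and the guard is what gets you back to the generic path when an assumption breaks:

  function makeSpecializedAdd(genericAdd) {
    return function (a, b) {
      if (typeof a === 'number' && typeof b === 'number') {
        return a + b;           // fast path under the observed assumption
      }
      return genericAdd(a, b);  // assumption failed: "deoptimize"
    };
  }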


Checking whether assumptions hold must be done regularly at run time. If you already know which assumptions always hold, you have a significant advantage at the start: simpler, faster, and more compact code, and the chance for better "global" optimizations. Postponing a lot of decisions to run time is never good. Postponing a little can be OK. Here we're in the range of "a lot" that we can save. Like in every damn +, -, / of any two values in really long calculations.


"Why asm.js is a JavaScript subset?". Simple, because Mozilla wants the benefits of the (P)NaCl approach but don't want to have to import LLVM. asm.js allows them to 1) make use of their existing JIT compiler and 2) avoid defining a new bytecode format (the benefits of this are questionable). asm.js is ugly, but it doesn't seem that much more ugly than adding a new multi-megabyte dependency to every browser.

I suppose an interesting question would be how hard would it have been to standardise on the PNaCl bytecode format and use that as the input to ${FOO}monkey


Or they value backwards compatibility. Any asm.js you output will run with the same semantics and most likely pretty good performance whether or not the browser supports asm.js. Feed PNaCl bitcode to anything but special builds of Chrome and you may as well be streaming from /dev/random.


> "Why asm.js is a JavaScript subset?". Simple, because Mozilla wants the benefits of the (P)NaCl approach but don't want to have to import LLVM.

Or, you know, it could be like Mozilla's vice president of products said about NaCl: "These native apps are just little black boxes in a webpage."

In other words, Flash reincarnated.


> Or, you know, it could be like Mozilla's vice president of products said about NaCl: "These native apps are just little black boxes in a webpage." In other words, Flash reincarnated.

Who cares? Asm.js apps are little black boxes.


From a technical perspective, using JS as a bytecode is less than optimal, of course.

But asm.js is a good backdoor. Good luck getting some third-party bytecode added to MS's and Apple's browsers. There was a fairly popular bytecode used in web apps: Java.


> There was a fairly popular bytecode used in web apps: Java.

At least two, actually: Flash is another. Arguably better performing and more generally successful. And both of them were mostly open... freely targetable as runtimes, and almost freely re-implementable.

With as much hate as both of them have had, I find it a little weird when someone talks about doing it again, but then again, maybe the people who hated Flash aren't the same people who want a Google VM.

Or maybe they think third time's the charm.


I don't think you really need to import LLVM to use a bytecode, nor do I think that bytecode must look anything like, say, LLVM bitcode.

I specifically listed freedom of choice as a bytecode benefit for implementors.

You can have bytecode and pipe it through a separate front-end into your JavaScript JIT compilation pipeline.

You can have a separate compilation pipeline as well, if you think it is better.


I didn't mean to imply you had said that. Sorry if my wording gave that impression.


Oh, no. I was not saying that I have such an impression. I sometimes wrongly use "I did not say" for "I don't think" out of habit; it's a literal translation from Russian, where it does not have a critical connotation.

I fixed the comment above.


[deleted]


>JITing strictly typed code is a vastly different problem than JITing dynamic code.

My limited understanding is that it's only vastly different because it's easier. One thing JavaScript JITs do is try to infer the datatypes likely to be used, and compile a version of the code with those assumptions baked in. In other words, when you've seen the 100th iteration of a loop, you can guess that the next 1000 iterations will involve the same types and optimize for them.


Wouldn't they want to eventually use LLVM to compile Asm.js anyway?

Surely LLVM will produce better code than a JavaScript JIT on non-dynamic code, since it's designed for it.

OTOH the Asm.js approach has the advantages of backward compatibility and not being forced to forever support a specific LLVM bitcode version.


I've heard plenty of reasons Mozilla doesn't like NaCl, but that sure isn't one of them...


I think the main reason for anyone not to support it, would be that it makes the WWW platform specific.

I can't wait for my first "Sorry. This webpage does not support ARMv7 Hard-float".

Another reason Mozilla might not support this is that it's actually not called Na(tive) Cl(ient). Its full name is "Google Native Client".

That sure as hell doesn't sound like a standardized and fully spec'd thing I'd implement in my browser if my name was anything besides Google.


I'm just speculating wildly like most other people here are, but do you really think it wasn't a positive for them that they could reuse the JIT they've invested in and have expertise in, rather than relying on LLVM?


I don't want to speak to that. I was more referring to the notion that they didn't consider it because LLVM is a large dependency. As others have noted, emscripten uses LLVM, and it's likely that LLVM will be leveraged to coerce other code into asm.js-style JavaScript. Mozilla's reasons for not wanting NaCl have been spelled out in lots of places.


I would imagine that people would rarely write asm.js code, but instead they would generate it from statically typed languages (TypeScript, Dart, Haxe).

I admit asm is a bit ugly to look at. But, it's backwards compatible, and provides a way forward for getting some performance out of js in very specific cases. It's not going to be an all purpose code path for general use, and I don't think it's going to take attention away from other general js improvements.

The only thing I'm skeptical about here is whether or not Google chooses to integrate asm with Chrome (Setting aside what Apple or Microsoft ends up doing). It runs counter to the work Google has done with Dart. For that reason alone, I think there's a significant risk it won't gain traction.

I think the creation of "fast" web performance is seen as a huge carrot from the big tech companies (MS, Adobe, Google, Mozilla). Each is trying to offer it in different ways, and at the same time pulling developers into their specific dev tooling ecosystem.

It's a difficult time to pick languages/tooling. I like Haxe because I know it'll support the platform of whomever wins (asm.js, dartvm, es6, etc.)


> I would imagine that people would rarely write asm.js code, but instead they would generate it from statically typed languages (TypeScript, Dart, Haxe).

Current asm.js is just for C/C++. Those altJS languages can't make use of asm.js (Haxe->C++->LLVM->Emscripten->asm.js might be possible but I think it's meaningless).

asm.js is Mozilla's answer to "the web standards can't handle AAA games", but I believe it is far from an ideal solution. And I agree with the poster's worry that this ugly hack makes JavaScript more ugly, both in the specs and in the VMs.

To be honest, I trembled when I saw the Math.imul proposal; it shows how obsessed some people are with JavaScript...
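For context, Math.imul exists because 32-bit integer multiplication can't be expressed as (a * b) | 0: the intermediate product is a double and loses low bits once it exceeds 2^53. A quick console check:

  var a = 0xffffffff, b = 0xffffffff;  // both are -1 as int32
  (a * b) | 0;                         // 0 -- low bits lost in the double product
  Math.imul(a, b);                     // 1 -- C-style int32 multiply: (-1) * (-1)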


> Current asm.js is just for C/C++

There is already asm.js support for lljs, some thoughts on support in things like JSIL (C#) and Haxe (as you mentioned).

And in principle anything that compiles to LLVM IR or C/C++ would work through emscripten.


> Current asm.js is just for C/C++.

I think that's a dangerous way of putting it. Asm.js is just a spec and can (and ought to) be emitted by things other than Emscripten.


As others have mentioned, I don't see asm.js as being fundamentally tied to C/C++. Please correct me if I'm wrong. Memory management could be addressed in future versions, but even if it is not, there are still ways of getting the performance benefit of asm.js using altjs.


I'd love to be on asm.js with Java. Does Dart really represent all of Google? From what I understand of Google, a project really is only as big as the team that works on it - unless it's a CEO mandate, like social features. From that angle, Chrome > Dart by far, and it's in their best interest to accept relatively simple technological innovations that o

If I have to stay in JS, it's CoffeeScript or JavaScript; there's a lot of productivity tooling around them already. Having coded years of JS, I'm definitely reaching my limit with how often I have to find the damn missing syntactical issue, or deal with custom class systems, or not having refactoring support, or or or, the list goes on.


> instead they would generate it from statically typed languages (TypeScript, Dart, Haxe)

All of those languages would be terrible for targeting asm.js. asm.js doesn't support GC. Java, C#, Haskell, Smalltalk, all sorts of languages you could imagine wanting to compile to run in a browser would actually be a terrible fit for asm.js.

You'd be much better off compiling to vanilla JS so you can take advantage of the underlying VM's support for GC, dynamic dispatch, strings, etc.


asm.js doesn't support GC... for now. Read the FAQ for more details: http://asmjs.org/faq.html

Even if it never supports GC, it's not out of the question to use historically GC'd languages like Java. It's just a matter of manually releasing memory through the API. It's a pain, but it's one I would gladly bear for better performance in some specific methods.


What's the point of writing something in Java if you have to do manual memory management and can't use any of the platform library? asm.js makes sense for performance-critical code that you would write in C in other languages. So, in a game you could write your game engine, or at least your graphics engine, in C/C++ and then add a scripting language for the logic (JavaScript/Java/Python/Lua/...). You could compile the engine from C/C++ to asm.js and the game logic to JavaScript. But compiling a classical high(est)-level, GC'd language that is mostly defined by its standard lib (read: Java) to asm.js seems weird.


Managing memory in Java is not common, but it's not exactly weird either. Off the top of my head, you need to make memory-related API calls when dealing with ORMs, object pools, and native externs.


Without having actually done much to investigate the feasibility of this, it seems to me like someone could write a compiler infrastructure that would generate both asm.js and (P)NaCl output from the same lower-level non-JS code, since you can get to both from LLVM bitcode. Combined with a JS client library to paper over the differences between the two, developers could just write one set of low-level code and not care which fast-path execution strategy the browser implemented, and the asm.js version, along with whatever polyfill hacks were necessary to make it work, could be served to old browsers to run the same code slowly.

This is, I guess, not that different from the OP's suggestion to create a bytecode and define a one-pass transformation of that into JS.
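A hedged sketch of such a client library (every name here is made up; the mime-type check is one way Chrome-era NaCl detection was typically done):

  function loadFastPath(name, onReady) {
    var hasNaCl = navigator.mimeTypes &&
                  navigator.mimeTypes['application/x-nacl'];
    if (hasNaCl) {
      var embed = document.createElement('embed');
      embed.type = 'application/x-nacl';
      embed.src = name + '.nmf';      // manifest for the native build
      embed.addEventListener('load', function () { onReady(embed); });
      document.body.appendChild(embed);
    } else {
      var script = document.createElement('script');
      script.src = name + '.asm.js';  // Emscripten build of the same code
      script.onload = function () { onReady(window[name]); };
      document.head.appendChild(script);
    }
  }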


There's a valid philosophical objection here, and I found the article illuminating about the nature of Asm.js.

The strategy of making a static subset of a language that's easy to compile works great -- if it's targeted at humans! RPython, for example, means you can write the Python VM in Python, and the Smalltalk VM (or Squeak?) was written in Smalltalk using the same approach. The point is that part of your code base is in this restricted language instead of C, and you can bootstrap that way.

If this language is just a compiler target and not meant to be particularly human-friendly (RPython has classes and mixins) then making it a weird subset of JavaScript is a little more dubious and without precedent that I can think of.


It's definitely not "just a compiler target", no more than the assembly language in which people still write some code by hand when there's a need for that. A real-world example:

https://github.com/srijs/rusha

(see the benchmarks -- Firefox 21 uses asm.js) and the source

https://github.com/srijs/rusha/blob/master/rusha.js

I don't see that the code got ugly. The calculations needed are the same; the most obvious differences from the non-asm.js code are the "use asm" directive and a few |0 marks. And some of those |0 constructs can improve speed even in browsers which don't do full asm.js. So the engines will get smarter by recognizing such constructs even without full support for "asm.js" segments. It's a win-win already.


Right, but your example here is a particularly skewed one. It's a hashing library. All it does is take integers and do arithmetic on them. Most real-world application code is not at all like that. Asm.js doesn't even support strings as far as I know.


Asm.js is made exactly for things like that: the pieces of code that don't depend on "strings" but calculate a lot. And for things like this:

http://bellard.org/jslinux/

For "strings" the existing JITers do already the good job. You can just use JavaScript as is then.


Yes, agreed. asm.js is a great fit here. My point was that that fact doesn't generalize well to other problem areas.

Earlier you said asm.js isn't just a compiler target, but if your program uses strings, or objects, or closures, or GC, then it's not going to be a good fit to use asm.js.


In related news, researchers from Mozilla and UC San Diego, with support from Microsoft Research, have recently published another approach to typed JavaScript, named Dependent JavaScript or "DJS". This isn't focused on performance so much as correctness.

1. http://lambda-the-ultimate.org/node/4700

2. http://goto.ucsd.edu/~ravi/research/djs/


This seems to assume asm.js came first. Really it's a development built on top of what things like Emscripten were already doing.

There doesn't really seem to be much criticism here of asm.js as a bytecode, only as a syntax.


Compiling JavaScript down to asm.js format JavaScript might be messy, but it might lead to better performance if your compiler is better than the JIT engine.

I'm still not sure why a compact, concise, standardized bytecode format isn't being pursued. Perhaps minified, gzipped asm.js-encoded JavaScript is as close as we'll ever get.


> is as close as we'll ever get.

This is the point I think the author and a lot of people are missing. It's not always about what is the best possible design, instead it is often "what we will get". Politics, ideology, and timing can often come in the way of "the best design".

I would love to use NaCl (I was cheering for it up until recently), but if only Chrome is implementing it then it's about as useful as a screwdriver that doesn't fit any screws.

asm.js is already a stronger candidate because I can use it right now and it will work everywhere. Obviously the browser vendors have to create asm.js-specific optimizations for it to be truly great, but that is like fussing over the handle, grip, and material of the screwdriver. It fits the screws; it gets the job done; it actually works.

What does it matter how great your technology is if it doesn't work? What does it matter if NaCl is ten times faster if I can't actually use it? From what I see, Mozilla isn't very hot on the idea of adopting NaCl, and Microsoft will probably not even touch it this millennium.

If a client wants me to produce something, I can't use NaCl, because it won't actually run in non-Chrome browsers (without installing plugins).

To me it appears that the choice is either asm.js or nothing. NaCl just isn't happening.


> asm.js is already a stronger candidate because I can use it right now and it will work everywhere.

This is fundamentally not true. We tend to think of "better perf" as not breaking the web, but that isn't always the case. Imagine a game written using asm.js. It plays beautifully at 30fps on Firefox. Great! And it runs on every browser, right?

Well, if it runs at three FPS on Safari, then as far as the game developer and the player are concerned, no, the game doesn't run on Safari.


Yes, but games, with their real-time requirements, are a special case. Most websites are still usable even if the JavaScript runs at a tenth the speed.


Does this mean that Chrome "broke the web" by being so fast while IE6 was still in circulation?


If you're going to use IE6 as the baseline, then pretty much every browser was breaking the web relative to it, including later versions of IE, weren't they?

I don't recall many websites at the time being effectively usable only in Chrome for perf reasons, but my memory is a bit hazy. Certainly, there were fewer very JavaScript-heavy sites, and most of HTML5 hadn't been created yet, so there were fewer opportunities to build web apps that required fast JS. Also, Chrome wasn't as fast then as it is now.

But, yes, if people were building apps then that said "only in Chrome" then that would imply that to some degree Chrome was breaking the web. Fortunately, in that case, the other browsers caught up quickly and resolved that tension.


Apps can degrade gracefully by scaling down graphics when they are running on non-asm browsers.


If your graphics are the bottleneck, then you're likely GPU bound and not CPU bound, so that may not buy you much. Making a game that can make players happy across a 10x range of hardware capabilities is hard.


It seems that (P)NaCl could get pretty close if, for browsers without it (the same situation as not having asm.js), you compile the LLVM bitcode into JavaScript, which I think is possible with Emscripten (the same tool that compiles to asm.js).

You would probably want some sort of layer to abstract away the NaCl API so you can call the asm.js equivalents, though I don't actually know whether asm.js has equivalents.


> why a compact, concise, standardized bytecode format isn't being pursued

Two reasons:

1. Backwards compatibility.

2. We've had such a thing since the 1990s. It's called Java. Nobody likes it and nobody uses it [1], so that's a good enough reason.

[1] Nobody uses Java in the browser, that is. When was the last time you remember seeing a Java applet?


I would figure that we've learned a lot of lessons from Java, and Java was never truly integrated with the browser; it was always glommed on via some clumsy plug-in architecture.

If JavaScript were a plugin, it would be just as annoying and useless as an "applet".


The Mozilla guys have mentioned a compressed JS AST representation as the bytecode-like option, if you take the asm.js approach to its logical "JS VM as runtime" conclusion.


I think that short, concise bytecode is why we have high-level languages like JavaScript to begin with.


By "bytecode" we probably mean "ActionScript" which is a train-wreck of an implementation due to differing bytecode formats internally for various versions of Flash.

Maybe this is why people are terrified of the idea.


No, by bytecode, I mean bytecode.

http://en.wikipedia.org/wiki/Bytecode


> It can’t allocate normal JavaScript objects or access normal JavaScript properties. No strings. Only arithmetic and typed arrays (a single one actually with multiple views).

I think he is missing a core concept here. You can create and modify strings, just at a lower level: an array of ints can be used as a char array, which in turn can be used as a string. You have to interface with it at that level, probably implementing your own C-style strlen/strtok functions. It will be more complicated, especially if you need to accommodate Unicode, but it is doable. Granted, there is no advantage to abandoning the built-in string manipulation functions except speed, but you would only use asm.js when speed absolutely matters.

At a more fundamental level, when you have access to memory through a stack and/or heap, and can create arrays of ints on which you can do normal arithmetic and load/store operations, you really can do anything. After all, that is the entirety of what actual assembler code has to work with. You just have to build your own abstractions.
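
As a rough illustration of that idea in plain JavaScript (HEAPU8, HEAP32, and strlen are made-up names, not any standard API): a single ArrayBuffer carries multiple typed views, and a hand-rolled C-style strlen walks a NUL-terminated string byte by byte.

    var buffer = new ArrayBuffer(0x10000);
    var HEAPU8 = new Uint8Array(buffer); // byte-level view of the heap
    var HEAP32 = new Int32Array(buffer); // word-level view of the same bytes

    // Length of a NUL-terminated "C string" stored in the heap.
    function strlen(ptr) {
      var len = 0;
      while (HEAPU8[ptr + len] !== 0) {
        len = len + 1;
      }
      return len;
    }

    // Store "hi\0" at offset 0 and measure it:
    HEAPU8[0] = 104; // 'h'
    HEAPU8[1] = 105; // 'i'
    HEAPU8[2] = 0;   // terminator
    strlen(0);       // 2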


When I say "no strings" I mean JavaScript built-in primitive string values. I will amend the paragraph to make it explicit. It was added because I talked to people who think that just adding "use asm" on top of their node.js module will make it go as fast as C++ without realizing that asm.js is a very limited subset with its own rules.

You can surely represent a JS string as a Uint16Array or Uint8Array, but copying things between the emulated heap and the real heap incurs a penalty. You will also have to reimplement a lot of functionality that the host JS engine provides: implementing, say, a regexp engine in asm.js that is as performant as V8's built-in one would be a challenge.

Yes, you can do anything; I don't dispute that. But you have to do it manually, and sometimes you end up replicating functionality that already exists in the host engine: you might implement GC, for example, but it will be less efficient than the built-in one.
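
A sketch of the copy penalty being described, assuming a hypothetical Uint16Array heap and made-up helper names: every string that crosses the asm.js boundary must be serialized into the heap and deserialized back out, one character code at a time, which is pure overhead relative to the engine's native strings.

    var heap = new Uint16Array(new ArrayBuffer(0x10000));

    // Copy a JS string into the emulated heap, one char code per slot.
    function copyStringIn(str, ptr) {
      for (var i = 0; i < str.length; i++) {
        heap[ptr + i] = str.charCodeAt(i);
      }
      return str.length;
    }

    // Copy it back out and rebuild a real JS string.
    function copyStringOut(ptr, len) {
      var codes = [];
      for (var i = 0; i < len; i++) {
        codes.push(heap[ptr + i]);
      }
      return String.fromCharCode.apply(null, codes);
    }

    var n = copyStringIn("hello", 0);
    copyStringOut(0, n); // "hello", at O(n) cost in each direction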


Yeah, I have the same bad feeling about asm.js.

If we need C/C++ => LLVM => JavaScript/asm.js for performance reasons, then why not just embed LLVM inside web browsers? Wouldn't that be much easier and much cleaner?


OK, let's say that with stuff like asm.js we never get the true native performance that, say, NaCl would give.

Isn't that also a good thing?

I wouldn't want the web as a platform to be so good that it kills all native desktop applications, essentially turning every computer into something like a Chromebook.

Isn't that a real danger with stuff like NaCl?


> I wouldn't want the web as a platform to be so good

What? Are you so ideologically caught up that you actually wish something won't get good? It is totally reasonable to think something isn't the right approach, or a waste of time, or to not like the workflow, etc. But to actually wish that something won't get good enough to be a general solution even if it can? How does that make any sense?


>But to actually wish that something won't get good enough to be a general solution even if it can? How does that make any sense?

It makes a lot of sense because things do not happen in isolation. A "good enough web platform" could bring a lot of OTHER outcomes too.

I gave one in my comment already: all computers become Chromebooks, glorified clients.

That has other consequences: you lose control, you don't own your apps (the company does, and it can take them from you in an instant, like Google Reader), etc. Not to mention the privacy implications of ALL apps running in a web browser, fetching their data remotely. Or the implications of companies and governments locking content, pushing you out, demanding a premium, etc.


I don't see that those consequences follow, at all. I see a natural evolution from where we are now to a world where people own their own servers in the cloud and run all the software on them.

Or, if you really don't want any data to go over the wire, just run all the web app servers locally.

There was a big swing from the idea that AT&T would sell you a terminal connected to a central mainframe, to personal computers that were distributed, back to everything being centralized, because the advantages of the web were compelling and centralization was the natural path. Things will swing back to decentralization again, but the difference this time is that you will be able to access all of your data from anywhere, at any time.


I find asm.js pointless. If all you need is fast numeric calculation in JavaScript, the better compiler backend would be WebCL, the OpenCL binding for JavaScript.


Could you expand on this? It seems like an apples and oranges comparison.


Asm.js only allows numeric operations on an emulated heap buffer; you can do much better with OpenCL on a CPU device.

Asm.js is intended to be used as a backend for compilers; that is one of OpenCL's use cases as well.

Asm.js code pretends to be JavaScript code; OpenCL does not, but there is WebCL, a JavaScript binding standard for OpenCL.


Asm.js is still JavaScript. You can do all kinds of things that JavaScript can't do by itself if you call native code through a plugin. That's the difference: no plugin, no installation.


What's the difference between Mozilla deciding to bundle Asm.js with Firefox vs. Google deciding to bundle WebCL and WebGL with Chrome?


I refer you to http://asmjs.org/faq.html to understand the difference, particularly the question "Why not NaCl or PNaCl instead? Are you just being stubborn about JavaScript?"


What bothers me about asm.js is that no one really needs it. I mean, don't get me wrong, Unreal Engine 3 in Firefox is pretty cool, but very, very few developers will seriously consider writing a production game for the browser. So then what are we left with?

FFT implementations? OCR? Real-time fluid mechanics? Who does these things in a browser? What JS needs is to finally standardize WebSockets (across the board), to seriously consider multithreading (not this WebWorker nonsense), to implement optional type checking (like Dart), and so on.

The effort that's going into asm.js should go into building better JIT compilers. We seem to be forgetting that JavaScript is a web scripting language, not some general-purpose, low-level, esoteric programming language. There are so many improvements JS could make on the web side of things (you know, where it's actually used) instead of on the "cool 3D Unreal Engine 3 game-programming, bytecode-crunching, Jacobi-simulating" side of things.


> very, very few developers will seriously consider writing a production game for the browser

Zynga, PopCap, Team Meat and a few others come to mind. If you meant 3D "AAA" titles, that just became reasonable a few weeks ago so you'll have to give me some time while the data points come in.

However much or little incentive those would have had in the past, I think it's only going to increase. If FirefoxOS catches on (and it seems to have already with manufacturers), any games will have to be "for the browser", as well as all applications.

> FFT implementations? OCR? Real-time fluid mechanics? Who does these things in a browser?

The snarky answer would be "nobody, yet," but (much like with commercial gaming) that's not even true. People have been doing this strength of computation in the browser since before it was reasonable to do so.

Worse, do you think leaving out this possibility is a good thing for us? I think asm.js is partly a product of benchmark trolling, but it really does open doors for us.

Forgetting B2G for the moment, because that argument is too easy to make and doesn't apply universally anyway: shortly before the publication of asm.js, I had jokingly suggested that someone (plug: http://tapes.fm/) add multiband master compression to a multitrack-playback webapp, because it was a damn shame that, lacking master compression, I couldn't reproduce the dynamics of the original tracks. But if you can do FFT, it's suddenly worth doing, and in fact I am sure it will happen, perhaps not on tapes.fm but somewhere.


The goal of things like asm.js is to make JavaScript a general purpose, low level programming language. I'm not sure why you consider it esoteric, though.

People don't do those things in the browser because asm.js (or some alternative) isn't widespread yet. When it is, they will.


Exactly. JS was never meant to be a general-purpose language. Using the right tool for the right job is a fundamental aspect of software engineering. If you use C for writing web-apps and JavaScript for writing operating systems, you're doing something very, very wrong.

The irony here is that JavaScript isn't even a very good language anyway. So when lambdas are implemented in a language like C++11 (to the chagrin of many programmers who shun the unnecessary complexities of recent revisions of C++), that's one thing; but when we're trying to use JS for low-level programming, that seems very silly.

And as far as your second point is concerned, I beg to differ. Java has been able to run OpenGL in the browser for ages. Flash can also do hardware acceleration, as can Unity3D. 99% of computers have either Flash or Java installed, and yet no serious developer (barring maybe a couple of indies) would consider Java or Flash a serious platform. I don't think HTML5/asm.js will change that.


>JS was never meant to be a general-purpose language.

The original designers and implementers of the Internet, the Web and HTML probably never envisioned the myriad of ways those technologies would be used (abused?). So what?

>Using the right tool for the right job is a fundamental aspect of software engineering.

So tell me, what is the right tool to build a cross-platform application that runs on every system, without requiring native install (and with every other benefit of a typical 'web-app')?

>If you use C for writing web-apps ...

WHY?!? How can you even say that in light of the Unreal demo that Mozilla showcased, where you have a high-performance game running in a standard browser? Given that, you still see zero potential? I don't know if browser gaming will take off, but that's irrelevant. If the browser can run a high-performance game, it tells me devs can use that processing power to build any sort of web application that would traditionally have needed to be native. You don't see the potential in a web-based Photoshop or AutoCAD?

>The irony here is that JavaScript isn't even a very good language, anyway.

You're right, it isn't. So what? HTTP isn't the best protocol, and neither is TCP, but try replacing them with any "superior" protocol. JavaScript, like HTTP and TCP, is ubiquitous; that's its strength.

>no serious developer (barring maybe a couple of indies) would seriously consider Java/Flash as a serious platform.

I consider both serious platforms for gaming and otherwise, even with the knowledge that they will be replaced by HTML/JS on the client.


> 99% of computers have either Flash or Java installed

If you define "computer" as being something that's not a "phone" or a "tablet", sure.

But people writing web content today sure care about those non-"computer"s...


People who write in other languages need it. It's not only about number crunching; it's about people leveraging their knowledge and experience in their languages to write client-side scripts.


Looks like the asm.js Jedi mind trick didn't work on someone :)


If it is not JavaScript, and it gains its performance through static typing -- well, we already have a language like that, and it's more production-ready than asm.js: Dart.


Dart is not statically typed.


In addition to the parent comment: Dart uses its type system to increase performance, even when generating JavaScript: http://www.youtube.com/watch?v=rbLkYlbEZ1E


Isn't he describing Dart in his second-to-last section?


I am not sure what you mean. Dart is a dynamically typed language, not a bytecode. Dart's type annotations don't even have any implications for performance and are not used for optimization by either the Dart VM or dart2js.


Though you can run the Dart VM in "checked" mode, which will validate all assignments to make sure they match the types annotated in your code.



