
And at what point in history did a 10x slowdown become acceptable? In certain industries people are trying to squeeze the last bit of performance out of their code, fine-tuning cache usage, SSE, and whatnot. I just don't see this as an innovation in technology; quite the opposite, in fact.



Remember, Emscripten is not the last word; it only demonstrates that such translators are possible, and I believe it is still the only such example so far. With a small team the experience might be significantly improved, especially considering that Emscripten works by translating LLVM bitcode, whereas a less lossy source-to-source translation approach may be possible.

JS might be the first 'architecture' for which standard C's pointer arithmetic and casting restrictions get put to truly good use.
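To make that concrete, here's a rough C sketch (the function names are just illustrative): code that stays inside the standard's pointer and casting rules maps cleanly onto the flat typed-array heap a translator like Emscripten uses, while code that leans on a particular pointer representation or aliasing trick may not.

    #include <stdint.h>
    #include <string.h>

    /* Fine on a JS-style target: arithmetic stays within one array, so the
       "pointer" is effectively just an index into a flat heap. */
    int32_t sum(const int32_t *a, size_t n) {
        int32_t total = 0;
        for (size_t i = 0; i < n; i++)
            total += a[i];
        return total;
    }

    /* Also fine: type punning through memcpy is well defined, so it survives
       a translation where memory is modelled as typed arrays. */
    uint32_t float_bits(float f) {
        uint32_t u;
        memcpy(&u, &f, sizeof u);
        return u;
    }

    /* Risky: reading a float's bytes through an incompatible pointer type
       breaks the aliasing rules; an optimizer or translator is allowed to
       assume this never happens. */
    uint32_t float_bits_bad(float f) {
        return *(uint32_t *)&f;
    }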

-----


When compared to other portable systems (Python, for example), it's "pretty good".

Plus, all the messy porting of C libs to different platforms goes away (you just need one SDL-to-canvas wrapper instead of five different implementations). It allows for a concentration of effort.
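As a rough sketch of what that single wrapper buys (illustrative, not taken from the Emscripten docs): the same SDL 1.2 program builds natively and, through Emscripten's SDL-to-canvas layer, runs in the browser, with no platform-specific code in the application itself.

    #include <SDL/SDL.h>

    int main(void) {
        /* Natively this opens a window; under Emscripten the SDL wrapper
           renders to a canvas element instead. */
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Surface *screen = SDL_SetVideoMode(320, 240, 32, SDL_SWSURFACE);
        if (!screen)
            return 1;

        /* Fill the surface with a solid color and present it. */
        SDL_FillRect(screen, NULL, SDL_MapRGB(screen->format, 0x20, 0x80, 0xff));
        SDL_Flip(screen);

        SDL_Delay(2000);
        SDL_Quit();
        return 0;
    }

Natively that builds with something like gcc main.c $(sdl-config --cflags --libs); with Emscripten the same source goes through emcc main.c -o main.html and the wrapper takes care of the canvas side.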

-----


We aren't talking about a 10x slowdown anyhow; asm.js's slowdown is more like 2x, and that's just version 1.

-----


Extreme portability is often worth a 10x slowdown.

-----


It may well be, but JavaScript can't be "extremely" more portable than, say, C++, since all of the JavaScript engines with major market share to date are implemented in C++.

Also, its portability is not an inherent property of JS; it is a result of its history.

-----



