And at what point in history did a 10x slowdown become acceptable? In certain industries people try to squeeze the last bit of performance out of their code, fine-tuning cache usage, SSE, and whatnot. I just don't see this as a technological innovation; quite the opposite, in fact.
Remember, Emscripten is not the last word; it only demonstrates that such translators are possible, and I believe it is still the only such example so far. Given a small team the experience could be improved significantly, especially since Emscripten works by translating LLVM bitcode, whereas a less lossy source-to-source translation approach may be possible.
JS might be the first 'architecture' for which standard C's pointer arithmetic and casting restrictions get put to truly good use.