Once you've built a C-to-JS compiler, and a Nim-to-C compiler, you've got a perfectly-functioning Nim-to-JS compiler. There's no game of telephone gradually confusing things; the fidelity is perfect no matter how deep the pipeline goes.
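The composition point can be shown with a toy pipeline (everything here is made up for illustration: two micro-"compilers" standing in for Nim-to-C and C-to-JS). As long as each stage is a total, meaning-preserving function on its input language, chaining them loses nothing:

```python
def expr_to_c(ast):
    """Stage 1: compile a nested-tuple AST like ("+", 1, ("*", 2, 3))
    into a C-style infix expression string."""
    if isinstance(ast, int):
        return str(ast)
    op, lhs, rhs = ast
    return f"({expr_to_c(lhs)} {op} {expr_to_c(rhs)})"

def c_to_js(c_src):
    """Stage 2: 'compile' the C-style expression to JS by wrapping it in
    an immediately-invoked arrow function. Trivial on purpose -- the
    point is the pipeline shape, not the work each stage does."""
    return f"(() => {c_src})()"

def compile_pipeline(ast):
    # The composed pipeline: AST -> "C" -> "JS". No fidelity is lost at
    # the seam, because each stage fully defines its output for every
    # input the previous stage can produce.
    return c_to_js(expr_to_c(ast))

print(compile_pipeline(("+", 1, ("*", 2, 3))))
```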
(This is also what's amazing about layered packet-switched networks, come to think of it. The core switch transferring data from your phone to AWS doesn't need to speak 4G or Xen SDN; it's just coupled to them, eventually, by gateways somewhere else, and it all just keeps working, one quadrillion times per second, all over the world.)
For example, here's a list of ways in which the C-to-JS compiler is far less than "perfect": http://kripken.github.io/emscripten-site/docs/porting/guidel...
In software, though, "imperfect" means "will work perfectly, 100% of the time, even after a quadrillion tests, within a well-specified subset of all possible inputs; will fail in arbitrary unknown ways outside of that subset."
That's different from being "within tolerance"—even when a physical system is within tolerance, stresses are still at work. Physical systems have lifetimes, and repair procedures, for a reason.
But in software, you don't have to worry about "stress within tolerance." In fact, even if someone else builds an "imperfect" software system that accepts inputs with undefined results, you can just wrap it with an input validator, and suddenly it's a well-defined-100%-of-the-time system!
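A sketch of that wrapping move (all names here are invented for the example): `fast_isqrt` stands in for someone else's "imperfect" routine whose behavior is undefined outside its documented input subset, and the wrapper turns it into a total, well-defined function:

```python
def fast_isqrt(n):
    # Pretend this is a third-party routine: correct for non-negative
    # ints, arbitrary/undefined behavior for anything else.
    return int(n ** 0.5)

def checked_isqrt(n):
    """Wrap the imperfect routine with an input validator. 'Undefined
    outside the subset' becomes 'well-defined 100% of the time': every
    input now yields either a correct answer or a clean, specified
    error."""
    if not isinstance(n, int) or n < 0:
        raise ValueError(f"isqrt requires a non-negative integer, got {n!r}")
    return fast_isqrt(n)
```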
(Of course, in implementation, software has to run on hardware, which is the other kind of imperfect system. But, surprisingly, you can write software to compensate even for that, with failover and quorum-agreement &c.)
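The quorum-agreement idea can be sketched in a few lines (a toy majority vote, not any particular consensus protocol): read from several replicas and trust only a value that a strict majority reports, so a minority of faulty or stale replicas can't corrupt the answer.

```python
from collections import Counter

def quorum_read(replica_values):
    """Return the value reported by a strict majority of replicas,
    tolerating a faulty minority; raise if no such majority exists."""
    value, count = Counter(replica_values).most_common(1)[0]
    if count * 2 > len(replica_values):
        return value
    raise RuntimeError("no quorum: replicas disagree too much")
```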
Unix Philosophy right there (right?)