To use a more common language and tool as an example: I can't build GCC without already having a working C++ compiler. Granted, GCC will accept any reasonably conformant C++98 compiler, but still... If I'm bootstrapping GCC on Linux (or almost any other major *nix), that compiler's almost always going to be another copy of GCC.
If getting started with Rust on another platform is indeed so difficult, I'd think it would be a better use of their resources to make sure that cross-compilation is functional, rather than messing around with distributing LLVM IR and stuff. If I'm building a C/C++ build environment for a new platform, a cross-compile of my tools is probably how I'd start.
According to this, there have been 290 snapshots in total. And keep in mind that you would also need to rebuild LLVM quite a few times as well during this process, as Rust has continually upgraded its custom LLVM fork over the years.
Niklaus Wirth's idea was to have a portable assembly (P-Code) that he could use to easily bootstrap the compiler.
So porting to a new platform meant:
- set the output format to P-Code
- compile the compiler
- write a P-Code interpreter without any attention to performance
- use it to run the compiler until the new native backend is done
This was especially important back in the day, when each OS had its own proprietary systems programming language or assembly as an option.
Personally, I like this option (using interpreters as a porting tool) better than cross-compiling, since you can work directly on the target system, except in the case of embedded platforms.
LLVM IR is target architecture dependent, so it's not portable between machine types.
For simple projects and not-too-complex code, the IR technically can be cross-platform (although I never tested whether it worked between different endiannesses); we got a number of early prototypes actually working. We compiled IR "object files" on an x64 platform and managed to link them on ARM, after a few extra disassembly steps. They even ran. This was slightly more than 3 years ago.
From my experience, a more accurate statement would be: "LLVM IR is mostly architecture independent." The moment you mix in extremely low-level code, such as atomics, the portability of the IR breaks down. The operations on atomic types are (or at least were) inlined as build-host assembly.
It was an interesting exercise nonetheless.
For an example, look at http://llvm.org/devmtg/2014-10/Slides/Skip%20the%20FFI.pdf, from page 103 to 107.
Edit: There seems to be a misunderstanding. What I am saying is that "LLVM IR is mostly architecture independent" is false. LLVM IR is not even remotely architecture independent, and non-portability shows up with even the simplest code.
Of course that's a different proposition than e.g. compiling a C program to LLVM IR using Clang and then trying to compile that IR on a different target, or trying to interact with non-LLVM-IR functions that conform to the platform ABI.
Of course, the resulting native code may not be the best for the target, since e.g. a native integer on one platform might become a pair of smaller integers on another platform. But it could work.
With all of that said, I expect the proposal was to use the Rust compiler to compile the Rust compiler to IR, and I imagine Rust is complex enough that it must generate at least some target-specific IR. Perhaps one could take the generated IR and "normalize" it, but it's questionable whether it would be worth it.
Of course anything compiled on x64 failed to link on ARM.
I'm hacking away happily with Rust now, though; I quite like it so far and am curious to see where it goes from here.
Before that, cargo was always broken if you didn't have the correct version for building it.
This will certainly change once stable Rust is released.
As with all things Rust, the whole ecosystem still has a huge "under construction" label on it. The things that are supported and intended to stay stable are quite stable, though!