This isn't Rust's or Clang's fault, of course; it's a consequence of using LLVM IR as the interchange format for LTO, of LLVM IR having no backwards-compatibility guarantees, and of Rust living out-of-tree relative to LLVM.
In theory, using a stable format for LTO would avoid issues like this. Wasm is one option, and it already has working cross-language inlining today; on the other hand, its IR is less rich and its optimizers are less powerful than LLVM's.
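For context, cross-language LTO with LLVM works by having both frontends emit LLVM bitcode, which the linker's LTO plugin then optimizes as one module; that shared-bitcode step is exactly where the version-mismatch risk lives. A minimal sketch of the Rust-plus-C flow, assuming clang, lld, and a rustc whose bundled LLVM is bitcode-compatible with that clang (the file names are illustrative):

```shell
# Emit LLVM bitcode from both compilers so the linker's LTO plugin
# can inline across the language boundary.
clang -c -O2 -flto=thin c_half.c -o c_half.o
rustc --crate-type=staticlib -O -Clinker-plugin-lto -o librust_half.a rust_half.rs

# Link with clang + lld; the LTO plugin runs LLVM's optimizer over the
# combined bitcode. This step silently assumes the two frontends produced
# compatible bitcode, which is the fragile part discussed here.
clang -flto=thin -fuse-ld=lld c_half.o librust_half.a -o combined
```

If the rustc and clang in that pipeline carry different LLVM versions, the linker-side optimizer is consuming IR it was never tested against.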
In other words, a newer optimizer running on older IR may emit broken code.
For example, imagine that LLVM has a known bug in some pass, with a workaround elsewhere that avoids generating IR that would trigger it. A later version of LLVM may fix the bug and remove the workaround; but then an optimizer from one version running on bitcode from the other could hit the very IR pattern the workaround used to prevent.
Another example is undefined behavior in LLVM IR. It may be handled differently in different versions (the semantics of undef and poison, for instance, have shifted over time), and it's hard to know what might happen from mixing them.
In general, LLVM is heavily tested, but each revision is tested against itself. I'm not aware of any large-scale project that tests every LLVM version on IR produced by every other version. That combination is untested, and I'd be afraid to rely on it.
I don't know what Apple does with user-supplied bitcode, but if I were them I'd be recompiling old bitcode with the old LLVM that matches it, or doing something else careful (like restricting the bitcode to a subset that avoids undefined behavior, etc.).
However, the way I found it was that it affected whatever snapshot of LLVM trunk stable rustc happened to be using at the time... That's one of the reasons why stable Rust should stay away from LLVM trunk.