Nothing at all stops any jet-based interpreter from using normal standards practices to specify a standard jet profile. We can even expect aggressive vendors to "embrace and extend" that profile.
But, in practice as well as in theory, this isn't going to cause incompatibilities, monopolies, etc. A jet is only valid if it computes exactly the same result as the formal definition it replaces, so code that hits a missing jet still runs correctly -- just slower. Your semantic compatibility is perfect.
In fact, jet competition should have excellent effects on performance across a healthy market. If Google hosts a jet-propelled interpreter, it's natural for it both to offer a tensor-processing library and to hook that library up to a jet that talks to a hardware TPU. Your code will work the same if you move off Google -- you're not physically locked in.
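The argument above can be sketched in a few lines. This is a hypothetical toy, not any real interpreter's API: an evaluator runs the formal definition unless a semantically identical native jet has been registered, and the answer is the same either way -- which is exactly why moving off a vendor (losing its jets) costs you speed, not correctness.

```python
# Formal (slow) definition: multiply by repeated addition.
def formal_mul(a, b):
    acc = 0
    for _ in range(b):
        acc += a
    return acc

# Native jet: the same function, computed by the host's fast multiply.
def mul_jet(a, b):
    return a * b

JETS = {}  # the jet profile: formal function -> fast implementation

def register_jet(formal, jet):
    JETS[formal] = jet

def evaluate(formal, *args):
    # Dispatch to the jet if one is registered; else run the formal code.
    return JETS.get(formal, formal)(*args)

# Without the jet (e.g. after moving off the vendor): slow but correct.
assert evaluate(formal_mul, 6, 7) == 42

# With the jet: same answer, just faster.
register_jet(formal_mul, mul_jet)
assert evaluate(formal_mul, 6, 7) == 42
```

The dispatch table stands in for a whole jet profile; a vendor "extending" the profile just adds entries, and code written against the formal semantics never notices.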
What we just got rid of: the bureaucratic overhead of standardizing a TPU opcode. Look at all the crypto algorithms you have to run in JS in the browser, when half the time there's a hardware instruction for them right on the chip. Connecting those two things is technically trivial -- but not politically trivial. Politically, you need a committee to standardize the builtin.
It's true that the performance disparity would be absurd if you didn't jet-propel basic arithmetic. But no one would ship an interpreter that slow. If a system works both in theory and in practice, it works. You can't really say that in theory, it doesn't work in practice.
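To see concretely what "absurd" means here, consider a toy sketch (illustrative Python, not Nock code) of decrement -- the classic example from jet lore. A pure formal system with only increment defines decrement by counting up from zero, which is O(n); the jet replaces it with the host's native subtraction, which is O(1) and provably the same function on positive inputs.

```python
def formal_dec(n):
    # Formal definition: the number whose successor is n.
    # O(n) -- counts up from zero. Defined for n >= 1.
    i = 0
    while i + 1 != n:
        i += 1
    return i

def dec_jet(n):
    # Native jet: O(1), same result for every n >= 1.
    return n - 1

# Semantic equivalence on a few samples.
for n in (1, 2, 100, 1000):
    assert formal_dec(n) == dec_jet(n)
```

An interpreter that ran `formal_dec` for real would be unusable; one that silently swaps in `dec_jet` is fast and, because the swap is semantics-preserving, indistinguishable except by the clock.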
(Disclaimer: jet inventor here. But Simplicity's jets are much more badass, because of the Coq equivalence proofs.)