Have fun porting that, this language is already dead out the gate.
It also seems to ignore low-level details of how compilers optimize programs. Rust actually invested in this on the language side and wasn't just a syntactic difference from C.
It seems to me that porting apps that rely on default types like int, where the int's size/behavior changes between platforms, has about 1000x more chance of compatibility issues when trying to get the same code to run on new platforms. Sure, there could be a performance hit on, e.g., a system that is optimized for 32 bits when you specify 16 bits. That's what types like int_fast16_t are for in C/C++. But porting the code is just easier when the types can't change out from under you.
Early versions of C couldn't even decide whether char was signed or unsigned; how is specifying uint8 or sint8 not strictly better than a char whose signedness you can't be sure of? How is specifying uint16/32/64 or sint16/32/64 not strictly better than saying "[unsigned] int"?
If I make it an int, then on a tiny machine, it might only be 16 bits. That means that the vector can only hold 64K elements. But that's probably OK, because if I'm playing with a chip where ints are 16 bits, it probably doesn't have enough memory, address space, or need to contain more elements than that.
But if I'm on a supercomputer, int might be 64 bits, and vector might need to hold that much. That is, the size of int tends to generally scale in the same neighborhood as the other capabilities of the machine.
If I have to decide how big to make the size of a vector, what size do I pick? int64? That works for the supercomputer people, but it means that an 8051 has to do 64 bit operations to use my vector package. That's... less than ideal.
In fact, though, I would think that an 8051 vector would need to be profoundly simpler than a supercomputer vector; not just differently sized. When you're dealing with 1-16k of RAM and not much more ROM, a full "vector package" isn't what you need, but rather a really, really limited vector package that only does exactly what you need and no more.
In fact, you probably want a version with an 8-bit "length" value -- or even a 7-bit length with some other relevant flag stored in the final top bit, for space optimization reasons.
Source: I was the lead developer on an original Game Boy game, which ran on a CPU that, IIRC, was roughly an 8053 (it had instructions somewhat related to an 8080 or Z-80, lacking all the 16-bit instructions, but adding an 8-bit fast-memory operator reminiscent of the 6502). Back in those days you didn't have "packages." You had code snippets (in assembly language, of course) that you shaved bytes off of to make them fit. And then you shaved more bytes off.
[edit: clarity, and note that it's the Game Boy I was talking about; originally said Game Boy Advance, which was an ARM CPU. Also did a game on that device, but it was the Game Boy I meant to describe.]
    /* Sketch: pick vector_size_t per platform. The platform-check macros
       here are illustrative placeholders, not real ones. */
    #include <stdint.h>

    #if defined(TARGET_16BIT)
    typedef uint16_t vector_size_t;
    #elif defined(TARGET_64BIT)
    typedef uint64_t vector_size_t;
    #else
    #error "Unsupported platform."
    #endif
I understand that C and Java have (sometimes) different use cases; still, I feel reasonably confident that 32-bit arithmetic is going to be efficient enough on most platforms of interest (some embedded platforms might be a problem).
It's true that something like the C99 int_fast16_t (etc.) types might be useful, though.