
> int32 instead of int. In C2 you always specify the size

Have fun porting that; this language is already dead out of the gate.

It also seems to ignore low-level details of how compilers optimize programs. Rust actually invested in this on the language side and wasn't just a syntactic difference from C.




I've encountered this prejudice against specific-width types before. It confused me then as now.

It seems to me that apps relying on default types like int, where the size and behavior change between platforms, have about 1000x more chance of compatibility issues when you try to get the same code running on new platforms. Sure, there could be a performance hit on, e.g., a system that is optimized for 32 bits when you specify 16 bits; that's what types like int_fast16_t are for in C/C++. But porting the code is just easier when the types can't change out from under you.

C never even decided whether char was signed or unsigned (it's implementation-defined to this day); how is specifying uint8 or sint8 not strictly better than a char whose signedness you can't be sure of? How is specifying uint16/32/64 or sint16/32/64 not strictly better than saying "[unsigned] int"?
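
A quick sketch of the kind of thing that bites you (the exact output of the plain-char line is implementation-defined, which is the point):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        char c = '\xFF';     /* plain char: signed on most x86 ABIs, unsigned on most ARM ABIs */
        uint8_t u = 0xFF;    /* always 0..255, everywhere */
        printf("%d %d\n", c, u);   /* prints "-1 255" or "255 255" depending on the platform */
        return 0;
    }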


Well, let's say I'm implementing something like STL's vector. I've got a size (number of elements). What type should it be? Clearly something int-like, but what exactly?

If I make it an int, then on a tiny machine, it might only be 16 bits. That means that the vector can only hold 64K elements. But that's probably OK, because if I'm playing with a chip where ints are 16 bits, it probably doesn't have enough memory, address space, or need to contain more elements than that.

But if I'm on a supercomputer, int might be 64 bits, and vector might need to hold that much. That is, the size of int tends to generally scale in the same neighborhood as the other capabilities of the machine.

If I have to decide how big to make the size of a vector, what size do I pick? int64? That works for the supercomputer people, but it means that an 8051 has to do 64 bit operations to use my vector package. That's... less than ideal.


If you've got generics (like templates) or macros, you can make the size type different for different vector scales.
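
In C you can get the same effect by stamping out the struct with a macro (just a sketch; all the names here are invented):

    #include <stdint.h>

    /* Declare a vector-of-T whose length/capacity use SIZE_TYPE. */
    #define DECLARE_VECTOR(NAME, T, SIZE_TYPE) \
        typedef struct {                       \
            T *data;                           \
            SIZE_TYPE len;                     \
            SIZE_TYPE cap;                     \
        } NAME

    DECLARE_VECTOR(small_vec, uint8_t, uint8_t);   /* 8051-class target   */
    DECLARE_VECTOR(big_vec, double, uint64_t);     /* supercomputer-class */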

In fact, though, I would think that an 8051 vector would need to be profoundly simpler than a supercomputer vector; not just differently sized. When you're dealing with 1-16k of RAM and not much more ROM, a full "vector package" isn't what you need, but rather a really, really limited vector package that only does exactly what you need and no more.

In fact, you probably want a version with an 8-bit "length" value -- or even a 7-bit length with some other relevant flag stored in the top bit, for space optimization reasons.
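
Something along these lines, with the length and the flag sharing one byte (purely illustrative; the helper names are made up):

    #include <stdint.h>

    #define LEN_MASK  0x7Fu   /* low 7 bits: length, 0..127 */
    #define FLAG_MASK 0x80u   /* top bit: whatever other flag you need */

    static uint8_t vec_len(uint8_t packed)  { return packed & LEN_MASK; }
    static int     vec_flag(uint8_t packed) { return (packed & FLAG_MASK) != 0; }
    static uint8_t vec_pack(uint8_t len, int flag) {
        return (uint8_t)((len & LEN_MASK) | (flag ? FLAG_MASK : 0u));
    }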

Source: I was the lead developer on an original Game Boy game, which ran on a CPU that, IIRC, was roughly an 8053 (it had instructions somewhat related to an 8080 or Z-80, lacking all the 16-bit instructions, but adding an 8-bit fast-memory operator reminiscent of the 6502). Back in those days you didn't have "packages." You had code snippets (in assembly language, of course) that you shaved bytes off of to make them fit. And then you shaved more bytes off.

[edit: clarity, and note that it's the Game Boy I was talking about; originally said Game Boy Advance, which was an ARM CPU. Also did a game on that device, but it was the Game Boy I meant to describe.]


Which is why a good language should, in addition to fixed-width types, have a size type that is guaranteed to match the pointer width; but it shouldn't have a grab bag of other types with vaguely defined widths and use cases. For example, Rust has u8, u16, etc., plus usize, but no "short", "int", or "long"; Go has uint8, uint16, etc., plus uintptr, except it also has uint, which is arguably a bad idea. C has size_t (and C99 added uintptr_t), but none of its traditional unspecified-width types satisfy that criterion: long is too short to hold a pointer on platforms such as 64-bit Windows, and long long, which is too long on 32-bit platforms, only arrived in C99 anyway.
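
In C terms (hedging a bit: size_t is only guaranteed to cover the largest object, while uintptr_t is the one defined to round-trip a pointer; on the usual flat-memory platforms they're the same width):

    #include <stddef.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void) {
        int xs[100];
        size_t n = sizeof xs / sizeof xs[0];   /* object sizes and indices: size_t */
        uintptr_t p = (uintptr_t)&xs[0];       /* integer wide enough to hold a pointer: uintptr_t */
        printf("%zu elements at %#" PRIxPTR "\n", n, p);
        return 0;
    }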


You can always be explicit:

    #include <stdint.h>

    #if WE_ARE_ON_A_TINY_MACHINE
    typedef uint16_t vector_size_t;
    #elif WE_ARE_ON_A_SUPERCOMPUTER
    typedef uint64_t vector_size_t;
    #else
    #error "Unsupported platform."
    #endif
That's better than your code that inserts 100,000 things into a vector mysteriously breaking on certain platforms (which you may or may not have tested on).
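
That failure mode is easy to reproduce (illustrative only): if the size counter happens to be 16 bits on some target, the count silently wraps long before you notice:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint16_t count = 0;                /* the vector's "size" on a 16-bit build */
        for (long i = 0; i < 100000; i++)
            count++;                       /* silently wraps past 65535 */
        printf("%u\n", (unsigned)count);   /* 34464, not 100000 */
        return 0;
    }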


When people bring up arguments for parameterizing numeric bit width, I kind of shrug now. You don't get debugged code for free when you change the declaration. You have to walk through all your arithmetic regardless; any sufficiently large program will eventually depend on something like overflow behavior or the existence of an extreme range of values. In all likelihood, you're better off copy-pasting and editing when you switch primitives like that.
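
A concrete case (purely illustrative): code that quietly relies on 16-bit wraparound changes meaning the moment you widen the type:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint16_t a16 = 65535;  a16 += 1;   /* wraps to 0; maybe a ring-buffer index counted on this */
        uint32_t a32 = 65535;  a32 += 1;   /* the "same" code now yields 65536 */
        printf("%u %u\n", (unsigned)a16, (unsigned)a32);
        return 0;
    }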


How is this any different from Rust (aside from the [i|u]size types)?


You mean that int32 is some sort of premature specialisation? But if the machine you run that program on has 15-bit ints, won't you have to rewrite part of the logic to deal with the new limit?


Java (the most popular programming language in the world by many metrics, if you didn't know) works exactly like this.

I understand that C and Java have (sometimes) different use cases; still, I feel reasonably confident that 32-bit arithmetic is going to be efficient enough on most platforms of interest (some embedded platforms might be a problem).

It's true that something like the C99 int_fast16_t (etc.) types might be useful, though.
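
Those give a guaranteed minimum width while letting the compiler pick whatever representation is fastest on the target (minimal sketch):

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void) {
        int_fast16_t total = 0;            /* at least 16 bits; often 32 or 64 in practice */
        for (int_fast16_t i = 0; i < 100; i++)
            total += i;
        printf("%" PRIdFAST16 "\n", total);   /* 4950 */
        return 0;
    }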



