
It's a philosophy thing. There are plenty of languages that just solve for flat Von Neumann memory models. In the past, that did not describe all machines -- e.g. the pre-standard Borland C/C++ compilers for x86 real mode that used 32-bit ptrdiff_t and 16-bit size_t (which doesn't count as an answer because they were pre-standard in so many ways). In the future, it may or may not describe all machines -- it's possible that RISC wins forever and we continue to push all complexity into the software, or it's possible that something capabilities-based comes forward instead. Similarly with primitive types: perhaps some day we'll have unums.
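To make the ptrdiff_t/size_t point concrete, here's a minimal sketch using only standard macros (nothing Borland-specific, and the output obviously varies by platform): the two types are specified independently, so portable code queries them rather than assuming a flat model.

    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    /* size_t and ptrdiff_t are defined independently by the standard;
       on a segmented target they can legitimately have different widths. */
    int main(void) {
        printf("size_t:    %zu bytes, max %zu\n",
               sizeof(size_t), (size_t)SIZE_MAX);
        printf("ptrdiff_t: %zu bytes, max %td\n",
               sizeof(ptrdiff_t), (ptrdiff_t)PTRDIFF_MAX);
        return 0;
    }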

If the future is still flat, then... we've paid the cost of an extra few paragraphs in the standard? Folks interested in writing non-portable code can ignore this and use implementation-defined behaviors; those who want to be fully portable to future machines can be more careful (although ptrdiff_t is basically a cursed type anyway, so I don't see this particular overhead mattering). Yes, getting rid of ones' complement makes sense these days -- but if you were worried about the compatibility issues around supporting it properly and avoiding undefined behavior, you're probably spending exactly as much effort today dealing with the fact that INT_MIN and friends are cursed mathematically on two's complement machines.
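For anyone who hasn't hit it: "cursed" here means the two's complement range is asymmetric, so INT_MIN has no in-range positive counterpart. A minimal sketch, assuming a typical 32-bit int:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int x = INT_MIN;

        /* printf("%d\n", -x);   undefined behavior: -INT_MIN overflows int */

        /* Common workaround: do the negation in unsigned arithmetic,
           which is well-defined modulo 2^N. */
        unsigned int magnitude = 0u - (unsigned int)x;
        printf("INT_MIN = %d, magnitude = %u\n", x, magnitude);
        return 0;
    }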

Meanwhile, suppose that we actually break out of this local minimum. That's an interesting world, and an even more interesting one if we can carry most of our software forward with us.

We're not even a century into designing computers yet. I can't even begin to predict what architectures will look like in another five or ten centuries -- even assuming that transistor budgets continue to taper off. But I'll say that of the languages I use daily, C is one of the few that I'd still expect to be around and functional in that future, even if only as an archeological curiosity; it allows a decently high fidelity description of how an algorithm should be implemented across more than half a century of hardware.




C23 follows C++ in requiring signed integers to have two's complement representation. I think by this point it's pretty much settled that it's the "optimal" way to implement signed integers. Now if we change to non-binary architectures (or something radically different), things might change, but at that point quite a lot of the C standard will have to be thrown out as well.
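A quick sketch of what the guarantee pins down in practice (assuming a typical 32-bit int with no padding bits): the bit pattern of -1 is all ones, and the range is asymmetric, with INT_MIN == -INT_MAX - 1.

    #include <stdio.h>
    #include <string.h>
    #include <limits.h>

    int main(void) {
        int minus_one = -1;
        unsigned int bits;
        memcpy(&bits, &minus_one, sizeof bits);  /* inspect the object representation */
        printf("-1 as bits: %08x\n", bits);      /* ffffffff on a 32-bit int */
        printf("INT_MIN == -INT_MAX - 1: %d\n", INT_MIN == -INT_MAX - 1);
        return 0;
    }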


I /mostly/ agree -- it's hard for me to think of a modern or future platform that would use ones' complement integers... with one possible exception. Integers represented on top of IEEE floating point are inherently ones-complement (sign+magnitude), and I can definitely imagine potential future platforms that use 64-bit floats as their primary primitive, restricting them to integer representations for certain tasks. Think of designing something like a DSP-focused microcontroller in a world where onboard SRAM can significantly exceed 4GB, where a desire for C compatibility and occasional tasks makes it worthwhile to support function and data pointers, but where supporting 53-bit pointers in a float ends up simpler than adding a 64-bit integer ALU that would rarely be used. In this case the associated types (ptrdiff_t, for example) might end up as 53-bit ones-complement integers stored in floats.
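The 53-bit figure comes from the double's significand: every integer with magnitude up to 2^53 round-trips through a double exactly. A tiny illustration (the "address" value here is made up, just to show the exact-integer range):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint64_t addr = (UINT64_C(1) << 52) + 12345;  /* fits in 53 bits */
        double boxed = (double)addr;                  /* exact, no rounding */
        printf("53-bit value survives: %d\n", addr == (uint64_t)boxed);

        uint64_t wide = (UINT64_C(1) << 53) + 1;      /* needs 54 bits */
        printf("54-bit value survives: %d\n", wide == (uint64_t)(double)wide);
        return 0;
    }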


I know I'm being picky, but sign+magnitude and ones' complement are a good bit different from each other despite both having negative zero.
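For anyone skimming, here's the difference in 8 bits, encoding -5 by hand (these are just manual encodings, not how any particular machine stores things):

    #include <stdio.h>

    /* sign+magnitude:   sign bit, then |5|   -> 1000 0101
       ones' complement: bitwise NOT of +5    -> 1111 1010
       two's complement: NOT of +5, plus one  -> 1111 1011
       The first two both have a negative zero, but the rest of the
       negative patterns differ. */
    int main(void) {
        unsigned v = 5;
        printf("-5: sign+mag %02X, ones' %02X, two's %02X\n",
               0x80u | v, (~v) & 0xFFu, (0u - v) & 0xFFu);
        return 0;
    }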



