
My int is too big - ingve
http://www.tedunangst.com/flak/post/my-int-is-too-big
======
rwmj
Some of this comes about from the choice of LP64 (32-bit ints on 64-bit
platforms[0]). There seems to be no sensible reason why ints are 32-bit on a
64-bit platform; it only introduces inefficiency[1] and these kinds of
problems.

The one argument I've seen is that there would be no 32-bit type (since the
next smallest integer type, i.e. short, would be 16-bit), but a compiler could
easily have a __builtin_int32 mapped to the C99 standard int32_t etc.

In C code that we write, we have a rule that every use of plain 'int' must be
justified - it's a "code smell". Most places where 'int' is used should be
replaced with 'size_t' (if used as an index to an array), or a C99 int32_t
etc.

[0]
[https://en.wikipedia.org/wiki/64-bit_computing#64-bit_data_m...](https://en.wikipedia.org/wiki/64-bit_computing#64-bit_data_models)

[1]
[https://gist.github.com/rygorous/e0f055bfb74e3d5f0af20690759...](https://gist.github.com/rygorous/e0f055bfb74e3d5f0af20690759de5a7)

~~~
caf
_In C code that we write, we have a rule that every use of plain 'int' must be
justified - it's a "code smell". Most places where 'int' is used should be
replaced with 'size_t' (if used as an index to an array), or a C99 int32_t
etc._

I completely disagree (except with the size_t bit - I agree with that). In my
view, it's usages of int32_t and similar that should be justified: why do you
need _exactly_ 32 bits and two's complement representation - why does _at
least_ 32 bits and an unspecified representation not suffice?

~~~
cronjobber
> why does at least 32 bits and an unspecified representation not suffice?

Because that doesn't help you. Static reasoning becomes harder. Dynamic checks
(pretending the exact size is unknown) are possible, but the exact size _is_
statically known (just not to the programmer). It is a waste.

The sentiment is right, but the solution is metaprogramming, not unhelpfully
underspecified types.

------
gleenn
Whenever I get back into Clojure and use type hinting, I usually forget that
Clojure doesn't allow ints, only longs. Kind of a strange choice given the JVM
supports signed variants of the common word lengths. But then I read articles
like this.

I'm a pretty high-level language user (read: I don't do C, et al), so when I
read stuff like this it always makes me cringe and feel glad Hickey decided to
keep it simple. 64 bits, everywhere! (God save us when things go to 128)

~~~
marcosdumay
> God save us when things go to 128

That will take a while. And even then, people will likely keep using 64 bits
for most things.

~~~
IshKebab
I doubt we'll ever have more than 2^64 bytes of RAM, so that will probably
never happen.

(Please don't make the obvious reply.)

~~~
marcosdumay
Even if we have more than 2^64 bytes of RAM (that's 16 EiB, not really an
astronomical amount), it's not obvious that the optimal way to use it will be
to place everything in a single address space.

~~~
simcop2387
Yeah, I'd fully expect something like PAE to be more useful and common. Even
if something is using more than 16 EiB of RAM, I can't imagine it'll be a
single process; it'll be a larger distributed system instead, leaving each
process smaller than that. Just filling up 16 EiB of RAM right now would take:

    
    
        < simcop2387> farnsworth: eta[16 EiB, 19200 MiB/s] - #now# -> "years"
        < farnsworth> simcop2387:  2177.63532452879 years
    

based on marketing figures for DDR4. So at the very least it's going to take a
while before we even have to worry about a 16 EiB ceiling on address space.

------
mytummyhertz
I actually realized that the original fix to the int overflow was UB, not tim.

As usual, tim gets all the credit ;)

(I mean, he did send the email)

------
jdironman
I have to say this article covers topics leagues ahead of the average computer
user, but in a way that most of them could understand.

