
I think we’re mostly in agreement. I was trying to make the point that the model needs to bake in some practical assumptions at the level we’re discussing, or it breaks down. Both binary integer addition and binary address lookups are O(L), where L is the number of bits. In practice we treat L as a constant for both, so both operations are O(1), and hardware completes them in essentially a single atomic step.
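As a concrete illustration, here’s a minimal Python sketch of a ripple-carry adder (the function name and LSB-first bit-list representation are mine): one full-adder step per bit position, which is exactly the O(L) work that a fixed word size lets hardware collapse into a single cycle.

  def ripple_carry_add(a_bits, b_bits):
      # Add two L-bit numbers given as LSB-first bit lists.
      # One full-adder step per bit position => O(L) work overall.
      carry = 0
      out = []
      for a, b in zip(a_bits, b_bits):
          out.append(a ^ b ^ carry)
          carry = (a & b) | (carry & (a ^ b))  # carry-out of this position
      out.append(carry)  # final carry becomes the top bit
      return out

  # 4-bit example: 5 (0101) + 3 (0011) = 8 (1000)
  print(ripple_carry_add([1, 0, 1, 0], [1, 1, 0, 0]))  # -> [0, 0, 0, 1, 0]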

Even with data so big you can’t fit it in local storage, you’re almost surely not going to exceed a 64-bit address space. And if the data doesn’t fit in cache/RAM/HDD/cloud storage, the bottleneck won’t be the binary address lookup; the upper bound will be a function of the latency between the CPU and wherever the memory actually lives (which is what TFA is talking about).

Doing arithmetic on numbers larger than 64 bits (e.g. in cryptography) is far more common than exceeding the address space the OS or hardware actually supports (hundreds of terabytes up to exabytes, well short of the theoretical 64-bit maximum).
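For instance, here’s a rough Python timing sketch (bit widths and setup are arbitrary, just for illustration) of modular exponentiation, the workhorse of RSA, where the per-operation cost visibly grows with operand width instead of staying O(1):

  import secrets, time

  # Cost of one modular exponentiation at increasing bit widths.
  # Illustrative only; exact numbers vary by machine and interpreter.
  for bits in (64, 512, 2048):
      mod = secrets.randbits(bits) | (1 << (bits - 1)) | 1  # odd L-bit modulus
      base = secrets.randbits(bits) % mod
      exp = secrets.randbits(bits)
      t0 = time.perf_counter()
      pow(base, exp, mod)  # CPython's built-in big-int modexp
      print(f"{bits:>5}-bit modexp: {(time.perf_counter() - t0) * 1e3:.3f} ms")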



