> but the computer really does only see a big array.

Define "the computer" in this context.

Certainly not the x86 chip itself - it sees memory as a hierarchy of caches (L1, L2, L3) and eventually the memory bus, which it manages through various lookup tables (the TLB etc.) that more closely resemble hash tables on steroids than an array. And that's ignoring per-processor caches on multiprocessor systems, and all the invalidation logic that has to happen as a result.
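
You can watch the 'flat array' fiction leak from plain C - a minimal sketch (Linux/POSIX; the buffer size, access count, and timing method are arbitrary choices of mine): do the same number of loads at different strides, and the supposedly uniform array has very non-uniform access times.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N ((size_t)64 * 1024 * 1024)  /* 64 MiB buffer, a power of two */
    #define ACCESSES (1u << 24)           /* identical work for every stride */

    static double walk(unsigned char *buf, size_t stride) {
        struct timespec t0, t1;
        volatile unsigned char sink = 0;  /* keeps the loads from being optimized out */
        size_t idx = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < ACCESSES; i++) {
            sink += buf[idx];
            idx = (idx + stride) & (N - 1);  /* wrap within the buffer */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        (void)sink;
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void) {
        unsigned char *buf = malloc(N);
        for (size_t i = 0; i < N; i++) buf[i] = (unsigned char)i;
        printf("stride 1 (sequential):      %.3fs\n", walk(buf, 1));
        printf("stride 64 (one per line):   %.3fs\n", walk(buf, 64));
        printf("stride 4096 (one per page): %.3fs\n", walk(buf, 4096));
        free(buf);
        return 0;
    }

If memory really were one flat array, those three numbers would be identical; the gaps between them are the cache hierarchy and the TLB showing through.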

What about processes? One flat memory space! ...except when you communicate with another process, say by sharing memory. Then you realize you can't share your 'indices' without translating them into their indices, because even if the physical memory is the same, each process has its own 'array' for indexing into that memory (and yours doesn't even contain everything theirs does). That's at least 68 arrays on my computer at the time of writing, not one.
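
Here's the index problem made concrete (a sketch with error handling trimmed; the name /hn_demo is just something I made up, and older glibc may need -lrt): map the same POSIX shared-memory object twice and the kernel is free to hand back two different virtual addresses for the same physical page. A raw pointer is only meaningful within the mapping that produced it, which is why what you actually share is offsets.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int fd = shm_open("/hn_demo", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);

        /* Two views of the same physical page... */
        char *a = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        char *b = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        strcpy(a, "hello");
        /* ...same bytes, two different 'indices' into them: */
        printf("a=%p  b=%p  b reads \"%s\"\n", (void *)a, (void *)b, b);

        shm_unlink("/hn_demo");
        return 0;
    }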

The kernel's the one managing this mess of arrays, pinning the pages needed for interrupt handlers and software TLB handling (and not even the kernel is addressing pure physical memory most of the time).
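
Userspace can ask for a small taste of that pinning treatment, too - a sketch using mlock (subject to RLIMIT_MEMLOCK, so expect it to fail under default limits on some systems):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 4096;
        void *p = malloc(len);
        memset(p, 0, len);  /* fault the page in first */

        /* Ask the kernel to pin the page in RAM - no swap-out,
           no page fault on access from here on. */
        if (mlock(p, len) != 0)
            perror("mlock");
        else
            puts("page pinned");

        munlock(p, len);
        free(p);
        return 0;
    }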

I guess you could argue that because your chip supports DMA, you can do all your array indexing through that to reach your 'one true' physical memory addressing scheme, label that as what your computer 'really sees', and ignore the 99.99% of executing instructions that make up the bulk of your computation and have nothing to do with that addressing scheme - but that seems a bit disingenuous.
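
That said, Linux will show you the virtual-to-physical mapping if you ask nicely - a sketch reading /proc/self/pagemap (note: since Linux 4.0 the frame number is zeroed for unprivileged readers, so run it as root to see a real value):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        long pagesize = sysconf(_SC_PAGESIZE);
        char *p = malloc(1);
        *p = 1;  /* touch it so the page is actually mapped */

        /* One 64-bit entry per virtual page; bit 63 = present,
           bits 0-54 = physical frame number. */
        int fd = open("/proc/self/pagemap", O_RDONLY);
        uint64_t entry = 0;
        off_t off = ((uintptr_t)p / pagesize) * sizeof entry;
        pread(fd, &entry, sizeof entry, off);

        if (entry & (1ULL << 63))
            printf("virt %p -> physical frame 0x%llx\n", (void *)p,
                   (unsigned long long)(entry & ((1ULL << 55) - 1)));

        close(fd);
        free(p);
        return 0;
    }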




From C and assembly, that's very much how memory is accessed - as compared with the notion of objects mentioned in the GP.
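
i.e. at the C level a pointer really is just an integer index into one big array of bytes, and the language lets you treat it that way (a toy sketch - inspecting your own objects like this is well-defined; conjuring up arbitrary addresses is not):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int x = 0x11223344;

        /* The pointer as an 'index' into the flat address space: */
        uintptr_t index = (uintptr_t)&x;
        printf("'index' of x: 0x%lx\n", (unsigned long)index);

        /* And the object as a run of bytes at that index
           (the order you see depends on endianness): */
        unsigned char *mem = (unsigned char *)&x;
        for (size_t i = 0; i < sizeof x; i++)
            printf("byte %zu: 0x%02x\n", i, mem[i]);
        return 0;
    }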

The fact that certain accesses may cause the memory layout to change, or other strange things to happen, is something better left to the computer engineers. :)



