Which I don't say to excuse such behavior, but neither does it surprise me.
2. The kernel already checks whether the buffer is in a mapped memory region your process has access to. If it isn't, the call fails with errno set to EFAULT.
On Linux, write() (and similar system calls) will transfer at most 0x7ffff000 (2,147,479,552) bytes, returning the number of bytes actually transferred. (This is true on both 32-bit and 64-bit systems.)
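Because write() can transfer fewer bytes than requested (it's capped at 0x7ffff000, and can also be interrupted), callers that need the whole buffer written have to loop. A minimal sketch of such a loop (the helper name `write_all` is my own, not from the thread):

```c
#include <errno.h>
#include <stddef.h>
#include <unistd.h>

/* Write the whole buffer, retrying on partial writes and EINTR.
   Returns 0 on success, -1 on error with errno set (e.g. EFAULT
   when the buffer lies outside the process's mapped memory). */
static int write_all(int fd, const char *buf, size_t len)
{
    while (len > 0) {
        ssize_t n = write(fd, buf, len);
        if (n < 0) {
            if (errno == EINTR)
                continue;   /* interrupted before any bytes moved */
            return -1;      /* real error: EFAULT, EIO, ...       */
        }
        buf += n;           /* write() may transfer fewer bytes    */
        len -= (size_t)n;   /* than requested, so advance and retry */
    }
    return 0;
}
```

The same pattern applies to read(), send(), and friends: treat a short count as normal, not as an error.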
> Kernel memory is wired: The kernel has no access to virtual memory. A quarter of physical RAM is reserved for the kernel, and within that quarter, the kernel allocates a percentage for the file mapping.
Kernel memory is wired, which means it can't be paged out to disk (it's "wired down"). The kernel works mostly with virtual addresses, since it runs in protected mode. The top portion of the virtual address space (populated by pages backed by physical memory) is dedicated to the kernel.
Must be a nightmare to maintain.
> Must be a nightmare to maintain.
OS X is only a few years younger than Classic Mac OS (given that it's basically an overhauled version of NeXTSTEP).
And while I'm sure it's a nightmare to maintain, I don't think it would be any easier if it were "younger." Modern operating systems are fundamentally very complicated pieces of software, and if one were cut down to be easy to maintain (which would mean ditching backwards compatibility whenever it became mildly inconvenient), no one would want to use it.
But isn't this more or less what happens with macOS? Now and then they drop compatibility with some old stuff when they deem everyone should be running on the new thing. I'm thinking Carbon, 32-bit apps, etc.
They do seem more aggressive about that than other vendors, but when they drop something they signal it years in advance, and I doubt they do it to a degree that makes development "easy."