There seem to be all kinds of limits in macOS that break things in weird ways they shouldn't. For example, calling the read() or write() syscalls with a count greater than 2^31-1 fails with EINVAL, rather than doing something sane like a partial read/write. (This is not excused by the POSIX sentence "If the value of nbyte is greater than {SSIZE_MAX}, the result is implementation-defined", as SSIZE_MAX is 2^63-1 here, not 2^31-1.) See https://gitlab.haskell.org/ghc/ghc/-/issues/17414 for details.
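For the curious, here's a minimal repro sketch of what's described above (my own throwaway test program, not the GHC code from the ticket; it assumes a 64-bit userland with /dev/null available, and uses an anonymous mapping so it doesn't actually need 2GB of RAM):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        size_t len = (size_t)1 << 31;   /* 2^31 bytes, one past 2^31 - 1 */

        /* Anonymous mapping: the pages exist but are never touched,
           so this doesn't really consume 2GB of memory. */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANON, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        int fd = open("/dev/null", O_WRONLY);
        if (fd < 0) { perror("open"); return 1; }

        ssize_t n = write(fd, buf, len);
        if (n < 0)
            /* macOS (per the GHC ticket): EINVAL, no partial write. */
            printf("write failed: %s\n", strerror(errno));
        else
            /* Linux: caps the transfer at 0x7ffff000 and returns that. */
            printf("wrote %zd of %zu bytes\n", n, len);
        return 0;
    }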
It wouldn't surprise me if the majority of such >2GB write attempts were int32_t underflows where less than 2GB of data was actually provided, and further that returning EINVAL without ever touching said data might avoid some segfaults as a result.
Which I don't say to excuse such behavior, but neither does it surprise me.
1. It wouldn't segfault, because it's the kernel reading the data. It can read from anywhere.
2. The kernel already checks whether the buffer is in a mapped memory region that your process has access to. If it isn't, the error is signaled with errno=EFAULT (see the sketch below).
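A quick illustration of (2), as a throwaway sketch (the address here is just an arbitrary pointer into what is almost certainly an unmapped page):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        /* Pass a pointer into an unmapped region: the kernel's
           copy-from-user check fails, write() returns -1 with
           errno = EFAULT, and the process does not segfault. */
        ssize_t n = write(STDOUT_FILENO, (const void *)0x1, 16);
        printf("write returned %zd, errno: %s\n", n, strerror(errno));
        return 0;
    }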
On Linux, write() (and similar system calls) will transfer at most 0x7ffff000 (2,147,479,552) bytes, returning the number of bytes actually transferred. (This is true on both 32-bit and 64-bit systems.)
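Which means a robust caller has to loop over short writes anyway, and capping each chunk below 2^31 happens to sidestep the macOS EINVAL above too. A sketch of such a wrapper (my own, not taken from any particular libc):

    #include <errno.h>
    #include <limits.h>
    #include <unistd.h>

    /* Write the whole buffer: retry on EINTR and partial writes, and cap
       each syscall at INT_MAX bytes so the count never exceeds 2^31 - 1
       (Linux would cap it at 0x7ffff000 on its own anyway).
       Returns 0 on success, -1 on error with errno set by write(). */
    static int write_all(int fd, const void *buf, size_t count) {
        const char *p = buf;
        while (count > 0) {
            size_t chunk = count < (size_t)INT_MAX ? count : (size_t)INT_MAX;
            ssize_t n = write(fd, p, chunk);
            if (n < 0) {
                if (errno == EINTR)
                    continue;
                return -1;
            }
            p += n;
            count -= (size_t)n;
        }
        return 0;
    }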
What I find more ridiculous on Linux is the argument-list limit for processes. On a machine with 64GB of RAM I still often get "Argument list too long" when invoking commands with wildcards, even though the total size of all the filenames combined is much less than 1GB.
But is the total size of all filenames combined much less than the stack size? The argument list is stored on the stack of the new process, and it still has to leave enough space there for the program to use. See bprm_stack_limits in fs/exec.c for details.
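You can see that relationship from userspace: the kernel reserves roughly a quarter of the stack rlimit for the argument and environment strings, and glibc's sysconf(_SC_ARG_MAX) reports a figure derived from that same rlimit. A quick sketch (assuming Linux with glibc; with the common 8MB default stack this prints about 2MB for ARG_MAX):

    #include <stdio.h>
    #include <sys/resource.h>
    #include <unistd.h>

    int main(void) {
        struct rlimit rl;
        if (getrlimit(RLIMIT_STACK, &rl) != 0) { perror("getrlimit"); return 1; }

        /* bprm_stack_limits caps the argv[] + envp[] strings at about a
           quarter of the stack rlimit, so the usable limit scales with
           `ulimit -s`, not with how much RAM the machine has. */
        printf("RLIMIT_STACK         = %llu bytes\n",
               (unsigned long long)rl.rlim_cur);
        printf("RLIMIT_STACK / 4     = %llu bytes\n",
               (unsigned long long)(rl.rlim_cur / 4));
        printf("sysconf(_SC_ARG_MAX) = %ld bytes\n", sysconf(_SC_ARG_MAX));
        return 0;
    }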
I don't know much about Linux kernel internals, but I suppose the stack grows down in a virtual address space, where pages of the stack are allocated as needed by the kernel (?)
> Kernel memory is wired: The kernel has no access to virtual memory. A quarter of physical RAM is reserved for the kernel, and within that quarter, the kernel allocates a percentage for the file mapping.
Kernel memory is wired, which means it can't be paged out to disk (it's "wired down"). The kernel works mostly in virtual addresses as it runs in protected mode. The top portion of the virtual address space (populated by pages backed by physical memory) is dedicated to the kernel.
> Mac OS X is now older than Mac OS “Classic” was when OS X was first released.
> Must be a nightmare to maintain.
OS X is only a few years younger than Classic Mac OS (given that it's basically an overhauled version of NeXTSTEP).
And while I'm sure it's a nightmare to maintain, I don't think it would be any easier if it were "younger." Modern operating systems are fundamentally very complicated pieces of software, and if one were cut down to be easy to maintain (which would include ditching backwards compatibility whenever it became mildly inconvenient), no one would want to use it.
> and if one were cut down to be easy to maintain (which would include ditching backwards compatibility whenever it became mildly inconvenient), no one would want to use it.
But isn't this more or less what happens with macOS? Now and then they drop compatibility with some old stuff when they deem everyone should be running on the new thing. I'm thinking Carbon, 32-bit apps, etc.
> But isn't this more or less what happens with macOS?
They seem more aggressive about doing that than other vendors; when they do, they signal it (years?) in advance, and I doubt they do it to a degree that makes development "easy."