But I really wanted some explanation of why Windows process startup seems to be so heavyweight. Why does anything that spawns lots of little independent processes take so bloody long on Windows?
I'm not saying "lots of processes on Windows is slow, lots of processes on Linux is fast, Windows uses CreateProcess, Linux uses fork, CreateProcess is an alternative to fork/exec, therefore fork/exec is better than any alternative." I can imagine all kinds of reasons for the observed behavior, few of which would prove that fork is a good model. But I still want to know what's going on.
Beyond the raw Process and Thread kernel objects, which are represented by EPROCESS + KPROCESS and ETHREAD + KTHREAD structures in kernel address space, a Win32 process also needs to have:
- A PEB (Process Environment Block) structure in its user address space
- An associated CSR_PROCESS structure maintained by Csrss (Win32 subsystem user-mode)
- An associated W32PROCESS structure for Win32k (Win32 subsystem kernel-mode)
I'm pretty sure these days the W32PROCESS structure only gets created on-demand with the first creation of a GDI or USER object, so presumably CLI apps don't have to pay that price. But either way, those latter three structures are non-trivial. They are complicated structures and I assume involve a context switch (or several) at least for the Csrss component. At least some steps in the process also involve manipulating global data structures which block other process creation/destruction (Csrss steps only?).
I expect all this Win32-specific stuff largely doesn't apply to e.g. the Linux subsystem, and so creating processes should be much faster. The key takeaway is it's all the Win32 stuff that contributes the bulk of the overhead, not the fundamental process or thread primitives themselves.
EDIT: If you want to learn more, Mark Russinovich's Windows Internals has a whole chapter on process creation which I'm sure explains all this.
This tickles my brain. I read some blog post bitching that because Windows DLLs are kinda heavyweight, it's way too easy to end up paying that price without realizing it.
It was a long time ago (~2006), and I honestly can't remember, but I feel like turning off anti-virus (and also backups, software updaters, and any other resident software) would have been one of the first things I would have checked. There was definitely something more fundamental going on.
One could probably argue that processes on Windows need to be lighter-weight now that sandboxing is a common security practice. These days, programs like web browsers opt to create a large number of processes both for security and stability purposes. In much the same way that POSIX should deprecate the fork model, Windows should provide lighter-weight processes.
Found my previous comment on it (which has a test case but not numbers): https://news.ycombinator.com/item?id=18226921
Now in a Linux VM it's approx 10 times faster than even WSL. And that should probably be even faster natively.
So anyway, WSL is really usable, and if you really only got 10 processes per second, something is wrong. Maybe you are using a crappy antivirus (I've heard that Kaspersky makes WSL extremely slow).
When I disable it, it is down to ~0.5s
I would not build a Linux kernel here instead of in a VM, but for tons of things, this is very usable.
To see how Libreoffice does it, see https://opengrok.libreoffice.org/xref/core/sal/osl/w32/proce...
Part of the problem is the DLLs, as many have mentioned, and also the fact that each statically links in its own CRT (C run-time). The shared C run-time MSFT is working on should help here. As should more lazy loading and setup.
No, that isn't the case on DLLs shipped with Windows.
Many frameworks are backed by XPC services, where the parent process has a socket-like connection to a backend server. After forking, the child would have no valid connection to the server. The fork() function establishes a new connection in the child for libSystem, to allow Unix programs to port easily to macOS, but other services' connections are not re-established. This makes fork on macOS (i) slow, and (ii) unsafe for code that touches virtually any of Apple's APIs.
Fork() is now basically the root of a looong list of special cases in so many aspects of programming. Things get even worse when you use a language with a built-in runtime, such as Golang, for which multi-threaded programming is the default behaviour. If fork() can't even handle multiple threads, what is the real point of having it when an 8-core, 16-thread AMD processor is about $150?
These threads and those threads are not the same. The 16-thread SMT processor will happily chew on 16 different programs, processes, or whatever the load at the moment is; e.g. if you use Python's multiprocessing you can create 16 processes and they'll be executed in parallel.
fork() can handle multiple threads but you have to be attentive when cleaning up etc. - quite often, code using fork() will get confused when you spawn threads, and code using threads will get confused when you fork()
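One mitigation worth knowing about (a minimal sketch of mine, not something from the comment above): POSIX provides pthread_atfork() to register handlers that run around fork(), e.g. so a mutex held by some other thread isn't left permanently locked in the child, where that thread no longer exists.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Runs in the parent just before fork(): take the lock so no other
       thread can hold it at the moment the address space is copied. */
    static void prepare(void) { pthread_mutex_lock(&lock); }
    /* Runs after fork() in both parent and child: release it again. */
    static void release(void) { pthread_mutex_unlock(&lock); }

    int main(void) {
        pthread_atfork(prepare, release, release);

        pid_t pid = fork();
        if (pid == 0) {
            /* The child has only one thread, but the lock is in a
               known (unlocked) state thanks to the handlers above. */
            puts("child: lock is usable");
            _exit(0);
        }
        return 0;
    }

This doesn't make arbitrary libraries fork-safe; it just shows the hook the standard gives you.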
Only in the pathological case where the large process is backed solely by the 4kb pages. The hardware has long now supported large pages - on x86 since Pentium Pro, if memory serves - and huge pages. The popular OSes (Linux 2.6+ and Windows 2003+) also do support large and huge pages.
A 2GB process can easily be three pages: r/x code, r/w stack, r/w data (2gb). Granted, it gets a bit more complex if mmapped I/O or JIT are used, but since both are mature technology now, it's fine to point fingers at any inefficiency and demand better. Another caveat would probably be shared libraries loading at separate address ranges, which, IMO, is another reason to ditch shared libraries for good.
Contrary to popular wisdom, OS research is still relevant.
OS research might be relevant, and it's good that some people have wild ideas, but honestly I doubt this one will go anywhere :P
With Spectre, proper process bounds (well, address spaces) are more important than ever -- and even without that I'd still have cited them as incredibly important, in the sense that I'd rather have more of them than fewer. Given that, code reuse involves shared libraries, for several good reasons. The obvious one is not wasting RAM. Then there is the update problem (how to patch programs when security holes are discovered, especially if multiple parties are involved). And on top of that there is the cache pollution problem, which is related to the code duplication problem, and which is quite insidious because it is probably simultaneously hard to benchmark and very real: an ambient loss of perf, not in very hot paths, but one that will still have an impact on the general perf of a system, much like Spectre mitigations are having a big impact.
Now, we could like address space boundaries so much that we would want to use even MORE processes in place of shared libraries, but this obviously does not work for all services (and Spectre bites us again because context switches are not cheap). Plus, if you take it to the extreme, this makes systems extremely hard to design, and even bigger. This is part of the reason we are using Linux instead of Hurd... (well, Linux is too far in the opposite direction, but there is hope that it will evolve toward a middle ground in the long term)
And anyway that does not fit the narrative at all of using more huge pages.
Now there are the usual radical ideas about how everything should be running on some kind of VM (sometimes even including the kernel), drastically reducing the amount of "native" code; but the reality of our current systems is that "everything" already relies on multiple VMs, and I doubt it will ever converge to only one, nor should it (because of the monoculture that would induce). Plus the ambient perfs are still lower than native code, and TBH I don't expect that to ever change.
So, why and how would you like to get rid of shared libraries?
Most high integrity real time OSes are microkernels.
Interesting that you mention Android, one of the key points of Project Treble is using separate processes for drivers with Android IPC to talk to the kernel (including hardware buffer handles).
So let's put manpower kind of aside for the bulk of the dev (where thousands of man-years are needed for any big project) and look at what could actually be achieved with the very small amount of manpower bootstrapping those projects. At this point you understand that the manpower thing is only a convenient narrative, while the reality is that even early on, Linux-based systems simply worked better than Hurd-based systems.
Because general-purpose microkernel-based systems are hard, and especially those with a design as ambitious as the Hurd. (When you start to want to strongly isolate FS from VM code, it even stops being just hard and starts to be really HARD.)
And this was even worse at the time for perf reasons (but perf reasons are still applicable even today, given the impact on mobile and datacenter workloads)
However, yes, I'm in favor of more isolation today, because for a shitload ton of drivers it literally won't make any difference whether you take 1us or 50us when you only need to execute once every few seconds. So it is absurd to the highest level to run in kernel space if you don't actually need it. Sadly, Linux is way behind on that subject today.
That being said, and back to the original subject, a microkernel, or at least a less monolithic kernel, won't really get us in the fewer-shared-libraries direction if it just re-implements the same perimeter of features as a monolithic one, nor in the huge-pages-everywhere direction...
Is it really popular wisdom though, or is it the opinion of one person that got hyped up, much like the hype around a subpar programming language that same person worked on?
I don't think that's (entirely) true. This is more because a large service with some potent master process will have said process Do Stuff(tm) that will involve opening files, threads, signal handling, or whatever things that need to be taken care of one way or the other when forking to a worker (or whatever other child) process. It's therefore much simpler to fork a master subprocess into a child spawner earlier on, when it has yet to do anything. You significantly reduce your chances of screwing up if you have nothing to clean up for.
I just did a quick test: a 100mb process generally takes >2ms to fork, while a 1mb or less process takes 70us. It seems like it's pretty much linear with process size.
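Roughly how you could reproduce that kind of measurement (my sketch, not the original test case; note it times a fork()+wait round trip, and the cost mostly comes from copying page tables for the touched footprint):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        /* Parent's heap footprint in MiB (default 100). */
        size_t mib = argc > 1 ? strtoul(argv[1], NULL, 10) : 100;
        char *buf = malloc(mib << 20);
        memset(buf, 1, mib << 20);          /* touch every page */

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        pid_t pid = fork();
        if (pid == 0) _exit(0);             /* child does nothing */
        waitpid(pid, NULL, 0);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e3;
        printf("fork+wait with %zu MiB resident: %.0f us\n", mib, us);
        free(buf);
        return 0;
    }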
* redirect stdin, stdout, and stderr
* open files that might be needed and close files that aren't
* change process limits
* drop privileges
* change the root directory
* change namespaces
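Put together, the pattern those steps describe looks roughly like this between fork() and exec() (a hedged sketch; the chroot/namespace/privilege calls need root and the ordering matters, so treat the details as illustrative):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sched.h>
    #include <sys/resource.h>
    #include <unistd.h>

    /* Sketch: set up a child's environment between fork() and exec(). */
    static void run_child(const char *logpath, char *const argv[]) {
        pid_t pid = fork();
        if (pid != 0) return;                       /* parent continues */

        int log = open(logpath, O_WRONLY | O_CREAT | O_APPEND, 0644);
        dup2(log, STDOUT_FILENO);                   /* redirect stdout  */
        dup2(log, STDERR_FILENO);                   /* ...and stderr    */
        close(log);

        struct rlimit rl = { .rlim_cur = 64, .rlim_max = 64 };
        setrlimit(RLIMIT_NOFILE, &rl);              /* change limits    */

        unshare(CLONE_NEWNET);                      /* change namespace */
        chroot("/var/empty");                       /* change root dir
                                                       (assuming the target
                                                       binary is reachable
                                                       from the new root) */
        chdir("/");
        setgid(65534);                              /* drop privileges  */
        setuid(65534);

        execv(argv[0], argv);
        _exit(127);                                 /* exec failed      */
    }

The child carries all of that state into the new program without the parent ever having to change its own.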
I haven't yet read the paper, but considering the incredible simplicity from the programmer's PoV that fork provides, and the fact that at least Linux makes it pretty god damn fast, especially compared to Windows' non-forking model, I can't really see myself agreeing with their conclusion.
> While a spawn-like API is preferred for most instances of starting a program, for full generality it requires a flag, parameter, or new helper function controlling every possible aspect of process state. It is infeasible for a single OS API to give complete control over the initial state of a new process. ...
> clean-slate designs [e.g., 40, 43] have demonstrated an alternative model where system calls that modify per-process state are not constrained to merely the current process, but rather can manipulate any process to which the caller has access ...
> Retrofitting cross-process APIs into Unix seems at first glance challenging, but may also be productive for future research.
Now, the better answer to that is to have nothing like a set-uid mechanism, which would be nice, for sure. But just how much violence are we to do to the Unix model, and when are we expected to finish this? It's not like Linux can be abandoned -- for better or worse, Linux "won".
> a new process starts as an empty address space, and an advanced user may manipulate it in a piecemeal fashion, populating its address-space and kernel context prior to execution, without needing to clone the parent nor run code in the context of the child. ExOS  implemented fork in user-mode atop such a primitive. Retrofitting cross-process APIs into Unix seems at first glance challenging, but may also be productive for future research.
    pid_t child = pfork();                        /* hypothetical: create a new, not-yet-running process    */
    for (int fd = 0; fd < 3; fd++)
        p_open2(child, fd, pts, O_RDWR);          /* hypothetical: open the pty as the child's fds 0-2      */
    char** envp = munge_env(/* parent's */ environ);
    int err = p_execve(child, file, argv, envp);  /* hypothetical: load and start the program in the child  */
> clean-slate designs [e.g.,40, 43] have demonstrated an alternative model where system calls that modify per-process state are not constrained to merely the current process, but rather can manipulate any process to which the caller has access. This yields the flexibility and orthogonality of the fork/exec model, without most of its drawbacks: a new process starts as an empty address space, and an advanced user may manipulate it in a piecemeal fashion, populating its address-space and kernel context prior to execution, without needing to clone the parent nor run code in the context of the child.
Oh definitely. But why are they saying it "looks kind of familiar..."? That subsection is already the subject of the conversation. Surely they're not saying it looks similar to itself, right?
    int p_execve(pid_t target, char* filename,
                 char** argv, char** envp);
A target of zero specifies the current process, so p_execve(0,file,argv,envp) is equivalent to execve(file,argv,envp).
We should probably require some kind of permission check before allowing the calling process to do this.
Regardless of this paper, I don't see its use declining significantly any time soon.
The paper also mentions the use case of multiprocess servers, which rely heavily on fork(), but dismisses it since they could be implemented with threads. With threads, though, a crash in a worker would bring down the whole application, while a crashed worker process could just be restarted.
A proper worked example of removing fork() from an actual program would help. For example, how is nginx on Windows implemented?
I suppose that makes sense on an OS on which crashing is expected behaviour, though some people would want to know what bug caused the crash and whether that bug has security implications.
System runs out of memory, various processes get terminated, and the easiest way to get it back into a good state is a restart: not that worrying, but do you have a memory leak? Some process segfaults with 54584554454d4f53 in the PC: should be investigated, not glossed over.
Removing fork() will take a long, long time. Every popular use case needs an alternative that doesn't suck.
But then again, fork() is kinda awful.
To support a multi-process web browser architecture that Chromium pioneered, you need to spawn processes. See https://chromium.googlesource.com/chromium/src/+/HEAD/docs/l...
In section 7 it suggests "We should therefore strongly discourage the use of fork in new code, and seek to remove it from existing apps."
Is anyone here going to help work on changing those 1304 packages?
I have already over-volunteered for thankless FOSS tasks like this, so I know it won't be me.
The goal is not "remove", but "seek to remove". The relevant definition of "seek" here is "to make an attempt" says https://www.merriam-webster.com/dictionary/seek .
How many of those 1304 Ubuntu packages require fork()? Are there benefits to replacing (say) 1283 of them with posix_spawn()?
> 7. GET THE FORK OUT OF MY OS!
Someone couldn't resist...
Interested to see what this paper has to say.
Fork-requiring program class 1:
The biggest example where fork() is needed is webservers/long-running programs with significant unchanging memory overhead and/or startup time.
Many large applications written in a language or framework that prefers the single-process/single-thread model for executing requests (e.g. Python/gunicorn, Perl, a lot of Ruby, NodeJS with ‘cluster’ for multicore, etc.) are basically dependent on fork(). Such applications often have a huge amount of memory required at startup (due to loading libraries and initializing frameworks/constant state). Creating workers that can execute requests in parallel but don’t require any additional memory overhead (just what they consume per request) is essential for them. fork()ing without exec()ing a new program facilitates this memory sharing; everything is copy-on-write, and most big webapps don’t need to write most of the startup-initialized memory they have, though they may need to read it.
Additionally, starting up such programs can take a long time due to costly initialization (seconds or minutes in the worst cases); using fork() allows them to quickly replace failed or aged-out subprocesses without having to pay that overhead (which also typically pegs a CPU core) to change their parallelism. “Quickly” might not be quick enough if a program needs to continually launch new subprocesses, but for periodically forking (or just forking-at-startup) long-running servers with a big footprint, it’s far better than re-initializing the whole runtime. For better or worse, we’ve come far enough from old-school process-per-request CGI that it is no longer feasible in most production deployments.
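As a hedged illustration of that pre-fork pattern (mine, with the expensive initialization faked by a big allocation): the parent pays the startup cost once, then fork()s workers that inherit the initialized state copy-on-write.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NWORKERS 4

    int main(void) {
        /* Stand-in for seconds/minutes of framework init producing a
           big, mostly read-only in-memory state. */
        size_t n = 256 << 20;
        char *state = malloc(n);
        memset(state, 0xAB, n);

        for (int i = 0; i < NWORKERS; i++) {
            pid_t pid = fork();
            if (pid == 0) {
                /* Worker: reads the inherited state (COW, no extra copy)
                   and never re-runs the expensive init. */
                printf("worker %d ready (state[0]=%d)\n", i,
                       (unsigned char)state[0]);
                _exit(0);
            }
        }
        while (wait(NULL) > 0) {}   /* parent supervises / respawns */
        return 0;
    }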
Q: Wouldn't it be nice if everyone wrote apps small enough that startup time was minimized and memory footprint was low?
A: Sure, but they won’t.
Q: People should just write their big, long-running services in a framework that starts fast, has low memory requirements, and uses threads instead of fork()s.
A: See previous answer. Also see zzzcpan’s response.
Q: Can you access some of those benefits with careful use of shared memory?
A: Yes, but it’s much harder to do than it is to use fork() in most cases (caveat Windows, but it’s still hard).
Q: Do tools exist in single-proc/single-thread forking frameworks/languages which switch from forking to hybrid async/threaded paradigms (like gevent) instead?
A: Yes, but they’re not nearly as mature, capable, or useful (especially when you need to utilize multiple cores).
Fork-requiring program class 2:
Programs which fork infrequently in order to parallelize uncommon tasks over shared memory. Redis does this to great effect; it doesn’t exec(), it just forks off a child process which keeps the memory image at the time of fork from the parent, and writes most of that memory state to disk so that the parent can keep handling requests while the child snapshots.
Python’s multiprocessing excels at these kinds of cases as well. If you’re launching and destroying multiprocessing pools multiple times a second, then sure, you’re holding it wrong, but many people get huge wins from using multiprocessing to do parallel operations on big data sets that were present in memory at the time multiprocessing fork()ed off processes. While this isn’t cross-platform, it can be a really massive performance advantage: no need to serialize data and pass it to a multiprocessing child (this is what apply_async does under the covers) if the data is already accessible in memory when the child starts. Node's 'cluster' module will do this too, if you ask nicely. Many other languages and frameworks support similar patterns: the common thread is making fork()ing parallelism "easy enough" with the option of spending a little extra effort to make it really really cheap to get pre-fork memory state into children for processing. Oh, and you basically don't have to worry about corrupting anyone else's in-memory state if you do this (not so with threads).
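A toy version of the Redis-style trick (heavily simplified sketch of mine): fork(), let the child serialize the memory image it inherited, and let the parent keep mutating its own copy-on-write view in the meantime.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define N 1000000
    static int dataset[N];           /* the in-memory state to snapshot */

    int main(void) {
        for (int i = 0; i < N; i++) dataset[i] = i;

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: sees the dataset exactly as it was at fork time and
               writes it out; no serialization over a pipe needed. */
            FILE *f = fopen("snapshot.bin", "wb");
            fwrite(dataset, sizeof dataset[0], N, f);
            fclose(f);
            _exit(0);
        }

        /* Parent: keeps mutating; COW keeps the child's view stable. */
        for (int i = 0; i < N; i++) dataset[i] = -i;
        waitpid(pid, NULL, 0);
        return 0;
    }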
Q: $language provides a really accessible way to use true threads that isn’t nearly as tricky as e.g. multiprocessing or knowing all the gotchas (e.g. accidental file descriptor sharing between non-fork-safe libraries) of fork(); why not use that?
A: Many people still prefer languages with primarily-forking parallelism constructs for reasons besides their fork-based concurrency capabilities--nobody’s claiming multiprocessing beats goroutines for API friendliness--so fork() remains useful in much more than a legacy capacity.
Q: Why not use $tool which does this via threads or why not bind $threaded_language to $scripting_language and use threads on the other side of the FFI boundary?
A: People won’t switch. They won’t switch because it’s hard (don't tell me threaded Rust is as easy to pick up as multiprocessing--Rust has a lot of advantages in this space, but that ain't one of them) and because there’s a positive benefit to staying within a given platform, even if some infrequent tasks (hopefully your Python doesn’t invoke multiprocessing too much) are a bit more cumbersome than usual. Also, “Friendly, easy-to-use concurrency with threads” is often a very false promise. There’s a reason Antirez is resistant to threading.
TL;DR perhaps using fork() and exec() for launching new programs needs to stop. But fork() itself is absolutely essential for common real-world use cases.
 References to parallelism via fork() above assume you have more than one core to schedule processes onto. Otherwise it’s not that parallel.
EDITs: grammar. There will be several because essay. I won't change the substance.
Another common use of fork() for things other than exec()ing is multi-process services where all processes keep running the same program. Arranging to spawn or vfork-then-exec self and have the child realize it's a worker and not a (re)starter is more work because a bunch of state needs to be passed to the child somehow (via an internal interface), and that feels hackish... And also this case doesn't suffer much from fork()'s badness: you fork() early and have little or no state in the parent that could have fork-unsafety issues. But it's worth switching this use-case to spawn or vfork-then-exec just so we have no use cases for fork() left.
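For what it's worth, here is a hedged sketch of what that spawn-self alternative can look like on Linux (re-executing /proc/self/exe and telling the child it's a worker through argv; that flag is exactly the "internal interface" that feels hackish):

    #include <spawn.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    extern char **environ;

    int main(int argc, char **argv) {
        if (argc > 1 && strcmp(argv[1], "--worker") == 0) {
            printf("worker %d: serving requests\n", (int)getpid());
            return 0;
        }

        /* (Re)starter: spawn copies of ourselves as workers.  Any state
           they need has to travel via argv/env/inherited fds, not COW. */
        for (int i = 0; i < 4; i++) {
            pid_t pid;
            char *child_argv[] = { "/proc/self/exe", "--worker", NULL };
            posix_spawn(&pid, "/proc/self/exe", NULL, NULL,
                        child_argv, environ);
        }
        while (wait(NULL) > 0) {}
        return 0;
    }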
I don't think it is suboptimal. As the paper acknowledges, its primary use is to set up the environment of the program you are about to exec(). There are four points to be made about that:
1. If you don't need to set up the environment it imposes almost no coding overhead. It reduces to "if (!(pid = fork())) exec(...)". That's hardly a huge imposition.
2. It doesn't seem to impose much runtime overhead either. If it did, Linux and BSD would have acquired a spawn() syscall ages ago. As it is, they all implement posix_spawn() using a vfork() / exec(). Given we are talking about a 30-year history here, any claims that getting rid of fork() would give a noticeable performance boost should not be taken seriously without evidence.
3. If you do need to set up the environment then yes, there are traps with threads and other things. As the paper says, it's terrible - but to paraphrase Churchill, the one thing it has in its favour is that it's better than all the other ways of doing the same thing. They actually acknowledge that how to replace the flexibility allowed by fork() is an open research question. "We think it's horrible, but we don't have an alternative" isn't a convincing argument.
4. For all its faults, fork() has one outstanding attribute - it's conceptually drop-dead simple: "create an exact copy of the process, the sole difference being getpid() returns a different value". That translates to bugger all code needed to implement it, few bugs, small man pages and a simple interface. A replacement providing the same flexibility will be some hideously complex thing that tries to implement all the use cases people used fork() for. It will be big and hard to learn, hard to use correctly, take reams of code, and still won't do all that fork() allowed you to do. We will be complaining about it for decades to come.
I stopped reading the paper when they claimed O_CLOEXEC was an overhead imposed by fork(). It isn't. The telltale giveaway should be that it doesn't take effect on a fork() - it happens on the exec(), and the spawn() or whatever still does exec()'s job. If you remove fork(), things like O_CLOEXEC are your only way to control what environment your child process gets. Therefore one outcome of removing fork() is the reverse of what they claim - you won't get fewer O_CLOEXECs, you will get many, many more of them as programmers clamour for ways to do the things fork() allowed them to do.
However, may I point out that Microsoft SQL Server benchmarks have been posted that show Linux TPC-H outperforming Windows?
While I am sure that this is wise criticism, it might also be concluded that Windows itself contains no small amount of architectural decisions that limit performance.
The use of fork as a concurrency mechanism (creating a new thread of control that executes in a copy of the address space) is very good and useful.
In the POSIX shell language, the subshell syntax (command1; command2; ...) is easily implemented using fork. This is useful: all destructive manipulations in the subshell like assignments to variables or changing the current directory do not affect the parent.
Check out the fork-based Perl solution to the Amb task in Rosetta code: https://rosettacode.org/wiki/Amb#Using_fork
This essentially simulates continuations (in a way). (If the parent process does nothing but wait for the child to finish, fork can be used to perform speculative execution, similar to creating a continuation and immediately invoking it).
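A minimal sketch of that subshell point (mine, not from the comment): the child can chdir and scribble on variables freely, and the parent's state is untouched when it resumes.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int counter = 1;
        char cwd[256];

        pid_t pid = fork();
        if (pid == 0) {
            /* "Subshell": destructive changes stay in the child's copy. */
            chdir("/tmp");
            counter = 42;
            printf("child:  counter=%d cwd=%s\n", counter,
                   getcwd(cwd, sizeof cwd));
            _exit(0);
        }

        waitpid(pid, NULL, 0);
        /* Parent: counter is still 1, cwd is still the original one. */
        printf("parent: counter=%d cwd=%s\n", counter,
               getcwd(cwd, sizeof cwd));
        return 0;
    }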
Microsoft "researchers" can stuff it and their company's flagship piece of shit OS.
They also point out that on modern hardware you often should want to write multithreaded, multiprocess applications.
Their main criticism of fork is that it does not compose at any level of the OS (as it cannot be implemented over a different primitive)
I understand that a lot of people here dislike Microsoft for good reason (not only historical), but drawbacks in fork() are well known and recognized, here they point out that it is also hard-to-impossible to implement as a compatibility layer if the kernel does not support fork.
> Microsoft "researchers" can stuff it and their company's flagship piece of shit OS.
Do you have any reason to insult Microsoft researchers? They have plenty of citations in this paper of other researchers that appear to agree with them. This type of comment does not appear constructive to me.
Booting a system doesn't compose; let's not have power-on reset and bootloaders.
Everything in this paper could have been cribbed from twenty-year-old or older Usenet postings, mailing lists and other sources. Fork has been dissected ad nauseam; anyone who is anyone in the Unix-like world knows this.
Oh, and threads have perpetually been the way to go on current hardware --- every damn year since 1988 and counting.
> Booting a system doesn't compose;
Actually this is false: virtual machines and hypervisors allow you to boot a system inside another system.
spawn does not have this requirement.
Even the close-on-exec flag race condition is a result of threads. Duplicating a file descriptor and setting its close-on-exec flag is a two-step process during which a fork can happen, causing a child to inherit the descriptor without the close-on-exec flag being set yet. But that can only happen if there are threads. (Or something crazy, like fork being called out of an async signal handler.)
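To make the race concrete (my sketch): the classic two-step sequence versus setting the flag atomically when the descriptor is created.

    #include <fcntl.h>

    void racy(const char *path) {
        /* Step 1: create the descriptor... */
        int fd = open(path, O_RDONLY);
        /* ...another thread may fork()+exec() right here; that child
           inherits fd with no close-on-exec flag set yet. */
        /* Step 2: set the flag - too late for that child. */
        fcntl(fd, F_SETFD, FD_CLOEXEC);
    }

    void atomic_cloexec(const char *path) {
        /* The flag is set in the same syscall that creates the fd, so
           there is no window for a concurrent fork() to leak it. */
        int fd = open(path, O_RDONLY | O_CLOEXEC);
        (void)fd;
    }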
But I explicitly want to not do it :) threads are obviously a good thing to have.
> every library and OS functionality aso needs to be "thread aware"
which is good, because unlike with fork, thread-aware libraries/OS support helps performance. Fork-aware libraries/OS support (in the fork+exec case) does not.
Note that this is necessary only because of the broken threading model that was retrofitted into Unix.
How it should work is that fork should clone the threads also. If a process with 17 threads forks, then the child has 17 threads. The thread IDs should be internal, so that all the pthread_t values in the parent space make sense in the child space and refer to the corresponding threads.
It's not fork's fault that the hacky thread design broke it. Fork is supposed to make a faithful replica of a process; of course if that principle is ignored in a major way (like, oops, where are the parent's threads?) then things are less than copacetic.
Threads also break the concept of a current working directory. If one thread makes a relative path access and another calls chdir, the result is a race condition.
Threads also break signals quite substantially; the integration of signal handling with threads is a mess.
Threads are not inherently a good thing to have; they are idiotic, in fact. Fork provides a disciplined form of threading that eliminates problems from the mutation of shared state, and provides fault isolation. It's much better to use forked processes instead of threads. Shared memory can be used for direct data structure access. With fork, you can create a shared anonymous mmap. This is then cloned into child processes as shared memory at the same virtual address.
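A minimal sketch of that shared-anonymous-mmap-plus-fork pattern (assumptions: Linux/POSIX, a single shared int, and no synchronization, which a real program would need):

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* Shared anonymous mapping created before fork(); every child
           sees the same physical pages at the same virtual address. */
        int *shared = mmap(NULL, sizeof *shared, PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        *shared = 0;

        pid_t pid = fork();
        if (pid == 0) {
            *shared = 42;              /* write through the shared mapping */
            _exit(0);
        }

        waitpid(pid, NULL, 0);
        printf("parent sees %d\n", *shared);   /* prints 42 */
        munmap(shared, sizeof *shared);
        return 0;
    }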
Or CreateProcess(), which has a lot to do with Microsoft.
I think it's well-known that Windows NT took a lot of ideas from VMS.
I don't think we should ever forget how MS behaved through the mid 2000s. But we don't live in that world anymore, they aren't (capable of being) that company anymore, and I think we're at a point where dismissing research because of a connection to MS is not protecting anyone from anything.
Has the old theory that "Windows NT" aka "WNT" = "VMS + 1" ever been proved or disproved?
(https://americanhistory.si.edu/collections/search/object/nma... the Smithsonian has early design docs with the original name on the spine).
You can use the posix_spawn function in glibc, which uses a vfork or clone syscall just like on Linux.
"Rich Felker, May 24, 2015:
Some interesting preliminary timing of
's posix_spawn vs fork+exec shows it ~25x faster for large parent processes. (~360us vs 9ms). #glibc has a vfork-based posix_spawn but it's only usable for trivial cases; others use fork.
posix_spawn always uses CLONE_VM. This also means
posix_spawn will fill the fork gap on NOMMU systems cleanly/safely (unlike vfork) once we get NOMMU working."
Also evilotto's post here:
"a 100mb process generally takes >2ms to fork, while a 1mb or less process takes 70us"
While the article points out that the NT kernel natively supports fork, it certainly isn't arguing for any extension of the call.
So all we're left with is "extinguish", which this article certainly does. And it is persuasive. I will look at posix_spawn() for my own code in the future.
The paper mentions the benefit of posix_spawn for the fork+exec use case.
I might've seen posix_spawn while skimming a manpage or browsing a change log but this is the first time that I'd actually learned about its purpose.
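For anyone else in the same boat, a minimal hedged sketch of what posix_spawn usage looks like (the file-actions object replaces the fiddling you would otherwise do by hand between fork and exec):

    #include <fcntl.h>
    #include <spawn.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    extern char **environ;

    int main(void) {
        pid_t pid;
        char *argv[] = { "ls", "-l", "/", NULL };

        /* Describe the child's fd setup declaratively instead of
           running open/dup2 inside a forked child ourselves. */
        posix_spawn_file_actions_t fa;
        posix_spawn_file_actions_init(&fa);
        posix_spawn_file_actions_addopen(&fa, STDOUT_FILENO, "/tmp/ls.out",
                                         O_WRONLY | O_CREAT | O_TRUNC, 0644);

        int err = posix_spawnp(&pid, "ls", &fa, NULL, argv, environ);
        posix_spawn_file_actions_destroy(&fa);
        if (err != 0) { fprintf(stderr, "spawn failed: %d\n", err); return 1; }

        waitpid(pid, NULL, 0);
        return 0;
    }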
The article's conclusion isn't "and therefore Linux is bad" btw.
Of course, all Windows APIs are terrible, but that doesn't make complaints about fork() any less legitimate. The concept of establishing empty processes, instead of cloning yourself, is much more sane.
After all, the use of fork() is 99% of the time just to call execve(), and anything done in between is just to clean up the mess from fork(). Having a dedicated way to just create processes in a controlled fashion would have been better there. And, the other 1% is usually cases where pthread should have been used instead.
I like the ease with which you can pass resources and data to the forked child from the parent, though. Otherwise I'd have to do a lot of serialization and deserialization, or use shared memory, or unix sockets to pass fds, all of which also have their gotchas and are way more complicated and error-prone.
And, in this case, it sounds like a thread would do exactly what you want, but without the oddities of fork().
Ummmm. No. Threads are a much harder API to get right. They can work in this area, but that's not the same as saying they're right for all/most cases in this area.
I think a sizable part of that remaining 1% (if it is that low) are programs that leverage fork as the very powerful right tool for the job. Many of those also happen to be widely-used programs crucial for the operation of web services and large-data-set processing.
Ummmm. No. Threading is not a hard API to get right. It's very simple: You get a new executing thread in the same memory space. You can create them whenever you like without any side-effects. Now, don't trample on your memory. Read all you want from anywhere. If you want to write to shared memory, ensure both reads and writes are behind a mutex, or learn about atomics.
Fork(), on the other hand, is much trickier. Sure, you get a cloned memory space so you can trample all you want, but now you have to establish some form of IPC (which might itself end up requiring threading), and if you didn't fork() as the first thing in your process, you end up inheriting all sorts of state that you do not want. Threads and locks, for example, are now in limbo (depending on your unix flavor of choice), and you likely have a bunch of fd's that you did not want.
I cannot really think of any legitimate use-cases for fork() without exec(). There are legitimate use-cases for multi-process designs, but such designs are severely inconvenienced by fork(), as all they wanted to do was to start processes without inheriting state.
I also certainly cannot see any sensible argument for threading being harder than fork(), especially if you're just using it as a drop-in replacement where there will be no shared state after invocation outside of explicitly created communication channels.
Shared memory for threads is a form of IPC too, except one where it's very easy to make a mistake and introduce concurrency bugs.
> I also certainly cannot see any sensible argument for threading being harder than fork()
You should read a paper or two on concurrency bugs. Including on those using explicit but shared communication channels, like CSP does.
Very well said. It gets no simpler than this. I think all too often, people try to complicate things where they don't need to be complicated. Always do the SIMPLEST thing that works well.
I don't run Windows so I'm far from the most biased person but frankly, on the surface the fork/exec thing really does seem unnecessary and weird in the modern world, where we've come up with better ways to do concurrency than just raw threads and processes anyways.
Win32 API cannot do it. The underlying NT kernel can.
> Alternative: clone().
> This syscall underlies all process and thread creation on Linux. Like Plan 9’s rfork() which preceded it, it takes separate flags controlling the child’s kernel state: address space, file descriptor table, namespaces, etc. This avoids one problem of fork: that its behaviour is implicit or undefined for many abstractions. However, for each resource there are two options: either share the resource between parent and child, or else copy it. As a result, clone suffers most of the same problems as fork (§4–5).
Their arguments of why fork() is not a good fit these days seemed pretty reasonable to me.
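For reference, a hedged sketch of the per-resource control that clone() gives you on Linux (these flags share the fd table and filesystem info with the parent but not the address space; details are illustrative):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>

    static int child_main(void *arg) {
        printf("child sees arg: %s\n", (const char *)arg);
        return 0;
    }

    int main(void) {
        const size_t stack_size = 1024 * 1024;
        /* clone() needs an explicit stack for the child; pass its top,
           since the stack grows downward on common architectures. */
        char *stack = mmap(NULL, stack_size, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);

        /* Per-resource choices: share the fd table (CLONE_FILES) and fs
           info (CLONE_FS), copy everything else (no CLONE_VM here). */
        int flags = CLONE_FILES | CLONE_FS | SIGCHLD;
        pid_t pid = clone(child_main, stack + stack_size, flags,
                          (void *)"hello");

        waitpid(pid, NULL, 0);
        munmap(stack, stack_size);
        return 0;
    }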
> When a fork syscall is made on WSL, lxss.sys does some of the initial work to prepare for copying the process. It then calls internal NT APIs to create the process with the correct semantics and create a thread in the process with an identical register context. Finally, it does some additional work to complete copying the process and resumes the new process so it can begin executing.
AFAIK it’s only unix/Linux (posix) OSes that implement fork. Perhaps that’s what you meant by “every other system”, ie unix + clones/derivatives?
The COMMAND SVC had/has 4 variants:
- Execute program (akin to posix_spawn)
- Chain program (akin to posix_spawn, and parent exit)
- Execute subprocess (start a thread, one supplies code + stack address)
- Execute fork process (ala fork, but one supplies code + stack address like with 'subprocess' above)
Originally it only had the first two forms, 2.1 added the subprocess form, 2.2 added the fork form.
It didn't have a direct equivalent to exec(), but did have an OVERLAY SVC which loaded fresh code into the process, and I expect that could be used to make something like exec(). Not that I ever tried, given there was no real need for it.
The other way to create an exec() like behaviour would have been with the CONTROL SVC, akin to ptrace(), but that would have been painful to do.
VAX and VMS are not POSIX or UNIX-like.
Methinks MS needs to focus on their own issues and leave the *nix world alone. While many people find their involvement in FOSS welcome, I do not and never have. They are still a for-profit company beholden to shareholders.
The purchase by MS of GitHub may, again, be welcomed by many, but I find it disastrous. I smell triple E here no matter what anyone says. This is why distros like Debian and Slackware are still so important. All *nix needs to do is start adopting MS ideas and then it's a matter of time before distros adopt disastrous code like systemd. MS does want to control everything around them like every other for-profit company. I cannot see this any other way. They are involved for their own good, for things like Azure and their own "cloud". MS needs to focus on their own garden and not that of *nix. I always have and always will prefer the "us and them" mentality when dealing with MS. Don't forget EEE. It's still a reality should you care to look hard enough.
Sadly, UNIX (umbrella term here) is not what it was a few years ago. I dearly miss Solaris, for example. Nothing touched it in its day, not even AIX or HP-UX. I was a UNIX admin for 10 years. I've used them all. Nothing MS can produce will ever be better than pure UNIX. There is a reason it's still being made. FreeBSD can outperform anything MS has on offer. Hell, they borrowed networking code because they couldn't come up with anything better.
Not all of us see us all under the same tent. I surely don't and never will. It's us and them. To say otherwise would indicate we are all on a level playing field and we're all working together toward a common good. We're not. Good research aside, I don't like their history, stewardship, or about anything else they do. Agenda...
False. MS is now one of the leading FOSS contributors. You're living in the past.