A fork() in the road (microsoft.com)
257 points by ralish on Apr 10, 2019 | 178 comments



I read the paper, and they make a lot of good points about fork's warts.

But I really wanted some explanation of why Windows process startup seems to be so heavyweight. Why does anything that spawns lots of little independent processes take so bloody long on Windows?

I'm not saying "lots of processes on Windows is slow, lots of processes on Linux is fast, Windows uses CreateProcess, Linux uses fork, CreateProcess is an alternative to fork/exec, therefore fork/exec is better than any alternative." I can imagine all kinds of reasons for the observed behavior, few of which would prove that fork is a good model. But I still want to know what's going on.


I'm a bit rusty on this but from memory the overhead is by and large specific to the Win32 environment. Creating a "raw" process is cheap and fast (as you'd reasonably expect), but there's a lot of additional initialisation that needs to occur for a "fully-fledged" Win32 process before it can start executing.

Beyond the raw Process and Thread kernel objects, which are represented by EPROCESS + KPROCESS and ETHREAD + KTHREAD structures in kernel address space, a Win32 process also needs to have:

- A PEB (Process Environment Block) structure in its user address space

- An associated CSR_PROCESS structure maintained by Csrss (Win32 subsystem user-mode)

- An associated W32PROCESS structure for Win32k (Win32 subsystem kernel-mode)

I'm pretty sure these days the W32PROCESS structure only gets created on-demand with the first creation of a GDI or USER object, so presumably CLI apps don't have to pay that price. But either way, those latter three structures are non-trivial: they're complicated, and setting them up presumably involves a context switch (or several), at least for the Csrss component. At least some steps in the process also involve manipulating global data structures that block other process creation/destruction (Csrss steps only?).

I expect all this Win32-specific stuff largely doesn't apply to e.g. the Linux subsystem, so creating processes there should be much faster. The key takeaway is that it's all the Win32 stuff that contributes the bulk of the overhead, not the fundamental process or thread primitives themselves.

EDIT: If you want to learn more, Mark Russinovich's Windows Internals has a whole chapter on process creation which I'm sure explains all this.


The WSL processes are called pico processes.

https://blogs.msdn.microsoft.com/wsl/2016/05/23/pico-process...


That was a super interesting read (and view), thank you. I've been in Linux land for almost two decades, but I've also spent a week (or so) porting our Linux-based development environment over to Windows with the help of WSL. This sheds some light on how it actually works. Maybe I'll have to look over it once more armed with this new information and see if I can squash some of those remaining problems with our solution.


You can also dive into the Drawbridge research papers, and how they used the LibOS concept to bring SQL Server to Linux.


> created on-demand with the first creation of a GDI or USER object, so presumably CLI apps don't have to pay that price

This tickles my brain. I read some blog post bitching that because Windows DLLs are kinda heavyweight, it's way too easy to end up paying that price without realizing it.



As mentioned in this very discussion 2 hours before. (-:

* https://news.ycombinator.com/item?id=19622723


I used to work on a cross-platform project, and spent several weeks trying to figure out why our application ran significantly faster on linux than windows. One major culprit was process creation (another was file creation). I never really uncovered the true reason, but I suspect it had to do with the large number of DLLs that Windows would automatically link if you weren't very careful. Linux, of course, can also load shared code objects, but in my experience, they are smaller and lighter weight.


Anti-virus software makes process and file operations a lot slower.


This should not be ignored. Windows machines are a favorite for having lots of heavy anti-virus running on them. They can destroy I/O performance. Windows 10 has a "real time scanner" running by default, but many corporate-IT security teams will add more and more. This alone can seriously slow down windows vs linux.


> Anti-virus software makes process and file operations a lot slower.

It was a long time ago (~2006), and I honestly can't remember, but I feel like turning off anti-virus (and also backups, software updaters, and any other resident software) would have been one of the first things I would have checked. There was definitely something more fundamental going on.


This probably isn't the technical explanation you're looking for, but, in general, processes on Windows and processes on Unix aren't the same--or, at least, they're not meant to be used the same way. Creating lots of small processes on Windows has long been discouraged and considered poor design, whereas the opposite is true on Unix.

One could probably argue that processes on Windows need to be lighter-weight now that sandboxing is a common security practice. These days, programs like web browsers opt to create a large number of processes both for security and stability purposes. In much the same way that POSIX should deprecate the fork model, Windows should provide lighter-weight processes.


Windows now has minimal processes that have almost no setup and pico processes (based on minimal processes) that are the foundation for Linux processes in WSL.


The last time I used WSL (perhaps 6 months ago), its per-process overhead was awful. I don't recall the numbers, but I think it managed to start fewer than 10 processes per second. My memory suggests it was more like two processes per second, though I would recommend re-testing before trusting that.

Found my previous comment on it (which has a test case but not numbers): https://news.ycombinator.com/item?id=18226921


On 1803 and 1903 it's 3 to 4 times faster than MsysGit (WSL is ~1s on my laptops). It's possibly slightly faster on 1903, as the laptop running it beats the other on this bench despite having an older processor.

Now in a Linux VM it's approx 10 times faster than even WSL. And that should probably be even faster natively.

So anyway, WSL is really usable, and if you really only started 10 processes per second, something is wrong. Maybe you are using a crappy antivirus (I've heard that Kaspersky makes WSL extremely slow).


Well, I hadn't installed any antiviruses myself. I think Windows Defender was running, though. It's possible that my computer came with additional crapware on it.


I just checked, and both of my benchmarks were done with Defender.

When I disable it, it is down to ~0.5s

I would not build a Linux kernel here instead of in a VM, but for tons of things, this is very usable.


Others have mentioned DLLs being pulled in; the following post might be interesting:

https://randomascii.wordpress.com/2018/12/03/a-not-called-fu...


It's not process creation that is tricky, it's process termination!

To see how Libreoffice does it, see https://opengrok.libreoffice.org/xref/core/sal/osl/w32/proce...


Microsoft Research doesn't just do research on Windows. They employ lots of researchers that are free to pursue many different topics.


CreateProcess requires an application to initialize from scratch. When you fork, you cheaply inherit the initialized state of the whole application image. Only a few pages that are mutated have to be subject to copy-on-write. Even that copy-on-write is cheaper than calculating the contents of those pages from scratch.


There has been a lot of discussion in recent years about how cheap that "cheaply" really is.

* https://news.ycombinator.com/item?id=9653238

* https://news.ycombinator.com/item?id=18071278

* https://news.ycombinator.com/item?id=19622503


Yeah, it's not really cheap at all. However! vfork() is cheap, very very cheap, though, of course, you then have to follow it up with an exec(), and the cost of that on Windows depends on the setup cost of the executable being exec'ed.

Part of the problem is the DLLs, as many have mentioned, and also the fact that each statically links in its own CRT (C run-time). The shared C run-time MSFT is working on should help here, as should more lazy loading and setup.


> Part of the problem is the DLLs, as many have mentioned, and also the fact that each statically links in its own CRT (C run-time)

No, that isn't the case on DLLs shipped with Windows.


But it is for 3rd party DLLs.


Well, that depends on the DLL.


fork is pretty much always going to be cheaper than starting a new process from scratch over the same executable image (and library images) and then re-playing everything inside that process so that it gets into exactly the same state as the creator, becoming a de facto clone of it.


If I had to guess, I'd point to DLLs. The minimal Windows process loads probably half a dozen, plus the entry points are called in a serialized manner.


Pretty much identical to shared objects on Linux


Windows DLLs require fixups when they're loaded off their preferred base address.


So do relocatable shared libraries on Linux. https://eli.thegreenplace.net/2011/08/25/load-time-relocatio...


On macOS, fork() is a bit weird: https://opensource.apple.com/source/Libc/Libc-997.90.3/sys/f...

Many frameworks are backed by XPC services, where the parent process has a socket-like connection to a backend server. After forking, the child would have no valid connection to the server. The fork() function establishes a new connection in the child for libSystem, to allow Unix programs to port easily to macOS, but other services' connections are not re-established. This makes fork on macOS (i) slow, and (ii) unsafe for code that touches virtually any of Apple's APIs.


fork() is generally unsafe for that reason, and OS X is only special in this regard in that it has more of these hidden C library handles that can blow up on the child-side of fork(). vfork()+exec()-or-_exit() is much safer.
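
For reference, a minimal sketch of that vfork()+exec()-or-_exit() pattern (error handling omitted):

  #include <unistd.h>
  #include <sys/wait.h>

  pid_t pid = vfork();   /* child borrows the parent's address space until exec */
  if (pid == 0) {
      /* only async-signal-safe work is allowed here: no malloc, no stdio */
      execlp("ls", "ls", "-l", (char *)NULL);
      _exit(127);        /* _exit(), never exit(): don't flush the parent's stdio */
  }
  waitpid(pid, NULL, 0);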


BeOS GUI applications also had problems with fork()


I agree with most parts of the paper.

Fork() is now basically the root of a looong list of special cases in so many aspects of programming. Things get even worse when you use a language with a built-in runtime, such as Golang, for which multi-threaded programming is the default behaviour. If fork() can't even handle multiple threads, what is the real point of having it when an 8-core/16-thread AMD processor is about $150?


> If fork() can't even handle multiple threads, what is the real point of having it when an 8-core/16-thread AMD processor ...

These threads and those threads are not the same. A 16-thread SMT processor will happily chew on 16 different programs, processes, or whatever the load of the moment is; e.g., with Python's multiprocessing you can create 16 processes and they'll execute in parallel.

fork() can handle multiple threads, but you have to be attentive when cleaning up, etc. Quite often, code using fork() will get confused when you spawn threads, and code using threads will get confused when you fork().


Fork has really weird semantics, and a lot of fun gotchas around managing resources. Good riddance?


Not even just the semantics, the performance is awful. Even when the fork is virtual (as any modern fork is) and there's no memory copying because it's COW, all the kernel page tables still need to be copied and for a multi-GB process that's nontrivial. That's why any sane large service that needs to fork anything will early on start up a slave subprocess whose only job is to fork quickly when the master process needs it.


>all the kernel page tables still need to be copied and for a multi-GB process that's nontrivial

Only in the pathological case where the large process is backed solely by 4KB pages. The hardware has long supported large pages - on x86 since the Pentium Pro, if memory serves - and huge pages. The popular OSes (Linux 2.6+ and Windows 2003+) also support large and huge pages. A 2GB process can easily be three pages: r/x code, r/w stack, r/w data (2GB). Granted, it gets a bit more complex if mmapped I/O or JIT are used, but since both are mature technology now, it's fine to point fingers at any inefficiency and demand better. Another caveat would probably be shared libraries loading at separate address ranges, which, IMO, is another reason to ditch shared libraries for good.

Contrary to popular wisdom, OS research is still relevant.


You want to ditch shared libraries and mmap to map your big processes using GB pages to make fork fast again (despite it not being the main and only drawback)???

OS research might be relevant, and it's good that some people have wild idea, but honestly I doubt this one will go anywhere :P


Ah sorry, I only want to ditch the shared libraries; the mmap bit was a later edit, and I didn't realize it was unclear. Of course mmap is necessary.


About shared libraries: I know there is this line of thought considering them "evil" (well, at least sufficiently so to want to get rid of them), but I'm quite unsure what a modern system would look like without them (and although this is less of a problem at the application level on e.g. Android, the system level is still extremely important).

With Spectre, proper process bounds (well, address spaces) are more important than ever -- and oh well even without that I'd still have cited them as incredibly important, in the sense that I'd rather have more than fewer. Given that, code reuse involves shared libraries, for several good reasons; the obvious one being not wasting RAM, but then there is the update problem (how to patch programs when security holes are discovered, especially if multiple parties are involved), and on top of that there is the cache pollution problem, which is related to the code duplication problem, and which is quite insidious because it is probably simultaneously hard to benchmark and very real (ambient loss of perf, just not in very hot paths, but this will still have an impact on the general perf of a system, quite like Spectre mitigations are having a big impact)

Now we could like address space boundaries so much that we would want to just use even MORE processes in place of shared libraries, but this obviously does not work for all services (and Spectre is biting us again because context switches are not cheap), plus if you take it to the extreme this makes systems extremely hard to design, and even bigger. This is part of the reasons we are using Linux instead of Hurd... (well Linux is too much in the opposite direction, but there are hopes that it will in the long term evolve toward a middle ground)

And anyway that does not fit the narrative at all of using more huge pages.

Now there are the usual radical ideas about how everything should be running on some kind of VM (sometimes even including the kernel), drastically reducing the amount of "native" code; but the reality of our current systems is that "everything" already relies on multiple VMs, and I doubt it will ever converge to just one, nor should it (because of the monoculture this would induce). Plus the ambient perfs are still lower than native code, and TBH I don't expect that to ever change.

So, why and how would you like to get rid of shared libraries?


We are using Linux instead of Hurd due to manpower.

Most high integrity real time OSes are microkernels.

Interesting that you mention Android, one of the key points of Project Treble is using separate processes for drivers with Android IPC to talk to the kernel (including hardware buffer handles).


Well in the end we are using any X rather than Y tech because of manpower, regardless of pretty much any other characteristics.

So let's put manpower kind of aside for the bulk of the dev (where thousands of man-years are needed for any big project) and look at what could actually be achieved with the very small amount of manpower bootstrapping those projects. At that point you understand that the manpower thing is only a convenient narrative; the reality is that even early on, Linux-based systems worked much better than Hurd-based systems.

Because general purpose micro-kernels based systems are hard, and especially those with a design as ambitious as the Hurd. (When you start to want to strongly isolate FS from VM code, it even stops being just hard and starts to be really HARD.)

And this was even worse at the time for perf reasons (but perf reasons are still applicable even today, given the impact on mobile and datacenter workloads)

However, yes, I'm in favor of more isolation today, because for a shitload ton of drivers it literally won't make any difference whether or not you take 1us vs 50us if you need to execute once every few seconds. So it is retarded to the highest level to run in kernel space if you don't actually need it. Sadly, Linux is way behind on that subject today.

That being said, and back to the original subject, a microkernel or at least a less monolithic one won't really get us in the less shared-libraries direction if it just re-implements the same perimeter of features of a monolithic ones, nor in the huge pages everywhere direction...


> Contrary to popular wisdom, OS research is still relevant.

Is it really popular wisdom though, or is it the opinion of one person and it got hyped up, much like the same hype happened on a subpar programming language that same person worked on?


> That's why any sane large service that needs to fork anything will early on start up a slave subprocess whose only job is to fork quickly when the master process needs it.

I don't think that's (entirely) true. This is more because a large service with some potent master process will have said process Do Stuff(tm) that will involve opening files, threads, signal handling, or whatever things that need to be taken care of one way or the other when forking to a worker (or whatever other child) process. It's therefore much simpler to fork a master subprocess into a child spawner earlier on, when it has yet to do anything. You significantly reduce your chances of screwing up if you have nothing to clean up for.


That's true, it's not the only reason. Dealing with threads and buffers and pthread_atfork and the associated heartbreak is a biggie also. But the performance is nothing to laugh at.

I just did a quick test: a 100 MB process generally takes >2ms to fork, while a 1 MB or smaller process takes 70us. It seems like it's pretty much linear with process size.
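
A minimal sketch of that kind of measurement (not the exact test case; the idea is to allocate, touch every page so the page tables are populated, then time fork()):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>
  #include <unistd.h>
  #include <sys/wait.h>

  int main(int argc, char **argv) {
      size_t mb = argc > 1 ? atoi(argv[1]) : 100;
      char *buf = malloc(mb << 20);
      memset(buf, 1, mb << 20);          /* fault the pages in */

      struct timespec t0, t1;
      clock_gettime(CLOCK_MONOTONIC, &t0);
      pid_t pid = fork();
      if (pid == 0) _exit(0);            /* child exits immediately */
      clock_gettime(CLOCK_MONOTONIC, &t1);
      waitpid(pid, NULL, 0);

      printf("fork() took %ld us for a %zu MB process\n",
             (t1.tv_sec - t0.tv_sec) * 1000000 +
             (t1.tv_nsec - t0.tv_nsec) / 1000, mb);
      return 0;
  }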


Does using hugepages help?


It probably would, but I don't have a system I can test it on.


The performance is awful, but in return you get the COW memory you mentioned. That's a pretty huge benefit for a lot of programs with huge, seldom-changing memory state at startup. If those programs want to parallelize themselves without duplicating that memory or paying startup time/CPU overhead, fork() is a pretty handy way to achieve that.


These days, you can usually start a process without forking through posix_spawn/vfork. Although, I gather some servers still do it so they can set the current working directory more easily.


Between the `fork()` and an `exec()`, I can:

    * redirect stdin, stdout, and stderr
    * open files that might be needed and close files that aren't
    * change process limits
    * drop privileges 
    * change the root directory
    * change namespaces
And there are a few other things I am probably forgetting.
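
A rough sketch of that pattern (hypothetical paths, no error handling, gid dropped before uid):

  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/resource.h>

  pid_t pid = fork();
  if (pid == 0) {                       /* child: set everything up, then exec */
      int fd = open("/var/log/child.log", O_WRONLY|O_CREAT|O_APPEND, 0644);
      dup2(fd, STDOUT_FILENO);          /* redirect stdout and stderr */
      dup2(fd, STDERR_FILENO);
      close(fd);

      struct rlimit lim = { 64, 64 };
      setrlimit(RLIMIT_NOFILE, &lim);   /* change process limits */

      chroot("/srv/jail");              /* change the root directory (needs root) */
      chdir("/");
      setgid(65534);                    /* drop privileges */
      setuid(65534);

      char *args[] = { "worker", NULL };
      execv("/bin/worker", args);       /* path resolved inside the new root */
      _exit(127);
  }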


And ideally all these things become properties to a configuration object which is then used to spawn a process.


I'm not sure I agree it's the ideal way to do it. That's a heck of a lot of work for one function to do, and it necessarily duplicates the functionality of a ton of other functions. And that's ignoring the fact that forking without ever exec'ing can be really useful in many cases.

I haven't yet read the paper, but considering the incredible simplicity from the programmer's PoV that fork provides, and the fact that at least Linux makes it pretty god damn fast, especially compared to Windows' non-forking model, I can't really see myself agreeing with their conclusion.


When you read the paper, you'll see this covered in section 6 ("REPLACING FORK") subsection "Low-level: Cross-process operations"

> While a spawn-like API is preferred for most instances of starting a program, for full generality it requires a flag, parameter, or new helper function controlling every possible aspect of process state. It is infeasible for a single OS API to give complete control over the initial state of a new process. ...

> clean-slate designs [e.g., 40, 43] have demonstrated an alternative model where system calls that modify per-process state are not constrained to merely the current process, but rather can manipulate any process to which the caller has access ...

> Retrofitting cross-process APIs into Unix seems at first glance challenging, but may also be productive for future research.


That has security considerations when spawning a process to run an executable that gets privilege on exec (think set-uid on Unix).


Easy to deal with by not applying privileges until the parent is done tinkering and hits start.


I don't see how. The privilege elevation mechanism cannot apply to an already-running process, since then there will be a way to subvert the process.

Now, the better answer to that is to have nothing like a set-uid mechanism, which would be nice, for sure. But just how much violence are we to do to the Unix model, and when are we expected to finish this? It's not like Linux can be abandoned -- for better or worse, Linux "won".


The paper suggests:

> a new process starts as an empty address space, and an advanced user may manipulate it in a piecemeal fashion, populating its address-space and kernel context prior to execution, without needing to clone the parent nor run code in the context of the child. ExOS [43] implemented fork in user-mode atop such a primitive. Retrofitting cross-process APIs into Unix seems at first glance challenging, but may also be productive for future research.


Specifically, you'd want to do something like:

  pid_t child = pfork();
  for(int fd=0;fd<3;fd++) p_open2(child,fd,pts,O_RDWR);
  char** envp = munge_env(/*parent's*/ environ);
  int err = p_execve(child,file,argv,envp);
(Y'know, this looks kind of familiar...)


I'm really not sure what you're implying, can you please state it explicitly? I'd expect this code to look similar to both the CreateProcess and fork models, at the very least.


a1369209993's example snippet appears to be what is implied from section 6 ("REPLACING FORK"), subsection "Low-level: Cross-process operations":

> clean-slate designs [e.g.,40, 43] have demonstrated an alternative model where system calls that modify per-process state are not constrained to merely the current process, but rather can manipulate any process to which the caller has access. This yields the flexibility and orthogonality of the fork/exec model, without most of its drawbacks: a new process starts as an empty address space, and an advanced user may manipulate it in a piecemeal fashion, populating its address-space and kernel context prior to execution, without needing to clone the parent nor run code in the context of the child.


> a1369209993's example snippet appears to be what is implied from section 6 ("REPLACING FORK"), subsection "Low-level: Cross-process operations"

Oh definitely. But why are they saying it "looks kind of familiar..."? That subsection is already the subject of the conversation. Surely they're not saying it looks similar to itself, right?


Ahh. I thought it was familiar because it's the way one would write that sort of code now, in a fork/exec environment, only replacing the calls that change local state with ones that change the child's state.


vfork() is the right tool. I dunno what semantics you have in mind for p_execve().


(See execve(2).)

  int p_execve(pid_t target,char* filename,
               char** argv,char** envp);
Executes a new program, specified by filename, in the context of the specified process. On success, the text, data, bss, and stack of the process specified by target are overwritten by that of the program loaded.

A target of zero specifies the current process, so p_execve(0,file,argv,envp) is equivalent to execve(file,argv,envp).

BUGS

We should probably require some kind of permission check before allowing the calling process to do this.


Less general functions can call more general functions, like fork calls clone on Linux.


You could call it posix_spawn_file_actions_t and posix_spawnattr_t . (-:


You can do all of that with posix_spawn() as well, though some of those may require a helper program.


> Good riddance?

Regardless of this paper, I don't see its use declining significantly any time soon.


Developers should use posix_spawn() as much as possible.
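
For the common redirect-and-exec case it's straightforward (a minimal sketch; error checks omitted):

  #include <fcntl.h>
  #include <spawn.h>
  #include <unistd.h>
  #include <sys/wait.h>

  extern char **environ;

  pid_t pid;
  char *args[] = { "ls", "-l", NULL };
  posix_spawn_file_actions_t fa;

  posix_spawn_file_actions_init(&fa);
  /* the child's stdout goes to a file; our own fd table is untouched */
  posix_spawn_file_actions_addopen(&fa, STDOUT_FILENO, "/tmp/out.txt",
                                   O_WRONLY|O_CREAT|O_TRUNC, 0644);
  posix_spawnp(&pid, "ls", &fa, NULL, args, environ);
  posix_spawn_file_actions_destroy(&fa);
  waitpid(pid, NULL, 0);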


fork() is also used to daemonize and for privilege separation, two tasks where posix_spawn() cannot be used. I suppose daemonization can be seen as something of the past, but privilege separation is not. On Linux, privileges are attached to a thread, so it should be possible to spawn a new thread instead of a new process. However, a privileged thread sharing the same address space as an unprivileged one doesn't seem a good idea.

The paper also mentions the use case of multiprocess servers, which rely heavily on fork(), but dismisses it as something that could be implemented with threads. With threads, a crash in a worker would bring down the whole application, while a crashed forked worker could just be restarted.

A proper use case of removing fork() from an actual program would help. For example, how is nginx on Windows implemented?


I can’t answer for Nginx, but normally on Windows if you want “worker processes” you just start N of them and have them read work from a shared-memory queue. That is, workers live longer than the tasks they perform. If one crashes, a new one is spawned. This does seem like a more sensible way of doing things than forking, tbh. It isolates work in processes but doesn’t pay for process creation per request.


Is recovery of a shared memory queue after one of the workers crashes even possible, in general? (what if the worker crashed before releasing a lock?)


I’m not sure how this is usually done but I’d avoid locks at nearly any cost and try to use a lock-free spmc queue such as https://github.com/tudinfse/FFQ


I may be strange, but that's the way I've always used fork() as well. It's one of the reasons why named pipes exist (or at least that's what I've always thought).


"If one crashes, a new one is spawned."

I suppose that makes sense on an OS on which crashing is expected behaviour, though some people would want to know what bug caused the crash and whether that bug has security implications.


Crashing is expected behaviour on Linux as well; you can enable coredumps or consult an application's log if you want to know why.


The Linux kernel doesn't crash much, unless you have dodgy drivers or dodgy hardware. Whether your userland programs crash or not depends on what you're running. I don't expect to see sshd crashing, for example, though it's true that almost any program will exit suddenly if the system runs out of memory, which to an ordinary user looks like a crash, though it's a very different thing really.


If you weren't talking about userland crashes, then your crack about "an OS on which crashing is expected behavior" makes no sense.


The comment was not meant to be taken all that seriously, of course, but an OS is more than just the kernel, and I do tend to disapprove of brushing a crash under the carpet.

System runs out of memory, various processes get terminated, and the easiest way to get it back into a good state is a restart: not that worrying, but do you have a memory leak? Some process segfaults with 54584554454d4f53 in the PC: should be investigated, not glossed over.


There is, of course, absolutely no difference in how errors are handled in this case vs forks. A process that handles one million requests is much more likely to crash than a process that handles one request though (regardless of OS).


posix_spawn() attributes can do a lot of this, and a helper program can do much if not all of the rest.

Removing fork() will take a long, long time. Every popular use case needs an alternative that doesn't suck.

But then again, fork() is kinda awful[0].

[0] https://gist.github.com/nicowilliams/a8a07b0fc75df05f684c23c...


I've been saying this for quite some time. Here's a gist I wrote about it: https://gist.github.com/nicowilliams/a8a07b0fc75df05f684c23c...


Can anybody elucidate why fork() is still used in Chromium or Node.js? They are not old-grown traditional forking Unix servers (unlike Apache or the databases mentioned in the paper). I would expect them to implement some of the alternatives and keep fork() only as a fallback in the code (i.e. after a cascade of #ifdefs) if no other API is available. Therefore, I wonder where the fork() bottlenecks really appear in everyday life.


Chrome on Windows uses CreateProcess, and Windows came first, so Chrome is mostly architected around an approach that would fit posix_spawn better. However, fork has some benefits that I went into here:

http://neugierig.org/software/chromium/notes/2011/08/zygote....


> why fork() is still used in Chromium

To support a multi-process web browser architecture that Chromium pioneered, you need to spawn processes. See https://chromium.googlesource.com/chromium/src/+/HEAD/docs/l...


That's not what the page says. It says the use of fork() saves 8MB and a few tens of milliseconds per process spawn.


This is explained in the paper. It's to get access to copy-on-write memory so you can make a pre-initialised process cheaply.


It points out that "1304 Ubuntu packages (7.2% of the total) calling fork, compared to only 41 uses of the more modern posix_spawn()".

In section 7 it suggests "We should therefore strongly discourage the use of fork in new code, and seek to remove it from existing apps."

Is anyone here going to help work on changing those 1304 packages?

I have already over-volunteered for thankless FOSS tasks like this, so I know it won't be me.


There are things that don't fit the posix_spawn limitations, especially with fd or capability manipulation.


Yes, certainly. The paper covers many of those limitations.

The goal is not "remove", but "seek to remove". The relevant definition of "seek" here is "to make an attempt" says https://www.merriam-webster.com/dictionary/seek .

How many of those 1304 Ubuntu packages require fork()? Are there benefits to replacing (say) 1283 of them with posix_spawn()?


Yes, there are benefits to using posix_spawn: It's faster. See Figure 1.


How many of those packages would be improved with a faster spawn mechanism? Who is going to investigate each one? How will they convince upstream to change well-tested code?


From the paper...

> 7. GET THE FORK OUT OF MY OS!

Someone couldn't resist...


If they're going to remove fork, then python's multiprocessing is going to be dead. Maybe then the community will be forced to get rid of GIL?


When I learnt how fork() and select() worked, I just fell in love with Unix. The Win32 API was so ad hoc and unnatural in direct comparison.


You fell in love with Unix because of a hacky unintuitive syscall? I'd suggest reading the paper!


For me it was poll(), due to its simple and intuitive API. Also, it's much faster than select() when you have a large number of file descriptors being monitored.


For some reason the book (Beginning Linux Programming, Wrox Press, 1998 edition I think) explained select() first, so like the proverbial duckling that imprints on the first thing it sees moving after hatching, select() caught my heart.


And there I was thinking they would expose their fork() implementation.

Interested to see what this paper has to say.


While fork() might be sub-optimal for launching different programs (fork() + exec() vs. posix_spawn()), it's absolutely essential in several types of common systems that don't use it to launch different programs.

Fork-requiring program class 1:

The biggest example where fork() is needed are webservers/long-running programs with significant unchanging memory overhead and/or startup time.

Many large applications written in a language or framework that prefers the single-process/single-thread model for executing requests (e.g. Python/gunicorn, Perl, a lot of Ruby, NodeJS with ‘cluster’ for multicore, etc.) are basically dependent on fork(). Such applications often have a huge amount of memory required at startup (due to loading libraries and initializing frameworks/constant state). Creating workers that can execute requests in parallel but don’t require any additional memory overhead (just what they consume per request) is essential for them. fork()ing without exec()ing a new program facilitates this memory sharing; everything is copy-on-write, and most big webapps don’t need to write most of the startup-initialized memory they have, though they may need to read it.

Additionally, starting up such programs can take a long time due to costly initialization (seconds or minutes in the worst cases); using fork() allows them to quickly replace failed or aged-out subprocesses without having to pay that overhead (which also typically pegs a CPU core) to change their parallelism. “Quickly” might not be quick enough if a program needs to continually launch new subprocesses, but for periodically forking (or just forking-at-startup) long-running servers with a big footprint, it’s far better than re-initializing the whole runtime. For better or worse, we’ve come far enough from old-school process-per-request CGI that it is no longer feasible in most production deployments.
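
The pattern in miniature (a sketch; load_config(), build_caches(), serve_requests(), nworkers, and listen_fd are all hypothetical stand-ins):

  #include <unistd.h>
  #include <sys/wait.h>

  load_config();                      /* hypothetical: seconds of startup work */
  build_caches();                     /* big read-mostly state, built once */

  for (int i = 0; i < nworkers; i++) {
      if (fork() == 0) {              /* each worker shares that state via COW */
          serve_requests(listen_fd);  /* hypothetical request loop */
          _exit(0);
      }
  }
  while (wait(NULL) > 0)              /* real code would respawn dead workers */
      ;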

Anticipated rebuttals:

Q: Wouldn't it be nice if everyone wrote apps small enough that startup time was minimized and memory footprint was low?

A: Sure, but they won’t.

Q: People should just write their big, long-running services in a framework that starts fast, has low memory requirements, and uses threads instead of fork()s.

A: See previous answer. Also see zzzcpan’s response.

Q: Can you access some of those benefits with careful use of shared memory?

A: Yes, but it’s much harder to do than it is to use fork() in most cases (caveat Windows, but it’s still hard).

Q: Do tools exist in single-proc/single-thread forking frameworks/languages which switch from forking to hybrid async/threaded paradigms (like gevent) instead?

A: Yes, but they’re not nearly as mature, capable, or useful (especially when you need to utilize multiple cores).

Fork-requiring program class 2:

Programs which fork infrequently in order to parallelize uncommon tasks over shared memory. Redis does this to great effect; it doesn’t exec(), it just forks off a child process which keeps the memory image at the time of fork from the parent, and writes most of that memory state to disk so that the parent can keep handling requests while the child snapshots.

Python’s multiprocessing excels at these kinds of cases as well. If you’re launching and destroying multiprocessing pools multiple times a second, then sure, you’re holding it wrong, but many people get huge wins from using multiprocessing to do parallel operations on big data sets that were present in memory at the time multiprocessing fork()ed off processes. While this isn’t cross-platform, it can be a really massive performance advantage: no need to serialize data and pass it to a multiprocessing child (this is what apply_async does under the covers) if the data is already accessible in memory when the child starts. Node's 'cluster' module will do this too, if you ask nicely. Many other languages and frameworks support similar patterns: the common thread is making fork()ing parallelism "easy enough" with the option of spending a little extra effort to make it really really cheap to get pre-fork memory state into children for processing. Oh, and you basically don't have to worry about corrupting anyone else's in-memory state if you do this (not so with threads).
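
A sketch of that snapshot pattern (write_dataset_to_disk() and db are hypothetical; this is the shape of the trick, not Redis's actual code):

  pid_t pid = fork();
  if (pid == 0) {
      /* the child sees a frozen copy-on-write image of the whole heap */
      write_dataset_to_disk(db);   /* hypothetical: serialize in-memory state */
      _exit(0);
  }
  /* the parent returns to serving requests immediately; pages are only
     physically copied where the parent writes while the child is dumping */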

Anticipated Rebuttals:

Q: $language provides a really accessible way to use true threads that isn’t nearly as tricky as e.g. multiprocessing or knowing all the gotchas (e.g. accidental file descriptor sharing between non-fork-safe libraries) of fork(); why not use that?

A: Many people still prefer languages with primarily-forking parallelism[1] constructs for reasons besides their fork-based concurrency capabilities--nobody’s claiming multiprocessing beats goroutines for API friendliness--so fork() remains useful in much more than a legacy capacity.

Q: Why not use $tool which does this via threads or why not bind $threaded_language to $scripting_language and use threads on the other side of the FFI boundary?

A: People won’t switch. They won’t switch because it’s hard (don't tell me threaded Rust is as easy to pick up as multiprocessing--Rust has a lot of advantages in this space, but that ain't one of them) and because there’s a positive benefit to staying within a given platform, even if some infrequent tasks (hopefully your Python doesn’t invoke multiprocessing too much) are a bit more cumbersome than usual. Also, “Friendly, easy-to-use concurrency with threads” is often a very false promise. There’s a reason Antirez is resistant to threading.

--------------

TL;DR perhaps using fork() and exec() for launching new programs needs to stop. But fork() itself is absolutely essential for common real-world use cases.

[1] References to parallelism via fork() above assume you have more than one core to schedule processes onto. Otherwise it’s not that parallel.

EDITs: grammar. There will be several because essay. I won't change the substance.


There is one case where fork() is fantastic: as a way to dump a core of a running process while leaving the process running -- just fork() and abort()! But even this case should be handled by having something like gcore(1).
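
The whole trick is a couple of lines (sketch):

  if (fork() == 0)
      abort();        /* the COW child crashes; its core file is a snapshot
                         of the parent's state, and the parent keeps running
                         (remember to waitpid() the corpse eventually) */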

Another common use of fork() for things other than exec()ing is multi-process services where all processes keep running the same program. Arranging to spawn or vfork-then-exec self, and have the child realize it's a worker and not a (re)starter, is more work because a bunch of state needs to be passed to the child somehow (via an internal interface), and that feels hackish... And this case doesn't suffer much from fork()'s badness: you fork() early and have little or no state in the parent that could have fork-unsafety issues. But it's worth switching this use case to spawn or vfork-then-exec just so we have no use cases for fork() left.


These are both mentioned in the paper.


> While fork() might be sub-optimal for launching different programs (fork() + exec() vs. posix_spawn())

I don't think it is suboptimal. As the paper acknowledges, its primary use is to set up the environment of the program you are about to exec(). There are four points to be made about that:

1. If you don't need to set up the environment, it imposes almost no coding overhead. It reduces to "if (!(pid = fork())) exec(...)". That's hardly a huge imposition.

2. It doesn't seem to impose much runtime overhead either. If it did, Linux and the BSDs would have acquired a spawn() syscall ages ago. As it is, they all implement posix_spawn() using vfork()/exec(). Given we are talking about 30 years of history here, any claim that getting rid of fork() would give a noticeable performance boost should not be taken seriously without evidence.

3. If you do need to set up the environment, then yes, there are traps with threads and other things. As the paper says, it's terrible - but to paraphrase Churchill, the one thing it has in its favour is that it's better than all the other ways of doing the same thing. They actually acknowledge that how to replace the flexibility allowed by fork() is an open research question. "We think it's horrible, but we don't have an alternative" isn't a convincing argument.

4. For all its faults, fork() has one outstanding attribute - it's conceptually drop-dead simple: "create an exact copy of the process, the sole difference being that getpid() returns a different value". That translates to bugger-all code needed to implement it, few bugs, small man pages, and a simple interface. A replacement providing the same flexibility will be some hideously complex thing that tries to implement all the use cases people used fork() for. It will be big and hard to learn, hard to use correctly, take reams of code, and still won't do all that fork() allowed you to do. We will be complaining about it for decades to come.

I stopped reading the paper when they claimed O_CLOEXEC was an overhead imposed by fork(). It isn't. The telltale giveaway should be that it doesn't take effect on a fork() - it happens on the exec(), and the spawn() or whatever does exec()'s job. If you remove fork(), things like O_CLOEXEC are your only way to control what environment your child process gets. Therefore one outcome of removing fork() is the reverse of what they claim - you won't get fewer O_CLOEXECs, you will get many, many more of them as programmers clamour for ways to do the things fork() allowed them to do.


fork() must go too??!!


[flagged]


fork() is also missing from every other OS that isn't a Unix clone.


Interesting that Redis uses fork() for COW implementation.


It should be possible to achieve the same with mmap() and MAP_PRIVATE



It's hard to take them seriously when they imply that the mess that threads are is somehow acceptable and necessary, but the nicer, less error-prone, and simpler fork isn't. Threads are a nasty hack and a liability for the modern programmer. And systems researchers really should acknowledge that threads' continued existence as first-class OS primitives is holding back systems research much more than fork is. I guess they are looking to spread FUD and justify the mess that Windows got itself into, not doing actual research.


What's wrong with threads exactly?


Aliasable, mutable memory (i.e., race conditions) is evil, and threads perfuse the entire programming environment with it. This is a dirty implementation detail that operating-system kernels have to deal with, and we should be burying it in the same hole as memory swapping and TCP retransmits, not making it a fundamental hazard every application developer has to worry about.


I readily admit that I am unfamiliar with posix_spawn() and its benefits over fork().

However, may I point out that Microsoft SQL Server benchmarks have been posted showing Linux outperforming Windows on TPC-H?

https://www.dbbest.com/blog/running-sql-server-on-linux/

While I am sure this is wise criticism, it might also be concluded that Windows itself contains no small number of architectural decisions that limit performance.


Fork is quite excellent, except in cases when the intent is to run a different program or when threads are involved (threads are basically an incompatible, competing model of concurrency).

The use of fork as a concurrency mechanism (creating a new thread of control that executes in a copy of the address space) is very good and useful.

In the POSIX shell language, the subshell syntax (command1; command2; ...) is easily implemented using fork. This is useful: all destructive manipulations in the subshell like assignments to variables or changing the current directory do not affect the parent.
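
Roughly (a sketch; run_command() stands in for the shell's interpreter loop):

  int status;
  pid_t pid = fork();
  if (pid == 0) {                /* the subshell: a disposable copy */
      chdir("/tmp");             /* cd, variable assignments, etc.
                                    only affect the copy */
      run_command("command1");   /* hypothetical interpreter helper */
      run_command("command2");
      _exit(0);
  }
  waitpid(pid, &status, 0);      /* parent's cwd and variables are untouched */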

Check out the fork-based Perl solution to the Amb task in Rosetta code: https://rosettacode.org/wiki/Amb#Using_fork

This essentially simulates continuations (in a way). (If the parent process does nothing but wait for the child to finish, fork can be used to perform speculative execution, similar to creating a continuation and immediately invoking it).

Microsoft "researchers" can stuff it and their company's flagship piece of shit OS.


The paper agrees with you that the fork model had a reason to exist and that it is perfect for shells.

They also point out that on modern hardware you often want to write multithreaded, multiprocess applications.

Their main criticism of fork is that it does not compose at any level of the OS (as it cannot be implemented over a different primitive)

I understand that a lot of people here dislike Microsoft for good reasons (not only historical), but the drawbacks of fork() are well known and recognized; here they point out that it is also hard-to-impossible to implement as a compatibility layer if the kernel does not support fork.

Also:

> Microsoft "researchers" can stuff it and their company's flagship piece of shit OS.

Do you have any reason to insult Microsoft researchers? They cite plenty of other researchers in this paper who appear to agree with them. This type of comment does not appear constructive to me.


It's an idiotic argument. Only functions compose. Though fork is packaged as a function, it's really an operator with a big effect.

Booting a system doesn't compose; let's not have power-on reset and bootloaders.

Everything in this paper could have been cribbed from twenty-year-or-older Usenet postings, mailing lists, and other sources. Fork has been dissected ad nauseam; anyone who is anyone in the Unix-like world knows this.

Oh, and threads have perpetually been the way to go on current hardware --- every damn year since 1988 and counting.


Also levels of abstraction compose.

> Booting a system doesn't compose;

Actually this is false; virtual machines and hypervisors allow booting a system inside another system.


Virtual machines can be forked processes, and contain operating systems with forked processes, some of which are virtual machines ... fork composes!


As a function, obviously; the point is that it does not compose easily with other abstractions. That is, every other library and OS functionality needs to be fork-aware.

spawn does not have this requirement.


The concept of "fork aware" didn't exist until threads. You could argue it's a thread problem. Remember, every library and OS functionality aso needs to be "thread aware" when threads are introduced. The pthread_atfork function can be thought about as "what do we do about thread and thread paraphernalia when we fork" rather than "what do we do about fork when we have threads".

Even the close-on-exec flag race condition is a result of threads. Duplicating a file descriptor and setting its close-on-exec flag is a two-step process during which a fork can happen, causing a child to inherit the descriptor without the close-on-exec flag being set yet. But that can only happen if there are threads. (Or something crazy, like fork being called from an async signal handler.)


> You could argue it's a thread problem

But I explicitly want to not do it :) threads are obviously a good thing to have.

> every library and OS functionality aso needs to be "thread aware"

which is good, because unlike the fork case, thread-aware libraries/OS help performance. Fork-aware libraries/OS (in the fork+exec case) do not.


"Fork aware" is "thread aware". Hint: see the "pthread" substring in the identifier "pthread_atfork".

Note that this is necessary only because of the broken threading model that was retrofitted into Unix.

How it should work is that fork should clone the threads also. If a process with 17 threads forks, then the child has 17 threads. The thread IDs should be internal, so that all the pthread_t values in the parent space make sense in the child space and refer to the corresponding threads.

It's not fork's fault that the hacky thread design broke it. Fork is supposed to make a faithful replica of a process; of course if that principle is ignored in a major way (like, oops, where are the parent's threads?) then things are less than copacetic.

Threads also break the concept of a current working directory. If one thread makes a relative path access and another calls chdir, the result is a race condition.

Threads also break signals quite substantially; the integration of signal handling with threads is a mess.

Threads are not inherently a good thing to have; they are idiotic, in fact. Fork provides a disciplined form of threading that eliminates problems from the mutation of shared state, and provides fault isolation. It's much better to use forked processes instead of threads. Shared memory can be used for direct data structure access. With fork, you can create a shared anonymous mmap. This is then cloned into child processes as shared memory at the same virtual address.
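
A sketch of that shared-mmap-then-fork setup (MAP_ANONYMOUS is the Linux/BSD spelling; synchronization omitted):

  #include <sys/mman.h>
  #include <unistd.h>

  /* memory that stays genuinely shared (not COW) across fork() */
  size_t len = 1 << 20;
  char *shared = mmap(NULL, len, PROT_READ|PROT_WRITE,
                      MAP_SHARED|MAP_ANONYMOUS, -1, 0);

  if (fork() == 0) {
      shared[0] = 42;   /* visible to the parent at the same address;
                           guard real data with a process-shared mutex */
      _exit(0);
  }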


I realize that "compating" is a misspelling, but I prefer to read it as a portmanteau of "compatible" and "competing" and think it's quite an excellent word for that difficult concept except that it errs slightly too far on the "competing" side.


Yes, "competible" would be good for the opposite situation.


I haven't read the entire thing yet, but reading from "replacing fork" to the end, it reads too much like embrace, extend, extinguish.


It's suggesting posix_spawn, which is standardized and has nothing to do with Microsoft.


> Just as a programming course would not today begin with goto, we suggest teaching either posix_spawn() or CreateProcess(), and then introducing fork as a special case with its historic context (§2).

Or CreateProcess(), which has a lot to do with microsoft.


It really doesn't. Microsoft employs a guy named Dave Cutler who is credited with leading the development of Windows NT in the late 80s/early 90s. They hired him from DEC, where he... is credited with co-leading a research project that later became VMS. If you go look at OpenVMS programming manuals, you will see process creation calls (e.g. SYS$CREPRC, LIB$SPAWN) that look and behave a lot like CreateProcess().

I think it's well-known that Windows NT took a lot of ideas from VMS.

https://en.wikipedia.org/wiki/Dave_Cutler https://docs.microsoft.com/en-us/windows/desktop/api/process... https://www.itec.suny.edu/scsys/vms/OVMSDOC073/v73/5932/5932... http://www.itec.suny.edu/scsys/vms/OVMSDOC073/v73/5841/5841p...

I don't think we should ever forget how MS behaved through the mid 2000s. But we don't live in that world anymore, they aren't (capable of being) that company anymore, and I think we're at a point where dismissing research because of a connection to MS is not protecting anyone from anything.


> I think it's well-known that Windows NT took a lot of ideas from VMS.

Has the old theory that "Windows NT" aka "WNT" = "VMS + 1" ever been proved or disproved?


The fact it was called "NT OS/2" before Windows NT seems to indicate it was a happy accident.

(https://americanhistory.si.edu/collections/search/object/nma... the Smithsonian has early design docs with the original name on the spine).


Yeah, if you're using Windows you aren't going to be able to use it. Or are you suggesting that Microsoft should implement posix_spawn?


Can't you use posix_spawn() with WSL and your favorite POSIX-compatible libc implementation?


Well that's a complicated question to answer.

You can use the posix_spawn function in glibc, which uses a vfork or clone syscall just like on Linux.


Also relevant, regarding the Linux native performance:

https://mobile.twitter.com/RichFelker/status/602313979894038...

"Rich Felker, May 24, 2015: Some interesting preliminary timing of @musllibc 's posix_spawn vs fork+exec shows it ~25x faster for large parent processes. (~360us vs 9ms). #glibc has a vfork-based posix_spawn but it's only usable for trivial cases; others use fork. @musllibc posix_spawn always uses CLONE_VM. This also means @musllibc posix_spawn will fill the fork gap on NOMMU systems cleanly/safely (unlike vfork) once we get NOMMU working."

Also evilotto's post here:

https://news.ycombinator.com/item?id=19622477

"a 100mb process generally takes >2ms to fork, while a 1mb or less process takes 70us"


Glibc got its main clone-based implementation in 2016, so it should be much more competitive now.



It's just the equivalent, not EEE.


Nowhere there did the authors "embrace" fork(). Quite the contrary.

While the article points out that the NT kernel natively supports fork, it certainly isn't arguing for any extension of the call.

So all we're left with is "extinguish", which this article certainly does. And it is persuasive. I will look at posix_spawn() for my own code in the future.


Okay, this one has me laughing out loud. Of COURSE Microsoft doesn't like fork()... Windows pretty much can't do it. I'll admit, there have been a lot of times I wish there was a more streamlined way to spawn processes on Linux (particularly daemons) but when I don't have fork() I always end up missing it. I'd take this paper a lot more seriously if it came from someone with a less obvious bias.


The article pointed out legitimate drawbacks related to the intersection of fork() and other features like posix threads.

The paper mentions the benefit of posix_spawn for the fork+exec use case.

I might've seen posix_spawn while skimming a manpage or browsing a change log but this is the first time that I'd actually learned about its purpose.

The article's conclusion isn't "and therefore Linux is bad" btw.


As Linux developer and Windows hater, I agree with Microsoft. fork() is a hack.

Of course, all Windows APIs are terrible, but that doesn't make complaints about fork() any less legitimate. The concept of establishing empty processes, instead of cloning yourself, is much more sane.

After all, the use of fork() is 99% of the time just to call execve(), and anything done in between is just to clean up the mess from fork(). Having a dedicated way to just create processes in a controlled fashion would have been better there. And, the other 1% is usually cases where pthread should have been used instead.


Cleaning up your own process between fork and exec is hard. Several programs resort to terrible hacks like force-closing every file descriptor except 0, 1, and 2 in a loop. Or they look into their /proc directory to discover which file descriptors exist, which is only marginally better. But when your process is a house of cards built on third-party libraries with their own minds, there are not a lot of other options.


Use O_CLOEXEC everywhere (even in third-party libs). It's really annoying, but necessary. It means you need to use accept4(), dup3(), and popen() with an additional "e" (and of course all of that needs to be feature-tested at compile time/runtime).
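
E.g. (a sketch; lfd is a hypothetical listening socket, and the "e" popen mode is a glibc extension):

  #define _GNU_SOURCE     /* accept4, dup3, popen "e" are glibc/Linux extras */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int f = open("data.txt", O_RDONLY | O_CLOEXEC);
  int c = accept4(lfd, NULL, NULL, SOCK_CLOEXEC);  /* instead of accept() */
  int d = dup3(c, 10, O_CLOEXEC);                  /* instead of dup2()   */
  FILE *p = popen("ls", "re");                     /* "e" adds O_CLOEXEC  */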


The catch is that you may not be able to control 3rd-party libraries enough to be able to do all that. Thus all these annoying hacks. To me, the complexity of using fork() and the race conditions around pid reuse are the worst design problems of POSIX systems.


Win32 has the opposite semantics: close-on-exec (non-inheritable handles) is the default, and the app has to request the opposite if it wants it, and this causes problems too. There should have been two flags, and the application should have had to specify one on every handle-/fd-creating system call. Hindsight is 20/20.


> Of course, all Windows APIs are terrible, but that doesn't make complaints about fork() any less legitimate. The concept of Establishing empty processes, instead of cloning yourself, is much more sane.

I like the ease with which you can pass resources and data to the forked child from the parent, though. Otherwise I'd have to do a lot of serialization and deserialization, or use shared memory, or Unix sockets to pass fds, all of which also have their gotchas and are way more complicated and error-prone.


But if you pass resources and data from the forked child to the parent, you are already using shared memory.

And, in this case, it sounds like a thread would do exactly what you want, but without the oddities of fork().


The memory is not shared, but copied, so you don't have to care about concurrent memory access.


> And, the other 1% is usually cases where pthread should have been used instead.

Ummmm. No. Threads are a much harder API to get right. They can work in this area, but that's not the same as saying they're right for all/most cases in this area.

I think a sizable part of that remaining 1% (if it is that low) are programs that leverage fork as the very powerful right tool for the job. Many of those also happen to be widely-used programs crucial for the operation of web services and large-data-set processing.


> Ummmm. No. Threads are a much harder API to get right.

Ummmm. No. Threading is not a hard API to get right. It's very simple: You get a new executing thread in the same memory space. You can create them whenever you like without any side-effects. Now, don't trample on your memory. Read all you want from anywhere. If you want to write to shared memory, ensure both reads and writes are behind a mutex, or learn about atomics.

Fork(), on the other hand, is much trickier. Sure, you get a cloned memory space so you can trample all you want, but now you have to establish some form of IPC (which might itself end up requiring threading), and if you didn't fork() as the first thing in your process, you end up inheriting all sorts of state that you do not want. Threads and locks, for example, are now in limbo (depending on your unix flavor of choice), and you likely have a bunch of fd's that you did not want.

I cannot really think of any legitimate use-cases for fork() without exec(). There are legitimate use-cases for multi-process designs, but such designs are severely inconvenienced by fork(), as all they wanted to do was to start processes without inheriting state.

I also certainly cannot see any sensible argument for threading being harder than fork(), especially if you're just using it as a drop-in replacement where there will be no shared state after invocation outside of explicitly created communication channels.


> but now you have to establish some form of IPC

Shared memory for threads is a form of IPC too, except one where it's very easy to make a mistake and introduce concurrency bugs.

> I also certainly cannot see any sensible argument for threading being harder than fork()

You should read a paper or two on concurrency bugs, including ones in systems that use explicit communication channels, as CSP does.


Concurrency bugs can be eliminated with state-of-the-art static analysis (see Rust, Pony), with the exception of deadlocks, which you can easily introduce with multiple processes as well.

http://blog.rust-lang.org/2015/04/10/Fearless-Concurrency.ht...


> I also certainly cannot see any sensible argument for threading being harder than fork(), especially if you're just using it as a drop-in replacement where there will be no shared state after invocation outside of explicitly created communication channels.

Very well said. It gets no simpler than this. I think all too often, people try to complicate things where they don't need to. Always do the SIMPLEST thing that works well.


ONE of the authors is from Microsoft. The other THREE are at Boston University and ETH.


As the article points out, the NT kernel actually natively supports fork. It's just not exposed.


Well, couldn't. Whatever they're doing with LXSS and picoprocesses seems to be good enough.

I don't run Windows, so I'm far from unbiased here, but frankly, on the surface the fork/exec thing really does seem unnecessary and weird in the modern world, where we've come up with better ways to do concurrency than just raw threads and processes anyway.


> Windows pretty much can't do it.

Win32 API cannot do it. The underlying NT kernel can.


I thought Linux had clone(), which glibc calls for its implementation of fork().


Yes, the underlying syscall for fork() is clone [1], and the underlying syscall for exec*() is execve [2].

[1]: http://man7.org/linux/man-pages/man2/fork.2.html#NOTES

[2]: http://man7.org/linux/man-pages/man2/execve.2.html


Section 6: REPLACING FORK

> Alternative: clone().

> This syscall underlies all process and thread creation on Linux. Like Plan 9’s rfork() which preceded it, it takes separate flags controlling the child’s kernel state: address space, file descriptor table, namespaces, etc. This avoids one problem of fork: that its behaviour is implicit or undefined for many abstractions. However, for each resource there are two options: either share the resource between parent and child, or else copy it. As a result, clone suffers most of the same problems as fork (§4–5).
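For the unfamiliar, a Linux-specific sketch of that flag-per-resource model (glibc's clone() wrapper; drop a CLONE_* flag and the corresponding resource is copied rather than shared, which is exactly the either/or the paper objects to):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    static int child(void *arg) {
        (void)arg;
        printf("child running\n");
        return 0;
    }

    int main(void) {
        size_t sz = 1024 * 1024;
        char *stack = malloc(sz);              /* clone() needs an explicit child stack */
        int pid = clone(child, stack + sz,     /* stack grows down: pass the top */
                        CLONE_FILES | SIGCHLD, /* share the fd table, copy everything else */
                        NULL);
        waitpid(pid, NULL, 0);
        free(stack);
        return 0;
    }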


Which part of the paper made you laugh out loud?

Their arguments for why fork() is not a good fit these days seemed pretty reasonable to me.


I've been working with Unix systems for a long time. I too dislike fork(), even though I used to think it was the greatest thing. Here's a write-up of mine as to fork() being "evil": https://gist.github.com/nicowilliams/a8a07b0fc75df05f684c23c...


"every other system has a feature except us, and we are not going to add it (because reasons) even if it's very widely used"

also:

> When a fork syscall is made on WSL, lxss.sys does some of the initial work to prepare for copying the process. It then calls internal NT APIs to create the process with the correct semantics and create a thread in the process with an identical register context. Finally, it does some additional work to complete copying the process and resumes the new process so it can begin executing.

https://blogs.msdn.microsoft.com/wsl/2016/06/08/wsl-system-c...


They must have considered it many times (not least when making the partial POSIX support for NT) but felt that supporting it wouldn't help deprecate it either.

AFAIK it's only unix/Linux (POSIX) OSes that implement fork. Perhaps that's what you meant by "every other system", i.e. unix + clones/derivatives?


DRI's FlexOS 2.2 also had it; it was not a Unix clone per se.

The COMMAND SVC had/has 4 variants:

- Execute program (akin to posix_spawn)

- Chain program (akin to posix_spawn, and parent exit)

- Execute subprocess (start a thread, one supplies code + stack address)

- Execute fork process (ala fork, but one supplies code + stack address like with 'subprocess' above)

Originally it only had the first two forms, 2.1 added the subprocess form, 2.2 added the fork form.

It didn't have a direct equivalent to exec(), but did have an OVERLAY SVC which loaded fresh code into the process, and I expect that could be used to make something like exec(). Not that I ever tried, given there was no real need for it.

The other way to create an exec() like behaviour would have been with the CONTROL SVC, akin to ptrace(), but that would have been painful to do.


VAX/VMS implements "vfork()", which covers the most common use cases of fork.

VAX and VMS are not POSIX or UNIX-like.
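On the Unix side, the vfork() contract is similarly narrow: the child borrows the parent's address space and may only exec or _exit (a sketch; the path is a placeholder):

    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = vfork();
        if (pid == 0) {                          /* child: exec or _exit, nothing else */
            execl("/bin/echo", "echo", "hi", (char *)NULL);
            _exit(127);                          /* must not return from a vfork() child */
        }
        waitpid(pid, NULL, 0);
        return 0;
    }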


VAX is a hardware architecture. I'm not one to nitpick, but differentiating between VAX and VMS when VMS ran on the VAX is confusing.


On their POSIX compatibility layer.


We already have posix_spawn. I guess MS isn't aware of this.
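For reference, a minimal posix_spawn() sketch of what the fork+exec idiom collapses into:

    #include <spawn.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void) {
        pid_t pid;
        char *argv[] = { "ls", "-l", NULL };
        /* no fork, no cleanup window: spawn /bin/ls directly */
        if (posix_spawn(&pid, "/bin/ls", NULL, NULL, argv, environ) == 0)
            waitpid(pid, NULL, 0);
        return 0;
    }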

Methinks MS needs to focus on their own issues and leave the *nix world alone. While many people find their involvement in FOSS welcome, I do not and never have. They are still a for-profit company beholden to shareholders.

The purchase by MS of GitHub may, again, be welcomed by many, but I find it disastrous. I smell triple E here no matter what anyone says. This is why distros like Debian and Slackware are still so important. All *nix needs to do is start adopting MS ideas, and then it's a matter of time before distros adopt disastrous code like systemd. MS does want to control everything around them like every other for-profit company. I cannot see this any other way. They are involved for their own good, for things like Azure and their own "cloud". MS needs to focus on their own garden and not that of *nix. I always have and always will prefer the "us and them" mentality when dealing with MS. Don't forget EEE. It's still a reality should you care to look hard enough.


Talk about uncharitable. MS Research produces world-class research. They don't just research Windows, they do research in all operating systems, programming languages and more.


It's not about being "uncharitable". It's about protecting *nix from being controlled by outside forces. MS does, indeed, have world-class research, but they are sticking their heads in the *nix camp, which some of us don't like. We're not all in this together, despite what some will tell you.

Sadly, UNIX (umbrella term here) is not what it was a few years ago. I dearly miss Solaris, for example. Nothing touched it in its day, not even AIX or HP-UX. I was a UNIX admin for 10 years. I've used them all. Nothing MS can produce will ever be better than pure UNIX. There is a reason it's still being made. FreeBSD can outperform anything MS has on offer. Hell, they borrowed its networking code because they couldn't come up with anything better.

Not all of us see us all under the same tent. I surely don't and never will. It's us and them. To say otherwise would indicate we are all on a level playing field and all working together toward a common good. We're not. Good research aside, I don't like their history, stewardship, or just about anything else they do. Agenda...


Microsoft Research is not Microsoft. Microsoft Research employs some of the main Haskell developers, and you don't see Haskellers going all conspiracy theory. Research is research, and either the ideas they describe are good and should be adopted, or they're bad and should be ignored.


This may be true, but I don't want MS having ANY say on what goes into a Linux or FreeBSD OS. None. They have an agenda that doesn't fit in well with FOSS. Make no mistake about it: driving everything so that it works with Azure/VS, whatever, is about staying relevant in a world that is largely leaving them behind. Short of having to write PS at work (required), I haven't run anything MS at home since 1998 and have no need to do so. EEE is alive and well. Ask why they want *nix compatibility so badly. To extend their hegemony into everything. There is nothing MS offers that I need. Nothing. I'm about to set up a shop for some people that is completely and utterly MS-free. Cost will be only the HW. No software license costs. Freedom to do whatever. No stupid, arbitrary concurrent connection limits. FOSS all the way.


> This may be true, but I don't want MS having ANY say on what goes into a Linux or FreeBSD OS. None. They have an agenda that doesn't fit in well with FOSS.

False. MS is now one of the leading FOSS contributors. You're living in the past.


My living in the past is YOUR opinion. There are many millions of FOSS users who are highly opposed to MS having any influence whatsoever on FOSS. They have an agenda and it's never in the best interest of the FOSS crowd. Do you think they are doing what they do out of benevolence? It's done for MS software compatibility with FOSS, so users will choose Azure and their other cloud offerings. It's done purely to keep them in the game and relevant. No other reason. I heavily distrust MS, as over the years they have given many reasons not to trust them. Ever wonder why so many people abandoned GitHub after MS bought them?


The paper mentions posix_spawn... so I guess they are aware



