Whether it matters depends on the workload. For some workloads it matters a ton, for others not a bit. And if you have n=cores processes, you can get some of the benefits of co-location and the like that the OS can do for you.
I'm not saying performance is not different between these types of solutions, or that the dev work is not sometimes different.
I’m saying that the penalties (including development effort) for work co-ordinated between processes compared to threads can vary from nearly zero (sometimes even net better) to terrible depending on the nature of the workload. Threads are very convenient programmatically, but also have some serious drawbacks in certain scenarios.
For instance, fork()'d sub-processes do a good job of avoiding segfaulting/crashing/deadlocking everything all at once if there is a memory issue or other problem while doing the work. It's also very difficult in native threading models to do per-work resource management or quotas (like a maximum memory footprint, max number of open sockets, or max number of open files), since everything is grouped together at the OS level and it's a lot of work to reinvent that yourself (which you'd need to do with threads), whereas separate processes get it from the OS for free (sketch below). Also, the same shared-memory convenience can cause some pretty crazy memory corruption in otherwise isolated areas of your system if you're not careful, which is not possible with separate processes.
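To make the quota point concrete, here's a minimal sketch (assuming a Unix platform, the stdlib resource and multiprocessing modules, and made-up limit values) of giving each forked worker its own address-space and file-descriptor caps. There is no per-thread equivalent of setrlimit, which is the point:

    import multiprocessing as mp
    import resource

    def init_worker():
        # Hypothetical quotas for illustration: cap this worker's address space
        # at 1 GiB and its open file descriptors at 256. rlimits apply to the
        # whole process, which is why threads can't do this independently.
        gib = 1 << 30
        resource.setrlimit(resource.RLIMIT_AS, (gib, gib))
        resource.setrlimit(resource.RLIMIT_NOFILE, (256, 256))

    def worker(n):
        data = bytearray(n)      # blowing past the cap raises MemoryError here,
        return sum(data) % 251   # in this worker only; siblings keep running

    if __name__ == "__main__":
        ctx = mp.get_context("fork")   # fork start method (Unix only)
        with ctx.Pool(processes=4, initializer=init_worker) as pool:
            print(pool.map(worker, [10_000_000] * 8))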
I do wish the Python concurrency model was better. Even with its warts, it is still possible to do a LOT with it in a performant way. Some workloads are definitely not worth the trouble right now, however.
Last time I did this, when the processes were fork()'d from the parent (the typical way this is done), the memory overhead was minimal compared to threads. A couple percent. That was somewhat workload dependent, however: if there is a lot of memory churn or data marshalling/unmarshalling happening as part of the workload, the processes will quickly diverge and you'll burn a ton of CPU doing it.
Typical ways around that include mmap'ing things or various types of shared-memory IPC, but that is a lot of work.
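For a rough idea of what the shared-memory route looks like in Python (a sketch using the stdlib multiprocessing.shared_memory module, available since 3.8; the buffer size and the per-chunk "work" are made up), the parent allocates one block and the workers operate on slices of it by name instead of pickling the data back and forth:

    import multiprocessing as mp
    from multiprocessing import shared_memory

    def worker(args):
        name, start, length = args
        shm = shared_memory.SharedMemory(name=name)  # attach; payload is not copied
        try:
            view = shm.buf[start:start + length]
            total = sum(view)                        # stand-in for real per-chunk work
            del view                                 # drop the view before closing
            return total
        finally:
            shm.close()

    if __name__ == "__main__":
        n, workers = 8_000_000, 4
        shm = shared_memory.SharedMemory(create=True, size=n)
        shm.buf[:n] = bytes(range(256)) * (n // 256)  # parent fills the block once
        chunk = n // workers
        jobs = [(shm.name, i * chunk, chunk) for i in range(workers)]
        with mp.Pool(workers) as pool:
            print(sum(pool.map(worker, jobs)))
        shm.close()
        shm.unlink()                                  # free the segment when done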
Also, generally no one spins up distinct, new processes for the 'coordinated distinct-process work queue' when they can just fork(), which should be way faster; and since pretty much every platform uses copy-on-write for fork(), it also has minimal memory overhead (at least initially).
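In Python terms that pattern looks roughly like this (a sketch assuming a Unix platform where the fork start method is available; the table and lookup are made up): the big structure is built once in the parent, and the forked workers read it through copy-on-write pages instead of receiving a pickled copy:

    import multiprocessing as mp

    # Built once in the parent. Forked children see these pages via copy-on-write,
    # so nothing is pickled or duplicated up front.
    BIG_TABLE = {i: i * i for i in range(1_000_000)}

    def worker(key):
        # Reads come straight out of the parent's pages; only pages a child
        # actually writes to get copied.
        return BIG_TABLE[key] + BIG_TABLE[key % 997]

    if __name__ == "__main__":
        ctx = mp.get_context("fork")   # fork start method (Unix only)
        with ctx.Pool(processes=4) as pool:
            print(pool.map(worker, [1, 7, 42, 999_999]))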
The problem is (perhaps amusingly) with refcounting. As the processes run, they'll each be doing refcount operations on the same module/class/function/etc. objects, which causes that memory to be unshared.
Only where there is memory churn. If you're in a tight processing loop (checksumming a file? reading data in and computing something from it?), the majority of objects never have their refcounts touched beyond the baseline.
Also, since copy-on-write generally works memory page by memory page, even if you were doing a lot of that, if most of those refcounts live in a small number of pages it's not likely to change much.
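For what it's worth, CPython has a knob aimed at exactly this pre-fork pattern: the gc docs recommend gc.disable() early in the parent, gc.freeze() right before fork(), and gc.enable() in the children, which keeps the child's GC passes from writing into inherited pages (refcounts on objects you actually touch will still unshare those pages). A sketch, assuming Python 3.7+ and the fork start method:

    import gc
    import multiprocessing as mp

    def worker(i):
        # Touching parent objects still writes their refcounts (unsharing those
        # pages), but untouched pages stay shared, and frozen objects are
        # skipped by the child's GC passes.
        return SHARED[i] * SHARED[-1]

    if __name__ == "__main__":
        gc.disable()                     # avoid leaving freed "holes" in parent pages
        SHARED = list(range(1_000_000))  # long-lived data built before forking
        gc.freeze()                      # park everything in the permanent generation
        ctx = mp.get_context("fork")
        with ctx.Pool(processes=4, initializer=gc.enable) as pool:
            print(pool.map(worker, range(4)))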
It would be good to get real numbers here, of course. I couldn't find anyone complaining about obvious issues with it in Python after a cursory search, though.