
Performance of fork()/exec() in typical web applications? - dhmnix
https://www.postgresql.org/docs/9.6/static/tutorial-arch.html
======
dhmnix
I've read (and have been told) that CGI is a performance bottleneck, because
any CPU time saved is absorbed by a) kernel-land fork()/exec() calls and b)
disk input/output; and also that c) developer time is more valuable than CPU
time.
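
For reference, by "fork()/exec() per request" I mean roughly the pattern below.
This is only a simplified sketch, not how any particular server implements CGI;
error handling is stripped and the handler path is just a placeholder:

    /* Simplified sketch of classic CGI dispatch: one fork() plus one
       exec() per incoming request.  "/srv/cgi/handler" is a placeholder. */
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    void handle_request(int client_fd)
    {
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: wire the client socket to the CGI program's
               stdin/stdout, then replace this process image with
               the handler executable. */
            dup2(client_fd, STDIN_FILENO);
            dup2(client_fd, STDOUT_FILENO);
            execl("/srv/cgi/handler", "handler", (char *)NULL);
            _exit(127);                /* reached only if exec fails */
        }
        waitpid(pid, NULL, 0);         /* parent: reap the child */
        close(client_fd);
    }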

Regarding a) I find it something of a contradiction to be told NOT to use CGI
because it will degrade performance, then to be told you shouldn't care about
performance (of interpreted languages) because it's easy to "throw more
hardware" at the problem. I also find the argument against a fork()/exec()
approach to be contradictory, when many databases fork() a new process per
connection (e.g., the PostgreSQL documentation says "The PostgreSQL server can
handle multiple concurrent connections from clients. To achieve this it starts
("forks") a new process for each connection.").

Regarding b) many queries (in my use cases) are repetitions of previous
queries with minor modifications, so the OS and database caching layers mean
that fewer I/O calls actually hit the disk.

Regarding c) I run a tool to automatically (and incrementally) build my
software every time I save a source file, so the "edit-compile-test" cycle of
compiled languages does not really apply - all I do is edit my code and
refresh my browser.

What are other people's experiences in this area? I'm hoping for some answers
from "old school developers" as well as younger developers. I've been taken in
by the "new shiny toys and architectures" on a number of occasions, and while
I can see the benefit of designs such as the Actor Model, the longer I work in
software, the more I find simplicity winning out.

