CGI programs have an attractive one step deployment model (utcc.utoronto.ca)
31 points by ingve | 16 comments



It makes sense that anything that existed at usable speeds in the 90s would be absurdly fast today. Even the overhead of spawning a new process is virtually unnoticeable on any vaguely modern hardware.
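
As a rough illustration (a quick Python sketch, not a rigorous benchmark; the /bin/true path assumes a Unix-like system), spawning a trivial process typically costs on the order of a millisecond on modern hardware:

    # Time fork+exec of a trivial command N times.
    # Illustrative only; numbers vary by machine and OS.
    import subprocess, time

    N = 100
    start = time.perf_counter()
    for _ in range(N):
        subprocess.run(["/bin/true"], check=True)
    elapsed = time.perf_counter() - start
    print(f"{elapsed / N * 1000:.2f} ms per spawn")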


I’m glad we have no need for such straightforward, easy and fast methods today.


I appreciate the sarcasm, but yes, I write CGI programs all the time. I go from conception to deployment in minutes.
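
For a sense of scale, a complete CGI program can be a handful of lines; here is a minimal Python sketch (the web server passes request metadata in environment variables and reads the response from stdout):

    #!/usr/bin/env python3
    # Minimal CGI program: headers, a blank line, then the body,
    # all on stdout.
    import os

    print("Content-Type: text/plain")
    print()  # blank line ends the CGI response headers
    print("Hello from CGI!")
    print("Query string:", os.environ.get("QUERY_STRING", ""))

Deployment is a chmod +x and a copy into the server's cgi-bin directory; there is no build or restart step.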

I was looking at one such system (a bash one) that's approaching its 10th anniversary. It has outlasted so many systems it's unbelievable.

Do the minimum job you need, just well enough.


Was it sarcasm? You'd be surprised. Some people love complexity for the sake of complexity.


Occasionally, simplicity comes in handy: for understanding what you're doing, and for moving fast.

CGI can be simpler than FastCGI, which can be simpler than some huge automatic deployment system atop CI and Docker and containerization and Kubernetes and someone else's server pool a thousand miles away.

When I implemented the SCGI protocol (which is similar to FastCGI) for Racket, I intentionally made it work as both SCGI and CGI. The application code is the same for both backends, and the `scgi` package auto-detects which it should use. https://docs.racket-lang.org/scgi/
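
The dual-backend idea can be sketched like so. This is a conceptual Python illustration of CGI/SCGI auto-detection, not the Racket package's actual implementation, and it assumes the whole SCGI request fits in one recv():

    import os, socket, sys

    def handle(environ):
        # Application code: identical for both backends.
        return ("Content-Type: text/plain\r\n\r\n"
                "Method: " + environ.get("REQUEST_METHOD", "?") + "\n")

    def run_cgi(handler):
        # CGI: the server set our environment and connected
        # stdin/stdout; serve one request, then exit.
        sys.stdout.write(handler(dict(os.environ)))

    def run_scgi(handler, port=4000):
        # SCGI: long-running; each connection begins with a netstring
        # of NUL-separated headers playing the role of CGI's env vars.
        srv = socket.create_server(("127.0.0.1", port))
        while True:
            conn, _ = srv.accept()
            data = conn.recv(65536)  # simplification: one recv
            length, _, rest = data.partition(b":")
            fields = rest[:int(length)].split(b"\0")
            environ = {fields[i].decode(): fields[i + 1].decode()
                       for i in range(0, len(fields) - 1, 2)}
            conn.sendall(handler(environ).encode())
            conn.close()

    # Auto-detect: under CGI the web server sets GATEWAY_INTERFACE.
    if os.environ.get("GATEWAY_INTERFACE", "").startswith("CGI/"):
        run_cgi(handle)
    else:
        run_scgi(handle)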

Of course in production you'll probably add layers atop this, but you could usually drop down to just running the CGI script on your laptop in some ad hoc debugging setup.


You can just run a Docker container on your laptop in some ad hoc debugging setup though. That's kind of the point, that it works the same on your laptop as in someone else's server pool a thousand miles away. CGI doesn't, since it is sensitive to web server, version, configuration, etc.

In practice, of course, Docker/containerization also has various issues, but that's just down to the inherent complexity of the problem space. Thinking CGI solves all your problems is like thinking reverting to a caveman lifestyle solves all your modern first-world problems.


I said "Occasionally". If things aren't overly complicated, and you understand a lot about how they work, you usually have a good idea of when the debugging you need to do can be isolated from a huge mega-enterprise CI/deployment infrastructure, so you can do it much more rapidly, with low-level tools, when that's helpful.

The "works the same on your laptop" argument is important for some purposes (and that's one of the reasons I run Debian Stable on everything normally), but I think it gets misunderstood by many people who would have no idea how to run even a simple one-off application without big infrastructure.

I don't recall CGI being sensitive to web server, version, or configuration in practice. The CGI interface should behave the same today as it did 30 years ago.


> FastCGI doesn't have as simple a deployment model as CGIs generally offer

That may be true in general, but I don't think it's a universal truth. It's been well over a decade, but ISTR that running PHP in Apache as FastCGI was a matter of a few directives in httpd.conf: no application servers, and a deployment model identical to that of plain CGI. Am I misremembering?
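
For reference, a mod_fcgid-era httpd.conf excerpt looked roughly like this (directives from that module's documentation; paths illustrative):

    LoadModule fcgid_module modules/mod_fcgid.so
    AddHandler fcgid-script .php
    FcgidWrapper /usr/bin/php-cgi .php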


CGI is just a simple 'plugin' standard for a web server's request processing path. Instead of having internal code process the request, it's sent out to an outside application. Historically, a web server process was executed from the inetd supervisor, which controlled socket mapping and connection accepts. So the inetd daemon managed TCP, httpd managed HTTP, and the CGI handler managed the content.
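
Concretely, the handoff is just environment variables plus pipes; a server's CGI invocation looks roughly like this simplified Python sketch (ignoring POST bodies, most headers, and error handling):

    import subprocess

    def invoke_cgi(script_path, method, query, remote_addr):
        # Request metadata goes into the environment...
        env = {
            "GATEWAY_INTERFACE": "CGI/1.1",
            "REQUEST_METHOD": method,
            "QUERY_STRING": query,
            "REMOTE_ADDR": remote_addr,
            "SERVER_PROTOCOL": "HTTP/1.1",
        }
        # ...and the program's stdout (CGI headers + body) becomes
        # the HTTP response relayed to the client.
        return subprocess.run([script_path], env=env,
                              capture_output=True).stdout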

There were multiple execs in a request chain, which was pretty slow. Web servers started to consolidate both ends under the same process: whatever was executed via CGI could be compiled into a dynamic library for direct interfacing, and the web server could implement its own connection handler with multiple concurrency strategies instead of the simplistic fork-per-connection one done by inetd.


It's mostly all nice and fine with CGI until you deploy it in the real world and then find that, under a load spike, it's not 500 requests/second that you can serve from a given box; rather, it falls to 50 requests/second. The numbers are only an example and will depend on your hardware, but the proportion is the point.

Sure, you can find a single-copy deployment solution with FastCGI too, like PHP does it, but one has to feel the need first.


CGI is just a specification. It has absolutely nothing to do with request processing or load balancing. The web server context processing an HTTP request passes it on via CGI to an external handler. What that context is (a thread, a thread pool, a forked process, etc.) has nothing to do with CGI itself.


My blog [1] is still served up as a CGI program [2]. Never had a problem with it.

[1] https://boston.conman.org/ [2] https://github.com/spc476/mod_blog


I have always been fascinated by CGI, since you can offload a lot of work to the web server and then concentrate on the real business logic.

Less documented and talked about are performance tracking and optimisation of the code that runs under it.


You lose flexibility though. On the JVM, application containers are basically the same idea, so we started out that way. But the industry arrived at an embedded model, where the application would ship with the servlet engine and be responsible for starting the webserver, so as to be able to control every aspect of the web stack.


Which you'd then promptly put behind a proxy, since you want to handle things like certificates, authentication, etc. in a different way.


Exactly this



