Hacker News

Throwing out a question to the room: if the worst happens and Nginx gets butchered, would it be so bad to go back to using Apache? I've never used it really, everything I've done has been Nginx, but is there some technical reason why Apache wouldn't be a fine fallback option for an open-source server?





If you like nginx, why would you fall back to apache instead of forking it, or using one of the many existing forks?

That's one of the best parts of OSS: you aren't forced to follow the upstream project to whatever version it releases next.

(One example is Tengine - it is really nicely set up, and a lot of people already prefer it to core nginx)


Tengine looks pretty interesting!

The main reason I switched to nginx from apache years ago was the configuration, which was much nicer from my side. If I were to switch away from nginx because something happened, I'd probably look for something similar configuration- and speed-wise. Caddy looks nice.

https://caddyserver.com/products/licenses

i would end up paying twice as much for caddy as i do for the server


You can use caddy commercially for free, the same as nginx. You just have to use a copy built from source by someone other than Light Code Labs, because the prebuilt binaries downloaded from the official caddy site are the only ones with the commercial licensing restriction.

I think caddy and nginx are both great pieces of software that have overly expensive commercial pricing.


I had the same impression, but mholt pointed out to me that building from source or using the GitHub binary releases are valid alternatives to paying for licensing.

Yea, Apache would be fine for 99% of people. Those pushing a huge number of requests might struggle, but there is HAProxy, amongst others, as an alternative.

HAProxy does load balancing better; it never was a primary feature of nginx. Varnish can do the caching. Apache/Lighttpd can serve files and CGI, but maybe not as efficiently.
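To illustrate the load-balancing piece, a minimal HAProxy config might look something like this (hostnames, ports, and addresses are placeholders, not from the thread):

```haproxy
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

# accept traffic on :80 and spread it across two app servers
frontend www
    bind *:80
    default_backend app

backend app
    balance leastconn          # send new connections to the least-busy server
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check
```

The `check` keyword enables health checks, so a dead backend is taken out of rotation automatically.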

Apache really struggles with resource consumption. It's still living in the world of one process or one thread per connection.

Operationally it always ends up in a clusterfuck of rewrite rules and there are many gotchas with undocumented and misbehaving directives.


>It's still living in the world of one process or one thread per connection

Not exactly - you choose the event MPM (mod_http2 won't even run with prefork MPM).


"It's still living in the world of one process or one thread per connection."

It hasn't lived in that world for a decade or so. It's great to give advice, but at least make it valid and factual. With the Apache 2.4 event MPM, httpd is async and event-driven and is just as fast as nginx.
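For reference, selecting the event MPM is a small config change - a sketch, since the exact module path and commands vary by distro and build:

```apache
# Debian/Ubuntu style: a2dismod mpm_prefork && a2enmod mpm_event
# Generic httpd.conf equivalent (module path varies by build):
LoadModule mpm_event_module modules/mod_mpm_event.so
```

Only one MPM module can be loaded at a time, which is why the prefork module has to be disabled first.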


Apache is full of gotchas as stated. For one, the event MPM is not async :p

It still creates multiple processes that get recycled periodically, each with a fixed number of threads. Every active request holds a thread.

Apache quickly runs into trouble with long-lived requests (slow API calls or large file downloads) or with websockets (each holds a thread permanently).

The tuning to balance processes, threads, connections, requests and resource consumption is extremely complicated and it doesn't get very far.

HAProxy and nginx can both handle 10k concurrent connections out of the box. Apache requires extensive tuning before 1k.
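To give a sense of the tuning surface being described, here is a sketch of the main event-MPM knobs - the numbers are purely illustrative, and the right values depend on RAM and workload:

```apache
<IfModule mpm_event_module>
    ServerLimit              16
    ThreadsPerChild          64
    MaxRequestWorkers        1024   # must be <= ServerLimit * ThreadsPerChild
    AsyncRequestWorkerFactor 2      # extra async connections allowed per idle thread
</IfModule>
```

Getting these four values to line up with connection limits and memory is the balancing act the comment above is complaining about.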


Apache is a good piece of software, nothing wrong with it. People like nginx because it's low maintenance but I'm not sure it still has the performance edge for _dynamic_ apps.

Apache has improved and now has things such as the event MPM, inspired by nginx. Nowadays most of us would run applications behind a proxy, not by running mod_php or mod_python directly - which made old school apps very slow.

Also, Apache is famously easy to configure. And nginx still absolutely rules when delivering static content. There is also Varnish Cache, which is very good.


Nginx has historically been easier to scale up out of the box. Not to say Apache can't be configured to be competitive, but there's at least the perception that it's not as good at concurrency [1].

[1] https://help.dreamhost.com/hc/en-us/articles/215945987-Web-s...


Life is nicer in many ways under apache.

What are some of these "many ways"? because I strongly favor the quality of life under nginx over that of apache. For smaller projects, these days I reach for Caddy which is even more pleasant to use than nginx.

My personal reasons:

- PHP under Apache is faster

- Small config files per site

- .htaccess at the folder level when required

I usually use both and serve php from apache and serve static content over nginx. Best of both worlds.
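That split can be sketched as an nginx front end that serves static files itself and proxies PHP to an Apache backend - ports, paths, and hostnames below are placeholders:

```nginx
# nginx in front: static files locally, .php requests to Apache on :8080
server {
    listen 80;
    root /var/www/example;

    location / {
        try_files $uri $uri/ =404;   # serve static content directly
    }

    location ~ \.php$ {
        proxy_pass http://127.0.0.1:8080;   # Apache + mod_php backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```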


.htaccess is the thing I strongly dislike about Apache.

With nginx the config is centralized in a specific location. With Apache, configs are all over the place.


Not OP, but here are some (IMO big) advantages Apache has over nginx (FOSS):

- Provides HTTP caching (via mod_cache)

- Simplifies the deployment stack by providing built-in interpreters for dynamic languages. e.g. mod_php (i.e. you don't need php-fpm), mod_wsgi (i.e. you don't need gunicorn)

- Load-balancer upstream affinity ("stickiness") based on HTTP cookies

- Load-balancing based on upstream's connection count (via mod_heartbeat)

- Built-in Let's Encrypt integration (via mod_md)

tl;dr provides a lot of features out of the box -- some people may see this as "bloat" or overly tight coupling, but this provides simplicity.
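As an example of that out-of-the-box feel, a mod_md setup is only a few lines - a sketch assuming Apache 2.4.30+ with mod_md and mod_ssl loaded, with a placeholder domain:

```apache
# mod_md manages the Let's Encrypt cert for these names
MDomain example.com www.example.com
MDCertificateAgreement accepted

<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
</VirtualHost>
```

No external client or cron job is involved; httpd fetches and renews the certificate itself.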


the FOSS version of nginx has great built-in caching, so I disagree there, but the others I can agree on, especially mod_php. I don't ever use PHP these days, so it's not something I routinely think about, but I'm sure Apache is better for PHP than nginx.
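For what it's worth, nginx's built-in caching is a couple of directives - a sketch where the cache path, zone name, and upstream address are all placeholders:

```nginx
# declare a cache zone (goes in the http context)
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m max_size=1g inactive=60m;

server {
    listen 80;
    location / {
        proxy_cache       app_cache;
        proxy_cache_valid 200 302 10m;     # cache successful responses for 10 minutes
        proxy_pass        http://127.0.0.1:8080;
    }
}
```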

I feel that the let's encrypt CLI clients are plenty good for nginx, to the point that installing a new module for apache (let alone nginx) is more work than just running a CLI client.
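The CLI route being described is typically something like certbot (domain is a placeholder here):

```shell
# obtain a cert and update the matching nginx server block in place
certbot --nginx -d example.com
# renewals are usually automated by the package; test with:
certbot renew --dry-run
```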

If you want TLS automation, Caddy is superbly good at fetching its own certs, and you don't have to fiddle with installing an additional module to do it.


Apache takes a few ms more for SSL termination.

It was generally something around 20-50ms extra the last time I did some benchmarks.


50ms is huge. I strongly suspect there is something more in play if you get that sort of perf difference between apache and nginx.




