
I'm nervous about what this means for the future of nginx's built in load balancing. That's been an important and rock-solid part of my infrastructure.

I use F5s at work. The F5s do their job very well, but are very touchy. I've always considered nginx to be solid, stable, and low-maintenance. I think the opposite of F5, and I'm worried about this transition.

1) You can't export and re-import a config. Just doesn't work.

2) For the Virtual Appliance (not a recommended scenario by F5, to be fair) it's temperamental about its host and doesn't like to be migrated or moved, and will stop functioning.

3) Upgrades sometimes corrupt/wipe parts of the config, sometimes not.

4) Reboots sometimes corrupt/wipe parts of the config, and sometimes changes to the config are not actually applied until a reboot, even though there's no warning of this behavior.

5) The latest release notes for 12.x (the last version I worked with) have a lengthy page detailing known issues. https://support.f5.com/kb/en-us/products/big-ip_ltm/releasen... Why? Why are there so many? Why are most of them critical? Many just shouldn't have made it to release.

-edit- It's important to be hopeful. I'd rather land on the side of "nginx makes F5 better" rather than "F5 makes nginx worse".

1: How did you export? And did you try to import into a different unit?

4: In over 10 years of using F5 I've never seen a reboot break a config. The part about needing a reboot to make a config change apply might be https://support.f5.com/csp/article/K13253. In short, F5 keeps the old config in RAM and applies it to existing connections; when those expire, the old config is gone. Can be very annoying indeed when you are not aware of it. You can delete connections manually though. No need for a reboot.
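For reference, a sketch of clearing those lingering connections from the shell instead of rebooting (the server address is a placeholder):

```shell
# show connections still pinned to the old config for a given backend
tmsh show sys connection cs-server-addr 10.0.0.10

# delete them so traffic re-establishes under the current config
tmsh delete sys connection cs-server-addr 10.0.0.10
```

You can narrow the selection with other filters (client address, protocol) if deleting everything for a server is too blunt.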

3/5: Yes, all their releases have a list of all known issues. It's a bit odd indeed, but I think it's a good thing. Plan your upgrades carefully.

Having existing connections keep using the config that was active when they were initiated is intentional: it prevents inconsistencies while changes are being made. Those inconsistencies could otherwise lead to connections being dropped en masse after a config change.

I use F5s at work, here are some responses:

1) You can, if you are careful about what you're doing. UCS archives are only intended to be restored on the same machine; restoring them on a different machine requires handling the master key for secret encryption and omitting the license from the UCS. If you're restoring it on another F5 model, you might be going about things in a sub-optimal way.

2) This typically relates to how something like VMware deals with disk issues or migration. In (some versions of) VMware, if there is a disk IO operation that's taking too long, it will pause the VM. It also pauses the VM when you use DRS or live migration. If you're using a failover pair or cluster, this will cause a failover because, well, the other device went away for a period of time.

3) I've only ever seen ASM config lost after an upgrade, and it's usually because the unit was not relicensed before upgrading and as a result wasn't licensed for the new version.

4) I've seen this happen, and it was usually because something was in a bad state and rebooting just poked it into failing. Also, always save sys config after making changes in tmsh; it doesn't auto-save (same behavior as Cisco).
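For anyone new to tmsh, the save step looks like this (the pool name and member are hypothetical):

```shell
# make a change in the running config...
tmsh modify ltm pool my_pool members add { 10.0.0.20:80 }

# ...then persist it to disk, or it's gone on reboot
tmsh save sys config
```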

5) Because F5 has a policy of publishing a bug in the release notes and in bug tracker if it was ever seen by a customer. Contrast this with some other vendors who very selectively publish bug information.

Thanks for your responses. A good foil to my complaints. My experience is very selective.

I agree on 5, too - I'd rather have the bug published and at least know that it's been seen and is being worked on, but still, it's a little disheartening to look at an update you're about to install and see its list of breaks is longer than its list of fixes.

> [...] omitting the license from the UCS.

I believe this has improved in recent releases.

It used to be that if you tried importing a config, and you were not properly licensed, the import would bomb out in annoying ways.

In newer versions the import will at least pull in and save everything (AFAICT), but anything not licensed will simply not be active. Once you cut the proper cheques then things should work.

> 1) You can't export and re-import a config. Just doesn't work.

Besides normal archives (which are just tgz files) or manual config changes and re-loading, try "(tmsh) load sys config merge from-terminal" (or from a file). It is going to be a gamechanger. ;-) It takes any config snippet from "(tmsh) list ..." or the bigip.conf files without rewriting it into create/modify statements and add/replace-all-with blocks. No problem importing even large configs, say 100KB at once, all as one atomic transaction.
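A quick sketch of both forms (the file path is a placeholder):

```shell
# paste a config snippet straight into the running config, applied atomically
tmsh load sys config merge from-terminal
# ...paste the output of "tmsh list ltm virtual my_vs", then end input...

# or merge from a file instead
tmsh load sys config merge file /var/tmp/snippet.conf
```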

I would not be worried unless F5 has stated they are going to change the licensing. HAProxy has a commercial version of its software, including appliances. Several other open source projects have commercially supported versions.

nginx has had a commercial version of its code for a long time. Now it will be under new management. I could see the branding for the commercial version of nginx changing.

Throwing out a question to the room: if the worst happens and Nginx gets butchered, would it be so bad to go back to using Apache? I've never used it really, everything I've done has been Nginx, but is there some technical reason why Apache wouldn't be a fine fallback option for an open-source server?

If you like nginx, why would you fall back to apache instead of forking it, or using one of the many existing forks?

That's one of the best parts of OSS code - You don't just have to follow them to the next version they release.

(One example is Tengine - It is really nicely setup and a lot of people prefer it to core nginx already)

Tengine looks pretty interesting!

The main reason I switched to nginx from Apache years ago was the configuration, which was much nicer from my side. If I were to switch away from nginx because something happened, I'd probably find something similar configuration- and speed-wise. Caddy looks nice.


I would end up paying twice as much for Caddy as I do for the server.

You can use caddy commercially for free, the same as nginx. You just have to use a copy built from source by someone other than Light Code Labs, because the prebuilt binaries downloaded from the official caddy site are the only ones with the commercial licensing restriction.

I think caddy and nginx are both great pieces of software that have overly expensive commercial pricing.

I had the same impression, but mholt pointed out to me that building from source or using the Github binary releases are valid alternatives to paying for licensing.

Yea, Apache would be fine for 99% of people. Those pushing a huge number of requests might struggle, but there is HAProxy, among others, as an alternative.

HAProxy does load balancing better; it never was a primary feature of nginx. Varnish can do the caching. Apache/Lighttpd can serve files and CGI, but maybe not as efficiently.

Apache is really struggling on resource consumption. It's still living in the world of one process or one thread per connection.

Operationally it always ends up in a clusterfuck of rewrite rules and there are many gotchas with undocumented and misbehaving directives.

>It's still living in the world of one process or one thread per connection

Not exactly - you can choose the event MPM (mod_http2 won't even run with the prefork MPM).

"It's still living in the world of one process or one thread per connection."

It hasn't lived in that world for a decade or so. It's great to give advice, but at least make it valid and factual. With the Apache 2.4 event MPM, httpd is async and event-driven and is just as fast as nginx.
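On Debian-based systems, switching Apache 2.4 to the event MPM is roughly this (module names follow Debian's packaging; note mod_php requires prefork, so this assumes PHP runs via php-fpm):

```shell
# swap the prefork MPM for event
sudo a2dismod mpm_prefork
sudo a2enmod mpm_event
sudo systemctl restart apache2

# confirm which MPM is loaded
apache2ctl -V | grep -i mpm
```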

Apache is full of gotchas as stated. For one, the event MPM is not async :p

It's still creating multiple processes that get recycled periodically, each with a fixed number of threads. Every active request holds a thread.

Apache quickly runs into trouble with long-lived requests (slow API calls or large file downloads) or with websockets (which hold a thread permanently).

The tuning to balance processes, threads, connections, requests and resource consumption is extremely complicated and it doesn't get very far.

HAProxy and nginx can both handle 10k concurrent connections out of the box. Apache requires extensive tuning to reach even 1k.
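The knobs in question live in the event MPM config; a sketch for context (values are illustrative, not recommendations):

```apacheconf
# mpm_event tuning: worker capacity = processes x threads
<IfModule mpm_event_module>
    ServerLimit             16
    StartServers            4
    ThreadsPerChild         64
    MaxRequestWorkers       1024    # must be <= ServerLimit * ThreadsPerChild
    MaxConnectionsPerChild  10000   # recycle processes periodically
</IfModule>
```

Every active request still occupies a thread, which is why these numbers have to be balanced against RAM per process.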

Apache is a good piece of software, nothing wrong with it. People like nginx because it's low maintenance but I'm not sure it still has the performance edge for _dynamic_ apps.

Apache has improved and now has things such as Event MPM inspired by nginx. Nowadays most of us would run applications behind a proxy, not by running mod_php or mod_python directly - which made old school apps very slow.

Also Apache is notoriously easy to configure. And Nginx still absolutely rules when delivering static content. There is also Varnish Cache which is very good.

Nginx has historically been easier to scale up out of the box. Not to say Apache can't be configured to be competitive, but there's at least the perception that it's not as good at concurrency [1].

[1] https://help.dreamhost.com/hc/en-us/articles/215945987-Web-s...

Life is nicer in many ways under apache.

What are some of these "many ways"? because I strongly favor the quality of life under nginx over that of apache. For smaller projects, these days I reach for Caddy which is even more pleasant to use than nginx.

My personal reasons:

- PHP under Apache is faster

- Small config files per site

- .htaccess at the folder level when required

I usually use both and serve php from apache and serve static content over nginx. Best of both worlds.
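A minimal sketch of that split (paths and ports are placeholders): nginx serves static files directly and proxies PHP requests to an Apache instance listening on a high port.

```nginx
server {
    listen 80;
    root /var/www/example;

    # static content served directly by nginx
    location / {
        try_files $uri $uri/ @apache;
    }

    # dynamic requests go to Apache (mod_php) on 8080
    location ~ \.php$ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location @apache {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```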

.htaccess is the thing I strongly dislike about Apache.

With nginx the config is centralized in a specific location. With Apache, configs are all over the place.

Not OP, but here are some (IMO big) advantages Apache has over nginx (FOSS):

- Provides HTTP caching (via mod_cache)

- Simplifies the deployment stack by providing built-in interpreters for dynamic languages. e.g. mod_php (i.e. you don't need php-fpm), mod_wsgi (i.e. you don't need gunicorn)

- Load-balancer upstream affinity ("stickiness") based on HTTP cookies

- Load-balancing based on upstream's connection count (via mod_heartbeat)

- Built-in Let's Encrypt integration (via mod_md)

tl;dr provides a lot of features out of the box -- some people may see this as "bloat" or overly tight coupling, but this provides simplicity.
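For example, the cookie-based stickiness mentioned above takes only a few lines with mod_proxy_balancer (backend addresses are hypothetical):

```apacheconf
# sticky load balancing keyed on a ROUTEID cookie
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy "balancer://mycluster">
    BalancerMember "http://10.0.0.21:8080" route=1
    BalancerMember "http://10.0.0.22:8080" route=2
    ProxySet stickysession=ROUTEID
</Proxy>
ProxyPass        "/app" "balancer://mycluster"
ProxyPassReverse "/app" "balancer://mycluster"
```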

The FOSS version of nginx has great built-in caching, so I disagree there, but I can agree on the others, especially mod_php. I don't ever use PHP these days, so it's not something I routinely think about, but I'm sure Apache is better for PHP than nginx.

I feel that the let's encrypt CLI clients are plenty good for nginx, to the point that installing a new module for apache (let alone nginx) is more work than just running a CLI client.

If you want TLS automation, Caddy is superbly good at fetching its own certs, and you don't have to fiddle with installing an additional module to do it.
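For illustration, a whole Caddy (v1-era) site with automatic HTTPS is just this (domain and path are placeholders; certs are fetched from Let's Encrypt on first start):

```
example.com {
    root /var/www/example
    gzip
}
```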

Apache takes a few ms more for SSL termination.

It was generally something around 20-50ms extra, last time I did some benchmarks.

50ms is huge. I strongly suspect there is something more in play if you get that sort of perf difference between apache and nginx.

I too am worried about this. Thankfully F5 can't just disappear the open source nginx.

I'm still downloading the repo and a few versions just in case.

Not to mention the nginx waf product.

Luckily, it's not needed: there is H2O.
