1) You can't export and re-import a config. Just doesn't work.
2) For the Virtual Appliance (not a recommended scenario by F5, to be fair) it's temperamental about its host and doesn't like to be migrated or moved, and will stop functioning.
3) Upgrades sometimes corrupt/wipe parts of the config, sometimes not.
4) Reboots sometimes corrupt/wipe parts of the config, and sometimes changes to the config are not actually applied until a reboot, even though there's no warning of this behavior.
5) Latest release notes for 12.x (last version I worked with) has a lengthy page detailing known issues. https://support.f5.com/kb/en-us/products/big-ip_ltm/releasen...
Why? Why are there so many? Why are most of them critical? Many just shouldn't make it to release.
It's important to be hopeful. I'd rather land on the side of "nginx makes F5 better" rather than "F5 makes nginx worse".
4: In over 10 years of using F5 I've never seen a reboot break a config. The part about needing a reboot to make a config change apply might be https://support.f5.com/csp/article/K13253. In short, F5 keeps the old config in RAM and applies it to existing connections. When the latter expire, the old config is gone. Can be very annoying indeed when you are not aware of it. You can delete connections manually though. No need for a reboot.
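For reference, the manual connection cleanup mentioned above can be done from tmsh; the client address below is just a placeholder:

```shell
# Inspect connection-table entries for a given client
tmsh show sys connection cs-client-addr 10.0.0.5

# Delete them so the old in-RAM config stops applying to those flows
tmsh delete sys connection cs-client-addr 10.0.0.5
```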
3/5: Yes, all their releases have a list of all known issues. It's a bit odd indeed, but I think it's a good thing. Plan your upgrades carefully.
1) You can, if you are careful about what you're doing. UCS archives are only intended to be restored on the same machine; restoring them on a different machine requires considering the master key used for secret encryption and omitting the license from the UCS. If you're restoring it on a different F5 model, you might be going about things in a sub-optimal way.
2) This typically relates to how stuff like VMware deals with disk issues or migration. In (some versions of) VMware, if a disk I/O operation is taking too long, it will pause the VM. It also pauses the VM when you use DRS or live migration. If you're using a failover pair or cluster, this will cause a failover because, well, the other device went away for a period of time.
3) I've only ever seen ASM config lost after an upgrade, and it's usually because the unit was not relicensed before upgrading and wasn't licensed for the new version as a result.
4) I've seen this happen, and it was usually because something was in a bad state and rebooting just poked it into failing. Also, always save the sys config after making changes in tmsh; it doesn't auto-save (same behavior as Cisco).
5) Because F5 has a policy of publishing a bug in the release notes and in bug tracker if it was ever seen by a customer. Contrast this with some other vendors who very selectively publish bug information.
I agree on 5, too - I'd rather have the bug published and at least know that it's been seen and is being worked on, but still, it's a little disheartening to look at an update you're about to install and see its list of breaks is longer than its list of fixes.
I believe this has improved in recent releases.
It used to be that if you tried importing a config, and you were not properly licensed, the import would bomb out in annoying ways.
In newer versions the import will at least pull in and save everything (AFAICT), but anything not licensed will simply not be active. Once you cut the proper cheques then things should work.
Besides normal archives (which are just tgz files) or manual config changes and re-loading, try "(tmsh) load sys config merge from-terminal" (or "file"). It is going to be a gamechanger. ;-) It takes any config snippet from "(tmsh) list ..." output or the bigip.conf files without rewriting it into create/modify statements and add/replace-all-with blocks. No problem importing even large configs, 100KB or so at once, all as an atomic transaction.
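As a sketch of that workflow (the virtual-server name, pool and address are made up), you paste a `list`-style snippet straight back in:

```shell
tmsh load sys config merge from-terminal
# paste the snippet, then Ctrl-D:
ltm virtual vs_example {
    destination 192.0.2.10:443
    ip-protocol tcp
    pool pool_example
}

# or from a file; a dry run first is a good habit
tmsh load sys config merge file /var/tmp/snippet.conf verify
tmsh load sys config merge file /var/tmp/snippet.conf
tmsh save sys config
```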
Nginx has had a commercial version of their code for a long time. Now it will be under new management. I could see the branding for the commercial version of nginx change.
That's one of the best parts of OSS code - You don't just have to follow them to the next version they release.
(One example is Tengine - It is really nicely setup and a lot of people prefer it to core nginx already)
I would end up paying twice as much for Caddy as I do for the server.
I think caddy and nginx are both great pieces of software that have overly expensive commercial pricing.
Apache is really struggling on resource consumption. It's still living in the world of one process or one thread per connection.
Operationally it always ends up in a clusterfuck of rewrite rules and there are many gotchas with undocumented and misbehaving directives.
Not exactly - you choose the event MPM (mod_http2 won't even run with prefork MPM).
It hasn't lived in that world for a decade or so. It's great to give advice, but at least make it valid and factual. With the Apache 2.4 event MPM, httpd is async and event-driven and is just as fast as nginx.
It's still creating multiple processes that get recycled periodically, each with a fixed number of threads. Every active request holds a thread.
Apache quickly runs into trouble with long-lived requests (slow API calls or large file downloads) or with websockets (which hold a thread permanently).
The tuning to balance processes, threads, connections, requests and resource consumption is extremely complicated and it doesn't get very far.
HAProxy and nginx can both handle 10k concurrent connections out of the box. Apache requires extensive tuning before 1k.
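For illustration, the knobs being balanced look roughly like this in an mpm_event config (the numbers here are arbitrary placeholders, not recommendations):

```apache
# mpm_event tuning: total concurrent threads = ServerLimit x ThreadsPerChild
<IfModule mpm_event_module>
    ServerLimit              16
    StartServers             4
    ThreadsPerChild          64
    MaxRequestWorkers        1024   # must be <= ServerLimit * ThreadsPerChild
    MaxConnectionsPerChild   10000  # recycle worker processes periodically
</IfModule>
```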
Apache has improved and now has features such as the event MPM, inspired by nginx. Nowadays most of us run applications behind a proxy rather than running mod_php or mod_python directly, which made old-school apps very slow.
Also Apache is notoriously easy to configure.
And Nginx still absolutely rules when delivering static content.
There is also Varnish Cache which is very good.
- PHP under apache is faster
- Small config files per site
- .htaccess at the folder level when required
I usually use both and serve php from apache and serve static content over nginx. Best of both worlds.
With nginx the config is centralized in a specific location. With Apache, configs are all over the place.
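A minimal sketch of that split (hostname, port and paths are assumptions): nginx serves static files itself and proxies PHP requests through to an Apache+mod_php backend on another port:

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/site;

    # static content served by nginx directly
    location / {
        try_files $uri $uri/ @apache;
    }

    # PHP requests go to the Apache backend
    location ~ \.php$ {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # fallback for anything not found on disk
    location @apache {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```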
- Provides HTTP caching (via mod_cache)
- Simplifies the deployment stack by providing built-in interpreters for dynamic languages. e.g. mod_php (i.e. you don't need php-fpm), mod_wsgi (i.e. you don't need gunicorn)
- Load-balancer upstream affinity ("stickiness") based on HTTP cookies
- Load-balancing based on upstream's connection count (via mod_heartbeat)
- Built-in Let's Encrypt integration (via mod_md)
tl;dr provides a lot of features out of the box -- some people may see this as "bloat" or overly tight coupling, but this provides simplicity.
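As an example of the built-in Let's Encrypt support mentioned above, a minimal mod_md setup looks roughly like this (the domain is a placeholder):

```apache
# mod_md: Apache obtains and renews the Let's Encrypt cert itself
LoadModule md_module  modules/mod_md.so
LoadModule ssl_module modules/mod_ssl.so

MDomain example.com www.example.com
MDCertificateAgreement accepted

<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
    # no SSLCertificateFile needed; mod_md supplies the certificate
</VirtualHost>
```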
I feel that the let's encrypt CLI clients are plenty good for nginx, to the point that installing a new module for apache (let alone nginx) is more work than just running a CLI client.
If you want TLS automation, Caddy is superbly good at fetching its own certs, and you don't have to fiddle with installing an additional module to do it.
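For comparison, a Caddyfile that gets automatic HTTPS with no extra modules or cron jobs (site name, paths and upstream are placeholders):

```
# Caddy fetches and renews the certificate for this site automatically
example.com {
    root * /var/www/site
    file_server
    reverse_proxy /api/* localhost:8080
}
```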
It was generally around 20-50 ms extra the last time I ran some benchmarks.