This is why I recommend switching to HAProxy.
I mean, I'd definitely encourage people to use it for hobby projects, but if that's how the developers see their software, I would never trust them with anything serious.
Nonetheless, I still wouldn't be able to trust developers who think that's reasonable.
If it had been an error and unintentional I wouldn't have been worried; mistakes happen to everyone. But it was an actual design decision. Without serious code review I'd be too worried about what other bright ideas the developers have had.
Static file serving? Sure!
Load balanced proxying? mod_proxy_balancer is great!
Fine-grained caching? mod_disk_cache is also great.
Updating load balancer settings via an API?
mod_proxy_balancer supports a balancer-manager endpoint for doing live updates.
Monitoring? mod_status plus a Prometheus exporter.
Native LE support? https://github.com/icing/mod_md is going to be rolled into upstream Apache.
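Putting a few of those together, a minimal Apache 2.4 config sketch (backend addresses and paths are illustrative, not from the thread):

```apache
# Load balancing with mod_proxy_balancer (example backends)
<Proxy "balancer://mycluster">
    BalancerMember "http://10.0.0.1:8080"
    BalancerMember "http://10.0.0.2:8080"
</Proxy>
ProxyPass        "/app" "balancer://mycluster"
ProxyPassReverse "/app" "balancer://mycluster"

# Live balancer updates via the balancer-manager endpoint
<Location "/balancer-manager">
    SetHandler balancer-manager
    Require ip 10.0.0.0/8
</Location>

# mod_status for monitoring (a Prometheus exporter can scrape this)
<Location "/server-status">
    SetHandler server-status
    Require local
</Location>
```

Requires mod_proxy, mod_proxy_balancer, and mod_status to be loaded; the Require directives here are just example access restrictions.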
That said, netcraft says Apache still runs almost 25% of the internet, which is no small stake: https://news.netcraft.com/archives/category/web-server-surve...
The weakness of nginx is that it can't load modules dynamically, so if a feature isn't compiled in you need to roll your own build, which I won't do due to the maintenance burden.
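For context, the "roll your own build" path being avoided looks roughly like this (version number and module path are placeholders) and has to be repeated on every nginx upgrade, which is the maintenance burden in question:

```
wget https://nginx.org/download/nginx-1.18.0.tar.gz
tar xzf nginx-1.18.0.tar.gz && cd nginx-1.18.0
./configure --add-module=/path/to/third-party-module
make && sudo make install
```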
One thing that comes to my mind is that maybe this can't be solved by a module due to missing API in open source Nginx.
We're running v1.4 in production and it has been working pretty well for us.
You should post the issue on their GitHub repo (https://github.com/allinurl/goaccess); they may be able to help you.
Regarding the branding, for me top is a real-time tool rather than a logging tool. I was picturing something that may have been more useful for older style Apache httpd installs where you have several virtual hosts on a server and you'd want to know who is hogging the resources or causing the problems.
watch "topngx < /path/to/access.log"
Averages can lie, especially when something like an empty query can take close to zero time compared to a non-trivial transaction. If some robot or other artifact of your site is generating some number of null queries, that will make your average response time look better than it actually is. Percentiles, particularly at the tail of 90th or above, tell a better story of how well and consistently you're responding to traffic under load.
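A quick sketch of the effect with made-up numbers: a flood of near-zero "null" queries drags the mean down while the p90 still reflects what real transactions cost.

```python
import statistics

# Made-up latencies in ms: 60 cheap null queries plus 40 real transactions
latencies = [1] * 60 + [250] * 40

mean = statistics.mean(latencies)
# p90 via a simple nearest-rank estimate: the value below which
# 90% of the sorted samples fall
p90 = sorted(latencies)[int(0.9 * len(latencies)) - 1]

print(mean)  # 100.6 — the average looks healthy
print(p90)   # 250 — what users running real transactions actually see
```

The mean sits nowhere near either population; the tail percentile exposes the real transaction cost.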
And of course there's the separate problem of storing all your latencies accurately, which becomes pretty hard if you're using something like Prometheus.
#1: I'm gonna compile it and provide a screenshot via a pull request.
#2: Compilation failed because it needed the sqlite3 headers, and this isn't mentioned anywhere in the app. I'm gonna add that to the readme too :)
This year (especially with the whole covid thingy) I set a goal to contribute more to open source. I'm trying to contribute to every little issue I can find :P
Which presumably means those metrics are available in the OSS edition somehow...
Anyone know if there's a "deeper" way to get the same stats info about what Apache is doing without having to basically wait in line with all the other incoming requests?
Why make a whole new tool with limitations instead of improving the existing one?
What world does this guy live in where a program in Rust is easier to get running on any random machine than a Python script?
Distributing a single binary would be easier, but `cargo install xyz` seems harder than running a Python script.