> Since 1.6, HAProxy can forge SSL certificates on the fly!
Yes, you can use HAProxy with your company’s CA to inspect content.
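If I remember the 1.6 docs correctly, this is driven by the generate-certificates bind option, which signs a forged certificate per requested SNI with a CA you provide. A sketch (directive names from memory, file paths are placeholders; check the configuration manual before relying on this):

    frontend tls-inspect
        # Forge a certificate per SNI, signed by the company CA in company-ca.pem
        bind :443 ssl crt default.pem generate-certificates ca-sign-file company-ca.pem
        default_backend servers

Clients will of course only accept the forged certificates if the company CA is in their trust store.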
> Device Identification
(Considering it seems you need a proprietary lib)
> Processing of HTTP request body
All of these seem to be moving away from what HAProxy is/was. It's not an application server. It's not a web server. It's not a mail server.
I've not tried it - but there should be some overlap in use-cases, and it would appear to be a much smaller project than HAProxy.
Through our company, we have some customers who want us to integrate into HAProxy the ability to detect device type and characteristics and report them to the backend server. We got contributions from two companies expert in this domain: 51Degrees and DeviceAtlas. You can now load their libraries into HAProxy in order to fully qualify a client's capabilities and set up headers your application server can rely on to adapt the content delivered to the client, or let the Varnish cache server use them to cache multiple flavors of the same object based on client capabilities.
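As a rough sketch of what the 51Degrees integration looks like (directive, fetch, and property names here are from memory and the data file path is a placeholder — verify against the 1.6 configuration manual):

    global
        51degrees-data-file /usr/local/share/51Degrees.dat
        51degrees-property-name-list DeviceType IsMobile

    frontend www
        bind :80
        # Expose the detected properties to the backend (or to Varnish) as a header
        http-request set-header X-51D-Device %[51d.all(DeviceType,IsMobile)]
        default_backend app

The backend (or a Varnish instance keying its cache on the header) then sees something like X-51D-Device: SmartPhone,True.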
More on this blog later on how to integrate each product.
We measure two main aspects of performance:
- Per-request detection overhead: Willy's preferred way to measure impact on HAProxy performance is the per-request overhead added. DeviceAtlas adds a few µs per request, which typically isn't an issue. Example: if a load balancer is serving 20,000 requests per second at 80% CPU, that's 40µs of CPU time per request on a machine that can go up to 50µs. If we add 4µs to that, we reach 88% CPU under the same load, and end-user performance is not degraded in any meaningful way.
- Memory footprint: DeviceAtlas lets you configure the per-device property set to tailor the memory and performance impact. The resulting memory impact ranges from about 12MB to 100MB.
Since Nginx started adding more and more features to their load balancer, it makes sense for HAProxy to get a scripting engine.
EDIT: I guess part of my concern with Lua was the example in the linked article. It seems to be able to interface with most of the system rather than just act as a content source, which would make more sense.
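For what it's worth, the 1.6 Lua integration registers code through a core object exposed by HAProxy; a minimal hello-world service looks roughly like this (from memory of the Lua API docs, so treat it as a sketch rather than a verified example):

    -- hello.lua: register an HTTP service HAProxy can route requests to
    core.register_service("hello", "http", function(applet)
        local body = "Hello from Lua\n"
        applet:set_status(200)
        applet:add_header("content-length", string.len(body))
        applet:start_response()
        applet:send(body)
    end)

and in the config:

    global
        lua-load /etc/haproxy/hello.lua

    frontend www
        bind :80
        http-request use-service lua.hello

So yes, scripts run inside the proxy and can touch requests, responses, and services — more than just generating content.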
It's one of those "take apart your airplane and put it back together again" projects that one generally shouldn't do, though sometimes there is no way around it. If you need to rearrange the whole internal architecture of your engine, you should do it quickly and not mix it with new work.
Otherwise I love HAProxy!
Edit: solved this one myself with a little research. Systemd also implements the /dev/log Unix domain socket (even with the silly syslog facility names, without needing syslog installed) - so:
log /dev/log local0 info
If the admin wants the logs in syslog, 2>&1 | logger or similar. If the admin wants to use multilog, s6-log, or journald, that's also easy.
(For the same reasons, daemons should not include daemonization routines but should run in the foreground. If I want it in the background, I'll arrange for my process supervisor to start it.)
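Concretely, the foreground setup sketched above looks something like this under a process supervisor (the -db flag disables HAProxy's background mode; the config path is a placeholder):

    # Run in the foreground; let the supervisor own the process,
    # and pipe output to syslog if that's where logs should go
    haproxy -f /etc/haproxy/haproxy.cfg -db 2>&1 | logger -t haproxy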
If a daemon doesn't... daemonize into the background, it's no longer a daemon - just a process.
Silly loonix folks.
Please see the FreeBSD daemon(8) program or the FreeBSD Handbook, section 3.8.
log /dev/stdout local0 info
[ALERT] 286/160528 (20831) : sendto logger #1 failed: Connection refused (errno=111)
Is this possible with HAProxy? If it is, the documentation doesn't make it clear how.
HAProxy, by the way, is fantastic too: useful for hiding the real location of a web server from attacks.
Most people already have logging infrastructure in place for this.
Unix philosophy and all that.
Why is it nice?
Mail, though--yeah, nope.
But no http2 so it won't get in front of my nginx instances, yet ;)
Squid remains the only one that can deal with SSL proxying (yes, it's kind of a MITM, but it's needed sometimes), and it's also the real "pure" open source option. HAProxy might be a better fit for enterprises that need support?
Squid is a caching HTTP proxy, which began with forward proxying but also supports reverse proxying. I wouldn't regard it as being as relevant to modern, dynamic architectures as HAProxy or Varnish (another caching-focused project).
There's no real difference in open source purity between any of these projects, unless you dislike the stewardship of a company. HAProxy has existed for a long time without such stewardship (as has Varnish). Indeed, Squid's lack of commercial backing might be a hint as to its current relevance.
On the other side, you have Varnish Plus and Nginx Plus, which are closed source: their clients don't have access to the source code, so they don't know what they're running.
"Please don't sign comments; they're already signed with your username. If other users want to learn more about you, they can click on it to see your profile."