From skimming the GitHub page, the only things that stand out are shared-memory-based SSL contexts and UDP peer communication across processes/machines. I'm not sure, though, whether this is something HAProxy can also do; I've never needed that level of performance, so I've never looked.
Haproxy would probably work fine too, but seems a bit overkill for running a local termination proxy.
Ouch. This default means 16GB only lets you handle 80k connections. I would hardly call this "scalable". In 2015 you find blog posts left and right showing you how to reach 1 million connections on a single machine with language X or tool Y or framework Z. Maybe the developers should change this default.
Servers can have up to 1TB of ram without becoming overpriced.
But starving 80k users due to low buffering will be expensive in the long run. Way more expensive than RAM ;-)
Still, 200kB is excessive. A program with buffers no larger than 10kB can _easily_ saturate a 1 Gbit/s NIC. Hitch is designed to handle many concurrent connections, so even if it handled a paltry 10 connections it could easily saturate a 10 Gbit/s NIC with 10kB buffers. If not, then there is a design flaw somewhere.
"Servers can have up to 1TB of ram without becoming overpriced"
This is irrelevant. If a Hitch version had a default overhead of 10kB per connection, it could in theory scale to 20x as many connections as this version of Hitch, for a given amount of RAM (no matter the amount). Maximizing the use you get out of a given amount of hardware resources should be your priority when writing scalable software.
Let me ask a leading question: how much of this do you think is openssl overhead?
Please consider optimising for a real usage scenario, not some fantasy benchmarking setup.
The theoretical minimum memory usage is 4kB per connection: one page in kernel space and nothing in userland (e.g. using zero-copy to transfer data between sockets or to/from file descriptors).
At Google, our SSL/TLS overhead per connection is 10kB: https://www.imperialviolet.org/2010/06/25/overclocking-ssl.h...
Secure sockets have a lot more overhead than plain TCP sockets, and on top of that there is all the per-connection overhead any proxy has.
You should also set SSL_MODE_RELEASE_BUFFERS to reclaim memory from idle SSL connections.
* Copyright (c) 1991, 1993
* The Regents of the University of California. All rights reserved.
It's very much based on this:
Not very modern, in other words.
Whether that is normal or not is up to the developer. It's possible to use as small a subset of C++ as the developer wants, e.g. only templates. But staying in the C realm might be better for portability.
When I read this, I couldn't help but think of "macroses" as pronounced like "neuroses", which seems appropriate.
They first appeared for kernel usage, where the fact that they expanded to inline code avoided creating extra stack frames and helped optimization.
They do have the advantage of not relying on casting everything to "void *" or resorting to callbacks for walking (see TAILQ_FOREACH, for instance).
Unless it's changed since I last attempted it, Pound only supported a single wildcard cert when it came to SNI, whereas the Hitch code suggests it might play nicely with multiple wildcard certificates.
Edit: To clarify, I don't think Pound technically supported any wildcard certificates; the wildcard cert had to be the default to work.
We are open to merging any code changes necessary to get it running with LibreSSL, though.
This has pros and cons, but besides the current CA situation I think it's pretty clearly better than what we have today. That's not really the point though; it's going to happen, regardless of flaws.
Using software like Varnish that is intentionally HTTP-only will always be possible, but it introduces architectural and operational handicaps. It may not matter for a lot of use cases, but at large scale you are going to pay for the architectural choice to separate these functional units into multiple processes (or even boxes).
As much as I appreciate some of what Varnish can do, the no-SSL stance and associated mindset really puts me off of it.