I pretty much followed the instructions here:
Edit: a better write-up is here: https://blog.kowalczyk.info/article/Jl3G/https-for-free-in-g...
I didn't believe it could be that simple, but it worked the first time and has proven really robust.
Edit: Taken from certmagic docs
    // plaintext HTTP, gross

    // encrypted HTTPS with HTTP->HTTPS redirects - yay!
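For context, here's roughly what those two comments sit next to in the certmagic docs; a minimal sketch, assuming the github.com/caddyserver/certmagic import path (it was github.com/mholt/certmagic at the time) and example.com as a placeholder domain:

    package main

    import (
        "fmt"
        "net/http"

        "github.com/caddyserver/certmagic"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello over TLS")
        })

        // plaintext HTTP, gross
        // http.ListenAndServe(":80", mux)

        // encrypted HTTPS with HTTP->HTTPS redirects - yay!
        // certmagic obtains and renews the certificate automatically.
        if err := certmagic.HTTPS([]string{"example.com"}, mux); err != nil {
            panic(err)
        }
    }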
That's interesting. Is that a common optimization? I hadn't heard of any other web server doing that.
It's not terribly exact; ±1,000 bytes probably wouldn't make a big difference, but I think it's a good default.
And of course, some people may have unique use cases where a custom threshold may be better.
If an attacker controls /some/ of this data and would like to read /other parts/, they can abuse compression to measure whether the parts they don't know are "like" the part they control: if they are, the compression makes the result shorter than it would otherwise be, which they can passively measure.
It's not a problem to move a compressed object over a secure channel on its own; the problem arises if you either compress a channel that is carrying objects from different origins (e.g. a cookie set by a random advertising web site and your Facebook password) or compress a composite object, e.g. your backups mixed with a file you downloaded from a dodgy "pirate" video site.
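To make the length side channel concrete, here's a rough sketch (the secret and the guesses are made up; the point is that encryption hides content but not size, so an attacker who can get their guess compressed alongside the secret learns something from the length alone):

    package main

    import (
        "bytes"
        "compress/flate"
        "fmt"
    )

    // compressedLen stands in for the ciphertext length an attacker can
    // observe on the wire, which tracks the compressed size.
    func compressedLen(data []byte) int {
        var buf bytes.Buffer
        w, _ := flate.NewWriter(&buf, flate.DefaultCompression)
        w.Write(data)
        w.Close()
        return buf.Len()
    }

    func main() {
        secret := []byte("session=s3cretvalue;")

        // The attacker gets each guess reflected into the same compressed
        // stream as the secret (e.g. via a URL parameter) and watches sizes.
        for _, guess := range []string{"session=a", "session=s3cr", "session=s3cretval"} {
            payload := append([]byte(guess), secret...)
            fmt.Printf("%-20q -> %d bytes\n", guess, compressedLen(payload))
        }
        // The more of the secret a guess matches, the longer the back-reference
        // DEFLATE can use, so the output tends to shrink as the guess improves.
    }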
This came up in 2012 with the CRIME attack (from the same researchers as the BEAST exploit) and later with the BREACH vulnerability (back when it was considered cool to come up with a catchy-sounding name, a logo and a website for each specific vulnerability).
For example, https://github.com/expressjs/compression/blob/dd5055dc92fdea...
> Even if both the client and the server supports the same compression algorithms, the server may choose not to compress the body of a response, if the identity value is also acceptable.
I've actually run into this twice in my career and it was a surprise to those around me both times. Both times it was in the context of small payloads where the server applies some heuristic about whether to encode or not (e.g. a status page stops sending gzipped output when the server is becoming "unhealthy").
This makes obvious sense once you consider that the client tells the server which compression formats it supports in every request, yet not every data format is compressible, nor does the server necessarily support any candidate compression format.
For example, the server wouldn't gzip a jpeg since it's already compressed.
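As a sketch of that kind of heuristic (the 1,000-byte cutoff and the content-type check are made up for illustration, not taken from any particular server):

    package main

    import (
        "compress/gzip"
        "net/http"
        "strings"
    )

    const minGzipSize = 1000 // arbitrary cutoff; tiny bodies aren't worth compressing

    // writeMaybeGzipped compresses the body only when the client accepts gzip,
    // the payload is big enough, and the content type is likely compressible.
    func writeMaybeGzipped(w http.ResponseWriter, r *http.Request, contentType string, body []byte) {
        w.Header().Set("Content-Type", contentType)
        w.Header().Add("Vary", "Accept-Encoding")

        compressible := strings.HasPrefix(contentType, "text/") || contentType == "application/json"
        acceptsGzip := strings.Contains(r.Header.Get("Accept-Encoding"), "gzip")

        if !acceptsGzip || !compressible || len(body) < minGzipSize {
            w.Write(body) // identity encoding; e.g. a JPEG stays as-is
            return
        }

        w.Header().Set("Content-Encoding", "gzip")
        gz := gzip.NewWriter(w)
        defer gz.Close()
        gz.Write(body)
    }

    func main() {
        http.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
            writeMaybeGzipped(w, r, "text/plain", []byte(strings.Repeat("ok\n", 500)))
        })
        http.ListenAndServe(":8080", nil)
    }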
All Accept-* headers are like this. e.g. the server doesn't necessarily support any of the languages requested in the Accept-Language header, but it doesn't hurt to ask. You always have to inspect the response headers to see the result of negotiation.
A similar thing happens with a header that's quite useful but that, for some reason, very few sites honor: Accept-Language. The browser can specify which languages it prefers, but it's up to the server to honor that (for example, the given language version may not be available).
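A naive sketch of honoring it (the available-translations map is invented, and the matching is simplified; a real implementation would parse quality values properly):

    package main

    import (
        "fmt"
        "net/http"
        "strings"
    )

    // pages holds the translations we actually have; "en" is the fallback.
    var pages = map[string]string{
        "en": "Hello!",
        "de": "Hallo!",
        "pl": "Cześć!",
    }

    func handler(w http.ResponseWriter, r *http.Request) {
        lang := "en"
        // Accept-Language is a preference list, e.g. "pl,en-US;q=0.9,en;q=0.8".
        // Take the first entry whose primary tag we can serve.
        for _, part := range strings.Split(r.Header.Get("Accept-Language"), ",") {
            tag := strings.SplitN(strings.TrimSpace(part), ";", 2)[0] // drop ";q=..."
            primary := strings.SplitN(tag, "-", 2)[0]                 // "en-US" -> "en"
            if _, ok := pages[primary]; ok {
                lang = primary
                break
            }
        }
        w.Header().Set("Content-Language", lang)
        w.Header().Add("Vary", "Accept-Language")
        fmt.Fprintln(w, pages[lang])
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }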
> The file size must be between 1,000 and 10,000,000 bytes.
The docs do not explain why.
As everything will end up in packets when sent through the network stack, you might want to choose your minimum input size in such a way that the gzip-compressed output you generate is big enough. Why big enough? Nagle's algorithm.
So yet another reason to think about 'what to gzip'.
More sophisticated implementations can either decide exactly which packets to send, or use TCP_CORK to hold part of a packet in a buffer before adding the rest, e.g. preparing the HTTP headers and then appending the static document that goes after them.
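As an illustration of the corking idea (Linux-only, and the function name here is made up; also note that Go's net package disables Nagle by default, so you'd call SetNoDelay(false) on a *net.TCPConn to turn it back on):

    package main

    import (
        "net"

        "golang.org/x/sys/unix"
    )

    // corkAndSend holds the headers in the kernel with TCP_CORK until the
    // body has been written, so both go out packed into full-sized packets.
    func corkAndSend(c *net.TCPConn, headers, body []byte) error {
        raw, err := c.SyscallConn()
        if err != nil {
            return err
        }
        setCork := func(on int) {
            raw.Control(func(fd uintptr) {
                unix.SetsockoptInt(int(fd), unix.IPPROTO_TCP, unix.TCP_CORK, on)
            })
        }

        setCork(1) // cork: keep partial packets in the kernel buffer
        if _, err := c.Write(headers); err != nil {
            return err
        }
        if _, err := c.Write(body); err != nil {
            return err
        }
        setCork(0) // uncork: flush whatever remains, even a short final packet
        return nil
    }

    func main() {
        // A real server would call corkAndSend on an accepted *net.TCPConn.
    }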
> All in one small self-contained executable.
Size of algernon executable: 24.4 MiB
Size of nginx-full executable: 1.1 MiB
Size of apache2 executable: 648 KiB
Right now a "hello world" application in Go has a size comparable to an OS with a full GUI.
Media storage would go on spinning-rust disks anyway, separate from the SSD(s).
Find a case where it's actually too slow, ok, but saying "A does X, and X can lead to Y, therefore A does Y" is wrong.
If you say the size of the binary is an issue, then give the issue, not how it could (or not) be an issue.
- it increases the amount of storage needed to keep multiple versions of containers around (when you have an internal app and do frequent releases, it adds up quickly)
- it increases the amount of data transferred on every deployment
- it increases the amount of memory used (the whole point of containers was to use hardware efficiently, as with Borg, although a lot of people today miss that reason and run containers on VMs)
They're all compiled languages doing (roughly) the same operations. There is an order of magnitude difference in the number of instructions in one compared to the other.
Apache is more likely to be entirely cached whereas the others aren't.
Size matters for performance... if you're not CPU-bound, fine, but to say it's immaterial is naive.
Image size is currently not the most important metric, but, judging by how your average Node.js package already looks today, demanding that people ignore it completely will probably put us on the road to multi-TB images that also contain the developer's favorite desktop environment in the medium term.
Hopefully we can get smaller binaries by Go 1.13.
In fact, the readme of this project is really thorough!
Can you run ldd on all of these and then report the combined size for each binary+libraries?
I don't know why the grandparent was downvoted. Go binaries are not small and the claim that this is a "small" single executable is untrue.
Hopefully the Go team will give us a flag to decide for ourselves whether to optimise for executable size or initialisation time. I know I'm fed up with uploading 50 MB files over dodgy wifi+VPN connections to update my server.
The original is at: https://science.raphael.poss.name/go-executable-size-visuali...
$ echo $((`ldd /usr/sbin/apache2 | cut -d">" -f 2 | sed "s/(.*$//;s/ //" | xargs du -L | cut -f 1 | sed "s/$/+/" | xargs echo` 0))
So 3.2 MB of shared library dependencies, 1.8 MB of which is just libc, which is almost guaranteed to be in use by another program already.
Not if we're talking about containers :)
Size does matter, and not just in the sense that it uses resources. The largest part of the 24 MB probably never gets executed, but it adds unnecessary complexity that may hide bugs and security flaws.
Sadly, the Go package that provides support for QUIC does not compile with gccgo, yet.
I guess the main benefit is that Algernon covers a lot of ground with a minimum of configuration, while being powerful enough to have a plugin system and support for programming in Lua. There is an auto-refresh feature that uses Server-Sent Events when editing Markdown or web pages. There is also support for the latest in Web technologies, like HTTP/2, QUIC and TLS 1.3. The caching system is decent. And the use of Go ensures that smaller platforms like NetBSD and systems like the Raspberry Pi are also covered. There are no external dependencies, so Algernon can run on any system that Go supports.
The main benefit is that it is versatile, fresh, and covers many platforms and use cases.
For a more specific description of a potential benefit, a more specific use case would be needed.