
Self-Contained Pure-Go Web Server with Lua, MD, HTTP/2, QUIC, Redis Support - Propolice
https://github.com/xyproto/algernon
======
marcus_holmes
I wonder why they didn't include Let's Encrypt integration - it's completely
painless using the acme library, and that would prevent the whole "HTTP or
HTTPS?" discussion around HTTP/2

~~~
xyproto
It's in progress. Algernon is an open source project where I am the main
contributor, and I develop Algernon in my spare time. Pull requests are
welcome.

~~~
marcus_holmes
I'd love to help, but my coding time is already taken building a product.

I pretty much followed the instructions here:
[https://godoc.org/golang.org/x/crypto/acme/autocert](https://godoc.org/golang.org/x/crypto/acme/autocert)

edit, better here: [https://blog.kowalczyk.info/article/Jl3G/https-for-free-in-go-with-little-help-of-lets-encrypt.html](https://blog.kowalczyk.info/article/Jl3G/https-for-free-in-go-with-little-help-of-lets-encrypt.html)

I didn't believe it could be that simple, but it worked first time and has
proven really robust.
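For reference, a minimal sketch of the autocert setup those instructions describe (the domain name, cache directory, and handler body here are placeholder assumptions, and actually running it requires a publicly reachable host that owns the domain):

```go
package main

import (
	"net/http"

	"golang.org/x/crypto/acme/autocert"
)

func main() {
	// Manager obtains and renews certificates from Let's Encrypt.
	m := &autocert.Manager{
		Prompt:     autocert.AcceptTOS,
		HostPolicy: autocert.HostWhitelist("example.com"), // placeholder domain
		Cache:      autocert.DirCache("certs"),            // persists issued certs across restarts
	}

	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello over HTTPS"))
	})

	// Port 80 answers ACME HTTP-01 challenges and redirects everything else to HTTPS.
	go http.ListenAndServe(":80", m.HTTPHandler(nil))

	s := &http.Server{Addr: ":443", Handler: mux, TLSConfig: m.TLSConfig()}
	s.ListenAndServeTLS("", "") // cert and key are supplied by the manager
}
```

The `DirCache` is what makes it robust across restarts: without it, every restart hits the Let's Encrypt rate limits for fresh issuance.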

~~~
erdii
It is even easier:
[https://github.com/mholt/certmagic](https://github.com/mholt/certmagic)

Edit: Taken from certmagic docs

Instead of:

    // plaintext HTTP, gross
    http.ListenAndServe(":80", mux)

Use CertMagic:

    // encrypted HTTPS with HTTP->HTTPS redirects - yay!
    certmagic.HTTPS([]string{"example.com"}, mux)

------
tyingq
_" Files that are sent to the client are compressed with gzip, unless they are
under 4096 bytes."_

That's interesting. Is that a common optimization? I hadn't heard of any other
web server doing that.

~~~
DarkWiiPlayer
I heard somewhere (some blog's comment section, I believe) that gzip actually
reduces the security of HTTPS; maybe someone can confirm / explain that?

~~~
tialaramex
Compression technologies, including gzip, obviously have the goal of making
things smaller by predicting later data based on earlier data. If the later
data looks more like the earlier data, the result is smaller than if it was
random gibberish. Compression!

If an attacker controls /some/ of this data, and would like to read /other
parts/, they can abuse compression to measure whether the parts they don't
know are "like" the part they control, because if they are then the
compression will make the results shorter than otherwise which they can
passively measure.

It's not a problem to move a compressed object over a secure channel on its
own, the problem arises if either you try to compress the channel which is
moving objects from different origins (e.g. a cookie set by a random
advertising web site and your Facebook password) or compress a composite
object e.g. maybe your backups mixed with a file you downloaded from a dodgy
"pirate" video site.

------
est31
This is quite impressive, but this claim is a bit wrong:

> All in one small self-contained executable.

Size of algernon executable: 24.4 MiB

Size of nginx-full executable: 1.1 MiB

Size of apache2 executable: 648K

~~~
sagichmal
For self-contained architecture-specific server binaries, there is no
practical difference between 240KB, or 2.4MB, or 24.4MB, or even, at a
stretch, 244MB. It's not worth mentioning or optimizing for, except as
novelty. I wish people would stop golfing with these numbers.

~~~
takeda
You have it backwards: the apache or nginx sizes are not a novelty. Go is just
a pig, and its binary size grows with every new release. The size isn't really
due to static linking or debugging symbols, either, because the binaries are
huge even when those options are disabled.

Right now a "hello world" application in Go has a size comparable to an OS
with a full GUI.

~~~
sagichmal
Who cares? Literally, what problem does it cause, or what does it make worse?
It is totally immaterial.

~~~
takeda
- increases the time to fetch and run a container (it's actually quite
noticeable when you have an app that scales out and you're updating it)

- increases the storage needed for multiple versions of containers (when you
have an internal app and do frequent releases, it adds up quickly)

- increases the amount of data transferred on every deployment

- increases the memory used (the whole point of containers was to use
hardware efficiently (Borg), although a lot of people today miss that reason
and run containers on VMs)

~~~
yetanotherme
True. Luckily, most Go binaries can be upx'd (
[https://upx.github.io/](https://upx.github.io/) ) down to a fraction of
their original size. Just put it into your Dockerfile as part of the build
process.
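A sketch of what that could look like as a multi-stage Dockerfile (the image tag, binary name, and package-install step are assumptions to adapt to your base image):

```dockerfile
# build stage: compile a static, symbol-stripped Go binary
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o /app .
# compress the binary in place with upx
RUN apt-get update && apt-get install -y upx-ucl && upx --best /app

# final stage: ship only the compressed binary
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

Stripping symbols with `-ldflags="-s -w"` before running upx matters: upx compresses whatever is there, so shrinking the input first compounds the savings.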

~~~
penagwin
This works and helps with storage/transit. A word of warning, though: many AV
products like to flag UPX'd executables (only important if it's not an
internal tool), startup will take longer, and the process will use more
memory. My understanding is that UPX essentially applies standard compression
to the executable and prepends a decompression stub to the front of it.

------
a_imho
How does this compare to OpenResty?

~~~
DarkWiiPlayer
I suppose this one offers an all-in-one package, while OpenResty is really
just an nginx server with built-in Lua(JIT) support.

------
mtw
What is the benefit of using this? In what scenario would this excel? Thanks.

~~~
xyproto
Good question. I'm not sure if it excels in any scenario. There are
specialized web servers that excel at caching or at raw performance. There are
dedicated backends for popular front-end toolkits like Vue or React. There are
dedicated editors that excel at editing and previewing Markdown, or HTML.

I guess the main benefit is that Algernon covers a lot of ground, with a
minimum of configuration, while being powerful enough to have a plugin system
and support for programming in Lua. There is an auto-refresh feature that uses
Server-Sent Events when editing Markdown or web pages. There is also support
for the latest in web technologies, like HTTP/2, QUIC and TLS 1.3. The
caching system is decent. And the use of Go ensures that smaller platforms
like NetBSD and systems like the Raspberry Pi are also covered. There are no
external dependencies, so Algernon can run on any system that Go supports.

The main benefit is that it is versatile, fresh, and covers many platforms
and use cases.

For a more specific description of a potential benefit, a more specific use
case would be needed.

