
So you want to expose Go on the Internet - imauld
https://blog.gopheracademy.com/advent-2016/exposing-go-on-the-internet/
======
Xeoncross
Peter Lambert wrote his own guide on getting a perfect SSL Labs score by
tweaking the Go HTTP server config ([https://blog.bracebin.com/achieving-
perfect-ssl-labs-score-w...](https://blog.bracebin.com/achieving-perfect-ssl-
labs-score-with-go)). Both of these articles are a good quick read for
Gophers.

------
daurnimator
I'm happy to note that my (newly released) library lua-http covers almost all
of these concerns in its default configuration, with even cleaner semantics
around the read/write timeouts.

The only thing missing is an equivalent of the "Idle" timeout mentioned in the
OP. I'm curious how you think it should behave in the HTTP/1.1 pipelining
case; I guess it should only count the time that the connection is totally
idle?

------
derefr
Does anyone here know whether such a similar article exists for Erlang (or
specifically, the Erlang ecosystem's Cowboy HTTPD library)?

Even though Cowboy (and frameworks built atop it, like Phoenix) is known to
perform well under load (including DDoS-like load), I've always been wary
about exposing it directly to the Internet. I know NGINX was explicitly
hardened against many classes of web server attack; I haven't ever seen the
same claimed about Cowboy.

It'd be reassuring just to know of anyone with a large, public-facing web
service, who has deployed Erlang in a directly-exposed HTTP server role,
weathered attacks, and come out fine. (Heroku, maybe?) But I haven't heard
much on that front, either.

------
brianpgordon
Has someone wrapped all of this in a library? Or are there plans to update the
defaults so that they're more hardened?

It seems like it shouldn't be this hard.

~~~
grey-area
There have been and will be improvements. Go 1.8 improves timeouts.

They are constrained to some extent by the Go 1 compatibility promise; for
example, that is probably why they don't want to change the default timeout
behaviour. Hopefully, if there is a Go 2 at some point, they'll use that
opportunity to clean up some APIs and fix a few things like this.

------
merb
It still misses how to bind port 80 or 443 as a non-root user:

[http://serverfault.com/questions/112795/how-to-run-a-
server-...](http://serverfault.com/questions/112795/how-to-run-a-server-on-
port-80-as-a-normal-user-on-linux)

It has many good answers.

~~~
voidlogic
You do not need to run as root to bind an application to a low port; instead,
use setcap (it works for everything, not just Go):
[https://stackoverflow.com/questions/14537045/how-i-should-
ru...](https://stackoverflow.com/questions/14537045/how-i-should-ru..).

------
epynonymous
I use nginx in front of Go for production. This article is interesting, as it
would certainly reduce some administrative overhead if I could remove nginx.
Case in point: I already just distribute Go binaries to my production servers,
so I don't even need the Go compiler there; this would simplify deployment
somewhat. But then again, I'm not really changing my nginx configuration that
often.

Some questions that come to mind: I also leverage nginx for static file
caching. I've seen some sample code for FileServer in net/http, but what kind
of algorithm does FileServer use for caching, LRU? Can you configure the size
of the cache?

And in terms of scale, I haven't reached this point yet in my project, but
from the _olden_ Sinatra days, I'd spin up multiple processes and proxy
through nginx. On a single machine, could a Go process essentially be limited
to one per server? I'm assuming the Go binary can leverage multiple cores
automatically, so I wouldn't need to do it like Ruby or Python?

What are your experiences with Go backend services? I run a RESTful API server
that connects to a database and Redis; so far, performance seems good enough
that I only need one Go process per machine.

------
module0000
Put it behind HAProxy! (Or nginx, if you only started *nixing in the last 5
years and don't know what HAProxy is.)

Both are battle-tested (HAProxy more so), and like another poster said: if you
don't use something like them, you're going to reinvent the wheel in several
areas.

------
sofaofthedamned
I usually use haproxy in front of docker containers, this is interesting
stuff.

I wonder what he feels about using Caddy instead of bare net/http?

~~~
grey-area
Caddy server pretty much is bare net/http (it uses the stdlib), so it would be
doing the same thing.

------
nodesocket
I'd still recommend running NGINX in front of Go or Node backends. NGINX gives
you the flexibility to add things like gzip, caching, static asset expires
headers, load balancing, health checks, etc.

(shameless plug) In terms of the TLS, I wrote a short blog post
([http://blog.commando.io/the-perfect-nginx-ssl-
configuration/](http://blog.commando.io/the-perfect-nginx-ssl-configuration/))
on setting up NGINX to get an A+ rating on Qualys SSL Labs. It is really
only a few lines/directives.

~~~
chrismarlow9
I agree. I enjoy Go because the apps I build are simple, and all the "special
features" like caching, gzip, expires headers, health checks and reverse
proxying come with nginx. I picked Go as my language of choice because I want
to get to the root of the problem and write code for that, and do that one
thing very well. It's not that I don't think Go can do it, it's just that I
don't want to do it in Go. It feels like an abuse of the language paradigm to
do everything in Go.

------
tedunangst
But no method to just specify max connections before old ones start getting
closed? Did I miss that? With nginx, I don't mess with timeouts. I just set
max connections appropriately and that's it. Don't care about slow connections
when descriptors aren't in short supply.

~~~
dsp1234
I'm not aware that nginx has an option to "specify max connections before old
ones start getting closed". The default behavior is that when max connections
are hit, it doesn't allow new connections[0]. Then with the default timeouts
of 60 seconds for client_body_timeout and client_header_timeout, a connection
can be held for at least 2 minutes[1][2]. Note that for the body timeout, "The
timeout is set only for a period between two successive read operations, not
for the transmission of the whole request body", so it's possible to
arbitrarily extend how long a single connection stays open.

Since these connections are long lived, and normal connections are generally
short, the total number of connections gets dominated by the slow ones. If no
other mitigations are put into place, then this can cause a server to hit
ulimits/max_conns and keep legitimate requests blocked.

This is known as the "Slowloris" attack[3], and is mentioned in the nginx DDoS
mitigation blog post[4].

[0] -
[http://nginx.org/en/docs/http/ngx_http_upstream_module.html#...](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server)

[1] -
[http://nginx.org/en/docs/http/ngx_http_core_module.html#clie...](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout)

[2] -
[http://nginx.org/en/docs/http/ngx_http_core_module.html#clie...](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout)

[3] -
[https://en.wikipedia.org/wiki/Slowloris_(computer_security)](https://en.wikipedia.org/wiki/Slowloris_\(computer_security\))

[4] - [https://www.nginx.com/blog/mitigating-ddos-attacks-with-
ngin...](https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-
nginx-plus/)
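Answering the parent's question on the Go side: the stdlib has no max-connections knob, but golang.org/x/net/netutil provides LimitListener, and the idea is small enough to sketch by hand (a semaphore around Accept; this is a simplified illustration, not the netutil code):

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// limitListener caps concurrent accepted connections: Accept blocks once
// n connections are open, so further clients wait in the kernel's listen
// backlog instead of exhausting file descriptors.
type limitListener struct {
	net.Listener
	sem chan struct{}
}

func newLimitListener(l net.Listener, n int) *limitListener {
	return &limitListener{Listener: l, sem: make(chan struct{}, n)}
}

func (l *limitListener) Accept() (net.Conn, error) {
	l.sem <- struct{}{} // acquire a slot (blocks at the limit)
	c, err := l.Listener.Accept()
	if err != nil {
		<-l.sem
		return nil, err
	}
	return &limitConn{Conn: c, release: func() { <-l.sem }}, nil
}

// limitConn returns its slot exactly once, even if Close is called twice.
type limitConn struct {
	net.Conn
	once    sync.Once
	release func()
}

func (c *limitConn) Close() error {
	c.once.Do(c.release)
	return c.Conn.Close()
}

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	ll := newLimitListener(ln, 1000)
	fmt.Println("accepting at most", cap(ll.sem), "concurrent connections")
	// http.Serve(ll, handler) would serve with the cap applied.
}
```

For production use, golang.org/x/net/netutil's LimitListener does the same with more care.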

~~~
tedunangst
Hmmm, thanks. It worked for me, but I should look closer.

------
grobbles
It's years old, but I listened to the advice of-

[https://dennisforbes.ca/index.php/2013/08/07/ten-reasons-
you...](https://dennisforbes.ca/index.php/2013/08/07/ten-reasons-you-should-
still-use-nginx/)

-and in re-analysis it all holds completely true. Nginx gives my deploy flexibility, at essentially negligible cost. And no Go development should include a bunch of boilerplate code to do banal stuff like serving static content.

~~~
insertnickname
Static file server in one line of Go:

    
    
        http.ListenAndServe(":8080", http.FileServer(http.Dir("/usr/share/doc")))
    

[https://godoc.org/net/http#FileServer](https://godoc.org/net/http#FileServer)

------
ruslan_talpa
I still live under the assumption that there are oh so many ways to ____ with
a web server beyond opening a TCP connection and keeping it open, and that for
all of them there is code in nginx to defend against. But hearing this from
CloudFlare makes it worth a second look. Can someone with first-hand knowledge
of the nginx (front-line) codebase comment on whether the things described in
this article are all it takes to have a mostly resilient HTTP service?

~~~
shanemhansen
I can't comment on the nginx codebase, but I've been running production-facing
Go servers for a long time, and I feel safe in saying I have mostly resilient
HTTP services.

I've worked with more than one company handling over 100k requests/s on the
public internet with Go. Go's networking model combined with the work that's
gone into fuzzing the stdlib combined with the benefit of hindsight when it
comes to data structures and security combined with lots of love from google
web people has resulted in an extremely mature web stack.

~~~
ruslan_talpa
I see what you are saying, but that only confirms that your services are fast
and stable as long as people are using them for their intended purpose. What I
am interested in is whether they are resilient when someone is deliberately
attacking them, and not just with trivial scripts.

~~~
shanemhansen
I think I understand what you are saying too. I've personally worked on two
Alexa top 100 sites for the US that are using Go on the public internet. They
see a fair amount of malicious traffic. I actually find Go and net/http to be
a pretty solid base for defusing layer 7 attacks.

------
yumaikas
It's nice to see that Go's http and TLS libraries are getting even better.
They're what attracted me to Go in the first place.

Also, the coverage of the various timeouts for HTTP requests is mostly new
information to me. Is that something that nginx and apache usually take care
of?

~~~
ben_jones
My personal experience with the Go stdlib is that it practices "defensive
programming" pretty well, for example in the http.Client defaults. It's what
you would expect from a language with Go's objectives, but it's still
something I appreciate (even if it forces me to do things right when I don't
want to!).

~~~
zzzcpan
http.Client has no usable defaults. In fact, you can't even build a usable
file downloader because of the way it handles timeouts; you have to write your
own.

------
jest7325
I wonder if it would make sense to front-end that with nginx. It has nice
HTTP/2 support and the latest SSL/TLS implementation. Just curious what others
think about this approach.

~~~
toufka
That's currently what is done. However the idea that I could get my entire
stack, from HTTP handler, to router, to logic, to database query, back to
response into a single 10mb binary is pretty enticing.

If I could get there, I'd have full control over every aspect of an API
request from packet to server query, back to payload, in a single programming
language, in a single conceptual framework. There is a lot to like about that.
I'm not sure it's needed - nginx works so well, and perfect settings are just
a single config file away. But if I could get to a truly single-binary
deployment, I'd be pretty happy too.

------
avitzurel
Nginx is your best friend.

I always have an Nginx proxy in front of services. Whether it's Go, Ruby or
Node (or Docker)

~~~
akerl_
Can you clarify why?

~~~
avitzurel
@logn clarified pretty well.

Anything you do in your service other than your own business logic is
reinventing the wheel (I mean at the HTTP level), and not doing it as well as
those who came before you (nginx/HAProxy, etc.).

This is a generalization of course, but my strategy of putting Nginx in front
of everything hasn't failed me so far.

~~~
Can_Not
You say strategy; I think you mean a well-established, battle-tested, proven,
industry-standard best practice.

------
smegel
Sigh. If you're configuring "Curve Preferences", you're doing it wrong. Crypto
should either work out of the box, or you should find another tool.

~~~
tptacek
First, the person writing this article (Filippo Valsorda) has expertise.

Second, the point isn't to select crypto that "works", but rather to select
crypto that is efficiently supported by Golang.

~~~
smegel
> However, you should still set PreferServerCipherSuites to ensure safer and
> faster cipher suites are preferred, and CurvePreferences to avoid
> unoptimized curves

Sounds more like he's giving advice to others on the finer points of elliptic
curve cryptography. Programmers should not need to know this stuff.

~~~
edmccard
>> and CurvePreferences to avoid unoptimized curves

The key word being unoptimized. In the article, the code snippet has a comment
"Only use curves which have assembly implementations" and he mentions that "a
client using CurveP384 would cause up to a second of CPU to be consumed on our
machines." (presumably because it does not have an assembly implementation)

> Programmers should not need to know this stuff.

It can sometimes be a sign of a leaky abstraction, but programmers might need
to know the performance characteristics of the code they write.

------
nkozyra
NGINX has so many nice reverse proxy tools out of the box that it's still very
appealing to just plop it in front of any service (much less a Go service).

Performance & failsafe is a big part of the appeal, but so is local caching,
traffic splitting (for A/B testing or regional versions), etc. It's hard to
ignore that when choosing to expose your server directly or put it behind
NGINX.

Unless you use none of these things, you'll end up reinventing a bunch of
wheels.

~~~
weberc2
Given that Go's HTTP interfaces are very composable, and assuming there are
libraries to do caching and traffic splitting, you wouldn't be reinventing
wheels. At that point, it seems that the question is whether you prefer to
manage NGINX config files or write your configuration in Go.

~~~
nkozyra
OK, so 're-implement' the wheel, using a variety of unrelated,
dubiously-updated libraries.

I'm still not sure what the advantage of that is over using perhaps the most
reliable, certainly most-used web server as a proxy in front of your app. I'm
open to convincing, though.

~~~
weberc2
How is filling out struct fields more complex than filling out config files?
The principal advantage is system simplicity: everything deploys in a single
file, no network topology to troubleshoot, fewer moving parts, no new
highly-configurable tool to master. If the quality of those libraries is as
poor as you suppose, then by all means take NGINX. I don't see any reason to
make those assumptions, however.

I don't mean to overstate the advantages--I think both solutions are fine;
neither will make or break your operation.

~~~
nkozyra
It's more than filling out a few structs, though. Traffic splitting or caching
via NGINX can literally be done in a handful of lines. No go gets, no
middleware, and it's well tested, mature, and backed by software that powers a
majority of the web.

I use go net/http every day, in production. I trust it, but NGINX provides so
much more battle tested functionality out of the box.

~~~
weberc2
> It's more than filling out a few structs, though.

How do you know how many lines are required to configure hypothetical
middleware?

> No go gets

How is static compilation worse than `apt-get install` or `docker run`?

> No middleware

It's another process... why would running another process be better than
middleware?

> it's well tested, mature and backed by software that powers a majority of
> the web.

Granted. It seems like this is the only clear win for NGINX, and it may well
change if Go libraries mature. Time will tell.

~~~
234dd57d2c8db
You're missing the sysadmin angle of this entirely. Nginx has amazing tooling
around load balancing, configuration mgmt, multiple languages, logging
options, rewrite rules, rate limiting, file upload size tuning, HTTP tuning in
general; the list goes on and on. What if your site needs to support multiple
backends, like a JVM app, a WSGI app, and an old crufty CGI app? You gonna
write backends for all that shit too in Go?

Sorry, but I'm not going to be writing Ansible code that modifies structs
inside of some program and then compiles said program. No thank you, that
sounds like crazy town. Also, other sysadmins and infrastructure engineers
will actually know how things work, and won't have to go reading the source
code of some crazy Go program at 2am that is also, for some reason, a
webserver.

Separation of concerns, use it!!

~~~
weberc2
> You gonna write backends for all that shit too in Go?

The discussion is scoped to a single Go application. No one is proposing
replacing NGINX with Go (or anything else) for JVM apps.

> won't have to go reading the source code for some crazy Go app program at
> 2am that also is a webserver for some reason?

This is a rephrasing of the question I posed earlier--is it easier to manage
configuration in Go source code or NGINX config files.

> Separation of concerns, use it!!

Concerns can be separated without being in distinct processes or implemented
by distinct programmers or implemented in distinct programming languages.

~~~
Can_Not
You're free to not use nginx or Apache if you can validate that you are better
off without them, but IMHO it sounds like a nightmare of "experimental
homemade wheels" being muddled in with business logic.

------
stanleydrew
One reason we didn't do this with our messaging service at Charge was that we
didn't want code that we wrote to have access to our private TLS keys in
production. Not everyone needs that level of protection, but it's helpful to
avoid giving your software engineers footguns that can inadvertently lead to
decryption of all your production data streams.

~~~
dispose13432
>we didn't want code that we wrote to have access to our private TLS keys in
production.

Correct me if I misunderstood you, but you don't want _engineers_ who write
your code to have access to the private TLS keys that are _used_ in
production?

~~~
closeparen
A separate process is overkill for protection from engineers; just have the
private keys read from disk, and only have them on production disks.

If you compromise a process, you can potentially exfiltrate its memory. You'd
need to also compromise the operating system to exfiltrate memory from other
processes.

So, keys being in nginx means you can only get the keys by breaking nginx (or
the OS), not by breaking the in-house application.

~~~
amorphid
Or don't have the keys on the server at all. Anyone who gets root access can
walk right up to the key file and yoink it. Obviously keys have to be stored
somewhere. But it doesn't have to be on every server's disk.

Also, try to avoid passing keys in as command line arguments. If you can,
avoid using environment variables, too. You can pass the data in via standard
in, so it is never exposed.

Example of leaky environment variables:

[https://gist.github.com/amorphid/db037f03246962959b6a034b2ca...](https://gist.github.com/amorphid/db037f03246962959b6a034b2ca3ef1b)

~~~
grey-area
Interesting link on env vars. Any links on how to do this properly?

~~~
amorphid
Here's an example you can try on any Linux system running procfs.

[https://gist.github.com/amorphid/4a65741d14db38b96341d7e1f2d...](https://gist.github.com/amorphid/4a65741d14db38b96341d7e1f2dd69b6)

The short version is I'm passing a variable in via the pid's standard in,
reading the line, and then declaring the variable. This is a very contrived
example :) But you can write a wrapper script that would handle all of the
line reading for you.

This originally came up when I was asking someone how to pass sensitive
information (API keys, passwords, etc.). I did some research, and found this
approach.

In most programming languages, when you make a basic system call you just run
a command; that command runs and then exits. But sometimes you want a script
that can take information from standard in, or send it to you via standard
out. For example, you might write a script that runs for a few minutes, then
says "OK, I'm ready for the password!", and you pass it in at the moment it's
needed (but honestly, don't do this unless you need to, because it's one more
thing that can break).

Erlang/Elixir land have a library called erlexec that does this =>
[http://saleyn.github.io/erlexec/](http://saleyn.github.io/erlexec/)

Another Elixir library is Porcelain =>
[https://github.com/alco/porcelain](https://github.com/alco/porcelain)

------
gtrubetskoy
There's still one thing missing - graceful restart:

[https://grisha.org/blog/2014/06/03/graceful-restart-in-
golan...](https://grisha.org/blog/2014/06/03/graceful-restart-in-golang/)

BTW, to people new to Go this article may make it look like serving HTTP is
complicated, but it's actually remarkably easy. Consider that it is actually
_possible_ to have a complete server, with TLS and HTTP/2 to boot, running as
a single process using just the standard lib. Compare that to Python or Ruby,
where because of the GIL you _must_ place an apache/nginx/haproxy in front and
also run a bunch of unicorns or something similar on different ports (at which
point you need something like Chef/Puppet to manage the config, because it
gets very complicated very fast). This is actually pretty amazing.

~~~
jaredklewis
As has been pointed out in some of the other subthreads, another reason to run
nginx in front of Go/Python/Ruby is that listening on 80 or 443 needs root
access. From a defense-in-depth perspective, it's better to run your app as a
dedicated user with only the necessary privileges.

Also, if any static assets are being served, it's probably a good idea to
leverage sendfile.

~~~
brobinson
>running on 80 or 443 needs root access

Not necessarily:

    
    
        setcap 'cap_net_bind_service=+ep' your_go_binary
        ./your_go_binary

~~~
jaredklewis
This is interesting; I had never seen setcap. It seems it doesn't work with
scripts (Ruby, Python), and if you are using the JVM/Mono/BEAM you will need
to setcap the whole VM, but it's a very cool solution for a language like Go
that produces binaries!

~~~
m45t3r
My two cents: you probably need to apply setcap to the Python interpreter
itself instead of to the script. It shouldn't be a problem though, since you
probably will use a virtualenv anyway.

Another option would be to drop privileges at runtime.

~~~
poooogles
>It shouldn't be a problem though, since you probably will use a virtualenv
anyway.

Virtualenvs don't create a new interpreter, they just fudge the Python path?

Definitely not recommended on interpreted languages (although we use it all
the time on our go apps).

~~~
m45t3r
They create a copy of the binary of the interpreter, you can even call it
directly instead of activating the virtualenv first.

