So you want to expose Go on the Internet (gopheracademy.com)
433 points by imauld 128 days ago | 140 comments



There's still one thing missing - graceful restart:

https://grisha.org/blog/2014/06/03/graceful-restart-in-golan...

BTW - to people new to Go this article may make it look like serving HTTP is complicated, but it's actually remarkably easy. And if you consider that it is actually _possible_ to have a complete server with just the standard lib (and with TLS and HTTP/2 to boot) running as a single process - compared to Python or Ruby, where because of the GIL you _must_ place an apache/nginx/haproxy in front of it and also run a bunch of unicorns or something similar on different ports (at which point you need something like Chef/Puppet to manage the config because it gets very complicated very fast) - this is actually pretty amazing.


You mean missing from the article or from Go stdlib? This was added in Go 1.8 (in beta now), so that link is a bit out of date I think.

https://github.com/golang/go/issues/4674

Agree that serving http/https with Go is really pretty straightforward - you can do it without all the tweaks in this article, and most people have been. It works remarkably well by default and scales well too without much effort.


I meant missing from both, but I didn't realize that it's making its way into the standard lib - this is very cool, thanks for the link!


> compared to Python or Ruby, where because of the GIL you _must_ place an apache/nginx/haproxy in front of it and also run a bunch of unicorns or something similar on different ports

or you know, just call fork() a few times, like Apache has done for more than a decade


This is precisely what unicorn is meant to do, though; it's just a convention of that architecture to put haproxy on top.

Calling fork() internally is ultimately akin to what everyone is going to do. Why not use a library to do it and let that code gradually be hardened into something reliable?


Isn't that a lot more overhead than green threads in a single proc?


It doesn't matter if you have work outside of I/O to do for request responses.


And hope you don't have any state that can collide.


fork() starts new processes, so that would be a weird state to collide.


I think you mean that if you are calling fork you better not have any state. Otherwise you will soon have multiple independent/mutated copies of your state which is normally not a good thing. But you are correct that they would not collide.


We use SO_REUSEPORT [1] to implement graceful restarts. There's a library for go that allows you to do that [2].

1. https://lwn.net/Articles/542629/

2. https://github.com/kavu/go_reuseport


Example of code using SO_REUSEPORT. Ghostunnel does it to reload short-lived certificates: https://github.com/square/ghostunnel/blob/master/main.go#L30...


How does it do that without dropping open connections (does it drain)?


Here's how it works in our case:

1. Start a new process, start accepting new connections

2. In your old process, stop accepting new connections — old ones are still active.

3. In the old process, close old connections when they become idle.

4. In the old process, wait until all of your connections are closed and then quit or automatically quit after a timeout.


You can close the socket without dropping existing connections.


As has been pointed out in some of the other subthreads, another reason to run nginx in front of Go/Python/Ruby is that binding to port 80 or 443 needs root access. From a defense-in-depth perspective, it's better to run your app as a dedicated user with only the necessary privileges.

Also, if you're serving any static assets, it's probably a good idea to leverage sendfile.


>running on 80 or 443 needs root access

Not necessarily:

    setcap 'cap_net_bind_service=+ep' your_go_binary
    ./your_go_binary


This is interesting; I had never seen setcap. It seems it doesn't work with scripts (Ruby, Python), and if you are using the JVM/Mono/BEAM you will need to setcap the whole VM, but it's a very cool solution for a language like Go that produces binaries!


My two cents: you probably need to apply setcap to the Python interpreter itself rather than to the script. It shouldn't be a problem though, since you will probably use a virtualenv anyway.

Another option would be to drop privileges at runtime.


>It shouldn't be a problem though, since you probably will use a virtualenv anyway.

virtualenvs don't create a new interpreter, they just fudge the python path?

Definitely not recommended for interpreted languages (although we use it all the time on our Go apps).


They create a copy of the binary of the interpreter, you can even call it directly instead of activating the virtualenv first.


If you have a service run with systemd you can use this in the unit file:

AmbientCapabilities=CAP_NET_BIND_SERVICE

otherwise you'll have to run setcap any time the binary changes.
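A hypothetical unit file showing where that line goes (the binary path and user are placeholders):

```ini
[Service]
User=www-data
ExecStart=/usr/local/bin/your_go_binary
AmbientCapabilities=CAP_NET_BIND_SERVICE
```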


> Also, assuming that any static assets are served, probably a good idea to leverage sendfile.

Go already leverages sendfile:

https://golang.org/src/net/tcpsock_posix.go#L44


The Go HTTP standard library does use sendfile whenever possible (it requires certain kinds of file descriptors, etc.).


And why is nginx safer?


Because nginx is a hardened program used by thousands of companies with a strong interest in making sure there aren't vulnerabilities. It is also a program with a very narrow set of functions. Nginx vulnerabilities are comparatively rare: two security advisories in 2016, none in 2015. [0]

By comparison, app code is often developed rapidly and often only reviewed by at most a few people. Even companies staffed by brilliant minds like Google regularly have vulnerabilities in their application code.

0. http://nginx.org/en/security_advisories.html


It also takes some things that would suck to learn the Ruby version of, the PHP version of, the Node version of, the Python version of, etc., does them faster, and lets you cleanly isolate them from your application's complexity and business logic. Those things are serving static assets, SSL, and backend splitting. I cringe every time I see a tutorial loading up the framework in a slow language like Node or PHP to serve a static asset without something like Apache or nginx in front.


Multi-threaded HTTP servers in Python like CherryPy have decent performance regardless of the GIL.

People need to stop hyping the buzzword GIL. Node.js/Go doesn't even support native threading.


Go does not provide direct access to threads, but multiple goroutines can run on multiple threads simultaneously. Go is not single-threaded like Node.js nor "only one thread can run at any one point in time" like CPython. Go's concurrency provides real parallelism.


Go provides real parallelism if run on multiple CPUs, otherwise things are running concurrently (a la Python).


Of course. It's impossible to do parallel computation on a single CPU.


> but multiple goroutines can run on multiple threads simultaneously

Are goroutines very effective for CPU-bound computation tasks?


I don't know if it's "very effective" (compared to what?), but it's fine to do CPU-bound computation in goroutines. The Go runtime will try to schedule goroutines to run in parallel on different threads.


What about goroutines?


I never even considered the possibility. We take ours down so rarely (months in between) that a couple seconds of failed requests are fine. I understand however there are circumstances where that may not be the case.


Thanks for this article. Do you find that incrementing the waitgroup on every accept tends to become a bottleneck? Or are you siloing your server on a single core or something like that?


It should be negligible. It's an atomic add, and the OS accept() call should be much more heavyweight. Add to that all the synchronization that will happen on further HTTP/socket reads and writes, especially if it's HTTP/2.


> Do you find that incrementing the waitgroup on every accept tends to become a bottleneck?

The specific thing I was dealing with when I wrote the blog post wasn't high-volume enough for it to make any difference, so I don't really know....

But is there a better option than a WaitGroup here?


Check out the Context API, it's a standardization of several cancellation approaches.


Those two go hand in hand. Even if you use context (or a bare channel) for signaling the cancellation, you often want to wait until the cancellation has actually happened and the child task has terminated. For that, WaitGroup is the easiest thing to use.


And x/sync/errgroup if you want the lovechild of context and WaitGroup


Python can self host. I've heard great things about CherryPy which is actually used to deploy things like Flask, even though CherryPy can work by itself.


You argue that people would put nginx in front of Python/Ruby because of the GIL, but not in front of Go. This is a misunderstanding, because people use nginx in front for failover and load balancing, not just because Python/Ruby are comparatively slow. So your argument is fundamentally flawed.


Python/Ruby is single process, and to exploit all the resources on a single machine you need multiple processes of your app (usually #cores or #cores-1).

Now that you have multiple processes, you need something to dispatch between them - enter Nginx/Gunicorn etc.

A single Go process can exploit all the system resources available without the multi-process orchestration required by Python/Ruby and thus does not require the extra layer above.


> Python/Ruby is single process

That's an implementation detail of your Python/Ruby program. I've written plenty of Ruby apps that have been multi-process and multi-threaded, and both at the same time.

> Now that you have multiple processes, you need something to dispatch between them

Yes, that's the case irrespective of language.

> A single Go process can exploit all the system resources available without the multi-process orchestration required by Python/Ruby and thus does not require the extra layer above.

This is no different than for Ruby at least.

Your criticisms are mostly outdated as of Ruby 1.9.x, though you can get bitten by the GIL. (EDIT: to clarify, avoiding this depends on either multi-process or taking advantage of the fact that while Ruby Threads can only be scheduled one at a time, C extensions etc. that release the GIL can run in parallel with other Ruby Threads; of course, all of this is MRI-specific in any case.)

In practice, if you use a reasonably modern web server like Puma, you get good concurrency without having to think much about it.


>> Yes, that's the case irrespective of language.

I don't need gunicorn or nginx in front of my Java/Go/C# process to achieve concurrency and take advantage of all CPU cores.

>> Your criticisms are mostly outdated as of Ruby 1.9.x

Python 3.x & Ruby 1.9.x have the same limitations - any native code will lock the GIL, so unless all your code is in extensions you're going to run into this problem. Ruby 1.9 introducing OS threads did not change this.

Sure it's great that I can write an extension in C and take advantage of multiple OS threads in my Ruby/Python runtime, but the moment those threads have to deal with ANY native objects you're back to square one.


Your single Go process is typically #cores processes. All the orchestration is still handled by you; there are just facilities for it in the language.

Just because you are using the stdlib rather than an external application does not change the functionality that is happening.


I think this is a bit disingenuous.

Do you also think that Java is multi-process? I can see the argument that an OS thread is a process but it is basically irrelevant in every way that matters in development and operations.

An example - how do you access the same shared-memory (say, a map/dictionary) from multiple cores in any particular runtime?

- Go: you make a map, you protect the map with a mutex, and you reference it directly.

- Java: same, or ConcurrentHashMap or whatever is available.

- 'Worker'/'forking' runtimes: you can't, right? You move the state to another component (such as Redis), or you use an OS-provided shared memory facility, or you need inter-process RPC etc ...

--

Of course, sometimes workers being separate processes is a virtue rather than a burden ... but I'm not sure this is true in the case of Ruby or Python, when it's more purely a limitation of the runtime.


It orchestrates using threads IIRC, and the primitives for handling the communication in a safer fashion are in the box, as you mention... and depending on the OS, starting threads may be nearly as expensive as processes. That said, it's definitely less overhead than orchestration across forked processes.


There's multiple layers of "in front". Python and ruby shops often have dedicated L7 load balancers, and then run nginx on individual hosts to mux to multiple dynamic language processes/threads.


NGINX has so many nice reverse proxy tools out of the box that it's still very appealing to just plop it in front of any service (much less a Go service).

Performance & failsafe is a big part of the appeal, but so is local caching, traffic splitting (for A/B testing or regional versions), etc. It's hard to ignore that when choosing to expose your server directly or put it behind NGINX.

Unless you use none of these things, you'll end up reinventing a bunch of wheels.


Given that Go's HTTP interfaces are very composable, and assuming there are libraries to do caching and traffic splitting, you wouldn't be reinventing wheels. At that point, it seems that the question is whether you prefer to manage NGINX config files or write your configuration in Go.


OK, so 're-implement' the wheel. Using a variety of unrelated, dubiously-updated libraries.

I'm still not sure what the advantage of that is over using perhaps the most reliable, certainly most-used web server as a proxy in front of your app. I'm open to convincing, though.


How is filling out struct fields more complex than filling out config files? The principal advantage is system simplicity--everything deploys as a single file, no network topology to troubleshoot, fewer moving parts, no new highly-configurable tool to master. If the quality of those libraries is as poor as you suppose, then take NGINX by all means. I don't see any reason to make those assumptions, however.

I don't mean to overstate the advantages--I think both solutions are fine; neither will make or break your operation.


It's more than filling out a few structs, though. Traffic splitting or caching via NGINX can literally be done in a handful of lines. No go gets, no middleware and it's well tested, mature and backed by software that powers a majority of the web.

I use go net/http every day, in production. I trust it, but NGINX provides so much more battle tested functionality out of the box.


> It's more than filling out a few structs, though.

How do you know how many lines are required to configure hypothetical middleware?

> No go gets

How is static compilation worse than `apt-get install` or `docker run`?

> No middleware

It's another process... why would running another process be better than middleware?

> it's well tested, mature and backed by software that powers a majority of the web.

Granted. It seems like this is the only clear win for NGINX, and it may well change if Go libraries mature. Time will tell.


You're missing the sysadmin angle of this entirely. Nginx has amazing tooling around load balancing, configuration mgmt, multiple languages, logging options, rewrite rules, rate limiting, file upload size tuning, HTTP tuning in general, the list goes on and on. What if your site needs to support multiple backends like a JVM app, a wsgi app, and an old crufty cgi app. You gonna write backends for all that shit too in Go?

Sorry, but I'm not going to be writing Ansible code that modifies structs inside of some program and then compiles said program. No thank you, that sounds like crazy town. Also, other sysadmins and infrastructure engineers will actually know how things work and won't have to go reading the source code for some crazy Go program at 2am that is also a webserver for some reason.

Separation of concerns, use it!!


> You gonna write backends for all that shit too in Go?

The discussion is scoped to a single Go application. No one is proposing replacing NGINX with Go (or anything else) for JVM apps.

> won't have to go reading the source code for some crazy Go app program at 2am that also is a webserver for some reason?

This is a rephrasing of the question I posed earlier--is it easier to manage configuration in Go source code or NGINX config files.

> Separation of concerns, use it!!

Concerns can be separated without being in distinct processes or implemented by distinct programmers or implemented in distinct programming languages.


You're free to not use nginx or Apache if you can validate that you are better off without them, but IMHO it sounds like a nightmare of "experimental homemade wheels" being muddled in with business logic.


> How do you know how many lines are required to configure hypothetical middleware?

Primarily through experience. Happy to be shown otherwise. Middleware usually takes a bit more configuration than that.

> Granted. It seems like this is the only clear win for NGINX, and it may well change if Go libraries mature. Time will tell.

We can quibble over whether it's the "only" win, but even so, it's a very, very big win. It's a single point of failure that handles a lot of features you'd have to replicate with various packages that lack the kind of support, maturity and community that NGINX has.


> How is filling out struct fields more complex

It's also about reproducing all the business logic provided by nginx, not just the configuration. You can't pretend the net/http package gives you everything nginx provides; that's a lie.


> whether you prefer to manage NGINX config files or write your configuration in Go

When it comes to these decisions, I use the Principle of Least Power (https://en.wikipedia.org/wiki/Rule_of_least_power)

If writing

    proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g;

    location / {
        proxy_cache my_cache;
        proxy_pass http://my_upstream;
    }
gets me what I want without Turing-completeness, I'll choose that.

Obviously, more complicated tasks require more complicated tools.


I agree, when choosing between two configuration languages all else equal, choose the one with the least power. However, I think there are some advantages to choosing Go that might offset this rule--specifically simpler deployment and process topology. Further, if you know Go and not NGINX, the learning curve is (probably) better as well.


> ...if you know Go and not NGINX...

This doesn't seem like a solid basis for making engineering decisions -- or any kind of decisions.


On the contrary, it makes a lot more sense to build something in technology you have expertise in than it does to "use the right tool for the job". I'd rather have fewer tools to maintain at the expense of them being slightly worse fits.


What I am referring to is thinking that it is good to not know anything about the alternatives available to you and still make a decision.

This doesn't always lead to the best decisions -- although by chance it can work out okay -- and it also has the effect of limiting your horizons. Now I can't say one should learn about every single web server out there, but when the choice is between a dedicated web server and a library in a particular programming language, there is really a lot to see on the other side. Rewrite rules and the wide variety of HTTP stats available for logging are both areas where web servers can give us new ideas.

When I was much younger I thought, why learn databases? I can use Java, I know XML. Just open the XML file and add the records. There's XQuery for searching... The historical precursor for "Why not write everything in Go?" is "Why not write everything in Java?".


> What I am referring to, is thinking that it is good to not know anything about the alternatives available to you and still make a decision.

I think you can know enough about an alternative to decide whether or not it's appropriate without conquering its learning curve.


> > What I am referring to is thinking that it is good to not know anything about the alternatives available to you and still make a decision.

> I think you can know enough about an alternative to decide whether or not it's appropriate without conquering its learning curve.

Yeah, but that's not the same thing as using your level of knowledge as the deciding factor.


What you say makes no sense whatsoever. Go doesn't come with a copy of nginx in its standard library. The Go std lib is not equivalent to what nginx provides. You gophers are so obsessed with doing everything in Go it's unbelievable. Go is a tool, just like nginx; it's not a silver bullet.


Absolutely, but one tool is better than two tools. If it's good enough for most cases, then great, stick with the std library. It makes testing it and deploying it easier. When it gets more complicated, add Nginx.


One reason we didn't do this with our messaging service at Charge was that we didn't want code that we wrote to have access to our private TLS keys in production. Not everyone needs that level of protection, but it's helpful to avoid giving your software engineers footguns that can inadvertently lead to decryption of all your production data streams.


>we didn't want code that we wrote to have access to our private TLS keys in production.

Correct me if I misunderstood you, but you don't want _engineers_ who write your code to have access to private TLS keys which are _used_ in production


As clarified by some sibling comments, having your own engineers writing code that has access to your private keys is potentially problematic.

For one thing, a malicious or disgruntled engineer could sneak in code that exposes your private key material in some fashion.

Secondly, and more likely, your engineers may make a mistake that inadvertently exposes process memory, which would include your private key material.

In simpler terms, it often pays to have a firewall between important company secrets and the guy/gal who happens to be working on your web app this month.

Assuming you would answer the question "Do your engineers have root access to production machines?" in the negative, you probably also don't want your engineers writing code that has access to the things that the root user has access to.

One other way to put this is, are you comfortable running your Go process as root (in order to bind to port 443)?


>One other way to put this is, are you comfortable running your Go process as root (in order to bind to port 443)?

You do not need to run as root to bind an application to a low port, instead use setcap (it works for everything not just Go): https://stackoverflow.com/questions/14537045/how-i-should-ru...


A separate process is overkill for protection from engineers; just have the private keys read from disk, and only have them on production disks.

If you compromise a process, you can potentially exfiltrate its memory. You'd need to also compromise the operating system to exfiltrate memory from other processes.

So, keys being in nginx means you can only get the keys by breaking nginx (or the OS), not by breaking the in-house application.


Or don't have the keys on the server at all. Anyone who gets root access can walk right up to the key file and yoink it. Obviously keys have to be stored somewhere. But it doesn't have to be on every server's disk.

Also, try to avoid passing keys in as command line arguments. If you can, avoid using environment variables, too. You can pass that data in via standard in, so it is never exposed.

Example of leaky environment variables:

https://gist.github.com/amorphid/db037f03246962959b6a034b2ca...


Those env vars should only be exposed to the same user. The same user can also usually attach a debugger and read the secrets from memory. (Of course, this is harder, so you may not want secrets in env vars anyway.)

The unix permissions model is designed to isolate one user's data from another.


Interesting link on env vars. Any links on how to do this properly?


Here's an example you can try on any Linux System running procfs.

https://gist.github.com/amorphid/4a65741d14db38b96341d7e1f2d...

The short version is I'm passing a variable in via the pid's standard in, reading the line, and then declaring the variable. This is a very contrived example :) But you can write a wrapper script that would handle all of the line reading for you.

This originally came up when I was asking someone how to pass sensitive information (API keys, passwords, etc.). I did some research, and found this approach.

In most programming languages, for basic system calls, you just run a command, it does its work, and then it exits. But sometimes you want a script that can take information from standard in, or send it to you over standard out. For example, you might write a script that runs for a few minutes, then says "OK, I'm ready for the password!", and you pass it in at the moment it's needed (but honestly, don't do this unless you need to, because it's one more thing that can break).

Erlang/Elixir land has a library called erlexec that does this => http://saleyn.github.io/erlexec/

Another Elixir library is Porcelain => https://github.com/alco/porcelain


You can just overwrite the env variables once you read them in process.

TBH, though, if they are sniffing env variables from processes, there's no reason not to sniff that process's memory directly.


> A separate process is overkill for protection from engineers

From engineers sure.

But a separate process helps for other threats, like heartbleed.


I sure hope you're using an intermediate CA.


How do you encrypt/secure the traffic from the LB to the application servers?


Does anyone here know whether such a similar article exists for Erlang (or specifically, the Erlang ecosystem's Cowboy HTTPD library)?

Even though Cowboy (and frameworks built atop it, like Phoenix) is known to perform well under load (including DDoS-like load), I've always been wary about exposing it directly to the Internet. I know NGINX was explicitly hardened against many classes of web server attack; I haven't ever seen the same claimed about Cowboy.

It'd be reassuring just to know of anyone with a large, public-facing web service, who has deployed Erlang in a directly-exposed HTTP server role, weathered attacks, and come out fine. (Heroku, maybe?) But I haven't heard much on that front, either.


I'd still recommend running NGINX in front of Go or Node backends. NGINX gives you the flexibility to add things like gzip, caching, static asset expires headers, load balancing, health checks, etc.

(Shameless plug) On the TLS side, I wrote a short blog post (http://blog.commando.io/the-perfect-nginx-ssl-configuration/) on setting up NGINX to get an A+ rating on Qualys SSL Labs. It really only takes a few lines/directives.


I agree. I enjoy Go because the apps I build are simple, and all the "special features" like caching, gzip, expires headers, health checks and reverse proxying come with nginx. I picked Go as my language of choice because I want to get to the root of the problem and write code for that, and do that one thing very well. It's not that I don't think Go can do it; it's just that I don't want to do it in Go. It feels like an abuse of the language paradigm to do everything in Go.


Go does native gzip too, and you can load assets into memory easily for instant streaming if you don't want to rely on the Linux page cache (which should already have a copy in memory).

If it helps anyone, I published the recommendations here plus some other resources into a simple Go package for instant A+ server report: https://github.com/Xeoncross/secureserver


It's nice to see that Go's http and TLS libraries are getting even better. They're what attracted me to Go in the first place.

Also, the coverage of the various timeouts for HTTP requests is mostly new information to me. Is that something that nginx and apache usually take care of?


You can usually configure them; see for example http://nginx.org/en/docs/http/ngx_http_core_module.html#send... and http://nginx.org/en/docs/http/ngx_http_proxy_module.html#pro...

The difference is Nginx sets timeouts based on the time between successive byte reads or writes, so for example if you have a 30 second timeout and receive one byte every 29 seconds, you won't trigger Nginx's timeouts: https://kev.inburke.com/slides/reliable-http/#connect-timeou...

Go generally prefers wall-clock timeouts for reading or writing the entire response, that is, if you don't get the whole thing in 30 seconds, return an error, regardless of when you received each individual byte. Although you can configure Nginx-style timeouts if you want.


My personal experience with the Go stdlib is that it practices "defensive programming" pretty well. For example, the http.Client defaults. It's what you would expect from a language with Go's objectives, but it's still something I appreciate (even if it forces me to do things right when I don't want to!).


http.Client has no usable defaults. In fact, you can't even write a usable file downloader because of the way it handles timeouts; you have to write your own.


I'm not a fan of Go, but its libraries seem top-notch.


They are, unless you want to know what went wrong. Then the fundamental problems with the library design sorta beat you over the head.


Peter Lambert wrote his own guide on getting a perfect SSL Labs score by tweaking the go HTTP server config (https://blog.bracebin.com/achieving-perfect-ssl-labs-score-w...). Both of these articles are a good quick read for Gophers.


I still live under the assumption that there are oh so many ways to ____ with a web server beyond opening a TCP connection and keeping it open, and that for all of them there is code in nginx to defend against. But hearing this from CloudFlare makes it worth a second look. Can someone with first-hand knowledge of the nginx (front-line) codebase comment on whether the things described in this article are all it takes to have a mostly resilient HTTP service?


I can't comment on the nginx codebase, but I've been running production-facing Go servers for a long time, and I feel safe in saying I have mostly resilient HTTP services.

I've worked with more than one company handling over 100k requests/s on the public internet with Go. Go's networking model combined with the work that's gone into fuzzing the stdlib combined with the benefit of hindsight when it comes to data structures and security combined with lots of love from google web people has resulted in an extremely mature web stack.


I see what you are saying, but that only confirms that your services are fast and stable as long as people are using them for their intended purpose. What I am interested in is whether they are resilient when someone is deliberately attacking them, and not just with trivial scripts.


I think I understand what you are saying too. I've personally worked on two Alexa top-100 sites for the US that are using Go on the public internet. They see a fair amount of malicious traffic. I actually find Go and net/http to be a pretty solid base for defusing layer 7 attacks.


I wouldn't call nginx mostly resilient either, but at least nginx doesn't leak resources by default and has measures against some known attacks, like the one that prevents DoS via range requests. It also has some ways to be made somewhat more resilient with the limit_req and limit_conn modules, and it handles timeouts for streaming and external requests properly. The only way for Go to get to that level is to write another networking library; the one in the standard library is pretty much broken by design (they've been working on it for many years and it still doesn't even handle timeouts properly).


This was my feeling about the article too (again, a feeling, with no factual basis at all), but shanemhansen in the comment above seems to have good experience with this type of approach, so who knows, maybe we are over-engineering by putting nginx in front for defense. But I would suspect that "security through obscurity" also helps quite a bit in deflecting most attacks.


But no method to just specify max connections before old ones start getting closed? Did I miss that? With nginx, I don't mess with timeouts. I just set max connections appropriately and that's it. Don't care about slow connections when descriptors aren't in short supply.


I'm not aware that nginx has an option to "specify max connections before old ones start getting closed". The default behavior when max connections are hit is to refuse new connections[0]. With the default timeouts of 60 seconds for client_body_timeout and client_header_timeout, a connection can then be held for at least 2 minutes[1][2]. Note that the body timeout "is set only for a period between two successive read operations, not for the transmission of the whole request body", so it's possible to arbitrarily extend how long a single connection stays open.

Since these connections are long lived, and normal connections are generally short, the total number of connections gets dominated by the slow ones. If no other mitigations are put into place, then this can cause a server to hit ulimits/max_conns and keep legitimate requests blocked.

This is known as the "Slowloris" attack[3], and is mentioned in the nginx DDoS mitigation blog post[4].

[0] - http://nginx.org/en/docs/http/ngx_http_upstream_module.html#...

[1] - http://nginx.org/en/docs/http/ngx_http_core_module.html#clie...

[2] - http://nginx.org/en/docs/http/ngx_http_core_module.html#clie...

[3] - https://en.wikipedia.org/wiki/Slowloris_(computer_security)

[4] - https://www.nginx.com/blog/mitigating-ddos-attacks-with-ngin...


Hmmm, thanks. It worked for me, but I should look closer.


I'm happy to note that my (newly released) library lua-http covers almost all of these concerns in its default configuration, with even cleaner semantics around the read/write timeouts.

The only thing missing is an equivalent of the "Idle" timeout mentioned in the OP. I'm curious how you think it should behave in the HTTP/1.1 pipelining case; I guess by only counting the time that the connection is totally idle?


Has someone wrapped all of this in a library? Or are there plans to update the defaults so that they're more hardened?

It seems like it shouldn't be this hard.


There have been and will be improvements. Go 1.8 improves timeouts.

They are constrained to some extent by the Go 1 compatibility promise; for example, they probably don't want to change the default timeout behaviour because of it. Hopefully, if they do a Go 2 at some point, they'll use that opportunity to clean up some APIs and fix a few things like this.


The article still misses how to bind to port 80 or 443 as a non-root user:

http://serverfault.com/questions/112795/how-to-run-a-server-...

That question has many good answers.


You do not need to run as root to bind an application to a low port; instead, use setcap (it works for everything, not just Go): https://stackoverflow.com/questions/14537045/how-i-should-ru....
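
For the record, the setcap approach is a one-liner on Linux; the binary path below is a placeholder, setting the capability requires root, and getcap lets you verify it afterwards:

```shell
# Grant the binary the right to bind ports below 1024 (run as root).
sudo setcap 'cap_net_bind_service=+ep' /path/to/your-server

# Verify that the capability was applied.
getcap /path/to/your-server
```

Note the capability is attached to the file, so it has to be re-applied every time you deploy a new binary.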


Nginx is your best friend.

I always have an Nginx proxy in front of services. Whether it's Go, Ruby or Node (or Docker)


And Apache/PHP and Tomcat and all the things. Nginx as the front line has been my policy since forever.

For resilience, HA, proxy routing, static files, masking backend errors, caching.

It handles HTTP, HTTPS, HTTP/2, and WebSocket so nicely.

Nginx all the things.


One thing I've found lacking in Nginx is active health checks. I know this is a feature in Nginx Plus, but I'm curious to know if there are any elegant open source solutions for this.


HAProxy does health checks. Might not be worth adding that inline for your needs but it does a good job of load balancing.


Can you clarify why?


Not OP, but if you want an app in production, then you want to be able to configure the HTTP parts and script it with command-line tools. And if you start doing all that, then you're basically reinventing nginx. Even compared to Java, nginx still has nicer HTTP features, such as reloading SSL certs and config without dropping connections.


@logn clarified it pretty well.

Anything you do in your service other than your own business logic (I mean at the HTTP level) is reinventing the wheel, and not doing it as well as those who came before you (nginx/HAProxy, etc.).

This is a generalization, of course, but my strategy of putting nginx in front of everything hasn't failed me so far.


You say strategy; I think you meant a well-established, battle-tested, proven, industry-standard best practice.


I use nginx in front of Go in production. This article is interesting: it would certainly reduce some administrative overhead if I could remove nginx. Case in point: I just distribute Go binaries to my production servers, so I don't even need the Go compiler there, and this would simplify deployment somewhat. But then again, I'm not changing my nginx configuration that often.

Some questions that come to mind: I also leverage nginx for static file caching. I've seen some sample code for FileServer in net/http, but what kind of algorithm does FileServer use for caching, LRU? Can you configure the size of the cache?

And in terms of scale: I haven't reached this point yet in my project, but from the _olden_ Sinatra days I'd spin up multiple processes and proxy through nginx. On a single (machine) server, would a Go deployment essentially be limited to one process per server? I'm assuming the Go binary can leverage multiple cores automatically, so I wouldn't need to do it like Ruby or Python?

What are your experiences with Go backend services? I run a RESTful API server that connects to a database and Redis; so far, performance seems good enough that I only need one Go process per machine.


I usually use HAProxy in front of Docker containers; this is interesting stuff.

I wonder how he feels about using Caddy instead of bare net/http?


Caddy server pretty much is bare net/http (it uses the stdlib), so it would be doing the same thing.


Put it behind HAProxy! (Or nginx, if you only started *nixing in the last 5 years and don't know what HAProxy is.)

Both are battle-tested (HAProxy more so), and like another poster said, if you don't use something like them, you're going to reinvent the wheel in several areas.


It's years old, but I listened to the advice of-

https://dennisforbes.ca/index.php/2013/08/07/ten-reasons-you...

-and on re-analysis it all holds completely true. Nginx gives my deployment flexibility, at essentially negligible cost. And no Go deployment should include a bunch of boilerplate code to do banal stuff like serving static content.


Static file server in one line of Go:

    http.ListenAndServe(":8080", http.FileServer(http.Dir("/usr/share/doc")))
https://godoc.org/net/http#FileServer


I wonder if it would make sense to front-end that with nginx. It has nice HTTP/2 support and the latest SSL/TLS implementation. Just curious what others think about this approach.


That's currently what is done. However, the idea that I could get my entire stack, from HTTP handler to router to logic to database query and back to response, into a single 10 MB binary is pretty enticing.

If I could get there, I'd have full control over every aspect of an API request, from packet to server query and back to payload, in a single programming language and a single conceptual framework. There is a lot to like about that. I'm not sure it's needed: nginx works so well, and perfect settings are just a single config file away. But if I could get to a truly single-binary deployment, I'd be pretty happy too.


literally the first sentence of the post:

> Back when crypto/tls was slow and net/http young, the general wisdom was to always put Go servers behind a reverse proxy like NGINX. That’s not necessary anymore!


Sigh. If you're configuring "Curve Preferences" you're doing it wrong. Crypto either works out of the box or find another tool.


First, the person writing this article (Filippo Valsorda) has expertise.

Second, the point isn't to select crypto that "works", but rather to select crypto that is efficiently supported by Golang.


> However, you should still set PreferServerCipherSuites to ensure safer and faster cipher suites are preferred, and CurvePreferences to avoid unoptimized curves

Sounds more like he's giving advice to others on the finer points of elliptic curve cryptography. Programmers should not need to know this stuff.


>> and CurvePreferences to avoid unoptimized curves

The key word being unoptimized. In the article, the code snippet has a comment "Only use curves which have assembly implementations" and he mentions that "a client using CurveP384 would cause up to a second of CPU to be consumed on our machines." (presumably because it does not have an assembly implementation)

> Programmers should not need to know this stuff.

It can sometimes be a sign of a leaky abstraction, but programmers might need to know the performance characteristics of the code they write.


[flagged]


Please stop breaking the guidelines. They ask us not to call names like this because it's not conducive to having the types of civil discussions that this site is for.

https://news.ycombinator.com/newsguidelines.html


> Please stop breaking the guidelines.

How about you stop breaking the guidelines? The parent didn't break any guideline.


...no, it's literally saying "here are the settings you need to make sure you only use efficiently implemented algorithms"


Because making safe, efficient settings the hard-to-tamper-with default would be waaay too sensible?


More sensible than trying to have a conversation by stating your points as condescending "questions".

In case you actually care to discuss it, there's definitely a trade-off between variety of cipher support (which some people want) and efficiency of cipher support (which some people absolutely need). Prioritizing one over the other is not a clear or simple decision.


instead, programmers are given a fips-compliant "black box" drop-in openssl replacement which they're not expected to question, right? :)


If by "question" you mean an average web scripter with a community college degree in "IT" deciding what elliptic curve to use in conjunction with what cipher suite and protocol...then maybe the answer is yes?


Are you at all familiar with Go's TLS and crypto libraries? Because, by design, unsafe and exotic curve choices aren't in the standard library. Here's the entire set of curves exposed to TLS in Go 1.7:

    type CurveID uint16

    const (
        CurveP256 CurveID = 23
        CurveP384 CurveID = 24
        CurveP521 CurveID = 25
    )


[flagged]


(a) No.

(b) You can use asterisks to set off emphasized text. The guidelines ask you not to use all-caps.

(c) You can ignore all these recommendations and write a Go web app by doing what most people do, and put nginx in front of your app server.


I wasn't emphasising, I was quoting a hypothetical developer yelling in frustration because he just wants to write a safe web app and people keep talking about curves. All-caps is the correct markup style for yelling.


Again, the guidelines ask you not to do that at all, so regardless of the intention, please don't.

Your hypothetical developer has 3 options:

(1) Accept the Go defaults. They might be exposed to one more in a very long series of possible compute-DoS vectors, due to the fact that curves other than P-256 aren't assembly-optimized in Go.

(2) Do what most prod networks do and run the Go app server behind an nginx reverse proxy. This is probably the right architecture regardless.

(3) Change the TLS configuration to allow only P-256 and Curve25519, as the article suggests.

Option (1) with respect to curves is just fine for virtually everybody. There's really no issue here.



