Guide to Writing Dockerfiles for Go Web-Apps (hasura.io)
102 points by alberteinstein 8 months ago | 47 comments



1. Running strip on go binaries is a BAD IDEA: https://github.com/moby/moby/blob/2a95488f7843a773de2b541a47...

Use -ldflags="-s -w" instead

2. A production build should NOT be running glide install - you want ALL your dependencies vendored, locked and committed to your repo before you build it. Bonus: you can have your Docker image built by a CI pipeline and know it's going to be exactly like the one you've got locally.

3. If you're including external resources in your container (upx in this case) via a URL, it's common sense to verify the GPG signature or, when one isn't available, at least a file hash.

4. If your app doesn't need things like ca-certs you don't need Alpine - you can just use "FROM scratch" to have only the statically linked binary in your container, slashing another 50% off the final size of the container (see the sketch below).
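
To make those points concrete, here's a minimal multi-stage sketch, assuming the repo already has its dependencies vendored and committed (the image tag, paths and binary name are placeholders, not anything from the article):

  # build stage: deps are already vendored, so no glide/dep fetch happens here
  FROM golang:1.9-alpine AS builder
  WORKDIR /go/src/app
  COPY . .
  # CGO disabled for a fully static binary; -ldflags="-s -w" instead of running strip
  RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o /go/bin/app .

  # final stage: nothing but the binary
  FROM scratch
  COPY --from=builder /go/bin/app /app
  ENTRYPOINT ["/app"]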


On (2) note that checking in vendored code is not necessary for reproducible builds, as you will get the exact revision that's in your lock file for glide or dep. You will, however, massively bloat your git repo.


I'll take the bloat over having GitHub's uptime as a build dependency.


Stick a cache on your build server. As long as your cache doesn't go down, you can cache all the build dependencies. Then GitHub is not a dependency anymore.


That does not work with Docker-in-Docker builds using the COPY --from pattern. (Especially not when running sbt...)


Yes it does. You can pass http_proxy as a build argument to docker and then it will use your cache when it builds docker images.
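
For instance, something like this rough sketch (the cache endpoint and image tag are placeholders; http_proxy is one of Docker's predefined build args, so no ARG declaration is needed):

  #   docker build --build-arg http_proxy=http://build-cache.internal:3128 .
  #
  # fetches in the RUN steps below then go through the cache
  FROM golang:1.9-alpine
  RUN apk add --no-cache git
  WORKDIR /go/src/app
  COPY . .
  RUN go get -d ./...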


Ditto decreased build speed because of internet downloads.


How far can you take this thinking?

"My app will build even with Github down!"

"My app will build with Github down, and the entire internet down!"

"My app will build with Github down, and the internet down, and the US government being taken over by a foreign military power!"

"My app will build with Github down, the internet down, and the US government being taken over, and Europe a smoldering crater from the last alien invasion!"

At some point - just - nobody cares that your app can build. There's bigger problems.

To me, "Github is down and not coming up!" is one of those "oh my god we now have serious problems who cares about our app" moments.


GitHub usually does a good job but we aren't paying for an SLA guarantee and major outages aren't unheard of. It's easy enough to make your build a deterministic function of the branch contents that I don't see much of a defense for not doing it.


Re: #4

If all you need is ca-certs you can still use scratch and copy /etc/ssl/certs/ca-certificates.crt from the build-stage image into the final stage:

  COPY --from=0 /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
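
In context, the final stage might look something like this (the binary path is just a placeholder, and it assumes the build-stage image has ca-certificates installed):

  FROM scratch
  COPY --from=0 /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
  COPY --from=0 /go/bin/app /app
  ENTRYPOINT ["/app"]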


Thanks, will try this out and update.


> 1. Running strip on go binaries is a BAD IDEA:

This is old information. It is perfectly fine to strip Go binaries and has been for years. This post does a pretty decent job of summing it up.

https://dominik.honnef.co/posts/2016/10/go-and-strip/


On 4, I recommend using a multi-stage build to build in Alpine/Debian/etc. and then copy all the necessary artifacts onto a scratch base.


Thanks! Will try, test and update regarding points 1, 2, 3. My use case required ca-certs but I'll mention the scratch alternative too.


#4: Another motivation for using Alpine is to have a shell available so that quick debugging is possible if something goes wrong. For example, use ping to see if DNS lookups are happening in a Kubernetes cluster.


I really don't understand the obsession with small docker final images, unless you're doing something silly like export/import and throwing away the benefits of layers. Removing temporary files from layers is great, but you get very little benefit from starting smaller. Especially so if that means your final layer is bigger. That's actually a lot worse storage-wise than having a fat base image and a bunch of tiny final layers.

I do understand the desire to omit unnecessary attack surface, but if you have a good common base image, then you're not incurring any additional storage beyond once per docker host for all the layers your images have in common.


When a container is scheduled to a node in Kubernetes, if the image is already available, it takes only ~2s for it to be up and running. But if it is not present on the node and the image is 2GB, the download speed is the rate-limiting factor, taking startup times well beyond 2 seconds. And in a multi-node environment where you scale up and down or maybe autoscale, new nodes can come up a lot. So, leaner images are always better.


> So, leaner images are always better.

That's a very vague statement, though, given the layer concept. The same final filesystem image can use any number of layers to get there. The way you organize the layers in your images, and share data between multiple apps, is important on top of generally keeping things reasonably small.

What I think you're trying to say is that fast startup is preferable. What you have not addressed is average startup time, which is where you still benefit dramatically by understanding layers.

The smallest average amortized image size is best. That means if you can get all the services for a new node started with 2GB total download, it's better than 3GB total download.

Using layers appropriately, and perhaps adding dependencies used by 80% of services to your base image, may be best for overall efficiency. Know the tool and know your usage patterns.
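
As a sketch of that idea (the registry path, base image and binary name are all hypothetical): the shared runtime deps live in one organisation-wide base image and each service only adds a thin layer on top, so the big layers are pulled once per node.

  # registry.example.com/org/base:1.0 is a hypothetical shared base image,
  # built once from a common distro plus the deps most services need
  FROM registry.example.com/org/base:1.0
  COPY myservice /usr/local/bin/myservice
  ENTRYPOINT ["/usr/local/bin/myservice"]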


This adds an unnecessary dependency between services. The team working on service A would need to coordinate with the team working on service B to keep the same layers. The contract should be to keep the image size as small as possible, so teams can work independently.

Also, python:3.6 doesn't always mean the same layers, since it could be rebuilt with entirely different layers under the same tag, unless FROM is pinned to a particular digest.


Not a huge docker user but I've noticed various build systems and deployment mechanisms operating slower as a result of large docker images. Faster build times are really attractive to me so I keep my images small.


I've noticed some Docker image registry solutions out there billing / charging back based upon total image size committed to the repository rather than realized disk usage. This pretty much destroys the space advantages of Docker images (I suspect it's because the underlying asset storage system used by such services is super naive, or the folks in charge of said companies simply want to charge enterprises boatloads in a way that will absolutely be noticed by anyone who is at all cost-sensitive).


The major problem with very large containers is that it can cause failed jobs in container schedulers due to the holdoff timer expiring before the container is even downloaded. I've only seen this with Marathon so far but I've not yet tried to launch a 2GB container in k8s.


This doesn't happen on Kubernetes. The Pod stays in the ContainerCreating state until the entire image is pulled. We are running some Java containers which are >2GB and, apart from the time it takes to start up on a fresh node, everything is fine.


We have had our entire kube cluster go down because the image size was huge, the master node only had a 20GB disk, and Docker wasn't cleaning up old named images. Smaller images just push the problem into the future, but it gives us time to rebuild our cluster with a new kube version.


Thank you for mentioning vendoring.


I don't understand the advantage of using Docker in production to run Go. It can already emit static binaries, it can embed assets, and you can cross-compile for all supported systems using xgo... Ship your binaries, drop them anywhere in the filesystem, run them however you wish and you're done.

Even for building, a proper build system (like Bazel) is a better tool than building in Docker containers...


> Ship your binaries, drop them anywhere in the filesystem, run them however you wish and you're done.

It's this part right here. Containers provide a uniform deployment model for all apps you need to maintain and service. The original analogy to physical shipping containers is spot on here. It doesn't matter if your Go app is "shipping ready"; what matters is that it stacks neatly and identically with all the other stuff running around on the network.


Running in container-only environments (Kubernetes) would require it to be packaged in a container.


This definitely seems to be the use case the blog was written for, given the author's background.


I'm currently running 0 containers in production, but...

Containers abstract the 'anywhere in the filesystem' part away. There's no need to worry about conflicting file/directory names; K8s or Mesos take care of that with zero effort.

K8s and Mesos offer both failure management and deployments / updates out of the box. For example, Chef, Salt and Ansible offer ways to manage rolling updates, and I know of monit or runit or systemd setups to keep a crashing service going, but that's all centered on single nodes. With Marathon, you define how many instances you want, and how many should be available during an update. If you push an update, Marathon does the rest for you.

Even more, in a sufficiently sized Mesos / K8s cluster, you can ignore physical nodes over a short timeframe. A physical node dies? As long as there's capacity, your services recover. Technically, while reckless, to decommission a physical host you can just pull the plug on that box. Your orchestration should recover within minutes.

I am critical of containers in places like relational databases, memcaches, rather static Elasticsearch instances and such. But if you have many and/or rapidly deployed applications, good container orchestration systems save time and effort.


I don't use it in production, but I do have front-end teams that need to develop against services, so it's nice to be able to have them use Docker to serve those APIs locally. That's probably the biggest productivity win in my team right now, along with just sheer reproducibility.


If it's pure Go and the amount of assets you want to include is not that big, then the benefits are not as obvious as with Python, for example, but the second you have anything cgo you're in the same boat. Statically linking non-Go code is somewhat flaky.


Maybe you have a JSON/YAML/XML config file that, for whatever reason, is not compiled within the single Go binary.


If you're writing the software, I don't see a reason why it should exist, be shipped alongside the binary and not be modified per-server anyway. And if it is modified per server, it's your CM's responsibility, directly or via modifying startup flags.


Indeed, although I have thoroughly enjoyed xgo[1] (Docker-based cross-compilation) for building cross-platform binaries with packages like SQLite folded in.

[1] https://github.com/karalabe/xgo


There are many reasons, but one of the biggest ones nowadays would be Kubernetes.


Why not use GOOS/GOARCH to cross-compile, so you don't have to do the compilation step in a virtualized environment? You can build on the host and just copy the resulting binary in like all the rest of your artifacts.


I've been following this pattern by Kelsey Hightower on his blog https://medium.com/@kelseyhightower/optimizing-docker-images.... I love it because the end result is a small base image.


I would strongly advise using multi-stage Dockerfiles instead of building binaries on the host. Conveniently, the Docker manual already has an example for Go: https://docs.docker.com/develop/develop-images/multistage-bu...


Someone should make a collection of these for all languages. I’d love to see an equivalent for Swift.


Question: is Docker actually a reliable way of distributing software? I have toyed with it a couple of times and all I ever got was errors about incompatible client and server versions.


I found it awkward when running Docker "standalone". However, after I learned about k8s, all the things started to make sense. So my advice is: start using it. If it doesn't click, at least try to use it in k8s; if you still can't grasp it, you probably do not need it.


Related: Anyone have experience with Go and Unikernels? Saw a dead discussion a week ago on unik (https://github.com/solo-io/unik)

For security purposes, the idea of a unikernel rather than a container has been very interesting. I want to try them out in a test soon.


Docker Inc. bought a unikernel startup a while back. Presumably they see that as a possible endpoint.


Why is it a problem to have different versions of Go on the same machine?


Having two different versions is not an issue; tools like GVM help manage them. Docker handles this easily without any extra tools. But if something is built with one version, it cannot easily be changed without thorough testing.
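
For example, each service can pin its own toolchain in its build image, so two services can happily build with different Go versions side by side (the tag and paths here are just illustrative):

  # this service pins Go 1.7; another service's Dockerfile can say golang:1.8
  FROM golang:1.7
  WORKDIR /go/src/app
  COPY . .
  RUN go build -o /go/bin/app .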

For example, when Kubernetes was built with Go 1.8 instead of 1.7, the emptyDir teardown path broke due to the change in behavior of os.Rename in Go 1.8 [1]. This bug caused other issues like the one with minikube [2], and they had to rebuild with Go 1.7 and push new binaries for the already-released v0.20.0.

[1] https://github.com/kubernetes/kubernetes/issues/43534#issuec... [2] https://github.com/kubernetes/minikube/issues/1630#issuecomm...


I've been using Habitat[1] for shipping all my Go applications. That way I can run the go binaries on whatever kind of infrastructure I need (containers, vms, bare metal). I can mix and match for different environments, or change my mind later without needing to repackage my application or my infrastructure automation.

Ultimately at the end of the day I need to work on a team with other operations-minded developers, and shipping them a dockerfile / docker container is kind of a spit in the face, so this has been a good solution for me.

[1] https://habitat.sh



