1. Use -ldflags="-s -w" instead.
2. A production build should NOT be running glide install - you want ALL your dependencies vendored, locked, and committed to your repo before you build. Bonus: you can have your Docker image built by a CI pipeline and know it's going to be exactly like the one you've got locally.
3. If you're including external resources in your container (upx in this case) via URL, it's common sense to verify the GPG signature or, when one isn't available, at least the file hash.
4. If your app doesn't need things like ca-certs, you don't need Alpine - you can just use "FROM scratch" to have only the statically linked binary in your container, slashing another 50% off the final size of the container.
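A minimal multi-stage sketch putting points 1, 2, and 4 together (image names and paths are placeholders; the upx step and its checksum verification from point 3 are omitted for brevity):

```dockerfile
# --- build stage ---
FROM golang:1.9 AS build
WORKDIR /go/src/example.com/myapp
# Dependencies are already vendored and committed, so no glide install here.
COPY . .
# -s -w strips the symbol table and DWARF debug info; CGO_ENABLED=0 gives
# a fully static binary that can run in a scratch image.
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o /myapp .

# --- final stage ---
FROM scratch
COPY --from=build /myapp /myapp
ENTRYPOINT ["/myapp"]
```

The final image contains nothing but the binary itself.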
"My app will build even with GitHub down!"
"My app will build with GitHub down, and the entire internet down!"
"My app will build with GitHub down, and the internet down, and the US government being taken over by a foreign military power!"
"My app will build with GitHub down, the internet down, and the US government being taken over, and Europe a smoldering crater from the last alien invasion!"
At some point - just - nobody cares that your app can build. There are bigger problems.
To me, "GitHub is down and not coming up!" is one of those "oh my god we now have serious problems who cares about our app" moments.
If all you need is ca-certs you can still use scratch and copy /etc/ssl/certs/ca-certificates.crt from the build stage image into the final stage:
COPY --from=0 /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
This is old information. It is perfectly fine to strip Go binaries and has been for years. This post does a pretty decent job of summing it up.
I do understand the desire to omit unnecessary attack surface, but if you have a good common base image, then you're not incurring any additional storage beyond once per docker host for all the layers your images have in common.
That's a very vague statement, though, given the layer concept. The same final filesystem image can use any number of layers to get there. The way you organize the layers in your images, and share data between multiple apps, is important on top of generally keeping things reasonably small.
What I think you're trying to say is that fast startup is preferable. What you have not addressed is average startup time, which is where you still benefit dramatically by understanding layers.
The smallest average amortized image size is best. That means if you can get all the services for a new node started with 2GB total download, it's better than 3GB total download.
Using layers appropriately, and perhaps adding dependencies used by 80% of your services to your base image, may be best for overall efficiency. Know the tool and know your usage patterns.
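The shared-base idea can be sketched like this (image name and package list are hypothetical, just to illustrate the layering):

```dockerfile
# Hypothetical shared base image: dependencies common to most services
# live in one layer, so each docker host stores them only once and every
# service image built FROM it shares the download.
FROM debian:stretch-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates curl \
    && rm -rf /var/lib/apt/lists/*
# Each service image then starts FROM this base and adds only its own binary.
```

With this in place, pulling a second service onto a node only downloads that service's own layers, which is where the amortized-size win comes from.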
Also, python:3.6 doesn't always mean the same layers, since the image could be rebuilt with entirely different layers under the same tag, unless FROM is pinned to a particular digest.
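Pinning works by referencing the content-addressed digest instead of the mutable tag (the digest below is a placeholder, not a real one):

```dockerfile
# A tag like python:3.6 can move to a rebuilt image; a digest cannot.
# Pinning FROM to a digest guarantees the exact same base layers on
# every rebuild.
FROM python@sha256:0000000000000000000000000000000000000000000000000000000000000000
```

You can find the digest of an image you already trust with `docker inspect` or in the output of `docker pull`.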
Even for building, a proper build system (like Bazel) is a better tool than building in Docker containers...
It's this part right here. Containers provide a uniform deployment model for all apps you need to maintain and service. The original analogy to physical shipping containers is spot on here. It doesn't matter if your Go app is "shipping ready"; what matters is that it stacks neatly and identically with all the other stuff running around on the network.
Containers abstract the 'anywhere in the filesystem' problem away. There's no need to worry about conflicting file/directory names; K8s or Mesos take care of that with zero effort.
K8s and Mesos offer both failure management and deployments / updates out of the box. Chef, Salt, and Ansible offer ways to manage rolling updates, and monit, runit, or systemd can keep a crashing service going, but that's all centered on single nodes. With Marathon, you define how many instances you want and how many should be available during an update. If you push an update, Marathon does the rest for you.
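A rough sketch of what that looks like as a Marathon app definition (app id, image, and numbers are made up for illustration):

```json
{
  "id": "/myapp",
  "instances": 5,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "myorg/myapp:1.2.3" }
  },
  "upgradeStrategy": {
    "minimumHealthCapacity": 0.8
  }
}
```

Here `instances` is how many copies you want running, and `upgradeStrategy.minimumHealthCapacity` tells Marathon to keep at least 80% of them healthy while it rolls out a new version.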
Even more, in a sufficiently sized Mesos / K8s cluster, you can ignore physical nodes over a short timespan. A physical node dies? As long as there's capacity, your services recover. Technically, while reckless, you can decommission a physical host by just pulling the plug on that box. Your orchestration should recover within minutes.
I am critical of containers, especially for relational databases, memcaches, rather static Elasticsearch instances, and such. But if you have many and/or rapidly deploying applications, good container orchestration systems save time and effort.
For security purposes, the idea of a unikernel rather than a container has been very interesting. I want to try them in a test soon.
For example, when Kubernetes was built with Go 1.8 instead of 1.7, the emptyDir teardown path was broken due to the change in behavior of os.Rename in Go 1.8. This bug caused other issues, like the one with minikube, and they had to rebuild with Go 1.7 and ship updated binaries for the already-released v0.20.0.
Ultimately, I need to work on a team with other operations-minded developers, and shipping them a dockerfile / docker container is kind of a spit in the face, so this has been a good solution for me.