Also, I find Docker a lot easier to work with than trying to manually maintain parity between my dev environment and production hosts. Having previously pushed simple apps to a single box or two, I'd never go back, even for the simplest of apps.
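As a rough sketch of what that parity looks like in practice (image and registry names below are made up):

    # build and push the exact artifact once...
    docker build -t registry.example.com/myapp:1.0.0 .
    docker push registry.example.com/myapp:1.0.0

    # ...then run the same image, byte for byte, on any production host
    docker run -d --restart=always -p 80:8080 registry.example.com/myapp:1.0.0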
This is partially possible in the cloud as well, but it requires building machine images, which is slow, and you end up with a provider-specific solution, especially if you try to integrate it tightly.
But in short: it makes sense very soon if you run on bare metal (and arguably makes bare metal a more feasible option), while it only makes sense much later if you're running in the cloud. I'd say you should have more than one team deploying more often than every few days.
Overall I'd say it doesn't solve anything that was unsolved before, but it solves a common problem in a palatable way.
Our team uses CoreOS and is slowly moving over to Mesos/Marathon. We also have really streamlined unit testing, integration testing and deployment. When you need to deploy an app that may need 4 or 5 nodes, and the rest of the company is also deploying microservices that need anywhere from 2 to 20 nodes, having a system like Mesos or Kubernetes is essential.
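For anyone who hasn't seen it: keeping N instances of a service running on Marathon is just a JSON app definition posted to its REST API. A minimal sketch (service name, image and host are made up):

    # app.json - a hypothetical service asking for 4 instances
    cat > app.json <<'EOF'
    {
      "id": "/orders-service",
      "instances": 4,
      "cpus": 0.5,
      "mem": 512,
      "container": {
        "type": "DOCKER",
        "docker": { "image": "registry.example.com/orders:1.2.3" }
      }
    }
    EOF
    curl -X POST http://marathon.internal:8080/v2/apps \
      -H 'Content-Type: application/json' -d @app.json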
They are big systems. They can be difficult to maintain. I've been trying to set up my own mini Kubernetes in Vagrant and have struggled quite a bit. This post really shows a lot of the complexity involved, too.
It really depends on your scale. If you start growing, investing in people to maintain a container infrastructure is pretty key.
and blog on related things here:
but I do understand your skepticism - the hype is far from reality.
Putting a binary on a VM works just fine and is perfectly repeatable and scriptable already.
I'm using Gitlab.com pipelines as well, which means my CI/CD pipeline is now free, and once my build/tests have run it pings the Rancher API to upgrade the service.
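Roughly, the last pipeline stage boils down to one authenticated curl against Rancher; the exact endpoint and payload differ between Rancher versions, so treat the shape below as an assumption and check the API docs for yours:

    # run from .gitlab-ci.yml after the image is pushed; keys, URL,
    # service id and IMAGE_TAG come from CI variables set earlier
    curl -s -X POST \
      -u "$RANCHER_ACCESS_KEY:$RANCHER_SECRET_KEY" \
      "$RANCHER_URL/v1/services/$SERVICE_ID/?action=upgrade" \
      -H 'Content-Type: application/json' \
      -d "{\"inServiceStrategy\": {\"launchConfig\": {\"imageUuid\": \"docker:registry.example.com/myapp:$IMAGE_TAG\"}}}"
    # (followed by ?action=finishupgrade once the new containers are healthy)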
Though the requirements for a multi-server setup are quite steep IMO.
It's been smooth sailing.
Because of massive cost savings we were able to just reinvest them in our own redundancy. Also, 12-factor apps are pretty damn resilient.
How are you handling persistent storage? I have opted not to "dockerize" Postgres, for example, after reading horror stories.
I've found certain things quite buggy with their cloud offering but the price/performance can't be argued with.
You should check out Rancher's channel for some videos: https://www.youtube.com/channel/UCh5Xtp82q8wjijP8npkVTBA
It's mind-blowing that they offer this for free, considering the impact it has.
k8s currently has incomplete support for metal deployments. for example the integration between ingress and load balancers on metal vs cloud.
i'm kind of wondering how people are deploying them in real life.
secondly, how do you deploy new code?
Feels safe enough for our needs. Before this we didn't have 10 minute snapshots of all our data so this has actually allowed us to recover from dev errors as well (if they were to happen).
We do have our DB running in a container + mounted volumes that use the above configuration.
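Concretely, the container only writes to a host-mounted directory, so the 10-minute snapshots of the host volume capture the database files too. A sketch (paths and image tag are ours; adjust to taste):

    # all state lives on the mounted host volume, never inside the container
    docker run -d --name pg \
      -v /mnt/data/postgres:/var/lib/postgresql/data \
      postgres:9.6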
No, for real: It looks like I failed to communicate the intent very well.
I do NOT recommend setting up such infrastructure for running Ghost. I do recommend such a setup (not necessarily on DO, though) if you have a sufficiently complex stack and you want to be able to deploy to it quickly. This is often the case if you have multiple teams who want to deploy independently and you can't wait for building and spinning up VM images. Or if you're running on bare metal and can't get away with something like Docker + a few shell scripts.
In my case I just run this way-too-complex stack because I've worked in the space (at Docker and Mesosphere, to be specific) and want to stay in touch with it.
And voila, I have the steps to install <subject> ... ;-) I'm being a little snarky, but I do think that the dockerfile is pretty cool by itself, and wouldn't even mind seeing other automation frameworks use the dockerfile and the swarm configs for other purposes.
We are using containers for pretty much the same purpose - trying out different things and just throwing them onto one of our dokku boxes - a fast and easy way to play with something without investing heavily.
I'd like to think that the industry is moving forward towards better, higher-level abstractions (like Kubernetes or Heroku) but it seems to me it's only moving sideways, just like it did with the adoption of VMs.
1. Log in to DO and get your API keys
2. Select what images you want and how many instances
3. Click install and Flynn will set up your load-balanced environment.
4. Push apps like Heroku
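Step 4 really is the Heroku workflow. Roughly (app name made up):

    flynn create myapp       # adds a git remote named "flynn"
    git push flynn master    # buildpack build + deploy
    flynn scale web=3        # spread it across the cluster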
I've been evaluating dokku, Flynn and Deis. Just RancherOS left to do.
I'd love to know what was the smoothest process for something like a phoenix/node/rails app out of these. Feel free to email me at email@example.com if you feel like breaking it down over chat too :D
We use Cloud Foundry in production and we've used Dokku in development. CF is similar to Dokku and Flynn as it can provide services (databases etc) and wire them to your container.
Everyone running hobby projects should be using Dokku. It's very easy to get started with and can replace Heroku in many cases. It is missing HA, but you can always scale horizontally or load-balance multiple dokku nodes (behind, say, nginx/haproxy) against an external db/redis, although you are essentially running commands on each node.
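The multi-node trick is basically repeating something like this on every box (host names and app are placeholders):

    # same app on each dokku node, all pointing at the same external services
    dokku apps:create myapp
    dokku config:set myapp DATABASE_URL=postgres://db.internal:5432/myapp \
        REDIS_URL=redis://redis.internal:6379
    git push dokku@node1.example.com:myapp master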
Dokku should look into Docker Swarm for orchestration, but that is a mammoth task for a pure open source project (no support business to fund it), and as josegonzalez mentions, he needs the cash to be motivated to do it. Dokku also lacks a stable way to ship logs to external services. dokku-logspout is available as a plugin, but it frequently loses its connection and needs to restart. On the other hand, it excels past any of the other PaaS solutions when it comes to SSL integration via Let's Encrypt. It's a truly remarkable project.
Flynn provides services and sets up Postgres to balance between nodes, so it too is HA. Flynn works with Google, Amazon and DigitalOcean via its web installer (I had trouble getting it to run on a third-party provider; SSH was broken). Flynn provides Postgres, redis, MySQL and MongoDB out of the box, all HA, but with the caveat that you can't really customize these services. I also killed one of the three nodes via the DO console and it still carried on with no issues.
Deis was the hardest of the bunch. It is essentially a manual process to get Kubernetes up and running unless you use the cozy Google Cloud offering, which I did, but it isn't cheap ($100/month before bandwidth for 2 containers). Once up and running it is as easy as the other two. They don't have any services though - people suggested just running Postgres in a container. I think a PaaS should include the DB, but that's debatable.
Things I haven't tried:
- adding a node to Flynn or Deis once up and running
- killing a node on Deis and checking if it still runs (I assume Kubernetes takes care of this)
- incorporating these systems with Gitlab to provide a full continuous delivery system
As far as shipping logs, I feel as though that's the responsibility of the underlying container environment, so I'd probably have something based on setting docker options for containers (or an interface to do that for whatever scheduler you are using).
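i.e. something set once at container start, so it works the same under any scheduler (the syslog endpoint is a placeholder):

    # let the docker daemon ship the logs instead of the PaaS layer
    docker run -d \
      --log-driver=syslog \
      --log-opt syslog-address=udp://logs.internal:514 \
      registry.example.com/myapp:1.0.0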
All of my blogs are hosted on a DO machine, all of them are dockerized and I "orchestrate" them with simple chef scripts.
For this simple workload, I would go with swarm or something as simple as bash with upstart.
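For that kind of workload the whole "orchestrator" can be a handful of bash (a naive sketch; name, paths and port are made up, 2368 being Ghost's default):

    #!/bin/bash
    # redeploy.sh <image:tag> - pull the new image and swap the container
    set -euo pipefail
    IMAGE="$1"
    docker pull "$IMAGE"
    docker rm -f blog 2>/dev/null || true
    docker run -d --name blog --restart=always -p 8080:2368 "$IMAGE"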
Deploying wordpress/ghost to DO takes 10 minutes.
Note: I am one of the Dokku maintainers.
I've played with kubernetes on DO in the past and had a lot of trouble with the controller node using more than 512mb of ram. Maybe I was running into a bug, but I'd be more comfortable with a 20 dollar cluster running a $10 controller node.
I really look forward to more provider volume support landing upstream. If we end up with support for DigitalOcean, Vultr, and Packet volumes, we're going to be in a great place. You can run a whole lot on a 200 dollar Packet cluster. Maybe most people use more of AWS than I do, but block storage is really the final thing I need Kubernetes to abstract over for the provider not to matter to me.
If you're interested here's the digitalocean pr upstream: https://github.com/kubernetes/kubernetes/pull/36894
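Once that lands, claiming a DO volume should look like the standard PVC flow on any other provider; a generic sketch (nothing DO-specific in it yet):

    # a standard PersistentVolumeClaim; once the DO plugin is merged,
    # dynamic provisioning should be able to satisfy it with a DO volume
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
    EOF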
1. Why Kubernetes? (when mesosphere/dcos is available?)
2. Any available best practices when setting the entire environment from scratch? (right from server level security, to doing data backups for example)
As a critique: I thought that if you were running it in production, you'd need 3 masters (to ensure that if one master goes down, another takes over)?
So with 3 masters and 2 slaves (five $5 droplets), the production cost goes up to about $25/m. I feel that's too expensive for what you get (price vs performance) compared to others (I personally feel a production environment running 5 servers from packet.net @ ~$180/m is the way to go).
 - https://www.packet.net/bare-metal/servers/type-0/
> Why Kubernetes? (when mesosphere/dcos is available?)
Frankly, if you're familiar and comfortable with DCOS, the core financial benefit is the same: increased hardware utilization, assuming your workload can realize that benefit. And if your application is relatively monolithic and handles the same steady load, I'd advise staying away from advanced orchestration tools, because they only add moving parts and more complexity, unless you have the scale or organizational complexity to justify them.
> Any available best practices when setting the entire environment from scratch?
For education, I would recommend Kelsey's excellent "Kubernetes the Hard Way": https://github.com/kelseyhightower/kubernetes-the-hard-way
For production, I would recommend Telekube: http://gravitational.com/managed-kubernetes/ It does a lot of extra heavy lifting to make sure your k8s feels like an always-on black box (I work there).
Kubernetes also has a "side" benefit of abstracting the underlying infrastructure away, so if you have a use case of running the same complex stack on AWS, colo, vmware, etc., then running it on top of a "cloud abstraction layer" lowers your ops/dev costs significantly. We talk about this here: http://gravitational.com/telekube/
I was then trying to replicate the same in the cloud but with more constraints (security, networking etc.) and found both Kubernetes and DCOS complicated.
Recently Azure seems to have DCOS (and Kubernetes) as its orchestration tools for Azure Container Service, and I wanted to learn more about each.
I had a production cluster running on mesos/marathon (Mesosphere). I also have an open source project heavily invested in it.
Recently, I refactored the entire environment to work on Kubernetes, and the open source project is following it as well (it is basically a mirror of my production env).
After this long story I will say this.
They are both hard to set up correctly, especially if you follow best-practices of closed networks and publicly accessed resources (VPN, Internet Gateway etc...).
Kubernetes has a lot of "force" around it right now, with the community booming. Posts like this one (with its many problems) are somewhat of a proof, because they show a vibrant community around a project.
Kube is more stable from my experience.
Like you mentioned, I would not run Kube in production on anything less than $200-250, which means multiple masters and multiple slaves for the pod runners.
It's nice to run something cheap to check the capabilities but not more than that.
Regarding DO, I'm a big fan and run a lot of projects (personal) with them, for production grade stacks I would use AWS/Google/Azure, those give you better capabilities around hardening a cluster (security, networking) for production workloads.
we are considering a k8s based stack.. but have seen very few on-metal deployments with k8s.
2) This setup is done from scratch, so given all the constraints this is what I would do. Then again, I'd probably care little about saving a few $$ by using DigitalOcean and would look into deploying it on AWS or GCP instead, which is more mature; there you don't need to use terraform for everything, which could be more reliable.
The cluster I describe has 3 nodes, all running controller and worker services.
But again: Please don't use this setup 1:1 for production. If you really just want to run a blog and stuff, there are better options.
Easier setup and flow with Let's Encrypt is also essential. In the end, I think it's interesting, but overkill for a lot of scenarios... In a prior role, we went with 3 instances of dokku, with everything (but DB) deployed to all three instances, and an LB in front... was easier than all the orchestration tooling at the time.
Also, at least in mesos, you can set up time-scheduled tasks, which I don't think is possible with Swarm.
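(With Chronos on top of Mesos, a scheduled task is just a job posted to its API; I'm writing the fields from memory, so double-check them against your Chronos version:)

    # a hypothetical job running every 24h from midnight UTC
    curl -X POST http://chronos.internal:4400/scheduler/iso8601 \
      -H 'Content-Type: application/json' \
      -d '{
            "name": "nightly-report",
            "command": "/opt/reports/run.sh",
            "schedule": "R/2017-01-01T00:00:00Z/PT24H",
            "epsilon": "PT15M",
            "owner": "ops@example.com"
          }'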
But sure, it's overkill if you're not mainly interested in Kubernetes itself.
in fact, just yesterday, there was a new post on the group to create a SIG-Metal (https://docs.google.com/document/d/1oYtW7fgSJsQDl-ln6ETvAQrN...) because there's so little information.
load balancer integration is written for all cloud providers... but not for metal (https://github.com/kubernetes/kubernetes/tree/master/pkg/clo...). there's experimental integration with haproxy/traefik in contrib... but not in production.
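to illustrate, on metal today you typically fall back to NodePort services and run your own haproxy/keepalived against the node IPs (names below are placeholders):

    # expose the deployment on a fixed port on every node; an external
    # load balancer you manage yourself then balances across node IPs
    kubectl expose deployment myapp --type=NodePort --port=80
    kubectl get svc myapp    # note the allocated nodePort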
the reason is evident - a lot of the difficult parts of k8s are being "outsourced" to the underlying cloud platform.
k8s needs to fix this fast... because the real disruption is being able to eliminate a cloud platform (or build your own).