This is an interesting portrait of container-based production stacks, but I'm sure most companies aren't using containers.
This also doesn't mention anything about the application you're serving (HTTP servers, databases), nor the vehicle you use to push changes to it (source control? ssh to the host? ssh to each guest container, good heavens?), nor how you control distribution upgrades (how would you upgrade your kernel? how would you upgrade Apache?), and so on.
What do other kinds of stacks that aren't on the "Container Bandwagon" look like? Just "ordinary" provisioning of virtual machines?
With so many problems, I don't think "Container" can be considered equivalent to "Modern" quite yet.
I'm no expert, but I'd imagine environments without containers use some sort of provisioning software like Chef or Ansible. If your application is big enough, you can't manually deploy to each server the way you describe.
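For illustration, here's a minimal Python sketch (not Chef or Ansible themselves; the hostnames and deploy command are placeholders) of the repeat-the-same-steps-on-every-host work those tools automate:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    HOSTS = ["app-01.example.com", "app-02.example.com", "app-03.example.com"]
    DEPLOY_CMD = "sudo systemctl restart myapp"  # placeholder deploy step

    def deploy(host):
        # BatchMode avoids hanging on a password prompt in an automated run.
        result = subprocess.run(
            ["ssh", "-o", "BatchMode=yes", host, DEPLOY_CMD],
            capture_output=True, text=True,
        )
        return host, result.returncode

    with ThreadPoolExecutor(max_workers=10) as pool:
        for host, code in pool.map(deploy, HOSTS):
            status = "ok" if code == 0 else "failed (%d)" % code
            print("%s: %s" % (host, status))

Real provisioning tools add the parts this leaves out: idempotency, ordering, secrets, and reporting.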
This article seems more like a container-centric description since there is no mention of build servers, applications, databases, etc.
Very true, this is container-based and very much in the style of what I experienced over 10 years at Google. I've added a disclaimer to the post to that effect.
I also considered using the term "cloud native" over "modern" but didn't want to drag the definition of that term into the discussion.
My bar for "modern" is that the system should be self-healing: hardware/VMs should fail and the system should recover without missing a beat. That requires a level of management and dynamism that is hard to get without containers. (Burning AMIs with autoscaling groups is a good alternative.)
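To make "self-healing" concrete, here's a toy Python reconcile loop. The instance list is an in-memory simulation; in a real system check_health, start_replacement, and the loop itself would be your orchestrator or cloud API (all names here are hypothetical):

    import random

    DESIRED_REPLICAS = 5
    instances = ["vm-%d" % n for n in range(DESIRED_REPLICAS)]

    def check_health(instance):
        # Stand-in for hitting a /healthz endpoint; simulates rare failures.
        return random.random() > 0.1

    def start_replacement(n):
        # Stand-in for launching a fresh VM/container via a cloud API.
        return "vm-%d" % n

    for tick in range(3):  # a real loop would run forever, paced by a sleep
        instances = [i for i in instances if check_health(i)]
        while len(instances) < DESIRED_REPLICAS:
            instances.append(start_replacement(random.randint(100, 999)))
            print("tick %d: replaced a failed instance" % tick)

The whole idea is just "compare desired state to observed state and correct the difference" - everything else in these stacks is plumbing around that loop.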
It's articles like this that make me feel like containers are the new Java EE, factory-factory-builders, dependency injection, etc. All of this makes sense in some context, but whatever happened to the right tool for the right job? Having all of these things for no reason doesn't make you "modern".
This is definitely a visionary post and the title is a little misleading. A more useful title might be: A Vision for a Container-based Production Stack (based on experiences at Google). I think the post covers the scope of container-related problems very well, but I definitely know of a lot of production stacks that work well and don't look at all like the post describes :)
One consideration missing from the post: one day we could very well have a ton of opaque/sealed/closed-source container images provided as SaaS. For example, Oracle could provide their products in containers which you would then use on your own hardware or your own cloud provider. This sort of product would give CIOs a bit more control over OPEX/CAPEX and security.
The current post assumes a single central registry. IMO it's more likely that containers will drive a distributed ecosystem of registries, and container-oriented stacks will need to deal with that.
I noticed how much of this focuses on containers and the problems that follow specifically from using containers. I thought the whole point of containers was that they make shit simple.
Many of the layers defined there are necessary for any well-run production system (things like logging and monitoring).
A lot of the extra components enable dynamic placement and management of workloads, which lets the system be self-healing and self-managing in the face of hardware failure.
In addition, much of this automatic management (once set up) lets you run a wider variety of things. It can reduce the operational cost of microservices and let you factor your app in a way that scales the development team.
Hopefully, over time, this stack will come pre-tested and pre-packaged. Things look super complex now because of the abundance of options. As things settle, it'll get easier to reap the benefits without taking on the cognitive burden.
I'm always thinking about a Shellshock type scenario.
Instead of "use x tool to upgrade bash on every server", it becomes "upgrade bash on every server", followed by "rebuild each of my several types of containers" then "redeploy every single container".
Ideally your containers don't depend on bash and are instead just running their app's process. Then any given scenario generally reduces to one of two things: upgrade bash on every server (if the problem is lower in the stack than the container), or rebuild and redeploy every container (if the problem is in a library used at the app level).
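And the second path is very scriptable. A rough Python sketch driving the stock docker CLI (the image names and build contexts are made up for the example):

    import subprocess

    # Image name -> build context; both hypothetical.
    IMAGES = {
        "myorg/api": "./api",
        "myorg/worker": "./worker",
        "myorg/frontend": "./frontend",
    }

    def run(*cmd):
        subprocess.run(cmd, check=True)  # fail loudly on any error

    for tag, context in IMAGES.items():
        # --pull forces a fresh (patched) base image instead of a cached one.
        run("docker", "build", "--pull", "-t", tag, context)
        run("docker", "push", tag)
        # Then trigger however your scheduler does a rolling restart.

So "rebuild and redeploy everything" is tedious but mechanical, which is exactly what you want in a Shellshock-style fire drill.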
I think there are disadvantages to using a PaaS. It brings all of these components together, but it also means you don't benefit from best-of-breed systems at each layer.
Also, up until now most PaaSes have been vertically integrated and opaque. I imagine the PaaS of the future will look more like OpenShift: a curated set of components from this stack rather than an opaque monolith.
I worked on one of those "opaque" stacks, and as you can imagine, I disagree. Not least because everything we develop is done in the open.
As a developer, all I care about is that the code I tested locally and that passed on CI runs the same way in production, without further thought or effort.
As an operator, all I care about is that I don't get paged when a bad piece of dev code rampages and chokes the shared resource.
PaaSes have solved this. There's just no reason left to roll your own.
It's important not to go overboard with these things. If you're building an app that probably won't see 10^6 requests a second and won't need to scale to the moon, this is overkill. All of these things add a lot of extra latency and are CPU-hungry. Logstash and Elasticsearch are big offenders here.
What worries me, and what I find frightening, is that people may start to think that MODERN == CONTAINER, which is wrong. It really depends. Don't use app containers just for the sake of being modern.
If you really want a production-ready stack, first inform yourself deeply about how to do normal dev and normal ops right: understand the concepts, and learn how to make a single server and your application bulletproof for production before ever thinking about adding the complexity layer of app containers on top of (or underneath) it. Chances are good that you don't need app containers, because your use case is too small to justify this "modern" layer of abstraction, which will surely hand you new sorts of "modern" problems at some stage. Don't get me wrong: containers can be great, but not everybody needs them.
Lots of comments on the term "modern" and the fact that this is container based.
While there are obviously other ways to skin this cat, for me, I want the following out of my production stack:
* Self-healing and self-managing. If a machine fails, I don't want to have to think about it. The system should just work.
* Supports microservices. Breaking your app into smaller components (regardless of the name) can help you scale your engineering organization by keeping the dev team for each microservice small enough that a two-pizza team can own it.
* Efficient. I want a stack that doesn't require a lot of hand-holding to make sure I'm not wasting a ton of resources.
* Debuggable. Complex applications can be hard to debug. Good strategies for application-specific monitoring and log collection/aggregation can really help provide insight into the stack (see the logging sketch after this list).
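On the debuggability point, here's a small Python sketch of the kind of structured (JSON) logging that makes log aggregation useful; the service and field names are just examples:

    import json
    import logging
    import sys
    import time

    class JsonFormatter(logging.Formatter):
        def format(self, record):
            # One JSON object per line; aggregators can index these fields.
            return json.dumps({
                "ts": time.time(),
                "level": record.levelname,
                "service": "checkout",  # hypothetical service name
                "msg": record.getMessage(),
            })

    handler = logging.StreamHandler(sys.stdout)  # containers log to stdout
    handler.setFormatter(JsonFormatter())
    log = logging.getLogger("app")
    log.addHandler(handler)
    log.setLevel(logging.INFO)

    log.info("order placed")

Writing one JSON object per line to stdout is the lowest-friction contract between an app container and whatever collector the stack runs.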
I wonder why nobody considers Ceph [http://ceph.com/] as a proper storage system for microservice stacks. Maybe because it's mostly considered an infrastructure solution.
Migrating data between hosts and containers for stateful services, data snapshots, and filesystem isolation can all be solved easily with Ceph RBDs, for example.
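For the curious, the RBD workflow looks roughly like this - a hedged Python sketch shelling out to the stock rbd CLI, where the pool, image, and snapshot names are hypothetical and an already configured Ceph cluster is assumed:

    import subprocess

    def rbd(*args):
        subprocess.run(["rbd"] + list(args), check=True)

    # Create a 1 GiB block-device image for a stateful service's data.
    rbd("create", "--size", "1024", "apps/db-data")

    # Map it on the current host; it shows up as /dev/rbdN and can be
    # formatted and bind-mounted into a container.
    rbd("map", "apps/db-data")

    # Snapshots give cheap point-in-time copies, e.g. before an upgrade.
    rbd("snap", "create", "apps/db-data@before-upgrade")

Because the image lives in the cluster rather than on any one host, remapping it on another machine is how you "migrate" the data.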