This also doesn't mention anything about the application you're serving (HTTP servers, databases), the vehicle you use to push changes to your application (source control? ssh to the host? ssh to each guest container, good heavens?), or how you control distribution upgrades (how would you upgrade your kernel? how would you upgrade Apache?), and so on.
What do other kinds of stacks that aren't on the "Container Bandwagon" look like? Just "ordinary" provisioning of virtual machines?
With so many problems, I don't think "Container" can be considered equivalent to "Modern" quite yet.
This article seems more like a container-centric description since there is no mention of build servers, applications, databases, etc.
I'm sure that's been written before, but I can't think of anything off the top of my head...
I was also considering using the term "cloud native" over "modern" but didn't want to bring the definition of that term into the discussion.
My bar for modern is that it should be self-healing -- hardware/VMs should fail and the system should be able to recover without missing a beat. That requires a level of management and dynamism that is hard to get without containers. (Burning AMIs with autoscaling groups is a good alternative.)
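To make that alternative concrete, here's a minimal boto3 sketch; the group, AMI, ELB, and subnet names are all made-up placeholders. The ELB health check is what buys you the self-healing: instances that fail it get terminated and replaced without anyone being paged.

```python
# Sketch: self-healing via a pre-baked AMI plus an auto scaling group.
# All names and IDs below are placeholders, not real resources.
import boto3

autoscaling = boto3.client("autoscaling")

# Launch configuration built from a pre-baked AMI ("burning AMIs").
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-v42",
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m4.large",
)

# The group keeps three instances alive; instances that fail the ELB
# health check are terminated and replaced automatically.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web",
    LaunchConfigurationName="web-v42",
    MinSize=3,
    MaxSize=3,
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
    LoadBalancerNames=["web-elb"],           # placeholder ELB
    VPCZoneIdentifier="subnet-aaaa,subnet-bbbb",  # placeholder subnets
)
```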
One consideration missing from the post: one day we could very well have a ton of opaque/sealed/closed-source container images provided as SaaS. For example, Oracle could provide their products in containers which you would then use on your own hardware or your own cloud provider. This sort of product would give CIOs a bit more control over OPEX/CAPEX and security.
The current post assumes a single central registry. IMO it's more likely that containers will drive a distributed ecosystem of registries, and container-oriented stacks will need to deal with that.
I'd love to see a decentralized image naming and distribution system. I'm pushing hard for this to happen as part of the OCI. See https://groups.google.com/a/opencontainers.org/forum/#!topic....
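Worth noting that image names already embed a registry hostname, so nothing technically forces a single central registry even today. A small docker-py sketch of pulling from several registries (the self-hosted hostname is a placeholder, and pulls obviously need network access to each registry):

```python
# Sketch: image names carry the registry hostname, so one stack can
# pull from many registries rather than one central one (docker-py).
import docker

client = docker.from_env()

# Default registry (Docker Hub) vs. an explicitly named registry.
client.images.pull("nginx", tag="latest")
client.images.pull("quay.io/coreos/etcd", tag="latest")

# A private, self-hosted registry works the same way.
# The hostname below is a placeholder.
client.images.pull("registry.internal.example.com/team/app", tag="latest")
```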
A lot of the extra components exist to allow dynamic placement and management of workloads. That is what makes the system self-healing and self-managing in the face of hardware failure.
In addition, much of this automatic management (once set up) lets you run a wider variety of things. It can reduce the operational cost of microservices and let you factor your app in a way that scales the development team.
Hopefully, over time, this stack will come pre-tested and pre-packaged. Things look super complex now because of the abundance of options. As things settle, it'll get easier to reap the benefits without taking on the cognitive burden.
Instead of just "use X tool to upgrade bash on every server", it becomes "upgrade bash on every server", followed by "rebuild each of my several types of container images", then "redeploy every single container".
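Roughly, in docker-py terms (the image names, paths, and tags here are placeholders, and a real orchestrator would do a rolling update rather than this naive stop/replace loop):

```python
# Sketch of the container-era workflow above: once bash is upgraded in
# the base image, rebuild each image type, then redeploy everything.
import docker

client = docker.from_env()

IMAGE_TYPES = {
    "web": "./web",        # each directory holds a Dockerfile that
    "worker": "./worker",  # starts FROM the freshly patched base image
    "cron": "./cron",
}

# "Rebuild each of my several types of container images"
for name, build_dir in IMAGE_TYPES.items():
    client.images.build(path=build_dir,
                        tag=f"registry.example.com/{name}:patched")

# "Redeploy every single container": stop each running container of a
# known type and start a replacement from the patched image.
for container in client.containers.list():
    tags = container.image.tags
    if not tags:
        continue
    image_name = tags[0].split("/")[-1].split(":")[0]
    if image_name in IMAGE_TYPES:
        container.stop()
        client.containers.run(
            f"registry.example.com/{image_name}:patched", detach=True
        )
```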
Also, up until now most PaaSes have been vertically integrated and opaque. I imagine the PaaS of the future will look more like OpenShift: a curated set of components from this stack rather than an opaque, monolithic one.
As a developer, all I care about is that the code I tested locally and that passed on CI runs the same way in production, without further thought or effort.
As an operator, all I care about is that I don't get paged when a bad piece of dev code rampages and chokes the shared resource.
PaaSes have solved this. There's just no reason left to roll your own.
Lots of comments here on the term "modern" and the fact that this stack is container-based.
While there are obviously other ways to skin this cat, for me, I want the following out of my production stack:
* Self-healing and self-managing. If a machine fails, I don't want to have to think about it. The system should just work.
* Supports microservices. Breaking your app into smaller components (regardless of the name) helps you scale your engineering organization by keeping the dev team for each microservice small enough that a two-pizza team can own it.
* Efficient. I want a stack that doesn't require a lot of hand-holding to make sure I'm not wasting a ton of resources.
* Debuggable. Complex applications can be hard to debug. Good strategies for application-specific monitoring and log collection/aggregation can really help provide insight into the stack (a sketch of the monitoring side follows this list).
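For the debuggability point, a minimal sketch of application-specific monitoring using the prometheus_client library, with logs going to stdout for a collector (fluentd, logstash, journald, ...) to ship to the aggregation layer. Metric names and the port are arbitrary examples:

```python
# Sketch: app-level metrics via prometheus_client, logs to stdout.
import logging
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

# Plain stdout logging; the log collector handles shipping/aggregation.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("app")

@LATENCY.time()  # records how long each call takes
def handle_request():
    REQUESTS.inc()
    log.info("handled request")

if __name__ == "__main__":
    start_http_server(8000)  # scrape endpoint for Prometheus
    while True:
        handle_request()
        time.sleep(1)
```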
I am sure this is in our future.
Migrating data between hosts and containers for stateful services, along with data snapshots and file-system isolation, can be solved easily with, for example, Ceph RBDs.
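For example, a minimal sketch using the Ceph Python bindings (python3-rados / python3-rbd); the pool, image, and snapshot names are placeholders, and the conf file path assumes a standard Ceph install:

```python
# Sketch: create and snapshot an RBD volume with the Ceph bindings.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")  # pool name (placeholder)
    try:
        # A 10 GiB volume any host in the cluster can map, which is
        # what lets a stateful container move between hosts.
        rbd.RBD().create(ioctx, "pg-data", 10 * 1024**3)

        # Point-in-time snapshot for the "data snapshots" use case.
        with rbd.Image(ioctx, "pg-data") as image:
            image.create_snap("before-upgrade")
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```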