
Containers Are Not the Future - pjmlp
https://www.linkedin.com/pulse/containers-future-ian-eyberg/
======
ch_123
If we take "Container" to specifically mean OS-level isolation of
applications, then there is a strong likelihood that Containers will be
replaced by lightweight VMs.

To me, the rise of containers has been about packaging software alongside all
its dependencies in a standalone package that is (to varying degrees) isolated
from the underlying host. In that sense, lightweight VMs are an evolution of
the container model we already use.

The author fails to explain how these lightweight VMs would be orchestrated.
Maybe Kubernetes sucks, but if something more secure comes along, is that a
failure of the container approach, or is it an evolution?

All in all, this feels like someone writing a post saying "The aircraft
created by the Wright Brothers was terrible, therefore powered flight was a
failure". That, an attempt to sell something by bashing popular techs and
millennial coders.

~~~
btilly
_The author fails to explain how these lightweight VMs would be orchestrated._

There are two directions that most organizations can go from containers, and
they have opposite answers to this question.

The first direction is back to a monolithic application running in a VM.
Orchestration no longer exists as a problem. What else do we get? Development
is easier. Debugging is easier. We have literally decades of experience
figuring out how to make it work.

Very, very few of the people reading this post are solving problems harder
than ones I was solving decades ago using that kind of stack. Other than the
fact that your resume won't look as good, there is zero reason not to do this.

The other is, as the post's author recommends, to go even further into
isolating the app, down to something even simpler than a container. In which
case you will need a solution for orchestration, because you can no longer
lean on a single running VM - you are managing a whole cluster. Limiting the
attack surface will be good for security.

As a bonus, things that container folks say work, but don't, will actually
work. For example, autoscaling with containers fails hard in adtech because
the time it takes to boot new instances exceeds the window in which your ad is
flooding you with traffic. But if your unikernel boots fast enough,
autoscaling _does_ work.

Now there is a third use case. And that is organizations like Google whose
codebase is so big that they can't run it monolithically on a single machine.
Containers solve that in a distributed environment by letting them run only
some of their code on each machine. There is a hidden cost that RPCs over a
network are inherently an order of magnitude more expensive (in time, latency,
and resources) than a function call inside of a running application. So you
order an order of magnitude more hardware, and containers are the sweet spot
for you.

But that problem is of a kind that very few organizations have. And making it
actually work takes a lot of thought and organization that very few in the
containerization world seem to understand. Certainly if you read a few blog
posts, install Docker and Kubernetes, then start deploying through Helm, I am
willing to bet large sums of money that you are almost certainly making rookie
mistakes.

~~~
mschuster91
> The first direction is back to a monolithic application running in a VM.
> Orchestration no longer exists as a problem

Of course it does; VMware makes a boatload of money with VM orchestration. Or
you set up OpenStack, which is free but a nightmare in itself.

You do _not_ want a hand-rolled KVM virtualization stack. I've been there,
I've seen the horrors.

~~~
btilly
In 2003 I was a developer on a website that served a million pages per hour
using 2 databases, 4 webservers, and associated networking. Most of that
hardware was there for automated failover.

At that scale, orchestration is simply not a challenge. We wrote a deployment
script to roll out code changes and life was good.

2020 hardware is not worse than 2004 hardware. And while that business
eventually failed, it was not due to technology. (And it made a lot of money
before it failed.) As I said, most here are not doing something harder now
than we did then. The same class of solution works.

As the old saying goes, KISS. Know when it works, and when it doesn't. As
another saying goes, YAGNIY (You Ain't Gonna Need It Yet). Deal with the
limits of being monolithic after you encounter them. Don't complicate your
life to tackle problems that don't exist.

~~~
mschuster91
> In 2003 I was a developer on a website that served a million pages per hour
> using 2 databases, 4 webservers, and associated networking. Most of that
> hardware was there for automated failover.

FWIW, Stack Overflow runs on a similarly sized/deployed stack.

But that is because Stack Overflow is fine with their employees running a
cheap, un-complex system that _works_ (I can't remember the last SO outage!),
and their employees are also happy with this.

Others, especially startups? Gotta chase these hipster credentials and use the
free AWS/Azure/GCP credits somehow... and the employees don't want boring old
school stuff, they want the newest hipster crap that's barely alpha tested,
but in production. Utterly ridiculous. Most of them would be fine with three
sub-$10k AMD servers and a bit of work by a seasoned Linux sys-admin. But eww.
Linux management. Not cool/hip.

~~~
heavenlyblue
Did that stack include daily traffic analytics hardware?

~~~
btilly
See [https://nickcraver.com/blog/2016/02/17/stack-overflow-the-ar...](https://nickcraver.com/blog/2016/02/17/stack-overflow-the-architecture-2016-edition/) for an overview of their stack.

They do NOT need a stack of daily traffic analytics hardware. As I said above,
remote RPCs take an order of magnitude more resources than function calls. If
you are used to using various distributed tools to do analytics, you are used
to requiring an order of magnitude more hardware than a simpler solution. And
if there are inefficiencies in your solution, it is easy for a second order of
magnitude to sneak in. Therefore your experience is going to seriously mislead
you about the likely limits of a simpler solution.

Making this concrete, if your job requires a Hadoop cluster of less than 50
machines, odds are good that it could be run just fine on a single machine
with a different technology choice. But if you are fully utilizing hundreds of
machines in your cluster, then you actually need to be distributed. (Though
you could achieve considerable savings by running some of your jobs on a more
efficient architecture. Amazon in particular makes excellent use of this kind
of hybrid architecture in places.)

Google is making excellent decisions for the scale that they operate on. You
just are unlikely to operate at a scale that makes their decisions make much
sense for you.

------
seneca
Pardon the ad hominem, but this is mostly marketing FUD from someone with a
vested interest in you not using Kubernetes. Kubernetes may not be perfect,
but I guarantee you the licenses are a lot cheaper than whatever "NanoVMs"
are.

The tone is also very arrogant and condescending. Not a great way to win mind
share in this industry.

~~~
khawkins
I had to stop reading after the insinuation that Hacker News comment sections
are filled with trash. On the contrary, I find that half of the time the
comment sections are more thoughtful and enlightening than the article posted.
Very regularly I read the comments first to see if the post is worth diving
into.

~~~
halbritt
Yeah, that's where I knew something was wrong. There are some very genuine and
useful conversations that happen in the comments at HN. They aren't
universally awesome, but when I see something linked, I frequently jump
straight to the comments to see the thoughts of the HN users.

------
ChrisSD
I think the three usages of containers often get conflated.

1) Packaging 2) Isolation 3) Secure isolation

Containers have done a good job of packaging and isolating applications. This
has been a boon to development, particularly with testing and deploying
applications in a more or less consistent state and managing those
applications. Note that these benefits have nothing to do with security.

Containers have done a poor job of being secure against malicious actors but
that doesn't mean the isolation is useless. From a security point of view you
just have to treat it the same as if it was running outside the container(s).

One day we'll hopefully get something better from a security POV but that
doesn't mean packaging and isolation are worthless in themselves, even if they
don't add any security. The problem comes when people assume containers add
more security than they do.

~~~
meowface
Exactly, this is what has me so confused. The value I get from containers is
the series of steps that create the container, not the underlying
implementation. I don't really care what substrate the software sits in, be it
a unimicronanofirecrackerkernelVM or LXC or a chroot jail, besides security
implications of isolation escapes. I just like having a deterministic
Dockerfile-like file. If I can keep that, sure, I'll buy into your Containers
But Not But Really Cloud Native For Actual schtick and use it, especially if
it actually is more secure (I'm sure it very likely is). It's the abstraction
and model that makes containers easier to work with, not the implementation.
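
To make the point concrete, here is a minimal sketch of the kind of
deterministic recipe I mean (the image tag, app path, and names are all
hypothetical):

    # The recipe is the value; the runtime substrate underneath is swappable.
    $ cat > Dockerfile <<'EOF'
    # Pin the base image so the build is repeatable
    FROM alpine:3.11
    COPY app /usr/local/bin/app
    ENTRYPOINT ["/usr/local/bin/app"]
    EOF
    $ docker build -t myapp:1.0 .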

I find it very irritating and deceptive that the author wildly equivocates
between containers and Kubernetes in almost every sentence. And it's incredibly
condescending that they suggest people buying the container Kool-Aid are
simply too young to have done development and operations on regular
servers/VMs, "duh!" Or that they're just trying to pad their resumes. No, I've
had plenty of years of experience doing that, and I find containers far
simpler to develop with, deploy, maintain, and update. And I've never used
Kubernetes.

As most others have said, this comes across as extremely patronizing and
demeaning marketing FUD from a company so certain that they're obviously right
and almost everyone else is mind-numbingly wrong that even if that's actually
the case, no one is ever going to believe them, and justifiably so. This is
not how you do marketing, write prose, or communicate in general.

~~~
topspin
> wildly equivocates between containers and Kubernetes

The conflation of containers and K8s is pretty damning. I personally believe
k8s is an early realization of orchestration and will be superseded by better
ones. Building yet another orchestration system is a cottage industry at the
moment and I've noticed that the existence of k8s hasn't impeded it.

And "spin up a linux box" is deeply naive. It is the habit of spinning up
linux instances that leave companies and other institutions with huge
collections of neglected, mis-configured, vulnerable linux VMs spread all over
the planet with 0.1% average utilization.

Developers popularized containers, independent of the cloud or the troubles of
operations. Containers solve tough development and packaging problems and have
value whether the ultimate deployment abstraction is 'docker' or some microVM
system. The most likely use case for whatever microVM product one might
imagine is _running containers_.

~~~
eyberg
* I don't see the conflation of k8s/containers. Many/most? production container users are using kubernetes. Not sure how that is debatable. Mesosphere changed their name to D2IQ and Docker sold off their enterprise arm.

* As for microvms running containers -- in this case Nanos doesn't run Linux so it can't run containers.

------
api
I've been saying for years that containers are nothing more than an ugly hack
to get around the fact that OSes are broken. More specifically modern OSes
still carry 1980s assumptions about multi-tenancy, system administration,
application management, state management, etc.

In the 1980s a "server" was a big expensive special snowflake. Each server had
a "system administrator." The system administrator would "install" software
"on" the machine.

I put all those words in quotes to highlight concepts that are no longer valid.

Boxes are ephemeral now. Systems have more than one administrator, sometimes
belonging to a different organization or department. Applications are not
installed "on" the system in the sense that they root into it and modify its
state. If they do, they become a terrible hassle to manage. Applications
should be ephemeral things that can float freely between machines and store
their state in some kind of distributed or remote data store or database, or
at least can be easily migrated between hosts. Everything is mobile.
Everything is commodity. Everything is liquid.

OSes are just not designed this way, so we have an ugly hack called a
container that basically amounts to "let's tar up images of operating systems
and just treat them like giant static binaries." Onto this is bolted a whole
array of other ugly hacks to make that easy to deal with, but the concept is
fundamentally ugly as hell.

~~~
tarasmatsyk
Those are valid points.

Have you seen or heard of any alternatives? I mean, software is getting more
and more complicated, maybe even more complicated than necessary; however,
running apps natively has its own pitfalls (we've all been there). It's like
choosing a necessary evil, and if there is a better way of doing things we
should evaluate it and talk louder about it.

~~~
api
The only attempts I've seen are advanced package managers for Linux, like the
stuff used by NixOS:

[https://en.wikipedia.org/wiki/NixOS](https://en.wikipedia.org/wiki/NixOS)

Unfortunately these tend to be overly complex and have too steep of a learning
curve, and they don't address _all_ the issues.

The closest is probably apps on mobile, but again that doesn't address all the
issues and comes with a whole set of restrictions that are very problematic
outside small single-user devices.

~~~
ghostwriter
I'd say NixOS probably is not the right entry point if you are looking for a
softer learning experience. NixOS is built on top of the Nixpkgs package
manager, which itself could be thought of as a side effect of Nix - the DSL
and runtime for building software from source in a reproducible manner. If you
start by learning how to employ Nix-the-build-tool in your daily software
development practices, you will be amazed how little you need to learn in
order to be productive with it, because there's essentially just one new
concept - a derivation [1], the rest are just community-driven infrastructure
and utility methods on top of it.

[1] [https://nixos.org/nixos/nix-pills/our-first-derivation.html#...](https://nixos.org/nixos/nix-pills/our-first-derivation.html#idm140737320503024)

------
jeanvalmarc
K8s specifically might not be the future, but containers are the past
(chroot), present (docker), and future. Sure, containers are a hacky shell
script... but
a hacky shell script that _always runs the same way_. I put in my time chasing
down RPM dependencies, and debugging weird behavior because of a different
version of some shared library I previously didn't know existed. I think the
arguments presented here are focusing on implementation details, while the
higher level picture is that containers-the-concept save a TON of developer
(and devops) time and pain.

~~~
asdfman123
I haven't delved that deep into containerland but to me, it seems the main
downside of using k8s is the complexity of setup and maintenance.

I'm not sure what's stopping people from developing tools to make the whole
containerization process invisible to developers who don't need that much
control. I'm sure stuff like that already exists/is being worked on, and I
don't know why it shouldn't work.

~~~
doctor_eval
The thing is that k8s != containers. I’ve been using containers for years, but
I don’t use k8s in any of my current projects (although that might change at
some point)

That’s why I had to stop reading this article. They repeatedly conflate k8s
with containers. From the user's perspective, one is an extraordinarily complex
orchestration framework, and the other is an incredibly simple packaging
system.

------
justnoise
The company I work at started with the idea that we would build an
orchestration system for Unikernels. Very much in the same space as the
NanoVMs folks. I totally applaud their work. We took a different path once our
small team worked with unikernels for a bit (mostly OSv). During the early
days we noticed that the unikernel landscape had technical shortcomings that
would take an incredible amount of engineering effort to overcome, and we
found that convincing users to trade Linux for a (mostly but not totally)
compatible unikernel-based system was an insurmountable hurdle. It was a fun
experiment but, after timeboxing our work and taking stock of the landscape,
we fell back to one of our original sentiments: a stripped-down Linux is
actually a pretty good substitute for a unikernel for most applications
(emphasis on "most").

We ended up pivoting to running a lightweight Linux based on Alpine,
orchestrating everything using Kubernetes and Virtual Kubelet [1]. (Shameless
plug!) Pods are isolated on their own virtual machines that are booted for the
pod; the underlying OS is rock solid and gives users all the great tools,
bells, and whistles you'd expect from a Linux-based system. Fewer surprises,
easier development. We actually open-sourced the system today.

[1] [https://github.com/elotl/kip](https://github.com/elotl/kip)

~~~
wwarner
Great intel from on the ground. The story arc makes total sense. I also have
starry eyes for orchestrated unikernels, but OTOH Linux is so configurable
that scaling down the kernel also seems very reasonable. How do the boot times
compare? To me, the unikernel's chief appeal is that it could potentially
start up in about the same amount of time it takes to load a native binary.
Thanks for sharing.

~~~
justnoise
On bare metal, boot times are drastically different: OSv was sub-second while
our Alpine images were 3-5 seconds depending on what services we needed.
However, we were focused on running our system on cloud VMs, not bare metal.
In the cloud you can't get a VM to start booting in less than 20-30s, so that
order-of-magnitude difference turned into, at most, a 10% difference in boot
times.

In 2017 we measured the restart time of our unikernel images in AWS at 22
seconds; all of that time was spent waiting for Xen (2017... no KVM yet) to
get to the point where we could run our image. So for our use case, the boot
time didn't actually matter; it was far overshadowed by everything else
happening under the hood.

I should say: Unikernels do have their advantages and should be used in areas
that can exploit those advantages: Fast boot, low attack surface, way better
performance for some workloads. We had trouble finding the specialized
customers in the cloud that needed any of a unikernel's positives so badly
that they would take on a unikernel's shortcomings.

------
t0astbread
The tone of this article is very condescending; it contains a lot of absolute
statements, and most of its reasoning is done by linking somewhere else.

This reads like a marketing piece ("CEO at NanoVMs") by someone who is betting
on the frustration of Developers and Operators with the pace at which "DevOps
tech" is moving.

Maybe MicroVMs are the future (whatever that means) but this is not the way to
push it.

------
pbiggar
I wrote a satirical blog post on Docker/k8s in 2015 [1]. At the time, I
actually felt docker was the future, but it became clear over the next few
years that it was too much. The complexity of docker /infra/devops is
overwhelming, and something much much simpler is clearly needed.

I don't know if Dark (my startup trying to solve this) is going to take over,
but it's pretty clear that the current complexity is unsustainable.

This goes double if recessions/layoffs/etc. are happening due to COVID. People
are going to need to spend a lot less on engineering, and the amount being
spent on engineers that maintain infra is an obvious starting point. I would
expect people to switch to much simpler infrastructures, and reduce the
complexities of their stacks by necessity.

[1] [https://circleci.com/blog/its-the-future/](https://circleci.com/blog/its-the-future/)

~~~
taylodl
Be careful. The whole point of Docker/k8s was to _simplify_ operational
complexity - and to that end it's done a great job. Yes k8s has complexity,
but it's a _known and knowable_ complexity vs. the unknown complexities of the
processes of yore.

Does that mean there can't be improvements? Of course not. If you're working
on simplifying things even further then great. But I'm concerned with your
ability to execute if you believe Docker/k8s have made things more complicated
because that signals to me you may not realize where the complexities lie.

Meanwhile I'd love to hear about your ideas to make things less complex.

~~~
myu701
> Yes k8s has complexity, but it's a known and knowable complexity vs. the
> unknown complexities of the processes of yore.

TL;DR -> I see what you are saying, but there are methods to make your
infrastructure consistent and keep the manual drift away that do not require
k8s-level complexity. Both for monoliths and microservices.

Since we're talking about the processes of yore, I will use a simple
application architecture: a web site that clients access via a CNAME to a load
balancer.

The load balancer distributes traffic to web servers, which each make back-end
connections to a database.

This could also be a load balancer to hosts that just run single-host
instances of docker CE/EE - microservices can be done through this setup as
well. If they are designed correctly, they shouldn't "need" to all be in one
cluster; after all, these are durable microservices, not sharded bits of a
monolith, right? :)

You have a few basic tools which, with not a lot of thought, can be used as a
foundational platform for many, many workloads.

Aside from your codebase, you have the following (and/or their competitors):

Hashicorp Terraform

JetBrains TeamCity

Octopus Deploy

With a build server, a deployment platform, and a configuration management
platform, you can get 98% of 'web scale' (all the scale you are really gonna
get + plenty of runway for all but the FAANGs among us) for WAY less daily
cognitive load. This applies to the DevOps team, the developers, the support
cannon fodder (note if this is too true, and you are in leadership - please
help this by empowering your cannon fodder), the DBAs, all of 'em.

1\. Programmers check in their code to source control

2\. TeamCity runs unit/integration/regression tests. Pass? Publish to nuget
feed for Octopus

3\. Terraform sets up automated DB, web server, load balancer infrastructure
for the servers, configured for whatever your company's definition of
'production server' is

4\. Octopus deploy takes the newest build artifact from TC nuget package and
deploys it to those servers.

In the above process, if you need to scale up (a sketch of the first steps
follows the list):

1\. modify the count of web servers in terraform

2\. Terraform apply

3\. wait for TF to complete its provisioning of new servers

4\. wait for Octopus to recognize the new servers as deployment targets for
your code (in this scenario, you write TF such that it will install the
Octopus agent and set deployment target)

5\. Deploy your code to the new servers

6\. Have Terraform reconfigure the load balancer to expect these web servers
to work but start in a disabled state

7\. the load balancer will detect they are now passing health checks, enable
them, and poof, they are serving traffic.
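
Concretely, steps 1-3 might look like this (a sketch; web_server_count is a
hypothetical Terraform variable driving the count on your web server
resource):

    # Bump the instance count and let Terraform converge the infrastructure
    $ terraform plan -var="web_server_count=6" -out=scale.tfplan
    $ terraform apply scale.tfplan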

You have infrastructure as code, you have CICD, you have scalability for
miles, and not a docker registry, helm chart, istio policy, or rancher needed.
Best of all, your helpdesk, intern developers, even non-tech managers can
cognitively follow the process.

Need to patch? Cattle, not pets says:

1\. Terraform: reconfigure load balancer to drain connections on the boxes to
patch

2\. Terraform: reconfigure lb to remove from LB when done

3\. Terraform: destroy the boxes

4\. Terraform: spin up the same number of new boxes. Using Packer or a fresh
AMI, the new boxes will already be patched; TF then sets configs, installs
software, etc., as was done with the original boxes

5\. Octopus: deploy code to all known web servers

6\. TF: add boxes to load balancer

PS: sorry this grew into a wannabe blog post with handwave set to maximum.

------
random3
There's a good reason I start reading articles and papers from the end to the
beginning.

"Conclusion Containers are clearly not the future. You will eventually need to
migrate completely off of containers and kubernetes and we are here to help."

While this does not negate the core idea, it summarizes the motivation and
bias behind the arguments.

------
philipswood
Operating systems abstract resources.*

Unixes got the abstraction boundary not-quite-right, by excluding networking
and filesystems from the concept of process.

Containers are a correction to this.

i.e. not quite as catchy, but:

"Containers: the present is what the past should have been."

* Windows abstract screens, files abstract storage, sockets abstract connections, threads abstract CPUs, etc.

~~~
downerending
Maybe so, but somehow containers aren't really there yet. As a trivial
example, when I 'lsof' a container process (as root), it often cannot "see"
all of the information about the process' open files, apparently because
they're mapped into a different chroot, or because of other cgroups/namespace
issues.

This is one of a number of issues that make containers hard to deal with.

(We also see an issue where containers interact badly with the OOM killer,
leading the entire system to hang because /proc/*/cmdline cannot be read at
all.)

~~~
atombender
The failure of userland tools (such as lsof) to work correctly with containers
is unfortunate, but this is partly an artifact of the Linux kernel's somewhat
ramshackle approach to containerization.

The kernel doesn't have a concept of containers, but rather of kernel
namespaces. For example, it's possible for a process to create a new namespace
that is the same as that of a default process, except with a new file system
namespace — i.e., a chroot.
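
To see the mechanism in isolation, util-linux's unshare can create just a new
mount namespace. A quick sketch, assuming root:

    # Mounts made inside the new mount namespace are invisible to the host
    $ sudo unshare --mount sh -c 'mount -t tmpfs none /mnt && ls /mnt'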

Hopefully we'll see these tools updated to better understand kernel
namespaces. In the meantime, if you need to interact with containers, use the
nsenter [1] tool. For example, it allows you to enter the same PID namespace
as the container, but still use the host file system.

For example, to run lsof against a container's PID 1 (corresponding to host
PID 1234):

    $ nsenter -t 1234 -p lsof -p 1

[1] [http://man7.org/linux/man-pages/man1/nsenter.1.html](http://man7.org/linux/man-pages/man1/nsenter.1.html)
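
You can also inspect namespace membership directly through procfs; if two
processes show different inode numbers for a namespace, they are in different
namespaces:

    $ ls -l /proc/1234/ns/   # the container process from above
    $ ls -l /proc/$$/ns/     # your own shell, for comparison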

------
andrewla
I think this is largely correct, but I would add that POSIX is not the future
either. The model of process isolation and shared resources built into the
POSIX model is just too hard to get right.

I think the post-POSIX world will have to look somewhat like the iOS model of
isolation, something like the Nix level of packaging, and something like the
Docker model of running. A shared filesystem where certain information is held
in certain locations is just a nightmare to manage, but it is currently not
possible to replace it.

~~~
MuffinFlavored
> The model of process isolation and shared resources that is built in to the
> POSIX model is just too hard to get right.

isn't that the basis of like, every modern operating system? Haven't we been
working on getting it right since the '70s?

------
100-xyz
The phrase "Resume Driven Development" from the article is 100% true where I
work.

~~~
Frost1x
It's true at a lot of places.

I know one developer who is obviously planning to go to Amazon, because _every
single application_ he works on immediately starts with looking at which AWS
services he can glue together. I'm at the point where I want to create an AWS
bingo card and, when he's on a conference call, pass it around to see if
anyone wins by the time he describes some solution set.

 _" Well I'm starting with [AWS service here] but it doesn't support [major,
normally functional breaking requirement], so I'll use [this crazy set of JS
NPM packages] to do the work in between"_

Wish people would develop solutions, not resumes.

------
chubot
Containers have uses, but yes if you expect them to be secure, you're in for a
big surprise.

And last time I checked Kubernetes basically washes its hands of the problem,
and maybe makes it worse by adding layers that obscure the security
properties.

I worked on a Kube-like project from 2010-2015 and I was definitely horrified
by the state of containers in the Linux kernel. Security is basically what
made me give up on the project.

~~~
techntoke
You can run containers in userspace, use AppArmor, etc., and you get better
security. Aside from maybe one or two major security exploits, what issues
have you run into?
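
For example, a few standard hardening knobs (the image name is hypothetical;
docker-default is Docker's stock AppArmor profile):

    # Drop all capabilities, forbid privilege escalation, apply AppArmor
    $ docker run --rm \
        --security-opt apparmor=docker-default \
        --security-opt no-new-privileges \
        --cap-drop ALL \
        myapp:latest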

------
skywhopper
I'm not so sure that unikernels are the way forward. And in general, angry
rants about technologies from the CEO of a company entirely devoted to some
competing technology are best taken with a large grain of salt.

Undoubtedly there are risks to having the flexibility that a full OS provides,
but the benefits to troubleshooting, incident-response, resource-sharing, and
just plain operational flexibility are huge. Until there's a good story for
managing unikernel-based microVMs at scale, they aren't going to make a dent
in container-world.

I suspect the path we'll actually see pursued is more along the lines of
Bottlerocket ([https://github.com/bottlerocket-os/bottlerocket](https://github.com/bottlerocket-os/bottlerocket)),
which is basically the Linux kernel plus a containerd API that works with
Kubernetes but doesn't expose nearly as much surface area.

------
markbnj
Nothing in software is ever "the future." How many things change as fast as
software technology? For many of us not concerned with "building our brands"
through "resume driven development", but instead focused on enabling rapid and
reliable production deployments, containers and orchestration have improved
our results. Obviously this will not remain the dominant paradigm forever. The
next thing may be an evolution of it, or it may take a different direction
that leaves Kubernetes a dusty and unused side branch of history. Wouldn't be
the first time. Regardless of how likely that is, or what the drivers of the
eventual result are, I am not sure what the author wishes people had done
instead of employing containers. Should we have just kept on installing
dependencies and services onto VMs while awaiting the one true new thing that
would last forever?

------
bdcravens
Containers are like front-end frameworks: they are great tools to have, and
solve certain problems better than the previous options, but there is a
tendency among advocates to assume they are the only tool in the toolbox.

------
jcastro
> If their own developer evangelists are going to conferences and calling the
> software a "clusterfuck" why on earth would you bring this clusterfuck into
> your organization?

Most software sold to large organizations is a tire fire/clusterfuck; Kris is
just saying up front that complex software is hard.

Everyone knows reasonably complex software tends to be a clusterfuck -- if
you're making purchasing decisions based on a vendor telling you their
software does all these things and it's a perfectly engineered masterpiece
then ...

~~~
kris-nova
thanks jcastro - if anything here is a clusterfuck it was this bro's article -
If you actually watch my talks you will see that I love Kubernetes and just
want to get people ready and excited about new technologies. The word
clusterfuck is just funny to say...

------
tabtab
Security-as-an-afterthought also doomed Java applets and Flash. If Cloudville
doesn't focus heavily on security, it's doomed. History has given you at least
three lessons.

------
jacques_chester
I feel this was a bit scattershot, more about Kubernetes than containers per
se.

Kubernetes isn't directly responsible for the sins of container runtimes and
vice versa. Lightweight VMs solve some of these and I don't think it's
controversial at this point to say so.

I may have reservations about the design of Kubernetes, but I don't need a
self-serving conspiracy amongst the designers to hold those reservations.

Disclosure: I work for VMware, we do a fair amount of Kubernetes stuff,
including lightweight VMs.

~~~
tra3
Are we not able to have polite discourse anymore? Sheesh..

Container orchestration is not the first silver bullet that has grown a large
following that makes money off of it, whether reasonable or not. I do have a
question though. Say you have a bunch of microvms processing your workload or
serving dynamic pages. How are you going to manage your fleet of MicroVMs?
Would you perhaps write a microvm orchestrator of some sort?

Got a huge kick out of the first comment on the linked page:

> You know what you call a "micro-VM that's purpose-built to run one
> application" if you're a Unix person?

> A process.

> And now we've come full circle. Everything old is new again!

~~~
eyberg
For everyone asking about a "microvm orchestrator" \- the implication here is
that the public cloud (where the vast majority of people reading this will be
deploying to) _is_ the orchestrator.

Public clouds like AWS and GCP do all the networking, volume management, and
everything else for you - for free. That's arguably one of the reasons for
using them in the first place. There is no need to put something on top.
That's kinda the whole point here.
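
To make that concrete, a hedged sketch using standard AWS CLI calls (the group
and launch configuration names are hypothetical): the cloud's own autoscaling
primitives already handle placement, health checks, and replacement.

    # EC2 Auto Scaling does the "orchestration": it replaces unhealthy
    # instances and scales between the bounds on its own.
    $ aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name myapp-asg \
        --launch-configuration-name myapp-lc \
        --availability-zones us-east-1a \
        --min-size 2 --max-size 10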

~~~
wmf
People abandoned Packer+Terraform style workflows in favor of containers. Why?
Why change back?

~~~
eyberg
I think there are a few thoughts here.

a) I don't necessarily agree with that statement - witness HashiCorp's recent
raise:

[https://www.sdxcentral.com/articles/news/hashicorp-scores-175m-funding-round-5b-valuation/2020/03/](https://www.sdxcentral.com/articles/news/hashicorp-scores-175m-funding-round-5b-valuation/2020/03/)

b) The vast majority of container users run their workloads on public clouds
like AWS and GCP which run on top of virtual machines to begin with.

I'm not stating you need to use terraform (I definitely don't) - what I'm
saying is that you can use the underlying orchestration provided by the cloud
of choice - be it the networking or the volumes. There's no need to replicate
that on top as container workloads do. When people speak of
kubernetes/container complexity, this is what they are complaining about.

If you look at a lot of the configuration mgmt software out there -
chef/puppet/ansible/salt/terraform/etc. - it all involves configuring a Linux
system, whereas with unikernels it's literally just one application. Instead
of having to plumb your network together or worry whether your workloads are
'stateless' vs 'stateful', you just use the underlying cloud facilities.
That's the magic. It actively removes complexity from provisioning.

------
AgentME
I'm confused why the author keeps associating containers with total
insecurity. Docker's approach to whitelisting access to the filesystem has
been great in my experience. I've managed classic multi-tenant Linux servers
before, and I've pretty much always found lots of examples of services having
read or even write access to each other's files unnecessarily. Docker's strong
whitelisting-based approach to filesystem access seems so much more ideal.
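
A minimal sketch of that whitelist model (the paths and image name are
hypothetical): the container sees nothing of the host filesystem except what
is explicitly mounted in.

    # Read-only root filesystem; only the one mounted host directory is
    # visible, and only /data inside the container is writable.
    $ docker run --rm --read-only \
        -v /srv/myapp/data:/data \
        myapp:latest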

In the video of someone breaking into a Kubernetes cluster, they start off
with restricted kubectl access. Okay, so restricted kubectl access isn't a
strong security boundary. I don't think it's a big deal to a lot of people's
use-cases. I didn't know it was a feature; the lack of a strong feature like
that is only a dealbreaker to certain kinds of setups. It doesn't seem
necessary for sandboxing different services apart from each other; it's for
giving different sysadmins/developers access to deploying to different parts
of the cluster. I'm going to make a wild guess that the issue isn't core to
and unfixable in Kubernetes, because the person in the video shows a way to
plug the issue. Summarizing this as calling Kubernetes security a clusterfuck
is way over the top.

I could believe that the author's NanoVMs project is even better at security
out of the box, but I find the way they present the argument overblown, and
they don't introduce NanoVMs for comparison until the end. So the argument is
doubly misleading, because most people are going to be comparing containers to
the classic plain multi-tenant Linux server model, and Docker and Kubernetes
seem like at least an improvement on that in security.

------
tapoxi
"CEO at NanoVMs"

OK.

~~~
pelasaco
The CEO of Redmeat Corp says that soy isn't the future.

------
king_magic
Eh, hard to care when Docker works well for what I need it to. I'll stick with
containers until something _significantly_ better comes along.

------
thethethethe
Ehh if microvms are the future, we are going to need software to schedule and
orchestrate them across large clusters. Software with this capability happens
to already exist and it’s called Kubernetes.

I feel as though the author is only thinking about Kubernetes as an
application platform. However, you can build platforms on top of Kubernetes
(knative, Istio, etc) which solve many of the issues the author has brought
up.

------
Havoc
Some truths in there...but unfortunately drowned out by the FUD marketing
spiel

------
movedx
I agree with the article entirely. Just the other day/week I was downvoted for
lamenting K8s and pointing out how dumb it is.

When I saw and read about unikernels a few years ago I knew they'd be the
future. I've explained several times in the past: why did we create containers
instead of working hard to make VMs better, lighter, faster? Now the time has
come.

K8s will be dead in 3-4 years.

------
brendangregg
I used to be a die-hard advocate of containers, back when they were Solaris
Zones. They could destroy the performance of Xen VMs (and I wrote posts
explaining the technical reasons why.) But a lot of things changed with HW
hypervisors, with the leaders nowadays being Amazon Nitro and Firecracker.

It's something the article doesn't address. Containers used to be a slam dunk,
so I'm not sure I'd say that we were wrong _at the time_. But HW hypervisors
evolved, so now I can see HW hypervisors like Nitro and Firecracker being
compelling for some* workloads (e.g., to give the end user a dedicated kernel
where they can do anything, including run any BPF program, run PGO kernels,
etc.)

* I'm not sure yet whether it's "some" or "most".

------
jpswade
It’s almost as if there are multiple solutions for multiple problems.

Containers are just a solution to a problem. I’ve been around long enough to
know that containers aren’t novel either.

Novelty is the future. It always is.

------
luord
The biggest reason I have to concern myself with deployments nowadays is to
quickly spin up local projects. So far, nothing has offered me the speed of
docker-compose to achieve that.

Sure, I can take a look at these microvms, but if they're not as convenient to
run multiple projects simultaneously—I'm a freelancer and have my own hobby
projects—then I see no reason to switch.

After all, I used to use vagrant, but spinning out containers is so much
faster.
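
For reference, the convenience in question looks roughly like this (service
names and images are hypothetical): one file, one command, and every project
spins up the same way.

    # docker-compose.yml
    version: "3"
    services:
      web:
        image: myapp:latest
        ports: ["8080:8080"]
      db:
        image: postgres:12

    $ docker-compose up -d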

------
apazzolini
Unrelated to the content of this post, but look at how terrible LinkedIn is:

[https://share.getcloudapp.com/JruWePeK](https://share.getcloudapp.com/JruWePeK)

Scrolling pushes state to your history, and it's so poorly done that hitting
back doesn't even restore previous scroll positions (I'm assuming that was the
intent).

------
detaro
And yet, you could probably with a bit of work swap out the container runtime
for a VM manager in Kubernetes and most users wouldn't even notice.
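
That swap is roughly what Kubernetes' RuntimeClass mechanism already enables.
A sketch (the handler name depends on what the nodes actually run; kata here
stands in for a VM-backed runtime like Kata Containers):

    $ cat <<'EOF' | kubectl apply -f -
    apiVersion: node.k8s.io/v1beta1
    kind: RuntimeClass
    metadata:
      name: lightweight-vm
    handler: kata
    EOF

Pods opt in with runtimeClassName: lightweight-vm and otherwise look
identical, which is exactly why most users wouldn't notice.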

------
ai_ja_nai
This article enthusiastically demonstrates that the CEO of NanoVMs doesn't
understand the audience he is addressing.

Containers were never meant to be "secure" and Kubernetes has nothing to do
with security in general. Application level containerization solves 3
problems: 1- repeatable builds 2- software distribution 3- infrastructure
mutability

Docker solves all of these problems brilliantly: we build software in
controlled environments and the product is an image, which we can share by
means of software catalogs called registries and deploy the same way we play
CDs in a computer, without modifying the host system. Security was never meant
to be a concern; it was never part of the plan, at least no more than for
running any process in an operating system. Containment systems are just a
smart way to launch a process, that's it. They can't provide security
functionality beyond what is already offered by the host OS.
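
The three problems map onto three commands (the registry and image names are
hypothetical):

    $ docker build -t registry.example.com/myapp:1.0 .   # repeatable build
    $ docker push registry.example.com/myapp:1.0         # distribution via a registry
    $ docker run --rm registry.example.com/myapp:1.0     # run without mutating the host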

Is this a fault? Would you blame a hammer for not being able to unscrew a
cork?

The NanoVMs CEO also spends some time making fun of "stripped down systems",
accusing them of recklessly "ripping out components of a system that simply
don't belong in the virtualized world of 2020", namely users, SSH logins and
desktops. Again, there is a big misunderstanding here. Linux processes expect
some facilities to be present at runtime, namely system calls to retrieve
process IDs and user IDs. These facilities were merely retrofitted into
containment systems to keep processes from crashing upon syscall invocations.
As with all retrofitted functionality, their implementation is barely beyond
the dummy level. Therefore, we have PID 1 everywhere, no SSH logins, and dummy
users.

Now, Kubernetes. K8s is just a scheduler, an orchestrator. It has nothing to
do with security; it is mainly concerned with optimal resource usage and
container placement. And it is a fundamental component in the microservice
world; otherwise we would be paralyzed by operations at each deployment. "You
spin up a linux box and call it a day - done". Seriously? Is this really the
whole understanding of the world of operations? I've been in ops for 12 years
and I'd never want to go back to the bad old days of 2008, when most of us
managed a handful of servers and services and, yet, datacenter ops were at one
with deployment ops. That was the very world of manual and error-prone ops.
Orchestration has been a bliss, but it requires a unit of scheduling. A
container, in this case. But it may be a VM as well.

So, the saddest thing here is that NanoVMs could be a perfect replacement for
containers while retaining the benefits of orchestration (as in KubeVirt, for
example), but the CEO prefers to throw the baby out with the bathwater for
marketing hype.

------
pelasaco
The CEO of a VM-oriented startup says that containers are not the future...

There is no way to go back to VM deployment and management. I used to be an
OpenStack wizard, Proxmox ninja, and kvm/vagrant contributor, and I cannot
really imagine giving up my whole container orchestration (CI, deploy, update,
introspection, monitoring, you name it) to go back to VMs. I'm 40, have been
doing sysadmin work since 2002, and have been a computer scientist since 2006.

------
Eric-Guo
Why should I click the LinkedIn URL to read a tech article?

------
arpa
There is no way to secure k8s. Really now. REALLY NOW.

~~~
phtrivier
Care to expand?

~~~
techntoke
[https://github.com/kubernetes/enhancements/commit/c8b1f15359...](https://github.com/kubernetes/enhancements/commit/c8b1f153592c9499d1b44c2a0a88a3dde6726652)

