
Object Storage, Kubernetes and Why You Can't Containerize a Storage Appliance - jtsymonds
https://blog.min.io/high-performance-object-storage-with-kubernetes/
======
georgebarnett
There are some interesting ideas in this blog, but also a bunch of weird logical
assertions which sound like the kind of statements you’d get from somebody
trying to sell their thing.

    In case that wasn’t clear, you can’t containerize an
    appliance. That means you cannot orchestrate an
    appliance. That means you cannot adopt Kubernetes
    if you keep buying appliances.

Why do you need to containerise an appliance? Containerisation is useful for
abstracting hardware, but that presumes the goal is a homogeneous fleet. In
the cloud you don’t care, and in a datacenter you separate the workloads. So
long as there’s an API you can poke to manage the thing, you’re fine.

Likewise, you don’t need to containerise something to have Kubernetes
orchestrate it. You simply need an API object and a controller that can work
with it.
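To make that concrete, here's a minimal sketch of what such an API object could look like: a CustomResourceDefinition registering a (hypothetical) StorageAppliance resource, which an external controller could then watch and reconcile against the appliance's management API. The group, kind, and field names here are invented for illustration, not from any real product.

```yaml
# Registers a custom "StorageAppliance" resource type with the Kubernetes API.
# A controller (running anywhere that can reach both the cluster and the
# appliance) watches objects of this kind and drives the appliance to match.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: storageappliances.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: storageappliances
    singular: storageappliance
    kind: StorageAppliance
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                # Management endpoint of the physical appliance (assumption:
                # the appliance exposes an HTTP management API).
                managementEndpoint:
                  type: string
                # Desired state the controller reconciles toward.
                desiredVolumes:
                  type: integer
```

The appliance itself never runs in a container; only its desired state lives in the cluster, and the controller does the poking.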

Also, the idea that you somehow can’t use Kubernetes if you have storage
appliances is just plain wrong. Perhaps it’s true that you can’t do _some_
things, but the idea that you throw the whole thing out because one workload
doesn’t fit seems very extreme to me (although not surprising from a company
that sells a Kubernetes-based solution).

Why does _everything_ need to be on Kubernetes?

~~~
xchaotic
I still think a consistent (as in ACID) persistence layer is a huge gap in the
early design of the k8s ecosystem, so it still feels like you're wrangling
against the architecture to have something persisted. A good example is Monzo
bank's otherwise excellent architecture; they've built a very impressive
backend it seems:
https://monzo.com/blog/2016/09/19/building-a-modern-bank-backend but they
still had a production outage caused by Kubernetes:
https://community.monzo.com/t/anatomy-of-a-production-kubernetes-outage-presentation/37331

I really see a big conceptual clash between a consistent database view and
k8s's ephemeral-everything approach; I'm not sure if object storage is the
answer to that.

~~~
redis_mlc
The DBA world has almost entirely rejected VMs, containers and even SANs for
master databases, which never seem to quite work for production workloads.

The issues are:

1) performance degradation - the DBA wants 100% of the possible hardware
performance, not less, for masters

2) infrastructure updates affecting the database. k8s has had a lot of
updates, which interfere with the average 400 days of uptime I've seen
achieved on physical hardware

3) increasing query tail latency, measured at 3x or more under VMware for
example

4) vendors disclaiming support (Oracle being the most notable example. They
will only support their VM technologies per their license.)

5) sharing database infrastructure with anybody never turns out well.

6) stalls caused by adding yet another scheduler.

Having said that, if you decide you just don't care about database
performance, go ahead with what you feel is best. But don't whine about poor
availability or queries being slow.

Source: DBA who has worked with all of the above technologies. Just give me a
pair of 2U servers with Fusion IO SSD cards!

