Ask HN: Who is using Kubernetes or Docker in production and how has it been? - chirau
======
penguinlinux
I worked at a company where they started using Kubernetes very early on. We
configured and provisioned it with CoreOS and Ansible, and it was such a mess.

Then I moved to another company where we used ECS; it worked and did the job.
But three months ago I started learning Kubernetes on my own, from provisioning
a cluster to deploying workloads, and I can tell you that Kubernetes is great.
It is not that complicated to install, extend, and support. The Kubernetes
documentation is really all you need. I am using Kubernetes now and find that
things are easier to manage than when we had to provision instances with
Puppet or Chef. Everything is a container, and we can deploy containers or
roll them back with ease.

The problem with Kubernetes is not Kubernetes itself. I think the problem is
that developers need to spend time learning Docker: how to create and package
Docker containers with their applications, and how to set up CI/CD pipelines to
deploy those containers into Kubernetes. The challenge is that many developers
are not using Docker for local development and don't know best practices for
building containers. It will take a couple more years for these issues to get
resolved. Kubernetes is great. Also, I got to preview EKS, and their setup is
a bit of a mess and pricey.
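
To make the packaging side concrete, here is a minimal sketch of a Dockerfile
for a hypothetical Node.js service; the base image, port, and entrypoint are
placeholders for illustration, not a prescription:

```dockerfile
# Hypothetical Node.js service; swap in your own base image and app.
FROM node:10-alpine
WORKDIR /app

# Copy dependency manifests first so this layer is cached between builds.
COPY package.json package-lock.json ./
RUN npm ci --only=production

# Then copy the application code itself.
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

The layer ordering (dependencies before code) is one of those best practices
that pays off immediately in CI/CD build times.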

Learn to provision your own clusters. That would be my advice.

------
caymanjim
Docker is two things. First there are Docker images, which are like deployable
virtual machine images: an application bundled up with a minimal system
environment. They're a powerful tool that is indispensable in a microservices
architecture.

Then there's Docker Swarm, which is a system for deploying, distributing,
running, and connecting Docker images into a cohesive system. It's
conceptually awesome, but in usage it's horrible. The commands are
unintuitive, the configuration is difficult and poorly documented, the logging
sucks, the networking is confusing, and it does evil unexpected things that
punch holes in your firewall without the slightest warning or safety net. It
frequently implodes for no obvious reason, is difficult to debug due to the
horrible logging, fills up your filesystem because it doesn't clean up after
itself, and is a complete nightmare to maintain.

I haven't used Kubernetes yet, but by all accounts it's a superior container
environment. We're about to switch to it at work. We'll still create Docker
images, because that part is great; we'll just deploy those Docker images in
Kubernetes instead. Even sight unseen, it can't possibly be worse than Docker
Swarm.
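
For what it's worth, deploying an existing Docker image into Kubernetes is
roughly a manifest like this; the names and image reference below are made up
for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service                # placeholder name
spec:
  replicas: 3                     # Kubernetes keeps 3 copies running
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        # The same Docker image you would have handed to Swarm.
        image: registry.example.com/my-service:1.0.0
        ports:
        - containerPort: 8080
```

Applying it with `kubectl apply -f deployment.yaml` replaces the
`docker stack deploy` step, and rollbacks are first-class.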

------
tonymke
We've been running a self-hosted Mesos/Marathon cluster since mid/late 2014.

Today it's an absolute pleasure to use in a dev team of 5.

We had to address quite a few things over the years to make it that way,
particularly in the early days; Docker, Mesos, and Marathon's tooling in
particular were quite weak at the time. Some of the big ones: How do you
centralize configuration properly across DCs and environments? How do you do
CI/CD properly? How do you log and collect metrics at scale? How do you proxy
traffic from the edge to containers (pre-k8s) in a way that doesn't involve
bothering an ops guy every time we shuffle things about? What if we need to
churn faster than what Marathon is built for?
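
For comparison, in k8s that edge-proxying question is answered declaratively
with an Ingress object; a rough sketch, with the hostname and service name
made up:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: edge-routes               # placeholder
spec:
  rules:
  - host: api.example.com         # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service      # the Service fronting the containers
            port:
              number: 8080
```

Shuffling containers around no longer involves the ops guy, because the
ingress controller re-resolves the backing pods on its own.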

Once we worked through those, it became pretty pleasant as a developer. I
don't think I would enjoy going back to the pre-container life; things just
take so much longer to get done that way.

We're just doing our first k8s project now. Our existing solutions already
handle a lot of the big bang it would bring into any other team's lives, but
certain things (managed persistent volumes, making stateful containers
practical) mean I will be writing fewer Chef recipes. I definitely appreciate
that.
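
Those managed persistent volumes come down to small declarative claims like
this one; the storage class and size here are illustrative and
cluster-specific:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume               # placeholder
spec:
  accessModes:
  - ReadWriteOnce                 # mounted read-write by a single node
  storageClassName: gp2           # e.g. an AWS EBS-backed class; varies by cluster
  resources:
    requests:
      storage: 20Gi
```

A stateful container references the claim by name, and the cluster handles
provisioning and attaching the underlying disk, which is exactly the part the
Chef recipes used to do.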

------
lacion
I was a very early Kubernetes adopter and moved to production very, very fast;
as of now I have been using k8s in prod for about three years. It was not
easy: we needed to create some tooling around it to give all of engineering a
decent workflow.

But now it is a total delight. The amount of automation is such that the
guesswork for engineers is minimal, and they have everything within reach,
either from a UI or a CLI tool.

I would say the biggest downside to Kubernetes is that at first there appears
to be a lot of magic in deploying it: the official docs recommend tools that
hide all of the details about how k8s works and what each component actually
does, so it took a while to figure it all out. k8s is also still missing some
things for highly available production deployments, like multi-DC and
multi-region support; you will have to do a lot on your own for deployments
like that.

------
rschoultz
We had to move one end-user-facing service off a proprietary, distributed
on-premise solution running in rented/hosted data centers. We set up a number
of criteria for evaluating cloud vendors as well as on-premise and semi-hybrid
solutions. We had been following Kubernetes for some time, and the platform
had matured considerably, so we decided to continue our cloud vendor
evaluations using Kubernetes.

In the end, the Kubernetes solution neutralized the choice of cloud vendor, at
least from a software release and management point of view. From the
perspective of security, availability, latency, and a few other aspects, the
choice of cloud provider became less of an issue: an equal challenge either
way.

We have faced a few minor challenges when using Kubernetes. One is the
knowledge barrier: the problem, as well as the beauty, of Kubernetes is that
it takes on quite a comprehensive view of network management, service
discovery, DNS management, deployments, container orchestration, secrets
management, system administration, and much more. We treat this as an
opportunity for learning more than we see it as a problem, but it means
several roles (in the enterprise) need to come together on a pull request for
a change, rather than working through tickets and side projects. Another
challenge is adopting new features, like RBAC and TLS policies for AWS ELBs,
and generally keeping up with what is new. The mostly excellent documentation
has helped a lot.

Using Kubernetes, we noticed that the latency of using the service was cut to
50-80% of what it had been, depending on the location of the end user. This,
however, we attribute more to the ability to roll out in more regions and to
auto-scale. Of course k8s is not alone in supporting this, but here it really
comes out of the box.
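
That auto-scaling is itself just a small declarative object; a sketch, with
the target name and thresholds made up for illustration:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-hpa            # placeholder
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service              # the Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  # Add replicas when average CPU across pods exceeds this target.
  targetCPUUtilizationPercentage: 70
```

The same manifest works in any region you roll out to, which is what makes
the multi-region story mostly a copy-paste exercise.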

A second effect we noticed was that by integrating releases via Kubernetes, we
reduced the time from being ready in system test to passing our Release
Readiness Check (yes, we are an enterprise), and we now provision user
acceptance test and production environments using about 15% of the manpower of
our previous processes, with releases available in minutes rather than days
(or weeks), and with enhanced visibility and maintainability. One example is
being able to easily tear down or upgrade projects, with the right security
and scale at all times (and no lingering volumes, load balancer pools, or
firewall rules).

For us, Kubernetes has brought a higher predictability of releases and
monitorability of the total solution. We also switched one solution from one
cloud provider to another, and might switch back; for the move we only needed
some relabeling of services and management (referencing) of certificates.

