Kubernetes by Example (kubernetesbyexample.com)
612 points by rbanffy on May 31, 2017 | 132 comments



For anyone interested in Kubernetes: Red Hat's OpenShift is worth taking a look at.

It's upstream Kubernetes + a PaaS framework built on top of it.

It takes care of role-based access control, has a secured Docker registry (preventing applications from pulling each other's source code), integrates with Jenkins, and can automatically build, push and deploy your applications.

Our team started using it and it's great. The documentation is top-notch (probably the best docs I've ever seen in an open source project).

I've seen many teams re-invent the wheel over and over again, when OpenShift already does most of what they need.

Happy to answer questions!

https://www.openshift.org/ (`oc cluster up` and a running Docker is all it takes for a first test)

Docs: https://docs.openshift.org/latest/welcome/index.html

Blog: https://blog.openshift.com/


I just tried to get started with minishift and it doesn't seem to work.

`minishift` seems to be similar to `minikube`. On my mac, running `minikube start` successfully starts a minikube instance in Virtualbox.

Unfortunately `minishift start` seems to sit there and fail after 120 seconds (with xhyve and vbox) because "the docker-machine didn't report an IP address", and it seems that the docker-machine is not even created.

This is a shame, I'd very much like to try out openshift. If anyone else has the same issue here please let me know!

Edit: Someone replied but deleted their comment. I should have run `oc cluster up --create-machine`!


I've been using it on my current project and I love it. This is probably my favorite RH product. For anyone who's interested here's a quick way to get a microservices architecture up and running in OpenShift: https://jhipster.github.io/openshift/.


As someone looking to move their production environment to Docker with Kubernetes, I'm wondering how OpenShift compares to Rancher. I've been looking at Rancher a bit and it seems to provide everything, with a nice UI as well.


I once asked myself the same question. Apparently, OpenShift has stronger security focused on multitenant environments, in addition to an out-of-the-box PaaS. But I would really like to see a good comparison of the two.


Well, their hosted version is "US East (Ohio)" only, and spinning up your own cluster is sometimes overkill. You can start a website with just a managed database and 2 or 3 instances where you install your applications. Yes, it's cool to have click-to-deploy, but everything comes with a cost.


You don't need either Kubernetes or OpenShift if all you need is 2-3 instances. Just write an Ansible playbook.

That being said, I do run a few 2-3 node OpenShift clusters and the additional complexity was well worth it.


Totally agree. Heading to the Melbourne Openstack conference right now actually!


OpenShift is something different from OpenStack.


Yes, I'm aware. Sorry, I was on transport so I wrote a short reply. What I was referring to was open 'cloud' platforms and the presence of Red Hat and their support/contributions.


As someone who stepped out of the devops world for a minute and is now trying to convert my company's infrastructure to use these tools, this is very useful and I'm reading through the whole thing.

However, I'm still confused by how the tools in the ecosystem interact with the capabilities of various cloud providers. That is, we're using DigitalOcean and Docker, and I want to get our infra to a point where I can easily spin up a brand-new staging environment (say, staging-2) with an isolated Postgres node using an attached volume (which, say, also runs Redis), a proxy node, a couple of nodes for application servers, and a couple of nodes for background jobs, all with private networking, secured, with non-root access, plus a quick task to seed the databases.

I just can't seem to find guides that put the whole thing together, only pieces, and I'm lost researching an overwhelming number of tools, from Ansible to Terraform to Kubernetes to Helm, etc.


I'm not a very well-versed DevOps guy, but I have used Ansible, Terraform and Kubernetes.

It should go a bit like this:

- Use Terraform to provision VMs, networking resources and storage from DigitalOcean. Basically, write the scripts that make your whole infrastructure available with a single command.

- Then use Ansible for anything you might want installed on those machines - Kubernetes, security packages, SSH keys for your team.

- Use Kubernetes to then deploy your application on top of your secure and replicable infrastructure.

Each of the steps above should be roughly one shell command. If you are disciplined enough to always provision machines, install packages and deploy your app via config files, this should be very much achievable.
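
For illustration, the Ansible step above might be a playbook roughly like this (just a sketch; the `appservers` inventory group, the package list and the `deploy` user are made-up examples):

    ---
    # Configure the machines Terraform just created.
    - hosts: appservers
      become: true
      tasks:
        - name: Install Docker and basic security packages
          apt:
            name: "{{ item }}"
            state: present
            update_cache: yes
          with_items:
            - docker.io
            - unattended-upgrades
            - fail2ban

        - name: Add the team's SSH keys for the deploy user
          # assumes the deploy user already exists on the machines
          authorized_key:
            user: deploy
            key: "{{ lookup('file', item) }}"
          with_fileglob:
            - keys/*.pub

Run it once after `terraform apply`, and re-run it whenever the playbook changes; since the tasks are idempotent, repeated runs are harmless.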


>- Use Terraform to provision VMs, networking resources and storage from DigitalOcean. Basically, write the scripts that make your whole infrastructure available with a single command.

How does autoscaling work if terraform/ansible are involved before kubernetes can take over?


IMO it's better for autoscaling to bake VM images and use cloud-init or similar to add per-instance config tweaks if necessary. Using Ansible to do runtime configuration is slow and doesn't play well with autoscaling. I do like Ansible, however, and use it for building EC2 AMIs. I even wrote a tool for this purpose at https://github.com/cloudboss/bossimage.
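
As a rough sketch of the per-instance tweak part (user-data for a baked image that already has Docker; the paths, image and port are made up):

    #cloud-config
    # Per-instance configuration applied at boot on top of the baked image.
    write_files:
      - path: /etc/myapp/env
        content: |
          ENVIRONMENT=production
          REGION=us-east-1
    runcmd:
      - systemctl start docker
      - docker run -d --restart=always --env-file /etc/myapp/env -p 80:8080 myorg/myapp:1.0

Everything slow (package installs, pulling common layers) happens at bake time, so instances coming out of the autoscaling group are ready in seconds.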


There are generally two levels of autoscaling involved with kubernetes.

Firstly, kubernetes is able to create multiple instances of your app as load increases and scale them out over all the nodes.

Secondly, the nodes your Kubernetes cluster is running on can also autoscale. With Terraform you can, for example, set up an AWS autoscaling group to automatically increase the size of your cluster as load increases.
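
For the in-cluster half, the usual mechanism is a HorizontalPodAutoscaler; roughly like this (a sketch, assuming a Deployment named `my-app` and a working metrics pipeline):

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-app
    spec:
      scaleTargetRef:
        apiVersion: apps/v1beta1
        kind: Deployment
        name: my-app
      minReplicas: 2
      maxReplicas: 10
      # add pods when average CPU usage across them passes 70%
      targetCPUUtilizationPercentage: 70

The node half then comes from the cloud side, e.g. the autoscaling group (or the cluster autoscaler) adding machines when pods can no longer be scheduled.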


Hey just curious:

With a Dockerfile, is there any point in using Ansible/Puppet/Chef any more? I have used those tools in the past to keep 'how machines are installed' in code, but with Dockerfiles these days I don't get the point anymore. It seems like the future will be having a few Dockerfiles (staging, prod, dev, db, etc.) and then just finding a place to run them (DigitalOcean, Docker Swarm, AWS's container service, or your MacBook).


You still have to bootstrap the instances/droplets/etc with Docker, Kubernetes, etc before Dockerfiles can be used. Unless you're starting with images where those are already baked in, Ansible/Chef/Puppet are good ways to accomplish this on a clean OS.


It's debatable. Ansible/Chef/Puppet themselves need to be bootstrapped as well.

Bash is perfectly fine if the machine can be installed with less than 100 lines of code and immutable infrastructure is being used (so no need for idempotent operations). Just make sure to add ShellCheck to the CI pipeline.


> Ansible/Chef/Puppet themselves need to be bootstrapped as well.

Nope, Ansible just needs SSH access.


Remote hosts need Python, but you can use the raw command functionality to figure out what the remote package manager is and install Python as required.
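
Something along these lines (a sketch, assuming Debian/Ubuntu targets):

    - hosts: new_instances
      gather_facts: false
      pre_tasks:
        # raw runs over plain SSH, so it works before Python exists on the target
        - name: Bootstrap Python
          raw: test -e /usr/bin/python || (apt-get update && apt-get install -y python-minimal)
          changed_when: false
        - name: Gather facts now that Python is available
          setup: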


Not when using AutoScaling Groups. Then the machine will be provisioned on boot, and therefore Ansible would have to be installed on the machine.


No, you'd use Ansible to prepare the new image once, then you'd use that image to scale. Bootstrapping everything each time a VM is created doesn't make much sense.


> Bash is perfectly fine if the machine can be installed with less than 100 lines of code

It is always tough to predict the future; something under 100 lines can easily grow over time. It is nice to start with a tool like Chef/Ansible/Puppet, which can grow with the project more easily than Bash can.


I agree that it's hard to predict the future; that's why I'm not trying to add unnecessary dependencies. 100 lines of code are not hard to rewrite in something else if necessary.


I'm doing just that (for a SaaS).

1. Build an AMI with Packer that contains Ubuntu + Docker + aws-cli.

2. Build and publish docker images for all the services that will run on the instances. (one image per service, not per environment like you suggested)

3. Configure the EC2 instance or AutoScaling Groups with userdata. The userdata usually just contains a `docker run ...` instruction.

I used to have Chef: run Test Kitchen, bake AMIs, cycle all the machines. Now I have a single AMI and do most of the testing on the Docker side and then some on staging. In both cases I am using Terraform to manage the overall infrastructure (and the userdata scripts).

The only issue is how to manage secrets, since userdata is readable by anyone allowed to describe the instance. The best solution that I found is to upload the secrets to S3 + KMS and use an IAM role on the machine to retrieve them. Or use HashiCorp Vault.


For single-provider solutions, I'd say that's debatable. You might be fine with <insert cloud provider here>'s PaaS spin-up services for Kubernetes. The instant you want to span multiple cloud providers, you'll rub up against all the minor differences between their implementations, and I think that's where the value of SaltStack/Ansible/Chef/Puppet lies in modern times. Describe your infrastructure once, then deploy it to any cloud provider.


I moved away from Dockerfiles to Packer to build images for multiple environments. Even if you don't need that, calling Ansible from the Dockerfile is a good choice: its YAML is easier to manage and reuse than ad-hoc scripts.


I don't think it's a good idea to have different Dockerfiles for your staging, dev and prod environments.

The point of using Docker containers is that you get to deploy your dev environment to production as-is.


No, you don't need those tools anymore.

Cook a base VM image that has the Docker engine and anything else you need. Do this once, and spin up the VMs you need after that. This is what images are for, and they're far easier, faster, more reliable and more secure than dealing with all this other junk.


How do you do the cooking of the base VM image?


All the major clouds already have API commands to convert a VM into a reusable image - so a simple startup script + this API command is all you need. Even better since you can script all this too.

If anything changes, update the startup script, then run this "make new base image" script and you're all set. Update VMs as needed. It really doesn't get much more complicated than that, and using tools like Ansible just to set up your VMs so you can run Docker containers later is unnecessary.


You make Ansible sound like an ICBM launching system. In practice, you do a simple "apt install ansible" once (on your machine) and then you write a simple startup script in it. There's nothing extra to be "unnecessary", it's a 1:1 replacement.


Why do you need that then when you can just use the startup script directly with the VM?

Unnecessary = not needed to function. Replacing functionality is just preference, which is fine, but specifically not necessary.


> Why do you need that then when you can just use the startup script directly with the VM?

My point is that's a loaded question. You could just as easily ask why do you need bash in every VM when you can just have Ansible in a single machine.


Check out Packer (https://www.packer.io/intro/)


Could you point to any articles that go into a bit more depth on this provisioning-configuration-deployment flow?


You can use Ansible for step one, too. And I think I'd recommend it highly over Terraform for production provisioning.


> I think I'd recommend it highly over Terraform for production provisioning.

I'm curious why you recommend this? I'd argue the opposite. With the declarative nature of Terraform, you can know exactly which part of your infrastructure is going to change before you actually apply it. With Ansible, this information is a bit more opaque. You basically just run the playbook and pray it goes as planned and things haven't drifted too much.


You can run Ansible in check mode and see what steps will make changes.


If you (or someone on your team) truly enjoy assembling DIY solutions using the best of what OSS has to offer, the possibilities for how to accomplish this are endless.

But for a single-command fire-and-forget commercial solution, take a look at http://gravitational.com/telekube

It allows you to, basically, define your own "Kubernetes flavor" with pre-installed components like a CI/CD pipeline, databases, Redis, etc., call it "developer cluster foo", and have X replicas of it running on any infrastructure. Telekube runs a single process per box and manages dozens of parts (kube services, Docker, systemd, Terraform) automatically for you, does atomic in-place version upgrades of the entire stack, etc. It is basically a "Kubernetes appliance", and that's how many of our customers use it: shipped in a box into on-premise environments.

Yes, yes, it was a shameless plug, since I work at Gravitational. We believe that DevOps folks have better things to do than spend any time provisioning Kubernetes and keeping it (and the dozen components it now consists of) alive.


Interesting!

Do you document what security options you use with Telekube anywhere?

One of the things I've noticed with the wide variety of k8s distros and installers out there is that it's not always easy to establish what they've chosen to do in that regard, and since some of the options, like RBAC, are newer and have some operational overhead, not every distro uses them.


Yes, the security portion of our clusters is handled by Teleport, which is OSS and its architecture is well-documented here: http://gravitational.com/teleport/docs/2.0/architecture/

Teleport maps your corporate groups/roles to both SSH and Kubernetes.


Thanks for that. I was perhaps thinking more of how you configure Kubernetes itself. For example: do you use RBAC, do you configure kubelet authentication/authorization, what certificates does the API server use for etcd authentication? That kind of thing :) If there are manifests/pod specs for the components, those would probably have the answers...


Amusingly, much of DigitalOcean's backend runs on Kubernetes. The way they handle k8s secrets with HashiCorp Vault is pretty nice, actually:

https://blog.digitalocean.com/vault-and-kubernetes/


You might enjoy this guide[1] to get started. It discusses how to run a cluster on virtually any cloud provider and offers fully automated provisioning using Terraform[2].

[1] https://github.com/hobby-kube/guide [2] https://github.com/hobby-kube/provisioning


Do you need to run staging-2 in a completely new cluster? If not, maybe namespaces are what you need.


I would very strongly recommend this approach. It's worked quite well for me.

There are so many ways to split out application environments in k8s (namespaces being one, generated prefixes in a helm template being another) that you should really try to use your cluster as a single compute resource. Maybe segregate for prod if you're not confident in your ability to lock it down, but everything else should just be creating/scaling pods, then adding nodes to the cluster when it runs out of compute.
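
Concretely, an environment is just one small object plus whatever you deploy into it; something like this (applied with `kubectl apply -f`) is enough to carve out staging-2:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: staging-2
      labels:
        environment: staging

Per-namespace secrets, config and (optionally) resource quotas keep it isolated from everything else running in the same cluster.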


Agreed! The documentation is sparse, mostly blog posts, and frequently out of date. I am finding some success with Terraform-style Kubernetes cluster setups; there was a good one I came across for DigitalOcean that helped me when trying to provision on Scaleway.


I've been playing with kubernetes for the past month and I'm just now deciding not to go with it for our new production systems, mostly because I just don't understand it well enough to know how to fix it if it goes wrong.

There are a lot of cool things about Kubernetes (e.g. I had an automated SSL cert fetcher for Let's Encrypt that applied to any SSL ingress I added), but it still does some weird things sometimes (like constantly trying to schedule pods on instances without enough spare memory, and then killing other pods because of that; I'm fairly certain that's not supposed to happen).

I think I'll revisit it next year and hope that it's a bit easier to get into. I'm especially hopeful about using it with Spinnaker and some sort of CI, though I couldn't find anything lighter weight than Jenkins that was straightforward to get set up on it.


> constantly trying to schedule pods on instances without enough spare memory

I assume you're explicitly declaring the memory requirements for the application?

EDIT: I ask because the examples I usually read just throw deployments at a cluster; it turns out the more explicit you are about the app's requirements (CPU/mem, etc.), the better a job the scheduler can do. I realize this sounds like advanced common sense.
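
For reference, "being explicit" just means filling in the resources block on each container, e.g. (the numbers are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-app
    spec:
      containers:
        - name: my-app
          image: myorg/my-app:1.0
          resources:
            requests:        # what the scheduler reserves on a node
              cpu: 250m
              memory: 256Mi
            limits:          # hard cap enforced by the kubelet/cgroups
              cpu: 500m
              memory: 512Mi

With requests set, the scheduler will only place the pod on a node that actually has that much memory free, which is exactly what was going wrong above.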


Aah yeah that might have helped. I hadn't been setting my memory requirements as I didn't technically know what they were :D

That's good to know for next time!


There's a great example of why setting limits is a good thing in one of Kelsey Hightower's talks; for some reason your initial comment reminded me of this advice: https://youtu.be/HlAXp0-M6SY?t=21m51s


Requests/limits settings also decide what "quality of service" class a pod will use.

Setting requests and limits to the same values puts it in "Guaranteed", whereas setting requests and limits to different values gives you "Burstable". The default QoS class is "BestEffort", which makes the pod expendable. (This isn't the best-documented part of Kubernetes.) (Edit: You can see the QoS class of a pod with "kubectl describe pod <name> | grep QoS".)

QoS classes are important for scheduling. When Kubernetes needs to evict pods, it will pick burstable and best-effort pods first.

Note that memory limits sometimes don't work well with apps that use GC. I have had particularly disappointing experiences with Go apps. Go's GC is a bit odd in that it reserves a huge chunk of virtual memory (which is why Go apps, if you look at "vsz" only, often seem to take more RAM than expected), and is not very aggressive about releasing it. I have a Go app that uses only a few megs of actual memory, but because of the GC it will allocate half a gig and get OOM-killed by the kernel before the GC is able to collect.


You have confused BestEffort with Burstable. Agreed about the docs.


Gah, of course I did. Thanks, edited.


Jenkins and Spinnaker go hand-in-hand. The Jenkins stage is a first class citizen in a Spinnaker pipeline.

Spinnaker + K8s is an incredible combo. We actually ran it inside Kubernetes, and it was able to upgrade itself. Pretty awesome stuff.


I have the exact same issue with it. It just tries to do everything and it's too much to wrestle with when you just want the basics.


I would recommend updating this to describe ReplicaSets[1] over ReplicationControllers. They are very similar and serve the same purpose, but the huge difference is that ReplicaSets have selector support -- meaning that you can require N replicas of pods that have X selector (rather than requiring N replicas of pods with the exact same spec).

[1]: https://kubernetes.io/docs/concepts/workloads/controllers/re...
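
For comparison, a minimal ReplicaSet might look like this (a sketch; the set-based `matchExpressions` selector is the part an RC can't express):

    apiVersion: extensions/v1beta1
    kind: ReplicaSet
    metadata:
      name: frontend
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: frontend
        matchExpressions:
          - { key: tier, operator: In, values: [web, api] }
      template:
        metadata:
          labels:
            app: frontend
            tier: web
        spec:
          containers:
            - name: frontend
              image: nginx:1.13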


I tend to agree, as ReplicaSets are, as the k8s docs describe, the 'next-generation replication controller', though I couldn't find a place where they state that RCs are in fact deprecated. The section on service discovery constructs an example using RCs, and I'm guessing other sections do as well, so the change may be non-trivial.


RS is still beta though, so while they're the future, RCs aren't going anywhere for a long time. In general use RS (or even better, deployments) and don't worry about it.


I host several smallish PHP/HTML sites for family, friends, and a few clients. Is Kubernetes a viable solution? These sites get very little traffic.

What I am looking for is:

- Ability to easily deploy containers

- Ability to route by URL

- Ability to swap out containers without affecting others

Does Kubernetes solve this problem for me? Is there a better option?


I am using Dokku (https://github.com/dokku/dokku) for exactly that and love it. If you've ever used Heroku, you'll feel right at home. When I looked into Kubernetes for my purposes, it seemed to be quite the overkill.


I just started using Dokku too, and I love it, partly because I was using Heroku before finding out about Dokku.


I think it'll be overkill for smaller sites, but it would work. AWS ECS is a bit simpler IMO. You may also want to look at Dokku, which gives you essentially Heroku in a box, and you can deploy either classic Heroku-style or Dockerfiles. You could run it on a very small Digital Ocean droplet.


I second Dreamhost[0] for PHP/HTML.

I would also recommend you check out now[1] for deploying containers, plus its ability to route by path[2].

[0]: https://www.dreamhost.com/

[1]: https://zeit.co/now

[2]: https://zeit.co/blog/path-alias


I am also interested in knowing this.

Also, can Kubernetes be used to deploy to a single VPS instance (e.g. a $10 DigitalOcean droplet) or is it only for a multi-node system like GKE?


You can use GKE with a single node if it's big enough (it won't let you do it with the absolute smallest instance). I do it because I just like the k8s model for deploying apps. If any of my sideprojects deployed there ever start to matter I'll scale it up.


Whilst you could deploy k8s to a single node, it's massive overkill for that kind of solution.

The benefit I see in it is orchestration of large clusters of containers. The offset against that benefit is that there's quite a lot of complexity involved.

For single node solutions, if you want to use containers at all, I'd just use docker.


I agree with raesene9 here. Docker plus a proxy (HAProxy, Traefik, and docker-flow-proxy come to mind) would do the trick.


Use a shared host like Dreamhost. It is much cheaper and much less time consuming than Kubernetes.


It might be helpful to have some text that explains the bare list of jargon on the front page. Is the idea that readers should already know the lingo? I have my own thoughts about what's going on here.


Right. I don't even know what Kubernetes is; a link to their homepage[1] might have been nice.

[1] https://kubernetes.io/


I feel like if it were all on one page, I would at least skim through the examples, but I don't want to click through to each one.


We've been exploring OpenShift and Minishift for a project for the last few weeks, and we've come away very impressed.

We especially like the interface they built that ties everything together.


So, trying to get a clear handle on just what Kubernetes is: it's basically supervisord but for containers, across a pool of servers? With the addition of:

- A complete application e.g. wordpress + mysql containers, can be represented as pods

- Pods can be "scheduled", e.g. auto-scaled, across "nodes" (i.e. servers), with load balancing, etc.

Is that right?


More generally, Kubernetes is an abstraction layer between your application and hardware/cloud that allows you to declaratively define what your app is, how it runs, where it runs and what dependencies it needs. All in a standardized format that allows versioning and easy modification.

As opposed to the old-school method of having to describe all that in words to an ops team (or not and just expecting them to figure it out): "This is a Java app. It needs JDK 1.6 and at least 1GB of RAM. It expects to write logs at /var/log/foo.log. It needs a MySQL database and Redis and Elasticsearch at configured hostnames and ports. We need to run at least 6 instances horizontally scaled behind a load balancer."
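
As a rough illustration, that same paragraph expressed as a Deployment might look something like this (all names, images and hostnames are invented):

    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: foo-service
    spec:
      replicas: 6                      # "at least 6 instances horizontally scaled"
      template:
        metadata:
          labels:
            app: foo-service
        spec:
          containers:
            - name: foo-service
              image: myorg/foo-service:1.4.2   # built on a JDK base image
              resources:
                requests:
                  memory: 1Gi          # "at least 1GB of RAM"
              env:                     # "MySQL, Redis and Elasticsearch at configured hostnames"
                - name: MYSQL_HOST
                  value: mysql.default.svc.cluster.local
                - name: REDIS_HOST
                  value: redis.default.svc.cluster.local
                - name: ELASTICSEARCH_HOST
                  value: elasticsearch.default.svc.cluster.local
              volumeMounts:
                - name: logs
                  mountPath: /var/log   # "expects to write logs at /var/log/foo.log"
          volumes:
            - name: logs
              emptyDir: {}

A Service in front of it covers the "behind a load balancer" part.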


"This is a Java app. It needs JDK 1.6 and at least 1GB of RAM. It expects to write logs at /var/log/foo.log. It needs a MySQL database and Redis and Elasticsearch at configured hostnames and ports...

Wow, brittle. Can't Java use environment variables? And since when does development tell ops what hostnames a machine should have, not to mention how many instances to run?

Sorry, don't mean to jump all over you, but your words apparently jumped all over me. :)


> since when does development tell ops [deployment details]?

Not to be cheeky, but since "devops"!

Docker, Kubernetes, etc... all of these things live at the point of intersection between developers and operations. Some of the config artifacts (e.g. Dockerfiles) typically live in source control, which is usually the domain of developers. But the values with which they're populated are typically set by operations.

It is indeed a dance that the two groups have to work out among themselves, and different organizations will handle it differently. Ultimately, I think you find that it really isn't FUNDAMENTALLY different from the dance that they already do in the old school. There will always have to be a handshake, where dynamic values are stored someplace and code knows to point to that place. How you handle that handshake is ultimately a human process thing.


I'm familiar, thanks. Keeping both ops and app in the same repo is one thing, but combining parts of them in code is a pretty amateur Separation of Concerns flaw and will inevitably break. I'm not sure what you're defending here.


The app can still use environment variables and be completely independent (12 factor), but the Dockerfile/Kubernetes config still needs to provide those things, so there is a clear distinction between ops and dev, they're just more mingled than previously.


I think maybe you didn't read the top-level comment in this branch of the thread?


That's a pretty good explanation! Thanks!



Yeah, pretty much.

The point is it all comes as part of the framework though, so hopefully building for kubernetes will be consistent.

Also, whilst you could put all of WordPress in one pod (PHP, MySQL), it's probably more typical to put nginx + php-fpm inside containers next to each other (a pod) rather than MySQL. You'd probably run MySQL as another service; at least that's how I typically do it.
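
For example, a sketch of the nginx + php-fpm pairing (images and paths are just examples; in practice you'd wrap this in a Deployment):

    apiVersion: v1
    kind: Pod
    metadata:
      name: wordpress
      labels:
        app: wordpress
    spec:
      containers:
        - name: nginx
          image: nginx:1.13
          ports:
            - containerPort: 80
          volumeMounts:
            - name: wordpress-files
              mountPath: /var/www/html
        - name: php-fpm
          image: php:7.1-fpm
          volumeMounts:
            - name: wordpress-files
              mountPath: /var/www/html
      volumes:
        - name: wordpress-files        # shared document root for both containers
          emptyDir: {}

Since containers in a pod share a network namespace, nginx reaches php-fpm on localhost:9000; MySQL lives in its own Deployment/Service and is reached by its service name.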


Basically, yes. It's a scheduler, a networking layer (cross-communication across nodes if pods that need to talk to each other are scheduled on different nodes), and some storage management (so you define a volume and tell MySQL to persist there etc etc). And an optional LB/router for ingress traffic to the pods.

That, and loads of very verbose YAML to describe things.


>A complete application e.g. wordpress + mysql containers, can be represented as pods

That's probably not a good idea, since:

1. A pod runs on a single machine.

2. A pod is the basic scaling unit, and you probably don't want to scale WordPress and MySQL together.

> its basically supervisord but for containers, across a pool of servers?

That's just the ReplicationController part of Kubernetes.


Right, so in the WordPress example you would have:

- a web app pod for WordPress, which can be scheduled/load-balanced across several nodes

- a MySQL master pod, and possibly MySQL slave pods, which would be scheduled/load-balanced

I REALLY want to get into all this but this seems like SO MUCH MORE WORK than just running them traditionally...


> this seems like SO MUCH MORE WORK than just running them traditionally

Here's an analogy: let's say you've written a command line tool and you want to install it. The simple answer is to just copy the binary to /usr/bin. Does it need config files? Library dependencies? That's fine, copy them to the appropriate directories. But then you, as the sysadmin, have the responsibility of remembering exactly what the state of the system is. If you need to make changes later on, you need to spend time figuring out exactly what needs to be updated, and then manually apply those changes without making mistakes.

The cleaner solution is to use a package manager like rpm or apt-get. You have to spend some time up front to package your code, including explicitly defining things like package dependencies and clean-up scripts. But once that's done, you can let the package manager do the work of figuring out how to get the system into the desired overall state. You're doing more work in the short term, but it pays dividends in terms of maintainability.


Another great explanation! So it's kind of like a package manager for complete systems (like my WordPress example above).

Neat! Think I'm getting the appeal now


TBH, the package-manager-for-applications thing that was described is just Docker; you don't need Kubernetes for that.

Kubernetes makes more sense if you're running big clusters of systems.


> I REALLY want to get into all this but this seems like SO MUCH MORE WORK than just running them traditionally...

It is... Unless you have hundreds of servers - and then it pays off handsomely.


Absolutely! I can definitely see the payoff there. It just seems that everyone is packaging projects as Docker images now, and it's a bit confusing that orchestration seems to be a bit of an afterthought in the Docker world, when I'm sure most applications will require a database.

Still, I'll keep on plugging away at kubernetes; I actually quite like the design of the system, very tidy!


Take a look at the Helm package manager. Once you have Kubernetes running you can very easily install MySQL (even more complex high-availability clusters). If you can put your application in a Docker container, you are almost there.

If you don't manage Kubernetes yourself this is actually not that much more work. It is sort of a shift in thinking, and you have to retrain, but once you do that it should actually be faster.

I am doing this right now, and the paradigm shift is just really big; that's why it seems to take so much longer.


Thanks for this. As much interest as there is in kubernetes right now, it's surprising how little good documentation there is.


I just bought Kubernetes in Action (MEAP) last week. I would highly recommend this book. It clearly spells out what Kubernetes is and how Docker containers are just units within a distributed system. Compared to trying to figure out how to properly do networking with Docker Compose, Kubernetes is clearly thought out and much easier to use and reason about. The final version comes out in August.


I was going to ask for some stock market tips since KIA isn't supposed to be published until August 2017, but I see that MEAP means there is an early access version of it that gives you incremental updates. Thank you for the pointer.


I'd second the recommendation for KIA. I've been following along with the MEAP and it's being kept well up to date; the latest version, which came out yesterday, covers features from 1.6 like RBAC.

One of the problems with books about things like Kubernetes which move quickly is that they can be well out of date before they hit final release.


Thanks for the recommendation. Will take a look at KIA.


I've found Docker Swarm Mode to be refreshingly simple after playing with Kubernetes. Am I crazy to have it in production?


From what I've seen of both, Docker swarm mode is smaller in scope and simpler than Kubernetes.

I think some setups will suit Swarm better, whilst others will benefit from the richness of what k8s provides.


No, it's a great solution both from a UX and technical perspective. It's easy to wrap your head around the manager, workers, and how tasks are scheduled from start to finish. Plus, it's secure. Would recommend.


A replication controller (RC) is a supervisor for long-running pods.

A deployment is a supervisor for pods and replica sets.

So what's the difference between these supervisors?


They basically serve the same purpose, but Deployments move more of the "state" into the Kubernetes API controller.

For example, the standard way to do a rolling update using RCs is to create a new RC that is responsible for the updated pods, and then gradually increase/decrease the replica counts to reach the desired state. This is conceptually simple, but the downside is that the Kubernetes API doesn't know anything about the relationship between the old and new controllers. All the responsibility is pushed to the client.

With Deployments, both the old and new configurations are first-class objects. So you can view the history of previous configurations, and you can query the progress of a rolling update. You also get better-defined behavior when multiple clients are trying to concurrently make changes, because Kubernetes can arbitrate between them at a higher level.
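
For example, the rolling-update behaviour that used to be a client-side kubectl dance is just declared on the Deployment itself (a sketch):

    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 4
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1   # at most one pod down during the rollout
          maxSurge: 1         # at most one extra pod above the desired count
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: myorg/my-app:2.0

Changing the image and re-applying triggers the rollout server-side, and `kubectl rollout history` / `kubectl rollout undo` expose the history and rollback described above.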


From https://kubernetes.io/docs/concepts/workloads/controllers/re...

> A ReplicaSet ensures that a specified number of pod “replicas” are running at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don’t require updates at all.


A Pod manages a set of containers on a node.

A ReplicaSet (RCs are old and deprecated) manages a set of Pods. It has a template and a target replica count.

A Deployment manages ReplicaSets across versions/upgrades. When you change a Deployment, it creates a new ReplicaSet and does a rolling upgrade from the old version to the new one by tweaking the replica counts in the old and new ReplicaSets.


Deployments are a newer concept. The evolution was: ReplicationControllers -> ReplicaSets -> Deployments.

I don't think the differences are well-documented, though.


For figuring out how to write pod and service specs - I really like looking at the Helm Charts source code:

https://github.com/kubernetes/charts/tree/master/stable

(Helm aims to be a package manager for kubernetes, and its packages are called Charts)


It'd be great to read something about how people are handling logging in production with K8S+ELK for example.


I don't know how much it qualifies as "reading," but we've experienced great success using a DaemonSet of https://github.com/rtoma/logspout-redis-logstash#readme

Because Kubernetes is great about applying docker labels, we get the k8s container name, Pod name, Pod namespace, UID, and then the normal docker metadata provided by logspout-redis-logstash. Then use the normal, and essential IMHO, multi-line codec on the logstash side of things: https://www.elastic.co/guide/en/logstash/5.4/plugins-codecs-...

We have a few ``if [docker][image] =~ "foo"`` statements to snowflake the types of multiline split patterns, but all in all it just works.

The next level up the hierarchy of needs is to also grab the systemd journal content from the Node itself and send that along, too, but it has not yet become a priority. Not to mention the likely substantial increase in store size once the much, much chattier kubelet traffic arrives in ES.
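
For anyone curious, the DaemonSet side is roughly this shape (a sketch only; the namespace is an assumption, the image name is assumed to match the repo, and the adapter's Redis/route configuration is omitted, see the project README for that):

    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: logspout
      namespace: logging
    spec:
      template:
        metadata:
          labels:
            app: logspout
        spec:
          containers:
            - name: logspout
              image: rtoma/logspout-redis-logstash
              volumeMounts:
                - name: docker-socket
                  mountPath: /var/run/docker.sock
          volumes:
            - name: docker-socket
              hostPath:
                path: /var/run/docker.sock   # logspout reads container logs from the Docker socket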


This is great! Thanks, OpenShift team.


I've managed to get Kubernetes up and running on Azure, but beyond that I'm stuck on what the next steps are.

Does anyone have resources they'd like to share so I can pick up the basic requirements to get a Docker stack up?

Up to now I've used Docker Compose.


We just recently decided to go with OpenShift Origin after a long debate and several POCs. We're currently using Mesos/Marathon with a bunch of custom deployment scripts, which are terrible: deployment issues, etc.


Anyone have experience with AWS ECS and Kubernetes? How do they compare?


I briefly evaluated ECS more than a year ago, and it was lacking even more features than Kubernetes was. It has improved quite a bit since then, but Kubernetes has added even more functionality in the meantime. In a nutshell, if you run more than a few services, Kubernetes is probably the better choice.


First, please consider whether you actually need any of the Kubernetes stuff. The odds are that you don't. A huge number of people are switching to k8s just because it's the cool thing to do, without understanding any of the implications. Most companies have to undergo major software rearchitectures and renovations to make good use of the featureset that Kubernetes promises.

Second, please don't run a database in it, omg. See point one. If there were just one application that is not reasonable to run in a container, it would be a database.

Third, yes, this seems to be the new way things happen in the software world. I'm worried about what tools we'll need to develop to overcome the cacophony of noise and half-solutions that is incumbent in the GitHub era.


You've got some good points about this but you're bringing it up repeatedly on HN in ways that are getting tedious (i.e. people have complained to us, and if one is complaining, many are bored). You're also being pretty cavalier with the flamebait (e.g. https://news.ycombinator.com/item?id=14453780). We're trying to avoid that sort of argument on HN, so could you please not?

It's true of course that bushy-tailed engineers chase shiny new things unnecessarily and argue about it defensively. That's real, but it's also perennial—all us curmudgeons did the same thing in our day. This is cycle-of-life stuff. Such discussions quickly converge to the same old thing unless we're careful to avoid that, and the burden of avoiding it falls more on the senior members.

We detached this subthread from https://news.ycombinator.com/item?id=14453544 and marked it off-topic.


I appreciate the explanation, and as multiple obviously-irritated k8s contributors have come in to counteract my other comments, I admit I have been curious how long HN would be willing to withstand the pressure from beings so highly-regarded and attached to such prestigious institutions. Surely, it is not wise to irritate powerful persons, especially not with semi-credible lambasts that affect an important centerpiece of their corporate strategy (k8s is part of the strategy to transition people to Google Cloud as opposed to AWS).

This is also good supporting evidence for the amount of SV heterodoxy that HN-the-institution is willing to stomach (that topic was broached last time we crossed paths).

Are perennial, cycle-of-life topics not expected to regularly appear on discussion platforms? I find it grating to see "k8s this, k8s that" all the time. Should I register a complaint as well, so that these threads can be killed next time they come up, on the theory that "many are bored" by the topic, based on the existence of my complaint?

Specifically, which things are not allowed to be mentioned in the future? Was it:

point 1, "Please consider whether you need k8s before you use it";

point 2, "Please don't run a database inside Kubernetes";

or point 3, "Yes, the software world is hardly manageable due to the unintended effects exposed by GitHub and friends"?

Which of these is too tedious and boring to express on HN?

I definitely would understand if only the flamebait comment were off-topic'd, as I know that was close to the edge. I'm more worried by the fact that counterpoints that are apparently "boring" or "tedious" justify censorship.


It is good to have healthy skepticism for new things. But I don't think k8s is just the "cool thing to do". Here are some reasons why running an enterprise on it is better than on bare VMs:

1. Containers are a better application-level abstraction than VMs. Why, as someone deploying an app, should I care about machine details? I want my application to be deployed with n replicas and x resources, and to be discoverable.

2. Containers make your application self documenting on its dependencies. Yes it is some work to get your stuff containerized but we haven't come across anything yet that was all that difficult.

3. Running a container on its own on bare Docker does not give you discovery, multiple nodes, restart policies or many other things k8s does.

4. Much of dev ops roll your own that is going on is solving problems that nearly everyone has and why not do it in a proven environment that does nearly all of the work for you.

I don't think that putting apps in containers, shipping them with a resource request/service descriptor, and then exposing them via service discovery is going anywhere. Whether it is k8s and Docker or something else, once you are using it, it is clear that this is really the abstraction deployments should be at. I'd say the odds are that you do need the k8s stuff for any enterprise/business deployment of services. I'd be surprised if, within a few years, people are doing it any other way.


1. This is called "shared hosting". I sympathize with that desire. k8s is a way to roll your own shared host. Disinterested application developers who resent having to look at a command line do really like k8s because they treat it like a shared host platform, but they fail to appreciate the middle layer of complication that it entails, or the stickier elements of its architecture. A lot of ignorant app developers don't realize that you don't really get most of k8s's promises unless your application is already specifically designed to handle a transient, disappear-at-a-moment's-notice lifecycle, and most existing software is not built around these assumptions. It can't all be shoehorned in at the infrastructure level. Database servers are a fine example of an application architecture that is antithetical to this.

2. This is called "static linking". I also sympathize with static linking, but again this comes with implications that are not well understood by many of its advocates. In 4 years, I expect that some fad will emerge to repopularize dynamic linking under some new name, like "securable containerized orchestrated hypothalmized applicationators". Everyone will say "Dude, I can't imagine nobody using securable containerized orchestrated hypothamlmized applicatorness in 4 years". Rinse and repeat.

3. Yes, Docker sucks. We agree.

4. Sure, but you don't need to roll your own. Much of trendy devops is a reinvention of old things that we've already had and tried over the last several decades. For the most part, these come with the same old problems, and there are reasons they fell out of favor.

There are definitely times when the tradeoffs will fall such that Kubernetes makes sense. Those times are much rarer than most people who are running k8s want to admit. That's because they're subconsciously running k8s as a status symbol, and lying to themselves about its necessity and value in their use case.

I'm not saying the way we were doing things before k8s came around was exactly right. For example, going back to shared hosting is probably not a bad idea for the vast majority of things, and stuff like logging into your FTP server and copying your files have been gussied up as "cloud object storage" (and costs a lot more money now). So it's really just a general oscillation of trends, I guess.


Databases run fine in containers; conceptually there's very little difference between that and running outside of one. The burden is on yourself to know how to manage it.

Specifically, Kubernetes provides the tools to "pin down" the database to the extent you want/need: Stateful sets, node affinities, QoS classes, nodepools and so on. You want to avoid OOMkilling the database, you want to dedicate the appropriate amount of resources to it, you may want to dedicate a specific type of hardware to it, etc.

It's not for beginners, but nor is it anything like the disaster you portray it as.
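
Concretely, "pinning it down" looks something like this (a rough 1.6-era sketch: a StatefulSet with requests == limits for Guaranteed QoS, and a nodeSelector onto a labelled, dedicated node pool; sizes and labels are placeholders):

    apiVersion: apps/v1beta1
    kind: StatefulSet
    metadata:
      name: postgres
    spec:
      serviceName: postgres
      replicas: 1
      template:
        metadata:
          labels:
            app: postgres
        spec:
          nodeSelector:
            role: database           # only land on the dedicated DB nodes
          containers:
            - name: postgres
              image: postgres:9.6
              resources:             # requests == limits -> Guaranteed QoS class
                requests:
                  cpu: "2"
                  memory: 8Gi
                limits:
                  cpu: "2"
                  memory: 8Gi
              volumeMounts:
                - name: data
                  mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:
        - metadata:
            name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 100Gi

The volumeClaimTemplate gives each replica its own persistent volume that follows it across reschedules.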


This is true to the extent that containers remain in the realm of "concept" rather than reality. In the theoretical container that only serves the function of isolating each process within its own resource space, there is little reason why databases can't function reasonably well. This element alone is more frequently referred to as "sandboxing".

However, in the real world of Docker and Kubernetes, it abstracts away components that actually matter to the function and administration of the database, and which can only be simulated by relying on experimental features like StatefulSets, or flaky features like node affinities that require each node to be registered with the correct labels and, as far as I know, are treated as preferences, not demands (i.e., pods will be scheduled on other nodes if nodes matching the affinity are not available).

It is for beginners because non-beginners would realize that they're just taking a very complicated and winding route back to step one.

If you want to avoid all OOM kills triggered by exceeding the cgroup spec, if you want to avoid the complexity and potential risk involved in configuring sturdy backing volumes and dealing with the questions around what happens when a pod gets rescheduled onto a node that doesn't have that volume, if you want to ensure that the database is running reliably and stably on a specific type of hardware, the right solution is to go buy a server (or at the very least use a dedicated VM, not a container that will thrash it all over the place and expose the process to the risk of things like a dockerd hang), install the database on it, and leave it alone. Not to try to shoehorn it into k8s and try to grab this stuff that k8s exists to hide out of some weak abstract space.

The risks associated far outweigh the benefits of Kubernetes for an application like a database (and arguably many other types of applications). As far as I know, databases are supposed to be administered with a great deal of respect for their rock-solid stability. The very suggestion of running one on a two-year-old platform should make any business very scared, even if that platform was conventional. k8s is anything but, and it is still trying to figure out how to provide the basics. Running a production database in k8s is a shockingly distasteful affront to stability just in principle.

It sounds like the only reason people are able to give to justify database-in-k8s is "I want to be able to control it with `kubectl` like I can control everything else". My personal impulse would be to say that people who insist on using one administrative utility only should probably not be allowed to operate servers. A more realistic solution may be something like a kubectl proxy mode, where one can register external servers within the cluster namespace and use `kubectl exec <my-db-server> /bin/bash` to initiate an SSH connection.


Why shouldn't you run databases on it? I've been running larger databases (10TB+) on it for some time.

If your database can deal with the failure of a single node and uses network storage, it's totally doable to run databases on Kubernetes.

Take a look at Patroni for an example of how to run Postgres on IaaS or Kubernetes.


Database servers are generally designed from the ground-up to utilize basically the entire machine and to be left running for a long time, with caching and memory utilization techniques designed around these assumptions. VMs muck with this somewhat but at least the memory region is reserved for that VM's usage. Docker eliminates that, and k8s eliminates the knowledge even of what type of hardware/memory the system is being executed on (without taking several contrived steps to restore this).

Database servers are also designed on the assumption that each node will be available on a consistent basis, at a consistent address, and that the data directory will be available to the server as soon as it starts up (except for rare node bootstrap operations).

This is the exact opposite of what Kubernetes/Docker seek to provide, and in fact, such things can only be provided within Kubernetes by extensive special configuration that leans heavily on experimental features.

There could not be a worse ideological match. Yes, you can try really hard to ram that square peg through that round hole, eventually pushing it through with significant damage to both the peg and the hole, but why would you?


> This is the exact opposite of what Kubernetes seek to provide

Not true. You can have dedicated machines for your DB instances; check out 1. node affinity and 2. StatefulSets.

StatefulSets are designed to provide consistency (no "split brain"), stable network identity ("at a consistent address"), and stable storage identity ("and that the data directory will be available to the server as soon as it starts up"). Those features are beta, but it's a matter of time (hardening and the like) before we promote them to stable.

> why would you?

For all the same reasons you would run any other workload on the same platform.


The reason to run any other workload on Kubernetes is to dynamically schedule your containers across a cluster of anonymous hardware resources, and to provide automatic monitoring, recovery, and control of those containers when certain events occur. The goal, essentially, is to abstract the hardware/system-level element from the deployment element. That's all well and good, but applications that make certain architectural assumptions do not lend themselves well to random scheduling across an anonymous array of system resources. Databases are absolutely among that class of applications, as are many other application types (which are now called "stateful" applications).

For k8s to work well for an application, that application has to be anonymous, without any masters or controllers. It has to be able to tolerate the sudden vaporization of any member. It has to be willing and able to share host hardware with any number of other services, including some which may be bad neighbors/CPU hogs, and it has to be content to be rescheduled in the event that a node goes down, that a pod is killed to ease the transition, etc. Databases fail on virtually every point of this.

If the database and Kubernetes start from opposite design paradigms, why does it make sense to run a database inside Kubernetes? I am still not getting it. `kubectl delete pod/my-postgres-pod` is not a smart thing; you don't want any of that scheduling magic that Kubernetes provides.

At most you would want Kubernetes to tell you that your database is failing health checks and execute the STONITH process to failover to replica, but you hardly need Kubernetes if all you care about is process monitoring.

So can you elaborate on the reasons? Databases are simply not designed for this kind of infrastructure and I see no value in trying to pretend they are. Isn't this the reason that CockroachDB exists, so that people can finally run their DBs in something like Kubernetes without endless headaches?

I think it would be very interesting to analyze the stability and performance characteristics of a PgSQL k8s deployment, a PgSQL VM deployment, and a PgSQL bare-metal deployment. The only issue is that you can't expect those who hit failures to be open with their data.


You shouldn't run databases on it unless you can express exactly why you should. Me, I already have resource segmentation throughout my infrastructure. They have their own network stacks and predictable, understandable resource utilization.

I call those segments "virtual machines." It's rather harder to blog about an RDS instance (or even a self-hosted EC2 one), but you minimize moving parts and the way your bleeding-edge systems can screw you.


My main reason to run basically everything in Kubernetes is abstraction between resources and applications and standardization of deployments.

While you can achieve the same with various other approaches, like good configuration management and virtual machines, I have found that the overall cost of creating, running and monitoring deployments on Kubernetes is smaller than with traditional VM-based solutions.


Are you actually hosting customer data in this system? How are you ensuring that the scheduler never shuts down or relocates the database server without your explicit consent? And when the scheduler does relocate the service, how do you ensure clients can connect to the new instance without interruption?


Here, let me pre-emptively render the standard replies, to save everyone time:

* "Nuh-uh! We love Kubernetes and run EVERYTHING in it, so there. And we DID NOT do that just so we could be cool by copying Google or just so I could pad my resume, I triple-double-swear".

* "But, StatefulSets! It doesn't matter that it's taken over two years of engineering effort to get to a place where fundamental functionality that you've gotten automatically for the last 3 decades exists, like individually-addressable servers or stable disk space. It doesn't matter that they're still beta. Run your database in it TODAY, or you are in serious danger of exhausting your GCool Points!"

* "Please take back your mean comments about modern-day software development culture. Node.js was my first language and I for one find left-pad a miraculous innovation, which never would've been possible without a culture of everyone scrambling to copy some crappy module up to npm and their github profile so that they can brag about how many stars they have. I am thoroughly enjoying this same ethos being carried over into sysadmin/devops."


Every time you write about the ongoing silliness that is chasing the car that is infrastructure/platform trends, I want to buy you a drink. The overwhelming proportion of teams neither need nor benefit from all of this stuff. We're all not running Google here--at some point, the mindset of an engineer should go from "this looks fun" to "this isn't going to blow up and screw us at the worst possible time", but unfortunately, it looks like the devops/infrastructure world is currently traipsing down the road tread by other trendy ecosystems and fields.

So many developers insist that elegance is having nothing to take away--and then they pile all of this stuff that they don't fully understand on top of other stuff they don't understand at all just because "it's devops". It's maddening and strange.

(While we're at it: most of you don't need your million microservices, either, and you're probably not fault-tolerant across them anyway.)


Indeed. The scary part is that eventually the consequences of this will propagate up to someone important and powerful enough to do something about it, and then I worry that there will be legal limitations put on the industry.

I believe there are already substantial rumblings in this direction given the current state of cybersecurity, so this new age of recklessness only stands to hasten it.

I am really hoping that we can avoid such strenuous regulations, lest highly functional autodidacts and anti-authoritarians find themselves shut out of yet another high-paying, at-least-semi-merit-based career. The evisceration of the legal profession should stand as a warning to us. We do not want the actually good and useful people, who generally have low tolerance for the systemic brutality of rote learning, edged out by status seekers who have no real value to offer but can occupy space in a room for the requisite time to earn their certifications.

And on the note about microservices, you couldn't be more right. These architectures are such nightmares to debug that special transaction tracing tooling had to be invented to ease the process, and you have to either go through and update all 90 of your services to use these or go through a centralized service broker (now called a "microservice mesh" so that people don't catch on) to append tracing tags to each request.

Please get real here, people. There are times when this kind of stuff may be useful (if you're running Google or Twitter, for example). That does NOT apply to your small company of 100 employees. You are only making things harder on yourselves.


Thank goodness there are people that get this. I ran screaming from a devops role I'd been in for two years around 6 months ago. The thought process of "complexity == good", "old == bad"; watching the company start to pull apart an already unstable product and break it down into microservices when they couldn't even document their existing codebase or how to bootstrap a new instance.

Most Puppet installations I've seen run by these small teams just degenerate into snowflake factories. Going to all that effort to run a huge Packer pipeline so your templates are the same across all your cloud providers, and then pulling whatever-the-current-random-version-of-a-bunch-of gems from rubygems.org onto hundreds of different machines. Arggggh!

The argument I keep hearing is that "if we don't use all the latest tools then how are we going to attract the best talent".

Every few years I dip my toes back into the devops space, just to see if it's improving. Nope. It's just getting worse.



