Kubernetes 1.10 released (kubernetes.io)
318 points by el_duderino on March 28, 2018 | 189 comments



Is there an easy way to get a single-node production-ready Kubernetes instance? I'd like to start using the Auto DevOps features that GitLab is adding, but all the tutorials I can find either have you installing minikube on your laptop or setting up a high-availability cluster with at least 3 hosts. Right now I'm using CoreOS and managing Docker containers with systemd and shell scripts, which works all right but is tedious and kind of hard to keep track of. I don't have anything that needs to autoscale or fail over or do high availability, I just want something that integrates nicely and makes it easy to deploy containers.

EDIT: I should have clarified, I want to self-host this on our internal VMWare cluster, rather than run it on GKE.


> Is there an easy way to get a single-node production-ready Kubernetes instance?

Not really. There are plenty of ways of getting a single node instance. None of them will give you a "production-ready" one, because they don't define it that way (and I happen to agree). You can of course do whatever you want.

Since you are using VMWare anyway, why can't you spin up more VMs (maybe smaller)? You can vmotion them away to different nodes when you are ready to actually make the cluster HA. It is a really really good idea to keep master and workers separate, even if you run a single node for each.

Failure of the worker will of course bring down your applications. When you recover it or spin up another one, K8s will recover your apps for you. Failure of the master will not adversely affect your systems, only the cluster's ability to manage itself and its self-healing capabilities (which will affect uptime at some point).

Failure of the combined master/worker/etcd node should be recoverable, but frankly, at this point, should you care? I would just shoot it in the head and add some automation to provision a brand new cluster and deploy those applications again. Since you are not worried about HA and just want a place to deploy the containers, just make the k8s single-node-cluster cattle.


Sure. Install kubeadm on the node, "kubeadm init", install a pod network, then remove the master taint
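
Roughly, as a sketch rather than a hardened setup (the pod-network manifest and CIDR here assume flannel; other network plugins differ):

    kubeadm init --pod-network-cidr=10.244.0.0/16
    export KUBECONFIG=/etc/kubernetes/admin.conf
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    # allow regular workloads to schedule on the (single) master node
    kubectl taint nodes --all node-role.kubernetes.io/master-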


Reminds me of the plumbus “How is it made”.


^ This.


I am actually in a very similar situation. I want to run a microservices system on a single k8s node for testing purposes, put nginx with SSL in front of it and add Jenkins for automation.

Took the day to setup minikube on a CentOS server and play around, however, I wasn't able to expose anything to the outside world. Looking into Ingress at the moment, however, documentation is a bit loose there, I think.
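
For reference, the bare-minimum way to expose something without touching Ingress seems to be a plain NodePort service, something like the sketch below ("hello"/nginx are just placeholders). Note that with minikube the NodePort ends up on the minikube VM's IP rather than the server's own IP unless you run with --vm-driver=none.

    kubectl run hello --image=nginx --port=80
    kubectl expose deployment hello --port=80 --type=NodePort
    kubectl get svc hello   # note the mapped 30000-range port, then curl http://<node-ip>:<that-port>/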

Another comment suggests to setup a single-node cluster and remove the taint on the master, maybe I will try that instead.

edit: any advice much appreciated!


I ran into the same problem with exposing it to the outside world, and also found the documentation there to be a problem. I got closest with kubespray; I was originally working from the Kubernetes documentation on a 4-node cluster.


You can run Google Kubernetes Engine with a single node, just set the size to 1. It will complain about the high availability, etc. but it works just fine if you don't need that.


To the best of my knowledge, there is no such thing as single node production ready k8s cluster.

> I don't have anything that needs to autoscale or fail over or do high availability

You should not be using Kubernetes.


What should you be using, then, if your development workflow outputs container-images, and you want to deploy them to your own hardware with remote orchestration + configuration management (and you consider Docker Swarm dead-in-the-water)?

That is: what is the “run your own thing-that-is-slightly-lower-level-than-Heroku on a fixed pool of local hardware resources” solution in 2018?


Just to be clear: you can run Kubernetes on a single node. It just won't be "production ready", because the minimum qualifications for "production ready" have been raised in the Kube world. A single node running anything isn't production ready, let alone Kube. A single node running an nginx server isn't "production ready" anymore.

But Kubeadm will still do it. Kops if you're on AWS. GKE if you're on GCP. Just Docker would be easier to set up though, and that's what the OP means.


I love how there is a plurality of solutions which seem to address everything but your use case...


I would probably just use either docker-compose or systemd+runc for a single node, then use Ansible to manage configurations. Kubernetes strongly assumes you've got a cluster, not a single node.
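
Something like this is usually enough for the docker-compose route (a minimal sketch; the image name and ports are placeholders, and in practice you'd template the file from Ansible rather than heredoc it):

    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      web:
        image: registry.example.com/myapp:1.0
        restart: unless-stopped
        ports:
          - "80:8080"
    EOF
    docker-compose up -d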


To be honest for something small and simple I’d just throw up a Rancher server and call it from Ansible.


You could try "The canonical distribution of Kubernetes" https://www.ubuntu.com/kubernetes . That has a single node deployment using lxd, IIRC.


that's Canonical, with a capital C


I think kubespray[1] does what you want, but I'm not 100% sure.

[1]: https://github.com/kubernetes-incubator/kubespray


I've built a bunch of simple scripts for this that you can find here [0]. It's not polished for public consumption or updated to k8s 1.10 yet, but it's what I use for production clusters, some of them single-node. Run the preparation script, then the master script (set up an .env file beforehand with the few required variables), and you're good. Feel free to ask questions here or in the repo issues section.

EDIT to add: It's assuming Ubuntu LTS as the node's OS, not sure if that fits your use case. Should be possible to adapt this to ContainerLinux or anything else without much trouble.

I haven't worked with GL's Auto DevOps yet, but I think the cluster should have everything necessary to get going with that.

[0] https://github.com/seeekr/kubeops


If you want a middle ground between hand-written shell scripts and full-blown Kubernetes, we use Hashicorp's Nomad[0] on top of CoreOS at $dayjob and are quite happy with it.

Similar use case - self-hosted VMs, for low-traffic, internal tools, and no need for autoscaling.

I can't speak to how well it integrates with Gitlab's Auto DevOps, but Nomad integrates very well with Terraform[1] and I'd be surprised if there wasn't a way to plug Terraform into Gitlab's process.

0: https://www.nomadproject.io/

1: https://www.terraform.io/


I would be very cautious about calling anything single-node production-ready


This is for a bunch of internal tools, so if it goes down it's more of a nuisance than anything. Is there something that makes a single-node kubernetes setup less reliable than a single server without kubernetes?


I have my group's internal Jenkins service hosted on a single node EC2 instance running Kubernetes (t2.medium) and I would echo all of the advice you're getting. Kubeadm, definitely. And moreover, don't call it production-ready.

A production-ready cluster has dedicated master(s), period. In order to get your single-node cluster to work (so you can schedule "worker" jobs on it) you're going to "remove the dedicated taint," which is signaling that this node is not reserved for "master" pods or kube-system pods only. That will mean that if you do your resource planning poorly with limits and requests, you will easily be able to swamp your "production" cluster and put it underwater, until a reboot.

(The default configuration of a master will ensure that worker pods don't get scheduled there, which makes it harder to accidentally swamp your cluster and break the Kube API, but also won't do anything but basic Kubernetes API stuff.)
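
(One concrete mitigation, sketched here with made-up numbers and a hypothetical "jenkins" deployment name: set requests/limits on everything you schedule so the kubelet and control-plane components keep some headroom.)

    kubectl set resources deployment jenkins \
      --requests=cpu=500m,memory=1Gi \
      --limits=cpu=1,memory=2Gi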

If things go south, you're going to be running `kubeadm reset` and `kubeadm init` again because it's 100% faster than any kind of debugging you might try to do, and you're losing money while you try to figure it out. That's not a production HA disaster readiness or recovery plan.

But it 100% works. Practice it well. Jenkins with the kubernetes-plugin is awesome, and if I have a backup copy of the configuration volume and its contents, I can start from scratch and be back to exactly where I was yesterday in about 15-20 minutes of work.

My 1.5.2 cluster's SSL certificate expired a few weeks ago, on the server's birthday. After several hours spent trying to reconcile the way SSL certificate management has changed, to find the proper documentation about how to change the certificate in this ancient version, and to consider whether I should upgrade and what that would mean (read: figuring out how to configure or disable RBAC, at the very least)... I conceded that it was easy to implement the "DR-plan Lite" that we had discussed, went ahead and reinstalled over the same instance "from scratch" again with v1.5.2, and got back to work in short order.

I've spoken with at least half a dozen people that said administering Jenkins servers is an immeasurable pain in the behind. I don't know if that's what you intend to do, but I can tell you that if it's a Jenkins server you want, this is the best way to do it, and you will be well prepared for the day when you decide that it really needs more worker nodes. It was easy to deploy Jenkins from the stable Helm chart.
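
For the curious, the whole thing is roughly this (Helm 2.x syntax; the chart's defaults are fine to start with, and values can be tuned later):

    helm init                                  # installs Tiller into the cluster
    helm install --name jenkins stable/jenkins
    kubectl get pods -w                        # wait for the Jenkins master pod to come up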


I've done a number of 1.5 to 1.9 migrations; if you need help figuring out what API endpoints/etc. have changed, I can give you some guidance. Ping me on the k8s Slack: mikej.

Once you get onto 1.8+ with CRDs you can manage your SSL certs automatically via Jetstack's cert-manager: https://github.com/jetstack/cert-manager/tree/master/contrib...


Thanks! I will check it out!

It just hasn't been a priority. I have no need for RBAC at this point, as I am the only cluster admin, and the whole network is fairly well isolated.

I couldn't really think of a good reason to not upgrade when it came time to kubeadm init again, but then I realized I could probably save ten minutes by not upgrading, it was down, and I didn't know what the immediate consequences of adding RBAC would be for my existing Jenkins deployment and jobs.

Chances are it would have worked.


Honestly, for the situation you presented you'll find very few QOL improvements by upgrading. You could probably sit on 1.5 on that system (internal Jenkins) forever.


The biggest driver is actually just to not be behind.

You can tell already from what little conversation we've had that "always be upgrading" is not a cultural practice here (yet.)

We have regular meetings about changing that! Had two just yesterday. Chuckle


I don't think so, provided it has the necessary resources to run everything in a single node. There are a few more moving parts which you won't really be using to any great extent.


more parts = more things that can go wrong


That's not quite true. More parts == more things that can fail, but whether those failures result in the entire system failing depends on how you've combined the parts.

If you make each of the pieces required parts of the whole, then yes - adding more of them will increase the chance that the whole system fails. But in kubernetes, the additional pieces (nodes) are all redundant parts of the whole, and can fail without affecting the availability of the whole system. The more nodes you add, the more redundancy you're adding, and the less chance that the system as a whole will be affected.

Mathematically:

If a component fails with probability F, then adding N of them "in series" (all of them need to work) means your whole system fails with probability 1-(1-F)^N. IOW, as N goes up, the system approaches a 100% chance of failure.

OTOH, if you combine the parts "in parallel", and you only need any one[1] of the components to work in order for the whole system to work, then the system fails with probability F^N. As N goes up, this system approaches a 0% chance of failure.
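
To put numbers on it: with F = 1% (0.01) and N = 3, the series arrangement fails with probability 1 - 0.99^3 ≈ 3%, while the parallel arrangement fails with probability 0.01^3 = 0.0001%.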

[1] Kubernetes (etcd) isn't quite this redundant, since etcd needs a majority quorum to be functional not just any single node. But the principle is similar and still gets more reliable as you add nodes.


You can set the node count to 1 when you create the cluster in GitLab


For more information on how to connect the cluster to GitLab please see https://docs.gitlab.com/ee/user/project/clusters/


Use GKE, it's super easy to set up and the master is free. If it's for internal tools you could even run it on a group of preemptible instances, which will auto restart and self heal when one gets terminated.


I'm very curious to hear field reports from people who switched to using Kubernetes in production in the last year or so. Why'd you do it? What got better and what got worse? And are you happy with the change?


At GitLab we're making the application ready for Kubernetes; we have not switched yet. It required us to untangle many parts. For example, we used to store uploads on disk before moving them to object storage; now they go directly to object storage. There were many interactions between our applications https://docs.gitlab.com/ee/development/architecture.html#com... over disk or sockets that we need to clean up.

When it's done we expect to be able to scale the different parts of our application independently. This also makes it easier to detect problems (why did Gitaly suddenly autoscale to twice the number of containers?).


At GitLab, our installation method for the product was also tightly based around chef-solo and chef omnibus. At first we were trying to continue using these technologies inside our containers, but this required our containers to run as root.

So a lot of our effort in moving to kubernetes has required us to try and duplicate all of our previous installation work in ways that don't require root access while the containers are running.

To help, we've chosen to use Helm to manage versioning/upgrading our application in the cluster: https://helm.sh/


I recently presented at Cloud Expo Europe, describing how and why GitLab has been working to produce a cloud native distribution. I've outlined some of the blockers we've faced, and what we've been working on to resolve them going forward. Those changes benefit both our traditional deployment methodologies and our efforts for deployment on Kubernetes.

Journey to Cloud Native: Breaking the monolith and scaling towards tomorrow - https://docs.google.com/presentation/d/1fsgvSuGpn-MnMqKaTOoi...

Our biggest blockers have been how to separate file system dependency for our various parts. For Git content, we've implemented Gitaly (https://gitlab.com/gitlab-org/gitaly/). For various other parts, we've been implementing support for object storage across the board. Doing this while keeping up with the rapid growth of Kubernetes and Helm has been a challenge, but entirely worth the effort.

We are also bringing all of these changes to GitLab CE, as a part of our Stewardship directive (https://about.gitlab.com/stewardship/, https://about.gitlab.com/2016/01/11/being-a-good-open-source...). We don't feel that our efforts to allow for greater scalability, resiliency, and platform agnosticism belong only in the Enterprise offering, so we're actively ensuring that everyone can benefit, just as they can contribute.


Running GitLab CI specifically is absolutely amazing on K8S... Thank you so much for your efforts.


Thanks, glad you are finding it helpful!

We have a lot of improvements coming as well: https://about.gitlab.com/direction/#ci--cd


If Gitlab could deploy to a play-with-k8s temporary cluster with four hours to test and evaluate, that would be pretty cool.


I agree, that would be awesome!

While it's not a single button, GitLab is getting closer to that goal with the addition of our cluster integration feature: https://docs.gitlab.com/ee/user/project/clusters/#adding-and...

If you create a trial account on GCP, you can then use GitLab to make a new GKE cluster and it will be automatically linked. You can then click a single button to deploy the Helm Tiller, a GitLab Runner, and then run your CI/CD jobs on your brand new cluster!

The GCP trial is pretty nice, you get $300 in credits and you won't be automatically billed at the end: https://cloud.google.com/free/


Curious, did you evaluate any other similar technologies like Swarm/(Mesos+Marathon) etc.?

What made you eventually go ahead with K8s other than the main obvious reason (the community) itself?


As twk3 has said, we have examined multiple other options.

One big reason we've chosen to implement with Kubernetes is community adoption. You can now run Kubernetes on AWS, GCP, Azure, IBM cloud, as well as a slew of other platforms both on-premise and as a PaaS. With the expanding ecosystem, our decision has only been further confirmed, as the list of available solutions and providers continues to grow rapidly (https://kubernetes.io/docs/setup/pick-right-solution/)

Another big reason is far simpler: ease of use and configuration. A permanent fixture in our work for Cloud Native GitLab has been that this method must make it as easy, if not easier, to provide a scalable, resilient, highly available deployment of GitLab at scale.

We can’t say, “This is our new suggested method. By the way, it’s harder.”

What we have found is that many other solutions require a much larger initial investment of time to understand and configure GitLab as a whole solution, as compared to the combination of Kubernetes with Helm. Helm provides us templating, default value handling, component enable/disable logic, and many other extremely useful features. These allow us to provide our users with a practical, streamlined method of installation and configuration, without the need to spend countless hours reading documentation, deciding on the architecture, and making edits to YAML.


At GitLab, we did evaluate Mesosphere DC/OS. What turned our focus to Kubernetes as our primary cluster install target was the speed at which it was developing, and after watching the space and talking to partners/customers we formed the opinion that Kubernetes was going to lead the pack.

We've been looking at these technologies for two years now, with our focus being Kubernetes for the last one.


GitLab has been investing significantly in Kubernetes, both because we believe in the platform and because we see significant demand from our customers. Its ability to run on-premises, as well as its availability in a wide variety of managed cloud flavors, is a huge benefit, and likely a driver of the demand.

We also try to use the same deployment tools for GitLab.com that we provide to customers, and this lets us offer a scalable production-grade deployment method that can run nearly anywhere.


We switched last year and achieved 45% cost savings. This is mostly because we can now easily move all CPU-heavy activity to preemptible nodes if they are available. It is also much simpler to gracefully scale down the number of nodes/containers outside peak hours.

We also no longer have to manage servers.

All of this was possible without Kubernetes, but it is so much easier with it. Although admittedly much of the ease of use is due to not having to manage Kubernetes itself – we use Google Kubernetes Engine. I would not want to install or manage a Kubernetes cluster/master myself.

I can recommend managed Kubernetes to anyone that runs many different apps/services.


We run all of our production backend on it at Monzo (we're a bank.) [1]

We first deployed v1.2 nearly 2 years ago, and I can say Kubernetes has made some amazing improvements in that time – in terms of functionality, usability, scalability, and stability. This release continues that trend.

We've invested a lot in it too with things like making sure our etcd clusters backing Kubernetes are really solid [2], and we've even added some of our own features to Kubernetes like allowing CPU throttling to be more configurable to get more predictable >p99 latency from our applications.

We've been through our share of production issues with it (some of which we've posted publicly about in the hope that others can learn more about operating it too [3]), but I don't think there's any way in which we could run an infrastructure as large and complex as ours with so few people without Kubernetes. It's amazing.

[1] https://monzo.com/blog/2016/09/19/building-a-modern-bank-bac...

[2] https://monzo.com/blog/2017/11/29/very-robust-etcd/

[3] https://community.monzo.com/t/resolved-current-account-payme...


One data point: I've wanted to but so far have not made much progress. I'd say my biggest impediment has been documentation: I can get it installed, but making it work seems to be beyond the scope of the documentation. I got closest once I found out about "kubespray" to install the cluster rather than using the official Kubernetes installation docs process.

I spent a couple weeks not quite full time going through tutorials, reading the documentation, reading blog posts and searching for solutions to the problems I was having. My biggest problem was with exposing the services "to the outside world". I got a cluster up quickly and could deploy example services to it, but unless I SSH port forwarded to the cluster members I couldn't access the services. I spent a lot of time trying to get various ingress configurations working but really couldn't find anything beyond an introductory level documentation to the various options.

Kubespray and one blog post I stumbled across got me most of the way there, but at that point I had well run out of time for the proof of concept and had to get back to other work.

My impression was that Kubernetes is targeted to the large enterprise where you're going to go all in with containers and can dedicate a month or two to coming up to speed. Many of the discussions I saw talked about or gave the impression of dozens of nodes and months of setup.

Other options I'll probably look at when I have time to look at it again: Deis https://deis.com/ , Dokku http://dokku.viewdocs.io/dokku/ , Flynn https://flynn.io/ , LXC https://linuxcontainers.org/lxc/introduction/ and (though I'd been trying to avoid it) Docker Swarm https://docs.docker.com/engine/swarm/


If you're looking for a better "out of the box" experience, I'd recommend having a look at OpenShift.

You can use either their free tier in cloud or use the Open source OpenShift Origin for trials (there's also MiniShift, which is similar to MiniKube).

From my looking at it OpenShift comes with some of the parts that base Kubernetes leaves to plugins, so things like ingress, networking etc are installed as part of the base.


Are you trying to play around, or set up a working cluster? If you just want to play around, I'd suggest just using minikube to get things going.

Anecdotally, I got an HA cluster running across 3 boxes in the space of about a month, with maybe 2-3 hours a day spent on it. The key for me was iterating, and probably that I have good experience with infrastructure in general. I started out with a single, insecure machine, added workers, then upgraded the workers to masters in an HA configuration.

I don't think it is really that hard to get a cluster going if you have some infrastructure and networking experience, especially if you start with low expectations and just tackle one thing at a time incrementally.


Full Disclosure: I work for Red Hat in the Container and PaaS Practice in Consulting.

At Red Hat, we define an HA OpenShift/Kubernetes cluster as 3x3xN (3 masters, 3 infra nodes, 3 or more app nodes) [0] which means the API, etcd, the hosted local Container Registry, the Routers, and the App Nodes all provide (N-1)/2 fault tolerance.

Not to brag, since we're well practiced at this, but I can get a 3x3x3 cluster up in a few hours, I've led customers to a basic 3x3x3 install (no hands on keyboard) in less than 2 days, and our consultants are able to install a cluster in 3-5 working days about 90% of the time, even with impediments like corporate proxies, wonky DNS or AD/LDAP, not-so-Enterprise load balancers, and disconnected installs. Making a cluster ready for production is about right-sizing and doing good testing.

[0] http://v1.uncontained.io/playbooks/installation/#cluster-des...


Worth mentioning that my "got a cluster working in a month" time frame includes starting with zero Kubernetes experience, and no etcd ops experience. Using kops, pretty much anybody can get a full HA cluster running in about 15 minutes. On top of that, it's maybe 5 more minutes to deploy all the addons you'd expect for running production apps on a cloud-backed cluster.

The great thing about automation is that once you have these basic tools (Prom/Graf monitoring/alerting, ELK, node pool autoscaling, CI/CD) implemented as declarative manifests, they're deployable anywhere in minutes.


It would be good if the "Enterprise Load Balancer" could just be another set of servers (with HAProxy + keepalived or something else; I love the "single IP" failover)

Edit: especially load balancing the master servers (that's actually the hard part of k8s, not even setting it up with/without openshift/ansible/whatever). Load balancing services on k8s itself is basically just running either the Calico network and one or two haproxy deployments of size 1 with an IP annotation, or just using https://github.com/kubernetes/contrib/tree/master/keepalived...


One last challenge. Can you do all the setup without being root?


As long as the user can install packages (say, via the /etc/sudoers file) and make config changes, yes. That's supported by our installer [0].

[0] https://github.com/openshift/openshift-ansible/blob/master/i...


I'm trying to set up a cluster in our development environment to play around with in preparation for rolling it to staging and production. So, minikube I have ruled out because it doesn't prove out the most critical parts of what we will need to run it in production.

I do have a lot of infrastructure and networking experience, it was mostly a matter of the ingress setup having many moving parts which were poorly documented. I could see that it had set up bridges and iptables rules and NAT and virtual interfaces, but I was never able to get a picture of how the setup was supposed to work to be able to see what parts of that picture were right or wrong.

There was no clear road-map of setting up a cluster. Most people talking about Kubernetes were doing "toy" deployments, which only had limited application to what I was doing. I only found kubespray because of a passing mention, for example.

I'd say you're about right with a month. Had I given it another week or two, I probably would have gotten it going. I had really only expected it to take a couple days to have a proof-of-concept cluster, so at 2 weeks I was way beyond what I had slotted to spend on it.

Looking over the Getting Started Guide it looks very simple to get a test cluster set up. Which maybe set my expectations unreasonably high.

I guess that's what I'm trying to say: With the current state of documentation, it's probably a calendar month investment to get going.


Take a look at rancher too. https://rancher.com/

Rancher 2.0 promises nice UI for K8s


Rancher 2.0 with Kubernetes. Should make things a lot easier.


Docker Swarm might be worth looking at regardless:

(1) it's like 15 minutes to learn how it works and then maybe a morning or so playing around with it. Very low investment.

(2) if you're already using docker-compose, it's pretty much an in-place switch. You might need to deal with a few restrictions (mandatory overlay network, no custom container names, no parameterized docker-compose), in exchange you'll get zero-downtime upgrades, automated rollbacks, and of course the ability to add more machines to your swarm.
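
For a compose file already in the version 3 format, the switch really is about this much (the stack name here is a placeholder):

    docker swarm init                                  # a single-node swarm is fine
    docker stack deploy -c docker-compose.yml myapp
    docker service ls                                  # replica counts, rollout status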


Dokku is great if you want to deploy on a single machine. I've been happy with it so far and haven't been lucky enough to have a problem of scaling yet.


We're running Kubernetes for minor amounts of traffic in production, but we're still not in a great place due to a few limitations in running Kubernetes on your own gear.

I know that people like to talk about cost savings and stuff like that, but I'd like to see if it actually lowered your app latency and increased sales/conversions/whatnot. Things that matter a lot to a growth business.

I ask because the various overlay/iptables/NAT/complicated networking setups in Kubernetes lend themselves to adding more overhead and being much slower than running on "bare metal" and talking directly to a "native" IP. I really, really wish that Kubernetes had full, built-in IPv6 support. It would remove a lot of this crud.

Our solution works around this by assigning IP's with Romana and advertising those into a full layer 3 network with bird. The pods register services into CoreDNS, and an "old fashioned" load balancer resolves service names into A records. Requests are then round robin'd across the various IP's directly. There's no overlay network. There's no central ingress controllers to deal with. There's no NAT. It's direct from LB to pod.

The nginx ingress controller is not a long term solution. It's a stop-gap measure. Someone really needs to build a proper, lightweight, programmable, and cheap, software-based load balancer that I can anycast to across several servers. That or Facebook just needs to open-source theirs.


Regarding networking, did you consider Flannel? Its "host-gw" backend doesn't have any overhead, as it's only setting up routing tables, which is fine for small (<100s of nodes) clusters that have L2 connectivity between nodes.

For load balancing, have you looked at Traefik?


Our network is full layer 3 Clos Leaf/Spine. We'd much prefer something with network advertisements (OSPF/BGP) or SDN. Layer 2 stuff is OK for labs, but I don't know anyone building out layer 2 networks any more.


Have you tried Envoy, either with Istio or with Heptio's Contour ingress?


Look up Traefik. I haven't used nginx since.


Will do! Thanks for the suggestion.


We use Kubernetes in a very unusual way. We built a tool that allows you to take a snapshot of a Kubernetes cluster (along with all apps deployed inside) and save it as a single .tar.gz tarball.

This tarball can be used to re-create the exact replicas of the original cluster, even on server farms not connected to the Internet, i.e. it includes container images, binaries for everything, etc.

But if the replica clusters do have internet access, they can dial back home and fetch updates from the "master". People use this to run complex SaaS applications inside their customers' data centers, on private clouds. These clusters run for months without any supervision, until the next update comes out.

Thanks to Kubernetes, you have complete, 100% introspection into a complex piece of cloud software, which allows for a use case like this. Basically, if you're on Kubernetes you no longer have to be tied to your one or two AWS regions, and you can start selling your complex SaaS as downloadable run-it-yourself software. [1]

[1] https://gravitational.com/telekube/


Is it really necessary to copy the containers themselves? Do they contain long-lasting state? (And if they do, how does this state get properly synchronized between master and slaves?)

If the offline server farms hosted a private Docker registry (which is very simple to set up), couldn't you then simply push the container images to the registry, copy the relevant YAML files, and instantiate an identical cluster that way?
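
i.e., something along these lines (the registry hostname and image names are hypothetical, and the registry would need TLS or an insecure-registry exception configured on the nodes):

    docker tag myapp:1.0 registry.internal:5000/myapp:1.0
    docker push registry.internal:5000/myapp:1.0
    kubectl apply -f ./manifests/    # the copied YAML, with images pointed at the internal registry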


That's pretty clever, it looks like you've found a sweet spot somewhere in the middle between SaaS and good-old on premises that would serve a lot of use cases.


How do you guys handle eg CVEs?


My company is running it in production. We started with the monitoring stack and then our application, which is a SaaS product. The product itself is a stack deployed on a per customer basis. With rough configuration management using chef and terraform it took an ops person hours to complete a deployment a year ago. Today, it takes a minute and it's self-service for engineers.

Efficiency is increased a great deal. Previously, I'd have idle instances costing money when they were unused. Now my resource consumption is amortized across my entire infrastructure.

It also drives an architecture that lends itself to better availability. A pod in k8s may be rescheduled at any moment. If it's a stateless service, one may just increase the number of replicas to prevent a service interruption. If it's a stateful service, one is forced to think about how to persist data and gracefully resume operations.

It took a while for everyone on my team to get familiar with it, but once we started to really grok it, the sense of safety went way up. I feel pretty good about the fact that if someone accidentally blew away a cluster with thousands of pods in it, that I could easily replicate the entire thing and get it back up and running in tens of minutes without a lot of hassle.


Your story with K8S sounds like a complete parallel of ours: we deploy a stack per customer and migrated from Chef. Would love to compare notes on how you're managing all the customer instances (we built in-house tooling layered on top of helm). Feel free to shoot me an email (in my profile).


Also built in-house tooling on top of helm.


How do you handle (if at all) network isolation between customers? I'm trying to find if there's a way to run one large cluster across multiple AWS VPCs.


Calico.

Also, my customers don't have strong requirements for network isolation. I'd be more concerned about compliance in areas like finance or healthcare.


Just seen your reply - thanks loads. Had not heard of Calico.


What's your team size now and when you first started using it?


Team is six people. First started using k8s in prod in May 2017 for monitoring.


The company was moving to microservices and containers anyway, so Kubernetes is one of the few sane options to run them.

I can't say that anything got worse. Once you (collectively) get past the initial learning curve and add automation in place, it's way better than most other deployment scenarios. Kubernetes worker nodes fail from time to time, AWS kills them and we barely notice. Pods get rescheduled automatically and mostly work.

Frankly, the remaining headache-inducing things are mostly related to the software which is not running on Kubernetes (mostly stateful infrastructure, and mostly for non-technical reasons). Managing VMs is a pain.

There is one thing you need to be careful about. If you are moving from a VM for each service to Kubernetes, suddenly your service is sharing a machine with other services, which wasn't the case before. So I'd suggest that proper resource limits be set so that the scheduler can work properly.

One thing that can get worse is network debugging. There are way more moving parts, so it is not as easy to just fire up wireshark.


We use it in production. Its declarative nature is excellent. We tell it how many of which application we want to run, where our nodes are, and it does the legwork in deciding where to run the applications.

So far zero problems.


We have been migrating to Kubernetes for the past year. Overall it has been great and the extensibility of the platform is amazing once you combine Custom Resource Definitions (CRDs) into your cluster.


Moving a lot of our build infrastructure to it. The effort has essentially been moving alpha to beta grade.

The knowledge/documentation base just flat out sucks. It feels like the documentation expects you to already know what you are doing and to be reading it only for the smaller feature switches that change how things work. That, or it expects you to be running on GKE.

That said, it's the best at doing what we need which is a scheduler and manager for running multi-container workloads.

I expect it to be more fulfilling/difficult when we move to more long-lasting pods but ultimately still use kubernetes.


At Reddit, we’re beginning to move our stateless services to it. We’re still in early stages and want to have a story that can at least be better than our current infrastructure. A lot of that means heavily utilizing Kubernetes abstractions and sticking close to the community and its tooling so we can provide more functionality than we could before with just a small team of infra folks. What I mean by this specifically is benefits like having an API for deployments, being able to provide different rollout strategies, offering devs access to more infrastructure safely, etc. Another thing we’re hoping Kubernetes helps us solve is being able to become multi-cloud tenants. The benefits of this to us would be cost savings, and hopefully more reliability.

P.S: if you’re interested in working on Kubernetes at reddit, send me a message at saurabh.sharma [at] reddit.com


At my last meetup (Detroit Kubernetes), we heard the story of a company that was able to rebuild their entire production environment from scratch in 1 hour. They run on Kubernetes; of course it's not the only part of the story, I'm sure using something like Terraform/CloudFormation is also part of it.


I've been toying around with Kubernetes (AKS on Azure) the last few weeks and have to say I'm rather impressed. Still, being able to start from scratch and be up and running in one hour is really impressive.


I'm using both AKS and GKE, and I have to say that GKE is way, way nicer to work with.


I'd be very interested to hear in what way you think GKE is nicer to work with, care to elaborate?


Clusters come up faster. They're easier to upgrade. You don't have to delete clusters to change them (just provision a new node pool). Persistent storage is less weird. No RBAC. Weirdness around the kube-system namespace, i.e. if I create a registry there, it just disappears with no logs or events suggesting why.


Our app was originally a single Node/Meteor app running on an Nginx server in AWS. We used shell scripts to deploy. "meteor build" was run on each dev's machine when we wanted to deploy. We experimented with a small microservice running on a Docker instance on EC2, then eventually moved to a Kops-provisioned EC2 cluster for everything.

We don't really have tremendous auto-scaling needs. But there are a few reasons why we did it.

We've since scaled to a small handful of services (frontend web app, backend node api, legacy stuff, custom deployments for high paying customers). We knew that standardizing on Docker was a good idea just to simplify the build process. What were previously service-specific, hard to maintain, hard to read, just weird shell scripts became a simple Dockerfile for each service, often only 5 or 6 lines long. Once you have that, setting up CI is ridiculously easy; AWS CodeBuild into ECS ECR took all of a day to implement. If we were on GCP it would have been even easier.

Comparatively, we'd spent longer than that actually maintaining the old scripts. The new ones require zero maintenance; they "just work", 100% of the time.

So we knew we wanted Docker. And that was a very good choice. No regrets. We start there. But once we had docker, our eyes turned to orchestration.

It's worth saying that there are a lot of nonobvious answers to questions in the "low/medium scale up devops" world. One big one we ran into early is "where do we store configuration?" Etcd or Consul are fine, but we're a pretty small shop; we didn't want to have to manage it ourselves. We could go with Compose or something. But, how do we get that configuration to the apps? We were in the process of writing new services and we wanted to follow 12-Factor and all that, so envvars make sense. But to get configuration from a remote source would break that, so we'd need some sort of hypervisor for the app to fetch that...

Additionally, how do you deploy? Let's say we go with a basic EC2 instance running Docker. We'd have to patch it together with shell scripts, ssh in, pull new images, GC old images (this was a huge problem with our "interim" step in the first paragraph, our EBS volumes filled up all the time, lots of manual work), restart, etc. Can you do that with zero downtime? Probably. More scripts...

Load balancing and HA? Yeah of course we can wire that up. More scripts. More CloudFormation templates or whatever...

Eventually you arrive at an inescapable conclusion: At a surprisingly low level of scale, you are reinventing Kube and Kube-like systems. So why not just use Kube? You get config management built in. You get a lot of deployment power in kubectl. You get auto-provisioning load balancers on AWS. You get everything.
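
(As a sketch of that built-in config management, with made-up names: a ConfigMap surfaces as plain environment variables, which keeps the 12-factor approach intact.)

    kubectl create configmap myapp-config --from-literal=DATABASE_URL=postgres://db/example
    # then in the pod spec:
    #   envFrom:
    #   - configMapRef:
    #       name: myapp-config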

And I could not be more serious when I say: It "just works". We've had two instances of random internal networking issues between our ingress controller and a service, which were resolved by restarting the ingress controller. That's... it, in a full year.

Like Docker before it, I think Kube is a foundational piece of technology that is only going to get more and more popular. It's incredible. But I do think that there is room in the market to make it easier to adopt. Best practices around internal configuration stuff are hard to come by; even something as simple as "I want a staging env, do I use two clusters or namespaces in one cluster?" doesn't have a clear answer. Getting a local development environment cycle up and running is a pain in the ass. Monitoring and alerting is still pretty DIY; GCP solves some of this, but there's no turn-key alerting piece that I'm aware of. Logging is a nightmare if you are self-hosting, including on Kops; we ended up installing the Stackdriver agent and we use GCP for it, even though literally everything else in our stack is on AWS.

I literally don't believe there's a level of "production scale" your organization could be at where you wouldn't benefit from Kube. It is far far easier to set up than a bare deployment of anything if you do GKE. Connect Github to CI to Kube... good to go.


Hi, can someone ELI5 to me what Kubernetes is? Also, what's the best way to get started / tutorials you can recommend for a new user? Thank you!


Lots of folks have explained it well in the normal terms used for it. Here's the comparison.

Compare it to init. Systemd is currently the dominant init for most Linux flavors. When your system starts and the kernel has done its thing, init takes over. It launches daemons, sets up networking, that sort of thing. After startup, it will relaunch things that crash, possibly do other things.

Kubernetes is sort of an init for an emerging pattern of cross-machine systems. Like systemd, it uses configuration to figure out what should run in what order, relaunches things that fail and manages networking (albeit in a very different context), that sort of thing.

There are huge differences because the problems are very different. But as a high level comparison, there are worse ones. (Especially if you're very charitable when interpreting "that sort of thing".)


I’m going to alter and copy this idea: “kubernetes is distributed init”. I’d say it’s also distributed cron and if you include the resource stuff, a distributed kernel :p


Yeah, I think Kubernetes is more like a distributed OS than a distributed systemd.


You have a bunch of servers with the Kubernetes software installed on all of them. One of them is the master, the rest are workers. You talk to the master through an API, and it talks to agents (kubelets) running on the workers.

You send the Master some YAML files that list what containers you want running, how many, and lots of optional details. Master makes it happen so it matches what you said should be running.

Master also constantly checks everything so it matches what you asked for, even if something changes like a server failing. That's it.
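
A minimal example of such a YAML file (the names and image are placeholders) looks like this; you hand it to the master with kubectl and it keeps three copies of that container running, restarting or rescheduling them as needed:

    cat <<EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3              # "how many"
      selector:
        matchLabels: {app: web}
      template:
        metadata:
          labels: {app: web}
        spec:
          containers:
          - name: web
            image: nginx:1.13  # "what containers"
    EOF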


Nice clear explanation. In the spirit of ELI5 I kind of think "container" is a terrible name for what seem to be super-lightweight VMs, I can't help thinking something like "instance" might have been better.

In the same way in OO languages we instantiate a class to get an object instance, containerisation seems to involve creating an instance (or several instances) of an image.

But I am really still learning all this stuff - I'm sure there's a reason behind it, and at the end of the day it's simply a set of terms one needs to absorb.


Instance is probably more overloaded, but also abstract in this case. Container refers to the virtualization mechanism that is lighter than an entire operating system but still isolates several processes into their own environment.

Instance of VM image => Running VM

Instance of Container image => Running Container


That's fair enough, I guess it's just the fact that the term until very recently confused me rather than clarified things; I found it genuinely obfuscatory, as a developer with relatively light infrastructure experience.

Of course I have come to terms with the fact that by now it's entrenched so we're using it, end of story - give me a few more years and I'll be pretty much adapted to it :-)


Full Disclosure: I work for Red Hat in the Container and PaaS Practice in Consulting.

As others here have said, it’s a Container Orchestration Platform with a largely pluggable architecture that also manages, to varying extents, Clustered Computing Resources, Application Resource Management, SDN Networking, DNS, Service Discovery, Load Balancing, and other concerns of “Cloud Native Application Development and Deployment“.

You can try it out at http://kubernetesbyexample.com/ .


Red Hat's open source OpenShift PaaS[1] is amazing. It has all the missing pieces you need for a production Kubernetes cluster: documentation and playbooks on how to run a production cluster[2], a build system, a container registry with fine-grained permissions, application templates, a logging framework...

Red Hat is a major contributor to Kubernetes and continually upstreams OpenShift features (like RBAC - they implemented it in OpenShift first, then upstreamed it, and then rebased OpenShift on top of it, removing the custom implementation).

I'm currently looking at migrating a large enterprise setup to Kubernetes/OpenShift.

[1]: https://github.com/openshift/origin

[2]: https://docs.openshift.org/latest/welcome/index.html


I really like OpenShift. I had a hard time grokking K8s but OpenShift is much more friendly. I was able to PoC a microservices app on OpenShift with Minishift[1]. This way you can play around with it locally and make sure you understand what you're doing. The documentation is pretty good. Red Hat has stepped up big time in this area.

[1]: https://www.openshift.org/minishift/


No offense, please, but whenever I see a website describing a thing as $chain_of_buzzwords like this, I immediately close the tab. This is technically a correct definition, but completely unhelpful to a person coming in from the outside, who is not familiar with the problem space.

Heck, even I am not sure what "application resource management" means, after working with Kubernetes for two years now. I could make educated guesses, but nothing certain.


The key concept to understand is that containers are just regular operating system processes with additional isolation.

Most people are familiar with memory isolation between processes. The use of further isolation mechanisms for the filesystem, I/O etc. is generally termed a container [1].

Kubernetes adds an orchestration layer on top of containers to manage processes running across different operating system instances.

[1] https://jvns.ca/blog/2016/10/10/what-even-is-a-container


The simplest way to explain it is that it is google's version of docker swarm.

Aside from that, it's a way to manage the deployment of containers across one or more hosts. Getting more traffic than usual? You have a program that detects it and sends a command to kubernetes/dockerswarm that tells it to scale up the number of web containers. Stuff like that.


Kubernetes is like an operating system for containers. A container is a lightweight virtual computer that lets you run applications in isolation, which is useful when you have applications with conflicting libraries. Once you have multiple applications packaged in containers, you need to decide which computer to run each one on, and deal with the networking, the load balancing if you scale out, upgrades, and such things. Kubernetes is designed to automate all of that away. You build a cluster (a group of computers), you tell Kubernetes you wish to deploy/run an app, and Kubernetes figures out where to deploy it, how to network it, link the pieces, load balance, etc.


Kubernetes is an open source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications


It is also mostly cloud agnostic which is nice. This gives a degree of cross cloud portability to application suites.

The setup of the cluster is cloud specific but after that everything can just talk Kubernetes.


Watch this video. It's literally K8s for kids (and quite awesomely well done): https://youtu.be/4ht22ReBjno


Kubernetes orchestrates clusters of containers in a data center.

It chooses where to run the containers, wires up networking between them. And keeps them running.

I'm on my phone so I can't grab the link for you, but a good intro is Kelsey Hightower's Kubernetes The Hard Way on GitHub.


Kubernetes is a thing that makes it easier for one program to talk to several other programs, by tying them together with elastic string, and putting them in whatever room it thinks they should live, and makes them as big or small as they said they needed to be. It runs in the engine bay of the truck, not in the cab of the truck, or in the container.


The best way to get started is to follow the official guide https://kubernetes.io/docs/tutorials/stateless-application/h...

You will use a tool called minikube, which is a “mini” version of Kubernetes that runs on your local machine.

You should focus on getting familiar with kubectl(1) first; it’s a simple CLI tool for managing a Kubernetes cluster.

If you have any questions & issues you can ask on Stack Overflow #kubernetes and you can also join the Slack channel.


To follow this up, can anyone explain the benefits over Docker? I've used Docker before but am unfamiliar with Kubernetes terminology. I do understand it's an open source project by Google.


Kubernetes USES docker, so it's not a competitor.

The main benefits over the competing docker project, Docker Swarm, is that it does WAY MORE, is 100% free and open source, and has much better adoption.

I would argue that with Docker Swarm you have to bring the glue yourself, and it doesn't really solve any of the hard problems. Kubernetes, on the other hand, is an all-in-one package that solves a LOT of hard problems for you.


+1 for open source then. Thanks for the explanation.


If you accept the analogy that Kubernetes is an OS for the Data Center, then ‘docker’ command is the process/application launcher in that model. ‘docker’ does one thing well and Kubernetes does many things to provide a management and monitoring framework around a Container Runtime like ‘docker’ or ‘cri-o’ or the others.


I appreciate this analogy. Thanks for the input. +1


Just to correct you, while Kubernetes originated at Google, it is now not owned by them and was contributed to the Cloud Native Computing Foundation (CNCF) and is independently governed there: https://www.cncf.io/projects/


Kubernetes runs Docker containers (and rkt containers) but provides additional functionality like ensuring x instances of that container run across a cluster, load balancing, service discovery, and configuration management.


Thanks!


So let’s say your application is running in Docker. How do you make sure that all the containers that should be running are running, and how do you make sure they can talk to each other and the outside world as necessary?

Kubernetes is one solution. You use config files to specify how many of what should be running and it makes sure the system stays in that state and recovers as need be. It’s at a higher level of abstraction than running Docker containers manually on machines.


This makes sense from container to container communication. Does this help with database instances or is it typical to leave database calls as an external source to the orchestration?


Projects I’ve worked on have always had the DB in a separate container.


This isn't a full explanation, but Kubernetes isn't a direct competitor to Docker -- but rather to Swarm, Docker's orchestration solution. Both work with Docker (or one of its direct competitors such as rkt) to orchestrate multiple containers that work in conjunction with each other.


So it's essentially preference when it comes to orchestration. Thanks. :thumbs_up:


What your applications (web browser, spreadsheet, word processor) are to your operating system, Docker is to Kubernetes. Docker lets you package applications into containers; Kubernetes manages your containers, in terms of scheduling, resource allocation, etc.


I assume it depends on the need but an orchestration could represent a bunch of microservice apps for a single larger project. This is making me think... can you nest orchestrations?


Received more answers than I anticipated. Thanks for the explanations my fellow tech peeps.


Katacoda has excellent tutorials/labs for Kubernetes. https://www.katacoda.com/learn


It's a tool for DevOps that helps orchestrate container deployments and scaling; you probably want to play around with Docker before diving into Kubernetes.


[flagged]


Most of these tools have been around for more than 5 years, many organizations have invested tons of money to use them, and each is supported financially by one or more corporations, so by that metric alone they aren't going anywhere anytime soon.


They need some sort of LTS versioning. Keeping up with their breakneck development pace is a job all its own.


Kops has not released a 1.9 version yet. Even k8s projects can't keep up.


Kops generally stays one release behind. 1.9 is being end to end tested in the last week. I wouldn't use that as an example of not keeping up, it's the established cycle for the project.


kops 1.9 is very close to ready now, but this is a longer lag than normal. We've historically released kops 1.x when we consider that k8s 1.x is stable, including all the networking providers and ecosystem components. That's typically about a month after release.

User feedback has been that we want to keep that, but that we should also offer an alpha/beta of 1.10 much sooner, so that users who want to try out 1.10 today can do so (and so we get feedback earlier). So watch for a kops 1.10 alpha very soon, and a 1.11 alpha much earlier in the 1.11 cycle.


For those that aren't aware, justinsb is the author of kops. Looking forward to the 1.9 release.


Ah - sorry, probably should have disclaimered that! I did write the original kops code, but now there's a pretty active set of contributors working on kops (and contributions are always welcome and appreciated!)


I think LTS type of implementations are down to container services providers and things like OpenShift.


The Linux kernel is part of distributions yet it still has LTSes. It seems a valid complaint.


We published a post about this a couple months ago which might interest you[0]. Here's the HN discussion in case you missed it[1].

[0] https://gravitational.com/blog/kubernetes-release-cycle/

[1] https://news.ycombinator.com/item?id=16285192


Agreed. This would also reduce mid/large enterprise worries regarding adoption (or investigating the tech for possible adoption). Being on the upgrade train several times a year is probably too much for them...


Yep at the moment (AFAIK) support is essentially for 9 months from release (current release + 2 previous) so without an LTS process, everyone has to upgrade at least that often to keep getting security patches...


Can anybody share their experiences with running applications that use persistent volumes on bare metal kubernetes?

I mean without cloud services like Google cloud persistent disks.


Disclosure: I work on IBM cloud

IBM has been running kubernetes on baremetals internally for over a year and just recently announced it as a product: https://www.ibm.com/blogs/cloud-computing/2018/03/managed-ku....

I will admit that it isn't as straightforward to get it to work as you might imagine at first. But once you've got the automation humming, it's been surprisingly easy to maintain. Would highly recommend that route! (but of course I'm biased since I've worked on it)

One interesting use case is that it's straightforward to access the underlying hardware on bare-metal machines with `SecurityContext: privileged` (you can do much more fine-grained security permissions; I'm just giving an example). So for instance, you can access GPUs and TPMs (trusted platform modules) this way.
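
A sketch of what that looks like (the node label value and image are made up; in practice you'd usually request specific capabilities or a device plugin rather than full privileged mode):

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: hw-probe
    spec:
      nodeSelector:
        kubernetes.io/hostname: baremetal-node-1
      containers:
      - name: probe
        image: busybox
        command: ["sleep", "3600"]
        securityContext:
          privileged: true   # full access to host devices
    EOF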


+1 for "regular" Ceph. Don't bother with that Rook stuff. Just set up a regular Ceph cluster and go. Kubernetes handles its stuff and a (much more reliable and stable) Ceph cluster handles blocks and files.


Can you explain what's wrong with Rook? I thought it was supposed to make life easier when running Ceph.


NIO, the self-driving car company is doing this. They did a pretty detailed interview on their use case which includes a 120 PB data lake, and Cassandra, Kafka, TensorFlow, HDFS. You can read here: https://portworx.com/architects-corner-kubernetes-satya-koma... . (Disclosure, I work for Portworx the solution they use for container storage, but hopefully the content speaks for itself).


Red Hat probably has the best production-quality deployment for self-hosted Kubernetes. They support running GlusterFS: https://github.com/openshift/openshift-ansible/blob/master/i...

Personally I wouldn't do it unless you have a Red Hat contract (or already have a team that manages GlusterFS), but it's worth looking at.


1200-core k8s cluster + 1.5 PB Ceph (shared nodes to some degree); no issues with persistent disks etc. The only "annoying" thing is figuring out RBAC initially.

You just use a storage controller (a StorageClass with a provisioner), then it's no work whatsoever; Ceph does the rest.
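For example, with the in-tree RBD provisioner a StorageClass looks roughly like this (monitor addresses, pool and secret names are placeholders for your own cluster):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: ceph-rbd
    provisioner: kubernetes.io/rbd
    parameters:
      monitors: 10.0.0.1:6789,10.0.0.2:6789
      pool: kube
      adminId: admin
      adminSecretName: ceph-admin-secret
      adminSecretNamespace: kube-system
      userId: kube
      userSecretName: ceph-user-secret

PVCs that reference storageClassName: ceph-rbd then get RBD images provisioned for them automatically.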


I've been recently experimenting with getting lustrefs usable with kubernetes, and needed some way to natively integrate it into the cluster I had.

https://github.com/torchbox/k8s-hostpath-provisioner proved to be useful: it allows you to use any mounted path on a node (or on all nodes, via the hostPath method) to satisfy a persistent volume claim and return a persistent volume backed by the mounted file system.

A similar setup could work on bare metal. Are you using something like OpenStack Ironic?


For Kubernetes 1.10+ I'd recommend the local volume provisioner instead: https://github.com/kubernetes-incubator/external-storage/tre...
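The local volume feature (beta in 1.10) ties a PV to a specific node; the provisioner discovers local mounts and creates PVs for you, but what it produces looks roughly like this (paths, sizes, and node names are placeholders):

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: local-storage
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: local-pv-ssd1
    spec:
      capacity:
        storage: 100Gi
      accessModes: ["ReadWriteOnce"]
      persistentVolumeReclaimPolicy: Retain
      storageClassName: local-storage
      local:
        path: /mnt/disks/ssd1          # a locally mounted disk
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["node-1"]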


OpenEBS looks promising, but I have not given it a try yet, ref: https://www.openebs.io/


Commercial: Portworx, StorageOS, Quobyte

Open source: OpenEBS, Rook (based on Ceph), Rancher's Longhorn

Commercial options are highly recommended if you want safety and support, as storage is hard, although people seem to be running all of these options well enough. Portworx is probably the most highly developed, with Quobyte a good option if you want shared storage with non-Kubernetes servers.


We experimented both with CephFS via Rook and GlusterFS via Heketi, and ran into enough operational issues and speed bumps that we're just using hostPath volumes for now.

IOW, they're not production ready yet.


OpenShift from Red Hat is built on Kubernetes. OpenShift offers a free tier to try out their cloud services. I'd recommend it to anyone who wants to try it out.

I'm not affiliated with Red Hat in any way, but I have enjoyed using the OpenShift platform.

Here's the link for anyone interested: https://www.openshift.com/pricing/index.html


http://openshift.io is also built on Kubernetes (by virtue of using openshift.com for its deployment pieces)


Good to see CSI is gaining traction. If only we could build containers inside K8s pods safely, I would be very happy. Maintaining a stupid Docker cluster just for building containers is really a burden.


Seems someone else was able to get that working: https://blog.jessfraz.com/post/building-container-images-sec...


If the GitLab CI provider for Kubernetes got this feature, it'd be amazing.

I could finally run GitLab's CI safely on Kubernetes and build containers.


I have a problem running the GitLab runner within k8s: the Docker layers are not cached. Is there any way to fix this?


In my experience the GitLab Runner on k8s should utilize the cache. Are you using Docker-in-Docker by chance? By default, I don't think it can cache data between jobs.


Can you share your config.toml for the k8s runner?


I switched to k8s about a year and a half ago, before deploying a modest frontend/backend pair. Although there is some friction, I generally like the approach (I used to use Borg, so it's a pretty low barrier).

The biggest problem I have is debugging all the moving parts when there are ~10+ minute async responses to config changes.


10 minutes? That sounds like a lot. We usually see a 1-2 minute wait for a changed ConfigMap to reach a running pod, or for a Deployment to roll over to a new ReplicaSet. (That's on k8s 1.4.)


Has anyone been using CoreDNS as Alpha in 1.9, or tried the Beta now in 1.10? What was your use case and reason for switching? How is it better than KubeDNS?


I don't have much experience with CoreDNS. But from the description in the README, https://github.com/coredns/coredns/blob/master/README.md, I gather it acts as a flexible and programmable DNS frontend for underlying services and can take on many roles, such as load balancing, depending on plugins.
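For example (going by the docs rather than my own setup), cluster DNS is wired together through a Corefile that stacks plugins, something like:

    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }

Each line is a plugin; swapping them in and out is what makes it more flexible than KubeDNS.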


We've been using CoreDNS for a bit now, and it's much better than KubeDNS. We found that KubeDNS would time out and drop requests from time to time. No such issues with CoreDNS. Would recommend (at least from a reliability standpoint).


Thanks for sharing. How are you monitoring timeouts and dropped requests on KubeDNS?


We were running with shorter TTLs on service records, and upstream apps threw rashes of errors when queries timed out.


I see some people who use the Elixir/Erlang ecosystem and then shove it into a Kubernetes system. Isn't this going against what the Elixir/Erlang system already provides? What are the use cases for this?


> Isn't this going against what the Elixir/Erlang system already provides?

No?

So, I guess, let's dig into that a little bit. Erlang has always kind of left the health and safety of your deployed system as a whole up to you, being preoccupied with giving you tools for understanding and maintaining its internal operation. OTP provides two semi-unique things: supervision of computation with control of failure bubble-up, and hot code reloading. Hot code reloading isn't used all that much in practice, outside of domains where it's _very important_ that the whole system can never be offline and load balancing techniques are not applicable. That's a specific niche and, sure, probably one that kubernetes can't service. With regard to supervision, there's no incompatibility between OTP's approach and a deployment consisting of ephemeral nodes that live and then die by some external mechanism. Seems to me that kubernetes is no different a deployment target in this regard than terraform/packer, hot-swapped servers in a rack, or any of the other deploy methods I've seen in my career.


I'm reading through Joe Armstrong's thesis on Erlang, and one of his main points is that concurrency programming shouldn't need to care about locality, at least in terms of correctness. Processes can run locally, or on separate machines, because each process is isolated as if it's its own VM. Containerizing it just makes for simpler deployment units, in my view.

That said, there might be some pragmatic tradeoffs eventually made for the Erlang VM that I'm not familiar with yet that makes the container move more dubitable.


For the folks who have implemented K8s in production: curious if you use it to do resource management as well, or if you also use something like Mesosphere in conjunction? Or would you stick with a like stack (DC/OS + Marathon)?

There is surprisingly little online discussion/documentation on the intersect of Resource Management and Container orchestration. Not sure if it is too early in the curve, a dark art, not actually done, or what...


Kubernetes does resource management; all cluster scheduler systems do. The difference between Kubernetes and Mesos for resource management is that resource requests and limits are optional in Kubernetes and mandatory in Mesos. It is best practice to specify resources in Kubernetes.
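In practice that just means adding a resources block to each container in your pod specs, e.g. (image name and values here are arbitrary placeholders):

    containers:
    - name: api
      image: example/api:1.0        # hypothetical image
      resources:
        requests:                   # used by the scheduler to place the pod
          cpu: 250m
          memory: 256Mi
        limits:                     # enforced at runtime
          cpu: "1"
          memory: 512Mi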


I'm still wondering about going with plain Kubernetes or investing time in OpenShift. Any insights from people that have tried both ways?


I've run vanilla k8s for about 3 years now in prod but am also fairly familiar with (and really like) OpenShift Origin. I usually tell people asking this question the following:

OO comes with a bunch of really nice quality-of-life improvements that are missing in k8s (but in a lot of cases can be added via 3rd-party TPRs/CRDs/etc), and you aren't deviating so far from k8s that you won't be able to go work on vanilla k8s in the future. Not at all. Most of the additional stuff is annotations that simply wouldn't do anything, and you'd remove them if you moved from OO to k8s.

I think that if you're brand new to the environment OO can really help you get running quickly. You just have to make sure that you do in fact dive into the actual k8s YAML and deal with ingresses, Prometheus, Grafana, RBAC, etc. at some point. I haven't used OO in a while, but I believe you could successfully do most of what I do day to day via yaml/json through the OO UI.

On the flip side, a lot of people will probably tell you to start from k8s, whether that's GKE or AWS or minikube or wherever, and go through Kubernetes the Hard Way. Personally, and I help people quite frequently on the k8s slack, I feel like that leads a lot of people down a path of frustration. It may be perfect for your style of learning or it may just scare you off.

Now when it comes to OO you are at the mercy of their releases. Their most recent release was Nov 2017 and k8s 1.10 was just cut. I'm not sure what version OO is on now; evidently they changed their version numbers to no longer correspond to the k8s version.

Join #kubernetes-users and #kubernetes-novice on slack.k8s.io if you need any help. It's a vibrant community. You can message me directly @ mikej if you'd like.

edit: Ok, OO 3.7 is k8s 1.9, that's perfect. I wait a few months before jumping into a new major version.


I’m about to cut OO 3.9 (based on 1.9) - we’ve been waiting for the subpath CVE fixes and regressions to get sorted out before we cut a release.


Awesome! I mentioned this in #openshift-users. The OO website still states:

> An OpenShift Origin release corresponds to the Kubernetes distribution - for example, OpenShift 1.7 includes Kubernetes 1.7.

I had to dig around to figure out the version, might want to update that. :)

And great work, btw, OO is fantastic.


I'm not a production user, but I've had a look at both. From what I've seen OpenShift provides a more polished and complete package than base Kubernetes, so if you're looking to get productive quick, it's probably a good option.

Kubernetes base is more flexible of course, but with that flexibility comes more stuff you have to do yourself...

Either way, they're both based on the same tech, so experience with one largely translates across to the other.


Kubernetes is great for those that are ops-inclined; OpenShift is a much better experience for your typical developer, though.

Personal anecdote: We've had OpenShift Origin running in production for a year and a half, I'm the DevOps guy and know how to poke around in the internals but the developers on my team are just that, developers. They just want to say "here's my code, go make it work", S2I, templates and Jenkins pipelines let them do just that - they go paste the URL of a git repository into the web interface and watch the project build and automatically deploy. It's a pretty magic experience watching a Junior developer deploy 13 applications over the course of a year with no supervision or hand-holding from a more-senior developer.
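For the CLI-inclined, the equivalent of pasting a repo URL into the web console is roughly the following (the repo URL and resulting service name are just examples):

    oc new-app https://github.com/example/my-node-app.git
    oc expose svc/my-node-app

S2I detects the language, builds an image, and deploys it; the expose step adds a route so it's reachable from outside.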


I deployed plain k8s to start with two months back and started adding more add-ons and plugins as we went. This let us stay flexible about what we use and control. So far no issues. Also, our customers prefer k8s workload files over OpenShift ones.


Note that OpenShift is Certified Kubernetes, so you don't need to choose between OpenShift and K8s: https://www.cncf.io/certification/software-conformance/

For example, Helm charts work on OpenShift: https://blog.openshift.com/getting-started-helm-openshift/

(Disclosure: I'm the executive director of CNCF and help run the Certified Kubernetes program.)


Something I've been wondering about: how stable is Kubernetes service discovery? I.e. can it entirely replace something like Eureka? Is there any reason not to use Kubernetes-provided service discovery?


Depends on whether all service discovery sources and targets are within k8s. If so, k8s works well; if not, not so much.
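Inside the cluster it mostly comes down to DNS: a Service is resolvable by a predictable name from any pod, roughly like this (service, namespace, port, and path are made up):

    # a Service "backend" in namespace "prod", from any pod in the cluster:
    curl http://backend.prod.svc.cluster.local:8080/health

Anything running outside the cluster doesn't see that DNS zone, which is where it stops being a drop-in replacement for something like Eureka.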


Funny that this and Solomon leaving Docker showed up on the same page.


I am hearing that everybody is interested in Kubernetes, yet relatively few people are actually using Kubernetes in production.

Are you using Kubernetes? And if not, what is your reason not to?


I'm not so sure about that last part. We (Kinvey, www.kinvey.com) have been using it for customer-facing services in production for over a year. Results have exceeded our expectations.


We have been using Kubernetes since 2016 to deploy OpenStack for SAP's internal cloud. We also offer the reverse, Kubernetes on OpenStack, as a service to our internal customers. [1]

All in all, we are very satisfied with Kubernetes, except that it's quite messy to do right on bare metal.

[1] Also available as open-source: https://github.com/sapcc/kubernikus


Please check out the large and growing list of case studies of companies of all sizes adopting Kubernetes: https://kubernetes.io/case-studies/

Disclosure: I'm executive director of CNCF, which funds the case study write-ups.


Funny so many people are talking here about how fast Kubernetes is moving... That's one of the complaints about Docker Swarm that led me to rule it out...


I am looking for

`kubectl auth login` — is it available yet?


so cool, would be even cooler at 1.20



