
Why wouldn’t you start with Kubes? Less vendor lock in and a much bigger community.



Because K8s adds a large amount of potentially unnecessary complexity?

- Setting up Swarm is trivial (see the sketch after this list); setting up Kubernetes is not (kubeadm is still not recommended for production, and there are valid reasons for this). I guess the only thing that's probably easier on K8s is a (totally unsupported) multiarch cluster - it's somewhat messy with Swarm[1]. Although my experiments with multiarch K8s failed (I ran into issues with the CNI stuff and postponed that research for later).

- The Compose file format is significantly simpler and more concise. Less code to write is good.

- Debugging a failing Swarm is significantly easier than debugging failing Kubernetes. Well, that's probably subjective and I haven't truly deeply debugged either, but at least I believe so - on occasion I was able to find my way through the moby, swarmkit & libnetwork source, and K8s feels like a very different beast.
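Re the setup point above, this is roughly what "trivial" means for Swarm - a minimal sketch, with the address and token as placeholders:

    # on the first node (hypothetical address 10.0.0.1)
    docker swarm init --advertise-addr 10.0.0.1
    # this prints a ready-made `docker swarm join --token ...` command

    # on every other node, paste that command, e.g.:
    docker swarm join --token <worker-token> 10.0.0.1:2377

That's the whole cluster; `docker node ls` on a manager should then list every member.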

____

Update[1]: I re-checked the multiarch status for Swarm and found out that now `docker service create --name test --placement-pref 'spread=node.id' --replicas 2 --no-resolve-image ubuntu sh -c 'while true; do uname -a; sleep 1; done'` just works on a freshly set up mixed x86_64+armhf Swarm cluster (no multiarch alpine images yet, though - those are promised to come next week). Guess Swarm has beaten K8s here.


I think several others have commented on this, but the learning curve of Swarm seems insignificant compared to the setup and implementation of a k8s cluster.

I work with a team of four total developers and realistically two of us handle the vast majority of "operations." As a result, what was important for us in orchestration was ease of setup, speed of initial implementation, and the lowest immediate and ongoing difficulty associated with whichever orchestration tooling we chose.


I've been running a GKE cluster solo for almost 2 years. Maaaybe I spent 1 week of eng effort on it.


GKE, from the research I have done and the small cluster I built for a side project, smooths out many of the particularly challenging aspects of a k8s implementation. I mean, it is advertised as a managed service; I do not think that is particularly comparable to setting up a raw k8s cluster or Docker Swarm cluster yourself. Would you say that is an accurate assessment?

My company is currently fairly locked into DO for misc. reasons, and as a result we would be deploying to DO, where you do not have the benefit of all the automated tooling/management provided by GKE.


Absolutely. Running k8s on bare metal requires full-time dedicated staff. Running k8s with GKE (and possibly also Azure Container Service, though I have no direct experience) is a one-off half-day effort.
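For comparison, the managed path is roughly the following - a hedged sketch with a made-up cluster name, zone and node count, not a recipe:

    # create a managed cluster and fetch credentials for kubectl
    gcloud container clusters create demo-cluster --zone us-central1-a --num-nodes 3
    gcloud container clusters get-credentials demo-cluster --zone us-central1-a
    kubectl get nodes   # the nodes should show up as Ready

The commands themselves are the easy part; the half-day is mostly deciding on networking, access and sizing.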

What people usually don't realize is that once you're in the cloud, all your services talk TCP. A service in GKE and another one in AWS are just a network hop away. Two considerations are:

* Network egress costs, which are currently outrageously high and serve as a lock-in device of sorts. Depending on your actual workloads, it may or may not make sense for you.

* Security. Though there are numerous VPC solutions out there, some of them supported by the cloud providers themselves.

Anecdotally, we also run a small RDS database in AWS, though we only need ~100 small queries a day.


Yeah, thanks for sharing your experiences on that.

Unfortunately our egress costs would likely be significant.

I am honestly anxious to get my employer started on the path to k8s, but until the tooling reduces the hours required to maintain it successfully, Swarm seems like the superior solution if you are locked in on a non-GKE/Azure Container Service provider.

I have heard good things about Kops helping set up/enable production-stable orchestration, but it is not ready for Digital Ocean yet either, although I think it is on the list™.
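For reference, the Kops flow on AWS looks roughly like this - a sketch only; the state bucket, cluster name and zone are made up:

    # kops keeps cluster state in an S3 bucket (hypothetical name)
    export KOPS_STATE_STORE=s3://my-kops-state-bucket
    kops create cluster --name=demo.k8s.local --zones=us-east-1a --node-count=2
    kops update cluster demo.k8s.local --yes   # actually creates the cloud resources

Nothing equivalent exists for DO yet, which is the gap mentioned above.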


Pivotal and Google built Kubo (recently renamed to the less memorable Cloudfoundry Container Runtime[0]) to make the deployment/management/upgrade thing easier. It's built on top of BOSH, which has a relatively long track record in managing stateful, long-lived, distributed systems on top of IaaSes.

Disclosure: I work for Pivotal.

[0] https://cloudfoundry.org/container-runtime/


Digital Ocean recently announced a partnership with a Kubernetes vendor. You can run managed K8S on DO now.


Do you have a URL for that announcement?


I have built a prod K8S cluster by hand (back in the 1.1 days). It taught me a lot of the foundations of K8S, so I don't regret it. It is why we are using GKE.

With GKE, most of your focus will be on (1) tooling for generating manifests, such as Helm or (forgot the name of it) - I had written something called Matsuri, but that is only useful if your team is a Ruby shop - and (2) what to put into the containers and how they link up.
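For anyone unfamiliar with that class of tooling: Helm wraps your manifests into a templated chart. A minimal sketch, assuming a current Helm 3 CLI and made-up names:

    helm create my-service                                          # scaffolds a chart with templated Deployment/Service manifests
    helm install my-release ./my-service --set image.tag=v1.2.3     # renders the templates and applies them to the cluster
    helm upgrade my-release ./my-service --set replicaCount=3       # re-renders with new values and rolls the change out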

So yeah, I agree with your assessment.


Because K8s has quite a bit of overhead, a steeper learning curve, and is more difficult to set up and maintain. I don't see this as a problem in bigger companies, but in companies with fewer than 10 IT people and no prior knowledge I'd take Swarm over K8s every day.


We have less than 10 people, 5 of them engineers. We are overbooked getting features out. We are using K8S in both dev and production, not Swarm.

A big part of that is because I have had experience running Docker (not Swarm), ECS, K8S, and building developer tooling (Vagrant), in addition to being a regular developer.

The flip side: I had the opportunity to try out these different technologies in production and saw where the pain points are. The overhead of K8S exists to solve those pain points, though that is probably not obvious to a small team without prior knowledge.

For example, I had set up a prod k8s by hand. I will never do that again. On the other hand, I know roughly what is going on when something breaks in our Google GKE cluster.


Hosh, could you share your learnings from setting up a prod K8s HA cluster? Could be useful for me.


I am not sure if I can. Setting it up from scratch let me become familiar with some of the underlying mechanisms of how k8s is put together. Part of that is:

https://rocketeer.be/blog/2015/11/kubernetes-from-the-ground...

And although I never ran through Hightower's Kubernetes the Hard Way, it is like that. https://github.com/kelseyhightower/kubernetes-the-hard-way

After running through that as a kind of kata, it was easier to infer what was going on and troubleshoot when things went wrong. The transfer of learning only happens if you run through these exercises yourself.

I can share some things at a higher level though:

Label selectors are your friend. Master them. They are used everywhere.
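A minimal illustration of what that means in practice (label and resource names made up):

    kubectl get pods -l 'app=web,tier=frontend'          # every query/action can be scoped by labels
    kubectl delete pod -l app=web                        # acts on whatever currently matches
    kubectl get svc -l 'environment in (staging, qa)'    # set-based selectors also work

Services, Deployments, network policies and so on all target pods the same way, via a selector block.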

Stateless is still easier than stateful. Start with putting stateless workloads in production before ever trying stateful.
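A stateless workload is basically just a Deployment - a minimal sketch with made-up names:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.25      # any image that keeps no state on local disk

Pods like this can be killed, rescheduled and scaled freely, which is why cluster operations stay simple.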

If you have the expertise to mix your stateful pods with your stateless pods, make sure you master StatefulSets and things like persistent volume claims.
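Very roughly, that looks like the following - a hedged sketch, with names, image and sizes invented:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: db
    spec:
      serviceName: db              # headless Service that gives each pod a stable DNS name
      replicas: 3
      selector:
        matchLabels:
          app: db
      template:
        metadata:
          labels:
            app: db
        spec:
          containers:
          - name: db
            image: postgres:15     # example image only
            volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:        # one PersistentVolumeClaim per pod; it follows the pod's identity
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi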

If you fake stateful pods like I did in production, then Kubernetes does not know how to cleanly shut them down. Automated maintenance involving kubectl cordon and drain no longer functions well. You end up having to hand-migrate stateful pods from node to node.
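For the normal (non-faked) case, the maintenance flow in question is roughly this (node name made up):

    kubectl cordon node-1                        # stop scheduling new pods onto the node
    kubectl drain node-1 --ignore-daemonsets     # evict pods; controllers recreate them elsewhere
    # ...perform the node maintenance...
    kubectl uncordon node-1                      # let the scheduler use the node again

With hand-rolled stateful pods, the drain step is exactly what stops being safe.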


If "Docker == containers" continues to hold true in many people's minds, then it's possible that Docker Swarm could feel like the "vanilla" orchestration platform. Of course those of us familiar with the platforms know better, but that mindset could persist, especially with pseudo-technical decision-makers.


'docker == containers' is true in that people often use the terms interchangeably. I've heard many people say 'you should get into Docker' and then talk about Kubernetes.


Could you elaborate on what alternatives to docker are worth checking out? I'm unfamiliar with containers, and off the top of my head I can't name anything besides docker.


There's CRI-O. v1.0 was announced yesterday - https://medium.com/cri-o/cri-o-1-0-is-here-d06b97b92a98


The lesson of Linux is that fragmentation is bad. This is a chance to fight fragmentation.


rkt is likely going to replace Docker as the default engine in Kubernetes and/or OpenShift (I vaguely recall hearing that it was coming, but can't find a source to cite): https://coreos.com/rkt/

(edit: or maybe I'm thinking of CRI-O)


Well, I started with Swarm because Kubes would have meant additional learning. Swarm worked from the same docker-compose.yml files I already knew.
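Concretely (stack name made up), deploying that compose file to Swarm is just:

    docker swarm init                                    # one-node Swarm to start with
    docker stack deploy -c docker-compose.yml mystack    # the compose services become Swarm services
    docker service ls                                    # shows replicas, image and ports per service

Note that `docker stack deploy` ignores compose-only options like `build:`, and plain docker-compose ignores the `deploy:` section, but the file otherwise carries over.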

Looking at moving to k8s now, but that was my reason for Swarm at the time. And it was the right one, as it just worked with minimal effort.


Still deploying to Swarm, very happy. It's much simpler compared to k8s.



