
Four years after its release, Kubernetes has come a long way - CrankyBear
https://techcrunch.com/2018/06/06/four-years-after-release-of-kubernetes-1-0-it-has-come-long-way/
======
firebacon
I think the kubernetes project is heavily driven by Google marketing, and that
they are not doing this out of charity, but because they are trying to get you
to use their cloud platform in the long run. They know getting somebody to
build their stuff on open-source kubernetes is a win for them. After some time
people will realize that running kubernetes yourself is actually harder and
more fragile than just running your app without it and at that point the
obvious move will be to use a hosted kubernetes service, like Google's cloud.

And I really think just handing over all our apps to google to run them for us
is _not_ in our (developers') interest in the long term. It would further
solidify Google's "monopoly of the internet" position and also means that in
the future - once we have succeeded in convincing our bosses to just rent
every interesting bit of technology from google - that the only interesting
jobs left will be... at Google.

So please go ahead and downvote me, but please also try to consider my point
of view (that there is a ton of very aggressive marketing with a financial
incentive going on here) next time you read and defend some kubernetes hype
piece.

~~~
koz_
I think Google's interests are aligned with developer interests here. The main
benefit to GCP that k8s success brings is that migration between clouds
becomes easier (e.g. moving from EKS to GKE would presumably be easier than
moving from Elastic Beanstalk to App Engine). Less lock-in benefits everyone
except the current market leader.

~~~
tannhaeuser
K8s only gives you theoretical leverage in negotiations with "cloud
providers". As long as you can't reasonably run it on your own hardware
(because it's simply way too complex and you'll be having trouble finding
experts, even at obscene salaries) you won't be realistically able to migrate
to another "cloud provider" either.

It's an old trick from the playbook of ERP software vendors: make your
software absurdly configurable so that all the meat is in the configuration.

Worse (and I know this isn't popular and will hurt): for all intents and
purposes, k8s just runs Docker images, which are pointless wrappers that add
nothing but points of failure and vulnerabilities.

~~~
pests
A container is just a linux process with special restrictions. It's not just a
pointless wrapper.

~~~
firebacon
Don't think the term "container" is really well-defined.

The "container" that docker and others implement is actually a collection of
different kernel namespacing features. I assume the one you are referring to
is cgroups. A better description would be that each process in a linux system
is part of (many) cgroup hierarchies, and you can have more than one process
in each group.

I think what parent meant is that you can actually get all of these really
nice isolation features for your service without using "Docker". It is trivial
to enable them using linux command line utils, or use something like systemd
which can also do it for you.
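
To make that concrete, here is a sketch of a systemd unit using its
namespace/cgroup-backed sandboxing directives. The unit name and binary path
are made up; the directives themselves are real systemd options:

```ini
# /etc/systemd/system/myservice.service  (hypothetical name and path)
[Unit]
Description=Example service with container-like isolation, no Docker

[Service]
ExecStart=/usr/local/bin/myservice
# Namespace-backed isolation:
PrivateTmp=yes            # private mount namespace for /tmp
ProtectSystem=strict      # read-only view of most of the filesystem
ProtectHome=yes           # hide /home from the service
PrivateDevices=yes        # minimal private /dev
NoNewPrivileges=yes       # block privilege escalation
# cgroup-backed resource limits:
MemoryMax=512M
CPUQuota=50%

[Install]
WantedBy=multi-user.target
```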

~~~
pests
Of course Docker is just the popular brand name for those isolation features
and 'image' 'container' and all related terminology exist independently
without Docker

~~~
yoz-y
Docker is also a cross platform command line tool that helps to manage said
images and automate a lot of work that would otherwise need to be done. For me
it is kind of like Dropbox: yes you can piece it together yourself, but using
it is very convenient and you can spend your time elsewhere.

~~~
pests
In my opinion it's on its way out. CNCF and k8s have already basically
replaced it with the CRI initiative and containerd[0][1]. Runc, rkt, and a
multitude of other tools run containers; img, orca-build, and others can build
them.

[0] I realize this is from Docker as well but I feel it supports my point that
Docker, Inc themselves are shedding baggage to still stay relevant.

[1] [https://kubernetes.io/blog/2018/05/24/kubernetes-containerd-...](https://kubernetes.io/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/)

~~~
yoz-y
I'll have to look into it, thanks. I'd like to not end up requiring different
tools to run the same container on Windows/Mac and Linux. Currently my
workflow is easy: I create my Dockerfiles on a mac, do all of the building and
testing, and then just tell devs on Windows to pull the repo and run
"docker-compose up --build". I hope in the future this will not grow into
tool hunting.
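
For anyone unfamiliar, that whole workflow hangs off one compose file; a
minimal sketch, with service names and images made up:

```yaml
# docker-compose.yml -- hypothetical two-service app
version: "3"
services:
  web:
    build: .            # built from the Dockerfile in this repo
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:10
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

Devs on any platform then run "docker-compose up --build" and get the same
stack.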

~~~
pests
You'll probably be able to continue using the same tools across platforms to
run containers.

I personally feel that Docker tries to do too much, almost the systemd of the
container world. I believe having alternative container runtimes and build
systems decoupled from Docker (both in the running program sense but also the
company) will be the best in the long run.

With or without docker your workflow will remain the same. It's the image
itself (the OCI image spec) that makes that cross-platform magic work. I myself do
my development on a Windows machine, ship a tar.gz off to Google's Cloud
Builder to build the image and publish to a registry which then gets tested
and debugged on a linux host.

------
tnolet
The only part missing in most Kubernetes write ups is that you probably don't
need it. I'm hopeful most engineers recognise this, although my day to day
experience paints a darker picture. I know of at least one hot startup
deploying Kubernetes just to attract engineering talent.

~~~
swyx
i am a mainly js engineer and so far have lived entirely within the firebase
world as far as my backend goes. in your (and everyone else's) opinion, then:
when -do- you need it? set some quantifiable bars for me please?

~~~
tnolet
I could throw out the "you'll know it when you see it" but I'll try not to!

1. You have a large amount of underused hardware/VMs you want to cluster.

2. You have an elaborate microservices system with a dynamic service discovery
mechanism.

3. You have the above over many regions / teams.

4. All your services are mostly stateless.

5. Your team of rockstar ops engineers is building their own crappy version of
Kubernetes just to deal with the engineering challenges your business faces.

~~~
cogman10
Funnily, my current employer is just that. We predate K8s, so it can be
forgiven somewhat. Transitioning, though, is just awful. We have every reason
to transition, yet getting the system set up, running, and working with the
current infrastructure is a daunting task.

You don't need K8s, until you do, at which point switching can be a nightmare
scenario.

------
alexk
I am writing this for folks who are reading this thread and wondering which of
the claims here are true and which are not.

* Google conspiracy theories - (Google is doing it to lock everyone in).

Not true. Kubernetes was created as a best-practices, fully open source Borg;
the very reason it uses Docker and etcd is the desire of Googlers to work with
the OSS community, and it paid off with Red Hat and later CoreOS and others
joining the project.

Kubernetes is complex, yes, but so is Mesos and OpenStack, and pretty much any
other production grade orchestration system I've seen, so I would disregard
this argument as well.

* Google is not even using Kubernetes.

Google is a big company; they can't switch to Kubernetes overnight or even in
5 years. I don't have this info, but I'm pretty sure many teams are using GKE
and internal adoption is growing.

* What if Google goes away? Everyone will get locked in!

Google is not the only major contributor to Kubernetes core these days. Red
Hat is another, along with many others, so this lock-in point is not valid.

Google is not even BDFL - the project is governed by the CNCF (an OSS
foundation similar to the Linux Foundation), to which Google donated the
project several years ago.

Kubernetes development is organized in the most fascinatingly open process I
have ever seen - SIGs (special interest groups) are fully open, and anyone can
participate and help develop the project. I have learned a lot about openness
just looking at the organisation of the dev process.

* You don't need Kubernetes if you have one service.

Not true. You can benefit from Kubernetes even if you have just one container.
It solves many problems you would otherwise have to solve yourself: service
discovery, configuration management, load balancing, secrets management,
fail-over, routing, release publishing, packaging, and many others.
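
Even the single-container case is only two small objects; a minimal sketch,
with names and image made up:

```yaml
# deployment.yaml -- hypothetical single-service app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2                  # fail-over: dead pods are replaced
  selector:
    matchLabels: { app: myapp }
  template:
    metadata:
      labels: { app: myapp }
    spec:
      containers:
      - name: myapp
        image: example/myapp:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service                  # service discovery + load balancing
metadata:
  name: myapp
spec:
  selector: { app: myapp }
  ports:
  - port: 80
    targetPort: 8080
```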

You probably don't want to self-host Kubernetes, because it is complex, unless
you have the experience and the desire to do so; that point is valid. But
thankfully, you can use GKE, AKS or EKS nowadays.

Disclaimer:

Our company works with Kubernetes so I am biased, but we have no affiliation
with Red Hat or Google.

~~~
jacques_chester
> _Google conspiracy theories - (Google is doing it to lock everyone in)._

My own conspiracy theory is slightly different: Google's aim is to scorch the
earth so that AWS cannot form a locked-in position.

I don't think they immediately saw this possibility, but that they moved
swiftly and energetically to support the growth of Kubernetes once they did
see it.

The alternatives for Google were all strategically painful. ECS might have
become successful, meaning a loss for Google. Mesos or Docker Swarm might have
taken off, leading to snap acquisitions of the relevant companies by Microsoft
or AWS, again a downside for Google.

As the plurality contributor to the commoditising winner, Google is able to
prevent the other two cloud giants from pulling ahead on container
orchestration.

Disclosure: I work for Pivotal, we have a Kubernetes-based product (PKS) we
co-develop with Google and VMware.

------
streblo
There's a lot of FUD in the comments here about kubernetes, so let me chime in
with a success story -- my company has been using kubernetes to operate a
reasonably large and complex set of services (>100 nodes/pods, >5B
requests/day) and we're doing it with only 3 engineers. Our experience has
been great, and if I had to do it again, I'd happily choose k8s without
reservation.

~~~
timeSl
Can I ask if you have a bare metal cluster or you do this through a cloud
service?

~~~
streblo
We are using Google Kubernetes Engine

------
dopeboy
I consult and I also run a small SaaS startup. All of my deploys thus far
service an audience of ~100 users per month - fairly small. I git push my
monolith to Heroku and call it a day. Because of that, I don't know anything
about containers, Kubernetes, Docker, etc.

As a forward thinking engineer, what should I do to stay up with the times? Is
it worth my time dockerizing my projects? Should I be using kubernetes when
deploying my projects?

~~~
d4l3k
Probably not. Container services really shine when you have a lot of different
teams that all need to launch services and not worry about what happens at the
hardware level or where they run. This makes sense at a lot of large companies
like Facebook and Google where they have a dedicated team running the
container layer. GKE makes it a bit easier but then you're locked into Google
Cloud.

If you're only running a few services it really doesn't make much sense to
set up Kubernetes. I set up a small cluster a few weeks ago and it certainly was
a lot more involved than I expected. I ended up deciding not to use it, since
it didn't really add much (for my use case) and I felt like if anything went
wrong I'd be in a world of hurt trying to debug it.

It took a lot more time to set up Kubernetes than my entire current deployment
system, which uses
[https://github.com/pressly/sup](https://github.com/pressly/sup).

However, dockerizing your projects might not be a bad idea depending on what
you're doing. I dockerize a lot of my projects since it makes it super easy to
deploy when using the automatic Docker Hub builds. Though I did have a problem
where Docker upgraded and made all of my existing containers unrunnable.

~~~
iampims

> _I felt like if anything went wrong I'd be in a world of hurt trying to
> debug it._

This is the part very few k8s aficionados mention: how complicated is it to
troubleshoot when something goes wrong?

~~~
geggam
Not to mention when folks talk HA and you ask about federation they just give
you a blank stare.

k8s is HA in the same region if you are using GKE. That's not really what you
can call HA.

~~~
pests
Google just posted a blog[0] about multi-region clusters and ingress.

It's a little hacky, as you need to create multiple clusters in each
zone/region and then tie them together with an anycast IP attached to a Google
Cloud Load Balancer.

[0] [https://cloudplatform.googleblog.com/2018/06/How-to-deploy-g...](https://cloudplatform.googleblog.com/2018/06/How-to-deploy-geographically-distributed-services-on-Kubernetes-Engine-with-kubemci.html)

~~~
geggam
Hacky is not production ready.

~~~
pests
It's not hacky in that sense. They provide a new kubemci tool and I'm sure it
will be given the k8s release treatment.

------
state_less
It's nice for users to have a common platform; you don't have to rewrite
nearly as much code or config (approaching zero) when changing providers.

Maybe it'll spur new developments due to the lower costs and provide bedrock
for big investments to build on. It's still early days.

It'd be nice to see more P2P cluster building tools for individuals and groups
of friends.

~~~
markovbot
>It'd be nice to see more P2P cluster building tools for individuals and
groups of friends.

I've been wanting to build a k8s cluster for myself and a few friends. I
didn't feel there was a lack of tooling to facilitate this, but I haven't
really looked into it much. Could you share some of your specific concerns,
and any tips if you've done this?

~~~
state_less
There is kubeadm [1], which is part of the project, and it does make it easier
than in the past to get a cluster running on hosts you control. You can create
VMs (Ubuntu, Debian, RHEL, etc.) and run kubeadm on each to join the nodes
together into a cluster.

I'd like it to be even easier though, so your friend just downloads the P2P
Kubernetes application, runs it, and can see your cluster and join it (with
some key that you give them) using a GUI. Similar process if they are starting
the cluster, except this time they name the cluster and hand out the invite
(with key) to their friends.

[1] [https://kubernetes.io/docs/setup/independent/create-cluster-...](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/)
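
For a rough idea of what the kubeadm flow looks like (the address, token and
hash here are placeholders, and exact flags vary by version):

```
# On the machine that will run the control plane:
kubeadm init

# kubeadm init prints a join command containing a bootstrap token;
# run that on each additional host to add it to the cluster:
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```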

~~~
markovbot
Ah okay, that's different from what I was thinking of: a cluster shared by a
group of friends. I'm running mine on VMs from a cheap VPS provider and was
going to just split the bill with others who used it.

Your use case is very interesting though. It would be nice to see that
eventually.

------
bg4
Agreed. And to follow that thought, you probably don't need microservices
either.

------
w8rbt
IMO, Kubernetes is another indicator of the success of Go (Docker is too). Go
is simple and fast like C, yet safe like Python. It really combines great
performance and safety.

~~~
kraig
sometimes i wonder how openstack would have done if it was written in go

~~~
wmf
It would be much faster but would still be extremely flaky and all the
architectural problems (e.g. RPC over MQ) would still be there.

------
throw2016
Devops has added a web of complexity that distances users from the underlying
technology.

Suddenly mounting filesystems is a 'storage driver', using networking
technology built into the kernel is a 'network driver' or using an Nginx
reverse proxy or Haproxy load balancer is an 'ingress controller'. This new
vocabulary confuses rather than informs and ends up adding more layers of
complexity.

What happens when things break? Simply knowing the json/yaml layers above is
not going to help without an understanding of networking and the underlying
technologies.

Facebook, Google, Netflix are not 2 engineers running devops, they all have
unique architectures and an army of experts to run their infrastructure. The
idea that you can be 'webscale' without ops experts is complete fantasy.

------
nstart
Maybe off topic, but before Kubernetes AND Docker, how did people manage
microservices? Was it a machine stack per service? Were there tools that
helped keep all the services running together in a single machine? I haven't
been in software long enough to know what this past looked like, so I'm
curious if anyone in the HN community knows more about this, especially from
personal experience.

~~~
firebacon
> Were there tools that helped keep all the services running together in a
> single machine?

Traditionally, services on a unix system are started and supervised directly
by the init process (the "init system"). For most of that time this was SysV
init, but in the last decade we got two major new systems: Upstart (dead now)
and the infamous systemd. Using systemd can still be a very good alternative
to using Docker today.

> Was it a machine stack per service?

That still sounds like a good approach to me today.

If your service is not big/expensive enough to need at least one full machine,
why bother cutting your app up into such small services in the first place?

For example, for web applications, a traditional setup would have been a
couple of dedicated servers running the database and a larger pool of
webservers serving the (stateless) application. In front of that you could
have two linux boxes doing network load balancing and failover, or a
commercial load-balancer "box" product.
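
In that setup the balancer config plays the role an "ingress controller" plays
today; a minimal haproxy sketch, with addresses and health-check path made up:

```
# /etc/haproxy/haproxy.cfg -- hypothetical pool of stateless webservers
frontend http-in
    bind *:80
    mode http
    default_backend webservers

backend webservers
    mode http
    balance roundrobin
    option httpchk GET /health     # failover: unhealthy servers are dropped
    server web1 10.0.0.11:8080 check
    server web2 10.0.0.12:8080 check
    server web3 10.0.0.13:8080 check
```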

------
vshan
It's astounding how much of Kubernetes is simply marketing. Google doesn't
even use it internally.

Service Fabric[0] is leagues ahead of any cluster orchestration framework,
heck, it's a complete distributed systems platform which powers all critical
Azure services (Cosmos DB, Cortana, SQL DB). It is the only framework that
allows native stateful applications; your state doesn't need to be delegated
to another service. It offers Reliable Collections (Java and C#) which are
persistent, replicated and sharded native collections.

I wish more devs knew about SF.

[0] [https://github.com/Microsoft/service-fabric](https://github.com/Microsoft/service-fabric)

~~~
the_new_guy_29
This one here is also pure marketing...

------
sidcool
A lot of displeasure around K8S here - are there any simpler alternatives?

~~~
parasubvert
HN tends to follow tech hype cycles; Kubernetes is entering the “trough of
disillusionment” now that people are starting to use it in anger. This reminds
me of prior fights like Solaris vs Linux. Kubernetes is not that bad; it’s
actually quite elegant in many ways. It’s just that it is tackling a hard
problem in the hardest way possible: by providing a new set of bottom-up
abstractions to replace traditional virtual machines, config management, and
network and storage virtualization, in the image of the Google model of
massive-scale workload scheduling. That means it is a whole new world to
learn, but you can’t quite forget about the old world yet, and need to map
between the two. Is it complicated? Yeah, mainly because it’s evolving so fast
and serving many audiences. It will get easier.

Simpler alternatives? Any PaaS like Heroku or Google App Engine or Cloud
Foundry.

Or a serverless function platform like AWS lambda (not sure if it’s “simpler”
but it is getting there)

Disclaimer: I work for Pivotal, which sells Cloud Foundry and a distro of
Kubernetes.

~~~
aphexairlines
AWS Lambda seems much simpler.

Bundle your code in a zip file. Describe it in a yaml template with the name,
runtime, zip filename, memory needed, entry point in your code, and external
HTTP path. Deploy it with "aws cloudformation package" and "aws cloudformation
deploy".

The platform will give you load balancing, scaling, logging, metrics, internal
and external endpoints, DNS, and CDN.

The PaaS platforms you mentioned are basically the same (you provide code + a
bit of config and they do the rest), except priced by the hour.

~~~
parasubvert
The plumbing involved with Lambda tends to be onerous, as does debugging. But
for many cases I completely agree it is simpler and a glimpse of the future...

