
Kubernetes 1.12 - stablemap
https://kubernetes.io/blog/2018/09/27/kubernetes-1.12-kubelet-tls-bootstrap-and-azure-virtual-machine-scale-sets-vmss-move-to-general-availability/
======
darren0
It's about time for Kubernetes to do LTS releases. A three-month release cycle
with only nine months of support is too quick for companies, given that in
practice companies are running lots of clusters, not just one or two big ones.

~~~
manigandham
Kubernetes already has many vendors that provide packaged distributions and
long-term support if you want it.

If you use the managed hosting/cloud providers then updates are also automatic
and easy.

~~~
macNchz
I definitely have not found updates on GKE to be easy, even though all it
takes to trigger it is clicking a button.

Each one I’ve done on our cluster over the past couple of years has caused
_something_ to break, often in a strange and obscure manner.

One time an automatic master upgrade happened at 9pm and effectively sent all
of our logs into a black hole for reasons unknown. As with many of the other
upgrade hiccups, I just had to poke, prod, delete and recreate entities and
things magically started working again.

There is so much going on under the hood it makes sense that there would be
issues with upgrades, so the frequency of the releases is frustrating. Dealing
with after hours alarms just because the k8s release cycle is so aggressive
really grinds my gears. I would fully support an LTS version.

~~~
cagenut
this is the hardest thing for people to get sometimes; it was the original
core "concept" behind devops/ci-cd:

upgrades don't get easier when you delay them and make them bigger, they get
harder. if you think k8s upgrades are bad now, having them change twice as
much half as often would be _worse_.

~~~
macNchz
We've certainly embraced the devops approach, releasing many times per day
with lots of automation on k8s, but I don't see this as being incompatible
with using LTS releases.

Having an LTS release available offers the flexibility to choose when to
upgrade based on any number of factors affecting the business and engineering
cycles, the same way we pin software libraries to specific versions so our own
stability and build consistency isn't subject to the varying release cycles of
the package authors.

There's definitely a balance to be found between waiting so long to upgrade
that it becomes insurmountable, and having to frequently run disruptive,
mystery-downtime-causing upgrades for unneeded new features.

~~~
solatic
If you have an automated test suite for your applications, and you use it to
ensure that candidates pass before deployment, why can't you test new
Kubernetes releases by ensuring that your apps' test suites (which are known
to be good) pass on new Kubernetes versions? If the test suite fails on a new
Kubernetes version, then either improve the test suite to get rid of false
positives or fix the software.

In proper CI/CD practices, build "stability" is an anachronism. Master is
always green. Test changes before merging. If the changes fail the test suite
then don't merge and don't deploy until it's fixed. Master stays green. Why is
dynamic infrastructure any different?

LTS only makes sense as a strategy if testing and carrying out the upgrade
involve significant amounts of work which needs to be planned out in advance.
Ideally, this isn't the case.
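The approach described above can be sketched as a CI job that brings up a throwaway cluster at the candidate Kubernetes version and runs the existing, known-good suite against it. This is a hypothetical GitLab CI fragment, assuming a runner with minikube and kubectl installed; `deploy/`, `my-app`, and `run-integration-tests.sh` are illustrative names:

```yaml
# Hypothetical CI job: gate a Kubernetes upgrade on the app's own test suite.
# Assumes a runner with minikube and kubectl available; all names are placeholders.
test-k8s-candidate:
  variables:
    K8S_CANDIDATE: "v1.12.0"
  script:
    - minikube start --kubernetes-version="$K8S_CANDIDATE"
    - kubectl apply -f deploy/                               # the app's existing manifests
    - kubectl rollout status deployment/my-app --timeout=300s
    - ./run-integration-tests.sh                             # the suite that is "known good"
  after_script:
    - minikube delete
```

If the job goes green, the candidate version is no different from any other merge: master stays green, and the upgrade ships.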

------
te_chris
People with Kube experience, at what point should someone on Heroku consider
switching to Kube with something like GKE? I have a rough idea of the pros and
cons, particularly cost vs. time overhead, but curious about more subtle
things.

~~~
marenkay
I'm migrating "legacy" applications (as in non-containerized ones) to
container environments for medium to large companies, and I would suggest two
things:

- get a local test setup with minikube
  (https://github.com/kubernetes/minikube) and Helm (https://www.helm.sh/) and
  try out a few applications

- convert a project of yours to a Kubernetes deployment, then a Helm chart
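For the second step, "convert a project to a Kubernetes deployment" means, at minimum, writing a manifest along these lines (a minimal sketch; the name, image, and port are placeholders):

```yaml
# Minimal Deployment manifest for a single web service (illustrative values).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
```

A Helm chart then templates manifests like this one, so values such as the image tag become chart parameters.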

If both steps are successful and you still feel like Kubernetes is doing a
better job for you than Heroku, get a Kubernetes professional. I promise you,
there will be as many problems as with any other environment. The only
difference is that the Kubernetes issues are not documented.

~~~
te_chris
"The only difference is that the Kubernetes issues are not documented."

Move that one to the icebox then!

------
brtknr
The fast release cycle does give k8s regular news coverage!

------
dilyevsky
Anyone know if in-place VPS actually made it in? Doesn’t seem like it from
reading the gh issue. Imo this should be at the top of the release notes, not
hidden in the middle

~~~
pas
What is in-place VPS?

~~~
dilyevsky
Vertical pod scaling

------
vkat
Kubelet TLS bootstrap is a feature I am looking forward to. We just finished
dev work and testing to get to 1.11, since we have to support various CNIs and
automated upgrades from 1.10; it will probably be a couple of months before we
get to 1.12.
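For context, the now-GA TLS bootstrap flow lets a kubelet start with only a low-privilege bootstrap token and request its client certificate through the CSR API. A bootstrap kubeconfig looks roughly like this (a sketch; the server address, CA path, and token are placeholders):

```yaml
# Hypothetical bootstrap kubeconfig, passed to the kubelet via
# --bootstrap-kubeconfig; the kubelet trades the token for a signed client
# certificate and writes the resulting credentials to its --kubeconfig path.
apiVersion: v1
kind: Config
clusters:
  - name: bootstrap
    cluster:
      server: https://kube-apiserver.example.com:6443    # placeholder
      certificate-authority: /var/lib/kubelet/ca.crt     # placeholder
contexts:
  - name: bootstrap
    context:
      cluster: bootstrap
      user: kubelet-bootstrap
current-context: bootstrap
users:
  - name: kubelet-bootstrap
    user:
      token: "<bootstrap-token>"                         # placeholder
```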

------
Logishort
K8s 1.12 is here

