
Kubernetes 1.19 - gtirloni
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md
======
different_sort
Reminder: it's time to get off 1.16 now. Only 3 versions (current minus 2)
are supported upstream.

[https://kubernetes.io/docs/setup/release/version-skew-policy...](https://kubernetes.io/docs/setup/release/version-skew-policy/)

~~~
clhodapp
Tell that to Google. Their "regular" (recommended) release channel in their
managed Kubernetes Engine product _regular_ly falls off the back of the
supported range, while their "rapid" channel barely stays within it.

~~~
alec_kendall
I’d like to see those two teams eat lunch together

------
cagenut
Ingress going GA and the support cycle bumping up from 9 months to 12 are my
fav two things in this one.

Of course it'll probably be another few months (if not six) before we see this
in gke/eks.
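
Ingress graduated to GA in 1.19 as `networking.k8s.io/v1`, which notably makes
`pathType` required and restructures the backend field. A minimal v1 manifest
(the hostname, path, and service names here are placeholders, not from the
release notes) looks roughly like:

```yaml
apiVersion: networking.k8s.io/v1    # GA in 1.19 (was networking.k8s.io/v1beta1)
kind: Ingress
metadata:
  name: example
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /foobar
        pathType: Prefix            # now required: Exact, Prefix, or ImplementationSpecific
        backend:
          service:                  # v1 replaces the flat serviceName/servicePort fields
            name: foo-bar-v2
            port:
              number: 80
```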

~~~
TuringNYC
Ingress going GA seems big. I just read through the release notes but didn't
see as much about Ingress Controllers. I always found it strange that Ingress
Controllers are external components provided by the CSPs or 3rd parties (if
you are going for multi-cloud support as I am).

Looking at NGINX, Kong, Traefik, and the loose change of community-contributed
extensions to each of them, I always wonder why it isn't just part of the
core. Anyone know if that is the eventual plan? If so, what happens to the
small companies building solutions, do their products just become 2nd class
citizens?

~~~
jrockway
I think the problem is that HTTP reverse proxies do so much, it's hard to
specify what Kubernetes should do and what you should get an external solution
to do. (Personally I am appalled at how org-wide SSO works right now, and
almost started my own company to sell an open-source solution to the problem.
But... I don't know how to start a company and then there was a pandemic.
Someone else will sell you a not-as-good solution for mega-bucks.)

In my opinion, people are always going to want their special thing;
compatibility with their legacy nginx.conf, support for HTTP/3, whatever.
Kubernetes can never satisfy people with those requirements. For that reason,
I think people need to agree on a service discovery protocol, a route
discovery protocol, and then hook those into their frontend proxy of choice.
For example, I never really liked Ingress (lots of unspecified landmines, like
what order rules are evaluated in when matching a request to a route), and so
just run Envoy inside Kubernetes. Envoy is then fed a static route table and a
valid TLS certificate, and handles all traffic in an explicit and easy to
debug way. I wrote something to glue Envoy's service discovery to Kubernetes
so my static config file contains less boilerplate
([https://github.com/jrockway/ekglue](https://github.com/jrockway/ekglue)),
and it's served me well. (I need to write something to add route table entries
to DNS, though. I currently do that by editing my DNS records in a web
interface, which is tedious and uninteresting.)
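
As a rough illustration of the "static route table" approach described above
(the listener, domain, and cluster names here are hypothetical, not taken from
ekglue), a stripped-down Envoy v3 static config might look like:

```yaml
static_resources:
  listeners:
  - name: ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress
          route_config:
            virtual_hosts:
            - name: example
              domains: ["example.com"]
              routes:
              # Rule order is explicit: first match wins, no unspecified landmines.
              - match: { prefix: "/foobar" }
                route: { cluster: foo-bar-v2 }
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: foo-bar-v2        # with something like ekglue, clusters/endpoints
    type: STRICT_DNS        # would instead be discovered via CDS/EDS
    connect_timeout: 1s
    load_assignment:
      cluster_name: foo-bar-v2
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: foo-bar-v2.default.svc, port_value: 80 }
```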

With that in mind, what people should really be focusing on is standardizing
xDS (Envoy's config language basically), and hooking it into their container
orchestration framework and web server of choice. Then you can run Nginx on
Openshift or Apache on ECS and everyone is using the same tools to do the very
boring gruntwork of figuring out what your backends are called, and what
syntax you use to say "send all requests that match example.com/foobar to a
backend my container orchestration framework calls foo-bar-v2". I am sure it
will sort itself out eventually -- xDS has a lot of momentum in things that
aren't Envoy (notably gRPC), so this could all be a solved problem outside of
Kubernetes soon enough.

~~~
rdli
I generally agree, though I would add that the Service APIs work that is
forming the basis of Ingress v2 will help a lot here as well.

(I do think that xDS is a non-trivial interface to learn and standardize on,
though, and is probably overkill for most.)

~~~
jrockway
I think a too-complex API is better than a too-simple API. If the underlying
infrastructure is too simple, then people have proprietary workarounds to get
the features they need (see Kubernetes Ingress and all the vendor-specific
annotations). If the underlying infrastructure is too complex... then
implementation is harder, but at least you can do what you need. Someone else
can smooth it over and give you a simplified version, if the subset you need
is the same subset they needed.

This is always a spectrum, of course, and xDS is probably missing crucial
features... but at least if it becomes somewhat widely used, there will be an
incentive to add the missing features.

