
Kubernetes 1.18 - onlydole
https://kubernetes.io/blog/2020/03/25/kubernetes-1-18-release-announcement/
======
fosk
Although I understand the fear many have of being left behind the technology
curve by not having the time - or the chance - to run Kubernetes in their
day-to-day work, we really must appreciate that Kubernetes is the future of
infrastructure. For those looking at Kubernetes with suspicion, it is a
natural instinct to think of K8s as a threat to all the knowledge we have
built up in the past few years, but it doesn't have to be that way. So many
things that we would have built ourselves (deployments, upgrades, monitoring,
etc.) can now be streamlined with K8s, and existing knowledge around those
topics will only make our transition to Kubernetes faster (besides, most of
our expertise is still usable within K8s anyway).

Managed solutions make K8s easy to use, while we can still benefit from being
able to run our workloads left and right on any cloud vendor. In one word:
portability, which in this day and age of cloud vendor lock-in is to be
protected at all costs.

I know that some organizations allow employees to allocate time to explore new
technologies and learn new practices. If your organization has no policy
around this, it is worth asking. Ultimately it will benefit the organization
and the business as a whole, as they will be able to build a solid foundation
to more rapidly transition and execute on their digital products. Kubernetes
is good for business.

~~~
AndrewKemendo
I'm sorry but this reads like some mix of a sales pitch and religious
preaching.

K8S doesn't eliminate the workflow for "deployments, upgrades, monitoring,
etc." - it just black-boxes them. It also assumes out of the gate that
everything needs to be able to do HA, scale to 1,000,000 instances/s, etc.

Over and over and over, people show examples (I'm guilty too) of running
internet-scale applications on a single load-balanced system with no
containers, orchestration, or anything.

So please stop preaching this as something for general computing applications
- it's killing me cause I've got people above me, up my ass about why I
haven't moved everything to Kubernetes yet.

~~~
rumanator
> K8S doesn't eliminate the workflow for "deployments, upgrades, monitoring,
> etc." - it just black-boxes them.

Kubernetes does not black-box anything. At most it abstracts away the cluster
comprised of heterogeneous COTS computers, as well as the heterogeneous
networks they communicate over and the OSes they run on.

I'm starting to believe that the bulk of the criticism directed at Kubernetes
comes from arrogant developers who look at a sysadmin's job, fail to
understand or value it, and proceed to pin the blame on a tool just because
their hubris doesn't allow them to acknowledge that they are not competent in
a different domain. After all, if they are unable to get containerized
applications to deploy, configure, and run on a cluster of COTS hardware
communicating over a software-defined network abstracting both intranets and
the internet, then of course the tool is the problem.

~~~
AndrewKemendo
It's the exact opposite. I don't think that stuff should be abstracted away.

~~~
throwaway894345
> It's the exact opposite. I don't think that stuff should be abstracted
> away.

Why not? The Kubernetes/serverless/DevOps people have a compelling argument--
organizations can move faster when dev teams don't have to coordinate with an
ops/sysadmin function to get anything done. If the ops/sysadmin/whatever team
instead manages a Kubernetes cluster and devs can simply be self-service users
of that cluster, then they can move faster. That's the sales pitch, and it
seems reasonable (and I've seen it work in practice when our team transitioned
from a traditional sysadmin/ops workflow to Fargate/DevOps). If you want to
persuade me otherwise, tell me about the advantages of having an ops team
assemble and gatekeep a bespoke platform and why those advantages are better
than the k8s/serverless/DevOps position.

~~~
coredog64
One of the things I see ignored in these discussions is the strategic
timeline. Yes, dev teams can yeet out software like crazy without an ops team.
But eventually you build up this giant mass of software the dev team is
responsible for. Ops was never involved, until one day the dev team's
management chain realizes it can free up a bunch of capacity by dumping its
responsibilities onto ops.

IMO, some of these practices come from businesses with huge rivers of money
who can hire and retain world class talent. I’d like to see some case studies
of how it works when your tiny DevOps team is spending 80% of their time
managing a huge portfolio of small apps. How then do you deliver “new, shiny”
business value and keep devs and business stakeholders engaged and onboard?

~~~
throwaway894345
I might be misunderstanding you, but this line makes me think you
misunderstood the k8s/serverless/devops argument:

> your tiny DevOps team is spending 80% of their time managing a huge
> portfolio of small apps

In a DevOps world (the theory goes), the DevOps team supports the core
infrastructure (k8s, in this case) while the dev teams own the CI pipelines,
deployment, monitoring, etc. The dev teams operate their own applications
(hence DevOps); the "DevOps team" just provides a platform that facilitates
this model--basically, tech like k8s, serverless, docker, etc. frees dev teams
from needing to manage VMs (bin-packing applications into VM images,
configuring SSH, process management, centralized logging, monitoring, etc.)
and from needing the sysadmin skillset required to do so well [^1]. You can
disagree with the theory if you like, but your comment didn't seem to be
addressing the theory (sincere apologies and please correct me if I
misunderstood your argument).

[^1] Someone will inevitably try to make the argument that appdevs should have
to learn to "do it right" and learn the sysadmin skillset, but such
sysadmin/appdev employees are rare/expensive and it's cheaper to have a few of
them who can build out kubernetes solutions that the rest of the non-sysadmin
appdevs can use much more readily.

------
taywrobel
Reminder to everyone that unless you have a truly massive or complex system,
you probably don't need to run K8s, and will save yourself a ton of headaches
by avoiding it in favor of a simpler system or a managed option.

~~~
outworlder
Not sure why this disclaimer has to be posted every time there's a discussion
on K8s. It is a tool: if you need to use it, use it. If not, don't.

Although I would argue that if you have the right use case (multiple
containers you need to orchestrate, preferably across multiple machines) and
you are not using it or a similar tool, you need to know what trade-offs you
are making. There are lots of best practices and features you get out of the
box that you would otherwise have to implement yourself.

You get:

* Deployments and updates (rolling if you so wish)

* Secret management

* Configuration management

* Health Checks

* Load balancing

* Resource limits

* Logging

And so on (not even going into stateful workloads here), but you get the
picture. Whatever you don't get out of the box, you can easily add. Want
Prometheus? That's an easy helm install away.
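
To make that list concrete, here's a minimal sketch of a single Deployment manifest picking up several of those features (the names and image are hypothetical):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-api              # hypothetical name
    spec:
      replicas: 3
      strategy:
        type: RollingUpdate          # rolling deployments and updates
      selector:
        matchLabels:
          app: example-api
      template:
        metadata:
          labels:
            app: example-api
        spec:
          containers:
          - name: api
            image: example.com/api:1.0.0     # hypothetical image
            envFrom:
            - configMapRef:
                name: example-config         # configuration management
            - secretRef:
                name: example-secrets        # secret management
            readinessProbe:                  # health check; a Service only
              httpGet:                       # load-balances across pods that pass it
                path: /healthz
                port: 8080
            resources:
              requests:                      # resource requests and limits
                cpu: 250m
                memory: 128Mi
              limits:
                memory: 256Mi
            # stdout/stderr end up in `kubectl logs` without extra wiring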

Almost every system starts out by being 'simple'. The question is, is it going
to _stay_ simple? If so, sure, you can docker run your container and forget
about it.

~~~
jarfil
You can migrate Docker deployments to K8s just by adding the parts you were
missing, so when in doubt it always makes sense to start with docker and
docker-compose, and only consider K8s as an alternative to Docker Swarm.

~~~
Shish2k
I actually spent the past 3 days attempting to migrate my DIY “docker
instances managed by systemd” setup to k8s, and found getting started to be a
huge pain in the ass, eventually giving up when none of the CNIs seemed to
work (my 3 physical hosts could ping each other and ping all the different
service addresses, but containers couldn’t ping each other’s service
addresses).

That said, if anyone REALLY wants to go the k8s route, it seems like starting
with vanilla docker did allow me to get 75% of the work done before I needed
to touch k8s itself :)

~~~
ojhughes
This should be really easy using k3s.

------
eric_khun
Has anyone properly solved the CPU throttling issues [1] they are seeing with
Kubernetes? Does this release solve them? We are seeing a lot of throttling on
every deployment, which impacts our latency, even when setting a really high
CPU limit. The solutions seem to be:

- remove the limit completely. Not a fan of this one, since we really don't
want a service going over a given limit...

- use the static CPU management policy [2]. Not a fan because some services
don't need a "whole" CPU to run...

Does anyone have any other solutions? Thanks!

[1] https://github.com/kubernetes/kubernetes/issues/67577

[2] https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#static-policy

~~~
dilyevsky
1. It makes no sense to quota your CPU, with the exception of very specific
cases (like metered usage). You're just throwing away compute cycles.

2. The same applies to dedicated cores, for pretty much the same reasons.

Having said that, if you really, really want a quota but don't want shit tail
latency, I suggest setting cfs_quota_period to under 5ms via a kubelet flag.
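
If you do go that route, a rough sketch of what it could look like in a kubelet config file (field and feature gate names are from memory and may vary by version, so treat it as an illustration rather than a verified config):

    # passed to the kubelet via --config; the --cpu-cfs-quota-period flag
    # is the command-line equivalent
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cpuCFSQuotaPeriod: 5ms            # default is 100ms; shorter periods reduce tail latency
    featureGates:
      CustomCPUCFSQuotaPeriod: true   # assumed gate name for this era of Kubernetes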

~~~
rumanator
> It makes no sense to quota your cpu with the exception of very specific
> cases (like metered usage).

This is not true at all. Autoscaling depends on CPU quotas. More importantly,
if you want to keep your application running well without noisy neighbors, or
without getting your containers redeployed for no apparent reason, you need to
cover all resources with quotas.

~~~
pluies
Agree re noisy neighbours, but autoscaling depends on _requests_ rather than
_limits_, so you could define requests for HPA scaling but leave out the
limits and have both autoscaling and no throttling.
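
As a rough sketch of that setup (hypothetical names, autoscaling/v2beta2 assumed), the container keeps a CPU request for the scheduler and the HPA to work from, while the CPU limit is simply omitted:

    # container resources inside the Deployment (abridged)
    resources:
      requests:
        cpu: 500m            # used by the scheduler and as the HPA's 100% baseline
        memory: 256Mi
      limits:
        memory: 256Mi        # memory limit kept; no CPU limit, so no CFS throttling

    # and the HPA that scales on CPU utilization relative to that request:
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-service               # hypothetical
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-service
      minReplicas: 2
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70   # percent of the *request*, not of a limit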

~~~
EdSchouten
The problem with having no throttling is that the system will just keep on
running happily, until you get to the point where resources become more
limited. You will not get any early feedback that your system is constantly
underprovisioned. Try doing this on a multi-tenant cluster, where new pods
spawned by other teams/people come and go constantly. You won't be able to get
any reliable performance characteristics in environments like that.

For such clusters, it's necessary to set up stuff like the LimitRanger
(https://kubernetes.io/docs/concepts/policy/limit-range/) to put a hard
constant bound between requests and limits.
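
A rough sketch of such a LimitRange (namespace and values are purely illustrative):

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: cpu-bounds
      namespace: team-a               # hypothetical namespace
    spec:
      limits:
      - type: Container
        defaultRequest:
          cpu: 250m                   # applied when a container sets no request
        default:
          cpu: 500m                   # applied when a container sets no limit
        maxLimitRequestRatio:
          cpu: "2"                    # a limit may be at most 2x the request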

~~~
dilyevsky
And how will you get feedback on being throttled, other than shit randomly
failing, e.g. connection timeouts?

~~~
snupples
Effective monitoring. Prometheus is free and open source. There are other paid
options.

~~~
dilyevsky
That was a trick question, actually - use your Prometheus stack to alert on
latency-sensitive workloads with usage over request, and ignore everything
else.

~~~
snupples
Of course, but you're missing the point. Depending on your application a
little throttling doesn't hurt, and it can save other applications running on
the same nodes that DO matter.

In the meantime you can monitor the rate of throttling and the ratio of CPU
usage to limit. Nothing stops you from doing this while also monitoring
response latency.

On the other hand, a CPU request DOES potentially leave unused CPU cycles on
the table, since it's a reservation on the node whether you're using it or
not.

Again, needs may vary.

~~~
dilyevsky
You've got it completely backwards. A request doesn't leave unused CPU on the
table, as it maps to cpu.shares; a limit does, being a CFS quota that
_completely prevents your process from scheduling even if nothing else is
using cycles_. Don't believe me? Here's one of the Kubernetes founders saying
the same thing:
https://www.reddit.com/r/kubernetes/comments/all1vg/comment/efgyygu

~~~
snupples
Incorrect. If a node has 2 cores and the pods on it have requests totalling
2000m, nothing else will schedule on that node, even if total actual usage is
0.

You can overprovision the limit.

This is easy to test for yourself.
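
As a quick illustrative sketch (numbers made up): on a 2-core node, two pods with the resources below fill the node's schedulable CPU through their requests alone, even while idle, whereas their limits are free to exceed what the node actually has:

    # per-container resources for pod A and pod B
    resources:
      requests:
        cpu: "1"       # 2 x 1 core of requests = node fully reserved for scheduling
      limits:
        cpu: "4"       # limits can be overcommitted; they only cap actual usage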

------
QUFB
I've seen the cloud-agnostic nature of Kubernetes mentioned in many posts
here. This is only true on the surface for a significant number of use cases
and deployment models.

Once petabytes of data are out there in your GCP or AWS environment,
"portability" will be costly due to the extortionate pricing of egress
bandwidth.

~~~
hodgesrm
A lot of people think of multi-cloud as some kind of arbitrage where you jump
quickly between markets. Running applications in cloud environments is a lot
more like leasing property. Once you set up there are costs to moving.

The portability argument boils down to saying you are not boxed in. If things
get bad enough you can move. This is a big long-term advantage for businesses
because it means you can correct mistakes or adjust to changing business
conditions. That's what most people who run companies are really looking for.

~~~
MrBuddyCasino
It also nullifies the advantages of the cloud. The whole point is the
proprietary services they offer, and using those instead of building them
yourself. Trying to be „cloud agnostic“ is one of the biggest mistakes one
could make.

~~~
growse
I'd argue the whole point is actually that you can lease CPU/memory/storage as
you need it, and capacity constraints become simple cash constraints. You
don't need to shell out millions of dollars on enormous specialist hardware
just to be able to use it for an hour.

Lots of big companies that operate extensively in AWS/Azure/GCP don't go
anywhere near the managed services they offer, because they end up being a
horror show in terms of scalability, functionality and troubleshootability.
Depending on your risk appetite, running Kafka on Fargate/EC2 is a lot more
attractive than using Kinesis (for example).

------
dragonsh
Containers are not necessary if systems are built with something like Guix or
Nix, as these provide transactional updates to applications and their
dependencies, and are secure by default since there is no need to run a daemon
with root access to run, monitor, and manage containers. They let you manage
application deployments the same way application source code is managed, with
versioned deployments and rollbacks all baked in.

But as with any technology, Guix and Nix are still a decade ahead of the
present, and may pick up later when technology converges back to running
application servers and other dependent software in isolation with user-level
namespaces.

Kubernetes tries to solve one problem and creates 10 other infrastructure
problems to manage, and instead of letting you work on the application, it
ties the company to a specific distribution or cloud service provider. So far
there is nothing revolutionary in it unless the startup or company adopting
K8s is running Google-sized operations.

From a software developer's perspective, which is the main audience of HN, it
will be popular, as most of them dream of or want to work for a company of
Google's size. Startup founders want to solve Google-like scaling problems
from the beginning, as everyone dreams of being Google-sized from day one.
Kubernetes' complexity is useful at large scale, but for the majority, i.e.
over 90% of deployments, simple containers, bare metal, or VMs with
traditional configuration management will be sufficient.

~~~
gigatexal
Kubernetes is here. Better to learn it and use it where appropriate than to
fight it.

~~~
dragonsh
I don't need to fight it; I am not looking for a job, and I don't follow the
herd mentality. My own startup is pretty happy with Guix and related
infrastructure on bare metal. It works well for us, and we can still stand on
the shoulders of giants who have done a much better job of managing large
distributed infrastructure.

~~~
gigatexal
That’s awesome. I’m not evangelizing it or saying you should use it, as I did
say “where appropriate”, and at your startup it doesn’t seem appropriate.

There are probably 10x more engineers with experience in k8s than in Guix,
though, so... perhaps that could be a factor, though not a reason to change
out the infra completely.

------
threatofrain
Anyone have a recommended guide for Kubernetes?

~~~
stevepike
I don't know if it's been updated for more recent versions, but I read
"Kubernetes: Up and Running" last year and it was excellent. It covers the
motivations behind some of the decisions, which helped things click for me.

~~~
chrizel
I also read this book last month and it gave me a nice overview. But when
working through the examples you notice that, even with the updated second
edition from the end of 2019, many of them are outdated. Not a big problem to
solve, just small differences, but then you realize that Kubernetes is a
fast-moving target. A book about technology will always have this problem, but
the K8s space seems to move especially fast at the moment.

Now I’m also leaning more towards the official docs as a recommendation,
because they should always be more up to date... nevertheless, “Kubernetes: Up
and Running” took away my fear of this (at first) complex-looking
architecture. In the end, K8s is not that difficult to understand, and the
building blocks involved make sense once you get the hang of it.

By the way, Microsoft is currently giving away the second edition of
“Kubernetes: Up and Running” for free:
https://azure.microsoft.com/en-us/resources/kubernetes-up-and-running/

------
crb
Podcast interview with the release team lead:
[https://kubernetespodcast.com/episode/096-kubernetes-1.18/](https://kubernetespodcast.com/episode/096-kubernetes-1.18/)

------
robbiet480
Really annoying that AWS EKS only got K8S 1.15 last week.

------
lprd
I've taken a couple of k8s courses and I understand all the small parts that
make up k8s, but it still seems that there are no easy solutions for
installing on bare metal. The default recommendation is always to just roll
with a managed solution. This is slightly irritating considering there are
plenty of companies out there who own their own infrastructure.

There are plenty of great developer distributions out there (k3s, kind,
minikube, microk8s), but those are single-node only and aren't meant for
production use.

I'm still searching for a solid guide on how to get k8s installed on your own
hardware. Any suggestions would be very appreciated!

~~~
ojhughes
k3s supports multinode

~~~
lprd
Ah, I wasn't aware. Does it support HA as well?

~~~
ojhughes
Yes, if you use an external DB for the k8s control plane.

https://rancher.com/docs/k3s/latest/en/installation/datastore

~~~
pas
There's also an experimental embedded DQlite (raft + sqlite) thingie too!

[https://rancher.com/docs/k3s/latest/en/installation/ha-
embed...](https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/)

------
joseph
Shameless plug: Keights, my Kubernetes installer [1] for AWS, supports 1.18
(the latest version available on EKS is 1.15). Keights also supports running
etcd separately from the control plane, lets you choose instance sizes for the
control plane, and can run in GovCloud.

1. https://github.com/cloudboss/keights

~~~
llarsson
How does it compare to kops or, dare I say it, kubespray?

~~~
joseph
Hi, I've often worked in corporate environments where it wasn't necessarily
allowed to spin up a VPC or to create an internet gateway. At the time I was
creating this, many companies in healthcare or otherwise locked-down
industries could not use kops due to its requirement for an internet gateway.
I created keights to fill that space, so that anyone could run Kubernetes in
AWS, even in air-gapped environments. This is pretty common nowadays, by the
way - enterprises have a team to manage all AWS accounts, and they set up VPCs
and connectivity ahead of time, before development teams get access to the
account; access to the internet is through a proxy only, and no one can modify
the network. Not to mention, most access to Amazon's services can now be done
without an internet gateway, using VPC endpoints. Keights fits well in this
world of locked-down network access, and it works well even in GovCloud (you
would need to build the AMI there, as my public AMIs cannot be shared with
GovCloud accounts).

Keights and Kubespray both use Ansible, but they do it in very different ways.
(Disclaimer: I haven't used Kubespray, only looked over the documentation.)
Keights uses Ansible roles to build CloudFormation stacks to produce a
cluster. The nodes in the cluster bootstrap themselves using systemd services
that are baked into the AMI; Ansible does not run on the nodes in the cluster.
Kubespray, as I understand it, uses a traditional Ansible approach of pushing
configurations over ssh to nodes in its inventory. To my knowledge, it does
not actually build the machines in the cluster; it just configures existing
machines. Keights does the full end-to-end automation to bring up a working
cluster, including the creation of all required AWS resources (autoscaling
groups, a load balancer for the apiserver, security groups, etc. - though you
do provide certain resources as parameters, for example your VPC ID and subnet
IDs, due to the aforementioned requirements to fit into locked-down
environments).

------
mehdix
FWIW, a few days back I upgraded my microk8s-based k8s to the 1.18 beta
channel and it solved at least the ImagePullBackOff problem that I had with
pulling from my private GitLab registry. Worked like a charm.

------
EdwardDiego
I'm pleased to see the changes in the HPA; having pod scale-up/down periods
tied to a system-wide setting was a bit painful.

~~~
hiroshi3110
This one?
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#new-api-fields

> autoscaling/v2beta2 HorizontalPodAutoscaler added a spec.behavior field that
> allows scale behavior to be configured. Behaviors are specified separately
> for scaling up and down. In each direction a stabilization window can be
> specified as well as a list of policies and how to select amongst them.
> Policies can limit the absolute number of pods added or removed, or the
> percentage of pods added or removed. (#74525, @gliush) [SIG API Machinery,
> Apps, Autoscaling and CLI]

~~~
EdwardDiego
Yep, in the version I'm on (1.15) there are only global flags and config [1]
which apply to all HPAs, but not all apps should scale the same way - our
net-facing glorified REST apps can easily scale up with, say, a 1-2m window,
but our pipeline apps sharing a Kafka consumer group should be scaled more
cautiously (as consumer group rebalancing is a stop-the-world event for group
members).

1: https://v1-15.docs.kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#algorithm-details
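
For illustration, a rough sketch of what the new 1.18 behavior field could look like for the cautious case (names and numbers are hypothetical):

    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: pipeline-consumer            # hypothetical
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: pipeline-consumer
      minReplicas: 3
      maxReplicas: 12
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70
      behavior:
        scaleUp:
          stabilizationWindowSeconds: 300   # wait before reacting to spikes
          policies:
          - type: Pods
            value: 1                        # add at most one pod...
            periodSeconds: 120              # ...every two minutes, to limit rebalances
        scaleDown:
          stabilizationWindowSeconds: 600
          policies:
          - type: Percent
            value: 10
            periodSeconds: 300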

------
deboflo
Kubernetes? No thanks.

