
Kubernetes 1.10 released - el_duderino
http://blog.kubernetes.io/2018/03/kubernetes-1.10-stabilizing-storage-security-networking.html
======
wolfgang42
Is there an easy way to get a single-node production-ready Kubernetes
instance? I'd like to start using the Auto DevOps features that GitLab is
adding, but all the tutorials I can find either have you installing minikube
on your laptop or setting up a high-availability cluster with at least 3
hosts. Right now I'm using CoreOS and managing Docker containers with systemd
and shell scripts, which works all right but is tedious and kind of hard to
keep track of. I don't have anything that needs to autoscale or fail over or
do high availability, I just want something that integrates nicely and makes
it easy to deploy containers.

EDIT: I should have clarified: I want to self-host this on our internal VMware
cluster, rather than run it on GKE.

~~~
moondev
Sure. Install kubeadm on the node, run "kubeadm init", install a pod network,
then remove the master taint.
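
Something like this sketch, assuming kubeadm/kubelet/kubectl are already
installed; flannel is just one pod-network choice, and the CIDR flag below is
specific to it:

    # initialize the control plane on this node
    kubeadm init --pod-network-cidr=10.244.0.0/16

    # point kubectl at the new cluster
    mkdir -p $HOME/.kube
    cp /etc/kubernetes/admin.conf $HOME/.kube/config

    # install a pod network (flannel shown; any CNI plugin works)
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    # let workloads schedule on the master, since it's the only node
    kubectl taint nodes --all node-role.kubernetes.io/master-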

~~~
m1sta_
Reminds me of the plumbus “How It’s Made”.

------
wpietri
I'm very curious to hear field reports from people who switched to using
Kubernetes in production in the last year or so. Why'd you do it? What got
better and what got worse? And are you happy with the change?

~~~
linsomniac
One data point: I've wanted to, but so far I haven't made much progress. I'd
say my biggest impediment has been documentation: I can get it installed, but
making it actually work seems to be beyond the scope of the docs. I got
closest once I found out about "kubespray" for installing the cluster, rather
than following the official Kubernetes installation process.

I spent a couple of weeks, not quite full time, going through tutorials,
reading the documentation, reading blog posts, and searching for solutions to
the problems I was having. My biggest problem was exposing services "to the
outside world". I got a cluster up quickly and could deploy example services
to it, but unless I SSH port-forwarded to the cluster members I couldn't
access the services. I spent a lot of time trying to get various ingress
configurations working, but couldn't find anything beyond introductory-level
documentation for the various options.
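
(To illustrate what I mean by "exposing to the outside world": without a cloud
load balancer, the simplest built-in option is a NodePort service; a minimal
sketch, with placeholder names and ports:)

    apiVersion: v1
    kind: Service
    metadata:
      name: example-web          # placeholder name
    spec:
      type: NodePort             # opens the same port on every node
      selector:
        app: example-web         # must match the pods' labels
      ports:
        - port: 80               # cluster-internal port
          targetPort: 8080       # container port
          nodePort: 30080        # reachable at <any-node-ip>:30080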

Kubespray and one blog post I stumbled across got me most of the way there,
but at that point I had well run out of time for the proof of concept and had
to get back to other work.

My impression was that Kubernetes is targeted to the large enterprise where
you're going to go all in with containers and can dedicate a month or two to
coming up to speed. Many of the discussions I saw talked about or gave the
impression of dozens of nodes and months of setup.

Other options I'll probably look at when I have time again: Deis
[https://deis.com/](https://deis.com/) , Dokku
[http://dokku.viewdocs.io/dokku/](http://dokku.viewdocs.io/dokku/) , Flynn
[https://flynn.io/](https://flynn.io/) , LXC
[https://linuxcontainers.org/lxc/introduction/](https://linuxcontainers.org/lxc/introduction/)
and (though I'd been trying to avoid it) Docker Swarm
[https://docs.docker.com/engine/swarm/](https://docs.docker.com/engine/swarm/)

~~~
andrewstuart2
Are you trying to play around, or set up a working cluster? If you just want
to play around, I'd suggest minikube to get things going.

Anecdotally, I got an HA cluster running across 3 boxes in the space of about
a month, with maybe 2-3 hours a day spent on it. The key for me was iterating,
and probably that I have good experience with infrastructure in general. I
started out with a single, insecure machine, added workers, then upgraded the
workers to masters in an HA configuration.

I don't think it is really that hard to get a cluster going if you have some
infrastructure and networking experience, especially if you start with low
expectations and just tackle one thing at a time incrementally.

~~~
oso2k
Full Disclosure: I work for Red Hat in the Container and PaaS Practice in
Consulting.

At Red Hat, we define an HA OpenShift/Kubernetes cluster as 3x3xN (3 masters,
3 infra nodes, 3 or more app nodes) [0], which means the API, etcd, the hosted
local container registry, the routers, and the app nodes all provide (N-1)/2
fault tolerance (with 3 of something, 1 can fail).

Not to brag, since we're well practiced at this, but I can get a 3x3x3 cluster
up in a few hours. I've led customers through a basic 3x3x3 install (no hands
on keyboard) in less than 2 days, and our consultants are able to install a
cluster in 3-5 working days about 90% of the time, even with impediments like
corporate proxies, wonky DNS or AD/LDAP, not-so-Enterprise load balancers, and
disconnected installs. Making a cluster ready for production is about right-
sizing and doing good testing.

[0] [http://v1.uncontained.io/playbooks/installation/#cluster-design-architecture](http://v1.uncontained.io/playbooks/installation/#cluster-design-architecture)

~~~
user5994461
One last challenge. Can you do all the setup without being root?

~~~
oso2k
As long as the user can install packages (say, via the /etc/sudoers file) and
make config changes, yes. That's supported by our installer [0].

[0] [https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.example#L32-L43](https://github.com/openshift/openshift-ansible/blob/master/inventory/hosts.example#L32-L43)
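
Illustratively, the relevant inventory settings look something like this (the
account name is a placeholder; it just needs sudo rights):

    [OSEv3:vars]
    # connect as an unprivileged user and escalate with sudo
    ansible_user=deployer
    ansible_become=true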

------
BaconJuice
Hi, can someone ELI5 what Kubernetes is? Also, what's the best way to get
started, and what tutorials can you recommend for a new user? Thank you!

~~~
moistoreos
To follow this up, can anyone explain the benefits over Docker? I've used
Docker before but am unfamiliar with Kubernetes terminology. I do understand
it's an open source project by Google.

~~~
p3llin0r3
Kubernetes USES Docker, so it's not a competitor.

The main benefits over the competing Docker project, Docker Swarm, are that it
does WAY MORE, is 100% free and open source, and has much better adoption.

I would argue that with Docker Swarm you have to bring the glue yourself, and
it doesn't really solve any of the hard problems. Kubernetes, on the other
hand, is an all-in-one package that solves a LOT of hard problems for you.

~~~
moistoreos
+1 for open source then. Thanks for the explanation.

------
ascendantlogic
They need some sort of LTS versioning. Keeping up with their breakneck
development pace is a job all its own.

~~~
manojlds
Kops has not released a 1.9 version yet. Even k8s projects can't keep up.

~~~
AlexB138
Kops generally stays one release behind. 1.9 has been in end-to-end testing
over the last week. I wouldn't use that as an example of not keeping up; it's
the established cycle for the project.

~~~
justinsb
kops 1.9 is very close to ready now, but this is a longer lag than normal.
We've historically released kops 1.x when we consider that k8s 1.x is stable,
including all the networking providers and ecosystem components. That's
typically about a month after release.

User feedback has been that we should keep that policy, but also offer an
alpha/beta of 1.10 much sooner, so that users who want to try out 1.10 today
can do so (and so we get feedback earlier). So watch for a kops 1.10 alpha
very soon, and a 1.11 alpha much earlier in the 1.11 cycle.

~~~
iooi
For those that aren't aware, justinsb is the author of kops. Looking forward
to the 1.9 release.

~~~
justinsb
Ah, sorry, I probably should have disclosed that! I did write the original
kops code, but now there's a pretty active set of contributors working on kops
(and contributions are always welcome and appreciated!)

------
cube2222
Can anybody share their experiences with running applications that use
persistent volumes on bare metal kubernetes?

I mean without cloud services like Google cloud persistent disks.

~~~
scurvy
+1 for "regular" Ceph. Don't bother with that rook stuff. Just setup a regular
Ceph cluster and go. Kubernetes handles its stuff and a (much more reliable
and stable) Ceph cluster handles blocks and files.
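
With a regular Ceph cluster in place, Kubernetes consumes it through a
StorageClass using the in-tree RBD provisioner; a rough sketch (the monitor
address, pool, and secret names are placeholders for your environment):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ceph-rbd
    provisioner: kubernetes.io/rbd
    parameters:
      monitors: "10.0.0.1:6789"          # placeholder Ceph monitor
      pool: kube                         # placeholder RBD pool
      adminId: admin
      adminSecretName: ceph-admin-secret # secret holding the admin key
      userId: kube
      userSecretName: ceph-user-secret   # secret used from pod namespaces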

~~~
stormbeard
Can you explain what's wrong with Rook? I thought it was supposed to make life
easier when running Ceph.

------
humbleMouse
OpenShift from Red Hat is built on Kubernetes. OpenShift offers a free tier to
try out their cloud services. I'd recommend it to anyone who wants to give it
a go.

I'm not affiliated with Red Hat in any way, but I have enjoyed using the
OpenShift platform.

Here's the link for anyone interested:
[https://www.openshift.com/pricing/index.html](https://www.openshift.com/pricing/index.html)

~~~
bmaupin
[http://openshift.io](http://openshift.io) is also built on Kubernetes (by
virtue of using openshift.com for its deployment pieces)

------
CSDude
Good to see CSI gaining traction. If only we could build containers inside K8s
pods safely, I would be very happy. Maintaining a stupid Docker cluster just
for building containers is a real burden.

~~~
codereflection
Seems someone else was able to get that working:
[https://blog.jessfraz.com/post/building-container-images-securely-on-kubernetes/](https://blog.jessfraz.com/post/building-container-images-securely-on-kubernetes/)

~~~
kuschku
If the GitLab CI provider for Kubernetes got this feature, it'd be amazing.

I could finally run GitLab's CI safely on Kubernetes and build containers.

~~~
tuananh
I have a problem running the GitLab Runner within k8s: the Docker layers are
not cached. Is there any way to fix this?

~~~
joshlambert
In my experience, the GitLab Runner on k8s should utilize the cache. Are you
using Docker-in-Docker by chance? By default, I don't think it can cache data
between jobs.
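
For reference, the Docker-in-Docker pattern looks roughly like this in
.gitlab-ci.yml; each job gets a fresh dind service with an empty daemon, which
is why layers built in one job aren't visible to the next (the image tag and
DOCKER_HOST value are the conventional ones for the k8s executor, not specific
to any one setup):

    build:
      image: docker:stable
      services:
        - docker:dind     # fresh Docker daemon per job: no layer cache
      variables:
        DOCKER_HOST: tcp://localhost:2375   # services share the job pod on k8s
      script:
        - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .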

~~~
tuananh
can you share your config.toml for k8s runner?

------
dekhn
I switched to k8s about a year and a half ago, before deploying a modest
frontend/backend pair. Although there is some friction, I generally like the
approach (I used to use Borg, so it's a pretty low barrier).

The biggest problem I have is debugging all the moving parts when there are
~10+ minute async responses to config changes.

~~~
majewsky
10 minutes? That sounds like a lot. We usually see 1-2 minutes of wait time
for a changed ConfigMap to reach a running pod, or for a Deployment to roll
over to a new ReplicaSet. (That's on k8s 1.4.)
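
(For context, that delay applies to ConfigMaps consumed as volumes, where the
kubelet refreshes the mounted files on its periodic sync; a minimal sketch,
names being placeholders. ConfigMaps injected as environment variables never
update in a running pod at all.)

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod              # placeholder
    spec:
      containers:
        - name: app
          image: example/app:1.0     # placeholder image
          volumeMounts:
            - name: config
              mountPath: /etc/app    # files here refresh on kubelet sync
      volumes:
        - name: config
          configMap:
            name: app-config         # edit this ConfigMap to trigger an update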

------
iooi
Has anyone been using CoreDNS as Alpha in 1.9, or tried the Beta now in 1.10?
What was your use case and reason for switching? How is it better than
KubeDNS?

~~~
scurvy
We've been using CoreDNS for a bit now, and it's much better than KubeDNS. We
found that KubeDNS would time out and drop requests from time to time. No such
issues with CoreDNS. Would recommend (at least from a reliability standpoint).
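
(For anyone curious what configuring it looks like: CoreDNS is driven by a
Corefile; a minimal sketch of one serving cluster DNS, along the lines of the
upstream kubernetes-plugin examples of that era, details varying per setup:)

    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf    # forward non-cluster queries upstream
        cache 30
    }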

~~~
iooi
Thanks for sharing. How are you monitoring timeouts and dropped requests on
KubeDNS?

~~~
scurvy
We were running with shorter TTLs on service records, and upstream apps threw
rashes of errors when queries timed out.

------
abledon
I see some people who use the Elixir/Erlang ecosystem and then shove it into a
Kubernetes system. Isn't this going against what the Elixir/Erlang system
already provides? What are the use cases for this?

~~~
troutwine
> Isn't this going against what the Elixir/Erlang system already provides?

No?

So, I guess, let's dig into that a little bit. Erlang's always kind of left
the health and safety of your deployed system as a whole up to you, being
preoccupied with giving you tools for understanding and maintaining its
internal operation. OTP provides two semi-unique things: supervision of
computation with control of failure bubble-up and hot code reloading. Hot code
reloading isn't used all that much in practice, outside of domains where it's
_very important_ that the whole system can never be offline and load balancing
techniques are not applicable. That's a specific niche and, sure, probably one
that kubernetes can't service. With regard to supervision, there's no
incompatibility between OTP's approach and a deployment consisting of
ephemeral nodes that live and then die by some external mechanism. Seems to me
that kubernetes is no different a deployment target in this regard than is
terraform/packer, hot-swapped servers in a rack or any of the other deploy
methods I've seen in my career.

------
TuringNYC
For the folks who have implemented K8s in production: I'm curious if you use
it to do resource management as well, or if you use something like Mesosphere
in conjunction? Or would you stick with a like-for-like stack (DC/OS +
Marathon)?

There is surprisingly little online discussion/documentation on the
intersection of resource management and container orchestration. Not sure if
it is too early in the curve, a dark art, not actually done, or what...

~~~
ianburrell
Kubernetes does resource management; all cluster scheduler systems do. The
difference between Kubernetes and Mesos for resource management is that
resource requests and limits are optional in Kubernetes and mandatory in
Mesos. It is best practice to specify resources in Kubernetes.
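
Concretely, requests and limits go on each container spec; a minimal sketch
with placeholder numbers:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod            # placeholder
    spec:
      containers:
        - name: app
          image: example/app:1.0   # placeholder image
          resources:
            requests:              # what the scheduler reserves on a node
              cpu: 250m
              memory: 256Mi
            limits:                # hard caps enforced at runtime
              cpu: 500m
              memory: 512Mi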

------
noselasd
I'm still debating whether to go with plain Kubernetes or to invest time in
OpenShift. Any insights from people who have tried both?

~~~
swozey
I've run vanilla k8s for about 3 years now in prod but am also fairly familiar
with (and really like) OpenShift Origin. I usually tell people asking this
question the following:

OO comes with a bunch of really nice quality-of-life improvements that are
missing in k8s (though in a lot of cases they can be added via third-party
TPRs/CRDs/etc.), but you aren't deviating so far from k8s that you won't be
able to go work on vanilla k8s in the future. Not at all. Most of the
additional stuff is annotations that simply wouldn't do anything, and you'd
remove them if you moved from OO to k8s.

I think that if you're brand new to the environment, OO can really help you
get running quickly. You just have to make sure that you do, in fact, dive
into the actual k8s YAML and deal with ingresses, Prometheus, Grafana, RBAC,
etc. at some point. I haven't used OO in a while, but I believe you can
successfully do most of what I do day to day via YAML/JSON through the OO UI.

On the flip side, a lot of people will probably tell you to start from k8s,
whether that's GKE or AWS or minikube or wherever, and go through Kubernetes
the Hard Way. Personally (and I help people quite frequently on the k8s
Slack), I feel like that leads a lot of people down a path of frustration. It
may be perfect for your style of learning, or it may just scare you off.

Now, when it comes to OO you are at the mercy of their release schedule. Their
most recent release was Nov 2017, and k8s 1.10 was just cut. I'm not sure what
version OO is on now; evidently they changed their version numbers to no
longer correspond to the k8s version.

Join #kubernetes-users and #kubernetes-novice on slack.k8s.io if you need any
help. It's a vibrant community. You can message me directly @ mikej if you'd
like.

edit: OK, OO 3.7 is k8s 1.9; that's perfect. I wait a few months before
jumping onto a new major.

~~~
smarterclayton
I’m about to cut OO 3.9 (based on 1.9) - we’ve been waiting for the subpath
CVE fixes and regressions to get sorted out before we cut a release.

~~~
swozey
Awesome! I mentioned this in #openshift-users. The OO website still states:

> An OpenShift Origin release corresponds to the Kubernetes distribution - for
> example, OpenShift 1.7 includes Kubernetes 1.7.

I had to dig around to figure out the version; you might want to update that. :)

And great work, btw, OO is fantastic.

------
sthomas1618
Something I've been wondering about: how stable is Kubernetes service
discovery? I.e., can it entirely replace something like Eureka? Is there any
reason not to use Kubernetes-provided service discovery?

~~~
dmourati
It depends on whether _all_ service discovery sources and targets are within
k8s. If so, k8s works well; if not, not so much.
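
Within the cluster it's just DNS: every Service gets a stable name. A minimal
sketch (names are placeholders); pods in any namespace can then resolve
backend.prod.svc.cluster.local:

    apiVersion: v1
    kind: Service
    metadata:
      name: backend        # placeholder
      namespace: prod      # placeholder
    spec:
      selector:
        app: backend       # routes to pods carrying this label
      ports:
        - port: 8080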

------
yingxie3
Funny that this and Solomon leaving Docker showed up on the same page.

------
FooBarWidget
I am hearing that everybody is interested in Kubernetes, yet relatively few
people are actually using Kubernetes in production.

Are you using Kubernetes? And if not, what is your reason not to?

~~~
SamLevin88
I'm not so sure about that last part. We (Kinvey, www.kinvey.com) have been
using it for customer-facing services in production for over a year. Results
have exceeded our expectations.

------
linsomniac
Funny so many people are talking here about how fast Kubernetes is moving...
That's one of the complaints about Docker Swarm that led me to rule it out...

------
kubectl
I am looking for "kubectl auth login". Is it available yet?

------
tomerbd
so cool, would be even cooler at 1.20

