
Kubernetes 1.11 includes In-Cluster Service Load Balancing and CoreDNS - rbanffy
https://kubernetes.io/blog/2018/06/27/kubernetes-1.11-release-announcement/
======
ponyous
Is K8S ever suitable for a small business? K8S only seems to be supported on expensive
hosts (AWS, GCE, Azure). Is it possible to spin it up on Hetzner/OVH or
another provider that is cheaper than AWS, possibly even on my own hardware?
How hard would it be to build the simplest reliable prod environment on bare
metal with K8S?

At the moment I'm running a one-(tech-)man SaaS with minimal profits. Here are my
goals (1, 2 and 3 are essential):

1. Be cheap - it's a side project, and I enjoy what I'm doing, so I don't need to
trade money for speed.

2. Automatic failover (this should cover the majority of downtime, IMHO).

3. Backups.

4. Automatic scalability (CPU over 95% for the last 2 minutes or something like that?
Add new containers, please). See the sketch after this list.

5. Load balancing (I'm using Erlang, so this can be handled at the application level
if the nodes can see each other).

6. If GitLab Auto DevOps can work out of the box, that would be fantastic.
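
From what I've read, goals 4 and 5 map to roughly the following in plain Kubernetes. This is just a sketch of my own understanding, and every name in it (`web`, `erlang-nodes`, the `app: web` label) is a placeholder:

```yaml
# Goal 4: hypothetical HorizontalPodAutoscaler that adds web replicas
# when average CPU usage stays above the threshold.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                          # placeholder Deployment
  minReplicas: 2
  maxReplicas: 6
  targetCPUUtilizationPercentage: 95   # the "CPU over 95%" idea
---
# Goal 5: hypothetical headless Service so Erlang nodes can discover
# each other by DNS and cluster at the application level.
apiVersion: v1
kind: Service
metadata:
  name: erlang-nodes
spec:
  clusterIP: None        # "headless": DNS returns the individual pod IPs
  selector:
    app: web             # placeholder label
  ports:
    - port: 4369         # EPMD, used by Erlang distribution
```

As far as I understand, the autoscaler also needs a metrics source (metrics-server) and CPU requests set on the Deployment before it actually does anything.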

Unrelated to the above:

- How are K8S backups handled? I spoke with a friend (a K8S fanboy) and he said you
have several DB containers spawned and they are all interconnected, so if one of them
fails another one still has the data. This seems stupid to me: what if the whole zone
fails and all the DB containers are shut off?

- The same friend asked me: "What restarts your services if they fail?" "init.d."
"And what if the machine fails?" "Well, I gotta restart it manually." "You see, you
should use the K8S orchestrator." "What restarts the K8S orchestrator if it fails?"
<Silence>

I have a feeling a big part of the community is like that: "super cool feature of
K8S, you should use it", but with no connection to the rest of the context.

PS: The conversation was in another language, so I'm not sure I used the right
terms, e.g. "K8S orchestrator".

~~~
Birch-san
I recommend Rancher for the small business that wants to roll their own
Kubernetes for free. Rancher 2 is a bit underdocumented right now, but we had
great success with Rancher 1.

Your bare metal hosts can be provisioned as Rancher hosts, which are dumb
slaves managed by a Rancher server.

From the Rancher server, you can trivially express "I want this Docker
container deployed across n Rancher hosts" or "I want 1 instance of this
Docker container across all Rancher hosts with the tag 'has public IP'".
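
As a rough sketch of what that looks like in Rancher 1 (Cattle) terms, with placeholder image names and the scheduler label written from memory (double-check the exact syntax against the Rancher docs):

```yaml
# docker-compose.yml (Cattle stack): what to run, and where it may be scheduled
version: '2'
services:
  web:
    image: example/web:latest          # placeholder image
    labels:
      # only schedule onto hosts carrying this host label
      io.rancher.scheduler.affinity:host_label: has_public_ip=true
---
# rancher-compose.yml: Rancher-specific settings for the same service
version: '2'
services:
  web:
    scale: 3                           # "deployed across n Rancher hosts"
```

Rancher also has a global scheduling label (io.rancher.scheduler.global) for the one-instance-per-host style of deployment, if I remember right.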

You can group your Rancher hosts into Sandbox and Prod Rancher environments.
You can easily install Rancher's load balancer service on them, or mount
network storage, or register secrets upon them (like private Docker registry
keys).

It also gives you health checks, host monitoring, and zero downtime redeploys.
Super easy to use from the UI or CLI. Easy to install, too.

~~~
Murrawhip
I currently use kubeadm for our small business cluster. 1 master and 2 nodes.
We haven't put a ton into it but it seems to be running pretty well so far.
Have done an upgrade from 1.7 to 1.9 without much/any downtime. Is there much
more of a learning curve with Rancher?

~~~
Birch-san
Rancher 1 is extremely easy to learn. Generally you do everything from the UI
(I only use the CLI to view combined logs). High-availability is easy too;
they provide a load-balancer (haproxy) with good UI integration.

Rancher 2 has a nicer UI and is more tightly integrated with Kubernetes. But there
are more or less zero docs, so be careful.

With either version, it's trivial to grab the master and node Docker images and
deploy them to your local machine to have a play.
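
For Rancher 1.x, for example, a compose file along these lines is about all the local playground needs (the Rancher 2 image is rancher/rancher instead, on ports 80/443):

```yaml
# docker-compose.yml: run a throwaway Rancher 1.x server locally
version: '2'
services:
  rancher-server:
    image: rancher/server        # Rancher 1.x server image
    ports:
      - "8080:8080"              # UI and API
    restart: unless-stopped
```

Hosts are then added by running the agent container with the registration command the server UI generates for you.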

------
meta_AU
I've always liked how K8S release notes come with a series of blog posts doing
a deep dive into each feature. But when I think back on it, because they come
out after the release announcement I end up forgetting about them until the
next release.

I wonder if releasing the posts in the period leading up to the release would
be better, or if that would just lead to artificial delays on the actual
release.

------
doctoboggan
Tangentially related, but hopefully someone on HN can help me out.

I want to try using Kubernetes with my current project. I've built it using
Docker and docker-compose. My compose file currently has four services (web,
db, dbadmin, and reverse proxy). I am using a flask app for the web container,
postgres for db, pgadmin for the dbadmin, and traefik for the reverse proxy.

I am currently running the above stack on a single host using `docker-compose
up`. I like how docker-compose creates a network for me where I can access
other containers by their container name.

For my own education, I would like to try deploying this with Kubernetes.
Specifically I am interested in learning how to spin up multiple web
containers that all talk to the same postgres container.

Can someone let me know if I am thinking about Kubernetes correctly, and if so,
point me to some good resources for learning how to deploy my stack with it?
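
To make the question concrete, here is roughly what I think the web and db parts would look like. I may well have this wrong, and the names and image are only placeholders:

```yaml
# Multiple copies of the flask web container
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                    # several web containers
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/flask-app:latest   # placeholder image
          ports:
            - containerPort: 5000
---
# A stable DNS name ("db") in front of the single postgres pod, analogous
# to reaching a container by its service name on the compose network
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  selector:
    app: db
  ports:
    - port: 5432
```

I assume postgres itself becomes its own Deployment (or a StatefulSet with a persistent volume) carrying the `app: db` label, and traefik becomes an Ingress controller, but that is exactly the part I would like resources on.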

~~~
yannovitch
Maybe first try
[https://github.com/kubernetes/kompose](https://github.com/kubernetes/kompose)?
It converts a docker-compose.yml into Kubernetes manifests.

~~~
doctoboggan
Interesting, thanks for the pointer I will check that out.

------
nateguchi
I asked this on the other posting of this as well, but does anyone know when
Kubernetes will stabilise its IPv6 support? Is anyone using, or planning to use,
it in production?

------
bogomipz
Does the "IPVS-based in-cluster service load balancing" replace the Kube-proxy
iptables load-balancing then?

~~~
sisk
It’s an alternative, selectable by changing the proxy-mode flag on kube-proxy.
If the iptables implementation is working for you, I wouldn’t necessarily jump
to it. Note, though, that iptables takes a big performance hit once you get into
the hundreds to thousands of overlay IPs, and you’ll notice it if you manage a
mid-to-large-sized cluster. Certainly worth playing with (throw it on a node or
two for now), or worth making the default for a new cluster, but as with any
recently stable option, if it ain’t broke…
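
For anyone wanting to try it, the switch is just kube-proxy configuration. A minimal sketch of the relevant piece (passing --proxy-mode=ipvs on the kube-proxy command line is the equivalent; the IPVS kernel modules need to be present on the nodes):

```yaml
# Snippet of a KubeProxyConfiguration: select the IPVS proxier instead of iptables
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # empty / "iptables" is the current default
ipvs:
  scheduler: "rr"   # round-robin; other IPVS schedulers (lc, sh, ...) also work
```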

~~~
tsenkov
Likely a dumb question (apologies in advance): can that in any way replace the
need to spin up a native LoadBalancer on AWS?

~~~
iooi
I don't think this is a dumb question; ELBs on AWS are stupid expensive for what
they do.

~~~
skywhopper
I agree it's a reasonable question. And ELBs can be expensive at small scales.
They can handle a huge amount of traffic with an exceptional level of
reliability, but at small scales there are probably much better options
available.

