
Kubernetes 1.5.0 Released - tylermauthe
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md
======
jcastro
If anyone is interested in giving k8s a go, we've been working on a fully
supported Kubernetes for Ubuntu:
[https://www.ubuntu.com/cloud/kubernetes](https://www.ubuntu.com/cloud/kubernetes)

This works on GCE, AWS, Azure, VMware, bare metal (via MAAS), and LXD
containers. We're currently supporting 1.4 but will support 1.5 (and upgrades)
in the next week or so.

Disclaimer: I'm on the team that works on this.

~~~
SEJeff
What does Canonical work on in upstream Kubernetes? I see hundreds of commits
from Red Hat and CoreOS in Kubernetes and the associated projects.

In the main Kubernetes git repo, I see 39 non-merge commits from @canonical,
99% of which are under cluster/juju/*, plus one doc fix for Juju, so not really
a value add to Kubernetes at all. How can I, as someone deploying Kubernetes on
bare metal, trust you to manage a difficult project when Canonical doesn't
seem to contribute much at all to the project or ecosystem?

It is great that you're making Kubernetes easy to manage and deploy, but other
than ease of use (which GKE, from the authors of k8s, also does an excellent
job of), what is the actual value add in paying Canonical to manage k8s?
Sorry for the terseness, but I'm genuinely curious what the value add is
here. As a purely tech-focused person, I simply don't see it. Even other firms
doing exactly what you do (Apcera) have actual code-change commits; I don't
see a single one from Canonical.

~~~
marcoceppi
Unlike other companies, we don't jump into upstream projects and throw our
weight around to influence a project to make it more marketable for us.
Kubernetes already has a vibrant and powerful core contributor base. When it
makes sense, we'll be happy to push any changes we make back upstream.
Our contributions to Kubernetes are primarily around operations: what happens
one week, one month, one year after setting up a cluster.

GKE is great, but GKE is Google only. It's not on-prem, it's not cross-cloud,
and it's not portable. That's important to some people. Our contributions to
cluster/juju are the distillation of our operational knowledge in running
Kubernetes everywhere. The same upstream k8s, deployed with the same tooling,
everywhere.

Not all value can be measured in commits :)

~~~
SEJeff
The way you start this off is a pretty poor response to my original question,
whereas the rest is fantastic. It isn't about throwing one's weight around or
even making the project more marketable, but about improving it for new use
cases (such as Apprenda wanting to improve Kubernetes via the on-premise SIG).

Thanks for your response, I was curious as to the value add and this helps.

~~~
marcoceppi
Thanks for the feedback; my original sentence may have been a bit hasty, but
the core of the message is there. I see Canonical's role as that of the
expediter/server in a kitchen, rather than donning another chef's hat when
dishes are piling up to go out. We want to celebrate the amazing work of the
community and get it into as many hands as possible.

We participate in SIGs as well, sig-on-premise being one we co-chair and co-
founded. We're planning on helping the project in ways other than code
contributions.

~~~
SEJeff
Sorry, I meant that even Apprenda has actual Golang feature commits. That was
my point re: Apprenda and contributions.

This seems to be Canonical's general direction. It just makes me sad that it
leans more on marketing than engineering. I wish it were more of both, as
Canonical clearly does Linux + marketing better than basically anyone, or
there wouldn't be so much Ubuntu everywhere :)

------
jstoja
Has anyone already successfully created an HA cluster on bare-metal
infrastructure? It seems overly complicated and not that well documented from
what I can see.

~~~
hnarayanan
I have attempted this using
[https://coreos.com/kubernetes/docs/latest/kubernetes-on-
bare...](https://coreos.com/kubernetes/docs/latest/kubernetes-on-
baremetal.html). One can get quite far, but I am not sure it is worth the
effort. I understand this is a vague analogy, but it feels like you're trying
to set up email infrastructure when what you really want is to send email.

It is a lot easier on the public cloud, or easier still on a managed service.

~~~
iagooar
What about people who cannot go to the cloud? What about people who need more
performance? Lower costs? Kubernetes seems to be such a great abstraction over
the underlying hardware; why not use it where it is needed the most?

~~~
nixgeek
Citation required, since there are IaaS offerings that do not involve a
hypervisor (e.g. Rackspace OnMetal), and therefore it's possible to move "to
the cloud" without performance penalties.

On costs: with Reserved Instances and negotiation with your account team at
larger volumes, it can be pretty reasonable, and for a smaller outfit,
substantially cheaper than hiring experts to run physical infrastructure of
your own.

~~~
vidarh
> and for a smaller outfit, substantially cheaper than hiring experts in
> running physical infrastructure of your own.

You need the skill set to manage cloud deployments too. For the systems I
manage, which range from actual bare metal, via dedicated servers, to VPSs
and AWS deployments, the incremental effort spent on managing hardware as you
go down towards bare metal tends to be pretty much a rounding error compared
to the overall operations effort. Once the systems have been wired up and PXE
booted into a suitable setup, the effort is pretty much the same.

And with the cost differential, I'd say once you go over a few hundred dollars
a month on servers that stay up 24/7, you're losing money on public cloud
deployments vs. managed dedicated hosting. Once you go over $1k to $2k/month,
you're losing money vs. colo.

For people who actually have a lot of batch jobs where servers stay up for
less than 6-8 hours a day, the maths look different, but it's very rare that I
come across cloud setups that are cheaper than dedicated, with all staff costs
etc. accounted for.

------
boyd
One note of caution for those skimming the release notes: v1.5.0 now allows
anonymous access to the API server by default. With the default k8s
authorization mode being `AlwaysAllow`, this makes it _very_ important to pass
`--anonymous-auth=false` if upgrading from v1.4.x to v1.5.0.

It looks like this is going to be set to false for v1.5.1:
[https://github.com/kubernetes/kubernetes/pull/38708](https://github.com/kubernetes/kubernetes/pull/38708)
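
A quick way to sanity-check whether your own apiserver is exposed is to hit the secure port with no credentials at all. A minimal sketch in Python (the address is a placeholder for your own apiserver, and verify=False is only acceptable for a throwaway check against a self-signed cert):

    import requests

    # An anonymous request: no client cert, no bearer token.
    resp = requests.get("https://203.0.113.10:6443/api/v1/nodes", verify=False)
    print(resp.status_code)
    # 200 means anonymous requests are authenticated *and* authorized
    # (the anonymous-auth + AlwaysAllow combination described above);
    # 401 or 403 means you are not exposed to this particular issue.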

~~~
TheIronYuppie
This is correct, and has been fixed. If you use kube-up or Google Container
Engine, the default is set to false.

Disclosure: I work at Google on Kubernetes.

------
samuell
Folks wanting an easy road to deploying Kube on (so far) AWS, GCE and
OpenStack, with bare metal and local on the roadmap, might want to check out
KubeNow:

[https://github.com/kubenow/KubeNow](https://github.com/kubenow/KubeNow)

It tries to reuse as much as possible from great projects like Terraform,
Packer, Ansible, and the kubeadm tool, and just adds a thin layer on top of
that (less risk of bit rot), which is an approach that seems appealing to me.

~~~
machbio
Does KubeNow support AWS Spot Instances?

~~~
TheIronYuppie
Not sure on AWS Spot, but on Google Cloud you can use Preemptible VMs (which
have similar 50-80% discounts).

[https://cloud.google.com/container-
engine/docs/preemptible-v...](https://cloud.google.com/container-
engine/docs/preemptible-vm)

Disclosure: I work at Google on Kubernetes.

------
pyvpx
You'd think such a project so heavily supported by Google would have IPv6
support. Hell, you'd think it'd be native/first-class.

~~~
dankohn1
Yes, it's a known issue. Pull requests welcome.
[https://github.com/kubernetes/kubernetes/issues/1443#issueco...](https://github.com/kubernetes/kubernetes/issues/1443#issuecomment-264605379)

~~~
TheIronYuppie
Yes, please! It's a pretty deep problem, so we'd love the help.

Disclosure: I work at Google on Kubernetes.

------
shaklee3
Looks like PetSet has been renamed to StatefulSet and has moved to beta in
this release.
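
If you want to poke at the renamed objects from the official Python client, something along these lines should work against a 1.5 cluster; treat it as a sketch, since the exact method names depend on your client version:

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (~/.kube/config by default).
    config.load_kube_config()

    # In 1.5 the beta StatefulSet lives under the apps/v1beta1 API group.
    apps = client.AppsV1beta1Api()

    for ss in apps.list_stateful_set_for_all_namespaces().items:
        print(ss.metadata.namespace, ss.metadata.name, ss.spec.replicas)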

------
chrisgaun
DotNETes (Windows container and Hyper-V container) support is here!

Disclosure: I worked on DotNETes.

~~~
iso-8859-1
Do you have an example application?

------
iamdeedubs
k8s-the-hard-way explicitly says it's not production ready. Is there a document
that explains what it's missing, or a production topology documented somewhere?

~~~
lazypower
I don't think it's documented in any official capacity, but we (sig-cluster-ops)
did generate some visuals that might aid in grokking the topology of
Kubernetes as a whole, and we did model these after production setups.

A few things to keep in mind:

These maps are service centric and abstract units as vertical columns in
their respective diagrams. Services must be HA to be considered “production
ready”.

Additional concerns that may or may not be represented here:

- TLS security on all endpoints
- TLS key rotation in the event of compromise/upgrade/expiration
- Durable storage-backed workloads
- etcd state snapshots for cluster point-in-time recovery (see the sketch after this list)
- Users/RBAC - this still needs more info before I can outline it (time limited)
- Network policy for namespace/application isolation (this is an unspoken requirement for many business units)
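
For the etcd snapshot item, assuming an etcd3 backend and etcdctl available on the node, a point-in-time backup boils down to something like the following; the endpoint and cert paths are placeholders for your own cluster:

    import datetime
    import os
    import subprocess

    # Hypothetical backup location; point this at durable storage in practice.
    snap = "/var/backups/etcd-%s.db" % datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%S")

    subprocess.check_call(
        ["etcdctl", "snapshot", "save", snap,
         "--endpoints=https://127.0.0.1:2379",
         "--cacert=/etc/etcd/ca.crt",    # placeholder TLS material
         "--cert=/etc/etcd/client.crt",
         "--key=/etc/etcd/client.key"],
        env=dict(os.environ, ETCDCTL_API="3"),  # snapshot save is an etcd3 command
    )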

The diagrams:

Kubernetes cluster services
[https://docs.google.com/drawings/d/1U4GBSg9Sdn7JspoxDyA4qwGM...](https://docs.google.com/drawings/d/1U4GBSg9Sdn7JspoxDyA4qwGMJC9JelWiev5MwYG9zwg/edit)

Kubernetes Binary Services Topology map
[https://docs.google.com/drawings/d/10sXtgdelUI3GbWjrYh2z5vhF...](https://docs.google.com/drawings/d/10sXtgdelUI3GbWjrYh2z5vhFahdqcYqQtOu9fs2dPkk/edit)

Kubernetes Cluster node Maps (3)
[https://docs.google.com/drawings/d/1x1PEE0RKvCRnP5JCAjmfbr_7...](https://docs.google.com/drawings/d/1x1PEE0RKvCRnP5JCAjmfbr_7NTpjBvOo04pvXbgztqM/edit)

We left off working on a network draft diagram, and if you're interested in
contributing/participating in this process, join us in the #sig-cluster-ops
Slack channel. We meet Thursdays (or have been, new-year schedule dependent).

~~~
Terretta
Missed these in the Slack SIG channel, thanks!

------
awinter-py
kubefed autocorrects to 'cubefield' in G search

