
Kubernetes 1.3 released - nkvoll
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md/#v130
======
lobster_johnson
I'm really liking Kubernetes — we're in the process of migrating to it.

If there's one area that is in dire need of improvement, though, it's the
documentation. If you look around, there is essentially _no_ documentation
that starts from first principles, going through the different components (and
their lifecycle, dependencies, requirements and so on) one by one,
irrespective of the cloud environment. There is a "Kubernetes from scratch"
[1] document, but it's just a bunch of loose fragments that lack almost all
the necessary detail, and has too many dependencies. (Tip: ask the user to
install from source, and leave out how to use images, cloud providers and
other things that obscure the workings of everything.)

Almost all of the documentation assumes you're running kube-up or some other
automated setup, which is of course convenient, but hides a huge amount of
magic in a bunch of shell scripts, Salt config and so on that prevents true
understanding. If you run it for, say, AWS, then you'll end up with a
configuration that _you don't understand_. It doesn't help that much of the
official documentation is heavily skewed towards GCE/GKE, where certain things
have a level of automatic magic that you won't benefit from when you run on
bare metal, for example. kube-up will help someone get it up and running fast,
but does _not_ help someone who needs to maintain it in a careful, controlled
manner.

Right now, I have a working cluster, but getting there involved a bunch of
trial and error, a _lot_ of open browser tabs, source code reading, and so on.
(Quick, what version of Docker does Kubernetes want? Kubernetes doesn't seem
to tell us, and it doesn't even verify it on startup. One of the reefs I ran
aground on was when 1.11 didn't work and I had to revert to 1.9, based on a
random GitHub issue I found.)

[1] [http://kubernetes.io/docs/getting-started-guides/scratch/](http://kubernetes.io/docs/getting-started-guides/scratch/)

~~~
Rapzid
I can't agree with this enough. We are all on AWS and the level of effort it
would take to migrate to Kubernetes while maintaining our ability to spin up
complete ad-hoc environments on the fly (which also serves as continual DR
testing) seems too much to justify at this point. Also, I can't come out the
other side with just one or two people understanding, or having any hope of
understanding, how everything works :|

Likely, if I had to choose today or this quarter, we would go the Empire route
and build on top of ECS. Though, our model and requirements are a bit
different so we'd have to heavily modify or roll our own.

~~~
lobster_johnson
One thing I would say is that — because of the aforementioned documentation
mess — it _seems_ more daunting than it actually is. And the documentation
does make it seem like a lot of work.

All you need to do, in broad strokes, is:

* Set up a VPC. Defaults work.

* Create an AWS instance. Make sure it has a dedicated IAM role that has a policy like this [1], so that it can do things like create ELBs.

* Install Kubernetes from binary packages. I've been using Kismatic's Debian/Ubuntu packages [2], which are nice.

* Install Docker >= 1.9 < 1.10 (apparently).

* Install etcd.

* Make sure your AWS instance has a sane MTU ("sudo ifconfig eth0 mtu 1500"). AWS uses jumbo frames by default [3], which I found does not work with Docker Hub (even though it's also on AWS).

* Edit /etc/default/docker to disable its iptables magic and use the Kubernetes bridge, which Kubelet will eventually create for you on startup:
    
    
       DOCKER_OPTS="--iptables=false --ip-masq=false --bridge=cbr0"
    

* Decide which CIDR ranges to use for pods and services. You can carve a /24 from your VPC subnet for each. They have to be non-overlapping ranges.

* Edit the /etc/default/kube* configs to set DAEMON_ARGS in each. Read the help page for each daemon to see what flags they take. Most have sane defaults or are ignorable, but you'll need some specific ones [4].

* Start etcd, Docker and all the Kubernetes daemons.

* Verify it's working with something like: kubectl run test --image=dockercloud/hello-world

Unless I'm forgetting something, that's _basically_ it for one master node.
For multiple nodes, you'll have to run Kubelet on each. You can run as many
masters (kube-apiserver) as you want, and they'll use etcd leases to ensure
that only one is active.
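
For reference, the DAEMON_ARGS end up looking roughly like the following.
Treat this as a sketch rather than something to copy verbatim: the addresses,
ports and CIDRs are placeholders, and exact flag names vary by version, so
check each daemon's --help (and [4]):

    # /etc/default/kube-apiserver (sketch; adjust addresses and CIDRs)
    DAEMON_ARGS="--etcd-servers=http://127.0.0.1:2379 --service-cluster-ip-range=10.1.0.0/24 --cloud-provider=aws"

    # /etc/default/kube-controller-manager
    DAEMON_ARGS="--master=http://127.0.0.1:8080 --cloud-provider=aws --allocate-node-cidrs=true --cluster-cidr=10.0.0.0/16"

    # /etc/default/kubelet (creates the cbr0 bridge mentioned above)
    DAEMON_ARGS="--api-servers=http://127.0.0.1:8080 --cloud-provider=aws --configure-cbr0=true"

    # /etc/default/kube-proxy and kube-scheduler mostly just need --master
    DAEMON_ARGS="--master=http://127.0.0.1:8080"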

[1] [https://gist.github.com/atombender/3f9ba857590ea98d18163e9832965166](https://gist.github.com/atombender/3f9ba857590ea98d18163e9832965166)

[2] [http://repos.kismatic.com/debian/](http://repos.kismatic.com/debian/)

[3] [http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html)

[4] [https://gist.github.com/atombender/e72c2acc2d30b0965543273a22e4a7d0](https://gist.github.com/atombender/e72c2acc2d30b0965543273a22e4a7d0)

~~~
chrissnell
You're making things really hard on yourself. Boot your nodes with CoreOS and
it provides almost everything you need (except Kubernetes itself) out-of-the-
box. It all works really well together and you get automatic updates, too. I
can't imagine trying to run the cluster we run on Ubuntu, rolling my
own Docker/etcd/flannel installs.

~~~
lobster_johnson
I'm sure CoreOS is nice, but we're currently on Ubuntu, and I'm trying to
reduce the number of unknown factors and new technologies that we're bringing
into the mix. Ubuntu is not the issue here. (FWIW, you don't need Flannel on
AWS.)

~~~
piva00
Can you expand a bit on why you don't need flannel on AWS? We're currently
deploying a k8s cluster and of course I went the flannel route (following the
steps of the CoreOS guide to k8s), but it'd be nice to remove that setup from our
deployment if possible.

~~~
lobster_johnson
AWS has VPCs, allowing you to get a practically unlimited number of private
subnets.

In some cloud environments (e.g. DigitalOcean), there's no private subnet
shared between hosts, so Kubernetes can't just hand out unique IPs to pods and
services. So you need something like Flannel, which can set up an overlay
network using either UDP encapsulation or VXLAN.

Flannel also has a backend for AWS, but all it does is update the routing
table for your VPC. Which can be useful, but can also be accomplished without
Flannel. It's also limited to about 50 nodes [1] and only one subnet, as far
as I know. I don't see the point of using it myself.
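
(The routes it manages are just ordinary VPC route table entries, so you can
get the same effect by hand or from your own tooling. A rough sketch with the
AWS CLI, where the route table, CIDR and instance IDs are placeholders:)

    # one route per node: send that node's pod /24 to its instance
    aws ec2 create-route --route-table-id rtb-xxxxxxxx \
        --destination-cidr-block 10.0.1.0/24 --instance-id i-xxxxxxxx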

[1] [https://github.com/coreos/flannel/issues/164](https://github.com/coreos/flannel/issues/164)

~~~
bboreham
Could you say how you arrange that the addresses you pick for your pods do not
clash with the addresses AWS picks for instances?

~~~
lobster_johnson
Kubernetes does this for you. For example, if your VPC subnet is
172.16.0.0/16, then you can tell K8s to use 10.0.0.0/16.

AWS doesn't know about this IP range and won't route it by itself, so K8s
automatically populates your routing table with the necessary routes every
time a node is added, removed or changed.

K8s will give a /24 CIDR to each minion host, so the first will get
10.0.1.0/24, the next 10.0.2.0/24, and so on. Each pod will get 10.0.1.1,
10.0.1.2, etc.

Obviously having an additional IP/interface per box adds complexity, but I
don't know if K8s supports any other automatic mode of operation on AWS.

(Note: Kubernetes expects AWS objects that it can control — security groups,
instances, etc. — to be tagged with KubernetesCluster=<your cluster name>.
This also applies to the routing table.)
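
(For reference, that tagging is just normal EC2 tagging; a rough sketch with
placeholder resource IDs and an assumed cluster name of "mycluster":)

    # tag the objects K8s should manage (instances, security groups, route table)
    aws ec2 create-tags --resources i-xxxxxxxx sg-xxxxxxxx rtb-xxxxxxxx \
        --tags Key=KubernetesCluster,Value=mycluster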

~~~
bboreham
OK, I see this is the same as what Flannel does in its aws-vpc backend, but I
thought you were saying you could do better. Maybe I mis-parsed what you said.

If you're adding a routing rule for every minion then you will also hit the 50
limit in AWS routing tables.

~~~
lobster_johnson
Sorry about the confusion — yes, absolutely. One option is to ask AWS to
increase it.

Flannel is just one of many different options if you need to go beyond 50
nodes. It seems some people use Flannel to create an overlay network, but this
isn't necessary. You can use the host-gw mode to accomplish the same thing as
Kubernetes' default routing-table-updating behaviour, but with routing tables
maintained on each node instead.
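
(If you do want Flannel's host-gw mode, it's just a setting in the network
config Flannel reads from etcd. A rough sketch, assuming the default key and
an illustrative network CIDR:)

    # host-gw programs plain routes on each node instead of an overlay
    etcdctl set /coreos.com/network/config \
        '{"Network": "10.0.0.0/16", "Backend": {"Type": "host-gw"}}'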

------
jturolla
Really amazed by their great work. I look forward to upgrading the setup at
my company. Since we started using Kubernetes, we reduced our bill to 30% of
its original price, and it made everything easier and scalable, just as if we
were using the costly Heroku. It's a really useful tech for third-world
startups that cannot afford to spend thousands of dollars on infrastructure.
I hope I can contribute to this OSS in the near future.

~~~
chrissnell
We've seen similar savings at our company. We have deployed Kube on a 6-node
cluster of CoreOS nodes with 512GB each. These are dedicated servers hosted at
Rackspace. We're about 30-40% utilized on RAM and maybe 15-20% on CPU. To host
a similar set of services on our older Openstack environment would require at
least 2-3x the number of servers. The cost savings isn't even the best part.
Kubernetes has allowed us to build a completely self-service pipeline for our
devs and has taken the ops team out of day-to-day app management. The nodes
update themselves with the latest OS and Kube shifts the workload around as
they do. This infrastructure is faster, more nimble, more cost-effective and
so much easier to run.

This is the best infrastructure I've ever used in twenty years of doing ops
and leading ops teams.

~~~
thockingoog
You folks don't know how much it means to us to hear that people are finding
success with Kubernetes. Thanks for using it. We'll try to keep pushing the
envelope.

~~~
chrissnell
Thanks, Tim. Y'all have been awesome. Thanks for the quick response to GH
issues and Slack questions. I hope that we can speak at a conference someday
and tell the world about how much more fun and easy Kubernetes made our jobs.

------
hnarayanan
This is exciting. I need to update my Django Kubernetes tutorial
([https://harishnarayanan.org/writing/kubernetes-django/](https://harishnarayanan.org/writing/kubernetes-django/)) with some
new constructs that simplify things.

~~~
brianwawok
Django is about the perfect use case for a Kubernetes tutorial, because it
has enough moving parts that you need to do some trickery, but not 10 levels
deep.

------
philips
Yea! The team at CoreOS is really excited about this release and the work that
we have done as a community.

If you are interested in some of the things that we helped get into this
release (RBAC, the rkt container engine, simpler install, and more), see our
"preview" blog post from a few weeks ago:
[https://coreos.com/blog/kubernetes-v1.3-preview.html](https://coreos.com/blog/kubernetes-v1.3-preview.html)

~~~
thockingoog
The CoreOS team and technologies have been critical to getting Kubernetes
going. Thanks, Brandon.

~~~
philips
At the risk of sounding like a mutual admiration society: Working with and
learning from the folks in the community like Brian Grant, Dawn Chen, Joe
Beda, Sarah Novotny, Brendan Burns, Daniel Smith, Mike Danese, Clayton
Coleman, Eric Tune, Vish Kannan, David Oppenheimer, yourself Tim, and the
hundreds of other folks in the community has been a great experience for me
and the rest of the team at CoreOS.

Can't wait to continue the success with v1.4!

------
Rapzid
I'm still sitting on the sidelines waiting for the easy-to-install,
better-documented-for-AWS version. It's also a bit unclear why we are talking
about federation and master/slave in 2016; other systems are using Raft and
gossip protocols to build masterless management clusters.

Watching issues like
[https://github.com/kubernetes/kubernetes/issues/23478](https://github.com/kubernetes/kubernetes/issues/23478)
and
[https://github.com/kubernetes/kubernetes/issues/23174](https://github.com/kubernetes/kubernetes/issues/23174),
I'm not super interested in "kicking the tires"; I'm evaluating replacing
all our environment automation with a version built around Kubernetes. Easy-up
scripts that hide a ton of nasty complexity won't do the trick.

Following the issues, I'm getting the impression that too much effort is being
put into CM-style tools versus making the underlying components friendlier to
set up and manage. Did anyone see how easy it is to get the new Docker
orchestration running?

Then there is the AWS integration documentation. I'm following the hidden
aws_under_the_hood.md updates, but I'm still left with loads of questions,
like: how do I control the created ELB's configuration (cross-zone load
balancing, draining, timeouts, etc.)?

I re-evaluate after every update and there are some really nice features being
added in, but at the end of the day ECS is looking more and more like the
direction to go for us. Sure, it's lacking a ton of features compared to
Kubernetes, and it's nigh on impossible to get any sort of information about
roadmaps out of Amazon... But it's very clear how it integrates with ELB and
how to manage the configuration of every underlying service. It also doesn't
require extra resources (service or human) to set up and manage the scheduler.

~~~
thockingoog
It's funny how words can be played. The Kubernetes "master" is a set of 1 or
more machines that run the API server and associated control logic. This is
_exactly_ what systems like Docker swarm do, but they wrap it in terms like
RAFT and gossip that make people weak in the knees. Kubernetes has RAFT in the
form of the storage API (etcd). This is a model that has PROVEN to work well,
and to scale well beyond what almost anyone will need.

"Federation" in this context is across clusters, which is not something other
systems really do much of, yet. You certainly don't want to gossip on this
layer.

"evaluating replacing" really does imply "kicking the tires". Put another way
- how much energy are you willing to invest in the early stages of your
evaluation? If a "real" cluster took 3 person-days to set up, but a "quick"
cluster took 10 person-minutes, would you use the quick one for the initial
eval? Feedback we have gotten _repeatedly_ was "it is too hard to set up a
real cluster when I don't even know if I want it".

There are a bunch of facets of streamlining that we're working on right now,
but they are all serving the purposes of reducing initial investment and
increasing transparency.

> how easy it is to get the new Docker orchestration running

This is exactly my point above. You don't think that their demos give you a
fully operational, secured, optimized cluster with best-perf networking,
storage, load-balancing etc, do you? Of course not. It sets up the "kick the
tires" cluster.

As for AWS - it is something we will keep working on. We know our docs here
are not great. We sure could use help tidying them up and making them better.
We are just BURIED in things to do.

Thanks for the feedback, truly.

~~~
Rapzid
I would consider "kicking the tires" to be actually spinning up a cluster and
playing with it. One can also evaluate by reading documentation and others'
reports of issues to look for show-stopping problems. For instance, a couple of
releases ago there was no multi-AZ support. The word on the street at that time
was to create multiple clusters and do higher-level orchestration across them.
That was a no-go for us; no need to "kick the tires".

Whatever you may think of my level of knowledge or weak knees for consensus
and gossip protocols, these problems (perceived or otherwise) with setup,
documentation, and management seem pretty widely reported.

EDIT: I hope this doesn't sound too negative. Kubernetes IS getting better all
the time. I only write this to give a perspective from somebody who would like
to use Kubernetes but has reason for pause. Our requirements are likely not
standard; our internal bar for automation and ease of use is quite high. We
essentially have an internal, hand-rolled, docker-based PaaS with support for
ad-hoc environment creation (not just staging/prod). We would like to move away
from holding the bag on our hand-rolled stuff and adopt a scheduler :)
Deciding to pull the trigger on any scheduler, though, would commit us to a
rather large amount of integration effort just to reach parity with our current
solution without introducing a pile of regressions.

~~~
TheIronYuppie
In this case, what you would do is set up two separate clusters, and spread an
ELB across them. No federation required :)
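
Concretely, one way to do that (a rough sketch with placeholder names; the
details depend on how you expose the service): expose the service on the same
NodePort in each cluster, then register nodes from both clusters behind a
single classic ELB:

    aws elb register-instances-with-load-balancer \
        --load-balancer-name my-elb \
        --instances i-nodeInClusterA i-nodeInClusterB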

Disclosure: I work at Google on Kubernetes

~~~
illumin8
But, if you have persistent EBS volumes, you wouldn't be able to mount them on
the other cluster if you had a failure of an entire AZ.

------
TheIronYuppie
We are really proud of this release, both for making it much easier to get
started (with a laptop-ready local development experience) and for adding
large-scale enterprise features (support for stateful applications, IAM
integration, 2x scale).

As others in the thread mentioned, this was just the cut of the binary; we'll
be talking a lot more about it, updating docs and sharing customer stories in
the coming weeks.

Thanks, and please don't hesitate to let me know if you have any questions!

Disclosure: I work at Google on Kubernetes.

~~~
rue
> laptop ready local development experience

The experience, definitely something I’m looking forward to, needs a lot of
improvement if your laptop has an Apple logo on it. Hopefully some part of the
team is working on that :)

~~~
dlor
Check out minikube, it's designed for running on laptops with Apple logos. :)

[https://github.com/kubernetes/minikube](https://github.com/kubernetes/minikube)

Disclosure: I work at Google, on minikube.

------
esseti
I ran into Kubernetes a week ago and found this:
[https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615)

Sounds pretty interesting, especially the part about service discovery &
node health/replacement.

Anyone using it for production?

~~~
crb
We (Google) are :-)

Otherwise, there's a list at
[http://kubernetes.io/community/](http://kubernetes.io/community/), including:
New York Times, eBay, Wikimedia Foundation, Box, Soundcloud, Viacom, and
Goldman Sachs, to name a few.

~~~
esseti
Duh, that's a nice list of references. I'll try to get through the
documentation and tutorials. It seems to solve a lot of the troubles we
(normal people) have to deal with when deploying Docker containers (in AWS,
for example), among others service discovery and node health.

------
crb
1.3.0 is tagged, yes. The actual release (docs, release notes, etc) will
happen early next week.

~~~
nkvoll
Not to get into a discussion regarding what constitutes a "Release" for any
specific project, whether it's tagging, pushing, announcing[0], updating
documentation, creating release notes, publishing a release blog post and so on.

A final build of 1.3 was tagged with an accompanying changelog and
announcement post. I found it weird that it had no more ceremony, nor any
prior submission on HN, and as it had been announced through the
kubernetes-announce mailing list for 17 hours, I figured its existence would be
interesting to the community, so I submitted it in good faith.

In any case, kudos to everybody working on it and congratulations on the
release, whether it's this week or the next.

[0]: [https://groups.google.com/forum/#!topic/kubernetes-announce/apb6bUfFsdc](https://groups.google.com/forum/#!topic/kubernetes-announce/apb6bUfFsdc)

~~~
justinsb
I'm glad you posted it - thanks!

My understanding is that with the timing of the US holiday, it made more sense
to hold off on the official announcement for a few days. So that's why there
aren't more announcements / release notes etc; and likely there won't be as
many people around the community channels to help with any 1.3 questions this
(long) weekend.

You should expect the normal release procedure next week! And if you want to
try it out you can, but most of the aspects of a release other than publishing
the binaries are coming soon.

------
dankohn1
I'm looking for startups that are using Kubernetes in production who would
like some free publicity.

I'm the new executive director of the Cloud Native Computing Foundation, which
hosts Kubernetes. We have end user members like eBay, Goldman Sachs and
NCSoft, but we're in need of startup end users (as opposed to startup vendors,
of which we have many).

Please reach out to me at dan at linuxfoundation.org if you might like to be
featured in a case study.

------
jackweirdy
Wow, spooky coincidence - I discovered and installed this for the first time
today! The docs could use some work, but generally pretty easy to get started.

Great to see an OpenStack provider's been added, too.

~~~
philips
Which guides did you use?

~~~
jackweirdy
Only the docs on the website (barring a 5 minute intro to the "Why" of
kubernetes[0]). Used the docs for getting set up locally with minikube, and
also the hello node example.

~~~
philips
minikube is awesome, really exciting development for the community. Which
platform are you on?

~~~
jackweirdy
I'm on OSX, deploying to AWS - currently manually, potentially with Kubernetes
soon!

------
kordless
Interested in federated clusters. How is federation being scoped and who's
doing most of the work on it?

~~~
TheIronYuppie
Federation is a VERY big area. Your best bet is to start with the proposals:

    
    
      https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/federation-high-level-arch.png
      https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/federated-api-servers.md
    

Though there are many issues in discussion. Anything in particular you want to
work on?

Disclosure: I work at Google on Kubernetes

------
qubit23
One feature I was hoping to see in this release was the ScheduledJobs
controller. I remember seeing it mentioned in one of the RCs; did it get
pushed back? This would be useful for those of us who want a more highly
available cron-like system running on top of Kubernetes.

~~~
thockingoog
It was so close, it just missed the boat. It will hopefully be on the next
boat.

------
bogomipz
Does anybody know how 1.3 is for stateful services? Can I use an API to create
a persistent disk volume and adjust that volume's size just as I would any
other resource like CPU or memory? The use case being postgres/mysql instances.

~~~
thockingoog
You can't, in general, adjust the size of block devices purely transparently.
We don't currently support blockdev resizing. Would love to talk about how to
achieve that, though.

~~~
bogomipz
Is it possible under specific circumstances such as if using LVM devices?

~~~
thockingoog
An LVM device is local, which is not supported as a PersistentVolume. :(

~~~
bogomipz
That's too bad; it would be great to have an API for that. Mesos has the
concept of path and mount disks; it would be neat if Kubernetes had something
similar:

[http://mesos.apache.org/documentation/latest/multiple-disk/](http://mesos.apache.org/documentation/latest/multiple-disk/)

~~~
thockingoog
We do! It's just not a "persistent" volume because, well, it's not
persistent...
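
For the record, that's the hostPath (and emptyDir) volume type. A minimal
sketch of a pod mounting a local directory via hostPath; the names and paths
below are made up:

    # minimal sketch: mount a host directory into a pod via hostPath
    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: local-disk-test
    spec:
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - name: scratch
          mountPath: /data
      volumes:
      - name: scratch
        hostPath:
          path: /mnt/local-disk
    EOF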

------
nikon
Awesome. Just digging into Docker and have recently been reading about
Kubernetes.

Does anyone have examples of how they are managing deployments? I.e. deploying
an app update, running DB migrations, perhaps?

------
technofiend
There was some coverage, including labs, at this week's Red Hat Summit. Once
all the materials are online, that may prove a useful 1.3 reference.

~~~
TheIronYuppie
Our blog post is live!

[http://blog.kubernetes.io/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads.html](http://blog.kubernetes.io/2016/07/kubernetes-1.3-bridging-cloud-native-and-enterprise-workloads.html)

Disclosure: I work at Google on Kubernetes

------
EDevil
Now if only a native Azure provider were developed, it would be excellent...

~~~
jmspring
The status of K8s on Azure is being updated here:
[https://github.com/colemickens/azure-kubernetes-status](https://github.com/colemickens/azure-kubernetes-status)

~~~
TheIronYuppie
Cole has been an absolute machine working on this - we'd love your help! Net,
though, is that extending Kubernetes in this way is available to everyone - we
support ~15 different cloud and OS configurations today, and we'd love to
support more!

Disclosure: I work at Google on Kubernetes.

------
smegel

> AWS
> Support for ap-northeast-2 region (Seoul)

What does this mean? How can K8S be tied into something as specific as an AWS
region?

~~~
matthewrudy
It means the `kube-up` scripts now work with ap-northeast-2 straight out of
the box.

Just set

    
    
        export KUBERNETES_PROVIDER=aws
        export KUBE_AWS_ZONE=ap-northeast-2a
        kube-up.sh
    

Here's the PR
[https://github.com/kubernetes/kubernetes/pull/24464](https://github.com/kubernetes/kubernetes/pull/24464)

