
Kubectl – Configuration Guide - lukasbar
https://knowledgepill.it/posts/kubernetes-kubectl-client-config/
======
kissgyorgy
I find the kubectx and kubens tools invaluable for switching contexts and
namespaces quickly:
[https://github.com/ahmetb/kubectx](https://github.com/ahmetb/kubectx)
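
Day-to-day usage looks roughly like this (the context and namespace names
below are placeholders):

    kubectx                # list available contexts
    kubectx my-cluster     # switch the current context
    kubens my-namespace    # set the default namespace for that context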

~~~
remram
I've just been using this function in my bashrc:

    # "activate" a kubeconfig for this shell, or list the available configs
    kubea() {
        if [ "$#" = 1 ]; then
            # export so kubectl (and any other child process) sees it
            export KUBECONFIG=~/.kube/configs/"$1"
        else
            ls -1 ~/.kube/configs/
        fi
    }

That way I can "activate" a specific config the same way I activate my Python
virtualenv, for a specific terminal. I also add the basename of the config to
my prompt.

    kubea myproject-PROD
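
A minimal sketch of the prompt part, assuming bash and the function above
(the exact PS1 is of course a matter of taste):

    # show the active config's basename in the prompt, when one is set
    PS1='[\u@\h \W${KUBECONFIG:+ k8s:$(basename "$KUBECONFIG")}]\$ '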

~~~
Nullabillity
Or, to avoid the persistent state (these are fish shell functions):

    # run any command with the dev cluster's kubeconfig
    function dkube
      env KUBECONFIG=$HOME/.kube/config-dev $argv
    end

    # shorthand for kubectl against the dev cluster
    function dkctl
      dkube kubectl $argv
    end
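
Usage would then look something like this, assuming those functions are
loaded:

    dkctl get pods     # kubectl against the dev cluster
    dkube helm list    # the same trick works for any other tool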

~~~
remram
It's funny, I used to have something exactly like this, but found that too
many other tools and scripts also needed to access the right cluster.
Persistent state works better for me in practice.

------
mlthoughts2018
My strong advice is to never rely on use-context or any similar ability to
change implicit state.

You should absolutely always type out huge, verbose commands like kubectl
--context my-context -n my-namespace <some command> ...

The opportunity for tragic error when implicit context or namespace settings
are in effect is just too great.
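
For example, fully explicit invocations (the context and namespace names are
placeholders):

    kubectl --context my-prod-context -n my-namespace get pods
    kubectl --context my-prod-context -n my-namespace delete deployment my-app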

~~~
longcommonname
Or you can signal the current context in your shell prompt, and use Kubernetes
RBAC to give read-only permissions instead of letting people mess with your
prod cluster.
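
A minimal bash sketch of the prompt part (it shells out to kubectl on every
prompt, so dedicated prompt tools do this more efficiently):

    # show the current kubectl context in the prompt
    kube_ctx() { kubectl config current-context 2>/dev/null || echo none; }
    PS1='[\u@\h \W ($(kube_ctx))]\$ '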

~~~
mlthoughts2018
Managing shell tooling to edit PS1 for this is a bad idea, because the prompt
already carries much longer-established indicators (like the git branch or the
activated Python environment). Kubernetes can't come along decades later and
suggest that, to use the tool effectively, one has to break those
long-standing workflows.

It's always an available option for people who want it, but many use cases are
poorly served by it.

The RBAC thing is different. I am 100% for those RBAC limitations, as long as
I have total authority to make a devops admin do what I need them to do, right
when I need them to do it.

If my team is going to get alerted on build failures or service errors that
require Kubernetes mutation actions or exec access to debug or resolve (99% of
the time they do), then my team needs full admin access.

If RBAC prevents that access, then you had better route the alerts to someone
else or give us the authority to compel instant triage responses.

In the three different companies I've worked at that use large Kubernetes
clusters, RBAC has been a miserable problem across the board: SREs don't want
to be responsible for triaging or resolving application-team issues, and
application teams will not honor alerts that are not actionable because of
poorly conceived permissions, where people mistakenly think they are setting
useful access-control policy but really aren't.

~~~
DasIch
You could have a mechanism to grant yourself temporary write access on an as-
needed basis.

At Zalando everyone has read access (except to secrets). You can request write
access with a command-line tool, and another employee can then approve it.
There is also an option to request access with an incident ticket; in that
case you immediately get write access without approval from someone else.
Write access expires after one hour.

Access to the underlying AWS account uses the same mechanism.
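
As a sketch, the requester's side of such a flow might look like this (the
tool name and flags below are invented for illustration, not Zalando's actual
tooling):

    # request 1h of write access; a colleague approves it out-of-band
    request-access --cluster prod-eu --reason "deploy hotfix" --duration 1h

    # or reference an incident ticket for immediate, unapproved access
    request-access --cluster prod-eu --incident INC-1234 --duration 1h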

~~~
mlthoughts2018
But then, once you've unlocked access, you can still make the same
fat-fingered mistakes due to implicit use-context settings.

You might use use-context to set yourself to that restricted production
context at the start of an incident. Thirty minutes later you have resolved
the incident, but the access is still active, and whoops: you delete a
deployment that you thought you were deleting from a staging experiment or
something.

All that does is add an extra hoop to jump through. People need to stop
believing that adding extra bureaucratic hoops offers any type of safety - it
doesn’t.

If you want to restrict access then you need to actually restrict it and
convert on-call alert responsibilities to a central devops team that, like it
or not, is responsible for solving everyone else’s problems.

If you don’t want a central devops team that can be paged and on the hook for
everyone else’s systems, then you cannot have write access controls.

There’s just no way out of this dilemma. Temporary access that can be self-
granted == no access control. Temporary access that requires admin approval ==
admin team is on pager duty for every other team.

------
jaimehrubiks
I like to have one config file per cluster and point KUBECONFIG at all of
those files. Then I use a bash function to select the cluster by name. My zsh
shell previews the cluster name on the right when I start typing k, kubectl,
helm, and similar... so it is harder for me to make a mistake.
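
A rough bash sketch of that setup (the paths and helper name are assumptions,
not necessarily the commenter's exact dotfiles):

    # KUBECONFIG accepts a colon-separated list of config files
    export KUBECONFIG=$(find ~/.kube/configs -type f | tr '\n' ':')

    # select a cluster by context name
    kctx() { kubectl config use-context "$1"; }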

~~~
mastegizmo
I think this is the best method; it is quite safe. I'm doing the same.

------
pepemon
What exactly does this post add to what has already been described in the
official documentation? [1]

[1] [https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)

------
nunez
Does anyone else hate that we have to deal with Kubeconfigs at all?

~~~
pgwhalen
I’m a little confused; what would the alternative be?

~~~
nunez
I would love to use environment variables, like this:

    KUBE_APISERVER_ADDR=https://foo.com \
    KUBE_CERT_FILE=/tmp/cert \
    KUBE_NAMESPACE=bar \
    kubectl get po

This way, I can export them into my environment with my .bash_profile instead
of having to persist state just to perform cluster ops.
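
kubectl doesn't read variables named like those, but the equivalent global
flags do exist, so something close is already possible:

    # --server, --certificate-authority, and -n are real kubectl flags
    kubectl --server=https://foo.com \
            --certificate-authority=/tmp/cert \
            -n bar \
            get po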

------
anurag
Kubie makes it easier (and safer) to manage multiple contexts because it makes
each shell independent:
[https://github.com/sbstp/kubie](https://github.com/sbstp/kubie)
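
If I recall its interface correctly, the basic usage is roughly:

    kubie ctx my-context    # spawn a sub-shell bound to that one context
    kubie ns my-namespace   # change the namespace within that sub-shell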

