
Kubernetes Authentication - Kubeconfig

Understand the role of the kubeconfig file in Kubernetes authentication, its structure, and how to configure multiple clusters and contexts.

Series: Kubernetes Triple A (part 2 of 2)

As mentioned in the previous post, running kubectl apply -f manifest.yaml kicks off a process involving authentication, authorization, and admission. Continuing the series that will culminate in the resource being persisted to etcd, this chapter covers the Kubernetes authentication process, the first step in the security chain.

Authentication is responsible for answering the question: “Who is making this request?”

Only after this step does Kubernetes proceed to verify what that entity is allowed to do (authorization) and whether the request is valid (admission).


Understanding the Request

Let’s continue with the same example, now running a command with kubectl and enabling verbosity level 7:

Terminal
$ kubectl get pods -v 7

Observing the sequence of events in the verbose output, we have:

  1. When running the command kubectl get pods,

  2. one of the first operations performed is reading the file available at ~/.kube/config,

  3. shortly after, an HTTP GET request is made to https://127.0.0.1:53806/api/v1/namespaces/kube-system/pods,

  4. and finally, the list of pods available in the namespace kube-system is displayed.

When observing the command execution, we notice that reading the file ~/.kube/config is one of the first operations. The information it contains serves as input for the request to the kube-apiserver that returns the list of pods in the namespace kube-system.


The Role of the kubeconfig File

The kubeconfig file stores information about clusters, users, namespaces, and security mechanisms used to authenticate to clusters. In practice, all commands executed with kubectl read this file to determine how to connect and how to authenticate to the Kubernetes API server.

Continuing with the previous example, let's try to make the same request kubectl makes, but with a plain HTTP client. (I'm using httpie for a better-presented response, but the same can be done using curl -s -X GET -k https://127.0.0.1:53806/api/v1/namespaces/kube-system/pods.)
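Without credentials, the kube-apiserver typically answers with HTTP 403 and a Status object. The exact message text varies by version, but the shape is roughly:

```json
{
  "kind": "Status",
  "apiVersion": "v1",
  "status": "Failure",
  "message": "pods is forbidden: User \"system:anonymous\" cannot list resource \"pods\" in API group \"\" in the namespace \"kube-system\"",
  "reason": "Forbidden",
  "code": 403
}
```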

Every request to the kube-apiserver must be associated with a regular user or a ServiceAccount. Since we aren't passing any information that identifies us, the request is treated as coming from the anonymous user (system:anonymous).

It's possible to disable the anonymous user entirely by passing the --anonymous-auth=false flag to the kube-apiserver. With that set, any request that doesn't carry valid credentials is rejected with 401 Unauthorized.
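As a sketch of how that flag can be applied with kind, a cluster config file can patch the apiserver arguments at creation time (field names follow kind's ClusterConfiguration patch mechanism; adjust to your kind version):

```yaml
# kind-config.yaml: create the cluster with anonymous auth disabled
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        anonymous-auth: "false"
```

The cluster is then created with kind create cluster --config kind-config.yaml.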

For the next commands, I’ll use information available in the kubeconfig to replicate the same behavior returned by kubectl, that is, we’ll use the same credentials. Don’t worry if you don’t understand what this information is; throughout this article, these gaps will be filled.

First, with yq, let’s extract the certificate information used to authenticate to the kube-apiserver. The information we need is the certificate authority certificate, the client certificate, and the client key. With the certificates and keys in hand, we’ll pass them as parameters in the curl command.

Extracting certificates from kubeconfig
# Extract the Certificate Authority (CA) certificate
$ yq -r '.clusters[] | select(.name == "kind-kind") | .cluster.certificate-authority-data' ~/.kube/config | base64 -d > kind-kind-certificate-authority
# Extract the client certificate
$ yq -r '.users[] | select(.name == "kind-kind") | .user.client-certificate-data' ~/.kube/config | base64 -d > kind-kind-client-certificate
# Extract the client key
$ yq -r '.users[] | select(.name == "kind-kind") | .user.client-key-data' ~/.kube/config | base64 -d > kind-kind-client-key
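If yq isn't at hand, the same round trip can be sketched with standard tools. The kubeconfig below is a throwaway file whose certificate-authority-data is placeholder text (not a real certificate), built only so there's something to decode; with a real cluster, grep ~/.kube/config instead:

```shell
# Build a throwaway kubeconfig whose CA field is placeholder PEM text,
# base64-encoded the same way a real cluster's data is.
ca_b64=$(printf -- '-----BEGIN CERTIFICATE-----\n' | base64)
cat > /tmp/demo-kubeconfig.yaml <<EOF
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: ${ca_b64}
    server: https://127.0.0.1:6443
  name: demo
EOF

# Grab the field's value and base64-decode it back to PEM text:
grep 'certificate-authority-data' /tmp/demo-kubeconfig.yaml \
  | awk '{print $2}' | base64 -d > /tmp/demo-ca.pem
cat /tmp/demo-ca.pem
```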

With the files saved, we execute the request with curl:

Executing request with curl
$ curl https://127.0.0.1:62246/api/v1/namespaces/kube-system/pods \
    --cacert kind-kind-certificate-authority \
    --cert kind-kind-client-certificate \
    --key kind-kind-client-key \
    --silent | jq -r '.items[].metadata.name'

Passing the same information used by kubectl, we get the same result: the request is successfully authenticated and the pod names are returned.


Location of kubeconfig

By default, the configuration file (kubeconfig) is stored at:

Terminal
~/.kube/config

However, it’s possible to define an alternative path through the KUBECONFIG environment variable.

For example, using kind, we can create two clusters and specify where the configuration file will be saved:

Terminal
$ KUBECONFIG=/tmp/cluster-1-kubeconfig.yaml kind create cluster --name cluster-1
$ KUBECONFIG=/tmp/cluster-2-kubeconfig.yaml kind create cluster --name cluster-2

Defining Which Configuration File to Use

There are three main ways to indicate which configuration file kubectl should use.

1. Using the KUBECONFIG variable directly

You can define the file path for a single execution:

Terminal
$ KUBECONFIG=/tmp/cluster-1-kubeconfig.yaml kubectl -n kube-system get pods

To avoid repeating the variable in all commands, we can export it to the current environment:

Terminal
$ export KUBECONFIG=/tmp/cluster-1-kubeconfig.yaml
$ kubectl -n kube-system get pods

2. Using multiple kubeconfig files

The KUBECONFIG variable can also contain a list of files, separated by : (on Unix/Linux/macOS systems) or ; (on Windows). kubectl will automatically merge these files into a single view:

Terminal
$ KUBECONFIG="/tmp/cluster-1-kubeconfig.yaml:/tmp/cluster-2-kubeconfig.yaml" kubectl config view

This is extremely useful in scenarios with multiple clusters (development, staging, and production environments, for example), since it lets kubectl treat credentials and contexts from several files as one consolidated view.

3. Using the --kubeconfig flag

Another option is to specify the file directly on the command line using the --kubeconfig flag:

Terminal
$ kubectl --kubeconfig /tmp/cluster-1-kubeconfig.yaml -n kube-system get pods

This approach is especially practical in scripts or automation pipelines, where the file path needs to be explicitly defined.


Structure of the kubeconfig File

The kubeconfig is a YAML file that follows a schema that defines four main sections:

  • clusters – information about the cluster API address and the certificate used in communication.

  • users – credentials and authentication mechanisms.

  • contexts – combinations of cluster, user, and namespace.

  • current-context – the active context, used by default by kubectl.

A basic example of a kubeconfig file:

apiVersion: v1
kind: Config
current-context: kind-cluster-1
clusters:
- cluster:
    certificate-authority-data: "..." # Cluster CA in Base64
    server: https://127.0.0.1:56815
  name: kind-cluster-1
users:
- name: kind-cluster-1
  user:
    client-certificate-data: "..." # Client certificate in Base64
    client-key-data: "..."         # Client private key in Base64
contexts:
- context:
    cluster: kind-cluster-1
    namespace: kube-system
    user: kind-cluster-1
  name: kind-cluster-1

clusters Section

The clusters list defines all Kubernetes clusters known to kubectl.

Each entry contains:

  • name: unique cluster name within the file (referenced by contexts).

  • cluster.server: URL of the Kubernetes API server (apiserver endpoint).

  • cluster.certificate-authority-data: certificate authority (CA) certificate encoded in Base64, used to validate the server’s identity.

    • Alternatively, you can use certificate-authority: /path/to/ca.crt.

Examples with kubectl config:

Query

# Get the list of available clusters
kubectl config get-clusters

Modification

# Save a new cluster (with CA path)
kubectl config set-cluster kind-cluster-3 --server https://127.0.0.1:56815 --certificate-authority certificate.cer

# Save a new cluster (with embedded CA)
kubectl config set-cluster kind-cluster-3 --server https://127.0.0.1:56815 --certificate-authority certificate.cer --embed-certs

# Remove a cluster
kubectl config delete-cluster kind-cluster-3

users Section

The users list defines access credentials for cluster authentication.

Each entry contains:

  • name: user name (referenced by contexts).

  • user.client-certificate-data and user.client-key-data: certificate and private key pair in Base64 (certificate-based authentication).

  • user.token: bearer token authentication (e.g., ServiceAccount tokens, or tokens issued by managed providers like EKS, GKE, or AKS).

  • user.username / user.password: basic authentication (rarely used in production).

In corporate environments, it’s common to use exec plugins to generate dynamic tokens, such as OIDC providers or AWS CLI with EKS:

user:
  exec:
    apiVersion: client.authentication.k8s.io/v1beta1
    command: aws
    args:
      - "eks"
      - "get-token"
      - "--cluster-name"
      - "my-cluster"
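To make the contract concrete, here is a minimal, hypothetical exec plugin: kubectl runs the configured command, expects an ExecCredential object on stdout, and sends status.token as a Bearer token. A real plugin would fetch the token from an identity provider; this one just emits a placeholder so the shape of the protocol is visible:

```shell
# Write a fake exec plugin that prints a static ExecCredential
# (placeholder token; for illustrating the protocol shape only).
cat > /tmp/fake-token-plugin.sh <<'EOF'
#!/bin/sh
cat <<JSON
{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": { "token": "placeholder-token" }
}
JSON
EOF
chmod +x /tmp/fake-token-plugin.sh

# Run it the way kubectl would, capturing the credential:
/tmp/fake-token-plugin.sh > /tmp/exec-credential.json
cat /tmp/exec-credential.json
```

In a kubeconfig, such a script would be referenced via exec.command, with the matching client.authentication.k8s.io apiVersion.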

Examples with kubectl config:

Query

# Get the list of available users
kubectl config get-users

Modification

# Create a credential (certificate/key, by path)
kubectl config set-credentials kind-cluster-3 --client-certificate client-certificate.crt --client-key client.key

# Create a credential (certificate/key, embedded)
kubectl config set-credentials kind-cluster-3 --client-certificate client-certificate.crt --client-key client.key --embed-certs

# Create a credential (username/password)
kubectl config set-credentials kind-cluster-3 --username username --password password

# Create a credential (token)
kubectl config set-credentials kind-cluster-3 --token token

# Create an OIDC credential (legacy in-tree auth provider, removed in
# newer Kubernetes releases in favor of exec plugins)
kubectl config set-credentials kind-cluster-3 --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar

contexts Section

The contexts list combines a cluster, a user, and, optionally, a default namespace.

Each entry contains:

  • name: unique context name (referenced by current-context).

  • context.cluster: cluster name (declared in clusters).

  • context.user: user name (declared in users).

  • context.namespace (optional): defines the default namespace when using kubectl commands.

Examples with kubectl config:

Query

# Get the list of available contexts
kubectl config get-contexts

Modification

# Create a new context
kubectl config set-context prod-admin --cluster kind-cluster-1 --user kind-cluster-1

# Create a new context with a default namespace
kubectl config set-context dev-user --cluster kind-cluster-2 --user dev-user --namespace app-frontend

# Remove a context
kubectl config delete-context dev-user

Context management

# Change the current context
kubectl config use-context prod-admin

current-context Section

Defines which context is currently active — that is, which cluster, user, and namespace will be used by default when running kubectl without additional options.

To check the current context:

Terminal
$ kubectl config current-context

To switch the active context, you can use:

Terminal
$ kubectl config use-context context-name
