In my previous article, I described the basic techniques of setting security boundaries around Kubernetes pods. In this part, I’m going to talk about some of Kubernetes’ architectural components, and explain how to keep them safe when the isolation around a pod is broken.
Etcd

The Kubernetes API server is the orchestration centre of the cluster, creating and managing all resources (pods, deployments, services, and so on). The state of every resource is persisted in an Etcd datastore, which is the “source of truth” for the entire cluster. Gaining unauthorised access to it can be compared to obtaining superuser privileges; losing its data can be compared to the destruction of the whole cluster.
Some distributions of Kubernetes still run Etcd without TLS certificate-based authentication enabled. At the time of writing, Etcd’s own authorisation mechanism isn’t supported by the Kubernetes API server at all. This means that you can connect to Etcd and modify its data from any pod. That’s a tremendous threat in the case of a pod isolation malfunction or a configuration failure. So, in short:
Etcd should be the focus of your security improvement efforts
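To see why, here is a sketch of what an attacker could do from a compromised pod against an unauthenticated Etcd. The endpoint address is an assumption and varies between setups; Kubernetes stores all of its resources under the /registry prefix.

```shell
# Assumed endpoint: in many self-hosted setups Etcd listens on the
# master's IP address on port 2379, with no client authentication.
ETCDCTL_API=3 etcdctl --endpoints=http://10.0.0.1:2379 \
  get /registry/secrets --prefix --keys-only
# With no authentication in place, this lists the key of every
# secret in the cluster; dropping --keys-only dumps their contents.
```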
If possible, you should use the newest, 3.x version of Etcd and set up TLS certificates for mutual authentication. If any other component in your cluster also needs Etcd, set up a separate instance for it. The only gateway to the Etcd instance holding Kubernetes data should be the Kubernetes API server, and access to it should be restricted to the nodes it runs on. To restrict access at the network level, you can try the Kubernetes Network Policies described in the previous article.
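As a sketch, mutual TLS authentication comes down to a handful of flags on both sides; the certificate paths below are assumptions and should match your own PKI layout:

```shell
# Etcd side: serve over TLS and require client certificates
# signed by the trusted CA.
etcd \
  --cert-file=/etc/etcd/pki/server.crt \
  --key-file=/etc/etcd/pki/server.key \
  --client-cert-auth \
  --trusted-ca-file=/etc/etcd/pki/ca.crt

# Kubernetes API server side: present a matching client certificate.
kube-apiserver \
  --etcd-servers=https://127.0.0.1:2379 \
  --etcd-cafile=/etc/etcd/pki/ca.crt \
  --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
```

With --client-cert-auth set, Etcd rejects any connection that doesn’t present a certificate signed by the trusted CA, so a process in a random pod can no longer talk to it even if it can reach the port.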
Kubernetes API server
Now, let’s have a closer look at the Kubernetes API server. Unlike Etcd, the Kubernetes API server has some security features. Every submitted API request passes through three phases before being processed.
Phase 1: Authentication
The Kubernetes API server uses service accounts to provide an identity for the processes running in a pod. By default, every pod is assigned the “default” service account of its Kubernetes namespace, which means three secret files are mounted into it (usually under /var/run/secrets/kubernetes.io/serviceaccount/): the bearer token for the API server, the name of the current namespace, and the cluster CA certificate (ca.crt). This is enough to authenticate to the API server. For example, executing the code below in a pod may return a response with the cluster version:
KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
wget --header="Authorization: Bearer $KUBE_TOKEN" --ca-certificate=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt -O - https://kubernetes.default:443/version
For basic prevention, you can switch off automounting of these secrets by setting the automountServiceAccountToken field to false in the pod spec or the service account resource template. For better control over access to the API, you can use one of the authorisation plugins employed in the next phase.
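A minimal pod spec that opts out of token automounting might look like this (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod                      # placeholder name
spec:
  automountServiceAccountToken: false     # don't mount the service account secrets
  containers:
  - name: app
    image: nginx                          # placeholder image
```

A process in this pod will find no token under /var/run/secrets/kubernetes.io/serviceaccount/, so the wget trick shown above simply has nothing to authenticate with.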
Phase 2: Authorisation
If an API request is authenticated, it goes through the authorisation phase. You can use different authorisation modes depending on your needs, e.g. attribute-based (ABAC), role-based (RBAC), AlwaysAllow, AlwaysDeny, and so on. In newer versions of Kubernetes, the RBAC authorisation plugin is enabled by default. RBAC’s configuration is quite complex, so it’s best to consult the documentation for the details.
Whichever authorisation plugin you set up, it’s good to follow these best practices:
- Grant only minimum required privileges
- Create namespaces to separate your infrastructure elements
- Don’t use wildcards
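As an illustration of the first and third points, here is a sketch of a namespaced RBAC Role granting read-only access to pods, bound to a single service account (the namespace and names are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging              # a dedicated namespace, per the second point
  name: pod-reader
rules:
- apiGroups: [""]                 # "" is the core API group
  resources: ["pods"]             # explicit resource list, no "*" wildcards
  verbs: ["get", "list", "watch"] # read-only verbs only
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: staging
  name: read-pods
subjects:
- kind: ServiceAccount
  name: monitoring                # placeholder service account
  namespace: staging
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Even if a pod running under this service account is compromised, the stolen token can only list and watch pods in one namespace, not modify anything.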
It’s important to note that if you don’t have an authorisation plugin enabled and you don’t secure access to the Kubernetes API, a failure in pod isolation may lead to the hijacking of the whole cluster. This was a painful vulnerability in older versions of Kubernetes, where no authorisation plugin was enabled by default. Below, in the links section, there is an interesting thread about it.
Phase 3: The admission controllers
If a request on a resource successfully passes through the two previous phases, it’s next processed by the enabled admission controllers. They may alter the resource before it’s persisted. The admission controllers manage various aspects of cluster configuration in a more centralised manner. They were explained in greater detail in the first part of this series.
Kubelet

The Kubelet is a process that runs on every node and is responsible for creating and maintaining the desired state of the pods. It’s the interface between the nodes and the cluster orchestration. It embeds an HTTP server that exposes information about the current state of the node, the running pods, and the general cluster configuration, and it also lets you execute some operations on the pods.
The Kubelet API (depending on the Kubernetes version) may be reachable on the node’s IP address on ports 10255 (read-only HTTP) and 10250 (HTTPS). The read-only API on port 10255 exposes sensitive configuration details. The HTTPS API on port 10250, when anonymous access is enabled, also accepts state-changing requests.
Here are some sample endpoints:
http://<node_ip>:10255/spec/ – node’s specification
http://<node_ip>:10255/pods/ – full specifications of the pods running on the current node
By default, both APIs can be accessed from pods, which puts the cluster at risk in case of a pod isolation failure.
The Kubernetes documentation recommends starting the kubelet process with anonymous authentication disabled and authorisation delegated to the Kubernetes API server. It also suggests enabling X509 client certificate authentication to the kubelet’s APIs. Sadly, these settings only apply to the HTTPS endpoints. To protect the read-only API, you need to employ network policies or network plugin policies. Fortunately, there are plans to remove the read-only endpoints in the near future.
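Put together, the hardened kubelet configuration described above might look like the following sketch; the CA file path is an assumption, and --read-only-port=0 switches off the read-only API entirely where your version supports it:

```shell
# Reject unauthenticated requests, delegate authorisation decisions to the
# Kubernetes API server, and accept only X509 client certificates signed
# by the cluster CA.
kubelet \
  --anonymous-auth=false \
  --authorization-mode=Webhook \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --read-only-port=0
```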
Additionally, the NodeRestriction admission controller can be enabled in the Kubernetes API server. It restricts each kubelet to modifying only the pods bound to the node it runs on.
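Enabling it is a matter of adding it to the API server’s admission plugin list (in older releases the flag was called --admission-control):

```shell
# NodeRestriction only takes effect when kubelets authenticate with node
# credentials, i.e. user name system:node:<nodeName> in the
# system:nodes group.
kube-apiserver --enable-admission-plugins=NodeRestriction
```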
We’ve now reviewed the configuration and available security features of the most important elements of Kubernetes. I hope the information covered in these two articles will serve as a good starting point for building clusters that are more secure.
Dave Farley, author of the book “Continuous Delivery”, will talk about “The Rationale for Continuous Delivery” at a free webinar on Wednesday June 3rd at 6pm BST. Register here to join us.