What’s cooking in your Kubernetes namespace?

Enterprise platform engineering teams are responsible for managing multi-tenant Kubernetes environments, and the most common way to create separation between tenants is with Kubernetes namespaces. As a platform engineer, you want the ability to peek into each tenant namespace to understand what workloads are running, how many resources have been created, how they are related to each other, who created them, and so on. Answers to these questions can help a platform engineer manage tenant environments better. A good view into a namespace can help with troubleshooting, provisioning, monitoring, etc.

It is not easy to get this view today using basic kubectl commands. And if Custom Resources are involved, the resource relationships and ownerships become even more complicated. We have developed a kubectl plugin, kubectl connections, that can help you discover and visualize these resource relationships in a namespace. The command is used as follows:

kubectl connections <ResourceType> <ResourceName> <NamespaceName>

Here <ResourceType> <ResourceName> identifies any resource within the namespace; it is treated as a node in the resource relationship graph. The command outputs the entire relationship graph that the given resource is part of, in json, text, or png format.
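Under the hood, Kubernetes records parent-child links through each object's metadata.ownerReferences. As a minimal sketch of that mechanism (not the plugin's actual implementation), the following Python builds a relationship graph from a few hand-written resource records standing in for objects you would normally fetch from the Kubernetes API:

```python
# Sketch: build a resource relationship graph from ownerReferences.
# The sample records below are hand-written stand-ins for real API objects.
from collections import defaultdict

resources = [
    {"kind": "Deployment", "name": "web", "ownerReferences": []},
    {"kind": "ReplicaSet", "name": "web-7d4b9",
     "ownerReferences": [{"kind": "Deployment", "name": "web"}]},
    {"kind": "Pod", "name": "web-7d4b9-x2k8n",
     "ownerReferences": [{"kind": "ReplicaSet", "name": "web-7d4b9"}]},
]

def build_graph(resources):
    """Map each (kind, name) owner to a list of its child (kind, name) nodes."""
    graph = defaultdict(list)
    for res in resources:
        child = (res["kind"], res["name"])
        for owner in res.get("ownerReferences", []):
            graph[(owner["kind"], owner["name"])].append(child)
    return dict(graph)

graph = build_graph(resources)
for owner, children in graph.items():
    for child in children:
        print(f"{owner[0]}/{owner[1]} -> {child[0]}/{child[1]}")
```

The real plugin also follows relationships such as label selectors and Service-to-Pod links; ownerReferences are just the simplest edge type to illustrate.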

To demonstrate this with a simple example, let’s use the command to figure out what you get in the ‘kube-system’ namespace from various Kubernetes providers. The command is:

kubectl connections ServiceAccount default kube-system

Here are the outputs for Kubernetes clusters from three representative Kubernetes providers. They are also available here.

GKE:

EKS:

DigitalOcean:

Some of the observations that platform engineering teams can make from these:

  • Even though all three clusters run Kubernetes 1.17, each cluster’s makeup is different. Some providers ship the cluster with operational software already running, for example logging (fluentd), monitoring (Prometheus), or networking (Cilium).
  • These outputs show that the number of Pods running in the kube-system namespace differs based on what functions a cluster provides out-of-the-box. This can affect your decision on how much CPU and memory to allocate to the control plane nodes in your cluster.
  • The DNS Pod is present on every cluster, which makes sense, as DNS is probably one of the most important functions that needs to work out-of-the-box on a cluster.

With the increasing adoption of Kubernetes Operators and Custom Resource Definitions (CRDs), the need to discover such resource relationships is only growing. In such extended Kubernetes clusters, multiple Service Accounts create workloads within a single namespace, and Custom Resources create sub-resources that end users have no direct control over. The kubectl connections plugin plays a key role in visualizing tenant namespaces in these extended clusters. Download the kubectl connections plugin and use it in your clusters to discover and monitor the resource topologies in a namespace.
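To see who ultimately owns such a generated sub-resource, you can walk the ownerReferences chain upward until you reach a resource with no owner. A hedged sketch, again with hand-written records standing in for API objects and a hypothetical custom kind `Moodle` as the root:

```python
# Sketch: follow ownerReferences upward to find the root owner of a resource.
# Records are hand-written stand-ins; "Moodle" is a hypothetical Custom Resource
# whose Operator created the Deployment and, indirectly, the Pod.
owners = {
    ("Moodle", "moodle1"): [],
    ("Deployment", "moodle1-deploy"): [("Moodle", "moodle1")],
    ("Pod", "moodle1-deploy-abc12"): [("Deployment", "moodle1-deploy")],
}

def root_owner(node, owners):
    """Climb the owner chain until a resource with no owner is reached."""
    while owners.get(node):
        node = owners[node][0]  # follow the first (controlling) owner
    return node

root = root_owner(("Pod", "moodle1-deploy-abc12"), owners)
print(f"root owner: {root[0]}/{root[1]}")
```

This is the kind of traversal that answers "which Custom Resource does this Pod belong to?" when end users only ever see the generated sub-resources.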

www.cloudark.io