The Need for a Kubernetes Ecosystem Curator

You have probably heard plenty of Kubernetes migration stories by now. Here are some key observations from our experience working on diverse Kubernetes migration projects, ranging from migrating a centralized SaaS application to deploying container workloads on distributed edge locations.

In this post, we draw on two ongoing real-life Kubernetes migration projects. One is a SaaS application currently deployed on internal OpenStack infrastructure. The other is a greenfield edge computing application targeted at several hundred remote edge locations.

  • The SaaS application is written in Java and consists of six microservices. It uses a relational database backend (MariaDB with Galera clustering). Currently, all the microservices and the database are packaged as containers and deployed on VMs using Ansible. This team is planning to run and manage their own Kubernetes clusters using the vanilla upstream distribution.
  • The edge computing application stack consists of workload Helm charts coming from different vendors. A key requirement of this project is that the workload Pods need to use multiple network interfaces. This team is going with a third-party Kubernetes distribution.

Key observations

While the scope and scale of these two Kubernetes migration projects are very different, here are some common observations.

1. Kubernetes Operator selection requires careful analysis

Kubernetes Operators naturally enter these conversations. In the SaaS project, one of the goals is to choose the right containerized database solution, so we are evaluating several database Operators. Similarly, the edge computing stack uses multiple CRDs, e.g., the NetworkAttachmentDefinition CRD from the Multus project and the CRDs installed by cert-manager. In both projects, our multi-Operator guidelines help us quickly analyze the maturity and enterprise readiness of third-party Operators and CRDs. An example of the kind of CRD interface we evaluate appears below.
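As a concrete illustration, here is a minimal sketch of a cert-manager Certificate Custom Resource, the kind of third-party CRD interface this evaluation covers. The resource names and namespace are hypothetical, and the served API version should be verified against the cert-manager release actually installed in the cluster.

```yaml
# A cert-manager Certificate Custom Resource, shown as an example of a
# third-party CRD interface under evaluation. Names below are illustrative;
# verify the served API version against your cert-manager release.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: edge-app-tls          # hypothetical name
  namespace: edge-workloads   # hypothetical namespace
spec:
  secretName: edge-app-tls-secret   # Secret that cert-manager will create
  dnsNames:
    - edge-app.example.com
  issuerRef:
    name: internal-ca-issuer  # assumes an Issuer of this name already exists
    kind: Issuer
```

Evaluating an Operator then becomes a question of how well an interface like this models the workload's actual requirements, and how mature the controller behind it is.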

2. Upstream representation is challenging

Enterprise workloads are unique, and their needs may not be directly satisfied by the community Operators and CRDs under consideration. Often there is a need to engage with the upstream developer community, ranging from trying to understand how to use an Operator to requesting enhancements or patches. We have seen the need for such upstream engagement in both of the projects mentioned above. For instance, we have filed several issues against the Multus Operator as we work on integrating it into the edge stack.

3. Using Custom Resources is complex

For application developers, defining and modeling workflows using Custom Resources is challenging. ‘kubectl explain’ now works for Custom Resources, but the Spec-property-level information it exposes is often too narrow to convey the big-picture usage of a Custom Resource. Moreover, there are different kinds of resource relationships defined around a Custom Resource: through labels, annotations, Spec properties, or sub-resources created by the Custom Resource. For example, a NetworkAttachmentDefinition requires a specific annotation on a Pod in order to grant the desired network interface to it (see the sketch below). Currently it is hard for application developers to discover this static and runtime information about Custom Resources, yet it is essential for building and managing the desired workflows. We are developing the KubePlus API add-on to simplify building workflow automation with Custom Resources. It extends the Kubernetes resource graph by maintaining all implicit and explicit relationships of Custom Resources, and offers kubectl endpoints to query this information.
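To make the annotation-based relationship concrete, here is a minimal sketch of a NetworkAttachmentDefinition and a Pod that requests the additional interface it defines. The macvlan CNI configuration, resource names, and image are illustrative assumptions; the annotation key itself is the one defined by the Multus project.

```yaml
# A NetworkAttachmentDefinition from the Multus project. The embedded CNI
# config (macvlan over eth0 here) is illustrative and depends on the
# interfaces actually present on the node.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "ipam": { "type": "dhcp" }
    }
---
# The Pod requests the extra interface purely through an annotation; nothing
# in the Pod Spec references the Custom Resource, which is exactly the kind
# of implicit relationship that is hard to discover.
apiVersion: v1
kind: Pod
metadata:
  name: edge-workload                  # hypothetical Pod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-net
spec:
  containers:
    - name: app
      image: registry.example.com/edge-app:latest  # hypothetical image
```

Tooling such as KubePlus aims to surface exactly this kind of relationship, so that a developer looking at the Pod can discover the NetworkAttachmentDefinition it depends on without first reading the Multus documentation.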

Conclusion

Today’s DevOps and platform engineering teams are developing Kubernetes expertise. However, the challenges they face are still daunting: they have to figure out which Kubernetes ecosystem projects are relevant to their needs, how to use those projects together, and how to represent their needs to upstream project maintainers. CloudARK addresses these problems for DevOps teams through our Platform-as-Code subscription. Our goal is to complement internal DevOps teams by providing managed Kubernetes workflows assembled from various Kubernetes ecosystem projects. Reach out to us if you are looking for such a partner in your Kubernetes journey.

www.cloudark.io