Multi-tenancy is a software architecture in which a single piece of software serves multiple users or tenants. It can be accomplished either by creating a separate instance of the application for each tenant or by designing the application so that a single instance can be shared by multiple users.

While building multi-tenancy into the application itself ensures the most efficient use of the underlying resources, re-architecting an application to accommodate tenancy requirements is a time-consuming process. The fastest way to deliver your software to multiple users is to create a separate instance per…

Enterprise platform engineering teams are responsible for managing multi-tenant Kubernetes environments, and they most commonly separate tenants using Kubernetes namespaces. As a platform engineer, you want the ability to peek into each tenant namespace to understand what workloads are running, how many resources have been created, how they are related to each other, who created them, and so on. Answers to these questions help a platform engineer manage tenant environments better. A good view into a namespace helps with troubleshooting, provisioning, monitoring, and more.

It is not easy to get this view…
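To make the desired view concrete, here is a minimal Python sketch that summarizes a namespace's contents by kind and creator. The objects below are hardcoded sample data shaped loosely like Kubernetes metadata; a real implementation would fetch them from the Kubernetes API.

```python
from collections import defaultdict

# Illustrative resource objects for one tenant namespace;
# in practice these would come from the Kubernetes API server.
resources = [
    {"kind": "Deployment", "name": "web", "created_by": "alice"},
    {"kind": "Pod", "name": "web-7d9f", "created_by": "alice"},
    {"kind": "Service", "name": "web-svc", "created_by": "alice"},
    {"kind": "ConfigMap", "name": "web-config", "created_by": "bob"},
]

def summarize(namespace_resources):
    """Count resources per kind and list the users who created them."""
    by_kind = defaultdict(int)
    creators = set()
    for r in namespace_resources:
        by_kind[r["kind"]] += 1
        creators.add(r["created_by"])
    return dict(by_kind), sorted(creators)

counts, users = summarize(resources)
print(counts)  # {'Deployment': 1, 'Pod': 1, 'Service': 1, 'ConfigMap': 1}
print(users)   # ['alice', 'bob']
```

Even this toy version hints at why the real problem is hard: "who created it" and "how resources relate" are not directly recorded on most Kubernetes objects and must be reconstructed from audit logs, labels, and owner references.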

Platform engineering teams prepare Kubernetes clusters for sharing between multiple users and workloads. This involves building Helm charts for a variety of operational workflows. The challenge is delivering these Helm charts as platform services in a self-service manner so that they can be used repeatedly, with tenant-level controls and consumption tracking.
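To illustrate what "tenant-level controls and consumption tracking" means here, the following is a toy Python sketch of a chart registry that enforces a per-tenant quota and counts instances. All names and the quota mechanism are hypothetical simplifications; KubePlus itself implements this natively in Kubernetes rather than as application code.

```python
class ChartService:
    """Toy registry: enforces a per-tenant instance quota and
    records how many instances each tenant has consumed."""

    def __init__(self, chart_name, max_instances_per_tenant=2):
        self.chart_name = chart_name
        self.max_instances = max_instances_per_tenant
        self.instances = {}  # tenant name -> list of instance names

    def provision(self, tenant, instance_name):
        owned = self.instances.setdefault(tenant, [])
        if len(owned) >= self.max_instances:
            raise RuntimeError(f"{tenant} exceeded quota for {self.chart_name}")
        owned.append(instance_name)
        return instance_name

    def consumption(self, tenant):
        return len(self.instances.get(tenant, []))

svc = ChartService("mysql-chart", max_instances_per_tenant=2)
svc.provision("tenant-a", "db-1")
svc.provision("tenant-a", "db-2")
print(svc.consumption("tenant-a"))  # 2
# A third provision call for tenant-a would raise RuntimeError.
```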

To address this challenge, we have built an open-source framework (KubePlus) to create multi-tenant platform services with the required isolation guarantees and per-tenant consumption metrics tracking. …

The Kubernetes extensibility story keeps growing. Kubernetes extensions fall into four categories: Operators, admission controllers, scheduler plugins, and CLI plugins. Every KubeCon has a dedicated track for extensibility-related topics, and KubeCon NA 2020 covered these categories well through its Customizing and Extensibility track, with talks on Operators, scheduler plugins, and admission controllers. While this track is the primary venue for extensibility-related talks, you occasionally find interesting extensibility talks in the Application + Development track as well.

Based on activities at KubeCon NA 2020, here are our 3…

A Kubernetes cluster is typically shared between different teams. These teams deploy workloads on the cluster that depend on each other and together form the overall enterprise application.

For example, the DevOps team deploys the Prometheus and Kafka Operators on the cluster and is responsible for creating the required instances of Prometheus to enable metrics collection and of Kafka to ingest logs from other applications. The database team deploys the MySQL Operator and is responsible for creating MySQL instances to support the web applications running on the cluster. Application developers leverage these instances and deploy their web applications. In this example every team is dealing with…

Kubernetes CRDs and Operators are now widespread. Most Kubernetes distributions come prepackaged with a number of Operators/CRDs, and DevOps teams also write their own Operators to package the automation their workloads require. An Operator adds Custom Resources to the cluster, and the various teams using such a purpose-built cluster can leverage the available Custom Resources as they build YAMLs for deploying their applications.

For these YAML developers, Custom Resources present an opportunity as well as a challenge. They are an opportunity because a Custom Resource provides a declarative method for accomplishing a complex task. For example, the ‘CassandraDatacenter’ Custom Resource…

DevOps teams are building their Kubernetes-native stacks by assembling and developing the Kubernetes Operators their workloads require. But what does it take to develop an Operator that is a good citizen of such a multi-Operator environment? To help Operator developers approach this problem systematically, we have developed the Operator Maturity Model. Various community Operators have benefited from this model, and it has also been used by the Operator development team at DataStax in developing their Cassandra Operator.

The Operator Maturity Model is divided into six categories (Consumability, Configurability, Security, Robustness, Debuggability, and Portability) with a set of guidelines in each…

In designing monitoring solutions for Kubernetes applications, DevOps teams are guided by the following high-level questions:

  • If a worker node fails, which applications or services will get affected?
  • How much CPU/Memory/Storage is being consumed by the entire application?
  • How can we aggregate logs or events at the application level from the container/pod level?

An application in Kubernetes is architected with the help of Kubernetes Resources (built-in resources like Pod, Service, and Deployment, and Custom Resources coming from various Operators). Application workflows are realized in Kubernetes YAMLs by establishing connections between these resources. These connections are based on various relationships such as labels, annotations, ownership…
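As an illustration of traversing such connections, here is a small Python sketch that attributes Pods to their higher-level owners (in the spirit of Kubernetes `ownerReferences`) and answers the node-failure question above. The data is hardcoded sample data; a real implementation would query the Kubernetes API.

```python
# Illustrative Pod objects with their node placement and owning resource;
# in practice both come from the Kubernetes API.
pods = [
    {"name": "web-1", "node": "node-a", "owner": "Deployment/web"},
    {"name": "web-2", "node": "node-b", "owner": "Deployment/web"},
    {"name": "db-1", "node": "node-a", "owner": "MysqlCluster/db"},
]

def affected_owners(pods, failed_node):
    """Follow the Pod -> owner relationship to find which higher-level
    resources lose Pods when the given worker node fails."""
    return sorted({p["owner"] for p in pods if p["node"] == failed_node})

print(affected_owners(pods, "node-a"))  # ['Deployment/web', 'MysqlCluster/db']
print(affected_owners(pods, "node-b"))  # ['Deployment/web']
```

The same owner/label traversal generalizes to the other questions: once Pods are attributed to an application, their CPU/memory usage and logs can be rolled up along the same edges.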

In Kubernetes, Pods consume CPU and memory resources on the worker nodes in a cluster. In the case of Kubernetes Custom Resources, the Pods that are created as part of Custom Resource instances are the ones that consume physical resources of the cluster. This post discusses our ongoing work around tracking resource metrics for Custom Resources.

Today’s enterprise Kubernetes clusters typically use more than one Operator/CRD to simplify building application workflows. We have seen requirements from our customers for simpler monitoring of these Operators and Custom Resources. …
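The aggregation step itself is straightforward once each Pod can be attributed to a Custom Resource instance. Here is a minimal Python sketch over sample per-Pod metrics; in practice the Pod-to-Custom-Resource mapping and the usage numbers would come from the Kubernetes API and the metrics pipeline, and the numbers below are illustrative.

```python
# Illustrative per-Pod usage: CPU in millicores, memory in Mi.
pod_metrics = [
    {"pod": "cassandra-dc1-0", "owner_cr": "CassandraDatacenter/dc1",
     "cpu_m": 500, "mem_mi": 1024},
    {"pod": "cassandra-dc1-1", "owner_cr": "CassandraDatacenter/dc1",
     "cpu_m": 450, "mem_mi": 980},
    {"pod": "web-1", "owner_cr": "Deployment/web",
     "cpu_m": 100, "mem_mi": 128},
]

def rollup(metrics):
    """Sum CPU (millicores) and memory (Mi) per Custom Resource instance."""
    totals = {}
    for m in metrics:
        cpu, mem = totals.get(m["owner_cr"], (0, 0))
        totals[m["owner_cr"]] = (cpu + m["cpu_m"], mem + m["mem_mi"])
    return totals

print(rollup(pod_metrics)["CassandraDatacenter/dc1"])  # (950, 2004)
```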

You have probably heard a lot of Kubernetes migration stories by now. Here are some key observations from our experience working on diverse Kubernetes migration projects, ranging from the migration of a centralized SaaS application to the deployment of container workloads on distributed edge locations.

In this post, we specifically refer to two ongoing real-life Kubernetes migration projects. One is a SaaS application currently deployed on internal OpenStack infrastructure. The other is a greenfield edge computing application targeted for deployment at several hundred remote edge locations.

  • The SaaS application…
