
Application Management in Kubernetes

by Markus Opolka | Jan 5, 2023 | NETWAYS

Working with Kubernetes means working with YAML. A lot of YAML. Some might say too much YAML. Your standard deployment usually consists of a workload resource, such as a Deployment or DaemonSet, in order to get some nice little Pods running. Remember, a Pod consists of one or more Containers and a Container is just a Linux process with some glitter. In addition, we might need a ConfigMap in order to store some configuration for our workload.

Now that our Pods are running, we need some infrastructure around them. A Service resource will give us stable name resolution for the IPs of our Containers (called Endpoints). We should also add some Network Policies, which control traffic flow (think Firewall).
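As a minimal sketch (the policy name and the frontend label are made up for illustration), such a Network Policy might look like this:

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: foobar-allow-frontend  # hypothetical name
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: Foobar  # the Pods this policy applies to
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # only Pods with this label may connect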

Finally, we might have a Secret containing a TLS key/certificate and an Ingress resource, so that the IngressController knows where to find our workload. Remember, an IngressController is a Reverse Proxy.
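Such an Ingress could look roughly like this (a sketch; the hostname and Secret name are made up):

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foobar-ingress  # hypothetical name
spec:
  tls:
    - hosts:
        - foobar.example.com
      secretName: foobar-tls  # the Secret with the TLS key/certificate
  rules:
    - host: foobar.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: foobar-service
                port:
                  number: 8080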

So that is a lot of YAML we just wrote. A minimal example can easily have more than 200 lines [No citation needed]. Now multiply the environments we are running (i.e. development, staging, production) et voilà, even more YAML. In order to reduce this repetition, some tools have emerged in the Kubernetes ecosystem.

This article will give an overview of some of these tools. The focus will be on tools that use templates, overlays or functions to generate YAML. Tooling that leverages Custom Resources and Controllers is outside the scope of this article, but will be briefly mentioned in the conclusion. Remember, Controllers are just applications that watch the state of the cluster and make changes if the desired state and the current state differ.

Kustomize

One of the quickest ways to minimize repetition is “Kustomize”, since it is built into kubectl. Unlike other solutions, Kustomize does not use templates or a custom DSL; instead, it merges and overlays YAML to produce an output we can send to the Kubernetes API. Meaning, it is YAML all the way down.

In this and all other examples we will create a Service resource with a custom port and a namespace, just to keep it simple. To get started with kustomize we need a base YAML manifest, overlays containing the changes that are to be applied to the base, and a kustomization.yml in which we describe what will happen:

ls -l

base/  # Contains the YAML with the Kubernetes Resources
overlays/  # Contains the YAML with which we customize the base Resources
kustomization.yml  # Describes what we want to customize

cat base/service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: foobar-service
spec:
  selector:
    app.kubernetes.io/name: Foobar
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80

cat base/kustomization.yml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: foobar
resources:
  - service.yml

In `overlays/` we now define what we want to change. The metadata/name of the resources is required so that kustomize knows where to customize. In this example we change the port from 8080 to 7777 and add a namespace.

cat overlays/dev/kustomization.yml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base
patchesStrategicMerge:
  - port.yml
namespace: foobar-dev

cat overlays/dev/port.yml
---
apiVersion: v1
kind: Service
metadata:
  name: foobar-service
spec:
  ports:
    - protocol: TCP
      port: 7777
      targetPort: 80

Using `kubectl kustomize` we can now generate the final full manifest with a specific overlay:

kubectl kustomize overlays/dev
...

kubectl kustomize overlays/prod

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: foobar
  name: foobar-service
  namespace: foobar-prod
spec:
  ports:
  - port: 9999
    protocol: TCP
    targetPort: 80
  selector:
    app: foobar
    app.kubernetes.io/name: Foobar

`patchesStrategicMerge` is one of many built-in generators and transformers that can be used to modify the base resources; an extensive list can be found in the documentation.
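For instance (a sketch; the names and values are made up, the field names are as documented), a configMapGenerator can create a ConfigMap from literals and the images transformer can rewrite image references:

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - service.yml
configMapGenerator:
  - name: foobar-config    # generates a ConfigMap, suffixed with a content hash
    literals:
      - LOG_LEVEL=debug
images:
  - name: foobar           # rewrites all references to this image
    newTag: "1.2.3"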
Kustomize offers a quick and easy way to structure YAML manifests, and being built into kubectl lowers the barrier to entry. You can, for example, ship the kustomize code directly with your application. However, it can be somewhat cumbersome to work with in larger codebases and is not really suited for distribution.

Helm

Helm uses a template language to generate YAML that can then be sent to the Kubernetes API, either by Helm itself or by kubectl. It is widely used in the Kubernetes ecosystem: in the CNCF Cloud Native Survey 2020, 63% of the 1,324 respondents named Helm as their preferred method for packaging.

In order to use Helm we need its CLI and a Helm Chart, which we either create ourselves or take from an existing source. A Helm Chart consists of one or more templates and a Chart.yaml (containing metadata) and can then be packaged and hosted for others to download, or be used locally.
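A minimal Chart.yaml might look like this (a sketch; description and version numbers are made up):

cat Chart.yaml
---
apiVersion: v2
name: foobar
description: A Helm chart for the foobar Service  # made-up description
type: application
version: 0.1.0       # version of the Chart itself
appVersion: "1.0.0"  # version of the packaged application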

The templates use the Go Template Language, which gives us various options to customize data, such as variables, pipelines and functions. For example, a placeholder like “{{ .Values.service.port }}” can be filled in by the user, who supplies the values in a YAML file. A values.yaml file serves as a place for defaults.

cat templates/service.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "foobar.fullname" . }}
  labels:
    {{- include "foobar.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "foobar.selectorLabels" . | nindent 4 }}


cat values.yaml  # The defaults
---
service:
  type: ClusterIP  # referenced in the template as .Values.service.type
  port: 8080

Custom value files can now be created in order to change what is rendered:

cat prod.yaml  # A custom values file
---
service:
  port: 9999

cat dev.yaml  # Another custom values file
---
service:
  port: 7777

The Helm CLI can then be used to either render the final YAML manifest or talk directly to the Kubernetes API (creating the resources).

# helm template name-of-the-release path-to-helm-code -f path-to-values-file
helm template foobar-release foobar/ -f foobar/prod.yaml

# Now we can change the port depending on our value file
helm template foobar-release foobar/ -f foobar/prod.yaml | grep port:
   - port: 9999
helm template foobar-release foobar/ -f foobar/dev.yaml | grep port:
   - port: 7777

# Directly create the resources in the cluster
helm install foobar-release foobar/ -f foobar/prod.yaml

Getting started with Helm is usually quite simple and rewarding; it quickly reduces the amount of YAML and offers options for packaging one's applications. In my experience, things get tricky once the parameterized templates start growing. Trying to cover every option and edge case while maintaining sanity becomes a balancing act. Nonetheless, when working with Kubernetes you are likely to run into Helm, maybe even without knowing it.
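To illustrate the kind of growth meant above (a sketch; the value names service.enabled and service.nodePort are assumed, not part of the example Chart), conditionals start to pile up like this:

{{- if .Values.service.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ include "foobar.fullname" . }}
spec:
  type: {{ .Values.service.type | default "ClusterIP" }}
  ports:
    - port: {{ .Values.service.port | default 8080 }}
      {{- if .Values.service.nodePort }}
      nodePort: {{ .Values.service.nodePort }}  # only meaningful for NodePort Services
      {{- end }}
{{- end }}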

Terraform

Terraform, a tool to write Infrastructure as Code, can also be used to deploy Kubernetes resources.

Terraform uses a DSL named HashiCorp Configuration Language (HCL) to describe the desired infrastructure; this code is then applied with the Terraform CLI. By storing the state of what is already deployed, Terraform knows what to change and when, in a declarative way. It uses “Providers” to bundle the logic for talking to a specific API (such as Hetzner, AWS, or Kubernetes).

In this example we will use a very simple Terraform project structure; a more complex structure is possible, however. In a central Terraform file we describe the resource we want to create, together with its variables:

provider "kubernetes" {
  # Here we could specify how to connect to a cluster
}

# Default variables
variable "port" {
  type        = string
  description = "Port of the Service"
  default     = "8080"
}

# The resources template
resource "kubernetes_service" "foobar" {
  metadata {
    name = "foobar-example"
  }
  spec {
    selector = {
      app = "Foobar"
    }
    session_affinity = "ClientIP"
    port {
      port        = var.port
      target_port = 80
    }

    type = "ClusterIP"
  }
}

The previously defined variables can be used to change the values depending on our needs. Within tfvars files we can create different sets of variables for different deployments of the resource:

# tfvars contain the actual values that we want
cat terraform/prod.tfvars
port = "9999"

cat terraform/dev.tfvars
port = "7777"

We can now preview the planned changes or talk to the Kubernetes API to directly create the resources.

terraform plan -var-file=dev.tfvars

  # kubernetes_service.foobar will be created
  + resource "kubernetes_service" "foobar" {

terraform plan -var-file=prod.tfvars

  # kubernetes_service.foobar will be created
  + resource "kubernetes_service" "foobar" {
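Once the plan looks as expected, the same variable file can be passed to terraform apply to actually create the Service in the cluster:

terraform apply -var-file=prod.tfvars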

Terraform offers a nice solution when you are already using it to manage infrastructure, giving you a one-stop shop for deploying infrastructure and the applications on top. Introducing Terraform into the tool-chain might be a hurdle though, since tools like Helm might be more suited when it comes to working with Kubernetes alone.

kpt

kpt uses predefined functions to modify YAML manifests, instead of templates or YAML generators/overlays. These functions – e.g. setters and substitutions – are then applied in a declarative way to a base YAML manifest. All of this is combined in a kpt Package, containing a Kptfile (metadata and a declaration of which functions are applied) and YAML manifests of Kubernetes resources.

Kptfiles are written in YAML and are similar to Kubernetes resources. Within the Kptfile we declare a pipeline that contains the functions to be applied, for example:

pipeline:
  mutators:
    - image: gcr.io/kpt-fn/set-labels:v0.1
      configMap:
        app: foobar
  validators:
    - image: gcr.io/kpt-fn/kubeval:v0.3

As we can see, the functions can have different types (mutators, validators, etc.) and are packaged as OCI Images. Such an OCI Image contains the code that executes the described function. For example, “set-labels” is a Golang tool which adds a set of labels to resources. You could also write your own.
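Functions can also be run one-off against a directory of manifests, without a Kptfile; a sketch (arguments after the -- are passed to the function):

kpt fn eval . --image gcr.io/kpt-fn/set-labels:v0.1 -- app=foobar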

ls foobar/

Kptfile  # Contains metadata and functions to apply
service.yaml # Kubernetes Manifest

cat Kptfile
---
apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: foobar-example
pipeline:
  mutators:
    - image: gcr.io/kpt-fn/apply-setters:v0.2.0
      configMap:
        foobar-service-port: "8080"  # value for the kpt-set marker in service.yaml

cat service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: foobar-service
  labels:
    app: foobar
  annotations:
    internal.kpt.dev/upstream-identifier: '|Service|default|foobar-service'
spec:
  selector:
    app.kubernetes.io/name: Foobar
  ports:
    - protocol: TCP
      port: 8080 # kpt-set: ${foobar-service-port}
      targetPort: 80

Once we are done with our pipeline definition, our kpt package is ready to deploy. We can either directly talk to the Kubernetes API and deploy, or just render the YAML:

kpt fn render --output unwrap
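Deploying directly works via the live subcommand; a sketch (the package first needs to be initialized once):

kpt live init   # creates the inventory metadata for the package
kpt live apply  # creates/updates the resources in the cluster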

Packaging in kpt is based on Git forking. Once we are done with our package, we can host it on a Git server and users fork the package to use it. Users can then define their own pipelines to customize the manifests.
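Fetching such a hosted package is done with kpt pkg get; a sketch with a made-up repository URL:

kpt pkg get https://github.com/example/foobar.git/foobar@v1.0 foobar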

Note that kpt is still in development and subject to many major changes (the examples here were done with version 1.0.0-beta.24). This is currently my main issue with kpt; otherwise I think it is a very neat solution. Having the option to download and fully extend existing packages is very interesting, all while simply working with YAML manifests.

If you want to learn more about kpt, check out the “Kubernetes Podcast from Google” Episode 99 (https://kubernetespodcast.com/episode/099-kpt/).

Conclusion

The application management tools shown here use either templates, generators/overlays or functions to generate YAML. Another approach to managing applications is using Custom Resources in Kubernetes that are managed by a custom Controller. Examples of such tools would be Acorn (https://acorn.io/) or Ketch (https://www.theketch.io/).

Personally, I do not think that these CRD based tools are an optimal solution.

The complexity they introduce might be fine if the benefit is worth it; however, my real issue is with adding an abstraction on top of the API, meaning you never learn to work with the API itself. While developers might want to focus on the application and not the infrastructure, it is still very likely that you will have to know how the API works (i.e. for monitoring or debugging). When learning how to write/render correct Kubernetes manifests for your application, you also learn how to work with Kubernetes resources.

Furthermore, some of the CRD-based tools might have limitations when creating resources (i.e. the Ketch version at the time of writing requires you to create PVCs yourself). Thus, depending on the solution, you still need to learn how to write YAML.

That being said, it is true that template- or DSL-based tools have a similar downside. It can be quite tricky to cover all use cases in a parameterized template, which can make them difficult to maintain. I recommend this design proposal for further reading on the topic:

https://github.com/kubernetes/design-proposals-archive/blob/main/architecture/declarative-application-management.md#parameterization-pitfalls

The application management ecosystem is still a very active and changing part of Kubernetes. We will probably see a couple of solutions come and go until it stabilizes, if ever. Helm will probably stick around due to its current popularity and the momentum this carries.

I personally will keep an eye on kpt, which seems to strike a very nice balance between complexity and modularity, while still working with YAML. Oh how I love YAML⸮


Markus Opolka
Senior Consultant

After training as an IT specialist, Markus worked as a system administrator for several years, during which he also completed a Master's degree in linguistics at FAU. Since 2022 he has been a Consultant at NETWAYS, where he takes care of containers, Kubernetes, Puppet and Ansible. In his free time you will find him on his bike, on the sofa, or on GitHub.

