So you know all about Kubernetes and how it manages your containers, hosts your ingresses, mounts your volumes, schedules your jobs, feeds your dog and makes your coffee. You know that Kubernetes is for devops folks who don’t want to be woken up at 4 AM on Sunday to rebuild a failed server or to spend three months re-architecting an application when the CFO decides you should use a cheaper cloud provider. Kubernetes lets you replace your server after lunch on Monday and migrate your app to a new cloud between meetings on Tuesday.
Your app’s Kubernetes primitives (deployments, pods, jobs, volumes, claims, services, ingresses, and so on) need to be managed somehow. If you’re like us, your first attempt would be to write your resource YAMLs explicitly and keep them in source control. If you need to support a few different environments, such as multiple cloud providers, a cloud plus a bare-metal cluster, or Minikube, then you’d probably copy your YAMLs into a directory for each environment and make whatever changes that environment requires.
This method has a few problems: the same YAML is duplicated for every environment, every change must be applied to each copy by hand, and the copies inevitably drift out of sync.
Helm solves all these problems. In Helm, Kubernetes resource YAMLs are written as templates, and the collection of templates and related information is called a “chart”. Templates are very flexible: they allow resources to be included and configured based on data provided to Helm. Helm also understands the application lifecycle, which makes installation, upgrade, and removal a breeze.
As an example of a Helm-templated resource definition, consider:
{{- if .Values.dbdump.enabled }}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: dbdump
spec:
  schedule: '{{ default "0 8 * * *" .Values.dbdump.schedule }}'
  jobTemplate:
    spec:
      [...many lines snipped...]
{{- end }}
This simple example shows two basic template techniques: the whole resource is wrapped in an if block, so the CronJob is created only when dbdump is enabled in the supplied values, and the schedule falls back to a default of 8 AM daily when no value is provided.
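To make the example concrete, here is a sketch of values that could drive that template. The dbdump keys come from the template above; the specific values are made up for illustration:

# values.yaml (illustrative) -- data passed to the template above
dbdump:
  enabled: true            # render the CronJob at all
  schedule: "0 2 * * *"    # overrides the chart's default of "0 8 * * *"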
Here are a few benefits of using Helm:
There’s no need to keep resource YAML that’s specific to a single environment. YAML files are templated and can install whatever resources are necessary for the environment where the application is being installed. For example, when a pod is configured, a value can come from the Helm command line, and a default value can be given by the chart. This is useful for allowing an application to use a specific persistent volume claim, if provided, and to create a new PVC if none is given. The templates can also be used to install a resource only under certain conditions. For example, when using Google Cloud, the chart could deploy resources for a Google Cloud Load Balancer, but when using Minikube it could deploy an ingress resource. Alternatively, the conditions can be based on business requirements, such as deploying a search service only if the customer has paid for a search feature.
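To illustrate the persistent volume claim case above, here is a minimal sketch of the pattern. The value names (persistence.existingClaim, persistence.size) and the claim name are hypothetical, not taken from any particular chart:

{{- if not .Values.persistence.existingClaim }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ default "10Gi" .Values.persistence.size }}
{{- end }}

The deployment’s volume can then refer to {{ default "app-data" .Values.persistence.existingClaim }}, so an operator can pass --set persistence.existingClaim=my-claim on the Helm command line to reuse existing storage, or pass nothing and let the chart create a fresh claim.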
Helm understands Kubernetes primitives. For example, when installing an application Helm will install volumes and networking before the deployments that depend on them. You don’t need to worry about adding resources in the correct order or about forgetting to add a resource. Helm also understands what resources need to be restarted when they are reconfigured and what needs to be deleted if a resource is removed from a chart.
This capability also makes it easy to recover from failures. For example, if you accidentally delete a configmap, you can simply re-install the application to restore the missing resource.
Helm charts are versioned. It’s easy to see what applications were installed from old versions of the chart.
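The version lives in the chart’s Chart.yaml. A minimal sketch, assuming Helm 3’s chart format and a hypothetical chart name:

# Chart.yaml (hypothetical chart)
apiVersion: v2
name: myapp
description: Example application chart
version: 1.4.2        # the chart's own version, bumped on every chart change
appVersion: "2.0.1"   # the version of the application the chart installs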
Helm even helps manage tasks outside the Kubernetes cluster. It offers lifecycle hooks, which are containers that Helm runs when certain release events happen, such as before or after an application is installed, or after it is removed.
These hooks can be used to automate provisioning of databases, DNS records, storage accounts, or any other resource inside or outside the cluster.
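Hooks are declared with an annotation on an otherwise ordinary resource. A minimal sketch, assuming a hypothetical Job that loads seed data into a database after installation:

apiVersion: batch/v1
kind: Job
metadata:
  name: seed-database
  annotations:
    "helm.sh/hook": post-install                  # run this Job after the release is installed
    "helm.sh/hook-delete-policy": hook-succeeded  # remove the Job once it completes
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: seed
          image: myapp/db-seed:1.0   # hypothetical image that loads the seed data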
Helm manages your application dependencies. A chart can specify other charts as dependencies, ensuring important services are installed when the application is deployed. The dependencies can be conditional, letting you, for example, use a managed cloud database in one environment and an in-cluster database when running on Minikube, as sketched below.
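A sketch of such a dependency, assuming Helm 3’s Chart.yaml format (in Helm 2 the same block lives in requirements.yaml) and an assumed chart repository:

# Chart.yaml -- dependency on an in-cluster database, enabled per environment
dependencies:
  - name: postgresql
    version: "11.x.x"
    repository: https://charts.bitnami.com/bitnami   # assumed chart repository
    condition: postgresql.enabled   # dependency is installed only when this value is true

A cloud environment’s values file can set postgresql.enabled to false and point the app at a managed database instead, while a Minikube values file enables the in-cluster one.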
I think you’ll agree: Helm is the sane way to deploy applications in Kubernetes.