What is Helm?

In basic terms, Helm is a package manager for Kubernetes that makes it easy to take repeatable, scalable applications and services and deploy them to a Kubernetes cluster. Helm deploys applications using charts, which are the final packaged artifact: a complete collection of files describing the set of Kubernetes resources required to deploy the application successfully. Helm is by far the most popular way to manage Kubernetes applications and releases; it is a graduated project of the Cloud Native Computing Foundation (CNCF), and the Helm charts repository on GitHub has more than 14,000 stars.

Out of the box, there are major benefits to using Helm. To name a few: deployments become simpler and more streamlined. Software vendors can provide a set of base defaults for an application, and developers can override or extend those settings during installation to suit the requirements of their deployments. These defaults are typically defined in the chart's deployment templates and exposed through a values.yaml file, which developers can override by passing their own values to the chart at install time. This ease of use takes some of the sting out of Kubernetes' steep learning curve: developers don't need a deep understanding of every Kubernetes object in order to deploy an application quickly. Finally, a chart built and maintained by a software vendor can be reused over and over by a large audience, reducing the duplication and complexity of customer and user releases across multiple environments.
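For example, a developer might install a vendor's chart with its defaults and then layer on overrides with a values file or inline flags. The repository URL, chart name, and release name below are illustrative placeholders:

    # add the vendor's chart repository and install with the chart defaults
    helm repo add vendor https://charts.example.com
    helm install my-release vendor/some-app

    # override the defaults with your own values file and/or inline settings
    helm install my-release vendor/some-app -f my-values.yaml --set replicaCount=3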

While the benefits described above are compelling, it is important to understand the risks associated with using these artifacts. Below are two major areas to consider when beginning to work with Helm charts.

Container Images & Helm Deployments

Since Helm charts deploy Kubernetes applications, they include references to container images in the YAML manifests, which then run as containers on the cluster. Many charts reference several images (some optional, some required) for the application to start. Because of this, gaining visibility into the images that will be used in your Helm deployment via image inspection should be a mandatory step in your deployment process. For example, the MariaDB Helm chart includes a reference to the image docker.io/bitnami/mariadb:10.3.22-debian-10-r27 in its values.yaml file.
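In the chart's values.yaml, that reference is typically split into registry, repository, and tag keys, roughly like the abbreviated sketch below (the exact layout depends on the chart and version):

    image:
      registry: docker.io
      repository: bitnami/mariadb
      tag: 10.3.22-debian-10-r27
      pullPolicy: IfNotPresent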

Ordinarily, it is a good idea to check whether this container image has any known vulnerabilities or misconfigurations that could be exploited by an attacker. This is also a good example of the configuration flexibility Helm provides: if I wanted to use a different image with this chart, I could swap out the registry/repository/tag combination in values.yaml and deploy with another image. It is important to be careful here, though; the deployment configuration is often specific to the software being packaged, so there is no guarantee that a newer version of MariaDB will work with the chart's templates.
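As a rough sketch, assuming the Bitnami repository has been added and the chart exposes the usual image.* keys, a pre-deployment scan and a tag override might look like the following. Trivy is just one of several open source image scanners, and the alternate tag is a placeholder you would need to verify against the chart:

    # scan the image the chart references for known vulnerabilities
    trivy image docker.io/bitnami/mariadb:10.3.22-debian-10-r27

    # install the chart with a different image tag (placeholder value)
    helm install my-mariadb bitnami/mariadb --set image.tag=<alternate-tag>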

Managing the sets of images that exist across various deployments in a Kubernetes environment requires continuous monitoring of the images running as workloads in order to manage the sprawl effectively. While there is no silver bullet, integrating image inspection, enforcement, and triage into your development and delivery workflow will greatly improve your security posture and reduce the time to resolution should containerized applications become vulnerable.
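One lightweight way to gain that visibility is to render a chart locally and enumerate every image it references before anything reaches the cluster, then feed that list into your scanner of choice. A sketch, again assuming the Bitnami repository has been added:

    # render the chart's manifests without installing, then list referenced images
    helm template my-mariadb bitnami/mariadb | grep -E '^[[:space:]]*image:' | sort -u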

Configuration Awareness For Secure Deployments

As discussed above, Helm charts contain Kubernetes YAML manifest files, which describe the properties and characteristics of the Kubernetes objects that will be deployed on the cluster. Many defaults within these files can be overridden at deployment time via the values.yaml file or inline flags. It is very easy to deploy an application without CPU or memory limits, without a security context, or with a container running with the SYS_ADMIN capability or in privileged mode.
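Many charts let you tighten these settings through values overrides. The sketch below follows a common naming convention, but the key names vary from chart to chart, so check the chart's own values.yaml before relying on them:

    # my-values.yaml (key names are chart-dependent)
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 512Mi
    containerSecurityContext:
      runAsNonRoot: true
      allowPrivilegeEscalation: false
      privileged: false
      capabilities:
        drop:
          - ALL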

Quite often, there are optional configurations documented in Helm charts that can greatly enhance the security of the deployment. The default Helm deployment tends to focus on “getting the software up and running” rather than on a secure-by-default configuration. Without getting too deep into the intricacies of Kubernetes and runtime security, it is important to understand exactly what Helm is doing behind the scenes with the application you are deploying, and what configuration tweaks you can make to the Kubernetes objects to secure your deployment.
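A simple habit that helps here is reviewing exactly what will be sent to the cluster before (and after) installation. Both commands below are standard Helm 3; the release and chart names carry over from the earlier example:

    # render and inspect the manifests that would be applied, without installing
    helm install my-mariadb bitnami/mariadb --dry-run --debug

    # after installation, inspect the manifests Helm actually submitted
    helm get manifest my-mariadb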

In a similar vein, there are often application-specific configurations that are not obvious, or not exposed for modification via the values.yaml file at all. I highly recommend taking a look at the deployment templates, ConfigMaps, and other chart files to ensure you are deploying the application in the right way for your security needs.
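Pulling the chart locally makes that review straightforward; for example:

    # print the chart's default values
    helm show values bitnami/mariadb

    # download and unpack the chart to review its templates, configmaps, etc.
    helm pull bitnami/mariadb --untar
    ls mariadb/templates/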

Last, but certainly not least, once you have deployed your applications in a Kubernetes environment, appropriate measures need to be taken to monitor for and respond to malicious activity, network anomalies, container escapes, and so on. These are a separate set of challenges from managing deployment and configuration artifacts in a build workflow, and they often require tools designed specifically for forensics and monitoring of a runtime environment.

Conclusion

The benefits of using Helm are easy to see, and many developers can get started quickly by deploying containerized applications to Kubernetes with a simple "helm install." However, as with any tool that introduces a significant amount of abstraction to reduce deployment complexity, taking a methodical approach to understanding the nuts and bolts of what is going on behind the scenes is a recommended security practice. With the basic tips above, hopefully you are well on your way to understanding a bit more about Helm, so you can deploy Kubernetes applications securely.