In recent years, the adoption of microservices and containerized architectures has steadily risen, with everyone from small startups to major corporations joining the push into the container world. According to VMware, 59 percent of large organizations surveyed use Kubernetes to deploy their applications into production. As organizations move containers into production, keeping security at the forefront of development becomes more critical: although containers are ideally immutable, they are still applications exposed to security vulnerabilities. A compromise of the underlying container orchestrator can have a massive impact, making securing your applications one of the most important aspects of deployment.
Securing the Underlying Infrastructure
Securing the infrastructure that Kubernetes runs on is just as important as securing the servers that run traditional applications. There are many security guides available, but keeping the following points in mind is a great place to start.
- Secure and configure the underlying host. Check your configuration against the CIS Benchmarks, which provide clear standards for configuring everything from operating systems to cloud infrastructure.
- Minimize administrative access to Kubernetes nodes. Restricting access to the nodes in your cluster helps prevent insider threats and reduces the opportunity for malicious users to escalate privileges. Most debugging and other tasks can typically be handled without direct access to the node.
- Control network access to sensitive ports. Limiting network access to commonly known ports, such as port 22 for SSH or ports 10250 and 10255 used by the Kubelet, reduces the attack surface available to malicious users. Security Groups (AWS), Firewall Rules (GCP), and Azure Firewall (Azure) are simple, straightforward ways to control access to your network resources.
- Rotate infrastructure access credentials frequently. Setting shorter lifetimes on secrets, keys, and access credentials limits the window in which a stolen credential is useful. Following a recommended rotation schedule greatly reduces the value of any credential an attacker does obtain.
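Two of the points above can be sketched at the command line. The open-source kube-bench tool from Aqua Security automates many CIS Kubernetes Benchmark checks, and cloud-provider firewalls such as AWS Security Groups can restrict sensitive ports; the security group IDs below are hypothetical placeholders, and exact flags may vary by tool version.

```shell
# Check a worker node against the CIS Kubernetes Benchmark
# (kube-bench is Aqua Security's open-source CIS checker)
kube-bench run --targets node

# Allow Kubelet traffic (port 10250) to worker nodes only from the
# control plane's security group; the sg-... IDs are placeholders
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaaaworkernodes \
  --protocol tcp \
  --port 10250 \
  --source-group sg-0bbbbcontrolplane
```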
Securing the Kubernetes Configuration
Hardening the configuration of Kubernetes itself, and of any secrets it holds, is another critical component of securing your organization’s operational infrastructure. Here are some helpful tips to focus on when deploying to Kubernetes.
- Encrypt secrets at rest. Kubernetes stores everything accessible via the Kubernetes API, including Secrets and ConfigMaps, in an etcd database; this is essentially the actual and desired state of the entire system. Encrypting this data at rest helps protect the entire system.
- Enable audit logging. Kubernetes clusters can enable audit logging, which keeps a chronological record of calls made to the API. Audit logs are useful for investigating suspicious API requests, collecting statistics, and creating monitoring alerts for unwanted API calls.
- Control the privileges containers are allowed. Limiting a container’s privileges is crucial to preventing privilege escalation. Kubernetes includes pod security policies that can enforce these limits. Container applications should be written to run as a non-root user, and administrators should apply a restrictive pod security policy to prevent applications from escaping their containers.
- Control access to the Kubelet. The Kubelet's HTTPS endpoint exposes APIs that give access to data of varying sensitivity and allow operations of varying power on the node and within containers. By default, the Kubelet allows unauthenticated access to this API, so securing it is essential for production environments.
- Enable TLS for all API traffic. Kubernetes expects all API communication within the cluster to be encrypted with TLS. The Kubernetes APIs and most installation methods encrypt this traffic by default, but API communication between deployed applications may not be encrypted. Administrators should pay close attention to any applications that communicate over unencrypted API calls, because that traffic is exposed to potential attackers.
- Control which nodes pods can run on. Kubernetes does not restrict pod scheduling on nodes by default, but it is a best practice to leverage Kubernetes’ in-depth pod placement policies, including labels, nodeSelector, and affinity/anti-affinity rules.
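To illustrate encryption at rest, here is a minimal EncryptionConfiguration sketch, passed to the API server via its --encryption-provider-config flag; the key shown is a placeholder, not a real secret.

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # Encrypt newly written Secrets with AES-CBC
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>   # placeholder
      # Allow reading Secrets written before encryption was enabled
      - identity: {}
```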
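For audit logging, a minimal audit policy (referenced by the API server's --audit-policy-file flag) might look like this sketch, which records only metadata for Secrets so their payloads never land in the log:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Never log Secret payloads; record metadata only
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # Log request bodies for all other API calls
  - level: Request
```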
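To lock down the Kubelet, its configuration file can disable anonymous access and delegate authorization to the API server; a sketch, where the CA file path is an assumption about your cluster layout:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false       # reject unauthenticated requests
  webhook:
    enabled: true        # validate bearer tokens against the API server
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt   # assumed path
authorization:
  mode: Webhook          # let the API server authorize Kubelet requests
```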
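Container privileges and pod placement can both be expressed directly in a pod spec. A restrictive sketch, where the workload name, image, and node label are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payments-api            # hypothetical workload
spec:
  nodeSelector:
    security-tier: hardened     # schedule only onto nodes with this label
  containers:
    - name: app
      image: example/payments:1.0   # hypothetical image
      securityContext:
        runAsNonRoot: true              # refuse to start as root
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]                 # drop all Linux capabilities
```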
Securing Containerized Applications
Regardless of how it is deployed, an application running in a container is subject to the same vulnerabilities as it would be outside one. At Anchore, we focus on helping identify which vulnerabilities apply to your containerized applications, and the following are some of the key takeaways we’ve learned.
- Scan early, scan often. Shifting security left in the DevSecOps pipeline helps organizations identify potential vulnerabilities early in the process. Shift Left with a Real World Guide to DevSecOps walks you through the benefits of moving security earlier in the DevSecOps workflow.
- Incorporate vulnerability analysis into CI/CD. Several of our blog posts cover integrating Anchore with CI/CD build pipelines. We also have documentation on integrating with some of the more widely used CI/CD build tools.
- Use multi-stage builds to keep software compilation out of runtime images. Take a look at our blog post on Cryptocurrency Mining Attacks for information on how Anchore can help prevent these vulnerabilities and how multi-stage builds come into play.
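As a sketch of a multi-stage build (the Go module layout and image tags are hypothetical): the compiler exists only in the first stage, so the final image ships no toolchain or shell for an attacker to abuse.

```dockerfile
# Build stage: contains the full Go toolchain
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server   # hypothetical module layout

# Runtime stage: only the static binary, no compiler or shell
FROM gcr.io/distroless/static:nonroot
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```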
With the shift toward containerized production deployments, it is important to understand how security plays a role at each level of the infrastructure: from the underlying hosts, to the container orchestration platform, and finally to the containers themselves. By keeping these guidelines in mind, security shifts from an afterthought to a consideration at every step of the DevSecOps workflow.
Need a better solution for managing container vulnerabilities? Anchore's Kubernetes vulnerability scanning can help.