Docker Security Best Practices: Part 3 – Securing Container Images
Previously in our Docker Security Best Practices series, we took a deeper look at securing the Docker host and the best practices to follow there. This post continues the series, focusing on Docker images: the challenges that come with securing these artifacts, and the countermeasures that can be taken to achieve a stronger container image security posture. Host and runtime security considerations are out of scope for this post.
Background on images
Simply put, a Docker image is a collection of data that includes all of the files, software packages, and metadata needed to create a running instance of a Docker container. In essence, an image is a template from which a container can be instantiated. Images are immutable: once built, they cannot be changed. Making a change instead results in a new image being built.
Docker images are built in layers. A core component of an image is the ‘base layer’: the foundation onto which all other components/layers are added. Base layers are commonly minimal, typically a stripped-down version of a common OS.
Images are most often stored in a central location called a registry. From registries like Docker Hub, developers can store their own images, or find and download images that have already been created.
A fundamental approach to countering threats to container images is automating their building and testing. Organizations should set up tooling to analyze images continuously; in short, development teams need a structured, reliable process for building and testing the Docker images they produce. Container image pipelines should employ tools specifically designed to uncover vulnerabilities, configuration defects, and deviations from security best practices. This tooling should also let developers create governance around the images being scanned: based on configurable policy rules/gates, an image can pass or fail the image scan step in the pipeline, and a failing image is not allowed to progress further.
A simple example of how this might look:
- Developer commits code changes to source control
- CI platform builds container image
- CI platform pushes container image to staging registry
- CI platform calls a tool to scan the image
- The tool passes or fails the image based on the policy mapped to the image
- If the image passes the policy evaluation and any other tests defined in the pipeline, the image is pushed to a production registry.
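The steps above could be sketched with shell commands along the following lines. The registry hostnames, the image name, and the use of the anchore-cli scanner are illustrative assumptions, not prescribed tooling:

```shell
# Build the image from the committed code (image and registry names are placeholders)
docker build -t registry.staging.example.com/myapp:"$GIT_COMMIT" .

# Push to the staging registry for analysis
docker push registry.staging.example.com/myapp:"$GIT_COMMIT"

# Ask the scanner to analyze the image, then evaluate it against policy;
# a non-zero exit code here fails the pipeline stage
anchore-cli image add registry.staging.example.com/myapp:"$GIT_COMMIT"
anchore-cli evaluate check registry.staging.example.com/myapp:"$GIT_COMMIT"

# Only on success: promote the image to the production registry
docker tag registry.staging.example.com/myapp:"$GIT_COMMIT" registry.example.com/myapp:"$GIT_COMMIT"
docker push registry.example.com/myapp:"$GIT_COMMIT"
```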
As part of our continuous approach, packages and components within the image should be scanned for common and known vulnerabilities. Image scanning should be able to uncover vulnerabilities contained within all layers of the image, not just the base layer. Moreover, image inspection and analysis should be able to detect vulnerabilities for OS and non-OS packages container within the images, as there are oftentimes vulnerable third-party libraries as part of application code. Should a new vulnerability for a package be published after the image has been scanned, the tool should be able to retrieve new vulnerability info for the applicable component, and alert the developers so remediation can begin.
Organizations should be able to create and enforce policy rules based on the severity of the vulnerability as defined by the Common Vulnerability Scoring System (CVSS).
Example: If the image contains any vulnerable packages with a severity greater than medium, stop this build.
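As a rough sketch, a rule like the example above might be expressed in a scanner's policy file along these lines. The exact schema varies by tool and version; this fragment loosely follows the shape of an Anchore Engine policy rule and should be treated as illustrative:

```json
{
  "gate": "vulnerabilities",
  "trigger": "package",
  "action": "stop",
  "params": [
    { "name": "package_type", "value": "all" },
    { "name": "severity_comparison", "value": ">" },
    { "name": "severity", "value": "medium" }
  ]
}
```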
Images should be configured to adhere to the common best practices listed below.
Create a user for the container image
Containers should be run as a non-root user whenever possible. The USER instruction within the Dockerfile defines this.
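A minimal Dockerfile sketch of this practice, assuming an Alpine base image (the user and group names are illustrative):

```dockerfile
FROM alpine:3.18

# Create an unprivileged user and group for the application
RUN addgroup -S app && adduser -S -G app app

# All subsequent instructions, and the running container, use this user
USER app

CMD ["sh"]
```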
Use trusted base images for container images
Ensure that the container image is based on an established and trusted base image downloaded over a secure channel. Official repositories are Docker images curated and optimized by the Docker community or the associated vendor. Within organizations, developers should connect to and download images only from secure, trusted, private registries. These trusted images should be built on minimal technologies whenever possible to reduce attack surface.
Docker Content Trust and Notary can be configured to give developers the ability to verify image tags and enforce client-side signing for data sent to and received from remote Docker registries. Content trust is disabled by default.
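For example, content trust can be enabled per shell session via an environment variable, after which pulls and pushes of unsigned tags are refused (the image tag below is just an example):

```shell
# Enable Docker Content Trust for this shell session
export DOCKER_CONTENT_TRUST=1

# This pull now succeeds only if the tag has a valid signature
docker pull alpine:3.18
```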
Do not install unnecessary packages in the container
As stated above, images should be selected from minimalistic technologies whenever possible to reduce size and attack surface. Additionally, packages outside the scope and purpose of the container should not be installed.
Add a HEALTHCHECK instruction to the container image
The HEALTHCHECK instruction tells Docker how to test a container to check that it is still working, and it should be added to Dockerfiles. Based on the result of the health check (unhealthy), Docker can stop a non-working container and instantiate a new one.
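A sketch of what this could look like in a Dockerfile, assuming the containerized service exposes an HTTP health endpoint on port 8080 (the endpoint, intervals, and use of curl are illustrative):

```dockerfile
# Poll the service every 30s; after 3 consecutive failures,
# the container is marked unhealthy
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1
```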
Do not use update instructions alone in the Dockerfile
Do not use update instructions such as apt-get update alone on a single line in the Dockerfile. Instead, combine the update with the package installation in a single RUN instruction:

RUN apt-get update && apt-get install -y \
    bzr \
    cvs \
    git \
    mercurial \
    subversion

Combining the two commands in one RUN instruction ensures that the install always runs against a fresh package index, which helps avoid duplication of packages and makes updates easier.
Related: see the Dockerfile best practices documentation on leveraging the build cache for insight on how to reduce the number of layers, along with overall Dockerfile best practices.
Use COPY instead of ADD when writing Dockerfiles
The COPY instruction copies files from the local host machine into the container file system. The ADD instruction, by contrast, can also retrieve files from remote URLs and perform unpacking operations. Because ADD can bring in files remotely, it increases the risk of introducing malicious packages and vulnerabilities from remote URLs.
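A short illustration of the preferred pattern (the paths are placeholders):

```dockerfile
# Preferred: COPY only copies from the local build context
COPY ./app /usr/src/app

# Riskier: ADD can fetch remote URLs and auto-extract archives
# ADD https://example.com/app.tar.gz /usr/src/app
```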
Do not store secrets in Dockerfiles
Do not store any secrets within container images. Developers may sometimes leave AWS keys, API keys, or other secrets inside images. If attackers were to grab these secrets/keys, they could exploit them. Secrets should always be stored outside of images and provided dynamically at runtime, as needed.
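One common way to provide secrets at runtime is an environment file that lives outside the image; the file and image names here are hypothetical:

```shell
# secrets.env is kept outside the image and outside source control
docker run --env-file ./secrets.env myapp:latest
```

For production workloads, a dedicated secrets manager or the orchestrator's secrets mechanism is generally preferable to plain environment files.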
Only install verified packages in containers
Only verified packages from trusted sources should be downloaded and installed. If you are downloading a package via apt-get from the official Debian repositories, the package manager verifies package signatures for you. To see how verification can be performed manually within a Dockerfile, see the official Redis Dockerfile.
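The Redis Dockerfile pattern referenced above boils down to downloading the artifact together with its detached signature and verifying it with GPG before installation. A sketch of that pattern, with the URLs and key fingerprint as placeholders:

```dockerfile
# Illustrative only: fetch an artifact plus its signature and verify it.
# APP_GPG_KEY must be set to the publisher's real key fingerprint.
ARG APP_GPG_KEY
RUN wget -O app.tar.gz "https://example.com/app.tar.gz" \
 && wget -O app.tar.gz.asc "https://example.com/app.tar.gz.asc" \
 && gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$APP_GPG_KEY" \
 && gpg --batch --verify app.tar.gz.asc app.tar.gz
```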
One tool for implementing the best practices above is Anchore. Anchore is a service that conducts static analysis on Docker images, and evaluates these images against user-defined checks. With Anchore, vulnerabilities within packages for OS and non-OS components can be identified, and the image configuration best practices described above can be enforced via policy rules.
With Anchore, policies can be configured to check for the following:
- Image metadata
- Exposed ports
- Effective users
- Dockerfile instructions
- Password files
One potential implementation for building secure and compliant container images in a CI pipeline is to use the open source Jenkins CI tool along with Anchore for scanning and policy checks.
By following a policy-based compliance approach, organizations can vastly improve their container image posture by implementing Anchore policies tightly mapped to the above container image best practices within a CI/CD platform.