Complete Guide to Hardening Containers with STIG
Preparing your containers and navigating your way through the STIG approval process can be daunting. This white paper will help your organization align for STIG readiness.
4 Ways to Prepare your Containers for the STIG Process
The Security Technical Implementation Guide (STIG) is a Department of Defense (DoD) technical guidance standard that captures the cybersecurity requirements for a specific product, such as a cloud application going into production to support the warfighter. System integrators (SIs), government contractors, and independent software vendors know the STIG process as a well-governed process that all of their technology products must pass. The Defense Information Systems Agency (DISA) released the Container Platform Security Requirements Guide (SRG) in December 2020 to direct how software containers go through the STIG process.
STIGs are notorious for their complexity and the hurdle that STIG compliance poses for technology project success in the DoD. Here are some tips to help your team prepare for your first STIG or to fine-tune your existing internal STIG processes.
4 Ways to Prepare for the STIG Process for Containers
Here are four ways to prepare your teams for containers entering the STIG process:
1. Provide your Team with Container and STIG Cross-Training
DevSecOps and containers, in particular, are still gaining ground in DoD programs. You may very well find yourself in a situation where your cybersecurity/STIG experts don't have much container experience. Likewise, your programmers and solution architects may not have much STIG experience. Such a situation calls for some manner of formal or informal cross-training for your team on at least the basics of containers and STIGs.
Look for ways to provide your cybersecurity specialists involved in the STIG process with training about containers if necessary. There are several commercial and free training options available. Check with your corporate training department to see what resources they might already have, such as seats with online training vendors like A Cloud Guru and Cloud Academy.
There’s a lot of out-of-date and conflicting information about the STIG process on the web today. System integrators and government contractors need to build STIG expertise across their DoD project teams to cut through such noise.
Including STIG expertise as an essential part of your cybersecurity team is the first step. Contract requirements may dictate this proficiency, but it only helps if your organization can build a "bench" of STIG experts.
Here are three tips for building up your STIG talent base:
- Make STIG experience a “plus” or “bonus” in your cybersecurity job requirements for roles, even if they may not be directly billable to projects with STIG work (at least in the beginning)
- Develop internal training around STIG practices led by your internal experts and make it part of employee onboarding and DoD project kickoffs
- Create a “reach back” channel from your project teams to get STIG expertise from other parts of your company, such as corporate and other project teams with STIG expertise, to get support for any issues and challenges with the STIG process
Depending on the size of your company, the clearance requirements of the project, and other situational factors, you might be tempted to bring in outside contractors to shore up your internal STIG expertise. For example, the Container Platform Security Requirements Guide (SRG) is still new, so it can make sense to bring in an outside contractor with experience managing containers through the STIG process. If you go this route, prioritize knowledge transfer from the contractor to your internal team. Otherwise, their container and STIG knowledge walks out the door at the end of the contract term.
2. Validate your STIG Source Materials
When researching the latest STIG requirements, you need to validate the source materials. There are many vendors and educational sites that publish STIG content. Some of that content is outdated and incomplete. It’s always best to go straight to the source. DISA provides authoritative and up-to-date STIG content online that you should consider as your single source of truth on the STIG process for containers.
3. Make the STIG Viewer part of your Approved Desktop
Working on DoD and other public sector projects requires secure environments for developers, solution architects, cybersecurity specialists, and other team members. The STIG Viewer should become a standard part of your DoD project team's secure desktop environment. That saves your DoD security teams the extra step of filing a service desk ticket to request a STIG Viewer installation.
4. Look for Tools that Automate Time-Intensive Steps in the STIG Process
The STIG process is time-intensive, and much of that time goes into documenting policy controls. Look for tools that will help you automate compliance checks before you proceed into an audit of your STIG controls. The right tool can save you from audit surprises and the rework that slows down your application going live.
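As a minimal sketch of what that automation can look like, here is a vulnerability gate using the open source Grype scanner; the image name is illustrative, and a vulnerability check is only one small piece of a full STIG control set:
# Fail this pipeline stage if the image contains any vulnerability rated high or worse
grype registry.example.com/myapp:1.0 --fail-on high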
Parting Thought
The STIG process for containers is still very new to DoD programs. Being proactive and preparing your teams upfront in tandem with ongoing collaboration are the keys to navigating the STIG process for containers.
Learn more about putting your containers through the STIG process in our new white paper entitled Navigating STIG Compliance for Containers!
Anchore Enterprise and the new OpenSSL vulnerabilities
Today the OpenSSL project released an advisory for two new vulnerabilities that were initially rated as critical severity but have since been downgraded to high. These vulnerabilities only affect OpenSSL versions 3.0.0 to 3.0.6. As OpenSSL version 3 was only released in September of 2021, it is not expected to be widely deployed at this time; OpenSSL is not the sort of library that gets a simple, immediate upgrade. OpenSSL version 1 is much more common at the time of this writing and is not affected by CVE-2022-3786 or CVE-2022-3602.
The issues in question are not expected to be exploitable beyond a crash caused by a malicious actor, because the vulnerabilities are stack buffer overflows. On modern systems, stack buffer overflows typically result in a crash rather than code execution thanks to a now-commonplace security feature known as stack canaries.
Detecting OpenSSL with Anchore Enterprise
Anchore Enterprise easily detects OpenSSL as it is commonly packaged within Linux distributions. These are packaged versions of OpenSSL in which a package manager installs a pre-built binary package, commonly referred to as APK, DEB, or RPM packages. Below is an example of searching a Fedora image for OpenSSL and determining it has OpenSSL 3.0.2 installed:
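The original screenshot is not reproduced here, but a rough equivalent using the open source Syft CLI looks like the following; the Fedora tag and the output line are illustrative:
syft fedora:36 | grep openssl
# openssl-libs   3.0.2   rpm   (illustrative output)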
This is the most common way OpenSSL is shipped in container images today.
That's not the entire story though. It is possible to include OpenSSL when shipping a binary application. For example, the upstream Node.js binary statically links the OpenSSL library into the executable. That means OpenSSL is present in Node.js, but there are no OpenSSL files on disk for a scanner to detect. In such an instance it is necessary to determine which applications include OpenSSL and look for those.
In the case of Node.js it is necessary to look for the node binary located somewhere on the disk. We can examine the files contained in the SBOM to identify /usr/local/bin/node, for example:
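A hedged sketch of that kind of file-level check, assuming the SBOM records installed file paths (the image tag is illustrative):
syft node:19 -o json > node-sbom.json
grep '/usr/local/bin/node' node-sbom.json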
If Node.js is installed as a package, it will get picked up without issue. If Node.js is installed as a binary, either built from source or downloaded from the Node.js project, it's slightly more work to detect: it is necessary to review all of the installed files, not just a package named "node".
We have an update coming in Anchore Enterprise 4.2 that will be able to identify Node.js as a binary install; you can read more about how this will work below, where we explain detecting OpenSSL with Syft.
Detecting OpenSSL with Syft
Anchore has an open source SBOM scanner called Syft. It is part of the core technology in Anchore Enterprise. It’s possible to use Syft to detect instances of OpenSSL in your applications and containers. Syft has no issues detecting OpenSSL packages installed by operating systems. Running it against a container image or application directory works as expected.
There's also a new trick Syft just learned: detecting a version of Node.js installed as a binary. This is a brand new feature you can read about in a Syft blog post. You can expect this detection in Anchore Enterprise very soon.
Using Anchore policy to automatically alert on CVE-2022-3786 and CVE-2022-3602
Anchore Enterprise has a robust policy and reporting engine that can be used to ease the burden of finding instances of CVE-2022-3786 and CVE-2022-3602. There is a "Quick Report" feature that allows you to search for a CVE. Part of what makes such a report so powerful is that you can search back in time: any SBOM ever stored in Anchore Enterprise can be queried. This means that even if you no longer have old containers available to scan, as long as you have the SBOM stored, you can know whether that image or application was ever affected by this issue without rescanning anything.
It should be noted that you may want to search for the CVE and also the GitHub GHSA IDs. While the GHSA does refer to the CVE, at this time Anchore Enterprise treats them differently when creating policy and reports.
Planning for the future
We will probably see CVE-2022-3786 and CVE-2022-3602 showing up in container images for years to come. It's OK to spend some time at the beginning manually looking for OpenSSL in your applications and images, but this isn't a long-term strategy. In the long term it will be important to rely on automation to detect, alert, and prevent vulnerable OpenSSL usage. Even if you aren't using OpenSSL version 3 today, it could be accidentally included at a future date. And while we're all busy looking for OpenSSL today, it will be something else tomorrow. Automation can help detect past, present, and future issues.
The extensive use of OpenSSL means security professionals and development teams are going to be dealing with the issue for many months to come. Getting immediate visibility into your risk using open source tools is the fastest way to get going. But as we get ready for the long haul, prepare for the next inevitable issue that surfaces. Perhaps you’ve already found some as you’ve addressed OpenSSL. Anchore Enterprise can get you ready for a quick and full assessment of the impact, immediate controls to prevent vulnerable versions from moving further toward production, and streamlined remediation processes. Please contact us if you want to know how we can help you get started on your SBOM journey.
Detecting binary artifacts with Syft
Actions speak louder than words
It's no secret that SBOM scanners have primarily focused on returning results from package managers and struggle with binary applications installed via a side channel. If you're installing software from a Linux distribution, NPM, or PyPI, those packages are tracked with package manager data. Syft picks those packages up without any problems because it finds evidence in the package manager metadata to determine what was installed. However, if we install a binary such as Node.js without a package manager, Syft won't pick it up. Until now!
There's a new update to Syft, version 0.60.1, that gives us the ability to look for binaries installed outside of a package manager. The initial focus is on Node.js because the latest version of Node.js includes OpenSSL 3, which is affected by the recently released security vulnerabilities; that makes Node.js especially important to be able to find right now.
In the future we will be adding many other binary types to detect, so check back soon to see all the new capabilities of Syft.
We can show this behavior using the node container image. If we scan the container with Syft version 0.59.0, we can see that the Node.js binary is not detected. We are filtering the results to show only entries with 'node' in their name; the official node container is quite large and contains many packages, so the unfiltered output would run several pages long.
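For illustration, the filtered scan with the older version looks roughly like this; the package names and versions shown are illustrative:
syft node:19 | grep node
# node-gyp   9.1.0   npm
# (no bare 'node' binary entry appears)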
There is no binary named 'node' in that list. However, we know the binary is installed; this is the official node container. If we try again using Syft version 0.60.1, the node binary appears in the output with a type of binary.
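Illustrative output from the newer version:
syft node:19 | grep node
# node   19.0.0   binary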
How does this work?
The changes to Syft are very specific and apply only to the Node.js binary. We added the ability for Syft to look for binaries that could be node, and this begins by looking at the names of the binary files on disk. This was done to avoid scanning through every single binary file on the system, which would be very slow and consume a great deal of resources.
Once we find something that might be a Node.js binary, we extract the plaintext strings data from it. This is comparable to running the ‘strings’ command from a UNIX environment. Basically what happens is we look for strings of plain text and ignore the binary data. In our case we are looking for a string of text that contains version information in a Node.js binary. If we determine the binary is indeed Node.js, we then extract the version details.
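You can approximate the idea by hand with standard tools; the exact pattern Syft matches is an implementation detail, so treat this as illustrative:
strings /usr/local/bin/node | grep -o 'node\.js/v[0-9][0-9.]*' | head -1
# node.js/v19.0.0   (illustrative)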
In Syft's output, the detected package has a type of 'binary'. If you look at Syft output you will see the different types of packages that were detected; these could be npm, deb, or python, for example. Now you will also see the new binary type. As mentioned, the only binary type that can be found today is node, but more are coming soon.
Final Thoughts
Given how new this feature is, there is a known drawback. This patch could cause the Node.js binary to show up twice in an SBOM. If Node.js is installed via a package manager, such as rpm, the RPM classifier will find ‘node’ and so will the binary classifier. The same node binary will be listed twice. We know this is a bug and we are going to fix it soon. Given the importance of being able to detect Node.js, we believe this addition is too important to not include even with this drawback.
As already mentioned, this update only detects the Node.js binary. We are also working on binary classifiers for Python and Go in the short term, and long term we expect many binary classifiers to exist. This is an example of not letting perfect get in the way of good enough.
Please keep in mind this is the first step in a very long journey. There will be bugs in the binary classifiers as they are written, and there are many new things to classify in the future; we don't yet know everything we will be looking for, which is exciting. Syft is an open source project: we love bug reports, pull requests, and questions. We would love you to join our community!
It is essential that we all remain vigilant and proactive in our software supply chain security as new vulnerabilities like OpenSSL and malicious code are inevitable. Please contact us if you want to know how we can help you get started on your SBOM journey and detect OpenSSL in your environment.
Docker Security Best Practices: A Complete Guide
When Docker was first introduced, Docker container security best practices primarily consisted of scanning Docker container images for vulnerabilities. Now that container use is widespread and container orchestration platforms have matured, a much more comprehensive approach to security is standard practice.
This post covers three foundational pillars of Docker container security and the best practices within each pillar:
- Securing the Host OS
- Securing the Container Images
  - Continuous Approach
  - Image Vulnerabilities
  - Policy Enforcement
  - Create a User for the Container Image
  - Use Trusted Base Images for Container Images
  - Do Not Install Unnecessary Packages in the Container
  - Add the HEALTHCHECK Instruction to the Container Image
  - Do Not Use Update Instructions Alone in the Dockerfile
  - Use COPY Instead of ADD When Writing Dockerfiles
  - Do Not Store Secrets in Dockerfiles
  - Only Install Verified Packages in Containers
- Securing the Container Runtime
  - Consider AppArmor and Docker
  - Consider SELinux and Docker
  - Seccomp and Docker
  - Do Not Use Privileged Containers
  - Do Not Expose Unused Ports
  - Do Not Run SSH Within Containers
  - Do Not Share the Host's Network Namespace
  - Manage Memory and CPU Usage of Containers
  - Set On-Failure Container Restart Policy
  - Mount Containers' Root Filesystems as Read-Only
  - Vulnerabilities in Running Containers
  - Unbounded Network Access from Containers
What Are Containers?
Containers are a method of operating system virtualization that enable you to run an application and its dependencies in resource-isolated processes. These isolated processes can run on a single host without visibility into each others’ processes, files, and network. Typically each container instance provides a single service or discrete functionality (called a microservice) that constitutes one component of the application.
Containers themselves are immutable, which means that changes are not made to a running container instance; instead, you make the change in the container image and deploy a new container from it. This capability allows for more streamlined development and a higher degree of confidence when deploying containerized applications.
Securing the Host Operating System
Container security starts at the infrastructure layer and is only as strong as this layer. If attackers compromise the host operating system (OS), they may compromise all processes on the OS, including the container runtime. For the most secure infrastructure, you should design the base OS to run the container engine only, with no other processes that could be compromised.
For the vast majority of container users, the preferred host operating system is a Linux distribution. Using a container-specific host OS to reduce the surface area for attack is generally a best practice. Modern container platforms like Red Hat OpenShift run on Red Hat Enterprise Linux CoreOS, which is hardened with SELinux and offers process, network, and storage separation. To further strengthen the infrastructure layer of your container stack and improve your overall security posture, you should always keep the host operating system patched and updated.
Best Practices for Securing the Host OS
The following list outlines some best practices to consider when securing the host OS:
1. Choosing an OS
If you are running containers on a general-purpose operating system, consider using a container-specific operating system instead, because container-specific OSes typically include security features such as SELinux enabled by default, automated updates, and image hardening. Bottlerocket from AWS is one such OS designed for hosting containers that is free, open source, and Linux based.
With a general-purpose OS, you will need to manage every security feature independently. Hosts that run containers should not run any unnecessary system services or non-containerized applications. And you should consistently scan and monitor your host operating system for vulnerabilities. If you find vulnerabilities, apply patches and update the OS.
2. OS Vulnerabilities and Updates
Once you choose an operating system, it’s important to standardize on best practices and tooling to validate the versioning of packages and components contained within the base OS. Note that if you choose to use a container-specific OS, it will contain components that may become vulnerable and require remediation. You should use tools provided by the OS vendor or other trusted organizations to regularly scan and check for updates to components.
Even though security vulnerabilities may not be present in a particular OS package, you should update components if the vendor recommends an update. If it’s simpler for you to redeploy an up-to-date OS, that is also an option. With containerized applications, the host should remain immutable in the same manner containers should be. You should not be persisting data uniquely within the OS. Following this best practice will greatly reduce the attack surface and avoid drift. Lastly, container runtime engines such as Docker frequently update their software with fixes and features. You can mitigate vulnerabilities by applying the latest updates.
3. User Access Rights
All authentication directly to the OS should be audited and logged. You should only grant access to the appropriate users and use keys for remote logins. And you should implement firewalls and allow access only on trusted networks. You should also implement a robust log monitoring and management process that terminates in a dedicated log storage host with restricted access.
Additionally, the Docker daemon requires ‘root’ privileges. You must explicitly add a user to the ‘docker’ group to grant that user access rights. Remove any users from the ‘docker’ group who are not trusted or do not need privileges.
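For example, to grant a trusted user access to the Docker daemon ('alice' is a placeholder; remember that membership in the 'docker' group is effectively root-equivalent):
sudo usermod -aG docker alice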
4. Host File System
Make sure containers run with the minimal required set of file system permissions. Containers should not be able to mount sensitive directories on the host's file system, especially directories containing configuration settings for the OS. This is a practice you should avoid because the Docker service runs as root; an attacker inside a container with such a mount could execute any command the Docker service can run and potentially gain access to the entire host system.
5. Audit Considerations for Docker Runtime Environments
You should conduct audits on the following:
- Container daemon activities
- These files and directories:
- /var/lib/docker
- /etc/docker
- docker.service
- docker.socket
- /etc/default/docker
- /etc/docker/daemon.json
- /usr/bin/docker-containerd
- /usr/bin/docker-runc
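A minimal sketch of audit rules covering these paths, written in standard auditd rule syntax (adjust the paths to match your installation, for example in /etc/audit/rules.d/docker.rules):
-w /var/lib/docker -p wa -k docker
-w /etc/docker -p wa -k docker
-w /etc/docker/daemon.json -p wa -k docker
-w /etc/default/docker -p wa -k docker
-w /usr/bin/docker-containerd -p wa -k docker
-w /usr/bin/docker-runc -p wa -k docker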
Securing Docker Images
You should know exactly what’s inside a Docker container before deploying it. Many of the challenges associated with ensuring Docker image security can be addressed simply by following best practices for securing Docker images.
What Are Docker Images?
So first of all, what are Docker images? Simply put, a Docker container image is a collection of data that includes all files, software packages, and metadata needed to create a running instance of a container. In essence, an image is a template from which a container can be instantiated. Images are immutable, which means that once they’ve been built, they cannot be changed. If someone were to make a change, a new image would be built as a result.
Container images are built in layers. The base layer contains the core components of an image and is the foundation upon which all other components and layers are added. Commonly, base layers are minimal and typically representative of common OSes.
Container images are most often stored in a central location called a registry. With registries like Docker Hub, developers can store their own images or find and download images that have already been created.
Docker Image Security
Incorporating the mechanisms to conduct static analysis on your container images provides insight into any potentially vulnerable OS and non-OS packages. You can use an automated tool like Anchore to control, through policy checks within a secure container build pipeline, whether non-compliant images are promoted into trusted registries.
Policy enforcement is essential because vulnerable images that make their way into production environments pose significant threats that can be costly to remediate and can damage your organization’s reputation. Within these images, focus on the security of the applications that will run.
Explore the benefits of containerization and how they extend to security in our latest whitepaper.
Docker Image Security Best Practices
The following list outlines some best practices to consider when implementing Docker image security:
1. Continuous Approach
A fundamental approach to securing container images is to automate building and testing. You should set up the tooling to analyze images continuously. For container image-specific pipelines, you should employ tools that are purpose-built to uncover vulnerabilities and configuration defects. Your tooling should give developers the ability to create governance around the images being scanned, so that based on your configurable policy rules, images can pass or fail the image scan step in the pipeline and non-compliant images do not progress further. In short, development teams need a structured and reliable process for building and testing container images.
Here’s how this process might look:
- Developer commits code changes to source control
- CI platform builds container image
- CI platform pushes container image to staging registry
- CI platform calls a tool to scan the image
- The tool passes or fails the image based on the policy mapped to the image
- If the image passes the policy evaluation and all other tests defined in the pipeline, the image is pushed to a production registry
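A rough shell sketch of the build, scan, and promote steps; the registry names and severity threshold are illustrative, with the open source Grype scanner standing in for your image scanning tool:
docker build -t registry.example.com/staging/myapp:1.0 .
docker push registry.example.com/staging/myapp:1.0
# Gate on the scan result before promoting to the production registry
grype registry.example.com/staging/myapp:1.0 --fail-on high \
  && docker tag registry.example.com/staging/myapp:1.0 registry.example.com/prod/myapp:1.0 \
  && docker push registry.example.com/prod/myapp:1.0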
2. Image Vulnerabilities
As part of a continuous approach to securing container images, you should scan packages and components within the image for common and known vulnerabilities. Image scanning should be able to uncover vulnerabilities contained within all layers of the image, not just the base layer.
Moreover, because vulnerable third-party libraries are often part of the application code, image inspection and analysis must be able to detect vulnerabilities for OS and non-OS packages contained within the images. Should a new vulnerability for a package be published after the image has been scanned, the tool should retrieve new vulnerability info for the applicable component and alert the developers so that remediation can begin.
3. Policy Enforcement
You should create and enforce policy rules based on the severity of the vulnerability as defined by the Common Vulnerability Scoring System (CVSS).
Example policy rule: If the image contains any vulnerable packages with a severity greater than medium, stop this build.
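As a sketch (not a complete policy bundle), a rule along these lines maps to the vulnerabilities gate's package trigger in an Anchore policy; consult Anchore's policy documentation for the exact schema:
{
  "gate": "vulnerabilities",
  "trigger": "package",
  "action": "stop",
  "params": [
    { "name": "package_type", "value": "all" },
    { "name": "severity_comparison", "value": ">" },
    { "name": "severity", "value": "medium" }
  ]
}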
4. Create a User for the Container Image
Containers should be run as a non-root user whenever possible. The USER instruction within the Dockerfile defines this.
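A minimal Dockerfile sketch that creates and switches to a non-root user; the base image and names are illustrative:
FROM alpine:3.18
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser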
5. Use Trusted Base Images for Container Images
Ensure that the container image is based on another established and trusted base image downloaded over a secure channel. Official repositories are Docker images curated and optimized by the Docker community or associated vendor. Developers should be connecting and downloading images from secure, trusted, private registries. These trusted images should be selected from minimalistic technologies whenever possible to reduce attack surface areas.
Docker Content Trust and Notary can be configured to give developers the ability to verify image tags and enforce client-side signing for data sent to and received from remote Docker registries. Content trust is disabled by default.
For more info see Docker Content Trust and Notary. In the context of Kubernetes, see Connaisseur, which supports Notary/Docker Content Trust.
6. Do Not Install Unnecessary Packages in the Container
To reduce container size and minimize the attack surface, do not install packages outside the scope and purpose of the container.
7. Add the HEALTHCHECK Instruction to the Container Image
The HEALTHCHECK instruction tells Docker how to determine whether the state of the container is normal. Add this instruction to your Dockerfiles; based on the result of the health check (unhealthy), a non-working container can be stopped and a new one instantiated.
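A sketch of the instruction in a Dockerfile; the endpoint, interval, and timeout are illustrative:
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1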
8. Do Not Use Update Instructions Alone in the Dockerfile
To help avoid duplication of packages and make updates easier, do not use update instructions such as apt-get update alone or in a single line in the Dockerfile. Instead, run the following:
RUN apt-get update && apt-get install -y bzr cvs git mercurial subversion
Also, see leveraging the build cache for insight on how to reduce the number of layers and for other Dockerfile best practices.
9. Use COPY Instead of ADD When Writing Dockerfiles
The COPY instruction copies files from the local host machine to the container file system. The ADD instruction can potentially retrieve files from remote URLs and perform unpacking operations. Since ADD could bring in files remotely, the risk of malicious packages and vulnerabilities from remote URLs is increased.
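For example (the remote URL is illustrative):
# Preferred: COPY only moves local files into the image
COPY ./app /opt/app
# Riskier: ADD can also fetch remote URLs and auto-extract archives
# ADD https://example.com/app.tar.gz /opt/app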
10. Do Not Store Secrets in Dockerfiles
Do not store any secrets within container images. Developers may sometimes leave AWS keys, API keys, or other secrets inside of images. If attackers were to grab these keys, they could be exploited. Secrets should always be stored outside of images and provided dynamically at runtime as needed.
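A sketch of the difference; the names and values are illustrative:
# Bad: a secret set in the Dockerfile is baked into an image layer
# ENV API_TOKEN=do-not-do-this
# Better: inject the secret at runtime from the environment or a secrets manager
docker run --rm -e API_TOKEN="$API_TOKEN" registry.example.com/myapp:1.0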
11. Only Install Verified Packages in Containers
Download and install verified packages from trusted sources, such as those available via apt-get from official Debian repositories. To verify Debian packages within a Dockerfile, see Redis Dockerfile.
Implementing Container Image Security
One way to implement Docker image security best practices is with Anchore, a solution that conducts static analysis on container images and evaluates these images against user-defined checks. With Anchore, you can identify vulnerabilities within packages for OS and non-OS components and use policy rules to enforce the image configuration best practices described above.
With Anchore, you can configure policies to check for the following:
- Vulnerabilities
- Packages
- Secrets
- Image metadata
- Exposed ports
- Effective users
- Dockerfile instructions
- Password files
- Files
A popular implementation is to use the open source Jenkins CI tool along with Anchore for scanning and policy checks to build secure and compliant container images in a CI pipeline.
Securing Docker Container Runtime
Docker runtime security is critical to your overall container security strategy. It’s important to set up tooling to monitor the containers that are running. If new vulnerabilities get published that are impactful to a particular container, the alerting mechanisms need to be in place to stop and replace the vulnerable container quickly.
The first step in securing the container runtime is securing the registries where the images reside. It’s considered best practice to pull and run images only from trusted container registries. For an added layer of security, you should only promote trusted and signed images into production registries. Vulnerable, non-compliant images should not live in container registries where images are staged for production deployments.
The container engine hosts and runs containers built from container images that are pulled from registries. Namespaces and Control Groups are two critical aspects of container runtime security:
- Namespaces provide the first and most straightforward form of isolation: Processes running within a container cannot see and affect processes running in another container or in the host system. You should always activate Namespaces.
- Control Groups implement resource accounting and limiting. Always set resource limits for each container so that the single container does not hog all resources and bring down the system.
Only trusted users should control the container engine. For example, if Docker is the container runtime, root privileges are required to run Docker commands, and you should exercise caution when changing the Docker group.
You should deploy cloud-native security tools to detect such network traffic anomalies as unexpected traffic flows within the network, scanning of ports, or outbound access retrieving information from questionable locations. In addition, your security tools should monitor for invalid process execution or system calls as well as for writes and changes to protected configuration locations and file types. Typically, you should run containers with their root filesystems in read-only mode to isolate writes to specific directories.
If you are using Kubernetes to manage containers, your workload configurations are declarative and described as code in YAML files. These files can describe insecure configurations that can potentially be exploited by an attacker. It is generally good practice to incorporate Infrastructure as Code (IaC) scanning as part of a deployment and configuration workflow prior to applying the configuration in a live environment.
Why Is Docker Container Runtime Security So Important?
One of the last stages of a container’s lifecycle is deployment to production. For many organizations, this stage is the most critical. Often a production deployment is the longest period of a container’s lifecycle, and therefore it needs to be consistently monitored for threats, misconfigurations, and other weaknesses. Once your containers are live and running, it is vital to be able to take action quickly and in real time to mitigate potential attacks. Simply put, production deployments must be protected because they are valuable assets for organizations whose existence depends on them.
Docker Container Runtime Best Practices
The following list outlines some best practices to follow when implementing Docker container runtime security:
1. Consider AppArmor and Docker
From the Docker documentation:
AppArmor (Application Armor) is a Linux security module that protects an operating system and its applications from security threats. To use it, a system administrator associates an AppArmor security profile with each program. Docker expects to find an AppArmor policy loaded and enforced.
AppArmor is available on Debian and Ubuntu by default. In short, it is important that you do not disable Docker's default AppArmor profile; better yet, create your own custom security profile for containers specific to your organization. Once a profile is applied, the container operates under a defined set of restrictions and capabilities, such as network access or file read/write/execute permissions. Read the official Docker documentation on AppArmor.
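For example, to run a container under a custom AppArmor profile that has already been loaded on the host (the profile and image names are illustrative):
docker run --security-opt apparmor=my-org-profile registry.example.com/myapp:1.0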
2. Consider SELinux and Docker
SELinux is a security system that provides a mandatory access control model, greatly augmenting the default Discretionary Access Control model. If it's available on the Linux host OS that you are using, you can start Docker in daemon mode with SELinux enabled. The container then runs under a set of restrictions as defined in the SELinux policy. Read more about SELinux.
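A sketch of enabling SELinux support for the Docker daemon via /etc/docker/daemon.json:
{
  "selinux-enabled": true
}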
3. Seccomp and Docker
Seccomp (secure computing mode) is a Linux kernel feature that you can use to restrict the actions available within a container. The default seccomp profile disables about 44 system calls out of more than 300. At a minimum, you should ensure that containers are run with the default seccomp profile. Get more information on seccomp.
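For example, to run a container with a custom seccomp profile instead of the default (the path and image name are illustrative):
docker run --security-opt seccomp=/path/to/custom-profile.json registry.example.com/myapp:1.0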
4. Do Not Use Privileged Containers
Do not allow containers to be run with the --privileged flag because it gives all capabilities to the container and also lifts all the limitations enforced by the device cgroup controller. In short, the container can then do nearly everything the host can do.
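Instead of --privileged, grant only the specific capabilities a container actually needs; the capability and image name here are illustrative:
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE registry.example.com/myapp:1.0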
5. Do Not Expose Unused Ports
The Dockerfile defines which ports will be opened by default on a running container. Only the ports that are needed and relevant to the application should be open. Review the EXPOSE instructions in the Dockerfile to determine which ports are made available.
6. Do Not Run SSH Within Containers
An SSH server should not be running within a container. Read this blog post for details; if you need shell access to a running container, use docker exec instead.
7. Do Not Share the Host’s Network Namespace
When the networking mode on a container is set to --net=host, the container will not be placed inside a separate network stack. In other words, this flag tells Docker not to containerize the container's networking. This is potentially dangerous because it allows the container to open low-numbered ports like any other root process. Additionally, a container could potentially do unexpected things, such as terminate the Docker host. Bottom line: do not add the --net=host option when running a container.
8. Manage Memory and CPU Usage of Containers
By default, a container has no resource constraints and can use as much of a given resource as the host's kernel allows. Additionally, all containers on a Docker host share the host's resources, and no memory limits are enforced by default. A running container that begins to consume too much memory on the host machine is a major risk: for Linux hosts, if the kernel detects that there is not enough memory to perform important system functions, it will kill processes to free up memory, which could bring down an entire system if the wrong process is killed.
Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory. Docker can also enforce soft memory limits, which allow the container to use as much memory as needed unless certain conditions are met. For a running container, the --memory flag defines the maximum amount of memory the container can use. When managing container CPU, the --cpu flags (such as --cpus) give you more control over the container's access to the host machine's CPU cycles.
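For example (the limits and image name are illustrative):
docker run --rm --memory=512m --cpus=1.0 registry.example.com/myapp:1.0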
9. Set On-Failure Container Restart Policy
By using the --restart flag when running a container, you can specify how a container should or should not be restarted on exit. If a container keeps exiting and attempting to restart, it could possibly lead to a denial of service on the host. Additionally, ignoring the exit status of a container and always attempting to restart it can prevent investigation of the root cause behind the termination. You should always investigate when a container attempts to be restarted on exit. Configure the on-failure restart policy to limit the number of retries.
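For example, to retry a failed container at most five times (the image name is illustrative):
docker run --restart=on-failure:5 registry.example.com/myapp:1.0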
10. Mount Containers’ Root Filesystems as Read-Only
You should run containers with their root filesystems in read-only mode to isolate writes to specifically defined directories, which you can easily monitor. Using read-only filesystems makes containers more resilient to being compromised. Additionally, because containers are immutable, you should not write data within them. Instead, designate an explicitly defined volume for writes.
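A sketch; the volume name, mount points, and image name are illustrative:
docker run --read-only --tmpfs /tmp -v app-data:/var/lib/app registry.example.com/myapp:1.0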
11. Vulnerabilities in Running Containers
You should monitor containers for existing vulnerabilities, and when problems are detected, patch or remediate them. If vulnerabilities exist, container scanning should produce an inventory of vulnerable packages (CVEs) at the operating system and application layers. You should also implement container-aware tools designed to operate with the same elasticity and agility as containers.
Checks you should be looking for include:
- Invalid or unexpected process execution
- Invalid or unexpected system calls
- Changes to protected configs
- Writes to unexpected locations or file types
- Malware execution
- Traffic sent to unexpected network destinations
12. Unbounded Network Access from Containers
Controlling the egress network traffic sent by containers is critical. Tools for monitoring the inter-container traffic should at the very least accomplish the following:
- Automated determination of proper container networking surfaces, including inbound and process-port bindings
- Detection of traffic flow both between containers and other network entities
- Detection of network anomalies, such as port scanning and unexpected traffic flows within your organization’s network
A Final Word on Container Security Best Practices
Containerized applications and environments present additional security concerns not present with non-containerized applications. But by adhering to the fundamentally basic concepts for host and application security outlined here, you can achieve a stronger security posture for your cloud-native environment.
And while host security, container image scanning, and runtime monitoring are great places to start, adopting additional security best practices like scanning application source code (both open source and proprietary) for vulnerabilities and coding errors along with following a policy-based compliance approach can vastly improve your container security. To see how continuous security embedded at each step in the software lifecycle can help you improve your container security, request a demo of Anchore.
Top Four Types of Software Supply Chain Attacks and How to Stop Them
It’s no secret that software supply chain attacks are on the rise. Hackers are targeting developers and software providers to distribute malware and leverage zero-days that can affect hundreds, sometimes even thousands, of victims downstream. In this webinar, we’ll take a deep dive into four different attack methods, and most importantly, how to stop them.
Practical Advice for Complying with Federal Cybersecurity Directives: 7 Things You Should Do Now
Join an open source security leader and a former DoD DevSecOps engineer for actionable tips on successfully aligning your leadership, culture, and process to comply with federal cybersecurity directives.