With the clock ticking on new vulnerability scanning rules, organizations must adhere to a number of FedRAMP requirements. Prepare containerized applications for FedRAMP authorization with this checklist.
Complete Guide to Hardening Containers with STIG
Preparing your containers and navigating your way through the STIG approval process can be daunting. This white paper will help your organization align for STIG readiness.
4 Ways to Prepare your Containers for the STIG Process
The Security Technical Implementation Guide (STIG) is a Department of Defense (DoD) technical guidance standard that captures the cybersecurity requirements for a specific product, such as a cloud application going into production to support the warfighter. System integrators (SIs), government contractors, and independent software vendors know the STIG process as a rigorous, well-governed procedure that all of their technology products must pass. The Defense Information Systems Agency (DISA) released the Container Platform Security Requirements Guide (SRG) in December 2020 to direct how software containers go through the STIG process.
STIGs are notorious for their complexity and the hurdle that STIG compliance poses for technology project success in the DoD. Here are some tips to help your team prepare for your first STIG or to fine-tune your existing internal STIG processes.
4 Ways to Prepare for the STIG Process for Containers
Here are four ways to prepare your teams for containers entering the STIG process:
1. Provide your Team with Container and STIG Cross-Training
DevSecOps and containers, in particular, are still gaining ground in DoD programs. You may very well find your team in a situation where your cybersecurity/STIG experts may not have much container experience. Likewise, your programmers and solution architects may not have much STIG experience. Such a situation calls for some manner of formal or informal cross-training for your team on at least the basics of containers and STIGs.
Look for ways to provide your cybersecurity specialists involved in the STIG process with training about containers if necessary. There are several commercial and free training options available. Check with your corporate training department to see what resources they might have available, such as seats with online training vendors like A Cloud Guru and Cloud Academy.
There’s a lot of out-of-date and conflicting information about the STIG process on the web today. System integrators and government contractors need to build STIG expertise across their DoD project teams to cut through such noise.
Including STIG expertise as an essential part of your cybersecurity team is the first step. While contract requirements dictate this proficiency, it only helps if your organization can build a “bench” of STIG experts.
Here are three tips for building up your STIG talent base:
- Make STIG experience a “plus” or “bonus” in your cybersecurity job requirements, even for roles that may not be directly billable to projects with STIG work (at least in the beginning)
- Develop internal training around STIG practices led by your internal experts and make it part of employee onboarding and DoD project kickoffs
- Create a “reach back” channel so your project teams can draw on STIG expertise from other parts of your company, such as corporate teams or other projects with that experience, for support with any issues and challenges in the STIG process
Depending on the size of your company, the clearance requirements of the project, and other situational factors, the temptation might be there to bring in outside contractors to shore up your STIG expertise internally. For example, because the Container Platform Security Requirements Guide (SRG) is still new, it can make sense to bring in an outside contractor with experience managing containers through the STIG process. If you go this route, prioritize the knowledge transfer from the contractor to your internal team. Otherwise, their container and STIG knowledge walks out the door at the end of the contract term.
2. Validate your STIG Source Materials
When researching the latest STIG requirements, you need to validate the source materials. There are many vendors and educational sites that publish STIG content. Some of that content is outdated and incomplete. It’s always best to go straight to the source. DISA provides authoritative and up-to-date STIG content online that you should consider as your single source of truth on the STIG process for containers.
3. Make the STIG Viewer part of your Approved Desktop
Working on DoD and other public sector projects requires secure environments for developers, solution architects, cybersecurity specialists, and other team members. The STIG Viewer should become a part of your DoD project team’s secure desktop environment. This saves your DoD security teams the extra step of putting in a service desk ticket to request a STIG Viewer installation.
4. Look for Tools that Automate Time-Intensive Steps in the STIG Process
The STIG process is time-intensive, with much of that time spent documenting policy controls. Look for tools that’ll help you automate compliance checks before you proceed into an audit of your STIG controls. The right tool can save you from audit surprises and rework that’ll slow down your application going live.
Parting Thought
The STIG process for containers is still very new to DoD programs. Being proactive and preparing your teams upfront in tandem with ongoing collaboration are the keys to navigating the STIG process for containers.
Learn more about putting your containers through the STIG process in our new white paper entitled Navigating STIG Compliance for Containers!
NVIDIA Secures Containers with Anchore
NVIDIA utilizes Anchore Enterprise to stay ahead of critical security requirements….
Anchore Enterprise and the new OpenSSL vulnerabilities
Today the OpenSSL project released an advisory for two new vulnerabilities that were originally rated as critical severity but have been lowered to high severity. These vulnerabilities only affect OpenSSL versions 3.0.0 to 3.0.6. Because OpenSSL version 3 was only released in September of 2021, it is not expected to be widely deployed at this time; OpenSSL is not a library that gets upgraded quickly across an ecosystem. OpenSSL version 1 is much more common at the time of this writing and is not affected by CVE-2022-3786 or CVE-2022-3602.
The issues in question are not expected to be exploitable beyond a crash caused by a malicious actor, because the vulnerabilities are stack buffer overflows. On modern systems, stack buffer overflows result in crashes rather than code execution thanks to a now-commonplace security feature known as stack canaries.
Detecting OpenSSL with Anchore Enterprise
Anchore Enterprise easily detects OpenSSL as it is commonly packaged within Linux distributions. These are packaged versions of OpenSSL that a package manager installs as pre-built binary packages, commonly referred to as APK, DEB, or RPM packages. Below is an example of searching a Fedora image for OpenSSL and determining it has OpenSSL 3.0.2 installed:
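As a minimal command-line sketch of the same check, you can use the open source Syft CLI (the Fedora tag is illustrative; Anchore Enterprise surfaces the same SBOM data through its UI and API):

# list any OpenSSL packages reported for a Fedora image
syft fedora:36 | grep -i openssl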
This is the most common way OpenSSL is shipped in container images today.
That’s not the entire story though. It is possible to include OpenSSL when shipping a binary application. For example, the upstream Node.js binary statically links the OpenSSL library into the executable. That means OpenSSL is present in Node.js, but there are no OpenSSL files on disk for a scanner to detect. In such an instance it is necessary to determine which applications include OpenSSL and look for those.
In the case of Node.js it is necessary to look for the node binary located somewhere on the disk. We can examine the files contained in the SBOM to identify /usr/local/bin/node, for example.
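As a complementary manual spot check, you can also ask the node binary directly which OpenSSL it carries; a minimal sketch, assuming the official node image is available:

# print the OpenSSL version statically linked into Node.js
docker run --rm node:latest node -p "process.versions.openssl"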
If Node.js is installed as a package, it will get picked up without issue. If Node.js is installed as a binary, whether built from source or downloaded from the Node.js project itself, it’s slightly more work to detect, as it is necessary to review all of the installed files, not just a package named “node”.
We have an update coming in Anchore Enterprise 4.2 that will be able to identify Node.js as a binary install. You can read more about how this will work below, where we explain detecting OpenSSL with Syft.
Detecting OpenSSL with Syft
Anchore has an open source SBOM scanner called Syft. It is part of the core technology in Anchore Enterprise. It’s possible to use Syft to detect instances of OpenSSL in your applications and containers. Syft has no issues detecting OpenSSL packages installed by operating systems. Running it against a container image or application directory works as expected.
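A minimal sketch of both invocations (the image tag and directory path are illustrative):

# scan a container image
syft alpine:3.16
# scan a local application directory
syft dir:./my-app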
There’s also a new trick Syft just learned: detecting a version of Node.js installed as a binary. This is a brand new feature you can read about in a Syft blog post. You can expect this detection in Anchore Enterprise very soon.
Using Anchore policy to automatically alert on CVE-2022-3786 and CVE-2022-3602
Anchore Enterprise has a robust policy and reporting engine that can ease the burden of finding instances of CVE-2022-3786 and CVE-2022-3602. Its “Quick Report” feature allows you to search for a CVE. Part of what makes such a report so powerful is that you can search back in time: any SBOM ever stored in Anchore Enterprise can be queried. This means that even if you no longer have old containers available to scan, as long as you have the SBOM stored, you can know whether an image or application was ever affected by this issue without rescanning anything.
Note that you may want to search for both the CVE IDs and the GitHub GHSA IDs. While the GHSAs refer to the CVEs, at this time Anchore Enterprise treats them differently when creating policies and reports.
Planning for the future
We will probably see CVE-2022-3786 and CVE-2022-3602 showing up in container images for years to come. It’s OK to spend some time at the beginning manually looking for OpenSSL in your applications and images, but this isn’t a long-term strategy. In the long term, it will be important to rely on automation to detect, alert on, and prevent vulnerable OpenSSL usage. Even if you aren’t using OpenSSL version 3 today, it could be accidentally included at a future date. And while we’re all busy looking for OpenSSL today, it will be something else tomorrow. Automation can help detect past, present, and future issues.
The extensive use of OpenSSL means security professionals and development teams are going to be dealing with the issue for many months to come. Getting immediate visibility into your risk using open source tools is the fastest way to get going. But as we get ready for the long haul, prepare for the next inevitable issue that surfaces. Perhaps you’ve already found some as you’ve addressed OpenSSL. Anchore Enterprise can get you ready for a quick and full assessment of the impact, immediate controls to prevent vulnerable versions from moving further toward production, and streamlined remediation processes. Please contact us if you want to know how we can help you get started on your SBOM journey.
Detecting binary artifacts with Syft
Actions speak louder than words
It’s no secret that SBOM scanners have primarily focused on returning results from package managers and struggle with binary applications installed via a side channel. If you’re installing software from a Linux distribution, NPM, or PyPI, those packages are tracked with package manager data. Syft picks those packages up without any problems because it finds evidence in the package manager metadata to determine what was installed. However, if we install a binary such as Node.js without a package manager, Syft won’t pick it up. Until now!
There’s a new update to Syft, version 0.60.1, that gives us the ability to look for binaries installed outside of a package manager. The initial focus is on Node.js because the latest Node.js releases include OpenSSL 3, which is affected by the recently released security vulnerabilities; that makes it especially important to be able to find Node.js right now.
In the future we will be adding many other binary types to detect; check back soon to see all the new capabilities of Syft.
We can show this behavior using the node container image. If we scan the container with Syft version 0.59.0, we can see that the Node.js binary is not detected. We are filtering the results to show only entries with ‘node’ in their name; the official node container is quite large and contains many packages, so the unfiltered output would run several pages long.
There is no binary named ‘node’ in that list. However, we know this binary is installed; it is the official node container. If we try again using Syft version 0.60.1, the node binary appears in the output with a type of binary.
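A sketch of that comparison from a shell, assuming each Syft version is run in turn (exact output will vary with the image tag):

# with Syft 0.59.0 installed: only package-manager entries match the filter
syft node:latest | grep node
# with Syft 0.60.1 installed: the same command now also lists a 'node' entry of type 'binary'
syft node:latest | grep node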
How does this work?
The changes to Syft are very specific and apply only to the Node.js binary. We added the ability for Syft to look for binaries that could be node, beginning with the names of the binary files on disk. This avoids scanning through every single binary file on the system, which would be very slow and consume a great deal of resources.
Once we find something that might be a Node.js binary, we extract the plaintext strings data from it. This is comparable to running the ‘strings’ command in a UNIX environment: we look for strings of plain text and ignore the binary data. Here we are looking for a string of text that contains version information in a Node.js binary. If we determine the binary is indeed Node.js, we extract the version details.
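You can approximate the idea by hand with standard tools; this is only a rough sketch, as the marker Syft matches is more specific than this generic version-shaped pattern:

# pull printable strings from the binary and look for something version-shaped
strings /usr/local/bin/node | grep -m 1 -E 'v[0-9]+\.[0-9]+\.[0-9]+'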
In Syft’s output, the detected package carries a type of ‘binary’. If you look at Syft output you will see the different types of packages that were detected; these could be npm, deb, or python, for example. Now you will also see the new binary type. As mentioned, the only binary type that can be found today is node, but more are coming soon.
Final Thoughts
Given how new this feature is, there is a known drawback: this patch can cause the Node.js binary to show up twice in an SBOM. If Node.js is installed via a package manager, such as rpm, the RPM classifier will find ‘node’ and so will the binary classifier, so the same node binary will be listed twice. We know this is a bug and we are going to fix it soon. Given the importance of being able to detect Node.js, we believe this addition is too important to hold back even with this drawback.
As already mentioned, this update only detects the Node.js binary. We are also working on binary classifiers for Python and Go in the short term, and long term we expect many binary classifiers to exist. This is an example of not letting perfect get in the way of good enough.
Please keep in mind this is the first step in a very long journey. There will be bugs in the binary classifiers as they are written, and there are many new things to classify in the future; we don’t yet know everything we will be looking for, which is exciting. Syft is an open source project: we love bug reports, pull requests, and questions. We would love you to join our community!
It is essential that we all remain vigilant and proactive in our software supply chain security as new vulnerabilities like OpenSSL and malicious code are inevitable. Please contact us if you want to know how we can help you get started on your SBOM journey and detect OpenSSL in your environment.
Anchore Capabilities Statement – Public Sector
Docker Security Best Practices: A Complete Guide
When Docker was first introduced, Docker container security best practices primarily consisted of scanning Docker container images for vulnerabilities. Now that container use is widespread and container orchestration platforms have matured, a much more comprehensive approach to security is standard practice.
This post covers the three foundational pillars of Docker container security and the best practices within each pillar:
- Securing the Host OS
- Securing the Container Images
  - Continuous Approach
  - Image Vulnerabilities
  - Policy Enforcement
  - Create a User for the Container Image
  - Use Trusted Base Images for Container Images
  - Do Not Install Unnecessary Packages in the Container
  - Add the HEALTHCHECK Instruction to the Container Image
  - Do Not Use Update Instructions Alone in the Dockerfile
  - Use COPY Instead of ADD When Writing Dockerfiles
  - Do Not Store Secrets in Dockerfiles
  - Only Install Verified Packages in Containers
- Securing the Container Runtime
  - Consider AppArmor and Docker
  - Consider SELinux and Docker
  - Seccomp and Docker
  - Do Not Use Privileged Containers
  - Do Not Expose Unused Ports
  - Do Not Run SSH Within Containers
  - Do Not Share the Host’s Network Namespace
  - Manage Memory and CPU Usage of Containers
  - Set On-Failure Container Restart Policy
  - Mount Containers’ Root Filesystems as Read-Only
  - Vulnerabilities in Running Containers
  - Unbounded Network Access from Containers
What Are Containers?
Containers are a method of operating system virtualization that enable you to run an application and its dependencies in resource-isolated processes. These isolated processes can run on a single host without visibility into each others’ processes, files, and network. Typically each container instance provides a single service or discrete functionality (called a microservice) that constitutes one component of the application.
Containers themselves are intended to be immutable: rather than modifying a running container instance, you make changes to the container image and then deploy a new instance from it. This capability allows for more streamlined development and a higher degree of confidence when deploying containerized applications.
Securing the Host Operating System
Container security starts at the infrastructure layer and is only as strong as this layer. If attackers compromise the host operating system (OS), they may compromise all processes on the OS, including the container runtime. For the most secure infrastructure, you should design the base OS to run the container engine only, with no other processes that could be compromised.
For the vast majority of container users, the preferred host operating system is a Linux distribution. Using a container-specific host OS to reduce the surface area for attack is generally a best practice. Modern container platforms like Red Hat OpenShift run on Red Hat Enterprise Linux CoreOS, which is hardened with SELinux and offers process, network, and storage separation. To further strengthen the infrastructure layer of your container stack and improve your overall security posture, you should always keep the host operating system patched and updated.
Best Practices for Securing the Host OS
The following list outlines some best practices to consider when securing the host OS:
1. Choosing an OS
If you are running containers on a general-purpose operating system, consider switching to a container-specific operating system instead, because container-specific OSes typically ship by default with security features such as SELinux enabled, automated updates, and hardened images. Bottlerocket from AWS is one such OS designed for hosting containers that is free, open source, and Linux based.
With a general-purpose OS, you will need to manage every security feature independently. Hosts that run containers should not run any unnecessary system services or non-containerized applications. And you should consistently scan and monitor your host operating system for vulnerabilities. If you find vulnerabilities, apply patches and update the OS.
2. OS Vulnerabilities and Updates
Once you choose an operating system, it’s important to standardize on best practices and tooling to validate the versioning of packages and components contained within the base OS. Note that if you choose to use a container-specific OS, it will contain components that may become vulnerable and require remediation. You should use tools provided by the OS vendor or other trusted organizations to regularly scan and check for updates to components.
Even though security vulnerabilities may not be present in a particular OS package, you should update components if the vendor recommends an update. If it’s simpler for you to redeploy an up-to-date OS, that is also an option. With containerized applications, the host should remain immutable in the same manner containers should be, and you should not persist data uniquely within the OS. Following this best practice greatly reduces the attack surface and avoids drift. Lastly, container runtime engines such as Docker frequently update their software with fixes and features, so you can mitigate vulnerabilities by applying the latest updates.
3. User Access Rights
All authentication directly to the OS should be audited and logged. You should only grant access to the appropriate users and use keys for remote logins. And you should implement firewalls and allow access only on trusted networks. You should also implement a robust log monitoring and management process that terminates in a dedicated log storage host with restricted access.
Additionally, the Docker daemon requires ‘root’ privileges. You must explicitly add a user to the ‘docker’ group to grant that user access rights. Remove any users from the ‘docker’ group who are not trusted or do not need privileges.
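A short sketch of managing that group membership on a typical Linux host (the usernames are hypothetical):

# grant a trusted user access to the Docker daemon (effectively root on the host)
sudo usermod -aG docker alice
# review current members of the docker group
getent group docker
# remove a user who no longer needs access
sudo gpasswd -d bob docker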
4. Host File System
Make sure containers are run with the minimal required set of file system permissions. Containers should not be able to mount sensitive directories on a host’s file system, especially directories that contain configuration settings for the OS. Because the Docker service runs as root, an attacker who gains control of such a mount could execute any command the Docker service can run and potentially gain access to the entire host system.
5. Audit Considerations for Docker Runtime Environments
You should conduct audits on the following:
- Container daemon activities
- These files and directories:
  - /var/lib/docker
  - /etc/docker
  - docker.service
  - docker.socket
  - /etc/default/docker
  - /etc/docker/daemon.json
  - /usr/bin/docker-containerd
  - /usr/bin/docker-runc
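One way to implement these audits on Linux is with auditd watch rules; a sketch covering the paths above (the docker.service and docker.socket unit-file locations vary by distribution, so resolve them first with systemctl show -p FragmentPath):

# /etc/audit/rules.d/docker.rules -- reload with augenrules or restart auditd
-w /var/lib/docker -p wa -k docker
-w /etc/docker -p wa -k docker
-w /etc/default/docker -p wa -k docker
-w /etc/docker/daemon.json -p wa -k docker
-w /usr/bin/docker-containerd -p wa -k docker
-w /usr/bin/docker-runc -p wa -k docker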
Securing Docker Images
You should know exactly what’s inside a Docker container before deploying it. Many of the challenges associated with ensuring Docker image security can be addressed simply by following best practices for securing Docker images.
What Are Docker Images?
So first of all, what are Docker images? Simply put, a Docker container image is a collection of data that includes all files, software packages, and metadata needed to create a running instance of a container. In essence, an image is a template from which a container can be instantiated. Images are immutable, which means that once they’ve been built, they cannot be changed. If someone were to make a change, a new image would be built as a result.
Container images are built in layers. The base layer contains the core components of an image and is the foundation upon which all other components and layers are added. Commonly, base layers are minimal and typically representative of common OSes.
Container images are most often stored in a central location called a registry. With registries like Docker Hub, developers can store their own images or find and download images that have already been created.
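For example, you can pull an image from a registry and inspect the layers it was built from with standard Docker commands:

# download the image, then show each layer and the instruction that created it
docker pull debian:stable-slim
docker history debian:stable-slim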
Docker Image Security
Incorporating the mechanisms to conduct static analysis on your container images provides insight into any potential vulnerable OS and non-OS packages. You can use an automated tool like Anchore to control whether you would like to promote non-compliant images into trusted registries through policy checks within a secure container build pipeline.
Policy enforcement is essential because vulnerable images that make their way into production environments pose significant threats that can be costly to remediate and can damage your organization’s reputation. Within these images, focus on the security of the applications they will run.
Explore the benefits of containerization and how they extend to security in our latest whitepaper.
Docker Image Security Best Practices
The following list outlines some best practices to consider when implementing Docker image security:
1. Continuous Approach
A fundamental approach to securing container images is to automate building and testing. You should set up the tooling to analyze images continuously. For container image-specific pipelines, you should employ tools that are purpose-built to uncover vulnerabilities and configuration defects. Your tooling should let developers create governance around the images being scanned so that, based on your configurable policy rules, images can pass or fail the image scan step in the pipeline and progress no further. In short, development teams need a structured and reliable process for building and testing their container images.
Here’s how this process might look:
- Developer commits code changes to source control
- CI platform builds container image
- CI platform pushes container image to staging registry
- CI platform calls a tool to scan the image
- The tool passes or fails the images based on the policy mapped to the image
- If the image passes the policy evaluation and all other tests defined in the pipeline, the image is pushed to a production registry
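A minimal sketch of the scan-and-gate step, using the open source Grype scanner as a stand-in (the registry name and severity threshold are illustrative):

# build and stage the image
docker build -t registry.example.com/staging/app:$GIT_COMMIT .
docker push registry.example.com/staging/app:$GIT_COMMIT
# fail the pipeline if any vulnerability of high severity or above is found
grype registry.example.com/staging/app:$GIT_COMMIT --fail-on high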
2. Image Vulnerabilities
As part of a continuous approach to securing container images, you should scan packages and components within the image for common and known vulnerabilities. Image scanning should be able to uncover vulnerabilities contained within all layers of the image, not just the base layer.
Moreover, because vulnerable third-party libraries are often part of the application code, image inspection and analysis must be able to detect vulnerabilities for OS and non-OS packages contained within the images. Should a new vulnerability for a package be published after the image has been scanned, the tool should retrieve new vulnerability info for the applicable component and alert the developers so that remediation can begin.
3. Policy Enforcement
You should create and enforce policy rules based on the severity of the vulnerability as defined by the Common Vulnerability Scoring System.
Example policy rule: If the image contains any vulnerable packages with a severity greater than medium, stop this build.
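As a sketch, a rule like this can be expressed in an Anchore policy bundle roughly as follows (abridged; consult the policy documentation for the full bundle schema):

{
  "gate": "vulnerabilities",
  "trigger": "package",
  "action": "STOP",
  "params": [
    { "name": "package_type", "value": "all" },
    { "name": "severity_comparison", "value": ">" },
    { "name": "severity", "value": "medium" }
  ]
}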
4. Create a User for the Container Image
Containers should be run as a non-root user whenever possible. The USER instruction within the Dockerfile defines this.
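A minimal Dockerfile sketch (the account name is arbitrary):

# create an unprivileged account at build time and switch to it
FROM debian:stable-slim
RUN groupadd -r app && useradd -r -g app app
USER app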
5. Use Trusted Base Images for Container Images
Ensure that the container image is based on another established and trusted base image downloaded over a secure channel. Official repositories are Docker images curated and optimized by the Docker community or associated vendor. Developers should be connecting and downloading images from secure, trusted, private registries. These trusted images should be selected from minimalistic technologies whenever possible to reduce attack surface areas.
Docker Content Trust and Notary can be configured to give developers the ability to verify image tags and enforce client-side signing for data sent to and received from remote Docker registries. Content trust is disabled by default.
For more info see Docker Content Trust and Notary. In the context of Kubernetes, see Connaisseur, which supports Notary/Docker Content Trust.
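Enabling content trust is a one-line opt-in per shell session; the registry and image below are hypothetical:

# pulls of unsigned or tampered tags will now fail
export DOCKER_CONTENT_TRUST=1
docker pull registry.example.com/base/app:1.0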
6. Do Not Install Unnecessary Packages in the Container
To reduce container size and minimize the attack surface, do not install packages outside the scope and purpose of the container.
7. Add the HEALTHCHECK Instruction to the Container Image
The HEALTHCHECK instruction tells Docker how to determine whether the state of the container is normal. Add this instruction to Dockerfiles so that, based on the result of the health check (unhealthy), a non-working container can be stopped and a new one instantiated.
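A sketch of the instruction, assuming the image contains curl and the application serves a health endpoint on port 8080:

# poll the app every 30s; three consecutive failures mark the container unhealthy
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:8080/healthz || exit 1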
8. Do Not Use Update Instructions Alone in the Dockerfile
To help avoid duplication of packages and make updates easier, do not use update instructions such as apt-get update alone or in a single line in the Dockerfile. Instead, run the following:
RUN apt-get update && apt-get install -y bzr cvs git mercurial subversion
Also, see leveraging the build cache for insight on how to reduce the number of layers and for other Dockerfile best practices.
9. Use COPY Instead of ADD When Writing Dockerfiles
The COPY instruction copies files from the local host machine to the container file system. The ADD instruction can potentially retrieve files from remote URLs and perform unpacking operations. Since ADD could bring in files remotely, the risk of malicious packages and vulnerabilities from remote URLs is increased.
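A short illustration of the difference:

# COPY is limited to files from the local build context
COPY ./app /opt/app
# ADD can also fetch remote URLs and auto-extract archives; avoid it for plain copies
# ADD https://example.com/pkg.tar.gz /opt/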
10. Do Not Store Secrets in Dockerfiles
Do not store any secrets within container images. Developers may sometimes leave AWS keys, API keys, or other secrets inside of images. If attackers were to grab these keys, they could be exploited. Secrets should always be stored outside of images and provided dynamically at runtime as needed.
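As one sketch of keeping secrets out of image layers at build time, Docker BuildKit can mount a secret for a single build step (requires BuildKit; the secret id and source file are illustrative):

# CLI side: pass the secret into the build without baking it into any layer
docker build --secret id=npm_token,src=$HOME/.npmrc .
# Dockerfile side: the secret exists only while this RUN step executes
# RUN --mount=type=secret,id=npm_token,target=/root/.npmrc npm ci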
11. Only Install Verified Packages in Containers
Download and install verified packages from trusted sources, such as those available via apt-get from official Debian repositories. To verify Debian packages within a Dockerfile, see Redis Dockerfile.
Implementing Container Image Security
One way to implement Docker image security best practices is with Anchore, a solution that conducts static analysis on container images and evaluates these images against user-defined checks. With Anchore, you can identify vulnerabilities within packages for OS and non-OS components and use policy rules to enforce the image configuration best practices described above.
With Anchore, you can configure policies to check for the following:
- Vulnerabilities
- Packages
- Secrets
- Image metadata
- Exposed ports
- Effective users
- Dockerfile instructions
- Password files
- Files
A popular implementation is to use the open source Jenkins CI tool along with Anchore for scanning and policy checks to build secure and compliant container images in a CI pipeline.
Securing Docker Container Runtime
Docker runtime security is critical to your overall container security strategy. It’s important to set up tooling to monitor the containers that are running. If new vulnerabilities get published that are impactful to a particular container, the alerting mechanisms need to be in place to stop and replace the vulnerable container quickly.
The first step in securing the container runtime is securing the registries where the images reside. It’s considered best practice to pull and run images only from trusted container registries. For an added layer of security, you should only promote trusted and signed images into production registries. Vulnerable, non-compliant images should not live in container registries where images are staged for production deployments.
The container engine hosts and runs containers built from container images that are pulled from registries. Namespaces and Control Groups are two critical aspects of container runtime security:
- Namespaces provide the first and most straightforward form of isolation: Processes running within a container cannot see and affect processes running in another container or in the host system. You should always activate Namespaces.
- Control Groups implement resource accounting and limiting. Always set resource limits for each container so that the single container does not hog all resources and bring down the system.
Only trusted users should control the container engine. For example, if Docker is the container runtime, root privileges are required to run Docker commands, and you should exercise caution when changing the Docker group.
You should deploy cloud-native security tools to detect such network traffic anomalies as unexpected traffic flows within the network, scanning of ports, or outbound access retrieving information from questionable locations. In addition, your security tools should monitor for invalid process execution or system calls as well as for writes and changes to protected configuration locations and file types. Typically, you should run containers with their root filesystems in read-only mode to isolate writes to specific directories.
If you are using Kubernetes to manage containers, your workload configurations are declarative and described as code in YAML files. These files can describe insecure configurations that can potentially be exploited by an attacker. It is generally good practice to incorporate Infrastructure as Code (IaC) scanning as part of a deployment and configuration workflow prior to applying the configuration in a live environment.
Why Is Docker Container Runtime Security So Important?
One of the last stages of a container’s lifecycle is deployment to production. For many organizations, this stage is the most critical. Often a production deployment is the longest period of a container’s lifecycle, and therefore it needs to be consistently monitored for threats, misconfigurations, and other weaknesses. Once your containers are live and running, it is vital to be able to take action quickly and in real time to mitigate potential attacks. Simply put, production deployments must be protected because they are valuable assets for organizations whose existence depends on them.
Docker Container Runtime Best Practices
The following list outlines some best practices to follow when implementing Docker container runtime security:
1. Consider AppArmor and Docker
From the Docker documentation:
AppArmor (Application Armor) is a Linux security module that protects an operating system and its applications from security threats. To use it, a system administrator associates an AppArmor security profile with each program. Docker expects to find an AppArmor policy loaded and enforced.
AppArmor is available on Debian and Ubuntu by default. In short, it is important that you do not disable Docker’s default AppArmor profile; better still, create your own custom security profile for containers specific to your organization. Once such a profile is applied, the container operates under a defined set of restrictions and capabilities, such as network access or file read/write/execute permissions. Read the official Docker documentation on AppArmor.
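A sketch of making the profile explicit at run time (the image is arbitrary; a custom profile must already be loaded on the host before you can reference it by name):

# run under Docker's default AppArmor profile, or substitute your own profile name
docker run --rm --security-opt apparmor=docker-default nginx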
2. Consider SELinux and Docker
SELinux is a Linux kernel security module that provides a mandatory access control system, greatly augmenting the default Discretionary Access Control model. If it’s available on the Linux host OS that you are using, you can start Docker in daemon mode with SELinux enabled. The container then operates under the set of restrictions defined in the SELinux policy. Read more about SELinux.
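On hosts that support it, SELinux support is switched on in the Docker daemon configuration; a minimal sketch:

# /etc/docker/daemon.json -- restart the Docker daemon after editing
{
  "selinux-enabled": true
}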
3. Seccomp and Docker
Seccomp (secure computing mode) is a Linux kernel feature that you can use to restrict the actions available within a container. The default seccomp profile disables about 44 system calls out of more than 300. At a minimum, you should ensure that containers are run with the default seccomp profile. Get more information on seccomp.
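A sketch of pinning a tightened custom profile at run time (the profile path is illustrative); the one thing to avoid is seccomp=unconfined, which removes syscall filtering entirely:

docker run --rm --security-opt seccomp=/etc/docker/seccomp-custom.json nginx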
4. Do Not Use Privileged Containers
Do not allow containers to be run with the --privileged flag because it gives all capabilities to the container and also lifts all the limitations enforced by the device cgroup controller. In short, the container can then do nearly everything the host can do.
5. Do Not Expose Unused Ports
The Dockerfile defines which ports will be opened by default on a running container. Only the ports that are needed and relevant to the application should be open. Review the EXPOSE instructions in the Dockerfile to determine which ports are made accessible.
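For example, a Dockerfile for a web application listening on a single port would declare only that port:

# document and open only the port the application actually listens on
EXPOSE 8080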
6. Do Not Run SSH Within Containers
An SSH server should not be running within a container. Read this blog post for details.
7. Do Not Share the Host’s Network Namespace
When the networking mode on a container is set to --net=host, the container will not be placed inside a separate network stack. In other words, this flag tells Docker not to containerize the container’s networking. This is potentially dangerous because it allows the container to open low-numbered ports like any other root process. Additionally, a container could potentially do unexpected things such as terminate the Docker host. Bottom line: do not add the --net=host option when running a container.
8. Manage Memory and CPU Usage of Containers
By default, a container has no resource constraints and can use as much of a given resource as the host’s kernel allows. All containers on a Docker host share the host’s resources equally, and no memory limits are enforced. A running container that begins to consume too much memory on the host machine is a major risk: for Linux hosts, if the kernel detects that there is not enough memory to perform important system functions, it will kill processes to free up memory, which could potentially bring down an entire system if the wrong process is killed. Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory, and soft memory limits, which allow the container to use as much memory as needed unless certain conditions are met. For a running container, the --memory flag defines the maximum amount of memory the container can use, and the --cpus flag and related --cpu-* options give you control over the container’s access to the host machine’s CPU cycles.
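A minimal sketch of both limits on one container (values illustrative):

# cap the container at 512 MiB of memory and one CPU core
docker run --rm --memory=512m --cpus=1.0 nginx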
9. Set On-Failure Container Restart Policy
By using the --restart flag when running a container, you can specify how a container should or should not be restarted on exit. If a container keeps exiting and attempting to restart, it could lead to a denial of service on the host. Additionally, ignoring a container’s exit status and always attempting to restart it discourages investigation into the root cause of the termination; you should always investigate when a container attempts to restart on exit. Configure the on-failure restart policy with a maximum retry count to limit the number of restarts.
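A sketch of that policy (the image name is hypothetical):

# restart at most five times on non-zero exit, then give up
docker run --restart=on-failure:5 registry.example.com/app:1.0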
10. Mount Containers’ Root Filesystems as Read-Only
You should run containers with their root filesystems in read-only mode to isolate writes to specifically defined directories, which you can easily monitor. Using read-only filesystems makes containers more resilient to being compromised. Additionally, because containers are immutable, you should not write data within them. Instead, designate an explicitly defined volume for writes.
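A sketch of the pattern (image and volume names hypothetical):

# read-only root filesystem with one explicitly writable volume for app data
docker run --rm --read-only -v app-data:/var/lib/app registry.example.com/app:1.0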
11. Vulnerabilities in Running Containers
You should monitor containers for existing vulnerabilities, and when problems are detected, patch or remediate them. When vulnerabilities exist, container scanning should produce an inventory of vulnerable packages (CVEs) at both the operating system and application layers. You should also implement container-aware tools designed to operate with the same elasticity and agility as the containers they watch.
Checks you should be looking for include:
- Invalid or unexpected process execution
- Invalid or unexpected system calls
- Changes to protected configs
- Writes to unexpected locations or file types
- Malware execution
- Traffic sent to unexpected network destinations
12. Unbounded Network Access from Containers
Controlling the egress network traffic sent by containers is critical. Tools for monitoring the inter-container traffic should at the very least accomplish the following:
- Automated determination of proper container networking surfaces, including inbound and process-port bindings
- Detection of traffic flow both between containers and other network entities
- Detection of network anomalies, such as port scanning and unexpected traffic flows within your organization’s network
A Final Word on Container Security Best Practices
Containerized applications and environments present additional security concerns not present with non-containerized applications. But by adhering to the fundamentally basic concepts for host and application security outlined here, you can achieve a stronger security posture for your cloud-native environment.
And while host security, container image scanning, and runtime monitoring are great places to start, adopting additional security best practices like scanning application source code (both open source and proprietary) for vulnerabilities and coding errors along with following a policy-based compliance approach can vastly improve your container security. To see how continuous security embedded at each step in the software lifecycle can help you improve your container security, request a demo of Anchore.
Top Four Types of Software Supply Chain Attacks and How to Stop Them
It’s no secret that software supply chain attacks are on the rise. Hackers are targeting developers and software providers to distribute malware and leverage zero-days that can affect hundreds, sometimes even thousands, of victims downstream. In this webinar, we’ll take a deep dive into four different attack methods, and most importantly, how to stop them.
Practical Advice for Complying with Federal Cybersecurity Directives: 7 Things You Should Do Now
Join an open source security leader and a former DoD DevSecOps engineer for actionable tips on successfully aligning your leadership, culture, and process to comply with federal cybersecurity directives.
Top 4 Best Practices for Securing Your Source Code Repositories
Source code is the cornerstone of software development and if not stored and managed securely, could lead to the collapse of your entire pipeline. In this webinar we’ll look at the top four best practices for securing your source code repositories.
Gartner Innovation Insight for SBOMs
The software bill of materials, or SBOM, is foundational for end-to-end software supply chain management and security. Knowing what’s in software is the first step to securing it. Think of an SBOM like an ingredients label on packaged food: if there’s a toxic chemical in your can of soup, you’d want to know before eating it.
SBOMs are critical not only for identifying security vulnerabilities and risks in software but also for understanding how that software changes over time and potentially becomes vulnerable to new threats. In Innovation Insight for SBOMs, Gartner recommends integrating SBOMs throughout the software development lifecycle to improve the visibility, transparency, security, and integrity of proprietary and open-source code in software supply chains.
The Role of SBOMs in Securing Software Supply Chains
Gartner estimates that by 2025, 60 percent of organizations building or procuring critical infrastructure software will mandate and standardize SBOMs in their software engineering practice — a significant increase from less than 20 percent in 2022. However, organizations that are using open-source software and reusable components to simplify and accelerate software development are challenged with gaining visibility into the software they consume, build, and operate. And without visibility, they become vulnerable to the security and licensing compliance risks associated with software components.
To achieve software supply chain security at scale, Gartner recommends that software engineering leaders integrate SBOMs into their DevSecOps pipelines to:
- Automatically generate SBOMs for all software produced
- Automatically verify SBOMs for all open source and proprietary software consumed
- Continuously assess security and compliance risks using SBOM data before and after deployment
Gartner underscores the importance of integrating SBOM workflows across the software development lifecycle, noting that “SBOMs are an essential tool in your security and compliance toolbox. They help continuously verify software integrity and alert stakeholders to security vulnerabilities and policy violations.”
Who Should Use SBOMs
Citing U.S. National Telecommunications and Information Administration (NTIA) recommendations, Gartner identifies three primary entities that benefit from SBOM adoption:
- Software producers: Use SBOMs to assist in the building and maintenance of their supplied software
- Software procurers: Use SBOMs to inform pre-purchase assurance, negotiate discounts, and plan implementation strategies
- Software operators: Use SBOMs to inform vulnerability management and asset management, to manage licensing and compliance, and to quickly identify software and component dependencies and supply chain risks
SBOM Tools Evaluation
Gartner cautions that SBOMs are not intended to be static documents and that every new release of a component should include a new SBOM. When evaluating open-source and commercial SBOM tools for SBOM generation and management, Gartner advises organizations to select tools that provide the following capabilities:
- Create SBOMs during the build process
- Analyze source code and binaries (like container images)
- Generate SBOMs for those artifacts
- Edit SBOMs
- View, compare, import, and validate SBOMs in a human-readable format
- Merge and translate SBOM contents from one format or file type to another
- Support use of SBOM manipulation in other tools via APIs and libraries
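As a sketch of the first two capabilities using the open source Syft CLI (the image name and output format are illustrative):

# generate an SPDX SBOM for a freshly built image during the CI build stage
syft registry.example.com/app:1.0 -o spdx-json > app-1.0.spdx.json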
By generating SBOMs in the build phase, developers and security teams can identify and manage the software in their supply chains and catch bad actors early before they reach runtime and wreak havoc.
New Gartner Report: Innovation Insight for SBOMs
How to Meet the 6 FedRAMP Vulnerability Scanning Requirements for Containers
If you are tasked with implementing FedRAMP security controls for containerized workloads, this webinar is for you. We’ll walk you through a step-by-step process to explain how Anchore Enterprise can help you prepare a response for each of the six scanning requirements outlined in the FedRAMP Vulnerability Scanning Requirements for Containers.
SBOM-powered Software Supply Chain Management
SBOMs are quickly becoming the foundational element of software supply chain security. With the release of Anchore Enterprise 4.0, we are building on our existing SBOM capabilities to create the first SBOM-powered software supply chain management solution.
Anchore Enterprise 4.0 Delivers SBOM-Powered Software Supply Chain Management
With significant attacks against the software supply chain over the last year, securing the software supply chain is top of mind for organizations of all sizes. Anchore Enterprise 4.0 is designed specifically to meet this growing need, delivering the first SBOM-powered software supply chain management tool.
Powered By SBOMs
Anchore Enterprise 4.0 builds on Anchore’s existing SBOM capabilities, placing comprehensive SBOMs as the foundational element to protect against threats that can arise at every step in the software development lifecycle. Anchore can now spot risks in source code dependencies and watch for suspicious SBOM drift in each software build, as well as monitor applications for new vulnerabilities that arise post-deployment.
New Key Features:
Track SBOM drift to detect suspicious activity, new malware, or compromised software
Anchore Enterprise 4.0 introduces an innovative new capability to detect SBOM drift in the build process, alerting users to changes in SBOMs so they can be assessed for new risks or malicious activity. With SBOM drift detection, security teams can now set policy rules that alert them when components are added, changed, or removed so that they can quickly identify new vulnerabilities, developer errors, or malicious efforts to infiltrate builds.
End-to-end SBOM management reduces risk and increases transparency in software supply chains
Building on Anchore’s existing SBOM-centric design, Anchore Enterprise 4.0 now leverages SBOMs as the foundational element for end-to-end software supply chain management and security. Anchore automatically generates and analyzes comprehensive SBOMs at each step of the development lifecycle. SBOMs are stored in a repository to provide visibility into your components and dependencies as well as continuous monitoring for new vulnerabilities and risks, even post-deployment. Additionally, users can now meet customer or federal compliance requirements such as those described in the Executive Order on Improving the Nation’s Cybersecurity by producing application-level SBOMs to be shared with downstream users.
Track the security profile of open source dependencies in source code repositories and throughout the development process
With the ever-expanding use of open source software by developers, it has become imperative to identify and track the many dependencies that come with each piece of open source at every step of the development cycle to ensure the security of your software supply chain. Anchore Enterprise 4.0 extends scanning for dependencies to include source code repositories on top of existing support for CI/CD systems and container registries. Anchore Enterprise can now generate comprehensive SBOMs that include both direct and transitive dependencies from source code repositories to pinpoint relevant open source vulnerabilities, and enforce policy rules.
Gain an application-level view of software supply chain risk
Securing the software supply chain requires visibility into risk for each and every application. With Anchore Enterprise 4.0, users can tag and group all of the artifacts associated with a particular application, release, or service. This enables users to report on vulnerabilities and risks at an application level and monitor each application release for new vulnerabilities that arise. In the case of a new vulnerability or zero-day, users can quickly identify impacted applications solely from the SBOM repository and respond quickly to protect and remediate those applications.
Looking Forward
Anchore believes that SBOMs are the foundation of software supply chain management and security. The Anchore team will continue to build on these capabilities and advance the use of SBOMs to secure and manage the ever-evolving software supply chain landscape.
Policy-Based Compliance for Containers: CIS, NIST, and More
Policies are an integral part of ensuring security and compliance, but what does “policy-based compliance” mean in the world of cloud-native software development? How can policies be automated to ensure the security of your container images?
Helping Entrepreneurs Take Flight
The Kindness Campaign, inspired by Anchore’s core values, focuses on spreading kindness throughout our local communities. With Anchorenauts distributed across the US and UK, our quarterly volunteer program enables and encourages Anchorenauts to connect with local organizations and give back. In addition to direct support for various causes throughout the year, Anchore empowers team members to get involved with eight (8) paid volunteer hours per quarter.
This month, we are excited to partner with Ashley Goldstein from the Santa Barbara-based organization Women’s Economic Ventures (WEV). WEV, in partnership with the Mixteco Indigena Community Organization Project (MICOP), programmatically supports aspiring entrepreneurs within the Indigenous and Latinx community in Santa Barbara and Ventura Counties.
Through the Los Emprendedores Program, Ashley firmly believes in WEV’s and MICOP’s ability to empower members with the skills they need to launch their own businesses and to effect change in the most marginalized populations.
As part of the Kindness Campaign, Anchore has donated gently used Apple MacBooks to support budding entrepreneurs with the tools needed to kick-start their businesses and to enable the tremendous entrepreneurship training in the Los Emprendedores Program. In the program, participants develop highly valuable business skills ranging from business planning and grant writing to digital marketing and key ESG (Environmental, Social, & Governance) practices.
As a tech company, we deeply believe in the responsibility to give back to our community by widening access not only to basic technology but also to business and career opportunities in the technology sector. At Anchore, we take great pride in playing a part in that work, and we are grateful for the opportunity to support Ashley, WEV, and MICOP.
How You Can Take Action
If your company has gently used computer equipment that is ready to be donated, we encourage you to reach out to WEV, and other organizations doing amazing work in their communities such as Boys & Girls Clubs of America (that have local chapters nationwide) to learn more about the ways you can help.
Be sure to check back next quarter to hear about new activity with Anchore’s Kindness Campaign.
Best Practices for Securing Open Source Software for Enterprises
Open source software is everywhere, and it’s here to stay. Yet 45% of respondents to Anchore’s 2022 Software Supply Chain Security Report still cite securing OSS as their top container security challenge.
2022 Trends in Software Supply Chain Security
Anchore surveyed hundreds of security and DevOps leaders at large enterprises on their software supply chain security practices. Their answers reveal that a top trend in 2022 is a focus on securing software supply chains as the use of software containers continues to rise.
Container Security Best Practices: Zero-Days
Jan 26th @ 2pm EST/11am PST
FedRAMP Pre-Assessment Playbook for Containers
2022 Security Trends: Software Supply Chain Survey
In January 2022, Anchore published its Software Supply Chain Security Survey of the latest security trends, with a focus on the platforms, tools, and processes used by large enterprises to secure their software supply chains, including the growing volume of software containers.
What Are the 2022 Top Security Trends?
The top 2022 security trends related to software supply chain security are:
- Supply chain attacks are impacting 62 percent of organizations
- Securing the software supply chain is a top priority
- The software bill of materials (SBOM) emerges as a best practice to secure the software supply chain
- Open source and internally developed code both pose security challenges
- Increased container adoption is driving the need for better container security
- Scanning containers for vulnerabilities and quickly remediating them is a top challenge
- The need to secure containers across diverse environments is growing as organizations adopt multiple CI/CD tools and container platforms
Software Supply Chain Security Survey: Key Findings
The Anchore Software Supply Chain Security Survey is the first such survey of respondents exclusively from large enterprises, rather than from open source and developer communities or smaller organizations. The survey asked 428 executives, directors, and managers in IT, security, development, and DevOps functions about their security practices, their concerns, and their use of technologies for securing containerized applications. Their answers provide a comprehensive perspective on the state of software supply chain security, with a focus on the impact of the increased use of software containers.
We highlight several key findings from the survey in this blog post. For the complete survey results, download the Anchore 2022 Software Supply Chain Security Report.
1. Supply chain attacks impacted 62% of organizations
Such widespread attacks as SolarWinds, MIMECAST, and HAFNIUM as well as the recent Log4j vulnerability have brought the realities of the risk associated with software supply chains to the forefront. As a result, organizations are quickly mobilizing to understand and reduce software supply chain security risk.
A combined 62 percent of respondents were impacted by at least one software supply chain attack during 2021, with 6 percent reporting the attacks as having a significant impact and 25 percent indicating a moderate impact.
2. Organizations focus on securing the software supply chain
More than half of survey respondents (54 percent) indicate that securing the software supply chain is a top or significant focus, while an additional 29 percent report that it is somewhat of a focus. This indicates that recent, high-profile attacks have put software supply chain security on the radar for the vast majority of organizations. Very few (3 percent) indicate that it is not a priority at all.
3. SBOM practices must mature to improve supply chain security
The software bill of materials (SBOM) is a key part of President Biden's Executive Order on Improving the Nation's Cybersecurity because it is the foundation for many security and compliance regulations and best practices. Despite the foundational role of SBOMs in providing visibility into the software supply chain, fewer than a third of organizations follow SBOM best practices. In fact, only 18 percent of respondents have a complete SBOM for all applications.
Respondents do report, however, that they plan to increase their SBOM usage in 2022, so these trends may change as adoption continues to grow.
4. The shift to containers continues unabated
Enterprises plan to continue expanding container adoption over the next 24 months with 88 percent planning to increase container use and 31 percent planning to increase use significantly.
A related trend of note is that more than half of organizations are now running employee- and customer-facing applications in containers.
5. Securing containers focuses on supply chain and open source
Developers incorporate a significant amount of open source software (OSS) in the containerized applications they build. As a result, the security of OSS containers is ranked as the number one challenge by 24 percent of respondents, with almost half (45 percent) ranking it among their top three challenges. Ranked next was the security of internally written code, which 18 percent of respondents chose as their top container security challenge, followed by understanding the full SBOM at 17 percent.
6. Organizations face challenges in scanning containers
As organizations continue to expand their container use, a large majority face critical challenges related to identifying and remediating security issues within containers. Top challenges include identifying vulnerabilities in containers (89 percent), identifying secrets in containers (78 percent), and the time it takes to remediate issues (72 percent). Organizations will need to adopt container scanning tools that can accurately pinpoint vulnerabilities and provide recommendations for quick remediation.
7. Organizations must secure across diverse environments
Survey respondents use a median of five container platforms. The most popular method of deployment is standalone Kubernetes clusters based on the open source package, which 75 percent of respondents use. These environments run on-premises, via hosting providers, or on infrastructure-as-a-service from a cloud provider. The second most popular container platform is Azure Kubernetes Service (AKS), used by 53 percent of respondents, and Red Hat OpenShift ranks third at 50 percent. Respondents leverage the top container platforms in both their production and development environments.
For more insights to help you build and maintain a secure software supply chain, download the full Anchore 2022 Software Supply Chain Security Report.
Attribution Requirements for Sharing Charts
Anchore encourages the reuse of charts, data, and text published in this report under the terms of the Creative Commons Attribution 4.0 International License.
You may copy and redistribute the report content according to the terms of the license, but you must provide attribution to the Anchore 2022 Software Supply Chain Security Report.
2022 Software Supply Chain Security Report
7 Software Supply Chain Security Actions to Take in 2022
Join us Jan 12th @ 2pm EST/11am PST to learn how to plan your “Day 2” for Log4j and future zero-day vulnerabilities, leverage SBOMs as a foundation for supply chain security, and expand automation against malware, cryptomining, and leaked secrets.
InfoWorld: How to detect the Log4j vulnerability in your applications
A bug in the ubiquitous Log4j library can allow an attacker to execute arbitrary code on any system that uses Log4j to write logs. Does yours?
InfoWorld: Why SBOM management is no longer optional
Security Boulevard: The Dangers of a Log4j Worm
Key Things to Know about SBOMs and SBOM Standards
Find the Log4j Vulnerability with Anchore Enterprise
This tutorial video shows how to identify and triage the Log4j vulnerability using Anchore Enterprise. It covers runtime validation, the CISA policy bundle, automated policy and enforcement functions, and visibility into whether Log4j was inherited from a base image.
Identify Log4j Using Anchore Enterprise with Anchore CTL
This tutorial video shows how to identify the Log4j vulnerability using AnchoreCTL. AnchoreCTL is a command line client for Anchore Enterprise that makes it easy to automate scanning and analysis tasks from the command line.
Find the Log4j Vulnerability Using Syft and Grype
This tutorial video shows a walkthrough of how to generate an SBOM using Syft and how to scan that SBOM with Grype to identify any instances of the Log4j vulnerability.
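For readers who want to script the same workflow, here is a minimal sketch of those steps driven from Python, assuming the syft and grype CLIs are installed and on your PATH; the image name and file path are illustrative.

```python
import json
import subprocess

IMAGE = "mycompany/myapp:latest"  # hypothetical image to inspect

# Step 1: generate an SBOM for the image with Syft in its JSON format.
sbom = subprocess.run(
    ["syft", IMAGE, "-o", "json"],
    check=True, capture_output=True, text=True,
).stdout
with open("sbom.json", "w") as f:
    f.write(sbom)

# Step 2: scan the saved SBOM with Grype, again capturing JSON output.
scan = subprocess.run(
    ["grype", "sbom:./sbom.json", "-o", "json"],
    check=True, capture_output=True, text=True,
).stdout

# Step 3: report any match whose package name suggests Log4j.
for match in json.loads(scan).get("matches", []):
    artifact = match["artifact"]
    if "log4j" in artifact["name"].lower():
        print(artifact["name"], artifact["version"],
              match["vulnerability"]["id"])
```

Because the SBOM is saved to disk, it can be re-scanned later as new Log4j advisories land, without re-analyzing the image itself.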
Securing Cloud-Native Software to Comply with FedRAMP, STIGs, and More
Federal compliance requirements are constantly evolving to meet the growing challenges and complexities of securing the software supply chain. The task of meeting these compliance standards for cloud-native applications and containers can be overwhelming, but it doesn’t have to be.
Anchore Enterprise 3.3 Increases Vulnerability Visibility and Adds UI Enhancements
Visibility into cloud-native software applications is essential for securing the software supply chain. Today’s applications include code and components from many different sources, including internal developers, open source projects, and commercial software. With the release of 3.3, Anchore Enterprise now provides richer insight into your software components to identify more vulnerabilities and security risks along with UI enhancements to streamline the management of your container image analysis.
Discover and Mitigate Vulnerabilities and Security Risks Found in Rocky Linux Containers
Anchore Enterprise 3.3 can now scan and continuously monitor Rocky Linux container images for any security issues present in installed Rocky Linux packages, helping customers improve their security posture and reduce threats. Rocky Linux packages are now also included in SBOMs. Additionally, customers can apply Anchore's customizable policy enforcement to Rocky Linux packages and vulnerabilities.
Create Customizable Login Messages to Share Info with Your Team
A customizable banner can now be added to the login page. It can be used to give Anchore Enterprise users information such as instructions on how to log in (i.e., via SSO or email address) or which administrator to contact in the event of an issue. Both end users and administrators will benefit from this new feature, as it enables collaboration and communication between the internal teams using Anchore Enterprise.
Delete Multiple Items at Once in the Repository View Through the UI
Anchore Enterprise UI users can now select and delete multiple repo tags and failed images from the Repository View. When an image is analyzed, a list of repo tags is generated. These tags are alphanumeric identifiers attached to an image name. Depending on the content of the image, hundreds of these tags can be generated, many of them superfluous to the user. Now, rather than clicking on and deleting each tag individually, users can delete unnecessary tags in bulk. Users can likewise delete multiple images at once that have failed analysis, whether due to policy requirements or a misconfiguration.
Evaluate Policy Bundle Changes Without Having to Leave the Edit Screen
Anchore Enterprise UI users will now be able to view their policy evaluation as they edit their policy bundles without having to leave the edit screen in the UI. Policy bundle evaluations provide users with a pass or fail status for their images based on user-defined allowlists and blocklists. The ability to view the evaluation while editing the policy bundle enables users to see how their changes are affecting the evaluation without having to leave the screen they are working in.
Highlights From Anchore Open Source Meetup – Dec 2021
4 Ways to Reduce your Vulnerability Remediation Backlog in the SDLC
With an increased focus on vulnerability scanning, it’s becoming more common to see a backlog of findings start to pile up. This creates a burden for multiple teams, slows down the development lifecycle, and increases the chances of major vulnerabilities sneaking through and infiltrating the software supply chain.
Securing the Software Supply Chain: Why Signed Attestations for SBOMs Matter
As software supply chains continue to grow in complexity, securing them is becoming an ever more daunting task. With components coming from so many possible origins, it is becoming increasingly important to establish “trust” and prevent tampering. One of the most secure ways to do this is with a signed SBOM.
Open Source to Enterprise: Which Anchore Option is Right for You?
You have choices in container security tools that range from open source to enterprise-grade platforms. Get the details on Anchore’s open source and enterprise solutions so that you can determine which option is right for you.
Creating a FedRAMP Compliance Checklist
Creating a FedRAMP compliance checklist is vital to approaching compliance methodically. While government contracting is full of stories about FedRAMP challenges, the move to cloud-native development brings new tools, technologies, and methodologies to better set your projects up for FedRAMP compliance success. It's up to you to capture these best practices in a checklist or process flow for your teams to follow.
Considerations for your FedRAMP Compliance Checklist
Here are some concerns to include in your checklist:
1. Shift Security Left
Shifting left describes using tools and practices to move security and compliance feedback from security stakeholders into the early development stages. The objective is to hand bugs and fixes back to developers as part of a smooth, ongoing, continuous development process.
Unit testing is a familiar example of shifting left: it delivers early feedback on functionality. Shifting unit testing left ensures that most problems are caught during the development stage, where it is quicker and simpler to remedy them.
By shifting security left, the handling of each vulnerability becomes an integral part of the CI/CD pipeline. This prevents a mass of vulnerabilities from appearing as a single irritating blockage before your team admits a system into production. More frequent vulnerability scanning during development ensures bugs and other issues can be dealt with quickly and efficiently as they arise, and security becomes a part of the development process.
With the primary focus of CI/CD environments on fast, efficient development and innovation, security has to work efficiently as part of this process. Anchore advises that DoD and federal security teams use tools that can deliver rapid feedback into development. Security tools must integrate with typical CI/CD and container orchestration tools. The tools you choose should also promote early-stage interaction with developers.
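To illustrate this advice, the sketch below shows one way a pipeline step might gate a build on scan results so developers get feedback immediately. The image name, the choice of Grype as the scanner, and the severity threshold are all illustrative, not prescribed FedRAMP controls.

```python
import json
import subprocess
import sys

# Scan the freshly built image; Grype is used here as an example scanner.
result = subprocess.run(
    ["grype", "mycompany/myapp:latest", "-o", "json"],  # hypothetical image
    check=True, capture_output=True, text=True,
)
matches = json.loads(result.stdout).get("matches", [])

# Gate the build: any Critical finding fails the CI job on the spot.
critical = [m for m in matches
            if m["vulnerability"]["severity"].lower() == "critical"]
if critical:
    print(f"Build blocked: {len(critical)} critical vulnerabilities found")
    sys.exit(1)  # a non-zero exit code fails most CI jobs
print("No critical vulnerabilities found; proceeding")
```

Grype also ships a --fail-on severity flag that accomplishes the same gate without wrapper code; the wrapper form is shown here because it makes the decision logic explicit.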
2. Follow the 30/60/90 rule to keep Images Secure
Anchore recommends following the 30/60/90 rule to satisfy the guidance outlined in the DoD Cloud Computing Security Requirements Guide. This rule sets out the number of days to fix security issues:
- 30 days to fix critical vulnerabilities
- 60 days to fix high vulnerabilities
- 90 days to fix moderate vulnerabilities
In support of this, it is also strongly recommended to use a tool that allows security teams to frequently update and validate vulnerability databases with new security data. Not only is this necessary to satisfy security control RA-5(2); it is also a best practice to ensure your security data is timely and relevant.
By following the 30/60/90 rule and ensuring that you update your vulnerability databases and feeds promptly, you empower your security teams to remediate new security challenges quickly and efficiently.
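To make the rule concrete, here is a minimal sketch that checks scan findings against 30/60/90-day deadlines; the findings list and its field names are illustrative.

```python
from datetime import date, timedelta

# Days allowed to fix a finding, per the 30/60/90 rule.
DEADLINES = {"critical": 30, "high": 60, "moderate": 90}

findings = [  # hypothetical scan results
    {"id": "CVE-2021-44228", "severity": "critical", "detected": date(2021, 12, 10)},
    {"id": "CVE-2021-3712", "severity": "high", "detected": date(2021, 9, 1)},
]

today = date.today()
for finding in findings:
    allowed = timedelta(days=DEADLINES[finding["severity"]])
    deadline = finding["detected"] + allowed
    if today > deadline:
        print(f"{finding['id']} ({finding['severity']}) is overdue; "
              f"deadline was {deadline}")
```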
3. Make use of Tools that Support Container Image Allow/Deny Listing
Federal agencies should leverage container security tools that can enforce allowlisting and denylisting of container images. Maintaining allowlists and denylists is a common method of securing networks and software dependencies, but it is less common in containerized environments.
This capability is crucial, as attackers can potentially use containers to deploy denylisted software into secure areas such as your DevOps toolchain. The elements and dependencies of a container may not always appear in a software bill of materials (SBOM) from existing scanning tools, so the tools you use must be able to examine the contents of a container and enforce allowlist and denylist safeguards.
Anchore advises that container image denylisting occur at the CI/CD stage so that developers receive immediate feedback on issues. This allows for faster remediation, as denylisted container images, or the software contained within them, are flagged to the developer right away.
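The sketch below illustrates the basic shape of such a check at the CI/CD stage. Real tools match on registries, repositories, tags, and digests; the lists and image references here are purely illustrative.

```python
ALLOWLIST = {
    "registry.example.mil/hardened/python",
    "registry.example.mil/hardened/nginx",
}
DENYLIST = {"docker.io/library/netcat"}

def admit(image_ref: str) -> bool:
    """Return True if the image may proceed through the pipeline."""
    repo = image_ref.rsplit(":", 1)[0]  # strip the tag for matching
    if repo in DENYLIST:
        print(f"DENY  {image_ref}: repository is denylisted")
        return False
    if repo not in ALLOWLIST:
        print(f"DENY  {image_ref}: repository is not on the allowlist")
        return False
    print(f"ALLOW {image_ref}")
    return True

admit("registry.example.mil/hardened/python:3.9")  # allowed
admit("docker.io/library/netcat:latest")           # denied
```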
4. Deploy a Container Security Tool that Maintains Strong Configuration Management over Container Images
Software delivery and security operations teams should maintain an accurate inventory of all software they deploy on any federal information system. This inventory gives both teams accurate situational awareness of their systems and enables more precise decision-making.
Anchore advises federal agencies to implement a container-native security tool that can systematically deconstruct and inspect container images for all known software packages and display findings for information security personnel in an organized and timely manner.
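As a small illustration of what such an inventory might look like, the sketch below flattens the JSON SBOM that a tool like Syft emits (a top-level "artifacts" array of name/version/type records) into a sorted package list; the file path is illustrative.

```python
import json

# Load an SBOM previously produced by, e.g., `syft <image> -o json`.
with open("sbom.json") as f:
    sbom = json.load(f)

# Build a sorted (name, version, type) inventory of every known package.
inventory = sorted(
    (a["name"], a["version"], a["type"]) for a in sbom.get("artifacts", [])
)
for name, version, pkg_type in inventory:
    print(f"{name:30} {version:15} {pkg_type}")
```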
5. Use a Container Scanning Tool that runs on IL-2 through IL-6
The DoD and federal agencies must leverage tools that keep any vulnerability data regarding the information system within their authorization boundaries. However, many FedRAMP vulnerability scanning tools require an agent that connects to the vendor's external cloud environment. The DoD designates this as interconnectivity between DoD/federal systems and the tool vendor, which rules out the use of any agent/cloud-based tool within an IL-6 classified environment.
Where organizations still choose to implement an agent-based container security tool, they are then responsible for ensuring that the security vendor maintains an up-to-date accreditation for their cloud environment. The environment must also have the relevant RMF/FedRAMP security controls that the federal information system can inherit during the ATO process. In addition, any DoD or federal agency should ensure the agent-based tool can run in both classified/unclassified environments.
Learn how Anchore brings DevSecOps to DoD software factories.
6. Express Security Policy as Code
Where possible, select tools that enable your teams to define security policy as code. These tools enable security teams to establish and automate best practices that they can push to tools, either across the network or in more secure environments.
Expressing security policy as code also enables your ops teams to manage systems using existing software development life cycle (SDLC) techniques. For example, policy as code enables the versioning of security policies. Now teams can compare policy versions for configuration drift or other unexpected changes.
In essence, it subjects the policies themselves to the same level of rigor as the code they are applied against.
Implementing security policies as code shifts security left onto developers, so it is important not to tighten container security policies too far in a single step. Versioning also enables agencies to improve and tighten security policy over time.
This iterative approach towards improving security stops over-intrusive security policies from stalling development in the CI/CD pipeline. It prevents the emergence of any culture clash between developers and security operations. Security teams can begin with a policy base that delivers on minimum compliance standards and develop this over time towards evolving best practices.
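A minimal sketch of what this looks like in practice: the policy is plain, versioned data that lives in source control, so a drift check is just a diff. The rule schema below is illustrative and is not an Anchore policy bundle format.

```python
import json

policy_v1 = {
    "version": "1.0",
    "rules": [
        {"check": "max_severity", "value": "critical", "action": "fail"},
    ],
}
policy_v2 = {
    "version": "1.1",
    "rules": [
        # Tightened in a later iteration, per the advice above.
        {"check": "max_severity", "value": "high", "action": "fail"},
        {"check": "secrets_in_image", "value": True, "action": "fail"},
    ],
}

# Because policies are plain text, drift detection is an ordinary diff.
drifted = json.dumps(policy_v1, indent=2) != json.dumps(policy_v2, indent=2)
print("Policy changed between versions:", drifted)
```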
Conclusion
Think of a FedRAMP Compliance Checklist as more than just a documented list of activities your teams need to perform to get your application FedRAMPed. Rather, think of it as a methodical and strategic approach for your developers and security teams to follow as part of holistic and proactive strategies for secure software development and government compliance.
Download our FedRAMP containers checklist to help jump start your organization’s FedRAMP compliance checklist.
Five Advanced Methods for Managing False Positives in Vulnerabilities
False positives in security scans are a costly headache for both DevOps and security teams. They can slow down, or even stop, the development process dead in its tracks while issues are researched to determine whether they are real. Loosen your security controls too much, however, and you can open the door for legitimate vulnerabilities to infiltrate your systems.
7 Tips to Create a DevSecOps Open Source Strategy
DevSecOps open source convergence isn’t always apparent to business stakeholders. Here at Anchore, we’re believers in the open sourcing of DevSecOps because open source software (OSS) is foundational to cloud-native software development.
The Relationship between DevSecOps and Open Source
Open source technologies play a decisive role in how businesses and government agencies build their DevOps toolchains and capabilities. Entire companies have grown around open source DevOps and DevSecOps tools, offering enterprise-grade services and support for corporate and government customers.
DevSecOps Adoption IRL
The adoption of DevSecOps across the public sector and industries such as financial services and healthcare has been full of challenges. Some may even call DevSecOps adoption aspirational.
Adopting DevSecOps starts with shifting security left. Work on minimizing software code vulnerabilities begins on day one of the project, not as the last step before release. You also need to ensure that all your team members, including developers and operations teams, share responsibility for following security practices as part of their daily work. Then you must integrate security controls, processes, and tools at the start of your current DevOps workflow to enable automated security checks at each stage of your delivery pipeline.
Open Source in the Thick of DevSecOps
DevOps and DevSecOps can find their roots in the open source culture. DevOps principles have a lot in common with open source principles.
Software containers and Kubernetes are perhaps the best-known examples of open source tools advancing DevSecOps. Containers represent a growing open source movement that embodies some essential principles of DevSecOps, especially collaboration and automation. These tools can also help mitigate common threats such as outdated images, embedded malware, and insecure software or libraries.
The advantages of open source for DevSecOps include:
- No dependency on proprietary formats like you would get with vendor-developed applications
- Access to a vibrant open source community of developers and advocates trying to solve real-world problems
- An inclusive meritocracy where good ideas can come from anywhere, not just a product manager or sales rep who's a few layers removed from the problems users encounter every day during their work
Creating a DevSecOps Open Source Strategy
Here are some tips about how to set a DevSecOps open source strategy:
1. Presenting Open Source to your Organization’s Leadership
While open source technologies are gaining popularity across commercial and federal enterprises, that doesn't always mean your managers are open source advocates. Here are some tips for presenting open source DevSecOps solutions to your leadership team:
- Open source technologies for a DevSecOps toolchain offer a low entry barrier to build a proof of concept to show the value of DevSecOps to your leadership team. Presenting a live demo of a toolchain carries much more weight than another PowerPoint presentation over another Zoom call.
- Proper DevSecOps transformation requires a roadmap that moves your enterprise from the waterfall software development life cycle (SDLC) or DevOps to DevSecOps. Open source tools have a place on that roadmap.
- Know the strengths and weaknesses of the open source tools you’re proposing for your DevSecOps toolchain, especially for compliance reporting.
- Remember, there are costs to implementing open source tools in your DevSecOps toolchain, including work hours, implementation, operations, and security
2. Establish OSS Governance Standards as an Organization
OSS can enter your DevSecOps pipeline in many ways that break from software procurement norms. Since OSS doesn't come with a price tag, it's easy for it to bypass your standard software procurement processes, and even your expense reports, for that matter. If you're building cloud-native applications at any sort of scale, you need to start wrapping some ownership and accountability around OSS.
Smaller organizations could assign a developer ownership and accountability over the OSS in their portion of the project. This developer would be responsible for generating the software bill of materials (SBOM) for the OSS under their responsibility.
Depending on the size of your development organization and use of OSS, it may make more sense to establish a centralized OSS tools team inside your development organization.
3. Place Collaboration before Bureaucracy
The mere words “software procurement” invoke images of bureaucracy and red tape in developers’ eyes, primarily if they work for a large corporation or government agency. You don’t want to repeat that experience with OSS procurement. DevSecOps offers you culture change, best practices, and new tools to improve collaboration.
Here are some ways to message how open source procurement will be different for your developers from the usual enterprise software procurement process:
- Count your developers and cybersecurity teams as full stakeholders and tap into their open source experience
- Open and maintain communication channels between developers, legal, and business stakeholders through the establishment of an OSS center of excellence (CoE), open source program office (OSPO), or similar working group
- Communicate with your developers through appropriate channels such as Slack or Zoom when you need input and feedback
4. Educate Your Stakeholders About the Role of OSS in DevSecOps
While your development teams may be all about OSS, that doesn’t mean the rest of your business stakeholders are. Use stakeholder concerns about the current security climate as an opportunity to discuss how OSS helps improve the security of your software development efforts, including:
- OSS means more visibility into the code for your cybersecurity team, unlike proprietary software code
- OSS tools serve as the foundation of the DevSecOps toolchain, whether it's code and vulnerability scanning, automation, testing, or container orchestration
- DevSecOps and OSS procurement processes enable you to create consistent, repeatable security practices
5. Upgrade Your OSS Procurement Function
Your OSS procurement may still be entirely ad hoc, and there's no judgment if that's served your organization well thus far. However, we're entering a new era of security and accountability as the software supply chain becomes an attack vector. While there's no conclusive evidence that OSS played a role in recent software supply chain breaches, OSS procurement can set an example for the rest of your organization. A well-executed OSS procurement cycle brings OSS directly into your DevSecOps toolchain.
Here are some upgrades you can make to OSS procurement:
- Establish an OSS center of excellence or go one step further and establish an open source program office to bring together OSS expertise inside your organization and drive OSS procurement priorities.
- Seek out an executive sponsor for OSS procurement, because OSS adoption and procurement inside some enterprises aren't easy: you will be navigating internal challenges, politics, and bureaucracy. A chief technology officer or VP of development is a natural candidate for this role. An executive-level champion provides the high-level support needed to ensure OSS becomes a priority for your development organization.
- Encourage developer involvement in the OSS community, not only because it's good for their careers but also because your organization benefits from the ideas they bring back to in-house projects
6. Make Risk Management Your Co-Pilot
Your development team assumes responsibility for the OSS it adopts: keeping it secure and ensuring your teams run the latest versions and security updates. Such work can take developers away from client-facing and billable projects. There are corporate cultures, especially in professional services and system integration, where developers must meet quotas for billable work. Maintaining OSS behind the scenes, when a customer isn't necessarily paying, is a hard sell to management sensitive to their profit and loss.
A more cavalier approach is to move fast and assume the OSS in question is being kept up to date and secure by a robust volunteer effort.
Another option is outsourcing your OSS security and maintenance and paying for somebody else to worry about it. This solution can be expensive, even if you can find a vendor with the appropriate skills and experience.
7. Bring Together Developers + Business for DevSecOps Open Source Success
Software procurement in the enterprise world is an area of expertise unto itself. As you take steps toward a more formalized OSS procurement cycle, it takes a cross-functional team to succeed with procurement and, later, governance. An Open Source Program Office can be the ideal home for just such a cross-functional team.
Your contracts and legal teams often don’t understand technology, much less OSS. Likewise, your developers won’t be knowledgeable about the latest in software licensing.
Such a coming together won’t happen without leadership support and maybe even a little culture change in some organizations.
DevSecOps: Open Source to Enterprise Software
Compliance, whether it’s the United States government’s FedRAMP or commercial compliance programs such as Sarbanes Oxley (SOX) in the healthcare industry and Payment Card Industry Data Security Standard (PCI DSS) in the financial services industry, brings high stakes. For example, mission-critical government cloud applications can’t go live without passing an authority to operate (ATO). Financial and healthcare institutions face stiff fines and penalties if their applications fail compliance audits.
Beyond that, the breach of the week making headlines in mainstream and technology media is also driving DevSecOps decisions. Companies and federal agencies are doing what they can to becoming another cybersecurity news story.
Such high stakes present a challenge for organizations moving to DevSecOps. Relying solely on open source solutions for a DevSecOps toolchain puts the onus of maintenance and patching on internal teams. There also comes a point when, for tools such as container scanning, your organization needs to look at enterprise offerings. Most often, the reason to move to an enterprise offering is compliance audits: for example, you require enterprise-class reporting and a real-time feed of the latest vulnerability data to satisfy internal and external compliance requirements. Vendor backing and support also become a necessity.
Final Thought
A DevSecOps open source strategy comes from melding procurement, people, and DevSecOps practices together. Doing so lets your organization benefit from the innovation and security that open source offers while relying on DevSecOps practices to ensure collaboration throughout the whole development lifecycle to successful product launch.
Three Software Supply Chain Attacks and How to Stop Them
Software supply chain attacks are on the rise. Threat actors are targeting software developers and suppliers to infiltrate source code and distribute malware to hundreds, sometimes even thousands, of victims globally… and they're getting better at it every day. Take a deep dive into supply chain attacks. Find out what they are, how they work, and most importantly, how to stop them.
Policy-Based Compliance for Containers: CIS, NIST, and More
Policies are an integral part of ensuring security and compliance, but what does “policy-based compliance” mean in the world of cloud-native software development? How can policies be automated to ensure the security of your container images?
The Software Bill of Materials and its Role in Cybersecurity
The software bill of materials is one of the most powerful security tools in modern cybersecurity. Learn about the role of SBOMs in this white paper.
5 DevSecOps Best Practices for Hybrid Teams
As we put away our beach chairs and pool toys, now that Labor Day is past us, it’s time to refresh your DevSecOps best practices if your organization is moving employees back to the office on a part-time basis. While your developers should capitalize on their remote work wins, hybrid work can require different approaches than what has been in place during the past 18+ months.
Here are some DevSecOps practices to consider if your development teams are moving to a hybrid work model:
1. Reinforce Trust and Relationships
The pandemic-forced remote work we’ve all been through has provided invaluable collaboration, empathy, and trust lessons. Your work to continuously improve trust and relationships on your teams doesn’t stop when some team members begin to make their way back to the office.
A challenge to be wary of with hybrid DevSecOps teams is that some team members get face time with managers and executives in the office while remote employees don't. A common employee concern is that two (or more) classes of employees will develop in your organization.
There can be cultural issues at play here. Then again, work from home (WFH) anxiety and paranoia can be real for some people. Pay close attention and keep communication between team members open as you venture into hybrid work. Provide parity for your meetings by giving onsite and remote participants an equal platform. Another good rule is to communicate calmly and with candor. Such acts will help reinforce trust across your teams.
2. Review your DevOps/DevSecOps Toolchain Security
The move to remote work opened commercial and public sector enterprises up to new attacks as remote work pushed endpoints outside the traditional network perimeter. In pre-pandemic times, endpoint security at commercial and public sector organizations was very much centralized.
Securing the DevSecOps pipeline is, in some ways, an underserved security discussion. The DevOps and DevSecOps communities spend so much time on delivery velocity and shifting security left that the actual security of the toolchain, including identity and access management (IAM), zero trust architecture (ZTA), and other measures, gets comparatively little attention. The benefit of these measures is that only authorized employees can access your toolchain.
Use the move to hybrid work to review and refresh your toolchain security against man-in-the-middle and other attacks targeting hybrid teams.
3. Improve your DevSecOps Tools and Security Reporting
End-to-end traceability gains added importance as more of your executives and stakeholders return to a new state of normalcy. Use your organization’s move to hybrid work to improve security and development tools reporting across your pipelines. There are some reasons for this refresher:
- Deliver additional data to your management and stakeholders about how projects are progressing through your pipelines during the hybrid work move. Be proactive and work with stakeholders during your hybrid work transition to see if they have additional reporting requirements for their management.
- Update your security reporting to reflect the new hybrid working environment that spans both inside and outside your traditional endpoints and network perimeter.
- Use data to give your team the most accurate picture of the current state of software development and security across your projects.
4. Standardize on a Dev Platform
Hybrid work reinforces the need for your developers to work on a standardized platform such as GitLab or GitHub. The platform can serve as a centralized, secure hub for software code and project artifacts accessible to your developers, whether they are working from home or in the office. Each platform also includes reporting tools that can help you further communicate with your management about the progress and security of your projects.
If your developers are already standardized on a platform, use the move to hybrid work to learn and implement new features. For example, GitLab now integrates Grype with GitLab 14 for container security, and GitHub includes GitHub Actions, which make it easy to automate CI/CD workflows.
5. Refine your Automation Practices
DevSecOps automation isn’t meant to be a one-and-done process. It requires constant analysis and feedback from your developers. With automation, look for areas to improve, such as change management and other tasks that you need to adapt to hybrid work. Make it a rule if hybrid work changes a workflow for your teams, it’s a new opportunity to automate!
Final thoughts
If you view DevOps and, in turn, DevSecOps as opportunities for continuous improvement, then DevSecOps best practices for hybrid work are another step in your DevSecOps journey. Treat it as the same learning experience as when your organization sent your team home in the early days of COVID-19.
Shifting Left and Right: Securing Container Images in Runtime with Anchore
Shifting security left reduces the cost to fix problems and avoids last minute delays. But to achieve continuous security and compliance, you also need to check container images in the registry and in Kubernetes at runtime.
DevOps Supply Chain Security: A Case for DevSecOps
DevOps supply chain security is becoming another use case for DevSecOps as enterprises seek innovative solutions to secure this attack vector. In the 2021 Anchore Software Supply Chain Report, 60% of respondents consider securing the software supply chain a top or significant focus area. DevSecOps gives enterprises the foundational tools and processes to support this security focus.
Anatomy of a Software Supply Chain Attack
A software supply chain is analogous to a manufacturing supply chain in the auto industry. It includes anything that impacts your software, especially open source and custom software components. The sources for these components come from outside an organization such as an open source software (OSS) project, third-party vendor, contractor, or partner.
The National Institute of Standards and Technology (NIST) has a concise and easy-to-understand definition of software supply chain attack:
A software supply chain attack occurs when a cyber threat actor infiltrates a software vendor’s network and employs malicious code to compromise the software before the vendor sends it to their customers.
Many organizations see increased value from in-house software development by adopting open source technology and containers to build and package software for the cloud quickly. Usually branded as Digital Transformation, this shift comes with trade-offs rarely highlighted by vendors and boutique consulting firms selling the solutions. You can get past these trade-offs with OSS by establishing an open source program office (OSPO) to manage your OSS governance.
These risks are not limited to criminal hacking; fragility in your supply chain comes in many forms. One type of risk comes from single contributors who could object morally to the use of their software, like what happened when one developer decided he didn't like President Trump's support of ICE and pulled his package from NPM. Or, unbeknownst to your legal team, you could distribute software without a proper license, as with any container that uses Alpine Linux as the base image.
Why DevSecOps for Software Supply Chain Security?
DevSecOps practices focus on breaking down silos, improving collaboration, and of course, shifting security left to integrate it early in the development process before production. These and other DevSecOps practices are foundational to secure cloud-native software development.
Software supply chain security in the post SolarWinds and Codecov world is continuously evolving. Some of the brightest minds in commercial and public sector cybersecurity are stepping up to mitigate the risks of potential software supply chain attacks. It’s a nearly impossible task currently.
Here are some reasons why DevSecOps is a must for software supply chain security:
Unify your CI/CD Pipeline
The sooner you can unify your CI/CD pipeline, the sooner you can implement controls, allowing your security controls to shift left, according to InfoWorld. Implementing multiple controls across multiple systems is a recipe for disaster.
Unifying your CI/CD pipeline also gives you an opportunity to level-set current tool standards and to upgrade tools as necessary to improve security and compliance.
Target Dependencies in Software Code
A DevSecOps toolchain gives you the tools, processes, and analytics to target dependencies in the software code coursing through your software supply chain. Less than half of our software supply chain survey respondents report scanning open source software (OSS) containers and using private repositories for dependencies.
Unfortunately, there’s no perfect solution to detecting your software dependencies. Thus, you need to resort to multiple solutions across your DevSecOps toolchain and software supply chain. Here are some traditional solutions:
- Implement software container scanning using a tool such as Anchore Enterprise (of course!) at critical points across your supply chain, such as before checking containers into your private repository
- Analyze code dependencies specified in the manifest file or lock files
- Track and analyze dependencies that your build process pulls into the release candidate
- Examine build artifacts before they enter your registry via tools and processes
The appropriate targeting of software dependencies raises the stature of the software bill of materials (SBOM) as a potent software supply chain security measure.
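As one concrete instance of the second bullet above, the sketch below reads the dependencies pinned in a Python requirements.txt manifest; manifests and lock files from other ecosystems (package-lock.json, go.sum, and so on) can be handled the same way. The file path is illustrative.

```python
from pathlib import Path

def parse_requirements(path: str) -> list[tuple[str, str]]:
    """Return (name, version) pairs for dependencies pinned with ==."""
    deps = []
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        if "==" in line:
            name, version = line.split("==", 1)
            deps.append((name.strip(), version.strip()))
    return deps

for name, version in parse_requirements("requirements.txt"):
    print(f"{name} pinned at {version}")
```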
Use DevSecOps Collaboration to Break Down DevOps Supply Chain Barriers
DevSecOps isn’t just about tools and processes. It also instills improvements in culture, especially for cross-team collaboration. While DevSecOps culture is a work in progress for the average enterprise, and it should be that way, focusing a renewed focus on software supply chain security is cause for you to extend your DevSecOps culture to your contractors and third-party suppliers that make up your software supply chain.
DevSecOps frees your security team from being the last stop before production. They are free to be more proactive at earlier stages of the software supply chain through frameworks, automated testing, and improved processes. Collaborating with the security team takes on some extra dimensions with software supply security because they’ll deal with some additional considerations:
- Onboarding OSS securely to their supply chain
- Intaking third-party vendor technologies while maintaining security and compliance
- Collaborating with contractor and partner security teams as a player-coach to integrate their code into your final product
Structure DevSecOps with a Framework and Processes
As companies continue to move to the cloud, it’s becoming increasingly apparent they should integrate DevSecOps into their cloud infrastructure. Some pain points will likely arise, but their duration will be short and their payoffs large, according to InfoQ.
A DevSecOps framework brings accountability and standardization leading to an improved security posture. It should encompass the following:
- Visibility into dependencies through the use of automated container scanning and SBOM generation
- Automation of CI/CD pipelines through the use of AI/ML tools and other emerging technologies
- Mastery over the data your pipelines generate, giving your technology and cybersecurity stakeholders the actionable intelligence they need to respond effectively to technical issues in the build lifecycle and to cybersecurity incidents
Final Thoughts
As more commercial and public sector enterprises focus on improving the security posture of their software supply chains, DevSecOps provides the framework, tools, and culture change that can serve as a foundation for software supply chain security. Just as important, DevSecOps also provides the means to pivot and iterate on your software supply chain security in the interests of continuous improvement.
Want to learn more about supply chain security? Download our Expert Guide to Software Supply Chain Security White Paper!
2021 Trends in Software Supply Chain Security
What security risks are DevOps teams facing in their software supply chain as the use of software containers continues to rise? Anchore has released its 2021 Software Supply Chain Security Report, which compiles survey results from hundreds of enterprise IT, Security and DevOps leaders about the latest trends in how their organizations are adapting to new security challenges.
How to Comply with DISA STIGs for Containers using Anchore
As the US Federal government seeks to accelerate software development and improve its cybersecurity posture, the DoD and many civilian agencies are now using DISA STIGs to check the security of cloud-native and containerized applications, which introduces some new challenges.
Advancing Software Security with Technical Innovation
As we explore the various roles and responsibilities at Anchore, one of the critical areas is building the roadmap for our enterprise product. Anchore Enterprise is a continuous security and compliance platform for cloud-native applications. Our technology helps secure the software development process and is in use by enterprises like NVIDIA and eBay as well as government agencies like the U.S. Air Force and Space Force.
As news of software supply chain breaches continues to make headlines and impact software builds across industries, the team at Anchore works each day to innovate and refine new technology to support secure and compliant software builds.
With this, Anchore is thrilled to announce an opening for the role of Principal Product Manager. Our Vice President of Product, Neil Levine, weighs in on what he sees as key elements to this role:
“Product managers are lucky in that we get to work with almost every part of an organization and are able to use both our commercial and technical skills. In larger organizations, a role like this often gets more proscribed and the ability to exercise a variety of functions is limited. Anchore is a great opportunity for any PM who wants to enjoy roaming across a diverse range of projects and teams. In addition to that, you get to work in one of the most important and relevant parts of the cybersecurity market that is addressing literal front-page news.”
Are you passionate about security, cloud infrastructure or open-source markets? Then apply for this role on our job board.
The Power of Policy-as-Code for the Public Sector
As the public sector and businesses face unprecedented security challenges in light of software supply chain breaches and the move to remote, and now hybrid, work, the time for policy-as-code is now.
Here’s a look at the current and future states of policy-as-code and the potential it holds for security and compliance in the public sector:
What is Policy-as-Code?
Policy-as-code is the practice of writing the policies you create for container security and other related security concerns as code. Your IT staff can then automate those policies to support policy compliance throughout your DevSecOps toolchain and production systems. Programmers express policy-as-code in a high-level language and store the policies in text files.
Your agency is most likely getting exposure to policy-as-code through cloud services providers (CSPs). Amazon Web Services (AWS) offers policy-as-code via the AWS Cloud Development Kit. Microsoft Azure supports policy-as-code through Azure Policy, a service that provides both built-in and user-defined policies across categories that map the various Azure services such as Compute, Storage, and Azure Kubernetes Services (AKS).
Benefits of Policy-as-Code
Here are some benefits your agency can realize from policy-as-code:
- Capturing the information and logic behind your security and compliance policies as code removes the risk of "oral history," where sysadmins may or may not pass down policy information to their successors during a contract transition.
- When you render security and compliance policies as code in plain text files, you can use various DevSecOps and cloud management tools to automate the deployment of policies into your systems.
- Guardrails for your automated systems: as your agency moves to the cloud, the number of automated systems only grows. A responsible growth strategy protects your automated systems from performing dangerous actions, and policy-as-code is a well-suited method for verifying their activities.
- A longer-term goal would be to manage your compliance and security policies in your version control system of choice with all the benefits of history, diffs, and pull requests for managing software code.
- You can now test policies with automated tools in your DevSecOps toolchain, as the sketch below shows.
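To illustrate the last point in the list above, here is a minimal sketch of a policy function exercised by ordinary automated tests; the evaluate() function and its rule schema are illustrative.

```python
def evaluate(policy: dict, finding: dict) -> str:
    """Fail a finding whose severity meets or exceeds the policy threshold."""
    order = ["low", "moderate", "high", "critical"]
    threshold = order.index(policy["fail_at"])
    return "fail" if order.index(finding["severity"]) >= threshold else "pass"

def test_critical_fails():
    assert evaluate({"fail_at": "high"}, {"severity": "critical"}) == "fail"

def test_low_passes():
    assert evaluate({"fail_at": "high"}, {"severity": "low"}) == "pass"

if __name__ == "__main__":
    test_critical_fails()
    test_low_passes()
    print("policy tests passed")
```

Tests like these can run in the same CI pipeline as application code, so a policy change that weakens enforcement is caught before it ships.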
Public Sector Policy Challenges
As your agency moves to the cloud, it faces new challenges with policy compliance while adjusting to novel ways of managing and securing IT infrastructure:
Keeping Pace with Government-Wide Compliance & Cloud Initiatives
FedRAMP compliance has become a domain specialty unto itself. While the United States federal government maintains control over the policies behind FedRAMP and its updates and changes, FedRAMP compliance has become its own industry, with specialized consultants and toolsets that promise to get an agency's cloud application through the FedRAMP approval process.
As government cloud initiatives such as Cloud Smart become more important, the more your agency can automate the management and testing of security policies, the better. Automation reduces human error because it does away with the manual and tedious management and testing of security policies.
Automating Cloud Migration and Management
Large cloud initiatives bring with them the automation of cloud migration and management. Cloud-native development projects that accompany cloud initiatives need to consider continuous compliance and security solutions to protect their software containers.
Maintaining Continuous Transparency and Accountability
Continuous transparency is fundamental to FedRAMP and other government compliance programs. Automation and reporting are two fundamental building blocks. The stakes for reporting are only going to increase as the mandates of the Executive Order on Improving the Nation’s Cybersecurity become reality for agencies.
Achieving continuous transparency and accountability requires that an enterprise have the right tools, processes, and frameworks in place to monitor, report, and manage employee behaviors throughout the application delivery life cycle.
Securing the Agency Software Supply Chain
Government agencies are multi-vendor environments with heterogeneous IT infrastructure, including cloud services, proprietary tools, and open source technologies. The recent release of the Container Platform SRG is going to drive more requirements for the automation of container security across Department of Defense (DoD) projects.
Looking to learn more about how to utilize a policy-based security posture to meet DoD compliance standards like cATO or CMMC? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories. Get caught up with the content below.
Policy-as-Code: Current and Future States
The future of policy-as-code in government could go in two directions. The same technology principles that apply to rendering technology and security policies as code can also render any government policy as code. An example is the work 18F is prototyping for Supplemental Nutrition Assistance Program (SNAP) food stamp eligibility.
Policy-as-code can also serve as another automation tool for FedRAMP and Security Technical Implementation Guide (STIG) testing as more agencies move their systems to the cloud. Look for the backend tools that can make this happen gradually to improve over the next few years.
Managing Cultural and Procurement Barriers
Compliance and security are integral elements of federal agency working life, whether it’s the DoD supporting warfighters worldwide or civilian government agencies managing constituent data to serve the American public better.
The concept of policy-as-code brings to mind being able to modify policy bundles on the fly and pushing changes into your DevSecOps toolchain via automation. While theoretically possible with policy-as-code in a DevSecOps toolchain, the reality is much different. Industry standards and CISO directives govern policy management at a much slower and measured cadence than the current technology stack enables.
API integration also enables you to connect your policy-as-code solution to third-party tools, such as Splunk and other operational support systems, that your organization may already use as standards.
Automation
It’s best to avoid manual intervention for managing and testing compliance policies. Automation should be a top requirement for any policy-as-code solution, especially if your agency is pursuing FedRAMP or NIST certification for its cloud applications.
Enterprise Reporting
Internal and external compliance auditors bring with them varying degrees of reporting requirements. It’s essential to have a policy-as-code solution that can support a full range of reporting requirements that your auditors and other stakeholders may present to your team.
Enterprise reporting requirements range from customizable GUI reporting dashboards to APIs that enable your developers to integrate policy-as-code tools into your DevSecOps team’s toolchain.
Vendor Backing and Support
As your programs venture into policy compliance, failing a compliance audit can be a costly mistake. You want to choose a policy-as-code solution for your enterprise compliance requirements with a vendor behind it for technical support, service level agreements (SLAs), software updates, and security patches.
You also want vendor backing for technical support; policy-as-code isn't a technology to support with your own internal IT staff, at least in the beginning.
With policy-as-code being a newer technology option, a fee-based solution backed by a vendor also gets you access to their product management. As a customer, you want a vendor that will share its product roadmap and let you see where the product is headed.
Interested in seeing how the preeminent DoD software factory platform used a policy-based approach to software supply chain security to achieve a cATO and allow any DoD program built on its platform to do the same? Read our case study or watch our on-demand webinar with Major Camdon Cady.
How NVIDIA Uses Shift Left Automation to Secure Containers
As container adoption grew, NVIDIA’s Product Security team needed to provide a scalable security process that would support diverse requirements across business units. They found that traditional security scanning tools didn’t work for containers — they were complicated to use, time consuming to run, and generated too many false positives.
The Broad Impact of Software Supply Chain Attacks
The broad impact of software supply chain attacks is clear in the findings of our recent 2021 Anchore Supply Chain Security Report. As malicious actors continue to advance the threat landscape in creative and alarming ways, Anchore commissioned a survey of 400+ enterprises with at least 1,000 employees to find out how real the impact is.
A whopping 64% of respondents to our survey reported that a supply chain attack had affected them in the last year. Furthermore, a third of those respondents report that the impact on their organizations was moderate or significant.
Scanning Challenges Abound
Enterprises facing these supply chain attacks also have to work through container scanning challenges. 86% of respondents reported challenges in identifying vulnerabilities. Too many false positives are a challenge for 77% of the respondents. On average, respondents estimate that 44% of vulnerabilities found are false positives. Getting developers to spend time on remediating issues was a challenge for 77% of respondents.
Corporate and government agency moves to DevOps and DevSecOps mean collaboration among development, security, and operations teams is more important than ever before. 77% of organizations are designating Security Champions within Dev teams to facilitate tighter collaboration.
Enterprise Security Focus: The Software Supply Chain
Against a backdrop of recent high-profile software supply chain attacks, 46 percent of respondents indicated that they have a significant focus on securing the software supply chain while an additional 14 percent have prioritized it as a top focus.
Very few respondents (3%) indicated that software supply chain security isn't a priority at all.
The DevOps Toolchain: An Enterprise Blind Spot
Experts have identified development platforms and DevOps toolchains as a significant risk point for software supply chain security. When attackers compromise a toolchain or development platform, they gain access to all the different applications that move through your development pipeline. This opens the door for bad actors to insert malicious code or backdoors that can be exploited once the developer deploys the software in production or (even worse) shipped to customers.
A critical best practice is to leverage infrastructure-as-code (IaC) to ensure each platform or tool in the development process is configured and secured properly. Just over half of respondents are using IaC to secure these various platforms.
Do you want more insights into container and software supply chain security? Download the Anchore 2021 Software Supply Chain Security Report!
Software Supply Chain Security
One of the most vulnerable segments of software is the build process. From open source projects to third-party software vendors, learn best security practices for cloud-native application development.
Why an SBOM Is Critical for Cybersecurity and How To Create One
With recent high profile supply chain attacks, the software-bill-of-materials (SBOM) is becoming a critical foundation for cybersecurity. Organizations must understand all of the components in the applications they build so that they can properly secure them.
Settling into a Culture of Kindness
Blake Hearn (he/him) joined Anchore in February 2020 as a DevSecOps Engineer on the Customer Success team, marking the start of both Blake’s professional career and entry into DevSecOps. In this Humans of Anchore profile, we sat down with Blake to talk about learning new skill sets, a culture of kindness, and lessons from leadership.
From his start at Anchore, Blake has been immersed in a team of kind and supportive people offering him the mentorship, resources, and encouragement needed to be successful.
“The whole team really helped me learn at a fast rate. They created training materials and testing environments for me to learn, checked in with me frequently, and even recommended some certifications which played a huge role in building a foundational knowledge of DevSecOps. A year and a half ago I didn’t know anything about Docker, Jenkins or Kubernetes and now I’m using them every day.”
Blake’s support system reaches far beyond his direct team, extending all the way to the executives and co-founders of the company.
“I’ve had a really great experience with my managers and the leadership team. Being able to reach out to the CEO or CTO is amazing. Dan Nurmi (CTO/Co-Founder) has open office hours each week where I can bring my technical questions and feel comfortable doing so. Everyone at Anchore is really collaborative. I can ask anyone a question and they are more than willing to help.”
In his role, Blake spends most of his day working on the Platform One team at the Department of Defense (DoD) partnering with engineers from companies across the industry to help deliver software solutions faster and more securely across the DoD.
“It’s been a really good opportunity for me to learn from both my Anchore team and my Platform One team. My role requires a lot of custom Helm templating and testing updates on various Kubernetes clusters. We are putting our minds together to come up with solutions and do groundbreaking work.”
Looking ahead, Blake is eager to continue his learning journey. “I’m excited to continue learning from others and get into new skill sets. Recently, I’ve learned a little bit about the operational side of Machine Learning (ML) and how ML could be used in cybersecurity. Next, I would like to get into penetration testing to help improve the security posture of products and services. I think that would provide a huge benefit to our customers – especially with the supply chain attacks we’ve seen recently in the news.”
In summarizing his time at Anchore, Blake is grateful for the support system he has found: “I didn’t think companies like Anchore existed – where the company’s culture is so kind, everyone is really smart, works well together, and you have direct access to leadership. No other company I’ve seen compares to Anchore.”
Interested in turning your dreams into reality? Check out our careers page for our open roles at anchore.com/careers.
Developing Passionate and Supportive Leaders
Anchore’s management program is founded on passionate people leaders who are kind, open, and invest in their team’s success. Finding passionate leaders means opening the door to opportunities for all employees. We empower Anchorenauts to apply for management roles and participate in a cross-functional interview process.
A few months into Dan Luhring’s (he/him) time at Anchore, a management role opened up in the Engineering organization. When the Director of Engineering asked if anyone on the team was interested in pursuing the role, Dan immediately raised his hand.
“When I interviewed for the manager position with the leadership team, I was glad that I was going through a formal process because it made me realize that Anchore understands how vitally important great managers are to the success of the company.”
Upon joining the Anchore management team, all leaders go through a robust training program where they learn more about different communication and working styles, coaching conversations, and the guiding principle of Anchore’s management philosophy: building trusting relationships.
“I love our manager training series. I thought the role-playing exercises were really thoughtfully done and have been missing from manager training I’ve done in the past. Between the training sessions, ongoing employee programs, and overall partnership, I feel really supported by our People Operations team in my role.”
Anchore’s continuous performance model enables our managers to set a strong foundation of trust and clear communication from day one. Although Dan had already been working with his team before becoming a manager, the Stay Interviews gave Dan even more insight into his new direct reports.
“I got a ton of value out of the Stay Interviews with my direct reports. It’s really useful to know what motivates people, how they like to receive recognition and feedback, and what their long-term career goals are. It made me more aware of their professional interests outside of their day-to-day responsibilities. Because I know the motivators of my direct reports, I can assign special projects based on individual interest, even if it’s not something they do in their core role.”
Reflecting on his opportunity to join the management team, Dan is excited to be part of making Anchore a great place to work and continuing to lead his team based on trust.
“There are things that Anchore gets right that I find to be really unique. We are thoughtful about who we promote into the management team. We have great support and autonomy with helpful programs and tools to facilitate trusting relationships, really caring about the people who report to us and wanting to help them achieve their career goals.”
Interested in becoming a team leader like Dan? View Anchore’s current openings here.
How To Secure Your DevOps Pipeline In a Post-SolarWinds World
DevOps lets developers innovate faster. But some normal DevOps processes can create the opportunity for bad actors or dangerous code to enter your DevOps pipeline and your software applications.
Red Hat & The Department of Defense
A detailed overview that highlights how Anchore and Red Hat partnered to deliver a new approach to building, deploying and operating software for the Department of Defense.
A Custom Approach to Software Security Solutions
We’re hiring a Product Marketing Manager! In this week’s Be Yourself, With Us, SVP of Marketing Kim Weins shares the exciting opportunities within the role.
“Product marketing at a startup like Anchore provides a lot of room to leave your stamp, since our product is evolving quickly based on problems our customers need to solve,” said Kim.
Anchore’s customer base ranges from large enterprises like NVIDIA and eBay to government agencies like the U.S. Space Force and the U.S. Air Force. Being nimble to create custom solutions is critical for our expanding software security products.
“On top of that, we’re in a rapidly growing industry with a solution at the nexus of cloud, containers and security. There’s immense potential for what Anchore can provide for customers and the Product Marketing Manager is going to have a huge impact on how these solutions are communicated to the rest of the industry,” she continued.
Are you passionate about the future of software security and curious about the next innovation that will help secure data and prevent cyberattacks? Then consider joining our marketing team. Visit this link to apply.
7 Must-Dos To Expedite FedRAMP for Containers
Getting FedRAMP authorization for your containerized applications can be daunting. You must comply with new requirements detailed in the recent FedRAMP Vulnerability Scanning Requirements for Containers.
The Fundamentals of Container Security
Begin exploring the strategic nature of containerization, its benefits and how many of them can be extended to security, while examining some of the unique challenges presented by full-speed container-based development.
Carving a Career Path That Fits
Startups come with many opportunities – the ability to partner directly with leadership, to move quickly with decision making, and to work on a variety of projects across the organization. At Anchore, we have intentionally designed our internal programs to provide employees with equitable opportunities for mentorship and career growth.
Through our continuous performance model, we built opportunity into the foundation of our company culture. We do this by ensuring every employee (regardless of background, tenure, or level) feels empowered to raise their hand with an idea, question, or express interest in a new project. Anchorenauts have ample opportunity to expand their skills as they work towards short-term and long-term career goals.
Instead of focusing solely on linear career paths, we give employees the opportunity to pursue other roles or career aspirations.
Andre Neufville (he/him) joined Anchore in November 2019 on the Customer Success team, with a focus on designing solutions that integrate container security best practices into customer DevSecOps workflows. “My role was to interface with customers and prospects, help them understand how to operate Anchore in containerized environments and integrate container scanning in their development workflows, but there was also an added sales aspect that I hadn’t done before.”
Client service wasn’t always the focus for Andre. Prior to Anchore, he worked on systems administration and network security.
“Early on I developed an interest in understanding the components of a secure enterprise network and used that knowledge to design better security around systems and network architectures. At the same time, cloud adoption grew and companies began developing modernization strategies for enterprise infrastructure. I transitioned to the role of a Cloud Security Architect in which I was able to apply my previous experience to advise customers on how to secure their cloud infrastructure and workloads.”
When Anchore's InfoSecurity and IT team was expanding, Andre expressed interest in the role during a continuous performance discussion and was supported by his manager in pursuing the opportunity. The IT Security Engineer role proved to be the perfect opportunity to combine his past experiences and current interests (Andre is also in the process of getting his Master's degree in Cybersecurity Technology).
“In the past, I partnered with and advised customers on architecting solutions without the ownership of seeing it through. The InfoSec role has given me an opportunity to apply the same principles internally, but rather than just advising how it should be implemented, I get to follow through and look for areas of improvement. The whole end-to-end approach really intrigued me and my general affinity towards security that I’ve had in all my roles. I’m grateful for the opportunity to be a part of our internal IT Security initiatives and look forward to learning and growing in the role.”
Supporting employees to pursue alternative career opportunities within our organization is an integral part of Anchore's culture, truly embodying our core values of kindness, openness, and ownership. For more on our open roles, check out our careers page here.
Container Security Best Practices That Scale
Organizations are increasingly developing cloud-native software to serve the needs of customers, partners, and employees. They must ensure the security of these applications that are delivered using container technologies.
CloudBees
Read how CI/CD provider CloudBees integrated Anchore into their software delivery pipelines to deliver container governance and improve transparency with container security and compliance needs.
Pitney Bowes
Explore how the product security team at Pitney Bowes uses Anchore to support container compliance best practices and reporting for actionable insights to streamline delivery and improve security.
How To Secure Containers From Software Supply Chain Attacks
Software applications today include components from many sources, including open source, commercial components, and proprietary code. As software supply chain attacks have increased over the past several years, organizations must embed continuous security and compliance checks in every step of their software development process, from sourcing to CI/CD pipelines to production.
Shifting Security Left A Real World Guide To DevSecOps
Shifting security left can lead to massive productivity gains that extend beyond development teams. As a significant force multiplier, it allows organizations to be more productive and improve collaboration.
5 Open Source Procurement Best Practices
SolarWinds and now Codecov point to the need for enterprises to better manage how they procure and intake open source software (OSS) into their DevOps lifecycle. While OSS is "free," it's not without internal costs as you procure the software and put it to work in your enterprise applications.
Here are five open source procurement best practices to consider:
1. Establish Ownership over OSS for your organization
Just as OSS is becoming foundational to your software development efforts, shouldn’t it also be to your org chart?
We’re lucky at Anchore to have an open source tools team as part of our engineering group. Our CTO and product management team also have deep roots in the OSS world. Having OSS expertise in our development organization means there is ownership over open source software. These teams serve our overall organization plus current and prospective customers.
You have a couple of options for establishing ownership over OSS for your organization:
- Develop strong relationships with the OSS communities behind the software you plan to integrate into your enterprise software. For example, support can take the form of paying your developers to contribute to the code base. You can also choose to be a corporate sponsor of initiatives and community events.
- Task an appropriate developer or development team to “own” the OSS components they’re integrating into the software they’re developing.
- Stand up a centralized open source team if you have the budget and the business need, and they can serve as your internal OSS experts on security and integration.
These are just a few of the options for establishing ownership. Ultimately, your organization needs to commit the management and developer support to ensure you have the proper tools and frameworks in place to procure OSS securely.
2. Do your research and ask the right questions
Due diligence and research are a necessity when procuring OSS for your enterprise projects. Either your developers or your open source team has to take the lead in asking the right questions about the OSS projects you plan to include in your enterprise software. Procuring commercial enterprise software requires a lot of work from legal, contracts, and procurement teams to work through the intricacies of contracts, licensing, support, and other related business matters. There's none of that when you procure OSS. However, that doesn't mean you shouldn't put guard rails in place to protect your enterprise, because sometimes you may not even realize what OSS your developers are deploying to production. Here are some questions that might arise:
- Who’s maintaining the code?
- Will they continue to maintain it as long as we need it?
- Who do we contact if something goes wrong?
It's not about your developers becoming a shadow procurement department. Rather, it's about putting their skills and experience to work a little differently to perform the same due diligence they would when researching enterprise software. The only difference is that your developers need to find out the "what ifs" that arise if an OSS project goes stagnant or doesn't deliver on its potential.
3. Set up a Standard OSS Procurement Process
A key step is to set up and document an OSS procurement process that's replicable across your organization and sets a standard for onboarding. Be sure to tap into the expertise of your IT, DevOps, cybersecurity, risk management, and procurement teams when creating the process.
You should also catalog all OSS that meets the approval process set by your cross-functional team in a database or other central repository. This is a common best practice in large enterprises, but keeping it up to date comes at a cost.
4. Generate an SBOM for your OSS
OSS typically doesn't ship with a software bill of materials (SBOM), a necessary element for conducting vulnerability scans. It's up to you to adjust your DevOps processes and put the tools in place for whoever owns OSS in your development organization. Generating an SBOM for OSS can take place at one or more phases in your DevOps toolchain, as the sketch below shows.
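For instance, Syft, our open source SBOM tool (see the end of this post), can generate an SBOM from a container image or a source directory in a single command. This is a minimal sketch; the image and directory names are only illustrations:
$ syft alpine:latest
$ syft dir:./my-oss-project -o json > sbom.json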
5. Put OSS Maintenance in Place
When you've adopted an OSS component and integrated it into your software, you still need a method in place to maintain that source code. It's a logical role for a dedicated in-house open source team, provided such work is accounted for in their budget, charter, and staffing. If you don't have such a team, the maintenance work falls to a developer, which puts it at risk from shifting priorities, especially if your developers are billable to client projects. The last option is to outsource the OSS maintenance to a third-party firm or contractor, and that can be easier said than done, as the expertise can be hard to find (and sometimes costly!).
Then again, you can always roll the dice and hope that the OSS project stays on top of maintaining its source code and software with the necessary security updates and patches well into the future.
OSS Procurement and your Enterprise
The time is now to review and improve how your enterprise procures and maintains OSS. Doing the job right requires relationship building with the OSS community plus building internal processes and governance over OSS.
Do you want to generate SBOMs on the OSS in your development projects? Download Syft, our open source CLI tool for generating a Software Bill of Materials (SBOM) from container images and filesystems.
How To Secure Containers Across the SDLC With Anchore 3.0
With software supply chain attacks making headlines, it’s important to know how to secure containers at all phases of the software development lifecycle. You need to prevent security problems from reaching production and ensure that security issues are found earlier and fixed at a lower cost.
DevSecOps Expert Guide: Prioritizing Security for DevOps Teams
Prioritizing security as a design principle built into your development flow doesn’t happen overnight. Explore what a DevOps to DevSecOps transformation looks like in this white paper.
5 Reasons AI and ML are the Future of DevSecOps
As the tech industry continues to gather lessons learned from the SolarWinds and now Codecov breaches, it's safe to say that artificial intelligence (AI) and machine learning (ML) are going to play a role in the future of DevSecOps. Enterprises are already experimenting with AI and ML in hopes of reaping future gains in security and developer productivity.
While even the DevSecOps teams with the budget and time to be early adopters are still figuring out how to implement AI and ML at scale, it's time more teams look to the future:
1. Cloud-Native DevSecOps tools and the Data they Generate
As enterprises rely more on cloud-native platforms for their DevSecOps toolchains, they also need to put the tools, frameworks, and processes in place to make the best use of the backend data that their platforms generate. Artificial intelligence and machine learning will enable DevSecOps teams to get their data under management faster while making it actionable for technology and business stakeholders alike.
There's also the prospect that AI and ML will offer DevOps teams a different view of development tasks and enable organizations to create a new set of metrics.
Wins and losses in the cloud-native application market may very well be decided by which development teams and independent software vendors (ISVs) turn their data into actionable intelligence. Actionable intelligence gives stakeholders a view into what their developers and sysadmins are doing right in terms of security and operations.
2. Data-Backed Support for the Automation of Container Scanning
As the automation of container scanning becomes a standard requirement for commercial and public sector enterprises, so will the requirements to capture and analyze the security data and the software bill of materials (SBOM) that come with containers advancing through your toolchains.
The DevSecOps teams of the future are going to require next-generation tools to capture and analyze the data that comes from the automation of vulnerability scanning of containers in their DevSecOps toolchains. AI and ML support for container vulnerability scanning offer a delicate balance of autonomy and speed to help capture and communicate incident and trends data for analysis and action by developers and security teams.
3. Support for Advanced DevSecOps Automation
It's a safe assumption that automation is only going to mature and advance. It's quite possible that AI and ML will take on the repetitive legwork behind operations tasks such as software management and other rote duties that fill up the schedules of present-day operations teams.
AI and ML won't completely replace operations teams, but these technologies will certainly shape the future of their duties. While there's always the fear that automation will replace human workers, the reality is closer to ops teams becoming managers of automation.
4. DevOps to DevSecOps Transformation
The SolarWinds and Codecov breaches are the perfect prompts for enterprises to make the transformation from DevOps to DevSecOps to protect their toolchains and software supply chains. Cloud migrations by commercial and government enterprises are also going to require better analytics over the development and operational data their teams and projects currently produce for on-premises applications.
5. DevSecOps to NoOps Transformation
Beyond DevSecOps lies NoOps, a state where an enterprise automates so much that it no longer needs an operations team. While the NoOps trend has been around for the past ten years, it still ranks as a forward-looking trend for the average enterprise.
However, there are lessons you can learn now from NoOps in how it conceptualizes the future of operations automation that you can start applying to your DevOps and DevSecOps pipelines, even today.
Final thoughts
For the mature DevSecOps shop of the future to remain competitive, it must make the best use of the data from the backend systems in its toolchain, its SBOMs, and its container vulnerability scanning. Artificial intelligence and machine learning are becoming the ideal technologies for enterprises to reach that potential.
Lark
Healthtech company Lark utilizes Anchore reporting to connect its security team to the application development lifecycle without creating additional manual work or slowing down development.
Software Supply Chain Security Report 2021
This report contains dozens of charts highlighting the latest enterprise trends in securing the software supply chain with a special focus on cloud-native applications. In this report, you'll gain insights to help you reduce the risk of software supply chain attacks.
Inside the Anchore Technology Suite: Open Source to Enterprise
Supporting container scanning in a compliance environment takes more than a standard DevSecOps approach. Choose the right combination of tools for automated security and compliance across toolchains.
Container Security For U.S. Government Information Systems
Containers introduce unique security challenges for enterprises and federal agencies alike. Get simple and manageable DevSecOps best practices for federal organizations that deploy containers at scale.
Hypothekarbank Lenzburg
Learn how Switzerland-based mortgage bank Hypothekarbank Lenzburg (HBL) selected Anchore to deliver security solutions to support seamless toolchain integration and painless DevSecOps workflows.
At Anchore we’re passionate about our products and our industry
At Anchore we’re passionate about our products and our industry, but we’re equally committed to building a company with amazing people, incredible career opportunities, and an ability to make a difference. We’re thrilled to start sharing more about who we are and what matters to us through the launch of our culture-first series.
On Fridays, you can expect to learn more about who Anchore is. We’ll give you a closer look at:
The Humans of Anchore: The people (including pets and little ones!) who help shape our company.
Be Yourself. With Us: A highlight reel of new jobs and a glimpse into the people you could be working with at Anchore.
Mission: Impact: This is where we show you our programs and initiatives and how they enable us to live out our core values every day.
So, come learn more about why we’re excited to work here. And maybe a little about how you can make that a reality for you, too, someday. Come be yourself. With us.
Curious what it’s like in a startup?
Curious what it's like in a startup? As we continue our culture-first series, today we're diving into the jobs and people at Anchore. All startups are different; at Anchore, we focus on ensuring all employees, from individual contributors to the exec team, are given the opportunity to challenge themselves and explore new skill sets.
We talked to Support Engineer Tyran H. in the UK about his time on the team.
“Anchore is my first encounter working at an actual startup and is an amazing place to experience the real deal. Plus, I also have the opportunity to learn and develop technologies at the forefront of the tech world.”
Not only is Tyran part of our growing customer success team, but he was also Anchore’s first UK-based employee.
“As the first overseas hire, being welcomed as part of the family to help Anchore grow from the ground up has made settling in easy. It feels more like working on a passion-project with a group of friends than ACTUAL work, which is a massive bonus!”
Want to join Tyran and our team? Check out our latest job listings here.
Ocrolus
Fintech company Ocrolus successfully implemented Anchore to meet and exceed customer container security requirements, as well as to satisfy customer expectations around container security scanners.
Achieving Continuous ATO With Anchore
Given the recent attacks on the supply chain, security is the most essential aspect of software development, particularly when it comes to government and critical infrastructure. Anchore's DoD-approved container scanning capabilities can help you speed up compliance and vulnerability scanning, expediting the ATO process and helping you go live with applications faster.
Anchore’s Approach to DevSecOps
Toolkits and orchestrators such as Docker and Kubernetes have become increasingly popular with companies wishing to containerize their applications and microservices. However, they also come with a responsibility for making sure these containers are secure. Whether your company builds web apps or deploys mission-critical software on jets, you should be thinking about ways to minimize your attack surface.
Aside from vandalizing and destroying company property, hackers can inflict massive damage simply by stealing data. Equifax was fined over $500 million after customer data was stolen in 2017. British Airways and Uber have also been victims of data breaches and were fined hundreds of millions of dollars in recent years. With an average of 75 records exploited every second, preventing bad actors from gaining access to your containers, pipelines, registries, databases, clusters, and services is extremely important. Compliance isn't just busywork; it keeps people (and their data) safe.
In this post, we’d like to discuss the unique approach Anchore takes to solving this problem. But before we get into that, let’s take a moment to define the buzzword that is probably the reason you’re reading this post: DevSecOps.
In a nutshell, DevSecOps is a modernized agile methodology that combines the efforts of development, operations, and security teams. By working together to integrate security into every step of the development process, teams can deliver applications safely and at massive scale without being burdened by heavyweight audits. DevSecOps helps teams catch issues early, before they cause damage and while they are still easy to fix. By making security a shared responsibility and shifting it left (towards developers and DevOps engineers), your company can deal with vulnerabilities before they enter production, saving time and drastically reducing costs.
In the following sections, we’ll cover a few unique reasons why organizations such as eBay, Cisco and the US Department of Defense have made Anchore a requirement in their software development lifecycle to help implement security with DevSecOps.
Lightweight Yet Powerful
At Anchore, we believe that everyone should know what's inside the container images they build and consume. That is why the core of our solution is an open source tool, Anchore Engine, which performs deep image inspection and vulnerability scanning across all layers. When users scan an image, Anchore Engine generates a software bill of materials (SBOM) that consists of files, operating system packages, and software artifacts (including Node.js NPM modules, Ruby gems, Java archives, and Python packages). Anchore Engine also allows users to check for CVEs, secrets, exposed ports, and more, but more on that later!
Anchore Engine was designed to be flexible, so you can implement it anywhere:
- If you're a developer and want to do a one-time scan of a container image for vulnerabilities before pushing any code to version control, you can use our CLI or API (see the sketch after this list)
- If you're a DevOps engineer and wish to scan container images before pushing to or after pulling from a registry, you can easily integrate with your preferred CI/CD tool (CircleCI, Jenkins, GitHub Actions, GitLab) or perform inline scanning and analysis
- If you're a security engineer responsible for locking down clusters, you can use our Kubernetes Admission Controller to prevent any pods from running vulnerable containers
Anchore Engine can be configured on any cloud platform or on-premises, as well as with any Docker V2 compatible registry (public or private). Regardless of where you’re using Anchore Engine or how you’re using it, it’s important to know the exact contents of your containers so appropriate security measures can be taken.
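To make the developer workflow above concrete, here is a minimal sketch of a one-time scan with the anchore-cli; the image name is only an example:
$ anchore-cli image add docker.io/library/debian:latest
$ anchore-cli image wait docker.io/library/debian:latest
$ anchore-cli image vuln docker.io/library/debian:latest all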
Strict But Adaptable
Anchore Engine enables users to create custom security rules that can be adapted to align with company policy. For example, users can create and define checks for vulnerabilities, package whitelists and blacklists, configuration file contents, leaked credentials, image manifest changes, exposed ports and more. These rules allow you to enforce strict security gates like Dockerfile gates, license gates and metadata gates (check out our docs for more info!) before running any risky containers.
You may have heard of Infrastructure-as-Code, but have you heard of Security-as-Code or Policy-as-Code? Because Anchore policies are standard text files, they can be managed like source code and versioned over time as the software supply chain evolves and best practices are developed.
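As a rough sketch of that Policy-as-Code workflow, a versioned policy bundle can be loaded and used for evaluation from the CLI; the bundle file name and policy ID below are hypothetical:
$ anchore-cli policy add ./corp-policy-bundle.json
$ anchore-cli policy activate corp-default-policy
$ anchore-cli evaluate check docker.io/library/debian:latest --detail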
In addition to Anchore Engine, we offer Anchore Enterprise, which includes many enhanced features such as an easy-to-use interface, an air-gapped feed service, and notifications with Slack, Jira, GitHub or Microsoft Teams. There are many more features and capabilities of both Anchore Engine and Anchore Enterprise, but that is a topic for a later post.
Compliant And Growing
Just days away from becoming a CNCF Kubernetes Certified Service Provider, Anchore has been working hard to help companies fulfill their security requirements. Oftentimes, we receive calls from security teams who were asked to make their software adhere to certain compliance standards. Anchore is proud to help organizations achieve NIST SP 800-190 compliance, CIS Benchmarks for Docker and Kubernetes, and best practices for building secure Docker Images.
If you work with government agencies and are interested in another level of compliance, please check out our newest product, Anchore Federal! It includes a bundle of policies created in collaboration with the United States Department of Defense that can provide out-of-the-box compliance with the required standards.
In this post, we’ve listed a few key reasons why organizations choose to use Anchore. You may have noticed we also interchangeably used the words “you” and “your company”. That’s because – in today’s world of containers – you, as the reader, have the responsibility of talking with your company about what it’s doing to prevent threats, why it should be implementing DevSecOps processes, and how Anchore can help through container security. We are here to help.
Testing Anchore with Ansible, K3s and Vagrant
When I began here at Anchore, I realized I would need a quick, offline way to test installation that better approximates the most common way Anchore is deployed: on Kubernetes.
We have guides for standing up Anchore with Docker Compose and for launching into Amazon EKS, but we didn't have a quick way to test our Helm chart and other aspects of our application locally on a laptop using K3s instead of minikube.
I also wanted to stand up a quick approximation not just on my local laptop, but against various other projects I have. So I created a K3s project base that automatically deploys K3s in Vagrant and VirtualBox locally on my laptop. And if I need to stand up a Kubernetes cluster on hosts external to my laptop, I can run the playbook against those hosts to stand up a K3s cluster and deploy the Anchore Engine Helm chart.
To get started, you can check out my project page on GitHub. It's not a feature-complete project yet, but pull requests are always welcome.
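For context, the Helm chart deployment that the playbook automates generally looks something like the commands below when done by hand. This assumes Helm 3; the release name is arbitrary, and you should check the chart's documentation for current repository details:
$ helm repo add anchore https://charts.anchore.io
$ helm repo update
$ helm install my-anchore anchore/anchore-engine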
Scenario 1: Standing this Up on your Local Laptop
Step 1: Install dependencies
To use this, just make sure you've met the requirements for your laptop: namely, that you have Ansible, Vagrant, and VirtualBox installed. Clone the repo, change directories into it, and issue the command "vagrant up". There are more details in the README file to help get you started.
First, you’ll need to be running a Linux or macOS laptop and have the following things installed:
- VirtualBox v5.2 or later
- Vagrant v2.0 or later
- The Vagrant VirtualBox Guest Additions Plugin
- Ansible v2.7 or later
- A copy of the project repository (git clone)
First, install VirtualBox per the link above. Once that is in place, install Ansible and Vagrant per their links as well. To install the Vagrant VirtualBox Guest Additions Plugin, issue the following command:
$ vagrant plugin install vagrant-vbguest
We are now ready to clone the repository and get this running. The following three commands will pull the repository and stand up the K3s cluster:
$ git clone https://github.com/dfederlein/k3s_project_base.git
$ cd k3s_project_base
$ vagrant up
Scenario 2: Run this Playbook Against Hosts External to your Laptop
In this scenario, you have Ansible installed on a control host and will be building a K3s cluster on hosts you already control. I will assume this scenario is used by people already familiar with Ansible, so I'll offer some shortcut notes.
First, clone the repository with the following command:
$ git clone https://github.com/dfederlein/k3s_project_base.git
Next, we'll modify the hosts.ini file to reflect the hosts you want to create this cluster on. Once you've added those, the following command should get you what you need:
$ ansible-playbook -i hosts.ini site.yml -u (user)
Add the become password and connection password or private key flags to that command as needed. You can find more information on how to do that in the Ansible documentation.
At the end of the process detailed above, you should have a working K3s cluster running on your laptop (or on the external hosts you pointed the playbook at) and the Anchore Helm chart deployed to that cluster. Please note that the Vagrant/local deploy scenario may need some patience after being created, as it operates with limited RAM and resources.
Scanning Images on Amazon Elastic Container Registry (ECR)
The Anchore Engine supports analyzing images from any Docker V2 compatible registry; however, when accessing an Amazon ECR registry, extra steps must be taken to handle Amazon Web Services authentication.
The Anchore Engine will attempt to download images from any registry without requiring further configuration. For example, running the following command:
$ anchore-cli image add prod.example.com/myapp/foo:latest
This would instruct the Anchore Engine to download the myapp/foo:latest image from the prod.example.com registry. Unless otherwise configured, the Anchore Engine will try to pull the image from the registry without authentication.
In the following example, we fail to add an image for analysis due to an error.
$ anchore-cli image add prod.example.com/myapp/bar:latest
Error: image cannot be found/fetched from registry
HTTP Code: 404
In many cases it is not possible to distinguish between an image that does not exist and an image that you are not authorized to access since many registries do not wish to disclose the existence of private resources to unauthenticated users.
The Anchore Engine can store credentials used to access your private registries.
Running the following command lists the defined registries.
$ anchore-cli registry list
Registry                                       User
docker.io                                      anchore
quay.io                                        anchore
registry.example.com                           johndoe
123456789012.dkr.ecr.us-east-1.amazonaws.com   ABC
Here we can see that four registries have been defined. When pulling an image, the Anchore Engine checks whether any credentials have been defined for the registry. If none are present, it will attempt to pull the image without authentication; if credentials are defined, all metadata access and image pulls from that registry will use the specified username and password.
Registries can be added using the following syntax:
$ anchore-cli registry add REGISTRY USERNAME PASSWORD
The REGISTRY parameter should include the fully qualified hostname and port number of the registry, for example: registry.anchore.com:5000.
Amazon AWS typically uses keys instead of traditional usernames and passwords. These keys consist of an access key ID and a secret access key. While it is possible to use the aws ecr get-login command to create an access token, the token expires after 12 hours, so it is not appropriate for use with the Anchore Engine; a user would need to update their registry credentials regularly. Instead, when adding an Amazon ECR registry to the Anchore Engine, you should pass the aws_access_key_id and aws_secret_access_key.
For example:
$ anchore-cli registry add \
    1234567890.dkr.ecr.us-east-1.amazonaws.com \
    MY_AWS_ACCESS_KEY_ID \
    MY_AWS_SECRET_ACCESS_KEY \
    --registry-type=awsecr
The registry-type parameter instructs the Anchore Engine to handle these credentials as AWS credentials rather than a traditional username and password. Currently, the Anchore Engine supports two types of registry authentication: standard username and password for most Docker V2 registries, and Amazon ECR. In this example we specified the registry type on the command line; if this parameter is omitted, the CLI will attempt to guess the registry type from the URL, which follows a standard format.
The Anchore Engine will use the AWS access key and secret access key to generate authentication tokens to access the Amazon ECR registry. The Anchore Engine manages regeneration of these tokens, which typically expire after 12 hours.
In addition to supporting AWS access key credentials, Anchore also supports using IAM roles to authenticate with Amazon ECR if the Anchore Engine is run on an EC2 instance.
In this case, you can configure the Anchore Engine to inherit the IAM role from the EC2 instance hosting the engine.
When launching the EC2 instance that will run the Anchore Engine you need to specify a role that includes the AmazonEC2ContainerRegistryReadOnly policy.
While this is best performed using a CloudFormation template, you can also configure it manually from the launch instance wizard:
- Select Create new IAM role.
- Under the type of trusted entity, select EC2.
- Ensure that the AmazonEC2ContainerRegistryReadOnly policy is selected.
- Give the role a name and add it to the instance you are launching.
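If you'd rather script this than click through the wizard, a rough AWS CLI equivalent is sketched below. The role name is illustrative, and ec2-trust-policy.json is a hypothetical file containing a trust policy that allows ec2.amazonaws.com to assume the role:
$ aws iam create-role --role-name ECR-ReadOnly --assume-role-policy-document file://ec2-trust-policy.json
$ aws iam attach-role-policy --role-name ECR-ReadOnly --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
$ aws iam create-instance-profile --instance-profile-name ECR-ReadOnly
$ aws iam add-role-to-instance-profile --instance-profile-name ECR-ReadOnly --role-name ECR-ReadOnly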
On the running EC2 instance you can manually verify that the instance has inherited the correct role by running the following command:
# curl http://169.254.169.254/latest/meta-data/iam/info
{
  "Code" : "Success",
  "LastUpdated" : "2018-01-12T18:45:12Z",
  "InstanceProfileArn" : "arn:aws:iam::123456789012:instance-profile/ECR-ReadOnly",
  "InstanceProfileId" : "ABCDEFGHIJKLMNOP"
}
By default, support for inheriting the IAM role is disabled. It can be enabled by adding the following entry to the top of the Anchore Engine config.yaml file:
allow_awsecr_iam_auto: True
When IAM support is enabled, instead of passing the access key and secret access key, use "awsauto" for both the username and password. This instructs the Anchore Engine to inherit the role from the underlying EC2 instance.
$ anchore-cli registry add \
    1234567890.dkr.ecr.us-east-1.amazonaws.com \
    awsauto \
    awsauto \
    --registry-type=awsecr
You can learn more about the Anchore Engine and how you can scan your container images whether they are hosted on cloud-based registries such as Docker Hub and Amazon ECR or on private Docker V2 compatible registries hosted on-premises.