From War Room to Workflow: How Anchore Transforms CVE Incident Response

When CVE-2025-1974 (#IngressNightmare) was disclosed, incident response teams had hours—at most—before exploits appeared in the wild. Imagine two companies responding: 

  • Company A rallies a war room with 13 different teams frantically running kubectl commands across the org’s 30+ clusters while debugging inconsistent permission issues. 
  • Company B’s security analyst runs a single query against their centralized SBOM inventory and their policy-as-code engine automatically dispatches alerts and remediation recommendations to affected teams. 

Which camp would you rather be in when the next critical CVE drops? Most of us prefer the team that built visibility for their software supply chain security before the crisis hit.

CVE-2025-1974 was particularly acute because of ingress-nginx’s popularity (it is deployed by 40%+ of Kubernetes administrators) and the nature of the vulnerability: an unauthenticated remote code execution (RCE) flaw in the controller’s admission webhook, rated CVSS 9.8—scary! We won’t go deep on the details; there are plenty of good existing resources already.

Instead we’ll focus on: 

  • The gap between naive incident response guidance and real-world challenges
  • The negative impacts common to incident response for enterprise-scale Kubernetes deployments
  • How Anchore Enterprise alleviates these consequences
  • The benefits of an integrated incident response strategy
  • How to utilize Anchore Enterprise to respond in real-time to a security incident

Learn how SBOMs enable organizations to react to zero-day disclosures in minutes rather than days or weeks.

Rapid Incident Response to Zero-Day Vulnerabilities with SBOMs | Webinar

An Oversimplified Response to a Complex Threat

When the Ingress Nightmare vulnerability was published, security blogs and advisories quickly filled with remediation advice. The standard recommendation was clear and seemed straightforward: run a simple kubectl command to determine if your organization was impacted:

kubectl get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx

If vulnerable versions were found, upgrade immediately to the patched versions.

This advice isn’t technically wrong. The command will indeed identify instances of the vulnerable ingress-nginx controller. But it makes a set of assumptions that bear little resemblance to Kubernetes deployments in modern enterprise organizations:

  • That you run a single Kubernetes cluster
  • That you have a single Kubernetes administrator
  • That this admin has global privileges across the entire cluster

For the vast majority of enterprises today, none of these assumptions are true.
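To see why, it helps to spell out what the “one command” advice expands to once multiple clusters are involved. The sketch below prints, rather than runs, the per-cluster check; the cluster names are invented, and in practice each context typically belongs to a different team with different credentials, so no single person can actually execute this loop:

```shell
# Hypothetical contexts standing in for an enterprise's real clusters;
# in practice no single kubeconfig holds all of them.
CONTEXTS="prod-us-east prod-eu-west staging-shared dev-sandbox"

for ctx in $CONTEXTS; do
  # Print (rather than run) the per-cluster check from the advisory guidance
  echo "kubectl --context ${ctx} get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx"
done
```

Even this idealized version assumes every cluster is reachable from one machine with one set of credentials, which is precisely the assumption that breaks down at enterprise scale.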

The Reality of Enterprise-Scale Kubernetes: Complex & Manual

The reality of Kubernetes deployments at large organizations is far more complex than most security advisories acknowledge:

1. Inherited Complexity

Kubernetes administration structures almost always mirror organizational complexity. A typical enterprise doesn’t have a single cluster managed by a single team—they have dozens of clusters spread across multiple business units, each with their own platform teams, their own access controls, and often their own security policies.

This organizational structure, while necessary for business operations, creates significant friction for vital incident response activities: vulnerability detection and remediation. When a critical CVE like Ingress Nightmare drops, there’s no single person who can run that kubectl command across all environments.

2. Vulnerability Management Remains Manual

While organizations have embraced Kubernetes to automate their software delivery pipelines and increase velocity, the DevOps-ification of vulnerability and patch management has lagged behind. Instead, these processes remain manual and human-driven.

During the Log4j incident in 2021, we observed engineers across industries frantically connecting to servers via SSH and manually dissecting container images, trying to determine if they were vulnerable. Three years later, for many organizations, the process hasn’t meaningfully improved—they’ve just moved the complexity to Kubernetes.

The idea that teams can manually track and patch vulnerabilities across a sprawling Kubernetes estate is not just optimistic—it’s impossible at enterprise scale.

The Cascading Negative Impacts: Panic, Manual Coordination & Crisis Response

When critical vulnerabilities emerge, organizations without supply chain visibility face:

  • Organizational Panic: The CISO demands answers within the hour while security teams scramble through endless logs, completely blind to which systems contain the vulnerable components.
  • Complex Manual Coordination: Security leads discover they need to scan hundreds of clusters but have access to barely a fifth of them, as Slack channels erupt with conflicting information and desperate access requests.
  • Resource-Draining Incident Response: By day three of the unplanned war room, engineers with bloodshot eyes and unchanged clothes stare at monitors, missing family weekends while piecing together an ever-growing list of affected systems.
  • Delayed Remediation: Six weeks after discovering the vulnerability in a critical payment processor, the patch remains undeployed as IT bureaucracy delays the maintenance window while exposed customer data hangs in the balance.

The Solution: Centralized SBOM Inventory + Automated Policy Enforcement

Organizations with mature software supply chain security leverage Anchore Enterprise to address these challenges through an integrated SBOM inventory and policy-as-code approach:

1. Anchore SBOM: Comprehensive Component Visibility

Anchore Enterprise transforms vulnerability response through its industry-leading SBOM repository. When a critical vulnerability like Ingress Nightmare emerges, security teams use Anchore’s intuitive dashboard to instantly answer the existential question: “Are we impacted?”

This approach works because:

  • Anchore SBOM provides security incident response teams with role-based access to a centralized inventory that catalogs every component across all Kubernetes clusters, regardless of administrative boundaries
  • AnchoreCTL, a modern software composition analysis (SCA) scanner, identifies components that standard package manager checks miss, including binaries, language-specific packages, and container base images
  • Anchore SBOM’s repository and its purpose-built query engine enable vulnerability correlation in seconds, turning days of manual work into a simple search operation
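In principle, the “days of manual work into a simple search” claim is easy to picture: once SBOMs for every image live in one place, impact analysis becomes a query. The self-contained sketch below fakes a two-file inventory (the paths and record layout are invented for illustration, not Anchore’s actual storage format) and searches it:

```shell
# Build a toy SBOM inventory: one JSON record per image (invented layout).
mkdir -p /tmp/sbom-inventory
cat > /tmp/sbom-inventory/payments-api.json <<'EOF'
{"image": "registry.example.com/payments-api:1.4",
 "components": [{"name": "ingress-nginx", "version": "1.11.4"}]}
EOF
cat > /tmp/sbom-inventory/web-frontend.json <<'EOF'
{"image": "registry.example.com/web-frontend:2.0",
 "components": [{"name": "nginx", "version": "1.25.4"}]}
EOF

# "Are we impacted?" becomes a search instead of a fleet-wide kubectl hunt.
# Prints /tmp/sbom-inventory/payments-api.json
grep -l '"name": "ingress-nginx"' /tmp/sbom-inventory/*.json
```

The point is not the tooling (a real inventory sits behind a query engine, not grep); it is that the question is answered against data you already collected, not against live clusters you may not be able to reach.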

2. Anchore Enforce: Automated Policy Enforcement

Beyond just identifying vulnerable components, Anchore Enforce’s policy engine integrates directly into an existing CI/CD pipeline (i.e., policy-as-code security gates). This automatically answers the follow-up question: “Where and how do we remediate?”

Anchore Enforce empowers teams to:

  • Alert code owners to the specific location of vulnerable components
  • Provide remediation recommendations directly in developer workflows (Jira, Slack, GitLab, GitHub, etc.)
  • Eliminate manual coordination between security and development teams with the policy engine and DevTools-native integrations

Quantifiable Benefits: No Panic, Reduced Effort & Reduced Risk

Organizations that implement this approach see dramatic improvements across multiple dimensions:

  1. Eliminated Panic: The fear and uncertainty that typically accompany vulnerability disclosures disappear when you can answer “Does this impact us?” in minutes rather than days.
    • Immediate clarity on the impact of the disclosure is at your fingertips with the Anchore SBOM inventory and Kubernetes Runtime Dashboard
  2. Reduced Detection Effort: The labor-intensive coordination between security, platform, and application teams becomes unnecessary.
    • Security incident response teams already have access to all the data they need through the centralized Anchore SBOM inventory generated as part of normal CI/CD pipeline use.
  3. Minimized Exploitation Risk: The window of vulnerability shrinks dramatically as developers are able to address vulnerabilities before they can be exploited.
    • Developers receive automated alerts and remediation recommendations from Anchore Enforce’s policy engine that integrate natively with existing development workflows.

How to Mitigate CVE-2025-1974 with Anchore Enterprise

Let’s walk through how to detect and mitigate CVE-2025-1974 with Anchore Enterprise across a Kubernetes cluster. The Kubernetes Runtime Dashboard serves as the user interface for your SBOM database. We’ll demonstrate how to:

  • Identify container images with ingress-nginx integrated
  • Locate images where CVE-2025-1974 has been detected
  • Generate reports of all vulnerable container images
  • Generate reports of all vulnerable running container instances in your Kubernetes cluster

Step 1: Identify location(s) of impacted assets

The Anchore Enterprise Dashboard can be filtered to show all clusters with the ingress-nginx controller deployed. Thanks to the existing SBOM inventory of cluster assets, this becomes a straightforward task, allowing you to quickly pinpoint where vulnerable components might exist.

Step 2: Drill into container image analysis for additional details

By examining vulnerability and policy compliance analysis at the container image level, you gain increased visibility into the potential cluster impact. This detailed view helps prioritize remediation efforts based on risk levels.

Step 3: Drill down into container image vulnerability report

When you drill down into the CVE-2025-1974 vulnerability, you can view additional details that help understand its nature and impact. Note the vulnerability’s unique identifier, which will be needed for subsequent steps. From here, you can click the ‘Report’ button to generate a comprehensive vulnerability report for CVE-2025-1974.

Step 4: Configure a vulnerability report for CVE-2025-1974

To generate a report on all container images tagged with the CVE-2025-1974 unique vulnerability ID:

  • Select the Vulnerability Id filter
  • Paste the CVE-2025-1974 vulnerability ID into the filter field
  • Click ‘Preview Results’ to see affected images

Step 5: Generate container image vulnerability report

The vulnerability report identifies all container images tagged with the unique vulnerability ID. To remediate the vulnerability effectively, base images that running instances are built from need to be updated to ensure the fix propagates across all cluster services.

Step 6: Generate Kubernetes namespace vulnerability report

While there may be only two base images containing the vulnerability, these images might be reused across multiple products and services in the Kubernetes cluster. A report based solely on base images can obscure the true scale of vulnerable assets in a cluster. A namespace-based report provides a more accurate picture of your exposure.

Wrap-Up: Building Resilience Before the Crisis

The next Ingress Nightmare-level vulnerability isn’t a question of if, but when. Organizations that invest in software supply chain security before a crisis occurs will respond with targeted remediation rather than scrambling in war rooms.

Anchore’s SBOM-powered SCA provides the comprehensive visibility and automated policy enforcement needed to transform vulnerability response from a chaotic emergency into a routine, manageable process. By building software supply chain security into your DevSecOps pipeline today, you ensure you’ll have the visibility you need when it matters most.

Ready to see how Anchore Enterprise can strengthen your Kubernetes security posture? Request a demo today to learn how our solutions can help protect your critical infrastructure from vulnerabilities like CVE-2025-1974.


Learn how Spectro Cloud secured their Kubernetes-based software supply chain and the pivotal role SBOMs played.

The NVD Enrichment Crisis: One Year Later—How Anchore is Filling the Vulnerability Data Gap

About one year ago, Anchore’s own Josh Bressers broke the story that NVD (National Vulnerability Database) was not keeping up with its vulnerability enrichment. This week, we sat down with Josh to see how things are going.

> Josh, can you tell our readers what you mean when you say NVD stopped enriching data?

Sure! When people or organizations disclose a new security vulnerability, it’s often just a CVE (Common Vulnerabilities and Exposures) number (like CVE-2024-1234) and a description. 

Historically, NVD would take this data, and NVD analysts would add two key pieces of information: the CPEs (Common Platform Enumerations), which are meant to identify the affected software, and the CVSS (Common Vulnerability Scoring System) score, which is meant to give users of the data a sense of how serious the vulnerability is and how it can be exploited. 
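As a rough sketch (the field names below follow the general shape of NVD’s records but are simplified and should be treated as illustrative), enrichment is the difference between a bare CVE record and one carrying CPEs and a CVSS score:

```json
{
  "id": "CVE-2024-1234",
  "description": "Buffer overflow in example-product allows remote code execution.",
  "cvss_v31": {
    "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
    "baseScore": 9.8
  },
  "cpes": [
    "cpe:2.3:a:example_vendor:example_product:*:*:*:*:*:*:*:*"
  ]
}
```

Without the `cvss_v31` and `cpes` fields—the parts enrichment adds—a scanner has no machine-readable way to match this CVE to installed software or to rank its urgency.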

For many years, NVD kept up pretty well. Then, in March 2024, they stopped.

> That sounds bad. Were they able to catch up?

Not really. 

One of the problems they face is that the number of CVEs in existence is growing exponentially. They were having trouble keeping up in 2024, but 2025 is producing CVEs even faster than 2024 did, plus they have the backlog of CVEs that weren’t enriched during 2024.

It seems unlikely that they can catch up at this point.

Graph showing how few CVE IDs are being enriched with matching data since April 2024
Graph showing the number of total CVEs (green) and the number of enriched CVEs (red). “The line slopes say it all”—NVD is behind and the number of unreviewed CVEs is growing.

> So what’s the upshot here? Why should we care that NVD isn’t able to enrich vulnerabilities?

Well, there are basically two problems with NVD not enriching vulnerabilities. 

First, if they don’t have CPEs on them, there’s no machine-readable way to know what software they affect. In other words, part of the work NVD was doing is writing down what software (or hardware) is affected in a machine-readable way, enabling vulnerability scanners and other software to tell which components are affected. 

The loss of this is obviously bad. It means that there is a big pile of security flaws that are public—meaning that threat actors know about them—but security teams will have a harder time detecting them. Un-enriched CVEs are not labeled with CPEs, so programmatic analysis is off the table and teams will have to fall back to manual review.

Second, enrichment of CVEs is supposed to add a CVSS score—essentially a severity level—to CVEs. CVSS isn’t perfect, but it does allow organizations to say things like, “this vulnerability is very easy to exploit, so we need to get it fixed before this other CVE which is very hard to exploit.” Without CVSS or something like it, these tradeoffs are much harder for organizations to make.

> And this has been going on for more than a year? That sounds bad. What is Anchore doing to keep their customers safe?

The first thing we needed to do was make a place where we can take up some of the slack that NVD can’t. To do this, we created a public database of our own CVE enrichment. This means that, when major CVEs are disclosed, we can enrich them ahead of NVD, so that our scanning tools (both Grype and Anchore Secure) are able to detect vulnerable packages—even if NVD never has the resources to look into that particular CVE.

Additionally, because NVD severity scores are becoming less reliable and less available, we’ve built a prioritization algorithm into Anchore Secure that allows customers to keep doing the kind of triaging they used to rely on NVD CVSS for.

> Is the vulnerability enrichment data publicly available?

Yes, the data is publicly available. 

Also, the process for changing it is out in the open. One of the more frustrating things about working with NVD enrichment was that sometimes they would publish an enrichment with really bad data and then all you could do was email them—sometimes they would fix it right away and sometimes they would never get to it.

With Anchore’s open vulnerability data, anyone in the community can review and comment on these enrichments.

> So what are your big takeaways from the past year?

I think the biggest takeaway is that we can still do vulnerability matching. 

We’re pulling together our own public vulnerability database, plus data feeds from various Linux distributions and of course GitHub Security Advisories to give our customers the most accurate vulnerability scan we can. In many ways, reducing our reliance on NVD CPEs has improved our matching (see this post, for example).

The other big takeaway is that, because so much of our data and tooling are open source, the community can benefit from and help with our efforts to provide the most accurate security tools in the world.

> What can community members do to help?

Well, first off, if you’re really interested in vulnerability data or have expertise with the security aspects of specific open source projects/operating systems, head on over to our vulnerability enhancement repo or start contributing to the tools that go into our matching like Syft, Grype, and Vunnel.

But the other thing to do, and I think more people can do this, is just use our open source tools!

File issues when you find things that aren’t perfect. Ask questions on our forum.

And of course, when you get to the point that you have dozens of folders full of Syft SBOMs and tons of little scripts running Grype everywhere—call us—and we can let Anchore Enterprise take care of that for you.


Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Automate Your Compliance: How Anchore Enforce Secures the Software Supply Chain

In an era where a single line of compromised code can bring entire enterprise systems to their knees, software supply chain security has transformed from an afterthought to a mission-critical priority. The urgency is undeniable: while software supply chain attacks grew by a staggering 540% year-over-year from 2019 to 2022, organizations have rapidly responded. 

Organizations have taken notice—the priority given to software supply chain security saw a remarkable 200% increase in 2024 alone, signaling a collective awakening to the existential threat of supply chain attacks. Cybercriminals are no longer just targeting individual applications—they’re weaponizing the complex, interconnected software supply chains that power global businesses.

To combat this rising threat, organizations are deploying platforms to automate BOTH detecting vulnerabilities AND enforcing supply chain security policies. This one-two combo reduces the risk that a breach at a 3rd-party supplier cascades into their software environment.

Anchore Enforce, a module of Anchore Enterprise, enables organizations to automate both security and compliance policy checks throughout the development lifecycle. It allows teams to shift compliance left and easily generate reporting evidence for auditors by defining detailed security standards and internal best practices ‘as-code’.

In this blog post, we’ll demonstrate how to get started with using Anchore Enforce’s policy engine to automate both discovering non-compliant software and preventing it from reaching production.


Learn about software supply chain security in the real-world with former Google Distinguished Engineer, Kelsey Hightower.

Software Security in the Real World with Kelsey Hightower

A Brief Primer on Policy-as-Code & Policy Packs

Policy-as-code (PaC) translates organizational policies—whether security requirements, licensing restrictions, or compliance mandates—from human-readable documentation into machine-executable code that integrates with your existing DevSecOps platform and tooling. This typically comes in the form of a policy pack.

A policy pack is a set of pre-defined security and compliance rules the policy engine executes to evaluate source code, container images or binaries.

To make policy integration as easy as possible, Anchore Enforce comes with out-of-the-box policy packs for a number of popular compliance frameworks (e.g., FedRAMP or STIG compliance).

A policy consists of three key components: 

  • Triggers:  The code that checks whether a specific compliance control is present and configured correctly
  • Gates:  A group of triggers that act as a checklist of security controls to check for
  • Actions: A stop, warn or go directive explaining the policy-compliant action to take
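Concretely, a rule in a policy pairs a gate, a trigger, and an action. The fragment below is an illustrative sketch in the spirit of Anchore’s policy bundle JSON; treat the field names and values as from-memory approximations rather than a copy-paste-ready bundle:

```json
{
  "name": "block-high-severity-vulns",
  "rules": [
    {
      "gate": "vulnerabilities",
      "trigger": "package",
      "action": "stop",
      "params": [
        { "name": "package_type", "value": "all" },
        { "name": "severity_comparison", "value": ">=" },
        { "name": "severity", "value": "high" }
      ]
    }
  ]
}
```

Read top to bottom: the vulnerabilities gate runs the package trigger, and any match at high severity or above returns a stop action, the policy-as-code equivalent of being denied boarding.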

To better understand PaC and policy packs, we use airport security as an analogy.

When you travel, you pass through multiple checkpoints, each designed to identify and catch different risks. At security screening, officers check for weapons, liquids, and explosives. At immigration control, officials verify visas and passports. If something is wrong, like an expired visa or a prohibited item, you might be stopped, warned, or denied entry.

Anchore Enforce works in a similar way for container security. Policy gates act as checkpoints, ensuring only safe and compliant images are deployed. One aspect of a policy might check for vulnerabilities (like a security screening for dangerous items), while another ensures software licenses are valid (like immigration checking travel documents). If a container has a critical flaw, such as a vulnerable version of Log4j, it gets blocked, just like a flagged passenger would be stopped from boarding a flight.

By enforcing these policies, Anchore Enforce helps secure an organization’s software supply chain, just as airport security keeps dangerous passengers and items from making it through.

If you’re looking for a deeper dive on PaC, read Anchore’s Developer’s Guide to SBOMs & Policy-as-Code.

Getting Started: the Developer Perspective

Getting started with Anchore Enforce is easy, but determining where to insert it into your workflow is critical. A natural home for Anchore Enforce is within the CI/CD process, specifically during the build stage.

This approach enables rapid feedback for developers, providing a gate which can determine whether a build should progress or halt depending on your policies.

Container images are great for software developers—they encapsulate an application and all of its dependencies into a portable package, providing consistency and simplified management. As a developer, you might be building a container image on a local machine or in a pipeline, using Docker and a dockerfile.

For this example, we’ll assume you are using a GitLab Runner to run a job which builds an image for your application. We’ll also be using AnchoreCTL, Anchore Enterprise’s CLI tool, to automate calling Anchore Enforce’s policy engine to evaluate your container against the CIS security standard—a set of industry-standard container security best practices.

First, you’ll want to set a number of environment variables in your GitLab repository:

ANCHORECTL_USERNAME  (protected)
ANCHORECTL_PASSWORD (protected and masked)
ANCHORECTL_URL (protected)
ANCHORECTL_ACCOUNT

These variables will be used to authenticate against your Anchore Enterprise deployment. Anchore Enterprise also supports API keys.  
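Outside of CI, the same variables can be exported in a local shell to sanity-check your configuration before wiring up the pipeline. All of the values below are placeholders, not real endpoints or credentials:

```shell
# Placeholder values - substitute your own deployment's details.
export ANCHORECTL_URL="https://anchore.example.com"
export ANCHORECTL_USERNAME="ci-scanner"
export ANCHORECTL_PASSWORD="change-me"     # prefer a masked CI variable or an API key
export ANCHORECTL_ACCOUNT="dev-team"

echo "AnchoreCTL configured for ${ANCHORECTL_URL} as ${ANCHORECTL_USERNAME} (account: ${ANCHORECTL_ACCOUNT})"
```

Marking ANCHORECTL_PASSWORD as masked in GitLab keeps it out of job logs; protected variables additionally restrict it to protected branches and tags.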

Next, you’ll want to set up your GitLab Runner job definition whereby AnchoreCTL is run after you’ve built a container image. The job definition below shows how you might build an image, then run AnchoreCTL to perform a policy evaluation:

### Anchore Distributed Scan
  # You will need three variables defined:
  # ANCHORECTL_USERNAME
  # ANCHORECTL_PASSWORD
  # ANCHORECTL_URL
  # ANCHORECTL_ACCOUNT

.login_gitlab_registry:
  - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin

.install_anchorectl_alpine:
  - apk update && apk add curl
  - 'echo "Downloading anchorectl from: ${ANCHORECTL_URL}"'
  - 'curl "${ANCHORECTL_URL}/v2/system/anchorectl?operating_system=linux&architecture=amd64" -H "accept: */*" | tar -zx anchorectl && mv -v anchorectl /usr/bin && chmod +x /usr/bin/anchorectl && /usr/bin/anchorectl version'

image: docker:latest
services:
- docker:dind
stages:
- build
- anchore
variables:
  ANCHORECTL_FAIL_BASED_ON_RESULTS: "true"
  ANCHORE_IMAGE: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG}

Build:
  stage: build
  before_script:
    - !reference [.login_gitlab_registry]
  script:
    - docker build -t ${ANCHORE_IMAGE} .
    - docker push ${ANCHORE_IMAGE}

Anchore:
  stage: anchore
  before_script:
    - !reference [.install_anchorectl_alpine]
    - !reference [.login_gitlab_registry]
  script:
    - 'export PATH="${HOME}/.local/bin/:${PATH}"'
    ### scan image and push to anchore enterprise
    - anchorectl image add --no-auto-subscribe --wait --dockerfile ./Dockerfile --from registry ${ANCHORE_IMAGE} 
    ### then get the results:
    - anchorectl image check --detail ${ANCHORE_IMAGE} 

The following environment variable (which can also be passed as the -f flag to AnchoreCTL) ensures that the return code is set to 1 if the policy evaluation result shows as ‘fail’. You can use this to break your build:

  ANCHORECTL_FAIL_BASED_ON_RESULTS: "true"

Then the AnchoreCTL image check command can be used to validate against either the default policy or a specific policy (using the -p flag). This is useful since your account in Anchore Enterprise can only have one default policy active at a time:

anchorectl image check --detail ${ANCHORE_IMAGE} -p <DESIRED_POLICY_ID>

When executed, this pipeline will scan your container image against your selected policy requirements and immediately provide feedback. Developers see exactly which policy gates failed and receive specific remediation steps, often as simple as updating a package or adjusting a configuration parameter.

And that’s it! With a few extra lines in your job definition, you’re now validating your newly built image against Anchore Enterprise for policy violations.

On failure, the job will stop, and the --detail option will give you an explanation of the failures along with remediation recommendations. This is a great way to get fast feedback and stop/warn/go directives directly within the development flow.

Operationalizing Compliance Checks:  the Security Engineer Perspective

While developers benefit from shift-left security checks during builds, security teams need a broader view across the entire container landscape. They’ll likely be scanning containers after they are built by the development teams—images that have already been pushed to a registry, staged for testing, or even deployed and running. The critical requirement for the security team is the ability to evaluate a large number of images regularly for the latest critical vulnerabilities. This can also be done with policy evaluation: feeding everything in the registry through policy gates.

Below you can see how the team might manage this via the Anchore Enforce user interface (UI). The security team has access to a range of policies, including:

  • CIS (included by default in all Anchore Enterprise deployments)
  • NIST 800-53
  • NIST 800-190
  • US DoD (Iron Bank)
  • DISA
  • FedRAMP

For this UI walkthrough, we will demonstrate the use-case using the CIS policy pack. Navigate to the policy section in your Anchore UI and activate your desired policy.

If you are an Anchore customer and do not have a desired policy pack, contact our Customer Success team for further information on entitlements.

Once the policy is activated, we can see it in action by scanning an image.

Navigate to Images, and select the image you want to check for compliance by clicking on the image digest. 

Once the policy check is complete, you will see a screen containing the results of the policy check.

This screen displays the actions applied to various artifacts based on Anchore Enforce’s policy engine findings, aligned with the rules defined in the policy packs. It also highlights the specific rule an artifact is failing. Based on these results, you can determine the appropriate remediation approach. 

The security team can generate reports in JSON or CSV format, simplifying the sharing of compliance check results.

Wrap-Up

As software supply chain attacks continue to evolve and grow in sophistication, organizations need robust, automated solutions to protect their environments. Anchore Enforce delivers exactly that by providing:

  • Automated compliance enforcement that catches issues early in the development process, when they’re easiest and least expensive to fix
  • Comprehensive policy coverage with pre-built packs for major standards like CIS, NIST, and FedRAMP that eliminate the need to translate complex requirements into executable controls
  • Flexible implementation options for both developers seeking immediate feedback and security teams managing enterprise-wide compliance
  • Actionable remediation guidance that helps teams quickly address policy violations without extensive research or security expertise

By integrating Anchore Enforce into your DevSecOps workflow, you’re not just checking a compliance box—you’re establishing a powerful defense against the rising tide of supply chain attacks. You’re also saving developer time, reducing friction between security and development teams, and building confidence with customers and regulators who demand proof of your security posture.

The software supply chain security challenge isn’t going away. With Anchore Enforce, you can meet it head-on with automation that scales with your organization. Reach out to our team to learn more or start a free trial to kick the tires yourself.


The Critical Role of SBOMs in PCI DSS 4.0 Compliance

Is your organization’s PCI compliance coming up for renewal in 2025? Or are you looking to achieve PCI compliance for the first time?

Version 4.0 of the Payment Card Industry Data Security Standard (PCI DSS) became mandatory on March 31, 2025. For enterprises utilizing a 3rd-party software supply chain—essentially all companies, according to The Linux Foundation’s report on open source penetration—PCI DSS v4.0 requires companies to maintain comprehensive inventories of supply chain components. The SBOM standard has become the cybersecurity industry’s consensus best practice for securing software supply chains and meeting the requirements mandated by regulatory compliance frameworks.

This document serves as a comprehensive guide to understanding the pivotal role of SBOMs in navigating the complexities of PCI DSS v4.0 compliance.


Learn about the role that SBOMs play in the security, including open source software (OSS) security, of your organization in this white paper.

Understanding the Fundamentals: PCI DSS 4.0 and SBOMs

What is PCI DSS 4.0?

Developed to strengthen the security of payment account data (e.g., credit card data) and standardize security controls globally, PCI DSS v4.0 represents the next evolution of the standard, ultimately benefiting consumers worldwide.

This version supersedes PCI DSS 3.2.1, which was retired on March 31, 2024. The explicit goals of PCI DSS v4.0 include promoting security as a continuous process, enhancing flexibility in implementation, and introducing enhancements in validation methods. PCI DSS v4.0 achieves this by introducing a total of 64 new security controls.

NOTE: PCI DSS had a minor version bump to 4.0.1 in mid-2024. The update is limited and doesn’t add or remove any controls or change any deadlines, meaning the software supply chain requirements apply to both versions.

Demystifying SBOMs

A software bill of materials (SBOM) is fundamentally an inventory of all software dependencies utilized by a given application. Analogous to a “Bill of Materials” in manufacturing, which lists all raw materials and components used to produce a product, an SBOM provides a detailed list of software components, including libraries, 3rd-party software, and services, that constitute an application. 

The benefits of maintaining SBOMs are manifold, including enhanced transparency into the software supply chain, improved vulnerability management by identifying at-risk components, facilitating license compliance management, and providing a foundation for comprehensive supply chain risk assessment.

PCI DSS Requirement 6: Develop and Maintain Secure Systems and Software

PCI DSS Principal Requirement 6, titled “Develop and Maintain Secure Systems and Software,” aims to ensure the creation and upkeep of secure systems and applications through robust security measures and regular vulnerability assessments and updates. This requirement encompasses five primary areas:

  1. Processes and mechanisms for developing and maintaining secure systems and software are defined and understood
  2. Bespoke and custom software are developed securely
  3. Security vulnerabilities are identified and addressed
  4. Public-facing web applications are protected against attacks
  5. Changes to all system components are managed securely

Deep Dive into Requirement 6.3.2: Component Inventory for Vulnerability Management

Within the “Security vulnerabilities are identified and addressed” category of Requirement 6, Requirement 6.3.2 mandates: 

An inventory of bespoke and custom software, and 3rd-party software components incorporated into bespoke and custom software is maintained to facilitate vulnerability and patch management

The purpose of this evolving requirement is to enable organizations to effectively manage vulnerabilities and patches within all software components, including 3rd-party components such as libraries and APIs embedded in their bespoke and custom software. 

While PCI DSS v4.0 does not explicitly prescribe the use of SBOMs, they represent the cybersecurity industry’s consensus method for achieving compliance with this requirement by providing a detailed and readily accessible inventory of software components.

How SBOMs Enable Compliance with 6.3.2

By requiring an inventory of all software components, Requirement 6.3.2 necessitates a mechanism for comprehensive tracking. SBOM generation tools automatically produce this inventory for all components in use, whether developed internally or sourced from third parties.

This detailed inventory forms the bedrock for identifying known vulnerabilities associated with these components. Platforms leveraging SBOMs can map component inventories to databases of known vulnerabilities, providing continuous insights into potential risks. 

Consequently, SBOMs are instrumental in facilitating effective vulnerability and patch management by enabling organizations to understand their software supply chain and prioritize remediation efforts.
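To make the mechanism concrete, here is a minimal sketch, in Python, of how a platform might cross-reference an SBOM’s component inventory against a known-vulnerability database. The component data and toy database are hard-coded for illustration—this is not Anchore’s implementation, and a real platform queries continuously updated feeds:

```python
# Minimal sketch of SBOM-to-vulnerability mapping, not Anchore's actual
# implementation. Component and advisory data are hard-coded for illustration;
# a real platform queries continuously updated feeds (NVD, GHSA, etc.).

sbom_components = [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "requests", "version": "2.31.0"},
]

# Toy vulnerability database keyed by (name, version).
vuln_db = {
    ("log4j-core", "2.14.1"): ["CVE-2021-44228"],
}

def find_vulnerable_components(components, db):
    """Pair each SBOM component with its known vulnerabilities, if any."""
    findings = []
    for c in components:
        cves = db.get((c["name"], c["version"]), [])
        if cves:
            findings.append({"component": c, "cves": cves})
    return findings

for f in find_vulnerable_components(sbom_components, vuln_db):
    print(f"{f['component']['name']} {f['component']['version']}: {', '.join(f['cves'])}")
```

Because the inventory and the vulnerability feed are decoupled, the same SBOM can be re-checked every time new advisories are published—no re-scan of the application required.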

Connecting SBOMs to other relevant PCI DSS 4.0 Requirements

Beyond Requirement 6.3.2, SBOMs offer synergistic benefits in achieving compliance with other aspects of PCI DSS v4.0.

Requirement 11.3.1.1 

This requirement necessitates the resolution of high-risk and critical vulnerabilities. SBOMs enable ongoing vulnerability monitoring, providing alerts for newly disclosed vulnerabilities affecting the identified software components, thereby complementing the requirement for quarterly vulnerability scans. 

Platforms like Anchore Secure can track newly disclosed vulnerabilities against SBOM inventories, facilitating proactive risk mitigation.

Implementing SBOMs for PCI DSS 4.0: Practical Guidance

Generating Your First SBOM

The generation of SBOMs can be achieved through various methods. A Software Composition Analysis (SCA) tool, like the open source tool Syft or the commercial AnchoreCTL, offers automated software composition scanning and SBOM generation for source code, containers, or software binaries.

Looking for a step-by-step “how to” guide for generating your first SBOM? Read our technical guide.

These tools integrate with build pipelines and can output SBOMs in standard formats like SPDX and CycloneDX. For legacy systems or situations where automated tools have limitations, manual inventory processes may be necessary, although this approach is generally less scalable and prone to inaccuracies. 

Regardless of the method, it is crucial to ensure the accuracy and completeness of the SBOM, including both direct and transitive software dependencies.

Essential Elements of an SBOM for PCI DSS

While PCI DSS v4.0 does not mandate specific data fields for SBOMs, it is prudent to include essential information that facilitates vulnerability management and component tracking. Drawing from recommendations by the National Telecommunications and Information Administration (NTIA), a robust SBOM should, at a minimum, contain:

  • Component Name
  • Version String
  • Supplier Name
  • Unique Identifier (e.g., PURL or CPE)
  • Component Hash
  • Author Name

Operationalizing SBOMs: Beyond Inventory

The true value of an SBOM lies in its active utilization for software supply chain use-cases beyond component inventory management.

Vulnerability Management

SBOMs serve as the foundation for continuous vulnerability monitoring. By integrating SBOM data with vulnerability databases, organizations can proactively identify components with known vulnerabilities. Platforms like Anchore Secure enable the mapping of SBOMs to known vulnerabilities, tracking exploitability and patching cadence.

Patch Management

A comprehensive SBOM facilitates informed patch management by highlighting the specific components that require updating to address identified vulnerabilities. This allows security teams to prioritize patching efforts based on the severity and exploitability of the vulnerabilities within their software ecosystem.

Maintaining Vulnerability Remediation Documentation

It is essential to maintain thorough documentation of vulnerability remediation efforts to meet the continuous compliance expectations emerging from global regulatory bodies. Utilizing formats like CVE (Common Vulnerabilities and Exposures) identifiers or VEX (Vulnerability Exploitability eXchange) documents alongside SBOMs provides a standardized way to communicate the status of vulnerabilities, whether a product is affected, and the steps taken for mitigation.
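For illustration, here is a minimal VEX-style record built in Python. It loosely borrows the OpenVEX status vocabulary (affected, not_affected, fixed, under_investigation) but is not a spec-compliant document; the product name and note are invented:

```python
# Minimal sketch of a VEX-style remediation record. Loosely modeled on the
# OpenVEX status vocabulary; NOT a spec-compliant OpenVEX document.

VALID_STATUSES = {"affected", "not_affected", "fixed", "under_investigation"}

def make_vex_statement(cve_id, product, status, note=""):
    """Build a simple statement; rejects statuses outside the vocabulary."""
    if status not in VALID_STATUSES:
        raise ValueError(f"unknown VEX status: {status}")
    return {
        "vulnerability": cve_id,
        "product": product,
        "status": status,
        "status_note": note,
    }

# Hypothetical example: a service ships log4j-core but stripped the
# vulnerable class at build time, so it is documented as not affected.
stmt = make_vex_statement(
    "CVE-2021-44228",
    "payment-service:1.4.2",
    "not_affected",
    note="log4j-core present but JndiLookup class removed at build time",
)
print(stmt["status"])  # prints not_affected
```

Records like this, stored alongside the SBOM they refer to, give auditors a machine-readable trail of which findings were fixed, mitigated, or ruled out.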

Acquiring SBOMs from Third-Party Suppliers

PCI DSS Requirement 6.3.2 explicitly includes 3rd-party software components. Therefore, organizations must not only generate SBOMs for their own bespoke and custom software but also obtain SBOMs from their technology vendors for any libraries, applications, or APIs that are part of the card processing environment.

Engaging with suppliers to request SBOMs, potentially incorporating this requirement into contractual agreements, is a critical step. It is advisable to communicate preferred SBOM formats (e.g., CycloneDX, SPDX) and desired data fields to ensure the received SBOMs are compatible with internal vulnerability management processes. Challenges may arise if suppliers lack the capability to produce accurate SBOMs; in such instances, alternative risk mitigation strategies and ongoing communication are necessary.

NOTE: Remember the OSS maintainers that authored the open source components integrated into your application code are NOT 3rd-party suppliers in the traditional sense—you are! Almost all OSS licenses contain an “as is” clause that absolves them of liability for any code quality issues like vulnerabilities. This means that by using their code, you are now responsible for any security vulnerabilities in the code (both known and unknown).

Navigating the Challenges and Ensuring Success

Addressing Common Challenges in SBOM Adoption

Implementing SBOMs across an organization can present several challenges:

  • Generating SBOMs for closed-source or legacy systems where build tool integration is difficult may require specialized tools or manual effort
  • The volume and frequency of software updates necessitate automated processes for SBOM generation and continuous monitoring
  • Ensuring the accuracy and completeness of SBOM data, including all levels of dependencies, is crucial for effective risk management
  • Integrating SBOM management into existing software development lifecycle (SDLC) and security workflows requires collaboration and process adjustments
  • Effective SBOM adoption necessitates cross-functional collaboration between development, security, and procurement teams to establish policies and manage vendor relationships

Best Practices for SBOM Management

To ensure the sustained effectiveness of SBOMs for PCI DSS v4.0 compliance and beyond, organizations should adopt the following best practices:

  • Automate SBOM generation and updates wherever possible to maintain accuracy and reduce manual effort
  • Establish clear internal SBOM policies regarding format, data fields, update frequency, and retention
  • Select and implement appropriate SBOM management tooling that integrates with existing security and development infrastructure
  • Clearly define roles and responsibilities for SBOM creation, maintenance, and utilization across relevant teams
  • Provide education and training to development, security, and procurement teams on the importance and practical application of SBOMs

The Broader Landscape: SBOMs Beyond PCI DSS 4.0

As predicted, the global regulatory push toward software supply chain security and risk management with SBOMs as the foundation continues to gain momentum in 2025. PCI DSS v4.0 is the next major regulatory framework embracing SBOMs. This follows the pattern set by the US Executive Order 14028 and the EU Cyber Resilience Act, further cementing SBOMs as a cornerstone of modern cybersecurity best practice. 

Wrap-Up: Embracing SBOMs for a Secure Payment Ecosystem

The integration of SBOMs into PCI DSS v4.0 signifies a fundamental shift towards a more secure and transparent payment ecosystem. SBOMs are no longer merely a recommended practice but a critical component for achieving and maintaining compliance with the evolving requirements of PCI DSS v4.0, particularly Requirement 6.3.2. 

By providing a comprehensive inventory of software components and their dependencies, SBOMs empower organizations to enhance their security posture, reduce the risk of costly data breaches, improve their vulnerability management capabilities, and effectively navigate the complexities of regulatory compliance. Embracing SBOM implementation is not just about meeting a requirement; it is about building a more resilient and trustworthy software foundation for handling sensitive payment card data.

If you’re interested to learn more about how Anchore Enterprise can help your organization harden its software supply chain and achieve PCI DSS v4.0 compliance, get in touch with our team!


Interested to learn about all of the software supply chain use-cases that SBOMs enable? Read our new white paper and start unlocking enterprise value.

WHITE PAPER | Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization

The Developer’s Guide to SBOMs & Policy-as-Code

If you’re a developer, this vignette may strike a chord: You’re deep in the flow, making great progress on your latest feature, when someone from the security team sends you an urgent message. A vulnerability has been discovered in one of your dependencies and has failed a compliance review. Suddenly, your day is derailed as you shift from coding to a gauntlet of bureaucratic meetings.

This is an unfortunate reality for developers at organizations where security and compliance are bolt-on processes rather than integrated parts of the whole. Your valuable development time is consumed with digging through arcane compliance documentation, attending security reviews and being relegated to compliance training sessions. Every context switch becomes another drag on your productivity, and every delayed deployment impacts your ability to ship code.

Two niche DevSecOps/software supply chain technologies have come together to transform the dynamic between developers and organizational policy—software bills of materials (SBOMs) and policy-as-code (PaC). Together, they dramatically reduce the friction between development velocity and risk management requirements by making policy evaluation and enforcement:

  • Automated and consistent
  • Integrated into your existing workflows
  • Visible early in the development process

In this guide, we’ll explore how SBOMs and policy-as-code work, the specific benefits they bring to your daily development work, and how to implement them in your environment. By the end, you’ll understand how these tools can help you spend less time manually doing someone else’s job and more time doing what you do best—writing great code.


Interested to learn about all of the software supply chain use-cases that SBOMs enable? Read our new white paper and start unlocking enterprise value.

WHITE PAPER | Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization

A Brief Introduction to Policy-as-Code

You’re probably familiar with Infrastructure-as-Code (IaC) tools like Terraform, AWS CloudFormation, or Pulumi. These tools allow you to define your cloud infrastructure in code rather than clicking through web consoles or manually running commands. Policy-as-Code (PaC) applies this same principle to policies from other departments of an organization.

What is policy-as-code?

At its core, policy-as-code translates organizational policies—whether they’re security requirements, licensing restrictions, or compliance mandates—from human-readable documents into machine-readable representations that integrate seamlessly with your existing DevOps platform and tooling.

Think of it this way: IaC gives you a DSL for provisioning and managing cloud resources, while PaC extends this concept to other critical organizational policies that traditionally lived outside engineering teams. This creates a bridge between development workflows and business requirements that previously existed in separate silos.

Why do I care?

Let’s play a game of would you rather. Choose the activity from the table below that you’d rather do:

| Before Policy-as-Code | After Policy-as-Code |
|---|---|
| Read lengthy security/legal/compliance documentation to understand requirements | Reference policy translated into code with clear comments and explanations |
| Manually review your code for policy compliance and hope you interpreted policy correctly | Receive automated, deterministic policy evaluation directly in the CI/CD build pipeline |
| Attend compliance training sessions because you didn’t read the documentation | Learn policies by example, as concrete connections to actual development tasks |
| Set up meetings with security, legal, or compliance teams to get code approval | Get automated approvals through policy evaluation, without review meetings |
| Wait till the end of the sprint and hope the VP of Eng can get an exception to ship with policy violations | Identify and fix policy violations early, when changes are simple to implement |

While the game is a bit staged, it isn’t divorced from reality. PaC is meant to relieve much of the development friction associated with the external requirements that are typically foisted onto the shoulders of developers.

From oral tradition to codified knowledge

Perhaps one of the most underappreciated benefits of policy-as-code is how it transforms organizational knowledge. Instead of policies living in outdated Word documents or in the heads of long-tenured employees, they exist as living code that evolves with your organization.

When a developer asks “Why do we have this restriction?” or “What’s the logic behind this policy?”, the answer isn’t “That’s just how we’ve always done it” or “Ask Alice in Compliance.” Instead, they can look at the policy code, read the annotations, and understand the reasoning directly.

In the next section, we’ll explore how software bills of materials (SBOMs) provide the perfect data structure to pair with policy-as-code for managing software supply chain security.

A Brief Introduction to SBOMs (in the Context of PaC)

If policy-as-code provides the rules engine for your application’s dependency supply chain, then Software Bills of Materials (SBOMs) provide the structured supply chain data that the policy engine evaluates.

What is an SBOM?

An SBOM is a formal, machine-readable inventory of all components and dependencies used in building a software artifact. If you’re familiar with Terraform, you can think of an SBOM as analogous to a dev.tfstate file: it stores the state of your application code’s 3rd-party dependency supply chain, which is then reconciled against a main.tf file (i.e., the policy) to determine whether the software supply chain is compliant or in violation of the defined policy.

SBOMs vs package manager dependency files

You may be thinking, “Don’t I already have this information in my package.json, requirements.txt, or pom.xml file?” While these files declare your direct dependencies, they don’t capture the complete picture:

  1. They don’t typically include transitive dependencies (dependencies of your dependencies)
  2. They don’t include information about the components within container images you’re using
  3. They don’t provide standardized metadata about vulnerabilities, licenses, or provenance
  4. They aren’t easily consumable by automated policy engines across different programming languages and environments

SBOMs solve these problems by providing a standardized format that comprehensively documents your entire software supply chain in a way that policy engines can consistently evaluate.

A universal policy interface: How SBOMs enable policy-as-code

Think of SBOMs as creating a standardized “policy interface” for your software’s supply chain metadata. Just as APIs create a consistent way to interact with services, SBOMs create a consistent way for policy engines to interact with your software’s composable structure.

This standardization is crucial because it allows policy engines to operate on a known data structure rather than having to understand the intricacies of each language’s package management system, build tool, or container format.

For example, a security policy that says “No components with critical vulnerabilities may be deployed to production” can be applied consistently across your entire software portfolio—regardless of the technologies used—because the SBOM provides a normalized view of the components and their vulnerabilities.

In the next section, we’ll explore the concrete benefits that come from combining SBOMs with policy-as-code in your development workflow.

How do I get Started with SBOMs and Policy-as-Code

Now that you understand what SBOMs and policy-as-code are and why they’re valuable, let’s walk through a practical implementation. We’ll use Anchore Enterprise as an example of a policy engine that has a DSL to express a security policy which is then directly integrated into a CI/CD runbook. The example will focus on a common software supply chain security best practice: preventing the deployment of applications with critical vulnerabilities.

Tools we’ll use

For this example implementation, we’ll use the following components from Anchore:

  • AnchoreCTL: A software composition analysis (SCA) tool and SBOM generator that scans source code, container images or application binaries to populate an SBOM with supply chain metadata
  • Anchore Enforce: The policy engine that evaluates SBOMs against defined policies
  • Anchore Enforce JSON: The Domain-Specific Language (DSL) used to define policies in a machine-readable format

While we’re using Anchore in this example, the concepts apply to other SBOM generators and policy engines as well.

Step 1: Translate human-readable policies to machine-readable code

The first step is to take your organization’s existing policies and translate them into a format that a policy engine can understand. Let’s start with a simple but effective policy.

Human-Readable Policy:

Applications with critical vulnerabilities must not be deployed to production environments.

This policy needs to be translated into the Anchore Enforce JSON policy format:

{
  "id": "critical_vulnerability_policy",
  "version": "1.0",
  "name": "Block Critical Vulnerabilities",
  "comment": "Prevents deployment of applications with critical vulnerabilities",
  "rules": [
    {
      "id": "block_critical_vulns",
      "gate": "vulnerabilities",
      "trigger": "package",
      "comment": "Rule evaluates each dependency in an SBOM against the vulnerability database. If the dependency is found in the database, all known vulnerability severity scores are evaluated for a critical value. If a match is found, the policy engine returns a STOP action to the CI/CD build task",
      "parameters": [
        { "name": "package_type", "value": "all" },
        { "name": "severity_comparison", "value": "=" },
        { "name": "severity", "value": "critical" },
      ],
      "action": "stop"
    }
  ]
}

This policy code instructs the policy engine to:

  1. Examine all application dependencies (i.e., packages) in the SBOM
  2. Check if any dependency/package has vulnerabilities with a severity of “critical”
  3. If found, return a “stop” action that will fail the build
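The evaluation flow above can be mimicked in a few lines of Python. This is an illustrative re-implementation of the rule’s logic, not Anchore’s actual policy engine, and the package data is invented:

```python
# Illustrative re-implementation of the rule's logic, NOT Anchore's engine:
# walk every package in the SBOM and return a "stop" action if any carries
# a critical-severity vulnerability; otherwise return "go".

def evaluate_policy(sbom_packages):
    for pkg in sbom_packages:
        for vuln in pkg.get("vulnerabilities", []):
            if vuln["severity"] == "critical":
                return "stop"   # fail the CI/CD build task
    return "go"                 # pipeline may proceed

# Invented package data for illustration.
packages = [
    {"name": "openssl", "vulnerabilities": [{"id": "CVE-A", "severity": "medium"}]},
    {"name": "struts",  "vulnerabilities": [{"id": "CVE-B", "severity": "critical"}]},
]

print(evaluate_policy(packages))  # prints stop
```

The key property is determinism: given the same SBOM and the same policy, every developer and every pipeline run gets the same verdict.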

If you’re looking for more information, our documentation details the full capabilities of the Anchore Enforce policy engine and its DSL.

Step 2: Deploy Anchore Enterprise with the policy engine

With the example policy defined, the next step is to deploy Anchore Enterprise (AE) and configure the Anchore Enforce policy engine. The high-level steps are:

  1. Deploy Anchore Enterprise platform in your test environment via Helm Chart (or other); includes policy engine
  2. Load your policy into the policy engine
  3. Configure access controls/permissions between AE deployment and CI/CD build pipeline

If you’re interested to get hands-on with this, we have developed a self-paced workshop that walks you through a full deployment and how to set up a policy. You can get a trial license by signing up for our free trial.

Step 3: Integrate SBOM generation into your CI/CD pipeline

Now you need to generate SBOMs as part of your build process and have them evaluated against your policies. Here’s an example of how this might look in a GitHub Actions workflow:

name: Build App and Evaluate Supply Chain for Vulnerabilities

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build-and-evaluate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      
      - name: Build Application
        run: |
          # Build application as container image
          docker build -t myapp:latest .
      
      - name: Generate SBOM
        run: |
          # Install AnchoreCTL
          curl -sSfL https://anchorectl-releases.anchore.io/v1.0.0/anchorectl_1.0.0_linux_amd64.tar.gz | tar xzf - -C /usr/local/bin
          
          # Execute supply chain composition scan of container image, generate SBOM and send to policy engine for evaluation
          anchorectl image add --wait myapp:latest
          
      - name: Evaluate Policy
        run: |
          # Get policy evaluation results
          RESULT=$(anchorectl image check myapp:latest --policy critical_vulnerability_policy)
          
          # Handle the evaluation result
          if [[ $RESULT == *"Status: pass"* ]]; then
            echo "Policy evaluation passed! Proceeding with deployment."
          else
            echo "Policy evaluation failed! Deployment blocked."
            exit 1
          fi
      
      - name: Deploy if Passed
        if: success()
        run: |
          # Your deployment steps here

This workflow:

  1. Builds your application as a container image using Docker
  2. Installs AnchoreCTL
  3. Scans container image with SCA tool to map software supply chain
  4. Generates an SBOM based on the SCA results
  5. Submits the SBOM to the policy engine for evaluation
  6. Gets evaluation results from policy engine response
  7. Continues or halts the pipeline based on the policy response

Step 4: Test the integration

With the integration in place, it’s time to test that everything works as expected:

  1. Create a test build that intentionally includes a component with a known critical vulnerability
  2. Push the build through your CI/CD pipeline
  3. Confirm that:
    • The SBOM is correctly generated
    • The policy engine identifies the vulnerability
    • The pipeline fails as expected

If all goes well, you’ve successfully implemented your first policy-as-code workflow using SBOMs!

Step 5: Expand your policy coverage

Once you have the basic integration working, you can begin expanding your policy coverage to include:

  • Security policies
  • Compliance policies
  • Software license policies
  • Custom organizational policies
  • Environment-specific requirements (e.g., stricter policies for production vs. development)

Work with your security and compliance teams to translate their requirements into policy code, and gradually expand your automated policy coverage. This process is a large upfront investment but creates recurring benefits that pay dividends over the long-term.

Step 6: Profit!

With SBOMs and policy-as-code implemented, you’ll start seeing the benefits almost immediately:

  • Fast feedback on security and compliance issues
  • Reduced manual compliance tasks
  • Better documentation of what’s in your software and why
  • Consistent evaluation and enforcement of policies
  • Certainty about policy approvals

The key to success is getting your security and compliance teams to embrace the policy-as-code approach. Help them understand that by translating their policies into code, they gain more consistent enforcement while reducing manual effort.

Wrap-Up

As we’ve explored throughout this guide, SBOMs and policy-as-code represent a fundamental shift in how developers interact with security and compliance requirements. Rather than treating these as external constraints that slow down development, they become integrated features of your DevOps pipeline.

Key takeaways

  • Policy-as-Code transforms organizational policies from static documents into dynamic, version-controlled code that can be automated, tested, and integrated into CI/CD pipelines.
  • SBOMs provide a standardized format for documenting your software’s components, creating a consistent interface that policy engines can evaluate.
  • Together, they enable “shift-left” security and compliance, providing immediate feedback on policy violations without meetings or context switching.
  • Integration is straightforward with pre-built plugins for popular DevOps platforms, allowing you to automate policy evaluation as part of your existing build process.
  • The benefits extend beyond security to include faster development cycles, reduced compliance burden, and better visibility into your software supply chain.

Get started today

Ready to bring SBOMs and policy-as-code to your development environment? Anchore Enterprise provides a comprehensive platform for generating SBOMs, defining policies, and automating policy evaluation across your software supply chain.


Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Software Supply Chain Transparency: Why SBOMs Are the Missing Piece in Your ConMon Strategy

Two cybersecurity buzzwords are rapidly shaping how organizations manage risk and streamline operations: Continuous Monitoring (ConMon) and Software Bills of Materials (SBOMs). ConMon, rooted in the traditional security principle—“trust but verify”—has evolved into an iterative process where organizations measure, analyze, design, and implement improvements based on real-time data. Meanwhile, SBOMs offer a snapshot of an application’s composition (i.e., 3rd-party dependency supply chain) at any given point in the DevSecOps lifecycle. 

Understanding these concepts is crucial because together they unlock significant enterprise value—ranging from rapid zero-day response to scalable vulnerability management and even automated compliance enforcement. By integrating SBOMs into a robust ConMon strategy, organizations can not only mitigate supply chain risks but also ensure that every stage of software development adheres to stringent security and compliance standards.

A Brief Introduction to Continuous Monitoring (ConMon)

Continuous Monitoring is a wide-ranging topic applied across various contexts such as application performance, error tracking, and security oversight. In the most general sense, ConMon is an iterative cycle of:

  • Measure: Collect relevant data.
  • Analyze: Analyze raw data and generate actionable insights.
  • Design: Develop solution(s) based on the insights.
  • Implement: Execute solution to resolve issue(s) or improve performance.
  • Repeat

ConMon is underpinned by the well-known management theory maxim, “You can’t manage what you don’t measure.” Historically, the term has its origins in the federal compliance world—think FedRAMP and cATO—where continuous monitoring was initially embraced as an evolution of traditional point-in-time security compliance audits.

So, where do SBOMs fit into this picture?

A Brief Introduction to SBOMs (in the Context of ConMon)

In the world of ConMon, SBOMs are a source of data that can be measured and analyzed to extract actionable insights. SBOMs are specifically a structured store of software supply chain metadata. They track the evolution of a software artifact’s supply chain as it develops throughout the software development lifecycle.

An SBOM catalogs information like software supply chain dependencies, security vulnerabilities and licensing contracts. In this way SBOMs act as the source of truth for the state of an application’s software supply chain during the development process.

[SBOM lifecycle graphic]

The ConMon process utilizes the supply chain data stored in the SBOMs to generate actionable insights like: 

  • uncovering critical supply chain vulnerabilities,
  • identifying legally risky open source licenses, or 
  • catching new software dependencies that break key compliance frameworks like FedRAMP.

SBOMs are the key data source that allows ConMon to apply its goal of continuously monitoring and improving an organization’s software development environment—specifically, the software supply chain.

Benefits of using SBOMs as the central component of ConMon

As the cornerstone of the software supply chain, SBOMs play a central role in supply chain security which falls under the purview of the continuous monitoring workflow. Given this, it shouldn’t be a surprise that there are cross-over use-cases between the two domains. Leveraging the standardized data structure of SBOMs unlocks a wealth of opportunities for automating supply chain security use-cases and scaling the principles of ConMon. Key use-cases and benefits include:

  1. Rapid incident response to zero-day disclosures
  • Reduced exploitation risk: SBOMs reduce the risk of supply chain exploitation by dramatically reducing the time to identify whether vulnerable components are present in an organization’s software environment and how to prioritize remediation efforts.
  • Prevent wasted organizational resources: A centralized SBOM repository enables organizations to collapse manual dependency audits into a single search query. This prevents the need to re-task software engineers away from development work to deal with incident response.
  2. Software dependency drift detection
  • Reduced exploitation risk: When SBOM generation is integrated natively into the DevSecOps pipeline, a historical record of the development of an application becomes available. Each development stage is compared against the previous to identify dependency injection in real-time. Catching and remediating malicious injections as early as possible significantly reduces the risk of exploitation by threat actors.
  3. Proactive and scalable vulnerability management
  • Reduced security risk: SBOMs empower organizations to decouple software composition scanning from vulnerability scanning, enabling a scalable vulnerability management approach that meets cloud-native expectations. By generating an SBOM early in the DevSecOps pipeline, teams can continuously cross-reference software components against known vulnerability databases, proactively detecting risks before they reach production.
  • Reduced time on vulnerability management: This streamlined process reduces the manual tasks associated with legacy vulnerability management. Teams are now enabled to focus their efforts on higher-level issues rather than be bogged down with tedious manual tasks.
  4. Automated non-compliance alerting and enforcement
  5. Automated compliance report generation
  • Reduce time spent generating compliance audit evidence: Manual generation of compliance audit reports to prove security best practices are in place is a time-consuming process. SBOMs unlock the power to automate the generation of audit evidence for the software supply chain. This protects organizational resources for higher-value tasks.
  6. Automated OSS license management
  • Reduced legal risk: An adversarial OSS license accidentally entering a commercial application opens up an enterprise to significant legal risk. SBOMs enable organizations to automate the process of scanning for OSS licenses and assessing the legal risk of the entire software supply chain. Having real-time visibility into the licenses of an organization’s entire supply chain dramatically reduces the risk of legal penalties.
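The incident-response use-case above reduces to a single query against a centralized SBOM inventory. A toy sketch, where an in-memory dictionary stands in for a real SBOM repository and all service and package data is hypothetical:

```python
# Hypothetical "SBOM repository": one component list per service
sbom_repo = {
    "payments-api":  [("xz-utils", "5.6.0"), ("glibc", "2.38")],
    "web-frontend":  [("openssl", "3.0.7")],
    "batch-worker":  [("xz-utils", "5.4.1")],
}

def affected_services(repo, package, bad_versions):
    """Return services that ship a vulnerable version of `package`."""
    return sorted(
        svc for svc, comps in repo.items()
        if any(name == package and ver in bad_versions
               for name, ver in comps)
    )

# One query replaces a fleet-wide manual audit
print(affected_services(sbom_repo, "xz-utils", {"5.6.0", "5.6.1"}))
# ['payments-api']
```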

In essence, SBOMs serve as the heart of a robust ConMon strategy, providing the critical data needed to scale automation and ensure that the software supply chain remains secure and compliant.


Interested in learning about all of the software supply chain use-cases that SBOMs enable? Read our new white paper and start unlocking enterprise value.

WHITE PAPER | Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization

SBOMs and ConMon applied to the DevSecOps lifecycle

Integrating SBOMs within the DevSecOps lifecycle enables organizations to realize security outcomes efficiently through a cyclical process of measurement, analysis, design and implementation. We’ll go through each step of the process.

1. Measure

Measurement is the most obvious stage of ConMon, given that "monitoring" is right there in the name. The measurement step is primarily concerned with collecting as much data as possible to power the later analysis stage. ConMon traditionally focuses on collecting security-specific data like:

  • Observability Telemetry: Application logs, metrics, and traces
  • Audit Logs: Records of application administration: access and modifications
  • Supply Chain Metadata: Point-in-time scans of the composition of a software artifact's supply chain of 3rd-party dependencies

Software composition analysis (SCA) scanners and SBOM generation tools create an additional dimension of information about software that can then be analyzed. The supply chain metadata that is generated powers the insights that feed the ConMon flywheel and increases transparency into software supply chain issues.

External measurements (i.e., data sources)

Additionally, external data sources (e.g., the NVD and published VEX documents) can be referenced to correlate valuable information aggregated by public interest organizations like NIST (National Institute of Standards and Technology) and CISA (Cybersecurity and Infrastructure Security Agency). These sources act as comprehensive stores of the collective work of security researchers worldwide. 

As software components are analyzed and tested for vulnerabilities and exploits, researchers submit their findings to these public databases. The security findings are tied to specific software components (i.e., the same component identifiers stored in an SBOM).

After collecting all of this information, we are ready to move to the next phase of the ConMon lifecycle: analysis.

2. Analyze

The foundational premise of ConMon is to monitor continuously in order to uncover security threats before they can cause damage. The analysis step is what transforms raw data into actionable security insights that reduce the risk of a breach. 

  • Queries over Software Supply Chain Data:
    • As the central data format for the software supply chain, SBOMs act as the data source that queries are written against to extract insights
  • Actionable Insights:
    • Highlight known vulnerabilities by cross-referencing the entire 3rd-party software supply chain against vulnerability databases like NVD
    • Compare SBOMs from one stage of the pipeline to a later stage in order to uncover dependency injections (both unintentional and malicious)
    • Codify software supply chain security or regulatory compliance policies into policy-as-code that can alert on policy drift in real-time
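A minimal sketch of the first insight, cross-referencing SBOM components against a vulnerability database. The feed here is a hand-rolled stand-in for NVD data, not a real API:

```python
# Toy vulnerability feed keyed by (package, version); a stand-in for NVD data
vuln_db = {
    ("log4j-core", "2.14.1"): ["CVE-2021-44228"],
    ("openssl", "1.0.1f"):    ["CVE-2014-0160"],
}

# Components pulled from an SBOM (illustrative data)
components = [("log4j-core", "2.14.1"), ("requests", "2.31.0")]

# Cross-reference every component against the feed
findings = {
    f"{name}@{ver}": vuln_db[(name, ver)]
    for name, ver in components
    if (name, ver) in vuln_db
}
print(findings)  # {'log4j-core@2.14.1': ['CVE-2021-44228']}
```

In practice the lookup keys are standardized identifiers (e.g., purl or CPE), which is exactly why a structured SBOM makes this join cheap.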

3. Design

After generating insights from analysis of the raw data, the next step is to design a solution based on the analysis from the previous step. The goal of the designed solution is to reduce the risk of a security incident.

Example Solutions Based on Analysis

  • Vulnerability Remediation Workflow: After analysis has uncovered known vulnerabilities, designing a system to remediate the vulnerabilities comes into focus. Utilizing the existing engineering task management process is ideal. To live up to the promise of ConMon, the analysis to remediation process should be an automated system that pushes supply chain data from the analysis platform directly into the remediation queue.
  • Dependency Injection Detection via SBOM Drift Analysis: SBOMs can be scanned at each stage of the DevSecOps pipeline; when anomalous dependencies are flagged as not coming from a known good source, this analysis can be used to prompt an investigation. These types of investigations are generally too complex to be fully automated, but the investigation process still requires design.
  • Automated Compliance Enforcement via Policy-as-Code: By codifying software supply chain best practices or regulatory compliance controls into policy-as-code, organizations can be alerted to policy violations in real-time. The solution design includes a mapping of policies into code, scans against containers in the DevSecOps pipeline and a notification system that can act on the analysis.

4. Implement

The final step of ConMon involves implementing the design from the previous step. This also sets up the entire process to restart from the beginning. After the design is implemented it is ready to be monitored again to assess effectiveness of the implemented design.

Execution of Solution Design

  • Vulnerability Remediation Workflow: Configure the analysis platform to push a report of identified vulnerabilities to the engineering organization’s existing ticketing system. Prioritize the vulnerabilities based on their severity score to increase signal-to-noise ratio for the assigned developer or gate the analysis platform to only push reports if a vulnerability is classified as high or critical.
  • Dependency Injection Detection via SBOM Drift Analysis: Integrate SBOM generation into two or more stages of the DevSecOps pipeline to enable diff analysis of the software supply chain over time. Allowlists and denylists can fine-tune which types of dependency changes are expected and which are anomalous. Investigations into suspected dependency injection can be triggered based on anomaly detection.
  • Automated Compliance Enforcement via Policy-as-Code: Security policies and/or compliance controls will need to be translated from English into code, but this is a one-time, upfront investment to enable scalable, automated policy scanning. A scanner will need to be integrated into one or more places in the CI/CD build pipeline in order to detect policy violations. As violations are identified, the analysis platform can push notifications to the appropriate team.
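The drift-detection implementation above is, at its core, a set difference between SBOMs captured at two pipeline stages. A simplified sketch with hypothetical package names:

```python
# Component sets extracted from SBOMs at two pipeline stages (illustrative data)
build_stage  = {"flask@3.0.0", "jinja2@3.1.3"}
deploy_stage = {"flask@3.0.0", "jinja2@3.1.3", "cryptominer@0.1"}

# Dependencies that are expected to appear after the build stage
allowlist = {"flask@3.0.0", "jinja2@3.1.3"}

# Anything added between stages is "drift"; anything not allowlisted is anomalous
injected = deploy_stage - build_stage
anomalous = injected - allowlist

print(injected)   # {'cryptominer@0.1'}
print(anomalous)  # {'cryptominer@0.1'} -> trigger an investigation
```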

5. Repeat

Step 5 isn’t actually a different step of ConMon, it is just a reminder to return to Step 1 for another turn of the ConMon cycle. The ‘continuous’ in ConMon means that we continue to repeat the process indefinitely. As security of a system is measured and evaluated, new security issues are discovered that then require a solution design and design implementation. This is the flywheel cycle of ConMon that endlessly improves the security of the system that is monitored.

Real-world Scenario

SBOMs and ConMon aren’t just a theoretical framework; there are a number of SBOM use-cases in production that are delivering value to enterprises. The most prominent of these is security incident response. This use-case moved into the limelight in the wake of a string of infamous software supply chain attacks over the past decade: SolarWinds, Log4j, XZ Utils, etc.

The biggest takeaway from these high-profile incidents is that enterprises were caught unprepared and experienced pain as a result of not having the tools to measure, analyze, design and implement solutions to these supply chain attacks.

Large enterprises like Google responded to this deficit by deploying an SBOM-powered software supply chain solution that generates over 100 million SBOMs a year! As a result of having this system in place Google was able to rapidly respond to the XZ Utils incident without having to utilize any manual processes that typically extend supply chain attacks like these from minutes into days or weeks. Brandon Lum, Google’s SBOM lead, recounts that “within 10 minutes of the XZ [Utils] announcement, the entire Google organization was able to rule out that anything was impacted.”

This outcome was only possible due to Google’s foresight and willingness to instrument their DevSecOps pipeline with tools like SCAs and SBOM generators and continuously monitor their software supply chain.

Ready to Reap the Rewards of SBOM-powered ConMon?

Integrating SBOMs as a central element of your ConMon strategy transforms how organizations manage software supply chain security. By aligning continuous monitoring with the principles of DevSecOps, security and engineering leaders can ensure that their organizations are well-prepared to face the evolving threat landscape—keeping operations secure, compliant, and efficient.

If you’re interested in embracing the power of SBOMs and ConMon but your team doesn’t want to take on this side quest themselves, Anchore offers a turnkey platform to unlock the benefits without the headache of building and managing a solution from scratch. Anchore Enterprise is a complete SBOM-powered, supply chain security solution that extends ConMon into your organization’s software supply chain. Reach out to our team to learn more or start a free trial to kick the tires yourself.

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Effortless SBOM Analysis: How Anchore Enterprise Simplifies Integration

As software supply chain security becomes a top priority, organizations are turning to Software Bill of Materials (SBOM) generation and analysis to gain visibility into the composition of their software and supply chain dependencies in order to reduce risk. However, integrating SBOM analysis tools into existing workflows can be complex, requiring extensive configuration and technical expertise. Anchore Enterprise, a leading SBOM management and container security platform, simplifies this process with seamless integration capabilities that cater to modern DevSecOps pipelines.

This article explores how Anchore makes SBOM analysis effortless by offering automation, compatibility with industry standards, and integration with popular CI/CD tools.

Learn about the role that SBOMs play in the security, including open source software (OSS) security, of your organization in this white paper.

The Challenge of SBOM Analysis Integration

SBOMs play a crucial role in software security, compliance, and vulnerability management. However, organizations often face challenges when adopting SBOM analysis tools:

  • Complex Tooling: Many SBOM solutions require significant setup and customization.
  • Scalability Issues: Enterprises managing thousands of dependencies need scalable and automated solutions.
  • Compatibility Concerns: Ensuring SBOM analysis tools work seamlessly across different DevOps environments can be difficult.
  • Compliance Requirements: Organizations must align with frameworks like Executive Order 14028, the EU Cyber Resilience Act (CRA), ISO 27001, and the Secure Software Development Framework (SSDF).

Anchore addresses these challenges by providing a streamlined approach to SBOM analysis with easy-to-use integrations.

How Anchore Simplifies SBOM Analysis Integration

1. Automated SBOM Generation and Analysis

Anchore automates SBOM generation from various sources, including container images, software packages, and application dependencies. This eliminates the need for manual intervention, ensuring continuous security and compliance monitoring.

  • Supports multiple SBOM formats: CycloneDX, SPDX, and Anchore’s native JSON format.
  • Automatically scans and analyzes SBOMs for vulnerabilities, licensing issues, and security and compliance policy violations.
  • Provides real-time insights to security teams.

2. Seamless CI/CD Integration

DevSecOps teams require tools that integrate effortlessly into their existing workflows. Anchore achieves this by offering:

  • Popular CI/CD platform plugins: Jenkins, GitHub Actions, GitLab CI/CD, Azure DevOps and more.
  • API-driven architecture: Embed SBOM generation and analysis in any DevOps pipeline.
  • Policy-as-code support: Enforce security and compliance policies within CI/CD workflows.
  • AnchoreCTL: A command-line interface (CLI) tool for developers to generate and analyze SBOMs locally before pushing to production.

3. Cloud Native and On-Premises Deployment

Organizations have diverse infrastructure requirements, and Anchore provides flexibility through:

  • Cloud native support: Works seamlessly with Kubernetes, OpenShift, AWS, and GCP.
  • On-premises deployment: For organizations requiring strict control over data security.
  • Hybrid model: Allows businesses to use cloud-based Anchore Enterprise while maintaining on-premises security scanning.

Bonus: Anchore also offers an air-gapped deployment option for organizations working with customers that provide critical national infrastructure like energy, financial services or defense.

See how Anchore Enterprise enabled Dreamfactory to support the defense industry.

4. Comprehensive Policy and Compliance Management

Anchore helps organizations meet regulatory requirements with built-in policy enforcement:

  • Out-of-the-box policies: CIS benchmarks, FedRAMP, and DISA STIG compliance.
  • Integrated vulnerability databases: Automated vulnerability assessment using industry-standard sources like OSS Index, NVD, Snyk, and VEX advisories.
  • User-defined policy-as-code: Custom policies to detect software misconfigurations and enforce security best practices.

Custom user policies are a helpful feature for defining security policies based on geography; security and compliance requirements can vary widely across national borders.

5. Developer-Friendly Approach

A major challenge in SBOM adoption is developer resistance due to complexity. Anchore makes security analysis developer-friendly by:

  • Providing CLI and API tools for easy interaction.
  • Delivering clear, actionable vulnerability reports instead of overwhelming developers with false positives.
  • Integrating directly with development environments, such as VS Code and JetBrains IDEs.
  • Providing industry-standard 24/7 customer support through Anchore’s customer success team.

Conclusion

Anchore has positioned itself as a leader in SBOM analysis by making integration effortless, automating security checks, and supporting industry standards. Whether an organization is adopting SBOMs for the first time or looking to enhance its software supply chain security, Anchore provides a scalable and developer-friendly solution.

By integrating automated SBOM generation, CI/CD compatibility, cloud native deployment, and compliance management, Anchore enables businesses (no matter the size) and government institutions to adopt SBOM analysis without disrupting their workflows. As software security becomes increasingly critical, tools like Anchore will play a pivotal role in ensuring a secure and transparent software supply chain.

For organizations seeking a simple-to-deploy SBOM analysis solution, Anchore Enterprise is here to deliver results to your organization. Request a demo with our team today!

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

DORA + SBOM Primer: Achieving Software Supply Chain Security in Regulated Industries

At Anchore, we frequently discuss the steady drum beat of regulatory bodies mandating SBOMs (Software Bills of Materials) as the central element of modern software supply chain security. The Digital Operational Resilience Act (DORA) is the most recent framework responding to the accelerating growth of software supply chain attacks—by requiring, in all but name, the kind of structured software inventory that SBOMs provide.

In this post, we provide an educational overview of DORA, explain its software supply chain implications, and outline how SBOMs factor into DORA compliance. We’ll also share how to achieve compliance—and how Anchore Enterprise can serve as your DORA compliance “easy button.”

What is DORA?

The Digital Operational Resilience Act (DORA)—formally Regulation (EU) 2022/2554—is an EU regulatory framework designed to ensure the digital operational resilience of financial entities. Key points include:

Effective Date: January 17, 2025

  • TL;DR: It is already being enforced.

Scope: Applies to a wide range of EU financial entities, including:

  • Banks
  • Payment service providers
  • Investment firms
  • Crypto-asset service providers, and more

Core Topics

  • Proactive Risk Management: For both 1st- and 3rd-party software.
  • Incident Response and Recovery: Mandating robust strategies to handle ICT (Information and Communication Technology) disruptions.
  • Resilience Testing: Regular, thorough testing of incident response and risk management systems.
  • Industry Threat Information Sharing: Collaboration across the sector to share threat intelligence.
  • 3rd-party Software Supplier Oversight: Continuous monitoring of 3rd-party software supply chain.

DORA is organized into a high-level cybersecurity and risk management framework document and a set of separate technical documents, the Regulatory Technical Standards (RTS), that outline in detail how to achieve compliance. If you’re familiar with NIST’s RMF (NIST 800-37) and its “Control Catalog” (NIST 800-53), DORA follows the same pattern.

What challenge does DORA solve?

In part driven by a 2020 study that highlighted “systemic cyber risk” due to the “high level of interconnectedness” among the technologies used by financial organizations, DORA aims to mitigate the risk that a vulnerability in one component could lead to widespread sector disruption. Two critical factors underline this need:

  • The Structure of Modern Software Development: With extensive 3rd-party and open source dependencies, any gap in security can have cascading consequences.
  • The Rise of Software Supply Chain Attacks: Now that open source software “constitutes 70-90% of any given piece of modern software” threat actors have embraced this attack vector. Software supply chain attacks have not only become a primary cybersecurity target but are seeing accelerating growth.

DORA is designed to fortify the financial sector’s digital resilience by addressing vulnerabilities in modern software development and countering the rapid rise of software supply chain attacks.

What are the consequences of DORA non-compliance?

Compliance is not optional. The European Supervisory Authorities (ESAs) have been given broad powers to:

  • Access Documents and Data: Assessing an organization’s compliance status through comprehensive audits.
  • Conduct On-Site Investigations: Ensuring that all software supply chain controls are in place and functioning.
  • Enforce Steep Penalties: For instance, DORA Article 35 notes that critical ICT third-party service providers could face fines of up to “1% of the average daily worldwide turnover… in the preceding business year.”

For financial entities—and their technology suppliers—the cost of non-compliance is too high to ignore.

Does DORA Require an SBOM?

DORA does not explicitly mention “SBOMs” by name. Instead, it mandates organizations track “third-party libraries, including open-source libraries”. SBOMs are the industry standard method for achieving this result in an automated and scalable manner. 

Specifically, financial entities are required to track:

  • Third-Party Libraries: Including open-source libraries used by ICT services that support critical or important functions.
  • In-House or Custom Software: ICT services developed internally or specifically customized by an ICT third-party service provider.

Stating general requirements without naming a specific technology (like an SBOM) is a common pattern across global regulatory compliance frameworks (e.g., SSDF).

Another reason to adopt SBOMs for DORA compliance is that the EU Cyber Resilience Act (CRA) specifically names SBOMs as a required compliance artifact. Adopting SBOMs kills two birds with one stone.

DORA and Software Supply Chain Security 

DORA Regulation 56 underscores the necessity of open source analysis (or Software Composition Analysis, SCA) as a fundamental component for achieving operational resilience. SCAs are software supply chain security tools that are typically tightly coupled with SBOM generators.

Standalone SCA and SBOM generation tools are fantastic for creating simple point-in-time inventories and generating the compliance artifacts needed to pass an initial audit. Unfortunately, DORA demands that financial entities continuously monitor their software supply chain:

  • Ongoing Monitoring: Article 10 of DORA requires that financial entities, in collaboration with their ICT third-party service providers, not only maintain an inventory but also track version updates and monitor third-party libraries on an ongoing basis.
  • Continuous Software Supply Chain Risk Management: It’s not enough to have an SBOM at one moment in time; you must continuously scan and update the inventory to ensure that vulnerabilities are promptly identified and remediated.

This level of supply chain security requires organizations to directly integrate SBOM generation into their DevSecOps pipeline and utilize an SBOM management platform.
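The ongoing monitoring that Article 10 demands is, at its simplest, a diff between successive inventory snapshots: track version updates, new components, and removals. A simplified sketch with made-up package data:

```python
# Two inventory snapshots derived from SBOMs (illustrative data)
previous = {"openssl": "3.0.7",  "zlib": "1.2.13"}
current  = {"openssl": "3.0.13", "zlib": "1.2.13", "libxml2": "2.12.5"}

# Version updates: same package, different version across snapshots
updated = {p: (previous[p], v) for p, v in current.items()
           if p in previous and previous[p] != v}
# Components that appeared or disappeared between snapshots
added   = {p: v for p, v in current.items() if p not in previous}
removed = {p: v for p, v in previous.items() if p not in current}

print(updated)  # {'openssl': ('3.0.7', '3.0.13')}
print(added)    # {'libxml2': '2.12.5'}
print(removed)  # {}
```

Running this comparison on every pipeline execution, rather than once per audit, is the difference between a point-in-time inventory and continuous monitoring.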

How to Fulfill DORA’s Software Supply Chain Requirements

1. Software Composition Analysis (SCA) and SBOM Generation

2. Ingest SBOMs from Third-Party Suppliers

  • Collaborative Supply Chain Management: Ensure that you receive and maintain SBOM data from all your third-party suppliers to achieve full visibility into the software components you rely on.

3. Continuous Monitoring in Production

  • Regular Scanning: Implement continuous monitoring to detect any unexpected changes or vulnerabilities in production environments.
  • Key Features to Look For:
    • Alerts for unexpected software components
    • A centralized repository to store and manage SBOMs
    • Revision history tracking to monitor changes over time
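A toy sketch of the last two features, a centralized repository with revision history; a production SBOM management platform would add persistence, search, and access control, and all names here are hypothetical:

```python
import datetime
from collections import defaultdict

class SbomStore:
    """Minimal in-memory SBOM repository with per-service revision history."""

    def __init__(self):
        # service name -> list of (timestamp, sbom) tuples, oldest first
        self._history = defaultdict(list)

    def push(self, service, sbom, ts=None):
        """Record a new SBOM revision for a service."""
        ts = ts or datetime.datetime.now(datetime.timezone.utc)
        self._history[service].append((ts, sbom))

    def latest(self, service):
        """Return the most recently pushed SBOM for a service."""
        return self._history[service][-1][1]

    def revisions(self, service):
        """Return how many SBOM revisions are stored for a service."""
        return len(self._history[service])

store = SbomStore()
store.push("payments-api", {"components": ["openssl@3.0.7"]})
store.push("payments-api", {"components": ["openssl@3.0.13"]})
print(store.revisions("payments-api"))  # 2
```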

DORA Compliance Easy Button: Anchore Enterprise

Anchore Enterprise is engineered to satisfy all of DORA’s software supply chain requirements, acting as your DORA compliance easy button. Here’s how Anchore Enterprise can help:

Automated Software Supply Chain Risk Management

  • End-to-End SBOM Lifecycle Management (Anchore SBOM): Automatically generate and update SBOMs throughout your software development lifecycle.
  • Programmatic Vulnerability Scanning & Risk Assessment (Anchore Secure): Continuously scan software components and assess risk based on real-time data.
  • Policy-as-Code Enforcement (Anchore Enforce): Automate risk management policies to ensure adherence to DORA’s requirements.

Software Supply Chain Incident Response Automation

  • Continuous SCA and SBOM Generation (Anchore SBOM): Keep an updated view of your production software environment.
  • Surgical Zero-Day Vulnerability Identification (Anchore Secure): Quickly identify and remediate vulnerabilities, reducing the potential blast radius of an attack.

Google resolved the XZ Utils zero-day incident in less than 10 minutes by utilizing SBOMs and an SBOM management platform. Anchore Enterprise can help your organization achieve similar results with its SBOM management solutions.

Continuous Compliance Monitoring

  • Real-Time Dashboards (Anchore Enforce): Monitor compliance status with customizable, real-time dashboards.
  • Automated Compliance Reports (Anchore Enforce): Generate and share compliance reports with stakeholders effortlessly.
  • Policy-as-Code Compliance Automation (Anchore Enforce): Enforce compliance at every stage of the software development lifecycle.

If you’re interested in trying any of these features for yourself, Anchore Enterprise offers a 15-day free trial or reach out to our team for a demo of the platform.

Wrap-Up

DORA is redefining software supply chain security in the financial sector by demanding transparency, proactive risk management, and continuous monitoring of 3rd-party suppliers. For technology providers, this shift represents both a challenge and an opportunity: by embracing SBOMs and comprehensive supply chain security practices, you not only help your customers achieve regulatory compliance but also strengthen your own security posture.

At Anchore, we’re committed to helping you navigate this evolving landscape with solutions designed for the modern world of software supply chain security. Ready to meet DORA head-on? Contact us today or visit our blog for more insights and resources.

Learn about the role that SBOMs play in the security, including open source software (OSS) security, of your organization in this white paper.

2025 Cybersecurity Executive Order Requires Up Leveled Software Supply Chain Security

A few weeks ago, the Biden administration published a new Executive Order (EO) titled “Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity”. This is a follow-up to the original cybersecurity executive order—EO 14028—from May 2021. This latest EO specifically targets improvements to software supply chain security that addresses gaps and challenges that have surfaced since the release of EO 14028. 

While many issues were identified, the executive order named and shamed software vendors for signing and submitting secure software development compliance documents without fully adhering to the framework. The full quote:

In some instances, providers of software to the Federal Government commit to following cybersecurity practices, yet do not fix well-known exploitable vulnerabilities in their software, which puts the Government at risk of compromise. The Federal Government needs to adopt more rigorous 3rd-party risk management practices and greater assurance that software providers… are following the practices to which they attest.

In response to this behavior, the 2025 Cybersecurity EO has taken a number of additional steps to both encourage cybersecurity compliance and deter non-compliance—the carrot and the stick. This comes in the form of 4 primary changes:

  1. Compliance verification by CISA
  2. Legal repercussions for non-compliance
  3. Contract modifications for Federal agency software acquisition
  4. Mandatory adoption of software supply chain risk management practices by Federal agencies

In this post, we’ll explore the new cybersecurity EO in detail, what has changed in software supply chain security compliance and what both federal agencies and software providers can do right now to prepare.

What Led to the New Cybersecurity Executive Order?

In the wake of massive growth of supply chain attacks—especially those from nation-state threat actors like China—EO 14028 made software bills of materials (SBOMs) and software supply chain security spotlight agenda items for the Federal government. As directed by the EO, the National Institute of Standards and Technology (NIST) authored the Secure Software Development Framework (SSDF) to codify the specific secure software development practices needed to protect the US and its citizens. 

Following this, the Office of Management and Budget (OMB) published a memo that established a deadline for agencies to require vendor compliance with the SSDF. Importantly, the memo allowed vendors to “self-attest” to SSDF compliance.

In practice, many software providers chose to go the easy route and submitted SSDF attestations while only partially complying with the framework. Although the government initially hoped that vendors would not exploit this somewhat obvious loophole, reality intervened, leading to the issuance of the 2025 cybersecurity EO to close off these opportunities for non-compliance.

What’s Changing in the 2025 EO?

1. Rigorous verification of secure software development compliance

No longer can vendors simply self-attest to SSDF compliance. The Cybersecurity and Infrastructure Security Agency (CISA) is now tasked with validating these attestations via the additional compliance artifacts the new EO requires. Providers that fail validation risk increased scrutiny and potential consequences such as…

2. CISA authority to refer non-compliant vendors to DOJ

A major shift comes from the EO’s provision allowing CISA to forward fraudulent attestations to the Department of Justice (DOJ). In the EO’s words, officials may “refer attestations that fail validation to the Attorney General for action as appropriate.” This raises the stakes for software vendors, as submitting false information on SSDF compliance could lead to legal consequences. 

3. Explicit SSDF compliance in software acquisition contracts

The Federal Acquisition Regulatory Council (FAR Council) will issue contract modifications that explicitly require SSDF compliance and additional items to enable CISA to programmatically verify compliance. Federal agencies will incorporate updated language in their software acquisition contracts, making vendors contractually accountable for any misrepresentations in SSDF attestations.

See the “FAR council contract updates” section below for the full details.

4. Mandatory adoption of supply chain risk management

Agencies must now embed supply chain risk management (SCRM) practices agency-wide, aligning with NIST SP 800-161, which details best practices for assessing and mitigating risks in the supply chain. This elevates SCRM to a “must-have” strategy for every Federal agency.

Other Notable Changes

Updated software supply chain security compliance controls

NIST will update both NIST SP 800-53, the “Control Catalog”, and the SSDF (NIST SP 800-218) to align with the new policy. The updates will incorporate additional controls and greater detail on existing controls. Although no controls have yet been formally added or modified, NIST is tasked with modernizing these documents to align with changes in secure software development practices. Once those updates are complete, agencies and vendors will be expected to meet the revised requirements.

Policy-as-code pilot

Section 7 of the EO describes a pilot program focused on translating security controls from compliance frameworks into “rules-as-code” templates. Essentially, this adopts a policy-as-code approach, often seen in DevSecOps, to automate compliance. By publishing machine-readable security controls that can be integrated directly into DevOps, security, and compliance tooling, organizations can reduce manual overhead and friction, making it easier for both agencies and vendors to consistently meet regulatory expectations.
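To make the idea concrete, here is a minimal, hypothetical sketch of what a “rules-as-code” control check might look like. The control format and system-state fields are invented for illustration; they are not taken from the EO or any published template:

```python
# Minimal sketch of a "rules-as-code" compliance check.
# The control definition and system-state fields are hypothetical,
# illustrating machine-readable controls only.

# A compliance control expressed as data instead of prose.
control = {
    "id": "AC-EXAMPLE-1",
    "description": "Container images must be scanned before deployment",
    "field": "image_scanned",
    "expected": True,
}

def evaluate(control, system_state):
    """Return (control_id, passed) for one machine-readable control."""
    actual = system_state.get(control["field"])
    return control["id"], actual == control["expected"]

state = {"image_scanned": True}
print(evaluate(control, state))  # ('AC-EXAMPLE-1', True)
```

Because the control is data, the same check can run unchanged in CI pipelines, audit tooling, and agency-side verification, which is the friction reduction the pilot is after.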

FedRAMP incentives and new key management controls

The Federal Risk and Authorization Management Program (FedRAMP), responsible for authorizing cloud service providers (CSPs) for federal use, will also see important updates. FedRAMP will develop policies that incentivize or require CSPs to share recommended security configurations, thereby promoting a standard baseline for cloud security. The EO also proposes updates to FedRAMP security controls to address cryptographic key management best practices, ensuring that CSPs properly safeguard cryptographic keys throughout their lifecycle.

How to Prepare for the New Requirements

FAR council contract updates

Within 30 days of the EO release, the FAR Council will publish recommended contract language. This updated language will mandate:

  • Machine-readable SSDF Attestations: Vendors must provide an SSDF attestation in a structured, machine-readable format.
  • Compliance Reporting Artifacts: High-level artifacts that demonstrate evidence of SSDF compliance, potentially including automated scan results, security test reports, or vulnerability assessments.
  • Customer List: A list of all civilian agencies using the vendor’s software, enabling CISA and federal agencies to understand the scope of potential supply chain risk.

Important Note: The 30-day window applies to the FAR Council to propose new contract language—not for software vendors to become fully compliant. However, once the new contract clauses are in effect, vendors who want to sell to federal agencies will need to meet these updated requirements.
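For illustration, a machine-readable SSDF attestation might be structured like the hypothetical record below. The field names and practice identifiers are invented for the sketch, since the EO leaves the exact schema to the FAR Council and CISA to define:

```python
import json

# Hypothetical machine-readable SSDF attestation record; the field
# names are illustrative, not a published schema.
attestation = {
    "vendor": "Example Software Co.",
    "product": "example-app",
    "version": "2.4.1",
    "framework": "NIST SP 800-218 (SSDF)",
    "practices_attested": ["PO.1", "PS.1", "PW.4", "RV.1"],
    "artifacts": ["vulnerability-scan-report.json", "sbom.spdx.json"],
    "attested_on": "2025-01-31",
}

# Serializing to JSON is what makes the attestation programmatically
# verifiable, rather than a signed PDF a human has to read.
payload = json.dumps(attestation, indent=2)
print(payload)
```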

Action Steps for Federal Agencies

Federal agencies will bear new responsibilities to ensure compliance and mitigate supply chain risk. Here’s what you should do now:

  1. Inventory 3rd-Party Software Component Suppliers
  2. Assess Visibility into Supplier Risk
    • Perform a vulnerability scan on all 3rd-party components. If you already have SBOMs, scanning them for vulnerabilities is a quick way to identify risk.
  3. Identify Critical Suppliers
    • Determine which software vendors are mission-critical. This helps you understand where to focus your risk management efforts.
  4. Assess Data Sensitivity
    • If a vendor handles sensitive data (e.g., PII), extra scrutiny is needed. A breach here has broader implications for the entire agency.
  5. Map Potential Lateral Movement Risk
    • Understand if a vendor’s software could provide attackers with lateral movement opportunities within your infrastructure.
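As a rough sketch of step 2, the snippet below matches components from an SBOM inventory against a known-vulnerable list. The component and advisory data are made up for illustration; in practice the vulnerability data would come from a feed such as NVD:

```python
# Sketch: match 3rd-party components from an SBOM inventory against
# a known-vulnerable list. All data here is illustrative.

sbom_components = [
    {"name": "log4j-core", "version": "2.14.1", "supplier": "apache"},
    {"name": "openssl", "version": "3.0.13", "supplier": "openssl"},
]

# In practice this mapping comes from a vulnerability feed.
known_vulnerable = {("log4j-core", "2.14.1"): "CVE-2021-44228"}

def find_risky(components, vuln_db):
    """Return (component name, CVE id) pairs for vulnerable matches."""
    hits = []
    for c in components:
        cve = vuln_db.get((c["name"], c["version"]))
        if cve:
            hits.append((c["name"], cve))
    return hits

print(find_risky(sbom_components, known_vulnerable))
# [('log4j-core', 'CVE-2021-44228')]
```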

Action Steps for Software Providers

For software vendors, especially those who sell to the federal government, proactivity is key to maintaining and expanding federal contracts.

  1. Inventory Your Software Supply Chain
    • Implement an SBOM-powered SCA solution within your DevSecOps pipeline to gain real-time visibility into all 3rd-party components.
  2. Assess Supplier Risk
    • Perform vulnerability scanning on 3rd-party supplier components to identify any that could jeopardize your compliance or your customers’ security.
  3. Identify Sensitive Data Handling
    • If your software processes personally identifiable information (PII) or other sensitive data, expect increased scrutiny. On the flip side, this may make your offering “mission-critical” and less prone to replacement—but it also means compliance standards will be higher.
  4. Map Your Own Attack Surface
    • Assess whether a 3rd-party supplier breach could cascade into your infrastructure and, by extension, your customers.
  5. Prepare Compliance Evidence
    • Start collecting artifacts—such as vulnerability scan reports, secure coding guidelines, and internal SSDF compliance checklists—so you can quickly meet new attestation requirements when they come into effect.

Wrap-Up

The 2025 Cybersecurity EO is a direct response to the flaws uncovered in EO 14028’s self-attestation approach. By requiring 3rd-party validation of SSDF compliance, the government aims to create tangible improvements in its cybersecurity posture—and expects the same from all who supply its agencies.

Given the rapid timeline, preparation is crucial. Both federal agencies and software providers should begin assessing their supply chain risks, implementing SBOM-driven visibility, and proactively planning for new attestation and reporting obligations. By taking steps now, you’ll be better positioned to meet the new requirements.

Learn about SSDF Attestation with this guide. The eBook will take you through everything you need to know to get started.

SSDF Attestation 101: A Practical Guide for Software Producers

Software Supply Chain Security in 2025: SBOMs Take Center Stage

In recent years, we’ve witnessed software supply chain security transition from a quiet corner of cybersecurity into a primary battlefield. This is due to the increasing complexity of modern software that obscures the full truth—applications are a tower of components of unknown origin. Cybercriminals have fully embraced this hidden complexity as a ripe vector to exploit.

Software Bills of Materials (SBOMs) have emerged as the focal point to achieve visibility and accountability in a software ecosystem that will only grow more complex. SBOMs are an inventory of the complex dependencies that make up modern applications. SBOMs help organizations scale vulnerability management and automate compliance enforcement. The end goal is to increase transparency in an organization’s supply chain where 70-90% of modern applications are open source software (OSS) dependencies. This significant source of risk demands a proactive, data-driven solution.

Looking ahead to 2025, we at Anchore, see two trends for SBOMs that foreshadow their growing importance in software supply chain security:

  1. Global regulatory bodies continue to steadily drive SBOM adoption
  2. Foundational software ecosystems begin to implement build-native SBOM support

In this blog, we’ll walk you through the contextual landscape that leads us to these conclusions; keep reading if you want more background.

Global Regulatory Bodies Continue Growing Adoption of SBOMs

As supply chain attacks surged, policymakers and standards bodies recognized this new threat vector as a critical threat with national security implications. To stem the rising tide of supply chain threats, global regulatory bodies have recognized that SBOMs are part of the solution.

Over the past decade, we’ve witnessed a global legislative and regulatory awakening to the utility of SBOMs. Early attempts like the US Cyber Supply Chain Management and Transparency Act of 2014 may have failed to pass, but they paved the way for more significant milestones to come. Things began to change in 2021, when the US Executive Order (EO) 14028 explicitly named SBOMs as the foundation for a secure software supply chain. The following year the European Union’s Cyber Resilience Act (CRA) pushed SBOMs from “suggested best practice” to “expected norm.”

The one-two punch of the US’s EO 14028 and the EU’s CRA has already prompted action among regulators worldwide. In the years following these mandates, numerous global bodies issued or updated their guidance on software supply chain security practices—specifically highlighting SBOMs. Cybersecurity offices in Germany, India, Britain, Australia, and Canada, along with the broader European Union Agency for Cybersecurity (ENISA), have each underscored the importance of transparent software component inventories. At the same time, industry consortiums in the US automotive (Auto-ISAC) and medical device (IMDRF) sectors recognized that SBOMs can help safeguard their own complex supply chains, as have federal agencies such as the FDA, NSA, and the Department of Commerce.

By the close of 2024, the pressure mounted further. In the US, the Office of Management and Budget (OMB) set a due date requiring all federal agencies to comply with the Secure Software Development Framework (SSDF), effectively reinforcing SBOM usage as part of secure software development. Meanwhile, across the Atlantic, the EU CRA officially became law, cementing SBOMs as a cornerstone of modern software security. This constant pressure ensures that SBOM adoption will only continue to grow. It won’t be long until SBOMs become table stakes for anyone operating an online business. We expect the steady march forward of SBOMs to continue in 2025.

In fact, this regulatory push has been noticed by the foundational ecosystems of the software industry and they are reacting accordingly.

Software Ecosystems Trial Build-Native SBOM Support

Until now, SBOM generation has been relegated to an afterthought in software ecosystems. Businesses scan their internal supply chains with software composition analysis (SCA) tools, trying to piece together a picture of their dependencies. But as SBOM adoption continues its upward momentum, this model is evolving. In 2025, we expect leading software ecosystems to promote SBOMs to first-class citizens and integrate them natively into their build tools.

Industry experts have recently begun advocating for this change. Brandon Lum, the SBOM Lead at Google, notes, “The software industry needs to improve build tools propagating software metadata.” Rather than forcing downstream consumers to infer the software’s composition after the fact, producers will generate SBOMs as part of standard build pipelines. This approach reduces friction, makes application composition discoverable, and ensures that software supply chain security is not left behind.

We are already seeing early examples:

  • Linux Ecosystem (Yocto): The Yocto Project’s OpenEmbedded build system now includes native SBOM generation. This demonstrates the feasibility of integrating SBOM creation directly into the developer toolchain, establishing a blueprint for other ecosystems to follow.
  • Python Ecosystem: In 2024, Python maintainers explored proposals for build-native SBOM support, motivated by regulations such as the Secure Software Development Framework (SSDF) and the EU’s CRA. They envision a future where projects, package maintainers, and contributors can easily annotate their code with software dependency metadata that automatically propagates at build time.
  • Perl Ecosystem: The Perl Security Working Group has also begun exploring internal proposals for SBOM generation, again driven by the CRA’s regulatory changes. Their goal: ensure that Perl packages have SBOM data baked into their ecosystems so that compliance and security requirements can be met more effortlessly.
  • Java Ecosystem: The Eclipse Foundation and VMware’s Spring Boot team have introduced plug-ins for Java build tools like Maven or Gradle that streamline SBOM generation. While not fully native to the compiler or interpreter, these integrations lower the barrier to SBOM adoption within mainstream Java development workflows.

In 2025 we won’t just be talking about build-native SBOMs in abstract terms—we’ll have experimental support for them from the most forward-thinking ecosystems. This shift is still in its infancy, but it foreshadows the central role that SBOMs will play in the future of cybersecurity and software development as a whole.

Closing Remarks

The writing on the wall is clear: supply chain attacks aren’t slowing down—they are accelerating. In a world of complex, interconnected dependencies, every organization must know what’s inside its software to quickly spot and respond to risk. As SBOMs move from a nice-to-have to a fundamental part of building secure software, teams can finally gain the transparency they need over every component they use, whether open source or proprietary. This clarity is what will help them respond to the next Log4j or XZ Utils issue before it spreads, putting security teams back in the driver’s seat and ensuring that software innovation doesn’t come at the cost of increased vulnerability.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

All Things SBOM in 2025: a Weekly Webinar Series

Software Bills of Materials (SBOMs) have quickly become a critical component in modern software supply chain security. By offering a transparent view of all the components that make up your applications, SBOMs enable you to pinpoint vulnerabilities before they escalate into costly incidents.

As we enter 2025, software supply chain security and risk management for 3rd-party software dependencies are top of mind for organizations. The 2024 Anchore Software Supply Chain Security Survey notes that 76% of organizations consider these challenges top priorities. Given this, it is easy to see why understanding what SBOMs are—and how to implement them—is key to a secure software supply chain.

To help organizations achieve these top priorities, Anchore is hosting a weekly webinar series dedicated entirely to SBOMs. Beginning January 14 and running throughout Q1, our webinar line-up will explore a wide range of topics (see below). Industry luminaries like Kate Stewart (co-founder of the SPDX project) and Steve Springett (Chair of the OWASP CycloneDX Core Working Group) will be dropping in to provide unique insights and their special blend of expertise on all things SBOMs.

The series will cover:

  • SBOM basics and best practices
  • SPDX and SBOMs in-depth with Kate Stewart
  • Getting started: How to generate an SBOM
  • Software supply chain security and CycloneDX with Steve Springett
  • Scaling SBOMs for the enterprise
  • Real-world insights on applying SBOMs in high-stakes or regulated sectors
  • A look ahead at the future of SBOMs and software supply chain security with Kate Stewart
  • And more!

We invite you to learn from experts, gain practical skills, and stay ahead of the rapidly evolving world of software supply chain security. Visit our events page to register for the webinars now or keep reading to get a sneak peek at the content.

#1 Understanding SBOMs: An Intro to Modern Development

Date/Time: Tuesday, January 14, 2025 – 10am PST / 1pm EST
Featuring: 

  • Lead Developer of Syft
  • Anchore VP of Security
  • Anchore Director of Developer Relations

We are kicking off the series with an introduction to the essentials of SBOMs. This session will cover the basics of SBOMs—what they are, why they matter, and how to get started generating and managing them. Our experts will walk you through real-world examples (including Log4j) to show just how vital it is to know what’s in your software.

Key Topics:

This webinar is perfect for both technical practitioners and business leaders looking to establish a strong SBOM foundation.

#2 Understanding SBOMs: Deep Dive with Kate Stewart

Date/Time: Wednesday, January 22, 2025 – 10am PST / 1pm EST
Featured Guest: Kate Stewart (co-founder of SPDX)

Our second session brings you a front-row seat to an in-depth conversation with Kate Stewart, co-founder of the SPDX project. Kate is a leading voice in software supply chain security and the SBOM standard. From the origins of the SPDX standard to the latest challenges in license compliance, Kate will provide an extensive behind-the-scenes look into the world of SBOMs.

Key Topics:

  • The history and evolution of SBOMs, including the creation of SPDX
  • Balancing license compliance with security requirements
  • How SBOMs support critical infrastructure with national security concerns
  • The impact of emerging technology—like open source LLMs—on SBOM generation and analysis

If you’re ready for a more advanced look at SBOMs and their strategic impact, you won’t want to miss this conversation.

#3 How to Automate, Generate, and Manage SBOMs

Date/Time: Wednesday, January 29, 2025 – 9am PST / 12pm EST
Featuring: 

  • Anchore Director of Developer Relations
  • Anchore Principal Solutions Engineer

For those seeking a hands-on approach, this webinar dives into the specifics of automating SBOM generation and management within your CI/CD pipeline. Anchore’s very own Alan Pope (Developer Relations) and Sean Fazenbaker (Solutions) will walk you through proven techniques for integrating SBOMs to reveal early vulnerabilities, minimize manual interventions, and improve overall security posture.

Key Topics:

This is the perfect session for teams focused on shifting security left and preserving developer velocity.

What’s Next?

Beyond our January line-up, we have more exciting sessions planned throughout Q1. Each webinar will feature industry experts and dive deeper into specialized use-cases and future technologies:

  • CycloneDX & OWASP with Steve Springett – A closer look at this popular SBOM format, its technical architecture, and VEX integration.
  • SBOM at Scale: Enterprise SBOM Management – Learn from large organizations that have successfully rolled out SBOM practices across hundreds of applications.
  • SBOMs in High-Stakes Environments – Explore how regulated industries like healthcare, finance, and government handle unique compliance challenges and risk management.
  • The Future of Software Supply Chain Security – Join us in March as we look ahead at emerging standards, tools, and best practices with Kate Stewart returning as the featured guest.

Stay tuned for dates and registration details for each upcoming session. Follow us on your favorite social network (Twitter, LinkedIn, Bluesky) or visit our events page to stay up-to-date.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.


Navigating Open Source Software Compliance in Regulated Industries

Open source software (OSS) brings a wealth of benefits: speed, innovation, cost savings. But when serving customers in highly regulated industries like defense, energy, or finance, a new complication enters the picture—compliance.

Imagine your DevOps-fluent engineering team has been leveraging OSS to accelerate product delivery, and suddenly, a major customer hits you with a security compliance questionnaire. What now? 

Regulatory compliance isn’t just about managing the risks of OSS for your business anymore; it’s about providing concrete evidence that you meet standards like FedRAMP and the Secure Software Development Framework (SSDF).

The tricky part is that the OSS “suppliers” making up 70-90% of your software supply chain aren’t traditional vendors—they don’t have the same obligations or accountability, and they’re not necessarily aligned with your compliance needs. 

So, who bears the responsibility? You do.

The OSS your engineering team consumes is your resource and your responsibility. This means you’re not only tasked with managing the security risks of using OSS but also with proving that both your applications and your OSS supply chain meet compliance standards. 

In this post, we’ll explore why you’re ultimately responsible for the OSS you consume and outline practical steps to help you use OSS while staying compliant.

Learn about CISA’s SSDF attestation form and how to meet compliance.

What does it mean to use open source software in a regulated environment?

Highly regulated environments add a new wrinkle to the OSS security narrative. The OSS developers who author the software dependencies that make up the vast majority of modern software supply chains aren’t vendors in the traditional sense. They are more of a volunteer workforce that allows you to re-use their work on a take-it-or-leave-it basis. You have no recourse if it doesn’t work as expected or, worse, contains vulnerabilities.

So, how do you meet compliance standards when your software supply chain is built on top of a foundation of OSS?

Who is the vendor? You are!

Whether you have internalized this or not, the open source software that your developers consume is your resource and thus your responsibility.

This means you shoulder the burden not only of managing the security risk of consuming OSS but also of proving that both your applications and your OSS supply chain meet compliance standards.

Open source software is a natural resource

Before we jump into how to accomplish the task set forth in the previous section, let’s take some time to understand why you are the vendor when it comes to open source software.

The common idea is that OSS is produced by a 3rd-party that isn’t part of your organization, so they are the software supplier. Shouldn’t they be the ones required to secure their code? They control and maintain what goes in, right? How are they not responsible?

To answer that question, let’s think about OSS as a natural resource that is shared by the public at large, for instance the public water supply.

This shouldn’t be too much of a stretch. We already use terms like upstream and downstream to think about the relationship between software dependencies and the global software supply chain.

Using this mental model, it becomes easier to understand that a public good isn’t a supplier. You can’t ask a river or a lake for an audit report that it is contaminant free and safe to drink. 

Instead the organization that processes and provides the water to the community is responsible for testing the water and guaranteeing its safety. In this metaphor, your company is the one processing the water and selling it as pristine bottled water. 

How do you pass the buck to your “supplier”? You can’t. That’s the point.

This probably has you asking yourself: if I am responsible for my own OSS supply chain, how do I meet a compliance standard for something I don’t control? Keep reading and you’ll find out.

How do I use OSS and stay compliant?

While compliance standards are often thought of as rigid, the reality is much more nuanced. Just because your organization doesn’t own/control the open source projects that you consume doesn’t mean that you can’t use OSS and meet compliance requirements.

There are a few different steps that you need to take in order to build a “reasonably secure” OSS supply chain that will pass a compliance audit. We’ll walk you through the steps below:

Step 1 — Know what you have (i.e., an SBOM inventory)

The foundation of the global software supply chain is the SBOM (software bill of materials) standard. Each of the security and compliance functions outlined in the steps below use or manipulate an SBOM.

SBOMs are the foundational component of the global software supply chain because they record the ingredients that were used to produce the application an end-user will consume. If you don’t have a good grasp of the ingredients of your applications, there isn’t much hope for producing any downstream security or compliance guarantees.

The best way to create observability into your software supply chain is to generate an SBOM for every single application in your DevSecOps build pipeline—at each stage of the pipeline!
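One benefit of generating an SBOM at every stage is spotting drift between stages—components that appear (or vanish) between build and deploy. A minimal sketch, with invented component lists:

```python
# Sketch: compare SBOMs captured at two pipeline stages to spot
# components that changed between build and deploy ("drift").
# Component (name, version) sets are illustrative.

build_sbom = {("requests", "2.31.0"), ("urllib3", "2.0.7")}
deploy_sbom = {("requests", "2.31.0"), ("urllib3", "2.0.7"), ("curl", "8.5.0")}

introduced = deploy_sbom - build_sbom   # present only at deploy time
removed = build_sbom - deploy_sbom      # missing at deploy time

print(sorted(introduced))  # [('curl', '8.5.0')]
print(sorted(removed))     # []
```

A component that shows up only at deploy time, like the `curl` entry here, is exactly the kind of unreviewed addition per-stage SBOMs are meant to surface.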

Step 2 — Maintain a historical record of application source code

To meet compliance standards like FedRAMP and SSDF, you need to be able to maintain a historical record of the source code of your applications, including: 

  • Where it comes from, 
  • Who created it, and 
  • Any modifications made to it over time.

SBOMs were designed to meet these requirements. They act as a record of how applications were built and when/where OSS dependencies were introduced. They also double as compliance artifacts that prove you are compliant with regulatory standards.

Governments aren’t content with self-attestation (at least not for long); they need hard evidence to verify that you are trustworthy. Even though SSDF is currently self-attestation only, the federal government is known for rolling out compliance frameworks in stages: first advising on best practices, then requiring self-attestation, and finally mandating external validation via a certification process.

The Cybersecurity Maturity Model Certification (CMMC) is a good example of this dynamic process. It recently transitioned from self-attestation to external validation with the introduction of the 2.0 release of the framework.

Step 3 — Manage your OSS vulnerabilities

Not only do you need to keep a record of applications as they evolve over time, you have to track the known vulnerabilities of your OSS dependencies to achieve compliance. Just as SBOMs prove provenance, vulnerability scans are proof that your application and its dependencies aren’t vulnerable. These scans are a crucial piece of the evidence that you will need to provide to your compliance officer as you go through the certification process. 

Remember, the buck stops with you! If the OSS your application consumes doesn’t supply an SBOM and vulnerability scan (true of essentially all OSS projects), then you are responsible for creating them. There is no vendor to pass the blame to; proving that your supply chain is reasonably secure, and thus compliant, falls to you.

Step 4 — Continuous compliance of open source software supply chain

It is important to recognize that modern compliance standards are no longer sprints but marathons. Not only do you have to prove that your applications are compliant at the time of audit, but you must also demonstrate that they remain secure continuously in order to maintain your certification.

This can be challenging to scale, but it is made easier by integrating SBOM generation, vulnerability scanning, and policy checks directly into the DevSecOps pipeline. This is the approach that modern, SBOM-powered SCA tools advocate.

By embedding compliance policy-as-code into your DevSecOps pipeline as policy gates, compliance can be maintained over time. Developers are alerted when their code doesn’t meet a compliance standard and are directed to take corrective action. These compliance checks can also be used to automatically generate the needed compliance artifacts.
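A policy gate of this kind can be sketched as a severity threshold check, similar in spirit to a scanner’s fail-on-severity option. The findings data and severity ranks below are illustrative, not any particular tool’s schema:

```python
# Sketch of a policy gate: fail the pipeline when a scan finds
# vulnerabilities at or above a severity threshold. Data is illustrative.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def policy_gate(findings, fail_on="high"):
    """Return (passed, blocking_findings) for a list of scan findings."""
    threshold = SEVERITY_RANK[fail_on]
    blocking = [f for f in findings if SEVERITY_RANK[f["severity"]] >= threshold]
    return len(blocking) == 0, blocking

scan = [
    {"id": "CVE-2024-0001", "severity": "medium"},
    {"id": "CVE-2024-0002", "severity": "critical"},
]
passed, blocking = policy_gate(scan, fail_on="high")
print(passed)  # False: the critical finding blocks the release
```

In CI, a failed gate would stop the build and point the developer at the blocking findings; a passing run can double as a timestamped compliance artifact.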

You already have an automated DevSecOps pipeline that produces and delivers applications with minimal human intervention, so why not take advantage of this existing tooling to automate open source software compliance in the same way that security was integrated directly into DevOps?

Real-world Examples

To help bring these concepts to life, we’ve outlined some real-world examples of how open source software and compliance intersect:

Open source project has unfixed vulnerabilities

This is by far the most common issue that comes up during compliance audits. One of your application’s OSS dependencies has a known vulnerability that has been sitting in the backlog for months or even years!

There are several reasons why an open source software developer might leave a known vulnerability unresolved:

  • They prioritize a feature over fixing a vulnerability
  • The vulnerability is from a third-party dependency they don’t control and can’t fix
  • They don’t like fixing vulnerabilities and choose to ignore it
  • They reviewed the vulnerability and decided it’s not likely to be exploited, so it’s not worth their time
  • They’re planning a codebase refactor that will address the vulnerability in the future

These are all rational reasons for vulnerabilities to persist in a codebase. Remember, OSS projects are owned and maintained by 3rd-party developers who control the repository; they make no guarantees about its quality. They are not vendors.

You, on the other hand, are a vendor and must meet compliance requirements. The responsibility falls on you. An OSS vulnerability management program is how you meet your compliance requirements while enjoying the benefits of OSS.

Need to fill out a supplier questionnaire

Imagine you’re a cloud service provider or software vendor. Your sales team is trying to close a deal with a significant customer. As the contract nears signing, the customer’s legal team requests a security questionnaire. They’re in the business of protecting their organization from financial risk stemming from their supply chain, and your company is about to become part of that supply chain.

These forms usually come from lawyers; they are formal and not focused on technical attack details. They just want to know what you’re using. The quick answer? “Here’s our SBOM.”

Compliance comes in the form of public standards like FedRAMP, SSDF, NIST, etc., and these less formal security questionnaires. Either way, being unable to provide a full accounting of the risks in your software supply chain can be a speed bump to your organization’s revenue growth and success.

Integrating SBOM scanning, generation, and management deeply into your DevSecOps pipeline is key to accelerating the sales process and your organization’s overall success.

Prove provenance

CISA’s SSDF Attestation form requires that enterprises selling software to the federal government can produce a historical record of their applications. Quoting directly: “The software producer [must] maintain provenance for internal code and third-party components incorporated into the software to the greatest extent feasible.”

If you want access to the revenue opportunities the U.S. federal government offers, SSDF attestation is the needle you have to thread. Meeting this requirement without hiring an army of compliance engineers to manually review your entire DevSecOps pipeline demands an automated OSS component observability and management system.

Often, we jump to cryptographic signatures, encryption keys, trust roots—this quickly becomes a mess. Really, just a hash of the files in a database (read: SBOM inventory) satisfies the requirement. Sometimes, simpler is better. 

Discover the “easy button” to SSDF Attestation and OSS supply chain compliance in our previous blog post.

Takeaways

OSS Is Not a Vendor—But You Are! The best way to have your OSS cake and eat it too (without the indigestion) is to:

  1. Know Your Ingredients: Maintain an SBOM inventory of your OSS supply chain.
  2. Maintain a Complete Historical Record: Keep track of your application’s source code and build process.
  3. Scan for Known Vulnerabilities: Regularly check your OSS dependencies.
  4. Continuous Compliance through Automation: Generate compliance records automatically to scale your compliance process.

There are numerous reasons to aim for open source software compliance, especially for your software supply chain:

  • Balance Gains Against Risks: Leverage OSS benefits while managing associated risks.
  • Reduce Financial Risk: Protect your organization’s existing revenue.
  • Increase Revenue Opportunities: Access new markets that mandate specific compliance standards.
  • Avoid Becoming a Cautionary Tale: Stay ahead of potential security incidents.

Regardless of your motivation for wanting to use OSS and use it responsibly (i.e., securely and compliantly), Anchore is here to help. Reach out to our team to learn more about how to build and manage a secure and compliant OSS supply chain.

Learn the container security best practices to reduce the risk of software supply chain attacks.

How to build an OSS risk management program

In previous blog posts we have covered the risks of open source software (OSS) and security best practices to manage that risk. From there we zoomed in on the benefits of tightly coupling two of those best practices: SBOMs and vulnerability scanning.

Now, we’ll dig deeper into the practical considerations of integrating this paired solution into a DevSecOps pipeline. By examining the design and implementation of SBOMs and vulnerability scanning, we’ll illuminate the path to creating a holistic open source software (OSS) risk management program.

Learn about the role that SBOMs play in the security of your organization in this white paper.

How do I integrate SBOM management and vulnerability scanning into my development process?

Ideally, you want to generate an SBOM at each stage of the software development process (see image below). By generating an SBOM and scanning for vulnerabilities at each stage, you unlock a number of novel use-cases and benefits that we covered previously.

DevSecOps lifecycle diagram with all stages to integrate SBOM generation and vulnerability scanning.

Let’s break down how to integrate SBOM generation and vulnerability scanning into each stage of the development pipeline:

Source (PLAN & CODE)

The easiest way to integrate SBOM generation and vulnerability scanning into the design and coding phases is to provide CLI (command-line interface) tools to your developers. Engineers are already used to these tools—and have a preference for them!

If you’re going the open source route, we recommend both Syft (SBOM generation) and Grype (vulnerability scanner) as easy options to get started. If you’re interested in an integrated enterprise tool then you’ll want to look at AnchoreCTL.

Developers can generate SBOMs and run vulnerability scans right from their workstations. By doing this at design or commit time, developers shift security left and learn immediately about the security implications of their design decisions.

If existing vulnerabilities are found, developers can immediately pivot to OSS dependencies that are clean or start a conversation with their security team to understand whether their preferred framework is a deal breaker. Either way, security risk is addressed early, before design decisions are made that would be difficult to roll back.
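
As a quick illustration, a developer who has generated an SBOM with Syft can answer “what’s in this build?” with a few lines of scripting. The snippet below assumes a Syft-style JSON document with a top-level `artifacts` array (check your tool’s actual schema) and uses a hand-written fragment as sample data:

```python
import json

def list_packages(sbom_json: str) -> list[tuple[str, str]]:
    """Return sorted (name, version) pairs from a Syft-style JSON SBOM."""
    sbom = json.loads(sbom_json)
    return sorted((a["name"], a["version"]) for a in sbom.get("artifacts", []))

# Hand-written SBOM fragment standing in for real SBOM-generator output.
sample = """{"artifacts": [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "requests", "version": "2.31.0"}
]}"""
print(list_packages(sample))
```

The same parsing logic can feed a vulnerability scanner, a dashboard, or a quick grep for a package you just read about in a CVE advisory.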

Build (BUILD + TEST)

The ideal place to integrate SBOM generation and vulnerability scanning during the build and test phases is directly in the organization’s continuous integration (CI) pipeline.

The same self-contained CLI tools used during the source stage are integrated as additional steps into CI scripts/runbooks. When a developer pushes a commit that triggers the build process, the new steps are executed and both an SBOM and vulnerability scan are created as outputs. 

Check out our docs site to see how AnchoreCTL (running in distributed mode) makes this integration a breeze.

If you’re having trouble convincing your developers to jump on the SBOM train, we recommend framing security scans as just another unit test in their testing suite.

Running these steps in the CI pipeline delays feedback slightly compared to checking each incremental commit as the application is coded, but it is still light years better than waiting until a release is code complete.

If you are unable to enforce vulnerability scanning of OSS dependencies by your engineering team, a CI-based strategy is a good middle ground. It is much easier to ensure that every build runs exactly the same way each time than it is to ensure the same of every developer.
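
As a sketch of the “scan as a unit test” idea, a CI step can parse the scanner’s findings and fail the build when anything at or above a severity threshold appears. The findings format and severity levels below are illustrative, not any specific scanner’s schema:

```python
# Hypothetical severity ordering; real scanners (e.g., Grype) define their own levels.
SEVERITY_RANK = {"negligible": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings: list[dict], threshold: str = "high") -> bool:
    """Return True if the build may proceed: no finding at or above the threshold."""
    limit = SEVERITY_RANK[threshold]
    return all(SEVERITY_RANK[f["severity"]] < limit for f in findings)

# Illustrative findings, as a CI step might parse them from a scanner report.
findings = [
    {"id": "CVE-2021-44228", "package": "log4j-core", "severity": "critical"},
    {"id": "CVE-2024-0001", "package": "examplelib", "severity": "low"},
]
if not gate(findings):
    print("FAIL: findings at or above 'high' severity; blocking the build")
```

In a real pipeline the `if not gate(...)` branch would exit non-zero so the CI runner marks the build as failed, exactly like a failing unit test.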

Release (aka Registry)

Another integration option is the container registry. This option requires you to either roll your own service that regularly polls the registry and scans new containers, or use a service that does this for you.

See how Anchore Enterprise can automate this entire process by reviewing our integration docs.

Regardless of the path you choose, you will end up creating an IAM service account within your CI application that gives your SBOM and vulnerability scanning solution access to your registries.

The release stage tends to be fairly far along in the development process and is not an ideal location for these functions to run; most of the benefits of a shift-left security posture are no longer available.

If this is an additional vulnerability scanning stage—rather than the sole stage—then this is a fantastic environment to integrate into. Software supply chain attacks that target registries are popular and can be prevented with a continuous scanning strategy.

Deploy

This is the traditional stage of the SDLC (software development lifecycle) to run vulnerability scans. SBOM generation can be added on as another step in an organization’s continuous deployment (CD) runbook.

Similar to the build stage, the best integration method is by calling CLI tools directly in the deploy script to generate the SBOM and then scan it for vulnerabilities.

Alternatively, if you utilize a container orchestrator like Kubernetes, you can also configure an admission controller to act as a deployment gate. The admission controller should be configured to call out to a standalone SBOM generator and vulnerability scanner.
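
A minimal sketch of the decision logic such an admission controller might apply: given a count of critical findings from a standalone scanner (the scanner call itself is omitted, and the response shape follows the Kubernetes AdmissionReview v1 API), it allows or denies the deployment:

```python
def review_response(uid: str, critical_count: int) -> dict:
    """Build a ValidatingWebhook-style AdmissionReview response.

    critical_count is assumed to come from a standalone SBOM generator
    and vulnerability scanner that the webhook calls out to.
    """
    allowed = critical_count == 0
    message = "ok" if allowed else f"{critical_count} critical vulnerabilities found"
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {"uid": uid, "allowed": allowed,
                     "status": {"message": message}},
    }

print(review_response("req-1", 0)["response"]["allowed"])
print(review_response("req-1", 2)["response"]["status"]["message"])
```

The `uid` must echo the one from the incoming AdmissionReview request; Kubernetes rejects responses whose `uid` does not match.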

If you’d like to understand how this is implemented with Anchore Enterprise, see our docs.

While this is the traditional location for running vulnerability scans, it is not recommended that this is the only stage to scan for vulnerabilities. Feedback about security issues would be arriving very late in the development process and prior design decisions may prevent vulnerabilities from being easily remediated. Don’t do this unless you have no other option.

Production (OPERATE + MONITOR)

This is not a traditional stage to run vulnerability scans since the goal is to prevent vulnerabilities from getting to production. Regardless, this is still an important environment to scan. Production containers have a tendency to drift from their pristine build states (DevSecOps pipelines are leaky!).

Also, new vulnerabilities are discovered all of the time and being able to prioritize remediation efforts to the most vulnerable applications (i.e., runtime containers) considerably reduces the risk of exploitation.

The recommended way to run SBOM generation and vulnerability scans in production is to run an independent container with the SBOM generator and vulnerability scanner installed. Most container orchestrators have SDKs that allow you to integrate an SBOM generator and vulnerability scanner into the preferred administration CLI (e.g., kubectl for k8s clusters).

Read how Anchore Enterprise integrates these components together into a single container for both Kubernetes and Amazon ECS.

How do I manage all of the SBOMs and vulnerability scans?

Tightly coupling SBOM generation and vulnerability scanning creates a number of benefits, but it also creates one problem: a firehose of data. This unintended side effect is named SBOM sprawl, and it inevitably becomes a headache in and of itself.

The concise solution to this problem is to create a centralized SBOM repository. The brevity of this answer downplays the challenges that go along with building and managing a new data pipeline.

We’ll walk you through the high-level steps below but if you’re looking to understand the challenges and solutions of SBOM sprawl in more detail, we have a separate article that covers that.

Integrating SBOMs and vulnerability scanning for better OSS risk management

Assuming you’ve deployed an SBOM generator and vulnerability scanner into at least one of your development stages (as detailed above in “How do I integrate SBOM management and vulnerability scanning into my development process?”) and have an SBOM repository for storing your SBOMs and/or vulnerability scans, we can now walk through how to tie these systems together.

  1. Create a system to pull vulnerability feeds from reputable sources. If you’re not sure where to begin, read our post on how to get started.
  2. Regularly scan your catalog of SBOMs for vulnerabilities, storing the results alongside the SBOMs.
  3. Implement a query system to extract insights from your inventory of SBOMs.
  4. Create a dashboard to visualize your software supply chain’s health.
  5. Build alerting automation to ping your team as newly discovered vulnerabilities are announced.
  6. Maintain all of these DIY security applications and tools. 
  7. Continue to incrementally improve on these tools as new threats emerge, technologies evolve and development processes change.
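
The first few steps above can be prototyped in a handful of lines; a real implementation would back this with a database and live vulnerability feeds, and all names and data here are illustrative:

```python
class SbomRepository:
    """Toy centralized SBOM store: app name -> {package: version}."""

    def __init__(self):
        self.apps: dict[str, dict[str, str]] = {}

    def store(self, app: str, packages: dict[str, str]) -> None:
        """Step 2: keep the latest SBOM for each application."""
        self.apps[app] = dict(packages)

    def affected_by(self, package: str) -> list[str]:
        """Step 3/5: query which applications ship a given package,
        e.g., to alert teams when a new CVE lands."""
        return sorted(app for app, pkgs in self.apps.items() if package in pkgs)

repo = SbomRepository()
repo.store("billing-api", {"log4j-core": "2.14.1", "guava": "30.1"})
repo.store("web-frontend", {"react": "18.2.0"})
print(repo.affected_by("log4j-core"))
```

Even this toy version shows why centralization matters: one query replaces a fleet-wide manual search.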

If this feels like more work than you’re willing to take on, this is why security vendors exist. See the benefits of a managed SBOM-powered SCA below.

Prefer not to DIY? Evaluate Anchore Enterprise

Anchore Enterprise was designed from the ground up to provide a reliable software supply chain security platform that requires the least amount of work to integrate and maintain. Included in the product are:

  • Out-of-the-box integrations for popular CI/CD software (e.g., GitHub, Jenkins, GitLab, etc.)
  • End-to-end SBOM management
  • Enterprise-grade vulnerability scanning with a best-in-class false-positive rate
  • Built-in SBOM drift detection
  • Remediation recommendations
  • Continuous visibility and monitoring of software supply chain health

Enterprises like NVIDIA, Cisco, and Infoblox have chosen Anchore Enterprise as their “easy button” to achieve open source software security with the least amount of lift.

If you’re interested to learn more about how to roll out a complete OSS security solution without the blood, sweat and tears that come with the DIY route—reach out to our team to get a demo or try Anchore Enterprise yourself with a 15-day free trial.

Learn the container security best practices to reduce the risk of software supply chain attacks.

SBOMs and Vulnerability Management: OSS Security in the DevSecOps Era

The rise of open-source software (OSS) development and DevOps practices has unleashed a paradigm shift in OSS security. As traditional approaches to OSS security have proven inadequate in the face of rapid development cycles, the Software Bill of Materials (SBOM) has re-made OSS vulnerability management in the era of DevSecOps.

This blog post zooms in on two best practices from our introductory article on OSS security and the software supply chain:

  1. Maintain a Software Dependency Inventory
  2. Implement Vulnerability Scanning

These two best practices are set apart from the rest because they are a natural pair. We’ll cover how this novel approach:

  • Scales OSS vulnerability management under the pressure of rapid software delivery
  • Is set apart from legacy SCAs
  • Unlocks new use-cases in software supply chain security, OSS risk management, etc.
  • Benefits software engineering orgs
  • Benefits an organization’s overall security posture
  • Has measurably impacted modern enterprises such as NVIDIA and Infoblox

Whether you’re a seasoned DevSecOps professional or just beginning to tackle the challenges of securing your software supply chain, this blog post offers insights into how SBOMs and vulnerability management can transform your approach to OSS security.

Learn about the role that SBOMs play in the security of your organization in this white paper.

Why do I need SBOMs for OSS vulnerability management?

The TL;DR: SBOMs enable DevSecOps teams to scale OSS vulnerability management programs in modern, cloud native environments. Legacy security tools (i.e., SCA platforms) weren’t built to handle the pace of software delivery after a DevOps face lift.

Answering this question in full requires some historical context. Below is a speed-run of how we got to a place where SBOMs became the clear solution for vulnerability management after the rise of DevOps and OSS; the original longform is found on our blog.

If you’re not interested in a history lesson, skip to the next section, “What new use-cases are unlocked with a software dependency inventory?” to get straight to the impact of this evolution on software supply chain security (SSCS).

A short history on software composition analysis (SCA)

  • SCAs were originally designed to solve the problem of OSS licensing risk
  • Remember that Microsoft made a big fuss about the dangers of OSS at the turn of the millennium
  • Vulnerability scanning and management was tacked-on later
  • These legacy SCAs worked well enough until DevOps and OSS popularity hit critical mass

How the rise of OSS and DevOps principles broke legacy SCAs

  • DevOps and OSS movements hit traction in the 2010s
  • Software development and delivery transitioned from major updates with long development times to incremental updates with frequent releases
  • Modern engineering organizations are measured and optimized for delivery speed
  • Legacy SCAs were designed to scan a golden image once and take as much time as needed to do it, upwards of weeks in some cases
  • This wasn’t compatible with the DevOps promise and created friction between engineering and security
  • This meant that not all software could be scanned, and much was scanned only after release, increasing the risk of a security breach

SBOMs as the solution

  • SBOMs were introduced as a standardized data structure that comprised a complete list of all software dependencies (OSS or otherwise)
  • These lightweight files created a reliable way to scan software for vulnerabilities without the slow performance of scanning the entire application—soup to nuts
  • Modern SCAs utilize SBOMs as the foundational layer to power vulnerability scanning in DevSecOps pipelines
  • SBOMs + SCAs deliver on the performance of DevOps without compromising security

What is the difference between SBOMs and legacy SCA scanning?

SBOMs offer two functional innovations over the legacy model: 

  1. Deeper visibility into an organization’s application inventory, and
  2. A record of changes to applications over time.

The deeper visibility comes from the fact that modern SCA scanners identify software dependencies recursively and build a complete software dependency tree (both direct and transitive). The record of changes comes from the fact that the OSS ecosystem has begun to standardize the contents of SBOMs to allow interoperability between OSS consumers and producers.

Legacy SCAs typically only scan for direct software dependencies and don’t recursively scan for dependencies of dependencies. Also, legacy SCAs don’t generate standardized scans that can then be used to track changes over time.
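
A toy example makes the difference concrete: given a dependency graph, a direct-only scan of `my-app` would see just `web-framework`, while a recursive walk surfaces the full transitive tree. The graph contents below are invented for illustration:

```python
def transitive_deps(graph: dict[str, list[str]], roots: list[str]) -> set[str]:
    """Walk the dependency graph recursively, as a modern SCA scanner does."""
    seen: set[str] = set()
    stack = list(roots)
    while stack:
        pkg = stack.pop()
        if pkg not in seen:
            seen.add(pkg)
            stack.extend(graph.get(pkg, []))  # follow dependencies of dependencies
    return seen

# my-app directly depends on web-framework only; a direct-only scan stops there.
graph = {
    "web-framework": ["http-client", "template-engine"],
    "http-client": ["tls-lib"],
}
print(sorted(transitive_deps(graph, ["web-framework"])))
```

A vulnerability in `tls-lib` is invisible to a direct-only scanner even though it ships in the final artifact, which is exactly the blind spot recursive SBOM generation closes.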

What new use-cases are unlocked with an SBOM inventory?

The innovations brought by SBOMs (see above) have unlocked new use-cases that benefit both the software supply chain security niche and the greater DevSecOps world. See the list below:

OSS Dependency Drift Detection

Ideally, software dependencies are only introduced in source code, but the reality is that CI/CD pipelines are leaky, and both automated and one-off modifications are made at all stages of development. Plugging 100% of the leaks is a strategy with diminishing returns. Application drift detection is a scalable solution to this challenge.

SBOMs unlock drift detection by creating a point-in-time record of an application’s composition at each stage of the development process. This creates an auditable record of when software builds are modified, how they are changed, and who changed them.
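
Conceptually, drift detection is a diff between two point-in-time SBOMs. A minimal sketch, using illustrative package data:

```python
def sbom_drift(before: dict[str, str], after: dict[str, str]) -> dict[str, list]:
    """Compare two point-in-time SBOMs (package -> version maps)."""
    return {
        "added": sorted(set(after) - set(before)),
        "removed": sorted(set(before) - set(after)),
        "changed": sorted(p for p in set(before) & set(after)
                          if before[p] != after[p]),
    }

# SBOMs captured at build time and at deploy time for the same application.
build_sbom = {"openssl": "3.0.7", "zlib": "1.2.13"}
deploy_sbom = {"openssl": "3.0.8", "zlib": "1.2.13", "curl": "8.0.1"}
print(sbom_drift(build_sbom, deploy_sbom))
```

Any non-empty `added`, `removed`, or `changed` list between stages is a signal worth investigating: it may be a harmless base-image patch or an unauthorized modification.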

Software Supply Chain Attack Detection

Not all dependency injections are performed by benevolent 1st-party developers. Malicious threat actors who gain access to your organization’s DevSecOps pipeline or the pipeline of one of your OSS suppliers can inject malicious code into your applications.

An SBOM inventory creates the historical record that can identify anomalous behavior and catch these security breaches before organizational damage is done. This is a particularly important strategy for dealing with advanced persistent threats (APTs) that are expert at infiltration and stealth. For a real-world example, see our blog on the recent XZ supply chain attack.

OSS Licensing Risk Management

OSS licenses are at the beginning of a new transformation. The highly permissive licenses that came into fashion over the last 20 years are proving to be unsustainable. As prominent open source startups amend their licenses (e.g., HashiCorp, Elastic, Redis, etc.), organizations need to evaluate these changes and how they impact their OSS supply chain strategy.

Similar to the benefits during a security incident, an SBOM inventory acts as the source of truth for OSS licensing risk. As licenses are amended, an organization can quickly evaluate their risk by querying their inventory and identifying who their “critical” OSS suppliers are. 

Domain Expertise Risk Management

Another emerging use-case of software dependency inventories is managing the domain expertise of developers in your organization. A comprehensive inventory of software dependencies allows organizations to map critical software to individual employees’ domain knowledge. This creates a measure of how well-resourced your engineering organization is and who owns the knowledge that could impact business operations.

While losing an employee with a particular set of skills might not have the same urgency as a security incident, over time this gap can create instability. An SBOM inventory allows organizations to maintain a list of critical OSS suppliers and get ahead of any structural risks in their organization.

What are the benefits of a software dependency inventory?

SBOM inventories create a number of benefits for tangential domains such as software supply chain security and risk management, but there is one big benefit for the core practice of software development.

Reduced engineering and QA time for debugging

A software dependency inventory stores metadata about applications and their OSS dependencies over time in a centralized repository. This datastore is a simple and efficient way to search and answer critical questions about the state of an organization’s software development pipeline.

Previously, engineering and QA teams had to manually search codebases and commits to determine the source of a rogue dependency being added to an application. A software dependency inventory combines a centralized repository of SBOMs with an intuitive search interface, so these time-consuming investigations can be accomplished in minutes rather than hours.

What are the benefits of scanning SBOMs for vulnerabilities?

There are a number of security benefits that can be achieved by integrating SBOMs and vulnerability scanning. We’ve highlighted the most important below:

Reduce risk by scaling vulnerability scanning for complete coverage

One of the side effects of transitioning to DevOps practices was that legacy SCAs couldn’t keep up with the software output of modern engineering orgs. This meant that not all applications were scanned before being deployed to production—a risky security practice!

Modern SCAs solved this problem by scanning SBOMs rather than applications or codebases. These lightweight SBOM scans are so efficient that they can keep up with the pace of DevOps output. Scanning 100% of applications reduces risk by preventing unscanned software from being deployed into vulnerable environments.

Prevent delays in software delivery

Overall organizational productivity can be increased by adopting modern, SBOM-powered SCAs that allow organizations to shift security left. When vulnerabilities are uncovered during application design, developers can make informed decisions about the OSS dependencies that they choose. 

This prevents the situation where engineering creates a new application or feature but right before it is deployed into production the security team scans the dependencies and finds a critical vulnerability. These last minute security scans can delay a release and create frustration across the organization. Scanning early and often prevents this productivity drain from occurring at the worst possible time.

Reduced financial risk during a security incident

The faster a security incident is resolved the less risk that an organization is exposed to. The primary metric that organizations track is called mean-time-to-recovery (MTTR). SBOM inventories are utilized to significantly reduce this metric and improve incident outcomes.

An application inventory with full details on the software dependencies is a prerequisite for rapid security response in the event of an incident. A single SQL query to an SBOM inventory will return a list of all applications that have exploitable dependencies installed. Recent examples include Log4j and XZ. This prevents the need for manual scanning of codebases or production containers. This is the difference between a zero-day incident lasting a few hours versus weeks.
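
As a sketch of such a query, the snippet below uses an illustrative two-table-free schema and SQLite. Note that comparing version strings lexicographically is a simplification; real tools use proper version parsing (e.g., `2.9.x` would sort incorrectly here):

```python
import sqlite3

# In-memory stand-in for a centralized SBOM inventory database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sbom_packages (app TEXT, package TEXT, version TEXT);
    INSERT INTO sbom_packages VALUES
        ('payments',  'log4j-core', '2.14.1'),
        ('inventory', 'log4j-core', '2.17.1'),
        ('frontend',  'react',      '18.2.0');
""")

# Which applications ship a log4j-core version vulnerable to Log4Shell (<2.15)?
rows = conn.execute("""
    SELECT app, version FROM sbom_packages
    WHERE package = 'log4j-core' AND version < '2.15.0'
""").fetchall()
print(rows)
```

The security team runs one query and immediately knows which teams to page, rather than scanning every codebase and production container from scratch.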

Reduce hours spent on compliance with automation

Compliance certifications are powerful growth levers for organizations; they open up new market opportunities. The downside is that they create a lot of work for organizations. Manually confirming that each compliance control is met and providing evidence for the compliance officer to review discourages organizations from pursuing these certifications.

Providing automated vulnerability scans from DevSecOps pipelines that integrate SBOM inventories and vulnerability scanners significantly reduces the hours needed to generate and collect evidence for compliance audits.

How impactful are these benefits?

Many modern enterprises are adopting SBOM-powered SCAs and reaping the benefits outlined above. The quantifiable benefits are unique to each enterprise, but anecdotal evidence is still helpful when weighing how to prioritize a software supply chain security initiative, like the adoption of an SBOM-powered SCA, against other organizational priorities.

As a leading SBOM-powered SCA, Anchore has helped numerous organizations achieve the benefits of this evolution in the software industry. To get an estimate of what your organization can expect, see the case studies below:

NVIDIA

  • Reduced time to production by scanning SBOMs instead of full applications
  • Scaled vulnerability scanning and management program to 100% coverage across 1000s of containerized applications and 100,000s of containers

Read the full NVIDIA case study here >>

Infoblox

  • 75% reduction in engineering hours spent performing manual vulnerability detection
  • 55% reduction in hours allocated to retroactive remediation of vulnerabilities
  • 60% reduction in hours spent on manual compliance discovery and documentation

Read the full Infoblox case study here >>

DreamFactory

  • 75% reduction in engineering hours spent on vulnerability management and compliance
  • 70% faster production deployments with automated vulnerability scanning and management

Read the full DreamFactory case study here >>

Next Steps

Hopefully you now have a better understanding of the power of integrating an SBOM inventory into OSS vulnerability management. This “one-two” combo has unlocked novel use-cases, numerous benefits and measurable results for modern enterprises.

If you’re interested in learning more about how Anchore can help your organization achieve similar results, reach out to our team.

Learn the container security best practices to reduce the risk of software supply chain attacks.

How is Open Source Software Security Managed in the Software Supply Chain?

Open source software has revolutionized the way developers build applications, offering a treasure trove of pre-built software “legos” that dramatically boost productivity and accelerate innovation. By leveraging the collective expertise of a global community, developers can create complex, feature-rich applications in a fraction of the time it would take to build everything from scratch. However, this incredible power comes with a significant caveat: the open source model introduces risk.

Organizations inherit both the good and bad parts of the OSS source code they don’t own. This double-edged sword of open source software necessitates a careful balance between harnessing its productivity benefits and managing the risks. A comprehensive OSS security program is the industry standard best practice for managing the risk of open source software within an organization’s software supply chain.

Learn the container security best practices to reduce the risk of software supply chain attacks.

What is open source software security?

Open source software security is the ecosystem of security tools (some of it being OSS!) that has developed to compensate for the inherent risk of OSS development. The security of the OSS environment was founded on the idea that “given enough eyeballs, all bugs are shallow”. The reality of OSS is that the majority of it is written and maintained by single contributors. The percentage of open source software that passes the qualifier of “enough eyeballs” is minuscule.

Does that mean open source software isn’t secure? Fortunately, no. The OSS community still produces secure software, but an entire ecosystem of tools ensures that this is verified rather than merely trusted implicitly.

What is the difference between closed source and open source software security?

The primary difference between open source software security and closed source software security is how much control you have over the source code. Open source code is public and can have many contributors that are not employees of your organization while proprietary source code is written exclusively by employees of your organization. The threat models required to manage risk for each of these software development methods are informed by these differences.

Due to the fact that open source software is publicly accessible and can be contributed to by a diverse, often anonymous community, its threat model must account for the possibility of malicious code contributions, unintentional vulnerabilities introduced by inexperienced developers, and potential exploitation of disclosed vulnerabilities before patches are applied. This model emphasizes continuous monitoring, rigorous code review processes, and active community engagement to mitigate risks. 

In contrast, proprietary software’s threat model centers around insider threats, such as disgruntled employees or lapses in secure coding practices, and focuses heavily on internal access controls, security audits, and maintaining strict development protocols. 

The need for external threat intelligence is also greater in OSS, as the public nature of the code makes it a target for attackers seeking to exploit weaknesses, while proprietary software relies on obscurity and controlled access as a first line of defense against potential breaches.

What are the risks of using open source software?

  1. Vulnerability exploitation of your application
    • The bargain struck when utilizing OSS is that your organization gives up significant control over the quality of the software. When you use OSS you inherit both good AND bad (read: insecure) code. Any known or latent vulnerabilities in the software become your problem.
  2. Access to source code increases the risk of vulnerabilities being discovered by threat actors
    • OSS development is unique in that both the defenders and the attackers have direct access to the source code. This gives the threat actors a leg up. They don’t have to break through perimeter defenses before they get access to source code that they can then analyze for vulnerabilities.
  3. Increased maintenance costs for DevSecOps function
    • Adopting OSS into an engineering organization is another function that requires management. Data has to be collected about the OSS that is embedded in your applications. That data has to be stored and made available in the event of a security incident. These maintenance costs are typically incurred by the DevOps and security teams.
  4. OSS license legal exposure
    • OSS licenses are mostly permissive for use within commercial applications, but a non-trivial subset are not, or, worse, are highly adversarial when used by a commercial enterprise. Organizations that don’t manage this risk increase the potential for legal action to be taken against them.

How serious are the risks associated with the use of open source software?

Current estimates are that 70-90% of modern applications are composed of open source software. This means that only 10-30% of applications developed by organizations are written by developers employed by the organization. Without significant visibility into the security of OSS, organizations are handing over the keys to the castle to the community and hoping for the best.

Not only is OSS a significant footprint in modern application composition but its growth is accelerating. This means the associated risks are growing just as fast. This is part of the reason we see an acceleration in the frequency of software supply chain attacks. Organizations that aren’t addressing these realities are getting caught on their back foot when zero-days are announced like the recent XZ utils backdoor.

Why are SBOMs important to open source software security?

Software Bills of Materials (SBOMs) serve as the foundation of software supply chain security by providing a comprehensive “ingredient list” of all components within an application. This transparency is crucial in today’s software landscape, where modern applications are a complex web of mostly open source software dependencies that can harbor hidden vulnerabilities. 

SBOMs enable organizations to quickly identify and respond to security threats, as demonstrated during incidents like Log4Shell, where companies with centralized SBOM repositories were able to locate vulnerable components in hours rather than days. By offering a clear view of an application’s composition, SBOMs form the bedrock upon which other software supply chain security measures can be effectively built and validated.

The importance of SBOMs in open source software security cannot be overstated. Open source projects often involve numerous contributors and dependencies, making it challenging to maintain a clear picture of all components and their potential vulnerabilities. By implementing SBOMs, organizations can proactively manage risks associated with open source software, ensure regulatory compliance, and build trust with customers and partners. 

SBOMs enable quick responses to newly discovered vulnerabilities, facilitate automated vulnerability management, and support higher-level security abstractions like cryptographically signed images or source code. In essence, SBOMs provide the critical knowledge needed to navigate the complex world of open source dependencies by enabling us to channel our inner GI Joe—”knowing is half the battle” in software supply chain security.

Best practices for securing open source software?

Open source software has become an integral part of modern development practices, offering numerous benefits such as cost-effectiveness, flexibility, and community-driven innovation. However, with these advantages come unique security challenges. To mitigate risks and ensure the safety of your open source components, consider implementing the following best practices:

1. Model Security Scans as Unit Tests

Re-branding security checks as another type of unit test helps developers orient to DevSecOps principles and re-imagine security as an integral part of their workflow rather than a separate, post-development concern. By modeling security checks as unit tests, you can:

  • Catch vulnerabilities earlier in the development process
  • Reduce the time between vulnerability detection and remediation
  • Empower developers to take ownership of security issues
  • Create a more seamless integration between development and security teams

Remember, the goal is to make security an integral part of the development process, not a bottleneck. By treating security checks as unit tests, you can achieve a balance between rapid development and robust security practices.

2. Review Code Quality

Assessing the quality of open source code is crucial for identifying potential vulnerabilities and ensuring overall software reliability. Consider the following steps:

  • Conduct thorough code reviews, either manually or using automated tools
  • Look for adherence to coding standards and best practices
  • Look for projects developed with secure-by-default principles
  • Evaluate the overall architecture and design patterns used

Remember, high-quality code is generally more secure and easier to maintain.

3. Assess Overall Project Health

A vibrant, active community and committed maintainers are crucial indicators of a well-maintained open source project. When evaluating a project’s health and security:

  • Examine community involvement:
    • Check the number of contributors and frequency of contributions
    • Review the project’s popularity metrics (e.g., GitHub stars, forks, watchers)
    • Assess the quality and frequency of discussions in forums or mailing lists
  • Evaluate maintainer(s) commitment:
    • Check the frequency of commits, releases, and security updates
    • Check for active engagement between maintainers and contributors
    • Review the time taken to address reported bugs and vulnerabilities
    • Look for a clear roadmap or future development plans

4. Maintain a Software Dependency Inventory

Keeping track of your open source dependencies is crucial for managing security risks. To create and maintain an effective inventory:

  • Use tools like Syft or Anchore SBOM to automatically scan your application source code for OSS dependencies
    • Include both direct and transitive dependencies in your scans
  • Generate a Software Bill of Materials (SBOM) from the dependency scan
    • Your dependency scanner should also do this for you
  • Store your SBOMs in a central location that can be searched and analyzed
  • Scan your entire DevSecOps pipeline regularly (ideally every build and deploy)

An up-to-date inventory allows for quicker responses to newly discovered vulnerabilities.
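To illustrate why a centralized, searchable inventory pays off, here is a sketch that inverts per-image SBOM data into a package-to-images index. The image names and package lists are hypothetical; in practice each entry would come from a tool like Syft (e.g., `syft <image> -o json`).

```python
from collections import defaultdict

# Hypothetical per-image SBOM data standing in for a real SBOM store
sboms = {
    "web-frontend:1.4": [("openssl", "3.0.11"), ("zlib", "1.2.13")],
    "payments-api:2.1": [("openssl", "1.1.1w"), ("libxml2", "2.11.5")],
    "batch-worker:0.9": [("zlib", "1.2.13"), ("openssl", "1.1.1w")],
}

def build_inventory(sboms):
    """Invert image -> packages into (package, version) -> images for fast lookup."""
    inventory = defaultdict(set)
    for image, packages in sboms.items():
        for name, version in packages:
            inventory[(name, version)].add(image)
    return inventory

inventory = build_inventory(sboms)

# The zero-day question: "which images ship openssl 1.1.1w?"
affected = sorted(inventory[("openssl", "1.1.1w")])
print(affected)  # ['batch-worker:0.9', 'payments-api:2.1']
```

The point of the design: the expensive work (scanning) happens continuously at build time, so the incident-day query is a constant-time lookup rather than a scramble across clusters.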

5. Implement Vulnerability Scanning

Regular vulnerability scanning helps identify known security issues in your open source components. To effectively scan for vulnerabilities:

  • Use tools like Grype or Anchore Secure to automatically scan your SBOMs for vulnerabilities
  • Automate vulnerability scanning tools directly into your CI/CD pipeline
    • At minimum implement vulnerability scanning as containers are built
    • Ideally scan container registries, container orchestrators and even each time a new dependency is added during design
  • Set up alerts for newly discovered vulnerabilities in your dependencies
  • Establish a process for addressing identified vulnerabilities promptly

6. Implement Version Control Best Practices

Version control practices are crucial for securing all DevSecOps pipelines that utilize open source software:

  • Implement branch protection rules to prevent unauthorized changes
  • Require code reviews and approvals before merging changes
  • Use signed commits to verify the authenticity of contributions

By implementing these best practices, you can significantly enhance the security of your software development pipeline and reduce the risk intrinsic to open source software. In other words, you can have your cake (the productivity boost of OSS) and eat it too (without the inherent risk).

How do I integrate open source software security into my development process?

DIY a comprehensive OSS security system

We’ve written about the steps to build an OSS security system from scratch in a previous blog post—below is the TL;DR:

  • Integrate dependency scanning, SBOM generation and vulnerability scanning into your DevSecOps pipeline
  • Implement a data pipeline to manage the influx of security metadata
  • Use automated policy-as-code “security tests” to provide rapid feedback to developers
  • Automate remediation recommendations to reduce cognitive load on developers
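The policy-as-code “security test” step above can be sketched as a simple gate function: the pipeline passes scan findings in and gets a pass/fail decision plus violations back. The thresholds here are illustrative, not Anchore Enterprise defaults.

```python
def evaluate_policy(findings, max_critical=0, max_high=5):
    """Toy policy-as-code gate: pass/fail a build based on scan findings.
    Thresholds are illustrative defaults, not a real policy bundle."""
    counts = {"Critical": 0, "High": 0}
    for f in findings:
        if f["severity"] in counts:
            counts[f["severity"]] += 1
    violations = []
    if counts["Critical"] > max_critical:
        violations.append(
            f"{counts['Critical']} critical vulnerabilities (max {max_critical})")
    if counts["High"] > max_high:
        violations.append(
            f"{counts['High']} high vulnerabilities (max {max_high})")
    return ("fail" if violations else "pass", violations)

findings = [
    {"id": "CVE-2021-44228", "severity": "Critical"},
    {"id": "CVE-XXXX-0002", "severity": "High"},  # placeholder ID
]
status, violations = evaluate_policy(findings)
print(status, violations)
```

Wiring a function like this into CI gives developers the rapid feedback loop the bullets describe: the build fails with a specific, actionable reason instead of a security ticket arriving weeks later.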

Outsource OSS security to a turnkey vendor

Modern software composition analysis (SCA) tools, like Anchore Enterprise, are purpose-built to provide you with a comprehensive OSS security system out-of-the-box—all of the same features as the DIY approach, but without the hassle of building a new system while maintaining your current manual process.

  • Anchore SBOM: comprehensive dependency scanning, SBOM generation and management
  • Anchore Secure: vulnerability scanning and management
  • Anchore Enforce: automated security enforcement and compliance

Whether you want to scale an understaffed security team to increase its reach across your organization or free your team up to focus on other priorities, the buy-versus-build opportunity cost is a straightforward decision.

Next Steps

Hopefully, you now have a strong understanding of the risks associated with adopting open source software. If you’re looking to continue your exploration into the intricacies of software supply chain security, Anchore has a catalog of deep dive content on our website. If you’d prefer to get your hands dirty, we also offer a 15-day free trial of Anchore Enterprise.

Learn about the role that SBOMs play in the security of your organization in this white paper.

Unpacking the Power of Policy at Scale in Anchore

Generating a software bill of materials (SBOM) is starting to become common practice. Is your organization using them to their full potential? Here are a couple questions Anchore can help you answer with SBOMs and the power of our policy engine:

  • How far off are we from meeting the security requirements that Iron Bank, NIST, CIS, and DISA put out around container images?
  • How can I standardize the way our developers build container images to improve security without disrupting the development team’s output?
  • How can I best prioritize this endless list of security issues for my container images?
  • I’m new to containers. Where do I start on securing them?

If any of those questions still need answering at your organization and you have five minutes, you’re in the right place. Let’s dive in.

If you’re reading this you probably already know that Anchore creates developer tools to generate SBOMs, and has been since 2016. Beyond just SBOM generation, Anchore truly shines when it comes to its policy capabilities. Every company operates differently — some need to meet strict compliance standards while others are focused on refining their software development practices for enhanced security. No matter where you’re at in your container security journey today, Anchore’s policy framework can help improve your security practices.

Anchore Enterprise has a tailored approach to policy and enforcement that means whether you’re a healthcare provider abiding by stringent regulations or a startup eager to fortify its digital defenses, Anchore has got you covered. Our granular controls allow teams to craft policies that align perfectly with their security goals.

Exporting Policy Reports with Ease

Anchore also has a nifty command line tool called anchorectl that allows you to grab SBOMs and policy results related to those SBOMs. There are a lot of cool things you can do with a little bit of scripting and all the data that Anchore Enterprise stores. We are going to cover one example in this blog.

Once Anchore has created and stored an SBOM for a container image, you can quickly get policy results related to that image. The following anchorectl command will evaluate an image against the docker-cis-benchmark policy bundle:

anchorectl image details <image-id> -p docker-cis-benchmark

That command will return the policy result in a few seconds. Let’s say your organization develops 100 images and you want to meet the CIS benchmark standard. You wouldn’t want to assess each of these images individually; that sounds exhausting.

To solve this problem, we have created a script that can iterate over any number of images, merge the results into a single policy report, and export that into a csv file. This allows you to make strategic decisions about how you can most effectively move towards compliance with the CIS benchmark (or any standard).
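The merge-and-export step might look something like the sketch below. This is not the actual Anchore script—just a simplified illustration of collecting per-image policy rows and writing one combined CSV report. The image names and check names are hypothetical; the real rows would come from running `anchorectl image details <image> -p docker-cis-benchmark` per image.

```python
import csv
import io

# Hypothetical per-image policy results standing in for anchorectl output
results = [
    {"image": "web-frontend:1.4", "check": "effective_user", "status": "fail"},
    {"image": "web-frontend:1.4", "check": "healthcheck",    "status": "pass"},
    {"image": "payments-api:2.1", "check": "effective_user", "status": "fail"},
]

def merge_to_csv(results):
    """Merge per-image policy rows into a single CSV report (as a string)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["image", "check", "status"])
    writer.writeheader()
    writer.writerows(results)
    return buf.getvalue()

report = merge_to_csv(results)
print(report)

# A holistic metric across the whole fleet, not one image at a time
failures = sum(1 for r in results if r["status"] == "fail")
images = {r["image"] for r in results}
print(f"{failures} failing checks across {len(images)} images")
```

Aggregating first, then analyzing, is what turns 100 separate policy evaluations into fleet-level metrics like the ones in the next section.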

In this example, we ran the script against 30 images in our Anchore deployment. Now we can look holistically at how far off we are from CIS compliance. Here are a few metrics that stand out:

  • 26 of the 30 images are running as ‘root’
  • 46.9% of our total vulnerabilities have fixes available (4,978/10,611)
  • ADD instructions are being used in 70% of our images
  • Health checks missing in 80% of our images
  • 14 secrets (all from the same application team)
  • 1 malware hit (Cryptominer Casey is at it again)

As a security team member, I didn’t write any of this code myself, which means I need to work with my developer colleagues on the product/application teams to clear up these security issues. Usually this means an email that educates my colleagues on how to utilize health checks, prefer COPY over ADD in Dockerfiles, declare a non-privileged user instead of root, and upgrade packages with fixes available (e.g., Dependabot). Finally, I would prioritize investigating myself how that malware made its way into that image.

This example illustrates how storing SBOMs and applying policy rules against them at scale can streamline your path to your container security goals.

Visualizing Your Aggregated Policy Reports

While this raw data is useful in and of itself, there are times when you may want to visualize it in a way that is easier to understand. Anchore Enterprise does provide some dashboarding capabilities, but it does not aim to be a versatile dashboarding tool. This is where an observability vendor comes in handy.

In this example, I’ll be using New Relic as they provide a free tier that you can sign up for and begin using immediately. However, other providers such as Datadog and Grafana would also work quite well for this use case. 

Importing your Data

  1. Download the tsv-to-json.py script
  2. Save the data produced by the policy-report.py script as a TSV file
    • We use TABs as a separator because commas are used in many of the items contained in the report.
  3. Run the tsv-to-json.py script against the TSV file:
python3 tsv-to-json.py aggregated_output.tsv > test.json
  4. Sign-up for a New Relic account here
  5. Find your New Relic Account ID and License Key
    • Your New Relic Account ID can be seen in your browser’s address bar upon logging in to New Relic, and your New Relic License Key can be found on the right hand side of the screen upon initial login to your New Relic account.
  6. Use curl to push the data to New Relic:
gzip -c test.json | curl \
-X POST \
-H "Content-Type: application/json" \
-H "Api-Key: <YOUR_NEWRELIC_LICENSE_KEY>" \
-H "Content-Encoding: gzip" \
https://insights-collector.newrelic.com/v1/accounts/<YOUR_NEWRELIC_ACCOUNT_ID>/events \
--data-binary @-
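For readers curious what the conversion step does, here is a simplified sketch of turning TSV rows into New Relic-style event objects. This is not the actual tsv-to-json.py script; the column names are hypothetical, and only the `eventType` key is a requirement of New Relic’s Event API.

```python
import csv
import io
import json
import time

# A tiny tab-separated sample with hypothetical column names
tsv_data = "image\tseverity\tcve\nweb-frontend:1.4\tHigh\tCVE-XXXX-0001\n"

def tsv_to_events(tsv_text, event_type=None):
    """Convert TSV rows into a list of New Relic-style event dicts.
    Each event carries an eventType plus one key per TSV column."""
    event_type = event_type or f"Anchore{int(time.time())}"
    rows = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return [{"eventType": event_type, **row} for row in rows]

events = tsv_to_events(tsv_data, event_type="Anchore1698686488")
print(json.dumps(events, indent=2))
```

The timestamped default for `event_type` mirrors the “Anchore” + timestamp naming convention the real script uses, so each upload lands in its own queryable event type.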

Visualizing Your Data

New Relic uses the New Relic Query Language (NRQL) to perform queries and render charts based on the resulting data set.  The tsv-to-json.py script you ran earlier converted your TSV file into a JSON file compatible with New Relic’s event data type.  You can think of each collection of events as a table in a SQL database.  The tsv-to-json.py script will automatically create an event type for you, combining the string “Anchore” with a timestamp.

To create a dashboard in New Relic containing charts, you’ll need to write some NRQL queries.  Here is a quick example:

FROM Anchore1698686488 SELECT count(*) FACET severity

This query will count the total number of entries in the event type named Anchore1698686488 and group them by the associated vulnerability’s severity. You can experiment with creating your own, or start by importing a template we have created for you here.
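If you want to sanity-check an NRQL result before building a dashboard, the same count-and-facet logic is easy to reproduce locally. The sketch below mimics `SELECT count(*) FACET severity` over illustrative event data shaped like the JSON pushed earlier.

```python
from collections import Counter

# Illustrative events, shaped like the JSON pushed to New Relic earlier
events = [
    {"eventType": "Anchore1698686488", "severity": "High"},
    {"eventType": "Anchore1698686488", "severity": "High"},
    {"eventType": "Anchore1698686488", "severity": "Critical"},
]

# Local equivalent of: FROM Anchore1698686488 SELECT count(*) FACET severity
by_severity = Counter(e["severity"] for e in events)
print(dict(by_severity))  # {'High': 2, 'Critical': 1}
```

Running this against your exported JSON is a quick way to confirm the dashboard numbers match the underlying report.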

Wrap-Up

The security data that your tools create is only as good as the insights you are able to derive from it. In this blog post, we covered a way to help security practitioners turn a mountain of security data into actionable and prioritized security insights, which can help your organization improve its security posture and meet compliance standards faster. That said, this walkthrough assumes you are already an Anchore Enterprise customer.

Looking to learn more about how to utilize a policy-based security posture to meet DoD compliance standards like cATO or CMMC? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories. Get caught up with the content below: