From War Room to Workflow: How Anchore Transforms CVE Incident Response

When CVE-2025-1974 (#IngressNightmare) was disclosed, incident response teams had hours—at most—before exploits appeared in the wild. Imagine two companies responding: 

  • Company A rallies a war room with 13 different teams frantically running kubectl commands across the org’s 30+ clusters while debugging inconsistent permission issues. 
  • Company B’s security analyst runs a single query against their centralized SBOM inventory and their policy-as-code engine automatically dispatches alerts and remediation recommendations to affected teams. 

Which camp would you rather be in when the next critical CVE drops? Most of us prefer the team that built visibility for their software supply chain security before the crisis hit.

CVE-2025-1974 was particularly acute because of ingress-nginx’s popularity as a Kubernetes admission controller (used by 40%+ of Kubernetes administrators) and the type and severity of the vulnerability (RCE & CVSS 9.8—scary!). We won’t go deep on the details; there are plenty of good existing resources already.

Instead we’ll focus on: 

  • The gap between naive incident response guidance and real-world challenges
  • The negative impacts common to incident response for enterprise-scale Kubernetes deployments
  • How Anchore Enterprise alleviates these consequences
  • The benefits of an integrated incident response strategy
  • How to utilize Anchore Enterprise to respond in real-time to a security incident

Learn how SBOMs enable organizations to react to zero-day disclosures in minutes rather than days or weeks.

Rapid Incident Response to Zero-Day Vulnerabilities with SBOMs | Webinar

An Oversimplified Response to a Complex Threat

When the Ingress Nightmare vulnerability was published, security blogs and advisories quickly filled with remediation advice. The standard recommendation was clear and seemed straightforward: run a simple kubectl command to determine if your organization was impacted:

kubectl get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx

If vulnerable versions were found, upgrade immediately to the patched versions.

This advice isn’t technically wrong. The command will indeed identify instances of the vulnerable ingress-nginx controller. But it makes a set of assumptions that bear little resemblance to Kubernetes deployments in modern enterprise organizations:

  • That you run a single Kubernetes cluster
  • That you have a single Kubernetes administrator
  • That this admin has global privileges across the entire cluster

For the vast majority of enterprises today, none of these assumptions are true.

The Reality of Enterprise-Scale Kubernetes: Complex & Manual

The reality of Kubernetes deployments at large organizations is far more complex than most security advisories acknowledge:

1. Inherited Complexity

Kubernetes administration structures almost always mirror organizational complexity. A typical enterprise doesn’t have a single cluster managed by a single team—they have dozens of clusters spread across multiple business units, each with their own platform teams, their own access controls, and often their own security policies.

This organizational structure, while necessary for business operations, creates significant friction for vital incident response activities: vulnerability detection and remediation. When a critical CVE like Ingress Nightmare drops, there’s no single person who can run that kubectl command across all environments.

2. Vulnerability Management Remains Manual

While organizations have embraced Kubernetes to automate their software delivery pipelines and increase velocity, the DevOps-ification of vulnerability and patch management has lagged behind. Instead, these remain manual, human-driven processes.

During the Log4j incident in 2021, we observed engineers across industries frantically connecting to servers via SSH and manually dissecting container images, trying to determine if they were vulnerable. Three years later, for many organizations, the process hasn’t meaningfully improved—they’ve just moved the complexity to Kubernetes.

The idea that teams can manually track and patch vulnerabilities across a sprawling Kubernetes estate is not just optimistic—it’s impossible at enterprise scale.

The Cascading Negative Impacts: Panic, Manual Coordination & Crisis Response

When critical vulnerabilities emerge, organizations without supply chain visibility face:

  • Organizational Panic: The CISO demands answers within the hour while security teams scramble through endless logs, completely blind to which systems contain the vulnerable components.
  • Complex Manual Coordination: Security leads discover they need to scan hundreds of clusters but have access to barely a fifth of them, as Slack channels erupt with conflicting information and desperate access requests.
  • Resource-Draining Incident Response: By day three of the unplanned war room, engineers with bloodshot eyes and unchanged clothes stare at monitors, missing family weekends while piecing together an ever-growing list of affected systems.
  • Delayed Remediation: Six weeks after discovering the vulnerability in a critical payment processor, the patch remains undeployed as IT bureaucracy delays the maintenance window while exposed customer data hangs in the balance.

The Solution: Centralized SBOM Inventory + Automated Policy Enforcement

Organizations with mature software supply chain security leverage Anchore Enterprise to address these challenges through an integrated SBOM inventory and policy-as-code approach:

1. Anchore SBOM: Comprehensive Component Visibility

Anchore Enterprise transforms vulnerability response through its industry-leading SBOM repository. When a critical vulnerability like Ingress Nightmare emerges, security teams use Anchore’s intuitive dashboard to instantly answer the existential question: “Are we impacted?”

This approach works because:

  • Anchore SBOM provides security incident response teams with role-based access to a centralized inventory, cataloging every component across all Kubernetes clusters regardless of administrative boundaries
  • AnchoreCTL, a modern software composition analysis (SCA) scanner, identifies components missed by standard package manager checks (including binaries, language-specific packages, and container base images)
  • Anchore SBOM’s repository, with its purpose-built query engine, correlates vulnerabilities in seconds, turning days of manual work into a simple search operation
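To make the "simple search operation" concrete, here is a hedged sketch of what a query against a centralized SBOM inventory boils down to. The record layout, image names, and the patched version constant are illustrative, not Anchore's actual API:

```python
# Hypothetical sketch: querying an SBOM inventory for vulnerable components.
# Record fields and the patched version are illustrative, not Anchore's API.

PATCHED = (1, 12, 1)  # illustrative fixed version for CVE-2025-1974

def parse_version(v):
    """Turn 'v1.11.4' into a comparable tuple (1, 11, 4)."""
    return tuple(int(p) for p in v.lstrip("v").split("."))

def affected_images(inventory):
    """Return images whose SBOM lists a vulnerable ingress-nginx controller."""
    hits = []
    for sbom in inventory:
        for comp in sbom["components"]:
            if comp["name"] == "ingress-nginx" and parse_version(comp["version"]) < PATCHED:
                hits.append(sbom["image"])
    return hits

inventory = [  # made-up SBOM records standing in for the central inventory
    {"image": "registry.corp/payments/gateway:3.4",
     "components": [{"name": "ingress-nginx", "version": "v1.11.4"}]},
    {"image": "registry.corp/web/frontend:9.1",
     "components": [{"name": "nginx", "version": "1.25.3"}]},
]
print(affected_images(inventory))  # → ['registry.corp/payments/gateway:3.4']
```

With the inventory already populated by CI/CD scans, answering "are we impacted?" is a single lookup rather than a fleet-wide kubectl exercise.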

2. Anchore Enforce: Automated Policy Enforcement

Beyond just identifying vulnerable components, Anchore Enforce’s policy engine integrates directly into an existing CI/CD pipeline (i.e., policy-as-code security gates). This automatically answers the follow-up questions: “Where and how do we remediate?”

Anchore Enforce empowers teams to:

  • Alert code owners to the specific location of vulnerable components
  • Provide remediation recommendations directly in developer workflows (Jira, Slack, GitLab, GitHub, etc.)
  • Eliminate manual coordination between security and development teams with the policy engine and DevTools-native integrations

Quantifiable Benefits: No Panic, Reduced Effort & Reduced Risk

Organizations that implement this approach see dramatic improvements across multiple dimensions:

  1. Eliminated Panic: The fear and uncertainty that typically accompany vulnerability disclosures disappear when you can answer “Does this impact us?” in minutes rather than days.
    • Immediate clarity on the impact of the disclosure is at your fingertips with the Anchore SBOM inventory and Kubernetes Runtime Dashboard
  2. Reduced Detection Effort: The labor-intensive coordination between security, platform, and application teams becomes unnecessary.
    • Security incident response teams already have access to all the data they need through the centralized Anchore SBOM inventory generated as part of normal CI/CD pipeline use.
  3. Minimized Exploitation Risk: The window of vulnerability shrinks dramatically as developers are able to address vulnerabilities before they can be exploited.
    • Developers receive automated alerts and remediation recommendations from Anchore Enforce’s policy engine that integrate natively with existing development workflows.

How to Mitigate CVE-2025-1974 with Anchore Enterprise

Let’s walk through how to detect and mitigate CVE-2025-1974 with Anchore Enterprise across a Kubernetes cluster. The Kubernetes Runtime Dashboard serves as the user interface for your SBOM database. We’ll demonstrate how to:

  • Identify container images with ingress-nginx integrated
  • Locate images where CVE-2025-1974 has been detected
  • Generate reports of all vulnerable container images
  • Generate reports of all vulnerable running container instances in your Kubernetes cluster

Step 1: Identify location(s) of impacted assets

The Anchore Enterprise Dashboard can be filtered to show all clusters with the ingress-nginx controller deployed. Thanks to the existing SBOM inventory of cluster assets, this becomes a straightforward task, allowing you to quickly pinpoint where vulnerable components might exist.

Step 2: Drill into container image analysis for additional details

By examining vulnerability and policy compliance analysis at the container image level, you gain increased visibility into the potential cluster impact. This detailed view helps prioritize remediation efforts based on risk levels.

Step 3: Drill down into container image vulnerability report

When you drill down into the CVE-2025-1974 vulnerability, you can view additional details that help understand its nature and impact. Note the vulnerability’s unique identifier, which will be needed for subsequent steps. From here, you can click the ‘Report’ button to generate a comprehensive vulnerability report for CVE-2025-1974.

Step 4: Configure a vulnerability report for CVE-2025-1974

To generate a report on all container images tagged with the CVE-2025-1974 unique vulnerability ID:

  • Select the Vulnerability Id filter
  • Paste the CVE-2025-1974 vulnerability ID into the filter field
  • Click ‘Preview Results’ to see affected images

Step 5: Generate container image vulnerability report

The vulnerability report identifies all container images tagged with the unique vulnerability ID. To remediate the vulnerability effectively, base images that running instances are built from need to be updated to ensure the fix propagates across all cluster services.

Step 6: Generate Kubernetes namespace vulnerability report

While there may be only two base images containing the vulnerability, these images might be reused across multiple products and services in the Kubernetes cluster. A report based solely on base images can obscure the true scale of vulnerable assets in a cluster. A namespace-based report provides a more accurate picture of your exposure.
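The fan-out from base images to workloads is the whole point of the namespace report. This sketch, using made-up workload data, shows how a count of two vulnerable base images can understate the number of exposed services:

```python
# Illustrative sketch: a vulnerable base image fans out into many running
# workloads, so an image-level count understates real cluster exposure.
from collections import Counter

workloads = [  # (namespace, workload, base_image) — made-up data
    ("payments", "gateway",  "corp/base-nginx:1.11"),
    ("payments", "refunds",  "corp/base-nginx:1.11"),
    ("web",      "frontend", "corp/base-nginx:1.11"),
    ("internal", "reports",  "corp/base-alpine:3.19"),
]
vulnerable_bases = {"corp/base-nginx:1.11"}

# Image-level view: one vulnerable base. Namespace-level view: three workloads.
exposed = [(ns, wl) for ns, wl, base in workloads if base in vulnerable_bases]
by_namespace = Counter(ns for ns, _ in exposed)

print(len(vulnerable_bases), "vulnerable base image(s) →", len(exposed), "exposed workloads")
print(dict(by_namespace))  # → {'payments': 2, 'web': 1}
```

One base image, three exposed workloads: the namespace report is what tells you where remediation actually has to land.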

Wrap-Up: Building Resilience Before the Crisis

The next Ingress Nightmare-level vulnerability isn’t a question of if, but when. Organizations that invest in software supply chain security before a crisis occurs will respond with targeted remediation rather than scrambling in war rooms.

Anchore’s SBOM-powered SCA provides the comprehensive visibility and automated policy enforcement needed to transform vulnerability response from a chaotic emergency into a routine, manageable process. By building software supply chain security into your DevSecOps pipeline today, you ensure you’ll have the visibility you need when it matters most.

Ready to see how Anchore Enterprise can strengthen your Kubernetes security posture? Request a demo today to learn how our solutions can help protect your critical infrastructure from vulnerabilities like CVE-2025-1974.


Learn how Spectro Cloud secured their Kubernetes-based software supply chain and the pivotal role SBOMs played.

The NVD Enrichment Crisis: One Year Later—How Anchore is Filling the Vulnerability Data Gap

About one year ago, Anchore’s own Josh Bressers broke the story that NVD (National Vulnerability Database) was not keeping up with its vulnerability enrichment. This week, we sat down with Josh to see how things are going.

> Josh, can you tell our readers what you mean when you say NVD stopped enriching data?

Sure! When people or organizations disclose a new security vulnerability, it’s often just a CVE (Common Vulnerabilities and Exposures) number (like CVE-2024-1234) and a description. 

Historically, NVD would take this data, and NVD analysts would add two key pieces of information: the CPEs (Common Platform Enumerations), which are meant to identify the affected software, and the CVSS (Common Vulnerability Scoring System) score, which is meant to give users of the data a sense of how serious the vulnerability is and how it can be exploited. 

For many years, NVD kept up pretty well. Then, in March 2024, they stopped.

> That sounds bad. Were they able to catch up?

Not really. 

One of the problems they face is that the number of CVEs in existence is growing exponentially. They were having trouble keeping up in 2024, and 2025 is producing CVEs even faster than 2024 did; on top of that, they still have the backlog of CVEs that weren’t enriched during 2024. 

It seems unlikely that they can catch up at this point.

Graph showing how few CVE IDs are being enriched with matching data since April 2024
Graph showing the number of total CVEs (green) and the number of enriched CVEs (red). “The line slopes say it all”—NVD is behind and the number of unreviewed CVEs is growing.

> So what’s the upshot here? Why should we care that NVD isn’t able to enrich vulnerabilities?

Well, there are basically two problems with NVD not enriching vulnerabilities. 

First, if they don’t have CPEs on them, there’s no machine-readable way to know what software they affect. In other words, part of the work NVD was doing is writing down what software (or hardware) is affected in a machine-readable way, enabling vulnerability scanners and other software to tell which components are affected. 

The loss of this is obviously bad. It means that there is a big pile of security flaws that are public—meaning that threat actors know about them—but security teams will have a harder time detecting them. Un-enriched CVEs are not labeled with CPEs, so programmatic analysis is off the table and teams will have to fall back to manual review.

Second, enrichment of CVEs is supposed to add a CVSS score—essentially a severity level—to CVEs. CVSS isn’t perfect, but it does allow organizations to say things like, “this vulnerability is very easy to exploit, so we need to get it fixed before this other CVE which is very hard to exploit.” Without CVSS or something like it, these tradeoffs are much harder for organizations to make.
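The triage problem Josh describes can be sketched in a few lines. The CVE IDs and scores below are made up; the point is that un-enriched CVEs drop out of any automated ranking and fall back to manual review:

```python
# Illustrative sketch: triaging CVEs by CVSS when some were never enriched.
# CVE IDs and scores are invented for illustration.

cves = [  # (cve_id, cvss_score or None if un-enriched)
    ("CVE-2024-1111", 9.8),
    ("CVE-2024-2222", None),   # never enriched by NVD — no score available
    ("CVE-2024-3333", 4.3),
    ("CVE-2024-4444", 7.5),
]

# Scored CVEs can be ranked automatically, worst first.
scored = sorted((c for c in cves if c[1] is not None),
                key=lambda c: c[1], reverse=True)
# Un-scored CVEs cannot be ranked at all — they need a human.
unscored = [c for c in cves if c[1] is None]

print("patch order:", [cid for cid, _ in scored])
print("needs manual review:", [cid for cid, _ in unscored])
```

Every un-enriched CVE lands in the manual-review bucket, which is exactly the labor the enrichment backlog pushes back onto security teams.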

> And this has been going on for more than a year? That sounds bad. What is Anchore doing to keep their customers safe?

The first thing we needed to do was make a place where we can take up some of the slack that NVD can’t. To do this, we created a public database of our own CVE enrichment. This means that, when major CVEs are disclosed, we can enrich them ahead of NVD, so that our scanning tools (both Grype and Anchore Secure) are able to detect vulnerable packages—even if NVD never has the resources to look into that particular CVE.

Additionally, because NVD severity scores are becoming less reliable and less available, we’ve built a prioritization algorithm into Anchore Secure that allows customers to keep doing the kind of triaging they used to rely on NVD CVSS for.

> Is the vulnerability enhancement data publicly available?

Yes, the data is publicly available. 

Also, the process for changing it is out in the open. One of the more frustrating things about working with NVD enrichment was that sometimes they would publish an enrichment with really bad data and then all you could do was email them—sometimes they would fix it right away and sometimes they would never get to it.

With Anchore’s open vulnerability data, anyone in the community can review and comment on these enrichments.

> So what are your big takeaways from the past year?

I think the biggest takeaway is that we can still do vulnerability matching. 

We’re pulling together our own public vulnerability database, plus data feeds from various Linux distributions and of course GitHub Security Advisories to give our customers the most accurate vulnerability scan we can. In many ways, reducing our reliance on NVD CPEs has improved our matching (see this post, for example).

The other big takeaway is that, because so much of our data and tooling are open source, the community can benefit from and help with our efforts to provide the most accurate security tools in the world.

> What can community members do to help?

Well, first off, if you’re really interested in vulnerability data or have expertise with the security aspects of specific open source projects/operating systems, head on over to our vulnerability enhancement repo or start contributing to the tools that go into our matching like Syft, Grype, and Vunnel.

But the other thing to do, and I think more people can do this, is just use our open source tools!

File issues when you find things that aren’t perfect. Ask questions on our forum.

And of course, when you get to the point that you have dozens of folders full of Syft SBOMs and tons of little scripts running Grype everywhere—call us—and we can let Anchore Enterprise take care of that for you.


Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Automate Your Compliance: How Anchore Enforce Secures the Software Supply Chain

In an era where a single line of compromised code can bring entire enterprise systems to their knees, software supply chain security has transformed from an afterthought to a mission-critical priority. The urgency is undeniable: software supply chain attacks grew by a staggering 540% year-over-year from 2019 to 2022. 

Organizations have taken notice—the priority given to software supply chain security saw a remarkable 200% increase in 2024 alone, signaling a collective awakening to the existential threat of supply chain attacks. Cybercriminals are no longer just targeting individual applications—they’re weaponizing the complex, interconnected software supply chains that power global businesses.

To combat this rising threat, organizations are deploying platforms to automate BOTH detecting vulnerabilities AND enforcing supply chain security policies. This one-two combo reduces the risk that a breach at a 3rd-party supplier cascades into their software environment.

Anchore Enforce, a module of Anchore Enterprise, enables organizations to automate both security and compliance policy checks throughout the development lifecycle. It allows teams to shift compliance left and easily generate reporting evidence for auditors by defining detailed security standards and internal best practices ‘as-code’.

In this blog post, we’ll demonstrate how to get started with using Anchore Enforce’s policy engine to automate both discovering non-compliant software and preventing it from reaching production.


Learn about software supply chain security in the real world with former Google Distinguished Engineer, Kelsey Hightower.

Software Security in the Real World with Kelsey Hightower

A Brief Primer on Policy-as-Code & Policy Packs

Policy-as-code (PaC) translates organizational policies—whether security requirements, licensing restrictions, or compliance mandates—from human-readable documentation into machine-executable code that integrates with your existing DevSecOps platform and tooling. This typically comes in the form of a policy pack.

A policy pack is a set of pre-defined security and compliance rules the policy engine executes to evaluate source code, container images or binaries.

To make policy integration as easy as possible, Anchore Enforce comes with out-of-the-box policy packs for a number of popular compliance frameworks (e.g., FedRAMP or STIG compliance).

A policy consists of three key components: 

  • Triggers: The code that checks whether a specific compliance control is present and configured correctly
  • Gates: A group of triggers that acts as a checklist of security controls to check for
  • Actions: A stop, warn or go directive explaining the policy-compliant action to take
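The three components above can be sketched as a toy policy engine. The gate names, trigger rules, and image record are hypothetical; Anchore Enforce's real policy format differs:

```python
# Minimal policy-engine sketch of triggers, gates, and stop/warn/go actions.
# Rules and field names are hypothetical, not Anchore Enforce's actual format.

def no_critical_vulns(image):           # trigger: one compliance check
    return not any(v["severity"] == "critical" for v in image["vulns"])

def no_root_user(image):                # trigger: another compliance check
    return image["user"] != "root"

gates = [  # gate = checklist of triggers, each with an action on failure
    ("vulnerabilities", no_critical_vulns, "stop"),
    ("dockerfile",      no_root_user,      "warn"),
]

def evaluate(image):
    """Run every gate; 'stop' outranks 'warn', which outranks 'go'."""
    verdict = "go"
    for gate, trigger, action in gates:
        if not trigger(image):
            print(f"{gate}: {trigger.__name__} failed → {action}")
            if action == "stop":
                verdict = "stop"
            elif action == "warn" and verdict == "go":
                verdict = "warn"
    return verdict

image = {"user": "root",
         "vulns": [{"id": "CVE-2021-44228", "severity": "critical"}]}
print(evaluate(image))  # → stop
```

A CI/CD gate then simply fails the build on "stop", surfaces a warning on "warn", and passes on "go".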

To better understand PaC and policy packs, we use airport security as an analogy.

When you travel, you pass through multiple checkpoints, each designed to identify and catch different risks. At security screening, officers check for weapons, liquids, and explosives. At immigration control, officials verify visas and passports. If something is wrong, like an expired visa or a prohibited item, you might be stopped, warned, or denied entry.

Anchore Enforce works in a similar way for container security. Policy gates act as checkpoints, ensuring only safe and compliant images are deployed. One aspect of a policy might check for vulnerabilities (like a security screening for dangerous items), while another ensures software licenses are valid (like immigration checking travel documents). If a container has a critical flaw such as a vulnerable version of Log4j it gets blocked, just like a flagged passenger would be stopped from boarding a flight.

By enforcing these policies, Anchore Enforce helps secure an organization’s software supply chain, just as airport security keeps dangerous passengers and items from making it through.

If you’re looking for a deeper dive on PaC, read Anchore’s Developer’s Guide to SBOMs & Policy-as-Code.

Getting Started: the Developer Perspective

Getting started with Anchore Enforce is easy, but determining where to insert it into your workflow is critical. A perfect home for Anchore Enforce is within the CI/CD process, specifically as a distributed scan during the local build step. 

This approach enables rapid feedback for developers, providing a gate which can determine whether a build should progress or halt depending on your policies.

Container images are great for software developers—they encapsulate an application and all of its dependencies into a portable package, providing consistency and simplified management. As a developer, you might be building a container image on a local machine or in a pipeline, using Docker and a dockerfile.

For this example, we’ll assume you are using a GitLab Runner to run a job which builds an image for your application. We’ll also be using AnchoreCTL, Anchore Enterprise’s CLI tool, to automate calling Anchore Enforce’s policy engine to evaluate your container against the CIS security standard—a set of industry standard container security best practices.

First, you’ll want to set a number of environment variables in your GitLab repository:

ANCHORECTL_USERNAME  (protected)
ANCHORECTL_PASSWORD (protected and masked)
ANCHORECTL_URL (protected)
ANCHORECTL_ACCOUNT

These variables will be used to authenticate against your Anchore Enterprise deployment. Anchore Enterprise also supports API keys.  

Next, you’ll want to set up your GitLab Runner job definition whereby AnchoreCTL is run after you’ve built a container image. The job definition below shows how you might build an image, then run AnchoreCTL to perform a policy evaluation:

### Anchore Distributed Scan
  # You will need three variables defined:
  # ANCHORECTL_USERNAME
  # ANCHORECTL_PASSWORD
  # ANCHORECTL_URL
  # ANCHORECTL_ACCOUNT

.login_gitlab_registry:
  - echo "$CI_REGISTRY_PASSWORD" | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin

.install_anchorectl_alpine:
  - apk update && apk add curl
  - 'echo "Downloading anchorectl from: ${ANCHORECTL_URL}"'
  - 'curl "$ANCHORECTL_URL/v2/system/anchorectl?operating_system=linux&architecture=amd64" -H "accept: */*" | tar -zx anchorectl && mv -v anchorectl /usr/bin && chmod +x /usr/bin/anchorectl && /usr/bin/anchorectl version'

image: docker:latest
services:
- docker:dind
stages:
- build
- anchore
variables:
  ANCHORECTL_FAIL_BASED_ON_RESULTS: "true"
  ANCHORE_IMAGE: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_SLUG}

Build:
  stage: build
  before_script:
    - !reference [.login_gitlab_registry]
  script:
    - docker build -t ${ANCHORE_IMAGE} .
    - docker push ${ANCHORE_IMAGE}

Anchore:
  stage: anchore
  before_script:
    - !reference [.install_anchorectl_alpine]
    - !reference [.login_gitlab_registry]
  script:
    - 'export PATH="${HOME}/.local/bin/:${PATH}"'
    ### scan image and push to anchore enterprise
    - anchorectl image add --no-auto-subscribe --wait --dockerfile ./Dockerfile --from registry ${ANCHORE_IMAGE} 
    ### then get the results:
    - anchorectl image check --detail ${ANCHORE_IMAGE} 

The following environment variable (which can also be passed as the -f flag to AnchoreCTL) ensures that the return code is set to 1 if the policy evaluation result shows as ‘fail’. You can use this to break your build:

  ANCHORECTL_FAIL_BASED_ON_RESULTS: "true"

Then the AnchoreCTL image check command can be used to either validate against the default policy or specifically against a given policy (using the -p flag). This could be useful since your account in Anchore Enterprise can only have one default policy permanently active:

anchorectl image check --detail ${ANCHORE_IMAGE} -p <DESIRED_POLICY_ID>

When executed, this pipeline will scan your container image against your selected policy requirements and immediately provide feedback. Developers see exactly which policy gates failed and receive specific remediation steps, often as simple as updating a package or adjusting a configuration parameter.

And that’s it! With a few extra lines in your job definition, you’re now validating your newly built image against Anchore Enterprise for policy violations.

On failure, the job will stop, and the --detail option will give you an explanation of the failures along with remediation recommendations. This is a great way to get fast feedback and stop/warn/go directives directly within the development flow. 

Operationalizing Compliance Checks:  the Security Engineer Perspective

While developers benefit from shift-left security checks during builds, security teams need a broader view across the entire container landscape. They’ll likely be scanning containers after development teams have built them, once images have been pushed to a registry, staged for testing, or even deployed and running. The critical requirement for the security team is to evaluate a large number of images regularly against the latest critical vulnerabilities. This can also be done with policy evaluation: feeding everything in the registry through policy gates. 

Below you can see how the team might manage this via the Anchore Enforce user interface (UI). The security team has access to a range of policies, including:

  • CIS (included by default in all Anchore Enterprise deployments)
  • NIST 800-53
  • NIST 800-190
  • US DoD (Iron Bank)
  • DISA
  • FedRAMP

For this UI walkthrough, we will demonstrate the use-case using the CIS policy pack. Navigate to the policy section in your Anchore UI and activate your desired policy.

If you are an Anchore customer and do not have a desired policy pack, contact our Customer Success team for further information on entitlements.

Once this is activated, we will see how this is set in action by scanning an image. 

Navigate to Images, and select the image you want to check for compliance by clicking on the image digest. 

Once the policy check is complete, you will see a screen containing the results of the policy check.

This screen displays the actions applied to various artifacts based on Anchore Enforce’s policy engine findings, aligned with the rules defined in the policy packs. It also highlights the specific rule an artifact is failing. Based on these results, you can determine the appropriate remediation approach. 

The security team can generate reports in JSON or CSV format, simplifying the sharing of compliance check results.

Wrap-Up

As software supply chain attacks continue to evolve and grow in sophistication, organizations need robust, automated solutions to protect their environments. Anchore Enforce delivers exactly that by providing:

  • Automated compliance enforcement that catches issues early in the development process, when they’re easiest and least expensive to fix
  • Comprehensive policy coverage with pre-built packs for major standards like CIS, NIST, and FedRAMP that eliminate the need to translate complex requirements into executable controls
  • Flexible implementation options for both developers seeking immediate feedback and security teams managing enterprise-wide compliance
  • Actionable remediation guidance that helps teams quickly address policy violations without extensive research or security expertise

By integrating Anchore Enforce into your DevSecOps workflow, you’re not just checking a compliance box—you’re establishing a powerful defense against the rising tide of supply chain attacks. You’re also saving developer time, reducing friction between security and development teams, and building confidence with customers and regulators who demand proof of your security posture.

The software supply chain security challenge isn’t going away. With Anchore Enforce, you can meet it head-on with automation that scales with your organization. Reach out to our team to learn more or start a free trial to kick the tires yourself.


The Critical Role of SBOMs in PCI DSS 4.0 Compliance

Is your organization’s PCI compliance coming up for renewal in 2025? Or are you looking to achieve PCI compliance for the first time?

Version 4.0 of the Payment Card Industry Data Security Standard (PCI DSS) became mandatory on March 31, 2025. For enterprises utilizing a 3rd-party software supply chain—essentially all companies, according to The Linux Foundation’s report on open source penetration—PCI DSS v4.0 requires companies to maintain comprehensive inventories of supply chain components. The SBOM standard has become the cybersecurity industry’s consensus best practice for securing software supply chains and meeting the requirements mandated by regulatory compliance frameworks.

This document serves as a comprehensive guide to understanding the pivotal role of SBOMs in navigating the complexities of PCI DSS v4.0 compliance.


Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

Understanding the Fundamentals: PCI DSS 4.0 and SBOMs

What is PCI DSS 4.0?

Developed to strengthen the security of payment account data (e.g., credit cards) and standardize security controls globally, PCI DSS v4.0 represents the next evolution of this standard, ultimately benefiting consumers worldwide. 

This version supersedes PCI DSS v3.2.1, which was retired on March 31, 2024. The explicit goals of PCI DSS v4.0 include promoting security as a continuous process, enhancing flexibility in implementation, and introducing enhancements in validation methods. PCI DSS v4.0 achieves this by introducing a total of 64 new security controls.

NOTE: PCI DSS had a minor version bump to 4.0.1 in mid-2024. The update is limited and doesn’t add or remove any controls or change any deadlines, meaning the software supply chain requirements apply to both versions.

Demystifying SBOMs

A software bill of materials (SBOM) is fundamentally an inventory of all software dependencies utilized by a given application. Analogous to a “Bill of Materials” in manufacturing, which lists all raw materials and components used to produce a product, an SBOM provides a detailed list of software components, including libraries, 3rd-party software, and services, that constitute an application. 

The benefits of maintaining SBOMs are manifold, including enhanced transparency into the software supply chain, improved vulnerability management by identifying at-risk components, facilitating license compliance management, and providing a foundation for comprehensive supply chain risk assessment.

PCI DSS Requirement 6: Develop and Maintain Secure Systems and Software

PCI DSS Principal Requirement 6, titled “Develop and Maintain Secure Systems and Software,” aims to ensure the creation and upkeep of secure systems and applications through robust security measures and regular vulnerability assessments and updates. This requirement encompasses five primary areas:

  1. Processes and mechanisms for developing and maintaining secure systems and software are defined and understood
  2. Bespoke and custom software are developed securely
  3. Security vulnerabilities are identified and addressed
  4. Public-facing web applications are protected against attacks
  5. Changes to all system components are managed securely

Deep Dive into Requirement 6.3.2: Component Inventory for Vulnerability Management

Within the “Security vulnerabilities are identified and addressed” category of Requirement 6, Requirement 6.3.2 mandates: 

An inventory of bespoke and custom software, and 3rd-party software components incorporated into bespoke and custom software is maintained to facilitate vulnerability and patch management

The purpose of this evolving requirement is to enable organizations to effectively manage vulnerabilities and patches within all software components, including 3rd-party components such as libraries and APIs embedded in their bespoke and custom software. 

While PCI DSS v4.0 does not explicitly prescribe the use of SBOMs, they represent the cybersecurity industry’s consensus method for achieving compliance with this requirement by providing a detailed and readily accessible inventory of software components.

How SBOMs Enable Compliance with 6.3.2

By requiring an inventory of all software components, Requirement 6.3.2 necessitates a mechanism for comprehensive tracking. SBOMs automatically generate an inventory of all components in use, whether developed internally or sourced from third parties.

This detailed inventory forms the bedrock for identifying known vulnerabilities associated with these components. Platforms leveraging SBOMs can map component inventories to databases of known vulnerabilities, providing continuous insights into potential risks. 

Consequently, SBOMs are instrumental in facilitating effective vulnerability and patch management by enabling organizations to understand their software supply chain and prioritize remediation efforts.
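The matching flow described above can be sketched in a few lines of Python. This is a toy illustration, not how any particular platform implements it; "examplelib" and CVE-2099-0001 are invented, while the log4j-core entry reflects the real Log4Shell CVE:

```python
# Toy illustration of mapping an SBOM component inventory to a
# known-vulnerability database keyed by (name, version).
# "examplelib" and CVE-2099-0001 are invented; the log4j entry is real.

vuln_db = {
    ("log4j-core", "2.14.1"): ["CVE-2021-44228"],
    ("examplelib", "1.0.0"): ["CVE-2099-0001"],
}

sbom_components = [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "requests", "version": "2.31.0"},
]

def find_known_vulns(components, db):
    """Return {component name: [CVE ids]} for any component found in the database."""
    findings = {}
    for component in components:
        cves = db.get((component["name"], component["version"]))
        if cves:
            findings[component["name"]] = cves
    return findings

print(find_known_vulns(sbom_components, vuln_db))
# -> {'log4j-core': ['CVE-2021-44228']}
```

Real platforms work from continuously updated vulnerability feeds rather than a static dictionary, but the core operation—joining an inventory against a database—is the same.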

Connecting SBOMs to other relevant PCI DSS 4.0 Requirements

Beyond Requirement 6.3.2, SBOMs offer synergistic benefits in achieving compliance with other aspects of PCI DSS v4.0.

Requirement 11.3.1.1 

This requirement necessitates the resolution of high-risk or critical vulnerabilities. SBOMs enable ongoing vulnerability monitoring, providing alerts for newly disclosed vulnerabilities affecting the identified software components, thereby complementing the requirement for quarterly vulnerability scans. 

Platforms like Anchore Secure can track newly disclosed vulnerabilities against SBOM inventories, facilitating proactive risk mitigation.

Implementing SBOMs for PCI DSS 4.0: Practical Guidance

Generating Your First SBOM

The generation of SBOMs can be achieved through various methods. A Software Composition Analysis (SCA) tool, such as the open source Syft or the commercial AnchoreCTL, offers automated software composition scanning and SBOM generation for source code, containers, or software binaries.

Looking for a step-by-step “how to” guide for generating your first SBOM? Read our technical guide.

These tools integrate with build pipelines and can output SBOMs in standard formats like SPDX and CycloneDX. For legacy systems or situations where automated tools have limitations, manual inventory processes may be necessary, although this approach is generally less scalable and prone to inaccuracies. 

Regardless of the method, it is crucial to ensure the accuracy and completeness of the SBOM, including both direct and transitive software dependencies.

Essential Elements of an SBOM for PCI DSS

While PCI DSS v4.0 does not mandate specific data fields for SBOMs, it is prudent to include essential information that facilitates vulnerability management and component tracking. Drawing from recommendations by the National Telecommunications and Information Administration (NTIA), a robust SBOM should, at a minimum, contain:

  • Component Name
  • Version String
  • Supplier Name
  • Unique Identifier (e.g., PURL or CPE)
  • Component Hash
  • Author Name
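A simple completeness check over these minimum elements can be expressed in a few lines of Python. The field names below are illustrative; real SPDX and CycloneDX documents encode these elements with their own schema-specific keys:

```python
# Toy completeness check for the NTIA minimum elements listed above.
# Field names are illustrative, not taken from any real SBOM schema.

NTIA_MINIMUM_FIELDS = {"name", "version", "supplier", "identifier", "hash", "author"}

def missing_ntia_fields(component: dict) -> set:
    """Return the minimum-element fields absent from a component record."""
    return NTIA_MINIMUM_FIELDS - component.keys()

component = {
    "name": "openssl",
    "version": "3.0.13",
    "supplier": "OpenSSL Project",
    "identifier": "pkg:generic/openssl@3.0.13",  # a PURL
    "hash": "sha256:placeholder",                # component hash
}

print(missing_ntia_fields(component))
# -> {'author'}
```

Running a check like this across every SBOM you generate or receive is a lightweight way to catch incomplete inventories before they undermine vulnerability management.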

Operationalizing SBOMs: Beyond Inventory

The true value of an SBOM lies in its active utilization for software supply chain use-cases beyond component inventory management.

Vulnerability Management

SBOMs serve as the foundation for continuous vulnerability monitoring. By integrating SBOM data with vulnerability databases, organizations can proactively identify components with known vulnerabilities. Platforms like Anchore Secure enable the mapping of SBOMs to known vulnerabilities, tracking exploitability and patching cadence.

Patch Management

A comprehensive SBOM facilitates informed patch management by highlighting the specific components that require updating to address identified vulnerabilities. This allows security teams to prioritize patching efforts based on the severity and exploitability of the vulnerabilities within their software ecosystem.

Maintaining Vulnerability Remediation Documentation

It is essential to maintain thorough documentation of vulnerability remediation efforts to keep pace with the emerging continuous compliance trend from global regulatory bodies. Utilizing formats like CVE (Common Vulnerabilities and Exposures) or VEX (Vulnerability Exploitability eXchange) alongside SBOMs can provide a standardized way to communicate the status of vulnerabilities, whether a product is affected, and the steps taken for mitigation.
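In practice, a CycloneDX-style VEX record might look like the illustrative fragment below (the CVE is real, but the analysis values are invented examples, not taken from an actual advisory):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "vulnerabilities": [
    {
      "id": "CVE-2021-44228",
      "analysis": {
        "state": "not_affected",
        "justification": "code_not_reachable",
        "detail": "The vulnerable JndiLookup class is removed from the shipped artifact."
      }
    }
  ]
}
```

A consumer reading this record knows the product is not affected and why, without a meeting or an email thread.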

Acquiring SBOMs from Third-Party Suppliers

PCI DSS Requirement 6.3.2 explicitly includes 3rd-party software components. Therefore, organizations must not only generate SBOMs for their own bespoke and custom software but also obtain SBOMs from their technology vendors for any libraries, applications, or APIs that are part of the card processing environment.

Engaging with suppliers to request SBOMs, potentially incorporating this requirement into contractual agreements, is a critical step. It is advisable to communicate preferred SBOM formats (e.g., CycloneDX, SPDX) and desired data fields to ensure the received SBOMs are compatible with internal vulnerability management processes. Challenges may arise if suppliers lack the capability to produce accurate SBOMs; in such instances, alternative risk mitigation strategies and ongoing communication are necessary.

NOTE: Remember that the OSS maintainers who authored the open source components integrated into your application code are NOT 3rd-party suppliers in the traditional sense—you are! Almost all OSS licenses contain an “as is” clause that absolves the maintainers of liability for any code quality issues like vulnerabilities. This means that by using their code, you become responsible for any security vulnerabilities in it (both known and unknown).

Navigating the Challenges and Ensuring Success

Addressing Common Challenges in SBOM Adoption

Implementing SBOMs across an organization can present several challenges:

  • Generating SBOMs for closed-source or legacy systems where build tool integration is difficult may require specialized tools or manual effort
  • The volume and frequency of software updates necessitate automated processes for SBOM generation and continuous monitoring
  • Ensuring the accuracy and completeness of SBOM data, including all levels of dependencies, is crucial for effective risk management
  • Integrating SBOM management into existing software development lifecycle (SDLC) and security workflows requires collaboration and process adjustments
  • Effective SBOM adoption necessitates cross-functional collaboration between development, security, and procurement teams to establish policies and manage vendor relationships

Best Practices for SBOM Management

To ensure the sustained effectiveness of SBOMs for PCI DSS v4.0 compliance and beyond, organizations should adopt the following best practices:

  • Automate SBOM generation and updates wherever possible to maintain accuracy and reduce manual effort
  • Establish clear internal SBOM policies regarding format, data fields, update frequency, and retention
  • Select and implement appropriate SBOM management tooling that integrates with existing security and development infrastructure
  • Clearly define roles and responsibilities for SBOM creation, maintenance, and utilization across relevant teams
  • Provide education and training to development, security, and procurement teams on the importance and practical application of SBOMs

The Broader Landscape: SBOMs Beyond PCI DSS 4.0

As predicted, the global regulatory push toward software supply chain security and risk management with SBOMs as the foundation continues to gain momentum in 2025. PCI DSS v4.0 is the next major regulatory framework embracing SBOMs. This follows the pattern set by the US Executive Order 14028 and the EU Cyber Resilience Act, further cementing SBOMs as a cornerstone of modern cybersecurity best practice. 

Wrap-Up: Embracing SBOMs for a Secure Payment Ecosystem

The integration of SBOMs into PCI DSS v4.0 signifies a fundamental shift towards a more secure and transparent payment ecosystem. SBOMs are no longer merely a recommended practice but a critical component for achieving and maintaining compliance with the evolving requirements of PCI DSS v4.0, particularly Requirement 6.3.2. 

By providing a comprehensive inventory of software components and their dependencies, SBOMs empower organizations to enhance their security posture, reduce the risk of costly data breaches, improve their vulnerability management capabilities, and effectively navigate the complexities of regulatory compliance. Embracing SBOM implementation is not just about meeting a requirement; it is about building a more resilient and trustworthy software foundation for handling sensitive payment card data.

If you’re interested in learning more about how Anchore Enterprise can help your organization harden its software supply chain and achieve PCI DSS v4.0 compliance, get in touch with our team!


Interested to learn about all of the software supply chain use-cases that SBOMs enable? Read our new white paper and start unlocking enterprise value.

WHITE PAPER | Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization

The Developer’s Guide to SBOMs & Policy-as-Code

If you’re a developer, this vignette may strike a chord: You’re deep in the flow, making great progress on your latest feature, when someone from the security team sends you an urgent message. A vulnerability has been discovered in one of your dependencies and has failed a compliance review. Suddenly, your day is derailed as you shift from coding to a gauntlet of bureaucratic meetings.

This is an unfortunate reality for developers at organizations where security and compliance are bolt-on processes rather than integrated parts of the whole. Your valuable development time is consumed with digging through arcane compliance documentation, attending security reviews and being relegated to compliance training sessions. Every context switch becomes another drag on your productivity, and every delayed deployment impacts your ability to ship code.

Two niche DevSecOps/software supply chain technologies have come together to transform the dynamic between developers and organizational policy—software bills of materials (SBOMs) and policy-as-code (PaC). Together, they dramatically reduce the friction between development velocity and risk management requirements by making policy evaluation and enforcement:

  • Automated and consistent
  • Integrated into your existing workflows
  • Visible early in the development process

In this guide, we’ll explore how SBOMs and policy-as-code work, the specific benefits they bring to your daily development work, and how to implement them in your environment. By the end, you’ll understand how these tools can help you spend less time manually doing someone else’s job and more time doing what you do best—writing great code.


Interested to learn about all of the software supply chain use-cases that SBOMs enable? Read our new white paper and start unlocking enterprise value.

WHITE PAPER | Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization

A Brief Introduction to Policy-as-Code

You’re probably familiar with Infrastructure-as-Code (IaC) tools like Terraform, AWS CloudFormation, or Pulumi. These tools allow you to define your cloud infrastructure in code rather than clicking through web consoles or manually running commands. Policy-as-Code (PaC) applies this same principle to policies from other departments of an organization.

What is policy-as-code?

At its core, policy-as-code translates organizational policies—whether they’re security requirements, licensing restrictions, or compliance mandates—from human-readable documents into machine-readable representations that integrate seamlessly with your existing DevOps platform and tooling.

Think of it this way: IaC gives you a DSL for provisioning and managing cloud resources, while PaC extends this concept to other critical organizational policies that traditionally lived outside engineering teams. This creates a bridge between development workflows and business requirements that previously existed in separate silos.

Why do I care?

Let’s play a game of would you rather. Choose the activity from the table below that you’d rather do:

| Before Policy-as-Code | After Policy-as-Code |
| --- | --- |
| Read lengthy security/legal/compliance documentation to understand requirements | Reference policy translated into code with clear comments and explanations |
| Manually review your code for policy compliance and hope you interpreted the policy correctly | Receive automated, deterministic policy evaluation directly in the CI/CD build pipeline |
| Attend compliance training sessions because you didn’t read the documentation | Learn policies by example as concrete connections to actual development tasks |
| Set up meetings with security, legal, or compliance teams to get code approval | Get automated approvals through automated policy evaluation without review meetings |
| Wait till end of sprint and hope the VP of Eng can get an exception to ship with policy violations | Identify and fix policy violations early, when changes are simple to implement |

While the game is a bit staged, it isn’t divorced from reality. PaC is meant to relieve much of the development friction associated with the external requirements that are typically foisted onto the shoulders of developers.

From oral tradition to codified knowledge

Perhaps one of the most underappreciated benefits of policy-as-code is how it transforms organizational knowledge. Instead of policies living in outdated Word documents or in the heads of long-tenured employees, they exist as living code that evolves with your organization.

When a developer asks “Why do we have this restriction?” or “What’s the logic behind this policy?”, the answer isn’t “That’s just how we’ve always done it” or “Ask Alice in Compliance.” Instead, they can look at the policy code, read the annotations, and understand the reasoning directly.

In the next section, we’ll explore how software bills of materials (SBOMs) provide the perfect data structure to pair with policy-as-code for managing software supply chain security.

A Brief Introduction to SBOMs (in the Context of PaC)

If policy-as-code provides the rules engine for your application’s dependency supply chain, then Software Bills of Materials (SBOMs) provide the structured, supply chain data that the policy engine evaluates.

What is an SBOM?

An SBOM is a formal, machine-readable inventory of all components and dependencies used in building a software artifact. If you’re familiar with Terraform, you can think of an SBOM as analogous to a dev.tfstate file: it stores the state of your application code’s 3rd-party dependency supply chain, which is then reconciled against a main.tf file (i.e., the policy) to determine whether the software supply chain is compliant or in violation of the defined policy.

SBOMs vs package manager dependency files

You may be thinking, “Don’t I already have this information in my package.json, requirements.txt, or pom.xml file?” While these files declare your direct dependencies, they don’t capture the complete picture:

  1. They don’t typically include transitive dependencies (dependencies of your dependencies)
  2. They don’t include information about the components within container images you’re using
  3. They don’t provide standardized metadata about vulnerabilities, licenses, or provenance
  4. They aren’t easily consumable by automated policy engines across different programming languages and environments

SBOMs solve these problems by providing a standardized format that comprehensively documents your entire software supply chain in a way that policy engines can consistently evaluate.
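The gap between a manifest’s direct dependencies and the full picture (point 1 above) can be illustrated with a toy resolver over a hypothetical dependency graph—the package names and graph here are invented:

```python
# Hypothetical dependency graph: one direct dependency from a manifest,
# plus the transitive dependencies a package manager resolves behind it.
dep_graph = {
    "webframework": ["templating", "httpclient"],
    "templating": ["sandbox"],
    "httpclient": ["urlparser", "tlslib"],
    "sandbox": [],
    "urlparser": [],
    "tlslib": [],
}

def full_inventory(direct_deps, graph):
    """Flatten direct + transitive dependencies, as an SBOM records them."""
    seen = set()
    stack = list(direct_deps)
    while stack:
        pkg = stack.pop()
        if pkg not in seen:
            seen.add(pkg)
            stack.extend(graph.get(pkg, []))
    return sorted(seen)

print(full_inventory(["webframework"], dep_graph))
# The manifest declares 1 dependency; the SBOM captures all 6 components.
```

A vulnerability in `tlslib` would be invisible to anyone reading only the manifest, but it appears in the flattened inventory an SBOM provides.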

A universal policy interface: How SBOMs enable policy-as-code

Think of SBOMs as creating a standardized “policy interface” for your software’s supply chain metadata. Just as APIs create a consistent way to interact with services, SBOMs create a consistent way for policy engines to interact with your software’s composable structure.

This standardization is crucial because it allows policy engines to operate on a known data structure rather than having to understand the intricacies of each language’s package management system, build tool, or container format.

For example, a security policy that says “No components with critical vulnerabilities may be deployed to production” can be applied consistently across your entire software portfolio—regardless of the technologies used—because the SBOM provides a normalized view of the components and their vulnerabilities.
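Because the SBOM normalizes components from every ecosystem into one shape, a single policy function can evaluate all of them. The sketch below is a toy illustration—component names and severities are invented, and real policy engines consume richer data:

```python
# Components from different ecosystems (npm, PyPI, Debian) normalized to
# one shape by the SBOM; severities below are invented for illustration.
components = [
    {"purl": "pkg:npm/leftpad@1.3.0", "max_severity": "low"},
    {"purl": "pkg:pypi/imageparse@2.1.0", "max_severity": "critical"},
    {"purl": "pkg:deb/debian/zlib1g@1.2.13", "max_severity": "medium"},
]

def evaluate_policy(components):
    """Return ('stop', violations) if any component carries a critical vulnerability."""
    violations = [c["purl"] for c in components if c["max_severity"] == "critical"]
    return ("stop" if violations else "go", violations)

action, violations = evaluate_policy(components)
print(action, violations)
# -> stop ['pkg:pypi/imageparse@2.1.0']
```

Note that the policy function never needs to know whether a component came from npm, PyPI, or an OS package manager—that is the "universal interface" at work.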

In the next section, we’ll explore the concrete benefits that come from combining SBOMs with policy-as-code in your development workflow.

How Do I Get Started with SBOMs and Policy-as-Code?

Now that you understand what SBOMs and policy-as-code are and why they’re valuable, let’s walk through a practical implementation. We’ll use Anchore Enterprise as an example of a policy engine that has a DSL to express a security policy, which is then directly integrated into a CI/CD runbook. The example will focus on a common software supply chain security best practice: preventing the deployment of applications with critical vulnerabilities.

Tools we’ll use

For this example implementation, we’ll use the following components from Anchore:

  • AnchoreCTL: A software composition analysis (SCA) tool and SBOM generator that scans source code, container images or application binaries to populate an SBOM with supply chain metadata
  • Anchore Enforce: The policy engine that evaluates SBOMs against defined policies
  • Anchore Enforce JSON: The Domain-Specific Language (DSL) used to define policies in a machine-readable format

While we’re using Anchore in this example, the concepts apply to other SBOM generators and policy engines as well.

Step 1: Translate human-readable policies to machine-readable code

The first step is to take your organization’s existing policies and translate them into a format that a policy engine can understand. Let’s start with a simple but effective policy.

Human-Readable Policy:

Applications with critical vulnerabilities must not be deployed to production environments.

This policy needs to be translated into the Anchore Enforce JSON policy format:

{
  "id": "critical_vulnerability_policy",
  "version": "1.0",
  "name": "Block Critical Vulnerabilities",
  "comment": "Prevents deployment of applications with critical vulnerabilities",
  "rules": [
    {
      "id": "block_critical_vulns",
      "gate": "vulnerabilities",
      "trigger": "package",
      "comment": "Rule evaluates each dependency in an SBOM against the vulnerability database. If the dependency is found in the database, all known vulnerability severity scores are evaluated for a critical value. If a match is found, the policy engine returns a STOP action to the CI/CD build task",
      "parameters": [
        { "name": "package_type", "value": "all" },
        { "name": "severity_comparison", "value": "=" },
        { "name": "severity", "value": "critical" }
      ],
      "action": "stop"
    }
  ]
}

This policy code instructs the policy engine to:

  1. Examine all application dependencies (i.e., packages) in the SBOM
  2. Check if any dependency/package has vulnerabilities with a severity of “critical”
  3. If found, return a “stop” action that will fail the build

If you’re looking for more information on the Anchore Enforce DSL, our documentation covers the full capabilities of the Anchore Enforce policy engine.

Step 2: Deploy Anchore Enterprise with the policy engine

With the example policy defined, the next step is to deploy Anchore Enterprise (AE) and configure the Anchore Enforce policy engine. The high-level steps are:

  1. Deploy Anchore Enterprise platform in your test environment via Helm Chart (or other); includes policy engine
  2. Load your policy into the policy engine
  3. Configure access controls/permissions between AE deployment and CI/CD build pipeline

If you’d like to get hands-on with this, we have developed a self-paced workshop that walks you through a full deployment and how to set up a policy. You can get a trial license by signing up for our free trial.

Step 3: Integrate SBOM generation into your CI/CD pipeline

Now you need to generate SBOMs as part of your build process and have them evaluated against your policies. Here’s an example of how this might look in a GitHub Actions workflow:

name: Build App and Evaluate Supply Chain for Vulnerabilities

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build-and-evaluate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      
      - name: Build Application
        run: |
          # Build application as container image
          docker build -t myapp:latest .
      
      - name: Generate SBOM
        run: |
          # Install AnchoreCTL
          curl -sSfL https://anchorectl-releases.anchore.io/v1.0.0/anchorectl_1.0.0_linux_amd64.tar.gz | tar xzf - -C /usr/local/bin
          
          # Execute supply chain composition scan of container image, generate SBOM and send to policy engine for evaluation
          anchorectl image add --wait myapp:latest
          
      - name: Evaluate Policy
        run: |
          # Get policy evaluation results
          RESULT=$(anchorectl image check myapp:latest --policy critical_vulnerability_policy)
          
          # Handle the evaluation result
          if [[ $RESULT == *"Status: pass"* ]]; then
            echo "Policy evaluation passed! Proceeding with deployment."
          else
            echo "Policy evaluation failed! Deployment blocked."
            exit 1
          fi
      
      - name: Deploy if Passed
        if: success()
        run: |
          # Your deployment steps here

This workflow:

  1. Builds your application as a container image using Docker
  2. Installs AnchoreCTL
  3. Scans container image with SCA tool to map software supply chain
  4. Generates an SBOM based on the SCA results
  5. Submits the SBOM to the policy engine for evaluation
  6. Gets evaluation results from policy engine response
  7. Continues or halts the pipeline based on the policy response

Step 4: Test the integration

With the integration in place, it’s time to test that everything works as expected:

  1. Create a test build that intentionally includes a component with a known critical vulnerability
  2. Push the build through your CI/CD pipeline
  3. Confirm that:
    • The SBOM is correctly generated
    • The policy engine identifies the vulnerability
    • The pipeline fails as expected

If all goes well, you’ve successfully implemented your first policy-as-code workflow using SBOMs!

Step 5: Expand your policy coverage

Once you have the basic integration working, you can begin expanding your policy coverage to include:

  • Security policies
  • Compliance policies
  • Software license policies
  • Custom organizational policies
  • Environment-specific requirements (e.g., stricter policies for production vs. development)

Work with your security and compliance teams to translate their requirements into policy code, and gradually expand your automated policy coverage. This process is a large upfront investment but creates recurring benefits that pay dividends over the long-term.

Step 6: Profit!

With SBOMs and policy-as-code implemented, you’ll start seeing the benefits almost immediately:

  • Fast feedback on security and compliance issues
  • Reduced manual compliance tasks
  • Better documentation of what’s in your software and why
  • Consistent evaluation and enforcement of policies
  • Certainty about policy approvals

The key to success is getting your security and compliance teams to embrace the policy-as-code approach. Help them understand that by translating their policies into code, they gain more consistent enforcement while reducing manual effort.

Wrap-Up

As we’ve explored throughout this guide, SBOMs and policy-as-code represent a fundamental shift in how developers interact with security and compliance requirements. Rather than treating these as external constraints that slow down development, they become integrated features of your DevOps pipeline.

Key takeaways

  • Policy-as-Code transforms organizational policies from static documents into dynamic, version-controlled code that can be automated, tested, and integrated into CI/CD pipelines.
  • SBOMs provide a standardized format for documenting your software’s components, creating a consistent interface that policy engines can evaluate.
  • Together, they enable “shift-left” security and compliance, providing immediate feedback on policy violations without meetings or context switching.
  • Integration is straightforward with pre-built plugins for popular DevOps platforms, allowing you to automate policy evaluation as part of your existing build process.
  • The benefits extend beyond security to include faster development cycles, reduced compliance burden, and better visibility into your software supply chain.

Get started today

Ready to bring SBOMs and policy-as-code to your development environment? Anchore Enterprise provides a comprehensive platform for generating SBOMs, defining policies, and automating policy evaluation across your software supply chain.


Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Software Supply Chain Transparency: Why SBOMs Are the Missing Piece in Your ConMon Strategy

Two cybersecurity buzzwords are rapidly shaping how organizations manage risk and streamline operations: Continuous Monitoring (ConMon) and Software Bill of Materials (SBOMs). ConMon, rooted in the traditional security principle—“trust but verify”—has evolved into an iterative process where organizations measure, analyze, design, and implement improvements based on real-time data. Meanwhile, SBOMs offer a snapshot of an application’s composition (i.e., 3rd-party dependency supply chain) at any given point in the DevSecOps lifecycle. 

Understanding these concepts is crucial because together they unlock significant enterprise value—ranging from rapid zero-day response to scalable vulnerability management and even automated compliance enforcement. By integrating SBOMs into a robust ConMon strategy, organizations can not only mitigate supply chain risks but also ensure that every stage of software development adheres to stringent security and compliance standards.

A Brief Introduction to Continuous Monitoring (ConMon)

Continuous Monitoring is a wide-ranging topic applied across various contexts such as application performance, error tracking, and security oversight. In the most general sense, ConMon is an iterative cycle of:

  • Measure: Collect relevant data.
  • Analyze: Analyze raw data and generate actionable insights.
  • Design: Develop solution(s) based on the insights.
  • Implement: Execute solution to resolve issue(s) or improve performance.
  • Repeat

ConMon is underpinned by the well-known management theory maxim, “You can’t manage what you don’t measure.” Historically, the term has its origins in the federal compliance world—think FedRAMP and cATO—where continuous monitoring was initially embraced as an evolution of traditional point-in-time security compliance audits.

So, where do SBOMs fit into this picture?

A Brief Introduction to SBOMs (in the Context of ConMon)

In the world of ConMon, SBOMs are a source of data that can be measured and analyzed to extract actionable insights. Specifically, an SBOM is a structured store of software supply chain metadata that tracks the evolution of a software artifact’s supply chain as it develops throughout the software development lifecycle.

An SBOM catalogs information like software supply chain dependencies, security vulnerabilities, and licensing contracts. In this way, SBOMs act as the source of truth for the state of an application’s software supply chain during the development process.

SBOM Lifecycle Graphic

The ConMon process utilizes the supply chain data stored in the SBOMs to generate actionable insights like: 

  • uncovering critical supply chain vulnerabilities,
  • identifying legally risky open source licenses, or 
  • catching new software dependencies that break key compliance frameworks like FedRAMP.

SBOMs are the key data source that allows ConMon to apply its goal of continuously monitoring and improving an organization’s software development environment—specifically, the software supply chain.
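The rapid zero-day response that a centralized SBOM repository enables ultimately reduces to a single indexed lookup. The sketch below is a toy illustration—application names, contents, and versions are invented:

```python
# Toy centralized SBOM repository: application -> (package, version) pairs.
# Application names and versions are invented for illustration.
sbom_repository = {
    "payments-api": [("ingress-nginx", "1.9.0"), ("openssl", "3.0.13")],
    "web-frontend": [("react", "18.2.0")],
    "batch-worker": [("ingress-nginx", "1.11.0")],
}

def affected_apps(package, repository):
    """Which applications contain the named package, and at which version?"""
    return {
        app: version
        for app, deps in repository.items()
        for name, version in deps
        if name == package
    }

print(affected_apps("ingress-nginx", sbom_repository))
# -> {'payments-api': '1.9.0', 'batch-worker': '1.11.0'}
```

When a zero-day drops, this one query replaces hours of manually auditing each team’s dependencies.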

Benefits of using SBOMs as the central component of ConMon

As the cornerstone of the software supply chain, SBOMs play a central role in supply chain security, which falls under the purview of the continuous monitoring workflow. Given this, it shouldn’t be a surprise that there are cross-over use-cases between the two domains. Leveraging the standardized data structure of SBOMs unlocks a wealth of opportunities for automating supply chain security use-cases and scaling the principles of ConMon. Key use-cases and benefits include:

  1. Rapid incident response to zero-day disclosures
  • Reduced exploitation risk: SBOMs reduce the risk of supply chain exploitation by dramatically reducing the time to identify if vulnerable components are present in an organization’s software environment and how to prioritize remediation efforts.
  • Prevent wasted organizational resources: A centralized SBOM repository enables organizations to automate manual dependency audits into a single search query. This prevents the need to re-task software engineers away from development work to deal with incident response.
  2. Software dependency drift detection
  • Reduced exploitation risk: When SBOM generation is integrated natively into the DevSecOps pipeline, a historical record of the development of an application becomes available. Each development stage is compared against the previous to identify dependency injection in real-time. Catching and remediating malicious injections as early as possible significantly reduces the risk of exploitation by threat actors.
  3. Proactive and scalable vulnerability management
  • Reduced security risk: SBOMs empower organizations to decouple software composition scanning from vulnerability scanning, enabling a scalable vulnerability management approach that meets cloud-native expectations. By generating an SBOM early in the DevSecOps pipeline, teams can continuously cross-reference software components against known vulnerability databases, proactively detecting risks before they reach production.
  • Reduced time on vulnerability management: This streamlined process reduces the manual tasks associated with legacy vulnerability management. Teams are now enabled to focus their efforts on higher-level issues rather than be bogged down with tedious manual tasks.
  4. Automated non-compliance alerting and enforcement
  5. Automated compliance report generation
  • Reduce time spent generating compliance audit evidence: Manual generation of compliance audit reports to prove security best practices are in place is a time-consuming process. SBOMs unlock the power to automate the generation of audit evidence for the software supply chain. This protects organizational resources for higher-value tasks.
  6. Automated OSS license management
  • Reduced legal risk: An adversarial OSS license accidentally entering a commercial application opens up an enterprise to significant legal risk. SBOMs enable organizations to automate the process of scanning for OSS licenses and assessing the legal risk of the entire software supply chain. Having real-time visibility into the licenses of an organization’s entire supply chain dramatically reduces the risk of legal penalties.

In essence, SBOMs serve as the heart of a robust ConMon strategy, providing the critical data needed to scale automation and ensure that the software supply chain remains secure and compliant.


Interested to learn about all of the software supply chain use-cases that SBOMs enable? Read our new white paper and start unlocking enterprise value.

WHITE PAPER Rnd Rect | Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization

SBOMs and ConMon applied to the DevSecOps lifecycle

Integrating SBOMs within the DevSecOps lifecycle enables organizations to realize security outcomes efficiently through a cyclical process of measurement, analysis, design and implementation. We’ll go through each step of the process.

1. Measure

Measurement is the most obvious stage of ConMon given its synonym “monitoring” is in the name. The measurement step is primarily concerned with collecting as much data as possible that will later power the analysis stage. ConMon traditionally focuses on collecting security specific data like:

  • Observability Telemetry: Application logs, metrics, and traces
  • Audit Logs: Records of application administration: access and modifications
  • Supply Chain Metadata: Point-in-time scans of the composition of a software artifact’s supply chain of 3rd-party dependencies

Software composition analysis (SCA) scanners and SBOM generation tools create an additional dimension of information about software that can then be analyzed. The supply chain metadata that is generated powers the insights that feed the ConMon flywheel and increases transparency into software supply chain issues.

External measurements (i.e., data sources)

Additionally, external data sources (e.g., the NVD or VEX documents) can be referenced to correlate valuable information aggregated by public interest organizations like NIST (National Institute of Standards and Technology) and CISA (Cybersecurity and Infrastructure Security Agency). These sources act as comprehensive stores of the collective work of security researchers worldwide. 

As software components are analyzed and tested for vulnerabilities and exploits, researchers submit their findings to these public goods databases. The security findings are tied to the specific software components (i.e., the same component identifier stored in an SBOM).

After collecting all of this information, we are ready to move to the next phase of the ConMon lifecycle: analysis.

2. Analyze

The foundational premise of ConMon is to monitor continuously in order to uncover security threats before they can cause damage. The analysis step is what transforms raw data into actionable security insights that reduce the risk of a breach. 

  • Queries over Software Supply Chain Data:
    • As the central data format for the software supply chain, SBOMs act as the data source that queries are crafted to extract insight from
  • Actionable Insights:
    • Highlight known vulnerabilities by cross-referencing the entire 3rd-party software supply chain against vulnerability databases like NVD
    • Compare SBOMs from one stage of the pipeline to a later stage in order to uncover dependency injections (both unintentional and malicious)
    • Codify software supply chain security or regulatory compliance policies into policy-as-code that can alert on policy drift in real-time
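The stage-to-stage SBOM comparison described above can be sketched in a few lines. This illustrative example (not a production drift detector) diffs the component sets of two CycloneDX documents taken at different pipeline stages; a real implementation would also compare versions against hashes and provenance data.

```python
def component_set(sbom):
    """Reduce a CycloneDX document to a set of (name, version) pairs."""
    return {(c.get("name", ""), c.get("version", ""))
            for c in sbom.get("components", [])}


def sbom_drift(earlier, later):
    """Report components added to or removed from the supply chain
    between two pipeline stages."""
    before, after = component_set(earlier), component_set(later)
    return {"added": sorted(after - before),
            "removed": sorted(before - after)}
```

Any entry in "added" that did not come from a known good source is a candidate for the dependency-injection investigation described above.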

3. Design

After generating insights from analysis of the raw data, the next step is to design a solution based on the analysis from the previous step. The goal of the designed solution is to reduce the risk of a security incident.

Example Solutions Based on Analysis

  • Vulnerability Remediation Workflow: After analysis has uncovered known vulnerabilities, designing a system to remediate the vulnerabilities comes into focus. Utilizing the existing engineering task management process is ideal. To live up to the promise of ConMon, the analysis to remediation process should be an automated system that pushes supply chain data from the analysis platform directly into the remediation queue.
  • Dependency Injection Detection via SBOM Drift Analysis: SBOMs can be scanned at each stage of the DevSecOps pipeline; when anomalous dependencies are flagged as not coming from a known good source, this analysis can be used to prompt an investigation. These types of investigations are generally too complex to be automated, but the investigation process still requires design.
  • Automated Compliance Enforcement via Policy-as-Code: By codifying software supply chain best practices or regulatory compliance controls into policy-as-code, organizations can be alerted to policy violations in real-time. The solution design includes a mapping of policies into code, scans against containers in the DevSecOps pipeline and a notification system that can act on the analysis.

4. Implement

The final step of ConMon involves implementing the design from the previous step. This also sets up the entire process to restart from the beginning. After the design is implemented it is ready to be monitored again to assess effectiveness of the implemented design.

Execution of Solution Design

  • Vulnerability Remediation Workflow: Configure the analysis platform to push a report of identified vulnerabilities to the engineering organization’s existing ticketing system. Prioritize the vulnerabilities based on their severity score to increase signal-to-noise ratio for the assigned developer or gate the analysis platform to only push reports if a vulnerability is classified as high or critical.
  • Dependency Injection Detection via SBOM Drift Analysis: Integrate SBOM generation into two or more stages of the DevSecOps pipeline to enable diff analysis of software supply chain analysis over time. Allowlists and denylists can fine tune which types of dependency injections are expected and which are anomalous. Investigations into suspected dependency injection can be triggered based on anomaly detection.
  • Automated Compliance Enforcement via Policy-as-Code: Security policies and/or compliance controls will need to be translated from English into code, but this is a one-time, upfront investment to enable scalable, automated policy scanning. A scanner will need to be integrated into one or more places in the CI/CD build pipeline in order to detect policy violations. As violations are identified, the analysis platform can push notifications to the appropriate team.
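As a toy illustration of the policy-as-code idea (the field names and severity labels are assumptions, not Anchore Enterprise's actual policy format), a gate that blocks on high and critical findings might look like:

```python
# Severities that should fail the pipeline gate (illustrative policy).
BLOCKING_SEVERITIES = {"High", "Critical"}


def evaluate_policy(findings):
    """Evaluate vulnerability findings against a simple severity policy.

    findings: list of dicts with a "severity" field (assumed shape).
    Returns a pass/fail verdict plus the findings that violate the policy.
    """
    violations = [f for f in findings
                  if f.get("severity") in BLOCKING_SEVERITIES]
    return {"pass": not violations, "violations": violations}
```

Wired into CI/CD, the "violations" list is what the analysis platform would push to the appropriate team's notification channel.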

5. Repeat

Step 5 isn’t actually a different step of ConMon, it is just a reminder to return to Step 1 for another turn of the ConMon cycle. The ‘continuous’ in ConMon means that we continue to repeat the process indefinitely. As security of a system is measured and evaluated, new security issues are discovered that then require a solution design and design implementation. This is the flywheel cycle of ConMon that endlessly improves the security of the system that is monitored.

Real-world Scenario

SBOMs and ConMon aren’t just a theoretical framework; there are a number of SBOM use-cases in production that are delivering value to enterprises. The most prominent of these is the security incident response use-case. This use-case moved into the limelight in the wake of the string of infamous software supply chain attacks of the past decade: SolarWinds, Log4j, XZ Utils, etc.

The biggest takeaway from these high-profile incidents is that enterprises were caught unprepared and experienced pain as a result of not having the tools to measure, analyze, design and implement solutions to these supply chain attacks.

Large enterprises like Google responded to this deficit by deploying an SBOM-powered software supply chain solution that generates over 100 million SBOMs a year! As a result of having this system in place Google was able to rapidly respond to the XZ Utils incident without having to utilize any manual processes that typically extend supply chain attacks like these from minutes into days or weeks. Brandon Lum, Google’s SBOM lead, recounts that “within 10 minutes of the XZ [Utils] announcement, the entire Google organization was able to rule out that anything was impacted.”

This outcome was only possible due to Google’s foresight and willingness to instrument their DevSecOps pipeline with tools like SCAs and SBOM generators and continuously monitor their software supply chain.

Ready to Reap the Rewards of SBOM-powered ConMon?

Integrating SBOMs as a central element of your ConMon strategy transforms how organizations manage software supply chain security. By aligning continuous monitoring with the principles of DevSecOps, security and engineering leaders can ensure that their organizations are well-prepared to face the evolving threat landscape—keeping operations secure, compliant, and efficient.

If you’re interested in embracing the power of SBOMs and ConMon but your team doesn’t want to take on this side quest themselves, Anchore offers a turnkey platform to unlock the benefits without the headache of building and managing a solution from scratch. Anchore Enterprise is a complete SBOM-powered supply chain security solution that extends ConMon into your organization’s software supply chain. Reach out to our team to learn more or start a free trial to kick the tires yourself.

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

How to Automate Container Vulnerability Scanning for Harbor Registry with Anchore Enterprise

Security engineers at modern enterprises face an unprecedented challenge: managing software supply chain risk without impeding development velocity, all while threat actors exploit the rapidly expanding attack surface. With over 25,000 new vulnerabilities in 2023 alone and supply chain attacks surging 540% year-over-year from 2019 to 2022, the exploding adoption of open source software has created an untenable security environment. To overcome these challenges, security teams need tools that scale their impact and invert the perception that they are a speed bump for high-velocity software delivery.

If your DevSecOps pipeline utilizes the open source Harbor registry then we have the perfect answer to your needs. Integrating Anchore Enterprise—the SBOM-powered container vulnerability management platform—with Harbor offers the force-multiplier security teams need. This one-two combo delivers:

  • Proactive vulnerability management: Automatically scan container images before they reach production
  • Actionable security insights: Generate SBOMs, identify vulnerabilities and alert on actionable insights to streamline remediation efforts
  • Lightweight implementation: Native Harbor integration requiring minimal configuration while delivering maximum value
  • Improved cultural dynamics: Reduce security incident risk and, at the same time, burden on development teams while building cross-functional trust

This technical guide walks through the implementation steps for integrating Anchore Enterprise into Harbor, equipping security engineers with the knowledge to secure their software supply chain without sacrificing velocity.

Learn the essential container security best practices to reduce the risk of software supply chain attacks in this white paper.

Reduce Risk for Software Supply Chain Attacks: Best Practices for Container Security

Integration Overview

Anchore Enterprise can integrate with Harbor in two different ways—each has pros and cons:

Pull Integration Model

In this model, Anchore uses registry credentials to pull and analyze images from Harbor:

  • Anchore accesses Harbor using standard Docker V2 registry integration
  • Images are analyzed directly within Anchore Enterprise
  • Results are available in Anchore’s interface and API
  • Ideal for organizations where direct access to Harbor is restricted but API access is permitted

Push Integration Model

In this model, Harbor uses its native scanner adapter feature to push images to Anchore for analysis:

  • Harbor initiates scans on-demand through its scanner adapter as images are added
  • Images are scanned within the Anchore deployment
  • Vulnerability scan results are stored in Anchore and sent to Harbor’s UI
  • Better for environments with direct access to Harbor that want immediate scans

Both methods provide strong security benefits but differ in workflow and where results are accessed.

Setting Up the Pull Integration

Let’s walk through how to configure Anchore Enterprise to pull and analyze images from your Harbor registry.

Prerequisites

  • Anchore Enterprise installed and running
  • Harbor registry deployed and accessible
  • Harbor user account with appropriate permissions

Step 1: Configure Registry Credentials in Anchore

  1. In Anchore Enterprise, navigate to the “Registries” section
  2. Select “Add Registry”
  3. Fill in the following details:
Registry Hostname or IP Address: [your Harbor API URL or IP address, e.g., http://harbor.yourdomain.com]

Name: [Human readable name]

Type: docker_v2

Username: [your Harbor username, e.g., admin]

Password: [your Harbor password]
  4. Configure any additional options like SSL validation if necessary
  5. Test the connection
  6. Save the configuration

Step 2: Analyze an Image from Harbor

Once the registry is configured, you can analyze images stored in Harbor:

  1. Navigate to the “Images” section in Anchore Enterprise
  2. Select “Add Image”
  3. Choose your Harbor registry from the dropdown
  4. Specify the repository and tag for the image you want to analyze
  5. Click “Analyze”

Anchore will pull the image from Harbor, decompose it, generate an SBOM, and scan for vulnerabilities. This process typically takes a few minutes depending on image size.

Step 3: Review Analysis Results

After analysis completes:

  1. View the vulnerability report in the Anchore UI
  2. Check the generated SBOM for all dependencies
  3. Review compliance status against configured policies
  4. Export reports or take remediation actions as needed

Setting Up the Push Integration

Now let’s configure Harbor to push images to Anchore for scanning using the Harbor Scanner Adapter.

Prerequisites

  • Harbor v2.0 or later installed
  • Anchore Enterprise deployed and accessible
  • Harbor Scanner Adapter for Anchore installed

Step 1: Deploy the Harbor Scanner Adapter

If not already deployed, install the Harbor Scanner Adapter for Anchore:

  1. Download or copy the harbor-adapter-anchore.yaml template from our GitHub repository
  2. Customize the template for your Harbor deployment. The required fields are:
ANCHORE_ENDPOINT

ANCHORE_USERNAME

ANCHORE_PASSWORD
  3. Apply the Kubernetes manifest:
kubectl apply -f harbor-adapter-anchore.yaml

Step 2: Configure the Scanner in Harbor

  1. Log in to Harbor as an administrator
  2. Navigate to “Administration” → “Interrogation Services”
  3. In the “Scanners” tab, click “New Scanner”
  4. Enter the following details:
Name: Anchore

Description: Anchore Enterprise Scanner

Endpoint: http://harbor-scanner-anchore:8080

Auth: None (or as required by your configuration)
  5. Save and set as default if desired

Step 3: Configure Project Scanning Settings

For each project that should use Anchore scanning:

  1. Navigate to the project in Harbor
  2. Go to “Configuration”
  3. Enable “Automatically scan images on push” AND Enable “Automatically generate SBOM on push”
  4. Save the configuration

Step 4: Test the Integration

  1. Tag an image for your Harbor project:
docker tag my-test-application:latest harbor.yourdomain.com/project-name/my-test-application:latest
  2. Push the image to Harbor:
docker push harbor.yourdomain.com/project-name/my-test-application:latest
  3. Verify the automatic scan starts in Harbor
  4. Review the results in your Harbor UI once scanning completes

Advanced Configuration Features

Now that you have the base configuration working for the Harbor Scanner Adapter, you are ready to consider some additional features to increase your security posture.

Scheduled Scanning

Beyond on-push scanning, you can configure scheduled scanning to catch newly discovered vulnerabilities in existing images:

  1. In Harbor, navigate to “Administration” → “Interrogation Services” → “Vulnerability”
  2. Set the scan schedule (hourly, daily, weekly, etc.)
  3. Save the configuration

This ensures all images are regularly re-scanned as vulnerability databases are updated with newly discovered and documented vulnerabilities.

Security Policy Enforcement

To enforce security at the pipeline level:

  1. In your Harbor project, navigate to “Configuration”
  2. Enable “Prevent vulnerable images from running”
  3. Select the vulnerability severity level threshold (Low, Medium, High, Critical)
  4. Images with vulnerabilities above this threshold will be blocked from being pulled*

*Be careful with this setting in a production environment. If an image is flagged as having a vulnerability and your container orchestrator attempts to pull the image to auto-scale a service, it may cause instability for users.

Proxy Image Cache

Harbor’s proxy cache capability provides an additional security layer:

  1. Navigate to “Registries” in Harbor and select “New Endpoint”
  2. Configure a proxy cache to a public registry like Docker Hub
  3. All images pulled from Docker Hub will be cached locally and automatically scanned for vulnerabilities based on your project settings

Security Tips and Best Practices from the Anchore Team

Use Anchore Enterprise for highest fidelity vulnerability data

  • The Anchore Enterprise dashboard surfaces complete vulnerability details
  • Full vulnerability data can be configured with downstream integrations like Slack, Jira, ServiceNow, etc. 

“Good data empowers good people to make good decisions.”

—Dan Perry, Principal Customer Success Engineer, Anchore

Configuration Best Practices

For optimal security posture:

  • Configure per Harbor project: Use different vulnerability scanning settings for different risk profiles
  • Mind your environment topology: Adjust network timeouts and SSL settings based on network topology; make sure Harbor and Anchore Enterprise deployments are able to communicate securely

Secure Access Controls

  • Adopt least privilege principle: Use different credentials per repository
  • Utilize API keys: For service accounts and integrations, use API keys rather than user credentials

Conclusion

Integrating Anchore Enterprise with Harbor registry creates a powerful security checkpoint in your DevSecOps pipeline. By implementing either the pull or push model based on your specific needs, you can automate vulnerability scanning, enforce security policies, and maintain compliance requirements.

This integration enables security teams to:

  • Detect vulnerabilities before images reach production
  • Generate and maintain accurate SBOMs
  • Enforce security policies through prevention controls
  • Maintain continuous security through scheduled scans

With these tools properly integrated, you can significantly reduce the risk of deploying vulnerable containers to production environments, helping to secure your software supply chain.

If you’re a visual learner, this content is also available in webinar format. Watch it on-demand below:


Unlocking the Power of SBOMs: A Complete Guide

Software Bill of Materials (SBOMs) are no longer optional—they’re mission-critical.

That’s why we’re excited to announce the release of our new white paper, “Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization.” This comprehensive guide is designed for security and engineering leadership at both commercial enterprises and federal agencies, providing actionable insights into how SBOMs are transforming the way organizations manage software complexity, mitigate risk, and drive business outcomes.

From software supply chain security to DevOps acceleration and regulatory compliance, SBOMs have emerged as a cornerstone of modern software development. They do more than provide a simple inventory of application components; they enable rapid security incident response, automated compliance, reduced legal risk, and accelerated software delivery.

⏱️ Can’t wait till the end?
📥 Download the white paper now 👇👇👇

WHITE PAPER Rnd Rect | Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization

Why SBOMs Are a Game Changer

SBOMs are no longer just a checklist item—they’re a strategic asset. They provide an in-depth inventory of every component within your software ecosystem, complete with critical metadata about suppliers, licensing rights, and security postures. This newfound transparency is revolutionizing cross-functional operations across enterprises by:

  • Accelerating Incident Response: Quickly identify vulnerable components and neutralize threats before they escalate.
  • Enhancing Vulnerability Management: Prioritize remediation efforts based on risk, ensuring that developer resources are optimally deployed.
  • Strengthening Compliance: Automate and streamline adherence to complex regulatory requirements such as FedRAMP, SSDF Attestation, and DoD’s continuous ATO.
  • Reducing Legal Risk: Manage open source license obligations proactively, ensuring that every component meets your organization’s legal and security standards.

What’s Inside the White Paper?

Our white paper is organized by organizational function, with each section highlighting the relevant SBOM use-cases. Here’s a glimpse of what you can expect:

  • Security: Rapidly identify and mitigate zero-day vulnerabilities, scale vulnerability management, and detect software drift to prevent breaches.
  • Engineering & DevOps: Eliminate wasted developer time with real-time feedback, automate dependency management, and accelerate software delivery.
  • Regulatory Compliance: Automate policy checks, streamline compliance audits, and meet requirements like FedRAMP and SSDF Attestation with ease.
  • Legal: Reduce legal exposure by automating open source license risk management.
  • Sales: Instill confidence in customers and accelerate sales cycles by proactively providing SBOMs to quickly build trust.

Also, you’ll find real-world case studies from organizations that have successfully implemented SBOMs to reduce risk, boost efficiency, and gain a competitive edge. Learn how companies like Google and Cisco are leveraging SBOMs to drive business outcomes.

Empower Your Enterprise with SBOM-Centric Strategies

The white paper underscores that SBOMs are not a one-trick pony. They are the cornerstone of modern software supply chain management, driving benefits across security, engineering, compliance, legal, and customer trust. Whether your organization is embarking on its SBOM journey or refining an established process, this guide will help you unlock cross-functional value and future-proof your technology operations.

Download the White Paper Today

SBOMs are more than just compliance checkboxes—they are a strategic enabler for your organization’s security, development, and business operations. Whether your enterprise is just beginning its SBOM journey or operating a mature SBOM initiative, this white paper will help you uncover new ways to maximize value.

Explore SBOM use-cases for almost any department of the enterprise and learn how to unlock enterprise value to make the most of your software supply chain.

WHITE PAPER Rnd Rect | Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization

Generating Python SBOMs: Using pipdeptree and Syft

SBOM (software bill of materials) generation is becoming increasingly important for software supply chain security and compliance. Several approaches exist for generating SBOMs for Python projects, each with its own strengths. In this post, we’ll explore two popular methods: using pipdeptree with cyclonedx-py and Syft. We’ll examine their differences and see why Syft is better for many use-cases.

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Why Generate Python Package SBOMs?

Before diving into the tools, let’s understand why generating an SBOM for your Python packages is increasingly critical in modern software development. Security analysis is a primary driver—SBOMs provide a detailed inventory of your dependencies that security teams can use to identify vulnerabilities in your software supply chain and respond quickly to newly discovered threats. The cybersecurity compliance landscape is also evolving rapidly, with many organizations and regulations (e.g., EO 14028) now requiring SBOMs as part of software delivery to ensure transparency and traceability in an organization’s software supply chain.

From a maintenance perspective, understanding your complete dependency tree is essential for effective project management. SBOMs help development teams track dependencies, plan updates, and understand the potential impact of changes across their applications. They’re particularly valuable when dealing with complex Python applications that may have hundreds of transitive dependencies.

License compliance is another crucial aspect where SBOMs prove invaluable. By tracking software licenses across your entire dependency tree, you can ensure your project complies with various open source licenses and identify potential conflicts before they become legal issues. This is especially important in Python projects, where dependencies might introduce a mix of licenses that need careful consideration.
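As a hedged sketch of that license-tracking idea, the snippet below flags CycloneDX components whose license ID appears on a denylist. The denylist is purely illustrative and not legal guidance; real license review should involve counsel.

```python
# Illustrative denylist of licenses an organization might treat as risky
# for a commercial product. This is an assumption, not legal advice.
RISKY_LICENSES = {"AGPL-3.0-only", "SSPL-1.0"}


def license_risks(sbom):
    """Return (component, license) pairs from a CycloneDX SBOM dict
    whose SPDX license ID matches the denylist."""
    risks = []
    for c in sbom.get("components", []):
        for entry in c.get("licenses", []):
            lic = entry.get("license", {}).get("id", "")
            if lic in RISKY_LICENSES:
                risks.append((f"{c.get('name')}@{c.get('version')}", lic))
    return risks
```

Run over every SBOM in a repository, a check like this surfaces license conflicts across the full transitive dependency tree before they become legal issues.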

Generating a Python SBOM with pipdeptree and cyclonedx-py

The first approach we’ll look at combines two specialized Python tools: pipdeptree for dependency analysis and cyclonedx-py for SBOM generation. Here’s how to use them:

# Install the required tools
$ pip install pipdeptree cyclonedx-bom

# Generate requirements with dependencies
$ pipdeptree --freeze > requirements.txt

# Generate SBOM in CycloneDX format
$ cyclonedx-py requirements requirements.txt > cyclonedx-sbom.json

This Python-specific approach leverages pipdeptree’s deep understanding of Python package relationships. pipdeptree excels at:

  • Detecting circular dependencies
  • Identifying conflicting dependencies
  • Providing a clear, hierarchical view of package relationships
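For intuition, circular dependency detection of the kind pipdeptree performs boils down to finding a cycle in the dependency graph. This sketch is not pipdeptree's code, just a depth-first search over a hypothetical name-to-dependencies mapping:

```python
def find_cycle(deps):
    """Return one dependency cycle as a list of package names, or None.

    deps: mapping of package name -> list of direct dependency names.
    """
    visiting, visited = set(), set()

    def dfs(pkg, path):
        if pkg in visiting:  # back edge: we walked into an ancestor
            return path[path.index(pkg):] + [pkg]
        if pkg in visited:
            return None
        visiting.add(pkg)
        for dep in deps.get(pkg, []):
            cycle = dfs(dep, path + [pkg])
            if cycle:
                return cycle
        visiting.remove(pkg)
        visited.add(pkg)
        return None

    for pkg in deps:
        cycle = dfs(pkg, [])
        if cycle:
            return cycle
    return None
```

For example, a graph where a depends on b, b on c, and c back on a would yield the cycle a → b → c → a.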

Generating a Python SBOM with Syft: A Universal SBOM Generator

Syft takes a different approach. As a universal SBOM generator, it can analyze Python packages alongside many other software ecosystems and artifact types. Here’s how to use Syft with Python projects:

# Install Syft (varies by platform)
# See: https://github.com/anchore/syft#installation
$ curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin

# Generate SBOM from requirements.txt
$ syft requirements.txt -o cyclonedx-json

# Or analyze an entire Python project
$ syft path/to/project -o cyclonedx-json

Key Advantages of Syft

Syft’s flexibility in output formats sets it apart from other tools. In addition to the widely used CycloneDX format, it supports SPDX for standardized software definitions and offers its own native JSON format that includes additional metadata. This format flexibility allows teams to generate SBOMs that meet various compliance requirements and tooling needs without switching between multiple generators.

Syft truly shines in its comprehensive analysis capabilities. Rather than limiting itself to a single source of truth, Syft examines your entire Python environment, detecting packages from multiple sources, including requirements.txt files, setup.py configurations, and installed packages. It seamlessly handles virtual environments and can even identify system-level dependencies that might impact your application.

The depth of metadata Syft provides is particularly valuable for security and compliance teams. For each package, Syft captures not just basic version information but also precise package locations within your environment, file hashes for integrity verification, detailed license information, and Common Platform Enumeration (CPE) identifiers. This rich metadata enables more thorough security analysis and helps teams maintain compliance with security policies.

Comparing the Outputs

We see significant differences in detail and scope when examining the outputs from both approaches. The pipdeptree with cyclonedx-py combination produces a focused output that concentrates specifically on Python package relationships. This approach yields a simpler, more streamlined SBOM that’s easy to read but contains limited metadata about each package.

Syft, on the other hand, generates a more comprehensive output that includes extensive metadata for each package. Its SBOM provides rich details about package origins, includes comprehensive CPE identification for better vulnerability matching, and offers built-in license detection. Syft also tracks the specific locations of files within your project and includes additional properties that can be valuable for security analysis and compliance tracking.

Here’s a snippet comparing the metadata for the rich package in both outputs:

// pipdeptree + cyclonedx-py
{
  "bom-ref":"pkg:pypi/rich@13.9.4",
  "type":"library",
  "name":"rich",
  "version":"13.9.4"
}

// Syft
{
  "bom-ref":"pkg:pypi/rich@13.9.4",
  "type":"library",
  "author":"Will McGugan",
  "name":"rich",
  "version":"13.9.4",
  "licenses":[{"license":{"id": "MIT"}}],
  "cpe":"cpe:2.3:a:will_mcgugan_project:python-rich:13.9.4:*:*:*:*:*:*:*",
  "purl":"pkg:pypi/rich@13.9.4",
  "properties":[
    {
      "name":"syft:package:language",
      "value":"python"
    },
    {
      "name":"syft:location:0:path",
      "value":"/.venv/lib/python3.10/site-packages/rich-13.9.4.dist-info/METADATA"
    }
  ]
}
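The richer metadata pays off downstream. As a minimal illustration (a sketch using only the Python standard library, not Anchore tooling), here's how a compliance script might pull names, versions, and license IDs out of a CycloneDX component list like the one above:

```python
import json

# An abridged CycloneDX document with a Syft-style component entry.
SBOM_JSON = """
{
  "components": [
    {
      "type": "library",
      "name": "rich",
      "version": "13.9.4",
      "licenses": [{"license": {"id": "MIT"}}],
      "purl": "pkg:pypi/rich@13.9.4"
    }
  ]
}
"""

def license_report(doc):
    """Return (name, version, [license ids]) for each component."""
    report = []
    for comp in doc.get("components", []):
        ids = [
            lic["license"]["id"]
            for lic in comp.get("licenses", [])
            if "id" in lic.get("license", {})
        ]
        report.append((comp["name"], comp["version"], ids))
    return report

print(license_report(json.loads(SBOM_JSON)))
# [('rich', '13.9.4', ['MIT'])]
```

The pipdeptree + cyclonedx-py output above would yield an empty license list here, which is exactly the gap the extra Syft metadata closes.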

Why Choose Syft?

While both approaches are valid, Syft offers several compelling advantages. As a universal tool that works across multiple software ecosystems, Syft eliminates the need to maintain different tools for different parts of your software stack.

Its rich metadata gives you deeper insights into your dependencies, including detailed license information and precise package locations. Syft’s support for multiple output formats, including CycloneDX, SPDX, and its native format, ensures compatibility with your existing toolchain and compliance requirements.

The project’s active development means you benefit from regular updates and security fixes, keeping pace with the evolving software supply chain security landscape. Finally, Syft’s robust CLI and API options make integrating into your existing automation pipelines and CI/CD workflows easy.

How to Generate a Python SBOM with Syft

Ready to generate SBOMs for your Python projects? Here’s how to get started with Syft:

  1. Install Syft following the official installation guide
  2. For a quick SBOM of your Python project:
$ syft path/to/your/project -o cyclonedx-json
  3. Explore different output formats:
$ syft path/to/your/project -o spdx-json
$ syft path/to/your/project -o syft-json

Conclusion

While pipdeptree combined with cyclonedx-py provides a solid Python-specific solution, Syft offers a more comprehensive and versatile approach to SBOM generation. Its ability to handle multiple ecosystems, provide rich metadata, and support various output formats makes it an excellent choice for modern software supply chain security needs.

Whether starting with SBOMs or looking to improve your existing process, Syft provides a robust, future-proof solution that grows with your needs. Try it and see how it can enhance your software supply chain security today.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

Effortless SBOM Analysis: How Anchore Enterprise Simplifies Integration

As software supply chain security becomes a top priority, organizations are turning to Software Bill of Materials (SBOM) generation and analysis to gain visibility into the composition of their software and supply chain dependencies in order to reduce risk. However, integrating SBOM analysis tools into existing workflows can be complex, requiring extensive configuration and technical expertise. Anchore Enterprise, a leading SBOM management and container security platform, simplifies this process with seamless integration capabilities that cater to modern DevSecOps pipelines.

This article explores how Anchore makes SBOM analysis effortless by offering automation, compatibility with industry standards, and integration with popular CI/CD tools.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

The Challenge of SBOM Analysis Integration

SBOMs play a crucial role in software security, compliance, and vulnerability management. However, organizations often face challenges when adopting SBOM analysis tools:

  • Complex Tooling: Many SBOM solutions require significant setup and customization.
  • Scalability Issues: Enterprises managing thousands of dependencies need scalable and automated solutions.
  • Compatibility Concerns: Ensuring SBOM analysis tools work seamlessly across different DevOps environments can be difficult.
  • Compliance Requirements: Organizations must align with frameworks like Executive Order 14028, EU Cybersecurity Resilience Act (CRA), ISO 27001, and the Secure Software Development Framework (SSDF) 

Anchore addresses these challenges by providing a streamlined approach to SBOM analysis with easy-to-use integrations.

How Anchore Simplifies SBOM Analysis Integration

1. Automated SBOM Generation and Analysis

Anchore automates SBOM generation from various sources, including container images, software packages, and application dependencies. This eliminates the need for manual intervention, ensuring continuous security and compliance monitoring.

  • Supports multiple SBOM formats: CycloneDX, SPDX, and Anchore’s native json format.
  • Automatically scans and analyzes SBOMs for vulnerabilities, licensing issues, and  security and compliance policy violations.
  • Provides real-time insights to security teams.

2. Seamless CI/CD Integration

DevSecOps teams require tools that integrate effortlessly into their existing workflows. Anchore achieves this by offering:

  • Popular CI/CD platform plugins: Jenkins, GitHub Actions, GitLab CI/CD, Azure DevOps and more.
  • API-driven architecture: Embed SBOM generation and analysis in any DevOps pipeline.
  • Policy-as-code support: Enforce security and compliance policies within CI/CD workflows.
  • AnchoreCTL: A command-line interface (CLI) tool for developers to generate and analyze SBOMs locally before pushing to production.
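As an illustration of the CI/CD plugin point above, SBOM generation can be wired into a GitHub Actions workflow with Anchore's open source sbom-action, which wraps Syft. This is a sketch; the inputs shown are illustrative, so check the action's documentation for exact parameter names and defaults:

```yaml
name: sbom
on: [push]

jobs:
  sbom:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Generate an SBOM for the repository contents on every push.
      - uses: anchore/sbom-action@v0
        with:
          format: cyclonedx-json
```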

3. Cloud Native and On-Premises Deployment

Organizations have diverse infrastructure requirements, and Anchore provides flexibility through:

  • Cloud native support: Works seamlessly with Kubernetes, OpenShift, AWS, and GCP.
  • On-premises deployment: For organizations requiring strict control over data security.
  • Hybrid model: Allows businesses to use cloud-based Anchore Enterprise while maintaining on-premises security scanning.

Bonus: Anchore also offers an air-gapped deployment option for organizations working with customers that provide critical national infrastructure like energy, financial services or defense.

See how Anchore Enterprise enabled Dreamfactory to support the defense industry.

4. Comprehensive Policy and Compliance Management

Anchore helps organizations meet regulatory requirements with built-in policy enforcement:

  • Out-of-the-box policies: CIS benchmarks, FedRAMP, and DISA STIG compliance.
  • Integrated vulnerability databases: Automated vulnerability assessment using industry-standard sources like OSS Index, NVD, and Snyk, with support for VEX documents.
  • User-defined policy-as-code: Custom policies to detect software misconfigurations and enforce security best practices.

Custom user policies are a helpful feature for defining security policies based on geography; security and compliance requirements can vary widely across national borders.

5. Developer-Friendly Approach

A major challenge in SBOM adoption is developer resistance due to complexity. Anchore makes security analysis developer-friendly by:

  • Providing CLI and API tools for easy interaction.
  • Delivering clear, actionable vulnerability reports instead of overwhelming developers with false positives.
  • Integrating directly with development environments, such as VS Code and JetBrains IDEs.
  • Providing industry-standard 24/7 customer support through Anchore’s customer success team.

Conclusion

Anchore has positioned itself as a leader in SBOM analysis by making integration effortless, automating security checks, and supporting industry standards. Whether an organization is adopting SBOMs for the first time or looking to enhance its software supply chain security, Anchore provides a scalable and developer-friendly solution.

By integrating automated SBOM generation, CI/CD compatibility, cloud native deployment, and compliance management, Anchore enables businesses (no matter the size) and government institutions to adopt SBOM analysis without disrupting their workflows. As software security becomes increasingly critical, tools like Anchore will play a pivotal role in ensuring a secure and transparent software supply chain. For organizations seeking a simple-to-deploy SBOM analysis solution, Anchore Enterprise is here to deliver results to your organization. Request a demo with our team today!

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Syft 1.20: Faster Scans, Smarter License Detection, and Enhanced Bitnami Support

We’re excited to announce Syft v1.20.0! If you’re new to the community, Syft is Anchore’s open source software composition analysis (SCA) and SBOM generation tool that provides foundational support for software supply chain security for modern DevSecOps workflows.

The latest version is packed with performance improvements, enhanced SBOM accuracy, and several community-driven features that make software composition scanning more comprehensive and efficient than ever.

Understand, Implement & Leverage SBOMs for Stronger Security & Risk Management

50x Faster Windows Scans

Scanning projects with numerous DLLs was reported to take unusually long on Windows, sometimes up to 50 minutes. A sharp-eyed community member (@rogueai) discovered that certificate validation was being performed unnecessarily during DLL scanning. A fix was merged into this release, and those lengthy scans have been dramatically reduced to just a few minutes—a massive performance improvement for Windows users!

Bitnami Embedded SBOM Support: Maximum Accuracy

Container images from Bitnami include valuable embedded SBOMs located at /opt/bitnami/. These SBOMs, packaged by the image creators themselves, represent the most authoritative source for package metadata. Thanks to community member @juan131 and maintainer @willmurphyscode, Syft now includes a dedicated cataloger for these embedded SBOMs.

This feature wasn’t simple to implement. It required careful handling of package relationships and sophisticated deduplication logic to merge authoritative vendor data with Syft’s existing scanning capabilities. The result? Scanning Bitnami images gives you the most accurate SBOM possible, combining authoritative vendor data with Syft’s comprehensive analysis.

Smarter License Detection

Handling licenses for non-open source projects can be a bit tricky. We discovered that when license files can’t be matched to a valid SPDX expression, they sometimes get erroneously marked as “unlicensed”—even when valid license text is present. For example, our dpkg cataloger occasionally encountered a license like:

NVIDIA Software License Agreement and CUDA Supplement to Software License Agreement

And categorized the package as unlicensed. Ideally, the cataloger would capture this non-standard license text whether or not the maintainer follows SPDX.

Community member @HeyeOpenSource and maintainer @spiffcs tackled this challenge with an elegant solution: a new configuration option that preserves the original license text when SPDX matching fails. While disabled by default for compatibility, you can enable this feature with license.include-unknown-license-content: true in your configuration. This ensures you never lose essential license information, even for non-standard licenses.
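Based on the option named above, enabling this behavior in Syft's configuration file looks like:

```yaml
# .syft.yaml
license:
  include-unknown-license-content: true
```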

Go 1.24: Better Performance and Versioning

The upgrade to Go 1.24 brings two significant improvements:

  • Faster Scanning: Thanks to Go 1.24’s optimized map implementations, discussed in this Bytesize Go post—and other performance improvements—we’re seeing scan times reduced by as much as 20% in our testing.
  • Enhanced Version Detection: Go 1.24’s new version embedding means Syft can now accurately report its version and will increasingly provide more accurate version information for Go applications it scans:
$ go version -m ./syft
syft:   go1.24.0
path    github.com/anchore/syft/cmd/syft
mod     github.com/anchore/syft v1.20.0

This also means that as more applications are built with Go 1.24, the versions reported by Syft will become increasingly accurate over time. Everyone’s a winner!

Join the Conversation

We’re proud of these enhancements and grateful to the community for their contributions. If you’re interested in contributing or have ideas for future improvements, head to our GitHub repo and join the conversation. Your feedback and pull requests help shape the future of Syft and our other projects. Happy scanning!

Stay updated on future community spotlights and events by subscribing to our community newsletter.

Learn how MegaLinter leverages Syft and Grype to generate SBOMs and create vulnerability reports

How Syft Scans Software to Generate SBOMs

Syft is an open source CLI tool and Go library that generates a Software Bill of Materials (SBOM) from source code, container images and packaged binaries. It is a foundational building block for various use-cases: from vulnerability scanning with tools like Grype, to OSS license compliance with tools like Grant. SBOMs track software components—and their associated supplier, security, licensing, compliance, etc. metadata—through the software development lifecycle.

At a high level, Syft takes the following approach to generating an SBOM:

  1. Determine the type of input source (container image, directory, archive, etc.)
  2. Orchestrate a pluggable set of catalogers to scan the source or artifact
    • Each package cataloger looks for package types it knows about (RPMs, Debian packages, NPM modules, Python packages, etc.)
    • In addition, the file catalogers gather other metadata and generate file hashes
  3. Aggregate all discovered components into an SBOM document
  4. Output the SBOM in the desired format (Syft, SPDX, CycloneDX, etc.)

Let’s dive into each of these steps in more detail.
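Before doing so, the overall flow can be sketched in a few lines of Python pseudocode. This is illustrative only: Syft itself is written in Go, and the function names here are hypothetical, not Syft's real API.

```python
# Hypothetical sketch of Syft's cataloger orchestration, not its real API.

def dpkg_cataloger(files):
    """A real cataloger parses /var/lib/dpkg/status; this one just pretends."""
    if "var/lib/dpkg/status" in files:
        return [{"name": "bash", "version": "5.2.21-2ubuntu4", "type": "deb"}]
    return []

def npm_cataloger(files):
    """Looks for package.json files; finds nothing in a plain Ubuntu image."""
    return [{"name": "app", "type": "npm"} for f in files if f.endswith("package.json")]

CATALOGERS = [dpkg_cataloger, npm_cataloger]

def generate_sbom(files):
    """Run every cataloger over the source and aggregate into one document."""
    artifacts = []
    for cataloger in CATALOGERS:
        artifacts.extend(cataloger(files))  # each finds only what it knows about
    return {"artifacts": artifacts}

print(generate_sbom(["var/lib/dpkg/status", "etc/os-release"]))
```

Note how each cataloger quietly returns nothing when its ecosystem isn't present; the aggregation step is what turns many narrow scanners into one broad SBOM.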


Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.


Flexible Input Sources

Syft can generate an SBOM from several different source types:

  • Container images (both from registries and local Docker/Podman engines)
  • Local filesystems and directories
  • Archives (TAR, ZIP, etc.)
  • Single files

This flexibility is important as SBOMs are used in a variety of environments, from a developer’s workstation to a CI/CD pipeline.

When you run Syft, it first tries to autodetect the source type from the provided input. For example:

# Scan a container image 
syft ubuntu:latest

# Scan a local filesystem
syft ./my-app/

Pluggable Package Catalogers

The heart of Syft is its decoupled architecture for software composition analysis (SCA). Rather than one monolithic scanner, Syft delegates scanning to a collection of catalogers, each focused on a specific software ecosystem.

Some key catalogers include:

  • apk-db-cataloger for Alpine packages
  • dpkg-db-cataloger for Debian packages
  • rpm-db-cataloger for RPM packages (sourced from various databases)
  • python-package-cataloger for Python packages
  • java-archive-cataloger for Java archives (JAR, WAR, EAR)
  • npm-package-cataloger for Node/NPM packages

Syft automatically selects which catalogers to run based on the source type. For a container image, it will run catalogers for the package types installed in containers (RPM, Debian, APK, NPM, etc). For a filesystem, Syft runs a different set of catalogers looking for installed software that is more typical for filesystems and source code.

This pluggable architecture gives Syft broad coverage while keeping the core streamlined. Each cataloger can focus on accurately detecting its specific package type.

If we look at a snippet of the trace output from scanning an Ubuntu image, we can see some catalogers in action:

[0001] DEBUG discovered 91 packages cataloger=dpkg-db-cataloger...  
[0001] DEBUG discovered 0 packages cataloger=rpm-db-cataloger
[0001] DEBUG discovered 0 packages cataloger=npm-package-cataloger

Here, the dpkg-db-cataloger found 91 Debian packages, while the rpm-db-cataloger and npm-package-cataloger didn’t find any packages of their types—which makes sense for an Ubuntu image.

Aggregating and Outputting Results

Once all catalogers have finished, Syft aggregates the results into a single SBOM document. This normalized representation abstracts away the implementation details of the different package types.

The SBOM includes key data for each package like:

  • Name
  • Version
  • Type (Debian, RPM, NPM, etc)
  • Files belonging to the package
  • Source information (repository, download URL, etc.)
  • File digests and metadata

It also contains essential metadata, including a copy of the configuration used when generating the SBOM (for reproducibility) and detailed evidence recording which files each package was parsed from (within package.Metadata).

Finally, Syft serializes this document into one or more output formats. Supported formats include:

  • Syft’s native JSON format
  • SPDX’s tag-value and JSON
  • CycloneDX’s JSON and XML

Having multiple formats allows integrating Syft into a variety of toolchains and passing data between systems that expect certain standards.

Revisiting the earlier Ubuntu example, we can see a snippet of the final output:

NAME         VERSION            TYPE
apt          2.7.14build2       deb
base-files   13ubuntu10.1       deb
bash         5.2.21-2ubuntu4    deb

Container Image Parsing with Stereoscope

To generate high-quality SBOMs from container images, Syft leverages the Stereoscope library for parsing container image formats.

Stereoscope does the heavy lifting of unpacking an image into its constituent layers, understanding the image metadata, and providing a unified filesystem view for Syft to scan.

This encapsulation is quite powerful, as it abstracts the details of different container image specs (Docker, OCI, etc.), allowing Syft to focus on SBOM generation while still supporting a wide range of images.

Cataloging Challenges and Future Work

While Syft can generate quality SBOMs for many source types, there are still challenges and room for improvement.

One challenge is supporting the vast variety of package types and versioning schemes. Each ecosystem has its own conventions, making it challenging to extract metadata consistently. Syft has steadily added support for more ecosystems and evolved its catalogers to handle edge cases across an expanding array of software tooling.

Another challenge is dynamically generated packages, like those created at runtime or built from source. Capturing these requires more sophisticated analysis that Syft does not yet do. To illustrate, let’s look at two common cases:

Runtime Generated Packages

Imagine a Python application that uses a web framework like Flask or Django. These frameworks allow defining routes and views dynamically at runtime based on configuration or plugin systems.

For example, an application might scan a /plugins directory on startup, importing any Python modules found and registering their routes and models with the framework. These plugins could pull in their own dependencies dynamically using importlib.

From Syft’s perspective, none of this dynamic plugin and dependency discovery happens until the application actually runs. The Python files Syft scans statically won’t reveal those runtime behaviors.

Furthermore, plugins could be loaded from external sources not even present in the codebase Syft analyzes. They might be fetched over HTTP from a plugin registry as the application starts.

To truly capture the full set of packages in use, Syft would need to do complex static analysis to trace these dynamic flows, or instrument the running application to capture what it actually loads. Both are much harder than scanning static files.
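To make the dynamic-loading problem concrete, here's a toy plugin loader of the kind that defeats static scanning (hypothetical code; a stdlib module stands in for a third-party plugin so the snippet is runnable):

```python
import importlib

def load_plugins(plugin_names):
    """Import plugin modules by name at runtime.

    A static scanner only sees this loader; which modules (and their
    transitive dependencies) actually get imported is decided when the
    application runs.
    """
    return [importlib.import_module(name).__name__ for name in plugin_names]

# In a real app, plugin_names might come from a config file, a /plugins
# directory listing, or a remote registry fetched at startup.
print(load_plugins(["json", "sqlite3"]))
# ['json', 'sqlite3']
```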

Source Built Packages

Another typical case is building packages from source rather than installing them from a registry like PyPI or RubyGems.

Consider a C++ application that bundles several libraries in a /3rdparty directory and builds them from source as part of its build process.

When Syft scans the source code directory or docker image, it won’t find any already built C++ libraries to detect as packages. All it will see are raw source files, which are much harder to map to packages and versions.

One approach is to infer packages from standard build tool configuration files, like CMakeLists.txt or Makefile. However, resolving the declared dependencies to determine the full package versions requires either running the build or profoundly understanding the specific semantics of each build tool. Both are fragile compared to scanning already built artifacts.
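A naive version of that config-file inference might look like the following sketch. Real build graphs are far messier, and note that this recovers declared names only, not the versions actually built and linked:

```python
import re

# A toy CMakeLists.txt declaring two third-party dependencies.
CMAKE_SNIPPET = """
cmake_minimum_required(VERSION 3.20)
project(myapp)
find_package(Boost 1.74 REQUIRED)
find_package(OpenSSL REQUIRED)
"""

def declared_packages(cmake_text):
    """Heuristically pull package names out of find_package() calls."""
    return re.findall(r"find_package\(\s*([A-Za-z0-9_]+)", cmake_text)

print(declared_packages(CMAKE_SNIPPET))
# ['Boost', 'OpenSSL']
```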

Some Language Ecosystems are Harder Than Others

It’s worth noting that dynamism and source builds are more or less prevalent in different language ecosystems.

Interpreted languages like Python, Ruby, and JavaScript tend to have more runtime dynamism in their package loading compared to compiled languages like Java or Go. That said, even compiled languages have ways of loading code dynamically, it just tends to be less common.

Likewise, some ecosystems emphasize always building from source, while others have a strong culture of using pre-built packages from central registries.

These differences mean the level of difficulty for Syft in generating a complete SBOM varies across ecosystems. Some will be more amenable to static analysis than others out of the box.

What Could Help?

To be clear, Syft has already done impressive work in generating quality SBOMs across many ecosystems despite these challenges. But to reach the next level of coverage, some additional analysis techniques could help:

  1. Static analysis to trace dynamic code flows and infer possible loaded packages (with soundness tradeoffs to consider)
  2. Dynamic instrumentation/tracing of applications to capture actual package loads (sampling and performance overhead to consider)
  3. Standardized metadata formats for build systems to declare dependencies (adoption curve and migration path to consider)
  4. Heuristic mapping of source files to known packages (ambiguity and false positives to consider)

None are silver bullets, but they illustrate the approaches that could help push SBOM coverage further in complex cases.

Ultimately, there will likely always be a gap between what static tools like Syft can discover versus the actual dynamic reality of applications. But that doesn’t mean we shouldn’t keep pushing the boundary! Even incremental improvements in these areas help make the software ecosystem more transparent and secure.

Syft also has room to grow in terms of programming language support. While it covers major ecosystems like Java and Python well, more work is needed to cover languages like Go, Rust, and Swift completely.

As the SBOM landscape evolves, Syft will continue to adapt to handle more package types, sources, and formats. Its extensible architecture is designed to make this growth possible.

Get Involved

Syft is fully open source and welcomes community contributions. If you’re interested in adding support for a new ecosystem, fixing bugs, or improving SBOM generation, the repo is the place to get started.

There are issues labeled “Good First Issue” for those new to the codebase. For more experienced developers, the code is structured to make adding new catalogers reasonably straightforward.

No matter your experience level, there are ways to get involved and help push the state of the art in SBOM generation. We hope you’ll join us!


Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

DORA + SBOM Primer: Achieving Software Supply Chain Security in Regulated Industries

At Anchore, we frequently discuss the steady drum beat of regulatory bodies mandating SBOMs (Software Bills of Materials) as the central element of modern software supply chain security. The Digital Operational Resilience Act (DORA) is the most recent framework responding to the accelerating growth of software supply chain attacks—by requiring, in all but name, the kind of structured software inventory that SBOMs provide.

In this post, we provide an educational overview of DORA, explain its software supply chain implications, and outline how SBOMs factor into DORA compliance. We’ll also share how to achieve compliance—and how Anchore Enterprise can serve as your DORA compliance “easy button.”

What is DORA?

The Digital Operational Resilience Act (DORA)—formally Regulation (EU) 2022/2554—is an EU regulatory framework designed to ensure the digital operational resilience of financial entities. Key points include:

Effective Date: January 17, 2025

  • TL;DR: It is already being enforced.

Scope: Applies to a wide range of EU financial entities, including:

  • Banks
  • Payment service providers
  • Investment firms
  • Crypto-asset service providers, and more

Core Topics

  • Proactive Risk Management: For both 1st- and 3rd-party software.
  • Incident Response and Recovery: Mandating robust strategies to handle ICT (Information and Communication Technology) disruptions.
  • Resilience Testing: Regular, thorough testing of incident response and risk management systems.
  • Industry Threat Information Sharing: Collaboration across the sector to share threat intelligence.
  • 3rd-party Software Supplier Oversight: Continuous monitoring of 3rd-party software supply chain.

DORA is organized into a high-level cybersecurity and risk management framework document and a separate technical control document—referred to as the “Regulatory Standards Technical Document”—that outlines in detail how to achieve compliance. If you’re familiar with NIST’s RMF (NIST 800-37) and its “Control Catalog” (NIST 800-53), DORA follows a similar pattern.

What challenge does DORA solve?

In part driven by a 2020 study that highlighted “systemic cyber risk” due to the “high level of interconnectedness” among the technologies used by financial organizations, DORA aims to mitigate the risk that a vulnerability in one component could lead to widespread sector disruption. Two critical factors underline this need:

  • The Structure of Modern Software Development: With extensive 3rd-party and open source dependencies, any gap in security can have cascading consequences.
  • The Rise of Software Supply Chain Attacks: Now that open source software “constitutes 70-90% of any given piece of modern software” threat actors have embraced this attack vector. Software supply chain attacks have not only become a primary cybersecurity target but are seeing accelerating growth.

DORA is designed to fortify the financial sector’s digital resilience by addressing vulnerabilities in modern software development and countering the rapid rise of software supply chain attacks.

What are the consequences of DORA non-compliance?

Compliance is not optional. The European Supervisory Authorities (ESAs) have been given broad powers to:

  • Access Documents and Data: Assessing an organization’s compliance status through comprehensive audits.
  • Conduct On-Site Investigations: Ensuring that all software supply chain controls are in place and functioning.
  • Enforce Steep Penalties: For instance, DORA Article 35 notes that critical ICT third-party service providers could face fines of up to “1% of the average daily worldwide turnover… in the preceding business year.”

For financial entities—and their technology suppliers—the cost of non-compliance is too high to ignore.

Does DORA Require an SBOM?

DORA does not explicitly mention “SBOMs” by name. Instead, it mandates organizations track “third-party libraries, including open-source libraries”. SBOMs are the industry standard method for achieving this result in an automated and scalable manner. 

Specifically, financial entities are required to track:

  • Third-Party Libraries: Including open-source libraries used by ICT services that support critical or important functions.
  • In-House or Custom Software: ICT services developed internally or specifically customized by an ICT third-party service provider.

These “general” requirements that stop short of naming a specific technology (like an SBOM) are a common pattern across global regulatory compliance frameworks (e.g., SSDF).

Another reason to adopt SBOMs for DORA compliance: the EU Cyber Resilience Act (CRA) explicitly names SBOMs as a required compliance artifact. SBOMs kill two birds with one stone.

DORA and Software Supply Chain Security 

DORA Regulation 56 underscores the necessity of open source analysis (or Software Composition Analysis, SCA) as a fundamental component for achieving operational resilience. SCA tools are software supply chain security scanners that are typically tightly coupled with SBOM generators.

Standalone SCA tools and SBOM generators are fantastic for creating simple point-in-time inventories and producing the compliance artifacts needed to pass an initial audit. Unfortunately, DORA demands that financial entities continuously monitor their software supply chain:

  • Ongoing Monitoring: Article 10 of DORA requires that financial entities, in collaboration with their ICT third-party service providers, not only maintain an inventory but also track version updates and monitor third-party libraries on an ongoing basis.
  • Continuous Software Supply Chain Risk Management: It’s not enough to have an SBOM at one moment in time; you must continuously scan and update the inventory to ensure that vulnerabilities are promptly identified and remediated.

This level of supply chain security requires organizations to directly integrate SBOM generation into their DevSecOps pipeline and utilize an SBOM management platform.

How to Fulfill DORA’s Software Supply Chain Requirements

1. Software Composition Analysis (SCA) and SBOM Generation

  • Automated Inventory: Integrate SBOM generation into your build pipeline so that every release produces an up-to-date inventory of 1st- and 3rd-party components.

2. Ingest SBOMs from Third-Party Suppliers

  • Collaborative Supply Chain Management: Ensure that you receive and maintain SBOM data from all your third-party suppliers to achieve full visibility into the software components you rely on.

3. Continuous Monitoring in Production

  • Regular Scanning: Implement continuous monitoring to detect any unexpected changes or vulnerabilities in production environments.
  • Key Features to Look For:
    • Alerts for unexpected software components
    • A centralized repository to store and manage SBOMs
    • Revision history tracking to monitor changes over time
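What “alerts for unexpected software components” means in practice can be sketched with a simple inventory diff (illustrative Python, not Anchore's implementation):

```python
def sbom_packages(doc):
    """Extract (name, version) pairs from a CycloneDX-style document."""
    return {(c["name"], c["version"]) for c in doc.get("components", [])}

def diff_sboms(baseline, current):
    """Return components added/removed relative to the approved baseline."""
    base, cur = sbom_packages(baseline), sbom_packages(current)
    return {"added": sorted(cur - base), "removed": sorted(base - cur)}

baseline = {"components": [{"name": "openssl", "version": "3.0.13"}]}
current = {"components": [
    {"name": "openssl", "version": "3.0.13"},
    {"name": "xz-utils", "version": "5.6.0"},  # unexpected addition -> alert
]}
print(diff_sboms(baseline, current))
# {'added': [('xz-utils', '5.6.0')], 'removed': []}
```

Running this comparison on every production scan, against a stored baseline with revision history, is the automated equivalent of the monitoring DORA Article 10 requires.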

DORA Compliance Easy Button: Anchore Enterprise

Anchore Enterprise is engineered to satisfy all of DORA’s software supply chain requirements, acting as your DORA compliance easy button. Here’s how Anchore Enterprise can help:

Automated Software Supply Chain Risk Management

  • End-to-End SBOM Lifecycle Management (Anchore SBOM): Automatically generate and update SBOMs throughout your software development lifecycle.
  • Programmatic Vulnerability Scanning & Risk Assessment (Anchore Secure): Continuously scan software components and assess risk based on real-time data.
  • Policy-as-Code Enforcement (Anchore Enforce): Automate risk management policies to ensure adherence to DORA’s requirements.

Software Supply Chain Incident Response Automation

  • Continuous SCA and SBOM Generation (Anchore SBOM): Keep an updated view of your production software environment.
  • Surgical Zero-Day Vulnerability Identification (Anchore Secure): Quickly identify and remediate vulnerabilities, reducing the potential blast radius of an attack.

Google resolved the XZ Utils zero-day incident in less than 10 minutes by utilizing SBOMs and an SBOM management platform. Anchore Enterprise can help your organization achieve similar results with its SBOM management solutions.

Continuous Compliance Monitoring

  • Real-Time Dashboards (Anchore Enforce): Monitor compliance status with customizable, real-time dashboards.
  • Automated Compliance Reports (Anchore Enforce): Generate and share compliance reports with stakeholders effortlessly.
  • Policy-as-Code Compliance Automation (Anchore Enforce): Enforce compliance at every stage of the software development lifecycle.

If you’re interested in trying any of these features for yourself, Anchore Enterprise offers a 15-day free trial or reach out to our team for a demo of the platform.

Wrap-Up

DORA is redefining software supply chain security in the financial sector by demanding transparency, proactive risk management, and continuous monitoring of 3rd-party suppliers. For technology providers, this shift represents both a challenge and an opportunity: by embracing SBOMs and comprehensive supply chain security practices, you not only help your customers achieve regulatory compliance but also strengthen your own security posture.

At Anchore, we’re committed to helping you navigate this evolving landscape with solutions designed for the modern world of software supply chain security. Ready to meet DORA head-on? Contact us today or visit our blog for more insights and resources.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

SBOMs 101: A Free, Open Source eBook for the DevSecOps Community

Today, we’re excited to announce the launch of “Software Bill of Materials 101: A Guide for Developers, Security Engineers, and the DevSecOps Community”. This eBook is a free, open source resource that provides a comprehensive introduction to all things SBOM.

Why We Created This Guide

While SBOMs have become increasingly critical for software supply chain security, many developers and security professionals still struggle to understand and implement them effectively. We created this guide to help bridge that knowledge gap, drawing on our experience building popular SBOM tools like Syft.

What’s Inside

The ebook covers essential SBOM topics, including:

  • Core concepts and evolution of SBOMs
  • Different SBOM formats (SPDX, CycloneDX) and their use cases
  • Best practices for generating and managing SBOMs
  • Real-world examples of SBOM deployments at scale
  • Practical guidance for integrating SBOMs into DevSecOps pipelines

We’ve structured the content to be accessible to newcomers while providing enough depth for experienced practitioners looking to expand their knowledge.

Community-Driven Development

This guide is published under an open source license and hosted on GitHub at https://github.com/anchore/sbom-ebook. The collective wisdom of the DevSecOps community will strengthen this resource over time. We welcome contributions, whether fixes, new content, or translations.

Getting Started

You can read the guide online, download PDF/ePub versions, or clone the repository to build it locally. The source is in Markdown format, making it easy to contribute improvements.

Join Us

We invite you to:

  1. Read the guide at https://github.com/anchore/sbom-ebook
  2. Star the repository to show your support
  3. Share feedback through GitHub issues
  4. Contribute improvements via pull requests
  5. Help spread the word about SBOM best practices

The software supply chain security challenges we face require community collaboration. We hope this guide advances our collective understanding of SBOMs and their role in securing the software ecosystem.


Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

How to Tackle SBOM Sprawl and Secure Your Supply Chain

Software Bill of Materials (SBOM) has emerged as a pivotal technology to scale product innovation while taming the inevitable growth of complexity of modern software development. SBOMs are typically thought of as a comprehensive inventory of all software components—both open source and proprietary—within an application. But they are more than just a simple list of “ingredients”. They offer deeper insights that enable organizations to unlock enterprise-wide value. From automating security and compliance audits to assessing legal risks and scaling continuous regulatory compliance, SBOMs are central to the software development lifecycle.

However, as organizations begin to mature their SBOM initiative, the rapid growth in SBOM documents can quickly become overwhelming—a phenomenon known as SBOM sprawl. Below, we’ll outline how SBOM sprawl happens, why it’s a natural byproduct of a maturing SBOM program, and the best practices for wrangling the complexity to extract real value.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

How does SBOM Sprawl Happen?

SBOM adoption typically begins ad hoc: a Software Engineer starts self-scanning their source code for vulnerabilities, or a DevOps Engineer integrates an open source SBOM generator into a non-critical CI/CD pipeline as a favor to a friendly Security Engineer. SBOMs begin to pile up in pockets across the organization, and adoption continues to grow through casual conversation.

It’s only a matter of time before you accumulate a flood of SBOMs. As ad hoc adoption reaches scale, the volume of SBOM data balloons. We’re not all running at Google’s scale, but their SBOM growth chart still illustrates how quickly a new SBOM program can get out of hand.

Chart: growth in the number of SBOMs that Google has generated over time.

We call this SBOM sprawl: the overabundance of unmanaged and unorganized SBOM documents. When left unchecked, SBOM sprawl prevents SBOMs from delivering on their use-cases, like real-time vulnerability discovery, automated compliance checks, and supply chain insights.

So, how do we regain control as our SBOM initiative grows? The answer lies in a centralized data store. Instead of letting SBOMs scatter, we bring them together in a robust SBOM management system that can index, analyze, and make these insights actionable. Think of it like any large data management problem. The same best practices used for enterprise data management—ETL pipelines, centralized storage, robust queries—apply to SBOMs as well.

What is SBOM Management (SBOMOps)?

SBOM management—or SBOMOps—is defined as:

A data pipeline, centralized repository and query engine purpose-built to manage and extract actionable software supply chain insights from software component metadata (i.e., SBOMs).

In practical terms, SBOM management focuses on:

  • Reliably generating SBOMs at key points in the development lifecycle
  • Ingesting and converting 3rd-party supplier SBOMs
  • Storing the SBOM documents in a centralized repository
  • Enriching SBOMs from additional data sources (e.g., security, compliance, licensing, etc.)
  • Generating the use-case specific queries and reports that drive results

When done right, SBOMOps reduces chaos, making SBOMs instantly available for everything from zero-day vulnerability checks to open source license tracking—without forcing teams to manually search scattered locations for relevant files.
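As a rough sketch of this pattern (the function names and document shapes are illustrative, assuming a CycloneDX-style `components` list, and are not an Anchore interface), a minimal ingest-and-query pipeline might look like:

```python
# Hypothetical sketch of the SBOMOps pattern: ingest CycloneDX-style SBOM
# documents into a central inverted index, then answer "which applications
# contain package X?" without rescanning anything.
import json
from collections import defaultdict

def ingest(index, app, sbom_json):
    """Index every component of an application's SBOM by name@version."""
    sbom = json.loads(sbom_json)
    for comp in sbom.get("components", []):
        index[f"{comp['name']}@{comp['version']}"].add(app)

index = defaultdict(set)
ingest(index, "billing",
       '{"components": [{"name": "log4j-core", "version": "2.14.1"}]}')
ingest(index, "frontend",
       '{"components": [{"name": "react", "version": "18.2.0"}]}')

# Zero-day query: which apps ship the vulnerable log4j build?
print(index["log4j-core@2.14.1"])  # {'billing'}
```

A production system swaps the in-memory dict for a database and layers enrichment (vulnerability, license, compliance data) on top, but the core shape is the same: normalize, index, query.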

SBOM Management Best Practices

1. Store and manage SBOMs in a central repository

A single, central repository for SBOMs is the antidote to sprawl. Even if developer teams keep SBOM artifacts near their code repositories for local reference, the security organization should maintain a comprehensive SBOM inventory across all applications.

Why it matters:

  • Rapid incident response: When a new zero-day hits, you need a quick way to identify which applications are affected. That’s nearly impossible if your SBOM data is scattered or nonexistent.
  • Time savings: Instead of rescanning every application from scratch, you can consult the repository for an instant snapshot of dependencies.

2. Support SBOM standards—but don’t stop there

SPDX and CycloneDX are the two primary SBOM standards today, each with their own unique strengths. Any modern SBOM management system should be able to ingest and produce both. However, the story doesn’t end there.

Tooling differences:

  • Not all SBOM generation tools are created equal. For instance, Syft (OSS) and AnchoreCTL can produce a more comprehensive intermediate SBOM format containing additional metadata beyond what SPDX or CycloneDX alone might capture. This extra data can lead to:
    • More precise vulnerability detection
    • More granular policy enforcement
    • Fewer false positives
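To see why ingesting both standards matters, here is a hedged sketch of format normalization. The key names follow the SPDX (`packages`/`versionInfo`) and CycloneDX (`components`/`version`) JSON conventions, but real documents carry far more structure than this:

```python
# Minimal sketch of format-agnostic ingestion: SPDX and CycloneDX both carry
# name/version pairs, just under different keys, so a management system
# normalizes them into one internal shape. (Greatly simplified.)

def normalize(doc):
    if "components" in doc:                      # CycloneDX JSON
        return [(c["name"], c.get("version", "")) for c in doc["components"]]
    if "packages" in doc:                        # SPDX JSON
        return [(p["name"], p.get("versionInfo", "")) for p in doc["packages"]]
    raise ValueError("unrecognized SBOM format")

cyclonedx = {"components": [{"name": "curl", "version": "8.5.0"}]}
spdx = {"packages": [{"name": "curl", "versionInfo": "8.5.0"}]}
assert normalize(cyclonedx) == normalize(spdx) == [("curl", "8.5.0")]
```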

3. Require SBOMs from all 3rd-party software suppliers

It’s not just about your own code—any 3rd-party software you include is part of your software supply chain. SBOM best practice demands obtaining an SBOM from your suppliers, whether that software is open source or proprietary.

Challenges:

  • Many suppliers still do not produce SBOMs by default, so you may need to make SBOM delivery a procurement requirement.
  • Supplier SBOMs arrive in varying formats (SPDX, CycloneDX) and levels of detail, so they must be normalized before they can be queried alongside your own.

4. Generate SBOMs at each step in the development process and for each build

Yes, generating an SBOM with every build inflates your data management and storage needs. However, if you have an SBOM management system in place then storing these extra files is a non-issue—and the benefits are massive.

SBOM drift detection:
By generating SBOMs multiple times along the DevSecOps pipeline, you can detect any unexpected changes (drift) in your supply chain—whether accidental or malicious. This approach can thwart adversaries (e.g., APTs) that compromise upstream supplier code. An SBOM audit trail lets you pinpoint exactly when and where an unexpected code injection occurred.
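A minimal sketch of the drift check described above (the record shapes and function name are illustrative, not a specific tool's API):

```python
# Sketch of SBOM drift detection: diff the component sets of two successive
# build SBOMs to surface unexpected additions or removals.

def drift(prev, curr):
    a = {(c["name"], c["version"]) for c in prev}
    b = {(c["name"], c["version"]) for c in curr}
    return {"added": b - a, "removed": a - b}

build_41 = [{"name": "requests", "version": "2.31.0"}]
build_42 = [{"name": "requests", "version": "2.31.0"},
            {"name": "evil-helper", "version": "0.0.1"}]  # injected upstream

print(drift(build_41, build_42)["added"])  # {('evil-helper', '0.0.1')}
```

Because each build's SBOM is retained in the repository, the same diff can be replayed across the whole history to pinpoint the exact build where a component first appeared.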

5. Pay it forward >> Create an SBOM for your releases

If you expect third parties to provide you SBOMs, it’s only fair to provide your own to downstream users. This transparency fosters trust and positions you for future SBOM-related regulatory or compliance requirements. If SBOM compliance becomes mandatory (we think the industry is trending that direction), you’ll already be prepared.

How to Architect an SBOM Management System

The reference architecture below outlines the major components needed for a reliable SBOM management solution:

  1. Integrate SBOM Generation into CI/CD Pipeline: Embed an SBOM generator at every relevant DevSecOps pipeline stage to automate software composition analysis and SBOM generation.
  2. Ingest 3rd-party supplier SBOMs: Ingest and normalize external SBOMs from all 3rd-party software component suppliers.
  3. Send All SBOM Artifacts to Repository: Once generated, ensure each SBOM is automatically shipped to the central repository to serve as your source-of-truth for software supply chain data.
  4. Enrich SBOMs with critical business data: Enrich SBOMs from additional data sources (e.g., security, compliance, licensing, etc.)
  5. Stand Up Data Analysis & Visualization: Build or adopt dashboards and query tooling to generate the software supply chain insights desired from the scoped SBOM use-cases.
  6. Profit! (Figuratively): Gain comprehensive software supply chain visibility, faster incident response, and the potential to unlock advanced use-cases like automated policy-as-code enforcement.

You can roll your own SBOM management platform or opt for a turnkey solution 👇.

SBOM Management Using Anchore SBOM and Anchore Enterprise

Anchore offers a comprehensive SBOM management platform that helps you centralize and operationalize SBOM data—unlocking the full strategic value of your software bills of materials:

  • Anchore Enterprise
    • Out-of-the-box SBOM generation: Combined SCA scanning, SBOM generation and SBOM format support for full development environment coverage.
    • Purpose-built SBOM Inventory: Eliminates SBOM sprawl with a centralized SBOM repository to power SBOM use-cases like rapid incident response, license risk management and automated compliance.
    • DevSecOps Integrations: Drop-in plugins for standard DevOps tooling; CI/CD pipelines, container registries, ticketing systems, and more.
    • SBOM Enrichment Data Pipeline: Fully automated data enrichment pipeline for security, compliance, licensing, etc. data.
    • Pre-built Data Analysis & Visualization: Delivers immediate insights on the health of your software supply chain with visualization dashboards for vulnerabilities, compliance, and policy checks.
    • Policy-as-Code Enforcement: Customizable policy rules guarantee continuous compliance, preventing high-risk or non-compliant components from entering production and reducing manual intervention.

Ready to get serious about securing your software supply chain?
Request a demo to learn how Anchore Enterprise can help you implement a scalable SBOM management solution.

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

2025 Cybersecurity Executive Order Requires Up Leveled Software Supply Chain Security

A few weeks ago, the Biden administration published a new Executive Order (EO) titled “Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity”. This is a follow-up to the original cybersecurity executive order—EO 14028—from May 2021. This latest EO specifically targets improvements to software supply chain security that addresses gaps and challenges that have surfaced since the release of EO 14028. 

While many issues were identified, the executive order named and shamed software vendors for signing and submitting secure software development compliance documents without fully adhering to the framework. The full quote:

In some instances, providers of software to the Federal Government commit[ed] to following cybersecurity practices, yet do not fix well-known exploitable vulnerabilities in their software, which put the Government at risk of compromise. The Federal Government needs to adopt more rigorous 3rd-party risk management practices and greater assurance that software providers… are following the practices to which they attest.

In response to this behavior, the 2025 Cybersecurity EO has taken a number of additional steps to both encourage cybersecurity compliance and deter non-compliance—the carrot and the stick. This comes in the form of 4 primary changes:

  1. Compliance verification by CISA
  2. Legal repercussions for non-compliance
  3. Contract modifications for Federal agency software acquisition
  4. Mandatory adoption of software supply chain risk management practices by Federal agencies

In this post, we’ll explore the new cybersecurity EO in detail, what has changed in software supply chain security compliance and what both federal agencies and software providers can do right now to prepare.

What Led to the New Cybersecurity Executive Order?

In the wake of massive growth of supply chain attacks—especially those from nation-state threat actors like China—EO 14028 made software bill of materials (SBOMs) and software supply chain security spotlight agenda items for the Federal government. As directed by the EO, the National Institute of Standards and Technology (NIST) authored the Secure Software Development Framework (SSDF) to codify the specific secure software development practices needed to protect the US and its citizens. 

Following this, the Office of Management and Budget (OMB) published a memo that established a deadline for agencies to require vendor compliance with the SSDF. Importantly, the memo allowed vendors to “self-attest” to SSDF compliance.

In practice, many software providers chose to go the easy route and submitted SSDF attestations while only partially complying with the framework. Although the government initially hoped that vendors would not exploit this somewhat obvious loophole, reality intervened, leading to the issuance of the 2025 cybersecurity EO to close off these opportunities for non-compliance.

What’s Changing in the 2025 EO?

1. Rigorous verification of secure software development compliance

No longer can vendors simply self-attest to SSDF compliance. The Cybersecurity and Infrastructure Security Agency (CISA) is now tasked with validating these attestations via the additional compliance artifacts the new EO requires. Providers that fail validation risk increased scrutiny and potential consequences such as…

2. CISA authority to refer non-compliant vendors to DOJ

A major shift comes from the EO’s provision allowing CISA to forward fraudulent attestations to the Department of Justice (DOJ). In the EO’s words, officials may “refer attestations that fail validation to the Attorney General for action as appropriate.” This raises the stakes for software vendors, as submitting false information on SSDF compliance could lead to legal consequences. 

3. Explicit SSDF compliance in software acquisition contracts

The Federal Acquisition Regulatory Council (FAR Council) will issue contract modifications that explicitly require SSDF compliance and additional items to enable CISA to programmatically verify compliance. Federal agencies will incorporate updated language in their software acquisition contracts, making vendors contractually accountable for any misrepresentations in SSDF attestations.

See the “FAR council contract updates” section below for the full details.

4. Mandatory adoption of supply chain risk management

Agencies must now embed supply chain risk management (SCRM) practices agency-wide, aligning with NIST SP 800-161, which details best practices for assessing and mitigating risks in the supply chain. This elevates SCRM to a “must-have” strategy for every Federal agency.

Other Notable Changes

Updated software supply chain security compliance controls

NIST will update both NIST SP 800-53, the “Control Catalog”, and the SSDF (NIST SP 800-218) to align with the new policy. The updates will incorporate additional controls and greater detail on existing controls. Although no controls have yet been formally added or modified, NIST is tasked with modernizing these documents to align with changes in secure software development practices. Once those updates are complete, agencies and vendors will be expected to meet the revised requirements.

Policy-as-code pilot

Section 7 of the EO describes a pilot program focused on translating security controls from compliance frameworks into “rules-as-code” templates. Essentially, this adopts a policy-as-code approach, often seen in DevSecOps, to automate compliance. By publishing machine-readable security controls that can be integrated directly into DevOps, security, and compliance tooling, organizations can reduce manual overhead and friction, making it easier for both agencies and vendors to consistently meet regulatory expectations.
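The rules-as-code idea can be made concrete with a small sketch. The rule schema and function names below are invented purely for illustration; they are not drawn from the EO, NIST, or any Anchore product:

```python
# Hedged sketch of "rules-as-code": a machine-readable policy rule applied to
# vulnerability findings for an artifact, instead of a human reading a
# compliance document. The schema here is invented for the example.

RULE = {"deny_severities": {"Critical", "High"}, "max_findings": 0}

def evaluate(findings, rule):
    """Return True when the artifact passes the policy gate."""
    violations = [f for f in findings
                  if f["severity"] in rule["deny_severities"]]
    return len(violations) <= rule["max_findings"]

findings = [{"id": "CVE-2025-1974", "severity": "Critical"}]
print(evaluate(findings, RULE))  # False -> block the deployment
```

The point of publishing controls in a shape like `RULE` is that the same artifact can be evaluated identically in a CI pipeline, an admission controller, or an auditor's tooling, with no manual interpretation in between.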

FedRAMP incentives and new key management controls

The Federal Risk and Authorization Management Program (FedRAMP), responsible for authorizing cloud service providers (CSPs) for federal use, will also see important updates. FedRAMP will develop policies that incentivize or require CSPs to share recommended security configurations, thereby promoting a standard baseline for cloud security. The EO also proposes updates to FedRAMP security controls to address cryptographic key management best practices, ensuring that CSPs properly safeguard cryptographic keys throughout their lifecycle.

How to Prepare for the New Requirements

FAR council contract updates

Within 30 days of the EO release, the FAR Council will publish recommended contract language. This updated language will mandate:

  • Machine-readable SSDF Attestations: Vendors must provide an SSDF attestation in a structured, machine-readable format.
  • Compliance Reporting Artifacts: High-level artifacts that demonstrate evidence of SSDF compliance, potentially including automated scan results, security test reports, or vulnerability assessments.
  • Customer List: A list of all civilian agencies using the vendor’s software, enabling CISA and federal agencies to understand the scope of potential supply chain risk.

Important Note: The 30-day window applies to the FAR Council to propose new contract language—not for software vendors to become fully compliant. However, once the new contract clauses are in effect, vendors who want to sell to federal agencies will need to meet these updated requirements.
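The EO calls for attestations in a structured, machine-readable format but does not publish a schema; the required field names below are invented solely to illustrate what programmatic validation of such a document could look like:

```python
# Illustrative only: a hypothetical well-formedness check for a
# machine-readable SSDF attestation. The field names are assumptions,
# not the FAR Council's actual schema.

REQUIRED = {"vendor", "product", "ssdf_version", "attested_practices"}

def is_well_formed(attestation):
    """True if every required top-level field is present."""
    return REQUIRED.issubset(attestation)

doc = {"vendor": "Example Corp", "product": "widget-api",
       "ssdf_version": "1.1", "attested_practices": ["PO.1", "PS.1"]}
print(is_well_formed(doc))  # True
```

Whatever the final schema looks like, the shift from free-form PDFs to structured documents is what lets CISA validate attestations programmatically at scale.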

Action Steps for Federal Agencies

Federal agencies will bear new responsibilities to ensure compliance and mitigate supply chain risk. Here’s what you should do now:

  1. Inventory 3rd-Party Software Component Suppliers
  2. Assess Visibility into Supplier Risk
    • Perform a vulnerability scan on all 3rd-party components. If you already have SBOMs, scanning them for vulnerabilities is a quick way to identify risk.
  3. Identify Critical Suppliers
    • Determine which software vendors are mission-critical. This helps you understand where to focus your risk management efforts.
  4. Assess Data Sensitivity
    • If a vendor handles sensitive data (e.g., PII), extra scrutiny is needed. A breach here has broader implications for the entire agency.
  5. Map Potential Lateral Movement Risk
    • Understand if a vendor’s software could provide attackers with lateral movement opportunities within your infrastructure.

Action Steps for Software Providers

For software vendors, especially those who sell to the federal government, proactivity is key to maintaining and expanding federal contracts.

  1. Inventory Your Software Supply Chain
    • Implement an SBOM-powered SCA solution within your DevSecOps pipeline to gain real-time visibility into all 3rd-party components.
  2. Assess Supplier Risk
    • Perform vulnerability scanning on 3rd-party supplier components to identify any that could jeopardize your compliance or your customers’ security.
  3. Identify Sensitive Data Handling
    • If your software processes personally identifiable information (PII) or other sensitive data, expect increased scrutiny. On the flip side, this may make your offering “mission-critical” and less prone to replacement—but it also means compliance standards will be higher.
  4. Map Your Own Attack Surface
    • Assess whether a 3rd-party supplier breach could cascade into your infrastructure and, by extension, your customers.
  5. Prepare Compliance Evidence
    • Start collecting artifacts—such as vulnerability scan reports, secure coding guidelines, and internal SSDF compliance checklists—so you can quickly meet new attestation requirements when they come into effect.

Wrap-Up

The 2025 Cybersecurity EO is a direct response to the flaws uncovered in EO 14028’s self-attestation approach. By requiring 3rd-party validation of SSDF compliance, the government aims to create tangible improvements in its cybersecurity posture—and expects the same from all who supply its agencies.

Given the rapid timeline, preparation is crucial. Both federal agencies and software providers should begin assessing their supply chain risks, implementing SBOM-driven visibility, and proactively planning for new attestation and reporting obligations. By taking steps now, you’ll be better positioned to meet the new requirements.

Learn about SSDF Attestation with this guide. The eBook will take you through everything you need to know to get started.

SSDF Attestation 101: A Practical Guide for Software Producers

Software Supply Chain Security in 2025: SBOMs Take Center Stage

In recent years, we’ve witnessed software supply chain security transition from a quiet corner of cybersecurity into a primary battlefield. This is due to the increasing complexity of modern software, which obscures an uncomfortable truth: applications are towers of components of unknown origin. Cybercriminals have fully embraced this hidden complexity as a ripe vector to exploit.

Software Bills of Materials (SBOMs) have emerged as the focal point for achieving visibility and accountability in a software ecosystem that will only grow more complex. SBOMs are an inventory of the complex dependencies that make up modern applications. SBOMs help organizations scale vulnerability management and automate compliance enforcement. The end goal is to increase transparency in an organization’s supply chain, where 70-90% of a modern application’s code comes from open source software (OSS) dependencies. This significant source of risk demands a proactive, data-driven solution.

Looking ahead to 2025, we at Anchore see two trends for SBOMs that foreshadow their growing importance in software supply chain security:

  1. Global regulatory bodies continue to steadily drive SBOM adoption
  2. Foundational software ecosystems begin to implement build-native SBOM support

In this blog, we’ll walk you through the contextual landscape that leads us to these conclusions; keep reading if you want more background.

Global Regulatory Bodies Continue Growing Adoption of SBOMs

As supply chain attacks surged, policymakers and standards bodies recognized this new vector as a critical threat with national security implications. To stem the rising tide of supply chain threats, global regulatory bodies have recognized SBOMs as part of the solution.

Over the past decade, we’ve witnessed a global legislative and regulatory awakening to the utility of SBOMs. Early attempts like the US Cyber Supply Chain Management and Transparency Act of 2014 may have failed to pass, but they paved the way for more significant milestones to come. Things began to change in 2021, when the US Executive Order (EO) 14028 explicitly named SBOMs as the foundation for a secure software supply chain. The following year the European Union’s Cyber Resilience Act (CRA) pushed SBOMs from “suggested best practice” to “expected norm.”

The one-two punch of the US’s EO 14028 and the EU’s CRA has already prompted action among regulators worldwide. In the years following these mandates, numerous global bodies issued or updated their guidance on software supply chain security practices—specifically highlighting SBOMs. Cybersecurity offices in Germany, India, Britain, Australia, and Canada, along with the broader European Union Agency for Cybersecurity (ENISA), have each underscored the importance of transparent software component inventories. At the same time, industry consortiums in the US automotive (Auto-ISAC) and medical device (IMDRF) sectors recognized that SBOMs can help safeguard their own complex supply chains, as have federal agencies such as the FDA, NSA, and the Department of Commerce.

By the close of 2024, the pressure mounted further. In the US, the Office of Management and Budget (OMB) set a due date requiring all federal agencies to comply with the Secure Software Development Framework (SSDF), effectively reinforcing SBOM usage as part of secure software development. Meanwhile, across the Atlantic, the EU CRA officially became law, cementing SBOMs as a cornerstone of modern software security. This constant pressure ensures that SBOM adoption will only continue to grow. It won’t be long until SBOMs become table stakes for anyone operating an online business. We expect the steady march forward of SBOMs to continue in 2025.

In fact, this regulatory push has been noticed by the foundational ecosystems of the software industry and they are reacting accordingly.

Software Ecosystems Trial Build-Native SBOM Support

Until now, SBOM generation has been relegated to an afterthought in software ecosystems. Businesses scan their internal supply chains with software composition analysis (SCA) tools, trying to piece together a picture of their dependencies. But as SBOM adoption continues its upward momentum, this model is evolving. In 2025, we expect that leading software ecosystems will promote SBOMs to first-class citizens and integrate them natively into their build tools.

Industry experts have recently begun advocating for this change. Brandon Lum, the SBOM Lead at Google, notes, “The software industry needs to improve build tools propagating software metadata.” Rather than forcing downstream consumers to infer the software’s composition after the fact, producers will generate SBOMs as part of standard build pipelines. This approach reduces friction, makes application composition discoverable, and ensures that software supply chain security is not left behind.

We are already seeing early examples:

  • Linux Ecosystem (Yocto): The Yocto Project’s OpenEmbedded build system now includes native SBOM generation. This demonstrates the feasibility of integrating SBOM creation directly into the developer toolchain, establishing a blueprint for other ecosystems to follow.
  • Python Ecosystem: In 2024, Python maintainers explored proposals for build-native SBOM support, motivated by regulations such as the Secure Software Development Framework (SSDF) and the EU’s CRA. They’ve envisioned a future where projects, package maintainers, and contributors can easily annotate their code with software dependency metadata that will automatically propagate at build time.
  • Perl Ecosystem: The Perl Security Working Group has also begun exploring internal proposals for SBOM generation, again driven by the CRA’s regulatory changes. Their goal: ensure that Perl packages have SBOM data baked into their ecosystems so that compliance and security requirements can be met more effortlessly.
  • Java Ecosystem: The Eclipse Foundation and VMware’s Spring Boot team have introduced plug-ins for Java build tools like Maven or Gradle that streamline SBOM generation. While not fully native to the compiler or interpreter, these integrations lower the barrier to SBOM adoption within mainstream Java development workflows.

In 2025, we won’t just be talking about build-native SBOMs in abstract terms—we’ll have experimental support for them from the most forward-thinking ecosystems. This shift is still in its infancy, but it foreshadows the central role that SBOMs will play in the future of cybersecurity and software development as a whole.

Closing Remarks

The writing on the wall is clear: supply chain attacks aren’t slowing down—they are accelerating. In a world of complex, interconnected dependencies, every organization must know what’s inside its software to quickly spot and respond to risk. As SBOMs move from a nice-to-have to a fundamental part of building secure software, teams can finally gain the transparency they need over every component they use, whether open source or proprietary. This clarity is what will help them respond to the next Log4j or XZ Utils issue before it spreads, putting security teams back in the driver’s seat and ensuring that software innovation doesn’t come at the cost of increased vulnerability.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

All Things SBOM in 2025: a Weekly Webinar Series

Software Bills of Materials (SBOMs) have quickly become a critical component in modern software supply chain security. By offering a transparent view of all the components that make up your applications, SBOMs enable you to pinpoint vulnerabilities before they escalate into costly incidents.

As we enter 2025, software supply chain security and risk management for 3rd-party software dependencies are top of mind for organizations. The 2024 Anchore Software Supply Chain Security Survey notes that 76% of organizations consider these challenges top priorities. Given this, it is easy to see why understanding what SBOMs are—and how to implement them—is key to a secure software supply chain.

To help organizations achieve these top priorities, Anchore is hosting a weekly webinar series dedicated entirely to SBOMs. Beginning January 14 and running throughout Q1, our webinar line-up will explore a wide range of topics (see below). Industry luminaries like Kate Stewart (co-founder of the SPDX project) and Steve Springett (Chair of the OWASP CycloneDX Core Working Group) will be dropping in to provide unique insights and their special blend of expertise on all things SBOMs.

The series will cover:

  • SBOM basics and best practices
  • SPDX and SBOMs in-depth with Kate Stewart
  • Getting started: How to generate an SBOM
  • Software supply chain security and CycloneDX with Steve Springett
  • Scaling SBOMs for the enterprise
  • Real-world insights on applying SBOMs in high-stakes or regulated sectors
  • A look ahead at the future of SBOMs and software supply chain security with Kate Stewart
  • And more!

We invite you to learn from experts, gain practical skills, and stay ahead of the rapidly evolving world of software supply chain security. Visit our events page to register for the webinars now or keep reading to get a sneak peek at the content.

#1 Understanding SBOMs: An Intro to Modern Development

Date/Time: Tuesday, January 14, 2025 – 10am PST / 1pm EST
Featuring: 

  • Lead Developer of Syft
  • Anchore VP of Security
  • Anchore Director of Developer Relations

We are kicking off the series with an introduction to the essentials of SBOMs. This session will cover the basics of SBOMs—what they are, why they matter, and how to get started generating and managing them. Our experts will walk you through real-world examples (including Log4j) to show just how vital it is to know what’s in your software.

This webinar is perfect for both technical practitioners and business leaders looking to establish a strong SBOM foundation.

#2 Understanding SBOMs: Deep Dive with Kate Stewart

Date/Time: Wednesday, January 22, 2025 – 10am PST / 1pm EST
Featured Guest: Kate Stewart (co-founder of SPDX)

Our second session brings you a front-row seat to an in-depth conversation with Kate Stewart, co-founder of the SPDX project. Kate is a leading voice in software supply chain security and the SBOM standard. From the origins of the SPDX standard to the latest challenges in license compliance, Kate will provide an extensive behind-the-scenes look into the world of SBOMs.

Key Topics:

  • The history and evolution of SBOMs, including the creation of SPDX
  • Balancing license compliance with security requirements
  • How SBOMs support critical infrastructure with national security concerns
  • The impact of emerging technology—like open source LLMs—on SBOM generation and analysis

If you’re ready for a more advanced look at SBOMs and their strategic impact, you won’t want to miss this conversation.

#3 How to Automate, Generate, and Manage SBOMs

Date/Time: Wednesday, January 29, 2025 – 9am PST / 12pm EST
Featuring: 

  • Anchore Director of Developer Relations
  • Anchore Principal Solutions Engineer

For those seeking a hands-on approach, this webinar dives into the specifics of automating SBOM generation and management within your CI/CD pipeline. Anchore’s very own Alan Pope (Developer Relations) and Sean Fazenbaker (Solutions) will walk you through proven techniques for integrating SBOMs to reveal early vulnerabilities, minimize manual interventions, and improve overall security posture.

This is the perfect session for teams focused on shifting security left and preserving developer velocity.

What’s Next?

Beyond our January line-up, we have more exciting sessions planned throughout Q1. Each webinar will feature industry experts and dive deeper into specialized use-cases and future technologies:

  • CycloneDX & OWASP with Steve Springett – A closer look at this popular SBOM format, its technical architecture, and VEX integration.
  • SBOM at Scale: Enterprise SBOM Management – Learn from large organizations that have successfully rolled out SBOM practices across hundreds of applications.
  • SBOMs in High-Stakes Environments – Explore how regulated industries like healthcare, finance, and government handle unique compliance challenges and risk management.
  • The Future of Software Supply Chain Security – Join us in March as we look ahead at emerging standards, tools, and best practices with Kate Stewart returning as the featured guest.

Stay tuned for dates and registration details for each upcoming session. Follow us on your favorite social network (Twitter, LinkedIn, Bluesky) or visit our events page to stay up-to-date.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

The Top Ten List: The 2024 Anchore Blog

To close out 2024, we’re going to count down the top 10 hottest hits from the Anchore blog in 2024! The Anchore content team continued our tradition of delivering expert guidance, practical insights, and forward-looking strategies on DevSecOps, cybersecurity compliance, and software supply chain management.

This top ten list spotlights our most impactful blog posts from the year, each tackling a different angle of modern software development and security. Hot topics this year include: 

  • All things SBOM (software bill of materials)
  • DevSecOps compliance for the Department of Defense (DoD)
  • Regulatory compliance for federal government vendors (e.g., FedRAMP & SSDF Attestation)
  • Vulnerability scanning and management—a perennial favorite!

Our selection runs the gamut of knowledge needed to help your team stay ahead of the compliance curve, boost DevSecOps efficiency, and fully embrace SBOMs. So, grab your popcorn and settle in—it’s time to count down the blog posts that made the biggest splash this year!

The Top Ten List

10 | A Guide to Air Gapping

Kicking us off at number 10 is a blog that’s all about staying off the grid—literally. Ever wonder what it really means to keep your network totally offline? 

A Guide to Air Gapping: Balancing Security and Efficiency in Classified Environments breaks down the concept of “air gapping”—physically disconnecting a computer from the internet by leaving a “gap of air” where a network cable would otherwise be. It is generally considered a security practice to protect classified, military-grade data or similarly sensitive information.

Our blog covers the perks, like knocking out a huge range of cyber threats, and the downsides, like having to manually update and transfer data. It also details how Anchore Enforce Federal Edition can slip right into these ultra-secure setups, blending top-notch protection with the convenience of automated, cloud-native software checks.

9 | SBOMs + Vulnerability Management == Open Source Security++

Coming in at number nine on our countdown is a blog that breaks down two of our favorite topics: SBOMs and vulnerability scanners—and how using SBOMs as your foundation for vulnerability management can level up your open source security game.

SBOMs and Vulnerability Management: OSS Security in the DevSecOps Era is all about getting a handle on: 

  • every dependency in your code, 
  • scanning for issues early and often, and 
  • speeding up the DevSecOps process so you don’t feel the drag of legacy security tools. 

By switching to this modern, SBOM-driven approach, you’ll see benefits like faster fixes, smoother compliance checks, and fewer late-stage security surprises—just ask companies like NVIDIA, Infoblox, DreamFactory and ModuleQ, who’ve saved tons of time and hassle by adopting these practices.
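The core mechanic behind this SBOM-driven approach can be sketched in a few lines: match each (name, version) pair in your SBOM against an advisory feed. Everything below (package names, versions, and CVE IDs) is illustrative toy data, not a real feed.

```python
# Minimal sketch of SBOM-driven vulnerability matching.
# All names, versions, and CVE IDs are invented for illustration.

def find_vulnerable(components, advisories):
    """Return (component, cve) pairs where an SBOM component matches an advisory."""
    hits = []
    for comp in components:
        for adv in advisories:
            if comp["name"] == adv["name"] and comp["version"] in adv["affected_versions"]:
                hits.append((comp["name"], adv["cve"]))
    return hits

sbom_components = [
    {"name": "libexample", "version": "1.2.0"},
    {"name": "otherlib", "version": "3.4.5"},
]
advisories = [
    {"name": "libexample", "affected_versions": {"1.1.0", "1.2.0"}, "cve": "CVE-0000-0001"},
]

print(find_vulnerable(sbom_components, advisories))
```

Real scanners add version-range semantics and CPE/PURL matching on top of this, but the early-and-often scanning loop is exactly this comparison run on every build.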

8 | Improving Syft’s Binary Detection

Landing at number eight, we’ve got a blog post that’s basically a backstage pass to Syft’s binary detection magic. Improving Syft’s Binary Detection goes deep on how Syft—Anchore’s open source SBOM generation tool—uncovers the details of executable files and how you can lend a hand in making it even better.

We walk you through the process of adding new binary detection rules, from finding the right binaries and testing them out, to fine-tuning the patterns that match their version strings. 

The end goal? Helping all open source contributors quickly get started making their first pull request and broadening support for new ecosystems. A thriving, community-driven approach to better securing the global open source ecosystem.
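For a sense of how a binary detection rule works, here is a minimal sketch in Python. Syft’s real classifiers are Go patterns maintained in its source tree; the tool name and version string below are invented for illustration.

```python
import re

# Illustrative only: the core idea of a binary detection rule is matching a
# version string embedded in a binary's raw bytes. "mytool" is a made-up name.
VERSION_PATTERN = re.compile(rb"mytool v(\d+\.\d+\.\d+)")

def detect_version(binary_bytes):
    """Return the embedded version string, or None if no match is found."""
    match = VERSION_PATTERN.search(binary_bytes)
    return match.group(1).decode() if match else None

# Simulated binary content containing an embedded version string
blob = b"\x7fELF...padding...mytool v2.4.1...more bytes..."
print(detect_version(blob))  # -> 2.4.1
```

Contributing a rule to Syft follows the same shape: find a binary, identify a stable pattern around its version string, and add a test fixture proving the pattern matches.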

7 | A Guide to FedRAMP in 2024: FAQs & Key Takeaways

Sliding in at lucky number seven, we’ve got the ultimate cheat sheet for FedRAMP in 2024 (and 2025😉)! Ever wonder how Uncle Sam greenlights those fancy cloud services? A Guide to FedRAMP in 2024: FAQs & Key Takeaways spills the beans on all the FedRAMP basics you’ve been struggling to find—fast answers without all the fluff. 

It covers what FedRAMP is, how it works, who needs it, and why it matters; detailing the key players and how it connects with other federal security standards like FISMA. The idea is to help you quickly get a handle on why cloud service providers often need FedRAMP certification, what benefits it offers, and what’s involved in earning that gold star of trust from federal agencies. By the end, you’ll know exactly where to start and what to expect if you’re eyeing a spot in the federal cloud marketplace.

6 | Introducing Grant: OSS License Management

At number six on tonight’s countdown, we’re rolling out the red carpet for Grant—Anchore’s snazzy new software license-wrangling sidekick! Introducing Grant: An OSS project for inspecting and checking license compliance using SBOMs covers how Grant helps you keep track of software licenses in your projects. 

By using SBOMs, Grant can quickly show you which licenses are in play—and whether any have changed from something friendly to something more restrictive. With handy list and check commands, Grant makes it easier to spot and manage license risk, ensuring you keep shipping software without getting hit with last-minute legal surprises.

5 | An Overview of SSDF Attestation: Compliance Need-to-Knows

Landing at number five on tonight’s compliance countdown is a big wake-up call for all you software suppliers eyeing Uncle Sam’s checkbook: the SSDF Attestation Form. That’s right—starting now, if you wanna do business with the feds, you gotta show off those DevSecOps chops, no exceptions! In Using the Common Form for SSDF Attestation: What Software Producers Need to Know we break down the new Secure Software Development Attestation Form—released in March 2024—that’s got everyone talking in the federal software space. 

In short, if you’re a software vendor wanting to sell to the US government, you now have to “show your work” when it comes to secure software development. The form builds on the SSDF framework, turning it from a nice-to-have into a must-do. It covers which software is included, the timelines you need to know, and what happens if you don’t shape up.

There are real financial risks if you can’t meet the deadlines or if you fudge the details (hello, criminal penalties!). With this new rule, it’s time to get serious about locking down your dev practices or risk losing out on government contracts.

4 | Prep your Containers for STIG

At number four, we’re diving headfirst into the STIG compliance world—the DoD’s ultimate ‘tough crowd’ when it comes to security! If you’re feeling stressed about locking down those container environments—we’ve got you covered. 4 Ways to Prepare your Containers for the STIG Process is all about tackling the often complicated STIG process for containers in DoD projects. 

You’ll learn how to level up your team by cross-training cybersecurity pros in container basics and introducing your devs and architects to STIG fundamentals. It also suggests going straight to the official DISA source for current STIG info, making the STIG Viewer a must-have tool on everyone’s workstation, and looking into automation to speed up compliance checks. 

Bottom line: stay informed, build internal expertise, and lean on the right tools to keep the STIG process from slowing you down.

3 | Syft Graduates to v1.0!

Give it up for number three on our countdown—Syft’s big graduation announcement! In Syft Reaches v1.0!, we celebrate Syft hitting the big 1.0 milestone!

Syft is Anchore’s OSS tool for generating SBOMs, helping you figure out exactly what’s inside your software, from container images to source code. Over the years, it’s grown to support over 40 package types, outputting SBOMs in various formats like SPDX and CycloneDX. With v1.0, Syft’s CLI and API are now stable, so you can rely on it for consistent results and long-term compatibility. 

But don’t worry—development doesn’t stop here. The team plans to keep adding support for more ecosystems and formats, and they invite the community to join in, share ideas, and contribute to the future of Syft.

2 | RAISE 2.0 Overview: RMF and ATO for the US Navy

Next up at number two is the lowdown on RAISE 2.0—your backstage pass to lightning-fast software approvals with the US Navy! In RMF and ATO with RAISE 2.0 — Navy’s DevSecOps solution for Rapid Delivery we break down what RAISE 2.0 means for teams working with the Department of the Navy’s containerized software. The key takeaway? By using an approved DevSecOps platform—known as an RPOC—you can skip getting separate ATOs for every new app.

The guidelines encourage a “shift left” approach, focusing on early and continuous security checks throughout development. Tools like Anchore Enforce Federal Edition can help automate the required security gates, vulnerability scans, and policy checks, making it easier to keep everything compliant. 

In short, RAISE 2.0 is all about streamlining security, speeding up approvals, and helping you deliver secure code faster.

1 | Introduction to the DoD Software Factory

Taking our top spot at number one, we’ve got the DoD software factory—the true VIP of the DevSecOps world! We’re talking about a full-blown, high-security software pipeline that cranks out code for the defense sector faster than a fighter jet screaming across the sky. In Introduction to the DoD Software Factory we break down what a DoD software factory really is—think of it as a template to build a DoD-approved DevSecOps pipeline. 

The blog post details how concepts like shifting security left, using microservices, and leveraging automation all come together to meet the DoD’s sky-high security standards. Whether you choose an existing DoD software factory (like Platform One) or build your own, the goal is to streamline development without sacrificing security. 

Tools like Anchore Enforce Federal Edition can help with everything from SBOM generation to continuous vulnerability scanning, so you can meet compliance requirements and keep your mission-critical apps protected at every stage.

Wrap-Up

That wraps up the top ten Anchore blog posts of 2024! We covered it all: next-level software supply chain best practices, military-grade compliance tactics, and all the open-source goodies that keep your DevSecOps pipeline firing on all cylinders. 

The common thread throughout them all is the recognition that security and speed can work hand-in-hand. With SBOM-driven approaches, modern vulnerability management, and automated compliance checks, organizations can achieve the rapid, secure, and compliant software delivery required in the DevSecOps era. We hope these posts will serve as a guide and inspiration as you continue to refine your DevSecOps practice, embrace new technologies, and steer your organization toward a more secure and efficient future.

If you enjoyed our greatest hits album of 2024 but need more immediacy in your life, follow along in 2025 by subscribing to the Anchore Newsletter or following Anchore on your favorite social platform:

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

Going All In: Anchore at SBOM Plugfest 2024

When we were invited to participate in Carnegie Mellon University’s Software Engineering Institute (SEI) SBOM Harmonization Plugfest 2024, we saw an opportunity to contribute to SBOM generation standardization efforts and thoroughly exercise our open-source SBOM generator, Syft.

While the Plugfest only required two SBOM submissions, we decided to go all in – and learned some valuable lessons along the way.

The Plugfest Challenge

The SBOM Harmonization Plugfest aims to understand why different tools generate different SBOMs for the same software. It’s not a competition but a collaborative study to improve SBOM implementation harmonization. The organizers selected eight diverse software projects, ranging from Node.js applications to C++ libraries, and asked participants to generate SBOMs in standard formats like SPDX and CycloneDX.

Going Beyond the Minimum

Instead of just submitting two SBOMs, we decided to:

  1. Generate SBOMs for all eight target projects
  2. Create both source and binary analysis SBOMs where possible
  3. Output in every format Syft supports
  4. Test both enriched and non-enriched versions
  5. Validate everything thoroughly

This comprehensive approach would give us (and the broader community) much more data to work with.

Automation: The Key to Scale

To handle this expanded scope, we created a suite of scripts to automate the entire process:

  1. Target acquisition
  2. Source SBOM generation
  3. Binary building
  4. Binary SBOM generation
  5. SBOM validation

The entire pipeline runs in about 38 minutes on a well-connected server, generating nearly three hundred SBOMs across different formats and configurations.
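The scale comes from simple combinatorics: targets multiplied by output formats multiplied by enrichment variants. A sketch (with an illustrative subset of formats, not Syft’s full list):

```python
from itertools import product

# Illustrative sketch of the pipeline's combinatorics. The format list is an
# example subset; Syft supports more output formats than shown here.
targets = ["dependency-track", "httpie", "jq", "minecolonies",
           "opencv", "hexyl", "nodejs-goof", "gin-gonic"]
formats = ["spdx-json", "spdx-tag-value", "cyclonedx-json", "cyclonedx-xml", "syft-json"]
variants = ["", "_enriched"]

# One output file per (target, format, variant) combination
jobs = [f"{t}/{f}{v}.sbom" for t, f, v in product(targets, formats, variants)]
print(len(jobs))  # 8 targets x 5 formats x 2 variants = 80 source SBOMs
```

Adding the binary/container SBOM runs and the remaining formats Syft supports is what pushes the real total toward the nearly three hundred SBOMs mentioned above.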

The Power of Enrichment

One of Syft’s interesting features is its --enrich option, which can enhance SBOMs with additional metadata from online sources. Here’s a real example showing the difference in a CycloneDX SBOM for Dependency-Track:

$ wc -l dependency-track/cyclonedx-json.json dependency-track/cyclonedx-json_enriched.json
  5494 dependency-track/cyclonedx-json.json
  6117 dependency-track/cyclonedx-json_enriched.json

The enriched version contains additional information like license URLs and CPE identifiers:

{
  "license": {
    "name": "Apache 2",
    "url": "http://www.apache.org/licenses/LICENSE-2.0"
  },
  "cpe": "cpe:2.3:a:org.sonatype.oss:JUnitParams:1.1.1:*:*:*:*:*:*:*"
}

These additional identifiers are crucial for security and compliance teams – license URLs help automate legal compliance checks, while CPE identifiers enable consistent vulnerability matching across security tools.
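As a sketch of what enrichment does to each component record, the snippet below folds looked-up metadata into a component; the in-memory lookup table stands in for the online sources Syft actually queries, and its values mirror the example above rather than a live fetch.

```python
# Sketch of enrichment: merge license and CPE metadata into a component.
# ENRICHMENT_DB is a stand-in for online lookups, not a real data source.
ENRICHMENT_DB = {
    ("org.sonatype.oss", "JUnitParams", "1.1.1"): {
        "license_url": "http://www.apache.org/licenses/LICENSE-2.0",
        "cpe": "cpe:2.3:a:org.sonatype.oss:JUnitParams:1.1.1:*:*:*:*:*:*:*",
    }
}

def enrich(component):
    """Return a copy of the component with any known extra metadata attached."""
    key = (component["group"], component["name"], component["version"])
    extra = ENRICHMENT_DB.get(key, {})
    enriched = dict(component)
    if "license_url" in extra:
        enriched.setdefault("licenses", []).append({"url": extra["license_url"]})
    if "cpe" in extra:
        enriched["cpe"] = extra["cpe"]
    return enriched

comp = {"group": "org.sonatype.oss", "name": "JUnitParams", "version": "1.1.1"}
print(enrich(comp)["cpe"])
```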

SBOM Generation of Binaries

While source code analysis is valuable, many Syft users analyze built artifacts and containers. This reflects real-world usage where organizations must understand what’s being deployed, not just what’s in the source code. We built and analyzed binaries for most target projects:

Package | Build Method | Key Findings
Dependency Track | Docker | The container SBOMs included ~1000 more items than source analysis, including base image components like Debian packages
HTTPie | pip install | Binary analysis caught runtime Python dependencies not visible in source
jq | Docker | Python dependencies contributed significant additional packages
Minecolonies | Gradle | Java runtime archives (JARs) appeared in binary analysis, but not in the source
OpenCV | CMake | Binary and source SBOMs were largely the same
hexyl | Cargo build | Rust static linking meant minimal difference from source
nodejs-goof | Docker | Node.js runtime and base image packages significantly increased the component count

Some projects, like gin-gonic (a library) and PHPMailer, weren’t built as they’re not typically used as standalone binaries.

The differences between source and binary SBOMs were striking. For example, the Dependency-Track container SBOM revealed:

  • Base image operating system packages
  • Runtime dependencies not visible in source analysis
  • Additional layers of dependencies from the build process
  • System libraries and tools included in the container

This perfectly illustrates why both source and binary analysis are important:

  • Source SBOMs show some direct development dependencies
  • Binary/container SBOMs show the complete runtime environment
  • Together, they provide a full picture of the software supply chain

Organizations can leverage these differences in their CI/CD pipelines – using source SBOMs for early development security checks and binary/container SBOMs for final deployment validation and runtime security monitoring.
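The source-vs-binary comparison itself reduces to a set difference over component identities. The component names and versions below are illustrative:

```python
# Sketch: diff a source SBOM against a binary/container SBOM to surface
# runtime-only components. Names and versions are illustrative placeholders.
source_components = {("myapp", "1.0.0"), ("requests", "2.31.0")}
binary_components = {("myapp", "1.0.0"), ("requests", "2.31.0"),
                     ("libssl", "3.0.11"), ("debian-base", "12")}

# Components present only in the deployed artifact, invisible to source analysis
runtime_only = binary_components - source_components
print(sorted(runtime_only))
```

In practice the identity key would be a PURL or CPE rather than a bare (name, version) tuple, but the deployment-validation check is this same difference.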

Unexpected Discovery: SBOM Generation Bug

One of the most valuable outcomes wasn’t planned at all. During our comprehensive testing, we discovered a bug in Syft’s SPDX document generation. The SPDX validators were flagging our documents as invalid due to absolute file paths:

file name must not be an absolute path starting with "/", but is: 
/.github/actions/bootstrap/action.yaml
file name must not be an absolute path starting with "/", but is: 
/.github/workflows/benchmark-testing.yaml
file name must not be an absolute path starting with "/", but is: 
/.github/workflows/dependabot-automation.yaml
file name must not be an absolute path starting with "/", but is: 
/.github/workflows/oss-project-board-add.yaml

The SPDX specification requires relative file paths in the SBOM, but Syft used absolute paths. Our team quickly developed a fix, which involved converting absolute paths to relative ones in the format model logic:

// spdx requires that the file name field is a relative filename
// with the root of the package archive or directory
func convertAbsoluteToRelative(absPath string) (string, error) {
    // If the path is already relative, return it unchanged
    if !path.IsAbs(absPath) {
        // already relative
        log.Debugf("%s is already relative", absPath)
        return absPath, nil
    }
    // we use "/" here given that we're converting absolute paths from root to relative
    relPath, found := strings.CutPrefix(absPath, "/")
    if !found {
        return "", fmt.Errorf("error calculating relative path: %s", absPath)
    }
    return relPath, nil
}

The fix was simple but effective – stripping the leading “/” from absolute paths while maintaining proper error handling and logging. This change was incorporated into Syft v1.18.0, which we used for our final Plugfest submissions.

This discovery highlights the value of comprehensive testing and community engagement. What started as a participation in the Plugfest ended up improving Syft for all users, ensuring more standard-compliant SPDX documents. It’s a perfect example of how collaborative efforts like the Plugfest can benefit the entire SBOM ecosystem.

SBOM Validation

We used multiple validation tools to verify our SBOMs.

Interestingly, we found some disparities between validators. For example, some enriched SBOMs that passed sbom-utility validation failed with pyspdxtools. Further, the NTIA online validator gave us yet another result in many cases. This highlights the ongoing challenges in SBOM standardization—even the tools that check SBOM validity don’t always agree!

Key Takeaways

  • Automation is crucial: Our scripted approach allowed us to efficiently generate and validate hundreds of SBOMs.
  • Real-world testing matters: Building and analyzing binaries revealed insights (and bugs!) that source-only analysis might have missed.
  • Enrichment adds value: Additional metadata can significantly enhance SBOM utility, though support varies by ecosystem.
  • Validation is complex: Different validators can give different results, showing the need for further standardization.

Looking Forward

The SBOM Harmonization Plugfest results will be analyzed in early 2025, and we’re eager to see how different tools handled the same targets. Our comprehensive submission will help identify areas where SBOM generation can be improved and standardized.

More importantly, this exercise has already improved Syft for our users through the bug fix and given us valuable insights for future development. We’re committed to continuing this thorough testing and community participation to make SBOM generation more reliable and consistent for everyone.

The final SBOMs are published in the plugfest-sboms repo, with the scripts in the plugfest-scripts repository. Consider using Syft for SBOM generation against your code and containers, and let us know how you get on in our community discourse.

ModuleQ reduces vulnerability management time by 80% with Anchore Secure

ModuleQ, an AI-driven enterprise knowledge platform, knows only too well the stakes for a company providing software solutions in the highly regulated financial services sector. In this world where data breaches are cause for termination of a vendor relationship and evolving cyberthreats loom large, proactive vulnerability management is not just a best practice—it’s a necessity. 

ModuleQ required a vulnerability management platform that could automatically identify and remediate vulnerabilities, maintain airtight security, and meet stringent compliance requirements—all without slowing down their development velocity.


Learn the essential container security best practices to reduce the risk of software supply chain attacks in this white paper.


The Challenge: Scaling Security in a High-Stakes Environment

ModuleQ found itself drowning in a flood of newly released vulnerabilities—over 25,000 in 2023 alone. Operating in a heavily regulated industry meant any oversight could have severe repercussions. High-profile incidents like the Log4j exploit underscored the importance of supply chain security, yet the manual, resource-intensive nature of ModuleQ’s vulnerability management process made it hard to keep pace.

The mandate that no critical vulnerabilities reached production was a particularly high bar to meet with the existing manual review process. Each time engineers stepped away from their coding environment to check a separate security dashboard, they lost context, productivity, and confidence. The fear of accidentally letting something slip through the cracks was ever present.

The Solution: Anchore Secure for Automated, Integrated Vulnerability Management

ModuleQ chose Anchore Secure to simplify, automate, and fully integrate vulnerability management into their existing DevSecOps workflows. Instead of relying on manual security reviews, Anchore Secure injected security measures seamlessly into ModuleQ’s Azure DevOps pipelines, .NET, and C# environment. Every software build—staged nightly through a multi-step pipeline—was automatically scanned for vulnerabilities. Any critical issues triggered immediate notifications and halted promotions to production, ensuring that potential risks were addressed before they could ever reach customers.
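The promotion gate in that nightly pipeline reduces to a small predicate: block the release if any finding is critical. The finding structure and severity labels below are illustrative, not Anchore’s API.

```python
# Sketch of a promotion gate: a build is promoted only if its scan reports
# no critical findings. Finding shape and CVE IDs are invented for illustration.
def promotion_allowed(findings):
    """Return True if no finding in the scan results is Critical."""
    return not any(f["severity"] == "Critical" for f in findings)

scan = [
    {"id": "CVE-0000-0002", "severity": "High"},
    {"id": "CVE-0000-0003", "severity": "Critical"},
]
print(promotion_allowed(scan))  # -> False, so the pipeline halts promotion
```

In a real pipeline this predicate runs as a step after the scan, failing the job (and triggering notifications) instead of merely returning False.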

Equally important, Anchore’s platform was built to operate in on-prem or air-gapped environments. This guaranteed that ModuleQ’s clients could maintain the highest security standards without the need for external connectivity. For an organization whose customers demand this level of diligence, Anchore’s design provided peace of mind and strengthened client relationships.

Results: Faster, More Secure Deployments

By adopting Anchore Secure, ModuleQ dramatically accelerated and enhanced its vulnerability management approach:

  • 80% Reduction in Vulnerability Management Time: Automated scanning, triage, and reporting freed the team from manual checks, letting them focus on building new features rather than chasing down low-priority issues.
  • 50% Less Time on Security Tasks During Deployments: Proactive detection of high-severity vulnerabilities streamlined deployment workflows, enabling ModuleQ to deliver software faster—without compromising security.
  • Unwavering Confidence in Compliance: With every new release automatically vetted for critical vulnerabilities, ModuleQ’s customers in the financial sector gained renewed trust. Anchore’s support for fully on-prem deployments allowed ModuleQ to meet stringent security demands consistently.

Looking Ahead

In an era defined by unrelenting cybersecurity threats, ModuleQ proved that speed and security need not be at odds. Anchore Secure provided a turnkey solution that integrated seamlessly into their workflow, saving time, strengthening compliance, and maintaining the agility to adapt to future challenges. By adopting an automated security backbone, ModuleQ has positioned itself as a trusted and reliable partner in the financial services landscape.

Looking for more details? Read the ModuleQ case study in full. If you’re ready to move forward see all of the features on Anchore Secure’s product page or reach out to our team to schedule a demo.

The Evolution of SBOMs in the DevSecOps Lifecycle: Part 2

Welcome back to the second installment of our two-part series on “The Evolution of SBOMs in the DevSecOps Lifecycle”. In our first post, we explored how Software Bills of Materials (SBOMs) evolve over the first 4 stages of the DevSecOps pipeline—Plan, Source, Build & Test—and how each type of SBOM serves different purposes. Some of those use-cases include: shift left vulnerability detection, regulatory compliance automation, OSS license risk management and incident root cause analysis.

In this part, we’ll continue our exploration with the final 4 stages of the DevSecOps lifecycle, examining:

  • Analyzed SBOMs at the Release (Registry) stage
  • Deployed SBOMs during the Deployment phase
  • Runtime SBOMs in Production (Operate & Monitor stages)

As applications migrate down the pipeline, design decisions made at the beginning start to ossify, becoming more difficult to change; this shapes the challenges that teams experience and the role that SBOMs play in overcoming them. New challenges at these stages include pipeline leaks, vulnerabilities in third-party packages, and runtime injection, all of which introduce significant risk. Understanding how SBOMs evolve across these stages helps organizations mitigate these risks effectively.

Whether you’re aiming to enhance your security posture, streamline compliance reporting, or improve incident response times, this comprehensive guide will equip you with the knowledge to leverage SBOMs effectively from Release to Production. Additionally, we’ll offer pro tips to help you maximize the benefits of SBOMs in your DevSecOps practices.

So, let’s continue our journey through the DevSecOps pipeline and discover how SBOMs can transform the latter stages of your software development lifecycle.


Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.


Release (or Registry) => Analyzed SBOM

After development is completed and the new release of the software is declared a “golden” image, the build system will push the release artifact to a registry for storage until it is deployed. At this stage, an SBOM that is generated based on these container images, binaries, etc. is named an “Analyzed SBOM” by CISA. The name is a little confusing since all SBOMs should be analyzed regardless of the stage at which they are generated. A more appropriate name might be a Release SBOM, but we’ll stick with CISA’s name for now.

At first glance, it would seem that Analyzed SBOMs and the final Build SBOMs should be identical since it is the same software, but that doesn’t hold up in practice. DevSecOps pipelines aren’t hermetically sealed systems; they can be “leaky”. You might be surprised what finds its way into this storage repository and eventually gets deployed, bypassing your carefully constructed build and test setup.

On top of that, the registry holds more than just first-party applications that are built in-house. It also stores 3rd-party container images like operating systems and any other self-contained applications used by the organization.
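Pointing a generator at the registry can be as simple as building one invocation per image. The registry host and image names below are placeholders; the `syft <image> -o <format>` shape follows Syft’s CLI, though exact flags should be checked against the version you run.

```python
# Sketch: one SBOM-generator invocation per image in a registry listing.
# Registry host and image names are placeholders, not real repositories.
def build_syft_command(image, output="spdx-json"):
    """Construct the argument list for a single Syft scan of one image."""
    return ["syft", image, "-o", output]

registry_images = [
    "registry.example.com/team-a/api:1.4.2",
    "registry.example.com/vendor/nginx:1.25",
]
for img in registry_images:
    print(" ".join(build_syft_command(img)))
```

A real deployment would pull the image list from the registry’s catalog API and hand each command to a job runner, but the standalone, low-lift nature of this stage is visible even in the sketch.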

The additional metadata that is collected for an Analyzed SBOM includes:

  • Release images that bypass the happy path build and test pipeline
  • 3rd-party container images, binaries and applications

Pros and Cons

Pros:

  • Comprehensive Artifact Inventory: A more holistic view of all software—both 1st- and 3rd-party—that is utilized in production.
  • Enhanced Security and Compliance Posture: Catches vulnerabilities and non-compliant images for all software that will be deployed to production. This reduces the risk of security incidents and compliance violations.
  • Third-Party Supply Chain Risk Management: Provides insights into the vulnerabilities and compliance status of third-party components.
  • Ease of implementation: This stage is typically the lowest lift for implementation given that most SBOM generators can be deployed standalone and pointed at the registry to scan all images.

Cons:

  • High Risk for Release Delays: Scanning images at this stage is akin to traditional waterfall-style development patterns. Most design decisions are baked in and changes typically incur a steep penalty.
  • Difficult to Push Feedback into Existing Workflows: The registry sits outside of typical developer workflows, and creating a feedback loop that seamlessly reports issues without changing the developer’s process is a non-trivial amount of work.
  • Complexity in Management: Managing SBOMs for both internally developed and third-party components adds complexity to the software supply chain.

Use-Cases

  • Software Supply Chain Security: Organizations can detect vulnerabilities in both their internally developed software and external software to prevent supply chain injections from leading to a security incident.
  • Compliance Reporting: Reporting on both 1st- and 3rd-party software is necessary for industries with strict regulatory requirements.
  • Detection of Leaky Pipelines: Identifies release images that have bypassed the standard build and test pipeline, allowing teams to take corrective action.
  • Third-Party Risk Analysis: Assesses the security and compliance of third-party container images, binaries, and applications before they are deployed.

Example: An organization subject to strict compliance standards like FedRAMP or cATO uses Analyzed SBOMs to verify that all artifacts in their registry, including third-party applications, comply with security policies and licensing requirements. This practice not only enhances their security posture but also streamlines the audit process.

Pro Tip

A registry is an easy and non-invasive way to test and evaluate potential SBOM generators. It won’t give you a full picture of what can be found in your DevSecOps pipeline but it will at least give you an initial idea of its efficacy and help you make the decision on whether to go through the effort of integrating it into your build pipeline where it will produce deeper insights.

Deploy => Deployed SBOM

As your container orchestrator deploys an image from your registry into production, it also orchestrates any production dependencies such as sidecar containers. An SBOM generated at this stage is what CISA calls a “Deployed SBOM”.

The ideal scenario is that your operations team is storing all of these images in the same central registry as your engineering team but—as we’ve noted before—reality diverges from the ideal.

The additional metadata that is collected for a Deployed SBOM includes:

  • Any additional sidecar containers or production dependencies that are injected or modified through a release controller.

Pros and Cons

Pros:

  • Enhanced Security Posture: The final gate to prevent vulnerabilities from being deployed into production. This reduces the risk of security incidents and compliance violations.
  • Leaky Pipeline Detection: Another location to increase visibility into the happy path of the DevSecOps pipeline being circumvented.
  • Compliance Enforcement: Some compliance standards require a deployment breaking enforcement gate before any software is deployed to production. A container orchestrator release controller is the ideal location to implement this.

Cons:

Essentially the same issues that come up during the release phase.

  • High Risk for Release Delays: Scanning images at this stage happens even later than in traditional waterfall-style development patterns and will incur a steep penalty if an issue is uncovered.
  • Difficult to Push Feedback into Existing Workflows: A deployment release controller sits outside of typical developer workflows, and creating a feedback loop that seamlessly reports issues without changing the developer’s process is a non-trivial amount of work.

Use-Cases

  • Strict Software Supply Chain Security: Implementing a pipeline breaking gating mechanism is typically reserved for only the most critical security vulnerabilities (think: an actively exploitable known vulnerability).
  • High-Stakes Compliance Enforcement: Industries like defense, financial services and critical infrastructure will require vendors to implement a deployment gate for specific risk scenarios beyond actively exploitable vulnerabilities.
  • Compliance Audit Automation: Most regulatory compliance frameworks mandate audit artifacts at deploy time; these documents can be automatically generated and stored for future audits.

Example: A Deployed SBOM can be used as the source of truth for generating a report that demonstrates that no HIGH or CRITICAL vulnerabilities were deployed to production during an audit period.
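The audit report described above can be sketched in a few lines. This is an illustrative example only: the data shape (per-deployment scan findings with a severity field) is an assumption, not the output format of any specific scanner.

```python
# Sketch: summarizing Deployed SBOM scan results into an audit statement that
# no HIGH or CRITICAL vulnerabilities shipped during a period.
# The "deployments" data shape is hypothetical, for illustration.
def audit_summary(deployments):
    """Count HIGH/CRITICAL findings per deployed image for an audit report."""
    blocked = {"High", "Critical"}
    return {
        d["image"]: sum(1 for v in d["findings"] if v["severity"] in blocked)
        for d in deployments
    }

deployments = [
    {"image": "api:v1.4.2", "findings": [{"severity": "Medium"}]},
    {"image": "web:v2.0.1", "findings": []},
]
summary = audit_summary(deployments)
print(summary)
print(all(count == 0 for count in summary.values()))  # True means a clean audit period
```

In practice the findings would come from a vulnerability scanner run against each Deployed SBOM, and the summary would be exported as a signed audit artifact.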

Pro Tip

Combine a Deployed SBOM with a container vulnerability scanner that cross-checks all vulnerabilities against CISA’s Known Exploitable Vulnerability (KEV) database. In the scenario where a matching KEV is found for a software component you can configure your vulnerability scanner to return a FAIL response to your release controller to abort the deployment.

This strategy strikes an ideal balance: it avoids adding delays to routine software delivery while blocking the deployments most likely to cause a security incident.
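The KEV-based gate can be sketched as a small policy function. The data shapes below (a set of KEV CVE IDs and a list of vulnerability matches) are illustrative assumptions, not the API of any particular scanner or release controller.

```python
# Sketch of a KEV-aware deployment gate: FAIL only when a matched CVE appears
# in CISA's Known Exploited Vulnerabilities catalog. Data shapes are hypothetical.
def kev_gate(vuln_matches, kev_ids):
    """Return ('FAIL', hits) if any matched CVE is a known exploited vuln."""
    exploited = sorted({v["cve"] for v in vuln_matches if v["cve"] in kev_ids})
    return ("FAIL", exploited) if exploited else ("PASS", [])

kev_ids = {"CVE-2021-44228", "CVE-2023-4863"}  # sample KEV entries
matches = [
    {"package": "log4j-core", "cve": "CVE-2021-44228", "severity": "Critical"},
    {"package": "libfoo", "cve": "CVE-2024-0001", "severity": "High"},
]
status, hits = kev_gate(matches, kev_ids)
print(status, hits)  # FAIL ['CVE-2021-44228']
```

A release controller would call this check as its admission decision: a FAIL aborts the rollout while non-KEV findings pass through for ordinary triage.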

Operate & Monitor (or Production) => Runtime SBOM

After your container orchestrator has deployed an application into your production environment, it is live and serving customer traffic. SBOMs generated at this stage don’t have an official name from CISA; they are sometimes referred to as “Runtime SBOMs”. SBOMs are still a new-ish standard and will continue to evolve.

The additional metadata that is collected for a Runtime SBOM includes:

  • Modifications (i.e., intentional hotfixes or malicious injections) made to running applications in your production environment.

Pros and Cons

Pros:

  • Continuous Security Monitoring: Identifies new vulnerabilities that emerge after deployment.
  • Active Runtime Inventory: Provides a canonical view into an organization’s active software landscape.
  • Low Lift Implementation: Deploying SBOM generation into a production environment typically only requires deploying the scanner as another container and giving it permission to access the rest of the production environment.

Cons:

  • No Shift-Left Security: Runtime scanning is, by definition, excluded from a shift-left security posture.
  • Potential for Release Rollbacks: Scanning images at this stage is the worst possible place for proactive remediation. Discovering a vulnerability could potentially cause a security incident and force a release rollback.

Use-Cases

  • Rapid Incident Management: When new critical vulnerabilities are discovered and announced by the community the first priority for an organization is to determine exposure. An accurate production inventory, down to the component-level, is needed to answer this critical question.
  • Threat Detection: Continuously monitoring for anomalous activity linked to specific components. Sealing your system off completely from advanced persistent threats (APTs) is an unfeasible goal. Instead, quick detection and rapid intervention are the scalable way to limit the impact of these adversaries.
  • Patch Management: As new releases of 3rd-party components and applications are released an inventory of impacted production assets provides helpful insights that can direct the prioritization of engineering efforts.

Example: When the XZ Utils vulnerability was announced in the spring of 2024, organizations that already automatically generated a Runtime SBOM inventory ran a simple search query against their SBOM database and knew within minutes—or even seconds—whether they were impacted.
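That "simple search query" can be sketched as follows. The in-memory inventory and its field names are illustrative assumptions; a real deployment would query an SBOM database or management platform rather than a Python list.

```python
# Sketch of an incident-response query: "are we running an affected version of
# xz-utils anywhere in production?" The inventory structure is hypothetical.
def find_exposure(inventory, component, bad_versions):
    """Return (service, version) pairs where an affected component is deployed."""
    return [
        (sbom["service"], pkg["version"])
        for sbom in inventory
        for pkg in sbom["packages"]
        if pkg["name"] == component and pkg["version"] in bad_versions
    ]

inventory = [
    {"service": "api-gateway", "packages": [{"name": "xz-utils", "version": "5.6.1"}]},
    {"service": "billing", "packages": [{"name": "xz-utils", "version": "5.4.5"}]},
]
# CVE-2024-3094 affected xz-utils 5.6.0 and 5.6.1
print(find_exposure(inventory, "xz-utils", {"5.6.0", "5.6.1"}))
```

The point is less the code than the precondition: this query is only answerable in minutes if the Runtime SBOM inventory already exists before the disclosure.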

Pro Tip

If you want to learn how Google went from an all-hands-on-deck security incident when the XZ Utils vulnerability was announced to an all-clear in under 10 minutes, watch our webinar with the lead of Google’s SBOM initiative.



Wrap-Up

As the SBOM standard has evolved, its scope has grown considerably. What started as a structured way to store information about open source licenses has expanded to cover numerous use-cases. A clear understanding of the evolution of SBOMs throughout the DevSecOps lifecycle is essential for organizations aiming to solve problems ranging from software supply chain security to regulatory compliance to legal risk management.

SBOMs are a powerful tool in the arsenal of modern software development. By recognizing their importance and integrating them thoughtfully across the DevSecOps lifecycle, you position your organization at the forefront of secure, efficient, and compliant software delivery.

Ready to secure your software supply chain and automate compliance tasks with SBOMs? Anchore is here to help. We offer SBOM management, vulnerability scanning and compliance automation enforcement solutions. If you still need some more information before looking at solutions, check out our webinar below on scaling a secure software supply chain with Kubernetes. 👇👇👇

Learn how Spectro Cloud secured their Kubernetes-based software supply chain and the pivotal role SBOMs played.

The Evolution of SBOMs in the DevSecOps Lifecycle: From Planning to Production

The software industry has wholeheartedly adopted the practice of building new software on the shoulders of the giants that came before. To accomplish this, developers construct a foundation of pre-built, 3rd-party components and then wrap custom 1st-party code around this structure to create novel applications. It is an extraordinarily innovative and productive practice, but it also introduces challenges ranging from security vulnerabilities to compliance headaches to legal risk nightmares. Software bills of materials (SBOMs) have emerged to provide solutions for these wide-ranging problems.

An SBOM provides a detailed inventory of all the components that make up an application at a point in time. However, it’s important to recognize that not all SBOMs are the same—even for the same piece of software! SBOMs evolve throughout the DevSecOps lifecycle, just as an application evolves from source code to a container image to a running application. The Cybersecurity and Infrastructure Security Agency (CISA) has codified this idea by differentiating between the different types of SBOMs. Each type serves different purposes and captures information about an application through its evolutionary process.

In this 2-part blog series, we’ll dive deep into each stage of the DevSecOps process and the associated SBOM, highlighting the differences, the benefits and disadvantages, and the use-cases that each type of SBOM supports. Whether you’re just beginning your SBOM journey or looking to deepen your understanding of how SBOMs can be integrated into your DevSecOps practices, this comprehensive guide will provide valuable insights and advice from industry experts.


Learn about the role that SBOMs play in the security, including open source software (OSS) security, of your organization in this white paper.

Types of SBOMs and the DevSecOps Pipeline

Over the past decade, the US government got serious about software supply chain security and began advocating for SBOMs as the standardized approach to the problem. As part of this initiative, CISA created the Types of Software Bill of Material (SBOM) Documents white paper, which codified the definitions of the different types of SBOMs and mapped them to each stage of the DevSecOps lifecycle. We will discuss each in turn, but before we do, let’s anchor on some terminology to prevent confusion or misunderstanding.

Below is a diagram that lays out each stage of the DevSecOps lifecycle as well as the naming convention we will use going forward.

With that out of the way, let’s get started!

Plan => Design SBOM

As the DevSecOps paradigm has spread across the software industry, a notable best practice known as the security architecture review has become integral to the development process. This practice embodies the DevSecOps goal of integrating security into every phase of the software lifecycle, aligning perfectly with the concept of Shift-Left Security—addressing security considerations as early as possible.

At this stage, the SBOM documents the planned components of the application. The CISA refers to SBOMs generated during this phase as Design SBOMs. These SBOMs are preliminary and outline the intended components and dependencies before any code is written.

The metadata that is collected for a Design SBOM includes:

  • Component Inventory: Identifying potential OSS libraries and frameworks to be used as well as the dependency relationship between the components.
  • Licensing Information: Understanding the licenses associated with selected components to ensure compliance.
  • Risk Assessment Data: Evaluating known vulnerabilities and security risks associated with each component.

This might sound like a lot of extra work, but luckily, if you’re already performing DevSecOps-style planning that incorporates a security and legal review—as is best practice—you’re already surfacing all of this information. The only difference is that this preliminary data is formatted and stored in a standardized data structure, namely an SBOM.

Pros and Cons

Pros:

  • Maximal Shift-Left Security: Vulnerabilities cannot be found any earlier in the software development process. Design time security decisions are the peak of a proactive security posture and preempt bad design decisions before they become ingrained into the codebase.
  • Cost Efficiency: Resolving security issues at this stage is generally less expensive and less disruptive than during later stages of development or—worst of all—after deployment.
  • Legal and Compliance Risk Mitigation: Ensures that all selected components meet necessary compliance standards, avoiding legal complications down the line.

Cons:

  • Upfront Investment: Gathering detailed information on potential components and maintaining an SBOM at this stage requires a non-trivial commitment of time and effort.
  • Incomplete Information: Projects are not static; they adapt as unplanned challenges surface. A Design SBOM likely won’t stay relevant for long.

Use-Cases

There are a number of use-cases that are enabled by Design SBOMs:

  • Security Policy Enforcement: Automatically checking proposed components against organizational security policies to prevent the inclusion of disallowed libraries or frameworks.
  • License Compliance Verification: Ensuring that all components comply with the project’s licensing requirements, avoiding potential legal issues.
  • Vendor and Third-Party Risk Management: Assessing the security posture of third-party components before they are integrated into the application.
  • Enhance Transparency and Collaboration: A well-documented SBOM provides a clear record of the software’s components and, more importantly, shows that the project aligns with the goals of all of the stakeholders (engineering, security, legal, etc.). This builds trust and creates a collaborative environment that increases the chances that each stakeholder’s desired outcome will be achieved.

Example:

A financial services company operating within a strict regulatory environment uses SBOMs during planning to ensure that all components meet standards like PCI DSS. By doing so, they prevent the incorporation of insecure components that won’t pass PCI compliance, reducing the risk of the financial penalties associated with security breaches and regulatory non-compliance.
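A design-time policy check like this boils down to comparing planned components against an allowlist. The following sketch is illustrative only: the component names, license identifiers, and allowlist are hypothetical examples, not a mandated PCI DSS control.

```python
# Hypothetical design-time license/policy check against an organizational
# allowlist. Component data and the allowlist are made-up examples.
ALLOWED_LICENSES = {"Apache-2.0", "MIT", "BSD-3-Clause"}

def license_violations(components, allowed=ALLOWED_LICENSES):
    """Return names of planned components whose declared license is not allowed."""
    return [c["name"] for c in components if c["license"] not in allowed]

planned = [
    {"name": "fastapi", "license": "MIT"},
    {"name": "somelib", "license": "AGPL-3.0"},  # strong copyleft: flag for legal review
]
print(license_violations(planned))
```

Because a Design SBOM exists before any code is written, a failed check here costs a conversation in a planning meeting rather than a refactor later.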

Pro Tip

If your organization is still early in the maturity of its SBOM initiative, we generally recommend moving the integration of design-time SBOMs to the back of the queue. As we mentioned at the beginning of this section, the information stored in a Design SBOM is naturally surfaced during the DevSecOps process; as long as that information is being recorded and stored, much of the value of a Design SBOM is already captured. This level of SBOM integration is best saved for later maturity stages, when your organization is ready to explore deeper insights that have a higher risk-to-reward ratio.

Alternatively, if your organization is having difficulty getting teams to adopt a collaborative DevSecOps planning process, mandating an SBOM as a requirement can act as a forcing function to catalyze a cultural shift.

Source => Source SBOM

During the development stage, engineers implement the selected 3rd-party components into the codebase. CISA refers to SBOMs generated during this phase as Source SBOMs. The SBOMs generated here capture the actual implemented components and additional information that is specific to the developer who is doing the work.

The additional metadata that is collected for a Source SBOM includes:

  • Dependency Mapping: Documenting direct and transitive dependencies.
  • Identity Metadata: Adding contributor and commit information.
  • Developer Environment: Captures information about the development environment.

Unlike Design SBOMs, which are typically created manually, these SBOMs can be generated programmatically with a software composition analysis (SCA) tool—like Syft. They are usually packaged as command line interfaces (CLIs), since this is the preferred interface for developers.

If you’re looking for an SBOM generation tool (SCA embedded), we have a comprehensive list of options to make this decision easier.

Pros and Cons

Pros:

  • Accurate and Timely Component Inventory: Reflects the actual components used in the codebase and tracks changes as the codebase is actively being developed.
  • Shift-Left Vulnerability Detection: Identifies vulnerabilities as components are integrated but requires commit level automation and feedback mechanisms to be effective.
  • Facilitates Collaboration and Visibility: Keeps all stakeholders informed about divergence from the original plan and provokes conversations as needed. This also depends on automation to record changes during development and on notification systems to broadcast the updates.

Example: A developer adds a new logging library to the project like an outdated version of Log4j. The SBOM, paired with a vulnerability scanner, immediately flags the Log4Shell vulnerability, prompting the engineer to update to a patched version.

Cons:

  • Noise from Developer Toolchains: Developer environments are often bespoke, which creates noise for security teams by recording development-only dependencies.
  • Potential Overhead: Continuous updates to the SBOM can be resource-intensive when done manually; the only resource efficient method is by using an SBOM generation tool that automates the process.
  • Possibility of Missing Early Risks: Issues not identified during planning may surface here, requiring code changes.

Use-Cases

  • Faster Root Cause Analysis: During service incident retrospectives, questions arise about where, when, and by whom a specific component was introduced into an application. Source SBOMs are the programmatic record that can provide answers and reduce manual root cause analysis.
  • Real-Time Security Alerts: Immediate notification of vulnerabilities upon adding new components, decreasing time to remediation and keeping security teams informed.
  • Automated Compliance Checks: Ensuring added components comply with security or license policies to manage compliance risk.
  • Effortless Collaboration: Stakeholders can subscribe to a live feed of changes and immediately know when implementation diverges from the plan.

Pro Tip

Some SBOM generators allow developers to specify development dependencies that should be ignored, similar to a .gitignore file. This can help cut down on the noise created by unique developer setups.
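If your generator lacks that feature, the same effect can be approximated by post-processing the Source SBOM. The sketch below assumes a simple "scope" field on each package record; this field name is illustrative, and real SBOM formats express the idea differently (e.g., CycloneDX component scope).

```python
# Sketch: filter development-only noise out of a Source SBOM's package list.
# The "scope" field and its values are hypothetical, for illustration.
DEV_SCOPES = {"dev", "test", "build"}

def strip_dev_dependencies(packages):
    """Keep only packages that ship to production (default scope: runtime)."""
    return [p for p in packages if p.get("scope", "runtime") not in DEV_SCOPES]

packages = [
    {"name": "requests", "scope": "runtime"},
    {"name": "pytest", "scope": "test"},
    {"name": "black", "scope": "dev"},
]
print([p["name"] for p in strip_dev_dependencies(packages)])
```

Filtering at this layer keeps the raw SBOM intact for audits while giving security teams a quieter view to triage.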

Build & Test => Build SBOM

When a developer pushes a commit to the CI/CD build system, an automated process kicks off that converts the application source code into an artifact that can then be deployed. CISA refers to SBOMs generated during this phase as Build SBOMs. These SBOMs capture both source code dependencies and build tooling dependencies.

The additional metadata that is collected includes:

  • Build Dependencies: Build tooling such as the language compilers, testing frameworks, package managers, etc.
  • Binary Analysis Data: Metadata for compiled binaries that don’t utilize traditional container formats.
  • Configuration Parameters: Details on build configuration files that might impact security or compliance.

Pros and Cons

Pros:

  • Build Infrastructure Analysis: Captures build-specific components which may have their own vulnerability or compliance issues.
  • Reuses Existing Automation Tooling: Enables programmatic security and compliance scanning as well as policy enforcement without introducing any additional build tooling.
  • Integrates with Developer Workflows: Engineers receive security, compliance, and other feedback without needing to reference a new tool.
  • Reproducibility: Facilitates reproducing builds for debugging and auditing.

Cons:

  • SBOM Sprawl: Build processes run frequently; if each run generates an SBOM, you will find yourself with a glut of files to manage.
  • Delayed Detection: Vulnerabilities or non-compliance issues found at this stage may require rework.

Use-Cases

  • SBOM Drift Detection: By comparing SBOMs from two or more stages, unexpected dependency injection can be detected. This might take the form of a benign, leaky build pipeline that requires manual workarounds or a malicious actor attempting to covertly introduce malware. Either way this provides actionable and valuable information.
  • Policy Enforcement: Enables the creation of build breaking gates to enforce security or compliance. For high-stakes operating environments like defense, financial services or critical infrastructure, automating security and compliance at the expense of some developer friction is a net-positive strategy.
  • Automated Compliance Artifacts: Compliance requires proof in the form of reports and artifacts. Re-utilizing existing build tooling automation to automate this task significantly reduces the manual work required by security teams to meet compliance requirements.
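The SBOM drift detection use-case above amounts to a set difference between two stages. This is a minimal sketch under assumed data shapes: the (name, version) fields are illustrative, and a real comparison would also account for metadata like package type and origin.

```python
# Minimal drift check between two SBOM stages (e.g., Source SBOM vs. Build
# SBOM): report components that appear later but were never declared earlier.
def sbom_drift(earlier, later):
    """Return (name, version) pairs present in the later SBOM but not the earlier one."""
    before = {(p["name"], p["version"]) for p in earlier}
    after = {(p["name"], p["version"]) for p in later}
    return sorted(after - before)

source_pkgs = [{"name": "flask", "version": "3.0.0"}]
build_pkgs = [
    {"name": "flask", "version": "3.0.0"},
    {"name": "leftpad", "version": "0.1.9"},  # unexpected injection during build
]
print(sbom_drift(source_pkgs, build_pkgs))
```

An empty result means the stages agree; any hit is worth investigating, whether it turns out to be a leaky pipeline workaround or something malicious.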

Example: A security scan during testing uses the Build SBOM to identify a critical vulnerability and alerts the responsible engineer. The remediation process is initiated and a patch is applied before deployment.

Pro Tip

If your organization is just beginning its SBOM journey, this is the recommended phase of the DevSecOps lifecycle in which to implement SBOMs first. The two primary cons of this phase are the easiest to mitigate. For SBOM sprawl, you can procure a turnkey SBOM management solution like Anchore SBOM.

As for the delayed feedback created by waiting until the build phase: if your team follows DevOps best practices and breaks features into smaller components that fit into 2-week sprints, this tight scoping will limit the impact of any significant vulnerabilities or non-compliance discovered.

Intermission

So far we’ve covered the first half of the DevSecOps lifecycle. Next week we will publish the second part of this blog series where we’ll cover the remainder of the pipeline. Watch our socials to be sure you get notified when part 2 is published.

If you’re looking for some additional reading in the meantime, check out our container security white paper below.

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Choosing the Right SBOM Generator: A Framework for Success

Choosing the right SBOM (software bill of materials) generator is trickier than it looks at first glance. SBOMs are the foundation for a number of different uses, ranging from software supply chain security to continuous regulatory compliance. Due to this cornerstone nature, the SBOM generator that you choose will either pave the way for achieving your organization’s goals or become a roadblock that delays critical initiatives.

But how do you navigate the crowded market of SBOM generation tools to find the one that aligns with your organization’s unique needs? It’s not merely about selecting a tool with the most features or the nicest CLI. It’s about identifying a solution that maps directly to your desired outcomes and use-cases, whether that’s rapid incident response, proactive vulnerability management, or compliance reporting.

We at Anchore have spent years enabling organizations to achieve their SBOM-related outcomes with the least amount of frustration and setbacks. We’ve compiled our learnings on choosing the right SBOM generation tool into a framework to help the wider community make decisions that set them up for success.

Below is a quick TL;DR of the high-level evaluation criteria that we cover in this blog post:

  • Understanding Your Use-Cases: Aligning the tool with your specific goals.
  • Ecosystem Compatibility: Ensuring support for your programming languages, operating systems, and build artifacts.
  • Data Accuracy: Evaluating the tool’s ability to provide comprehensive and precise SBOMs.
  • DevSecOps Integration: Assessing how well the tool fits into your existing DevSecOps tooling.
  • Proprietary vs. Open Source: Weighing the long-term implications of your choice.

By focusing on these key areas, you’ll be better equipped to select an SBOM generator that not only meets your current requirements but also positions your organization for future success.


Learn about the role that SBOMs play in the security, including open source software (OSS) security, of your organization in this white paper.

Know your use-cases

When choosing from the array of SBOM generation tools in the market, it is important to frame your decision with the outcome(s) that you are trying to achieve. If your goal is to improve the response time/mean time to remediation when the next Log4j-style incident occurs—and be sure that there will be a next time—an SBOM tool that excels at correctly identifying open source licenses in a code base won’t be the best solution for your use-case (even if you prefer its CLI ;-D).

What to Do:

  • Identify and prioritize the outcomes that your organization is attempting to achieve
  • Map the outcomes to the relevant SBOM use-cases
  • Review each SBOM generation tool to determine whether they are best suited to your use-cases

It can be tempting to prioritize an SBOM generator that best suits our own preferences and workflows; we are the ones who will be using the tool regularly—shouldn’t we prioritize what makes our lives easier? But if we prioritize our needs above the goal of the initiative, we might end up in a position where our choice of tools impedes our ability to realize the desired outcome. Using the correct framing—in this case, focusing on the use-cases—keeps us focused on delivering the best possible outcome.

SBOMs can be utilized for numerous purposes: security incident response, open source license compliance, proactive vulnerability management, compliance reporting, and software supply chain risk management. We won’t address every use-case in this blog post; a more comprehensive treatment of the potential SBOM use-cases can be found on our website.

Example SBOM Use-Cases:

  • Security incident response: an inventory of all applications and their dependencies that can be queried quickly and easily to identify whether a newly announced zero-day impacts the organization.
  • Proactive vulnerability management: all software and dependencies are scanned for vulnerabilities as part of the DevSecOps lifecycle and remediated based on organizational priority.
  • Regulatory compliance reporting: compliance artifacts and reports are automatically generated by the DevSecOps pipeline to enable continuous compliance and prevent manual compliance work.
  • Software supply chain risk management: an inventory of software components with identified vulnerabilities used to inform organizational decision making when deciding between remediating risk versus building new features.
  • Open source license compliance: an inventory of all software components and the associated OSS license to measure potential legal exposure.

Pro tip: While you will inevitably leave many SBOM use-cases out of scope for your current project, keeping secondary use-cases in the back of your mind while making a decision on the right SBOM tool will set you up for success when those secondary use-cases eventually become a priority in the future.

Does the SBOM generator support your organization’s ecosystem of programming languages, etc?

SBOM generators aren’t just tools to ingest data and re-format it into a standardized format. They are typically paired with a software composition analysis (SCA) tool that scans an application/software artifact for metadata that will populate the final SBOM.

Support for the complete array of programming languages, build artifacts and operating system ecosystems is essentially an impossible task. This means that support varies significantly depending on the SBOM generator that you select. An SBOM generator’s ability to help you reach your organizational goals is directly related to its support for your organization’s software tooling preferences. This will likely be one of the most important qualifications when choosing between different options and will rule out many that don’t meet the needs of your organization.

Considerations:

  • Programming Languages: Does the tool support all languages used by your team?
  • Operating Systems: Can it scan the different OS environments your applications run on top of?
  • Build Artifacts: Does the tool scan containers? Binaries? Source code repositories? 
  • Frameworks and Libraries: Does it recognize the frameworks and libraries your applications depend on?

Data accuracy

This is one of the most important criteria when evaluating different SBOM tools. An SBOM generator may claim support for a particular programming language but after testing the scanner you may discover that it returns an SBOM with only direct dependencies—honestly not much better than a package.json or go.mod file that your build process spits out.

Two different tools might both generate a valid SPDX SBOM document when run against the same source artifact, but the content of those documents can vary greatly. This variation depends on what the tool can inspect, understand, and translate. The ability to fully scan an application for both direct and transitive dependencies, and to navigate non-idiomatic patterns in how software is structured, ends up being the true differentiator among the field of SBOM generation contenders.

Imagine using two SBOM tools on a Debian package. One tool recognizes Debian packages and includes detailed information about them in the SBOM. The other can’t fully parse the Debian .deb format and omits critical information. Both produce an SBOM, but only one provides the data you need to power use-case-based outcomes like security incident response or proactive vulnerability management.

Let’s make this example more concrete by simulating this difference with Syft, Anchore’s open source SBOM generation tool:

$ syft -q -o spdx-json nginx:latest > nginx_a.spdx.json
$ grype -q nginx_a.spdx.json | grep Critical
libaom3             3.6.0-1+deb12u1          (won't fix)       deb   CVE-2023-6879     Critical    
libssl3             3.0.14-1~deb12u2         (won't fix)       deb   CVE-2024-5535     Critical    
openssl             3.0.14-1~deb12u2         (won't fix)       deb   CVE-2024-5535     Critical    
zlib1g              1:1.2.13.dfsg-1          (won't fix)       deb   CVE-2023-45853    Critical

In this example, we first generate an SBOM using Syft then run it through Grype—our vulnerability scanning tool. Syft + Grype uncover 4 critical vulnerabilities.

Now let’s try the same thing but “simulate” an SBOM generator that can’t fully parse the structure of the software artifact in question:

$ syft -q -o spdx-json --select-catalogers "-dpkg-db-cataloger,-binary-classifier-cataloger" nginx:latest > nginx_b.spdx.json 
$ grype -q nginx_b.spdx.json | grep Critical
$

In this case, none of the critical vulnerabilities found by the first tool are returned.

This highlights the importance of careful evaluation of the SBOM generator that you decide on. It could mean the difference between effective vulnerability risk management and a security incident.
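To make that evaluation concrete in data terms, the coverage gap between two generators can be quantified by diffing the package inventories in their SBOMs. Below is a minimal Python sketch using heavily simplified SPDX-JSON-style documents (the shapes are illustrative, not full SPDX):

```python
def package_set(spdx_doc):
    """Extract (name, version) pairs from an SPDX-JSON-style 'packages' array."""
    return {(p["name"], p.get("versionInfo", "")) for p in spdx_doc.get("packages", [])}

# Illustrative, heavily simplified SBOMs of the same image from two generators
sbom_a = {"packages": [
    {"name": "openssl", "versionInfo": "3.0.14"},
    {"name": "zlib1g", "versionInfo": "1.2.13"},
    {"name": "libssl3", "versionInfo": "3.0.14"},
]}
sbom_b = {"packages": [
    {"name": "openssl", "versionInfo": "3.0.14"},
]}

# Every package the weaker generator failed to catalog is invisible to
# any downstream vulnerability scan
missing = package_set(sbom_a) - package_set(sbom_b)
print(f"Generator B missed {len(missing)} packages: {sorted(missing)}")
```

Run against real SPDX documents (e.g., the nginx_a/nginx_b files generated above), the same diff shows exactly which components fell through the weaker generator’s parsing.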

Can the SBOM tool integrate into your DevSecOps pipeline?

If the SBOM generator is packaged as a self-contained binary with a command line interface (CLI) then it should tick this box. CI/CD build tools are most amenable to this deployment model. If the SBOM generation tool in question isn’t a CLI then it should at least run as a server with an API that can be called as part of the build process.

Integrating with an organization’s DevSecOps pipeline is key to enabling a scalable SBOM generation process. By implementing SBOM creation directly into the existing build tooling, organizations can leverage existing automation to ensure the consistency and efficiency necessary for achieving the desired outcomes.

Proprietary vs. open source SBOM generator?

Using an open source SBOM tool is considered an industry best practice because it guards against the risks associated with vendor lock-in. As a bonus, the ecosystem for open source SBOM generation tooling is very healthy. OSS will always have an advantage over proprietary tooling in ecosystem coverage and data quality because it reaches more users, creating a feedback loop that closes gaps in coverage and quality.

Finally, even if your organization decides to utilize a software supply chain security product that has its own proprietary SBOM generator, it is still better to create your SBOMs with an open source SBOM generator, export to a standardized format (e.g., SPDX or CycloneDX), then have your software supply chain security platform ingest these non-proprietary data structures. All platforms will be able to ingest SBOMs in one or both of these standards-based formats.

Wrap-Up

In a landscape where the next security/compliance/legal challenge is always just around the corner, equipping your team with the right SBOM generator empowers you to act swiftly and confidently. It’s an investment not just in a tool, but in the resilience and security of your entire software supply chain. By making a thoughtful, informed choice now, you’re laying the groundwork for a more secure and efficient future.

Introducing Anchore Data Service and Anchore Enterprise 5.10

We are thrilled to announce the release of Anchore Enterprise 5.10, our tenth release of 2024. This update brings two major enhancements that will elevate your experience and bolster your security posture: the new Anchore Data Service (ADS) and expanded AnchoreCTL ecosystem coverage. 

With ADS, we’ve built a fast and reliable solution that reduces time spent by DevSecOps teams debugging intermittent network issues from flaky services that are vital to software supply chain security.

On top of that, we have buffed our software composition analysis (SCA) scanner’s ecosystem coverage (e.g., C++, Swift, Elixir, R, etc.) for all Anchore customers. To do this we embedded Syft, our popular open source SCA/SBOM (software bill of materials) generator, directly into Anchore Enterprise.

It’s been a fall of big releases at Anchore and we’re excited to continue delivering value to our loyal customers. Read on to get all of the gory details >>

Announcing the Anchore Data Service

Before, customers ran the Anchore Feed Service within their environment to pull data feeds into their Anchore Enterprise deployment. To get an idea of what this looked like, see the architecture diagram of Anchore Enterprise prior to version 5.10:

Originally we did this to give customers more control over their environment. Unfortunately, this wasn’t without its issues. The data feeds are provided by the community, which means the upstream services were designed to be accessible yet cost-efficient. As a result, they were unreliable and frequently had accessibility issues.

We only have to stretch our memory back to the spring to recall an example that made national headlines. The National Vulnerability Database (NVD) ran into funding issues. This impacted both the publication of new vulnerability records AND the availability of their API. This created significant friction for Anchore customers—not to mention the entirety of the software industry.

Now, Anchore is running its own enterprise-grade service, named Anchore Data Service (ADS), as a replacement for the former feed service. ADS aggregates all of the community data feeds, enriches the data (with proprietary threat data) and packages it for customers, all with the service availability guarantee expected of an enterprise service.

The new architecture with ADS as the intermediary is illustrated below:

As a bonus for our customers running air-gapped deployments of Anchore Enterprise, there is no longer any need to run a second deployment of Anchore Enterprise in a DMZ to pull down the data feeds. Instead, a single file is pulled from ADS and transferred to a USB thumb drive. From there, a single CLI command updates your air-gapped deployment of Anchore Enterprise.

Increased AnchoreCTL Ecosystem Coverage

We have increased the number of supported ecosystems (e.g., C++, Swift, Elixir, R, etc.) in Anchore Enterprise. This improves coverage and increases the likelihood that all of your organization’s applications can be scanned and protected by Anchore Enterprise.

More importantly, we have completely re-architected the process for how Anchore Enterprise supports new ecosystems. By integrating Syft—Anchore’s open source SBOM generation tool—directly into AnchoreCTL, Anchore’s customers will now get access to new ecosystem support as they are merged into Syft’s codebase.

Before, Syft and AnchoreCTL were somewhat separate, which caused AnchoreCTL’s support for new ecosystems to lag Syft’s. Now, they are fully integrated. This enables all of Anchore’s enterprise and public sector customers to take full advantage of the open source community’s development velocity.

Full list of supported ecosystems

Below is a complete list of all supported ecosystems by both Syft and AnchoreCTL (as of Anchore Enterprise 5.10; see our docs for most current info):

  • Alpine (apk)
  • C (conan)
  • C++ (conan)
  • Dart (pubs)
  • Debian (dpkg)
  • Dotnet (deps.json)
  • Objective-C (cocoapods)
  • Elixir (mix)
  • Erlang (rebar3)
  • Go (go.mod, Go binaries)
  • Haskell (cabal, stack)
  • Java (jar, ear, war, par, sar, nar, native-image)
  • JavaScript (npm, yarn)
  • Jenkins Plugins (jpi, hpi)
  • Linux kernel archives (vmlinuz)
  • Linux kernel modules (ko)
  • Nix (outputs in /nix/store)
  • PHP (composer)
  • Python (wheel, egg, poetry, requirements.txt)
  • Red Hat (rpm)
  • Ruby (gem)
  • Rust (cargo.lock)
  • Swift (cocoapods, swift-package-manager)
  • WordPress plugins

After you update to Anchore Enterprise 5.10, the SBOM inventory will display all of the new ecosystems. Any SBOMs that have been generated for a particular ecosystem will show up at the top. The screenshot below gives you an idea of what this will look like:

Wrap-Up

Anchore Enterprise 5.10 marks a new chapter in providing reliable, enterprise-ready security tooling for modern software development. The introduction of the Anchore Data Service ensures that you have consistent and dependable access to critical vulnerability and exploit data, while the expanded ecosystem support means that no part of your tech stack is left unscrutinized for latent risk. Upgrade to the latest version and experience these new features for yourself.

To update and leverage these new features check out our docs, reach out to your Customer Success Engineer or contact our support team. Your feedback is invaluable to us, and we look forward to continuing to support your organization’s security needs. We are offering all of our product updates as a new quarterly product update webinar series. Watch the fall webinar update in the player below to get all of the juicy tidbits from our product team.

Navigating Open Source Software Compliance in Regulated Industries

Open source software (OSS) brings a wealth of benefits: speed, innovation, cost savings. But when serving customers in highly regulated industries like defense, energy, or finance, a new complication enters the picture—compliance.

Imagine your DevOps-fluent engineering team has been leveraging OSS to accelerate product delivery, and suddenly, a major customer hits you with a security compliance questionnaire. What now? 

Regulatory compliance isn’t just about managing the risks of OSS for your business anymore; it’s about providing concrete evidence that you meet standards like FedRAMP and the Secure Software Development Framework (SSDF).

The tricky part is that the OSS “suppliers” making up 70-90% of your software supply chain aren’t traditional vendors—they don’t have the same obligations or accountability, and they’re not necessarily aligned with your compliance needs. 

So, who bears the responsibility? You do.

The OSS your engineering team consumes is your resource and your responsibility. This means you’re not only tasked with managing the security risks of using OSS but also with proving that both your applications and your OSS supply chain meet compliance standards. 

In this post, we’ll explore why you’re ultimately responsible for the OSS you consume and outline practical steps to help you use OSS while staying compliant.

Learn about CISA’s SSDF attestation form and how to meet compliance.

What does it mean to use open source software in a regulated environment?

Highly regulated environments add a new wrinkle to the OSS security narrative. The OSS developers who author the software dependencies that make up the vast majority of modern software supply chains aren’t vendors in the traditional sense. They are more of a volunteer workforce that allows you to re-use their work, but it is a take-it-or-leave-it agreement. You have no recourse if it doesn’t work as expected or, worse, has vulnerabilities in it.

So, how do you meet compliance standards when your software supply chain is built on top of a foundation of OSS?

Who is the vendor? You are!

Whether you have internalized this or not, the open source software that your developers consume is your resource and thus your responsibility.

This means that you shoulder the burden of not only managing the security risk of consuming OSS but also proving that both your applications and your OSS supply chain meet compliance standards.

Open source software is a natural resource

Before we jump into how to accomplish the task set forth in the previous section, let’s take some time to understand why you are the vendor when it comes to open source software.

The common idea is that OSS is produced by a 3rd-party that isn’t part of your organization, so they are the software supplier. Shouldn’t they be the ones required to secure their code? They control and maintain what goes in, right? How are they not responsible?

To answer that question, let’s think about OSS as a natural resource that is shared by the public at large, for instance the public water supply.

This shouldn’t be too much of a stretch. We already use terms like upstream and downstream to think about the relationship between software dependencies and the global software supply chain.

Using this mental model, it becomes easier to understand that a public good isn’t a supplier. You can’t ask a river or a lake for an audit report certifying that it is contaminant-free and safe to drink.

Instead the organization that processes and provides the water to the community is responsible for testing the water and guaranteeing its safety. In this metaphor, your company is the one processing the water and selling it as pristine bottled water. 

How do you pass the buck to your “supplier”? You can’t. That’s the point.

This probably has you asking yourself: if I am responsible for my own OSS supply chain, then how do I meet a compliance standard for something I don’t control? Keep reading and you’ll find out.

How do I use OSS and stay compliant?

While compliance standards are often thought of as rigid, the reality is much more nuanced. Just because your organization doesn’t own/control the open source projects that you consume doesn’t mean that you can’t use OSS and meet compliance requirements.

There are a few different steps that you need to take in order to build a “reasonably secure” OSS supply chain that will pass a compliance audit. We’ll walk you through the steps below:

Step 1 — Know what you have (i.e., an SBOM inventory)

The foundation of the global software supply chain is the SBOM (software bill of materials) standard. Each of the security and compliance functions outlined in the steps below uses or manipulates an SBOM.

SBOMs are the foundational component of the global software supply chain because they record the ingredients that were used to produce the application an end-user will consume. If you don’t have a good grasp of the ingredients of your applications, there isn’t much hope of providing security or compliance guarantees to your downstream consumers.

The best way to create observability into your software supply chain is to generate an SBOM for every single application in your DevSecOps build pipeline—at each stage of the pipeline!

Step 2 — Maintain a historical record of application source code

To meet compliance standards like FedRAMP and SSDF, you need to be able to maintain a historical record of the source code of your applications, including: 

  • Where it comes from, 
  • Who created it, and 
  • Any modifications made to it over time.

SBOMs were designed to meet these requirements. They act as a record of how applications were built and when/where OSS dependencies were introduced. They also double as compliance artifacts that prove you are compliant with regulatory standards.
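As an illustration of the kind of question a historical record answers, the sketch below finds when a dependency first entered an application. The snapshot dates, package names, and data shapes are all hypothetical:

```python
def first_seen(history, package):
    """Return the date of the first SBOM snapshot that contains `package`.
    `history` is a list of (date, set_of_package_names), oldest first."""
    for date, packages in history:
        if package in packages:
            return date
    return None  # never appeared in any recorded snapshot

# Hypothetical SBOM snapshots captured at each release
history = [
    ("2024-01-10", {"flask", "requests"}),
    ("2024-03-02", {"flask", "requests", "lodash"}),
    ("2024-06-15", {"flask", "requests", "lodash", "log4j-core"}),
]

print(first_seen(history, "log4j-core"))  # date the dependency was introduced
print(first_seen(history, "left-pad"))    # never present
```

With per-build SBOMs stored centrally, the same lookup tells an auditor exactly when and where any third-party component entered your applications.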

Governments aren’t content with self-attestation (at least not for long); they need hard evidence to verify that you are trustworthy. Even though SSDF is currently self-attestation only, the federal government is known for rolling out compliance frameworks in stages: first advising on best practices, then requiring self-attestation, and finally mandating external validation via a certification process.

The Cybersecurity Maturity Model Certification (CMMC) is a good example of this dynamic process. It recently transitioned from self-attestation to external validation with the introduction of the 2.0 release of the framework.

Step 3 — Manage your OSS vulnerabilities

Not only do you need to keep a record of applications as they evolve over time, you have to track the known vulnerabilities of your OSS dependencies to achieve compliance. Just as SBOMs prove provenance, vulnerability scans are proof that your application and its dependencies aren’t vulnerable. These scans are a crucial piece of the evidence that you will need to provide to your compliance officer as you go through the certification process. 

Remember: the buck stops with you! If the OSS that your application consumes doesn’t supply an SBOM and vulnerability scan (which is the case for essentially all OSS projects), then you are responsible for creating them. There is no vendor to pass the blame to for proving that your supply chain is reasonably secure and thus compliant.

Step 4 — Continuous compliance of open source software supply chain

It is important to recognize that modern compliance standards are no longer sprints but marathons. Not only do you have to prove that your application(s) are compliant at the time of the audit, but you also have to demonstrate that they remain compliant continuously in order to maintain your certification.

This can be challenging to scale but it is made easier by integrating SBOM generation, vulnerability scanning and policy checks directly into the DevSecOps pipeline. This is the approach that modern, SBOM-powered SCAs advocate for.

By embedding compliance policy-as-code into your DevSecOps pipeline as policy gates, compliance can be maintained over time. Developers are alerted when their code doesn’t meet a compliance standard and are directed to take corrective action. These compliance checks can also be used to automatically generate the compliance artifacts needed.

You already have an automated DevSecOps pipeline that is producing and delivering applications with minimal human intervention, so why not take advantage of this existing tooling to automate open source software compliance in the same way that security was integrated directly into DevOps?
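At its core, a policy gate is a simple evaluation: compare each finding against a policy threshold and fail the pipeline on any violation. The sketch below uses illustrative data shapes, not any real policy engine’s schema:

```python
def evaluate_policy(findings, policy):
    """Return the list of policy violations for a set of vulnerability findings.
    `findings` and `policy` use illustrative shapes, not any real tool's schema."""
    severity_rank = {"Low": 1, "Medium": 2, "High": 3, "Critical": 4}
    threshold = severity_rank[policy["max_allowed_severity"]]
    violations = []
    for f in findings:
        if severity_rank[f["severity"]] > threshold:
            violations.append(f"{f['package']}: {f['cve']} ({f['severity']})")
    return violations

# Illustrative scan results and policy
findings = [
    {"package": "openssl", "cve": "CVE-2024-5535", "severity": "Critical"},
    {"package": "zlib1g", "cve": "CVE-2023-45853", "severity": "Medium"},
]
policy = {"max_allowed_severity": "High"}  # gate fails on anything above High

violations = evaluate_policy(findings, policy)
if violations:
    print("Policy gate FAILED:")
    for v in violations:
        print(" -", v)
```

In a real pipeline, the same evaluation runs on every build, and the pass/fail record doubles as an automatically generated compliance artifact.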

Real-world Examples

To help bring these concepts to life, we’ve outlined some real-world examples of how open source software and compliance intersect:

Open source project has unfixed vulnerabilities

This is far and away the most common issue that comes up during compliance audits. One of your application’s OSS dependencies has a known vulnerability that has been sitting in the backlog for months or even years!

There are several reasons why an open source software developer might leave a known vulnerability unresolved:

  • They prioritize a feature over fixing a vulnerability
  • The vulnerability is from a third-party dependency they don’t control and can’t fix
  • They don’t like fixing vulnerabilities and choose to ignore it
  • They reviewed the vulnerability and decided it’s not likely to be exploited, so it’s not worth their time
  • They’re planning a codebase refactor that will address the vulnerability in the future

These are all rational reasons for vulnerabilities to persist in a codebase. Remember, OSS projects are owned and maintained by 3rd-party developers who control the repository; they make no guarantees about its quality. They are not vendors.

You, on the other hand, are a vendor and must meet compliance requirements. The responsibility falls on you. An OSS vulnerability management program is how you meet your compliance requirements while enjoying the benefits of OSS.

Need to fill out a supplier questionnaire

Imagine you’re a cloud service provider or software vendor. Your sales team is trying to close a deal with a significant customer. As the contract nears signing, the customer’s legal team requests a security questionnaire. They’re in the business of protecting their organization from financial risk stemming from their supply chain, and your company is about to become part of that supply chain.

These forms are usually from lawyers, very formal, and not focused on technical attacks. They just want to know what you’re using. The quick answer? “Here’s our SBOM.” 

Compliance comes in the form of public standards like FedRAMP, SSDF, NIST, etc., and these less formal security questionnaires. Either way, being unable to provide a full accounting of the risks in your software supply chain can be a speed bump to your organization’s revenue growth and success.

Integrating SBOM scanning, generation, and management deeply into your DevSecOps pipeline is key to accelerating the sales process and your organization’s overall success.

Prove provenance

CISA’s SSDF Attestation form requires that enterprises selling software to the federal government can produce a historical record of their applications. Quoting directly: “The software producer [must] maintain provenance for internal code and third-party components incorporated into the software to the greatest extent feasible.”

If you want access to the revenue opportunities the U.S. federal government offers, SSDF attestation is the needle you have to thread. Meeting this requirement without hiring an army of compliance engineers to manually review your entire DevSecOps pipeline demands an automated OSS component observability and management system.

Often, we jump to cryptographic signatures, encryption keys, trust roots—this quickly becomes a mess. Really, just a hash of the files in a database (read: SBOM inventory) satisfies the requirement. Sometimes, simpler is better. 
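A minimal version of that file-hash inventory is only a few lines of Python, as sketched below (a toy illustration, not a complete provenance system):

```python
import hashlib
import os
import tempfile

def hash_tree(root):
    """Map each file under `root` to its SHA-256 digest -- a minimal provenance record."""
    record = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                record[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return record

# Demonstrate on a throwaway directory standing in for a source tree
with tempfile.TemporaryDirectory() as root:
    with open(os.path.join(root, "app.py"), "w") as f:
        f.write("print('hello')\n")
    record = hash_tree(root)
    print(record)
```

Stored alongside each build’s SBOM, a record like this lets you later prove exactly which files went into a release, with no key management required.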

Discover the “easy button” to SSDF Attestation and OSS supply chain compliance in our previous blog post.

Takeaways

OSS Is Not a Vendor—But You Are! The best way to have your OSS cake and eat it too (without the indigestion) is to:

  1. Know Your Ingredients: Maintain an SBOM inventory of your OSS supply chain.
  2. Maintain a Complete Historical Record: Keep track of your application’s source code and build process.
  3. Scan for Known Vulnerabilities: Regularly check your OSS dependencies.
  4. Continuous Compliance through Automation: Generate compliance records automatically to scale your compliance process.

There are numerous reasons to aim for open source software compliance, especially for your software supply chain:

  • Balance Gains Against Risks: Leverage OSS benefits while managing associated risks.
  • Reduce Financial Risk: Protect your organization’s existing revenue.
  • Increase Revenue Opportunities: Access new markets that mandate specific compliance standards.
  • Avoid Becoming a Cautionary Tale: Stay ahead of potential security incidents.

Regardless of your motivation for wanting to use OSS and use it responsibly (i.e., securely and compliantly), Anchore is here to help. Reach out to our team to learn more about how to build and manage a secure and compliant OSS supply chain.

Learn the container security best practices to reduce the risk of software supply chain attacks.

How to build an OSS risk management program

In previous blog posts we have covered the risks of open source software (OSS) and security best practices to manage that risk. From there we zoomed in on the benefits of tightly coupling two of those best practices: SBOMs and vulnerability scanning.

Now, we’ll dig deeper into the practical considerations of integrating this paired solution into a DevSecOps pipeline. By examining the design and implementation of SBOMs and vulnerability scanning, we’ll illuminate the path to creating a holistic open source software (OSS) risk management program.

Learn about the role that SBOMs play in the security of your organization in this white paper.

How do I integrate SBOM management and vulnerability scanning into my development process?

Ideally, you want to generate an SBOM at each stage of the software development process (see image below). By generating an SBOM and scanning for vulnerabilities at each stage, you unlock a number of novel use-cases and benefits that we covered previously.

DevSecOps lifecycle diagram with all stages to integrate SBOM generation and vulnerability scanning.

Let’s break down how to integrate SBOM generation and vulnerability scanning into each stage of the development pipeline:

Source (PLAN & CODE)

The easiest way to integrate SBOM generation and vulnerability scanning into the design and coding phases is to provide CLI (command-line interface) tools to your developers. Engineers are already used to these tools—and have a preference for them!

If you’re going the open source route, we recommend both Syft (SBOM generation) and Grype (vulnerability scanner) as easy options to get started. If you’re interested in an integrated enterprise tool then you’ll want to look at AnchoreCTL.

Developers can generate SBOMs and run vulnerability scans right from the workstation. By doing this at design or commit time developers can shift security left and know immediately about security implications of their design decisions.

If existing vulnerabilities are found, developers can immediately pivot to OSS dependencies that are clean or start a conversation with their security team to understand if their preferred framework will be a deal breaker. Either way, security risk is addressed early before any design decisions are made that will be difficult to roll back.

Build (BUILD + TEST)

The ideal place to integrate SBOM generation and vulnerability scanning during the build and test phases is directly in the organization’s continuous integration (CI) pipeline.

The same self-contained CLI tools used during the source stage are integrated as additional steps into CI scripts/runbooks. When a developer pushes a commit that triggers the build process, the new steps are executed and both an SBOM and vulnerability scan are created as outputs. 

Check out our docs site to see how AnchoreCTL (running in distributed mode) makes this integration a breeze.

If you’re having trouble convincing your developers to jump on the SBOM train, we recommend framing security scans as just another unit test that is part of their testing suite.

Running these steps in the CI pipeline delays feedback a little compared to checking incremental commits as an application is being coded, but it is still light years better than waiting until a release is code complete.

If you are unable to enforce vulnerability scanning of OSS dependencies by your engineering team, a CI-based strategy can be a good happy medium. It is much easier to ensure every build runs exactly the same each time than it is to do the same for developers.
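In practice, a CI gate is usually a short script that parses the scanner’s JSON report and fails the job above a severity threshold. The sketch below assumes a Grype-style report with a top-level `matches` array; the sample data is illustrative, and in a real pipeline the report would be loaded from the scanner’s output file:

```python
def critical_count(report):
    """Count Critical-severity findings in a Grype-style JSON report (simplified shape)."""
    return sum(
        1 for match in report.get("matches", [])
        if match["vulnerability"]["severity"] == "Critical"
    )

# Illustrative report; in CI this would be parsed from the scanner's JSON output
report = {"matches": [
    {"vulnerability": {"id": "CVE-2024-5535", "severity": "Critical"}},
    {"vulnerability": {"id": "CVE-2023-45853", "severity": "Medium"}},
]}

n = critical_count(report)
print(f"{n} critical finding(s)")
# In a CI script, exiting non-zero fails the build:
# raise SystemExit(1 if n else 0)
```

Because the gate is just another pipeline step, it runs identically on every build, which is exactly the consistency advantage of the CI-based strategy.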

Release (aka Registry)

Another integration option is the container registry. This option will require you to either roll your own service that will regularly call the registry and scan new containers or use a service that does this for you.

See how Anchore Enterprise can automate this entire process by reviewing our integration docs.

Regardless of the path you choose, you will end up creating an IAM service account within your CI application to give your SBOM and vulnerability scanning solution access to your registries.

The release stage tends to be fairly far along in the development process and is not an ideal place for these functions to run; most of the benefits of a shift-left security posture are no longer available.

If this is an additional vulnerability scanning stage—rather than the sole stage—then this is a fantastic environment to integrate into. Software supply chain attacks that target registries are popular and can be prevented with a continuous scanning strategy.

Deploy

This is the traditional stage of the SDLC (software development lifecycle) to run vulnerability scans. SBOM generation can be added on as another step in an organization’s continuous deployment (CD) runbook.

Similar to the build stage, the best integration method is by calling CLI tools directly in the deploy script to generate the SBOM and then scan it for vulnerabilities.

Alternatively, if you utilize a container orchestrator like Kubernetes, you can also configure an admission controller to act as a deployment gate. The admission controller should be configured to call out to a standalone SBOM generator and vulnerability scanner.

If you’d like to understand how this is implemented with Anchore Enterprise, see our docs.

While this is the traditional location for running vulnerability scans, it is not recommended that this is the only stage to scan for vulnerabilities. Feedback about security issues would be arriving very late in the development process and prior design decisions may prevent vulnerabilities from being easily remediated. Don’t do this unless you have no other option.

Production (OPERATE + MONITOR)

This is not a traditional stage to run vulnerability scans since the goal is to prevent vulnerabilities from getting to production. Regardless, this is still an important environment to scan. Production containers have a tendency to drift from their pristine build states (DevSecOps pipelines are leaky!).

Also, new vulnerabilities are discovered all of the time and being able to prioritize remediation efforts to the most vulnerable applications (i.e., runtime containers) considerably reduces the risk of exploitation.

The recommended way to run SBOM generation and vulnerability scans in production is to run an independent container with the SBOM generator and vulnerability scanner installed. Most container orchestrators have SDKs that will allow you to integrate an SBOM generator and vulnerability scanner into the preferred administration CLI (e.g., kubectl for k8s clusters).
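Cross-referencing the runtime inventory against newly announced advisories is the core of that prioritization. A toy sketch with illustrative workload names and data shapes:

```python
def prioritize(runtime_inventory, new_advisories):
    """Rank running workloads by how many newly announced advisories affect them.
    Both inputs use illustrative shapes."""
    affected = {pkg for adv in new_advisories for pkg in adv["affected_packages"]}
    scored = []
    for workload, packages in runtime_inventory.items():
        hits = sorted(packages & affected)
        if hits:
            scored.append((len(hits), workload, hits))
    return sorted(scored, reverse=True)  # most-affected workloads first

# Illustrative runtime inventory (workload -> installed packages) and advisories
runtime_inventory = {
    "payments-api": {"openssl", "zlib1g", "flask"},
    "batch-worker": {"zlib1g"},
    "static-site": {"nginx"},
}
new_advisories = [
    {"cve": "CVE-2024-5535", "affected_packages": ["openssl"]},
    {"cve": "CVE-2023-45853", "affected_packages": ["zlib1g"]},
]

for count, workload, hits in prioritize(runtime_inventory, new_advisories):
    print(f"{workload}: {count} affected package(s) -> {hits}")
```

In a real deployment the inventory side would come from SBOMs of the running containers, so remediation effort lands on the workloads that are actually exposed.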

Read how Anchore Enterprise integrates these components together into a single container for both Kubernetes and Amazon ECS.

How do I manage all of the SBOMs and vulnerability scans?

Tightly coupling SBOM generation and vulnerability scanning creates a number of benefits, but it also creates one problem: a firehose of data. This unintended side effect is called SBOM sprawl, and it inevitably becomes a headache in and of itself.

The concise solution to this problem is to create a centralized SBOM repository. The brevity of this answer downplays the challenges that go along with building and managing a new data pipeline.

We’ll walk you through the high-level steps below but if you’re looking to understand the challenges and solutions of SBOM sprawl in more detail, we have a separate article that covers that.

Integrating SBOMs and vulnerability scanning for better OSS risk management

Assuming you’ve deployed an SBOM generator and vulnerability scanner into at least one of your development stages (as detailed above in “How do I integrate SBOM management and vulnerability scanning into my development process?”) and have an SBOM repository for storing your SBOMs and/or vulnerability scans, we can now walk through how to tie these systems together.

  1. Create a system to pull vulnerability feeds from reputable sources. If you’re not sure where to begin, read our post on how to get started.
  2. Regularly scan your catalog of SBOMs for vulnerabilities, storing the results alongside the SBOMs.
  3. Implement a query system to extract insights from your inventory of SBOMs.
  4. Create a dashboard to visualize your software supply chain’s health.
  5. Build alerting automation to ping your team as newly discovered vulnerabilities are announced.
  6. Maintain all of these DIY security applications and tools. 
  7. Continue to incrementally improve on these tools as new threats emerge, technologies evolve and development processes change.

If this feels like more work than you’re willing to take on, this is why security vendors exist. See the benefits of a managed SBOM-powered SCA below.

Prefer not to DIY? Evaluate Anchore Enterprise

Anchore Enterprise was designed from the ground up to be a reliable software supply chain security platform that requires the least amount of work to integrate and maintain. Included in the product are:

  • Out-of-the-box integrations for popular CI/CD software (e.g., GitHub, Jenkins, GitLab, etc.)
  • End-to-end SBOM management
  • Enterprise-grade vulnerability scanning with best-in-class false positive rates
  • Built-in SBOM drift detection
  • Remediation recommendations
  • Continuous visibility and monitoring of software supply chain health

Enterprises like NVIDIA, Cisco, and Infoblox have chosen Anchore Enterprise as their “easy button” to achieve open source software security with the least amount of lift.

If you’re interested to learn more about how to roll out a complete OSS security solution without the blood, sweat and tears that come with the DIY route—reach out to our team to get a demo or try Anchore Enterprise yourself with a 15-day free trial.

Learn the container security best practices to reduce the risk of software supply chain attacks.

SBOMs and Vulnerability Management: OSS Security in the DevSecOps Era

The rise of open-source software (OSS) development and DevOps practices has unleashed a paradigm shift in OSS security. As traditional approaches to OSS security have proven inadequate in the face of rapid development cycles, the Software Bill of Materials (SBOM) has re-made OSS vulnerability management in the era of DevSecOps.

This blog post zooms in on two best practices from our introductory article on OSS security and the software supply chain:

  1. Maintain a Software Dependency Inventory
  2. Implement Vulnerability Scanning

These two best practices are set apart from the rest because they are a natural pair. We’ll cover how this novel approach:

  • Scales OSS vulnerability management under the pressure of rapid software delivery
  • Is set apart from legacy SCAs
  • Unlocks new use-cases in software supply chain security, OSS risk management, etc.
  • Benefits software engineering orgs
  • Benefits an organization’s overall security posture
  • Has measurably impacted modern enterprises such as NVIDIA and Infoblox

Whether you’re a seasoned DevSecOps professional or just beginning to tackle the challenges of securing your software supply chain, this blog post offers insights into how SBOMs and vulnerability management can transform your approach to OSS security.

Learn about the role that SBOMs play in the security of your organization in this white paper.

Why do I need SBOMs for OSS vulnerability management?

The TL;DR is that SBOMs enable DevSecOps teams to scale OSS vulnerability management programs in modern, cloud native environments. Legacy security tools (i.e., SCA platforms) weren’t built to handle the pace of software delivery after a DevOps face lift.

Answering this question in full requires some historical context. Below is a speed-run of how we got to a place where SBOMs became the clear solution for vulnerability management after the rise of DevOps and OSS; the original longform is found on our blog.

If you’re not interested in a history lesson, skip to the next section, “What new use-cases are unlocked with a software dependency inventory?” to get straight to the impact of this evolution on software supply chain security (SSCS).

A short history on software composition analysis (SCA)

  • SCAs were originally designed to solve the problem of OSS licensing risk
  • Remember that Microsoft made a big fuss about the dangers of OSS at the turn of the millennium
  • Vulnerability scanning and management was tacked-on later
  • These legacy SCAs worked well enough until DevOps and OSS popularity hit critical mass

How the rise of OSS and DevOps principles broke legacy SCAs

  • DevOps and OSS movements hit traction in the 2010s
  • Software development and delivery transitioned from major updates with long development times to incremental updates with frequent releases
  • Modern engineering organizations are measured and optimized for delivery speed
  • Legacy SCAs were designed to scan a golden image once and take as much time as needed to do it, sometimes upwards of weeks
  • This wasn’t compatible with the DevOps promise and created friction between engineering and security
  • This meant not all software could be scanned, and much of it was scanned after release, increasing the risk of a security breach

SBOMs as the solution

  • SBOMs were introduced as a standardized data structure that comprised a complete list of all software dependencies (OSS or otherwise)
  • These lightweight files created a reliable way to scan software for vulnerabilities without the slow performance of scanning the entire application—soup to nuts
  • Modern SCAs utilize SBOMs as the foundational layer to power vulnerability scanning in DevSecOps pipelines
  • SBOMs + SCAs deliver on the performance of DevOps without compromising security

What is the difference between SBOMs and legacy SCA scanning?

SBOMs offer two functional innovations over the legacy model: 

  1. Deeper visibility into an organization’s application inventory; and
  2. A record of changes to applications over time.

The deeper visibility comes from the fact that modern SCA scanners identify software dependencies recursively and build a complete software dependency tree (both direct and transitive). The record of changes comes from the fact that the OSS ecosystem has begun to standardize the contents of SBOMs to allow interoperability between OSS consumers and producers.
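
The recursive walk behind that deeper visibility can be sketched in a few lines. The toy dependency graph below is an assumption for illustration; a real scanner derives it from manifests, lockfiles, and package metadata:

```python
# Toy dependency graph; package names are made up for illustration.
DEPENDENCY_GRAPH = {
    "my-app": ["web-framework", "logging-lib"],
    "web-framework": ["http-parser"],
    "logging-lib": [],
    "http-parser": [],
}

def resolve(package, seen=None):
    """Depth-first walk collecting direct AND transitive dependencies."""
    seen = set() if seen is None else seen
    for dep in DEPENDENCY_GRAPH.get(package, []):
        if dep not in seen:
            seen.add(dep)
            resolve(dep, seen)
    return seen

# A direct-only (legacy) scan would miss http-parser, a transitive dependency.
print(sorted(resolve("my-app")))  # ['http-parser', 'logging-lib', 'web-framework']
```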

Legacy SCAs typically only scan for direct software dependencies and don’t recursively scan for dependencies of dependencies. Also, legacy SCAs don’t generate standardized scans that can then be used to track changes over time.

What new use-cases are unlocked with an SBOM inventory?

The innovations brought by SBOMs (see above) have unlocked new use-cases that benefit both the software supply chain security niche and the greater DevSecOps world. See the list below:

OSS Dependency Drift Detection

Ideally, software dependencies are only introduced in source code, but the reality is that CI/CD pipelines are leaky, and both automated and one-off modifications are made at all stages of development. Plugging 100% of the leaks is a strategy with diminishing returns. Application drift detection is a scalable solution to this challenge.

SBOMs unlock drift detection by creating a point-in-time record of the composition of an application at each stage of the development process. This creates an auditable record of when software builds are modified, how they changed, and who changed them.
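
At its core, drift detection is a set difference between two point-in-time SBOMs of the same application. A minimal sketch, assuming a CycloneDX-like dict shape and made-up package data:

```python
def sbom_packages(sbom):
    """Flatten an SBOM (CycloneDX-like dict) into (name, version) pairs."""
    return {(c["name"], c["version"]) for c in sbom["components"]}

def detect_drift(build_sbom, deploy_sbom):
    """Diff two point-in-time SBOMs of the same application."""
    built, deployed = sbom_packages(build_sbom), sbom_packages(deploy_sbom)
    return {"added": deployed - built, "removed": built - deployed}

at_build = {"components": [{"name": "openssl", "version": "3.0.7"}]}
at_deploy = {"components": [
    {"name": "openssl", "version": "3.0.7"},
    {"name": "curl", "version": "8.4.0"},  # injected after the build stage
]}
print(detect_drift(at_build, at_deploy))
```

Anything in `added` or `removed` between stages is drift worth investigating.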

Software Supply Chain Attack Detection

Not all dependency injections are performed by benevolent 1st-party developers. Malicious threat actors who gain access to your organization’s DevSecOps pipeline or the pipeline of one of your OSS suppliers can inject malicious code into your applications.

An SBOM inventory creates the historical record that can identify anomalous behavior and catch these security breaches before organizational damage is done. This is a particularly important strategy for dealing with advanced persistent threats (APTs) that are experts at infiltration and stealth. For a real-world example, see our blog on the recent XZ supply chain attack.

OSS Licensing Risk Management

OSS licenses are at the beginning of a new transformation. The highly permissive licenses that came into fashion over the last 20 years are proving to be unsustainable. As prominent open source startups amend their licenses (e.g., Hashicorp, Elastic, Redis, etc.), organizations need to evaluate these changes and how they impact their OSS supply chain strategy.

Similar to the benefits during a security incident, an SBOM inventory acts as the source of truth for OSS licensing risk. As licenses are amended, an organization can quickly evaluate their risk by querying their inventory and identifying who their “critical” OSS suppliers are. 
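
That licensing query can be sketched against a simplified inventory. The rows, package names, and licenses below are hypothetical examples, not real supplier data:

```python
# Illustrative inventory rows; a real inventory would be a database of
# SBOM-derived package and license records.
INVENTORY = [
    {"application": "api-gateway", "package": "terraform-sdk", "license": "MPL-2.0"},
    {"application": "infra-tool", "package": "terraform-sdk", "license": "MPL-2.0"},
    {"application": "web-app", "package": "flask", "license": "BSD-3-Clause"},
]

def affected_by_license_change(package):
    """List applications exposed to a supplier's license amendment."""
    return [row["application"] for row in INVENTORY if row["package"] == package]

# A supplier relicenses terraform-sdk: which teams need to re-evaluate?
print(affected_by_license_change("terraform-sdk"))  # ['api-gateway', 'infra-tool']
```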

Domain Expertise Risk Management

Another emerging use-case of software dependency inventories is the management of domain expertise of developers in your organization. A comprehensive inventory of software dependencies allows organizations to map critical software to individual employees’ domain knowledge. This creates a measurement of how well resourced your engineering organization is and who owns the knowledge that could impact business operations.

While losing an employee with a particular set of skills might not have the same urgency as a security incident, over time this gap can create instability. An SBOM inventory allows organizations to maintain a list of critical OSS suppliers and get ahead of any structural risks in their organization.

What are the benefits of a software dependency inventory?

SBOM inventories create a number of benefits for tangential domains such as software supply chain security and risk management, but there is also one big benefit for the core practice of software development.

Reduced engineering and QA time for debugging

A software dependency inventory stores metadata about applications and their OSS dependencies over time in a centralized repository. This datastore is a simple and efficient way to search and answer critical questions about the state of an organization’s software development pipeline.

Previously, engineering and QA teams had to manually search codebases and commits in order to determine the source of a rogue dependency being added to an application. A software dependency inventory combines a centralized repository of SBOMs with an intuitive search interface. Now, these time-consuming investigations can be accomplished in minutes versus hours.
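
The “when did this dependency first appear?” question is a simple search over the per-build SBOM records. The repository shape, build IDs, and package names below are made up for illustration:

```python
# Ordered list of (build_id, sbom) records standing in for a real repository.
REPOSITORY = [
    ("build-101", {"components": [{"name": "requests"}]}),
    ("build-102", {"components": [{"name": "requests"}, {"name": "left-pad"}]}),
    ("build-103", {"components": [{"name": "requests"}, {"name": "left-pad"}]}),
]

def first_introduced(package):
    """Return the first build whose SBOM contains the package, else None."""
    for build_id, sbom in REPOSITORY:
        if any(c["name"] == package for c in sbom["components"]):
            return build_id
    return None

print(first_introduced("left-pad"))  # prints build-102
```

Instead of bisecting commits by hand, the answer falls out of one lookup against stored SBOMs.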

What are the benefits of scanning SBOMs for vulnerabilities?

There are a number of security benefits that can be achieved by integrating SBOMs and vulnerability scanning. We’ve highlighted the most important below:

Reduce risk by scaling vulnerability scanning for complete coverage

One of the side effects of transitioning to DevOps practices was that legacy SCAs couldn’t keep up with the software output of modern engineering orgs. This meant that not all applications were scanned before being deployed to production—a risky security practice!

Modern SCAs solved this problem by scanning SBOMs rather than applications or codebases. These lightweight SBOM scans are so efficient that they can keep up with the pace of DevOps output. Scanning 100% of applications reduces risk by preventing unscanned software from being deployed into vulnerable environments.

Prevent delays in software delivery

Overall organizational productivity can be increased by adopting modern, SBOM-powered SCAs that allow organizations to shift security left. When vulnerabilities are uncovered during application design, developers can make informed decisions about the OSS dependencies that they choose. 

This prevents the situation where engineering creates a new application or feature but right before it is deployed into production the security team scans the dependencies and finds a critical vulnerability. These last minute security scans can delay a release and create frustration across the organization. Scanning early and often prevents this productivity drain from occurring at the worst possible time.

Reduced financial risk during a security incident

The faster a security incident is resolved, the less risk an organization is exposed to. The primary metric that organizations track is called mean-time-to-recovery (MTTR). SBOM inventories are utilized to significantly reduce this metric and improve incident outcomes.

An application inventory with full details on the software dependencies is a prerequisite for rapid security response in the event of an incident. A single SQL query to an SBOM inventory will return a list of all applications that have exploitable dependencies installed. Recent examples include Log4j and XZ. This prevents the need for manual scanning of codebases or production containers. This is the difference between a zero-day incident lasting a few hours versus weeks.
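
A sketch of that single-query pattern, using an in-memory SQLite database as a stand-in for a real SBOM inventory (the schema and application names are simplified assumptions):

```python
import sqlite3

# In-memory stand-in for an SBOM inventory database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sbom_packages (application TEXT, package TEXT, version TEXT)")
db.executemany(
    "INSERT INTO sbom_packages VALUES (?, ?, ?)",
    [
        ("billing-api", "log4j-core", "2.14.1"),
        ("auth-service", "log4j-core", "2.17.2"),  # already patched
        ("report-worker", "log4j-core", "2.14.1"),
    ],
)

# "Which applications are running the exploitable Log4j version?"
rows = db.execute(
    "SELECT application FROM sbom_packages WHERE package = ? AND version = ?",
    ("log4j-core", "2.14.1"),
).fetchall()
print([r[0] for r in rows])  # ['billing-api', 'report-worker']
```

Every application on that list gets an incident ticket; everything else is confirmed unaffected, in seconds.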

Reduce hours spent on compliance with automation

Compliance certifications are powerful growth levers for organizations; they open up new market opportunities. The downside is that they create a lot of work for organizations. Manually confirming that each compliance control is met and providing evidence for the compliance officer to review discourages organizations from pursuing these certifications.

Providing automated vulnerability scans from DevSecOps pipelines that integrate SBOM inventories and vulnerability scanners significantly reduces the hours needed to generate and collect evidence for compliance audits.

How impactful are these benefits?

Many modern enterprises are adopting SBOM-powered SCAs and reaping the benefits outlined above. The quantifiable benefits to any organization are unique to that enterprise, but anecdotal evidence is still helpful when weighing a software supply chain security initiative, like the adoption of an SBOM-powered SCA, against other organizational priorities.

As a leading SBOM-powered SCA, Anchore has helped numerous organizations achieve the benefits of this evolution in the software industry. To get an estimate of what your organization can expect, see the case studies below:

NVIDIA

  • Reduced time to production by scanning SBOMs instead of full applications
  • Scaled vulnerability scanning and management program to 100% coverage across 1000s of containerized applications and 100,000s of containers

Read the full NVIDIA case study here >>

Infoblox

  • 75% reduction in engineering hours spent performing manual vulnerability detection
  • 55% reduction in hours allocated to retroactive remediation of vulnerabilities
  • 60% reduction in hours spent on manual compliance discovery and documentation

Read the full Infoblox case study here >>

DreamFactory

  • 75% reduction in engineering hours spent on vulnerability management and compliance
  • 70% faster production deployments with automated vulnerability scanning and management

Read the full DreamFactory case study here >>

Next Steps

Hopefully you now have a better understanding of the power of integrating an SBOM inventory into OSS vulnerability management. This “one-two” combo has unlocked novel use-cases, numerous benefits and measurable results for modern enterprises.

If you’re interested in learning more about how Anchore can help your organization achieve similar results, reach out to our team.

Learn the container security best practices to reduce the risk of software supply chain attacks.

How is Open Source Software Security Managed in the Software Supply Chain?

Open source software has revolutionized the way developers build applications, offering a treasure trove of pre-built software “legos” that dramatically boost productivity and accelerate innovation. By leveraging the collective expertise of a global community, developers can create complex, feature-rich applications in a fraction of the time it would take to build everything from scratch. However, this incredible power comes with a significant caveat: the open source model introduces risk.

Organizations inherit both the good and bad parts of the OSS source code they don’t own. This double-edged sword of open source software necessitates a careful balance between harnessing its productivity benefits and managing the risks. A comprehensive OSS security program is the industry standard best practice for managing the risk of open source software within an organization’s software supply chain.

Learn the container security best practices to reduce the risk of software supply chain attacks.

What is open source software security?

Open source software security is the ecosystem of security tools (some of them OSS themselves!) that has developed to compensate for the inherent risk of OSS development. The security of the OSS ecosystem was founded on the idea that “given enough eyeballs, all bugs are shallow”. The reality is that the majority of OSS is written and maintained by single contributors; the percentage of open source software that passes the “enough eyeballs” qualifier is minuscule.

Does that mean open source software isn’t secure? Fortunately, no. The OSS community still produces secure software, but an entire ecosystem of tools ensures that this is verified rather than implicitly trusted.

What is the difference between closed source and open source software security?

The primary difference between open source software security and closed source software security is how much control you have over the source code. Open source code is public and can have many contributors that are not employees of your organization while proprietary source code is written exclusively by employees of your organization. The threat models required to manage risk for each of these software development methods are informed by these differences.

Due to the fact that open source software is publicly accessible and can be contributed to by a diverse, often anonymous community, its threat model must account for the possibility of malicious code contributions, unintentional vulnerabilities introduced by inexperienced developers, and potential exploitation of disclosed vulnerabilities before patches are applied. This model emphasizes continuous monitoring, rigorous code review processes, and active community engagement to mitigate risks. 

In contrast, proprietary software’s threat model centers around insider threats, such as disgruntled employees or lapses in secure coding practices, and focuses heavily on internal access controls, security audits, and maintaining strict development protocols. 

The need for external threat intelligence is also greater in OSS, as the public nature of the code makes it a target for attackers seeking to exploit weaknesses, while proprietary software relies on obscurity and controlled access as a first line of defense against potential breaches.

What are the risks of using open source software?

  1. Vulnerability exploitation of your application
    • The bargain struck when utilizing OSS is that your organization gives up significant control over the quality of the software. When you use OSS, you inherit both good AND bad (read: insecure) code. Any known or latent vulnerabilities in the software become your problem.
  2. Access to source code increases the risk of vulnerabilities being discovered by threat actors
    • OSS development is unique in that both the defenders and the attackers have direct access to the source code. This gives the threat actors a leg up. They don’t have to break through perimeter defenses before they get access to source code that they can then analyze for vulnerabilities.
  3. Increased maintenance costs for DevSecOps function
    • Adopting OSS into an engineering organization is another function that requires management. Data has to be collected about the OSS that is embedded in your applications. That data has to be stored and made available in the event of a security incident. These maintenance costs are typically incurred by the DevOps and Security teams.
  4. OSS license legal exposure
    • OSS licenses are mostly permissive for use within commercial applications, but a non-trivial subset are not, or, worse, are highly adversarial when used by a commercial enterprise. Organizations that don’t manage this risk increase the potential for legal action against them.

How serious are the risks associated with the use of open source software?

Current estimates are that 70-90% of modern applications are composed of open source software. This means that only 10-30% of application code is written by developers employed by the organization. Without significant visibility into the security of OSS, organizations are handing over the keys to the castle to the community and hoping for the best.

Not only is OSS a significant footprint in modern application composition but its growth is accelerating. This means the associated risks are growing just as fast. This is part of the reason we see an acceleration in the frequency of software supply chain attacks. Organizations that aren’t addressing these realities are caught on the back foot when zero-days like the recent XZ Utils backdoor are announced.

Why are SBOMs important to open source software security?

Software Bills of Materials (SBOMs) serve as the foundation of software supply chain security by providing a comprehensive “ingredient list” of all components within an application. This transparency is crucial in today’s software landscape, where modern applications are a complex web of mostly open source software dependencies that can harbor hidden vulnerabilities. 

SBOMs enable organizations to quickly identify and respond to security threats, as demonstrated during incidents like Log4Shell, where companies with centralized SBOM repositories were able to locate vulnerable components in hours rather than days. By offering a clear view of an application’s composition, SBOMs form the bedrock upon which other software supply chain security measures can be effectively built and validated.

The importance of SBOMs in open source software security cannot be overstated. Open source projects often involve numerous contributors and dependencies, making it challenging to maintain a clear picture of all components and their potential vulnerabilities. By implementing SBOMs, organizations can proactively manage risks associated with open source software, ensure regulatory compliance, and build trust with customers and partners. 

SBOMs enable quick responses to newly discovered vulnerabilities, facilitate automated vulnerability management, and support higher-level security abstractions like cryptographically signed images or source code. In essence, SBOMs provide the critical knowledge needed to navigate the complex world of open source dependencies by enabling us to channel our inner GI Joe—”knowing is half the battle” in software supply chain security.

What are the best practices for securing open source software?

Open source software has become an integral part of modern development practices, offering numerous benefits such as cost-effectiveness, flexibility, and community-driven innovation. However, with these advantages come unique security challenges. To mitigate risks and ensure the safety of your open source components, consider implementing the following best practices:

1. Model Security Scans as Unit Tests

Re-branding security checks as another type of unit test helps developers orient to DevSecOps principles, re-imagining security as an integral part of their workflow rather than a separate, post-development concern. By modeling security checks as unit tests, you can:

  • Catch vulnerabilities earlier in the development process
  • Reduce the time between vulnerability detection and remediation
  • Empower developers to take ownership of security issues
  • Create a more seamless integration between development and security teams

Remember, the goal is to make security an integral part of the development process, not a bottleneck. By treating security checks as unit tests, you can achieve a balance between rapid development and robust security practices.

2. Review Code Quality

Assessing the quality of open source code is crucial for identifying potential vulnerabilities and ensuring overall software reliability. Consider the following steps:

  • Conduct thorough code reviews, either manually or using automated tools
  • Look for adherence to coding standards and best practices
  • Look for projects developed with secure-by-default principles
  • Evaluate the overall architecture and design patterns used

Remember, high-quality code is generally more secure and easier to maintain.

3. Assess Overall Project Health

A vibrant, active community and committed maintainers are crucial indicators of a well-maintained open source project. When evaluating a project’s health and security:

  • Examine community involvement:
    • Check the number of contributors and frequency of contributions
    • Review the project’s popularity metrics (e.g., GitHub stars, forks, watchers)
    • Assess the quality and frequency of discussions in forums or mailing lists
  • Evaluate maintainer(s) commitment:
    • Check the frequency of commits, releases, and security updates
    • Check for active engagement between maintainers and contributors
    • Review the time taken to address reported bugs and vulnerabilities
    • Look for a clear roadmap or future development plans

4. Maintain a Software Dependency Inventory

Keeping track of your open source dependencies is crucial for managing security risks. To create and maintain an effective inventory:

  • Use tools like Syft or Anchore SBOM to automatically scan your application source code for OSS dependencies
    • Include both direct and transitive dependencies in your scans
  • Generate a Software Bill of Materials (SBOM) from the dependency scan
    • Your dependency scanner should also do this for you
  • Store your SBOMs in a central location that can be searched and analyzed
  • Scan your entire DevSecOps pipeline regularly (ideally every build and deploy)

An up-to-date inventory allows for quicker responses to newly discovered vulnerabilities.
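
The “central, searchable” part of the inventory can be sketched as an inverted index over stored SBOMs. The inline dicts below stand in for Syft-generated SBOM files; application and package names are made up:

```python
from collections import defaultdict

# Inline stand-ins for Syft-generated SBOM files stored centrally.
SBOMS = {
    "web-app": {"components": [{"name": "openssl"}, {"name": "zlib"}]},
    "cli-tool": {"components": [{"name": "zlib"}]},
}

def build_index(sboms):
    """Invert the inventory: package name -> applications that contain it."""
    index = defaultdict(list)
    for app, sbom in sboms.items():
        for component in sbom["components"]:
            index[component["name"]].append(app)
    return dict(index)

index = build_index(SBOMS)
print(index["zlib"])  # every application that ships zlib
```

When a zero-day drops, this lookup replaces an org-wide manual hunt.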

5. Implement Vulnerability Scanning

Regular vulnerability scanning helps identify known security issues in your open source components. To effectively scan for vulnerabilities:

  • Use tools like Grype or Anchore Secure to automatically scan your SBOMs for vulnerabilities
  • Automate vulnerability scanning tools directly into your CI/CD pipeline
    • At minimum implement vulnerability scanning as containers are built
    • Ideally scan container registries, container orchestrators and even each time a new dependency is added during design
  • Set up alerts for newly discovered vulnerabilities in your dependencies
  • Establish a process for addressing identified vulnerabilities promptly
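
The alerting step in particular reduces to diffing consecutive scan results so only newly discovered vulnerabilities page anyone. A minimal sketch with hypothetical CVE IDs:

```python
def new_findings(previous, current):
    """Vulnerabilities present in today's scan but not yesterday's."""
    return current - previous

yesterday = {"CVE-2024-1111"}
today = {"CVE-2024-1111", "CVE-2024-2222"}

for cve in sorted(new_findings(yesterday, today)):
    # A real pipeline would page a team via Slack, PagerDuty, etc.
    print(f"ALERT: new vulnerability {cve} detected in dependencies")
```

Alerting only on the delta keeps the signal high and avoids re-paging teams for known, already-triaged findings.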

6. Implement Version Control Best Practices

Version control practices are crucial for securing all DevSecOps pipelines that utilize open source software:

  • Implement branch protection rules to prevent unauthorized changes
  • Require code reviews and approvals before merging changes
  • Use signed commits to verify the authenticity of contributions

By implementing these best practices, you can significantly enhance the security of your software development pipeline and reduce the risk intrinsic to open source software. In other words, you can have your cake (the productivity boost of OSS) and eat it too (without the inherent risk).

How do I integrate open source software security into my development process?

DIY a comprehensive OSS security system

We’ve written about the steps to build an OSS security system from scratch in a previous blog post—below is the TL;DR:

  • Integrate dependency scanning, SBOM generation and vulnerability scanning into your DevSecOps pipeline
  • Implement a data pipeline to manage the influx of security metadata
  • Use automated policy-as-code “security tests” to provide rapid feedback to developers
  • Automate remediation recommendations to reduce cognitive load on developers
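
The policy-as-code “security test” step can be sketched as a declarative policy evaluated against scan metadata. The policy schema, CVE IDs, and package names below are illustrative assumptions, not a real standard:

```python
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

# Declarative policy; this schema is illustrative, not a real standard.
POLICY = {
    "max_severity": "high",              # anything above "high" fails the gate
    "forbidden_licenses": {"AGPL-3.0"},
}

def evaluate(policy, scan):
    """Return a list of policy violations; an empty list means the gate passes."""
    violations = []
    for vuln in scan.get("vulnerabilities", []):
        if SEVERITY_RANK[vuln["severity"]] > SEVERITY_RANK[policy["max_severity"]]:
            violations.append(f"{vuln['id']} exceeds severity gate")
    for pkg in scan.get("packages", []):
        if pkg["license"] in policy["forbidden_licenses"]:
            violations.append(f"{pkg['name']} uses forbidden license {pkg['license']}")
    return violations

scan = {
    "vulnerabilities": [{"id": "CVE-2024-9999", "severity": "critical"}],
    "packages": [{"name": "some-lib", "license": "MIT"}],
}
print(evaluate(POLICY, scan))  # ['CVE-2024-9999 exceeds severity gate']
```

Because the policy is data rather than code, security teams can tighten the gate without touching the pipeline itself.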

Outsource OSS security to a turnkey vendor

Modern software composition analysis (SCA) tools, like Anchore Enterprise, are purpose built to provide you with a comprehensive OSS security system out-of-the-box. All of the same features as the DIY approach, but without the hassle of building and maintaining the tooling yourself.

  • Anchore SBOM: comprehensive dependency scanning, SBOM generation and management
  • Anchore Secure: vulnerability scanning and management
  • Anchore Enforce: automated security enforcement and compliance

Whether you want to scale an understaffed security team to increase its reach across your organization or free your team up to focus on other priorities, the buy-versus-build decision is a straightforward one.

Next Steps

Hopefully, you now have a strong understanding of the risks associated with adopting open source software. If you’re looking to continue your exploration into the intricacies of software supply chain security, Anchore has a catalog of deep dive content on our website. If you’d prefer to get your hands dirty, we also offer a 15-day free trial of Anchore Enterprise.

Learn about the role that SBOMs play in the security of your organization in this white paper.