Beyond the Bottleneck: How RAISE 2.0 is Transforming Navy DevSecOps

If you’ve been following the Department of Defense’s (DoD) modernization efforts, you have probably noticed a significant shift in how the US Navy approaches deployment. Historically, the path to an Authorization to Operate (ATO) was a grueling marathon that often delayed critical capabilities by months or years. Today, the U.S. Navy is pivoting toward a model that prioritizes speed without sacrificing the rigorous security standards required for mission-critical systems.

At the heart of this transformation is RAISE 2.0. It is a framework designed to automate the Risk Management Framework (RMF) and move away from static, one-time security checks. By leveraging an authorized RAISE Platform of Choice (RPoC), the Navy is essentially rewriting the rules of software delivery.

Visibility: The Foundation of Every Security Check

Before you can secure a system, you must understand its composition. In the microservices-based world we operate in today, software stacks have become increasingly complex. They are no longer single blocks of code but intricate webs of dependencies. The Navy recognizes that you cannot defend a “black box” against modern threats like Log4j or any of the recent supply chain compromises.

As Brian Thomason, Partner and Solutions Engineering Manager at Anchore, puts it: “It’s hard to know what to fix in your software…if you don’t know what’s in your software.” This is why the security process begins with a high-fidelity Software Bill of Materials. An SBOM is a comprehensive ingredients list for your application. 

How can we expect to manage risk if we are blind to the very components we are deploying?

Stop Waiting for ATOs: The Power of Inheritance

The traditional ATO process required every new application to justify its entire existence from the hardware up. This created a massive bottleneck where security teams were constantly re-evaluating the same underlying infrastructure and DevSecOps tools. RAISE 2.0 solves this by introducing the concept of a RAISE Platform of Choice (RPoC) (i.e., the Navy’s version of the more general DoD DevSecOps Platform or DSOP) with a pre-existing ATO.

“Raise 2.0 automates the RMF…eliminating the need for a new ATO; instead you inherit the ATO of the RPoC.” This mechanism allows application teams to focus solely on the security of their specific code rather than the platform it runs on. Why waste months certifying a pipeline that has already been vetted and hardened by experts? By inheriting a certified platform’s posture, developers can move from “code complete” to “production ready” in days rather than years.

The Accidental Leak: Moving Beyond Simple CVEs

Security isn’t just about identifying known vulnerabilities in 3rd-party libraries; it’s also about catching the human errors that occur during the heat of a sprint. While we like to think our internal processes are foolproof, history shows that sensitive credentials (think: AWS or SSH keys) find their way into repositories with surprising frequency.

Anchore’s platform doesn’t just look for CVEs; it scans for these “silent” risks. Brian notes: “We also detect leaked secrets… not that anybody in your organization would do this… but companies have been known to…” This capability acts as a critical safety net for developers working in high-pressure environments. What happens to your security posture if a single accidental commit exposes the keys to your entire cloud environment?

Security for Tomorrow: Managing the Zero-Day Disclosures

The moment an application is deployed, its security posture begins to decay. New zero-day disclosures and changing requirements mean that a “clean” scan from yesterday may be irrelevant by tomorrow morning. Static security checks are insufficient for modern warfare. We need continuous intelligence that tracks code throughout its entire lifecycle.

“Future changes; new requirements, zero-day disclosures, etc. are covered by Anchore’s SBOM-based approach.” By maintaining a version-controlled SBOM in a tool like Anchore Enterprise, the Navy can rescan deployed code against new threat intelligence every 12 hours. This provides a continuous feedback loop: if a new vulnerability is discovered in a library you deployed six months ago, you are notified immediately. Is your current process capable of identifying risks in software that is already deployed in production?
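The same pattern can be sketched with Anchore’s open source tools (the SBOM file name below is illustrative). Grype refreshes its vulnerability database on each run, so re-scanning a stored SBOM surfaces newly disclosed CVEs without ever touching the deployed artifact:

# Re-scan a previously generated SBOM on a schedule; the vulnerability database is refreshed on each run
# and a non-zero exit (e.g., to page the team) signals a newly disclosed critical finding
grype sbom:./payments-api-1.4.2.sbom.json --fail-on critical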

The Path Forward: Two Steps and One Habit

Implementing RAISE 2.0 principles isn’t an overnight task, but it is the only viable path for modernizing federal software delivery.

  • Step 1: Standardize on SBOM generation. Integrate high-fidelity SBOM creation into your build process immediately to establish a baseline of truth (see the sketch after this list).
  • Step 2: Map your RMF controls to automation. Identify which parts of your compliance checklist can be handled by a platform like Black Pearl and/or Anchore.
  • The Habit: Continuous Rescanning. Make it a daily practice to check your production inventory against the latest vulnerability data (KEV/EPSS), rather than waiting for your next scheduled audit.
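Step 1 can be as small as one extra command in the build script. A minimal sketch with Syft (the registry path and file name are illustrative):

# Emit an SBOM as a build artifact on every pipeline run
syft registry:registry.example.mil/my-app:1.0.0 -o spdx-json > my-app-1.0.0.sbom.json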

The goal isn’t just to achieve an ATO today; it’s to build a system that remains secure and compliant through whatever challenges tomorrow brings.


STIG in Action: 4 Lessons on Automating Compliance with MITRE SAF

If you have ever tried to manually apply a Security Technical Implementation Guide (STIG) to a modern containerized environment, you know it feels like trying to fit a square peg into a round hole…while the hole is moving at 60 miles per hour.

The Department of Defense’s move to DevSecOps (and adoption of the DoD Software Factory paradigm) has forced a collision between rigid compliance standards and the fluid reality of cloud-native infrastructure. The old way of “scan, patch, report” simply doesn’t scale when you are deploying thousands of containers a day.

We recently sat down with Aaron Lipult, Chief Architect at MITRE, to discuss how the MITRE Security Automation Framework (SAF) is solving this friction. The conversation moved past the basics of “what is a STIG?” and into the architectural philosophy of how we can actually automate compliance without breaking the mission.

Here are four key takeaways on why the future of government compliance is open, active, and strictly standardized.

Collaboration over monetization

In an industry often dominated by proprietary “black box” security tools, MITRE SAF stands out by being radically open. The framework wasn’t designed to lock users into a vendor ecosystem; it was designed to solve a national security problem.

The philosophy is simple: security validation code should be as accessible as the application code it protects.

“MITRE SAF came from public funds, it should go back into the public domain. In my opinion, it was built to solve a problem for everybody—not just us.”

This approach fundamentally changes the dynamic between government agencies and software vendors. Instead of every agency reinventing the wheel, the community converged on a shared standard. When one team solves a compliance check for Red Hat Enterprise Linux 8, that solution goes back into the public domain for every other agency to use. It shifts compliance from a competitive differentiator to a collaborative baseline.

“Immutable” container myth

There is a prevalent theory in DevSecOps that containers are immutable artifacts. In a perfect world, you build an image, scan it, deploy it, and never touch it again. If you need to change something, you rebuild the image.

The reality of operations is much messier. Drift happens. Emergency patches happen. Humans happen.

“Ops will still login and mess with ‘immutable’ production containers. I really like the ability to scan running containers.”

If your compliance strategy relies solely on scanning images in the registry, you are missing half the picture. A registry scan tells you what you intended to deploy. A runtime scan tells you what is actually happening.

MITRE SAF accounts for this by enabling validation across the lifecycle. It acknowledges the operational headache that rigid immutability purism ignores: sometimes you need to know if a production container has drifted from its baseline, regardless of what the “gold image” says.
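Much of SAF’s validation content is distributed as Chef InSpec profiles, and InSpec can point at a live container rather than a static image. A rough sketch (the profile location and container name are illustrative):

# Validate a *running* container against a STIG-aligned InSpec profile
inspec exec https://github.com/mitre/redhat-enterprise-linux-8-stig-baseline \
  -t docker://suspect-prod-container \
  --reporter cli json:rhel8-stig-results.json

The JSON results can then be loaded into SAF’s Heimdall viewer and compared against the scan of the “gold image,” making drift visible instead of invisible.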

Real system interrogation vs static analysis

For years, the standard for compliance scanning has been SCAP (Security Content Automation Protocol). While valuable, legacy tools often rely on static analysis. They check file versions or registry keys without understanding the running context.

Modern infrastructure requires more than just checking if a package is installed. You need to know how it is configured, what process it is running under, and how it interacts with the system.

“Older tools like SCAP do static file system analysis. It doesn’t actually do real system interrogation. That’s what we’re changing here. If we didn’t, we would deploy insecure systems into production.”

This is the shift from “checking a box” to “verifying a state.” Real system interrogation means asking the live system questions. Is the port actually open? Is the configuration file active, or is it being overridden by an environment variable?

By moving to “real interrogation,” we stop deploying systems that are technically compliant on paper but insecure in practice.

The discipline of compliance automation

One of the most frustrating aspects of STIG compliance is the rigidity of the source material. Engineers often look at a STIG requirement and think, “I know a better way to secure this.”

But in the world of DoD authorization (ATO), creativity can be a liability. The goal of automation isn’t just security; it’s auditability.

“We write the SAF rules to follow the STIG profile ‘as written’, even if we know it could be done ‘better.’ You are being held accountable to the profile, not what is ‘best’.”

This is the hard truth of compliance automation. MITRE SAF creates a direct, defensible mapping between the written requirement and the automated check. If the STIG says “Check parameter X,” the automation must check parameter X, even if checking parameter Y would be more efficient.

This discipline ensures that when an auditor reviews your automated results, there is zero ambiguity. You aren’t being graded on your creativity; you are being graded on your adherence to the profile. By keeping the tooling “true to the document,” MITRE SAF streamlines the most painful part of the ATO process: proving that you did exactly what you said you would do.

The Path Forward

The transition to automated compliance isn’t just about buying a new tool; it’s about adopting a new mindset. It requires moving from static files to active interrogation, from proprietary silos to open collaboration, and from “creative” security to disciplined adherence.

MITRE SAF provides the framework to make this transition possible. By standardizing how we plan, harden, and validate our systems, we can finally stop fighting the compliance paperwork and start focusing on the mission.

Ready to see it in action? Watch our full webinar with the MITRE team.


Learn how to use the MITRE Corporation’s SAF framework to automate compliance audits. Never fill out another compliance spreadsheet.

STIG in Action: Continuous Compliance with MITRE & Anchore

The Top Ten List: The 2025 Anchore Blog

As 2025 draws to a close, we are looking back at the posts that defined the year in software supply chain security. If 2024 was the year the industry learned what an SBOM was, 2025 was the year we figured out how to use them effectively and why they are critical for the regulatory landscape ahead.

The Anchore content team spent the last twelve months delivering expert guides, engineering deep dives, and strategic advice to help you navigate everything from the EU Cyber Resilience Act to the complexities of Python dependencies.

This top ten list reflects a maturing industry where the focus has shifted from basic awareness to actionable implementation. Hot topics this year included:

  • Mastering SBOM generation for complex ecosystems like JavaScript and Python
  • Preparing for major regulations like the EU CRA and DoD STIGs
  • Reducing noise in vulnerability scanning (see ya later, false positives!)
  • Engineering wins that make SBOM scanning faster and vulnerability databases smaller

So, grab your popcorn and settle in; it’s time to count down the most popular Anchore blog posts of 2025!

The Top Ten List

10 | Add SBOM Generation to Your GitHub Project with Syft

Kicking us off at number 10 is a blog dedicated to making security automation painless. We know that if security isn’t easy, it often doesn’t happen.

Add SBOM Generation to Your GitHub Project with Syft is a practical guide on integrating sbom-action directly into your GitHub workflows. It details how to set up a “fire and forget” system where SBOMs are automatically generated on every push or release.

This post is all about removing friction. By automating the visibility of your software components, you take the first step toward a transparent software supply chain without adding manual overhead to your developers’ plates.

9 | Syft 1.20: Faster Scans, Smarter License Detection

Coming in at number nine is a celebration of speed and accuracy. Two things every DevSecOps team craves.

Syft 1.20: Faster Scans, Smarter License Detection made waves this year by announcing a massive performance boost: 50x faster scans on Windows! But speed wasn’t the only headline. This release also introduced improved Bitnami support and smarter handling of unknown software licenses.

It’s a look at how we are continuously refining the open source tools that power your supply chain security. The improvements ensure that as your projects grow larger, your scans don’t get slower.

8 | False Positives and False Negatives in Vulnerability Scanning

Landing at number eight is a piece tackling the industry’s “Boy Who Cried Wolf” problem: noise.

False Positives and False Negatives in Vulnerability Scanning explores why scanners sometimes get it wrong and what we are doing about it. It details Anchore’s evolution in detection logic. Spoiler alert: we moved away from simple CPE matching toward more precise GHSA data. This was done to build trust in your scan results.

Reducing false positives isn’t just about convenience; it’s about combating alert fatigue so your security team can stop chasing ghosts and focus on the real threats that matter.

7 | Generating SBOMs for JavaScript Projects

Sliding in at lucky number seven, we have a guide for taming the chaos of node_modules.

Generating SBOMs for JavaScript Projects addresses one of the most notoriously complex ecosystems in development. JavaScript dependencies can be a labyrinth of nested packages, but this guide provides a clear path for developers to map them accurately using Syft.

We cover both package.json manifests and deeply nested, transitive dependencies. This is essential for frontend, backend and full stack devs looking to secure their modern web applications against supply chain attacks.

6 | Generating Python SBOMs: Using pipdeptree and Syft

At number six, we turn our attention to the data scientists and backend engineers working in Python.

Generating Python SBOMs: Using pipdeptree and Syft offers a technical comparison between standard tools like pipdeptree and Syft’s universal approach. Python environments can be tricky, but this post highlights why Syft’s ability to capture extensive metadata offers a more comprehensive view of risks.

If you want better visibility into transitive dependencies (the libraries of your libraries), this post explains exactly how to get it.

5 | Grype DB Schema Evolution: From v5 to v6

Breaking into the top five, we have an engineering deep dive for those who love to see what happens under the hood.

Grype DB Schema Evolution: From v5 to v6 details the redesign of the Grype vulnerability database. While database schemas might not sound like the flashiest topic, the results speak for themselves: moving to Schema v6 reduced download sizes by roughly 69% and significantly sped up updates.

This is a critical improvement for users in air-gapped environments or those running high-volume CI/CD pipelines where every second and megabyte counts.

4 | Strengthening Software Security: The Anchore and Chainguard Partnership

At number four, we highlight a power move in the industry: two leaders joining forces for a unified goal.

Strengthening Software Security: The Anchore and Chainguard Partnership details how we teamed up with Chainguard to help you “Start Safe and Stay Secure.” It explains how combining Chainguard’s hardened Wolfi images with Anchore Enforce’s continuous compliance platform creates a seamless, secure workflow from build to runtime.

The key takeaway? Reducing your attack surface starts with a secure base image but maintaining that secure initial state requires continuous monitoring.

3 | EU CRA SBOM Requirements: Overview & Compliance Tips

Taking the bronze medal at number three is a wake-up call regarding the “Compliance Cascade.”

EU CRA SBOM Requirements: Overview & Compliance Tips breaks down the EU Cyber Resilience Act (CRA), a regulation that is reshaping the global software market. We covered the timeline, the mandatory SBOM requirements coming in 2027, and why compliance is now a competitive differentiator.

If you sell software in Europe (or sell to a business that sells software in Europe), this post was your signal to start preparing your evidence now. Waiting until the last minute is not a strategy!

2 | DISA STIG Compliance Requirements Explained

Just missing the top spot at number two is our comprehensive guide to the DoD’s toughest security standard.

DISA STIG Compliance Requirements Explained demystifies the Security Technical Implementation Guides (STIGs). We broke down the difference between Category I, II, and III vulnerabilities and showed how to automate the validation process for containers.

This is a must-read for any vendor aiming to operate within the Department of Defense network. It turns a daunting set of requirements into a manageable checklist for your DevSecOps pipeline.

1 | How Syft Scans Software to Generate SBOMs

And finally, taking the number one spot for 2025, is the ultimate technical deep dive!

How Syft Scans Software to Generate SBOMs peeled back the layers of our open source engine to show you exactly how the magic happens. It explained Syft’s architecture of catalogers, how stereoscope parses image layers, and the logic Syft uses to determine what is actually installed in your container.

Trust requires understanding. By showing exactly how we build an SBOM, we empower teams to trust the data they rely on for critical security decisions.

Wrap-Up

That wraps up the top ten Anchore blog posts of 2025! From deep dives into scanning logic to high-level regulatory strategy, this year was about bridging the gap between knowing you need security and doing it effectively.

The common thread? Whether it’s complying with the EU CRA or optimizing a GitHub Action, the goal remains the same: security and speed at scale. We hope these posts serve as a guide as you refine your DevSecOps practice and steer your organization toward a more secure future.

Stay ahead of the curve in 2026. Subscribe to the Anchore Newsletter or follow us on your favorite social platform to catch the next big update.

Why SBOMs Are No Longer Optional in 2025

If you’ve spent any time in the software security space recently, you’ve likely heard the comparison: a Software Bill of Materials (SBOM) is essentially an “ingredients list” for your software. Much like the label on a box of crackers, an SBOM tells you exactly what components, libraries, and dependencies make up your application.

But as any developer knows, a simple label can be deceptive. “Spices” on a food label could mean anything; “tomatoes” could be fresh, canned, or powdered. In software, the challenge is moving from a vague inventory to a detailed, machine-readable explanation of what is truly inside.

In a recent Cloud Native Now webinar, Anchore’s VP of Security, Josh Bressers, demystified the process of generating these critical documents using free, open source tools. He demonstrated the practical “how-to” for a world where SBOMs have moved from “nice-to-have” to “must-have.”

From Security Novelty to Compliance Mandate

For years, early adopters used SBOMs because they were “doing the right thing.” It was a hallmark of a high-maturity security program; a way to gain visibility that others lacked. But the landscape shifted recently.

“Before 2025, SBOMs were ‘novelties;’ they were ‘doing the right thing’ for security. Now they are mandatory due to compliance requirements.”

Global regulations like the EU’s Cyber Resilience Act (CRA) and FDA mandates in the U.S. have changed the math. If you want to sell software into the European market, or the healthcare sector, an SBOM is no longer a gold star on your homework; it’s the price of admission. The “novelty” phase is over. We are now in the era of enforcement.

Why Compliance is the New Proof

We often talk about SBOMs in the context of security. They are vital for identifying vulnerabilities like Log4j in minutes rather than months. However, the primary driver for adoption across the industry isn’t a sudden surge in altruism. It’s the need for verifiable evidence.

“So compliance is why we’re going to need SBOMs. That’s the simple answer. It’s not about security. It’s not about saying we are doing the right thing. It’s proof.”

Security is the outcome, but compliance is the driver. An SBOM provides the machine-readable “proof” that regulators and customers now demand. It proves you know what you’re shipping, where it came from, and that you are monitoring it for risks. In the eyes of a regulator, if it isn’t documented in a standard format like SPDX or CycloneDX, it doesn’t exist.

Getting Started: The Crawl, Walk, Run Approach

When teams realize they need an SBOM strategy, the first instinct is often to over-engineer. They look for complex database integrations or expensive enterprise platforms before they’ve even generated their first file. My advice is always to start with the simplest path possible.

“To start, store the SBOM in the project’s directory. This is one of those situations where you crawl, walk, run. Start putting them somewhere easy. Don’t overthink it.”

You don’t need a massive infrastructure to begin. Using open source tools like Syft, you can generate an SBOM from a container image or a local directory in seconds.

  1. Crawl: Generate an SBOM manually using the CLI and save it as a JSON file in your project repo (see the sketch after this list).
  2. Walk: Integrate that generation into your CI/CD pipeline (e.g., using a GitHub Action) so an SBOM is created automatically with every release.
  3. Run: Generate an SBOM for multiple stages of the DevSecOps pipeline, store them in a central repository and query them for actionable supply chain insights.
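The “crawl” step really is that small. A minimal sketch with Syft (image and file names are illustrative):

# Generate an SBOM for a container image and keep it next to the code it describes
syft registry:ghcr.io/example/webapp:2.3.0 -o cyclonedx-json > sbom.cdx.json
git add sbom.cdx.json && git commit -m "Add SBOM for the 2.3.0 release"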

The Pursuit of Perfection in an Imperfect World

Software is messy. Dependencies have dependencies and scanners sometimes miss things or produce false positives. While the industry is working hard to improve the accuracy of software supply chain tools, transparency about our limitations is key.

“Our goal is perfection. We know it’s unattainable, but that’s what we’re working towards.”

We strive for a 100% accurate inventory, but “perfect” should never be the enemy of “better.” Having a 95% accurate SBOM today is infinitely more valuable during a zero-day event than having no SBOM at all while you wait for a perfect solution.

Wrap-Up

The transition from manual audits to automated, compliance-driven transparency is the biggest shift in software security this decade. By starting small with open source tooling, focusing on compliance as your baseline, and iterating toward better visibility, you can transform your security posture from reactive to proactive.

Ready to generate your first SBOM?

  • Download Syft: The easiest way to generate an SBOM for containers and filesystems.
  • Try Grype: Vulnerability scanning that works seamlessly with your SBOMs.
  • Watch the full webinar below.

Stay ahead of the next regulatory mandate: Follow Anchore on LinkedIn for more insights into the evolving world of software supply chain security.

The Death of Manual SBOM Management and an Automated Future

The transition from physical servers to Infrastructure as Code fundamentally transformed operations in the 2010s—bringing massive scalability alongside new management complexities. We’re witnessing history repeat itself with software supply chain security. The same pattern that made manual server provisioning obsolete is now playing out with Software Bill of Materials (SBOM) management. This pattern is creating an entirely new category of operational debt for organizations that refuse to adapt.

The shift from ad-hoc security scans to continuous, automated supply chain management is not just a technical upgrade. At enterprise scale, you simply cannot secure what you cannot see. You cannot trust what you cannot verify. Automation is the only mechanism that delivers consistent visibility and confidence in the system.

“Establishing trust starts with verifying the provenance of OSS code and validating supplier SBOMs. As well as, storing the SBOMs to track your ingredients over time.”

The Scale Problem: When “Good Enough” Isn’t

Manual processes work fine until they don’t. When you are managing a single application with a handful of dependencies, you can get away with lots of unscalable solutions. But modern enterprise environments are fundamentally different. A single monolithic application might have had stable, well-understood libraries, but modern cloud-native architectures rely on thousands of ephemeral components that change daily.

This fundamental difference creates a visibility crisis that traditional spreadsheets and manual scans cannot solve. Organizations attempting to manage this complexity with “Phase 1” tactics like manual scans or simple CI scripts typically find themselves buried under a mountain of data.

Supply Chain Security Evolution

  • Phase 1: The Ad-Hoc Era (Pre-2010s) was characterized by manual generation and point-in-time scanning. Developers would run a tool on their local machine before a release. This was feasible because release cycles were measured in weeks or months, and dependency trees were relatively shallow.
  • Phase 2: The Scripted Integration (2020s) brought entry-level automation. Teams wired open source tools like Syft and Grype into CI pipelines. This exploded the volume of security data without a solution for scaling data management. “Automate or not?” became the debate, but it missed the point. As Sean Fazenbaker, Solution Architect at Anchore, notes: “‘Automate or not?’ is the wrong question. ‘How can we make our pipeline set and forget?’ is the better question.”
  • Phase 3: The Enterprise Platform (Present) emerged as organizations realized that generating an SBOM is only the starting line. True security requires managing that data over time. Platforms like Anchore Enterprise transformed SBOMs from static compliance artifacts into dynamic operational intelligence, making continuous monitoring a standard part of the workflow.

The Operational Reality of “Set and Forget”

The goal of Phase 3 is to move beyond the reactive “firefighting” mode of security. In the firefighting model, a vulnerability disclosure like Log4j triggers a panic: teams scramble to re-scan every artifact in every registry to see if they are affected.

In an automated, platform-centric model, the data already exists. You don’t need to re-scan the images; you simply query the data you’ve already stored. This is fundamentally different from traditional vulnerability management.

Anchore scans SBOMs no matter when they were built: five months from now, six months ago, even 30 years in the future. If a new vulnerability is detected, you’ll know when, where, and for how long.

Continuous assessment of historical artifacts is what separates compliance theater from true resilience. It allows organizations to answer the critical question (“Are we affected?”) in minutes rather than months.
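A rough sketch of that query, assuming the archived SBOMs are stored as Syft-format JSON files (the directory layout and package name are illustrative; Anchore Enterprise exposes the equivalent as a single search across everything it has stored):

# "Are we affected?" List every archived SBOM that contains a newly disclosed package
for f in sbom-archive/*.json; do
  jq -e '.artifacts[] | select(.name == "log4j-core")' "$f" > /dev/null \
    && echo "affected: $f"
done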

The Implementation of Shift Left

Automation also fundamentally changes the developer experience. In traditional models, security is a gatekeeper that fails builds at the last minute, forcing context-switching and delays. In an automated, policy-driven environment, security feedback is immediate.

When automation is integrated correctly into the pull request workflow, developers can resolve issues before code ever merges. “I’ve identified issues. Fixed them. Rebuilt and pushed. I didn’t rely on another team to catch my mistakes. I shifted left instead.”

This is the promise of DevSecOps: security becomes a quality metric of the code, handled with the same speed and autonomy as a syntax error or a failed unit test.

Where Do We Go From Here?

We are still in the early stages of this evolution, which creates both risk and opportunity. First-movers can establish a trust foundation before the next major supply chain incident. Those who wait will face the crushing weight of manual management.

Crawl: The Open Source Foundation

Start with industry standards. Tools like Syft (SBOM generation) and Grype (vulnerability scanning) provide the baseline capabilities needed to understand your software.

  1. Generate SBOMs for your critical applications using Syft.
  2. Scan for vulnerabilities using Grype to understand your current risk posture.
  3. Archive these artifacts to begin building a history, even if it is just in a local filesystem or S3 bucket.
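The three steps above fit into a handful of commands (the image name, paths, and bucket are illustrative):

# 1. Generate an SBOM for a critical application
syft registry:myorg/payments-api:1.4.2 -o syft-json > sboms/payments-api-1.4.2.json
# 2. Scan it for vulnerabilities to understand the current risk posture
grype sbom:sboms/payments-api-1.4.2.json
# 3. Archive the artifact to start building a history
aws s3 cp sboms/payments-api-1.4.2.json s3://example-sbom-archive/payments-api/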

Walk: Integrated Automation

Early adopters can take concrete steps to wire these tools into their daily flow:

  1. Integrate scans into GitHub Actions (or your CI of choice) to catch issues on every commit.
  2. Define basic policies (e.g., “fail on critical severity”) to prevent new risks from entering production.
  3. Separate generation from scanning. It is often more efficient to generate the SBOM once and scan the JSON artifact repeatedly, rather than re-analyzing the heavy container image every time.
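Items 2 and 3 can collapse into a single CI step, assuming the SBOM artifact from the build stage is available (the file name and severity threshold are illustrative):

# Re-scan the SBOM artifact from the build stage and gate the pipeline on critical findings
grype sbom:sboms/payments-api-1.4.2.json --fail-on critical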

Want to bring these steps to life? Watch the full Automate, Generate, and Manage SBOMs webinar to see exactly how to wire this up in your own pipeline.

Anchore Enterprise 5.24: Native filesystem SBOMs and policy gates for BYOS

Anchore Enterprise 5.24 adds native filesystem scanning and policy enforcement for imported SBOMs so platform engineers and security architects can secure non-container assets with the same rigor as containers. Software supply chains now extend well beyond registries to include:

  • virtual machine images,
  • source code tarballs, and
  • directory-based artifacts.

This release focuses on increasing supply chain coverage and active governance. It replaces disparate, manual workflows for non-container assets with a unified approach and turns passive 3rd-party SBOMs into active components of your compliance strategy.

What’s New in AE 5.24

This release introduces three capabilities designed to unify security operations across your entire software stack:

  • Native Filesystem Scanning: Ingest and analyze VMs, source directories, and archives directly via anchorectl, removing the need for manual SBOM generation steps.
  • Policy Enforcement for Imported SBOMs: Apply vulnerability policy gates to 3rd-party SBOMs to automate compliance decisions for software you didn’t build.
  • Advanced Vulnerability Search: Instantly locate specific CVEs or Advisory IDs across your entire asset inventory for rapid zero-day response.

Watch a walkthrough of new features including a demo with Alex Rybak, Director of Product.

Watch Now

Native Filesystem Scanning & Analysis

Anchore Enterprise now natively supports the ingestion and analysis of arbitrary filesystems. Previously, users had to run Syft independently to generate an SBOM and then upload it. Now, the platform handles the heavy lifting directly via anchorectl.

This streamlines the workflow for hybrid environments. You can now scan a mounted VMDK, a tarball of source code, or a build directory using the same pipeline logic used for container images.

Using the updated anchorectl CLI, you can point directly to a directory or mount point. Anchore handles the SBOM generation and ingestion in a single step.

# Example: Ingesting a mounted VM image for analysis
anchorectl sbom add \
  --from ./my_vmdk_mount_point \
  --name my-vm-image \
  --version 1.0 \
  --sbom_type file-system

Active Compliance for Imported SBOMs (BYOS)

Imported SBOMs (Bring Your Own SBOM) have graduated from read-only data artifacts to fully governed assets. AE 5.24 introduces vulnerability policy gates for imported SBOMs.

Visibility without enforcement is noise. By enabling policy assessments on imported SBOMs, you can act as a gatekeeper for vendor-supplied software. For example, you can now automatically fail a build or flag a vendor release if the provided SBOM contains critical vulnerabilities that violate your internal security standards (e.g., Block if Critical Severity count > 0).

Advanced Vulnerability Search

When a major vulnerability (like Log4j or OpenSSL) is disclosed, the time to identify affected assets is critical. AE 5.24 adds a unified search filter to the Vulnerabilities List View that accepts both Vulnerability IDs (CVE) and Advisory IDs.

This reduces triage time during zero-day incidents. Security teams can paste a specific ID into a single filter to immediately identify exposure across all managed SBOMs and images, regardless of the asset type.

Expanded STIG Compliance Support

Continuing our support for public sector and regulated industries, this release expands the library of out-of-the-box compliance profiles. AE 5.24 adds support for:

  • Apache Tomcat 9
  • NGINX v2.3.0

These profiles map directly to DISA STIG standards, allowing teams to automate the validation of these ubiquitous web server technologies.

How to Get Started

  1. Upgrade to Anchore Enterprise 5.24. Release notes →
  2. Ingest a Filesystem: Use the new anchorectl sbom add --from <path> command to test scanning a local directory or VM mount.
  3. Enforce Policy: Navigate to the Policies tab and verify that your default vulnerability rules are now evaluating your imported SBOMs.
  4. Validate Compliance: Run a report against the new Tomcat or NGINX profiles if applicable to your stack.

Ready to Upgrade?

Anchore Enterprise 5.24 provides the universal visibility and active governance required to secure modern, hybrid software supply chains.

  • Existing customers: Contact support or your account manager to plan your upgrade.
  • New to Anchore? Request a demo to see the new features in action.
  • Community: Explore our open-source tools Syft and Grype for local SBOM generation and scanning.

Watch a walkthrough of new features including a demo with Alex Rybak, Director of Product.

Watch Now

Start Safe, Stay Secure: How Anchore and Chainguard Libraries Strengthen Software Supply Chains

Using DevSecOps principles to approach software development is always the ideal. We love “secure by design” at Anchore but…unfortunately there are limits to how far this practice can stretch before it breaks. The messy reality of user needs and operational constraints often forces organizations to veer off the “golden path” paved by the best intentions of our security teams.

This is precisely where comprehensive software supply chain security and compliance solutions become critical. A start safe, stay secure approach can bridge the gap between the platonic ideal of security as it collides with the mess of real-world complexity.

Today, Anchore and Chainguard are expanding their partnership to bring that same philosophy to application dependencies. With Anchore Enterprise now integrated with Chainguard Libraries for Python, joint customers can validate the critical and high-severity CVEs Chainguard remediates. This reduces risk, eliminates unnecessary triage work, and secures dependencies without disrupting existing workflows.  

What Chainguard Libraries Means for Supply Chain Security

Chainguard Libraries extends the company’s “golden path” philosophy from minimal OS images to the application dependencies built on top. It provides a set of popular open source libraries, starting with Java, Python and JavaScript. The libraries are built from source in a tamper-proof, SLSA L2-certified environment that’s immune to build-time and distribution-stage malware injections. The goal is to provide developers with a set of trusted building blocks from the very start of the development process.

Anchore Enterprise users depend on continuous scanning and policy enforcement to manage software supply chain risk. But public package registries produce a relentless stream of alerts; many of them noisy, many irrelevant, and all of them requiring investigation. Even simple patching cycles become burdensome, reactive workstreams. This integration changes that.

More details about the integration:

  • Validate Chainguard Python Library CVE Remediation in Anchore Enterprise Workflows: Anchore Enterprise users can now use their existing scanning pipelines to validate that CVEs remediated by Chainguard Libraries for Python correctly show up as fixed or absent. This brings trusted upstream content directly into Anchore; no new workflows and no operational friction. Just fewer critical vulnerabilities for your team to deal with.
  • Strengthen Dependency Security and Reduce Malware Risk: Chainguard Libraries are built in a tamper-proof environment and free from supply chain refuse. This benefits Anchore customers by eliminating unverified/compromised packages and reducing dependency triage workload.  Recent ecosystem attacks like ultralytics or num2words underscore the importance of this integration.

Teams no longer start their security journey by cleaning up unknown packages from public registries. They begin with dependencies that are already vetted, traceable, and significantly safer.

Start Safe, Stay Secure, and Stay Compliant: From Golden Path to Real-World Operations

This is where Anchore Enterprise provides the critical framework to ‘Stay Secure and Compliant,’ bridging the gap between a secure-by-design foundation and the fluid realities of day-to-day operations.

Software Supply Chain Policy Scanning and Enforcement

Chainguard Libraries enable organizations to start safe. But applications evolve. Developers regularly need to diverge from these golden bases for legitimate business reasons.

How do we stay secure, even as we take a necessary side quest from the happy path? The answer is moving from static prevention to continuous policy enforcement. Anchore Enterprise enables organizations to stay both secure and compliant by enforcing risk-based policies, even when the security principles embedded in the Chainguard artifacts conflict with the immediate needs of the organization.

Zero-Day Disclosure Alerts on Chainguard OSes & Libs

A library or OS is only secure until the next zero-day disclosure is published. Chainguard publishes a security advisory feed (an OpenVEX feed) that lists the vulnerabilities associated with the libraries it distributes. When a new vulnerability is disclosed, Anchore Enterprise detects it and flags the relevant content, which can drive either a manual or automated pull of newer content from the Chainguard Libraries repo. Anchore Enterprise’s Policy Engine lets you filter findings with simple rules so that only the most critical issues demand your attention.

Proprietary & Compiled Binaries Vulnerability Scanning

The visibility challenge extends far beyond open source language libraries. Modern enterprise applications often integrate proprietary components where the content is not in a packaged form: think 3rd-party observability (or security runtime) agents, proprietary SDKs, compiled binaries from vendors, and custom in-house tooling. Organizations still require the ability to track and remediate vulnerabilities within these closed source components.

Anchore Enterprise solves this critical gap by employing deep binary analysis techniques. This capability allows the platform to analyze compiled files (binaries) and non-standard packages to identify and report vulnerabilities, licenses, and policy violations, ensuring a truly comprehensive security posture across every layer of the stack, not just the known-good base components.

Ingest Chainguard OS & Libraries SBOMs for Full Supply Chain Visibility

Ultimately, supply chain risk visibility, compliance and risk management allow a business to make informed decisions about when and how to allocate resources. To do this well, you need a system to store, query, and generate actionable insights from your evidence.

This presents another “buy vs. build” decision. An organization can build this system itself, or it can deploy a turnkey system like Anchore Enterprise. Anchore can generate SBOMs from Chainguard OS/Libraries or ingest the SBOMs from the Chainguard Registry, providing a single system to store, query, and manage risk across your entire software supply chain.

For a closer look, please connect with us or Chainguard for a demo.

4 Lessons on the Future of Software Transparency from Steve Springett of CycloneDX

If you follow the software supply chain space, you’ve heard the noise. The industry often gets stuck in a format war loop, debating schema rather than focusing on the utility of the stored data. It’s like arguing about font kerning on a nutrition label while the ingredients list is passed over.

We recently hosted Steve Springett, Chair of the CycloneDX Core Working Group, to cut through this noise. The conversation moved past the basic definition of an SBOM and into the mechanics of true software transparency.

Here are four takeaways on where the industry is heading—and why the specific format doesn’t matter.

1. Content is king

For years, the debate has centered on “which standard will win.” But this is the wrong question to ask. The goal isn’t to produce a perfectly formatted SBOM; the goal is to reduce systemic risk and increase software transparency.

As Springett noted during the session:

“The format doesn’t really matter as long as that format represents the use cases. It’s really about the content.”

When you focus on form over function, you end up generating an SBOM to satisfy a regulator even while your security team gains no actionable intelligence. The shift we are witnessing is from generation to consumption.

Does your data describe the components? Does it capture the licensing? More importantly, does it support your specific use case, whether that’s procurement, vulnerability management, or forensics? If the content is empty, the schema validation is irrelevant.

2. When theory and reality diverge

In physical manufacturing, there is often a gap between the engineering diagrams and the finished product. Software is no different. We have the source code (the intent) and the compiled binary (the reality).

Springett ran into a situation where a manufacturer needed a way to model the dependencies of the process that created a product:

“We created a manufacturing bill of materials (MBOM) to describe how something should be built versus how it was actually built.”

This distinction is critical for integrity. A “Design BOM” tells you what libraries you intended to pull in. In this case, the Design MBOM and the Build MBOM were able to explain what parts of the process were diverging from the ideal path. Capturing this delta allows you to verify the integrity of the pipeline itself, not just the source that entered it.

3. Solving the compliance cascade

Security teams are drowning in standards. From SSDF to FedRAMP to the EU CRA, the overlap in requirements is massive, yet the evidence collection remains manual and disjointed. It is the classic “many-to-many” problem.

Machine-readable attestations are the mechanism to solve this asymmetry.

“A single attestation can attest to multiple standards simultaneously. This saves a lot of hours!”

Instead of manually filling out a spreadsheet for every new regulation, you map a single piece of evidence—like a secure build log—to multiple requirements. If you prove you use MFA for code changes, that single data point satisfies requirements in FedRAMP, PCI DSS 4.0, and SSDF simultaneously.

This shifts compliance from a manual, document-based operation to an automated process. You attest once, and the policy engine applies it everywhere.

4. Blueprints and behavioral analysis

Reproducible builds are a strong defense, but they aren’t a silver bullet. A compromised build system can very accurately reproduce the malware that has been pulled in from a transitive dependency. To catch this, you need to understand the intended behavior of the system, not just its static composition.

This is where the concept of “blueprints” comes into play.

“Blueprints are the high-level architecture AND what the application does. This is critically important because reproducible builds are fine, but can also be compromised.”

A blueprint describes the expected architecture. It maps the data flows, the expected external connections, and the boundary crossings. If your SBOM says “Calculator App,” but the runtime behavior opens a socket to an unknown IP, a static scan won’t catch it.

By comparing the architectural blueprint against the runtime reality, you can detect anomalies that standard composition analysis misses. It moves the defense line from “what is in this?” to “what is this doing?”

The Path Forward

We’ve moved past the era of format wars. The takeaways are clear: prioritize content over schema, capture the “as-built” reality, automate your compliance evidence, and start validating system behavior, not just static ingredients.

But this is just the baseline. In the full hour, Steve Springett dives much deeper into the mechanics of transparency. He discusses how to handle AI model cards to track training data and bias, how to manage information overload so you don’t drown in “red lights,” and what’s coming next in CycloneDX 1.7 regarding threat modeling and patent tracking.

To get the complete picture—and to see how these pieces fit into a “system of systems” approach—watch the full webinar. It’s the fastest way to move your strategy from passive documentation to active verification.


Learn how SBOMs, and the CycloneDX project specifically, are planning for the future. Spoiler alert: compliance, attestations and software transparency are all on deck.

Anchore Welcomes SBOM Pioneer Dr. Allan Friedman as Board Advisor

The modern software supply chain is more complex and critical than ever. In an age of high-profile breaches and new global regulations like the EU’s Cyber Resilience Act, software supply chain security has escalated from an IT concern to a top-level business imperative for every organization. In this new landscape, transparency is foundational, and the Software Bill of Materials (SBOM) has emerged as the essential element for achieving that transparency and security.

Perhaps no single individual has been more central to the global adoption of SBOMs than Dr. Allan Friedman, which makes us all the more excited to announce that Allan has joined Anchore as a Board Advisor.

A Shared Vision for a Secure Supply Chain

For years, Anchore has partnered with Allan to help build the SBOM community he first envisioned at NTIA and CISA, from active participation in his flagship “SBOM-a-Rama” events as an “SBOM-Solutions Showcase” to contributing to the Vulnerability Exploitability eXchange (VEX) standard.

Our VP of Security, Josh Bressers, has even taken over stewardship of Allan’s weekly SBOM community calls in a new form via the OpenSSF SBOM Coffee Club.

We’re thrilled to codify the partnership that has been built over many years with Allan and his vision for software supply chain security.

An In-Depth Conversation with Allan Friedman

We sat down with Allan to get his candid thoughts on the future of software supply chain security, the challenges that remain, and why he’s betting on Anchore.

You’ve been one of the primary architects of SBOM and software transparency policy at the federal level. What motivated you to join in the first place, and what have you accomplished during your tenure?

Security relies on innovation, but it also depends on collective action, building out a shared vision of solutions that we all need. My background is technical, but my PhD was actually on the economics of information security, and there are still some key areas where collective action by a community can make it easier and cheaper to do the right thing with respect to security. 

Before tackling software supply chain security, I launched the first public-private effort in the US government on vulnerability disclosure, bringing together security researchers and product security teams, and another effort on IoT “patchability.”

I certainly wasn’t the first person to talk about SBOM, but we helped build a common space where experts from across the entire software world could come together to build out a shared vision of what SBOM could look like. Like most hard problems, it wasn’t just technical, or business, or policy, and we tried to address all those issues in parallel. 

I also like to think we did so in a fashion that was more accessible than a lot of government efforts, building a real community and encouraging everyone to see each other as individuals. Dare I say it was fun? I mean, they let me run an international cybersecurity event called “SBOM-a-Rama.”

SBOM is a term that’s gone from a niche concept to a mainstream mandate. For organizations still struggling with adoption, what is the single most important piece of advice you can offer?

Even before we get to compliance, let’s talk about trust. Why would your customers believe in the security–or even the quality or value–of your products or processes if you can’t say with confidence what’s in the software? We also have safety in numbers now–this isn’t an unproven idea, and not only will peer organizations have SBOMs, your potential customers are going to start asking why you can’t do this if others can.

How do you see the regulatory environment developing in the US, Europe, or Asia as it relates to SBOMs over the next few years?

SBOM is becoming less of its own thing and more part of the healthy breakfast that is broader cybersecurity requirements and third party risk management. Over 2025, the national security community has made it clear that SBOM requirements are not just not fading away but are going to be front and center. 

Organizations that trade globally should already be paying attention to the SBOM requirements in the European Union’s Cyber Resilience Act. The requirements are now truly global: Japan has been a leader in sharing SBOM guidance since 2020, Korea integrated SBOM into their national guidance, and India has introduced SBOM requirements into their financial regulations.

Beyond government requirements, supply chain transparency has been discussed in sector-specific requirements and guidance, including PCI-DSS, the automotive sector, and telecommunications equipment.

Now that we see the relative success of SBOMs, as you look three to five years down the road, what do you see as the next major focus area, or challenge, in securing the software supply chain that organizations should be preparing for today?

As SBOM has gone from a controversial idea to a standard part of business, we’re seeing pushes for greater transparency across the software-driven world, with a host of other BOMs. 

Artificial intelligence systems should have transparency about their software, but also about their data, the underlying models, the provenance, and maybe even the underlying infrastructure. As quantum decryption shifts from “always five years away” to something we can imagine, we’ll need inventories of the encryption tools, libraries, and algorithms across complex systems. 

It would be nice if we can have transparency into the “how” as well as the “what,” and secure attestation technologies are transitioning from research to accessible automation-friendly processes that real dev shops can implement. 

And lastly, one of my new passions, software runs on hardware, and we are going to need to pay a lot more attention to where those chips are from and why they can be trusted: HBOM!

What do you hope to bring to the Anchore team and its strategy from your unique vantage point in government and policy?

I’m looking forward to working with the great Anchore team on a number of important topics. For their customers, how do we help them prioritize, and use SBOM as an excuse to level up on software quality, efficiency, and digital modernization. 

We also need to help the global community, especially policy-makers, understand the importance of data quality and completeness, not just slapping an SBOM label on every pile of JSON handy. This can be further supported by standardization activities, where we can help lead on making sure we’re promoting quality specifications. VEX is another area where we can help facilitate conversations with existing and potential users to make sure it’s being adopted, and can fit into an automated tool chain. 

And lastly, security doesn’t stop with the creation of SBOM data, and we can have huge improvements in security by working with cybersecurity tooling to make sure they understand SBOM data and can deliver value with better vulnerability management, asset management, and third party risk management tooling that organizations already use today.

Building the Future of Software Security, Together

We are incredibly excited about this partnership and what it means for our customers and the open-source community. With Allan’s guidance, Anchore is better positioned than ever to help every organization build trust in their software. To stay current on the progress that Allan Friedman and Anchore are making to secure the software industry’s supply chain, sign up for the Anchore Newsletter.

Anchore Enterprise 5.23: CycloneDX VEX and VDR Support

Anchore Enterprise 5.23 adds CycloneDX VEX and VDR support, completing our vulnerability communication capabilities for software publishers who need to share accurate vulnerability context with customers. With OpenVEX support shipped in 5.22 and CycloneDX added now, teams can choose the format that fits their supply chain ecosystem while maintaining consistent vulnerability annotations across both standards.

This release includes:

  • CycloneDX VEX export for vulnerability annotations
  • CycloneDX VDR (Vulnerability Disclosure Report) for standardized vulnerability inventory
  • Expanded policy gates for one-time scans (see below for full list)
  • STIG profiles delivered via Anchore Data Service

The Publisher’s Dilemma: When Your Customers Find “Vulnerabilities” You’ve Already Fixed

Software publishers face a recurring challenge: customers scan your delivered software with their own tools and send back lists of vulnerabilities that your team already knows about, has mitigated, or that simply don’t apply to the deployed context. Security teams waste hours explaining the same fixes, architectural decisions, and false positives to each customer—time that could be spent on actual security improvements.

VEX (Vulnerability Exploitability eXchange) standards solve this by allowing publishers to document vulnerability status alongside scan data—whether a CVE was patched in your internal branch, affects a component you don’t use, or is scheduled for remediation in your next release. With two competing VEX formats—OpenVEX and CycloneDX VEX—publishers need to support both to reach their entire ecosystem. Anchore Enterprise 5.23 completes this picture.

How CycloneDX VEX Works in Anchore Enterprise

The vulnerability annotation workflow remains identical to the OpenVEX implementation introduced in 5.22. Teams can add annotations through either the UI or API, documenting whether vulnerabilities are:

  • Not applicable to the specific deployment context
  • Mitigated through compensating controls
  • Under investigation for remediation
  • Scheduled for fixes in upcoming releases

The difference is in the export. When you download the vulnerability report, you can now select CycloneDX VEX format instead of (or in addition to) OpenVEX. The annotation data translates cleanly to either standard, maintaining context and machine-readability.

Adding Annotations

Via UI: Navigate to the Vulnerability tab for any scanned image, select vulnerabilities requiring annotation, and choose Annotate to add status and context.

Via API: Use the /vulnerabilities/annotations endpoint to programmatically apply annotations during automated workflows.
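
A minimal sketch of what such an API call could look like is below. The endpoint path is the one referenced above, but the request body field names are assumptions for illustration rather than the documented schema:

# Sketch only: payload field names are assumed, not the documented request schema.
$ curl -u "$ANCHORE_USER:$ANCHORE_PASS" \
  -H "Content-Type: application/json" \
  -X POST "https://enterprise.example.com/v1/vulnerabilities/annotations" \
  -d '{
        "image_digest": "sha256:…",
        "vulnerability_id": "CVE-2024-12345",
        "status": "not_affected",
        "justification": "vulnerable_code_not_in_execute_path",
        "note": "Component is present but never executed in this deployment."
      }'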

Exporting CycloneDX VEX

After annotations are applied:

  1. Navigate to the Vulnerability Report for your image
  2. Click the Export button above the vulnerability table
  3. In the export dialog, select CycloneDX VEX (JSON or XML format)
  4. Download the machine-readable document for distribution

The exported CycloneDX VEX document includes all vulnerability findings with their associated annotations, PURL identifiers for precise package matching, and metadata about the scanned image. Customers can import this document into CycloneDX-compatible tools to automatically update their vulnerability databases with your authoritative assessments.
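
For orientation, here is a minimal sketch of the shape of a CycloneDX VEX document (the CVE, PURL, and analysis detail are illustrative placeholders, not real scan output):

{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "vulnerabilities": [
    {
      "id": "CVE-2024-12345",
      "analysis": {
        "state": "not_affected",
        "justification": "code_not_reachable",
        "detail": "Vulnerable function is never invoked in this image."
      },
      "affects": [
        { "ref": "pkg:npm/example-lib@2.3.1" }
      ]
    }
  ]
}

A customer’s tooling that matches on the same PURL can then apply the not_affected status automatically instead of re-opening the finding.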

VDR: Standardized Vulnerability Disclosure

The Vulnerability Disclosure Report (VDR) provides a complete inventory of identified vulnerabilities in CycloneDX format, regardless of annotation status. Unlike previous raw exports, VDR adheres to the CycloneDX standard for vulnerability disclosure, making it easier for security teams and compliance auditors to process the data.

VDR serves different use cases than VEX:

  • VEX communicates vulnerability status (not applicable, mitigated, under investigation)
  • VDR provides comprehensive vulnerability inventory (all findings with available metadata)

Organizations can export both formats from the same Export dialog: VDR for complete vulnerability disclosure to auditors or security operations teams, and VEX for communicating remediation status to customers or downstream consumers.

To generate a VDR, click the Export button above the vulnerability table and select CycloneDX VDR (JSON or XML format). The resulting CycloneDX document includes vulnerability identifiers, severity ratings, affected packages with PURLs, and any available fix information.

Expanded Policy Gate Support for One-Time Scans

Anchore One-Time Scans now support eight additional policy gates beyond vulnerability checks, enabling comprehensive compliance evaluation directly in CI/CD pipelines without persistent SBOM storage.

This expansion allows teams to enforce compliance requirements—NIST SSDF, CIS Benchmarks, FedRAMP controls—at build time through the API: evaluate Dockerfile security practices, verify license compliance, check for exposed credentials, and validate package integrity before artifacts reach registries.
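
As a purely hypothetical sketch of what such a build-time gate check could look like (the endpoint path, query parameter, and response fields below are assumptions for illustration, not the documented interface):

# Hypothetical only: endpoint, parameters, and response fields are assumed for illustration.
$ curl -u "$ANCHORE_USER:$ANCHORE_PASS" \
  -F "sbom=@build-sbom.json" \
  "https://enterprise.example.com/v1/one-time-scans?policy=nist-ssdf" \
  | jq '.policy_evaluation.final_action'

A non-passing result can then fail the pipeline step before the artifact is ever pushed to a registry.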

STIG profiles delivered via Anchore Data Service

STIG profiles are now delivered through Anchore Data Service, replacing the previous feed service architecture. DoD customers receive DISA STIG updates with the same enterprise-grade reliability as other vulnerability data, supporting both static container image evaluations and runtime Kubernetes assessments required for continuous ATO processes.

The combination means organizations can implement policy-as-code for both commercial compliance frameworks and DoD-specific requirements through a single, streamlined scanning workflow.

Get Started with 5.23

Existing Anchore Enterprise Customers:

  • Contact your account manager to upgrade to Anchore Enterprise 5.23
  • Review implementation documentation for CycloneDX VEX/VDR configuration
  • Reach out to your Customer Success team for guidance on annotation workflows

The EU CRA “Compliance Cascade”: Why Your Customers (and Acquirers) Now Demand a Verifiable DevSecOps Pipeline

The EU’s Cyber Resilience Act (CRA) is fundamentally changing how we buy and build software. This isn’t just another regulation; it’s re-shaping the market landscape. We sat down with industry experts Andrew Katz (CEO, Orcro Limited & Head of Open Source, Bristows LLP), Leon Schwarz (Principal, GTC Law Group), and Josh Bressers (VP of Security, Anchore) to discuss how best to take advantage of and prepare for this coming change.

The key takeaway? You can either continue to view compliance as a “regulatory burden” or invert the narrative and frame it as a “competitive differentiator.” The panel revealed that market pressure is already outpacing regulation, and a verifiable, automated compliance process is the new standard for winning deals and proving your company’s value.

The “Compliance Cascade” is Coming

Long before a regulator knocks on your door, your biggest customer will. The new wave of regulations creates a shared responsibility that cascades down the entire supply chain.

As Leon Schwarz explained, “If you sell enough software… you’re going to find that your customers are going to start asking the same questions that all of these regulations are asking”. Andrew Katz noted that this responsibility is recursive: “[Your] responsibility will actually be for all components at all levels of the stack. You know, it doesn’t matter which turtle you’re sitting on”.

The panel made it clear: the “compliance cascade” is about to begin. Once one major enterprise in your supply chain takes the EU CRA seriously, they will contractually force that requirement onto every supplier they have. This is a fundamentally different pressure than traditional, internal audits.

EU CRA Compliance as Market Differentiator

During the discussion, Leon Schwarz described the real-world pressure this compliance cascade creates for suppliers. His “big fear is that during diligence, somebody’s going to come in and say, ‘You didn’t do the reasonable thing here. You didn’t do what everybody else is doing'”.

That fear is the sound of the market setting a new baseline. As the “compliance cascade” forces responsibility down the supply chain, “doing what everyone else is doing” becomes the new definition of “reasonable” compliance during procurement. Any supplier who isn’t falling in line becomes the odd one out—a high-risk partner. You will be disqualified from contracts before you even get a chance to demonstrate your value.

But this creates a fundamental, short-term opportunity.

In the beginning, many vendors and suppliers won’t be compliant. Proactive, EU CRA-ready suppliers will be the exception. This is the moment to re-frame the challenge: compliance isn’t a hurdle to be cleared; it’s a competitive differentiator that wins you the deal.

Early adopters will partner with other suppliers who take this change seriously. By having a provable process, they will be the first to adapt to the new compliance landscape, giving them the ability to win business while their competitors are still scrambling to catch up.

A Good Process Increases Your Acquisition Valuation

This new standard of diligence impacts more than just sales; it will materially affect your company’s value during an M&A event.

As Andrew Katz explained, “An organization that’s got a well-run [compliance] process is actually going to be much more valuable; different than an organization where they have to retrofit the process after the transaction has closed”.

An acquirer isn’t just buying your product; they are also buying your access to markets. A company that needs compliance tacked on has a massive, hidden liability, and the buyer will discount your valuation to compensate for the additional risk.

The Real Goal Isn’t the SBOM; It’s the Evidence

For those new to this, the most critical change is the requirement to create evidence. Just as compliance is shifting from an “annual ritual” to a continuous process, new standards demand that evidence be collected continuously.

Leon Schwarz summed up the new gold standard for auditors and acquirers: “It’s not enough to have a policy. It’s not enough to have a process. You have to have materials that prove you follow it”. Your process is the “engine” that creates this continuous stream of evidence; an SBOM is just one piece of that evidence. 

As Andrew Katz noted, an SBOM is “just a snapshot”, which is insufficient in a world of “continuous development”. But a process that generates SBOMs for every commit, build, or artifact creates a never-ending stream of compliance evidence.

CompOps is How You Automate Trust

This new, continuous demand for proof requires a fundamentally different approach: CompOps (Compliance Operations).

With the EU CRA demanding SBOMs for every release and PCI-DSS 4 requiring scans every three months, compliance must become “part of our development and operations processes”. This is where CompOps, which borrows its “resilient and repeatable” principles from DevOps, becomes essential. It’s not about manual checks; it’s about building automated feedback loops.

Leon described this perfectly: “As developers figure out that if [they] use the things in this bucket of compliant components that their code is automatically checked in; those are the components they will default to”. That “bucket” is CompOps in action—an automated process that shapes developer behavior with a simple, positive incentive (a green checkmark) and generates auditable proof at the same time.
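
One minimal way to wire that green-checkmark incentive into a pipeline is a severity gate on every build, sketched below with Anchore’s open source Grype CLI (the image name and evidence path are placeholders):

# Fail the build if the image contains anything at or above the agreed severity,
# and keep the scan output as a timestamped piece of compliance evidence.
grype registry:example.com/team/app:1.4.2 -o json --fail-on high > evidence/grype-$(date +%F).json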

Are You Building a Speed Bump or a Navigation System?

The experts framed the ultimate choice: you can treat compliance as a “speed bump” that slows developers and creates friction. Or you can build a “navigation system”.

A good CompOps process acts as that navigation, guiding developers to the path of least resistance that also happens to be the compliant path. This system makes development faster while automatically producing the evidence you need to win deals and prove your value.

This is a fundamentally different way of thinking about compliance, one that moves it from a cost center to a strategic asset.

This was just a fraction of the insights from this expert discussion. The full webinar covers how to handle deep-stack dependencies, specific license scenarios, and how to get buy-in from your leadership.

To learn how to turn compliance from a burden into your biggest competitive advantage, watch the complete on-demand webinar, “The Regulation and Liability of Open Source Software,” today.


Security Without Friction: How RepoFlow Created a DevSecOps Package Manager with Grype

RepoFlow was created with a clear goal: to provide a simple package management alternative that just works without the need for teams to manage or maintain it. Many existing solutions required constant setup, tuning, and oversight. RepoFlow focused on removing that overhead entirely, letting organizations run a reliable system that stays out of the way. 

As adoption grew, one request came up often: built-in vulnerability scanning.

When “Nice-to-Have” Became Non-Negotiable

Package management complexity has reached a breaking point. Developers context-switch between npm registries, container repositories, language-specific package systems, and artifact storage platforms. Each ecosystem brings its own interface, authentication model, and workflow patterns. Tomer Cohen founded RepoFlow in 2024 to collapse this fragmentation into a single, intuitive interface where platform teams could manage packages without cognitive overhead.

Early traction validated the approach. Development teams appreciated the consolidation. But procurement conversations kept hitting the same obstacle: “We can’t adopt this without vulnerability scanning.”

This wasn’t a feature request; it was a compliance requirement. Security scanning has become table stakes for developer tooling in 2025, not because it provides competitive differentiation, but because organizations can’t justify purchases without it. The regulatory landscape around software supply chain security, from NIST SSDF to emerging EU Cyber Resilience Act requirements, means security visibility isn’t optional anymore.

But here’s the problem that most tool builders fail to solve: security tools are notorious for adding back the complexity they’re meant to protect against. Slow scans block workflows. Heavy resource consumption degrades performance. Separate interfaces force context switching. Authentication complexity creates friction. For a product whose entire value proposition centered on reducing cognitive load, adding security capabilities meant walking a tightrope. Ship it wrong, and the product’s core promise evaporates.

RepoFlow needed vulnerability scanning that was fundamentally different from traditional security tooling: fast enough not to disrupt workflows, lightweight enough not to burden infrastructure, and integrated enough to avoid context switching.

The Solution: Grype and Syft to the Rescue

RepoFlow’s engineers started from a blank slate. Two options surfaced:

  1. Build a custom scanner: maximum control, but months of work and constant feed maintenance.
  2. Integrate an open source tool: faster delivery, but only if the tool met strict performance and reliability bars.

They needed something fast, reliable, and light enough to run alongside package operations. Anchore’s Grype checked every box.

Grype runs as a lightweight CLI directly inside RepoFlow. Scans trigger on demand, initiated by developers rather than background daemons. It handles multiple artifact types (containers, npm, Ruby gems, PHP packages, and Rust Cargo crates) without consuming extra resources.

Under the hood, results flow through a concise pattern:

  1. Generate SBOMs (Software Bills of Materials) with Syft.
  2. Scan those SBOMs with Grype for known CVEs (Common Vulnerabilities and Exposures).
  3. Parse the JSON output, deduplicate results, and store in RepoFlow’s database.
  4. Surface findings in a new Security Scan tab, right beside existing package details.

Parallel execution and caching keep even large-image scans under a minute. The UI remains responsive; users never leave the page.
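
A minimal shell sketch of that four-step pattern looks like the following (the image name is a placeholder; RepoFlow drives the same CLIs from its own service code):

# 1. Generate an SBOM for the artifact (a container image in this example)
syft registry:example.com/team/app:1.4.2 -o syft-json > sbom.json

# 2. Scan the SBOM for known CVEs, emitting JSON for downstream parsing
grype sbom:./sbom.json -o json > findings.json

# 3. Flatten and deduplicate the findings before storing them alongside the package
jq '[.matches[] | {cve: .vulnerability.id, severity: .vulnerability.severity,
     package: .artifact.name, version: .artifact.version}] | unique' findings.json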

This looks straightforward (run a scan, show a table), but the user experience determines whether developers embrace it or work around it.


Buy vs. Build (What the Evaluation Revealed)

RepoFlow benchmarked several approaches:

Criterion | Requirement | Why Grype Won
Speed | Must not introduce developer friction | Sub-minute scan times on standard containers
Reliability | Must work across languages | Consistent results across npm, Ruby, PHP, Rust
Resource use | Must be lightweight | Minimal CPU / memory impact
Maintainability | Must stay current | Active Anchore OSS community & frequent DB updates

During testing, RepoFlow opened a few GitHub issues around database sync behavior. The Anchore OSS team responded quickly, closing each one: an example of open source collaboration shortening the feedback loop from months to days.

The result: an integration that feels native, not bolted on.


The Payoff: Context Without Complexity

Developers now see vulnerabilities in the same pane where they manage packages. No new credentials, no separate dashboards, no waiting for background jobs to finish. Security became part of the workflow rather than a parallel audit.

Adoption followed. Enterprise prospects who had paused evaluations re-engaged. Support tickets dropped. Teams stopped exporting data between tools just to validate package risk.

Anchore’s open-source stack (Syft for SBOMs, Grype for vulnerability scanning) proved that open foundations can deliver enterprise-grade value without enterprise-grade overhead.

Getting Started

For RepoFlow Users

Vulnerability scanning is available in RepoFlow version 0.4.0 and later. The Security Scan tab appears in package detail views for all supported artifact types.

RepoFlow website: repoflow.io

Documentation and configuration guidance: docs.repoflow.io

For Tool Builders Considering Similar Integrations

Anchore’s open source projects, Syft and Grype, provide the foundation RepoFlow leveraged.

The Anchore OSS community maintains active discussions on integration patterns, configuration approaches, and implementation guidance. Contributing improvements and reporting issues benefits the entire ecosystem, just as RepoFlow’s database update feedback improved the tools for all users.

Anchore Enterprise 5.22: OpenVEX, PURLs, and RHEL EUS Support

Anchore Enterprise 5.22 introduces three capabilities designed to make vulnerability management clearer, cleaner, and more trustworthy: 

  • VEX annotations with OpenVEX export
  • PURLs by default
  • RHEL Extended Update Support (EUS) indicators

Each of these features adds context and precision to vulnerability data—helping teams reduce noise, speed triage, and strengthen communication across the supply chain.

Security teams are flooded with vulnerability alerts that lack actionable context. A single CVE may appear in thousands of scans—even if it’s already fixed, mitigated, or irrelevant to the deployed package. The emerging VEX (Vulnerability Exploitability eXchange) standards aim to fix that by allowing publishers to communicate the status of vulnerabilities alongside scan data.

Anchore Enterprise 5.22 builds on this movement with better data hygiene and interoperability: improving how vulnerabilities are described (via annotations), identified (via PURLs), and evaluated (via RHEL EUS awareness).

VEX Annotations and OpenVEX Support

Anchore Enterprise users can now add annotations to individual vulnerabilities on an image—via either the API or the UI—to record their status with additional context. These annotated findings can be exported as an OpenVEX document, enabling teams to share accurate vulnerability states with downstream consumers.

When customers scan your software using their own tools, they may flag vulnerabilities that your team already understands or has mitigated. Annotations let publishers include authoritative explanations—such as “not applicable,” “patched in internal branch,” or “mitigated via configuration.” Exporting this context in OpenVEX, a widely recognized standard, prevents repetitive triage cycles and improves trust across the supply chain.

(CycloneDX VEX support is coming next, ensuring full compatibility with both major standards.)

The annotation workflow supports multiple status indicators that align with VEX standards, allowing teams to document whether vulnerabilities are:

  • Not applicable to the specific deployment context
  • Mitigated through compensating controls
  • Under investigation for remediation
  • Scheduled for fixes in upcoming releases

Once annotations are applied to an image, users can download the complete vulnerability list with all contextual annotations in OpenVEX format—a standardized, machine-readable structure that security tools can consume automatically. Read the docs →
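
For orientation, a minimal sketch of the OpenVEX structure the export produces looks like this (the identifiers and package reference are illustrative placeholders):

{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://example.com/vex/2025-0001",
  "author": "Example Publisher Security Team",
  "timestamp": "2025-06-01T12:00:00Z",
  "version": 1,
  "statements": [
    {
      "vulnerability": { "name": "CVE-2024-12345" },
      "products": [ { "@id": "pkg:npm/example-lib@2.3.1" } ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path"
    }
  ]
}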

PURLs by Default

All Anchore Enterprise APIs now return Package URLs (PURLs) by default for software components where one exists. A PURL provides a canonical, standardized identity for a package—combining its ecosystem, name, and version into a single unambiguous reference.

The PURL format follows the specification:

pkg:ecosystem/namespace/name@version (e.g., pkg:npm/lodash@4.17.21)

Unlike older CPE identifiers, PURLs precisely map vulnerabilities to the correct package—even when names or versions overlap across ecosystems. This precision improves downstream workflows such as VEX annotations, ensuring that vulnerability status is attached only to the intended software component, not an entire family of similarly named packages. This leads to more reliable matching, fewer false correlations, and a cleaner chain of evidence in SBOM and VEX exchanges.

For packages without ecosystem-specific PURLs, Anchore Enterprise continues to provide alternative identifiers while working toward comprehensive PURL coverage.

PURLs + VEX Workflows

PURLs significantly improve the precision of VEX annotations. When documenting that a vulnerability is not applicable or has been mitigated, the PURL ensures the annotation applies to exactly the intended package—not a range of similarly-named packages across different ecosystems.

This precision prevents misapplication of vulnerability status when:

  • Multiple ecosystems contain packages with identical names
  • Different versions exist across a software portfolio
  • Vulnerability annotations need to be narrowly scoped
  • Automated tools process VEX documents

For organizations distributing software to customers with their own security scanning tools, PURLs provide the unambiguous identifiers necessary for reliable vulnerability communication.

RHEL EUS Indicators

Anchore Enterprise now detects and flags RHEL Extended Update Support (EUS) content in container images, applying the correct EUS vulnerability data automatically.

RHEL EUS subscribers receive backported fixes for longer lifecycles than standard RHEL releases. Without this visibility, scanners can misclassify vulnerabilities—either missing patches or reporting false positives. The new EUS indicators verify that vulnerability assessments are based on the right lifecycle data, ensuring consistent and accurate reporting.

If an image is based on an EUS branch (e.g., RHEL 8.8 EUS), Anchore now displays that context directly in the vulnerability report, confirming that all findings use EUS-aware data feeds.

How to Get Started

  1. Upgrade to Anchore Enterprise 5.22. Release notes →
  2. Add annotations: via UI (Vulnerability tab → Annotate) or API (/vulnerabilities/annotations).
  3. Export OpenVEX: from the Vulnerability Report interface or CLI to share with partners.
  4. Check EUS status: in the Vulnerability Report summary—look for “EUS Detected.”
  5. Integrate PURLs: via API or SBOM exports for precise package mapping.

Ready to Upgrade?

Anchore Enterprise 5.22 delivers the vulnerability communication and software identification capabilities that modern software distribution requires. The combination of OpenVEX support, PURL integration, and RHEL EUS detection enables teams to manage vulnerability workflows with precision while reducing noise in security communications.

Existing customers: Contact your account manager to upgrade to Anchore Enterprise 5.22 and begin leveraging OpenVEX annotations, PURL identifiers, and EUS detection.

Technical guidance: Visit our documentation site for detailed configuration and implementation instructions.

New to Anchore? Request a demo to see these features in action.


Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Anchore Assessed “Awardable” for Department of Defense Work in the P1 Solutions Marketplace

SANTA BARBARA, CA – October 9, 2025 – Anchore, a leading provider of software supply chain security solutions, today announced that it has achieved “Awardable” status through the Platform One (P1) Solutions Marketplace.

The P1 Solutions Marketplace is a digital repository of post-competition, five-minute, readily awardable pitch videos, which address the Government’s greatest requirements in hardware, software, and service solutions.

Anchore’s solutions are designed to secure the software supply chain through comprehensive SBOM generation, vulnerability scanning, and compliance automation. They are used by a wide range of businesses, including Fortune 500 companies, government agencies, and organizations across defense, healthcare, financial services, and technology sectors.

“We’re honored to achieve Awardable status in the P1 Solutions Marketplace,” said Tim Zeller, Senior Vice President of Sales and Strategic Partnerships at Anchore. “Nation-state actors and advanced persistent threats are actively targeting the open source supply chain to infiltrate Department of Defense infrastructure. Our recognition in the P1 marketplace demonstrates that Anchore’s approach—combining open source tools like Syft and Grype with enterprise-grade solutions—can help defense organizations detect and defend against these sophisticated supply chain attacks at scale.”

Anchore’s video, “Secure Your Software Supply Chain with Anchore Enterprise,” accessible only by government customers on the P1 Solutions Marketplace, presents an actual use case in which the company demonstrates automated SBOM generation, vulnerability detection, and compliance monitoring across containerized and traditional software deployments. Anchore was recognized among a competitive field of applicants to the P1 Solutions Marketplace whose solutions demonstrated innovation, scalability, and potential impact on DoD missions. Government customers interested in viewing the video solution can create a P1 Solutions Marketplace account at https://p1-marketplace.com/.

How Sabel Systems Reduced Vulnerability Review Time by 75% While Maintaining Zero Critical Vulnerabilities

We’re excited to share a new case study highlighting how Sabel Systems transformed their security review process while scaling their Code Foundry platform to support Department of Defense (DoD) missions.

Sabel Systems provides managed DevSecOps pipeline-as-a-service for DoD contractors developing mission-critical vehicle systems. With a lean team of 10 supporting over 100 developers across hundreds of applications, they faced a critical challenge: their manual vulnerability review process couldn’t keep pace with growth.


⏱️ Can’t wait till the end?
📥 Download the case study now 👇👇👇

Sabel Systems Case Study

The Challenge: Security Reviews That Couldn’t Scale

When you’re providing platform-as-a-service for DoD vehicle systems, security isn’t optional—it’s mission-critical. But Sabel Systems was facing a bottleneck that threatened their ability to serve their growing customer base.

Their security team spent 1-2 weeks manually reviewing vulnerabilities for each new build of Code Foundry. As Robert McKay, Digital Solutions Architect at Sabel Systems, explains: “We’d have to first build the actual software on the image and then go through all the different connection points and dependencies.”

This wasn’t just slow—it was unsustainable. Code Foundry serves Army, Air Force, and Navy contractors who need to achieve Authority to Operate (ATO) for their systems. These customers operate in IL5 (controlled unclassified) environments on NIPR networks, with strict requirements for zero critical vulnerabilities. The manual process meant delayed deliveries and limited capacity for growth.

Adding to the complexity, Code Foundry is designed to be cloud-agnostic and CI/CD-agnostic, deploying across different DoD-approved cloud providers and integrating with various version control systems (GitLab, Bitbucket, GitHub) and CI/CD tools (GitLab CI, Jenkins). Any security solution would need to work seamlessly across this diverse technical landscape—all while running in air-gapped, government-controlled environments.

The Solution: Automated Security at DoD Scale

Sabel Systems selected Anchore Enterprise to automate their vulnerability management without compromising their strict security standards. The results speak for themselves: vulnerability review time dropped from 1-2 weeks to just 3 days—a 75% reduction that enabled the same 10-person team to support exponentially more applications.

Here’s what made the difference:

Automated scanning integrated directly into CI/CD pipelines. Anchore Enterprise scans every container image immediately after build, providing instant feedback on security posture. Rather than security reviews becoming a bottleneck, they now happen seamlessly in the background while developers continue working.

On-premises deployment built for DoD requirements. Anchore Enterprise runs entirely within government-approved infrastructure, meeting IL5 compliance requirements. Pre-built policy packs for FedRAMP, NIST, and STIG frameworks enable automated compliance checking—no external connectivity required.

API-first architecture that works anywhere. Deploying via Helm charts into Kubernetes clusters, Anchore Enterprise integrates with whatever CI/CD stack each military branch prefers. Sabel Systems embedded AnchoreCTL directly into their pipeline images, keeping all connections within the cluster without requiring SSH access to running pods.

Perhaps most importantly for DoD work, Anchore Enterprise enables real-time transparency for government auditors. Instead of waiting weeks for static compliance reports, reviewers access live security dashboards showing the current state of all applications.

As Joe Bem, Senior Manager at Sabel Systems, notes: “The idea is that you can replace your static contract deliverables with dynamic ones—doing review meetings based on live data instead of ‘here’s my written report that took me a week to write up on what we found last week,’ and by the time the government gets it, it’s now 2-3 weeks old.”

Results: Security That Enables Growth

The implementation of Anchore Enterprise transformed how Code Foundry operates:

  • 75% faster vulnerability reviews allowed the security team to scale without adding headcount
  • Zero critical vulnerabilities maintained across 100+ applications in multiple IL5 environments
  • Real-time audit transparency replaced weeks-old static reports with live compliance dashboards
  • Faster ATO processes for DoD contractors through proactive security feedback

This isn’t just about efficiency—it’s about enabling Sabel Systems to serve more DoD missions without compromising security standards. Rather than security reviews constraining business growth, they now happen seamlessly as part of the development workflow.

Learn More

The full case study dives deeper into the technical architecture, specific compliance requirements, and implementation details that enabled these results. Whether you’re supporting defense contractors, operating in regulated environments, or simply looking to scale your security operations, Sabel Systems’ experience offers valuable insights.

Download the complete Sabel Systems case study to see how automated vulnerability management can transform your security posture while enabling growth.

Questions about implementing Anchore Enterprise in your environment? Get in touch with our team—we’re here to help.


Learn how to harden your containers and make them “STIG-Ready” with our definitive guide.

Complete Guide to Hardening Containers with STIG | Anchore

Compliance Requirements for DISA’s Security Technical Implementation Guides (STIGs)

Fast Facts

  • To help organizations meet the DoD’s security controls, DISA develops Security Technical Implementation Guides (STIGs) to provide guidance on how to secure operating systems, network devices, software, and other IT systems. DISA regularly updates and releases new STIG versions.
  • STIG compliance is mandatory for any organization that operates within the DoD network or handles DoD information, including DoD contractors and vendors, government agencies, and DoD IT teams.
  • With more than 500 total STIGs (and counting!), your organization can streamline the STIG compliance process by identifying applicable STIGs upfront, prioritizing fixes, establishing a maintenance schedule, and assigning clear responsibilities to team members.
  • Tools like DISA STIG Viewer, Anchore Enterprise, and SCAP Compliance Checker can help track and automate STIG compliance.

In the rapidly modernizing landscape of cybersecurity compliance, evolving to a continuous compliance posture is more critical than ever, particularly for organizations involved with the Department of Defense (DoD) and other government agencies. In February 2025, Microsoft reported that governments are in the top 3 most targeted sectors worldwide.

At the heart of the DoD’s modern approach to software development is the DoD Enterprise DevSecOps Reference Design, commonly implemented as a DoD Software Factory. A key component of this framework is adhering to the Security Technical Implementation Guides (STIGs) developed by the Defense Information Systems Agency (DISA).

STIG compliance within the DevSecOps pipeline not only accelerates the delivery of secure software but also embeds robust security practices directly into the development process, safeguarding sensitive data and reinforcing national security.

This comprehensive guide will walk you through what STIGs are, who should care about them, the levels and key categories of STIG compliance, how to prepare for the compliance process, and tools available to automate STIG implementation and maintenance. Read on for the full overview or skip ahead to find the information you need:

  1. What are STIGs?
  2. Who needs to comply?
  3. Levels of STIG compliance
  4. Key categories of requirements
  5. Preparing for the STIG compliance process
  6. STIG compliance tools

What are STIGs and who should care?

Understanding DISA and STIGs

The Defense Information Systems Agency (DISA) is the DoD agency responsible for delivering information technology (IT) support to ensure the security of U.S. national defense systems. To help organizations meet the DoD’s rigorous security controls, DISA develops Security Technical Implementation Guides (STIGs).

STIGs are configuration standards that provide prescriptive guidance on how to secure operating systems, network devices, software, and other IT systems. They serve as a secure configuration standard to harden systems against cyber threats.

For example, a STIG for the open source Apache web server would specify that encryption is enabled for all traffic (incoming or outgoing). This would require generating SSL/TLS certificates in the correct location on the server, updating the server’s configuration file to reference the certificate, and reconfiguring the server to serve traffic from a secure port rather than the default insecure port.
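
As a hedged illustration of the kind of configuration change such a requirement drives (hostnames and certificate paths are placeholders, and your STIG checklist specifies the exact settings), an Apache httpd virtual host might end up looking roughly like this:

# Serve traffic only over TLS on port 443 instead of the default insecure port
Listen 443 https

<VirtualHost *:443>
    ServerName www.example.mil
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/www.example.mil.crt
    SSLCertificateKeyFile /etc/pki/tls/private/www.example.mil.key
    SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1   # allow only modern TLS versions
</VirtualHost>

# Redirect any remaining plaintext requests to the encrypted endpoint
<VirtualHost *:80>
    Redirect permanent / https://www.example.mil/
</VirtualHost>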

Who should care about STIG compliance?

In its annual Software Supply Chain Security report, Anchore found that the average organization complies with 4.9 cybersecurity compliance standards. STIG compliance, in particular, is mandatory for any organization that operates within the DoD network or handles DoD information. This includes:

  • DoD Contractors and Vendors: Companies providing products or services to the DoD, a.k.a. the defense industrial base (DIB)
  • Government Agencies: Federal agencies interfacing with the DoD
  • DoD Information Technology Teams: IT professionals within the DoD responsible for system security

Connection to the RMF and NIST SP 800-53

The Risk Management Framework (RMF)—known formally as NIST 800-37—is a framework that integrates security and risk management into IT systems as they are being developed. The STIG compliance process outlined below is directly integrated into the higher-level RMF process. As you follow the RMF, the individual steps of STIG compliance will be completed in turn.

STIGs are also closely connected to NIST 800-53, colloquially known as the “Control Catalog”. NIST 800-53 outlines security and privacy controls for all federal information systems; the controls are not prescriptive about implementation, only about the best practices and outcomes that need to be achieved.

As DISA developed the STIG compliance standard, they started with the NIST 800-53 controls as a baseline, then “tailored” them to meet the needs of the DoD; these customized security best practices are known as Security Requirements Guides (SRGs). To remove all ambiguity around how to meet these higher-level best practices, STIGs were created with implementation-specific instructions.

For example, an SRG will mandate that all systems utilize a cybersecurity best practice, such as role-based access control (RBAC), to prevent users without the correct privileges from accessing certain systems. A STIG, on the other hand, will detail exactly how to configure an RBAC system to meet the highest security standards.

Levels of STIG Compliance

The DISA STIG compliance standard uses Severity Category Codes to classify vulnerabilities based on their potential impact on system security. These codes help organizations prioritize remediation efforts. The three Severity Category Codes are:

  1. Category I (Cat I): These are the highest severity or highest risk vulnerabilities, allowing an attacker immediate access to a system or network or allowing superuser access. Due to their high-risk nature, these vulnerabilities must be addressed immediately.
  2. Category II (Cat II): These vulnerabilities provide information with a high potential of giving access to intruders. These findings are considered a medium risk and should be remediated promptly.
  3. Category III (Cat III): These vulnerabilities constitute the lowest risk, providing information that could potentially lead to compromise. Although not as pressing as Cat I and Cat II issues, it is still important to address these vulnerabilities to minimize risk and enhance overall security.

Understanding these categories is crucial in the STIG process, as they guide organizations in prioritizing remediation of vulnerabilities.

Key categories of STIG requirements

Given the extensive range of technologies used in DoD environments, there are nearly 500 STIGs (as of May 2025) applicable to different systems, devices, applications, and more. While we won’t list all STIG requirements and benchmarks here, it’s important to understand the key categories and who they apply to.

1. Operating System STIGs

Applies to: System Administrators and IT Teams managing servers and workstations

Examples:

  • Microsoft Windows STIGs: Provides guidelines for securing Windows operating systems.
  • Linux STIGs: Offers secure configuration requirements for various Linux distributions.

2. Network Device STIGs

Applies to: Network Engineers and Administrators

Examples:

  • Network Router STIGs: Outlines security configurations for routers to protect network traffic.
  • Network Firewall STIGs: Details how to secure firewall settings to control access to networks.

3. Application STIGs

Applies to: Software Developers and Application Managers

Examples:

  • Generic Application Security STIG: Outlines the security best practices needed to be STIG compliant
  • Web Server STIG: Provides security requirements for web servers.
  • Database STIG: Specifies how to secure database management systems (DBMS).

4. Mobile Device STIGs

Applies to: Mobile Device Administrators and Security Teams

Examples:

  • Apple iOS STIG: Provides guidance for securing Apple mobile devices used within the DoD.
  • Android OS STIG: Details security configurations for Android devices.

5. Cloud Computing STIGs

Applies to: Cloud Service Providers and Cloud Infrastructure Teams

Examples:

  • Microsoft Azure SQL Database STIG: Offers security requirements for Azure SQL Database cloud service.
  • Cloud Computing OS STIG: Details secure configurations for any operating system offered by a cloud provider that doesn’t have a specific STIG.

Each category addresses specific technologies and includes a STIG checklist to ensure all necessary configurations are applied. 

See an example of a STIG checklist for “Application Security and Development” here.

How to Prepare for the STIG Compliance Process

Achieving DISA STIG compliance involves a structured approach. Here are the stages of the STIG process and tips to prepare:

Stage 1: Identifying Applicable STIGs

With hundreds of STIGs relevant to different organizations and technology stacks, this step should not be underestimated. First, conduct an inventory of all systems, devices, applications, and technologies in use. Then, review the complete list of STIGs to match each to your inventory to ensure that all critical areas requiring secure configuration are addressed. This step is essential to avoiding gaps in compliance.

Tip: Use automated tools to scan your environment, then match assets to relevant STIGs.
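
As a rough sketch of the inventory half of that tip (matching the results to specific STIGs still takes human judgment), an SBOM tool such as Syft can report what a host or image actually contains:

# Record the OS distribution and installed packages for a host filesystem,
# which can then be matched against the applicable OS and application STIGs
syft dir:/ -o json > host-inventory.json
jq '.distro' host-inventory.json   # which OS release is this, for picking the right OS STIG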

Stage 2: Implementation

After you’ve mapped your technology to the corresponding STIGs, the process of implementing the security configurations outlined in the guides begins. This step may require collaboration between IT, security, and development teams to ensure that the configurations are compatible with the organization’s infrastructure while enforcing strict security standards. Be sure to keep detailed records of changes made.

Tip: Prioritize implementing fixes for Cat I vulnerabilities first, followed by Cat II and Cat III. Depending on the urgency and needs of the mission, ATO can still be achieved with partial STIG compliance. Prioritizing efforts increases the chances that partial compliance is permitted.

Stage 3: Auditing & Maintenance

After the STIGs have been implemented, regular auditing and maintenance are critical to ensure ongoing compliance, verifying that no deviations have occurred over time due to system updates, patches, or other changes. This stage includes periodic scans, manual reviews, and remediation of any identified gaps.

Organizations should also develop a plan to stay informed about new STIG releases and updates from DISA. You can sign up for automated emails at https://www.cyber.mil/stigs.

Tip: Establish a maintenance schedule and assign responsibilities to team members. Alternatively, you can adopt a policy-as-code approach to continuous compliance by embedding STIG requirements directly into your DevSecOps pipeline, enabling automated, ongoing compliance.
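
One hedged example of such a pipeline step uses the open source OpenSCAP scanner with SCAP Security Guide content (the profile ID and datastream path follow common RHEL packaging and may differ in your environment):

# Evaluate a RHEL 9 system against the DISA STIG profile and emit machine-readable results
oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_stig \
  --results stig-results.xml \
  --report stig-report.html \
  /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml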

General Implementation Tips

  • Training: Ensure your team is familiar with STIG requirements and the compliance process.
  • Collaboration: Work cross-functionally with all relevant departments, including IT, security, and compliance teams.
  • Resource Allocation: Dedicate sufficient resources, including time and personnel, to the compliance effort.
  • Continuous Improvement: Treat STIG compliance as an ongoing process rather than a one-time project.
  • Test for Impact on Functionality: The downside of STIG controls’ high level of security is a potential to negatively impact functionality. Be sure to conduct extensive testing to identify broken features, compatibility issues, interoperability challenges, and more.

Tools to automate STIG implementation and maintenance

The 2024 Report on Software Supply Chain Security found “automating compliance checks” is a top priority, with 52% of respondents ranking it in their top 3 supply chain security challenges. For STIGs, automation can significantly streamline the compliance process. Here are a few tools that can help:

1. Anchore STIG (Static and Runtime)

  • Purpose: Automates the process of checking container images against STIG requirements.
  • Benefits:
    • Simplifies compliance for containerized applications.
    • Integrates into CI/CD pipelines for continuous compliance.
  • Use Case: Ideal for DevSecOps teams utilizing containers in their deployments.

2. SCAP Compliance Checker

  • Purpose: Provides automated compliance scanning using the Security Content Automation Protocol (SCAP).
  • Benefits:
    • Validates system configurations against STIGs.
    • Generates detailed compliance reports.
  • Use Case: Useful for system administrators needing to audit various operating systems.

3. DISA STIG Viewer

  • Purpose: Helps in viewing and managing STIG checklists.
  • Benefits:
    • Allows for easy navigation of STIG requirements.
    • Facilitates documentation and reporting.
  • Use Case: Assists compliance officers in tracking compliance status.

4. DevOps Automation Tools

  • Infrastructure Automation Examples: Red Hat Ansible, Perforce Puppet, Hashicorp Terraform
  • Software Build Automation Examples: CloudBees CI, GitLab
  • Purpose: Automate the deployment of secure configurations that meet STIG compliance across multiple systems.
  • Benefits:
    • Ensures consistent application of secure configuration standards.
    • Reduces manual effort and the potential for errors.
  • Use Case: Suitable for large-scale environments where manual configuration is impractical.

5. Vulnerability Management Tools

  • Examples: Anchore Secure
  • Purpose: Identify vulnerabilities and compliance issues within your network.
  • Benefits:
    • Provides actionable insights to remediate security gaps.
    • Offers continuous monitoring capabilities.
  • Use Case: Critical for security teams focused on proactive risk management.

Wrap-Up

Achieving DISA STIG compliance is mandatory for organizations working with the DoD. By understanding what STIGs are, who they apply to, and how to navigate the compliance process, your organization can meet the stringent compliance requirements set forth by DISA. As a bonus, it will enhance its security posture and reduce the potential for a security breach.

Remember, compliance is not a one-time event but an ongoing effort that requires regular updates, audits, and maintenance. Leveraging automation tools like Anchore STIG and Anchore Secure can significantly ease this burden, allowing your team to focus on strategic initiatives rather than manual compliance tasks.

Stay proactive, keep your team informed, and make use of the resources available to ensure that your IT systems remain secure and compliant.

Anchore Enterprise is now SPDX 3 Ready

We’re excited to announce that Anchore Enterprise is now SPDX 3 ready. If you’re a native of the world of SBOMs, this may feel a bit confusing given that the Linux Foundation announced the release of SPDX 3 last year. While this is true, it is also true that the software ecosystem is still awaiting reference implementations, which is blocking the SBOM tools community from rolling out the new format. Regardless of this dynamic situation, Anchore is hearing demand from existing customers to stay at the cutting edge of the evolution of SBOMs; these forward-looking enterprises are seeking to future-proof their software development process and begin building a fine-grained historical record of their software supply chain while the ecosystem matures. To that end, Anchore Enterprise now includes initial support for SPDX 3.

Organizations can now upload, store, and download SPDX 3 formatted SBOMs. SBOM formats are in transition from traditional software-oriented standards to future service-oriented and AI-native formats that can capture AI infused, distributed system complexities. In this blog, we’ll walk you through how to navigate this transition, why it’s important to begin now and how Anchore Enterprise is enabling organizations to accomplish this.

The Dual-Track Future of SBOM Standards

Organizations today rely predominantly on two established SBOM standards: SPDX and CycloneDX. Many organizations mix-and-match these formats to address different aspects of modern security and risk management requirements, from increasing transparency into software component supply chains and managing third-party dependency vulnerabilities to enforcing regulatory compliance controls and software license management.

These traditional software-oriented formats continue to deliver significant enterprise value and remain essential for current operational needs. However, the software ecosystem is evolving toward distributed systems and AI-native applications that require a corresponding transformation of SBOM capabilities.

SPDX 3 represents this next generation, designed to capture complex interdependencies in modern distributed architectures that interweave AI features. Since the ecosystem is still awaiting an official reference implementation for SPDX 3, early adopters are experiencing significant turbulence.

For now, organizations need a dual-track approach: maintaining proven standards like SPDX 2.3 and CycloneDX for immediate vulnerability and license scanning needs while beginning to collect SPDX 3 documents in preparation for the ecosystem’s maturation. This parallel strategy ensures operational continuity while positioning organizations for the advanced capabilities that next-generation formats will enable.

The Value of Starting Your SPDX 3 Collection Today

While SPDX 3 processing capabilities are still maturing across the ecosystem, there’s compelling value in beginning collection today. Just as Anchore customers benefit from comprehensive SBOM historical records during zero-day vulnerability investigations, starting your SPDX 3 collection today creates an auditable trail that will power future service-oriented and AI specific use cases as they emerge.

The development lifecycle generates valuable state information at every stage—information that becomes irreplaceable during incident response and compliance audits. By collecting SPDX 3 SBOMs now, organizations ensure they have the historical context needed to leverage new capabilities as the ecosystem matures, rather than starting from zero when scalable SPDX 3 SBOM processing becomes available.

Anchore Enterprise, SPDX 3 Ready: Upgrade Now

As of version 5.20, Anchore Enterprise provides SPDX 3 document storage. This positions organizations for a seamless transition as the ecosystem matures. Users can upload, store, and retrieve valid SPDX 3 SBOMs through existing interfaces while maintaining operational workflows with battle-tested standards.

Organizations can now easily implement the dual-track approach that will allow them to have their SBOM cake and eat it too. The latest releases of Anchore Enterprise deliver the foundational capabilities organizations need to stay ahead of evolving supply chain security requirements. The combination of SPDX 3 support and enhanced SBOM management positions teams for success as software architectures continue to evolve toward distributed, AI-native systems.

Ready to upgrade?

  • Existing customers should reach out to their account manager to access the latest version of Anchore Enterprise and begin storing SPDX 3 SBOMs

New to Anchore? Request a guided demo to see this new feature in action


Explore SBOM use-cases for almost any department of the enterprise and learn how to unlock enterprise value to make the most of your software supply chain.

WHITE PAPER | Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization

Minutes vs. Months: The SBOM Advantage in Zero-Day Response

When Log4Shell hit, one Anchore Enterprise customer faced the same nightmare scenario as thousands of organizations worldwide: Where is log4j hiding in our infrastructure?

The difference? While most organizations spent weeks manually hunting through systems, this customer ran a single API command and identified every instance of log4j across their entire environment in five minutes.

That’s the transformation Josh Bressers (VP of Security, Anchore) and Brian Thomason (Solutions Engineering Manager, Anchore) demonstrated in their recent webinar on rapid incident response to zero-day vulnerabilities—and it represents a fundamental shift in how security teams can respond to critical threats.

TL;DR: Traditional vulnerability management treats SBOMs as compliance artifacts, but modern incident response requires treating them as operational intelligence.

This technical deep-dive covers three critical scenarios that every security team will face:

  • Proactive Threat Hunting: How to identify vulnerable components before CVE disclosure using SBOM archaeology
  • Runtime Vulnerability Prioritization: Real-time identification of critical vulnerabilities in production Kubernetes environments
  • CI/CD Security Blindness: The massive attack surface hiding in build environments that most teams never scan

Ready to see the difference between reactive firefighting and strategic preparation? Keep reading for the technical insights that will change how you approach zero-day response.

The CUPS Case Study: Getting Ahead of Zero-Day Disclosure

In September 2024, security researchers began dropping hints on Twitter about a critical Linux vulnerability affecting systems “by default.” No CVE. No technical details. Just cryptic warnings about a two-week disclosure timeline.

The security community mobilized to solve the puzzle, eventually identifying CUPS as the target. But here’s where most organizations hit a wall: How do you prepare for a vulnerability when you don’t know what systems contain the affected component?

Traditional approaches require manual system audits—a process that scales poorly and often misses transitive dependencies buried deep in container layers. The SBOM-centric approach inverts this narrative.

“One of the examples I like to use is when log4j happened, we have an Anchore enterprise customer that had all of their infrastructure stored inside of Anchore Enterprise as SBOMs. Log4Shell happens and they’re like, holy crap, we need to search for log4Shell. And so we’re like, ah, you can do that here, run this command. And literally in five minutes they knew where every instance of log4j was in all of their environments.”
—Josh Bressers, VP of Security, Anchore

The Technical Implementation

What was the command they used? The webinar demonstrates this live against thousands of stored SBOMs to locate CUPS across an entire infrastructure:

$ curl -u admin:password \
  "https://enterprise.example.com/v1/images/by_package?name=cups" \
  | jq '.results[] | .tag_history[0].tag'

This single command returns every container image containing CUPS, complete with version information, registry details, and deployment metadata. The query executes against historical and current SBOMs, providing comprehensive coverage across the entire software supply chain.

Security teams can begin impact assessment and remediation planning before vulnerability details become public, transforming reactive incident response into proactive threat management.

What Else You’ll Discover

This proactive discovery capability represents just the foundation of a comprehensive demonstration that tackles the blind spots plaguing modern security operations.

Runtime Vulnerability Management: The Infrastructure You Don’t Control

Josh revealed a critical oversight in most security programs: vulnerabilities in Kubernetes infrastructure components that application teams never see.

The demonstration focused on a critical CVE in the nginx ingress controller—infrastructure deployed by SRE teams but invisible to application security scans. Using Anchore Enterprise’s Kubernetes runtime capabilities, the team showed how to:

  • Identify running containers with critical vulnerabilities in real-time
  • Prioritize remediation based on production deployment status
  • Bridge the visibility gap between application and infrastructure security

“I could have all of my software tracked in Anchore Enterprise and I wouldn’t have any insight into this — because it wasn’t my code. It was someone else’s problem. But it’s still my risk.”
—Josh Bressers, VP of Security, Anchore

CI/CD Archaeology: When the Past Comes Back

The most eye-opening demonstration involved scanning a GitHub Actions runner environment—revealing 13,000 vulnerabilities across thousands of packages in a standard build environment.

The technical process showcased how organizations can:

  • Generate comprehensive SBOMs of build environments using filesystem scanning
  • Maintain historical records of CI/CD dependencies for incident investigation
  • Identify potentially compromised build tools (like the TJ Actions backdoor incident)

“This is literally someone else’s computer building our software, and we might not know what’s in it. That’s why SBOM archaeology matters.”
—Josh Bressers, VP of Security, Anchore
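
A minimal sketch of that archaeology in practice might look like the following (the component name and dates are illustrative):

# On every CI run, snapshot what is actually installed on the build runner
syft dir:/ -o spdx-json > runner-sbom-$(date +%F).json

# Months later, during an incident, check whether a suspect component was ever present
jq -r '.packages[] | select(.name | test("openssl")) | "\(.name) \(.versionInfo)"' \
  runner-sbom-2025-03-14.json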

Why SBOMs Are the Strategic Differentiator

Four truths stood out:

  • Speed is critical: Minutes, not months, decide outcomes.
  • Visibility gaps are real: Runtime and CI/CD are blind spots for most teams.
  • History matters: SBOMs are lightweight evidence when past build logs are gone.
  • Automation is essential: Manual tracking doesn’t scale to millions of dependencies.

Or as Josh put it:

“Storing images forever is expensive. Storing SBOMs? Easy. They’re just JSON documents—and we’re really good at searching JSON.”

The Bottom Line: Minutes vs. Months

When the next zero-day hits your infrastructure, will you spend five minutes identifying affected systems or five months hunting through manual inventories?

The technical demonstrations in this webinar show exactly how SBOM-driven incident response transforms security operations from reactive firefighting into strategic threat management. This is the difference between organizations that contain breaches and those that make headlines.

Stay ahead of the next disclosure:

  1. 👉 Watch the full webinar on-demand
  2. Follow Anchore on LinkedIn and X for zero-day analysis and SBOM best practices.
  3. Subscribe to our newsletter for exclusive insights into supply chain security, automation, and compliance.

Zero-day vulnerabilities aren’t slowing down. But with SBOM-driven response, your timeline doesn’t have to be measured in months.


Learn how SBOMs enable organizations to react to zero-day disclosures in minutes rather than days or weeks.

Rapid Incident Response to Zero-Day Vulnerabilities with SBOMs | Webinar

Anchore is Excited to Announce its Inclusion in the IBM PDE Factory: An Open Source-Powered Secure Software Development Platform

Powered by Anchore’s Syft & Grype, IBM’s Platform Development Environment Factory delivers DevSecOps-as-a-Service for federal agencies seeking operational readiness without the integration nightmare.


Federal agencies are navigating a complex landscape: while DevOps has delivered on its promise of increased velocity, modern compliance frameworks like EO 14028 and continuous Authority to Operate (cATO) requirements introduce new challenges that demand sophisticated DevSecOps practices across civilian agencies and the Defense Industrial Base (DIB). For many teams, maintaining both speed and compliance requires careful orchestration of security tools, visibility platforms, and audit processes that can impact development resources.

The challenge often lies in implementation complexity. Software development platforms built from disparate components that should integrate seamlessly often require significant customization work. Teams can find themselves spending valuable time on integration tasks—configuring YAML files, writing connectivity code, and troubleshooting compatibility issues—rather than focusing on mission-critical capabilities. Building and managing a standards-compliant DevSecOps platform requires specialized expertise to deliver the reliability that developers need, while compliance audit processes add operational overhead that can slow time to production.

Net effect: Projects stall in glue-code purgatory long before a single security control is satisfied.

IBM Federal’s PDE Factory changes this equation entirely. This isn’t another pick-your-own-modules starter repository—it’s a fully composed DevSecOps platform you can deploy in hours, not months, with SBOM-powered supply chain security baked into every layer.

Challenge: Tool Sprawl Meets Compliance Deadlines

An application stack destined for federal deployment might need a vulnerability scanner, SBOM generator, signing service, policy engine, and runtime monitoring—each potentially from different vendors. Development teams burn entire sprints wiring these modules together, patching configuration files, and writing custom integration code to resolve subtle interoperability issues that surface during testing.

Every integration introduces fresh risk. Versions drift between environments. APIs break without warning. Documentation assumes knowledge that exists nowhere in your organization. Meanwhile, compliance frameworks like NIST’s Secure Software Development Framework (SSDF) demand comprehensive coverage across software bill of materials (SBOM) generation, continuous vulnerability management, and policy enforcement. Miss one pillar, and the entire compliance review fails.

DIY Integration Pain | Mission Impact
Fragmented visibility | Vulnerability scanners can’t correlate with registry contents; audit trails become patchwork documentation spread across multiple systems.
Context-switching overhead | Engineers toggle between six different UIs and CLI tools to trace a single CVE from detection through remediation.
Late-stage discovery | Critical security issues surface after artifacts are already staged for production, triggering war-room incidents that halt deployments.
Compliance scramble | Evidence collection requires manual log parsing and screenshot gathering—none of it standardized, signed, or audit-ready.

The US Air Force’s Platform One learned the lessons above early. Their container ecosystem, now secured with Anchore Enterprise, required extensive tooling integration to achieve the operational readiness standards demanded by mission-critical workloads. Similarly, Iron Bank—the DoD’s hardened container repository—relies on Anchore Enterprise to maintain the security posture that defense contractors and military units depend on for operational continuity.

Solution: A Pre-Wired Factory, No Yak-Shaving Required

IBM Federal’s PDE Factory eliminates the integration nightmare by delivering a fully composed DevSecOps platform deployable in hours rather than months. This isn’t about faster setup—it’s about operational readiness from day one.

Architecture at a glance:

  • GitLab CI orchestrates every build with security gates enforced at each stage
  • Harbor registry stores signed container images with embedded SBOMs
  • Argo CD drives GitOps-based deployments into production Kubernetes clusters
  • Terraform automation executes the entire stack deployment with enterprise-grade reliability
  • Syft & Grype by Anchore: come integrated with the PDE Factory, giving users SBOM-powered vulnerability scanning “out of the box”

Outcome: A production-ready DevSecOps environment that supports the code-to-cloud kill chain federal agencies need, deployable in hours instead of the weeks-to-months typical of greenfield builds.

Anchore Inside: The SBOM Backbone

Before any container image reaches your registry, Anchore’s battle-tested supply chain tools attach comprehensive security and compliance metadata that travels through your entire deployment pipeline.

How the integration works:

  1. Syft performs deep software composition analysis, cataloging every component down to transitive dependencies and generating standards-compliant SBOMs
  2. Grype ingests those SBOMs and enriches them with current vulnerability data from multiple threat intelligence feeds
  3. Policy enforcement blocks non-compliant builds before they can compromise downstream systems
  4. Evidence collection happens automatically—when auditors arrive, you hand them signed JSON artifacts instead of manually compiled documentation
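Steps 1 and 2 map onto two short commands in a pipeline job. The registry and image names below are placeholders, and the failure threshold is an example policy choice rather than a PDE Factory default:

# Step 1: deep software composition analysis, emitted as an SPDX SBOM
$ syft registry:harbor.example.mil/payments/api:1.4.2 -o spdx-json > sbom.spdx.json

# Step 2: enrich the SBOM with current vulnerability data and gate the build
$ grype sbom:./sbom.spdx.json --fail-on high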

SBOM = portable mission truth. Because SBOMs are machine-readable and cryptographically signed, PDE Factory can automate both rapid “shift-left” feedback loops and comprehensive audit trail generation. This aligns directly with CISA’s Secure by Design initiative—preventing insecure builds from entering the pipeline rather than detecting problems after deployment.

The US Navy’s Black Pearl Factory exemplifies this approach in action. Working with Sigma Defense, they reduced audit preparation time from three days of manual evidence gathering to two minutes of automated report generation—a force multiplier that redirects valuable engineering resources from compliance overhead back to mission delivery.

Day-in-the-Life: From Commit to Compliant Deploy

Here’s how operational readiness looks in practice:

  1. Developer commits code to GitLab, triggering the automated security pipeline
  2. Container build includes Syft SBOM generation and cryptographic signing
  3. Grype vulnerability scanning correlates SBOM components against current threat data
  4. Policy gates enforce NIST SSDF requirements before allowing registry promotion
  5. Argo CD deployment validates runtime security posture against DoD standards
  6. Kubernetes admission controller performs final compliance verification using stored SBOM and vulnerability data

Result: A hardened deployment pipeline that maintains operational readiness without sacrificing development velocity.
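Step 2 above pairs SBOM generation with cryptographic signing. One common way to implement that pairing is Sigstore's cosign, shown here as a sketch; the image reference is a placeholder, and cosign is an assumption for illustration rather than a confirmed part of the PDE Factory toolchain:

# Sign the image, then attach the Syft-generated SBOM as a signed attestation
$ cosign sign harbor.example.mil/payments/api:1.4.2
$ cosign attest --predicate sbom.spdx.json --type spdxjson \
    harbor.example.mil/payments/api:1.4.2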


For agencies requiring enhanced security posture, upgrading to Anchore Enterprise unlocks Compliance-as-a-Service capabilities:

Open Source Foundation | Anchore Enterprise Upgrade | Operational Advantage
Syft & Grype | Anchore Secure with centralized vulnerability management | Hours saved on manual CVE triage and false positive elimination
Basic policy enforcement | Anchore Enforce with pre-built SSDF, DISA, and NIST policy packs | Accelerated ATO timelines through automated compliance validation
Manual evidence collection | Automated audit trail generation | Weeks removed from compliance preparation cycles

Operational Payoff: Mission Metrics That Matter

Capability Metric | DIY Integration Approach | IBM PDE Factory
Platform deployment time | 45-120 days | < 8 hours
Security rework percentage per sprint | ~20% | < 5%
Critical vulnerability MTTR | ~4 hours | < 1 hour
Audit preparation effort | Weeks of manual work | Automated nightly exports

This isn’t just about developer productivity—it’s about mission continuity. When federal agencies can deploy secure software faster and maintain compliance posture without operational overhead, they can focus resources on capabilities that directly serve citizens and national security objectives.

Your Operational Readiness Path Forward

Federal agencies have an opportunity to streamline their development processes by adopting proven infrastructure that the DoD already trusts.

IBM Federal’s PDE Factory, powered by Anchore’s SBOM-first approach, delivers the operational readiness federal agencies need while reducing the integration complexity that often challenges DevSecOps initiatives. Start with the open source foundation—Syft and Grype provide immediate value. Scale to Anchore Enterprise when you need Compliance-as-a-Service capabilities that accelerate your Authority to Operate timeline.

Ready to see proven DoD software factory security in action?

Anchore brings deep expertise in securing mission-critical software factories across the Department of Defense, from Platform One to Iron Bank to the Navy’s Black Pearl Factory. Our battle-tested SBOM-powered approach has enabled DoD organizations to achieve operational readiness while maintaining the security standards required for defense environments.

Book an Anchore Enterprise demo to see how our proven software supply chain security integrates with IBM’s PDE Factory to deliver “no SBOM, no deploy” enforcement without compromising development velocity.

Fortify your pipeline. Harden your releases. Accelerate your operational readiness.

The mission demands secure software. Your developers deserve tools that deliver it.


Learn how to harden your containers and make them “STIG-Ready” with our definitive guide.

Complete Guide to Hardening Containers with STIG | Anchore

From Cost Center to Revenue Driver: How Compliance Became Security’s Best Friend

An exclusive look at insights from the ITGRC Forum’s latest webinar on demonstrating the value of cybersecurity investments.

A panel of cybersecurity veterans with a combined 80+ years of experience recently gathered for an ITGRC Forum webinar that challenged everything we thought we knew about the funding of enterprise security investments:

  • Colin Whitaker (30+ years, Informed Risk Decisions)
  • Paulo Amarol (Senior Director GRC, Diligent)
  • Dirk Shrader (25+ years, Netwrix)
  • Josh Bressers (VP Security, Anchore)

Together, they delivered insights that explain why some organizations effortlessly secure millions for security initiatives while others struggle for basic tool budgets.

The central revelation? Compliance isn’t just a regulatory burden—it’s become the primary pathway for security investment in modern enterprises.

The 75-minute discussion covered critical territory for any security or GRC professional trying to demonstrate value to leadership:

  • When Compliance Became the Gateway to Security Investment: How regulatory requirements transformed from cost centers to business enablers
  • The Software Supply Chain Compliance Revolution: Why SBOM mandates are forcing visibility that security teams have wanted for decades
  • Death by a Thousand Cuts: The Hidden Costs of Fragmented Compliance: The true operational impact of manual compliance processes
  • The Future of Compliance-Driven Security Investment: Where emerging regulations are heading and how to get ahead

Not ready to commit to a full webinar? Keep reading to get a taste of the discussion and how it will change your perspective on the relationship between cybersecurity and regulatory compliance.


⏱️ Can’t wait till the end?
📥 Watch the full webinar now 👇👇👇


When Compliance Became the Gateway to Security Investment

For decades, security professionals have faced an uphill battle for executive attention and funding. While IT budgets grew and development teams expanded, security often fought for scraps—forced to justify theoretical risks against concrete revenue opportunities.

Traditional security arguments relied on preventing abstract future threats. Leadership heard endless presentations about potential breaches, theoretical vulnerabilities, and statistical possibilities.

When the business is deciding between allocating resources toward revenue-generating features that will deliver ROI in months versus product security features that will reduce—but never eliminate—the possibility of a breach, it’s not difficult to figure out how we got into this situation. Meanwhile, regulatory compliance offered something security never could: immediate business necessity.

Modern compliance frameworks (e.g., EU CRA, DORA, NIS2) invert this narrative by making penalties certain, quantifiable, and time-sensitive. Annual non-compliance penalties and the threat of losing access to sell into European markets shift the story from “possible future breach” to “definite revenue loss.”

“I think now that there’s regulators saying you have to do this stuff or you can’t sell your product here now we have business incentive right because just from a purely practical perspective if a business can’t sell into one of the largest markets on the planet that has enormous consequences for the business.”
Josh Bressers, VP of Security, Anchore

Not only does modern regulatory compliance create the “financial teeth” needed to align business incentives, but it has also evolved security requirements to parity with current DevSecOps best practices. The days of laughable security controls and checkbox compliance are past. Modern laws are now delivering on the promise of “Trust, but verify.”

The Strategic Partnership Opportunity

These two fundamental changes—business-aligned incentives and technically sound requirements—create an unprecedented opportunity for security and compliance teams to partner in reducing organizational risk. Rather than working in silos with competing priorities, both functions can now pursue shared objectives that directly support business goals.

Security teams gain access to executive attention and budget allocation through compliance mandates. Compliance teams benefit from security expertise and automation capabilities that reduce manual audit overhead. Together, they can implement comprehensive risk management programs that satisfy regulatory requirements while building genuine security capabilities.

The result transforms both functions from cost centers into strategic business enablers—compliance ensures market access while security protects the operations that generate revenue.

“However when security and compliance work together now security has a story they can start to tell that gets you the funding you need that get you the support you need from your leadership.”
Josh Bressers, VP of Security, Anchore

What Else You’ll Discover in the Full Webinar

This transformation in security funding represents just one thread in a comprehensive discussion that tackles the most pressing challenges facing security and GRC professionals today.

The Software Supply Chain Compliance Revolution

Josh Bressers reveals why organizations with proper SBOM capabilities identified Log4j vulnerabilities in 10 minutes while others needed 3 months—and how compliance mandates are finally forcing the software supply chain visibility security teams have wanted for decades.

“Between 70-90% of all code is open source [and] … 95% of products have open source inside of them. The numbers are just absolutely staggering.”
—Josh Bressers, VP of Security, Anchore

Death by a Thousand Cuts: The Hidden Costs of Fragmented Compliance

Dirk Shrader breaks down the operational disruption costs that 54% of organizations recognize but haven’t calculated, including the “mangled effort” of manual compliance processes that diverts skilled staff from strategic initiatives.

“Security and IT teams spend excessive time pulling data from disparate systems: correlating activities, generating audit reports … chasing that individual rabbit.”
Dirk Shrader, Global VP Security Research, Netwrix

The Future of Compliance-Driven Security Investment

Paulo Amarol demonstrates how GRC platforms are evolving from “evidence lockers” into strategic business intelligence systems that translate technical security data into executive-ready risk assessments.

“We’re able to slice and combine data from various sources—apps, operational security tooling, awareness training, even identity provider data—in ways that our leaders can bring this risk data into their decision-making. You can really automate the process of bringing data in, normalizing it, and mapping it to bigger picture strategic risks.”
Paulo Amarol, Senior Director GRC, Diligent Corporation

The panelists also explore:

  • Poll insights revealing where most organizations stand on compliance cost calculations
  • Regulatory proliferation across global markets and how to find common ground
  • Automation imperatives for continuous compliance monitoring
  • Cultural transformation as security and GRC functions converge
  • Implementation strategies for aligning security programs with business objectives

Ready to Transform Your Security Investment Strategy?

This isn’t another theoretical discussion about security ROI. It’s a practical guide from practitioners who’ve solved the funding challenge by repositioning security as a compliance-driven business enabler.

Watch the full ITGRC Forum webinar on-demand to access all 75 minutes of expert insights, poll results, and audience Q&A.

Stay ahead of the compliance-security convergence: Follow Anchore on LinkedIn and Bluesky for ongoing analysis of emerging regulations, industry trends, and practical implementation guidance from software supply chain security experts.

Subscribe to our newsletter for exclusive insights on SBOM requirements, compliance automation, and the strategic intersection of security and regulatory requirements.

The convergence of security and compliance isn’t just happening—it’s accelerating. Don’t get left behind.


Explore SBOM use-cases for almost any department of the enterprise and learn how to unlock enterprise value to make the most of your software supply chain.

WHITE PAPER | Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization

Beyond Software Dependencies: The Data Supply Chain Security Challenge of AI-Native Applications

Just as the open source software revolution fundamentally transformed software development in the 2000s—bringing massive productivity gains alongside unprecedented supply chain complexity—we’re witnessing history repeat itself with Large Language Models (LLMs). The same pattern that caused organizations to lose visibility into their software dependencies is now playing out with LLMs, creating an entirely new category of supply chain risk.

Not to worry though: The Linux Foundation has been preparing for this eventuality. SPDX 3.0 provides the foundational metadata standard needed to extend proven DevSecOps practices to applications that integrate LLMs.

By introducing AI and Dataset Profiles, it enables organizations to apply the same supply chain security practices that have proven effective for software dependencies to the emerging world of AI supply chains. History may be repeating itself but this time, we have the opportunity to get ahead of it.

LLMs Create New Supply Chain Vulnerabilities That Traditional Security Tools Can’t Grok

The integration of LLMs into software applications has fundamentally altered the threat landscape. Unlike traditional software vulnerabilities that exploit code weaknesses, LLM-era attacks target the unique characteristics of AI systems: 

  • their training data is both data and code, and
  • their behavior (i.e., both data and code) can be manipulated by users.

This represents a paradigm shift that requires security teams to think beyond traditional application security.

LLMs merge data and code + a second supply chain to secure

LLMs are fundamentally different from traditional software components. Where conventional code follows deterministic logic paths, LLMs operate on statistical patterns learned from training on datasets. This fundamental difference creates a new category of “code” that needs to be secured—not just the model weights and architecture, but the training data, fine-tuning datasets, and even the prompts that guide model behavior.

When organizations integrate LLMs into their applications, they’re not just adding another software dependency. They’re creating an entire second supply chain—the LLM data supply chain—that operates alongside their traditional software supply chain.

The challenge is that this new supply chain operates with fundamentally different risk patterns. Where software vulnerabilities are typically discrete and patchable, AI risks can be subtle, emergent, and difficult to detect. 

  • A single compromised dataset can introduce bias that affects all downstream applications. 
  • A prompt injection attack can manipulate model behavior without touching any traditional code. 
  • Model theft can occur through API interactions that leave no trace in traditional security logs.

Data poisoning and model theft: Novel attack vectors

The emergence of LLMs has introduced attack vectors that simply didn’t exist in traditional software systems, requiring security teams to expand their threat models and defensive strategies.

  1. Data Poisoning Attacks represent one of the most intractable new threat categories. Training data manipulation can occur at multiple points in the AI supply chain.

    Consider this: what’s stopping a threat actor from modifying a public dataset that’s regularly used to train foundational LLM models? Popular datasets hosted on platforms like Hugging Face or GitHub can be edited by contributors, and if these poisoned datasets are used in model training, the resulting models inherit the malicious behavior.

    RAG poisoning attacks take this concept further by targeting the retrieval systems that many production LLM applications rely on. Attackers can create SEO-optimized content and embed hidden text with instructions designed to manipulate the model’s behavior.

    When RAG systems retrieve this content as context for user queries, the hidden instructions can override the model’s original alignment, leading to unauthorized actions or information disclosure. Recent research has demonstrated that attackers can inject as few as five poisoned documents into datasets of millions and achieve over 90% success rates in manipulating model outputs.
  2. Model Theft and Extraction attacks exploit the API-accessible nature of modern LLM deployments. Through carefully crafted queries, attackers can extract valuable intellectual property without ever accessing the underlying model files. API-based extraction attacks involve sending thousands of strategically chosen prompts to a target model and using the responses to train a “shadow model” that replicates much of the original’s functionality.

    Self-instruct model replication takes this further by using the target model to generate synthetic training data, effectively teaching a competitor model to mimic the original’s capabilities.

These attacks create new categories of business risk that organizations must consider. Beyond traditional concerns about data breaches or system availability, LLM-integrated applications face risks of intellectual property theft, reputational damage from biased or inappropriate outputs, and regulatory compliance violations in increasingly complex AI governance environments.

Enterprises are losing supply chain visibility as AI-native applications grow

Organizations are mostly unaware that the data supply chain for LLMs is just as important to track as their software supply chain. As teams integrate foundation model APIs, deploy RAG systems, and fine-tune models for specific use cases, the complexity of LLM data supply chains is exploding.

Traditional security tools that excel at scanning software dependencies for known vulnerabilities are blind to LLM-specific risks like bias, data provenance, or model licensing complications.

This growing attack surface extends far beyond what traditional application security can address. When a software component has a vulnerability, it can typically be patched or replaced. When an AI model exhibits bias or has been trained on problematic data, the remediation may require retraining, which can cost millions of dollars and months of time. The stakes are fundamentally different, and the traditional reactive approach to security simply doesn’t scale.

So how do we deal with this fundamental shift in how we secure supply chains?

Next-Gen SBOM Formats Extend Proven Supply Chain Security to AI-Native Applications

The answer is, unsurprisingly, SBOMs. But more specifically, next-generation SBOM formats like SPDX 3.0. While Anchore doesn’t have an official tagline, if we did, there’s a strong chance it would be “you can’t secure your supply chain without knowing what is in it.” SPDX 3.0 has updated the SBOM standard to store AI model and dataset metadata, extending the proven principles of software supply chain security to the world of LLMs.

AI Bill of Materials: machine-readable security metadata for LLMs

SPDX 3.0 introduces AI and Dataset Profiles that create machine-readable metadata for LLM system components. These profiles provide comprehensive tracking of models, datasets, and their relationships, creating what’s essentially an “LLM Bill of Materials” that documents every component in an AI-powered application.

The breakthrough is that SPDX 3.0 increases visibility into AI systems by defining the key AI model metadata—read: security signals—that are needed to track risk and define enterprise-specific security policies. This isn’t just documentation for documentation’s sake; it’s about creating structured data that existing DevSecOps infrastructure can consume and act upon. 

The bonus is that this works with existing tooling: SBOMs, CI/CD pipelines, vulnerability scanners, and policy-as-code evaluation engines can all be extended to handle AI profile metadata without requiring organizations to rebuild their security infrastructure from scratch.


Learn about how SBOMs have adapted to the world of micro-services architecture with the co-founder of SPDX and SBOMs.


3 novel security use-cases for AI-native apps enabled by SPDX 3.0

  1. Bias Detection & Policy Enforcement becomes automated through the knownBias field, which allows organizations to scan AI BOMs for enterprise-defined bias policies just like they scan software SBOMs for vulnerable components.

    Traditional vulnerability scanners can be enhanced to flag models or datasets that contain documented biases that violate organizational policies. Policy-as-code frameworks can enforce bias thresholds automatically, preventing deployment of AI systems that don’t meet enterprise standards.
  2. Risk-Based Deployment Gates leverage the safetyRiskAssessment field, which follows EU risk assessment methodology to categorize AI systems as serious, high, medium, or low risk.

    This enables automated risk scoring in CI/CD pipelines, where deployment gates can block high-risk models from reaching production or require additional approvals based on risk levels. Organizations can set policy thresholds that align with their risk tolerance and regulatory requirements.
  3. Data Provenance Validation uses fields like dataCollectionProcess and suppliedBy to track the complete lineage of training data and models. This enables allowlist and blocklist enforcement for data sources, ensuring that models are only trained on approved datasets.

    Supply chain integrity verification becomes possible by tracking the complete chain of custody for AI components, from original data collection through model training and deployment.
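As a concrete illustration, a risk-based deployment gate can be expressed as a small policy-as-code check over the SBOM. The property names follow the SPDX 3.0 AI Profile fields discussed above, but the JSON paths and file name below are a simplified sketch rather than the exact SPDX 3.0 serialization:

# Fail the pipeline if any AI package in the SBOM declares a serious or high
# safety risk assessment (field layout is illustrative).
$ jq -e '[ .["@graph"][]?
           | select(.type == "ai_AIPackage")
           | select(.ai_safetyRiskAssessment == "serious"
                    or .ai_safetyRiskAssessment == "high") ]
         | length == 0' app.spdx3.json \
    && echo "AI risk gate: pass" \
    || { echo "AI risk gate: high-risk model found"; exit 1; }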

An SPDX 3.0 SBOM hierarchy for an AI-native application might look something like the simplified sketch below (the grouping and component names are illustrative):
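AI-Native Application (SPDX 3.0 SBOM)
├── Software Profile: application code, open source dependencies, base image layers
├── AI Profile: foundation model and fine-tuned weights
│     (suppliedBy, safetyRiskAssessment, knownBias)
└── Dataset Profile: training, fine-tuning, and RAG data sources
      (dataCollectionProcess, provenance, licensing)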

The key insight is that SPDX 3.0 makes AI systems legible to existing DevSecOps infrastructure. Rather than requiring organizations to build parallel security processes for AI workflows and components, it extends current security investments to cover the new AI supply chain. This approach reduces adoption friction by leveraging familiar tooling and processes that security teams already understand and trust.

History Repeats Itself: The Supply Chain Security Story

This isn’t the first time we’ve been through a transition where software development evolution increases productivity while also creating supply chain opacity. The pattern we’re seeing with LLM data supply chains is remarkably similar to what happened with the open source software explosion of the 2000s.

Software supply chains evolution: From trusted vendors to open source complexity to automated security

  • Phase 1: The Trusted World (Pre-2000s) was characterized by 1st-party code and trusted commercial vendors. Organizations primarily wrote their own software or purchased it from established vendors with clear support relationships.

    Manual security reviews were feasible because dependency trees were small and well-understood. There was high visibility into what components were being used and controlled dependencies that could be thoroughly vetted.
  • Phase 2: Open Source Software Explosion (2000s-2010s) brought massive productivity gains from open source libraries and frameworks. Package managers like npm, Maven, and PyPI made it trivial to incorporate thousands of 3rd-party components into applications.

    Dependency trees exploded from dozens to thousands of components, creating a visibility crisis where organizations could no longer answer the basic question: “What’s actually in my application?”

    This led to major security incidents like the Equifax breach (Apache Struts vulnerability), the SolarWinds supply chain attack, and the event-stream npm package compromise that affected millions of applications.
  • Phase 3: Industry Response (2010s-2020s) emerged as the security industry developed solutions to restore visibility and control.

    SBOM standards like SPDX and CycloneDX provided standardized ways to document software components. Software Composition Analysis (SCA) tools proliferated, offering automated scanning and vulnerability detection for open source dependencies. DevSecOps integration and “shift-left” security practices made supply chain security a standard part of the development workflow.

LLM supply chains evolution: Same same—just faster

We’re now seeing this exact pattern repeat with AI systems, just compressed into a much shorter timeframe.

Phase 1: Model Gardens (2020-2023) featured trusted foundation models from established providers like OpenAI, Google, and Anthropic. LLM-powered application architectures were relatively simple, with limited data sources and clear model provenance.

Manual AI safety reviews were feasible because the number of models and data sources was manageable. Organizations could maintain visibility into their AI components through manual processes and documentation.

Phase 2: LLM/RAG Explosion (2023-Present) has brought foundation model APIs that enable massive productivity gains for AI application development.

Complex AI supply chains now feature transitive dependencies where models are fine-tuned on other models, RAG systems pull data from multiple sources, and agent frameworks orchestrate multiple AI components.

We’re currently reliving the same visibility crisis in a new form: organizations have lost the ability to understand the supply chains that power their production systems. Emerging attacks like data poisoning and model theft are targeting these complex supply chains with increasing sophistication.

Phase 3: Industry Response (Near Future) is just beginning to emerge. SBOM standards like SPDX 3.0 are leading the charge to re-enable supply chain transparency for LLM supply chains constructed from both code and data. AI-native security tools are starting to appear, and we’re seeing the first extensions of DevSecOps principles to AI systems.

Where do we go from here?

We are still in the early stages of this new software supply chain evolution, which creates both risk and opportunity for enterprises. Those who act now can establish LLM data supply chain security practices before the major attacks hit, while those who wait will likely face the same painful lessons that organizations experienced during the software supply chain security crisis of the 2010s.

Crawl: Embed SBOMs into your current DevSecOps pipeline

A vital first step is making sure you have a mature SBOM initiative for your traditional software supply chains. You won’t be ready for the future transition to LLM supply chains without this base.

This market is mature, and the tooling is relatively lightweight to deploy. It will either establish your software supply chain security program or up-level your current software supply chain security (SSCS) practices. Organizations that have already invested in SBOM tooling and processes will find it much easier to extend these capabilities to an AI-native world.

Walk: Experiment with SPDX 3.0 and system bills of materials

Early adopters who want to over-achieve can take several concrete steps:

  1. Upgrade to SPDX 3.0 and begin experimenting with the AI and Dataset Profiles. Even if you’re not ready for full production deployment, understanding the new metadata fields and how they map to your LLM system components will prepare you for the tooling that’s coming.
  2. Begin testing AI model metadata collection by documenting the models, datasets, and AI components currently in use across your organization. This inventory process will reveal gaps in visibility and help identify which components pose the highest risk.
  3. Insert AI metadata into SBOMs for applications that already integrate AI components. This creates a unified view of both software and LLM dependencies, enabling security teams to assess risk across the entire application stack.
  4. Explore trends and patterns to extract insights from your LLM component inventory. Look for patterns in data sources, model licensing, risk levels, and update frequencies that can inform policy development.

This process will eventually evolve into a full production LLM data supply chain security capability that will power AI model security at scale. Organizations that begin this journey now will have significant advantages as AI supply chain attacks become more sophisticated and regulatory requirements continue to expand.

The window of opportunity is open, but it won’t remain that way indefinitely. Just as organizations that ignored software supply chain security in the 2000s paid a heavy price in the 2010s, those who ignore AI supply chain security today will likely face significant challenges as AI attacks mature and regulatory pressure increases.

Follow us on LinkedIn or subscribe to our newsletter to stay up-to-date on progress. We will continue to update as this space evolves, sharing practical guidance and real-world experiences as organizations begin implementing LLM data supply chain security at scale.


Explore SBOM use-cases for almost any department of the enterprise and learn how to unlock enterprise value to make the most of your software supply chain.

WHITE PAPER | Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization

Anchore Enterprise 5.19: Automated STIG Compliance and Flexible Scanning for Modern DevSecOps

The latest release of Anchore Enterprise 5.19 features two major enhancements that address critical needs in government, defense, and enterprise environments:

  • Anchore STIG for Container Images, and 
  • Anchore One-Time Scan

Anchore STIG for Container Images automates the process of running a STIG evaluation against a container image to shift compliance “left”. By embedding STIG validation directly into the CI/CD pipeline as automated policy-as-code rules, compliance violations are detected early, reducing the time to reach compliance in production.

Anchore One-Time Scan is a new API which is optimized for scanning in CI/CD by removing the persistence requirement for storing the SBOM. Now security and software engineers can get stateless scanning, comprehensive vulnerability assessment and policy evaluation through a single CLI command or API call.

These new features bring automated compliance validation and flexible scanning options directly into your DevSecOps workflows, enabling organizations to maintain security standards without sacrificing development velocity.

Anchore STIG for Container Images: Compliance Automation at Scale

Before we jump into the technical details, it’s important to understand the compliance challenges that government and defense organizations face daily. Security Technical Implementation Guides (STIGs) represent the gold standard for cybersecurity hardening in federal environments, providing detailed configuration requirements that systems must meet to operate securely. However, traditional STIG compliance has been a largely manual process—time-consuming, error-prone, and difficult to integrate into automated CI/CD pipelines.

What is STIG and Why It Matters

STIGs are cybersecurity best practices developed by the Defense Information Systems Agency (DISA) that focus on proactive system configuration and hardening.

The challenge for modern development teams is that STIG evaluations have traditionally required manual assessment and configuration validation, creating bottlenecks in deployment pipelines and increasing the risk of non-compliant systems reaching production. For organizations pursuing FedRAMP authorization or operating under federal compliance mandates, this manual overhead can significantly slow development cycles while still leaving room for human error.

For a real-world example of how STIG compliance challenges are being solved at scale, check out our Cisco Umbrella case study, which details how Cisco uses Anchore Enterprise with STIG for Container Images on their AWS EC2 base images.


Learn how to harden your containers and make them “STIG-Ready” with our definitive guide.

Complete Guide to Hardening Containers with STIG | Anchore

Why Adopt Anchore STIG for Container Images?

Anchore STIG for Container Images delivers immediate value across multiple organizational levels: 

  • Development teams gain access to “STIG Ready” base images
  • Security teams can access STIG evaluation documents in a single location

The automated approach eliminates the manual audit overhead that traditionally slows compliance workflows, while the policy gate integration prevents images that have not been evaluated from reaching production. This proactive compliance model significantly reduces the risk of security violations and streamlines the path to regulatory compliance authorizations such as FedRAMP or DoD ATO.

How Anchore STIG for Container Images Works

Anchore STIG for Container Images automates the entire STIG evaluation process through seamless integration with Cinc (the open source distribution of the Chef IaC system) and AnchoreCTL orchestration. The solution provides a four-step workflow that transforms manual compliance checking into an automated pipeline component:

  1. Install Cinc on your scanning host alongside AnchoreCTL
  2. Extract supported STIG profiles
$ anchorectl image stig write-profiles  [--include-experimental]
  3. Execute STIG checks using specific profiles through AnchoreCTL commands
$ anchorectl image stig run <FULLY_QUALIFIED_URL_TO_CONTAINER_IMAGE> \
--stig-profile ./<DIRECTORY_PATH_TO_EXTRACTED_STIG_PROFILES>/ubi8/anchore-ubi8-disa-stig-1.0.0.tar.gz
  4. Upload results directly to Anchore Enterprise for centralized management and reporting

The add-on supports comprehensive profiles for RHEL 8/9 and Ubuntu 22.04/24.04, with tech preview profiles available for critical applications including: 

  • PostgreSQL
  • Apache Tomcat
  • MongoDB Enterprise
  • Java Runtime Environment

New API endpoints provide full programmatic access to STIG evaluations, while the integrated policy gate ensures that only compliant images can progress through your deployment pipeline. For example, a gate can evaluate whether a STIG evaluation exists for a container and whether that evaluation is older than a specified number of days.

Anchore Enterprise One-Time Scan: Lightweight Security for Agile Workflows

Not every security scanning scenario requires persistent data storage in your Anchore Enterprise deployment. Modern DevSecOps teams often need quick vulnerability assessments for third-party images, temporary validation in CI/CD pipelines, or rapid security triage during incident response. Traditional scanning approaches that persist all data can create unnecessary overhead for these ephemeral use-cases.

CI/CD pipeline flexibility is particularly important for organizations operating at scale, where resource optimization and scanning speed directly impact development velocity. Teams need the ability to perform comprehensive security evaluation without the infrastructure overhead of full data persistence, especially when assessing external images or performing one-off security validations.

Why and Where to Utilize the One-Time Scan Feature

One-Time Scan significantly reduces scanning overhead by eliminating the storage and processing requirements associated with persistent image data. This approach is particularly valuable for organizations scanning large numbers of ephemeral workloads or performing frequent one-off assessments.

Primary Use Cases:

  • CI/CD Pipeline Validation: Quick security checks for ephemeral build environments
  • Third-Party Image Assessment: Evaluate external images without adding them to your inventory
  • Incident Response: Rapid vulnerability assessment during security investigations
  • Compliance Verification: Policy evaluation for images that don’t require long-term tracking

The stateless operation of One-Time Scan provides faster scanning results for time-sensitive workflows, while the new stateless_sbom_evaluation metric enables teams to track usage patterns and optimize their scanning strategies. This flexibility supports diverse operational requirements without compromising security analysis quality.

How One-Time Scan Works

Anchore Enterprise’s One-Time Scan feature introduces a stateless scanning capability that delivers comprehensive vulnerability assessment and policy evaluation without persisting data in your Anchore Enterprise deployment. The feature provides a single API endpoint (POST /v2/scan) that accepts image references and returns complete security analysis results in real-time.

The stateless operation includes full policy evaluation against your active policy bundles, specifically leveraging Anchore Secure’s gates for vulnerabilities and secret scans. This ensures that even temporary scans benefit from your organization’s established security policies and risk thresholds. 
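For API-driven workflows, the endpoint can be called directly. The sketch below assumes basic-auth credentials and an illustrative request body; the exact field names come from your deployment’s API reference rather than this example:

# Stateless scan of an image reference via the One-Time Scan endpoint
# (hostname, credentials, and the request body field are placeholders).
$ curl -s -u "admin:$ANCHORE_ADMIN_PASSWORD" \
    -H "Content-Type: application/json" \
    -d '{"image_reference": "python:latest"}' \
    "https://anchore.example.com/v2/scan"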

For CLI-based workflows, the new AnchoreCTL command anchorectl image one-time-scan <image> provides immediate access to stateless scanning capabilities.

$ anchorectl image one-time-scan python:latest --from registry
 ✔ Completed one time scan                                                                                                                             python:latest
Tag: python:latest
Digest: sha256:238379aacf40f83bfec1aa261924a463a91564b85fbbb97c9a96d44dc23bebe7
Policy ID: anchore_secure_default
Last Evaluation: 2025-07-08T14:29:47Z
Evaluation: pass
Final Action: warn
Reason: policy_evaluation

Policy Evaluation Details:
┌─────────────────┬─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ GATE            │ TRIGGER │ DESCRIPTION                                                                                                                                                                                   │ ACTION │ RECOMMENDATION                                                                                                                                                                   │
├─────────────────┼─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ vulnerabilities │ package │ HIGH Vulnerability found in os package type (dpkg) - libdjvulibre-text-3.5.28-2 (fixed in: 3.5.28-2.1~deb12u1)(CVE-2025-53367 - https://security-tracker.debian.org/tracker/CVE-2025-53367)   │ warn   │ Packages with low, medium, and high vulnerabilities present can be upgraded to resolve these findings. If upgrading is not possible the finding should be added to an allowlist. │
│ vulnerabilities │ package │ HIGH Vulnerability found in os package type (dpkg) - libdjvulibre21-3.5.28-2+b1 (fixed in: 3.5.28-2.1~deb12u1)(CVE-2025-53367 - https://security-tracker.debian.org/tracker/CVE-2025-53367)   │ warn   │ Packages with low, medium, and high vulnerabilities present can be upgraded to resolve these findings. If upgrading is not possible the finding should be added to an allowlist. │
│ vulnerabilities │ package │ MEDIUM Vulnerability found in non-os package type (binary) - /usr/local/bin/python3.13 (fixed in: 3.14.0b3)(CVE-2025-6069 - https://nvd.nist.gov/vuln/detail/CVE-2025-6069)                   │ warn   │ Packages with low, medium, and high vulnerabilities present can be upgraded to resolve these findings. If upgrading is not possible the finding should be added to an allowlist. │
│ vulnerabilities │ package │ HIGH Vulnerability found in os package type (dpkg) - libdjvulibre-dev-3.5.28-2+b1 (fixed in: 3.5.28-2.1~deb12u1)(CVE-2025-53367 - https://security-tracker.debian.org/tracker/CVE-2025-53367) │ warn   │ Packages with low, medium, and high vulnerabilities present can be upgraded to resolve these findings. If upgrading is not possible the finding should be added to an allowlist. │
└─────────────────┴─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

Upgrade to Anchore Enterprise 5.19

Anchore Enterprise 5.19 represents a significant advancement in container security automation, delivering the compliance capabilities and scanning flexibility that modern organizations require. The combination of automated STIG compliance and stateless scanning options enables teams to maintain rigorous security standards without creating a drag on development velocity.

Whether you’re pursuing FedRAMP authorization, managing compliance requirements in government environments, or simply need more flexible scanning options for your DevSecOps workflows, these new capabilities provide the foundation for scalable, automated container security.

Ready to upgrade?


Anchore Achieves AWS Security Competency & Launches Anchore Enterprise AMI

We are excited to announce two significant milestones that further strengthen our partnership with Amazon Web Services (AWS):
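  • Anchore has achieved the AWS Security Competency, and
  • Anchore Enterprise is now available as a pre-built Amazon Machine Image (AMI) on AWS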

These announcements represent another major step in Anchore and AWS’s deepening collaboration to help Fortune 2000 enterprises, federal agencies, and defense contractors secure their software supply chains.

AWS Security Competency: SBOM Leadership Validation

The AWS Security Competency validates what Anchore customers have known for many years — Anchore is ready to provide SBOM management, container security and automated compliance enforcement to Fortune 2000 enterprises, federal agencies, and defense contractors who require a bullet-proof software supply chain.

This competency represents technical validation of Anchore’s SBOM-powered security capabilities through a rigorous AWS assessment of our solution architecture and customer success stories. AWS evaluated our platform’s ability to deliver comprehensive software supply chain transparency, vulnerability management, and automated compliance enforcement at enterprise scale.

Real-world validation comes from customers like:

Cisco Umbrella leveraged Anchore’s SBOM-powered container security to accelerate meeting all six FedRAMP requirements. They deployed Anchore into a high-security environment that had to meet stringent compliance standards, including STIG compliance for Amazon EC2 nodes backing their Amazon EKS deployment.

DoD Iron Bank adopted Anchore for SBOM-powered container security and DoD software factory compliance, validating our platform’s ability to meet the most demanding security requirements in government and defense environments.

For decision makers, the AWS Security Competency provides confidence in solution reliability and seamless AWS integration. It streamlines procurement through verified partner status and ensures enhanced support through our strengthened AWS partnership.

Anchore Enterprise Cloud Image: Simplifying Deployment with an AWS AMI

The Anchore Enterprise Cloud Image represents a pre-built, virtual appliance deployment option that serves as an alternative to the popular Kubernetes Helm chart deployments for use-cases that require a lightweight, batteries-included integration. This isn’t about reducing features—it’s about eliminating complexity where Kubernetes expertise isn’t readily available or necessary.

Technical advantages include:

Dramatically reduced deployment complexity through a ready-to-run Amazon Machine Image (AMI) that eliminates the need for a PhD in Kubernetes. The AMI delivers optimized performance on select AWS instance types, with deterministic performance guidelines for better capacity planning and cost management.

Anchore’s interactive Cloud Image Manager provides guided setup that intelligently assesses your AWS environment, ensures correct resource provisioning, and automates installation with appropriate configurations. Integrated compliance policy packs for NIST, SSDF and FedRAMP frameworks ensure your container security posture aligns with regulatory requirements from day one.

Business benefits that matter to leadership:

Faster time-to-value for container security initiatives means your teams can focus on securing containers rather than managing infrastructure. Reduced operational overhead frees up resources for strategic security initiatives rather than deployment troubleshooting.

This prescriptive solution is ideal for teams without extensive Kubernetes expertise, proof-of-concept deployments, and smaller-scale implementations that need enterprise-grade security without enterprise-level complexity.

Strengthening Our AWS Partnership for Customer Success

These milestones build on our growing AWS collaboration, including our AWS Marketplace availability and ISV Accelerate Program membership. This represents our broader commitment to enterprise and public sector customers who rely on AWS infrastructure for their most critical applications.

The joint value proposition is clear: seamless AWS infrastructure integration combined with enhanced support through our combined AWS and Anchore expertise. We’re addressing the full spectrum of deployment preferences, whether you need the scale-out capabilities of EKS or the simplified deployment of our EC2 AMI option.

This partnership strengthening directly benefits our mutual customers through validated integration patterns, streamlined support channels, and deployment flexibility that matches your team’s expertise and requirements.

Moving Forward Together

The combination of AWS Security Competency validation and simplified AMI deployment options demonstrates our commitment to comprehensive support for enterprise and government security requirements. These milestones strengthen our partnership and enable customer success at scale, whether you’re securing containers for a commercial enterprise or meeting compliance requirements for federal agencies.

Our deepening AWS partnership ensures you have the deployment flexibility, validated security capabilities, and enterprise support needed to secure your software supply chain with confidence.

Ready to get started?

  • For AMI deployment: Contact our sales team for Cloud Image deployment consultation tailored to your AWS environment
  • For general inquiries: Connect with our team to discuss how AWS Security Competency benefits and deployment options can accelerate your software supply chain security initiatives

SPDX 3.0: From Software Inventory to System Risk Orchestration

The next phase of software supply chain security isn’t about better software supply chain inventory management—it’s the realization that distributed, micro-services architecture expands an application’s “supply chain” beyond the walls of isolated, monolithic containers to a dynamic graph of interconnected services working in concert.

Kate Stewart, co-founder of SPDX and one of the most influential voices in software supply chain security, discovered this firsthand while developing SPDX 3.0. Users were importing SBOMs into databases and asking interconnection questions that the legacy format couldn’t answer. Her key insight drove the development of SPDX 3.0: “It’s more than just software now, it really is a system.” The goal became transforming the SBOM format into a graph-native data structure that captures complex interdependencies between constellations of services.

In a recent interview with Anchore’s Director of Developer Relations on the Future of SBOMs, Stewart shared insights shaped by decades of collaboration in the trenches with SBOM users and by sculpting SBOM standards around ground-truth needs. Her perspectives are uniquely tailored to illuminate the challenge of adapting traditional security models designed for fully self-contained applications to the world of distributed micro-services architectures.

The architectural evolution from monolithic, containerized application to interconnected constellations of single-purpose services doesn’t just change how software is built—it fundamentally changes what we’re trying to secure.


Learn about how SBOMs have adapted to the world of micro-services architecture with the co-founder of SPDX and SBOMs.


When Software Became Systems

In the containerized monolith era, traditional SBOMs (think: < SPDX 2.2) were perfectly suited for their purpose. They were designed for self-contained applications with clear boundaries where everything needed was packaged together. Risk assessment was straightforward: audit the container, secure the application.

Thing to scan 👇
================

+-------------------------------------------------+
|                    Container                    |
|  +-------------------------------------------+  |
|  |          Monolithic Application           |  |
|  |  +----------+  +---------+  +----------+  |  |
|  |  | Frontend |  | Backend |  | Database |  |  |
|  |  +----------+  +---------+  +----------+  |  |
|  +-------------------------------------------+  |
+-------------------------------------------------+

       [ User ]
          |
          v
    +------------+
    |  Frontend  |  (container)      👈 Thing...
    +------------+
          |
          v
    +--------------+
    |  API Server  |  (container)    👈 [s]...
    +--------------+
        /    \
       v      v
+----------+ +--------+
| Auth Svc | | Orders | (container)  👈 to...
+----------+ +--------+
       \      /
        v    v
    +------------+
    |  Database  | (container)       👈 scan.
    +------------+

But the distributed architecture movement changed everything. Cloud-native architectures spread components across multiple domains. Microservices created interdependencies that span networks, data stores, and third-party services. AI systems introduced entirely new categories of components including training data, model pipelines, and inference endpoints. Suddenly, the neat boundaries of traditional applications dissolved into complex webs of interconnected services.

Even with this evolution in software systems, the fundamental question of software supply chain security hasn’t changed. Security teams still need to know “what showed up; at what point in time” and they need to answer that question at scale. The new challenge is that system complexity has exploded and the legacy SBOM standards weren’t prepared for it.

Supply chain risk now flows through connections, not just components. Understanding what you’re securing requires mapping relationships, not just cataloging parts.
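To make “mapping relationships, not just cataloging parts” concrete, here is a minimal sketch in Python. The service names, components, and call graph are entirely hypothetical; the point is that once per-service SBOM inventories are joined to the edges between services, a single vulnerable component can be traced to every upstream service in its blast radius.

# Hypothetical per-service SBOM inventories plus the service-to-service call graph.
from collections import deque

service_components = {
    "frontend": {"react@18.2.0"},
    "api-server": {"express@4.18.2"},
    "auth-svc": {"jsonwebtoken@9.0.0"},
    "orders": {"postgres-driver@8.11.0"},
}

# Directed edges: caller -> callee.
calls = {
    "frontend": ["api-server"],
    "api-server": ["auth-svc", "orders"],
    "auth-svc": [],
    "orders": [],
}

def blast_radius(vulnerable_component):
    """Services containing the component, plus every service upstream of them."""
    affected = {s for s, comps in service_components.items() if vulnerable_component in comps}
    # Invert the call graph: anyone who (transitively) calls an affected service
    # is exposed to whatever that service returns.
    reverse = {s: [] for s in calls}
    for caller, callees in calls.items():
        for callee in callees:
            reverse[callee].append(caller)
    queue = deque(affected)
    while queue:
        for upstream in reverse[queue.popleft()]:
            if upstream not in affected:
                affected.add(upstream)
                queue.append(upstream)
    return affected

print(blast_radius("jsonwebtoken@9.0.0"))  # e.g. {'auth-svc', 'api-server', 'frontend'}

A component-level scan alone would flag auth-svc; the relationship graph is what reveals that api-server and frontend sit in the same blast radius.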

But if the structure of risk has changed, so has the nature of vulnerabilities themselves.

Where Tomorrow’s Vulnerabilities Will Hide

The next generation of critical vulnerabilities won’t just be in code—they’ll emerge from the connections and interactions between complex webs of software services.

Traditional security models relied on a castle-and-moat approach: scan containers at build time, stamp them for clearance, and trust them within the perimeter. But distributed architectures expose the fundamental flaw in this thinking. When applications are decomposed into atomic services, the holistic application context is lost. A low-severity vulnerability in one component, allowlisted for the sake of delivery speed, can still be exploited to alter a payload that is benign to the exploited component but disastrous to a downstream one.

The shift to interconnected services demands a zero-trust security paradigm where each interaction between services requires the same level of assurance as initial deployment. Point-in-time container scans can’t account for the dynamic nature of service-to-service communication, configuration changes, or the emergence of new attack vectors through legitimate service interactions.

To achieve this new security paradigm, SPDX needed a facelift. An SBOM that stores the entire application context across independent services is sometimes called a SaaSBOM. SPDX 3.0 implements this idea through a new concept called profiles: an application profile can be composed from a collection of individual service profiles, while operations and infrastructure profiles capture data about the build and runtime environments.
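As a rough illustration of the profile idea (not the actual SPDX 3.0 JSON-LD serialization, and with entirely hypothetical service and component names), an application-level “system SBOM” composes per-service inventories with runtime context and the relationships between them:

payments_service = {
    "name": "payments-svc",
    "components": ["flask@3.0.0", "requests@2.31.0"],
}
ledger_service = {
    "name": "ledger-svc",
    "components": ["spring-boot@3.2.1", "postgresql-jdbc@42.7.1"],
}
runtime_profile = {
    "cluster": "prod-eu-west-1",
    "base_images": ["python:3.12-slim", "eclipse-temurin:21-jre"],
}

application_sbom = {
    "name": "billing-app",
    "services": [payments_service, ledger_service],
    "relationships": [{"from": "payments-svc", "to": "ledger-svc", "type": "calls"}],
    "runtime": runtime_profile,
}

The structure that matters is the graph: the services, the environments they run in, and the edges between them, all queryable as one document.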

Your risk surface isn’t just your code anymore; it’s your entire operational ecosystem, from hardware component suppliers to data providers to third-party cloud services.

Understanding these expanding risks requires a fundamental shift from periodic snapshots (i.e., castle-and-moat posture) to continuous intelligence (i.e., zero-trust posture).

From Periodic Audits to Continuous Risk Intelligence

The shift to zero-trust architectures requires more than just changing security policies—it demands a fundamental reimagining of how we monitor and verify the safety of interconnected systems in real-time.

Traditional compliance operates on snapshot thinking: quarterly audits, annual assessments, point-in-time inventories. This approach worked when applications were monolithic containers that changed infrequently. But when services communicate continuously across network boundaries, static assessments become obsolete before they’re complete. By the time audit results are available, dozens of deployments, configuration changes, and scaling events have already altered the system’s risk profile.

Kate Stewart’s vision of “continuous compliance” addresses this fundamental mismatch between static assessment and dynamic systems. System BOMs capture dependencies and their relationships in real time as they evolve, enabling automated policy enforcement that can keep pace with DevOps-speed development. This continuous visibility means teams can verify that each service-to-service interaction maintains the same security assurance as the initial deployment, fulfilling the zero-trust requirement.

The operational transformation is profound. Teams can understand blast radius immediately when incidents occur, tracing impact through the actual dependency graph rather than outdated documentation. Compliance verification happens inline with development pipelines rather than as a separate audit burden. Most importantly, security teams can identify and address misconfigurations or policy violations before they create exploitable vulnerabilities.

This evolution transforms security from a periodic checkpoint into continuous strategic intelligence, turning what was once a cost center into a competitive advantage that enables faster, safer innovation.

The Strategic Imperative—Why This Matters Now

Organizations that adapt to system-level visibility will have decisive advantages in risk management, compliance, and operational resilience as the regulatory and competitive landscape evolves.

The visibility problem remains foundational: you can’t secure what you can’t see. Traditional tools provide component-level visibility, but system-level risks only emerge through relationship mapping. Kate Stewart emphasizes this idea, noting that “safety is a system property.” If you want system-level guarantees about security or risk, seeing only the trees and not the forest won’t cut it.

Regulatory evolution is driving urgency around this transition. Emerging regulations and frameworks (e.g., EO 14028, the EU CRA, DORA, FedRAMP) increasingly focus on system-level accountability, making organizations liable for the security of entire systems, including interactions with trusted third parties. Evidence requirements are evolving from point-in-time documentation to continuously demonstrable evidence, as seen in initiatives like FedRAMP 20x. Audit expectations are moving toward continuous verification rather than periodic assessment.

Competitive differentiation emerges through comprehensive risk visibility that enables faster, safer innovation. Organizations achieve reduced time-to-market through automated compliance verification. Customer trust builds through demonstrable security posture. Operational resilience becomes a competitive moat in markets where system reliability determines business outcomes.

Business continuity integration represents perhaps the most significant strategic opportunity. Security risk management aligns naturally with business continuity planning. System understanding enables scenario planning and resilience testing. Risk intelligence feeds directly into business decision-making. Security transforms from a business inhibitor into an enabler of agility.

This isn’t just about security—it’s about business resilience and agility in an increasingly interconnected world.

The path forward requires both vision and practical implementation.

The Path Forward

The transition from SBOMs that describe software to SBOMs that describe systems represents more than technological evolution; it’s a fundamental shift in how we think about risk management in distributed systems.

Four key insights emerge from this evolution. 

  1. Architectural evolution demands corresponding security model evolution—the tools and approaches that worked for monoliths cannot secure distributed systems. 
  2. Risk flows through connections, requiring graph-based understanding that captures relationships and dependencies. 
  3. Continuous monitoring and compliance must replace periodic audits to match the pace of modern development and deployment. 
  4. System-level visibility becomes a competitive advantage for organizations that embrace it early.

Organizations that make this transition now will be positioned for success as distributed architectures become even more complex and regulatory requirements continue to evolve. The alternative—continuing to apply monolithic security thinking to distributed systems—becomes increasingly untenable.

The future of software supply chain security isn’t about better inventory—it’s about intelligent orchestration of system-wide risk management.


If you’re interested in how to make the transition from generating static software SBOMs to dynamic system SBOMs, check out Anchore SBOM or reach out to our team to schedule a demo.


Explore SBOM use-cases for almost any department of the enterprise and learn how to unlock enterprise value to make the most of your software supply chain.

WHITE PAPER | Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization

How to Respond When Your Customers Require an SBOM (and Even Write It Into the Contract!)

Your sales team just got off a call with a major prospect. The customer is asking for an SBOM—a software bill of materials—and they want it written directly into the contract. The request is escalated to the executive team and from there directly into your inbox. Maybe it’s a government agency responding to new federal mandates, a highly regulated enterprise implementing board-level security requirements, or a large EU-based SaaS buyer preparing for upcoming regulatory changes.

Suddenly, a deal that seemed straightforward now hinges on your ability to deliver comprehensive software supply chain transparency. If this scenario sounds familiar, you’re definitely not alone. SBOM requirements are increasing across industries, fueled by new regulations like the US Executive Order 14028 and the EU Cyber Resilience Act. For most software vendors, this represents entirely new territory where the stakes—revenue, reputation, and customer trust—are very real.

This development isn’t entirely negative news, however. Organizations that proactively build robust SBOM capabilities are discovering they’re not just avoiding lost deals—they’re actually building a significant competitive advantage. Early adopters consistently report faster sales cycles with security-conscious prospects and access to previously unreachable government contracts that require supply chain transparency. 

Don’t believe me? I’ve brought the receipts:

We’re seeing a lot of traction with data warehousing use-cases. Security is absolutely critical for these environments. Being able to bring an SBOM to the conversation at the very beginning completely changes the conversation and allows CISOs to say, ‘let’s give this a go’.

—CEO, API Management Vendor

Read the whole customer case study here.

This blog post will walk you through the critical steps needed to meet customer SBOM demands effectively, help you avoid costly implementation mistakes, and even show you how to turn compliance requirements into genuine business advantages.


5-Minute Decision Framework: Are SBOMs Urgent for Your Organization?

SBOM Urgency Thermometer

High urgency signals: Government prospects, enterprise RFPs mentioning SBOMs, existing customers asking software supply chain security questions, competitors promoting SBOM capabilities.

Medium urgency signals: Industry peers discussing SBOM strategies, security questionnaires becoming more detailed, procurement teams asking about vulnerability management.

Preparation signals: Strong CI/CD pipelines, good dependency management, existing security tooling, cross-functional project execution capability.

Red flags: Legacy systems with unknown dependencies, manual build processes, siloed teams, limited engineering bandwidth.


Why Customers Are Demanding SBOMs—And What That Means For You

SBOMs aren’t a passing trend. In fact, regulatory pressure from governments around the world is steadily driving SBOM adoption outside of the public sector. These new regulations have forced vendors, especially those selling to the US government and in the EU, to scrutinize what’s in their software.

  • US Government: EO 14028 requires federal agencies to collect SBOMs from vendors
  • EU Enterprises: The EU Cyber Resilience Act (CRA) requires an SBOM for any enterprise that sells “products with software components” in the EU market
    • BUT won’t be fully enforced until December 2027—you still have time to get ahead of this one!
  • Highly regulated industries: defense (continuous ATO), healthcare (FDA approval), and finance (DORA, PCI DSS 4.0) all require SBOMs

But what are your customers really after? Most are looking for:

  • A clear, standardized inventory of what’s in your software (open source, third-party, proprietary)
  • Evidence you’re proactively remediating supply chain vulnerabilities
  • A baseline for risk assessment and future incident response

Some customers will have strict formats or frequent asks; others are just “checking the box.” It’s important to clarify what’s really required.

Decoding the Request for an SBOM: What’s Actually Required?

When a customer asks for an SBOM, don’t assume you know what they want. Here’s how to get clarity:

Ask these questions

  • Format: Do you require SPDX, CycloneDX, or will any standard SBOM format do?
  • Update Frequency: Is a one-time SBOM sufficient, or do you require continuous updates with every new release?
  • Depth: Do you want only direct dependencies, or transitive (all sub-dependencies) as well?
  • Delivery: How do you want to receive the SBOM—portal, API, email, physical media?

Minimum requirements

Most regulated buyers accept SPDX or CycloneDX formats as long as they meet the NTIA’s Minimum Elements. One SBOM per major release is typical, unless otherwise specified.
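If you want a quick sanity check before sending an SBOM out the door, a script along these lines can spot obvious gaps against the NTIA minimum elements. It assumes a CycloneDX-style JSON layout, and the exact field names may differ depending on your generator; treat it as a sketch, not a validator.

import json, sys

def check_ntia_minimum(path):
    sbom = json.load(open(path))
    problems = []
    meta = sbom.get("metadata", {})
    if not meta.get("timestamp"):
        problems.append("missing SBOM timestamp")
    if not (meta.get("authors") or meta.get("tools")):
        problems.append("missing SBOM author/tool information")
    for comp in sbom.get("components", []):
        label = f"{comp.get('name', '?')}@{comp.get('version', '?')}"
        if not comp.get("name") or not comp.get("version"):
            problems.append(f"{label}: missing name or version")
        if not (comp.get("purl") or comp.get("cpe")):
            problems.append(f"{label}: no unique identifier (purl/cpe)")
        if not comp.get("supplier", {}).get("name"):
            problems.append(f"{label}: missing supplier name")
    if not sbom.get("dependencies"):
        problems.append("no dependency relationships recorded")
    return problems

if __name__ == "__main__":
    issues = check_ntia_minimum(sys.argv[1])
    print("\n".join(issues) or "Passes the minimum-elements spot check")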

Red Flags

  • Unreasonably frequent update requests (e.g., “every nightly build”)
  • Requests for highly granular or proprietary information you can’t legally or safely disclose

Contract language examples

  • “Vendor shall provide an SBOM in SPDX or CycloneDX format at product release.”
  • “Vendor will update the SBOM within 30 days of any significant component change.”

Key Risks and Negotiation Tactics

The biggest risk? Overcommitting—contractually agreeing to deliver what you can’t. 

Contract negotiations around SBOM requirements present unique challenges that combine technical complexity with significant business risk. Understanding common problematic language and developing protective strategies prevents overcommitment and reduces legal exposure.

Here’s how to stay safe:

Risks

  • Operational: You lack a fully instrumented software development pipeline with integrated SBOM generation and can’t meet the promised update frequency.
  • Legal: You don’t have complete supply chain transparency and risk exposing proprietary or third-party code you’re not allowed to disclose.
  • Reputational: Missing deadlines or failing to deliver undermines customer trust.

Red flags in contracts

  1. Unlimited liability clauses for SBOM accuracy
  • 100% accurate SBOMs create impossible standards—no automated tool achieves this level of accuracy, and manual verification is prohibitively expensive
  2. Penalty clauses for incomplete or inaccurate SBOMs
  • You should be able to remediate mistakes in a reasonable timeframe
  3. Real-time or continuous SBOM update requirements ignoring practical development and release cycles
  4. Requirements for complete proprietary component disclosure
  • May violate third-party licensing agreements or expose competitive advantages
  5. No provision for IP protection
  • If you’re increasing their supply chain transparency, they need to reciprocate and protect your interests
  6. Vague standards (“must provide industry best-practice SBOMs” without specifics)

How to negotiate

Push back on frequency: 

“We can provide an updated SBOM at each major release, but not with every build.”

Standard delivery timelines should align with existing release cycles—quarterly updates for stable enterprise software, per-release delivery for rapidly evolving products.

Don’t roll over on accuracy:

“We can generate SBOMs automatically as part of our normal software development process, provide reasonable manual validation and correct any identified issues.”

Reasonable accuracy standards acknowledge tool limitations while demonstrating good faith effort.

Protect sensitive info: 

“SBOM details do not extend to proprietary components or components protected by confidentiality.”

Redact or omit sensitive components, and communicate this upfront.
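One way to honor that carve-out operationally is to strip marked components before delivery and record that the redaction happened. The sketch below assumes a CycloneDX-style JSON layout and a hypothetical internal naming prefix; adapt both to however your organization marks proprietary or confidentiality-protected components.

import json

PROPRIETARY_PREFIXES = ("com.example.internal",)  # hypothetical marker for internal code

def redact(sbom):
    kept, removed = [], 0
    for comp in sbom.get("components", []):
        identifier = comp.get("group", "") or comp.get("name", "")
        if identifier.startswith(PROPRIETARY_PREFIXES):
            removed += 1
            continue
        kept.append(comp)
    redacted = dict(sbom, components=kept)
    # Note the omission so it can be communicated upfront; where you record this
    # depends on the SBOM format and version you deliver.
    redacted["redacted_component_count"] = removed
    return redacted

with open("billing-app.cdx.json") as f:            # placeholder file names
    customer_copy = redact(json.load(f))
with open("billing-app.customer.cdx.json", "w") as f:
    json.dump(customer_copy, f, indent=2)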

Quick-Start: Fast Path to SBOM Compliance (for Resource-Constrained Teams)

You don’t need to boil the ocean. Here’s how to get started—fast:

First Five Moves

  1. Clarify the ask: Use the questions above to pin down what’s really required.
  2. Inventory your software: Identify what you build, ship, and what major dependencies you include.
  3. Choose your tooling:
  • For modern apps, consider open source tools (e.g., Syft) or commercial platforms (e.g., Anchore SBOM).
  • For legacy systems, some manual curation may be needed.
  4. Assign ownership: Clearly define who in engineering, security, or compliance is responsible.
  5. Pilot a single SBOM: Run a proof of concept for one release, review, and iterate (see the sketch after this list).
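Here is what that pilot can look like with the open source route, sketched in Python. It assumes the Syft CLI is installed and on your PATH, and the image name is a placeholder for whatever artifact you actually ship.

import subprocess

image = "registry.example.com/app:1.4.2"   # placeholder release artifact
output_file = "app-1.4.2.spdx.json"

# Generate an SPDX JSON SBOM for the image with Syft.
subprocess.run(["syft", image, "-o", f"spdx-json={output_file}"], check=True)
print(f"Wrote {output_file}; review it with security, legal, and the account team.")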

Pro tips:

  • Automate where possible (integrate SBOM tools into CI/CD).
  • Don’t over-engineer for the first ask—start with what you can support.

Handling legacy/complex systems:

Sometimes, a partial or high-level SBOM is enough. Communicate limitations honestly and document your rationale.

Efficient Operationalization: Making SBOMs Work in Your Workflow

When you’re ready to operationalize your SBOM initiative, there are four important topics to consider:

  1. Automate SBOM creation:
    Integrate tools into your build pipeline; trigger SBOM creation with each release.
  2. SBOM management:
    Store SBOMs in a central repository for easy search and analysis (see the sketch after this list).
  3. Version and change management:
    Update SBOMs when major dependencies or components change.
  4. Delivery methods:
    • Secure portal
    • Customer-specific API
    • Encrypted email attachment

This is also a good time to consider the build vs buy question. There are commercial options to solve this challenge if building a homegrown system would be a distraction to your company’s core mission.

Turning Compliance into Opportunity

SBOMs aren’t just a checkbox—they can help your business:

  • Win deals faster: “Having a ready SBOM helped us close with a major public sector buyer ahead of a competitor.”
  • Shorten security reviews: Automated SBOM delivery means less back-and-forth during customer due diligence.
  • Build trust: Demonstrate proactive risk management and transparency.

Consider featuring your SBOM readiness as a differentiator in sales and marketing materials.

SBOM Readiness Checklist


Have we clarified the customer’s actual SBOM requirements?

✅: Continue

❌: Send request back to customer account team with SBOM requirements

Do we know which SBOM format(s) are acceptable?

✅: Continue

❌: Send request back to customer account team with SBOM requirements

Have we inventoried all shipped software and dependencies?

✅: Continue

❌: Send to engineering to build a software supply chain inventory

Have we selected and tested an SBOM generation tool?

✅: Continue

❌: Send to engineering to select and integrate an SBOM generation tool into CI/CD pipeline

Do we have clear roles for SBOM creation, review, and delivery?

✅: Continue

❌: Work with legal, compliance, security and engineering to document SBOM workflow

Are our contractual obligations documented and achievable?

✅: Continue

❌: Work with legal to clarify and document obligations

Do we have a process for handling sensitive or proprietary code?

✅: You’re all good

❌: Work with engineering and security to identify sensitive or proprietary information and develop a redaction process

Conclusion: From Reactive to Strategic

SBOM requirements are here to stay—but meeting them doesn’t have to be painful or risky.

The most forward-thinking organizations are transforming SBOM compliance from a burden into a strategic advantage. By proactively developing robust SBOM capabilities, you’re not just checking a box—you’re positioning your company as a market leader in security maturity and transparency. As security expectations rise across all sectors, your investment in SBOM readiness can become a key differentiator, driving higher contract values and protecting your business against less-prepared competitors.

Ready to take the first step?

The SBOM Paradox: Why ‘Useless’ Today Means Essential Tomorrow

“Most SBOMs are barely valid, few meet minimum government requirements, and almost none are useful.”

Harsh. But this is still a common sentiment among SBOM users on LinkedIn. Software bills of materials (SBOMs) often feel like glorified packing slips: technically present but practically worthless.

Yet Kate Stewart, one of the most respected figures in open source, has dedicated over a decade of her career to SBOMs. As co-founder of SPDX and a Linux Foundation Fellow, she’s guided this standard from its inception in 2010 through multiple evolutions. Why would someone of her caliber pour years into something supposedly “useless”?

Because Stewart, the Linux Foundation, and the legion of SPDX contributors see something the critics don’t: today’s limitations aren’t a failure of vision. They’re a deliberate strategy, and a foundation for the growing complexity of tomorrow’s software supply chain. They’re following the classic startup playbook: nail a minimal use-case first, achieve critical mass, then expand horizontally. The “uselessness” critics complain about? That’s a feature, not a bug.

Death by a Thousand Cuts

To understand where we’re headed, we need to start where it all began: back in 2009 with Kate and a few of her key software architects at Freescale Semiconductor spending their weekends manually scanning software packages for licenses before the launch of a new semiconductor chip.

Stewart and her team faced what seemed like a manageable challenge—tracking open source software (OSS) licenses for roughly 500 dependencies. But as she recalls, “It was death by a thousand cuts.” Every weekend, they’d pore over packages, hunting for license information, trying to avoid the legal landmines hidden in their newest chip’s software supply chain.

The real shock came from discovering how naive their assumptions were. “Everyone assumes the top-level license is all there is,” Stewart explains, “but surprise!” Buried deep in transitive dependencies—the dependencies of dependencies—were licenses that could torpedo entire projects. GPL code hidden three layers deep could force a proprietary product open source. MIT licenses with unusual clauses could create attribution nightmares.

This wasn’t just Freescale’s problem. Across the industry, companies were hemorrhaging engineering hours on manual license compliance.

The Counterintuitive Choice

Here’s where the story takes an unexpected turn. When the Linux Foundation’s FOSSBazaar working group came together to design a solution, they made a choice that still frustrates people today: they went minimal. Radically minimal.

The working group, including Stewart and other industry experts, envisioned SBOMs as “software metadata containers”: infinitely expandable vessels for any information about software components. The technology could support hashing, cryptographic attestations, vulnerability data, quality metrics, and more. But instead of trying to predict every potential use-case, they chose to pare the original SPDX spec down to its essentials.

Stewart knew that removing these features would make SBOMs “appear” almost useless for any purpose. So why did they proceed?

The answer lies in a philosophy that would define SBOM’s entire evolution:

“[We framed] SBOMs as simply an “ingredients list”…but there’s a lot more information and metadata that you can annotate and document to open up significant new use-cases. [The additional use-cases are] really powerful BUT we needed to start with the minimum viable definition.”

The goal wasn’t to solve every problem on day one—it was to get everyone speaking the same language. They chose adoption over the overwhelming complexity of a fully featured standard.

Why the ‘Useless’ Jab Persists

Because SPDX launched with a minimal definition to encourage broad adoption and make the concept approachable, the industry evaluated it just as minimally, seeing SBOMs as simple ingredient lists rather than an extensible framework. The simplicity of the standard made it easier to grasp, but also easier to dismiss.

Today’s critics have valid points:

  • Most SBOMs barely meet government minimums
  • They’re treated as static documents, generated once and forgotten
  • Organizations create them purely for compliance, extracting zero operational value
  • The tools generating them produce inconsistent, often invalid outputs

But here’s what the critics miss: SBOMs aren’t truly static documents—at least, not in practice. They’re more like Git version-controlled files: static snapshots that form a dynamic record over time. Each one captures the meta state of an application at a given moment, but their value emerges from their evolution. As Stewart emphasizes, “Every time you apply a security fix you are revving your package. A new revision needs a new SBOM.” Just as Git commits accumulate to form a living history of a codebase, SBOMs should accumulate and evolve to reflect the ongoing lifecycle of an application.
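In practice, “a new revision needs a new SBOM” can be as simple as naming each generated SBOM after the revision that produced it, so snapshots accumulate alongside the code history. A minimal sketch, assuming git and the Syft CLI are installed and that the paths are placeholders:

import os
import subprocess

os.makedirs("sboms", exist_ok=True)

revision = subprocess.run(
    ["git", "describe", "--always", "--dirty"],
    capture_output=True, text=True, check=True,
).stdout.strip()

# One SBOM per revision: sboms/app-<revision>.spdx.json
subprocess.run(["syft", "dir:.", "-o", f"spdx-json=sboms/app-{revision}.spdx.json"], check=True)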

The perception problem is real, but it’s also temporary.

The HTTP Playbook

To understand why the minimal SBOMs strategy is powerful, consider the evolution of HTTP.

In 1991, the original HTTP/0.9 protocol could only request a document using a GET method and receive raw HTML bytes in return. There were no status codes, no headers, and no extensibility. Critics at the time leveled familiar critiques against the fledgling protocol—”barely functional”, “useless”, etc. But that simplicity was its genius. It was a minimum viable definition that was easy to implement and rapidly adopted. 

And because it was adopted, it grew and evolved.

Today, HTTP headers handle:

  • Security policies (Content‑Security‑Policy, Strict‑Transport‑Security)
  • Performance optimization (caching, compression)
  • State management (cookies and session handling)
  • Authentication and authorization
  • Client hints and feature detection

Nobody in 1991 imagined we’d use HTTP headers to prevent cross‑site scripting attacks or optimize mobile performance. But the extensible design made it possible.

SBOMs are following the exact same playbook. The industry expected them to solve license management—the original Package Facts vision. Instead, the killer app turned out to be vulnerability management, driven by the explosion of software supply chain attacks like SolarWinds and Log4j.

“SPDX has grown use‑case by use‑case,” Stewart notes. And each new use-case doesn’t just add features—it enables entirely new categories of applications.

SBOMs today are where HTTP was in 1991—functionally limited but primed for explosion.

The Expansion Is Already Here

The evolution from SPDX 2.x to 3.0 proves this strategy is working. The changes aren’t incremental—they’re transformational:

From Documents to Graphs: SPDX 3.0 abandons the monolithic document model for a knowledge graph model. Instead of one big file, you have interconnected metadata that can be queried, analyzed, and visualized as a network.

From Software to Systems: The new specification handles…

  • Service profiles for cloud infrastructure
  • AI model and dataset profiles (tracking what data trained your models)
  • Hardware BOMs for IoT and embedded systems
  • Build profiles that cryptographically link source to binary
  • End-of-life metadata for dependency lifecycle management

Real-World Implementations: This isn’t theoretical. The Yocto project already generates build-native SBOMs. The Zephyr project produces three interlinked SBOMs:

  1. Source SBOM for the Zephyr RTOS itself
  2. Source SBOM for your application
  3. Build SBOM that cryptographically links everything together

These implementations show SBOMs evolving from compliance checkboxes to operational necessities.

The Endgame: Transparency at Scale

Kate Stewart summarizes the vision in seven words: “Transparency is the path to minimizing risk.”

But transparency alone isn’t valuable; it’s what transparency enables that matters. When every component in your software supply chain has rich, queryable metadata, transparency becomes actionable.

The platform effect is already kicking in. More adoption drives more use-cases. More use-cases drive better tooling. Better tooling drives more adoption. It’s the same virtuous cycle that turned HTTP from a simple network protocol into the nervous system of the web.


Explore SBOM use-cases for almost any department of the enterprise and learn how to unlock enterprise value to make the most of your software supply chain.

WHITE PAPER | Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization

Playing the Long Game

The critiques of SBOMs as they are today suffer from a failure of imagination. Yes, they’re minimal. Yes, they’re often poorly implemented. Yes, they feel like “compliance theater”. All true.

The founders of SPDX made a calculated bet: it’s better to have adoption of a simple but potentially “useless” standard that can evolve than to have a perfect standard that nobody uses. By starting small, they avoided the fate of countless over-engineered standards that died in committee.

Now, with the cold start overcome and adoption growing, the real expansion begins. As software supply chains grow more complex—incorporating AI models, IoT devices, and cloud services—the metadata infrastructure to manage them must evolve as well.

The teams generating “barely valid” SBOMs today are building the muscle memory and tooling that will power tomorrow’s software transparency infrastructure. Every “useless” SBOM is a vote for an open, transparent, secure software ecosystem.

The paradox resolves itself: SBOMs are useless today precisely so they can become essential tomorrow.


Learn about SBOMs, how they came to be and how they are used to enable valuable use-cases for modern software.

Understanding SBOMS: Deep Dive with Kate Stewart

False Positives and False Negatives in Vulnerability Scanning: Lessons from the Trenches

When Good Scanners Flag Bad Results

Imagine this: Friday afternoon, your deployment pipeline runs smoothly, tests pass, and you’re ready to push that new release to production. Then suddenly: BEEP BEEP BEEP – your vulnerability scanner lights up like a Christmas tree: “CRITICAL VULNERABILITY DETECTED!”

Your heart sinks. Is it a legitimate security concern requiring immediate action, or just another false positive that will consume your weekend? If you’ve worked in DevSecOps for over five minutes, you know this scenario all too well.

False positives and false negatives are the yin and yang of vulnerability scanning – equally problematic but in opposite ways. False positives cry wolf when there’s no real threat, leading to alert fatigue and wasted resources. False negatives are the silent killers, allowing actual vulnerabilities to slip through undetected. Both undermine confidence in your security tooling.

At Anchore, we’ve been battling these issues alongside our community, and the GitHub issues for our open source scanner, Grype, tell quite a story. In this post, we’ll dissect real-world examples of false results, explain their root causes, and show how vulnerability scanning has evolved to become more accurate over time.

The Curious Case of Cross-Ecosystem Confusion

One of the most common causes of false positives is “cross-ecosystem confusion.” This happens when a vulnerability scanner mistakenly applies a vulnerability from one ecosystem to a different but similarly named package in another ecosystem.

Take the case of Google’s Protobuf libraries. In early 2023, Grype flagged Go applications using google.golang.org/protobuf as vulnerable to CVE-2015-5237 and CVE-2021-22570, both of which affect the C++ version of Protobuf.

As one frustrated user commented in Issue #1179:

“I was just bitten by the CVEs affecting the C++ version of protobuf when I’m using the Go package. Arguably, it shouldn’t even be included on those CVEs in Github because it’s a completely different code base…”

This user wasn’t alone. Looking at the data, we found a whopping 44 instances of these cross-ecosystem false positives across various projects, affecting everything from etcd to Prometheus to kubectl.

The root cause? CPE-based vulnerability matching. The Common Platform Enumeration (CPE) system, while standardized, often lacks the granularity needed to distinguish between different implementations of similarly named software.
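A toy illustration of the problem (the identifier strings are approximations, not authoritative records): reduce two very different packages to CPE vendor/product pairs and they collide, while ecosystem-aware identifiers such as purls keep them apart.

def cpe_vendor_product(cpe):
    parts = cpe.split(":")
    return parts[3], parts[4]  # vendor, product

cpp_library = "cpe:2.3:a:google:protobuf:3.20.0:*:*:*:*:*:*:*"
go_module_as_cpe = "cpe:2.3:a:google:protobuf:1.28.0:*:*:*:*:*:*:*"

# Naive CPE matching: the Go module "looks like" the vulnerable C++ library.
print(cpe_vendor_product(cpp_library) == cpe_vendor_product(go_module_as_cpe))  # True

# Ecosystem-aware identifiers do not collide:
cpp_purl = "pkg:conan/protobuf@3.20.0"
go_purl = "pkg:golang/google.golang.org/protobuf@v1.28.0"
print(cpp_purl.split("/")[0] == go_purl.split("/")[0])  # False: pkg:conan vs pkg:golang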

When Binary Isn’t So Binary: The System Package Conundrum

Another fascinating case study comes from Issue #2527, where Grype reported CVE-2022-1271 for the gzip utility on Ubuntu 22.04 despite the package being patched.

The problem stemmed from how Linux distributions like Ubuntu handle symbolic links between /bin and /usr/bin. The package manager knew the file was part of the gzip package, but Syft (Grype’s companion tool for generating SBOMs) was identifying the binary separately without connecting it to its parent package.

As Grype contributor Alex Goodman explained during a live stream:

“This issue was related to how Syft handled symlinks, particularly with the ‘usr merge’ in some Linux distributions. Syft wasn’t correctly following symlinks in parent directories when associating files with their Debian packages.”

This case is particularly interesting because it highlights the complex relationship between package managers and the actual files on disk. Even when a vulnerability is properly patched in a package, the scanner might still flag the binary if it doesn’t correctly associate it.

The .NET Parent-Child Relationship Drama

.NET developers will appreciate this next one. In Issue #1693, a user reported that Grype wasn’t detecting the GHSA-98g6-xh36-x2p7 vulnerability in System.Data.SqlClient version 4.8.5.

The issue was related to how .NET packages are cataloged. Syft was finding the .NET assemblies and reporting their assembly versions (like 4.700.22.51706), but these don’t align with the NuGet package versions (4.8.5) used in vulnerability databases.

A contributor demonstrated:

$ grype -q dir:.
✔ Vulnerability DB                [no update available]
✔ Indexed file system             /Users/wagoodman/scratch/grype-1693
✔ Cataloged contents              500f014f33608c18
  ├── ✔ Packages                  [1 packages]
  └── ✔ Executables               [0 executables]
✔ Scanned for vulnerabilities     [0 vulnerability matches]

NAME                   INSTALLED  FIXED-IN  TYPE    VULNERABILITY        SEVERITY
System.Data.SqlClient  4.8.5      4.8.6     dotnet  GHSA-98g6-xh36-x2p7  High

This issue highlights the challenges of correctly identifying artifacts across different packaging systems, especially when version information is stored or represented differently.

Goodbye CPE, Hello GHSA: The Evolution of Matching

If there’s a hero in these tales of false results, it’s the shift from CPE-based matching to more ecosystem-aware approaches. In 2023, we published a blog post, “Say Goodbye to False Positives,” announcing a significant change in Grype’s approach.

As Keith Zantow explained:

“After experimenting with a number of options for improving vulnerability matching, ultimately one of the simplest solutions proved most effective: stop matching with CPEs.”

Instead, Grype primarily relies on the GitHub Advisory Database (GHSA) for vulnerability data. This change led to dramatic improvements:

“In our set of test data, we have been able to reduce false positive matches by 2,000+, while only seeing 11 false negatives.”

That’s a trade-off most security teams would gladly accept! The shift to GHSA-based matching also brought another significant benefit: community involvement in correcting vulnerability data.

Practical Strategies for Managing False Results

Based on our experiences and community feedback, here are some practical strategies for dealing with false results in vulnerability scanning:

  • Use a quality gate in your CI/CD pipeline: Similar to Grype’s quality gate, which compares results against manually labeled vulnerabilities, you can create a baseline of known issues to avoid regression.
  • Customize matching behavior: Modern vulnerability scanners like Grype allow you to adjust matching behavior through configuration. For instance, you can modify CPE matching for specific ecosystems:
   match:
     java:
       using-cpes: false
     python:
       using-cpes: true
  • Create ignore rules for known false positives: When all else fails, explicitly ignore known false positives. Grype supports this through configuration:
   ignore:
     - vulnerability: CVE-2022-1271
       fix-state: unknown
       package:
         type: binary
         version: 18.17.1
  • Contribute upstream: We believe the best solution is often to fix the data at its source. This is not a consistent practice across the industry. However, as one contributor noted in Issue #773:

“Since we use GHSA now, it’s possible for users to seek to correct the data by raising an issue or PR against https://github.com/github/advisory-database.”

Conclusion: The Never-Ending Quest for Accuracy

The battle against false results in vulnerability scanning is never truly over. Scanners must continuously adapt as software ecosystems evolve and new packaging systems emerge.

The good news is that we’re making substantial progress. By analyzing the closed issues in the Grype repository over the past 12 months, we can see that the community has successfully addressed dozens of false-positive patterns affecting hundreds of real-world applications.

In the immortal words of one relieved user after we fixed a particularly vexing set of false positives: “OMG. This is my favorite GH issue ever now. Great work to the grype team. Holy cow! 🐮 I’m really impressed.”

At Anchore, we remain committed to this quest for accuracy. After all, vulnerability scanning is only helpful if you can trust the results. Whether you’re using our open-source tools like Grype and Syft or Anchore Enterprise, know that each false positive you report helps improve the system for everyone.

So the next time your vulnerability scanner lights up like a Christmas tree on Friday afternoon, remember: you’re not alone in this battle, and the tools are improving daily. And who knows? Maybe it’s a real vulnerability this time, and you’ll be the hero who saved the day!


Are you struggling with false positives or false negatives in your vulnerability scanning? Share your experiences on our Discourse, and report any issues on GitHub. And if you’re looking for a way to manage your SBOMs and vulnerability findings at scale, check out Anchore Enterprise.


SBOMs as the Crossroad of the Software Supply Chain: Anchore Learning Week  (Day 5)

Welcome to the final installment in our 5-part series on Software Bills of Materials (SBOMs). Throughout this series, we’ve explored SBOM fundamentals, step-by-step SBOM generation, DevOps-scale SBOM management, and strategic insights from industry experts.

Now, we’ll examine how SBOMs intersect with various disciplines across the software ecosystem.

SBOMs don’t exist in isolation—they’re part of a broader landscape of software development, security, and compliance practices. Understanding these intersections is crucial for organizations looking to maximize the value of their SBOM initiatives.

Regulatory Compliance and SBOMs: Global SBOM Mandates

As regulations increasingly mandate SBOMs, staying informed about compliance requirements is crucial for software businesses.

  • The US was the first-mover in the “mandatory SBOM for securing software supply chains” movement with the White House’s Executive Order (EO) 14028 impacting enterprises that do business with the US federal government
  • The EU Cyber Resilience Act (CRA) was the fast follower of the movement but with a much larger scope. Any company selling software in the EU must maintain SBOMs for their products

Our Ask Me Anything: SBOMs and the Executive Order webinar features Anchore SBOM and government compliance experts advising on how to avoid common pitfalls in EO 14028. You’ll learn:

  • How to interpret specific EO 14028 requirements for your organization
  • Which artifacts satisfy compliance requirements and which don’t
  • Pro tips on how to navigate EO 14028 with the least amount of frustration

Open Source Software Security and SBOMs: Risk Management for Invisible Risk

Open source components dominate modern applications, yet create an accountability paradox. Your software likely contains 150+ OSS dependencies you didn’t write and can’t fully audit, but you’re entirely responsible for any vulnerabilities they introduce. On top of this, OSS adoption is only growing, which means your organization will inherit more vulnerabilities as time goes on.

Our guide to resolving the challenges of this accountability paradox, How is Open Source Software Security Managed in the Software Supply Chain?:

  • Examines the unique challenges of securing open source components
  • Offers practical strategies for managing open source risk at scale
  • Provides frameworks for evaluating the security maturity of OSS projects

DevSecOps and SBOMS: Types and Uses for Each Stage

The integration of SBOMs into DevSecOps workflows represents a powerful opportunity to enhance security while maintaining development velocity.

The Evolution of SBOMs in the DevSecOps Lifecycle is a two-part series that breaks down how SBOMs fit into each phase of the DevSecOps lifecycle:

Part 1: From Planning to Build

  • Explores how different SBOM types support specific DevSecOps stages
  • Maps SBOM creation points to key development milestones
  • Demonstrates how early SBOM integration prevents costly late-stage issues

Part 2: From Release to Production

  • Shows how to automate SBOM generation, validation, and analysis
  • Explores integration with release and deploy pipelines
  • Provides practical examples of SBOM-driven security gates

Conclusion: The SBOM Journey Continues

Throughout our five-part series on SBOMs, we’ve provided the knowledge you need to implement effective software supply chain security. From foundational concepts to technical implementation, scaling strategies, and regulatory compliance, you now have a comprehensive understanding to put SBOMs to work immediately. Software supply chain attacks continue to escalate, making SBOM implementation essential for proactive security.

Ready to see immediate results? Experience how Anchore Enterprise transforms SBOM management—sign up for a free trial or contact us for a demo today.


Don’t want to miss a day? Subscribe to our newsletter for updates or follow us on LinkedIn, X or BlueSky to get notifications as each post is published.

SBOM Insights on LLMs, Compliance Attestations and Security Mental Models: Anchore Learning Week (Day 4)

Welcome to the fourth installment in our 5-part series on software bills of materials (SBOMs). In our previous posts, we’ve covered SBOM fundamentals, SBOM generation and scalable SBOM management. Now, we shift our focus to the bigger picture, exploring strategic perspectives from software supply chain thought leaders. After you’ve finished day four, dive into day five, “SBOMs as the Crossroad of the Software Supply Chain”.

Understanding the evolving role of SBOMs in software supply chain security requires more than just technical knowledge—it demands strategic vision. In this post, we share insights from industry experts who are shaping the future of SBOM standards, practices, and use-cases.

Insights on SBOMs in the LLM Era

LLMs have impacted every aspect of the software industry and software supply chain security is no exception. To understand how industry luminaries like Kate Stewart are thinking about the future of SBOMs through this evolution, watch Understanding SBOMs: Deep Dive with Kate Stewart.

This webinar highlights several key points:

  • LLMs pose unique transparency challenges: The emergence of large language models reduces transparency since behavior is stored in datasets and training processes rather than code
  • Software introspection limitations: Already difficult with traditional software, introspection becomes both harder AND more important in the LLM era
  • Dataset lineage tracking: Stewart draws a parallel between SBOMs for supply chain security and the need for dataset provenance for LLMs
  • Behavior traceability: She advocates for “SBOMs of [training] datasets” that allow organizations to trace behavior back to a foundational source

“Transparency is the path to minimizing risk.”
—Kate Stewart

This perspective expands the SBOM concept beyond mere software component inventories to encompass the broader information needed for transparency in AI-powered systems.

Watch the talk.

SBOMs as Compliance Attestation Data Containers—Not Supply Chain Documents

Compliance requirements for software supply chain security continue to evolve rapidly. To understand how SBOMs are being reimagined as compliance attestation containers rather than static supply chain documents, watch Trust in the Supply Chain: CycloneDX Attestations & SBOMs with Steve Springett.

This webinar highlights several key points:

  • Content over format debates: Springett emphasizes that “content is king”—the actual data within SBOMs and their practical use-cases matter far more than format wars
  • Machine-readable attestations: Historically manual compliance activities can now be automated through structured data that provides verifiable evidence to auditors
  • Business process metadata: CycloneDX can include compliance process metadata like security training completion, going beyond component inventories
  • Compliance flexibility: The ability to attest to any standard, from government requirements to custom internal company policies
  • Quality-focused approach: Springett introduces five dimensions for evaluating SBOM completeness and a maturity model with profiles for different stakeholders (AppSec, SRE, NetSec, Legal/IP)

“The end-goal is transparency.” — Steve Springett

Echoing the belief of Kate Stewart, Springett reinforces the purpose of SBOMs as transparency tools. His perspective transforms our understanding of SBOMs from static component inventories to versatile data containers that attest to broader security and compliance activities.

Watch the talk.

Security as Unit Tests: A New Mental Model

Kelsey Hightower, a former distinguished engineer at Google, offers a pragmatic perspective that reframes security in developer-friendly terms. Watch Software Security in the Real World with Kelsey Hightower to learn how his “Security as Unit Tests” mental model helps developers integrate security naturally into their workflow by:

  • Treating security requirements as testable assertions
  • Using SBOMs as the source of truth for supply chain data in tests
  • Integrating verification into the CI/CD pipeline
  • Making security outcomes measurable and reproducible

Hightower’s perspective helps bridge the gap between development practices and security requirements, with SBOMs serving as a foundational element in automated verification.
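A minimal sketch of the “security as unit tests” framing, assuming Grype is installed, that the SBOM path is a placeholder, and that the JSON field names match Grype’s current output format: a pytest check that fails the build when a scan of the release SBOM reports any Critical findings.

import json
import subprocess

def scan(sbom_path):
    result = subprocess.run(
        ["grype", f"sbom:{sbom_path}", "-o", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def test_no_critical_vulnerabilities():
    report = scan("sboms/app-1.4.2.spdx.json")  # placeholder path
    critical = [
        m["vulnerability"]["id"]
        for m in report.get("matches", [])
        if m["vulnerability"]["severity"] == "Critical"
    ]
    assert not critical, f"Critical vulnerabilities present: {critical}"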

Watch the talk.

Looking Ahead

As we’ve seen from these expert perspectives, SBOMs are not just a technical tool but a strategic asset that intersects with many aspects of software development and security. In our final post, we’ll explore these intersections in depth, examining how SBOMs relate to DevSecOps, open source security, and regulatory compliance.

Stay tuned for the final installment in our series, “SBOMs as the Crossroad of the Software Supply Chain,” where we’ll complete our comprehensive exploration of software bills of materials.


Don’t want to miss a day? Subscribe to our newsletter for updates or follow us on LinkedIn, X or BlueSky to get notifications as each post is published.

DevOps-Scale SBOM Management: Anchore Learning Week (Day 3)

Welcome to the third installment in our 5-part series on software bill of materials (SBOMs)—check here for day 1 and day 2. Now, we’re leveling up to tackle one of the most significant challenges organizations face: scaling SBOM management to keep pace with the velocity of modern, DevOps-based software development. After you’ve digested this part, jump into day four, “SBOM Insights on LLMs, Compliance Attestations and Security Mental Models“, and day five “SBOMs as the Crossroad of the Software Supply Chain“.

As your SBOM adoption graduates from proof-of-concept to enterprise implementation, several critical questions emerge:

  • How do you manage thousands—or even millions—of SBOMs?
  • How do you seamlessly integrate SBOM processes into complex CI/CD environments?
  • How do you extract maximum value from your growing SBOM repository?

Let’s explore three powerful resources that form a roadmap for scaling your SBOM initiative across your organization.

SBOM Automation: The Key to Scale

After you’ve generated your first SBOM and discovered the value, the next frontier is scaling across your entire software environment. Without robust automation, manual SBOM processes quickly become bottlenecks in fast-moving DevOps environments.

Key benefits:

  • Eliminates time-consuming manual SBOM generation and analysis
  • Ensures consistent SBOM quality across all repositories
  • Enables real-time security and compliance insights

The webinar Understanding SBOMs: How to Automate, Generate & Manage SBOMs delivers practical strategies for building automation into your SBOM pipeline from day one. This session unpacks how well-designed SBOM management services can handle CI/CD pipelines that process millions of software artifacts daily.

Real-world SBOMs: How Google Scaled to 4M+ SBOMs Daily

Nothing builds confidence like seeing how industry leaders have conquered the same challenges you’re facing. Google’s approach to SBOM implementation offers invaluable lessons for organizations of any size.

The webinar “How SBOMs Protect Google’s Massive Software Supply Chain” reveals how one of tech’s largest players scaled their SBOM program to an astonishing 4 million+ SBOMs generated daily. This deep dive shows you:

  • How Google architected their SBOM ecosystem for massive scale
  • Integration patterns that connect SBOMs to their broader security infrastructure
  • Practical lessons learned during their implementation journey

This resource transforms theoretical SBOM scaling concepts into tangible strategies you can adapt for your environment. If an organization as large and complex as Google can successfully deploy an SBOM initiative at scale—you can too!

Watch the talk now.

Build vs Buy?—Anchore Enterprise

Building a scalable SBOM data pipeline with advanced features like vulnerability management and automated compliance policy enforcement represents a significant engineering investment. For many organizations, leveraging purpose-built solutions makes strategic sense.

Anchore Enterprise offers an alternative path with three integrated components:

  • Anchore SBOM: A turnkey SBOM management platform with enterprise-grade features
  • Anchore Secure: Cloud-native vulnerability management powered by comprehensive SBOM data
  • Anchore Enforce: An SBOM-driven policy enforcement engine that automates compliance checks

Start with a free trial for AWS customers or a tailored demo.

The Road Ahead

As you scale your SBOM initiative, keep one eye on emerging trends and use cases. The SBOM ecosystem continues to evolve rapidly, with new applications emerging regularly.

In our next post, we’ll explore insights from industry experts on the future of SBOMs and their strategic importance. Stay tuned for part four of our series, “SBOM Insights on LLMs, Compliance Attestations and Security Mental Models”.


Don’t want to miss a day? Subscribe to our newsletter for updates or follow us on LinkedIn, X or BlueSky to get notifications as each post is published.

SBOM Generation Step-by-Step: Anchore Learning Week (Day 2)

Welcome to day 2 of our 5-part series on Software Bills of Materials (SBOMs). In our previous post, we covered the basics of SBOMs and why they’re essential for modern software security. Now, we’re ready to roll up our sleeves and get technical. After you’ve digested this part, jump into day three, “DevOps-Scale SBOM Management“, day four, “SBOM Insights on LLMs, Compliance Attestations and Security Mental Models“, and day five “SBOMs as the Crossroad of the Software Supply Chain“.

This post is designed for hands-on practitioners—the engineers, developers, and security professionals who want to move from theory to implementation. We’ll explore practical tools and techniques for generating, integrating, and leveraging SBOMs in your development workflows.

Getting Started: Step-by-Step SBOM Generation Guides

Ready to generate your first SBOM? How to Generate an SBOM with Free, Open Source Tools will guide you through everything you need to know.

What you’ll learn:

  • A list of the 4 most popular SBOM generation tools
  • How to install and configure Syft
  • How to scan the supply chain composition of source code, a container, or a file directory
  • How to generate an SBOM in CycloneDX or SPDX formats based on the supply chain composition scan
  • A decision framework for evaluating and choosing an SBOM generator

Generating accurate SBOMs is the foundation of your software supply chain transparency initiative. Without SBOMs, valuable use-cases like vulnerability management, compliance audit management or license management are low-value time sinks instead of efficient, value-add activities.

Follow the step-by-step guide on the blog.

If you’re looking for step-by-step guides for popular ecosystems like JavaScript, Python, GitHub or Docker, follow the links.

Under the Hood: How SBOM Generation Works

For those interested in the gory technical details of how a software composition analysis (SCA) tool and SBOM generator actually work, How Syft Scans Software to Generate SBOMs is the perfect blog post to scratch that intellectual itch.

What you’ll learn:

  • The scanning algorithms that identify software components
  • How Syft handles package ecosystems (npm, PyPI, Go modules, etc.)
  • Performance optimization techniques for large codebases
  • Ways to contribute to the open source project

Understanding the “how” behind the SBOM generation process enables you to troubleshoot edge cases and customize tools when you’re ready to squeeze the most value from your SBOM initiative.

Read the blog for details.

Pro tip: Clone the Syft repository and step through the code with a debugger to really understand what’s happening during a scan. It’s the developer equivalent of taking apart an engine to see how it works.

Advancing with Policy-as-Code

Our guide, The Developer’s Guide to SBOMs & Policy-as-Code, bridges the gap between generating SBOMs and automating the SBOM use-cases that align with business objectives. A policy-as-code strategy allows many of the use-cases to scale in cloud native environments and deliver outsized value.

What you’ll learn:

  • How to automate tedious compliance tasks with PaC and SBOMs
  • How to define security policies (via PaC) that leverage SBOM data
  • Integration patterns for CI/CD pipelines
  • How to achieve continuous compliance with automated policy enforcement

Combining SBOMs with policy-as-code creates a force multiplier for your security efforts, allowing you to automate compliance and vulnerability management at scale.

Read the blog for details.

Pro tip: Start with simple policies that flag known CVEs, then gradually build more sophisticated rules as your team gets comfortable with the approach.
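As an illustration of that starting point, a “flag known CVEs” policy can be as small as a single Grype invocation in your pipeline; the SBOM path below is a hypothetical example:

# Fail the pipeline if the SBOM contains any vulnerability of high severity or above
grype ./my-app-sbom.spdx.json --fail-on high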

Taking the Next Step

After dipping your toes into the shallow end of SBOM generation and integration, the learning continues with an educational track on scaling SBOMs for enterprise-grade deployments. In our next post, we’ll lay out how to take your SBOM initiative from proof-of-concept to production, with insights on automation, management, and real-world case studies.

Stay tuned for part three of our series, “DevOps-Scale SBOM Management,” where we’ll tackle the challenges of implementing SBOMs across large teams and complex environments.


Don’t want to miss a day? Subscribe to our newsletter for updates or follow us on LinkedIn, X or BlueSky to get notifications as each post is published.

SBOM Fundamentals: Anchore Learning Week (Day 1)

This blog post is the first in our 5-day series exploring the world of SBOMs and their role in securing the foundational but often overlooked 3rd-party software supply chain. Whether you’re just beginning your SBOM journey or looking to refresh your foundational knowledge, these resources will provide a solid understanding of what SBOMs are and why they matter. Day two is a guide to “SBOM Generation Step-by-Step”, day three presents “DevOps-Scale SBOM Management”, day four, “SBOM Insights on LLMs, Compliance Attestations and Security Mental Models”, and day five, “SBOMs as the Crossroad of the Software Supply Chain”.

Learn SBOM Fundamentals in 1 Hour or Less

Short on time but need to understand SBOMs yesterday? Start your educational journey with this single-serving webinar on SBOM fundamentals—watch it at 2x for a true speedrun.

Understanding SBOMs: An Introduction to Modern Development

This webinar features Anchore’s team of SBOM experts who guide you through all the SBOM basics – topics covered:

  • Defining SBOM standards and formats
  • Best practices for generating and automating SBOMs
  • Integrating SBOMs into existing infrastructure and workflows
  • Practical tips for protecting against emerging supply chain threats

“You really need to know what you’re shipping and what’s there.”
—Josh Bressers

This straightforward yet overlooked insight demonstrates the foundational nature of SBOMs to software supply chain security. Operating without visibility into your components creates significant security blind spots. SBOMs create the transparency needed to defend against the rising tide of supply chain attacks.

Improve SBOM Initiative Success: Crystallize the Core SBOM Mental Models

Enjoyed the webinar but want to go deeper? Our eBook, SBOM 101: Understand, Implement & Leverage SBOMs for Stronger Security & Risk Management, covers similar ground but with the depth and nuance to level up your SBOM knowledge:

  • Why SBOMs matter
  • How to choose an SBOM format
  • How SBOMs are the central component of the software supply chain
  • A quick reference table of SBOM use-cases

This gives you a strong foundation to build your SBOM initiative on. The mental models presented in the eBook help you: 

  • avoid common implementation pitfalls, 
  • align your SBOM strategy with security objectives, and 
  • communicate SBOM value to stakeholders across your organization. 

Rather than blindly following compliance requirements, you’ll learn the “why” behind SBOMs and make informed decisions about automation tools, integration points, and formats that are best suited for your specific environment.

SBOM Use-Cases: Generate Enterprise Value Across Entire Organization

To round out your SBOM fundamentals education, How to Unlock Enterprise Value with SBOMs: Use Cases for Security, Engineering, Compliance, Legal and Sales is our white paper that deep dives into the surprisingly wide range of SBOM use-cases. SBOMs don’t just provide value to security teams, they’re a cross-functional technology that creates value across your organization.

  • Security teams: Rapidly identify vulnerable components when zero-days hit the news
  • Engineering teams: Make data-driven architecture decisions about third-party dependencies to incorporate
  • Compliance teams: Automate evidence collection for compliance audits
  • Legal teams: Proactively manage software license compliance and IP risks
  • Sales teams: Accelerate sales cycles by using transparency as a tool to build trust fast

“Transparency is the path to minimizing risk.”
—Kate Stewart, VP of Embedded Systems at The Linux Foundation and Founder of SPDX

This core SBOM principle applies across all business functions. Our white paper shows how properly implemented SBOMs create a unified source of truth about your software components that empowers teams beyond security to make better decisions.

Perfect for technical leaders who need to justify SBOM investments and drive cross-team adoption.

What’s Next?

After completing the fundamentals, you’re ready to get your hands dirty and learn the nitty-gritty of SBOM generation and CI/CD build pipeline integration. In our next post, we’ll map out a technical learning path with deep-dives for practitioners looking to get hands-on experience. Stay tuned for part two of our series, “SBOM Generation Step-by-Step”.


Don’t want to miss a day? Subscribe to our newsletter for updates or follow us on LinkedIn, X or BlueSky to get notifications as each post is published.

Introduction to the DoD Software Factory

Fast Facts

  • A DoD Software Factory is a DevSecOps-based development pipeline adapted to the DoD’s high-threat environment, reflecting the government’s broader push for software and cybersecurity modernization.
  • DoD software factories typically include code repositories, CI/CD build pipelines, artifact repositories, and runtime orchestrators and platforms.
  • Use pre-existing software factories or roll out your own by following DoD best practices like continuous vulnerability scanning and automated policy checks.
  • SCA tools like Anchore Enterprise address the unique security, compliance, and operational needs of DoD Software Factories by delivering end-to-end software supply chain security and automated compliance.

In the rapidly evolving landscape of national defense and cybersecurity, the concept of a Department of Defense (DoD) software factory has emerged as a cornerstone of innovation and security. These software factories represent an integration of the principles and practices found within the DevSecOps movement, tailored to meet the unique security requirements of the DoD and Defense Industrial Base (DIB). 

By fostering an environment that emphasizes continuous monitoring, automation, and cyber resilience, DoD Software Factories are at the forefront of the United States Government’s push towards modernizing its software and cybersecurity capabilities. This initiative not only aims to enhance the velocity of software development but also ensures that these advancements are achieved without compromising on security, even against the backdrop of an increasingly sophisticated threat landscape.

Building and running a DoD software factory is so central to the future of software development that “Establish a Software Factory” is one of the explicitly named plays from the DoD DevSecOps Playbook. On top of that, the compliance capstone of the authorization to operate (ATO), or its DevSecOps-infused cousin the continuous ATO (cATO), effectively requires a software factory in order to meet the requirements of the standard. In this blog post, we’ll break down the concept of a DoD software factory and give a high-level overview of the components that make up one.

What is a DoD software factory?

A Department of Defense (DoD) Software Factory is a software development pipeline that embodies the principles and tools of the larger DevSecOps movement with a few choice modifications that conform to the extremely high threat profile of the DoD and DIB. It is part of the larger software and cybersecurity modernization trend that has been a central focus for the United States Government in the last two decades.

The goal of a DoD Software Factory is to create an ecosystem that enables continuous delivery of secure software that meets the needs of end-users while ensuring cyber resilience (a DoD catchphrase that emphasizes the transition from point-in-time security compliance to continuous security compliance). In other words, the goal is to leverage automation of software security tasks in order to fulfill the promise of the DevSecOps movement to increase the velocity of software development.

Example of a DoD software factory

Platform One is the canonical example of a DoD software factory. Run by the US Air Force, it offers a comprehensive portfolio of software development tools and services. It has come to prominence due to its hosted services like Repo One for source code hosting and collaborative development, Big Bang for an end-to-end DevSecOps CI/CD platform and Iron Bank for centralized container storage (i.e., a container registry). These services have led the way in demonstrating that the principles of DevSecOps can be integrated into mission critical systems while still preserving the highest levels of security to protect the most classified information.

If you’re interested in learning more about how Platform One has unlocked the productivity bonus of DevSecOps while still maintaining DoD levels of security, watch our webinar with Camdon Cady, Chief of Operations and Chief Technology Officer of Platform One.

Who does it apply to?

Federal Service Integrators (FSI)

Any organization that works with the DoD as a federal service integrator will want to be intimately familiar with DoD software factories, as they will either have to build on top of existing software factories or, if the mission/program wants full control over its software factory, build their own for the agency.

Department of Defense (DoD) Mission

Any Department of Defense (DoD) mission will need to be well-versed in DoD software factories, as all of their software and systems will be required to run on a software factory and to both reach and maintain a cATO.

Principles of DevSecOps embedded into a DoD software factory

A DoD software factory is composed of both high-level principles and specific technologies that meet these principles. Below is a list of some of the most significant principles of a DoD software factory:

  1. Break down organizational silos: This principle is borrowed directly from the DevSecOps movement; specifically, the DoD aims to integrate software development, test, deployment, security and operations into a single culture within the organization.
  2. Open source and reusable code: Composable software building blocks are another DevSecOps principle; they increase productivity and reduce the security implementation errors that occur when developers write security-sensitive code they are not experts in.
  3. Immutable Infrastructure-as-Code (IaC): This principle focuses on treating the infrastructure that software runs on as ephemeral and managed via configuration rather than manual systems operations. Enabled by cloud computing (i.e., hardware virtualization), this principle increases the security of the underlying infrastructure through templated, secure-by-design defaults and improves reliability because all infrastructure has to be designed to fail at any moment.
  4. Microservices architecture (via containers): Microservices are a design pattern that creates smaller software services that can be built and scaled independently of each other. This principle allows for less complex software that only performs a limited set of behaviors.
  5. Shift Left: Shift left is the DevSecOps principle that re-frames when and how security testing is done in the software development lifecycle. The goal is to begin security testing while software is being written and tested rather than after the software is “complete”. This prevents insecure practices from cascading into significant issues right as software is ready to be deployed.
  6. Continuous improvement through key capabilities: The principle of continuous improvement is a primary characteristic of the DevSecOps ethos, but the specific key capabilities defined in the DoD DevSecOps Playbook are what make this unique to the DoD.
  7. Define a DevSecOps pipeline: A DevSecOps pipeline is the workflow that utilizes all of the preceding principles in order to create the continuously improving security outcomes that are the goal of the DoD software factory program.
  8. Cyber resilience: Cyber resiliency is the goal of a DoD software factory; it is defined as “the ability to anticipate, withstand, recover from, and adapt to adverse conditions, stresses, attacks, or compromises on the systems that include cyber resources.”

Common tools and systems for implementing a DoD software factory

Implementing a DoD software factory requires more than just modern development practices; it depends on a secure, repeatable toolchain that meets strict compliance and accreditation standards. At its core, a software factory is built on a set of foundational systems that move code from development through deployment. Below are the key components most commonly used across DoD software factories, and how they work together to deliver secure, mission-ready software.

  1. Code Repository (e.g., Repo One): Central location where software source code is stored and version-controlled. In DoD environments, repositories ensure controlled access, auditability, and secure collaboration across distributed teams.
  2. CI/CD Build Pipeline (e.g., Big Bang): Automates builds, runs security and compliance checks, executes tests, and packages code for deployment. Automation reduces human error and enforces consistency so that every release meets DoD security and accreditation requirements.
  3. Artifact Repository (e.g., Iron Bank): Trusted storage for approved software components and final build artifacts. Iron Bank, for example, provides digitally signed and hardened container images, reducing supply chain risk and ensuring only vetted software moves forward.
  4. Runtime Orchestrator and Platform (e.g., Big Bang): Deploys and manages software artifacts at scale. Orchestrators like hardened Kubernetes stacks enable repeatable deployments across multiple environments (classified and unclassified), while maintaining security baselines and reliability.

Together, these systems form a secure pipeline: code enters Repo One, passes through CI/CD checks, vetted artifacts are stored in Iron Bank, and then deployed and orchestrated with Big Bang. Anchore Enterprise integrates directly into this flow, scanning and enforcing policy at each stage to ensure only compliant, secure software artifacts move through the factory.

How do I meet the security requirements for a DoD Software Factory? (Best Practices)

Use a pre-existing software factory

The benefit of using a pre-existing DoD software factory is the same as using a public cloud provider; someone else manages the infrastructure and systems. What you lose is the ability to highly customize your infrastructure to your specific needs. What you gain is the simplicity of only having to write software and allow others with specialized skill sets to deal with the work of building and maintaining the software infrastructure. When you are a car manufacturer, you don’t also want to be a civil engineering firm that designs roads.

To view existing DoD software factories, visit the Software Factory Ecosystem Coalition website.

Map of all DoD software factories in the US.

Roll out your own by following DoD best practices 

If you need the flexibility and customization of managing your own software factory then we’d recommend following the DoD Enterprise DevSecOps Reference Design as the base framework. There are a few software supply chain security recommendations that we would make in order to ensure that things go smoothly during the authorization to operate (ATO) process:

  1. Continuous vulnerability scanning across all stages of CI/CD pipeline: Use a cloud-native vulnerability scanner that can be directly integrated into your CI/CD pipeline and called automatically during each phase of the SDLC
  2. Automated policy checks to enforce requirements and achieve ATO: Use a cloud-native policy engine in tandem with your vulnerability scanner in order to automate the reporting and blocking of software that is a security threat and a compliance risk
  3. Remediation feedback: Use a cloud-native policy engine that can provide automated remediation feedback to developers in order to maintain a high velocity of software development
  4. Compliance (Trust but Verify): Use a reporting system that can be directly integrated with your CI/CD pipeline to create and collect the compliance artifacts that can prove compliance with DoD frameworks (e.g., CMMC and cATO)
  5. Air-gapped system: Utilize a cloud-native software supply chain security platform that can be deployed into an air-gapped environment in order to maintain the most strict security for classified missions

Is a software factory required in order to achieve cATO?

Technically, no. Effectively, yes. A cATO requires that your software is deployed on an Approved DoD Enterprise DevSecOps Reference Design, not a software factory specifically. If you build your own DevSecOps platform that meets the criteria of the reference design, then you have effectively rolled your own software factory.

How Anchore can help

The easiest and most effective method for achieving the security guarantees that a software factory is required to meet for its software supply chain is to use:

  1. An SBOM generation and management tool that integrates directly into your software development pipeline
  2. A container vulnerability scanner that integrates directly into your software development pipeline
  3. A policy engine that integrates directly into your software development pipeline
  4. A centralized database to store all of your software supply chain security logs
  5. A query engine that can continuously monitor your software supply chain and automate the creation of compliance artifacts

These are the primary components of both Anchore Enterprise and Anchore Federal, our cloud native, SBOM-powered software composition analysis (SCA) platforms that provide end-to-end software supply chain security to holistically protect your DevSecOps pipeline and automate compliance. This approach has been validated by the DoD; in fact, the DoD’s Container Hardening Process Guide specifically named Anchore Federal as a recommended container hardening solution.

Learn more about how Anchore fuses DevSecOps and DoD software factories.

Conclusion and Next Steps

DoD software factories can come off as intimidating at first, but hopefully we have broken them down into a more digestible form. At their core, they reflect the best of the DevSecOps movement with specific adaptations for the extreme threat environment that the DoD has to operate in, as well as the intersecting trend of modernizing federal security compliance standards.

If you’re looking to dive deeper into all things DoD software factory, we have a white paper that lays out the 6 best practices for container images in highly secure environments. Download the white paper below.

Anchore’s SBOM Learning Week: From Reactive to Resilient in 5 Days

Your software contains 150+ dependencies you didn’t write, don’t maintain, and can’t fully audit—yet you’re accountable for every vulnerability they introduce. Organizations implementing comprehensive SBOM strategies detect supply chain compromises in minutes instead of days—or worse, after a breach.

Anchore has been leading the SBOM charge for almost a decade, providing educational resources, tools and insights to help organizations secure their software supply chains. To help you navigate this critical aspect of software development, we’re excited to announce SBOM Learning Week.

Each day of the week we will be publishing a new blog post that provides an overview of how to progress on your SBOM educational journey. By the end of the week, you will have a full learning path laid out to guide you from SBOM novice to SBOM expert.

Why SBOM Learning Week, Why Now?

With recent executive orders (e.g., EO 14028) mandating SBOMs for federal software vendors and industry standards increasingly recommending their adoption, organizations across sectors are racing to weave SBOMs into their software development lifecycle. However, many still struggle with fundamental questions:

  • What exactly is an SBOM and why does it matter?
  • How do I generate, manage, and leverage SBOMs effectively?
  • How do I scale SBOM practices across a large organization?
  • What do leading experts predict for the future of SBOM adoption?
  • How do SBOMs integrate with existing security and development practices?

SBOM Learning Week answers these questions through a carefully structured learning journey designed for both newcomers and experienced practitioners.

What to Expect Each Day

Monday: SBOM Fundamentals

We’ll start with the fundamentals, exploring what SBOMs are, why they matter, and the key standards that define them. This foundational knowledge will prepare you for the more advanced topics to come.

Read Day 1: SBOM Fundamentals now

Tuesday: Technical Deep-dives

Day two focuses on hands-on implementation, with practical guidance for generating SBOMs using open source tools, integrating them into CI/CD pipelines, and examining how SBOM generation actually works under the hood.

Read Day 2: SBOM Generation Step-by-Step now

Wednesday: DevOps-Scale SBOM Management

Moving beyond initial implementation, we’ll explore how organizations can scale their SBOM practices across enterprise environments, featuring real-world examples from companies like Google.

Read Day 3: DevOps-Scale SBOM Management now

Thursday: SBOM Insights on LLMs, Compliance Attestations and Security Mental Models

On day four, we’ll share insights from industry thought leaders on how software supply chain security and SBOMs are adapting to LLMs, how SBOMs are better thought of as compliance data containers than supply chain documents and how SBOMs and vulnerability scanners fit into existing developer mental models.

Read Day 4: SBOM Insights now

Friday: SBOMs as the Crossroad of the Software Supply Chain

We’ll conclude by examining how SBOMs intersect with DevSecOps, open source security, and regulatory compliance, providing a holistic view of how SBOMs fit into the broader security landscape.

Read Day 5: SBOM Intersections now

Join Us on This Learning Journey

Whether you’re a security leader looking to strengthen your organization’s defenses, a developer seeking to integrate security into your workflows, or an IT professional responsible for compliance, SBOM Learning Week offers valuable insights for your role.

Each day’s post will build on the previous content, creating a comprehensive resource you can reference as you develop and mature your organization’s SBOM initiative. We’ll also be monitoring comments and questions on our social channels (LinkedIn, BlueSky, X) throughout the week to help clarify concepts and address specific challenges you might face.

Mark your calendars and join us starting Monday as we embark on this exploration of one of today’s most important cybersecurity technologies. The journey to a more secure software supply chain begins with understanding what’s in your code—and SBOM Week will show you exactly how to get there.


Don’t want to miss a day? Subscribe to our newsletter for updates or follow us on LinkedIn, X or BlueSky to get notifications as each post is published.

Navigating the Path to Federal Markets: Your Complete FedRAMP Guide

The federal cloud market is projected to reach $78+ billion by 2029, but only a small fraction of cloud providers have successfully achieved FedRAMP authorization.

That’s why we’re excited to announce our new white paper, “Unlocking Federal Markets: The Enterprise Guide to FedRAMP.” This comprehensive resource is designed for cloud service providers (CSPs) looking to navigate the complex FedRAMP authorization process, providing actionable insights and step-by-step guidance to help you access the lucrative federal cloud marketplace.

From understanding the authorization process to implementing continuous monitoring requirements, this guide offers a clear roadmap through the FedRAMP journey. More than just a compliance checklist, it delivers strategic insights on how to approach FedRAMP as a business opportunity while minimizing the time and resources required.

⏱️ Can’t wait till the end?
📥 Download the white paper now 👇👇👇


Why FedRAMP Authorization Matters

FedRAMP is the gateway to federal cloud business, but many organizations underestimate its complexity and strategic importance. Our white paper transforms your approach by:

  • Clarifying the Authorization Process: Understand the difference between FedRAMP authorization and certification, and learn the specific roles of key stakeholders.
  • Streamlining Compliance: Learn how to integrate security and compliance directly into your development lifecycle, reducing costs and accelerating time-to-market.
  • Establishing Continuous Monitoring: Build sustainable processes that maintain your authorization status through the required continuous monitoring activities.
  • Creating Business Value: Position your FedRAMP authorization as a competitive advantage that opens doors across multiple agencies.

What’s Inside the White Paper?

Our guide is organized to follow your FedRAMP journey from start to finish. Here’s a preview of what you’ll find:

  • FedRAMP Overview: Learn about the historical context, goals and benefits of the program.
  • Key Stakeholders: Understand the roles of federal agencies, 3PAOs and the FedRAMP PMO.
  • Authorization Process: Navigate through all phases—Preparation, Authorization and Continuous Monitoring—with detailed guidance for each step.
  • Strategic Considerations: Make informed decisions about impact levels, deployment models and resource requirements.
  • Compliance Automation: Discover how Anchore Enforce can transform FedRAMP from a burdensome audit exercise into a streamlined component of your software delivery pipeline.

You’ll also find practical insights on staffing your authorization effort, avoiding common pitfalls and estimating the level of effort required to achieve and maintain FedRAMP authorization.

Transform Your Approach to Federal Compliance

The white paper emphasizes that FedRAMP compliance isn’t just a one-time hurdle but an ongoing commitment that requires a strategic approach. By treating compliance as an integral part of your DevSecOps practice—with automation, policy-as-code and continuous monitoring—you can turn FedRAMP from a cost center into a competitive advantage.

Whether your organization is just beginning to explore FedRAMP or looking to optimize existing compliance processes, this guide provides the insights needed to build a sustainable approach that opens doors to federal business opportunities.

Download the White Paper Today

FedRAMP authorization is more than a compliance checkbox—it’s a strategic enabler for your federal market strategy. Our comprehensive guide gives you the knowledge and tools to navigate this complex process successfully.

📥 Download the white paper now and unlock your path to federal markets.

Learn how to navigate FedRAMP authorization while avoiding all of the most common pitfalls.


The Critical Role of SBOMs in PCI DSS 4.0 Compliance

Is your organization’s PCI compliance coming up for renewal in 2025? Or are you looking to achieve PCI compliance for the first time?

Version 4.0 of the Payment Card Industry Data Security Standard (PCI DSS) became mandatory on March 31, 2025. For enterprises utilizing a 3rd-party software supply chain—essentially all companies, according to The Linux Foundation’s report on open source penetration—PCI DSS v4.0 requires companies to maintain comprehensive inventories of supply chain components. The SBOM standard has become the cybersecurity industry’s consensus best practice for securing software supply chains and meeting the requirements mandated by regulatory compliance frameworks.

This document serves as a comprehensive guide to understanding the pivotal role of SBOMs in navigating the complexities of PCI DSS v4.0 compliance.


Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

Understanding the Fundamentals: PCI DSS 4.0 and SBOMs

What is PCI DSS 4.0?

Developed to strengthen payment account data (e.g., credit card) security and standardize security controls globally, PCI DSS v4.0 represents the next evolution of this standard, ultimately benefiting consumers worldwide.

This version supersedes PCI DSS 3.2.1, which was retired on March 31, 2024. The explicit goals of PCI DSS v4.0 include promoting security as a continuous process, enhancing flexibility in implementation, and introducing enhancements in validation methods. PCI DSS v4.0 achieved this by introducing a total of 64 new security controls.

NOTE: PCI DSS had a minor version bump to 4.0.1 in mid-2024. The update is limited and doesn’t add or remove any controls or change any deadlines, meaning the software supply chain requirements apply to both versions.

Demystifying SBOMs

A software bill of materials (SBOM) is fundamentally an inventory of all software dependencies utilized by a given application. Analogous to a “Bill of Materials” in manufacturing, which lists all raw materials and components used to produce a product, an SBOM provides a detailed list of software components, including libraries, 3rd-party software, and services, that constitute an application. 

The benefits of maintaining SBOMs are manifold, including enhanced transparency into the software supply chain, improved vulnerability management by identifying at-risk components, facilitating license compliance management, and providing a foundation for comprehensive supply chain risk assessment.

PCI DSS Requirement 6: Develop and Maintain Secure Systems and Software

PCI DSS Principal Requirement 6, titled “Develop and Maintain Secure Systems and Software,” aims to ensure the creation and upkeep of secure systems and applications through robust security measures and regular vulnerability assessments and updates. This requirement encompasses five primary areas:

  1. Processes and mechanisms for developing and maintaining secure systems and software are defined and understood
  2. Bespoke and custom software are developed securely
  3. Security vulnerabilities are identified and addressed
  4. Public-facing web applications are protected against attacks
  5. Changes to all system components are managed securely

Deep Dive into Requirement 6.3.2: Component Inventory for Vulnerability Management

Within the “Security vulnerabilities are identified and addressed” category of Requirement 6, Requirement 6.3.2 mandates: 

An inventory of bespoke and custom software, and 3rd-party software components incorporated into bespoke and custom software is maintained to facilitate vulnerability and patch management

The purpose of this evolving requirement is to enable organizations to effectively manage vulnerabilities and patches within all software components, including 3rd-party components such as libraries and APIs embedded in their bespoke and custom software. 

While PCI DSS v4.0 does not explicitly prescribe the use of SBOMs, they represent the cybersecurity industry’s consensus method for achieving compliance with this requirement by providing a detailed and readily accessible inventory of software components.

How SBOMs Enable Compliance with 6.3.2

By requiring an inventory of all software components, Requirement 6.3.2 necessitates a mechanism for comprehensive tracking. SBOMs automatically generate an inventory of all components in use, whether developed internally or sourced from third parties.

This detailed inventory forms the bedrock for identifying known vulnerabilities associated with these components. Platforms leveraging SBOMs can map component inventories to databases of known vulnerabilities, providing continuous insights into potential risks. 

Consequently, SBOMs are instrumental in facilitating effective vulnerability and patch management by enabling organizations to understand their software supply chain and prioritize remediation efforts.

Connecting SBOMs to other relevant PCI DSS 4.0 Requirements

Beyond Requirement 6.3.2, SBOMs offer synergistic benefits in achieving compliance with other aspects of PCI DSS v4.0.

Requirement 11.3.1.1 

This requirement necessitates the resolution of high-risk or critical vulnerabilities. SBOMs enable ongoing vulnerability monitoring, providing alerts for newly disclosed vulnerabilities affecting the identified software components, thereby complementing the requirement for vulnerability scans at least once every three months.

Platforms like Anchore Secure can track newly disclosed vulnerabilities against SBOM inventories, facilitating proactive risk mitigation.

Implementing SBOMs for PCI DSS 4.0: Practical Guidance

Generating Your First SBOM

The generation of SBOMs can be achieved through various methods. A Software Composition Analysis (SCA) tool, like the open source Syft or the commercial AnchoreCTL, offers automated software composition scanning and SBOM generation for source code, containers or software binaries.

Looking for a step-by-step “how to” guide for generating your first SBOM? Read our technical guide.

These tools integrate with build pipelines and can output SBOMs in standard formats like SPDX and CycloneDX. For legacy systems or situations where automated tools have limitations, manual inventory processes may be necessary, although this approach is generally less scalable and prone to inaccuracies. 

Regardless of the method, it is crucial to ensure the accuracy and completeness of the SBOM, including both direct and transitive software dependencies.
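As a minimal sketch of that automated approach using the open source Syft mentioned above (the image and directory names below are hypothetical examples of a payment application):

# Generate an SPDX JSON SBOM for a container image in the cardholder data environment
syft registry.example.com/payments/checkout:latest -o spdx-json > checkout-sbom.spdx.json

# The same scan can target a source directory instead
syft dir:./checkout-service -o cyclonedx-json > checkout-service-sbom.cdx.json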

Essential Elements of an SBOM for PCI DSS

While PCI DSS v4.0 does not mandate specific data fields for SBOMs, it is prudent to include essential information that facilitates vulnerability management and component tracking. Drawing from recommendations by the National Telecommunications and Information Administration (NTIA), a robust SBOM should, at a minimum, contain:

  • Component Name
  • Version String
  • Supplier Name
  • Unique Identifier (e.g., PURL or CPE)
  • Component Hash
  • Author Name
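If you generate SPDX JSON SBOMs, a quick way to spot-check whether these fields were captured for a given component is a one-line jq query; this assumes the hypothetical checkout-sbom.spdx.json from the previous section, and any field your tool didn’t record will simply come back null:

# Show the essential fields recorded for the first package in the SBOM
jq '.packages[0] | {name, versionInfo, supplier, externalRefs, checksums}' checkout-sbom.spdx.json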

Operationalizing SBOMs: Beyond Inventory

The true value of an SBOM lies in its active utilization for software supply chain use-cases beyond component inventory management.

Vulnerability Management

SBOMs serve as the foundation for continuous vulnerability monitoring. By integrating SBOM data with vulnerability databases, organizations can proactively identify components with known vulnerabilities. Platforms like Anchore Secure enable the mapping of SBOMs to known vulnerabilities, tracking exploitability and patching cadence.
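In practice, this can be as simple as re-scanning a stored SBOM on a schedule. Here is a sketch using the open source Grype (again assuming the hypothetical SBOM from earlier); a commercial platform automates the same loop and adds alerting and reporting on top:

# Re-check an existing SBOM against today's vulnerability data; no rebuild or image re-scan required
grype ./checkout-sbom.spdx.json -o table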

Patch Management

A comprehensive SBOM facilitates informed patch management by highlighting the specific components that require updating to address identified vulnerabilities. This allows security teams to prioritize patching efforts based on the severity and exploitability of the vulnerabilities within their software ecosystem.

Maintaining Vulnerability Remediation Documentation

It is essential to maintain thorough documentation of vulnerability remediation efforts in order to keep pace with the continuous compliance trend emerging from global regulatory bodies. Utilizing formats like CVE (Common Vulnerabilities and Exposures) or VEX (Vulnerability Exploitability eXchange) alongside SBOMs can provide a standardized way to communicate the status of vulnerabilities, whether a product is affected, and the steps taken for mitigation.

Acquiring SBOMs from Third-Party Suppliers

PCI DSS Requirement 6.3.2 explicitly includes 3rd-party software components. Therefore, organizations must not only generate SBOMs for their own bespoke and custom software but also obtain SBOMs from their technology vendors for any libraries, applications, or APIs that are part of the card processing environment.

Engaging with suppliers to request SBOMs, potentially incorporating this requirement into contractual agreements, is a critical step. It is advisable to communicate preferred SBOM formats (e.g., CycloneDX, SPDX) and desired data fields to ensure the received SBOMs are compatible with internal vulnerability management processes. Challenges may arise if suppliers lack the capability to produce accurate SBOMs; in such instances, alternative risk mitigation strategies and ongoing communication are necessary.

NOTE: Remember the OSS maintainers that authored the open source components integrated into your application code are NOT 3rd-party suppliers in the traditional sense—you are! Almost all OSS licenses contain an “as is” clause that absolves them of liability for any code quality issues like vulnerabilities. This means that by using their code, you are now responsible for any security vulnerabilities in the code (both known and unknown).

Navigating the Challenges and Ensuring Success

Addressing Common Challenges in SBOM Adoption

Implementing SBOMs across an organization can present several challenges:

  • Generating SBOMs for closed-source or legacy systems where build tool integration is difficult may require specialized tools or manual effort
  • The volume and frequency of software updates necessitate automated processes for SBOM generation and continuous monitoring
  • Ensuring the accuracy and completeness of SBOM data, including all levels of dependencies, is crucial for effective risk management
  • Integrating SBOM management into existing software development lifecycle (SDLC) and security workflows requires collaboration and process adjustments
  • Effective SBOM adoption necessitates cross-functional collaboration between development, security, and procurement teams to establish policies and manage vendor relationships

Best Practices for SBOM Management

To ensure the sustained effectiveness of SBOMs for PCI DSS v4.0 compliance and beyond, organizations should adopt the following best practices:

  • Automate SBOM generation and updates wherever possible to maintain accuracy and reduce manual effort
  • Establish clear internal SBOM policies regarding format, data fields, update frequency, and retention
  • Select and implement appropriate SBOM management tooling that integrates with existing security and development infrastructure
  • Clearly define roles and responsibilities for SBOM creation, maintenance, and utilization across relevant teams
  • Provide education and training to development, security, and procurement teams on the importance and practical application of SBOMs

The Broader Landscape: SBOMs Beyond PCI DSS 4.0

As predicted, the global regulatory push toward software supply chain security and risk management with SBOMs as the foundation continues to gain momentum in 2025. PCI DSS v4.0 is the next major regulatory framework embracing SBOMs. This follows the pattern set by the US Executive Order 14028 and the EU Cyber Resilience Act, further cementing SBOMs as a cornerstone of modern cybersecurity best practice. 

Wrap-Up: Embracing SBOMs for a Secure Payment Ecosystem

The integration of SBOMs into PCI DSS v4.0 signifies a fundamental shift towards a more secure and transparent payment ecosystem. SBOMs are no longer merely a recommended practice but a critical component for achieving and maintaining compliance with the evolving requirements of PCI DSS v4.0, particularly Requirement 6.3.2. 

By providing a comprehensive inventory of software components and their dependencies, SBOMs empower organizations to enhance their security posture, reduce the risk of costly data breaches, improve their vulnerability management capabilities, and effectively navigate the complexities of regulatory compliance. Embracing SBOM implementation is not just about meeting a requirement; it is about building a more resilient and trustworthy software foundation for handling sensitive payment card data.

If you’re interested to learn more about how Anchore Enterprise can help your organization harden their software supply chain and achieve PCI DSS v4.0 compliance, get in touch with our team!


Interested to learn about all of the software supply chain use-cases that SBOMs enable? Read our new white paper and start unlocking enterprise value.


Generating SBOMs for JavaScript Projects: A Developer’s Guide

Let’s be honest: modern JavaScript projects can feel like a tangled web of packages. Knowing exactly what’s in your final build is crucial, especially with rising security concerns. That’s where a Software Bill of Materials (SBOM) comes in handy – it lists out all the components. We’ll walk you through creating SBOMs for your JavaScript projects using Anchore’s open-source tool called Syft, which makes the process surprisingly easy (and free!).

Why You Need SBOMs for Your JavaScript Projects

JavaScript developers face unique supply chain security challenges. The NPM ecosystem has seen numerous security incidents, from protestware to dependency confusion attacks. With most JavaScript applications containing hundreds or even thousands of dependencies, manually tracking each one becomes impossible.

SBOMs solve this problem by providing:

  • Vulnerability management: Quickly identify affected packages when new vulnerabilities emerge
  • License compliance: Track open source license obligations across all dependencies
  • Dependency visibility: Map your complete software supply chain
  • Regulatory compliance: Meet evolving government and industry requirements

Let’s explore how to generate SBOMs across different JavaScript project scenarios.

Getting Started with Syft

Syft is an open source SBOM generation tool that supports multiple formats including SPDX and CycloneDX. It’s written in Go, and ships as a single binary. Let’s install it:

For Linux & macOS:

# Install the latest release of Syft using our installer script
curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin

Alternatively, use Homebrew on macOS:

brew install syft

For Microsoft Windows:

winget install Anchore.Syft

Verify the installation:

syft version
Application:     syft
Version:         1.20.0
BuildDate:       2025-02-21T20:44:47Z
GitCommit:       46522bcc5dff8b65b61a7cda1393abe515802306
GitDescription:  v1.20.0
Platform:        darwin/arm64
GoVersion:       go1.24.0
Compiler:        gc

Scenario 1: Scanning a JavaScript Container Image

Let’s start by scanning a container image of EverShop, an open source NodeJS e-commerce platform. Container scanning is perfect for projects already containerized or when you want to analyze production-equivalent environments.

# Pull and scan the specified container
syft evershop/evershop:latest

Here are the first few lines, which summarize the work Syft has done.

  Loaded image        evershop/evershop:latest
  Parsed image        sha256:d29e670d6b2ada863…
  Cataloged contents  9f402cbc7ddf769ce068a101…
   ├──  Packages                        [1,188 packages]
   ├──  File digests                    [1,255 files]
   ├──  File metadata                   [1,255 locations]
   └──  Executables                     [26 executables]

Next is a human-readable table consisting of the name of the software package, the version found and the type, which could be npm, deb, rpm and so on. The output is very long (over a thousand lines) because, as we know, JavaScript applications often contain many packages. We’re only showing the first and last few lines here:

NAME                       VERSION         TYPE
@alloc/quick-lru           5.2.0           npm
@ampproject/remapping      2.3.0           npm
@babel/cli                 7.26.4          npm
@babel/code-frame          7.26.2          npm
@babel/compat-data         7.26.3          npm

yargs                      16.2.0          npm
yargs-parser               20.2.9          npm
yarn                       1.22.22         npm
zero-decimal-currencies    1.2.0           npm
zlib                       1.3.1-r2        apk

The output shows a comprehensive inventory of packages found in the container, including:

  • System packages (like Ubuntu/Debian packages)
  • Node.js dependencies from package.json
  • Other language dependencies if present

For a more structured output that can be consumed by other tools, use format options:

# Scan the container and output a CycloneDX SBOM
syft evershop/evershop:latest -o cyclonedx-json > ./evershop-sbom.json

This command generates a CycloneDX JSON SBOM, which is widely supported by security tools and can be shared with customers or partners.

Scenario 2: Scanning Source Code Directories

When working with source code only, Syft can extract dependency information directly from package manifest files.

Let’s clone the EverShop repository and scan it:

# Clone the repo
git clone https://github.com/evershopcommerce/evershop.git
cd ./evershop
# Check out the latest release
git checkout v1.2.2
# Create a human-readable list of contents
syft dir:.
  Indexed file system  .
  Cataloged contents   cdb4ee2aea69cc6a83331bbe96dc2c…
   ├──  Packages                        [1,045 packages]
   ├──  File digests                    [3 files]
   ├──  File metadata                   [3 locations]
   └──  Executables                     [0 executables]
[0000]  WARN no explicit name and version provided for directory source, deriving artifact ID from the given path (which is not ideal)
NAME                       VERSION         TYPE
@alloc/quick-lru           5.2.0           npm
@ampproject/remapping      2.3.0           npm
@aws-crypto/crc32          5.2.0           npm
@aws-crypto/crc32c         5.2.0           npm
@aws-crypto/sha1-browser   5.2.0           npm

yaml                       1.10.2          npm
yaml                       2.6.0           npm
yargs                      16.2.0          npm
yargs-parser               20.2.9          npm
zero-decimal-currencies    1.2.0           npm

The source-only scan focuses on dependencies declared in package.json files but won’t include installed packages in node_modules or system libraries that might be present in a container.

For tracking changes between versions, we can check out a specific tag:

# Check out an earlier tag from over a year ago
git checkout v1.0.0
# Create a machine readable SBOM document in SPDX format
syft dir:. -o spdx-json > ./evershop-v1.0.0-sbom.json

Scenario 3: Scanning a Built Project on Your Workstation

For the most complete view of your JavaScript project, scan the entire built project with installed dependencies:

# Assuming you're in your project directory and have run npm install
syft dir:. -o spdx-json > ./evershop-v1.2.2-sbom.json
# Grab five random examples from the SBOM with version and license info
jq '.packages[] | "\(.name) \(.versionInfo) \(.licenseDeclared)"' \
    < ./evershop-v1.2.2-sbom.json | shuf | head -n 5
"pretty-time 1.1.0 MIT"
"postcss-js 4.0.1 MIT"
"minimist 1.2.8 MIT"
"@evershop/postgres-query-builder 1.2.0 MIT"
"path-type 4.0.0 MIT"

This approach captures:

  • Declared dependencies from package.json
  • Actual installed packages in node_modules
  • Development dependencies if they’re installed
  • Any other files that might contain package information

Going Beyond SBOM Generation: Finding Vulnerabilities with Grype

An SBOM is most valuable when you use it to identify security issues. Grype, another open source tool from Anchore, can scan directly or use Syft SBOMs to find vulnerabilities.

For Linux & macOS:

# Install the latest release of Grype using our installer script
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin

Alternatively, use Homebrew on macOS:

brew install grype

For Microsoft Windows:

winget install Anchore.Grype

Verify the installation:

grype version
Application:         grype
Version:             0.89.1
BuildDate:           2025-03-13T20:22:27Z
GitCommit:           718ea3060267edcae7b10a9bf16c0acdad10820a
GitDescription:      v0.89.1
Platform:            darwin/arm64
GoVersion:           go1.24.1
Compiler:            gc
Syft Version:        v1.20.0
Supported DB Schema: 6

Let’s check an older version of EverShop for known vulnerabilities. Note that the first time you run grype, it will download a ~66MB daily vulnerability database and unpack it.

# Clone the example repo, if we haven't already
git clone https://github.com/evershopcommerce/evershop.git
cd ./evershop
# Check out an older release of the application from > 1 year ago
git checkout v1.0.0
# Create an SPDX formatted SBOM and keep it
syft dir:. -o spdx-json > ./evershop-v1.0.0-sbom.json
# Scan the SBOM for known vulnerabilities
grype ./evershop-v1.0.0-sbom.json

We can also scan the directory directly with Grype, which leverages Syft internally. However, it’s usually preferable to use Syft to generate the SBOM first, because that’s the time-consuming part of the process.

grype dir:.

Either way we run it, Grype identifies vulnerabilities in the dependencies, showing severity levels, the vulnerability ID, and the version the issue was fixed in.

  Scanned for vulnerabilities     [43 vulnerability matches]
   ├── by severity: 2 critical, 19 high, 14 medium, 8 low, 0 negligible
   └── by status:   40 fixed, 3 not-fixed, 0 ignored
NAME                    INSTALLED   FIXED-IN    TYPE  VULNERABILITY        SEVERITY
@babel/helpers          7.20.7      7.26.10     npm   GHSA-968p-4wvh-cqc8  Medium
@babel/runtime          7.22.5      7.26.10     npm   GHSA-968p-4wvh-cqc8  Medium
@babel/traverse         7.20.12     7.23.2      npm   GHSA-67hx-6x53-jw92  Critical
@evershop/evershop      1.0.0-rc.8  1.0.0-rc.9  npm   GHSA-32r3-57hp-cgfw  Critical
@evershop/evershop      1.0.0-rc.8  1.0.0-rc.9  npm   GHSA-ggpm-9qfx-mhwg  High
axios                   0.21.4      1.8.2       npm   GHSA-jr5f-v2jv-69x6  High

We can even ask Grype to explain the vulnerabilities in more detail. Let’s take one of the critical vulnerabilities and get Grype to elaborate on the details. Note that we are scanning the existing SBOM, which is faster than running Grype against the container or directory, as it skips the need to build the SBOM internally.

grype ./evershop-v1.0.0-sbom.json -o json | grype explain --id GHSA-67hx-6x53-jw92

The output is a human readable description with clickable links to find out more from the upstream sources.

GHSA-67hx-6x53-jw92 from github:language:javascript (Critical)
Babel vulnerable to arbitrary code execution when compiling specifically crafted malicious code
Related vulnerabilities:
    - nvd:cpe CVE-2023-45133 (High)
Matched packages:
    - Package: @babel/traverse, version: 7.20.12
      PURL: pkg:npm/%40babel/[email protected]
      Match explanation(s):
          - github:language:javascript:GHSA-67hx-6x53-jw92 Direct match (package name, version, and ecosystem) against @babel/traverse (version 7.20.12).
      Locations:
URLs:
    - https://github.com/advisories/GHSA-67hx-6x53-jw92
    - https://nvd.nist.gov/vuln/detail/CVE-2023-45133

Auditing Licenses with Grant

Security isn’t the only compliance concern for JavaScript developers. Grant helps audit license compliance based on the SBOM data.

For Linux & macOS:

curl -sSfL https://raw.githubusercontent.com/anchore/grant/main/install.sh | sh -s -- -b /usr/local/bin

Alternatively, use Homebrew on macOS:

brew install anchore/grant/grant

Grant is not currently published for Microsoft Windows, but can be built from source.

Verify the installation:

grant version
Application: grant
Version:    0.2.6
BuildDate:  2025-01-22T21:09:16Z
GitCommit:  d24cecfd62c471577bef8139ad28a8078604589e
GitDescription: v0.2.6
Platform:   darwin/arm64
GoVersion:  go1.23.4
Compiler:   gc
# Analyze licenses used by packages listed in the SBOM
grant analyze -s ./evershop-v1.0.0-sbom.json

Grant identifies licenses for each component and flags any potential license compliance issues in your dependencies. By default, the Grant configuration has a deny-all rule for all licenses.

* ./evershop-v1.0.0-sbom.json
  * license matches for rule: default-deny-all; matched with pattern *
    * Apache-2.0
    * Artistic-2.0
    * BSD-2-Clause
    * BSD-3-Clause
    * CC-BY-3.0
    * CC0-1.0
    * ISC
    * MIT
    * Unlicense
    * WTFPL

Finding out which packages are under what license is straightforward with the --show-packages option:

grant check ./evershop-v1.0.0-sbom.json --show-packages
* ./evershop-v1.0.0-sbom.json
  * license matches for rule: default-deny-all; matched with pattern *
    * Apache-2.0
      * @ampproject/remapping
      * @webassemblyjs/leb128
      * @xtuc/long
      * acorn-node
      * ansi-html-community

Integrating SBOMs into Your Development Workflow

For maximum benefit, integrate SBOM generation and vulnerability scanning into your CI/CD pipeline:

  • Generate during builds: Add SBOM generation to your build process
  • Scan for vulnerabilities: Automatically check for security issues
  • Store SBOMs as artifacts: Keep them alongside each release
  • Track changes: Compare SBOMs between versions to identify supply chain changes
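To act on that last point about tracking changes, one simple approach is to compare the package lists from the two SBOMs generated earlier in this post:

# Extract "name version" pairs from each SPDX SBOM and diff them
jq -r '.packages[] | "\(.name) \(.versionInfo)"' evershop-v1.0.0-sbom.json | sort > v1.0.0-packages.txt
jq -r '.packages[] | "\(.name) \(.versionInfo)"' evershop-v1.2.2-sbom.json | sort > v1.2.2-packages.txt
diff v1.0.0-packages.txt v1.2.2-packages.txt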

For example, in GitHub workflows use our sbom-action and scan-action, built on Syft and Grype:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Create SBOM
        uses: anchore/sbom-action@v0
        id: sbom
        with:
          format: spdx-json
          output-file: "${{ github.event.repository.name }}-sbom.spdx.json"
      
      - name: Scan SBOM
        uses: anchore/scan-action@v6
        id: scan
        with:
          sbom: "${{ github.event.repository.name }}-sbom.spdx.json"
          fail-build: false
          severity-cutoff: medium
          output-format: json
          
      - name: Upload SBOM as artifact
        uses: actions/upload-artifact@v2
        with:
          name: sbom.json
          path: "${{ github.event.repository.name }}-sbom.spdx.json"

Best Practices for JavaScript SBOM Generation

  • Generate SBOMs for both development and production dependencies: Each has different security implications
  • Use package lockfiles: These provide deterministic builds and more accurate SBOM generation
  • Include SBOMs in your release process: Make them available to users of your libraries or applications
  • Automate the scanning process: Don’t rely on manual checks
  • Keep tools updated: Vulnerability databases are constantly evolving

Wrapping Up

The JavaScript ecosystem moves incredibly fast, and keeping track of what’s in your apps can feel like a never-ending battle. That’s where tools like Syft, Grype, and Grant come in. They give you X-ray vision into your dependencies without the hassle of sign-ups, API keys, or usage limits.

Once developers start generating SBOMs and actually see what’s lurking in their node_modules folders, they can’t imagine going back to flying blind. Whether you’re trying to patch the next Log4j-style vulnerability in record time or just making sure you’re not accidentally violating license terms, having that dependency data at your fingertips is a game-changer.

Give these tools a spin in your next project. Your future self will thank you when that critical security advisory hits your inbox, and you can immediately tell if you’re affected and exactly where.


Want to learn more about software supply chain security? Check out our resources on SBOM management and container vulnerability scanning.

The Developer’s Guide to SBOMs & Policy-as-Code

If you’re a developer, this vignette may strike a chord: You’re deep in the flow, making great progress on your latest feature, when someone from the security team sends you an urgent message. A vulnerability has been discovered in one of your dependencies and has failed a compliance review. Suddenly, your day is derailed as you shift from coding to a gauntlet of bureaucratic meetings.

This is an unfortunate reality for developers at organizations where security and compliance are bolt-on processes rather than integrated parts of the whole. Your valuable development time is consumed with digging through arcane compliance documentation, attending security reviews and being relegated to compliance training sessions. Every context switch becomes another drag on your productivity, and every delayed deployment impacts your ability to ship code.

Two niche DevSecOps/software supply chain technologies have come together to transform the dynamic between developers and organizational policy—software bills of materials (SBOMs) and policy-as-code (PaC). Together, they dramatically reduce the friction between development velocity and risk management requirements by making policy evaluation and enforcement:

  • Automated and consistent
  • Integrated into your existing workflows
  • Visible early in the development process

In this guide, we’ll explore how SBOMs and policy-as-code work, the specific benefits they bring to your daily development work, and how to implement them in your environment. By the end, you’ll understand how these tools can help you spend less time manually doing someone else’s job and more time doing what you do best—writing great code.


Interested to learn about all of the software supply chain use-cases that SBOMs enable? Read our new white paper and start unlocking enterprise value.

WHITE PAPER | Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization

A Brief Introduction to Policy-as-Code

You’re probably familiar with Infrastructure-as-Code (IaC) tools like Terraform, AWS CloudFormation, or Pulumi. These tools allow you to define your cloud infrastructure in code rather than clicking through web consoles or manually running commands. Policy-as-Code (PaC) applies this same principle to policies from other departments of an organization.

What is policy-as-code?

At its core, policy-as-code translates organizational policies—whether they’re security requirements, licensing restrictions, or compliance mandates—from human-readable documents into machine-readable representations that integrate seamlessly with your existing DevOps platform and tooling.

Think of it this way: IaC gives you a DSL for provisioning and managing cloud resources, while PaC extends this concept to other critical organizational policies that traditionally lived outside engineering teams. This creates a bridge between development workflows and business requirements that previously existed in separate silos.

Why do I care?

Let’s play a game of would-you-rather. For each pairing below, choose the activity you’d rather do:

Before Policy-as-Code → After Policy-as-Code

  • Read lengthy security/legal/compliance documentation to understand requirements → Reference policy translated into code with clear comments and explanations
  • Manually review your code for policy compliance and hope you interpreted the policy correctly → Receive automated, deterministic policy evaluation directly in the CI/CD build pipeline
  • Attend compliance training sessions because you didn’t read the documentation → Learn policies by example, as concrete connections to actual development tasks
  • Set up meetings with security, legal, or compliance teams to get code approval → Get approvals through automated policy evaluation, without review meetings
  • Wait until the end of the sprint and hope the VP of Engineering can get an exception to ship with policy violations → Identify and fix policy violations early, when changes are simple to implement

While the game is a bit staged, it isn’t divorced from reality. PaC is meant to relieve much of the development friction associated with the external requirements that are typically hoisted onto the shoulders of developers.

From oral tradition to codified knowledge

Perhaps one of the most underappreciated benefits of policy-as-code is how it transforms organizational knowledge. Instead of policies living in outdated Word documents or in the heads of long-tenured employees, they exist as living code that evolves with your organization.

When a developer asks “Why do we have this restriction?” or “What’s the logic behind this policy?”, the answer isn’t “That’s just how we’ve always done it” or “Ask Alice in Compliance.” Instead, they can look at the policy code, read the annotations, and understand the reasoning directly.

In the next section, we’ll explore how software bills of materials (SBOMs) provide the perfect data structure to pair with policy-as-code for managing software supply chain security.

A Brief Introduction to SBOMs (in the Context of PaC)

If policy-as-code provides the rules engine for your application’s dependency supply chain, then Software Bills of Materials (SBOMs) provide the structured, supply chain data that the policy engine evaluates.

What is an SBOM?

An SBOM is a formal, machine-readable inventory of all components and dependencies used in building a software artifact. If you’re familiar with Terraform, you can think of an SBOM as analogous to a dev.tfstate file: it captures the state of your application’s 3rd-party dependency supply chain, which is then reconciled against a main.tf file (i.e., the policy) to determine whether the software supply chain complies with or violates the defined policy.

SBOMs vs package manager dependency files

You may be thinking, “Don’t I already have this information in my package.json, requirements.txt, or pom.xml file?” While these files declare your direct dependencies, they don’t capture the complete picture:

  1. They don’t typically include transitive dependencies (dependencies of your dependencies)
  2. They don’t include information about the components within container images you’re using
  3. They don’t provide standardized metadata about vulnerabilities, licenses, or provenance
  4. They aren’t easily consumable by automated policy engines across different programming languages and environments

SBOMs solve these problems by providing a standardized format that comprehensively documents your entire software supply chain in a way that policy engines can consistently evaluate.

A universal policy interface: How SBOMs enable policy-as-code

Think of SBOMs as creating a standardized “policy interface” for your software’s supply chain metadata. Just as APIs create a consistent way to interact with services, SBOMs create a consistent way for policy engines to interact with your software’s composable structure.

This standardization is crucial because it allows policy engines to operate on a known data structure rather than having to understand the intricacies of each language’s package management system, build tool, or container format.

For example, a security policy that says “No components with critical vulnerabilities may be deployed to production” can be applied consistently across your entire software portfolio—regardless of the technologies used—because the SBOM provides a normalized view of the components and their vulnerabilities.

In the next section, we’ll explore the concrete benefits that come from combining SBOMs with policy-as-code in your development workflow.

How Do I Get Started with SBOMs and Policy-as-Code?

Now that you understand what SBOMs and policy-as-code are and why they’re valuable, let’s walk through a practical implementation. We’ll use Anchore Enterprise as an example of a policy engine that has a DSL to express a security policy which is then directly integrated into a CI/CD runbook. The example will focus on a common software supply chain security best practice: preventing the deployment of applications with critical vulnerabilities.

Tools we’ll use

For this example implementation, we’ll use the following components from Anchore:

  • AnchoreCTL: A software composition analysis (SCA) tool and SBOM generator that scans source code, container images or application binaries to populate an SBOM with supply chain metadata
  • Anchore Enforce: The policy engine that evaluates SBOMs against defined policies
  • Anchore Enforce JSON: The Domain-Specific Language (DSL) used to define policies in a machine-readable format

While we’re using Anchore in this example, the concepts apply to other SBOM generators and policy engines as well.

Step 1: Translate human-readable policies to machine-readable code

The first step is to take your organization’s existing policies and translate them into a format that a policy engine can understand. Let’s start with a simple but effective policy.

Human-Readable Policy:

Applications with critical vulnerabilities must not be deployed to production environments.

This policy needs to be translated into the Anchore Enforce JSON policy format:

{
  "id": "critical_vulnerability_policy",
  "version": "1.0",
  "name": "Block Critical Vulnerabilities",
  "comment": "Prevents deployment of applications with critical vulnerabilities",
  "rules": [
    {
      "id": "block_critical_vulns",
      "gate": "vulnerabilities",
      "trigger": "package",
      "comment": "Rule evaluates each dependency in an SBOM against vulnerability database. If the dependency is found in the database, all known vulnerability severity scores are evaluated for a critical value. If match if found policy engine returns STOP action to CI/CD build task",
      "parameters": [
        { "name": "package_type", "value": "all" },
        { "name": "severity_comparison", "value": "=" },
        { "name": "severity", "value": "critical" },
      ],
      "action": "stop"
    }
  ]
}

This policy code instructs the policy engine to:

  1. Examine all application dependencies (i.e., packages) in the SBOM
  2. Check if any dependency/package has vulnerabilities with a severity of “critical”
  3. If found, return a “stop” action that will fail the build

If you’re looking for more detail, our documentation covers the full capabilities of the Anchore Enforce policy engine and its DSL.

Step 2: Deploy Anchore Enterprise with the policy engine

With the example policy defined, the next step is to deploy Anchore Enterprise (AE) and configure the Anchore Enforce policy engine. The high-level steps are:

  1. Deploy Anchore Enterprise platform in your test environment via Helm Chart (or other); includes policy engine
  2. Load your policy into the policy engine
  3. Configure access controls/permissions between AE deployment and CI/CD build pipeline

If you’d like to get hands-on with this, we have developed a self-paced workshop that walks you through a full deployment and policy setup. You can get a trial license by signing up for our free trial.
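
For the first step, a Helm-based install might look something like this minimal sketch (the chart coordinates follow Anchore’s public Helm chart; your values file, license configuration, and namespace will differ):

# Add Anchore's chart repository and install Anchore Enterprise into its own namespace
helm repo add anchore https://charts.anchore.io
helm repo update

# values.yaml holds your license and deployment settings (see the Anchore docs/workshop)
helm install anchore-enterprise anchore/enterprise \
  --namespace anchore --create-namespace \
  --values values.yaml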

Step 3: Integrate SBOM generation into your CI/CD pipeline

Now you need to generate SBOMs as part of your build process and have them evaluated against your policies. Here’s an example of how this might look in a GitHub Actions workflow:

name: Build App and Evaluate Supply Chain for Vulnerabilities

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build-and-evaluate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      
      - name: Build Application
        run: |
          # Build application as container image
          docker build -t myapp:latest .
      
      - name: Generate SBOM
        run: |
          # Install AnchoreCTL
          curl -sSfL https://anchorectl-releases.anchore.io/v1.0.0/anchorectl_1.0.0_linux_amd64.tar.gz | tar xzf - -C /usr/local/bin
          
          # Execute supply chain composition scan of container image, generate SBOM and send to policy engine for evaluation
          anchorectl image add --wait myapp:latest
          
      - name: Evaluate Policy
        run: |
          # Get policy evaluation results
          RESULT=$(anchorectl image check myapp:latest --policy critical_vulnerability_policy)
          
          # Handle the evaluation result
          if [[ $RESULT == *"Status: pass"* ]]; then
            echo "Policy evaluation passed! Proceeding with deployment."
          else
            echo "Policy evaluation failed! Deployment blocked."
            exit 1
          fi
      
      - name: Deploy if Passed
        if: success()
        run: |
          # Your deployment steps here

This workflow:

  1. Builds your application as a container image using Docker
  2. Installs AnchoreCTL
  3. Scans container image with SCA tool to map software supply chain
  4. Generates an SBOM based on the SCA results
  5. Submits the SBOM to the policy engine for evaluation
  6. Gets evaluation results from policy engine response
  7. Continues or halts the pipeline based on the policy response
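
One practical note: AnchoreCTL needs to know where your Anchore Enterprise deployment lives and how to authenticate. A common pattern is to export its connection settings as environment variables populated from CI secrets. The variable names below reflect AnchoreCTL’s standard environment configuration and the values are placeholders:

# Typically sourced from GitHub Actions secrets rather than hard-coded
export ANCHORECTL_URL="https://anchore.example.com"
export ANCHORECTL_USERNAME="ci-service-account"
export ANCHORECTL_PASSWORD="********"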

Step 4: Test the integration

With the integration in place, it’s time to test that everything works as expected:

  1. Create a test build that intentionally includes a component with a known critical vulnerability (see the sketch after this checklist)
  2. Push the build through your CI/CD pipeline
  3. Confirm that:
    • The SBOM is correctly generated
    • The policy engine identifies the vulnerability
    • The pipeline fails as expected
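
For the first item, one quick way to produce a deliberately vulnerable test image is to build on an old, unpatched base image. The base tag below is just an example of an image that carries known critical CVEs:

# Throwaway Dockerfile that layers your app onto an old base image
cat > Dockerfile.test <<'EOF'
FROM ubuntu:16.04
COPY . /app
EOF

docker build -f Dockerfile.test -t myapp:vuln-test .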

If all goes well, you’ve successfully implemented your first policy-as-code workflow using SBOMs!

Step 5: Expand your policy coverage

Once you have the basic integration working, you can begin expanding your policy coverage to include:

  • Security policies
  • Compliance policies
  • Software license policies
  • Custom organizational policies
  • Environment-specific requirements (e.g., stricter policies for production vs. development)

Work with your security and compliance teams to translate their requirements into policy code, and gradually expand your automated policy coverage. This process is a large upfront investment but creates recurring benefits that pay dividends over the long-term.

Step 6: Profit!

With SBOMs and policy-as-code implemented, you’ll start seeing the benefits almost immediately:

  • Fast feedback on security and compliance issues
  • Reduced manual compliance tasks
  • Better documentation of what’s in your software and why
  • Consistent evaluation and enforcement of policies
  • Certainty about policy approvals

The key to success is getting your security and compliance teams to embrace the policy-as-code approach. Help them understand that by translating their policies into code, they gain more consistent enforcement while reducing manual effort.

Wrap-Up

As we’ve explored throughout this guide, SBOMs and policy-as-code represent a fundamental shift in how developers interact with security and compliance requirements. Rather than treating these as external constraints that slow down development, they become integrated features of your DevOps pipeline.

Key takeaways

  • Policy-as-Code transforms organizational policies from static documents into dynamic, version-controlled code that can be automated, tested, and integrated into CI/CD pipelines.
  • SBOMs provide a standardized format for documenting your software’s components, creating a consistent interface that policy engines can evaluate.
  • Together, they enable “shift-left” security and compliance, providing immediate feedback on policy violations without meetings or context switching.
  • Integration is straightforward with pre-built plugins for popular DevOps platforms, allowing you to automate policy evaluation as part of your existing build process.
  • The benefits extend beyond security to include faster development cycles, reduced compliance burden, and better visibility into your software supply chain.

Get started today

Ready to bring SBOMs and policy-as-code to your development environment? Anchore Enterprise provides a comprehensive platform for generating SBOMs, defining policies, and automating policy evaluation across your software supply chain.


Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.


How to Automate Container Vulnerability Scanning for Harbor Registry with Anchore Enterprise

Security engineers at modern enterprises face an unprecedented challenge: managing software supply chain risk without impeding development velocity, all while threat actors exploit the rapidly expanding attack surface. With over 25,000 new vulnerabilities in 2023 alone and supply chain attacks surging 540% year-over-year from 2019 to 2022, the exploding adoption of open source software has created an untenable security environment. To overcome these challenges, security teams need tools that scale their impact and invert the perception that they are a speed bump for high-velocity software delivery.

If your DevSecOps pipeline utilizes the open source Harbor registry then we have the perfect answer to your needs. Integrating Anchore Enterprise—the SBOM-powered container vulnerability management platform—with Harbor offers the force-multiplier security teams need. This one-two combo delivers:

  • Proactive vulnerability management: Automatically scan container images before they reach production
  • Actionable security insights: Generate SBOMs, identify vulnerabilities and alert on actionable insights to streamline remediation efforts
  • Lightweight implementation: Native Harbor integration requiring minimal configuration while delivering maximum value
  • Improved cultural dynamics: Reduce security incident risk and, at the same time, burden on development teams while building cross-functional trust

This technical guide walks through the implementation steps for integrating Anchore Enterprise into Harbor, equipping security engineers with the knowledge to secure their software supply chain without sacrificing velocity.

Learn the essential container security best practices to reduce the risk of software supply chain attacks in this white paper.

Reduce Risk for Software Supply Chain Attacks: Best Practices for Container Security

Integration Overview

Anchore Enterprise can integrate with Harbor in two different ways—each has pros and cons:

Pull Integration Model

In this model, Anchore uses registry credentials to pull and analyze images from Harbor:

  • Anchore accesses Harbor using standard Docker V2 registry integration
  • Images are analyzed directly within Anchore Enterprise
  • Results are available in Anchore’s interface and API
  • Ideal for organizations where direct access to Harbor is restricted but API access is permitted

Push Integration Model

In this model, Harbor uses its native scanner adapter feature to push images to Anchore for analysis:

  • Harbor initiates scans on-demand through its scanner adapter as images are added
  • Images are scanned within the Anchore deployment
  • Vulnerability scan results are stored in Anchore and sent to Harbor’s UI
  • Better for environments with direct access to Harbor that want immediate scans

Both methods provide strong security benefits but differ in workflow and where results are accessed.

Setting Up the Pull Integration

Let’s walk through how to configure Anchore Enterprise to pull and analyze images from your Harbor registry.

Prerequisites

  • Anchore Enterprise installed and running
  • Harbor registry deployed and accessible
  • Harbor user account with appropriate permissions

Step 1: Configure Registry Credentials in Anchore

  1. In Anchore Enterprise, navigate to the “Registries” section
  2. Select “Add Registry”
  3. Fill in the following details:
     Registry Hostname or IP Address: [your Harbor API URL or IP address, e.g., http://harbor.yourdomain.com]
     Name: [Human-readable name]
     Type: docker_v2
     Username: [your Harbor username, e.g., admin]
     Password: [your Harbor password]
  4. Configure any additional options like SSL validation if necessary
  5. Test the connection
  6. Save the configuration

Step 2: Analyze an Image from Harbor

Once the registry is configured, you can analyze images stored in Harbor:

  1. Navigate to the “Images” section in Anchore Enterprise
  2. Select “Add Image”
  3. Choose your Harbor registry from the dropdown
  4. Specify the repository and tag for the image you want to analyze
  5. Click “Analyze”

Anchore will pull the image from Harbor, decompose it, generate an SBOM, and scan for vulnerabilities. This process typically takes a few minutes depending on image size.
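
If you prefer the command line, the same pull-and-analyze flow can be driven with AnchoreCTL using the registry you configured in Step 1 (the image and project names below are placeholders):

# Pull the image from Harbor, generate the SBOM, and wait for analysis to finish
anchorectl image add --wait harbor.yourdomain.com/project-name/my-app:latest

# Evaluate the analyzed image against your active policy
anchorectl image check harbor.yourdomain.com/project-name/my-app:latest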

Step 3: Review Analysis Results

After analysis completes:

  1. View the vulnerability report in the Anchore UI
  2. Check the generated SBOM for all dependencies
  3. Review compliance status against configured policies
  4. Export reports or take remediation actions as needed

Setting Up the Push Integration

Now let’s configure Harbor to push images to Anchore for scanning using the Harbor Scanner Adapter.

Prerequisites

  • Harbor v2.0 or later installed
  • Anchore Enterprise deployed and accessible
  • Harbor Scanner Adapter for Anchore installed

Step 1: Deploy the Harbor Scanner Adapter

If not already deployed, install the Harbor Scanner Adapter for Anchore:

  1. Download or copy the harbor-adapter-anchore.yaml template from our GitHub repository
  2. Customize the template for your Harbor deployment. The required fields are:
     ANCHORE_ENDPOINT
     ANCHORE_USERNAME
     ANCHORE_PASSWORD
  3. Apply the Kubernetes manifest:
     kubectl apply -f harbor-adapter-anchore.yaml

Step 2: Configure the Scanner in Harbor

  1. Log in to Harbor as an administrator
  2. Navigate to “Administration” → “Interrogation Services”
  3. In the “Scanners” tab, click “New Scanner”
  4. Enter the following details:
     Name: Anchore
     Description: Anchore Enterprise Scanner
     Endpoint: http://harbor-scanner-anchore:8080
     Auth: None (or as required by your configuration)
  5. Save and set as default if desired

Step 3: Configure Project Scanning Settings

For each project that should use Anchore scanning:

  1. Navigate to the project in Harbor
  2. Go to “Configuration”
  3. Enable “Automatically scan images on push” and “Automatically generate SBOM on push”
  4. Save the configuration

Step 4: Test the Integration

  1. Tag an image for your Harbor project:
     docker tag my-test-application:latest harbor.yourdomain.com/project-name/my-test-application:latest
  2. Push the image to Harbor:
     docker push harbor.yourdomain.com/project-name/my-test-application:latest
  3. Verify the automatic scan starts in Harbor
  4. Review the results in your Harbor UI once scanning completes

Advanced Configuration Features

Now that you have the base configuration working for the Harbor Scanner Adapter, you are ready to consider some additional features to increase your security posture.

Scheduled Scanning

Beyond on-push scanning, you can configure scheduled scanning to catch newly discovered vulnerabilities in existing images:

  1. In Harbor, navigate to “Administration” → “Interrogation Services” → “Vulnerability”
  2. Set the scan schedule (hourly, daily, weekly, etc.)
  3. Save the configuration

This ensures all images are regularly re-scanned as vulnerability databases are updated with newly discovered and documented vulnerabilities.

Security Policy Enforcement

To enforce security at the pipeline level:

  1. In your Harbor project, navigate to “Configuration”
  2. Enable “Prevent vulnerable images from running”
  3. Select the vulnerability severity level threshold (Low, Medium, High, Critical)

Images with vulnerabilities above this threshold will be blocked from being pulled.*

*Be careful with this setting for a production environment. If an image is flagged as having a vulnerability and your container orchestrator attempts to pull the image to auto-scale a service it may cause instability for users.

Proxy Image Cache

Harbor’s proxy cache capability provides an additional security layer:

  1. Navigate to “Registries” in Harbor and select “New Endpoint”
  2. Configure a proxy cache to a public registry like Docker Hub

All images pulled from Docker Hub will then be cached locally and automatically scanned for vulnerabilities based on your project settings.

Security Tips and Best Practices from the Anchore Team

Use Anchore Enterprise for highest fidelity vulnerability data

  • The Anchore Enterprise dashboard surfaces complete vulnerability details
  • Full vulnerability data can be configured with downstream integrations like Slack, Jira, ServiceNow, etc. 

“Good data empowers good people to make good decisions.”

—Dan Perry, Principal Customer Success Engineer, Anchore

Configuration Best Practices

For optimal security posture:

  • Configure per Harbor project: Use different vulnerability scanning settings for different risk profiles
  • Mind your environment topology: Adjust network timeouts and SSL settings based on network topology; make sure Harbor and Anchore Enterprise deployments are able to communicate securely

Secure Access Controls

  • Adopt least privilege principle: Use different credentials per repository
  • Utilize API keys: For service accounts and integrations, use API keys rather than user credentials

Conclusion

Integrating Anchore Enterprise with Harbor registry creates a powerful security checkpoint in your DevSecOps pipeline. By implementing either the pull or push model based on your specific needs, you can automate vulnerability scanning, enforce security policies, and maintain compliance requirements.

This integration enables security teams to:

  • Detect vulnerabilities before images reach production
  • Generate and maintain accurate SBOMs
  • Enforce security policies through prevention controls
  • Maintain continuous security through scheduled scans

With these tools properly integrated, you can significantly reduce the risk of deploying vulnerable containers to production environments, helping to secure your software supply chain.

If you’re a visual learner, this content is also available in webinar format, which you can watch on demand.


Unlocking the Power of SBOMs: A Complete Guide

Software Bills of Materials (SBOMs) are no longer optional—they’re mission-critical.

That’s why we’re excited to announce the release of our new white paper, “Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization.” This comprehensive guide is designed for security and engineering leadership at both commercial enterprises and federal agencies, providing actionable insights into how SBOMs are transforming the way organizations manage software complexity, mitigate risk, and drive business outcomes.

From software supply chain security to DevOps acceleration and regulatory compliance, SBOMs have emerged as a cornerstone of modern software development. They do more than provide a simple inventory of application components; they enable rapid security incident response, automated compliance, reduced legal risk, and accelerated software delivery.

⏱️ Can’t wait till the end?
📥 Download the white paper now 👇👇👇

WHITE PAPER | Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization

Why SBOMs Are a Game Changer

SBOMs are no longer just a checklist item—they’re a strategic asset. They provide an in-depth inventory of every component within your software ecosystem, complete with critical metadata about suppliers, licensing rights, and security postures. This newfound transparency is revolutionizing cross-functional operations across enterprises by:

  • Accelerating Incident Response: Quickly identify vulnerable components and neutralize threats before they escalate.
  • Enhancing Vulnerability Management: Prioritize remediation efforts based on risk, ensuring that developer resources are optimally deployed.
  • Strengthening Compliance: Automate and streamline adherence to complex regulatory requirements such as FedRAMP, SSDF Attestation, and DoD’s continuous ATO.
  • Reducing Legal Risk: Manage open source license obligations proactively, ensuring that every component meets your organization’s legal and security standards.

What’s Inside the White Paper?

Our white paper is organized by organizational function, with each section highlighting the relevant SBOM use-cases. Here’s a glimpse of what you can expect:

  • Security: Rapidly identify and mitigate zero-day vulnerabilities, scale vulnerability management, and detect software drift to prevent breaches.
  • Engineering & DevOps: Eliminate wasted developer time with real-time feedback, automate dependency management, and accelerate software delivery.
  • Regulatory Compliance: Automate policy checks, streamline compliance audits, and meet requirements like FedRAMP and SSDF Attestation with ease.
  • Legal: Reduce legal exposure by automating open source license risk management.
  • Sales: Instill confidence in customers and accelerate sales cycles by proactively providing SBOMs to quickly build trust.

Also, you’ll find real-world case studies from organizations that have successfully implemented SBOMs to reduce risk, boost efficiency, and gain a competitive edge. Learn how companies like Google and Cisco are leveraging SBOMs to drive business outcomes.

Empower Your Enterprise with SBOM-Centric Strategies

The white paper underscores that SBOMs are not a one-trick pony. They are the cornerstone of modern software supply chain management, driving benefits across security, engineering, compliance, legal, and customer trust. Whether your organization is embarking on its SBOM journey or refining an established process, this guide will help you unlock cross-functional value and future-proof your technology operations.

Download the White Paper Today

SBOMs are more than just compliance checkboxes—they are a strategic enabler for your organization’s security, development, and business operations. Whether your enterprise is just beginning its SBOM journey or operating a mature SBOM initiative, this white paper will help you uncover new ways to maximize value.

Explore SBOM use-cases for almost any department of the enterprise and learn how to unlock enterprise value to make the most of your software supply chain.

WHITE PAPER | Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization

Generating Python SBOMs: Using pipdeptree and Syft

SBOM (software bill of materials) generation is becoming increasingly important for software supply chain security and compliance. Several approaches exist for generating SBOMs for Python projects, each with its own strengths. In this post, we’ll explore two popular methods: using pipdeptree with cyclonedx-py and Syft. We’ll examine their differences and see why Syft is better for many use-cases.

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Why Generate Python Package SBOMs?

Before diving into the tools, let’s understand why generating an SBOM for your Python packages is increasingly critical in modern software development. Security analysis is a primary driver—SBOMs provide a detailed inventory of your dependencies that security teams can use to identify vulnerabilities in your software supply chain and respond quickly to newly discovered threats. The cybersecurity compliance landscape is also evolving rapidly, with many organizations and regulations (e.g., EO 14028) now requiring SBOMs as part of software delivery to ensure transparency and traceability in an organization’s software supply chain.

From a maintenance perspective, understanding your complete dependency tree is essential for effective project management. SBOMs help development teams track dependencies, plan updates, and understand the potential impact of changes across their applications. They’re particularly valuable when dealing with complex Python applications that may have hundreds of transitive dependencies.

License compliance is another crucial aspect where SBOMs prove invaluable. By tracking software licenses across your entire dependency tree, you can ensure your project complies with various open source licenses and identify potential conflicts before they become legal issues. This is especially important in Python projects, where dependencies might introduce a mix of licenses that need careful consideration.
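
As a rough sketch of what that looks like in practice, you can pair a Python SBOM with a license checker such as Grant (paths below are illustrative):

# Generate an SBOM for the project, then review the licenses it contains
syft dir:./my-python-app -o spdx-json > my-python-app-sbom.spdx.json
grant check ./my-python-app-sbom.spdx.json --show-packages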

Generating a Python SBOM with pipdeptree and cyclonedx-py

The first approach we’ll look at combines two specialized Python tools: pipdeptree for dependency analysis and cyclonedx-py for SBOM generation. Here’s how to use them:

# Install the required tools
$ pip install pipdeptree cyclonedx-bom

# Generate requirements with dependencies
$ pipdeptree --freeze > requirements.txt

# Generate SBOM in CycloneDX format
$ cyclonedx-py requirements requirements.txt > cyclonedx-sbom.json

This Python-specific approach leverages pipdeptree’s deep understanding of Python package relationships. pipdeptree excels at the following (a sample of its output appears after this list):

  • Detecting circular dependencies
  • Identifying conflicting dependencies
  • Providing a clear, hierarchical view of package relationships
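
For instance, here is a trimmed, illustrative view of what pipdeptree reports for the rich package (installed versions will vary by environment):

$ pipdeptree --packages rich
rich==13.9.4
├── markdown-it-py [required: >=2.2.0, installed: 3.0.0]
│   └── mdurl [required: ~=0.1, installed: 0.1.2]
└── Pygments [required: >=2.13.0,<3.0.0, installed: 2.18.0]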

Generating a Python SBOM with Syft: A Universal SBOM Generator

Syft takes a different approach. As a universal SBOM generator, it can analyze Python packages and multiple software artifacts. Here’s how to use Syft with Python projects:

# Install Syft (varies by platform)
# See: https://github.com/anchore/syft#installation
$ curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin

# Generate SBOM from requirements.txt
$ syft requirements.txt -o cyclonedx-json

# Or analyze an entire Python project
$ syft path/to/project -o cyclonedx-json

Key Advantages of Syft

Syft’s flexibility in output formats sets it apart from other tools. In addition to the widely used CycloneDX format, it supports SPDX for standardized software definitions and offers its own native JSON format that includes additional metadata. This format flexibility allows teams to generate SBOMs that meet various compliance requirements and tooling needs without switching between multiple generators.

Syft truly shines in its comprehensive analysis capabilities. Rather than limiting itself to a single source of truth, Syft examines your entire Python environment, detecting packages from multiple sources, including requirements.txt files, setup.py configurations, and installed packages. It seamlessly handles virtual environments and can even identify system-level dependencies that might impact your application.

The depth of metadata Syft provides is particularly valuable for security and compliance teams. For each package, Syft captures not just basic version information but also precise package locations within your environment, file hashes for integrity verification, detailed license information, and Common Platform Enumeration (CPE) identifiers. This rich metadata enables more thorough security analysis and helps teams maintain compliance with security policies.

Comparing the Outputs

We see significant differences in detail and scope when examining the outputs from both approaches. The pipdeptree with cyclonedx-py combination produces a focused output that concentrates specifically on Python package relationships. This approach yields a simpler, more streamlined SBOM that’s easy to read but contains limited metadata about each package.

Syft, on the other hand, generates a more comprehensive output that includes extensive metadata for each package. Its SBOM provides rich details about package origins, includes comprehensive CPE identification for better vulnerability matching, and offers built-in license detection. Syft also tracks the specific locations of files within your project and includes additional properties that can be valuable for security analysis and compliance tracking.

Here’s a snippet comparing the metadata for the rich package in both outputs:

// pipdeptree + cyclonedx-py
{
  "bom-ref":"pkg:pypi/[email protected]",
  "type":"library",
  "name":"rich",
  "version":"13.9.4"
}

// Syft
{
  "bom-ref":"pkg:pypi/[email protected]",
  "type":"library",
  "author":"Will McGugan <[email protected]>",
  "name":"rich",
  "version":"13.9.4",
  "licenses":[{"license":{"id": "MIT"}}],
  "cpe":"cpe:2.3:a:will_mcgugan_project:python-rich:13.9.4:*:*:*:*:*:*:*",
  "purl":"pkg:pypi/[email protected]",
  "properties":[
    {
      "name":"syft:package:language",
      "value":"python"
    },
    {
      "name":"syft:location:0:path",
      "value":"/.venv/lib/python3.10/site-packages/rich-13.9.4.dist-info/METADATA"
    }
  ]
}

Why Choose Syft?

While both approaches are valid, Syft offers several compelling advantages. As a universal tool that works across multiple software ecosystems, Syft eliminates the need to maintain different tools for different parts of your software stack.

Its rich metadata gives you deeper insights into your dependencies, including detailed license information and precise package locations. Syft’s support for multiple output formats, including CycloneDX, SPDX, and its native format, ensures compatibility with your existing toolchain and compliance requirements.

The project’s active development means you benefit from regular updates and security fixes, keeping pace with the evolving software supply chain security landscape. Finally, Syft’s robust CLI and API options make integrating into your existing automation pipelines and CI/CD workflows easy.

How to Generate a Python SBOM with Syft

Ready to generate SBOMs for your Python projects? Here’s how to get started with Syft:

  1. Install Syft following the official installation guide
  2. For a quick SBOM of your Python project:
     $ syft path/to/your/project -o cyclonedx-json
  3. Explore different output formats:
     $ syft path/to/your/project -o spdx-json
     $ syft path/to/your/project -o syft-json

Conclusion

While pipdeptree combined with cyclonedx-py provides a solid Python-specific solution, Syft offers a more comprehensive and versatile approach to SBOM generation. Its ability to handle multiple ecosystems, provide rich metadata, and support various output formats makes it an excellent choice for modern software supply chain security needs.

Whether starting with SBOMs or looking to improve your existing process, Syft provides a robust, future-proof solution that grows with your needs. Try it and see how it can enhance your software supply chain security today.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

Syft 1.20: Faster Scans, Smarter License Detection, and Enhanced Bitnami Support

We’re excited to announce Syft v1.20.0! If you’re new to the community, Syft is Anchore’s open source software composition analysis (SCA) and SBOM generation tool that provides foundational support for software supply chain security for modern DevSecOps workflows.

The latest version is packed with performance improvements, enhanced SBOM accuracy, and several community-driven features that make software composition scanning more comprehensive and efficient than ever.

Understand, Implement & Leverage SBOMs for Stronger Security & Risk Management

50x Faster Windows Scans

Scanning projects with numerous DLLs was reported to take unusually long on Windows, sometimes up to 50 minutes. A sharp-eyed community member (@rogueai) discovered that certificate validation was being performed unnecessarily during DLL scanning. A fix was merged into this release, and those lengthy scans have been reduced to just a few minutes—a massive performance improvement for Windows users!

Bitnami Embedded SBOM Support: Maximum Accuracy

Container images from Bitnami include valuable embedded SBOMs located at /opt/bitnami/. These SBOMs, packaged by the image creators themselves, represent the most authoritative source for package metadata. Thanks to community member @juan131 and maintainer @willmurphyscode, Syft now includes a dedicated cataloger for these embedded SBOMs.

This feature wasn’t simple to implement. It required careful handling of package relationships and sophisticated deduplication logic to merge authoritative vendor data with Syft’s existing scanning capabilities. The result? Scanning Bitnami images gives you the most accurate SBOM possible, combining authoritative vendor data with Syft’s comprehensive analysis.
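
Trying it out is as simple as pointing Syft at any Bitnami image (the image below is just an example):

# The embedded SBOM under /opt/bitnami/ is merged into Syft's own analysis automatically
syft bitnami/postgresql:latest -o syft-json > postgresql-sbom.syft.json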

Smarter License Detection

Handling licenses for non-open source projects can be a bit tricky. We discovered that when license files can’t be matched to a valid SPDX expression, they sometimes get erroneously marked as “unlicensed”—even when valid license text is present. For example, our dpkg cataloger occasionally encountered a license like:

NVIDIA Software License Agreement and CUDA Supplement to Software License Agreement

And categorized the package as unlicensed. Ideally, the cataloger would capture this non-standards-compliant license whether or not the maintainer follows SPDX.

Community member @HeyeOpenSource and maintainer @spiffcs tackled this challenge with an elegant solution: a new configuration option that preserves the original license text when SPDX matching fails. While disabled by default for compatibility, you can enable this feature with license.include-unknown-license-content: true in your configuration. This ensures you never lose essential license information, even for non-standard licenses.
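
If you manage Syft through a configuration file, a minimal sketch of enabling this option (the nesting mirrors the dotted option name above) would be:

# Write the option into a project-local .syft.yaml, then scan as usual
cat > .syft.yaml <<'EOF'
license:
  include-unknown-license-content: true
EOF

syft my-registry.example.com/my-image:latest -o syft-json > sbom.json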

Go 1.24: Better Performance and Versioning

The upgrade to Go 1.24 brings two significant improvements:

  • Faster Scanning: Thanks to Go 1.24’s optimized map implementations, discussed in this Bytesize Go post—and other performance improvements—we’re seeing scan times reduced by as much as 20% in our testing.
  • Enhanced Version Detection: Go 1.24’s new version embedding means Syft can now accurately report its version and will increasingly provide more accurate version information for Go applications it scans:
$ go version -m ./syft
syft: go1.24.0
        path    github.com/anchore/syft/cmd/syft
        mod     github.com/anchore/syft    v1.20.0

This also means that as more applications are built with Go 1.24, the versions reported by Syft will become increasingly accurate over time. Everyone’s a winner!

Join the Conversation

We’re proud of these enhancements and grateful to the community for their contributions. If you’re interested in contributing or have ideas for future improvements, head to our GitHub repo and join the conversation. Your feedback and pull requests help shape the future of Syft and our other projects. Happy scanning!

Stay updated on future community spotlights and events by subscribing to our community newsletter.

Learn how MegaLinter leverages Syft and Grype to generate SBOMs and create vulnerability reports


How Syft Scans Software to Generate SBOMs

Syft is an open source CLI tool and Go library that generates a Software Bill of Materials (SBOM) from source code, container images and packaged binaries. It is a foundational building block for various use-cases: from vulnerability scanning with tools like Grype, to OSS license compliance with tools like Grant. SBOMs track software components—and their associated supplier, security, licensing, compliance, etc. metadata—through the software development lifecycle.

At a high level, Syft takes the following approach to generating an SBOM:

  1. Determine the type of input source (container image, directory, archive, etc.)
  2. Orchestrate a pluggable set of catalogers to scan the source or artifact
    • Each package cataloger looks for package types it knows about (RPMs, Debian packages, NPM modules, Python packages, etc.)
    • In addition, the file catalogers gather other metadata and generate file hashes
  3. Aggregate all discovered components into an SBOM document
  4. Output the SBOM in the desired format (Syft, SPDX, CycloneDX, etc.)

Let’s dive into each of these steps in more detail.


Explore SBOM use-cases for almost any department of the enterprise and learn how to unlock enterprise value to make the most of your software supply chain.

WHITE PAPER | Unlock Enterprise Value with SBOMs: Use-Cases for the Entire Organization

Flexible Input Sources

Syft can generate an SBOM from several different source types:

  • Container images (both from registries and local Docker/Podman engines)
  • Local filesystems and directories
  • Archives (TAR, ZIP, etc.)
  • Single files

This flexibility is important as SBOMs are used in a variety of environments, from a developer’s workstation to a CI/CD pipeline.

When you run Syft, it first tries to autodetect the source type from the provided input. For example:

# Scan a container image 
syft ubuntu:latest

# Scan a local filesystem
syft ./my-app/
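
When autodetection isn’t what you want (for example, in CI where no Docker daemon is running), you can also force a specific source type with an explicit scheme:

# Pull straight from a registry, bypassing the local container engine
syft registry:ubuntu:latest

# Treat the input as a directory or as a single archive/file
syft dir:./my-app
syft file:./my-app/app.tar.gz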

Pluggable Package Catalogers

The heart of Syft is its decoupled architecture for software composition analysis (SCA). Rather than one monolithic scanner, Syft delegates scanning to a collection of catalogers, each focused on a specific software ecosystem.

Some key catalogers include:

  • apk-db-cataloger for Alpine packages
  • dpkg-db-cataloger for Debian packages
  • rpm-db-cataloger for RPM packages (sourced from various databases)
  • python-package-cataloger for Python packages
  • java-archive-cataloger for Java archives (JAR, WAR, EAR)
  • npm-package-cataloger for Node/NPM packages

Syft automatically selects which catalogers to run based on the source type. For a container image, it will run catalogers for the package types installed in containers (RPM, Debian, APK, NPM, etc). For a filesystem, Syft runs a different set of catalogers looking for installed software that is more typical for filesystems and source code.

This pluggable architecture gives Syft broad coverage while keeping the core streamlined. Each cataloger can focus on accurately detecting its specific package type.

If we look at a snippet of the trace output from scanning an Ubuntu image, we can see some catalogers in action:

[0001] DEBUG discovered 91 packages cataloger=dpkg-db-cataloger...  
[0001] DEBUG discovered 0 packages cataloger=rpm-db-cataloger
[0001] DEBUG discovered 0 packages cataloger=npm-package-cataloger

Here, the dpkg-db-cataloger found 91 Debian packages, while the rpm-db-cataloger and npm-package-cataloger didn’t find any packages of their types—which makes sense for an Ubuntu image.

Aggregating and Outputting Results

Once all catalogers have finished, Syft aggregates the results into a single SBOM document. This normalized representation abstracts away the implementation details of the different package types.

The SBOM includes key data for each package like:

  • Name
  • Version
  • Type (Debian, RPM, NPM, etc)
  • Files belonging to the package
  • Source information (repository, download URL, etc.)
  • File digests and metadata

It also contains essential metadata, including a copy of the configuration used when generating the SBOM (for reproducibility). In addition, the SBOM records detailed evidence for each package, describing exactly what each package was parsed from (within package.Metadata).

Finally, Syft serializes this document into one or more output formats. Supported formats include:

  • Syft’s native JSON format
  • SPDX’s tag-value and JSON
  • CycloneDX’s JSON and XML

Having multiple formats allows integrating Syft into a variety of toolchains and passing data between systems that expect certain standards.
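
A single scan can also emit several of these formats at once by giving each -o flag its own output file:

syft ubuntu:latest \
  -o syft-json=sbom.syft.json \
  -o spdx-json=sbom.spdx.json \
  -o cyclonedx-xml=sbom.cdx.xml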

Revisiting the earlier Ubuntu example, we can see a snippet of the final output:

NAME         VERSION            TYPE
apt          2.7.14build2       deb
base-files   13ubuntu10.1       deb
bash         5.2.21-2ubuntu4    deb

Container Image Parsing with Stereoscope

To generate high-quality SBOMs from container images, Syft leverages the Stereoscope library for parsing container image formats.

Stereoscope does the heavy lifting of unpacking an image into its constituent layers, understanding the image metadata, and providing a unified filesystem view for Syft to scan.

This encapsulation is quite powerful, as it abstracts the details of different container image specs (Docker, OCI, etc.), allowing Syft to focus on SBOM generation while still supporting a wide range of images.

Cataloging Challenges and Future Work

While Syft can generate quality SBOMs for many source types, there are still challenges and room for improvement.

One challenge is supporting the vast variety of package types and versioning schemes. Each ecosystem has its own conventions, making it challenging to extract metadata consistently. Syft has added support for more ecosystems and evolved its catalogers to handle edge cases across an expanding array of software tooling.

Another challenge is dynamically generated packages, like those created at runtime or built from source. Capturing these requires more sophisticated analysis that Syft does not yet do. To illustrate, let’s look at two common cases:

Runtime Generated Packages

Imagine a Python application that uses a web framework like Flask or Django. These frameworks allow defining routes and views dynamically at runtime based on configuration or plugin systems.

For example, an application might scan a /plugins directory on startup, importing any Python modules found and registering their routes and models with the framework. These plugins could pull in their own dependencies dynamically using importlib.

From Syft’s perspective, none of this dynamic plugin and dependency discovery happens until the application actually runs. The Python files Syft scans statically won’t reveal those runtime behaviors.

Furthermore, plugins could be loaded from external sources not even present in the codebase Syft analyzes. They might be fetched over HTTP from a plugin registry as the application starts.

To truly capture the full set of packages in use, Syft would need to do complex static analysis to trace these dynamic flows, or instrument the running application to capture what it actually loads. Both are much harder than scanning static files.

Source Built Packages

Another typical case is building packages from source rather than installing them from a registry like PyPI or RubyGems.

Consider a C++ application that bundles several libraries in a /3rdparty directory and builds them from source as part of its build process.

When Syft scans the source code directory or docker image, it won’t find any already built C++ libraries to detect as packages. All it will see are raw source files, which are much harder to map to packages and versions.

One approach is to infer packages from standard build tool configuration files, like CMakeLists.txt or Makefile. However, resolving the declared dependencies to determine the full package versions requires either running the build or profoundly understanding the specific semantics of each build tool. Both are fragile compared to scanning already built artifacts.

Some Language Ecosystems are Harder Than Others

It’s worth noting that dynamism and source builds are more or less prevalent in different language ecosystems.

Interpreted languages like Python, Ruby, and JavaScript tend to have more runtime dynamism in their package loading compared to compiled languages like Java or Go. That said, even compiled languages have ways of loading code dynamically; it just tends to be less common.

Likewise, some ecosystems emphasize always building from source, while others have a strong culture of using pre-built packages from central registries.

These differences mean the level of difficulty for Syft in generating a complete SBOM varies across ecosystems. Some will be more amenable to static analysis than others out of the box.

What Could Help?

To be clear, Syft has already done impressive work in generating quality SBOMs across many ecosystems despite these challenges. But to reach the next level of coverage, some additional analysis techniques could help:

  1. Static analysis to trace dynamic code flows and infer possible loaded packages (with soundness tradeoffs to consider)
  2. Dynamic instrumentation/tracing of applications to capture actual package loads (sampling and performance overhead to consider)
  3. Standardized metadata formats for build systems to declare dependencies (adoption curve and migration path to consider)
  4. Heuristic mapping of source files to known packages (ambiguity and false positives to consider)

None are silver bullets, but they illustrate the approaches that could help push SBOM coverage further in complex cases.

Ultimately, there will likely always be a gap between what static tools like Syft can discover versus the actual dynamic reality of applications. But that doesn’t mean we shouldn’t keep pushing the boundary! Even incremental improvements in these areas help make the software ecosystem more transparent and secure.

Syft also has room to grow in terms of programming language support. While it covers major ecosystems like Java and Python well, more work is needed to cover languages like Go, Rust, and Swift completely.

As the SBOM landscape evolves, Syft will continue to adapt to handle more package types, sources, and formats. Its extensible architecture is designed to make this growth possible.

Get Involved

Syft is fully open source and welcomes community contributions. If you’re interested in adding support for a new ecosystem, fixing bugs, or improving SBOM generation, the repo is the place to get started.

There are issues labeled “Good First Issue” for those new to the codebase. For more experienced developers, the code is structured to make adding new catalogers reasonably straightforward.

No matter your experience level, there are ways to get involved and help push the state of the art in SBOM generation. We hope you’ll join us!


Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.


2025 Cybersecurity Executive Order Requires Up Leveled Software Supply Chain Security

A few weeks ago, the Biden administration published a new Executive Order (EO) titled “Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity”. This is a follow-up to the original cybersecurity executive order—EO 14028—from May 2021. This latest EO specifically targets improvements to software supply chain security that address gaps and challenges that have surfaced since the release of EO 14028.

While many issues were identified, the executive order named and shamed software vendors for signing and submitting secure software development compliance documents without fully adhering to the framework. The full quote:

In some instances, providers of software to the Federal Government commit[ed] to following cybersecurity practices, yet do not fix well-known exploitable vulnerabilities in their software, which put the Government at risk of compromise. The Federal Government needs to adopt more rigorous 3rd-party risk management practices and greater assurance that software providers… are following the practices to which they attest.

In response to this behavior, the 2025 Cybersecurity EO has taken a number of additional steps to both encourage cybersecurity compliance and deter non-compliance—the carrot and the stick. This comes in the form of 4 primary changes:

  1. Compliance verification by CISA
  2. Legal repercussions for non-compliance
  3. Contract modifications for Federal agency software acquisition
  4. Mandatory adoption of software supply chain risk management practices by Federal agencies

In this post, we’ll explore the new cybersecurity EO in detail, what has changed in software supply chain security compliance and what both federal agencies and software providers can do right now to prepare.

What Led to the New Cybersecurity Executive Order?

In the wake of massive growth of supply chain attacks—especially those from nation-state threat actors like China—EO 14028 made software bills of materials (SBOMs) and software supply chain security spotlight agenda items for the Federal government. As directed by the EO, the National Institute of Standards and Technology (NIST) authored the Secure Software Development Framework (SSDF) to codify the specific secure software development practices needed to protect the US and its citizens.

Following this, the Office of Management and Budget (OMB) published a memo that established a deadline for agencies to require vendor compliance with the SSDF. Importantly, the memo allowed vendors to “self-attest” to SSDF compliance.

In practice, many software providers chose to go the easy route and submitted SSDF attestations while only partially complying with the framework. Although the government initially hoped that vendors would not exploit this somewhat obvious loophole, reality intervened, leading to the issuance of the 2025 cybersecurity EO to close off these opportunities for non-compliance.

What’s Changing in the 2025 EO?

1. Rigorous verification of secure software development compliance

No longer can vendors simply self-attest to SSDF compliance. The Cybersecurity and Infrastructure Security Agency (CISA) is now tasked with validating these attestations via the additional compliance artifacts that the new EO requires. Providers that fail validation risk increased scrutiny and potential consequences such as…

2. CISA authority to refer non-compliant vendors to DOJ

A major shift comes from the EO’s provision allowing CISA to forward fraudulent attestations to the Department of Justice (DOJ). In the EO’s words, officials may “refer attestations that fail validation to the Attorney General for action as appropriate.” This raises the stakes for software vendors, as submitting false information on SSDF compliance could lead to legal consequences. 

3. Explicit SSDF compliance in software acquisition contracts

The Federal Acquisition Regulatory Council (FAR Council) will issue contract modifications that explicitly require SSDF compliance and additional items to enable CISA to programmatically verify compliance. Federal agencies will incorporate updated language in their software acquisition contracts, making vendors contractually accountable for any misrepresentations in SSDF attestations.

See the “FAR council contract updates” section below for the full details.

4. Mandatory adoption of supply chain risk management

Agencies must now embed supply chain risk management (SCRM) practices agency-wide, aligning with NIST SP 800-161, which details best practices for assessing and mitigating risks in the supply chain. This elevates SCRM to a “must-have” strategy for every Federal agency.

Other Notable Changes

Updated software supply chain security compliance controls

NIST will update both NIST SP 800-53, the “Control Catalog”, and the SSDF (NIST SP 800-218) to align with the new policy. The updates will incorporate additional controls and greater detail on existing controls. Although no controls have yet been formally added or modified, NIST is tasked with modernizing these documents to align with changes in secure software development practices. Once those updates are complete, agencies and vendors will be expected to meet the revised requirements.

Policy-as-code pilot

Section 7 of the EO describes a pilot program focused on translating security controls from compliance frameworks into “rules-as-code” templates. Essentially, this adopts a policy-as-code approach, often seen in DevSecOps, to automate compliance. By publishing machine-readable security controls that can be integrated directly into DevOps, security, and compliance tooling, organizations can reduce manual overhead and friction, making it easier for both agencies and vendors to consistently meet regulatory expectations.
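
To make this concrete, here is a minimal sketch of what evaluating a “rules-as-code” control might look like. The EO’s pilot has not defined a schema, so the control structure, field names, and the use of NIST SP 800-53 control RA-5 below are illustrative assumptions rather than an official format.

    # Hypothetical sketch: evaluate a machine-readable security control against
    # automated scan output. The control schema and scan-result format are
    # illustrative; the EO's pilot has not published an official format.
    from dataclasses import dataclass

    @dataclass
    class Control:
        control_id: str           # e.g., a NIST SP 800-53 identifier like "RA-5"
        description: str
        max_critical_vulns: int   # threshold the scan output must satisfy

    def evaluate(control: Control, scan_results: dict) -> bool:
        """Return True if the scan results satisfy the control's threshold."""
        critical_count = sum(
            1 for finding in scan_results.get("findings", [])
            if finding.get("severity") == "Critical"
        )
        return critical_count <= control.max_critical_vulns

    ra5 = Control("RA-5", "Vulnerability monitoring and scanning", max_critical_vulns=0)
    sample_scan = {"findings": [{"id": "CVE-2024-0001", "severity": "High"}]}
    print(f"{ra5.control_id} satisfied: {evaluate(ra5, sample_scan)}")

The value of this approach is less about any single check and more about agencies and vendors being able to run the same published, machine-readable controls inside their own pipelines.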

FedRAMP incentives and new key management controls

The Federal Risk and Authorization Management Program (FedRAMP), responsible for authorizing cloud service providers (CSPs) for federal use, will also see important updates. FedRAMP will develop policies that incentivize or require CSPs to share recommended security configurations, thereby promoting a standard baseline for cloud security. The EO also proposes updates to FedRAMP security controls to address cryptographic key management best practices, ensuring that CSPs properly safeguard cryptographic keys throughout their lifecycle.

How to Prepare for the New Requirements

FAR council contract updates

Within 30 days of the EO release, the FAR Council will publish recommended contract language. This updated language will mandate:

  • Machine-readable SSDF Attestations: Vendors must provide an SSDF attestation in a structured, machine-readable format.
  • Compliance Reporting Artifacts: High-level artifacts that demonstrate evidence of SSDF compliance, potentially including automated scan results, security test reports, or vulnerability assessments.
  • Customer List: A list of all civilian agencies using the vendor’s software, enabling CISA and federal agencies to understand the scope of potential supply chain risk.

Important Note: The 30-day window applies to the FAR Council to propose new contract language—not for software vendors to become fully compliant. However, once the new contract clauses are in effect, vendors who want to sell to federal agencies will need to meet these updated requirements.

Action Steps for Federal Agencies

Federal agencies will bear new responsibilities to ensure compliance and mitigate supply chain risk. Here’s what you should do now:

  1. Inventory 3rd-Party Software Component Suppliers
  2. Assess Visibility into Supplier Risk
    • Perform a vulnerability scan on all 3rd-party components. If you already have SBOMs, scanning them for vulnerabilities is a quick way to identify risk (see the sketch after this list).
  3. Identify Critical Suppliers
    • Determine which software vendors are mission-critical. This helps you understand where to focus your risk management efforts.
  4. Assess Data Sensitivity
    • If a vendor handles sensitive data (e.g., PII), extra scrutiny is needed. A breach here has broader implications for the entire agency.
  5. Map Potential Lateral Movement Risk
    • Understand if a vendor’s software could provide attackers with lateral movement opportunities within your infrastructure.
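
As referenced in step 2 above, here is a minimal sketch of scanning a directory of existing supplier SBOMs for known vulnerabilities. It assumes Grype (Anchore’s open source vulnerability scanner) is installed and on the PATH and that the SBOMs are stored as JSON files; the directory name and severity filter are illustrative.

    # Minimal sketch: scan existing SBOM files for known vulnerabilities with
    # Grype (https://github.com/anchore/grype). Assumes grype is installed;
    # check `grype --help` for the flags supported by your version.
    import json
    import subprocess
    from pathlib import Path

    SBOM_DIR = Path("./supplier-sboms")  # hypothetical directory of SBOM files

    for sbom_file in SBOM_DIR.glob("*.json"):
        # "sbom:<file>" asks grype to match an existing SBOM instead of an image
        result = subprocess.run(
            ["grype", f"sbom:{sbom_file}", "-o", "json"],
            capture_output=True, text=True, check=True,
        )
        report = json.loads(result.stdout)
        critical = [
            m["vulnerability"]["id"]
            for m in report.get("matches", [])
            if m["vulnerability"].get("severity") == "Critical"
        ]
        print(f"{sbom_file.name}: {len(critical)} critical findings {critical}")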

Action Steps for Software Providers

For software vendors, especially those who sell to the federal government, proactivity is key to maintaining and expanding federal contracts.

  1. Inventory Your Software Supply Chain
    • Implement an SBOM-powered SCA solution within your DevSecOps pipeline to gain real-time visibility into all 3rd-party components.
  2. Assess Supplier Risk
    • Perform vulnerability scanning on 3rd-party supplier components to identify any that could jeopardize your compliance or your customers’ security.
  3. Identify Sensitive Data Handling
    • If your software processes personally identifiable information (PII) or other sensitive data, expect increased scrutiny. On the flip side, this may make your offering “mission-critical” and less prone to replacement—but it also means compliance standards will be higher.
  4. Map Your Own Attack Surface
    • Assess whether a 3rd-party supplier breach could cascade into your infrastructure and, by extension, your customers.
  5. Prepare Compliance Evidence
    • Start collecting artifacts—such as vulnerability scan reports, secure coding guidelines, and internal SSDF compliance checklists—so you can quickly meet new attestation requirements when they come into effect.

Wrap-Up

The 2025 Cybersecurity EO is a direct response to the flaws uncovered in EO 14028’s self-attestation approach. By requiring 3rd-party validation of SSDF compliance, the government aims to create tangible improvements in its cybersecurity posture—and expects the same from all who supply its agencies.

Given the rapid timeline, preparation is crucial. Both federal agencies and software providers should begin assessing their supply chain risks, implementing SBOM-driven visibility, and proactively planning for new attestation and reporting obligations. By taking steps now, you’ll be better positioned to meet the new requirements.

Learn about SSDF Attestation with this guide. The eBook will take you through everything you need to know to get started.

SSDF Attestation 101: A Practical Guide for Software Producers


All Things SBOM in 2025: a Weekly Webinar Series

Software Bills of Materials (SBOMs) have quickly become a critical component in modern software supply chain security. By offering a transparent view of all the components that make up your applications, SBOMs enable you to pinpoint vulnerabilities before they escalate into costly incidents.

As we enter 2025, software supply chain security and risk management for 3rd-party software dependencies are top of mind for organizations. The 2024 Anchore Software Supply Chain Security Survey notes that 76% of organizations consider these challenges top priorities. Given this, it is easy to see why understanding what SBOMs are—and how to implement them—is key to a secure software supply chain.

To help organizations achieve these top priorities, Anchore is hosting a weekly webinar series dedicated entirely to SBOMs. Beginning January 14 and running throughout Q1, our webinar line-up will explore a wide range of topics (see below). Industry luminaries like Kate Stewart (co-founder of the SPDX project) and Steve Springett (Chair of the OWASP CycloneDX Core Working Group) will be dropping in to provide unique insights and their special blend of expertise on all things SBOMs.

The series will cover:

  • SBOM basics and best practices
  • SPDX and SBOMs in-depth with Kate Stewart
  • Getting started: How to generate an SBOM
  • Software supply chain security and CycloneDX with Steve Springett
  • Scaling SBOMs for the enterprise
  • Real-world insights on applying SBOMs in high-stakes or regulated sectors
  • A look ahead at the future of SBOMs and software supply chain security with Kate Stewart
  • And more!

We invite you to learn from experts, gain practical skills, and stay ahead of the rapidly evolving world of software supply chain security. Visit our events page to register for the webinars now or keep reading to get a sneak peek at the content.

#1 Understanding SBOMs: An Intro to Modern Development

Date/Time: Tuesday, January 14, 2025 – 10am PST / 1pm EST
Featuring: 

  • Lead Developer of Syft
  • Anchore VP of Security
  • Anchore Director of Developer Relations

We are kicking off the series with an introduction to the essentials of SBOMs. This session will cover the basics of SBOMs—what they are, why they matter, and how to get started generating and managing them. Our experts will walk you through real-world examples (including Log4j) to show just how vital it is to know what’s in your software.

Key Topics:

This webinar is perfect for both technical practitioners and business leaders looking to establish a strong SBOM foundation.

#2 Understanding SBOMs: Deep Dive with Kate Stewart

Date/Time: Wednesday, January 22, 2025 – 10am PST / 1pm EST
Featured Guest: Kate Stewart (co-founder of SPDX)

Our second session brings you a front-row seat to an in-depth conversation with Kate Stewart, co-founder of the SPDX project. Kate is a leading voice in software supply chain security and the SBOM standard. From the origins of the SPDX standard to the latest challenges in license compliance, Kate will provide an extensive behind-the-scenes look into the world of SBOMs.

Key Topics:

  • The history and evolution of SBOMs, including the creation of SPDX
  • Balancing license compliance with security requirements
  • How SBOMs support critical infrastructure with national security concerns
  • The impact of emerging technology—like open source LLMs—on SBOM generation and analysis

If you’re ready for a more advanced look at SBOMs and their strategic impact, you won’t want to miss this conversation.

#3 How to Automate, Generate, and Manage SBOMs

Date/Time: Wednesday, January 29, 2025 – 12pm EST / 9am PST
Featuring: 

  • Anchore Director of Developer Relations
  • Anchore Principal Solutions Engineer

For those seeking a hands-on approach, this webinar dives into the specifics of automating SBOM generation and management within your CI/CD pipeline. Anchore’s very own Alan Pope (Developer Relations) and Sean Fazenbaker (Solutions) will walk you through proven techniques for integrating SBOMs to reveal early vulnerabilities, minimize manual interventions, and improve overall security posture.

Key Topics:

This is the perfect session for teams focused on shifting security left and preserving developer velocity.

What’s Next?

Beyond our January line-up, we have more exciting sessions planned throughout Q1. Each webinar will feature industry experts and dive deeper into specialized use-cases and future technologies:

  • CycloneDX & OWASP with Steve Springett – A closer look at this popular SBOM format, its technical architecture, and VEX integration.
  • SBOM at Scale: Enterprise SBOM Management – Learn from large organizations that have successfully rolled out SBOM practices across hundreds of applications.
  • SBOMs in High-Stakes Environments – Explore how regulated industries like healthcare, finance, and government handle unique compliance challenges and risk management.
  • The Future of Software Supply Chain Security – Join us in March as we look ahead at emerging standards, tools, and best practices with Kate Stewart returning as the featured guest.

Stay tuned for dates and registration details for each upcoming session. Follow us on your favorite social network (Twitter, LinkedIn, Bluesky) or visit our events page to stay up-to-date.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

The Top Ten List: The 2024 Anchore Blog

To close out 2024, we’re going to count down the top 10 hottest hits from the Anchore blog in 2024! The Anchore content team continued our tradition of delivering expert guidance, practical insights, and forward-looking strategies on DevSecOps, cybersecurity compliance, and software supply chain management.

This top ten list spotlights our most impactful blog posts from the year, each tackling a different angle of modern software development and security. Hot topics this year include: 

  • All things SBOM (software bill of materials)
  • DevSecOps compliance for the Department of Defense (DoD)
  • Regulatory compliance for federal government vendors (e.g., FedRAMP & SSDF Attestation)
  • Vulnerability scanning and management—a perennial favorite!

Our selection runs the gamut of knowledge needed to help your team stay ahead of the compliance curve, boost DevSecOps efficiency, and fully embrace SBOMs. So, grab your popcorn and settle in—it’s time to count down the blog posts that made the biggest splash this year!

The Top Ten List

10 | A Guide to Air Gapping

Kicking us off at number 10 is a blog that’s all about staying off the grid—literally. Ever wonder what it really means to keep your network totally offline? 

A Guide to Air Gapping: Balancing Security and Efficiency in Classified Environments breaks down the concept of “air gapping”—literally disconnecting a computer from the internet by leaving a “gap of air” between your computer and an ethernet cable. It is generally considered a security practice to protect classified, military-grade data or similar.

Our blog covers the perks, like knocking out a huge range of cyber threats, and the downsides, like having to manually update and transfer data. It also details how Anchore Enforce Federal Edition can slip right into these ultra-secure setups, blending top-notch protection with the convenience of automated, cloud-native software checks.

9 | SBOMs + Vulnerability Management == Open Source Security++

Coming in at number nine on our countdown is a blog that breaks down two of our favorite topics: SBOMs and vulnerability scanners—and how using SBOMs as your foundation for vulnerability management can level up your open source security game.

SBOMs and Vulnerability Management: OSS Security in the DevSecOps Era is all about getting a handle on: 

  • every dependency in your code, 
  • scanning for issues early and often, and 
  • speeding up the DevSecOps process so you don’t feel the drag of legacy security tools. 

By switching to this modern, SBOM-driven approach, you’ll see benefits like faster fixes, smoother compliance checks, and fewer late-stage security surprises—just ask companies like NVIDIA, Infoblox, DreamFactory and ModuleQ, who’ve saved tons of time and hassle by adopting these practices.

8 | Improving Syft’s Binary Detection

Landing at number eight, we’ve got a blog post that’s basically a backstage pass to Syft’s binary detection magic. Improving Syft’s Binary Detection goes deep on how Syft—Anchore’s open source SBOM generation tool—uncovers the details of executable files and how you can lend a hand in making it even better.

We walk you through the process of adding new binary detection rules, from finding the right binaries and testing them out, to fine-tuning the patterns that match their version strings. 

The end goal? Helping all open source contributors quickly get started making their first pull request and broadening support for new ecosystems. A thriving, community-driven approach to better securing the global open source ecosystem.

7 | A Guide to FedRAMP in 2024: FAQs & Key Takeaways

Sliding in at lucky number seven, we’ve got the ultimate cheat sheet for FedRAMP in 2024 (and 2025😉)! Ever wonder how Uncle Sam greenlights those fancy cloud services? A Guide to FedRAMP in 2024: FAQs & Key Takeaways spills the beans on all the FedRAMP basics you’ve been struggling to find—fast answers without all the fluff. 

It covers what FedRAMP is, how it works, who needs it, and why it matters; detailing the key players and how it connects with other federal security standards like FISMA. The idea is to help you quickly get a handle on why cloud service providers often need FedRAMP certification, what benefits it offers, and what’s involved in earning that gold star of trust from federal agencies. By the end, you’ll know exactly where to start and what to expect if you’re eyeing a spot in the federal cloud marketplace.

6 | Introducing Grant: OSS License Management

At number six on tonight’s countdown, we’re rolling out the red carpet for Grant—Anchore’s snazzy new software license-wrangling sidekick! Introducing Grant: An OSS project for inspecting and checking license compliance using SBOMs covers how Grant helps you keep track of software licenses in your projects. 

By using SBOMs, Grant can quickly show you which licenses are in play—and whether any have changed from something friendly to something more restrictive. With handy list and check commands, Grant makes it easier to spot and manage license risk, ensuring you keep shipping software without getting hit with last-minute legal surprises.

5 | An Overview of SSDF Attestation: Compliance Need-to-Knows

Landing at number five on tonight’s compliance countdown is a big wake-up call for all you software suppliers eyeing Uncle Sam’s checkbook: the SSDF Attestation Form. That’s right—starting now, if you wanna do business with the feds, you gotta show off those DevSecOps chops, no exceptions! In Using the Common Form for SSDF Attestation: What Software Producers Need to Know we break down the new Secure Software Development Attestation Form—released in March 2024—that’s got everyone talking in the federal software space. 

In short, if you’re a software vendor wanting to sell to the US government, you now have to “show your work” when it comes to secure software development. The form builds on the SSDF framework, turning it from a nice-to-have into a must-do. It covers which software is included, the timelines you need to know, and what happens if you don’t shape up.

There are real financial risks if you can’t meet the deadlines or if you fudge the details (hello, criminal penalties!). With this new rule, it’s time to get serious about locking down your dev practices or risk losing out on government contracts.

4 | Prep your Containers for STIG

At number four, we’re diving headfirst into the STIG compliance world—the DoD’s ultimate ‘tough crowd’ when it comes to security! If you’re feeling stressed about locking down those container environments—we’ve got you covered. 4 Ways to Prepare your Containers for the STIG Process is all about tackling the often complicated STIG process for containers in DoD projects. 

You’ll learn how to level up your team by cross-training cybersecurity pros in container basics and introducing your devs and architects to STIG fundamentals. It also suggests going straight to the official DISA source for current STIG info, making the STIG Viewer a must-have tool on everyone’s workstation, and looking into automation to speed up compliance checks. 

Bottom line: stay informed, build internal expertise, and lean on the right tools to keep the STIG process from slowing you down.

3 | Syft Graduates to v1.0!

Give it up for number three on our countdown—Syft’s big graduation announcement! In Syft Reaches v1.0! we celebrate Syft hitting the big 1.0 milestone.

Syft is Anchore’s OSS tool for generating SBOMs, helping you figure out exactly what’s inside your software, from container images to source code. Over the years, it’s grown to support over 40 package types, outputting SBOMs in various formats like SPDX and CycloneDX. With v1.0, Syft’s CLI and API are now stable, so you can rely on it for consistent results and long-term compatibility. 

But don’t worry—development doesn’t stop here. The team plans to keep adding support for more ecosystems and formats, and they invite the community to join in, share ideas, and contribute to the future of Syft.

2 | RAISE 2.0 Overview: RMF and ATO for the US Navy

Next up at number two is the lowdown on RAISE 2.0—your backstage pass to lightning-fast software approvals with the US Navy! In RMF and ATO with RAISE 2.0 — Navy’s DevSecOps solution for Rapid Delivery we break down what RAISE 2.0 means for teams working with the Department of the Navy’s containerized software. The key takeaway? By using an approved DevSecOps platform—known as an RPoC—you can skip getting separate ATOs for every new app.

The guidelines encourage a “shift left” approach, focusing on early and continuous security checks throughout development. Tools like Anchore Enforce Federal Edition can help automate the required security gates, vulnerability scans, and policy checks, making it easier to keep everything compliant. 

In short, RAISE 2.0 is all about streamlining security, speeding up approvals, and helping you deliver secure code faster.

1 | Introduction to the DoD Software Factory

Taking our top spot at number one, we’ve got the DoD software factory—the true VIP of the DevSecOps world! We’re talking about a full-blown, high-security software pipeline that cranks out code for the defense sector faster than a fighter jet screaming across the sky. In Introduction to the DoD Software Factory we break down what a DoD software factory really is—think of it as a template to build a DoD-approved DevSecOps pipeline. 

The blog post details how concepts like shifting security left, using microservices, and leveraging automation all come together to meet the DoD’s sky-high security standards. Whether you choose an existing DoD software factory (like Platform One) or build your own, the goal is to streamline development without sacrificing security. 

Tools like Anchore Enforce Federal Edition can help with everything from SBOM generation to continuous vulnerability scanning, so you can meet compliance requirements and keep your mission-critical apps protected at every stage.

Wrap-Up

That wraps up the top ten Anchore blog posts of 2024! We covered it all: next-level software supply chain best practices, military-grade compliance tactics, and all the open-source goodies that keep your DevSecOps pipeline firing on all cylinders. 

The common thread throughout them all is the recognition that security and speed can work hand-in-hand. With SBOM-driven approaches, modern vulnerability management, and automated compliance checks, organizations can achieve the rapid, secure, and compliant software delivery required in the DevSecOps era. We hope these posts will serve as a guide and inspiration as you continue to refine your DevSecOps practice, embrace new technologies, and steer your organization toward a more secure and efficient future.

If you enjoyed our greatest hits album of 2024 but need more immediacy in your life, follow along in 2025 by subscribing to the Anchore Newsletter or following Anchore on your favorite social platform.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.


ModuleQ reduces vulnerability management time by 80% with Anchore Secure

ModuleQ, an AI-driven enterprise knowledge platform, knows only too well the stakes for a company providing software solutions in the highly regulated financial services sector. In this world where data breaches are cause for termination of a vendor relationship and evolving cyberthreats loom large, proactive vulnerability management is not just a best practice—it’s a necessity. 

ModuleQ required a vulnerability management platform that could automatically identify and remediate vulnerabilities, maintain airtight security, and meet stringent compliance requirements—all without slowing down their development velocity.


Learn the essential container security best practices to reduce the risk of software supply chain attacks in this white paper.


The Challenge: Scaling Security in a High-Stakes Environment

ModuleQ found itself drowning in a flood of newly released vulnerabilities—over 25,000 in 2023 alone. Operating in a heavily regulated industry meant any oversight could have severe repercussions. High-profile incidents like the Log4j exploit underscored the importance of supply chain security, yet the manual, resource-intensive nature of ModuleQ’s vulnerability management process made it hard to keep pace.

The mandate that no critical vulnerabilities reached production was a particularly high bar to meet with the existing manual review process. Each time engineers stepped away from their coding environment to check a separate security dashboard, they lost context, productivity, and confidence. The fear of accidentally letting something slip through the cracks was ever present.

The Solution: Anchore Secure for Automated, Integrated Vulnerability Management

ModuleQ chose Anchore Secure to simplify, automate, and fully integrate vulnerability management into their existing DevSecOps workflows. Instead of relying on manual security reviews, Anchore Secure injected security measures seamlessly into ModuleQ’s Azure DevOps pipelines, .NET, and C# environment. Every software build—staged nightly through a multi-step pipeline—was automatically scanned for vulnerabilities. Any critical issues triggered immediate notifications and halted promotions to production, ensuring that potential risks were addressed before they could ever reach customers.

Equally important, Anchore’s platform was built to operate in on-prem or air-gapped environments. This guaranteed that ModuleQ’s clients could maintain the highest security standards without the need for external connectivity. For an organization whose customers demand this level of diligence, Anchore’s design provided peace of mind and strengthened client relationships.

Results: Faster, More Secure Deployments

By adopting Anchore Secure, ModuleQ dramatically accelerated and enhanced its vulnerability management approach:

  • 80% Reduction in Vulnerability Management Time: Automated scanning, triage, and reporting freed the team from manual checks, letting them focus on building new features rather than chasing down low-priority issues.
  • 50% Less Time on Security Tasks During Deployments: Proactive detection of high-severity vulnerabilities streamlined deployment workflows, enabling ModuleQ to deliver software faster—without compromising security.
  • Unwavering Confidence in Compliance: With every new release automatically vetted for critical vulnerabilities, ModuleQ’s customers in the financial sector gained renewed trust. Anchore’s support for fully on-prem deployments allowed ModuleQ to meet stringent security demands consistently.

Looking Ahead

In an era defined by unrelenting cybersecurity threats, ModuleQ proved that speed and security need not be at odds. Anchore Secure provided a turnkey solution that integrated seamlessly into their workflow, saving time, strengthening compliance, and maintaining the agility to adapt to future challenges. By adopting an automated security backbone, ModuleQ has positioned itself as a trusted and reliable partner in the financial services landscape.

Looking for more details? Read the ModuleQ case study in full. If you’re ready to move forward see all of the features on Anchore Secure’s product page or reach out to our team to schedule a demo.

The Evolution of SBOMs in the DevSecOps Lifecycle: Part 2

Welcome back to the second installment of our two-part series on “The Evolution of SBOMs in the DevSecOps Lifecycle”. In our first post, we explored how Software Bills of Materials (SBOMs) evolve over the first 4 stages of the DevSecOps pipeline—Plan, Source, Build & Test—and how each type of SBOM serves different purposes. Some of those use-cases include: shift left vulnerability detection, regulatory compliance automation, OSS license risk management and incident root cause analysis.

In this part, we’ll continue our exploration with the final 4 stages of the DevSecOps lifecycle, examining:

  • Analyzed SBOMs at the Release (Registry) stage
  • Deployed SBOMs during the Deployment phase
  • Runtime SBOMs in Production (Operate & Monitor stages)

As applications migrate down the pipeline, design decisions made at the beginning begin to ossify, becoming more difficult to change; this influences the challenges that are experienced and the role that SBOMs play in overcoming these novel problems. Some of the new challenges that come up include pipeline leaks, vulnerabilities in third-party packages, and runtime injection, all of which introduce significant risk. Understanding how SBOMs evolve across these stages helps organizations mitigate these risks effectively.

Whether you’re aiming to enhance your security posture, streamline compliance reporting, or improve incident response times, this comprehensive guide will equip you with the knowledge to leverage SBOMs effectively from Release to Production. Additionally, we’ll offer pro tips to help you maximize the benefits of SBOMs in your DevSecOps practices.

So, let’s continue our journey through the DevSecOps pipeline and discover how SBOMs can transform the latter stages of your software development lifecycle.


Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.


Release (or Registry) => Analyzed SBOM

After development is completed and the new release of the software is declared a “golden” image, the build system will push the release artifact to a registry for storage until it is deployed. At this stage, an SBOM that is generated based on these container images, binaries, etc. is named an “Analyzed SBOM” by CISA. The name is a little confusing since all SBOMs should be analyzed regardless of the stage at which they are generated. A more appropriate name might be a Release SBOM, but we’ll stick with CISA’s name for now.

At first glance, it would seem that Analyzed SBOMs and the final Build SBOMs should be identical since they describe the same software, but that doesn’t hold up in practice. DevSecOps pipelines aren’t hermetically sealed systems; they can be “leaky”. You might be surprised what finds its way into this storage repository and eventually gets deployed, bypassing your carefully constructed build and test setup.

On top of that, the registry holds more than just first-party applications that are built in-house. It also stores 3rd-party container images like operating systems and any other self-contained applications used by the organization.

The additional metadata that is collected for an Analyzed SBOM includes:

  • Release images that bypass the happy path build and test pipeline
  • 3rd-party container images, binaries and applications

Pros and Cons

Pros:

  • Comprehensive Artifact Inventory: A more holistic view of all software—both 1st- and 3rd-party—that is utilized in production.
  • Enhanced Security and Compliance Posture: Catches vulnerabilities and non-compliant images for all software that will be deployed to production. This reduces the risk of security incidents and compliance violations.
  • Third-Party Supply Chain Risk Management: Provides insights into the vulnerabilities and compliance status of third-party components.
  • Ease of implementation: This stage is typically the lowest lift for implementation given that most SBOM generators can be deployed standalone and pointed at the registry to scan all images.

Cons:

  • High Risk for Release Delays: Scanning images at this stage is akin to traditional waterfall-style development patterns. Most design decisions are baked-in and changes typically incur a steep penalty.
  • Difficult to Push Feedback into Existing Workflows: The registry sits outside of typical developer workflows and creating a feedback loop that seamlessly reports issues without changing the developer’s process is a non-trivial amount of work.
  • Complexity in Management: Managing SBOMs for both internally developed and third-party components adds complexity to the software supply chain.

Use-Cases

  • Software Supply Chain Security: Organizations can detect vulnerabilities in both their internally developed software and external software to prevent supply chain injections from leading to a security incident.
  • Compliance Reporting: Reporting on both 1st- and 3rd-party software is necessary for industries with strict regulatory requirements.
  • Detection of Leaky Pipelines: Identifies release images that have bypassed the standard build and test pipeline, allowing teams to take corrective action.
  • Third-Party Risk Analysis: Assesses the security and compliance of third-party container images, binaries, and applications before they are deployed.

Example: An organization subject to strict compliance standards like FedRAMP or cATO uses Analyzed SBOMs to verify that all artifacts in their registry, including third-party applications, comply with security policies and licensing requirements. This practice not only enhances their security posture but also streamlines the audit process.

Pro Tip

A registry is an easy and non-invasive way to test and evaluate potential SBOM generators. It won’t give you a full picture of what can be found in your DevSecOps pipeline, but it will at least give you an initial idea of a generator’s efficacy and help you decide whether to go through the effort of integrating it into your build pipeline, where it will produce deeper insights.
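
As a rough illustration of this pro tip, the sketch below points Syft at a few registry images and stores one Analyzed SBOM per image. The image names and output layout are assumptions; all it requires is that Syft is installed and able to pull from your registry.

    # Minimal sketch: generate Analyzed SBOMs for images already in a registry
    # using Syft (https://github.com/anchore/syft). Image references are
    # hypothetical; syft must be installed and authenticated to the registry.
    import subprocess
    from pathlib import Path

    IMAGES = [
        "registry.example.com/team-a/api:1.4.2",
        "registry.example.com/vendor/nginx:1.25",
    ]
    OUT_DIR = Path("analyzed-sboms")
    OUT_DIR.mkdir(exist_ok=True)

    for image in IMAGES:
        sbom = subprocess.run(
            ["syft", image, "-o", "spdx-json"],
            capture_output=True, text=True, check=True,
        ).stdout
        # One SBOM file per image tag, e.g. analyzed-sboms/api_1.4.2.spdx.json
        name = image.rsplit("/", 1)[-1].replace(":", "_")
        (OUT_DIR / f"{name}.spdx.json").write_text(sbom)
        print(f"wrote SBOM for {image}")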

Deploy => Deployed SBOM

As your container orchestrator is deploying an image from your registry into production, it will also orchestrate any production dependencies such as sidecar containers. At this stage, an SBOM that is generated is named a “Deployed SBOM” by CISA.

The ideal scenario is that your operations team is storing all of these images in the same central registry as your engineering team but—as we’ve noted before—reality diverges from the ideal.

The additional metadata that is collected for a Deployed SBOM includes:

  • Any additional sidecar containers or production dependencies that are injected or modified through a release controller.

Pros and Cons

Pros:

  • Enhanced Security Posture: The final gate to prevent vulnerabilities from being deployed into production. This reduces the risk of security incidents and compliance violations.
  • Leaky Pipeline Detection: Another location to increase visibility into the happy path of the DevSecOps pipeline being circumvented.
  • Compliance Enforcement: Some compliance standards require a deployment breaking enforcement gate before any software is deployed to production. A container orchestrator release controller is the ideal location to implement this.

Cons:

Essentially the same issues that come up during the release phase.

  • High Risk for Release Delays: Scanning images at this stage is even later than traditional waterfall-style development patterns and will incur a steep penalty if an issue is uncovered.
  • Difficult to Push Feedback into Existing Workflows: A deployment release controller sits outside of typical developer workflows and creating a feedback loop that seamlessly reports issues without changing the developer’s process is a non-trivial amount of work.

Use-Cases

  • Strict Software Supply Chain Security: Implementing a pipeline breaking gating mechanism is typically reserved for only the most critical security vulnerabilities (think: an actively exploitable known vulnerability).
  • High-Stakes Compliance Enforcement: Industries like defense, financial services and critical infrastructure will require vendors to implement a deployment gate for specific risk scenarios beyond actively exploitable vulnerabilities.
  • Compliance Audit Automation: Most regulatory compliance frameworks mandate audit artifacts at deploy time; these documents can be automatically generated and stored for future audits.

Example: A Deployed SBOM can be used as the source of truth for generating a report that demonstrates that no HIGH or CRITICAL vulnerabilities were deployed to production during an audit period.

Pro Tip

Combine a Deployed SBOM with a container vulnerability scanner that cross-checks all vulnerabilities against CISA’s Known Exploited Vulnerabilities (KEV) catalog. In the scenario where a matching KEV is found for a software component, you can configure your vulnerability scanner to return a FAIL response to your release controller to abort the deployment.

This strategy strikes an ideal balance between avoiding delays to software delivery and blocking the vulnerabilities that carry an extremely high probability of causing a security incident.
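
Here is a minimal sketch of that gate, assuming the vulnerability scan results are in Grype’s JSON output format. The KEV feed URL and field names reflect CISA’s published catalog at the time of writing and should be verified before you depend on them.

    # Minimal sketch: fail a deployment if any vulnerability found against the
    # Deployed SBOM appears in CISA's KEV catalog. Scan results are assumed to
    # be Grype JSON; verify the KEV feed URL and schema for your environment.
    import json
    import sys
    import urllib.request

    KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

    def load_kev_ids() -> set:
        with urllib.request.urlopen(KEV_URL) as resp:
            catalog = json.load(resp)
        return {entry["cveID"] for entry in catalog["vulnerabilities"]}

    def cves_in_scan(scan_path: str) -> set:
        with open(scan_path) as f:
            report = json.load(f)
        return {m["vulnerability"]["id"] for m in report.get("matches", [])}

    exploited = cves_in_scan("deployed-sbom-scan.json") & load_kev_ids()
    if exploited:
        print(f"FAIL: actively exploited vulnerabilities found: {sorted(exploited)}")
        sys.exit(1)  # non-zero exit signals the release controller to abort
    print("PASS: no KEV matches; deployment may proceed")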

Operate & Monitor (or Production) => Runtime SBOM

After your container orchestrator has deployed an application into your production environment, it is live and serving customer traffic. SBOMs generated at this stage don’t have a name specified by CISA; they are sometimes referred to as “Runtime SBOMs”. SBOMs are still a new-ish standard and will continue to evolve.

The additional metadata that is collected for a Runtime SBOM includes:

  • Modifications (i.e., intentional hotfixes or malicious code injection) made to running applications in your production environment.

Pros and Cons

Pros:

  • Continuous Security Monitoring: Identifies new vulnerabilities that emerge after deployment.
  • Active Runtime Inventory: Provides a canonical view into an organization’s active software landscape.
  • Low Lift Implementation: Deploying SBOM generation into a production environment typically only requires deploying the scanner as another container and giving it permission to access the rest of the production environment.

Cons:

  • No Shift Left Security: Runtime scanning is, by definition, excluded from a shift-left security posture.
  • Potential for Release Rollbacks: Scanning images at this stage is the worst possible place for proactive remediation. Discovering a vulnerability could potentially cause a security incident and force a release rollback.

Use-Cases

  • Rapid Incident Management: When new critical vulnerabilities are discovered and announced by the community, the first priority for an organization is to determine exposure. An accurate production inventory, down to the component-level, is needed to answer this critical question.
  • Threat Detection: Continuously monitoring for anomalous activity linked to specific components. Sealing your system off completely from advanced persistent threats (APTs) is an unfeasible goal. Instead, quick detection and rapid intervention is the scalable solution to limit the impact of these adversaries.
  • Patch Management: As new versions of 3rd-party components and applications are released, an inventory of impacted production assets provides helpful insights that can direct the prioritization of engineering efforts.

Example: When the XZ Utils vulnerability was announced in the spring of 2024, organizations that already automatically generated a Runtime SBOM inventory ran a simple search query against their SBOM database and knew within minutes—or even seconds—whether they were impacted.
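
A simplified sketch of that kind of query is shown below. It walks a directory of runtime SBOMs (Syft JSON format assumed) looking for the affected package and versions; the paths, package name, and version list are illustrative.

    # Minimal sketch: answer "are we affected?" by searching stored runtime
    # SBOMs (Syft JSON format) for a specific package and version set.
    import json
    from pathlib import Path

    SBOM_DIR = Path("runtime-sboms")       # hypothetical SBOM inventory location
    PACKAGE = "xz-utils"
    BAD_VERSIONS = {"5.6.0", "5.6.1"}      # versions named in the XZ Utils advisory

    affected = []
    for sbom_file in SBOM_DIR.glob("*.json"):
        sbom = json.loads(sbom_file.read_text())
        for artifact in sbom.get("artifacts", []):   # Syft JSON package list
            if artifact.get("name") == PACKAGE and artifact.get("version") in BAD_VERSIONS:
                affected.append((sbom_file.stem, artifact["version"]))

    if affected:
        for workload, version in affected:
            print(f"AFFECTED: {workload} is running {PACKAGE} {version}")
    else:
        print(f"No deployed workloads contain {PACKAGE} {sorted(BAD_VERSIONS)}")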

Pro Tip

If you want to learn about how Google was able to go from an all-hands on deck security incident when the XZ Utils vulnerability was announced to an all clear under 10 minutes, watch our webinar with the lead of Google’s SBOM initiative.



Wrap-Up

As the SBOM standard has evolved, the subject has grown considerably. What started as a structured way to store information about open source licenses has expanded to include numerous use-cases. A clear understanding of the evolution of SBOMs throughout the DevSecOps lifecycle is essential for organizations aiming to solve problems ranging from software supply chain security to regulatory compliance to legal risk management.

SBOMs are a powerful tool in the arsenal of modern software development. By recognizing their importance and integrating them thoughtfully across the DevSecOps lifecycle, you position your organization at the forefront of secure, efficient, and compliant software delivery.

Ready to secure your software supply chain and automate compliance tasks with SBOMs? Anchore is here to help. We offer SBOM management, vulnerability scanning and compliance automation enforcement solutions. If you still need some more information before looking at solutions, check out our webinar below on scaling a secure software supply chain with Kubernetes. 👇👇👇

Learn how Spectro Cloud secured their Kubernetes-based software supply chain and the pivotal role SBOMs played.

The Evolution of SBOMs in the DevSecOps Lifecycle: From Planning to Production

The software industry has wholeheartedly adopted the practice of building new software on the shoulders of the giants that came before them. To accomplish this, developers assemble a foundation of pre-built, 3rd-party components and then wrap custom 1st-party code around this structure to create novel applications. It is an extraordinarily innovative and productive practice, but it also introduces challenges ranging from security vulnerabilities to compliance headaches to legal risk nightmares. Software bills of materials (SBOMs) have emerged to provide solutions for these wide-ranging problems.

An SBOM provides a detailed inventory of all the components that make up an application at a point in time. However, it’s important to recognize that not all SBOMs are the same—even for the same piece of software! SBOMs evolve throughout the DevSecOps lifecycle, just as an application evolves from source code to a container image to a running application. The Cybersecurity and Infrastructure Security Agency (CISA) has codified this idea by differentiating between all of the different types of SBOMs. Each type serves different purposes and captures information about an application through its evolutionary process.

In this 2-part blog series, we’ll deep dive into each stage of the DevSecOps process and the associated SBOM, highlighting the differences, the benefits and disadvantages, and the use-cases that each type of SBOM supports. Whether you’re just beginning your SBOM journey or looking to deepen your understanding of how SBOMs can be integrated into your DevSecOps practices, this comprehensive guide will provide valuable insights and advice from industry experts.


Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

Types of SBOMs and the DevSecOps Pipeline

Over the past decade, the US government got serious about software supply chain security and began advocating for SBOMs as the standardized approach to the problem. As part of this initiative, CISA created the Types of Software Bill of Material (SBOM) Documents white paper that codified the definitions of the different types of SBOMs and mapped them to each stage of the DevSecOps lifecycle. We will discuss each in turn, but before we do, let’s anchor on some terminology to prevent confusion or misunderstanding.

Below is a diagram that lays out each stage of the DevSecOps lifecycle as well as the naming convention we will use going forward.

With that out of the way, let’s get started!

Plan => Design SBOM

As the DevSecOps paradigm has spread across the software industry, a notable best practice known as the security architecture review has become integral to the development process. This practice embodies the DevSecOps goal of integrating security into every phase of the software lifecycle, aligning perfectly with the concept of Shift-Left Security—addressing security considerations as early as possible.

At this stage, the SBOM documents the planned components of the application. CISA refers to SBOMs generated during this phase as Design SBOMs. These SBOMs are preliminary and outline the intended components and dependencies before any code is written.

The metadata that is collected for a Design SBOM includes:

  • Component Inventory: Identifying potential OSS libraries and frameworks to be used as well as the dependency relationship between the components.
  • Licensing Information: Understanding the licenses associated with selected components to ensure compliance.
  • Risk Assessment Data: Evaluating known vulnerabilities and security risks associated with each component.

This might sound like a lot of extra work, but luckily, if you’re already performing DevSecOps-style planning that incorporates a security and legal review—as is best practice—you’re already surfacing all of this information. The only thing that is different is that this preliminary data is formatted and stored in a standardized data structure, namely an SBOM.

Pros and Cons

Pros:

  • Maximal Shift-Left Security: Vulnerabilities cannot be found any earlier in the software development process. Design time security decisions are the peak of a proactive security posture and preempt bad design decisions before they become ingrained into the codebase.
  • Cost Efficiency: Resolving security issues at this stage is generally less expensive and less disruptive than during later stages of development or—worst of all—after deployment.
  • Legal and Compliance Risk Mitigation: Ensures that all selected components meet necessary compliance standards, avoiding legal complications down the line.

Cons:

  • Upfront Investment: Gathering detailed information on potential components and maintaining an SBOM at this stage requires a non-trivial commitment of time and effort.
  • Incomplete Information: Projects are not static; they will adapt as unplanned challenges surface. A design SBOM likely won’t stay relevant for long.

Use-Cases

There are a number of use-cases that are enabled by a Design SBOM:

  • Security Policy Enforcement: Automatically checking proposed components against organizational security policies to prevent the inclusion of disallowed libraries or frameworks.
  • License Compliance Verification: Ensuring that all components comply with the project’s licensing requirements, avoiding potential legal issues.
  • Vendor and Third-Party Risk Management: Assessing the security posture of third-party components before they are integrated into the application.
  • Enhance Transparency and Collaboration: A well-documented SBOM provides a clear record of the software’s components and, more importantly, shows that the project aligns with the goals of all of the stakeholders (engineering, security, legal, etc). This builds trust and creates a collaborative environment that increases the chances that each individual stakeholder’s outcome will be achieved.

Example:

A financial services company operating within a strict regulatory environment uses SBOMs during planning to ensure that all components meet compliance standards like PCI DSS. By doing so, they prevent the incorporation of insecure components that won’t meet PCI compliance. This reduces the risk of the financial penalties associated with security breaches and regulatory non-compliance.

Pro Tip

If your organization is still early in the maturity of its SBOM initiative, then we generally recommend moving the integration of design time SBOMs to the back of the queue. As we mentioned at the beginning of this section, the information stored in a design SBOM is naturally surfaced during the DevSecOps process; as long as that information is being recorded and stored, much of the value of a design SBOM will be captured in the artifact. This level of SBOM integration is best saved for later maturity stages when your organization is ready to begin exploring deeper levels of insights that have a higher risk-to-reward ratio.

Alternatively, if your organization is having difficulty getting your teams to adopt a collaborative DevSecOps planning process, mandating an SBOM as a requirement can act as a forcing function to catalyze a cultural shift.

Source => Source SBOM

During the development stage, engineers implement the selected 3rd-party components into the codebase. CISA refers to SBOMs generated during this phase as Source SBOMs. The SBOMs generated here capture the actual implemented components and additional information that is specific to the developer who is doing the work.

The additional metadata that is collected for a Source SBOM includes:

  • Dependency Mapping: Documenting direct and transitive dependencies.
  • Identity Metadata: Adding contributor and commit information.
  • Developer Environment: Captures information about the development environment.

Unlike Design SBOMs, which are typically created manually, these SBOMs can be generated programmatically with a software composition analysis (SCA) tool—like Syft. They are usually packaged as command line interfaces (CLIs) since this is the preferred interface for developers.

If you’re looking for an SBOM generation tool (SCA embedded), we have a comprehensive list of options to make this decision easier.
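
For a rough idea of what programmatic generation looks like, the sketch below runs Syft against a local checkout and records some simple identity metadata alongside the resulting Source SBOM. The metadata convention shown is our own illustration, not part of any SBOM specification.

    # Minimal sketch: generate a Source SBOM from a local checkout with Syft.
    # Assumes syft is installed; "dir:." scans the filesystem path rather than
    # a container image. The sidecar metadata file is a made-up convention.
    import getpass
    import json
    import subprocess
    from datetime import datetime, timezone

    sbom = json.loads(subprocess.run(
        ["syft", "dir:.", "-o", "syft-json"],
        capture_output=True, text=True, check=True,
    ).stdout)

    record = {
        "generated_by": getpass.getuser(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "package_count": len(sbom.get("artifacts", [])),
    }
    with open("source-sbom.json", "w") as f:
        json.dump(sbom, f)
    with open("source-sbom.meta.json", "w") as f:
        json.dump(record, f, indent=2)

    print(f"Source SBOM written with {record['package_count']} packages")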

Pros and Cons

Pros:

  • Accurate and Timely Component Inventory: Reflects the actual components used in the codebase and tracks changes as the codebase is actively being developed.
  • Shift-Left Vulnerability Detection: Identifies vulnerabilities as components are integrated but requires commit level automation and feedback mechanisms to be effective.
  • Facilitates Collaboration and Visibility: Keeps all stakeholders informed about divergence from the original plan and provokes conversations as needed. This is also dependent on automation to record changes during development and the notification systems to broadcast the updates.

Example: A developer adds a new logging library to the project like an outdated version of Log4j. The SBOM, paired with a vulnerability scanner, immediately flags the Log4Shell vulnerability, prompting the engineer to update to a patched version.

Cons:

  • Noise from Developer Toolchains: A lot of times developer environments are bespoke. This creates noise for security teams by recording development dependencies.
  • Potential Overhead: Continuous updates to the SBOM can be resource-intensive when done manually; the only resource efficient method is by using an SBOM generation tool that automates the process.
  • Possibility of Missing Early Risks: Issues not identified during planning may surface here, requiring code changes.

Use-Cases

  • Faster Root Cause Analysis: During service incident retrospectives, questions arise about where, when, and by whom a specific component was introduced into an application. Source SBOMs are the programmatic record that can provide answers and decrease manual root cause analysis.
  • Real-Time Security Alerts: Immediate notification of vulnerabilities upon adding new components, decreasing time to remediation and keeping security teams informed.
  • Automated Compliance Checks: Ensuring added components comply with security or license policies to manage compliance risk.
  • Effortless Collaboration: Stakeholders can subscribe to a live feed of changes and immediately know when implementation diverges from the plan.

Pro Tip

Some SBOM generators allow developers to specify development dependencies that should be ignored, similar to a .gitignore file. This can help cut down on the noise created by unique developer setups.

Build & Test => Build SBOM

When a developer pushes a commit to the CI/CD build system an automated process initiates that converts the application source code into an artifact that can then be deployed. CISA refers to SBOMs generated during this phase as Build SBOMs. These SBOMs capture both source code dependencies and build tooling dependencies.

The additional metadata that is collected includes:

  • Build Dependencies: Build tooling such as the language compilers, testing frameworks, package managers, etc.
  • Binary Analysis Data: Metadata for compiled binaries that don’t utilize traditional container formats.
  • Configuration Parameters: Details on build configuration files that might impact security or compliance.

Pros and Cons

Pros:

  • Build Infrastructure Analysis: Captures build-specific components which may have their own vulnerability or compliance issues.
  • Reuses Existing Automation Tooling: Enables programmatic security and compliance scanning as well as policy enforcement without introducing any additional build tooling.
  • Integrates with Developer Workflows: Engineers receive security, compliance, and related feedback directly in their existing workflow, without the need to reference a new tool.
  • Reproducibility: Facilitates reproducing builds for debugging and auditing.

Cons:

  • SBOM Sprawl: Build processes run frequently; if each run generates an SBOM, you will quickly accumulate a glut of files that must be managed.
  • Delayed Detection: Vulnerabilities or non-compliance issues found at this stage may require rework.

Use-Cases

  • SBOM Drift Detection: By comparing SBOMs from two or more stages, unexpected dependency injection can be detected (see the sketch after this list). This might take the form of a benign but leaky build pipeline that requires manual workarounds, or a malicious actor attempting to covertly introduce malware. Either way, this provides actionable and valuable information.
  • Policy Enforcement: Enables the creation of build breaking gates to enforce security or compliance. For high-stakes operating environments like defense, financial services or critical infrastructure, automating security and compliance at the expense of some developer friction is a net-positive strategy.
  • Automated Compliance Artifacts: Compliance requires proof in the form of reports and artifacts. Re-utilizing existing build tooling automation to automate this task significantly reduces the manual work required by security teams to meet compliance requirements.
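
A minimal sketch of the drift check mentioned above, assuming SPDX JSON documents from the source and build stages (file names are placeholders): list the packages in each SBOM and diff them.

jq -r '.packages[] | "\(.name)@\(.versionInfo)"' source.spdx.json | sort > source-pkgs.txt
jq -r '.packages[] | "\(.name)@\(.versionInfo)"' build.spdx.json | sort > build-pkgs.txt
diff source-pkgs.txt build-pkgs.txt   # lines prefixed with ">" appear only in the Build SBOM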

Example: A security scan during testing uses the Build SBOM to identify a critical vulnerability and alerts the responsible engineer. The remediation process is initiated and a patch is applied before deployment.

Pro Tip

If your organization is just beginning its SBOM journey, this is the recommended phase of the DevSecOps lifecycle to implement SBOMs first. The two primary cons of this phase are the easiest to mitigate. For SBOM sprawl, you can procure a turnkey SBOM management solution like Anchore SBOM.

As for the delayed feedback created by waiting until the build phase: if your team is following DevOps best practices and breaking features into smaller components that fit into two-week sprints, then this tight scoping will limit the impact of any significant vulnerability or non-compliance discovered.

Intermission

So far we’ve covered the first half of the DevSecOps lifecycle. Next week we will publish the second part of this blog series where we’ll cover the remainder of the pipeline. Watch our socials to be sure you get notified when part 2 is published.

If you’re looking for some additional reading in the meantime, check out our container security white paper below.

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Choosing the Right SBOM Generator: A Framework for Success

Choosing the right SBOM (software bill of materials) generator is trickier than it looks at first glance. SBOMs are the foundation for a number of different uses ranging from software supply chain security to continuous regulatory compliance. Given this cornerstone role, the SBOM generator that you choose will either pave the way for achieving your organization’s goals or become a roadblock that delays critical initiatives.

But how do you navigate the crowded market of SBOM generation tools to find the one that aligns with your organization’s unique needs? It’s not merely about selecting a tool with the most features or the nicest CLI. It’s about identifying a solution that maps directly to your desired outcomes and use-cases, whether that’s rapid incident response, proactive vulnerability management, or compliance reporting.

We at Anchore have been enabling organizations to achieve their SBOM-related outcomes with the least amount of frustration and setbacks. We’ve compiled what we’ve learned about choosing the right SBOM generation tool into a framework to help the wider community make decisions that set them up for success.

Below is a quick TL;DR of the high-level evaluation criteria that we cover in this blog post:

  • Understanding Your Use-Cases: Aligning the tool with your specific goals.
  • Ecosystem Compatibility: Ensuring support for your programming languages, operating systems, and build artifacts.
  • Data Accuracy: Evaluating the tool’s ability to provide comprehensive and precise SBOMs.
  • DevSecOps Integration: Assessing how well the tool fits into your existing DevSecOps tooling.
  • Proprietary vs. Open Source: Weighing the long-term implications of your choice.

By focusing on these key areas, you’ll be better equipped to select an SBOM generator that not only meets your current requirements but also positions your organization for future success.


Learn about the role that SBOMs play in the security, including open source software (OSS) security, of your organization in this white paper.

Know your use-cases

When choosing from the array of SBOM generation tools on the market, it is important to frame your decision around the outcome(s) that you are trying to achieve. If your goal is to improve response time/mean time to remediation when the next Log4j-style incident occurs—and be sure that there will be a next time—an SBOM tool that excels at correctly identifying open source licenses in a code base won’t be the best solution for your use-case (even if you prefer its CLI ;-D).

What to Do:

  • Identify and prioritize the outcomes that your organization is attempting to achieve
  • Map the outcomes to the relevant SBOM use-cases
  • Review each SBOM generation tool to determine whether they are best suited to your use-cases

It can be tempting to prioritize an SBOM generator that best suits our own preferences and workflows; we are the ones who will be using the tool regularly—shouldn’t we prioritize what makes our lives easier? But if we prioritize our needs above the goal of the initiative, we might end up in a position where our choice of tools impedes our ability to realize the desired outcome. Using the correct framing, in this case by focusing on the use-cases, will keep us focused on delivering the best possible outcome.

SBOMs can be utilized for numerous purposes: security incident response, open source license compliance, proactive vulnerability management, compliance reporting, and software supply chain risk management. We won’t address every use-case/outcome in this blog post; a more comprehensive treatment of the potential SBOM use-cases can be found on our website.

Example SBOM Use-Cases:

  • Security incident response: an inventory of all applications and their dependencies that can be queried quickly and easily to identify whether a newly announced zero-day impacts the organization.
  • Proactive vulnerability management: all software and dependencies are scanned for vulnerabilities as part of the DevSecOps lifecycle and remediated based on organizational priority.
  • Regulatory compliance reporting: compliance artifacts and reports are automatically generated by the DevSecOps pipeline to enable continuous compliance and prevent manual compliance work.
  • Software supply chain risk management: an inventory of software components with identified vulnerabilities used to inform organizational decision making when deciding between remediating risk versus building new features.
  • Open source license compliance: an inventory of all software components and the associated OSS license to measure potential legal exposure.

Pro tip: While you will inevitably leave many SBOM use-cases out of scope for your current project, keeping secondary use-cases in the back of your mind while making a decision on the right SBOM tool will set you up for success when those secondary use-cases eventually become a priority in the future.

Does the SBOM generator support your organization’s ecosystem of programming languages, etc?

SBOM generators aren’t just tools to ingest data and re-format it into a standardized format. They are typically paired with a software composition analysis (SCA) tool that scans an application/software artifact for metadata that will populate the final SBOM.

Support for the complete array of programming languages, build artifacts and operating system ecosystems is essentially an impossible task. This means that support varies significantly depending on the SBOM generator that you select. An SBOM generator’s ability to help you reach your organizational goals is directly related to its support for your organization’s software tooling preferences. This will likely be one of the most important qualifications when choosing between different options and will rule out many that don’t meet the needs of your organization.

Considerations:

  • Programming Languages: Does the tool support all languages used by your team?
  • Operating Systems: Can it scan the different OS environments your applications run on top of?
  • Build Artifacts: Does the tool scan containers? Binaries? Source code repositories? 
  • Frameworks and Libraries: Does it recognize the frameworks and libraries your applications depend on?

Data accuracy

This is one of the most important criteria when evaluating different SBOM tools. An SBOM generator may claim support for a particular programming language, but after testing the scanner you may discover that it returns an SBOM with only direct dependencies—honestly not much better than the package.json or go.mod file that your build process spits out.

Two different tools might both generate a valid SPDX SBOM document when run against the same source artifact, but the content of those documents can vary greatly depending on what each tool can inspect, understand, and translate. Being able to fully scan an application for both direct and transitive dependencies, and to navigate non-idiomatic patterns in how software can be structured, ends up being the true differentiator between the field of SBOM generation contenders.

Imagine using two SBOM tools on a Debian package. One tool recognizes Debian packages and includes detailed information about them in the SBOM. The other can’t fully parse the Debian .deb format and omits critical information. Both produce an SBOM, but only one provides the data you need to power use-case based outcomes like security incident response or proactive vulnerability management.

Let’s make this example more concrete by simulating this difference with Syft, Anchore’s open source SBOM generation tool:

$ syft -q -o spdx-json nginx:latest > nginx_a.spdx.json
$ grype -q nginx_a.spdx.json | grep Critical
libaom3             3.6.0-1+deb12u1          (won't fix)       deb   CVE-2023-6879     Critical    
libssl3             3.0.14-1~deb12u2         (won't fix)       deb   CVE-2024-5535     Critical    
openssl             3.0.14-1~deb12u2         (won't fix)       deb   CVE-2024-5535     Critical    
zlib1g              1:1.2.13.dfsg-1          (won't fix)       deb   CVE-2023-45853    Critical

In this example, we first generate an SBOM using Syft then run it through Grype—our vulnerability scanning tool. Syft + Grype uncover 4 critical vulnerabilities.

Now let’s try the same thing but “simulate” an SBOM generator that can’t fully parse the structure of the software artifact in question:

$ syft -q -o spdx-json --select-catalogers "-dpkg-db-cataloger,-binary-classifier-cataloger" nginx:latest > nginx_b.spdx.json 
$ grype -q nginx_b.spdx.json | grep Critical
$

In this case, the scan returns none of the critical vulnerabilities found with the first tool.

This highlights the importance of careful evaluation of the SBOM generator that you decide on. It could mean the difference between effective vulnerability risk management and a security incident.

Can the SBOM tool integrate into your DevSecOps pipeline?

If the SBOM generator is packaged as a self-contained binary with a command line interface (CLI) then it should tick this box. CI/CD build tools are most amenable to this deployment model. If the SBOM generation tool in question isn’t a CLI then it should at least run as a server with an API that can be called as part of the build process.

Integrating with an organization’s DevSecOps pipeline is key to enable a scalable SBOM generation process. By implementing SBOM creation directly into the existing build tooling, organizations can leverage existing automation tools to ensure consistency and efficiency which are necessary for achieving the desired outcomes.
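
As a sketch of what that integration can look like with CLI tools, the step below generates an SBOM for a freshly built image and fails the build if any critical vulnerability is found. The image name is a placeholder; this example uses the open source Syft and Grype, but any generator/scanner pair that signals failure through its exit code will work.

# Generate the SBOM for the image produced by this pipeline run
syft registry.example.com/team/app:latest -o spdx-json > image.spdx.json
# Scan the SBOM; a non-zero exit code breaks the build on critical findings
grype sbom:./image.spdx.json --fail-on critical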

Proprietary vs. open source SBOM generator?

Using an open source SBOM tool is considered an industry best practice because it guards against the risks associated with vendor lock-in. As a bonus, the ecosystem for open source SBOM generation tooling is very healthy. OSS will always have an advantage over proprietary tools in ecosystem coverage and data quality because it gets into the hands of more users, creating a feedback loop that closes gaps in coverage and quality.

Finally, even if your organization decides to utilize a software supply chain security product that has its own proprietary SBOM generator, it is still better to create your SBOMs with an open source SBOM generator, export to a standardized format (e.g., SPDX or CycloneDX) then have your software supply chain security platform ingest these non-proprietary data structures. All platforms will be able to ingest SBOMs from one or both of these standards-based formats.

Wrap-Up

In a landscape where the next security/compliance/legal challenge is always just around the corner, equipping your team with the right SBOM generator empowers you to act swiftly and confidently. It’s an investment not just in a tool, but in the resilience and security of your entire software supply chain. By making a thoughtful, informed choice now, you’re laying the groundwork for a more secure and efficient future.

Anchore on AWS Marketplace and joins ISV Accelerate

We are excited to announce two significant milestones in our partnership with Amazon Web Services (AWS) today:  

  • Anchore Enterprise can now be purchased through the AWS Marketplace, and
  • Anchore has joined the APN’s (Amazon Partner Network) ISV Accelerate Program

Organizations like NVIDIA, Cisco Umbrella, and Infoblox validate our commitment to delivering trusted solutions for SBOM management, secure software supply chains, and automated compliance enforcement. These organizations can now benefit from a stronger partnership between AWS and Anchore.

Anchore’s best-in-breed container security solution was chosen by Cisco Umbrella because it integrated seamlessly into their AWS infrastructure and accelerated meeting all six FedRAMP requirements. They deployed Anchore into an environment that had to meet a number of high-security and compliance standards; chief among those was STIG compliance for the Amazon EC2 nodes that backed the Amazon EKS deployment.

In addition, Anchore Enterprise supports high-security requirements such as IL4/IL6, FIPS, SSDF attestation and EO 14028 compliance.

Contact Anchore’s sales team today for a pricing quote or demo that suits your unique needs.

Anchore Enterprise is now available on AWS Marketplace

The AWS Marketplace offers a convenient and efficient way for AWS customers to procure Anchore. It simplifies the procurement process, provides greater control and governance, and fosters innovation by offering a rich selection of tools and services that seamlessly integrate with their existing AWS infrastructure. 

Anchore Enterprise on AWS Marketplace benefits DevSecOps teams by enabling:

  • Self-procurement via the AWS console
  • Faster procurement with applicable legal terms provided and standardized security review
  • Easier budget management with a single consolidated AWS bill for all infrastructure spend
  • Spend on Anchore Enterprise partially counts towards EDP (Enterprise Discount Program) committed spend

By strengthening our collaboration with AWS, customers can now feel at ease that Anchore Enterprise integrates and operates seamlessly on AWS infrastructure. Joining the ISV Accelerate Program allows us to work closely with AWS account teams to ensure seamless support and exceptional service for our joint clients. 

Purchase Anchore Enterprise on the AWS Marketplace or contact our sales team for a pricing quote that meets your organization’s needs.

Automate STIG Compliance with MITRE SAF: the Fastest Path to ATO

Trying to get your head around STIG (Security Technical Implementation Guide) compliance? Anchore is here to help. With the help of the MITRE Security Automation Framework (SAF), we’ll walk you through the quickest path to STIG compliance and, ultimately, the coveted Authority to Operate (ATO).

The goal for any company that aims to provide software services to the Department of Defense (DoD) is an ATO. Without this stamp of approval your software will never get into the hands of the warfighters that need it most. STIG compliance is a necessary needle that must be threaded on the path to ATO. Luckily, MITRE has developed and open-sourced SAF to smooth the often complex and time-consuming STIG compliance process.

We’ll get you up to speed on MITRE SAF and how it helps you achieve STIG compliance in this blog post, but before we jump straight into the content, be sure to bookmark our webinar with Aaron Lippold, Chief Architect of the MITRE Security Automation Framework (SAF). Josh Bressers, VP of Security at Anchore, and Lippold provide a behind-the-scenes look at SAF and how it dramatically reduces the friction of the STIG compliance process.

What is the MITRE Security Automation Framework (SAF)?

The MITRE SAF is both a high-level cybersecurity framework and an umbrella that encompasses a set of security/compliance tools. It is designed to simplify STIG compliance by translating DISA (Defense Information Systems Agency) SRG (Security Requirements Guide) guidance into actionable steps. 

By following the Security Automation Framework, organizations can streamline and automate the hardened configuration of their DevSecOps pipeline to achieve an ATO (Authority to Operate).

The SAF offers four primary benefits:

  1. Accelerate Path to ATO: By streamlining STIG compliance, SAF enables organizations to get their applications into the hands of DoD operators faster. This acceleration is crucial for meeting operational demands without compromising on security standards.
  2. Establish Security Requirements: SAF translates SRGs and STIGs into actionable steps tailored to an organization’s specific DevSecOps pipeline. This eliminates ambiguity and ensures security controls are implemented correctly.
  3. Build Security In: The framework provides tooling that can be directly embedded into the software development pipeline. By automating STIG configurations and policy checks, it ensures that security measures are consistently applied, leaving no room for false steps.
  4. Assess and Monitor Vulnerabilities: SAF offers visualization and analysis tools that assist organizations in making informed decisions about their current vulnerability inventory. It helps chart a path toward achieving STIG compliance and ultimately an ATO.

The overarching vision of the MITRE SAF is to “implement evolving security requirements while deploying apps at speed.” In essence, it allows organizations to have their cake and eat it too—gaining the benefits of accelerated software delivery without letting cybersecurity risks grow unchecked.

How does MITRE SAF work?

MITRE SAF is segmented into 5 capabilities that map to specific stages of the DevSecOps pipeline or STIG compliance process:

  1. Plan
  2. Harden
  3. Validate
  4. Normalize
  5. Visualize

Let’s break down each of these capabilities.

Plan

There are hundreds of existing STIGs for products ranging from Microsoft Windows to Cisco routers to MySQL databases. On the off chance that a product your team wants to use doesn’t have a pre-existing STIG, SAF’s Vulcan tool helps translate the applicable SRG into a tailored STIG that can then be used to achieve compliance.

Vulcan helps streamline the process of creating STIG-ready security guidance and the accompanying InSpec automated policy that confirms a specific instance of software is configured in a compliant manner.

Vulcan does this by modeling the STIG intent form and tailoring the applicable SRG controls into a finished STIG for an application. The finished STIG is then sent to DISA for peer review and formal publishing as a STIG. Vulcan allows the author to develop both human-readable instructions and machine-readable InSpec automated validation code at the same time.

Diagram of the process to map SRG controls to STIG guidelines via the MITRE SAF Vulcan tool; an automated conversion tool to speed up the STIG compliance process.

Harden

The hardening capability focuses on automating STIG compliance through the use of pre-built infrastructure configuration scripts. SAF hardening content allows organizations to:

  • Use their preferred configuration management tools: Chef Cookbooks, Ansible Playbooks, Terraform Modules, etc. are available as open source templates on MITRE’s GitHub page.
  • Share and collaborate: All hardening content is open source, encouraging community involvement and shared learning.
  • Cover the full development stack: Ensuring that every layer, from cloud infrastructure to applications, adheres to security standards.

Validate

The validation capability focuses on verifying the hardening meets the applicable STIG compliance standard. These validation checks are automated via the SAF CLI tool that incorporates the InSpec policies for a STIG. With SAF CLI, organizations can:

  • Automatically validate STIG compliance: Integrate SAF CLI directly into your CI/CD pipeline and invoke InSpec policy checks at every build, shifting security left by surfacing policy violations early (see the sketch after this list).
  • Promote community collaboration: Like the hardening content, validation scripts are open source and accessible by the community for collaborative efforts.
  • Span the entire development stack: Validation—similar to hardening—isn’t limited to a single layer; it encompasses cloud infrastructure, platforms, operating systems, databases, web servers, and applications.
  • Incorporate manual attestation: To achieve comprehensive coverage of policy requirements that automated tools might not fully address.
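
As a rough sketch of an automated validation step, the command below runs a MITRE-published InSpec profile against a container and writes machine-readable results. The profile URL and container name are illustrative; the exact SAF CLI invocation for your pipeline is documented on the SAF website.

# Run an InSpec STIG baseline against a running container and save JSON results
# (profile URL and container name are placeholders for your own targets)
inspec exec https://github.com/mitre/redhat-enterprise-linux-8-stig-baseline \
  -t docker://my-hardened-container \
  --reporter cli json:stig-results.json
# stig-results.json can then be normalized and visualized as described in the sections below.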

Normalize

Normalization addresses the challenge of interoperability between different security tools and data formats. SAF CLI performs double-duty by taking on the normalization function as well as validation. It is able to:

  • Translate data into OHDF: The OASIS Heimdall Data Format (OHDF) is an open standard that structures countless proprietary security metadata formats into a single universal format.
  • Leverage open source OHDF libraries: Organizations can use OHDF converters as libraries within their custom applications.
  • Automate data conversion: By incorporating SAF CLI into the DevSecOps pipeline, data is automatically standardized with each run.
  • Increase compliance efficiency: A single data format for all security data allows interoperability and facilitates efficient, automated STIG compliance.

Example: Below is an example of Burp Suite’s proprietary data format normalized to the OHDF JSON format:

Image of Burp Suite data format being mapped to MITRE SAF's OHDF to reduce manual data mapping and reduce time to STIG compliance.

Visualize

Visualization is critical for understanding security posture and making informed decisions. SAF provides an open source, self-hosted visualization tool named Heimdall. It ingests OHDF normalized security data and provides the data analysis tools to enable organizations to:

  • Aggregate security and compliance results: Compiling data into comprehensive rollups, charts, and timelines for a holistic view of security and compliance status.
  • Perform deep dives: Allowing teams to explore detailed vulnerability information to facilitate investigation and remediation, ultimately speeding up time to STIG compliance.
  • Guide risk reduction efforts: Visualized insights help prioritize security and compliance tasks, reducing risk in the most efficient manner.

How is SAF related to a DoD Software Factory?

A DoD Software Factory is the common term for a DevSecOps pipeline that meets the definition laid out in DoD Enterprise DevSecOps Reference Design. All software that ultimately achieves an ATO has to be built on a fully implemented DoD Software Factory. You can either build your own or use a pre-existing DoD Software Factory like the US Air Force’s Platform One or the US Navy’s Black Pearl.

As we saw earlier, MITRE SAF is a framework meant to help you achieve STIG compliance and is a portion of your journey towards an ATO. STIG compliance applies to both the software that you write as well as the DevSecOps platform that your software is built on. Building your own DoD Software Factory means committing to going through the ATO process and STIG compliance for the DevSecOps platform first then a second time for the end-user application.

Wrap-Up

The MITRE SAF is a huge leg up for modern, cloud-native DevSecOps software vendors that are currently navigating the labyrinth towards ATO. By providing actionable guidance, automation tooling, and a community-driven approach, SAF dramatically reduces the time to ATO. It bridges the gap between the speed of DevOps software delivery and secure, compliant applications ready for critical DoD missions with national security implications. 

Embracing SAF means more than just meeting regulatory requirements; it’s about building a resilient, efficient, and secure development pipeline that can adapt to evolving security demands. In an era where cybersecurity threats evolve just as rapidly as software, leveraging frameworks like MITRE SAF is not just an efficient path to compliance; it’s essential for sustained success.

Introducing Anchore Data Service and Anchore Enterprise 5.10

We are thrilled to announce the release of Anchore Enterprise 5.10, our tenth release of 2024. This update brings two major enhancements that will elevate your experience and bolster your security posture: the new Anchore Data Service (ADS) and expanded AnchoreCTL ecosystem coverage. 

With ADS, we’ve built a fast and reliable solution that reduces time spent by DevSecOps teams debugging intermittent network issues from flaky services that are vital to software supply chain security.

On top of that, we have buffed our software composition analysis (SCA) scanner’s ecosystem coverage (e.g., C++, Swift, Elixir, R, etc.) for all Anchore customers. To do this we embedded Syft, our popular, open source SCA/SBOM (software bill of materials) generator, directly into Anchore Enterprise.

It’s been a fall of big releases at Anchore and we’re excited to continue delivering value to our loyal customers. Read on to get all of the gory details >>

Announcing the Anchore Data Service

Before, customers ran the Anchore Feed Service within their environment to pull data feeds into their Anchore Enterprise deployment. To get an idea of what this looked like, see the architecture diagram of Anchore Enterprise prior to version 5.10:

Originally we did this to give customers more control over their environment. Unfortunately, this approach wasn’t without its issues. The data feeds are provided by the community, which means the upstream services were designed to be accessible and cost-efficient rather than highly available. As a result, they were unreliable and frequently suffered accessibility issues.

We only have to stretch our memory back to the spring to recall an example that made national headlines. The National Vulnerability Database (NVD) ran into funding issues, which impacted both the processing of new vulnerability records AND the availability of their API. This created significant friction for Anchore customers—not to mention the entire software industry.

Now, Anchore is running its own enterprise-grade service, named Anchore Data Service (ADS). It is a replacement for the former feed service. ADS aggregates all of the community data feeds, enriches the data (with proprietary threat data) and packages it for customers; all of this with a service availability guarantee expected of an enterprise service.

The new architecture with ADS as the intermediary is illustrated below:

As a bonus for our customers running air-gapped deployments of Anchore Enterprise, there is no more need to run a second deployment of Anchore Enterprise in a DMZ to pull down the data feeds. Instead a single file is pulled from ADS then transferred to a USB thumb drive. From there a single CLI command is run to update your air-gapped deployment of Anchore Enterprise.

Increased AnchoreCTL Ecosystem Coverage

We have increased the number of supported ecosystems (e.g., C++, Swift, Elixir, R, etc.) in Anchore Enterprise. This improves coverage and increases the likelihood that all of your organization’s applications can be scanned and protected by Anchore Enterprise.

More importantly, we have completely re-architected the process for how Anchore Enterprise supports new ecosystems. By integrating Syft—Anchore’s open source SBOM generation tool—directly into AnchoreCTL, Anchore’s customers will now get access to new ecosystem support as they are merged into Syft’s codebase.

Before, Syft and AnchoreCTL were somewhat separate, which caused AnchoreCTL’s support for new ecosystems to lag behind Syft’s. Now they are fully integrated. This enables all of Anchore’s enterprise and public sector customers to take full advantage of the open source community’s development velocity.

Full list of supported ecosystems

Below is a complete list of all supported ecosystems by both Syft and AnchoreCTL (as of Anchore Enterprise 5.10; see our docs for most current info):

  • Alpine (apk)
  • C (conan)
  • C++ (conan)
  • Dart (pubs)
  • Debian (dpkg)
  • Dotnet (deps.json)
  • Objective-C (cocoapods)
  • Elixir (mix)
  • Erlang (rebar3)
  • Go (go.mod, Go binaries)
  • Haskell (cabal, stack)
  • Java (jar, ear, war, par, sar, nar, native-image)
  • JavaScript (npm, yarn)
  • Jenkins Plugins (jpi, hpi)
  • Linux kernel archives (vmlinuz)
  • Linux kernel modules (ko)
  • Nix (outputs in /nix/store)
  • PHP (composer)
  • Python (wheel, egg, poetry, requirements.txt)
  • Red Hat (rpm)
  • Ruby (gem)
  • Rust (cargo.lock)
  • Swift (cocoapods, swift-package-manager)
  • WordPress plugins

After you update to Anchore Enterprise 5.10, the SBOM inventory will display all of the new ecosystems. Any SBOMs that have been generated for a particular ecosystem will show up at the top. The screenshot below gives you an idea of what this will look like:

Wrap-Up

Anchore Enterprise 5.10 marks a new chapter in providing reliable, enterprise-ready security tooling for modern software development. The introduction of the Anchore Data Service ensures that you have consistent and dependable access to critical vulnerability and exploit data, while the expanded ecosystem support means that no part of your tech stack is left unscrutinized for latent risk. Upgrade to the latest version and experience these new features for yourself.

To update and leverage these new features, check out our docs, reach out to your Customer Success Engineer, or contact our support team. Your feedback is invaluable to us, and we look forward to continuing to support your organization’s security needs. We are now offering all of our product updates as a quarterly product update webinar series. Watch the fall webinar update in the player below to get all of the juicy tidbits from our product team.

Navigating Open Source Software Compliance in Regulated Industries

Open source software (OSS) brings a wealth of benefits: speed, innovation, cost savings. But when serving customers in highly regulated industries like defense, energy, or finance, a new complication enters the picture—compliance.

Imagine your DevOps-fluent engineering team has been leveraging OSS to accelerate product delivery, and suddenly, a major customer hits you with a security compliance questionnaire. What now? 

Regulatory compliance isn’t just about managing the risks of OSS for your business anymore; it’s about providing concrete evidence that you meet standards like FedRAMP and the Secure Software Development Framework (SSDF).

The tricky part is that the OSS “suppliers” making up 70-90% of your software supply chain aren’t traditional vendors—they don’t have the same obligations or accountability, and they’re not necessarily aligned with your compliance needs. 

So, who bears the responsibility? You do.

The OSS your engineering team consumes is your resource and your responsibility. This means you’re not only tasked with managing the security risks of using OSS but also with proving that both your applications and your OSS supply chain meet compliance standards. 

In this post, we’ll explore why you’re ultimately responsible for the OSS you consume and outline practical steps to help you use OSS while staying compliant.

Learn about CISA’s SSDF attestation form and how to meet compliance.

SSDF Attestation 101: A Practical Guide for Software Producers

What does it mean to use open source software in a regulated environment?

Highly regulated environments add a new wrinkle to the OSS security narrative. The OSS developers who author the software dependencies that make up the vast majority of modern software supply chains aren’t vendors in the traditional sense. They are more like a volunteer workforce that lets you re-use their work on a take-it-or-leave-it basis. You have no recourse if it doesn’t work as expected or, worse, has vulnerabilities in it.

So, how do you meet compliance standards when your software supply chain is built on top of a foundation of OSS?

Who is the vendor? You are!

Whether you have internalized this or not, the open source software that your developers consume is your resource and thus your responsibility.

This means you shoulder the burden of not only managing the security risk of consuming OSS but also proving that both your applications and your OSS supply chain meet compliance standards.

Open source software is a natural resource

Before we jump into how to accomplish the task set forth in the previous section, let’s take some time to understand why you are the vendor when it comes to open source software.

The common idea is that OSS is produced by a 3rd-party that isn’t part of your organization, so they are the software supplier. Shouldn’t they be the ones required to secure their code? They control and maintain what goes in, right? How are they not responsible?

To answer that question, let’s think about OSS as a natural resource that is shared by the public at large, for instance the public water supply.

This shouldn’t be too much of a stretch. We already use terms like upstream and downstream to think about the relationship between software dependencies and the global software supply chain.

Using this mental model, it becomes easier to understand that a public good isn’t a supplier. You can’t ask a river or a lake for an audit report that it is contaminant free and safe to drink. 

Instead the organization that processes and provides the water to the community is responsible for testing the water and guaranteeing its safety. In this metaphor, your company is the one processing the water and selling it as pristine bottled water. 

How do you pass the buck to your “supplier”? You can’t. That’s the point.

This probably has you asking yourself: if I am responsible for my own OSS supply chain, then how do I meet a compliance standard for something I don’t control? Keep reading and you’ll find out.

How do I use OSS and stay compliant?

While compliance standards are often thought of as rigid, the reality is much more nuanced. Just because your organization doesn’t own/control the open source projects that you consume doesn’t mean that you can’t use OSS and meet compliance requirements.

There are a few different steps that you need to take in order to build a “reasonably secure” OSS supply chain that will pass a compliance audit. We’ll walk you through the steps below:

Step 1 — Know what you have (i.e., an SBOM inventory)

The foundation of the global software supply chain is the SBOM (software bill of materials) standard. Each of the security and compliance functions outlined in the steps below use or manipulate an SBOM.

SBOMs are the foundational component of the global software supply chain because they record the ingredients that were used to produce the application an end-user will consume. If you don’t have a good grasp of the ingredients of your applications there isn’t much hope for producing any upstream security or compliance guarantees.

The best way to create observability into your software supply chain is to generate an SBOM for every single application in your DevSecOps build pipeline—at each stage of the pipeline!
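
Once those SBOMs are stored somewhere searchable, “knowing what you have” becomes a query. A minimal sketch, assuming a directory of SPDX JSON documents (the directory layout, file name, and package name are all illustrative):

# Which applications ship log4j-core, and at what version?
grep -rl '"log4j-core"' sbom-inventory/
jq -r '.packages[] | select(.name == "log4j-core") | "\(.name) \(.versionInfo)"' \
  sbom-inventory/payments-api.spdx.json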

Step 2 — Maintain a historical record of application source code

To meet compliance standards like FedRAMP and SSDF, you need to be able to maintain a historical record of the source code of your applications, including: 

  • Where it comes from, 
  • Who created it, and 
  • Any modifications made to it over time.

SBOMs were designed to meet these requirements. They act as a record of how applications were built and when/where OSS dependencies were introduced. They also double as compliance artifacts that prove you are compliant with regulatory standards.

Governments aren’t content with self-attestation (at least not for long); they need hard evidence to verify that you are trustworthy. Even though SSDF is currently self-attestation only, the federal government is known for rolling out compliance frameworks in stages: first advising on best practices, then requiring self-attestation, and finally requiring external validation via a certification process.

The Cybersecurity Maturity Model Certification (CMMC) is a good example of this dynamic process. It recently transitioned from self-attestation to external validation with the introduction of the 2.0 release of the framework.

Step 3 — Manage your OSS vulnerabilities

Not only do you need to keep a record of applications as they evolve over time, you have to track the known vulnerabilities of your OSS dependencies to achieve compliance. Just as SBOMs prove provenance, vulnerability scans are proof that your application and its dependencies aren’t vulnerable. These scans are a crucial piece of the evidence that you will need to provide to your compliance officer as you go through the certification process. 

Remember, the buck stops with you! If the OSS that your application consumes doesn’t supply an SBOM and vulnerability scan (which is the case for essentially all OSS projects), then you are responsible for creating them. There is no vendor to pass the blame to when proving that your supply chain is reasonably secure and thus compliant.

Step 4 — Continuous compliance of open source software supply chain

It is important to recognize that modern compliance standards are no longer sprints but marathons. Not only do you have to prove that your application(s) are compliant at the time of the audit, you also have to demonstrate that they remain secure continuously in order to maintain your certification.

This can be challenging to scale but it is made easier by integrating SBOM generation, vulnerability scanning and policy checks directly into the DevSecOps pipeline. This is the approach that modern, SBOM-powered SCAs advocate for.

By embedding compliance policy-as-code into your DevSecOps pipeline as policy gates, compliance can be maintained over time. Developers are alerted when their code doesn’t meet a compliance standard and are directed to take corrective action. These compliance checks can also be used to automatically generate the needed compliance artifacts.

You already have an automated DevSecOps pipeline that produces and delivers applications with minimal human intervention, so why not take advantage of this existing tooling to automate open source software compliance in the same way that security was integrated directly into DevOps?
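
A minimal sketch of what that reuse can look like: one extra pipeline step that emits dated, audit-ready evidence (an SBOM plus a vulnerability report) for every build. The tool choice, image name, and evidence layout shown here are placeholders for your own conventions.

# Produce dated compliance evidence for this build
STAMP=$(date -u +%Y%m%dT%H%M%SZ)
mkdir -p "evidence/$STAMP"
syft registry.example.com/team/app:latest -o spdx-json > "evidence/$STAMP/sbom.spdx.json"
grype "sbom:evidence/$STAMP/sbom.spdx.json" -o json > "evidence/$STAMP/vuln-report.json"
# Archive the evidence directory; it doubles as the compliance artifact for auditors.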

Real-world Examples

To help bring these concepts to life, we’ve outlined some real-world examples of how open source software and compliance intersect:

Open source project has unfixed vulnerabilities

This is far and away the most common issue that comes up during compliance audits. One of your application’s OSS dependencies has a known vulnerability that has been sitting in the backlog for months or even years!

There are several reasons why an open source software developer might leave a known vulnerability unresolved:

  • They prioritize a feature over fixing a vulnerability
  • The vulnerability is from a third-party dependency they don’t control and can’t fix
  • They don’t like fixing vulnerabilities and choose to ignore it
  • They reviewed the vulnerability and decided it’s not likely to be exploited, so it’s not worth their time
  • They’re planning a codebase refactor that will address the vulnerability in the future

These are all rational reasons for vulnerabilities to persist in a codebase. Remember, OSS projects are owned and maintained by 3rd-party developers who control the repository; they make no guarantees about its quality. They are not vendors.

You, on the other hand, are a vendor and must meet compliance requirements. The responsibility falls on you. An OSS vulnerability management program is how you meet your compliance requirements while enjoying the benefits of OSS.

Need to fill out a supplier questionnaire

Imagine you’re a cloud service provider or software vendor. Your sales team is trying to close a deal with a significant customer. As the contract nears signing, the customer’s legal team requests a security questionnaire. They’re in the business of protecting their organization from financial risk stemming from their supply chain, and your company is about to become part of that supply chain.

These forms are usually from lawyers, very formal, and not focused on technical attacks. They just want to know what you’re using. The quick answer? “Here’s our SBOM.” 

Compliance comes in the form of public standards like FedRAMP, SSDF, NIST, etc., and these less formal security questionnaires. Either way, being unable to provide a full accounting of the risks in your software supply chain can be a speed bump to your organization’s revenue growth and success.

Integrating SBOM scanning, generation, and management deeply into your DevSecOps pipeline is key to accelerating the sales process and your organization’s overall success.

Prove provenance

CISA’s SSDF Attestation form requires that enterprises selling software to the federal government can produce a historical record of their applications. Quoting directly: “The software producer [must] maintain provenance for internal code and third-party components incorporated into the software to the greatest extent feasible.”

If you want access to the revenue opportunities the U.S. federal government offers, SSDF attestation is the needle you have to thread. Meeting this requirement without hiring an army of compliance engineers to manually review your entire DevSecOps pipeline demands an automated OSS component observability and management system.

Often, we jump to cryptographic signatures, encryption keys, trust roots—this quickly becomes a mess. Really, just a hash of the files in a database (read: SBOM inventory) satisfies the requirement. Sometimes, simpler is better. 
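
A minimal sketch of that “simpler is better” approach: record a content hash of each release’s SBOM as an append-only provenance entry (the file names are illustrative).

echo "$(date -u +%F) $(sha256sum build-1.4.2.spdx.json)" >> provenance.log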

Discover the “easy button” to SSDF Attestation and OSS supply chain compliance in our previous blog post.

Takeaways

OSS Is Not a Vendor—But You Are! The best way to have your OSS cake and eat it too (without the indigestion) is to:

  1. Know Your Ingredients: Maintain an SBOM inventory of your OSS supply chain.
  2. Maintain a Complete Historical Record: Keep track of your application’s source code and build process.
  3. Scan for Known Vulnerabilities: Regularly check your OSS dependencies.
  4. Continuous Compliance through Automation: Generate compliance records automatically to scale your compliance process.

There are numerous reasons to aim for open source software compliance, especially for your software supply chain:

  • Balance Gains Against Risks: Leverage OSS benefits while managing associated risks.
  • Reduce Financial Risk: Protect your organization’s existing revenue.
  • Increase Revenue Opportunities: Access new markets that mandate specific compliance standards.
  • Avoid Becoming a Cautionary Tale: Stay ahead of potential security incidents.

Regardless of your motivation for wanting to use OSS and use it responsibly (i.e., securely and compliantly), Anchore is here to help. Reach out to our team to learn more about how to build and manage a secure and compliant OSS supply chain.

Learn the container security best practices, including open source software (OSS) security, to reduce the risk of software supply chain attacks.

US Navy achieves ATO in days with continuous compliance and OSS risk management

Implementing secure and compliant software solutions within the Department of Defense’s (DoD) software factory framework is no small feat. 

For Black Pearl, the premier DevSecOps platform for the U.S. Navy, and Sigma Defense, a leading DoD technology contractor, the challenge was not just about meeting stringent security requirements but about empowering the warfighter. 

We’ll cover how they streamlined compliance, managed open source software (OSS) risk, and reduced vulnerability overload—all while accelerating their Authority to Operate (ATO) process.

Challenge: Navigating Complex Security and Compliance Requirements

Black Pearl and Sigma Defense faced several critical hurdles in meeting the stringent security and compliance standards of the DoD Enterprise DevSecOps Reference Design:

  • Achieving RMF Security and Compliance: Black Pearl needed to secure its own platform and help its customers achieve ATO under the Risk Management Framework (RMF). This involved meeting stringent security controls like RA-5 (Vulnerability Management), SI-3 (Malware Protection), and IA-5 (Credential Management) for both the platform and the applications built on it.
  • Maintaining Continuous Compliance: With the RAISE 2.0 memo emphasizing continuous ATO compliance, manual processes were no longer sufficient. The teams needed to automate compliance tasks to avoid the time-consuming procedures traditionally associated with maintaining ATO status.
  • Managing Open-Source Software (OSS) Risks: Open-source components are integral to modern software development but come with inherent risks. Black Pearl had to manage OSS risks for both its platform and its customers’ applications, ensuring vulnerabilities didn’t compromise security or compliance.
  • Vulnerability Overload for Developers: Developers often face an overwhelming number of vulnerabilities, many of which may not pose significant risks. Prioritizing actionable items without draining resources or slowing down development was a significant challenge.

“By using Anchore and the Black Pearl platform, applications inherit 80% of the RMF’s security controls. You can avoid all of the boring stuff and just get down to what everyone does well, which is write code.”

Christopher Rennie, Product Lead/Solutions Architect

Solution: Automating Compliance and Security with Anchore

To address these challenges, Black Pearl and Sigma Defense implemented Anchore, which provided:

“Working alongside Anchore, we have customized the compliance artifacts that come from the Anchore API to look exactly how the AOs are expecting them to. This has created a good foundation for us to start building the POA&Ms that they’re expecting.”

Josiah Ritchie, DevSecOps Staff Engineer

  • Managing OSS Risks with Continuous Monitoring: Anchore’s integrated vulnerability scanner, policy enforcer, and reporting system provided continuous monitoring of open-source software components. This proactive approach ensured vulnerabilities were detected and addressed promptly, effectively mitigating security risks.
  • Automated Prioritization of Vulnerabilities: By integrating the Anchore Developer Bundle, Black Pearl enabled automatic prioritization of actionable vulnerabilities. Developers received immediate alerts on critical issues, reducing noise and allowing them to focus on what truly matters.

Results: Accelerated ATO and Enhanced Security

The implementation of Anchore transformed Black Pearl’s compliance process and security posture:

  • Platform ATO in 3-5 days: With Anchore’s integration, Black Pearl users accessed a fully operational DevSecOps platform within days, a significant reduction from the typical six months for DIY builds.

“The DoD has four different layers of authorizing officials in order to achieve ATO. You have to figure out how to make all of them happy. We want to innovate by automating the compliance process. Anchore helps us achieve this, so that we can build a full ATO package in an afternoon rather than taking a month or more.”

Josiah Ritchie, DevSecOps Staff Engineer

  • Significantly reduced time spent on compliance reporting: Anchore automated compliance checks and artifact generation, cutting down hours spent on manual reviews and ensuring consistency in reports submitted to authorizing officials.
  • Proactive OSS risk management: By shifting security and compliance to the left, developers identified and remediated open-source vulnerabilities early in the development lifecycle, mitigating risks and streamlining the compliance process.
  • Reduced vulnerability overload with prioritized vulnerability reporting: Anchore’s prioritization of vulnerabilities prevented developer overwhelm, allowing teams to focus on critical issues without hindering development speed.

Conclusion: Empowering the Warfighter Through Efficient Compliance and Security

Black Pearl and Sigma Defense’s partnership with Anchore demonstrates how automating security and compliance processes leads to significant efficiencies. This empowers Navy divisions to focus on developing software that supports the warfighter. 

Achieving ATO in days rather than months is a game-changer in an environment where every second counts, setting a new standard for efficiency through the combination of Black Pearl’s robust DevSecOps platform and Anchore’s comprehensive security solutions.

If you’re facing similar challenges in securing your software supply chain and accelerating compliance, it’s time to explore how Anchore can help your organization achieve its mission-critical objectives.

Download the full case study below👇

Mark Your Calendars: Anchore’s Must-Attend Events and Webinars in October

Are you ready for cooler temperatures and the changing of the leaves? Anchore is! We are excited to announce a series of events and webinars next month. From in-person conferences to insightful webinars, we have a lineup designed to keep you informed about the latest developments in software supply chain security, DevSecOps, and compliance. Join us to learn, connect, and explore how Anchore can help your organization navigate the evolving landscape of software security.

EVENT: TD Synnex Inspire

Date: October 9-12, 2024

Location: Booth T84 | Greenville Convention Center in Greenville, SC

Anchore is thrilled to be exhibiting at the 2024 TD SYNNEX Inspire. Visit us at Booth T84 in the Pavilion to discover how Anchore secures containers for AI and machine learning applications, with a special emphasis on high-performance computing (HPC).

Anchore has helped many Fortune 50 enterprises scale their container security and vulnerability management programs across their entire software supply chain, including luminaries like NVIDIA. If you’d like to book dedicated time to speak with our team, drop by our booth or email us at [email protected].

WEBINAR: Introducing the Anchore Data Service

Date: October 15, 2024 at 10am PT

We will showcase the exciting new features introduced in Anchore Enterprise 5.8, 5.9, and 5.10, all designed to effortlessly secure your software supply chain and reduce risk for your organization. Highlights include:

  • Version 5.10: New Anchore Data Service which automatically updates your vulnerability feeds—even in air-gapped environments!
  • Version 5.9: Improved SBOM generation with native integration of Syft, etc.
  • Version 5.8: CISA Known Exploited Vulnerabilities (KEV) feed, etc.

We will be demo-ing all of the new features, sharing pro tips and providing takeaways on how to best utilize the new releases. Don’t miss out!

EVENT: All Things Open Conference

Date: October 27-29, 2024

Location: Booth #95 | Raleigh Convention Center in Raleigh, NC

Anchore is excited to participate in the 2024 All Things Open Conference—one of the largest open source software events in the U.S. Drop by and visit us at Booth #95 to learn how our open source tools, Syft and Grype, can help you start your journey to a more secure DevSecOps pipeline. 

Anchore employees will be on hand to walk you through both tools and answer your questions.

WEBINAR: Accelerate FedRAMP Compliance on Amazon EKS with Anchore

Date: October 29, 2024 at 10am PT

Navigating FedRAMP compliance can be challenging, but Anchore and AWS are here to simplify the process. Join Luis Morales, Solutions Architect at AWS, and Brian Thomason, Manager of Partner and Solutions Engineering at Anchore, as they explain how Cisco achieved FedRAMP compliance in weeks rather than months.

In this live session, we’ll share actionable guidance and insights that address:

  • How to meet six FedRAMP vulnerability scanning requirements
  • Automating STIG and FIPS compliance for Amazon EC2 virtual machines
  • Securing containers end-to-end across CI/CD, Amazon EKS, and ECS

We’ll also discuss the architecture of Anchore running in an AWS customer environment, demonstrating how to leverage AWS tools and services to enhance your cloud security posture.

WEBINAR: Expert Series: Solving Real-World Challenges in FedRAMP Compliance

Date: October 30, 2024 at 10am PT

Navigating the path to FedRAMP authorization can be daunting, especially with the evolving landscape of federal security requirements. In this Expert Series webinar, Neil Levine, SVP of Product at Anchore, and Mike Strohecker, Director of Cloud Operations at InfusionPoints, will share real-world stories of how we’ve helped our FedRAMP customers overcome key challenges—from achieving compliance faster to meeting the latest FedRAMP Rev 5 requirements.

We’ll dive into practical solutions, including:

  • Overcoming common FedRAMP compliance hurdles
  • Meeting Rev 5 security hardening standards like STIG and CIS (CM-6)
  • Effectively shifting security left in the CI/CD pipeline
  • Automating policy enforcement and continuous monitoring

We’ll also explore the future impact of the July 2024 FedRAMP modernization memo, highlighting how increased automation with OSCAL is transforming the compliance process.

Wrap-Up

With a brimming schedule of events, October promises to be a jam packed month for Anchore and our community. Whether you’re interested in our latest product updates, exploring strategies for FedRAMP compliance, or connecting at industry-leading events, there’s something for everyone. Mark your calendars and join us to stay ahead in the evolving world of software supply chain security.

Stay informed about upcoming events and developments at Anchore by bookmarking our Events Page and checking back regularly!

How to build an OSS risk management program

In previous blog posts we have covered the risks of open source software (OSS) and the security best practices to manage that risk. From there we zoomed in on the benefits of tightly coupling two of those best practices: SBOMs and vulnerability scanning.

Now, we’ll dig deeper into the practical considerations of integrating this paired solution into a DevSecOps pipeline. By examining the design and implementation of SBOMs and vulnerability scanning, we’ll illuminate the path to creating a holistic open source software (OSS) risk management program.

Learn about the role that SBOMs play in the security, including open source software (OSS) security, of your organization in this white paper.

How do I integrate SBOM management and vulnerability scanning into my development process?

Ideally, you want to generate an SBOM at each stage of the software development process (see image below). By generating an SBOM and scanning for vulnerabilities at each stage, you unlock a number of novel use-cases and benefits that we covered previously.

DevSecOps lifecycle diagram with all stages to integrate SBOM generation and vulnerability scanning.

Let’s break down how to integrate SBOM generation and vulnerability scanning into each stage of the development pipeline:

Source (PLAN & CODE)

The easiest way to integrate SBOM generation and vulnerability scanning into the design and coding phases is to provide CLI (command-line interface) tools to your developers. Engineers are already used to these tools—and have a preference for them!

If you’re going the open source route, we recommend both Syft (SBOM generation) and Grype (vulnerability scanner) as easy options to get started. If you’re interested in an integrated enterprise tool then you’ll want to look at AnchoreCTL.

Developers can generate SBOMs and run vulnerability scans right from their workstations. By doing this at design or commit time, developers shift security left and learn immediately about the security implications of their design decisions.

If existing vulnerabilities are found, developers can immediately pivot to OSS dependencies that are clean or start a conversation with their security team to understand if their preferred framework will be a deal breaker. Either way, security risk is addressed early before any design decisions are made that will be difficult to roll back.
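
For a concrete picture of what this looks like at the workstation, here is a minimal sketch that wraps the open source CLIs mentioned above. It assumes Syft and Grype are already installed locally, and the image reference is a placeholder.

```python
import subprocess
import sys

IMAGE = "registry.example.com/myapp:dev"  # placeholder image reference

# Generate an SBOM for the image in Syft's native JSON format.
with open("sbom.json", "w") as sbom_file:
    subprocess.run(["syft", IMAGE, "-o", "json"], check=True, stdout=sbom_file)

# Scan the SBOM; --fail-on makes Grype exit non-zero for High/Critical findings,
# so the script's exit code can gate a local pre-commit hook or build script.
result = subprocess.run(["grype", "sbom:sbom.json", "--fail-on", "high"])
sys.exit(result.returncode)
```

The same two commands can be wired into a pre-commit hook so the check runs automatically on every commit.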

Build (BUILD + TEST)

The ideal place to integrate SBOM generation and vulnerability scanning during the build and test phases is directly in the organization’s continuous integration (CI) pipeline.

The same self-contained CLI tools used during the source stage are integrated as additional steps into CI scripts/runbooks. When a developer pushes a commit that triggers the build process, the new steps are executed and both an SBOM and vulnerability scan are created as outputs. 

Check out our docs site to see how AnchoreCTL (running in distributed mode) makes this integration a breeze.

If you’re having trouble convincing your developers to jump on the SBOM train, encourage them to think of security scans as just another unit test in their testing suite.

Running these steps in the CI pipeline delays feedback slightly compared to checking each incremental commit as the application is being written, but it is still light years better than waiting until a release is code complete.

If you are unable to get your engineering team to scan OSS dependencies at their workstations, a CI-based strategy is a happy medium. It is much easier to ensure every build runs exactly the same way each time than it is to enforce the same consistency across developers.
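
As a rough sketch of what such a CI step could look like (the IMAGE_TAG environment variable and the artifacts/ directory are assumptions about your pipeline layout, not a prescription):

```python
"""CI step: generate an SBOM and vulnerability report for the image built in this job.

Assumes Syft and Grype are available on the CI runner and that the image tag is
passed in via a (hypothetical) IMAGE_TAG environment variable.
"""
import os
import subprocess
import sys

image = os.environ["IMAGE_TAG"]          # e.g. set by the preceding build step
sbom_path = "artifacts/sbom.cyclonedx.json"
report_path = "artifacts/vulns.json"

os.makedirs("artifacts", exist_ok=True)

# 1. SBOM generation (CycloneDX JSON so downstream tools can consume it).
with open(sbom_path, "w") as f:
    subprocess.run(["syft", image, "-o", "cyclonedx-json"], check=True, stdout=f)

# 2. Vulnerability scan of the SBOM; save the full report as a build artifact.
with open(report_path, "w") as f:
    scan = subprocess.run(
        ["grype", f"sbom:{sbom_path}", "-o", "json", "--fail-on", "high"],
        stdout=f,
    )

# 3. Treat the scan like any other failing test: block the build on findings.
sys.exit(scan.returncode)
```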

Release (aka Registry)

Another integration option is the container registry. This option will require you to either roll your own service that will regularly call the registry and scan new containers or use a service that does this for you.

See how Anchore Enterprise can automate this entire process by reviewing our integration docs.

Regardless of the path you choose, you will end up creating an IAM service account within your CI application that gives your SBOM and vulnerability scanning solution access to your registries.

The release stage tends to be fairly far along in the development process and is not an ideal location for these functions to run. Most of the benefits of a shift left security posture won’t be available anymore.

If this is an additional vulnerability scanning stage—rather than the sole stage—then this is a fantastic environment to integrate into. Software supply chain attacks that target registries are popular and can be prevented with a continuous scanning strategy.
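
If you do roll your own polling service, the sketch below shows the general shape using the standard Docker Registry HTTP API v2. The registry URL, repository name, and directory layout are placeholders, and a real service would also need authentication, pagination, and error handling.

```python
import json
import subprocess
from pathlib import Path

import requests

REGISTRY = "https://registry.example.com"  # placeholder registry
REPOSITORY = "team/myapp"                  # placeholder repository
STATE_FILE = Path("scanned_tags.json")

for d in ("sboms", "scans"):
    Path(d).mkdir(exist_ok=True)
seen = set(json.loads(STATE_FILE.read_text())) if STATE_FILE.exists() else set()

# List tags via the standard /v2/<name>/tags/list endpoint.
resp = requests.get(f"{REGISTRY}/v2/{REPOSITORY}/tags/list", timeout=30)
for tag in resp.json().get("tags", []):
    if tag in seen:
        continue
    image = f"{REGISTRY.removeprefix('https://')}/{REPOSITORY}:{tag}"
    # Generate an SBOM for the new tag, then scan it for vulnerabilities.
    with open(f"sboms/{tag}.json", "w") as f:
        subprocess.run(["syft", image, "-o", "json"], check=True, stdout=f)
    with open(f"scans/{tag}.json", "w") as f:
        subprocess.run(["grype", f"sbom:sboms/{tag}.json", "-o", "json"], stdout=f)
    seen.add(tag)

STATE_FILE.write_text(json.dumps(sorted(seen)))
```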

Deploy

This is the traditional stage of the SDLC (software development lifecycle) to run vulnerability scans. SBOM generation can be added on as another step in an organization’s continuous deployment (CD) runbook.

Similar to the build stage, the best integration method is by calling CLI tools directly in the deploy script to generate the SBOM and then scan it for vulnerabilities.

Alternatively, if you utilize a container orchestrator like Kubernetes, you can configure an admission controller to act as a deployment gate. The admission controller calls out to a standalone SBOM generator and vulnerability scanner before allowing a workload to be scheduled.

If you’d like to understand how this is implemented with Anchore Enterprise, see our docs.
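
For illustration only, here is a toy version of that pattern as a Flask validating webhook (not Anchore’s implementation): it extracts the container images from the incoming Pod spec and blocks admission if Grype reports High or Critical vulnerabilities. A production controller would also need TLS, a ValidatingWebhookConfiguration, and pre-computed scan results rather than scanning synchronously in the request path.

```python
import subprocess

from flask import Flask, jsonify, request

app = Flask(__name__)


def image_is_clean(image: str) -> bool:
    # --fail-on high makes Grype exit non-zero if High/Critical vulns are found.
    result = subprocess.run(["grype", image, "--fail-on", "high"], capture_output=True)
    return result.returncode == 0


@app.route("/validate", methods=["POST"])
def validate():
    review = request.get_json()
    pod = review["request"]["object"]
    images = [c["image"] for c in pod["spec"].get("containers", [])]
    blocked = [img for img in images if not image_is_clean(img)]

    # Echo the request UID back in a standard AdmissionReview response.
    return jsonify({
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": review["request"]["uid"],
            "allowed": not blocked,
            "status": {"message": f"Images failed vulnerability gate: {blocked}"},
        },
    })


if __name__ == "__main__":
    # Real admission webhooks must be served over TLS; omitted here for brevity.
    app.run(host="0.0.0.0", port=8443)
```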

While this is the traditional location for running vulnerability scans, it should not be the only stage where you scan. Feedback about security issues arrives very late in the development process, and earlier design decisions may prevent vulnerabilities from being easily remediated. Rely on deploy-time scanning alone only if you have no other option.

Production (OPERATE + MONITOR)

This is not a traditional stage to run vulnerability scans since the goal is to prevent vulnerabilities from getting to production. Regardless, this is still an important environment to scan. Production containers have a tendency to drift from their pristine build states (DevSecOps pipelines are leaky!).

Also, new vulnerabilities are discovered all of the time and being able to prioritize remediation efforts to the most vulnerable applications (i.e., runtime containers) considerably reduces the risk of exploitation.

The recommended way to run SBOM generation and vulnerability scans in production is to run an independent container with the SBOM generator and vulnerability scanner installed. Most container orchestrators have SDKs that will allow you to integrate an SBOM generator and vulnerability scanner with the preferred administration CLI (e.g., kubectl for Kubernetes clusters).

Read how Anchore Enterprise integrates these components together into a single container for both Kubernetes and Amazon ECS.
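
A bare-bones version of that pattern, assuming kubectl, Syft, and Grype are available inside a scheduled job container, might look like this:

```python
import subprocess

# Collect every container image currently running in the cluster.
out = subprocess.run(
    ["kubectl", "get", "pods", "--all-namespaces",
     "-o", "jsonpath={.items[*].spec.containers[*].image}"],
    check=True, capture_output=True, text=True,
)
images = sorted(set(out.stdout.split()))

for image in images:
    safe_name = image.replace("/", "_").replace(":", "_")
    # Generate an SBOM from the running image, then scan it for vulnerabilities.
    with open(f"{safe_name}.sbom.json", "w") as f:
        subprocess.run(["syft", image, "-o", "json"], check=True, stdout=f)
    with open(f"{safe_name}.vulns.json", "w") as f:
        subprocess.run(["grype", f"sbom:{safe_name}.sbom.json", "-o", "json"], stdout=f)
```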

How do I manage all of the SBOMs and vulnerability scans?

Tightly coupling SBOM generation and vulnerability scanning creates a number of benefits, but it also creates one problem: a firehose of data. This unintended side effect is known as SBOM sprawl, and it inevitably becomes a headache in and of itself.

The concise solution to this problem is to create a centralized SBOM repository. The brevity of this answer downplays the challenges that go along with building and managing a new data pipeline.

We’ll walk you through the high-level steps below but if you’re looking to understand the challenges and solutions of SBOM sprawl in more detail, we have a separate article that covers that.

Integrating SBOMs and vulnerability scanning for better OSS risk management

Assuming you’ve deployed an SBOM generator and vulnerability scanner into at least one of your development stages (as detailed above in “How do I integrate SBOM management and vulnerability scanning into my development process?”) and have an SBOM repository for storing your SBOMs and/or vulnerability scans, we can now walk through how to tie these systems together.

  1. Create a system to pull vulnerability feeds from reputable sources. If you’re not sure where to begin, read our post on how to get started.
  2. Regularly scan your catalog of SBOMs for vulnerabilities, storing the results alongside the SBOMs.
  3. Implement a query system to extract insights from your inventory of SBOMs.
  4. Create a dashboard to visualize your software supply chain’s health.
  5. Build alerting automation to ping your team as newly discovered vulnerabilities are announced.
  6. Maintain all of these DIY security applications and tools. 
  7. Continue to incrementally improve on these tools as new threats emerge, technologies evolve and development processes change.

If this feels like more work than you’re willing to take on, this is why security vendors exist. See the benefits of a managed SBOM-powered SCA below.

Prefer not to DIY? Evaluate Anchore Enterprise

Anchore Enterprise was designed from the ground up to provide a reliable software supply chain security platform that requires the least amount of work to integrate and maintain. Included in the product are:

  • Out-of-the-box integrations for popular CI/CD software (e.g., GitHub, Jenkins, GitLab, etc.)
  • End-to-end SBOM management
  • Enterprise-grade vulnerability scanning with a best-in-class false positive rate
  • Built-in SBOM drift detection
  • Remediation recommendations
  • Continuous visibility and monitoring of software supply chain health

Enterprises like NVIDIA, Cisco, and Infoblox have chosen Anchore Enterprise as their “easy button” to achieve open source software security with the least amount of lift.

If you’re interested to learn more about how to roll out a complete OSS security solution without the blood, sweat and tears that come with the DIY route—reach out to our team to get a demo or try Anchore Enterprise yourself with a 15-day free trial.

Learn the container security best practices, including open source software (OSS) security, to reduce the risk of software supply chain attacks.

SBOMs and Vulnerability Management: OSS Security in the DevSecOps Era

The rise of open-source software (OSS) development and DevOps practices has unleashed a paradigm shift in OSS security. As traditional approaches to OSS security have proven inadequate in the face of rapid development cycles, the Software Bill of Materials (SBOM) has re-made OSS vulnerability management in the era of DevSecOps.

This blog post zooms in on two best practices from our introductory article on OSS security and the software supply chain:

  1. Maintain a Software Dependency Inventory
  2. Implement Vulnerability Scanning

These two best practices are set apart from the rest because they are a natural pair. We’ll cover how this novel approach:

  • Scales OSS vulnerability management under the pressure of rapid software delivery
  • Is set apart from legacy SCAs
  • Unlocks new use-cases in software supply chain security, OSS risk management, etc.
  • Benefits software engineering orgs
  • Benefits an organization’s overall security posture
  • Has measurably impacted modern enterprises such as NVIDIA and Infoblox

Whether you’re a seasoned DevSecOps professional or just beginning to tackle the challenges of securing your software supply chain, this blog post offers insights into how SBOMs and vulnerability management can transform your approach to OSS security.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

Why do I need SBOMs for OSS vulnerability management?

The TL;DR is that SBOMs enable DevSecOps teams to scale OSS vulnerability management programs in modern, cloud native environments. Legacy security tools (i.e., SCA platforms) weren’t built to handle the pace of software delivery after a DevOps facelift.

Answering this question in full requires some historical context. Below is a speed-run of how we got to a place where SBOMs became the clear solution for vulnerability management after the rise of DevOps and OSS; the original longform is found on our blog.

If you’re not interested in a history lesson, skip to the next section, “What new use-cases are unlocked with an SBOM inventory?” to get straight to the impact of this evolution on software supply chain security (SSCS).

A short history on software composition analysis (SCA)

  • SCAs were originally designed to solve the problem of OSS licensing risk
  • Remember that Microsoft made a big fuss about the dangers of OSS at the turn of the millennium
  • Vulnerability scanning and management was tacked-on later
  • These legacy SCAs worked well enough until DevOps and OSS popularity hit critical mass

How the rise of OSS and DevOps principles broke legacy SCAs

  • DevOps and OSS movements hit traction in the 2010s
  • Software development and delivery transitioned from major updates with long development times to incremental updates with frequent releases
  • Modern engineering organizations are measured and optimized for delivery speed
  • Legacy SCAs were designed to scan a golden image once and take as much time as needed to do it; upwards of weeks in some cases
  • This wasn’t compatible with the DevOps promise and created friction between engineering and security
  • This meant not all software could be scanned, and much of it was scanned only after release, increasing the risk of a security breach

SBOMs as the solution

  • SBOMs were introduced as a standardized data structure that comprised a complete list of all software dependencies (OSS or otherwise)
  • These lightweight files created a reliable way to scan software for vulnerabilities without the slow performance of scanning the entire application—soup to nuts
  • Modern SCAs utilize SBOMs as the foundational layer to power vulnerability scanning in DevSecOps pipelines
  • SBOMs + SCAs deliver on the performance of DevOps without compromising security

What is the difference between SBOMs and legacy SCA scanning?

SBOMs offer two functional innovations over the legacy model: 

  1. Deeper visibility into an organization’s application inventory, and
  2. A record of changes to applications over time.

The deeper visibility comes from the fact that modern SCA scanners identify software dependencies recursively and build a complete software dependency tree (both direct and transitive). The record of changes comes from the fact that the OSS ecosystem has begun to standardize the contents of SBOMs to allow interoperability between OSS consumers and producers.

Legacy SCAs typically only scan for direct software dependencies and don’t recursively scan for dependencies of dependencies. Also, legacy SCAs don’t generate standardized scans that can then be used to track changes over time.

What new use-cases are unlocked with an SBOM inventory?

The innovations brought by SBOMs (see above) have unlocked new use-cases that benefit both the software supply chain security niche and the greater DevSecOps world. See the list below:

OSS Dependency Drift Detection

Ideally, software dependencies are only introduced in source code, but the reality is that CI/CD pipelines are leaky and both automated and one-off modifications are made at all stages of development. Plugging 100% of the leaks is a strategy with diminishing returns. Application drift detection is a scalable solution to this challenge.

SBOMs unlock drift detection by creating a point-in-time record of the composition of an application at each stage of the development process. This creates an auditable record of when software builds are modified, how they were changed, and who changed them.
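
The core of drift detection is just a diff between two of those point-in-time records. The toy sketch below compares the package sets in two Syft JSON SBOMs; the file names are placeholders for a build-time SBOM and one generated from the running container.

```python
import json


def packages(sbom_path: str) -> dict[str, str]:
    with open(sbom_path) as f:
        sbom = json.load(f)
    # Syft's native JSON format lists components under "artifacts".
    return {a["name"]: a["version"] for a in sbom.get("artifacts", [])}


build = packages("sbom.build.json")      # SBOM captured at build time
runtime = packages("sbom.runtime.json")  # SBOM captured from the running container

added = sorted(set(runtime) - set(build))
removed = sorted(set(build) - set(runtime))
changed = sorted(n for n in set(build) & set(runtime) if build[n] != runtime[n])

if added or removed or changed:
    print("Drift detected!")
    print("  added:  ", added)
    print("  removed:", removed)
    print("  changed:", changed)
```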

Software Supply Chain Attack Detection

Not all dependency injections are performed by benevolent 1st-party developers. Malicious threat actors who gain access to your organization’s DevSecOps pipeline or the pipeline of one of your OSS suppliers can inject malicious code into your applications.

An SBOM inventory creates the historical record that can identify anomalous behavior and catch these security breaches before organizational damage is done. This is a particularly important strategy for dealing with advanced persistent threats (APTs) that are expert at infiltration and stealth. For a real-world example, see our blog on the recent XZ supply chain attack.

OSS Licensing Risk Management

OSS licensing is at the beginning of a new transformation. The highly permissive licenses that came into fashion over the last 20 years are proving to be unsustainable. As prominent open source companies amend their licenses (e.g., HashiCorp, Elastic, Redis), organizations need to evaluate these changes and how they impact their OSS supply chain strategy.

Similar to the benefits during a security incident, an SBOM inventory acts as the source of truth for OSS licensing risk. As licenses are amended, an organization can quickly evaluate their risk by querying their inventory and identifying who their “critical” OSS suppliers are. 

Domain Expertise Risk Management

Another emerging use-case of software dependency inventories is the management of domain expertise of developers in your organization. A comprehensive inventory of software dependencies allows organizations to map critical software to individual employees’ domain knowledge. This creates a measure of how well resourced your engineering organization is and who owns the knowledge that could impact business operations.

While losing an employee with a particular set of skills might not have the same urgency as a security incident, over time this gap can create instability. An SBOM inventory allows organizations to maintain a list of critical OSS suppliers and get ahead of any structural risks in their organization.

What are the benefits of a software dependency inventory?

SBOM inventories create a number of benefits for tangential domains such as software supply chain security and risk management, but there is one big benefit for the core practice of software development.

Reduced engineering and QA time for debugging

A software dependency inventory stores metadata about applications and their OSS dependencies over time in a centralized repository. This datastore is a simple and efficient way to search and answer critical questions about the state of an organization’s software development pipeline.

Previously, engineering and QA teams had to manually search codebases and commits to determine the source of a rogue dependency being added to an application. A software dependency inventory combines a centralized repository of SBOMs with an intuitive search interface. Now, these time-consuming investigations can be accomplished in minutes versus hours.

What are the benefits of scanning SBOMs for vulnerabilities?

There are a number of security benefits that can be achieved by integrating SBOMs and vulnerability scanning. We’ve highlighted the most important below:

Reduce risk by scaling vulnerability scanning for complete coverage

One of the side effects of transitioning to DevOps practices was that legacy SCAs couldn’t keep up with the software output of modern engineering orgs. This meant that not all applications were scanned before being deployed to production—a risky security practice!

Modern SCAs solved this problem by scanning SBOMs rather than applications or codebases. These lightweight SBOM scans are so efficient that they can keep up with the pace of DevOps output. Scanning 100% of applications reduces risk by preventing unscanned software from being deployed into vulnerable environments.

Prevent delays in software delivery

Overall organizational productivity can be increased by adopting modern, SBOM-powered SCAs that allow organizations to shift security left. When vulnerabilities are uncovered during application design, developers can make informed decisions about the OSS dependencies that they choose. 

This prevents the situation where engineering creates a new application or feature but right before it is deployed into production the security team scans the dependencies and finds a critical vulnerability. These last minute security scans can delay a release and create frustration across the organization. Scanning early and often prevents this productivity drain from occurring at the worst possible time.

Reduced financial risk during a security incident

The faster a security incident is resolved, the less risk an organization is exposed to. The primary metric that organizations track here is mean time to recovery (MTTR). SBOM inventories significantly reduce this metric and improve incident outcomes.

An application inventory with full details on the software dependencies is a prerequisite for rapid security response in the event of an incident. A single SQL query to an SBOM inventory will return a list of all applications that have exploitable dependencies installed. Recent examples include Log4j and XZ. This prevents the need for manual scanning of codebases or production containers. This is the difference between a zero-day incident lasting a few hours versus weeks.
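
The schema below is hypothetical and purely illustrative (Anchore Enterprise and other SBOM stores expose equivalent lookups through their APIs), but it shows how small such a query can be:

```python
import sqlite3

conn = sqlite3.connect("sbom_inventory.db")  # placeholder inventory database

# "Which applications ship a vulnerable log4j-core version?" answered in one query.
# Note: real systems compare versions semantically, not lexically as shown here.
rows = conn.execute(
    """
    SELECT DISTINCT a.name, p.version
    FROM applications a
    JOIN sbom_packages p ON p.application_id = a.id
    WHERE p.package_name = 'log4j-core'
      AND p.version < '2.17.1'
    """
).fetchall()

for app_name, version in rows:
    print(f"{app_name} is running log4j-core {version} and needs remediation")
```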

Reduce hours spent on compliance with automation

Compliance certifications are powerful growth levers for organizations; they open up new market opportunities. The downside is that they create a lot of work for organizations. Manually confirming that each compliance control is met and providing evidence for the compliance officer to review discourages organizations from pursuing these certifications.

Providing automated vulnerability scans from DevSecOps pipelines that integrate SBOM inventories and vulnerability scanners significantly reduces the hours needed to generate and collect evidence for compliance audits.

How impactful are these benefits?

Many modern enterprises are adopting SBOM-powered SCAs and reaping the benefits outlined above. The quantifiable benefits are unique to each organization, but anecdotal evidence is still helpful when weighing how to prioritize a software supply chain security initiative, like the adoption of an SBOM-powered SCA, against other organizational priorities.

As a leading SBOM-powered SCA, Anchore has helped numerous organizations achieve the benefits of this evolution in the software industry. To get an estimate of what your organization can expect, see the case studies below:

NVIDIA

  • Reduced time to production by scanning SBOMs instead of full applications
  • Scaled vulnerability scanning and management program to 100% coverage across 1000s of containerized applications and 100,000s of containers

Read the full NVIDIA case study here >>

Infoblox

  • 75% reduction in engineering hours spent performing manual vulnerability detection
  • 55% reduction in hours allocated to retroactive remediation of vulnerabilities
  • 60% reduction in hours spent on manual compliance discovery and documentation

Read the full Infoblox case study here >>

DreamFactory

  • 75% reduction in engineering hours spent on vulnerability management and compliance
  • 70% faster production deployments with automated vulnerability scanning and management

Read the full DreamFactory case study here >>

Next Steps

Hopefully you now have a better understanding of the power of integrating an SBOM inventory into OSS vulnerability management. This “one-two” combo has unlocked novel use-cases, numerous benefits and measurable results for modern enterprises.

If you’re interested in learning more about how Anchore can help your organization achieve similar results, reach out to our team.

Learn the container security best practices, including open source software (OSS) security, to reduce the risk of software supply chain attacks.

DreamFactory Achieves 75% Time Savings with Anchore: A Case Study in Secure API Generation

As the popularity of APIs has swept the software industry, API security has become paramount, especially for organizations in highly regulated industries. DreamFactory, an API generation platform serving the defense industry and critical national infrastructure, required an air-gapped vulnerability scanning and management solution that didn’t slow down their productivity. Avoiding security breaches and compliance failures are non-negotiables for the team to maintain customer trust.

Challenge: Security Across the Gap

DreamFactory encountered several critical hurdles in meeting the needs of its high-profile clients, particularly those in the defense community and other highly regulated sectors:

  1. Secure deployments without cloud connectivity: Many clients, including the Department of Defense (DoD), required on-premises deployments with air-gapping, breaking the assumptions of modern cloud-based security strategies.
  2. Air-gapped vulnerability scans: Despite air-gapping, these organizations still demanded comprehensive vulnerability reporting to protect their sensitive data.
  3. Building high-trust partnerships: In industries where security breaches could have catastrophic consequences, establishing trust rapidly was crucial.

As Terence Bennett, CEO of DreamFactory, explains, “The data processed by these organizations have the highest national security implications. We needed a solution that could deliver bulletproof security without cloud connectivity.”

Solution: Anchore Enterprise On-Prem and Air-Gapped 

To address these challenges, DreamFactory implemented Anchore Enterprise, which provided:

  1. Support for on-prem and air-gapped deployments: Anchore Enterprise was designed to operate in air-gapped environments, aligning perfectly with DreamFactory’s needs.
  2. Comprehensive vulnerability scanning: DreamFactory integrated Anchore Enterprise into its build pipeline, running daily vulnerability scans on all deployment versions.
  3. Automated SBOM generation and management: Every build is now cataloged and stored (as an SBOM), providing immediate transparency into the software’s components.

“By catching vulnerabilities in our build pipeline, we can inform our customers and prevent any of the APIs created by a DreamFactory install from being leveraged to exploit our customer’s network,” Bennett notes. “Anchore has helped us achieve this massive value-add for our customers.”

Results: Developer Time Savings and Enhanced Trust

The implementation of Anchore Enterprise transformed DreamFactory’s security posture and business operations:

  • 75% reduction in time spent on vulnerability management and compliance requirements
  • 70% faster production deployments with integrated security checks
  • Rapid trust development through transparency

“We’re seeing a lot of traction with data warehousing use-cases,” says Bennett. “Being able to bring an SBOM to the conversation at the very beginning completely changes the conversation and allows CISOs to say, ‘let’s give this a go’.”

Conclusion: A Competitive Edge in High-Stakes Environments

By leveraging Anchore Enterprise, DreamFactory has positioned itself as a trusted partner for organizations requiring the highest levels of security and compliance in their API generation solutions. In an era where API security is more critical than ever, DreamFactory’s success story demonstrates that with the right tools and approach, it’s possible to achieve both ironclad security and operational efficiency.


Are you facing similar challenges hardening your software supply chain to meet the requirements of the DoD? By designing your DevSecOps pipeline to the DoD software factory standard, your organization can reliably meet these sky-high security and compliance requirements. Learn more about the DoD software factory standard by downloading our white paper below.

How is Open Source Software Security Managed in the Software Supply Chain?

Open source software has revolutionized the way developers build applications, offering a treasure trove of pre-built software “legos” that dramatically boost productivity and accelerate innovation. By leveraging the collective expertise of a global community, developers can create complex, feature-rich applications in a fraction of the time it would take to build everything from scratch. However, this incredible power comes with a significant caveat: the open source model introduces risk.

Organizations inherit both the good and bad parts of the OSS source code they don’t own. This double-edged sword of open source software necessitates a careful balance between harnessing its productivity benefits and managing the risks. A comprehensive OSS security program is the industry standard best practice for managing the risk of open source software within an organization’s software supply chain.

Learn the container security best practices, including open source software security, to reduce the risk of software supply chain attacks.

What is open source software security?

Open source software security is the ecosystem of security tools (some of them OSS!) that has developed to compensate for the inherent risk of OSS development. The security of the OSS ecosystem was founded on the idea that “given enough eyeballs, all bugs are shallow”. The reality of OSS is that the majority of it is written and maintained by single contributors. The percentage of open source software that passes the qualifier of “enough eyeballs” is minuscule.

Does that mean open source software isn’t secure? Fortunately, no. The OSS community still produces secure software, but an entire ecosystem of tools ensures that this is verified rather than trusted implicitly.

What is the difference between closed source and open source software security?

The primary difference between open source software security and closed source software security is how much control you have over the source code. Open source code is public and can have many contributors that are not employees of your organization while proprietary source code is written exclusively by employees of your organization. The threat models required to manage risk for each of these software development methods are informed by these differences.

Due to the fact that open source software is publicly accessible and can be contributed to by a diverse, often anonymous community, its threat model must account for the possibility of malicious code contributions, unintentional vulnerabilities introduced by inexperienced developers, and potential exploitation of disclosed vulnerabilities before patches are applied. This model emphasizes continuous monitoring, rigorous code review processes, and active community engagement to mitigate risks. 

In contrast, proprietary software’s threat model centers around insider threats, such as disgruntled employees or lapses in secure coding practices, and focuses heavily on internal access controls, security audits, and maintaining strict development protocols. 

The need for external threat intelligence is also greater in OSS, as the public nature of the code makes it a target for attackers seeking to exploit weaknesses, while proprietary software relies on obscurity and controlled access as a first line of defense against potential breaches.

What are the risks of using open source software?

  1. Vulnerability exploitation of your application
    • The bargain struck when utilizing OSS is that your organization gives up significant control over the quality of the software. When you use OSS you inherit both good AND bad (read: insecure) code. Any known or latent vulnerabilities in the software become your problem.
  2. Access to source code increases the risk of vulnerabilities being discovered by threat actors
    • OSS development is unique in that both the defenders and the attackers have direct access to the source code. This gives the threat actors a leg up. They don’t have to break through perimeter defenses before they get access to source code that they can then analyze for vulnerabilities.
  3. Increased maintenance costs for DevSecOps function
    • Adopting OSS into an engineering organization is another function that requires management. Data has to be collected about the OSS that is embedded in your applications. That data has to be stored and made available in the event of a security incident. These maintenance costs are typically incurred by the DevOps and Security teams.
  4. OSS license legal exposure
    • OSS licenses are mostly permissive for use within commercial applications, but a non-trivial subset are not, or, worse, are highly adversarial when used by a commercial enterprise. Organizations that don’t manage this risk increase the potential for legal action to be taken against them.

How serious are the risks associated with the use of open source software?

Current estimates are that 70-90% of modern applications are composed of open source software. This means that only 10-30% of the code in applications developed by organizations is written by developers employed by the organization. Without significant visibility into the security of OSS, organizations are handing the keys to the castle over to the community and hoping for the best.

Not only is OSS a significant footprint in modern application composition but its growth is accelerating. This means the associated risks are growing just as fast. This is part of the reason we see an acceleration in the frequency of software supply chain attacks. Organizations that aren’t addressing these realities are getting caught on their back foot when zero-days are announced like the recent XZ utils backdoor.

Why are SBOMs important to open source software security?

Software Bills of Materials (SBOMs) serve as the foundation of software supply chain security by providing a comprehensive “ingredient list” of all components within an application. This transparency is crucial in today’s software landscape, where modern applications are a complex web of mostly open source software dependencies that can harbor hidden vulnerabilities. 

SBOMs enable organizations to quickly identify and respond to security threats, as demonstrated during incidents like Log4Shell, where companies with centralized SBOM repositories were able to locate vulnerable components in hours rather than days. By offering a clear view of an application’s composition, SBOMs form the bedrock upon which other software supply chain security measures can be effectively built and validated.

The importance of SBOMs in open source software security cannot be overstated. Open source projects often involve numerous contributors and dependencies, making it challenging to maintain a clear picture of all components and their potential vulnerabilities. By implementing SBOMs, organizations can proactively manage risks associated with open source software, ensure regulatory compliance, and build trust with customers and partners. 

SBOMs enable quick responses to newly discovered vulnerabilities, facilitate automated vulnerability management, and support higher-level security abstractions like cryptographically signed images or source code. In essence, SBOMs provide the critical knowledge needed to navigate the complex world of open source dependencies by enabling us to channel our inner GI Joe—”knowing is half the battle” in software supply chain security.

What are the best practices for securing open source software?

Open source software has become an integral part of modern development practices, offering numerous benefits such as cost-effectiveness, flexibility, and community-driven innovation. However, with these advantages come unique security challenges. To mitigate risks and ensure the safety of your open source components, consider implementing the following best practices:

1. Model Security Scans as Unit Tests

Re-branding security checks as another type of unit test helps developers orient to DevSecOps principles. This approach helps developers re-imagine security as an integral part of their workflow rather than a separate, post-development concern. By modeling security checks as unit tests, you can:

  • Catch vulnerabilities earlier in the development process
  • Reduce the time between vulnerability detection and remediation
  • Empower developers to take ownership of security issues
  • Create a more seamless integration between development and security teams

Remember, the goal is to make security an integral part of the development process, not a bottleneck. By treating security checks as unit tests, you can achieve a balance between rapid development and robust security practices.
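
A minimal sketch of this idea, assuming Syft and Grype are on the PATH and pytest runs from the repository root:

```python
import subprocess


def test_no_high_or_critical_vulnerabilities(tmp_path):
    sbom = tmp_path / "sbom.json"

    # Generate an SBOM for the current source tree (direct + transitive deps).
    with open(sbom, "w") as f:
        subprocess.run(["syft", "dir:.", "-o", "json"], check=True, stdout=f)

    # Grype exits non-zero when it finds vulnerabilities at or above --fail-on.
    scan = subprocess.run(["grype", f"sbom:{sbom}", "--fail-on", "high"])
    assert scan.returncode == 0, "High or Critical vulnerabilities found; see scan output"
```

Because the check lives in the test suite, it runs wherever the tests run and fails the build just like any other broken test.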

2. Review Code Quality

Assessing the quality of open source code is crucial for identifying potential vulnerabilities and ensuring overall software reliability. Consider the following steps:

  • Conduct thorough code reviews, either manually or using automated tools
  • Look for adherence to coding standards and best practices
  • Look for projects developed with secure-by-default principles
  • Evaluate the overall architecture and design patterns used

Remember, high-quality code is generally more secure and easier to maintain.

3. Assess Overall Project Health

A vibrant, active community and committed maintainers are crucial indicators of a well-maintained open source project. When evaluating a project’s health and security:

  • Examine community involvement:
    • Check the number of contributors and frequency of contributions
    • Review the project’s popularity metrics (e.g., GitHub stars, forks, watchers)
    • Assess the quality and frequency of discussions in forums or mailing lists
  • Evaluate maintainer(s) commitment:
    • Check the frequency of commits, releases, and security updates
    • Check for active engagement between maintainers and contributors
    • Review the time taken to address reported bugs and vulnerabilities
    • Look for a clear roadmap or future development plans

4. Maintain a Software Dependency Inventory

Keeping track of your open source dependencies is crucial for managing security risks. To create and maintain an effective inventory:

  • Use tools like Syft or Anchore SBOM to automatically scan your application source code for OSS dependencies
    • Include both direct and transitive dependencies in your scans
  • Generate a Software Bill of Materials (SBOM) from the dependency scan
    • Your dependency scanner should also do this for you
  • Store your SBOMs in a central location that can be searched and analyzed
  • Scan your entire DevSecOps pipeline regularly (ideally every build and deploy)

An up-to-date inventory allows for quicker responses to newly discovered vulnerabilities.
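
As a small sketch of that workflow (the SBOM store URL and token are placeholders for whatever central repository or API you use):

```python
import subprocess

import requests

SBOM_STORE = "https://sbom-store.example.com/api/sboms"  # hypothetical endpoint

# Scan source for direct and transitive dependencies and emit CycloneDX JSON.
sbom = subprocess.run(
    ["syft", "dir:.", "-o", "cyclonedx-json"],
    check=True, capture_output=True, text=True,
).stdout

# Store it centrally so it can be searched and re-scanned later.
requests.post(
    SBOM_STORE,
    data=sbom,
    headers={"Content-Type": "application/json", "Authorization": "Bearer <token>"},
    timeout=30,
)
```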

5. Implement Vulnerability Scanning

Regular vulnerability scanning helps identify known security issues in your open source components. To effectively scan for vulnerabilities:

  • Use tools like Grype or Anchore Secure to automatically scan your SBOMs for vulnerabilities
  • Automate vulnerability scanning tools directly into your CI/CD pipeline
    • At minimum implement vulnerability scanning as containers are built
    • Ideally scan container registries, container orchestrators and even each time a new dependency is added during design
  • Set up alerts for newly discovered vulnerabilities in your dependencies
  • Establish a process for addressing identified vulnerabilities promptly
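
Here is one hedged sketch of the re-scan-and-alert loop, assuming a directory of previously stored SBOMs and a placeholder webhook URL for notifications:

```python
import json
import subprocess
from pathlib import Path

import requests

ALERT_WEBHOOK = "https://hooks.example.com/security-alerts"  # hypothetical webhook

for sbom_path in Path("sboms").glob("*.json"):
    # Re-scan each stored SBOM; Grype pulls an updated vulnerability database.
    scan = subprocess.run(
        ["grype", f"sbom:{sbom_path}", "-o", "json"],
        capture_output=True, text=True,
    )
    findings = json.loads(scan.stdout).get("matches", [])
    critical = sorted({m["vulnerability"]["id"] for m in findings
                       if m["vulnerability"]["severity"] == "Critical"})
    if critical:
        requests.post(ALERT_WEBHOOK, json={
            "text": f"{sbom_path.name}: critical vulnerabilities {critical}",
        }, timeout=30)
```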

6. Implement Version Control Best Practices

Version control practices are crucial for securing all DevSecOps pipelines that utilize open source software:

  • Implement branch protection rules to prevent unauthorized changes
  • Require code reviews and approvals before merging changes
  • Use signed commits to verify the authenticity of contributions

By implementing these best practices, you can significantly enhance the security of your software development pipeline and reduce the risk intrinsic to open source software. By doing this you will be able to have your cake (productivity boost of OSS) and eat it too (without the inherent risk).

How do I integrate open source software security into my development process?

DIY a comprehensive OSS security system

We’ve written about the steps to build an OSS security system from scratch in a previous blog post—below is the TL;DR:

  • Integrate dependency scanning, SBOM generation and vulnerability scanning into your DevSecOps pipeline
  • Implement a data pipeline to manage the influx of security metadata
  • Use automated policy-as-code “security tests” to provide rapid feedback to developers
  • Automate remediation recommendations to reduce cognitive load on developers

Outsource OSS security to a turnkey vendor

Modern software composition analysis (SCA) tools, like Anchore Enterprise, are purpose built to provide you with a comprehensive OSS security system out-of-the-box. All of the same features of DIY but without the hassle of building while maintaining your current manual process.

  • Anchore SBOM: comprehensive dependency scanning, SBOM generation and management
  • Anchore Secure: vulnerability scanning and management
  • Anchore Enforce: automated security enforcement and compliance

Whether you want to scale an understaffed security team to increase its reach across your organization or free your team up to focus on other priorities, the buy-versus-build decision is a straightforward one.

Next Steps

Hopefully, you now have a strong understanding of the risks associated with adopting open source software. If you’re looking to continue your exploration into the intricacies of software supply chain security, Anchore has a catalog of deep dive content on our website. If you’d prefer to get your hands dirty, we also offer a 15-day free trial of Anchore Enterprise.

Learn about the role that SBOMs play in the security of your organization, including open source software security, in this white paper.

SSDF Attestation Template: Battle-tested Compliance Guidance

The CISA Secure Software Development Attestation form, commonly referred to as the SSDF attestation, was released earlier this year, and as with any new compliance framework, knowing the exact wording and details to provide in order to meet the requirements can be difficult.

We feel you here. Anchore is heavily invested in the public sector and had to generate our own SSDF attestation for our platform, Anchore Enterprise. Having gone through the process ourselves and working with a number of customers that requested our expertise on this matter, we developed a document that helps you put together an SSDF attestation that will make a compliance officer’s heart sing.

Our goal with this document is to make SSDF attestation as easy as possible and demonstrate how Anchore Enterprise is an “easy button” that you can utilize to satisfy the majority of evidence needed to achieve compliance. We have already submitted our own SSDF attestation and been approved, so we have confidence these answers will help get you over the line. You can find our SSDF attestation guide on our docs site.

Explore SSDF attestation in depth with this eBook and learn how your organization can benefit from the framework.

SSDF Attestation 101: A Practical Guide for Software Producers

How do I fill out the SSDF attestation form?

This is the difficult part, isn’t it? The SSDF attestation form looks very simple at a glance, but it has a number of sections that expect evidence to be attached that details how your organization secures both your development environments and production systems. Like all compliance standards, it doesn’t specify what will or won’t meet compliance for your organization, hence the importance of the evidence.

At Anchore, we both experienced this ourselves and helped our customers navigate this ambiguity. Out of these experiences we created a document that breaks down each item and what evidence was able to achieve compliance without being rejected by a compliance officer.

We have published this document on our Docs site for all other organizations to use as a template when attempting to meet SSDF attestation compliance.

Structure of the SSDF attestation form

The SSDF attestation is divided into 3 sections:

Section I

The first section is very short: it is where you list the type of attestation you are submitting and information about the product whose compliance you are attesting to.

Section II

This section is also short; it collects contact information. CISA wants to know how to get in contact with your organization and who is responsible for any questions or concerns that need to be addressed.

Section III

For all intents and purposes, Section III is the SSDF attestation form. This is where you will provide all of the technical supporting information to demonstrate that your organization complies with the requirements set out in the SSDF attestation form. 

The guide that Anchore has developed is focused specifically on how to fill out this section in a way that will meet the expectations of CISA compliance officers.

Where do I submit the SSDF attestation form?

If you are a US government vendor, you can submit your organization’s completed form on the Repository for Software Attestations and Artifacts. You will need an account, which can be requested on the login page. Account creation normally takes a few days, so give yourself at least a week of lead time. This can be done ahead of time while you’re gathering the information to fill out your form.

It’s also possible you will receive requests directly to pass along the form. Not every agency will use the repository. It’s even possible you will have non-government customers asking for the form. While it’s being mandated by the government, there’s a lot of good evidence in the document.

What tooling do I need to meet SSDF attestation compliance?

There are many ways to meet the technical requirements of SSDF attestation, but there is also a well-worn path. Anchore utilizes modern DevSecOps practices and assumes that the majority of our customers do as well. Below is a list of common DevSecOps tools that are typically used to help meet SSDF compliance.

Endpoint Protection

Description: Endpoint protection tools secure individual devices (endpoints) that connect to a network. They protect against malware, detect and prevent intrusions, and provide real-time monitoring and response capabilities.

SSDF Requirement: [3.1] — “Separating and protecting each environment involved in developing and building software”

Examples: Jamf, Elastic, SentinelOne, etc.

Source Control

Description: Source control systems manage changes to source code over time. They help track modifications, facilitate collaboration among developers, and maintain different versions of code.

SSDF Requirement: [3.1] — “Separating and protecting each environment involved in developing and building software”

Examples: GitHub, GitLab, etc.

CI/CD Build Pipeline

Description: Continuous Integration/Continuous Deployment (CI/CD) pipelines automate the process of building, testing, and deploying software. They help ensure consistent and reliable software delivery.

SSDF Requirement: [3.1] — “Separating and protecting each environment involved in developing and building software”

Examples: Jenkins, GitLab, GitHub Actions, etc.

Single Sign-on (SSO)

Description: SSO allows users to access multiple applications with one set of login credentials. It enhances security by centralizing authentication and reducing the number of attack vectors.

SSDF Requirement: [3.1] — “Enforcing multi-factor authentication and conditional access across the environments relevant to developing and building software in a manner that minimizes security risk;”

Examples: Okta, Google Workspace, etc.

Security Information and Event Management (SIEM)

Description: Monitoring tools provide real-time visibility into the performance and security of systems and applications. They can detect anomalies, track resource usage, and alert on potential issues.

SSDF Requirement: [3.1] — “Implementing defensive cybersecurity practices, including continuous monitoring of operations and alerts and, as necessary, responding to suspected and confirmed cyber incidents;”

Examples: Elasticsearch, Splunk, Panther, RunReveal, etc.

Audit Logging

Description: Audit logging captures a record of system activities, providing a trail of actions performed within the software development and build environments.

SSDF Requirement: [3.1] — “Regularly logging, monitoring, and auditing trust relationships used for authorization and access: i) to any software development and build environments; and ii) among components within each environment;”

Examples: Typically a built-in feature of CI/CD, SCM, SSO, etc.

Secrets Encryption

Description: Secrets encryption tools secure sensitive information such as passwords, API keys, and certificates used in the development and build processes.

SSDF Requirement: [3.1] — “Encrypting sensitive data, such as credentials, to the extent practicable and based on risk;”

Examples: Typically a built-in feature of CI/CD and SCM

Secrets Scanning

Description: Secrets scanning tools automatically detect and alert on exposed secrets in code repositories, preventing accidental leakage of sensitive information.

SSDF Requirement: [3.1] — “Encrypting sensitive data, such as credentials, to the extent practicable and based on risk;”

Examples: Anchore Secure or other container security platforms

OSS Component Inventory (+ Provenance)

Description: These tools maintain an inventory of open-source software components used in a project, including their origins and lineage (provenance).

SSDF Requirement: [3.3] — “The software producer maintains provenance for internal code and third-party components incorporated into the software to the greatest extent feasible;”

Examples: Anchore SBOM or other SBOM generation and management platform

Vulnerability Scanning

Description: Vulnerability scanning tools automatically detect security weaknesses in code, dependencies, and infrastructure.

SSDF Requirement: [3.4] — “The software producer employs automated tools or comparable processes that check for security vulnerabilities. In addition: a) The software producer operates these processes on an ongoing basis and prior to product, version, or update releases;”

Examples: Anchore Secure or other software composition analysis (SCA) platform

Vulnerability Management and Remediation Runbook

Description: This is a process and set of guidelines for addressing discovered vulnerabilities, including prioritization and remediation steps.

SSDF Requirement: [3.4] — “The software producer has a policy or process to address discovered security vulnerabilities prior to product release; and The software producer operates a vulnerability disclosure program and accepts, reviews, and addresses disclosed software vulnerabilities in a timely fashion and according to and timelines specified in the vulnerability disclosure program or applicable policies.”

Examples: This is not necessarily a tool but an organizational SLA on security operations. For reference, Anchore has included a screenshot from our vulnerability management guide.

Next Steps

If your organization currently provides software services to a federal agency or is looking to in the future, Anchore is here to help you in your journey. Reach out to our team and learn how you can integrate continuous and automated compliance directly into your CI/CD build pipeline with Anchore Enterprise.

Learn about the importance of both FedRAMP and SSDF compliance for selling to the federal government.



Anchore at Billington CyberSecurity Summit: Automating Defense in the AI Era

Are you gearing up for the 15th Annual Billington CyberSecurity Summit? So are we! The Anchore team will be front and center in the exhibition hall throughout the event, ready to showcase how we’re revolutionizing cybersecurity in the age of AI.

This year’s summit promises to be a banger, highlighting the evolution in cybersecurity as the latest iteration of AI takes center stage. While large language models (LLMs) like ChatGPT have been making waves across industries, the cybersecurity realm is still charting its course in this new AI-driven landscape. But make no mistake – this is no time to rest on our laurels.

As blue teams explore innovative ways to harness LLMs, cybercriminals are working overtime to weaponize the same technology. If there’s one lesson we’ve learned from every software and AI hype cycle, it’s that automation is key. As adversaries incorporate novel automations into their tactics, defenders must not just keep pace—they need to get ahead.

At Anchore, we’re all-in with this strategy. The Anchore Enterprise platform is purpose-built to automate and scale cybersecurity across your entire software development lifecycle. By automating continuous vulnerability scanning and compliance in your DevSecOps pipeline, we’re equipping warfighters with the tools they need to outpace adversaries that never sleep.

Ready to see how Anchore can transform your cybersecurity posture in the AI era? Stop by our booth for a live demo. Don’t miss this opportunity to stay ahead of the curve—book a meeting (below) with our team and take the first step towards a more secure tomorrow.

Anchore at the Billington CyberSecurity Summit

Date: September 3–6, 2024

Location: The Ronald Reagan Building and International Trade Center in Washington, DC

Our team is looking forward to meeting you! Book a demo session in advance to ensure a preferred slot.

Anchore’s Showcase: DevSecOps and Automated Compliance

We will be demonstrating the Anchore Enterprise platform at the event. Our showcase will focus on:

  1. Software Composition Analysis (SCA) for Cloud-Native Environments: Learn how our tools can help you gain visibility into your software supply chain and manage risk effectively.
  2. Automated SBOM Generation and Management: Discover how Anchore simplifies the creation and maintenance of Software Bills of Materials (SBOMs), the foundational component in software supply chain security.
  3. Continuous Scanning for Vulnerabilities, Secrets, and Malware: See our advanced scanning capabilities in action, designed to protect your applications across the DevSecOps pipeline or DoD software factory.
  4. Automated Compliance Enforcement: Experience how Anchore can streamline compliance with key standards such as cATO, RAISE 2.0, NIST, CISA, and FedRAMP, saving time and reducing human error.

We invite all attendees to visit our booth to learn more about how Anchore’s DevSecOps and automated compliance solutions can enhance your organization’s security posture in the age of AI and cloud computing.

Event Highlights

Still on the fence about whether to attend? Here is a quick rundown to help you decide. This year’s summit, themed “Advancing Cybersecurity in the AI Age,” will feature more than 40 sessions and breakouts, covering critical topics such as:

  • The increasing impact of artificial intelligence on cybersecurity
  • Cloud security challenges and solutions
  • Proactive approaches to technical risk management
  • Emerging cyber risks and defense strategies
  • Data protection against breaches and insider threats
  • The intersection of cybersecurity and critical infrastructure

The event will showcase fireside chats with top government officials, including FBI Deputy Director Paul Abbate, Chairman of the Joint Chiefs of Staff General CQ Brown, Jr., and U.S. Cyber Command Commander General Timothy D. Haugh, among others.

Next Steps and Additional Resources

Join us at the Billington Cybersecurity Summit to network with industry leaders, gain valuable insights, and explore innovative technologies that are shaping the future of cybersecurity. We look forward to seeing you there!

If you are interested in the Anchore Enterprise platform and can’t wait till the show, here are some resources to help get you started:

Learn about best practices that are setting new standards for security in DoD software factories.

Anchore Awarded DoD ESI DevSecOps Phase II Agreement

The Department of Defense (DoD) Enterprise Software Initiative (ESI) has awarded Anchore inclusion in its DevSecOps program, which is part of the ESI’s DevSecOps Phase II enterprise agreements.

The DoD ESI’s main objective is to streamline the acquisition process for software and services across the DoD, in order to gain significant cost savings and improve efficiency. Admittance into the ESI program validates Anchore’s commitment to be a trusted partner to the DoD, delivering advanced container vulnerability scanning as well as SBOM management solutions that meet the most stringent compliance and security requirements.

Anchore’s inclusion in the DoD ESI DevSecOps Phase II agreement is a testament to our commitment to delivering cutting-edge software supply chain security solutions. This milestone enables us to more efficiently support the DoD’s critical missions by providing them with the tools they need to secure their software development pipelines. Our continued partnership with the DoD reinforces Anchore’s position as a trusted leader in SBOM-powered DevSecOps and container security.

—Tim Zeller, EVP Sales & Marketing

The agreements also included DevSecOps luminaries HashiCorp and Rancher Government, as well as CloudBees, Infoblox, GitLab, CrowdStrike, and F5 Networks; all are now part of the preferred vendor list for DoD missions that require cybersecurity solutions in general and software supply chain security in particular.

Anchore is steadily growing its presence on federal contracts and catalogs such as Iron Patriot & Minerva, GSA, 2GIT, NASA SEWP, ITES, and most recently JFAC (Joint Federated Assurance Center).

What does this mean?

Similar to the GSA Advantage marketplace, DoD missions can now procure Anchore through the fully negotiated and approved ESI Agreements on the Solutions for Enterprise-Wide Procurement (SEWP) Marketplace. 

Anchore’s History with DoD

This award continues Anchore’s deepening relationship with the DoD. Since 2020, the DoD has vetted and approved Anchore’s container vulnerability scanning tools. Anchore is named in both the DoD Container Image Creation and Deployment Guide and the DoD Container Hardening Process Guide as a recommended solution.

The same year, Anchore was selected by the US Air Force’s Platform One to become the software supply chain vendor to implement the best practices in the above guides for all software built on the platform. Read our case study on how Anchore partnered with Platform One to build the premier DevSecOps platform for the DoD.

The following year, Anchore won the Small Business Innovation Research (SBIR) Phase III contract with Platform One to integrate directly into the Iron Bank container image process. If your image has achieved Iron Bank certification, it is because Anchore’s solution has given it a passing grade. Read more about this DevSecOps success story in our case study with the Iron Bank.

Due to the success of Platform One within the US Air Force, in 2022 Anchore partnered with the US Navy to secure the Black Pearl DevSecOps platform. Similar to Platform One, Black Pearl is the go-to standard for modern software development within the Department of the Navy (DON).

As Anchore continued to expand its relationship with the DoD and federal agencies, its offerings became available for purchase through the online government marketplaces and contracts such as GSA Advantage and Second Generation IT Blanket Purchase Agreements (2GIT), NASA SEWP, Iron Patriot/Minerva, ITES and JFAC. The ESI’s DevSecOps Phase II award was built on the back of all of the previous success stories that came before it. 

Achieving ATO is now easier with the inclusion of Anchore into the DoD ESI. Read our white paper on DoD software factory best practices to reach cATO or RAISE 2.0 compliance in days versus months.

We advise on best practices that are setting new standards for security and efficiency in DoD software factories, such as hardening container images, automating policy enforcement, and continuously monitoring for vulnerabilities.

Anchore Enterprise 5.8 Adds KEV Enrichment Feed

Today we have released Anchore Enterprise 5.8, featuring the integration of the U.S. Cybersecurity and Infrastructure Security Agency’s (CISA) Known Exploited Vulnerabilities (KEV) catalog as a new vulnerability feed.

Previously, Anchore Enterprise matched software libraries and frameworks inside applications against vulnerability databases such as the National Vulnerability Database (NVD), the GitHub Advisory Database, or individual vendor feeds. With Anchore Enterprise 5.8, customers can augment their vulnerability feeds with the KEV catalog without having to leave the dashboard. In addition, teams can automatically flag exploitable vulnerabilities as software is being developed or prevent affected build artifacts from being released into production.

Before we jump into what all of this means, let’s take a step back and get some context to KEV and its impact on DevSecOps pipelines.

What is CISA KEV?

The KEV (Known Exploited Vulnerabilities) catalog is a critical cybersecurity resource maintained by the U.S. Cybersecurity and Infrastructure Security Agency (CISA). It is a curated database of vulnerabilities that are known to be actively exploited in the wild. While addressing these vulnerabilities is mandatory for U.S. federal agencies under Binding Operational Directive 22-01, the KEV catalog serves as an essential public resource for improving cybersecurity at any organization.

The primary difference between CISA KEV and a standard vulnerability feed (e.g., the CVE program) is the qualifier “actively exploited.” Actively exploited vulnerabilities are being used by attackers to compromise systems right now; they are real, and your organization may be standing in the line of fire. The CVE program, by contrast, lists vulnerabilities that may or may not have any available exploits. Because of this imminent threat, actively exploited vulnerabilities are considered the highest risk outside of an active security incident.

The benefits of KEV enrichment

The KEV catalog offers significant benefits to organizations striving to improve their cybersecurity posture. One of its primary advantages is its high signal-to-noise ratio. By focusing exclusively on vulnerabilities that are actively being exploited in the wild, the KEV cuts through the noise of countless potential vulnerabilities, allowing developers and security teams to prioritize their efforts on the most critical and immediate threats. This focused approach ensures that limited resources are allocated to addressing the vulnerabilities that pose the greatest risk, significantly enhancing an organization’s security efficiency.

Moreover, the KEV can be leveraged as a powerful tool in an organization’s development and deployment processes. By using the KEV as a trigger for build pipeline gates, companies can prevent exploitable vulnerabilities from being promoted to production environments. This proactive measure adds an extra layer of security to the software development lifecycle, reducing the risk of deploying vulnerable code. 
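
To make the pipeline-gate idea concrete, here is a minimal, illustrative CI fragment that downloads the KEV catalog and fails the build if any CVE reported by an earlier scan step appears in it. Treat the feed URL, the JSON field names, and the cves.txt input as assumptions to verify and adapt to your own pipeline; this is a sketch of the concept, not a drop-in control.

  steps:
    - name: Gate the build on the CISA KEV catalog
      run: |
        # Download the KEV catalog (feed URL is an assumption; confirm the current one on cisa.gov)
        curl -sSf -o kev.json \
          https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json
        # Pull out the CVE IDs (field names assume the published KEV JSON schema)
        jq -r '.vulnerabilities[].cveID' kev.json > kev-ids.txt
        # cves.txt (one CVE ID per line) is assumed to be produced by an earlier scan step
        if grep -Fxf kev-ids.txt cves.txt; then
          echo "Actively exploited vulnerability detected; blocking promotion." >&2
          exit 1
        fi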

Additionally, while adherence to the KEV is not yet a universal compliance requirement, it represents a security best practice that forward-thinking organizations are adopting. Given the trend of such practices evolving into compliance mandates, integrating the KEV into security protocols can be seen as a form of future-proofing, easing the transition if and when such practices become formal requirements.

How Anchore Enterprise delivers KEV enrichment

With Anchore Enterprise, CISA KEV is now a vulnerability feed similar to any other data feed that gets imported into the system. Anchore Enterprise can be configured to pull this directly from the source as part of the deployment feed service.

To make use of the new KEV data, we have added a rule option in the Anchore Policy Engine that allows a STOP or WARN action to be configured when a detected vulnerability appears on the KEV list. When any application build, registry store, or runtime deploy occurs, Anchore Enterprise evaluates the artifact’s SBOM against the security policy. If the SBOM has been annotated with a KEV entry, the policy engine can return a STOP value to instruct the build pipeline to fail that step and report the KEV entry as the source of the violation.
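
For teams that manage policies as code rather than through the UI, the new rule is conceptually just one more entry in a policy bundle. The sketch below is illustrative only: Anchore Enterprise policy bundles are JSON documents edited in the UI or via the API, and the gate and trigger identifiers shown here simply mirror the UI labels, so confirm the exact names in your 5.8 policy editor.

  # Illustrative sketch only; not a verbatim policy excerpt
  rules:
    - gate: vulnerabilities   # the gate selected in the UI
      trigger: kev_list       # assumed identifier for the "kev list" trigger
      action: STOP            # or WARN to surface the finding without failing the pipeline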

To configure the KEV feed as a trigger in the policy engine, first select vulnerabilities as the gate, then kev list as the trigger, and finally choose an action.

Anchore Enterprise dashboard policy engine rule set configuration showing vulnerabilities as the gate value and the CISA KEV catalog as the trigger value.

After you save the new rule, you will see the kev list rule as part of the entire policy.

Anchore Enterprise 5.8 policy engine dashboard showing all rules for the default policy including the CISA KEV catalog rule at the top (highlighted in the red square).

After scanning a container with the policy that has the kev list rule in it, you can view all dependencies that match the kev list vulnerability feed.

Anchore Enterprise 5.8 vulnerability scan report with policy enrichment and policy actions. All software dependencies are matched against the CISA KEV catalog of known exploitable vulnerabilities and the assigned action is reported in the dashboard.

Next Steps

To stay on top of our releases, sign up for our monthly newsletter or follow our LinkedIn account. If you are already an Anchore customer, please reach out to your account manager to upgrade to 5.8 and gain access to KEV support. We also offer a 15-day free trial to get hands-on with Anchore Enterprise, or you can reach out to us for a guided tour.

A Guide to FedRAMP in 2025: FAQs & Key Takeaways

This blog post has been archived; its content has been incorporated into the supporting FedRAMP pillar page on the Anchore blog.

DevSecOps Evolution: How DoD Software Factories Are Reshaping Federal Compliance

Anchore’s Vice President of Security, Josh Bressers, recently did an interview with Fed Gov Today about the role of automation in DevSecOps and how it is impacting the US federal government. We’ve condensed the highlights of the interview into a snackable blog post below.

Automation is the foundation of DevSecOps

Automation isn’t just a buzzword; it is the foundation of DevSecOps. It is what gives meaning to marketing taglines like “shift left”. The point of DevSecOps is to create automated workflows that provide feedback to software developers as they are writing the application. This unwinds the previous practice of artificially grouping all of the “compliance” or “security” tasks into large blocks at the end of the development process. The challenge with that pattern is that feedback arrives late, after design decisions have been made that are difficult to undo once development is complete. By inverting the narrative and automating feedback as design decisions are made, developers are able to prevent compliance or security issues before they become deeply embedded in the software.
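
As a deliberately minimal illustration of that feedback loop, the sketch below uses Anchore’s open source scan-action to check every pull request and fail the check when a high-severity vulnerability is found, so the developer sees the problem while the change is still under review. It is an assumption-laden sketch rather than a reference configuration: confirm the action’s inputs against its documentation and tune the severity threshold to your team.

  name: PR security feedback

  on: [pull_request]

  jobs:
    scan:
      runs-on: ubuntu-latest
      steps:
      - uses: actions/checkout@v4
      - name: Scan the checked-out source for vulnerable dependencies
        uses: anchore/scan-action@v3
        with:
          path: "."               # scan the repository contents
          fail-build: true        # fail the PR check so feedback reaches the developer immediately
          severity-cutoff: high   # only fail on high or critical findings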

DoD Software Factories are leading the way in DevSecOps adoption

The US Department of Defense (DoD) is at the forefront of implementing DevSecOps through its DoD software factory model. The US Navy’s Black Pearl and the Air Force’s Platform One are perfect examples of this model. These organizations are leveraging automation to streamline compliance work. Instead of relying on manual documentation ahead of Authority to Operate (ATO) reviews, automated workflows built directly into the software development pipeline provide direct feedback to developers. This approach has proven highly effective, as Bressers emphasizes in his interview:

It’s obvious why the DoD software factory model is catching on. It’s because they work. It’s not just talk, it’s actually working. There’s many organizations that have been talking about DevSecOps for a long time. There’s a difference between talking and doing. Software factories are doing and it’s amazing.

—Josh Bressers, VP of Security, Anchore

Benefits of compliance automation

By giving compliance the same treatment as security (i.e., automate all the things), tasks that once took weeks or even months can now be completed in minutes or hours. This dramatic reduction in time-to-compliance not only accelerates development cycles but also allows teams to focus on collaboration and solution delivery rather than getting bogged down in procedural details. The result is a “shift left” approach that extends beyond security to compliance as well. When compliance is integrated early in the development process, the benefits cascade down the entire development waterfall.

Compliance automation is shifting the policy checks left into the software development process. What this means is that once your application is finished; instead of the compliance taking weeks or months, we’re talking hours or minutes.

—Josh Bressers, VP of Security, Anchore

Areas for improvement

While automation is crucial, there are still several areas for improvement in DevSecOps environments. Key focus areas include ensuring developers fully understand the automated processes, improving communication between team members and agencies, and striking the right balance between automation and human oversight. Bressers emphasizes the importance of letting “people do people things” while leveraging computers for tasks they excel at. This approach fosters genuine collaboration and allows teams to focus on meaningful work rather than simply checking boxes to meet compliance requirements.

Standardizing communication workflows with integrated developer tools

Software development pipelines are primarily platforms to coordinate the work of distributed teams of developers. They act like old-fashioned switchboard operators that connect one member of the development team to the next as they hand off work in the development production line. Leveraging developer tooling like GitLab or GitHub standardizes communication workflows. These platforms provide mechanisms for different team members to interact across various stages of the development pipeline. Teams can easily file and track issues, automatically pass or fail tests (e.g., compliance tests), and maintain a searchable record of discussions. This approach facilitates better understanding between developers and those identifying issues, leading to more efficient problem-solving and collaboration.

The government getting ahead of the private sector: an unexpected narrative inversion

In a surprising turn of events, Bressers points out that government agencies are now leading the way in DevSecOps implementation by integrating automated compliance. Historically often seen as technologically behind, federal agencies, through the DoD software factory model, are setting new standards that are likely to influence the private sector. As these practices become more widespread, contractors and private companies working with the government will need to adapt to these new requirements. This shift is evident in recent initiatives like the SSDF attestation questionnaire and White House Executive Order (EO) 14028. These initiatives are setting new expectations for federal contractors, signaling a broader move towards making compliance a native pillar of DevSecOps.

This is one of the few instances in recent memory where the government is truly leading the way. Historically the government has often been the butt of jokes about being behind in technology but these DoD software factories are absolutely amazing. The next thing that we’re going to see is the compliance expectations that are being built into these DoD software factories will seep out into the private sector. The SSDF attestation and the White House Executive Order are only the beginning. Ironically my expectation is everyone is going to have to start paying attention to this, not just federal agencies.

—Josh Bressers, VP of Security, Anchore

Next Steps

If you’re interested in learning more about how to future-proof your software supply chain with compliance automation via the DoD software factory model, be sure to read our white paper.

If you’d like to hear the interview in full, be sure to watch it on Fed Gov Today’s YouTube channel.

High volume image scanning and vulnerability management at the Iron Bank (Platform One)

The Iron Bank provides Platform One and any US Department of Defense (DoD) agency with a hardened and centralized container image repository that supports the end-to-end lifecycle needed for secure software development. Anchore and the Iron Bank have been collaborating since 2020 to balance deployment velocity and policy compliance while maintaining rigorous security standards and adapting to new security threats.

The Challenge

The Iron Bank development team is responsible for the integrity and security of 1,800 base images that are provided to build and create software applications across the DoD. They face difficult tasks such as:

  • Providing hardened components for downstream applications across the DoD
  • Meeting rigorous security standards crucial for military systems
  • Improving deployment frequency while maintaining policy compliance
  • Reducing the burden of false positives on the development team

Camdon Cady, Chief Technology Officer at Platform One:

People want to be security minded, and they want to do the right thing. But what they really want is tooling that helps them to do that with all the necessary information in one place. That’s why we looked to Anchore for help.

The Solution

Anchore’s engineering team is deeply embedded with the Iron Bank infrastructure and development team to improve and maintain DevSecOps standards, and Anchore Enterprise is their software supply chain security tool of choice.

The Results: Sustainable security at DevOps speed

The partnership between Iron Bank and Anchore has yielded impressive results:

  • Reduced False Positives: The introduction of an exclusion feed captured over 12,000 known false positives, significantly reducing the security assessment load.
  • Improved SBOM Accuracy: Custom capabilities like SBOM Hints and SBOM Corrections allow for more precise component identification and vulnerability mapping.
  • Standardized Compliance: A jointly developed custom policy enforces the DoD Container Hardening requirements consistently across all images.
  • Enhanced Scanning Capabilities: Additions like time-based allowlisting, content hints, and image scanning have expanded Iron Bank’s security coverage.
  • Streamlined Processes: The standardized scanning process adheres to the DoD’s Container Hardening Guide while delivering high-quality vulnerability and compliance findings.

Even though security is important for all organizations, the stakes are higher for the DoD. What we need is a repeatable development process. It’s imperative that we have a standardized way of building secure software across our military agencies.

Camdon Cady, Chief Technology Officer at Platform One

Download the full case study to learn more about how Anchore Enterprise can help your organization achieve a proactive security stance while maintaining development velocity.

How Infoblox Scaled Product Security and Compliance with Anchore Enterprise

In today’s fast-paced software development world, maintaining the highest levels of security and compliance is a daunting challenge. Our new case study highlights how Infoblox, a leader in Enterprise DDI (DNS, DHCP, IPAM), successfully scaled their product security and compliance efforts using Anchore Enterprise. Let’s dive into their journey and the impressive results they achieved.

The Challenge: Scaling security in high-velocity environments

Infoblox faced several critical challenges in their product security efforts:

  • Implementing “shift-left” security at scale for 150 applications developed by over 600 engineers with a security team of 15 (a 40:1 ratio!)
  • Managing vulnerabilities across thousands of containers produced monthly
  • Maintaining multiple compliance certifications (FedRAMP, SOC 2, StateRAMP, ISO 27001)
  • Integrating seamlessly into existing DevOps workflows

“When I first started, I was manually searching GitHub repos for references to vulnerable libraries,” recalls Sukhmani Sandhu, Product Security Engineer at Infoblox. This manual approach was unsustainable and prone to errors.

The Solution: Anchore Enterprise

To address these challenges, Infoblox turned to Anchore Enterprise to provide:

  • Container image scanning with low false positives
  • Comprehensive vulnerability and CVE management
  • Native integrations with Amazon EKS, Harbor, and Jenkins CI
  • A FedRAMP, SOC 2, StateRAMP, and ISO compliant platform

Chris Wallace, Product Security Engineering Manager at Infoblox, emphasizes the importance of accuracy: “We’re not trying to waste our team or other teams’ time. We don’t want to report vulnerabilities that don’t exist. A low false-positive rate is paramount.”

Impressive Results

The implementation of Anchore Enterprise transformed Infoblox’s product security program:

  • 75% reduction in time for manual vulnerability detection tasks
  • 55% reduction in hours allocated to retroactive vulnerability remediation
  • 60% reduction in hours spent on compliance tasks
  • Empowered the product security team to adopt a proactive, “shift-left” security posture

These improvements allowed the Infoblox team to focus on higher-value initiatives like automating policy and remediation. Developers even began self-adopting scanning tools during development, catching vulnerabilities before they entered the build pipeline.

“We effectively had no tooling before Anchore. Everything was manual. We reduced the amount of time on vulnerability detection tasks by 75%,” says Chris Wallace.

Conclusion: Scaling security without compromise

Infoblox’s success story demonstrates that it’s possible to scale product security and compliance efforts without compromising on development speed or accuracy. By leveraging Anchore Enterprise, they transformed their security posture from reactive to proactive, significantly reduced manual efforts, and maintained critical compliance certifications.

Are you facing similar challenges in your organization? Download the full case study and take the first step towards a secure, compliant, and efficient development environment. Or learn more about how Anchore’s container security platform can help your organization.

Modernizing FedRAMP: GSA’s Roadmap to Streamline Authorization

If you’ve ever thought that the FedRAMP (Federal Risk and Authorization Management Program) authorization process is challenging and laborious, things may be getting better. The General Services Administration (GSA) has publicly committed to improving the authorization process by publishing a public roadmap to modernize FedRAMP.

The purpose of FedRAMP is to act as a central intermediary between federal agencies and cloud service providers (CSP) in order to make it easier for agencies to purchase software services and for CSPs to sell software services to agencies. By being the middleman, FedRAMP creates a single marketplace that reduces the amount of time it takes for an agency to select and purchase a product. From the CSP perspective, FedRAMP becomes a single standard that they can target for compliance and after achieving authorization they get access to 200+ agencies that they can sell to—a win-win.

Unfortunately, that promised land wasn’t the typical experience for either side of the exchange. Since FedRAMP’s inception in 2011, the demand for cloud services has increased significantly. Cloud was still in its infancy at the birth of FedRAMP and the majority of federal agencies still procured software with perpetual licenses rather than as a cloud service (e.g., SaaS). In the following 10+ years that have passed, that preference has inverted and now the predominant delivery model is infrastructure/platform/software-as-a-service.

This has led to an environment where new cloud services pop up every year but federal agencies can’t access them through the streamlined FedRAMP marketplace. On the other side of the coin, CSPs want access to the market of federal agencies that can only procure software via FedRAMP, but the process of becoming FedRAMP certified is so complex and laborious that it undercuts the value of gaining access to this market.

Luckily, the GSA isn’t resting on its laurels. Based on feedback from all stakeholders, it is prioritizing a revamp of the FedRAMP authorization process to take into account the shifting preferences of the market. To help you get a sense of what is happening, how quickly you can expect changes, and the benefits of the initiative, we have compiled a comprehensive FAQ.

Frequently Asked Questions (FAQ)

How soon will the benefits of FedRAMP modernization be realized?

Optimistically, changes will roll out over the next 18 months and be completed by the end of 2025. See the full rollout schedule on the public roadmap.

Who does this impact?

  • Federal agencies
  • Cloud service providers (CSPs)
  • Third-party assessment organizations (3PAOs)

What are the benefits of the FedRAMP modernization initiative?

TL;DR—For agencies

  • Increased vendor options within the FedRAMP marketplace
  • Reduced wait time for CSPs in authorization process

TL;DR—For CSPs

  • Reduced friction during the authorization process
  • More clarity on how to meet security requirements
  • Less time and cost spent on the authorization process

TL;DR—For 3PAOs

  • Reduced friction between 3PAO and CSP during authorization process
  • Increased clarity on how to evaluate CSPs

What prompted the GSA to improve FedRAMP now?

GSA is modernizing FedRAMP because of feedback from stakeholders. Both federal agencies and CSPs lodged complaints about the current FedRAMP process. Agencies wanted more CSPs in the FedRAMP marketplace that they could then easily procure. CSPs wanted a more streamlined process so that they could get into the FedRAMP marketplace faster. The point of friction was the FedRAMP authorization process, which hasn’t evolved at the same pace as the transition from the on-premise, perpetual-license delivery model to the rapid, cloud-services model.

How will GSA deliver on its promises to modernize FedRAMP?

The full list of initiatives can be found in their public product roadmap document but the highlights are:

  • Taking a customer-centric approach that reduces friction in the authorization process based on customer interviews
  • Publishing clear guidance on how to meet core security requirements
  • Streamlining authorization process to reduce bottlenecks based on best practices from agencies that have developed a strong authorization process
  • Moving away from lengthy documents and towards a data-first foundation to enable automation of the authorization process for CSPs and 3PAOs

Wrap-Up

The GSA has made a commitment to being transparent about the improvements to the modernization process. Anchore, as well as the rest of the public sector stakeholders, will be watching and holding the GSA accountable. Follow this blog or the Anchore LinkedIn page to stay updated on progress.

If the 18-month timeline is longer than you’re willing to wait, Anchore is already an expert in supporting organizations that are seeking FedRAMP authorization. Anchore Enterprise is a modern, cloud-native software composition analysis (SCA) platform that both meets FedRAMP compliance standards and helps evaluate whether your software supply chain is FedRAMP compliant. If you’re interested in learning more, download “FedRAMP Requirements Checklist for Container Vulnerability Scanning” or learn more about how Anchore Enterprise has helped organizations like Cisco achieve FedRAMP compliance in weeks versus months.

Add SBOM Generation to Your GitHub Project with Syft

According to the latest figures, GitHub has over 100 million developers working on over 420 million repositories, with at least 28M being public repos. Unfortunately, very few software repos contain a Software Bill of Materials (SBOM) inventory of what’s been released.

SBOMs (Software Bill of Materials) are crucial in a repository as they provide a comprehensive inventory of all components, improving transparency and traceability in the software supply chain. This allows developers and security teams to quickly identify and address vulnerabilities, enhancing overall security and compliance with regulatory standards.

Anchore developed the sbom-action GitHub Action to automatically generate an SBOM using Syft. Developers can quickly add the action via the GitHub Marketplace and pretty much fire and forget the setup.

What is an SBOM?

Anchore developers have written plenty over the years about What is an SBOM, but here is the tl;dr:

An SBOM (Software Bill of Materials) is a detailed list of all software project components, libraries, and dependencies. It serves as a comprehensive inventory that helps understand the software’s structure and the origins of its components.

An SBOM in your project enhances security by quickly identifying and mitigating vulnerabilities in third-party components. Additionally, it ensures compliance with regulatory standards and provides transparency, essential for maintaining trust with stakeholders and users.

Introducing Anchore’s SBOM GitHub Action

Adding an SBOM is a cinch with the GitHub Action for SBOM Generation provided by Anchore. Once added to a repo, the action executes a Syft scan of the workspace directory and uploads the resulting SBOM, in SPDX format, as a workflow artifact.

The SBOM Action can scan a Docker image directly from the container registry with or without registry credentials specified. Alternatively, it can scan a directory full of artifacts or a specific single file.

The action will also detect if it’s being run during the GitHub release and upload the SBOM as a release asset. Easy!

How to Add the SBOM GitHub Action to Your Project

Assuming you already have a GitHub account and repository setup, adding the SBOM action is straightforward.

Anchore SBOM Action in the GitHub Marketplace.
  • Navigate to the GitHub Marketplace
  • Search for “Anchore SBOM Action” or visit Anchore SBOM Action directly
  • Add the action to your repository by clicking the green “Use latest version” button
  • Configure the action in your workflow file

That’s it!

Example Workflow Configuration

Here’s a bare-bones configuration for running the Anchore SBOM Action on each push to the repo.

  name: Generate SBOM

  on: [push]

  jobs:
    build:
      runs-on: ubuntu-latest
      steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Anchore SBOM Action
        uses: anchore/sbom-action@v0

There are further options detailed on the GitHub Marketplace page for the action. For example, use output-file to specify the resulting SBOM file name and format to select whether to build an SPDX or CycloneDX formatted SBOM.
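
As a rough sketch of how those two options can be added to the step shown above (the file name is arbitrary, and the exact accepted format values should be confirmed on the Marketplace page):

      - name: Anchore SBOM Action
        uses: anchore/sbom-action@v0
        with:
          output-file: sbom.spdx.json   # name of the SBOM file written to the workspace
          format: spdx-json             # or a CycloneDX variant such as cyclonedx-json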

Results and Benefits

After the GitHub action is set up, the SBOM will be generated on each push or with every release, depending on your configuration.

Once the SBOM is published on your GitHub repo, users can analyze it to identify and address vulnerabilities in third-party components. They can also use it to ensure compliance with security and regulatory standards, maintaining the integrity of the software supply chain.
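
One common follow-up, sketched below under the assumption that output-file was set to sbom.spdx.json as in the previous snippet, is to feed the freshly generated SBOM straight into Grype in the same workflow so the pipeline that produces the inventory also reports its known vulnerabilities:

      - name: Scan the generated SBOM with Grype
        run: |
          # Install Grype via the install script from the anchore/grype repository
          curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
          # Fail this step if the SBOM contains any high (or worse) severity findings
          grype sbom:sbom.spdx.json --fail-on high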

Additional Resources

The SBOM action is open source and is available under the Apache 2.0 License in the sbom-action repository. It relies on Syft which is available under the same license, also on GitHub. We welcome contributions to both sbom-action and Syft, as well as Grype, which can consume and process these generated SBOMs.

Join us on Discourse to discuss all our open source tools.

Reduce risk in your software supply chain: 5 tips for container security

Rising threats to the software supply chain and increasing use of containers are causing organizations to focus on container security. Containers bring many unique security challenges due to their layered dependencies and the fact that many container images come from public repositories.

Our new white paper, Reduce Risk for Software Supply Chain Attacks: Best Practices for Container Security, digs into 5 tips for securing containers. It also describes how Anchore Enterprise simplifies implementation of these critical best practices, so you don’t have to.

5 best practices to instantly strengthen container security

  1. Use SBOMs to build a transparent foundation

SBOMs—Software Bill of Materials—create a trackable inventory of the components you use, which is a precursor for identifying security risks, meeting regulatory requirements and assessing license compliance. Get recommendations on the best way to generate, store, search and share SBOMs for better transparency.  

  2. Identify vulnerabilities early with continuous scanning

Security issues can arise at any point in the software supply chain. Learn why shifting left is necessary, but not sufficient, for container security. Understand why SBOMs are critical when responding to zero-day vulnerabilities.

  3. Automate policy enforcement and security gates

Find out how to use automated policies to identify which vulnerabilities should be fixed and enforce regulatory requirements. Learn how a customizable policy engine and out-of-the-box policy packs streamline your compliance efforts. 

  4. Reduce toil in the developer experience

Integrating with the tools developers use, minimizing false positives, and providing a path to faster remediation will keep developers happy and your software development moving efficiently. See how Anchore Enterprise makes it easy to provide a good developer experience.

  5. Protect your software supply chain with security controls

To protect your software supply chain, you must ensure that the code you bring in from third-party sources is trusted and vetted. Implement vetting processes for open-source code that you use.

Balancing the Scale: Software Supply Chain Security and APTs

Note: This is a multi-part series primer on the intersection of advanced persistent threats (APTs) and software supply chain security (SSCS). This blog post is the third in the series.
Part 1 | With Great Power Comes Great Responsibility: APTs & Software Supply Chain Security
Part 2 | David and Goliath: the Intersection of APTs and Software Supply Chain Security
Part 3 | Balancing the Scale: Software Supply Chain Security and APTs (this blog post)

Last week we dug into the details of why an organization’s software supply chain is a ripe target for well-resourced groups like APTs and the potential avenues that companies have to combat this threat. This week we’re going to highlight the Anchore Enterprise platform and how it provides a turnkey solution for combating threats to any software supply chain.

How Anchore Can Help

Anchore was founded on the belief that a security platform delivering deep, granular insight into an organization’s software supply chain, covering the entire breadth of the SDLC, and integrating automated feedback will create a holistic security posture that detects advanced threats and leaves room for human judgment when remediating security incidents. Anchore is trusted by Fortune 100 companies and the most exacting federal agencies across the globe because it has delivered on this promise.

The rest of the blog post will detail how Anchore Enterprise accomplishes this.

Depth: Automating Software Supply Chain Threat Detection

Having deeper visibility into an organization’s software supply chain is crucial for security purposes because it enables the identification and tracking of every component in the software’s construction. This comprehensive understanding helps in pinpointing vulnerabilities, understanding dependencies, and identifying potential security risks. It allows for more effective management of these risks by enabling targeted security measures and quicker response to potential threats. Essentially, deeper visibility equips an organization to better protect itself against complex cyber threats, including those that exploit obscure or overlooked aspects of the software supply chain.

Anchore Enterprise accomplishes this by generating a comprehensive software bill of materials (SBOM) for every piece of software (even down to the component, library, and framework level). It then compares this detailed ingredients list against vulnerability and active exploit databases to identify exactly where in the software supply chain there are security risks. These surgically precise insights can then be fed back to the original software developers, rolled up into reports that help the security team inform risk management, or sent directly into an incident management workflow if the vulnerability is evaluated as severe enough to warrant an “all hands on deck” response.

Developers shouldn’t have to worry about manually identifying threats and risks inside the software supply chain. Having deep insights into your software supply chain and being able to automate detection and response is vital to creating a resilient and scalable defense against the risk of APTs.

Breadth: Continuous Monitoring in Every Step of Your Software Supply Chain

The breadth of instrumentation in the Software Development Lifecycle (SDLC) is crucial for securing the software supply chain because it ensures comprehensive security coverage across all stages of software development. This broad instrumentation facilitates early detection and mitigation of vulnerabilities, ensures consistent application of security policies, and allows for a more agile response to emerging threats. It provides a holistic view of the software’s security posture, enabling better risk management and enhancing the overall resilience of the software against cyber threats.

Powered by a 100% feature complete platform API, Anchore Enterprise integrates into your existing DevOps pipeline.

Anchore has been supporting the DoD in this effort since 2019, acting as what is commonly referred to as “overwatch” for the DoD’s software supply chain. Anchore Enterprise continuously monitors how risk is evolving by ingesting tens of thousands of runtime containers and hundreds of source code repositories, and by alerting on malware-laced images submitted to the registry. It monitors every stage of the DevOps pipeline (source, build, registry, and deploy) to gain a holistic view of when and where threats enter the software development lifecycle.

Feedback: Alerting on Breaches or Critical Events in Your Software Supply Chain

Integrating feedback from your software supply chain and SDLC into your overall security program is important because it allows for real-time insights and continuous improvement in security practices. This integration ensures that lessons learned and vulnerabilities identified at any stage of the development or deployment process are quickly communicated and addressed. It enhances the ability to preemptively manage risks and adapt to new threats, thereby strengthening the overall security posture of the organization.

How would you know if something is wrong in a system? Create high-quality feedback loops, of course. If there is a fire in your house, you typically have a fire alarm. That is a great source of feedback. It’s loud and creates urgency. And when you investigate to confirm the fire is legitimate and not a false alarm, you can see the fire and feel its heat.

Software supply chain breaches are more like carbon monoxide leaks: silent, often undetected, and potentially lethal. If you don’t have anything in place that specifically alerts on that kind of threat, you could pay severely.

Anchore Enterprise was designed as both a set of sensors that can be deployed deeply and broadly into your software supply chain and a feedback system that uses those sensors to detect and alert on potential threats that are silently emitting carbon monoxide in your house.

Anchore Enterprise’s feedback mechanisms come in three flavors: automatic, recommendations, and informational. Anchore Enterprise utilizes a policy engine to enable automatic action based on the feedback provided by the software supply chain sensors. If you want to make sure that no software is ever deployed into production (or any environment) with an exploitable version of Log4j, the Anchore policy engine can review the security metadata created by the sensors for the existence of this software component and stop a deployment in progress before it ever becomes accessible to attackers.
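
As a rough, open source approximation of that kind of automatic gate, the sketch below uses Syft and jq as stand-ins for the Anchore Enterprise policy engine: it generates an SBOM for the build context and fails the step if a log4j-core component is present at all. It assumes Syft is already installed on the runner, and a real policy evaluation would also check version ranges and exploitability rather than simple presence.

      - name: Block the deploy if log4j-core appears in the SBOM
        run: |
          # Generate an SBOM for the build context; Syft's JSON output lists components under .artifacts
          syft dir:. -o json > sbom.json
          # jq -e exits non-zero when the expression is false, failing the step if log4j-core is present
          jq -e '[.artifacts[] | select(.name == "log4j-core")] | length == 0' sbom.json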

Anchore Enterprise can also be configured to make recommendations and provide opinionated actions based on security signals. If a vulnerability is discovered in a software component but isn’t considered urgent, Anchore Enterprise can instead provide a recommendation to the software developer to fix the vulnerability while still allowing them to continue to test and deploy their software. This makes developers aware of security issues very early in the SDLC while also giving them the flexibility to fix the vulnerability based on their own prioritization.

Finally, Anchore Enterprise offers informational feedback that alerts developers, the security team, or even the executive team to potential security risks but doesn’t prescribe a specific solution. These types of alerts can be integrated into any development, support, or incident management systems the organization utilizes. Often these alerts are for high-risk vulnerabilities that require deeper organizational analysis to determine the best course of action for remediation.

Conclusion

Due to the asymmetry between APTs and under-resourced security teams, the goal isn’t to create an impenetrable fortress that can never be breached. The goal is instead to follow security best practices and litter your SDLC with sensors and automated feedback mechanisms. APTs may have significantly more resources than your security team but they are still human and all humans make mistakes. By placing low-effort tripwires in as many locations as possible, you reverse the asymmetry of resources and allow the well-resourced adversary to become their own worst enemy. APTs are still software developers at the end of the day and no one writes bug-free code in the long run. By transforming your software supply chain into a minefield of best practices, you create a battlefield that requires your adversaries to slow down and carefully disable each individual security mechanism. None are impossible to disarm but each speed bump creates another opportunity for your adversary to make a mistake and reveal themselves. If the zero-trust architecture has taught us anything, it is that an impenetrable perimeter was never the best strategy.

David and Goliath: the Intersection of APTs and Software Supply Chain Security

Note: This is a multi-part series primer on the intersection of advanced persistent threats (APTs) and software supply chain security (SSCS). This blog post is the second in the series. If you’d like to start from the beginning, you can find the first blog post here.

Last week we set the stage for discussing APTs and the challenges they pose for software supply chain security by giving a quick overview of each topic. This week we will dive into the details of how the structure of the open source software supply chain is a uniquely ripe target for APTs.

The Intersection of APTs and Software Supply Chain Security

The Software Ecosystem: A Ripe Target

APT groups often prioritize the exploitation of software supply chain vulnerabilities. This is due to the asymmetric structure of the software ecosystem. By breaching a single component, such as a build system, they can gain access to any organization using the compromised software component. This inverts the cost-benefit calculation of the research and development effort needed to discover a vulnerability and craft an exploit for it. Previously, APTs focused primarily on targets where the payoff could warrant the investment, or on vulnerabilities so widespread that the attack could be automated. The complex interactions of software dependencies now allow APTs to scale their attacks because of the structure of the ecosystem itself.

The Software Supply Chain Security Dynamic: An Unequal Playing Ground

The interesting challenge with software supply chain security is that securing the supply chain requires even more effort than an APT would take to exploit it. The rub comes because each company that consumes software has to build a software supply chain security system to protect their organization. An APT investing in exploiting a popular component or system gets the benefit of access to all of the software built on top of it.

Given that security organizations are at a structural disadvantage, how can organizations even the odds?

How Do I Secure My Software Supply Chain from APTs?

An organization’s ability to detect the threat of APTs in its internal software supply chain comes down to three core themes that can be summed up as “go deep, go wide, and integrate feedback”. Specifically, this means: the deeper the visibility into your organization’s software supply chain, the less surface area an attacker has to slip in malicious software; and the wider this visibility is deployed across the software development lifecycle, the earlier an attacker will be caught. Neither of the first two points matters if the resulting feedback isn’t integrated into an overall security program that can act on the signals surfaced.

By applying these three core principles to the design of a secure software supply chain, an organization can ensure that it levels the playing field against the structural advantage APTs possess.

How Can I Easily Implement a Strategy for Securing My Software Supply Chain?

The core principles of depth, breadth and feedback are powerful touchstones to utilize when designing a secure software supply chain that can challenge APTs but they aren’t specific rules that can be easily implemented. To address this, Anchore has created the open source VIPERR Framework to provide specific guidance on how to achieve the core principles of software supply chain security.

VIPERR is a free software supply chain security framework that Anchore created for organizations to evaluate and improve the security posture of their software supply chain. VIPERR stands for visibility, inspection, policy enforcement, remediation, and reporting. 

Utilizing the VIPERR Framework, an organization can satisfy the three core principles of software supply chain security: depth, breadth, and feedback. By following this guide, numerous Fortune 500 enterprises and top federal agencies have transformed their software supply chain security posture and become harder targets for advanced persistent threats. If you’re looking to design and run your own secure software supply chain system, this framework will provide a shortcut to ensuring the resulting system is resilient.

How Can I Comprehensively Implement a Strategy for Securing My Software Supply Chain?

There are a number of different comprehensive initiatives to define best practices for software supply chain security. Organizations ranging from the National Institute of Standards and Technology (NIST), with standards such as SP 800-53, SP 800-218, and SP 800-161, to the Cloud Native Computing Foundation (CNCF) and the Open Source Security Foundation (OpenSSF) have published detailed recommendations for achieving a comprehensive supply chain security program, such as the SLSA framework and the Secure Supply Chain Consumption Framework (S2C2F) Project. Be aware that these are not quick and dirty solutions for achieving a “reasonably” secure software supply chain. They are large undertakings for any organization and should be given the resources needed to achieve success.

We don’t have time to go over each one in this blog post, but we have broken them all down in our complete guide to software supply chain security.

This is the second in a series of blog posts focused on the intersection of APTs and software supply chain security. This blog post highlighted the reasons that APTs focus their efforts on software supply chain exploits and the potential avenues that companies have to combat this threat. Next week we will discuss the Anchore Enterprise solution as a turnkey platform to implement the strategies outlined above.

How Cisco Umbrella Achieved FedRAMP Compliance in Weeks

Implementing compliance standards can be a daunting task for IT and security teams. The complexity and volume of requirements, increased workload, and resource constraints make it challenging to ensure compliance without overwhelming those responsible. Our latest case study, “How Cisco Umbrella Achieved FedRAMP Compliance in Weeks,” provides a roadmap for overcoming these challenges, leading to a world of streamlined compliance with low cognitive overhead.

Challenges Faced by Cisco Umbrella

Cisco Umbrella for Government, a cloud-native cybersecurity solution tailored for federal, state, and local government agencies, faced a tight deadline to meet FedRAMP vulnerability scanning requirements. They needed to integrate multiple security functions into a single, manageable solution while ensuring comprehensive protection across various environments, including remote work settings. Key challenges included:

  • Meeting all six FedRAMP vulnerability scanning requirements
  • Maintaining and automating STIG & FIPS compliance for Amazon EC2 virtual machines
  • Integrating end-to-end container security across the CI/CD pipeline, Amazon EKS, and Amazon ECS
  • Meeting SBOM requirements for White House Executive Order (EO 14028)

Solutions Implemented

To overcome these challenges, Cisco Umbrella leveraged Anchore Enterprise, a leading software supply chain security platform specializing in container security and vulnerability management. Anchore Enterprise integrated seamlessly with Cisco’s existing infrastructure.

Its capabilities enabled Cisco Umbrella to secure their software supply chain, ensuring compliance with FedRAMP, STIG, FIPS, and EO 14028 within a short timeframe.

Remarkable Results

By integrating Anchore Enterprise, Cisco Umbrella achieved:

  • FedRAMP, FIPS, and STIG compliance in weeks versus months
  • Reduced implementation time and improved developer experience
  • Proactive vulnerability detection in development, saving hours of developer time
  • Simplified security data management with a complete SBOM management solution

Download the Case Study Today

Navigating the complexity and volume of compliance requirements can be overwhelming for IT and security teams, especially with increased workloads and resource constraints. Cisco Umbrella’s experience shows that with the right tools, achieving compliance can be streamlined and manageable. Discover how you can implement these strategies in your organization by downloading our case study, “How Cisco Umbrella Achieved FedRAMP Compliance in Weeks,” and take the first step towards streamlined compliance today.

Using the Common Form for SSDF Attestation: What Software Producers Need to Know

The release of the long-awaited Secure Software Development Attestation Form on March 18, 2024 by the Cybersecurity and Infrastructure Security Agency (CISA) increases the focus on cybersecurity compliance for software used by the US government. With the release of the SSDF attestation form, the clock is now ticking for software vendors and federal systems integrators to comply with and attest to secure software development practices.

This initiative is rooted in the cybersecurity challenges highlighted by Executive Order 14028, including the SolarWinds attack and the Colonial Pipeline ransomware attack, which clearly demonstrated the need for a coordinated national response to the emerging threats of a complex software supply chain. Attestation to Secure Software Development Framework (SSDF) requirements using the new Common Form is the most recent, and likely not the final, step towards a more secure software supply chain for both the United States and the world at large. We will take you through the details of what this form means for your organization and how to best approach it.

Overview of the SSDF attestation

SSDF attestation is part of a broader effort derived from the Cybersecurity EO 14028 (formally titled “Improving the Nation’s Cybersecurity”). As a result of this EO, the Office of Management and Budget (OMB) issued two memorandums, M-22-18 “Enhancing the Security of the Software Supply Chain through Secure Software Development Practices” and M-23-16 “Update to Memorandum M-22-18”.

These memos require federal agencies to obtain self-attestation forms from software suppliers, who must attest to complying with a subset of the Secure Software Development Framework (SSDF).

Before the publication of the SSDF attestation form, the SSDF was a software development best practices standard published by the National Institute of Standards and Technology (NIST), based on industry maturity models like BSIMM and OWASP SAMM. It was a useful resource for organizations that valued security intrinsically and wanted to run secure software development without any external incentives like formal compliance requirements.

Now, the SSDF attestation form requires software providers to self-attest to having met a subset of the SSDF best practices. There are a number of implications to this transition from secure software development as an aspirational standard to a compliance standard, which we cover below. The most important thing to keep in mind is that while the attestation form doesn’t require a software provider to be formally certified before they can transact with a federal agency (as FedRAMP does), there are retroactive punishments that can be applied in cases of non-compliance.

Who/What is Affected?

  1. Software providers to federal agencies
    • Federal service integrators
    • Independent software vendors
    • Cloud service providers
  2. Federal agencies and DoD programs that use any of the above software providers

Included

  • New software: Any software developed after September 14, 2022
  • Major updates to existing software: A major version change after September 14, 2022
  • Software-as-a-Service (SaaS)

Exclusions

  • First-party software: Software developed in-house by federal agencies. SSDF is still considered a best practice but does not require self-attestation
  • Free and open-source software (FOSS): Even though FOSS components and end-user products are excluded from self-attestation, the SSDF requires that specific controls are in place to protect against software supply chain security breaches

Key Requirements of the Attestation Form

There are two high-level requirements for meeting compliance with the SSDF attestation form:

  1. Meet the technical requirements of the form
    • Note: NIST SSDF has 19 categories and 42 total requirements. The self-attestation form has 4 categories which are a subset of the full SSDF
  2. Self-attest to compliance with the subset of SSDF
    • Sign and return the form

Timeline

The timeline for compliance with the SSDF self-attestation form involves two critical dates:

  • Critical software: Jun 11, 2024 (3 months after approval on March 11)
  • All software: Sep 11, 2024 (6 months after approval on March 11)

Implications

Now that CISA has published the final version of the SSDF attestation form, there are a number of implications to this transition. One is financial and the other is potentially criminal.

The financial penalty of not attesting to secure software development practices via the form can be significant. Federal agencies are required to stop using the software, potentially impacting your revenue, and any future agencies you want to work with will ask to see your SSDF attestation form before procurement. Sign the form or miss out on this revenue.

The second penalty is a bit scarier from an individual perspective. An officer of the company has to sign the attestation form to state that they are responsible for attesting to the fact that all of the form’s requirements have been met. Here is the relevant quote from the form:

“Willfully providing false or misleading information may constitute a violation of 18 U.S.C. § 1001, a criminal statute.”

It is also important to realize that this isn’t an unenforceable threat. There is evidence that the DOJ Civil Cyber Fraud Initiative is trying to crack down on government contractors failing to meet cybersecurity requirements. They are bringing False Claims Act investigations and enforcement actions. This will likely weigh heavily on both the individual that signs the form and who is chosen at the organization to sign the form.

Given this, most organizations will likely opt to utilize a third-party assessment organization (3PAO) to sign the form in order to shift liability off of any individual in the organization.

Challenges and Considerations

Do I still have to sign if I have a 3PAO do the technical assessment?

No, as long as the 3PAO is FedRAMP-certified.

What if I can’t comply in time?

You can draft a plan of action and milestones (POA&M) to cover the period while you address the gaps between your current system and the system required by the attestation form. If the agency is satisfied with the POA&M, it can continue to use your software, but it has to request either an extension of the deadline from OMB or a waiver in order to do so.

Can only the CEO and COO sign the form?

The published draft form required the signature of either the CEO or COO, but new language added to the final form allows a different company employee to sign the attestation form.

Conclusion

Cybersecurity compliance is a journey, not a destination. SSDF attestation is the next step in that journey for secure software development. With the release of the SSDF attestation form, the SSDF standard is now transformed from a recommendation into a requirement. Given the overall trend of cybersecurity modernization that was kickstarted with FISMA in 2002, it would be prudent to assume that this SSDF attestation form is an intermediate step before the requirements become a hard gate where compliance will have to be demonstrated as a prerequisite to utilizing the software.

If you’re interested in a deep dive into what is technically required to meet the requirements of the SSDF attestation form, read all of the nitty-gritty details in our eBook, “SSDF Attestation 101: A Practical Guide for Software Producers”.

If you’re looking for a solution to help you achieve the technical requirements of SSDF attestation quickly, take a look at Anchore Enterprise. We have helped hundreds of enterprises achieve SSDF attestation in days versus months with our automated compliance platform.

With Great Power Comes Great Responsibility: APTs & Software Supply Chain Security

Note: This is a multi-part series primer on the intersection of advanced persistent threats (APTs) and software supply chain security (SSCS). This blog post is the first in the series. We will update this blog post with links to the additional parts of the series as they are published.
  • Part 1 (This blog post)
  • Part 2
  • Part 3

In the realm of cybersecurity, the convergence of Advanced Persistent Threats (APTs) and software supply chain security presents a uniquely daunting challenge for organizations. APTs, characterized by their sophisticated, state-sponsored or well-funded nature, focus on stealthy, long-term data theft, espionage, or sabotage, targeting specific entities. Their effectiveness is amplified by the asymmetric power dynamic of a well-funded attacker versus a resource-constrained security team.

Modern supply chains inadvertently magnify the impact of APTs due to the complex and interconnected dependency network of software and hardware components. The exploitation of this weakness by APTs not only compromises the targeted organizations but also poses a systemic risk to all users of the compromised downstream components. The infamous SolarWinds exploit exemplifies the far-reaching consequences of such breaches.

This landscape underscores the necessity for an integrated approach to cybersecurity, emphasizing depth, breadth, and feedback to create a holistic software supply chain security program that can withstand even adversaries as motivated and well-resourced as APTs. Before we jump into how to create a secure software supply chain that can resist APTs, let’s understand our adversary a bit better first.

Know Your Adversary: Advanced Persistent Threats (APTs)

What is an Advanced Persistent Threat (APT)?

An Advanced Persistent Threat (APT) is a sophisticated, prolonged cyberattack, usually state-sponsored or executed by well-funded criminal groups, targeting specific organizations or nations. Characterized by advanced techniques, APTs exploit zero-day vulnerabilities and custom malware, focusing on stealth and long-term data theft, espionage, or sabotage. Unlike broad, indiscriminate cyber threats, APTs are highly targeted, involving extensive research into the victim’s vulnerabilities and tailored attack strategies.

APTs are marked by their persistence, maintaining a foothold in a target’s network for extended periods, often months or years, to continually gather information. They are designed to evade detection, blending in with regular network traffic, and using methods like encryption and log deletion. Defending against APTs requires robust, advanced security measures, continuous monitoring, and a proactive cybersecurity approach, often necessitating collaboration with cybersecurity experts and authorities.

High-Profile APT Example: Operation Triangulation

The recent Operation Triangulation campaign disclosed by Kaspersky researchers is an extraordinary example of an APT in both its sophistication and depth. The campaign made use of four separate zero-day vulnerabilities, took a highly targeted approach toward specific individuals at Kaspersky, combined a multi-phase attack pattern, and persisted over a four-year period. Its complexity implied significant resources, possibly from a nation-state, and the stealthy, methodical progression of the attack aligns closely with the hallmarks of APTs. Famed security researcher Bruce Schneier, writing on his blog, Schneier on Security, couldn’t contain his surprise upon reading the details of the campaign: “[t]his is nation-state stuff, absolutely crazy in its sophistication.”

What is the impact of APTs on organizations?

Ignoring the threat posed by Advanced Persistent Threats (APTs) can lead to significant impacts for organizations, including extensive data breaches and severe financial losses. These sophisticated attacks can disrupt operations, damage reputations, and, in cases involving government targets, even compromise national security. The persistent nature of APTs enables long-term espionage and leaves victims at a strategic disadvantage. Thus, overlooking APTs leaves organizations exposed to continuous, sophisticated cyber espionage and the multifaceted damages that follow.

Now that we have a good grasp on the threat of APTs, we turn our attention to the world of software supply chain security to understand the unique features of this landscape.

Setting the Stage: Software Supply Chain Security

What is Software Supply Chain Security?

Software supply chain security is focused on protecting the integrity of software through its development and distribution. Specifically, it aims to prevent the introduction of malicious code into the software components that are used to build widely-used software services.

The open source software ecosystem is a complex supply chain that solves the problem of redundant effort. By creating a single open source version of a web server and distributing it, new companies that want to operate a business on the internet can re-use the generic open source web server instead of having to build their own before they can do business. These new companies can instead focus their efforts on building bespoke software on top of the web server that provides new, useful functions for previously unserved users. This is typically referred to as composable software building blocks, and it is one of the most important outcomes of the open source software movement.

But, as they say, there is no free lunch. With the incredible productivity boon of open source software comes responsibility.

What is the Key Vulnerability of the Modern Software Supply Chain Ecosystem?

The key vulnerability in the modern software supply chain is the structure of how software components are re-used, each with its own set of dependencies, creating a complex web of interlinked parts. This intricate dependency network can lead to significant security risks if even a single component is compromised, as vulnerabilities can cascade throughout the entire network. This interconnected structure makes it challenging to ensure comprehensive security, as a flaw in any part of the supply chain can affect the entire system.

Modern software is particularly vulnerable to software supply chain attacks because 70-90% of a modern application is composed of open source software components, with the remaining 10-30% being the proprietary code that implements company-specific features. This means that by breaching popular open source frameworks and libraries, an attacker can amplify the blast radius of their attack to reach significant portions of internet-based services with a single compromise.

If you’re looking for a deeper understanding of software supply chain security we have written a comprehensive guide to walk you through the topic in full.

High-Profile Software Supply Chain Exploit Example: SolarWinds

In one of the most sophisticated supply chain attacks, malicious actors compromised the update mechanism of SolarWinds’ Orion software. This breach allowed the attackers to distribute malware to approximately 18,000 customers. The attack had far-reaching consequences, affecting numerous government agencies, private companies, and critical infrastructure.

Looking at the example of SolarWinds, the lesson we should take away is not to focus solely on prevention. APTs have a wealth of resources to draw upon. Instead, the focus should be on monitoring the software we consume, build, and ship for unexpected changes. Modern software supply chains come with a great deal of responsibility; the software we use and ship needs to be understood and monitored.

This is the first in a series of blog posts focused on the intersection of APTs and software supply chain security. This blog post highlighted the contextual background to set the stage for the unique consequences of these two larger forces. Next week, we will discuss the implications of the collision of these two spheres in the second blog post in this series.

Anchore’s June Line-Up: Essential Events for Software Supply Chain Security and DevSecOps Enthusiasts

Summer is beginning to ramp up, but before we all check out for the holidays, Anchore has a sizzling hot line-up of events to keep you engaged and informed. This June, we are excited to host and participate in a number of events that cater to the DevSecOps crowd and the public sector. From insightful webinars to hands-on workshops and major conferences, there’s something for everyone looking to enhance their knowledge and connect with industry leaders. Join us at these events to learn more about how we are driving innovation in the software supply chain security industry.

WEBINAR: How the US Navy is enabling software delivery from lab to fleet

Date: Jun 4, 2024

The US Navy’s DevSecOps platform, Party Barge, has revolutionized feature delivery by significantly reducing onboarding time from 5 weeks to just 1 day. This improvement enhances developer experience and productivity through actionable findings and fewer false positives, while maintaining high security standards with inherent policy enforcement and Authorization to Operate (ATO). As a result, development teams can efficiently ship applications that have achieved cyber-readiness for Navy Authorizing Officials (AOs).

In an upcoming webinar, Sigma Defense and Anchore will provide an in-depth look at the secure pipeline automation and security artifacts that expedite application ATO and production timelines. Topics will include strategies for removing silos in DevSecOps, building efficient development pipeline roles and component templates, delivering critical security artifacts for ATO (such as SBOMs, vulnerability reports, and policy evidence), and streamlining operations with automated policy checks on container images.

WORKSHOP: VIPERR — Actionable Framework for Software Supply Chain Security

Date: Jun 17, 2024 from 8:30am – 2:00pm ET

Location: Carahsoft office in Reston, VA

Anchore, in partnership with Carahsoft, is offering an exclusive in-person workshop to walk security practitioners through the principles of the VIPERR framework. Learn the framework hands-on from the team that originally developed this industry-leading software supply chain security framework. In case you’re not familiar, the VIPERR framework enhances software supply chain security by enabling teams to evaluate and improve their security posture. It offers a structured approach to meet popular compliance standards. VIPERR stands for visibility, inspection, policy enforcement, remediation, and reporting, focusing on actionable strategies to bolster supply chain security.

The workshop covers building a software bill of materials (SBOM) for visibility, performing security checks for vulnerabilities and malware during inspection, enforcing compliance with both external and internal standards, and providing recommendations and automation for quick issue remediation. Additionally, timely reporting at any development stage is emphasized, along with a special topic on achieving STIG compliance.

EVENT: Carahsoft DevSecOps Conference 2024

Date: Jun 18, 2024

Location: The Ronald Reagan Building and International Trade Center in Washington, DC

If you’re planning to be at the show, our team is looking forward to meeting you.  You can book a demo session with us in advance!

On top of offering the VIPERR workshop, the Anchore team will be attending Carahsoft’s 2nd annual DevSecOps Conference in Washington, DC, a comprehensive forum designed to address the pressing technological, security, and innovation challenges faced by government agencies today. The event aims to explore innovative approaches such as DoD software factories, which drive efficiency and enhance the delivery of citizen-centric services, and DevSecOps, which integrates security into the software development lifecycle to combat evolving cybersecurity threats. Through a series of panels and discussions, attendees will gain valuable knowledge on how to leverage these cutting-edge strategies to improve their operations and service delivery.

EVENT: AWS Summit Washington, DC

Dates:  June 26-27, 2024

Location: Walter E. Washington Convention Center in Washington, DC

If you’re planning to be at the show, our team is looking forward to meeting you.  You can book a demo session with us in advance!

To round out June, Anchore will also be attending AWS Summit Washington, DC. The event highlights how AWS partners can help public sector organizations meet the needs of federal agencies. Anchore is an AWS Public Sector Partner and a graduate of the AWS ISV Accelerate program.

See how Anchore helped Cisco Umbrella for Government achieve FedRAMP compliance by reading the co-authored blog post on the AWS Partner Network (APN) Blog. Or better yet, drop by our booth and the team can give you a live demo of the product.

VIRTUAL EVENT: Life after the xz utils backdoor hack with Josh Bressers

Date: Wednesday, June 5, from 12:35 PM – 1:20 PM EDT

The xz utils hack was a significant breach that profoundly undermined trust within the open source community. The discovery of the backdoor revealed vulnerabilities in the software supply chain. As both a member of the open source community and a solution provider in the software supply chain security field, we at Anchore have strong opinions about xz specifically and open source security generally. Anchore’s VP of Security, Josh Bressers, will be speaking publicly about this topic at Upstream 2024.

Be sure to catch the live stream of “Life after the xz utils backdoor hack,” a panel discussion featuring Josh Bressers. The panel will cover the implications of the recent xz utils backdoor hack and how the attack deeply impacted trust within the open source community. In keeping with the Upstream 2024 theme of “Unusual Ideas to Solve the Usual Problems”, Josh will be presenting the “unusual” solution that Anchore has developed to keep these types of hacks from impacting the industry. The discussion will include insights from industry experts such as Shaun Martin of BlackIce, Jordan Harband, prolific JavaScript maintainer, Rachel Stephens from RedMonk, and Terrence Fischer from Boeing.

Wrap-Up

Don’t miss out on these exciting opportunities to connect with Anchore and learn about the latest advancements in software supply chain security and DevSecOps. Whether you join us for a webinar, participate in our in-person VIPERR workshop, or visit us at one of the major conferences, you’ll gain valuable insights and practical knowledge to enhance your organization’s security posture. We’re looking forward to engaging with you and helping you navigate the evolving digital landscape. See you in June!

Also, if you want to stay up-to-date on all of the events that Anchore hosts or participates in be sure to bookmark our events page and check back often!

Navigating the Updates to cATO: Critical Changes & Practical Advice for DoD Programs

On April 11, the US Department of Defense (DoD)’s Chief Information Officer (CIO) released the DevSecOps Continuous Authorization Implementation Guide, marking the next step in the evolution of the DoD’s efforts to modernize its security and compliance ecosystem. This guide is part of a larger trend of compliance modernization that is transforming the US public sector and the global public sector as a whole. It aims to streamline and enhance the processes for achieving continuous authorization to operate (cATO), reflecting a continued push to shift from traditional, point-in-time authorizations to operate (ATOs) to a more dynamic and ongoing compliance model.

The new guide introduces several significant updates, including specific security and development metrics required to achieve cATO, comprehensive evaluation criteria, practical advice on how to meet cATO requirements, and a special emphasis on software supply chain security via software bills of materials (SBOMs).

We break down the updates that are important to highlight if you’re already familiar with the cATO process. If you’re looking for a primer on cATO to get yourself up to speed, read our original blog post or click below to watch our webinar on-demand.

Continuous Authorization Metrics

A new addition to the corpus of information on cATO is the introduction of specific security and software development metrics that are required to be continuously monitored. Many of these come from the private sector DevSecOps best practices that have been honed by organizations at the cutting edge of this field, such as Google, Microsoft, Facebook and Amazon.

We’ve outlined the major ones below.

  1. Mean Time to Patch Vulnerabilities:
    • Description: Average time between the identification of a vulnerability in the DevSecOps Platform (DSOP) or application and the successful production deployment of a patch.
    • Focus: Emphasis on vulnerabilities with high to moderate impact on the application or mission.
  2. Trend Metrics:
    • Description: Metrics associated with security guardrails and control gates PASS/FAIL ratio over time.
    • Focus: Show improvements in development team efforts at developing secure code with each new sprint and the system’s continuous improvement in its security posture.
  3. Feedback Communication Frequency:
    • Description: Metrics to ensure feedback loops are in place, being used, and trends showing improvement in security posture.
  4. Effectiveness of Mitigations:
    • Description: Metrics associated with the continued effectiveness of mitigations against a changing threat landscape.
  5. Security Posture Dashboard Metrics:
    • Description: Metrics showing the stage of application and its security posture in the context of risk tolerances, security control compliance, and security control effectiveness results.
  6. Container Metrics:
    • Description: Measure the age of containers against the number of times they have been used in a subsystem and the residual risk based on the aggregate set of open security issues.
  7. Test Metrics:
    • Description: Percentage of test coverage passed, percentage of passing functional tests, count of various severity level findings, percentage of threat actor actions mitigated, security findings compared to risk tolerance, and percentage of passing security control compliance.

The overall thread running through the required metrics is to quickly understand whether the overall security of the application is improving. If it isn’t, this is a sign that something within the system is out of balance and needs attention.
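To make the first metric concrete, here is a minimal sketch of how mean time to patch could be computed, assuming a hypothetical patch-history.csv where each row holds the Unix timestamps for when a vulnerability was identified and when its patch reached production; the file format and calculation are purely illustrative and not prescribed by the guide.

# Illustrative only: each row of patch-history.csv is "<identified_epoch>,<patched_epoch>"
# Mean time to patch = average of (patched - identified), converted from seconds to days
$ awk -F, '{ total += ($2 - $1) } END { if (NR > 0) printf "MTTP: %.1f days\n", total / NR / 86400 }' patch-history.csv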

Comprehensive and detailed evaluation criteria

Tucked away in Appendix B, “Requirements,” is a detailed table that spells out the individual requirements that need to be met in order to achieve a cATO. This table is meant to improve the cATO process so that the individuals in a program who are implementing the requirements know the criteria they will be evaluated against. The goal is to reduce the amount of back-and-forth between the program and the Authorizing Official (AO) evaluating them.

Practical Implementation Advice

The ecosystem for DSOPs has evolved significantly since cATO was first announced in February 2022. Over the past 2+ years, a number of early adopters, such as Platform One, have blazed a trail and learned the painful lessons needed to smooth the path for other organizations that are now looking to modernize their development practices. The advice in the implementation guide is a high-signal, low-noise distillation of these hard-won lessons.

DevSecOps Platform (DSOP) Advice

If you’re more interested in writing software than operating a DSOP, then you’ll want to focus your attention on pre-existing DSOPs, commonly called DoD software factories.

We have written both a primer for understanding DoD software factories and an index of additional content that can quickly direct you to deep dives in specific content you’re interested in.

If you love to get your hands dirty and would rather have full control over your development environment, just be aware that this is specifically recommended against:

Build a new DSOP using hardened components (this is the most time-consuming approach and should be avoided if possible).

DevSecOps Culture Advice

While the DevSecOps culture and process advice is well-known in the private sector, it is still important to emphasize in the federal context, which is currently transitioning to the modern software development paradigm.

  1. Bring the security team in at the start of development and keep them involved throughout.
  2. Create secure agile processes to support the continued delivery of value without introducing unnecessary risk.

Continuous Monitoring (ConMon) Advice

Ensure that all environments are continuously monitored (e.g., development, test, and production). Utilize the security data collected from these environments to inform the thresholds and triggers for active incident response. ConMon and ACD are separate pillars of cATO, but they need to be integrated so that information flows to the systems that can make the best use of it. It is this integrated approach that delivers on the promise of significantly improved security and risk outcomes.

Active Cyber Defense (ACD) Advice

Both a Security Operations Center (SOC) and an external CSSP are needed in order to achieve the Active Cyber Defense (ACD) pillar of cATO. On top of that, there also has to be a detailed incident response plan and personnel trained on it. While cATO’s goal is to automate as much of the security and incident response system as possible to reduce the burden of manual intervention, humans in the loop are still an important component, needed to tune the system and react with appropriate urgency.

Software Supply Chain Security (SSCS) Advice

The new implementation guide is very clear that a DSOP must create SBOMs for itself and any applications that pass through it. This is a mega-trend that has been sweeping over the software supply chain security industry for the past decade. It is now the consensus that SBOMs are the best abstraction and practice for securing software development in the age of composable and complex software.

The 3 (+1) Pillars of cATO

While the 3 pillars of cATO and its recommendation for SBOMs as the preferred software supply chain security tool were called out in the original cATO memo, the recently published implementation guide again emphasizes the importance of the 3 (+1) pillars of cATO.

The guide quotes directly from the memo:

In order to prevent any combination of human errors, supply chain interdictions, unintended code, and support the creation of a software bill of materials (SBOM), the adoption of an approved software platform and development pipeline(s) are critical.

This is a continuation of the DoD specifically, and the federal government generally, highlighting the importance of software supply chain security and software bills of materials (SBOMs) as “critical” for achieving the 3 pillars of cATO. This is why Anchore refers to this as the “3 (+1) Pillars of cATO“.

  1. Continuous Monitoring (ConMon)
  2. Active Cyber Defense (ACD)
  3. DevSecOps (DSO) Reference Design
  4. Secure Software Supply Chain (SSSC)

Wrap-up

The release of the new DevSecOps Continuous Authorization Implementation Guide marks a significant advancement in the DoD’s approach to cybersecurity and compliance. With a focus on transitioning from traditional point-in-time Authorizations to Operate (ATOs) to a continuous authorization model, the guide introduces comprehensive updates designed to streamline the cATO process. The goal is to ease the burden of the process and help more programs modernize their security and compliance posture.

If you’re interested in learning more about the benefits and best practices of utilizing a DSOP (i.e., a DoD software factory) to transform cATO compliance into a “switch flip,” be sure to pick up a copy of our “DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images” white paper. Click below to download.

Best Practices for DevSecOps in DoD Software Factories: A White Paper

The Department of Defense’s (DoD) Software Modernization Implementation Plan, unveiled in March 2023, represents a significant stride towards transforming software delivery timelines from years to days. This ambitious plan leverages the power of containers and modern DevSecOps practices within a DoD software factory.

Our latest white paper, titled “DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images,” dives deep into the security practices for securing container images in a DoD software factory. It also details how Anchore Federal—a pivotal tool within this framework—supports these best practices to enhance security and compliance across multiple DoD software factories including the US Air Force’s Platform One, Iron Bank, and the US Navy’s Black Pearl.

Key Insights from the White Paper

  • Securing Container Images: The paper outlines six essential best practices ranging from using trusted base images to continuous vulnerability scanning and remediation. Each practice is backed by both DoD guidance and relevant NIST standards, ensuring alignment with federal requirements.
  • Role of Anchore Federal: As a proven tool in the arena of container image security, Anchore Federal facilitates these best practices by integrating seamlessly into DevSecOps workflows, providing continuous scanning, and enabling automated policy enforcement. It’s designed to meet the stringent security needs of DoD software factories, ready for deployment even in classified and air-gapped environments.
  • Supporting Rapid and Secure Software Delivery: With the DoD’s shift towards software factories, the need for robust, secure, and agile software delivery mechanisms has never been more critical. Anchore Federal is the turnkey solution for automating security processes and ensuring that all container images meet the DoD’s rigorous security and compliance requirements.

Download the White Paper Today

Empower your organization with the insights and tools needed for secure software delivery within the DoD ecosystem. Download our white paper now and take a significant step towards implementing best-in-class DevSecOps practices in your operations. Equip your teams with the knowledge and technology to not just meet, but exceed the modern security demands of the DoD’s software modernization efforts.

Navigate SSDF Attestation with this Practical Guide

The clock is ticking again for software producers selling to federal agencies. In the second half of 2024, CEOs or their designees must begin providing an SSDF attestation that their organization adheres to the secure software development practices documented in NIST SP 800-218 (SSDF).

Download our latest ebook to navigate through SSDF attestation quickly and adhere to timelines. 

SSDF attestation covers four main areas from the NIST SSDF:

  1. Securing development environments,
  2. Using automated tools to maintain trusted source code supply chains,
  3. Maintaining provenance (e.g., via SBOMs) for internal code and third-party components, and
  4. Using automated tools to check for security vulnerabilities.

This new requirement is not to be taken lightly. It applies to all software producers, regardless of whether they provide a software end product as SaaS or on-prem, to any federal agency. The SSDF attestation deadline is June 11, 2024, for critical software and September 11, 2024, for all software. However, on-prem software developed before September 14, 2022, will only require SSDF attestation when a new major version is released. The bottom line is that most organizations will need to comply by 2024.

Companies will make their SSDF attestation through an online Common Form that covers the minimum secure software development requirements that software producers must meet. Individual agencies can add agency-specific instructions outside of the Common Form. 

Organizations that want to ensure they meet all relevant requirements can submit a third-party assessment instead of a CEO attestation. You must use a Third-Party Assessment Organization (3PAO) that is FedRAMP-certified or approved by an agency official.  This option is a no-brainer for cloud software producers who use a 3PAO for FedRAMP.

There are a lot of details to keep track of, so we put together a practical guide to the SSDF attestation requirements and how to meet them: “SSDF Attestation 101: A Practical Guide for Software Producers”. We also cover how Anchore Enterprise automates SSDF attestation compliance by integrating directly into your software development pipeline and utilizing continuous policy scanning to detect issues before they hit production.

Modeling Software Security as Unit Tests: A Mental Model for Developers

Modern software development is complex to say the least. Vulnerabilities often lurk within the vast networks of dependencies that underpin applications. A typical scenario involves a simple app.go source file that is underpinned by a sprawling tree of external libraries and frameworks (check the go.mod file for the receipts). As developers incorporate these dependencies into their applications, the security risks escalate, often surpassing the complexity of the original source code. This real-world challenge highlights a critical concern: the hidden vulnerabilities that are magnified by the dependencies themselves, making the task of securing software increasingly daunting.

Addressing this challenge requires reimagining software supply chain security through a different lens. In a recent webinar, the famed Kelsey Hightower offered an apt analogy to bring the sometimes opaque world of security into focus for developers: software security can be thought of as just another test in the software testing suite, and the system that manages the tests and their associated metadata is a data pipeline. We’ll explore this analogy in more depth in this blog post, and by the end we will have built a bridge between developers and security.

The Problem: Modern software is built on a tower of dependencies

Modern software is built from a tower of libraries and dependencies that increase developer productivity, but with these boosts comes the risk of increased complexity. Below is a simple ‘ping-pong’ (i.e., request-response) application written in Go that imports a single HTTP web framework:

package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default()
	r.GET("/ping", func(c *gin.Context) {
		c.JSON(http.StatusOK, gin.H{
			"message": "pong",
		})
	})
	r.Run()
}

With this single framework comes a laundry list of dependencies that are needed in order to work. This is the go.mod file that accompanies the application:

module app

go 1.20

require github.com/gin-gonic/gin v1.7.2

require (
	github.com/gin-contrib/sse v0.1.0 // indirect
	github.com/go-playground/locales v0.13.0 // indirect
	github.com/go-playground/universal-translator v0.17.0 // indirect
	github.com/go-playground/validator/v10 v10.4.1 // indirect
	github.com/golang/protobuf v1.3.3 // indirect
	github.com/json-iterator/go v1.1.9 // indirect
	github.com/leodido/go-urn v1.2.0 // indirect
	github.com/mattn/go-isatty v0.0.12 // indirect
	github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421 // indirect
	github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742 // indirect
	github.com/ugorji/go/codec v1.1.7 // indirect
	golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 // indirect
	golang.org/x/sys v0.0.0-20200116001909-b77594299b42 // indirect
	gopkg.in/yaml.v2 v2.2.8 // indirect
)

The dependencies for the application end up being larger than the application source code, and in each of these dependencies lies the potential for a vulnerability that could be exploited by a determined adversary. Kelsey Hightower summed this up well: “this is software security in the real world”. Below is an example of a Java app that hides vulnerable dependencies inside the frameworks that the application is built on.

As much as we might want to put the genie back in the bottle, the productivity boosts of building on top of frameworks are too good to reverse this trend. Instead we have to look for different ways to manage security in this more complex world of software development.

If you’re looking for a solution to the complexity of modern software vulnerability management, be sure to take a look at the Anchore Enterprise platform and the included container vulnerability scanner.

The Solution: Modeling software supply chain security as a data pipeline

Software supply chain security is a meta problem of software development. The solution to most meta problems in software development is data pipeline management. 

Developers have learned this lesson before. The first time something goes wrong in an application, the fix is to log the error. This is a great solution until you’ve written your first hundred logging statements; suddenly the solution has become its own problem, and the developer is buried under a mountain of logging data. This is where a logging (read: data) pipeline steps in. The pipeline manages the mountain of log data and helps developers sift the signal from the noise.

The same pattern emerges in software supply chain security. From the first run of a vulnerability scanner on almost any modern software, a developer will find themselves buried under a mountain of security metadata. 

$ grype dir:~/webinar-demo/examples/app:v2.0.0

 ✔ Vulnerability DB                [no update available]  
 ✔ Indexed file system                                                                            ~/webinar-demo/examples/app:v2.0.0
 ✔ Cataloged contents                                                         889d95358bbb68b88fb72e07ba33267b314b6da8c6be84d164d2ed425c80b9c3
   ├── ✔ Packages                        [16 packages]  
   └── ✔ Executables                     [0 executables]  
 ✔ Scanned for vulnerabilities     [11 vulnerability matches]  
   ├── by severity: 1 critical, 5 high, 5 medium, 0 low, 0 negligible
   └── by status:   11 fixed, 0 not-fixed, 0 ignored 

NAME                      INSTALLED                           FIXED-IN                           TYPE          VULNERABILITY        SEVERITY 
github.com/gin-gonic/gin  v1.7.2                              1.7.7                              go-module     GHSA-h395-qcrw-5vmq  High      
github.com/gin-gonic/gin  v1.7.2                              1.9.0                              go-module     GHSA-3vp4-m3rf-835h  Medium    
github.com/gin-gonic/gin  v1.7.2                              1.9.1                              go-module     GHSA-2c4m-59x9-fr2g  Medium    
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.0.0-20211202192323-5770296d904e  go-module     GHSA-gwc9-m7rh-j2ww  High      
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.0.0-20220314234659-1baeb1ce4c0b  go-module     GHSA-8c26-wmh5-6g9v  High      
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.0.0-20201216223049-8b5274cf687f  go-module     GHSA-3vm4-22fp-5rfm  High      
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.17.0                             go-module     GHSA-45x7-px36-x8w8  Medium    
golang.org/x/sys          v0.0.0-20200116001909-b77594299b42  0.0.0-20220412211240-33da011f77ad  go-module     GHSA-p782-xgp4-8hr8  Medium    
log4j-core                2.15.0                              2.16.0                             java-archive  GHSA-7rjr-3q55-vv33  Critical  
log4j-core                2.15.0                              2.17.0                             java-archive  GHSA-p6xc-xr62-6r2g  High      
log4j-core                2.15.0                              2.17.1                             java-archive  GHSA-8489-44mv-ggj8  Medium

All of this from a single innocuous include statement pulling in your favorite application framework.

Again the data pipeline comes to the rescue and helps manage the flood of security metadata. In this blog post we’ll step through the major functions of a data pipeline customized for solving the problem of software supply chain security.

Modeling SBOMs and vulnerability scans as unit tests

I like to think of security tools as just another test. A unit test might test the behavior of my code. I think this falls in the same quality bucket as linters to make sure you are following your company’s style guide. This is a way to make sure you are following your company’s security guide.

Kelsey Hightower

This idea from renowned developer Kelsey Hightower is apt, particularly for software supply chain security. Tests are a mental model that developers use on a daily basis. Security tools are functions that are run against your application to produce security data about it, rather than the behavioral information a unit test produces. The first two foundational functions of software supply chain security are identifying all of the software dependencies and scanning those dependencies for known vulnerabilities (i.e., ‘testing’ for vulnerabilities in an application).

This is typically accomplished by running an SBOM generation tool like Syft to create an inventory of all dependencies followed by running a vulnerability scanner like Grype to compare the inventory of software components in the SBOM against a database of vulnerabilities. Going back to the data pipeline model, the SBOM and vulnerability database are the data sources and the vulnerability report is the transformed security metadata that will feed the rest of the pipeline.
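As a minimal sketch of that two-step pipeline, the SBOM can be generated once with Syft and then handed to Grype for matching; the directory path and file name below are placeholders, and only standard Syft and Grype invocations are used.

# Generate an SBOM for the application directory and save it as JSON
$ syft dir:./app -o json > sbom.json

# Scan the saved SBOM for known vulnerabilities instead of re-cataloging the source
$ grype sbom:./sbom.json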

$ grype dir:~/webinar-demo/examples/app:v2.0.0 -o json

 ✔ Vulnerability DB                [no update available]  
 ✔ Indexed file system                                                                            ~/webinar-demo/examples/app:v2.0.0
 ✔ Cataloged contents                                                         889d95358bbb68b88fb72e07ba33267b314b6da8c6be84d164d2ed425c80b9c3
   ├── ✔ Packages                        [16 packages]  
   └── ✔ Executables                     [0 executables]  
 ✔ Scanned for vulnerabilities     [11 vulnerability matches]  
   ├── by severity: 1 critical, 5 high, 5 medium, 0 low, 0 negligible
   └── by status:   11 fixed, 0 not-fixed, 0 ignored 

{
 "matches": [
  {
   "vulnerability": {
    "id": "GHSA-h395-qcrw-5vmq",
    "dataSource": "https://github.com/advisories/GHSA-h395-qcrw-5vmq",
    "namespace": "github:language:go",
    "severity": "High",
    "urls": [
     "https://github.com/advisories/GHSA-h395-qcrw-5vmq"
    ],
    "description": "Inconsistent Interpretation of HTTP Requests in github.com/gin-gonic/gin",
    "cvss": [
     {
      "version": "3.1",
      "vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:L/A:N",
      "metrics": {
       "baseScore": 7.1,
       "exploitabilityScore": 2.8,
       "impactScore": 4.2
      },
      "vendorMetadata": {
       "base_severity": "High",
       "status": "N/A"
      }
     }
    ],
    . . . 

This was previously done just prior to pushing an application to production, as a release gate that had to be passed before software could be shipped. Just as unit tests have moved earlier in the software development lifecycle as DevOps principles won the industry’s mindshare, security testing has “shifted left” in the development cycle. With self-contained, open source CLI tooling like Syft and Grype, developers can now incorporate security testing into their development environment and test for vulnerabilities before even pushing a commit to a continuous integration (CI) server.
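To make the check behave like a failing unit test in a local environment or a CI job, Grype can return a non-zero exit code when findings cross a severity threshold; this is a small sketch using the standard --fail-on flag, with the SBOM path as a placeholder.

# Exit with a non-zero status if any finding is of high severity or worse,
# so the command fails just like a failing unit test would
$ grype sbom:./sbom.json --fail-on high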

From a security perspective this is a huge win. Security vulnerabilities are caught earlier in the development process and fixed before they come up against a delivery due date. But all of this newly created data has led to a different problem: data overload.

Vulnerability overload: Uncovering the signal in the noise

Like the world of application logs that came before it, at some point there is so much information that an automated process generates that finding the signal in the noise becomes its own problem.

How Anchore Enterprise manages SBOMs and vulnerability scans

Centralized management of SBOMs and vulnerability scans can become a massive headache, but there is no need to build your own storage and data management solution. Just configure the AnchoreCTL CLI tool to automatically submit SBOMs and vulnerability scans as you run them locally; Anchore Enterprise stores all of this data for you.

On top of this, Anchore Enterprise offers data analysis tools so that you can search and filter SBOMs and vulnerability scans by version, build stage, vulnerability type, etc.

Combining local developer tooling with centralized data management creates a best of both worlds environment where engineers can still get their tasks done locally with ease but offload the arduous management tasks to a server.

Added benefit: SBOM drift detection

Another benefit of distributed SBOM generation and vulnerability scanning is that this security check can be run at each stage of the build process. It would be nice to believe that the software written in a developer’s local environment always makes it through to production in an untouched, pristine state, but this is rarely the case.

Running SBOM generation and vulnerability scanning during development, on the build server, in the artifact registry, at deploy time, and during runtime creates a full picture of where and when software is modified in the development process. This simplifies post-incident investigations or, even better, catches issues well before they make it to a production environment.

This historical record is a feature provided by Anchore Enterprise called Drift Detection. In the same way that an HTTP cookie creates state between individual HTTP requests, Drift Detection is security metadata about security metadata (recursion, much?) that allows state to be created between each stage of the build pipeline. Being the central store for all of the associated security metadata makes the Anchore Enterprise platform the ideal location to aggregate and scan for these particular anomalies.

Policy as a lever

Being able to filter through all of the noise created by integrating security checks across the software development process creates massive leverage when searching for a particular issue, but it is still a manual process, and being a full-time investigator isn’t part of the software engineer’s job description. Wouldn’t it be great if we could automate some, if not most, of these investigations?

I’m glad we’re of like minds, because this is where policy comes into the picture. Returning to Kelsey Hightower’s comparison of security tools to linters, policy is the security guide, codified by your security team, that allows you to quickly check whether the commit you put together meets the standards for secure software.

By running these checks and automating the feedback, developers quickly learn about any potential security issues discovered in their commit. This allows them to polish their code before it is flagged, and potentially failed, by the security check on the CI server. No more waiting on the security team to review your commit before it can proceed to the next stage. Developers are empowered to resolve security risks themselves and can feel confident that their code won’t be held up downstream.
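Anchore Enterprise codifies these rules as policies (covered in the next section), but the underlying idea can be sketched with nothing more than Grype’s JSON output and jq; the filter below is an illustrative stand-in for a real policy engine, not the Anchore policy format.

# Illustrative policy check: succeed only if the scan contains no Critical findings
# (jq -e sets the exit code based on whether the final expression is true)
$ grype sbom:./sbom.json -o json | jq -e '[.matches[] | select(.vulnerability.severity == "Critical")] | length == 0'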

Policies-as-code supports existing developer workflows

Anchore Enterprise designed its policy engine to ingest the individual policies as JSON objects that can be integrated directly into the existing software development tooling. Create a policy in the UI or CLI, generate the JSON and commit it directly to the repo.

This prevents the painful context switching of moving between different interfaces and allows engineering and security teams to easily reap the rewards of version control and rollbacks that come pre-baked into tools like Git. Anchore Enterprise was designed by engineers for engineers, which made policy-as-code the obvious choice when designing the platform.

Remediation automation integrated into the development workflow

Being alerted when a commit violates your company’s security guidelines is better than pushing insecure code and learning from the breach that you forgot to sanitize user input. But even after you are alerted to a problem, you still need to understand what is insecure and how to fix it. You can try to Google the issue or start a conversation with your security team, but that just creates more work before you can get your commit into the build pipeline. What if you could get the answer to how to fix your commit, and make it secure, directly in your normal workflow?

Anchore Enterprise provides remediation recommendations to help create actionable advice on how to resolve security alerts that are flagged by a policy. This helps point developers in the right direction so that they can resolve their vulnerabilities quickly and easily without the manual back and forth of opening a ticket with the security team or Googling aimlessly to find the correct solution. The recommendations can be integrated directly into GitHub Issues or Jira tickets in order to blend seamlessly into the workflows that teams depend on to coordinate work across the organization.

Wrap-Up

From a developer’s perspective, it can sometimes feel like the security team is primarily a source of friction that only slows down your ability to ship code. Anchore has internalized this feedback and built a platform that allows developers to keep moving at DevOps speed while producing high-quality, secure code. By integrating directly into developer workflows (e.g., CLI tooling, CI/CD integrations, source code repository integrations, etc.) and providing actionable feedback, Anchore Enterprise removes the traditional roadblock mentality that has typically described the relationship between development and security.

If you’re interested to see all of the features described in this blog post via a hands-on demo, check out the webinar by clicking on the screenshot below and going to the workshop hosted on GitHub.

If you’re looking to go further in-depth with how to build and secure containers in the software supply chain, be sure to read our white paper: The Fundamentals of Container Security.

Streamlining FedRAMP Compliance: How Anchore Enterprise Simplifies the Process

FedRAMP compliance is hard, not only because there are hundreds of controls that need to be reviewed and verified, but also because the controls can be interpreted and satisfied in multiple different ways. It is admirable to see an enterprise achieve FedRAMP compliance from scratch, but most of us want to achieve compliance without spending more than a year debating the interpretation of specific controls. This is where turnkey solutions like Anchore Enterprise come in.

Anchore Enterprise is a cloud-native software composition analysis platform that integrates SBOM management, vulnerability scanning and policy enforcement into a single platform to provide a comprehensive solution for software supply chain security.

Overview of FedRAMP, who it applies to and the challenges of compliance

FedRAMP, or the Federal Risk and Authorization Management Program, is a federal compliance program that standardizes security assessment, authorization, and continuous monitoring for cloud products and services. As with any compliance standard, FedRAMP is modeled from the “Trust but Verify” security principle. FedRAMP standardizes how security is verified for Cloud Service Providers (CSP).

One of the biggest challenges with achieving FedRAMP compliance comes from sorting through the vast volumes of data that make up the standard. Depending on the level of FedRAMP compliance you are attempting to meet, this could mean complying with 125 controls in the case of a FedRAMP low certification or up to 425 for FedRAMP high compliance.

While we aren’t going to go through the entire FedRAMP standard in this blog post, we will be focusing on the container security controls that are interleaved into FedRAMP.

FedRAMP container security requirements

1) Hardened Images

FedRAMP requires CSPs to adhere to strict security standards for hardened images used by government agencies. The standard mandates that:

  • Only essential services and software are included in the images
  • Images are kept up to date with the latest security patches
  • Configuration settings meet secure baselines
  • Unnecessary ports and services are disabled
  • User accounts are managed securely
  • Encryption is implemented
  • Logging and monitoring practices are maintained
  • Vulnerabilities are scanned for regularly and remediated promptly

If you want to go in-depth with how to create hardened images that meet FedRAMP compliance, download our white papers:

DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images

Complete Guide to Hardening Containers with STIG

2) Container Build, Test, and Orchestration Pipelines

FedRAMP sets stringent requirements for container build, test, and orchestration pipelines to protect federal agencies. These include:

  • Hardened base images (see above) 
  • Automated build processes with integrity checks
  • Strict configuration management
  • Immutable containers
  • Secure artifact management
  • Container security testing
  • Comprehensive logging and monitoring

3) Vulnerability Scanning for Container Images

FedRAMP mandates rigorous vulnerability scanning protocols for container images to ensure their security within federal cloud deployments. This includes: 

  • Comprehensive scans integrated into CI/CD pipelines
  • Remediation prioritized based on severity
  • A re-scanning policy post-remediation
  • Detailed audit and compliance reports
  • Checks against secure baselines (i.e., CIS or STIG)

4) Secure Sensors

FedRAMP requires continuous management of the security of machines, applications, and systems by identifying vulnerabilities. Requirements include:

  • Authorized scanning tools
  • Authenticated security scans to simulate threats
  • Reporting and remediation
  • Scanning independent of developers
  • Direct integration with configuration management to track vulnerabilities

5) Registry Monitoring

While not explicitly called out in FedRAMP as either a control or a control family, there is still a requirement that images stored in a container registry are scanned at least every 30 days if the images are deployed to production.
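As a rough illustration of that cadence (the image list, schedule, and severity threshold here are hypothetical, and a registry-integrated scanner would normally automate this), a scheduled job could periodically re-scan every production image by reference:

# Hypothetical scheduled job: re-scan each image listed in production-images.txt.
# Grype pulls each image by reference and exits non-zero at or above the threshold.
while read -r image; do
  grype "$image" --fail-on critical || echo "needs attention: $image"
done < production-images.txt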

6) Asset Management and Inventory Reporting for Deployed Containers

FedRAMP mandates thorough asset management and inventory reporting for deployed containers to ensure security and compliance. Organizations must maintain detailed inventories including:

  • Container images
  • Source code
  • Versions
  • Configurations 
  • Continuous monitoring of container state 

7) Encryption

FedRAMP mandates robust encryption standards to secure federal information, requiring the use of NIST-approved cryptographic methods for both data at rest and data in transit. It is important that any containers that store data or move data through the system meet FIPS standards.

How Anchore helps organizations comply with these requirements

Anchore is the leading software supply chain security platform for meeting FedRAMP compliance. We have helped hundreds of organizations meet FedRAMP compliance by deploying Anchore Enterprise as the solution for achieving container security compliance. Below you can see an overview of how Anchore Enterprise integrates into a FedRAMP compliant environment. For more details on how each of these integrations meet FedRAMP compliance keep reading.

1) Hardened Images

Anchore Enterprise integrates multiple tools in order to meet the FedRAMP requirements for hardened container images. We provide compliance policies that scan specifically for compliance with container hardening standards such as STIG and CIS. These tools were custom-built to perform the checks necessary to meet either of the relevant standards, or both.

2) Container Build, Test, and Orchestration Pipelines

Anchore integrates directly into your CI/CD pipelines via either the Anchore Enterprise API or pre-built plug-ins. This tight integration meets the FedRAMP standards requiring that all container images are hardened, all security checks are automated within the build process, and all actions are logged and audited. Anchore’s FedRAMP policy specifically checks that any container, at any stage of the pipeline, is evaluated for compliance.

3) Vulnerability Scanning for Container Images

Anchore Enterprise can be integrated into each stage of the development pipeline, offer remediation recommendations based on severity (e.g., CISA KEV vulnerabilities can be flagged and prioritized for immediate action), enforce re-scanning of containers after remediation, and produce compliance artifacts to automate compliance. This is accomplished with Anchore’s container scanner, direct pipeline integration, and FedRAMP policy.

4) Secure Sensors

Anchore Enterprise’s container vulnerability scanner and Kubernetes inventory agent are both authorized scanning tools. The container vulnerability scanner is integrated directly into the build pipeline whereas the k8s agent is run in production and scans for non-compliant containers at runtime.

5) Registry Monitoring

Anchore Enterprise is able to scan an artifact registry continuously for potentially non-compliant containers. It is configured to watch each unique image in image registries. It will automatically scan images that get pushed to these registries.

6) Asset Management and Inventory Reporting for Deployed Containers

Anchore Enterprise includes a full software component inventory workflow. It can scan all software components, generate software bills of materials (SBOMs) to keep track of those components, and centrally store all SBOMs for analysis. Anchore Enterprise’s Kubernetes inventory agent can perform the same service for the runtime environment.

7) Encryption

Anchore Enterprise’s Static STIG tool can verify that all containers maintain NIST and FIPS encryption standards. Confirming that each of thousands of containers encrypts data at rest and in transit is a difficult chore, but it is easily automated with Anchore Enterprise.

The benefits of the shift left approach of Anchore Enterprise

Shift compliance left and prevent violations

Detect and remediate FedRAMP compliance violations early in the development lifecycle to prevent production/high-side violations that would threaten your hard-earned compliance. Use Anchore’s “developer-bundle” in the integration phase to take immediate action on potential violations. This ensures that vulnerabilities with available fixes and CISA KEV vulnerabilities are addressed before they reach the registry and become reportable non-compliance issues.

Below is an example of a GitLab workflow showing how Anchore Enterprise’s SBOM generation, vulnerability scanning, and policy enforcement can catch issues early and keep your compliance record clean.
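
For readers who prefer text to screenshots, the script portion of such a GitLab job boils down to commands along the following lines. The tools are Anchore’s real open-source and enterprise tooling, but the image name and flags shown are illustrative assumptions rather than a verbatim copy of the pictured pipeline.

    # 1. Generate an SBOM for the image built earlier in the pipeline
    syft "${IMAGE}" -o cyclonedx-json > sbom.json

    # 2. Fail fast on fixable high-severity vulnerabilities before the image leaves CI
    grype sbom:./sbom.json --fail-on high --only-fixed

    # 3. Submit the image to Anchore Enterprise and evaluate it against the FedRAMP
    #    "developer-bundle" policy so violations never reach the registry
    anchorectl image add "${IMAGE}" --wait
    anchorectl image check "${IMAGE}" --detail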

Automate Compliance Reporting

Automate monthly and annual reporting using Anchore Enterprise’s reporting features, configuring reports to auto-generate on the schedule FedRAMP requires.

Manage POA&Ms

Given that Anchore Enterprise centrally stores and manages vulnerability information for an organization, it can also be used to manage Plans of Action & Milestones (POA&Ms) for any portions of the system that aren’t yet FedRAMP compliant but have a planned remediation date. Use Allowlists in Anchore Enterprise to centrally manage POA&Ms and assessed or justifiable findings.

Prevent Production Compliance Violations

Practice good production registry hygiene by using Anchore Enterprise to scan stored images regularly. Anchore Enterprise’s Kubernetes runtime inventory identifies images that do not meet FedRAMP compliance, or that have not been used within a company-defined window (e.g., the last 7 days), so they can be removed from your production registry.

Conclusion

Achieving FedRAMP compliance from scratch is an arduous process and not a key differentiator for most organizations. To keep your organization focused on the work that actually sets it apart from competitors, strategically outsourcing non-core competencies is a sound strategy. Anchore Enterprise aims to be that turnkey solution for organizations that want the benefits of FedRAMP compliance, specifically for the container security requirements, without having to develop the internal expertise.

By integrating SBOM generation, vulnerability scanning, and policy enforcement into a single platform, Anchore Enterprise not only simplifies the path to compliance but also enhances overall software supply chain security. Through the deployment of Anchore Enterprise, companies can achieve and maintain compliance more quickly and with greater assurance. If you’re looking for an even deeper look at how to achieve all 7 of the container security requirements of FedRAMP with Anchore Enterprise, read our playbook: FedRAMP Pre-Assessment Playbook For Containers.

From Chaos to Compliance: Revolutionizing License Management with Automation

The ascent of both containerized applications and open-source software component building blocks has dramatically escalated the complexity of software and the burden of managing all of the associated licenses. Modern applications are often built from a mosaic of hundreds, if not thousands, of individual software components, each bound by its own potential licensing pitfalls. This intricate web of dependencies, akin to a supply chain, poses significant challenges not only for legal teams tasked with mitigating financial risks but also for developers who manage these components’ inventory and compliance.

Previously, license management was primarily a manual affair: software was less complex, and more of it was proprietary, 1st-party code that didn’t carry the same license compliance issues. Those original license management techniques haven’t kept up with the needs of modern, cloud-native application development. In this blog post, we discuss why automation is needed to keep managing licensing risk in modern software.

The Problem

Modern software is complex. This is fairly well known at this point, but in case you need a quick visual reminder, we’ve inserted two images to reinforce the idea:

Applications can be constructed from tens, hundreds, or even thousands of individual software components, each with its own license governing how it can be used. Modern software is so complex that this endlessly nested collection of dependencies is typically described with the metaphor of a supply chain, and an entire industry called software supply chain security has grown up to provide security solutions for this quagmire.

This is a complexity nightmare for legal teams that are tasked with managing the financial risk of an organization. It’s also a nightmare for the developers who are tasked with maintaining an inventory of all of the software dependencies in an organization and the associated license for each component.

Let’s walk through an example of how this normally manifests at a software startup. Assuming business is going well, you have a product and there are customers interested in purchasing your software. During the procurement cycle, your customer’s legal team will be tasked with assessing the risk of using your software. One part of that assessment is determining whether your software is safe to use from a licensing perspective, and to do so they will normally send over a document that looks like this:

As a software vendor, it will be your job to fill this out so that legal can approve the purchasing of your software and you can take that revenue to the bank.

Let’s say you fill this entire spreadsheet out manually. A developer would need to go through each dependency used in the software you sell and “scan” the codebase for all of the licensing metadata: component name, version number, OSS license (e.g., MIT, GPL, BSD), and so on. It would take some time and be quite tedious, but it isn’t an insurmountable task. In the end they would produce something like this:

This is workable in a world of once-in-a-while deployments and updates. It becomes exhausting in the world of continuous integration and delivery that the DevOps movement has created. Imagine having to produce a new document like this every time you push to production; DevOps has enabled some teams to push to production multiple times per day. Requiring a manually created document for every customer’s legal team on each release would wipe out most of the velocity gains that moving to DevOps created.

The Solution

The solution to this problem is automating the license discovery process. If software can scan your codebase and produce a document that exhaustively covers all of the building blocks of your application, you unlock the potential to have your DevOps cake and eat it too.

To this end, Anchore has created and open-sourced a tool that does just that.

Introducing Grant: Automated License Discovery

Grant is an open-source command line tool that scans and discovers the software licenses of all dependencies in a piece of open-source software. If you want to get a quick primer on what you can do with Grant, read our announcement blog post. Or if you’re ready to dive straight in, you can view all of the Grant documentation on its GitHub repo.

How does Grant Integrate into my Development Workflow?

As a software license scanner, Grant operates on a software inventory artifact like an SBOM, or directly on a container image. Let’s continue with the legal review example above to bring this to life: you are a software developer who has been tasked with finding all of the OSS licenses to provide to your customer’s legal team for review.

Not wanting to do this by hand, you open up your CLI and install Grant. From there you navigate to your artifact registry and pull down the latest image of your application’s production build. Right before you run the Grant license scan on the production container image, you notice that your team has been following software supply chain best practices and has already created an SBOM with a popular open-source tool called Syft. Instead of running the container image through Grant, which could take some time, you feed Grant the SBOM, which is already a JSON inventory of the application’s entire dependency tree. A few seconds later you have a full report of all of the licenses in your application.
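
In command form, that workflow looks roughly like the following. Exact Grant flags may differ between releases, so treat this as a sketch and check the Grant README for the current syntax.

    # Option 1: point Grant directly at the production image (slower, pulls and unpacks it)
    grant check registry.example.com/myapp:latest

    # Option 2: reuse the SBOM your team already generated with Syft (seconds, not minutes)
    syft registry.example.com/myapp:latest -o syft-json > sbom.json
    grant check sbom.json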

From here you export the full component inventory with the license enrichment into a spreadsheet and send this off to the customer’s legal team for review. A process that might have taken a full day or even multiple days to do by hand was finished in seconds with the power of open-source tooling.

Automating License Compliance with Policy

Grant can automate much of the most tedious work of protecting an organization from legal consequences, but when it is used as a one-off CLI tool there is still a human in the loop, which can create traffic jams. With this in mind, our OSS team launched Grant with support for policy-based filters that automate both the execution of license scans and the alerting on their results.

Let’s say your organization’s legal team has decided that using any GPL components in 1st-party software is too risky. By writing a policy that fails any build containing GPL-licensed components, integrating that policy check as early as the staging CI environment, and even letting developers run Grant ad hoc while they prototype a design, the odds of legally risky dependencies infiltrating production software drop precipitously.
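
A hedged sketch of such a deny-GPL rule is shown below. The configuration schema is an assumption modeled on Grant’s documented rule format, and the --config flag is likewise assumed; confirm both against the Grant README before relying on them.

    # Contents of .grant.yaml -- a deny-GPL rule (assumed schema):
    #
    #   rules:
    #     - name: deny-gpl
    #       pattern: "*gpl*"      # match any GPL-family license identifier
    #       mode: deny
    #       reason: "GPL-licensed components are not permitted in 1st-party software"

    # Run in CI: a non-zero exit code fails the stage whenever a denied license is found
    grant check sbom.json --config .grant.yaml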

How Anchore Can Help

Grant automates the license discovery process, which works well for small projects or software with infrequent releases. Things get much more complicated in the cloud-native, continuous integration/deployment paradigm of DevSecOps, where there are new releases multiple times per day. Having Grant generate the license data is great, but you suddenly have an explosion of data that itself needs to be managed.

This is where Anchore Enterprise steps in to fill the gap. The Anchore Enterprise platform is an end-to-end data management solution that incorporates all of Anchore’s open-source tooling for generating artifacts like SBOMs, vulnerability scans, and license scans. It also manages the massive amount of data a high-speed DevSecOps pipeline creates as part of its regular operation and, on top of that, applies a highly customizable policy engine that automates decision-making around the insights derived from those software supply chain artifacts.

Want to make sure that no GPL-licensed OSS components ever make it into your SDLC? No problem. Grant uncovers every component carrying that license, Anchore Enterprise centralizes the scans, and the Anchore policy engine alerts the developer who just pulled a new GPL-licensed OSS component into their development environment that they need to find a different component or they won’t be able to push their branch to staging. The shift-left principle of DevSecOps can be applied to LegalOps as well.

Conclusion

The advent of tools like Grant, an open-source license discovery solution developed by Anchore, marks a significant advancement in the realm of open-source license management. By automating the tedious process of license verification, Grant not only enhances operational efficiency but also integrates seamlessly into continuous integration/continuous delivery (CI/CD) environments. This capability is crucial in modern DevOps practices, which demand frequent and fast-paced updates. Grant’s ability to quickly generate comprehensive licensing reports transforms a potentially day-long task into a matter of seconds.

Anchore Enterprise extends this functionality by managing the deluge of data from continuous deployments and integrating a policy engine that automates compliance decisions. This ecosystem not only streamlines the process of license management but also empowers developers and legal teams to preemptively address compliance issues, thereby embedding legal safeguards directly into the software development lifecycle. This proactive approach ensures that as the technological landscape evolves, businesses remain agile yet compliant, ready to capitalize on opportunities without being bogged down by legal liabilities.

If you’re interested in hearing about the topics covered in this blog post directly from Anchore’s CTO, Dan Nurmi, and the maintainer of Grant, Christopher Phillips, you can watch the on-demand webinar here. Or join the Anchore Community Discourse forum to speak with our team directly. We look forward to hearing from you and reviewing your pull requests!

An Outline for Getting Up to Speed on the DoD Software Factory

This blog post is meant as a gateway to all things DoD software factory. We highlight content from across the Anchore universe that can help anyone get up to speed on what a DoD software factory is, why to use one, and how to build one. Treat it as an index: scan for the topics most interesting to you and follow the links to more detailed content.

What is a DoD Software Factory?

The short answer is a DoD Software Factory is an implementation of the DoD Enterprise DevSecOps Reference Design. A slightly longer answer comes from our DoD software factory primer:

A Department of Defense (DoD) Software Factory is a software development pipeline that embodies the principles and tools of the larger DevSecOps movement with a few choice modifications that conform to the extremely high threat profile of the DoD and DIB.

Note that the diagram below looks like a traditional DevOps pipeline. The difference is that security controls are layered into the environment to automate software component inventory, vulnerability scanning, and policy enforcement, which is what qualifies it as a DoD software factory.

Got the basics down? Go deeper and learn how Anchore can help you put the Sec into DevSecOps Reference Design by reading our DoD Software Factory Best Practices white paper.

Why do I want to utilize a DoD Software Factory?

For DoD programs, the primary reason to utilize a DoD software factory is that it is a requirement for achieving a continuous authorization to operate (cATO). The cATO standard specifically calls for software to be developed in a system that meets the DoD Enterprise DevSecOps Reference Design, and a DoD software factory is the generic implementation of that design standard.

For Federal Service Integrators (FSIs), the biggest reason to utilize a DoD software factory is that it is a standard approach to meeting DoD compliance and certification standards. By meeting a standard, such as CMMC Level 2, you expand your opportunity to work with DoD programs.

Continuous Authorization to Operate (cATO)

If you’re looking for more information on cATO, Anchore has written a comprehensive guide on navigating the cATO process that can be found on our blog:

DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images

The shift from traditional software delivery to DevSecOps in the Department of Defense (DoD) represents a crucial evolution in how software is built, secured, and deployed with a focus on efficiencies and speed. Our white paper advises on best practices that are setting new standards for security and efficiency in DoD software factories.

Cybersecurity Maturity Model Certification (CMMC)

The CMMC is the certification standard the DoD uses to vet FSIs from the defense industrial base (DIB). It is the gold standard for demonstrating to the DoD that your organization takes security seriously enough to work with the highest standards of any DoD program. The security controls that the CMMC references when determining certification are outlined in NIST 800-171. There are 17 total families of security controls that an organization has to address to achieve CMMC Level 2 certification, and a DoD software factory can help check a number of them off the list.

The specific families of controls that a DoD software factory helps meet are:

  • Access Control (AC)
  • Audit and Accountability (AU)
  • Configuration Management (CM)
  • Incident Response (IR)
  • Maintenance (MA)
  • Risk Assessment (RA)
  • Security Assessment and Authorization (CA)
  • System and Communications Protection (SC)
  • System and Information Integrity (SI)
  • Supply Chain Risk Management (SR)

If you’re looking for more information on how to apply software supply chain security to meet the CMMC, Anchore has published two blog posts on the topic:

NIST SP 800-171 & Controlled Unclassified Data: A Guide in Plain English

  • NIST SP 800-171 is the canonical list of security controls for meeting CMMC Level 2 certification. Anchore has broken down the entire 800-171 standard to give you an easy-to-understand overview.

Automated Policy Enforcement for CMMC with Anchore Enterprise

  • Policy Enforcement is the backbone of meeting the monitoring, enforcement and reporting requirements of the CMMC. In this blog post, we break down how Anchore Federal can meet a number of the controls specifically related to software supply chain security that are outlined in NIST 800-171.

How do I meet the DevSecOps Reference Design requirements?

The easy answer is to utilize a DoD Software Factory Managed Service Provider (MSP). Below, in the User Stories section, we take a deep dive into the US Air Force’s Platform One, given that it is the preeminent DoD software factory.

The DIY answer involves carefully reading and implementing the DoD Enterprise DevSecOps Reference Design. This document is massive but there are a few shortcuts you can utilize to help expedite your journey. 

Container Hardening

Deciding to utilize software containers in a DevOps pipeline is almost a foregone conclusion at this point. What is less well known is how to secure your containers, especially to meet the standards of a DoD software factory.

The DoD has published two guides that can help with this. The first is the DoD Container Hardening Guide, and the second is the Container Image Creation and Deployment Guide. Both name Anchore Federal as an approved container hardening scanner.

Anchore has published a number of blogs and even a white paper that condense the information in both of these guides into more digestible content. See below:

Container Security for U.S. Government Information Systems

  • This comprehensive white paper breaks down how to achieve a container build and deployment system that is hardened to the standards of a DoD software factory.

Enforcing the DoD Container Image and Deployment Guide with Anchore Federal

  • This blog post is great for those who are interested in seeing how Anchore Federal can turn all of the requirements of the DoD Container Hardening Guide and the Container Image Creation and Deployment Guide into an easy button.

Deep Dive into Anchore Federal’s Container Image Inspection and Vulnerability Management

  • This blog post deep dives into how to utilize Anchore Federal to find container vulnerabilities and alert or report on whether they are violating the security compliance required to be a DoD software factory.

Policy-based Software Supply Chain Security and Compliance

The power of a policy-based approach to software supply chain security is that it can be integrated directly into a DevOps pipeline and automate a significant amount of alerting, reporting and enforcement work. The blog posts below go into depth on how this automated approach to security and compliance can uplevel a DoD software factory:

A Policy Based Approach to Container Security & Compliance

  • This blog details how a policy-based platform works and how it can benefit both software supply chain security and compliance. 

The Power of Policy-as-Code for the Public Sector

  • This follow-up to the post above shows how the policy-based security platform outlined in the first blog post can have significant benefits to public sector organizations that have to focus on both internal information security and how to prove they are compliant with government standards.

Benefits of Static Image Inspection and Policy Enforcement

  • Getting a bit more technical, this blog details how a policy-based development workflow can be used as a security gate with deployment orchestration systems like Kubernetes.

Getting Started With Anchore Policy Bundles

  • An even deeper dive into what is possible with the policy-based security system provided by Anchore Enterprise, this blog gets into the nitty-gritty on how to configure policies to achieve specific security outcomes.

Unpacking the Power of Policy at Scale in Anchore

  • This blog shows how a security practitioner can extend the security signals that Anchore Enterprise collects with the assistance of a more flexible data platform like New Relic to derive more actionable insights.

Security Technical Implementation Guide (STIG)

The Security Technical Implementation Guides (STIGs) are excellent technical guides for configuring off-the-shelf software to DoD hardening standards. Anchore, a company focused on making security and compliance as simple as possible, has written extensively about how to use STIGs and achieve STIG compliance, especially for container-based DevSecOps pipelines, exactly the kind of software development environments that meet the standards of a DoD software factory. View our previous content below:

4 Ways to Prepare your Containers for the STIG Process

  • In this blog post, we give you four quick tips to help you prepare for the STIG process for software containers. Think of this as the amuse bouche to prepare you for the comprehensive white paper that comes next.

Navigating STIG Compliance for Containers

  • As promised, this is the extensive document that walks you through how to build a DevSecOps pipeline based on containers that is both high velocity and secure. Perfect for organizations that are aiming to roll their own DoD software factory.

User Stories

For the past decade, Anchore has been helping FSIs and DoD programs build DevSecOps practices that meet the criteria to be called a DoD software factory. We can write technical guides and best-practices documents until the end of time, but sometimes the best lessons are learned from real-life stories. Below are user stories that fill in the details of how a DoD software factory can be built from scratch:

DoD’s Pathway to Secure Software

  • Join Major Camdon Cady of Platform One and Anchore’s VP of Security, Josh Bressers, as they discuss the lessons learned from building a DoD software factory from the ground up. Watch this on-demand webinar to get all of the details in a laid-back, casual conversation between two luminaries in their field.

Development at Mach Speed

  • If you prefer a written format over video, this case study highlights how Platform One utilized Red Hat OpenShift and Anchore Federal to build their DoD software factory that has become the leading Managed Service Provider for DoD programs.

Conclusion

Similar to how the cloud has taken over the infrastructure discussion in the enterprise world, DoD software factories are quickly becoming the go-to solution for DoD programs and the FSIs that support them. Delivering on the DevOps promise of high-velocity development without compromising security, a DoD software factory is the one-stop shop for upgrading your software development practice into the modern age, with compliance as a bonus. If you’re looking for an easy button to infuse your DevOps pipeline with security and compliance without the headache of building it yourself, take a look at Anchore Federal and how it helps organizations layer software supply chain security into a DoD software factory and achieve a cATO.