SBOM Management: How to Tackle Sprawl and Secure Your Supply Chain

Software Bill of Materials (SBOM) has emerged as a pivotal technology to scale product innovation while taming the inevitable growth of complexity of modern software development. SBOMs are typically thought of as a comprehensive inventory of all software components—both open source and proprietary—within an application. But they are more than just a simple list of “ingredients”. They offer deeper insights that enable organizations to unlock enterprise-wide value. From automating security and compliance audits to assessing legal risks and scaling continuous regulatory compliance, SBOMs are central to the software development lifecycle.

However, as organizations begin to mature their SBOM initiative, the rapid growth in SBOM documents can quickly become overwhelming—a phenomenon known as SBOM sprawl. Below, we’ll outline how SBOM sprawl happens, why it’s a natural byproduct of a maturing SBOM program, and the best practices for wrangling the complexity to extract real value.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

How does SBOM Sprawl Happen?

SBOM adoption typically begins ad hoc: a software engineer starts self-scanning their source code to look for vulnerabilities, or a DevOps engineer integrates an open source SBOM generator into a non-critical CI/CD pipeline as a favor to a friendly security engineer. SBOMs begin to pile up in niches across the organization, and adoption continues to grow through casual conversation.

It’s only a matter of time before you accumulate a flood of SBOMs. As ad hoc adoption reaches scale, the volume of SBOM data balloons. We’re not all running at Google’s scale, but their SBOM growth chart still illustrates how new SBOM programs can quickly get out of hand.

Chart: growth in the number of SBOMs Google has generated over time.

We call this SBOM sprawl: the overabundance of unmanaged and unorganized SBOM documents. When left unchecked, SBOM sprawl prevents SBOMs from delivering on their use-cases, like real-time vulnerability discovery, automated compliance checks, or supply chain insights.

So, how do we regain control as our SBOM initiative grows? The answer lies in a centralized data store. Instead of letting SBOMs scatter, we bring them together in a robust SBOM management system that can index, analyze, and make these insights actionable. Think of it like any large data management problem. The same best practices used for enterprise data management—ETL pipelines, centralized storage, robust queries—apply to SBOMs as well.
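The ETL-style approach above can be sketched in a few lines. This is a minimal, hypothetical illustration (the app name, component data, and SQLite schema are all invented for the example), not a production design:

```python
import json
import sqlite3

def ingest_sbom(conn, app_name, sbom_json):
    """Load the components of a CycloneDX JSON document into a central table."""
    doc = json.loads(sbom_json)
    for comp in doc.get("components", []):
        conn.execute(
            "INSERT INTO components (app, name, version) VALUES (?, ?, ?)",
            (app_name, comp.get("name"), comp.get("version")),
        )
    conn.commit()

# Centralized store (in-memory here; a real system would use a durable database)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE components (app TEXT, name TEXT, version TEXT)")

# Hypothetical CycloneDX fragment for illustration only
sbom = json.dumps({
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "log4j-core", "version": "2.14.1"},
        {"name": "jackson-databind", "version": "2.13.0"},
    ],
})
ingest_sbom(conn, "payments-service", sbom)

# Centralized query: which apps ship log4j-core, and at what version?
rows = conn.execute(
    "SELECT app, version FROM components WHERE name = ?", ("log4j-core",)
).fetchall()
print(rows)  # [('payments-service', '2.14.1')]
```

Once SBOMs land in one queryable store, questions like "who ships this package?" become a single lookup rather than an organization-wide scavenger hunt.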

What is SBOM Management (SBOMOps)?

SBOM management—or SBOMOps—is defined as:

A data pipeline, centralized repository and query engine purpose-built to manage and extract actionable software supply chain insights from software component metadata (i.e., SBOMs).

In practical terms, SBOM management focuses on:

  • Reliably generating SBOMs at key points in the development lifecycle
  • Ingesting and converting 3rd-party supplier SBOMs
  • Storing the SBOM documents in a centralized repository
  • Enriching SBOMs from additional data sources (e.g., security, compliance, licensing, etc.)
  • Generating the use-case specific queries and reports that drive results

When done right, SBOMOps reduces chaos, making SBOMs instantly available for everything from zero-day vulnerability checks to open source license tracking—without forcing teams to manually search scattered locations for relevant files.

SBOM Management Best Practices

1. Store and manage SBOMs in a central repository

A single, central repository for SBOMs is the antidote to sprawl. Even if developer teams keep SBOM artifacts near their code repositories for local reference, the security organization should maintain a comprehensive SBOM inventory across all applications.

Why it matters:

  • Rapid incident response: When a new zero-day hits, you need a quick way to identify which applications are affected. That’s nearly impossible if your SBOM data is scattered or nonexistent.
  • Time savings: Instead of rescanning every application from scratch, you can consult the repository for an instant snapshot of dependencies.
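The incident-response benefit can be made concrete with a small sketch. The inventory, app names, and vulnerable versions below are hypothetical, and a real system would query the central repository rather than an in-memory dict:

```python
# Hypothetical SBOM inventory: app -> {(package, version), ...}
inventory = {
    "billing-api":  {("openssl", "1.1.1k"), ("zlib", "1.2.11")},
    "web-frontend": {("openssl", "3.0.1"), ("nginx", "1.21.0")},
    "batch-worker": {("zlib", "1.2.11"), ("curl", "7.79.0")},
}

def affected_apps(package, versions):
    """Return the apps shipping any vulnerable version of `package`."""
    bad = {(package, v) for v in versions}
    return sorted(app for app, comps in inventory.items() if comps & bad)

# Example: a zero-day lands in openssl 1.1.1k and 3.0.1
print(affected_apps("openssl", ["1.1.1k", "3.0.1"]))
# ['billing-api', 'web-frontend']
```

With a complete inventory, the blast radius of a new zero-day is a set intersection instead of an emergency rescan of every application.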

2. Support SBOM standards—but don’t stop there

SPDX and CycloneDX are the two primary SBOM standards today, each with their own unique strengths. Any modern SBOM management system should be able to ingest and produce both. However, the story doesn’t end there.
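Ingesting both standards usually means normalizing them to a common internal shape. A minimal sketch, using the top-level keys each JSON format defines (`bomFormat`/`components` for CycloneDX, `spdxVersion`/`packages` for SPDX) and hypothetical package data:

```python
def normalize(doc):
    """Map a CycloneDX or SPDX JSON document to (name, version) tuples."""
    if doc.get("bomFormat") == "CycloneDX":
        return [(c["name"], c.get("version")) for c in doc.get("components", [])]
    if "spdxVersion" in doc:
        return [(p["name"], p.get("versionInfo")) for p in doc.get("packages", [])]
    raise ValueError("unrecognized SBOM format")

# The same component expressed in each format (illustrative data)
cyclonedx = {"bomFormat": "CycloneDX",
             "components": [{"name": "requests", "version": "2.31.0"}]}
spdx = {"spdxVersion": "SPDX-2.3",
        "packages": [{"name": "requests", "versionInfo": "2.31.0"}]}

assert normalize(cyclonedx) == normalize(spdx) == [("requests", "2.31.0")]
```

The two formats carry overlapping but differently structured metadata, which is why format-agnostic normalization belongs in the ingestion layer rather than in every downstream query.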

Tooling differences:

  • Not all SBOM generation tools are created equal. For instance, Syft (OSS) and AnchoreCTL can produce a more comprehensive intermediate SBOM format containing additional metadata beyond what SPDX or CycloneDX alone might capture. This extra data can lead to:
    • More precise vulnerability detection
    • More granular policy enforcement
    • Fewer false positives

3. Require SBOMs from all 3rd-party software suppliers

It’s not just about your own code—any 3rd-party software you include is part of your software supply chain. SBOM best practice demands obtaining an SBOM from your suppliers, whether that software is open source or proprietary.

4. Generate SBOMs at each step in the development process and for each build

Yes, generating an SBOM with every build inflates your data management and storage needs. However, if you have an SBOM management system in place then storing these extra files is a non-issue—and the benefits are massive.

SBOM drift detection:
By generating SBOMs multiple times along the DevSecOps pipeline, you can detect any unexpected changes (drift) in your supply chain—whether accidental or malicious. This approach can thwart adversaries (e.g., APTs) that compromise upstream supplier code. An SBOM audit trail lets you pinpoint exactly when and where an unexpected code injection occurred.

5. Pay it forward: create an SBOM for your releases

If you expect third parties to provide you SBOMs, it’s only fair to provide your own to downstream users. This transparency fosters trust and positions you for future SBOM-related regulatory or compliance requirements. If SBOM compliance becomes mandatory (we think the industry is trending that direction), you’ll already be prepared.

How to Architect an SBOM Management System

The reference architecture below outlines the major components needed for a reliable SBOM management solution:

  1. Integrate SBOM Generation into CI/CD Pipeline: Embed an SBOM generator in every relevant DevSecOps pipeline stage to automate software composition analysis and SBOM generation.
  2. Ingest 3rd-party supplier SBOMs: Ingest and normalize external SBOMs from all 3rd-party software component suppliers.
  3. Send All SBOM Artifacts to Repository: Once generated, ensure each SBOM is automatically shipped to the central repository to serve as your source-of-truth for software supply chain data.
  4. Enrich SBOMs with critical business data: Enrich SBOMs from additional data sources (e.g., security, compliance, licensing, etc.)
  5. Stand Up Data Analysis & Visualization: Build or adopt dashboards and query tooling to generate the software supply chain insights desired from the scoped SBOM use-cases.
  6. Profit! (Figuratively): Gain comprehensive software supply chain visibility, faster incident response, and the potential to unlock advanced use-cases like automated policy-as-code enforcement.
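The policy-as-code enforcement mentioned in step 6 can be sketched as a severity-and-license gate. The policy values, severity scale, CVE IDs, and licenses below are hypothetical, for illustration only:

```python
POLICY = {
    "max_severity": "HIGH",          # fail on HIGH or CRITICAL findings
    "denied_licenses": {"AGPL-3.0"},
}
SEVERITY_RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def evaluate(findings, licenses):
    """Return the policy violations for one SBOM's enrichment data."""
    violations = []
    threshold = SEVERITY_RANK[POLICY["max_severity"]]
    for vuln_id, severity in findings:
        if SEVERITY_RANK[severity] >= threshold:
            violations.append(f"{vuln_id} is {severity}")
    for lic in licenses & POLICY["denied_licenses"]:
        violations.append(f"license {lic} is denied")
    return violations

result = evaluate([("CVE-2024-0001", "CRITICAL"), ("CVE-2024-0002", "LOW")],
                  {"MIT", "AGPL-3.0"})
print(result)  # two violations: the CRITICAL CVE and the AGPL license
```

Because the policy is data rather than tribal knowledge, the same rules can gate every pipeline consistently and be audited like any other code.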

You can roll your own SBOM management platform or opt for a turnkey solution 👇.

SBOM Management Using Anchore SBOM and Anchore Enterprise

Anchore offers a comprehensive SBOM management platform that helps you centralize and operationalize SBOM data—unlocking the full strategic value of your software bills of materials:

  • Anchore Enterprise
    • Out-of-the-box SBOM generation: Combined SCA scanning, SBOM generation and SBOM format support for full development environment coverage.
    • Purpose-built SBOM Inventory: Eliminates SBOM sprawl with a centralized SBOM repository to power SBOM use-cases like rapid incident response, license risk management and automated compliance.
    • DevSecOps Integrations: Drop-in plugins for standard DevOps tooling, including CI/CD pipelines, container registries, ticketing systems, and more.
    • SBOM Enrichment Data Pipeline: Fully automated data enrichment pipeline for security, compliance, licensing, etc. data.
    • Pre-built Data Analysis & Visualization: Delivers immediate insights on the health of your software supply chain with visualization dashboards for vulnerabilities, compliance, and policy checks.
    • Policy-as-Code Enforcement: Customizable policy rules guarantee continuous compliance, preventing high-risk or non-compliant components from entering production and reducing manual intervention.

Ready to get serious about securing your software supply chain?
Request a demo to learn how Anchore Enterprise can help you implement a scalable SBOM management solution.

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

The Top Ten List: The 2024 Anchore Blog

To close out 2024, we’re going to count down the top 10 hottest hits from the Anchore blog in 2024! The Anchore content team continued our tradition of delivering expert guidance, practical insights, and forward-looking strategies on DevSecOps, cybersecurity compliance, and software supply chain management.

This top ten list spotlights our most impactful blog posts from the year, each tackling a different angle of modern software development and security. Hot topics this year include: 

  • All things SBOM (software bill of materials)
  • DevSecOps compliance for the Department of Defense (DoD)
  • Regulatory compliance for federal government vendors (e.g., FedRAMP & SSDF Attestation)
  • Vulnerability scanning and management—a perennial favorite!

Our selection runs the gamut of knowledge needed to help your team stay ahead of the compliance curve, boost DevSecOps efficiency, and fully embrace SBOMs. So, grab your popcorn and settle in—it’s time to count down the blog posts that made the biggest splash this year!

The Top Ten List

10 | A Guide to Air Gapping

Kicking us off at number 10 is a blog that’s all about staying off the grid—literally. Ever wonder what it really means to keep your network totally offline? 

A Guide to Air Gapping: Balancing Security and Efficiency in Classified Environments breaks down the concept of “air gapping”—literally disconnecting a computer from the internet by leaving a “gap of air” between your computer and an ethernet cable. It is generally employed as a security practice to protect classified, military-grade data or similarly sensitive systems.

Our blog covers the perks, like knocking out a huge range of cyber threats, and the downsides, like having to manually update and transfer data. It also details how Anchore Enforce Federal Edition can slip right into these ultra-secure setups, blending top-notch protection with the convenience of automated, cloud-native software checks.

9 | SBOMs + Vulnerability Management == Open Source Security++

Coming in at number nine on our countdown is a blog that breaks down two of our favorite topics: SBOMs and vulnerability scanners—and how using SBOMs as your foundation for vulnerability management can level up your open source security game.

SBOMs and Vulnerability Management: OSS Security in the DevSecOps Era is all about getting a handle on: 

  • every dependency in your code, 
  • scanning for issues early and often, and 
  • speeding up the DevSecOps process so you don’t feel the drag of legacy security tools. 

By switching to this modern, SBOM-driven approach, you’ll see benefits like faster fixes, smoother compliance checks, and fewer late-stage security surprises—just ask companies like NVIDIA, Infoblox, DreamFactory and ModuleQ, who’ve saved tons of time and hassle by adopting these practices.

8 | Improving Syft’s Binary Detection

Landing at number eight, we’ve got a blog post that’s basically a backstage pass to Syft’s binary detection magic. Improving Syft’s Binary Detection goes deep on how Syft—Anchore’s open source SBOM generation tool—uncovers the details of executable files and how you can lend a hand in making it even better.

We walk you through the process of adding new binary detection rules, from finding the right binaries and testing them out, to fine-tuning the patterns that match their version strings. 

The end goal? Helping all open source contributors quickly get started making their first pull request and broadening support for new ecosystems. A thriving, community-driven approach to better securing the global open source ecosystem.

7 | A Guide to FedRAMP in 2024: FAQs & Key Takeaways

Sliding in at lucky number seven, we’ve got the ultimate cheat sheet for FedRAMP in 2024 (and 2025😉)! Ever wonder how Uncle Sam greenlights those fancy cloud services? A Guide to FedRAMP in 2024: FAQs & Key Takeaways spills the beans on all the FedRAMP basics you’ve been struggling to find—fast answers without all the fluff. 

It covers what FedRAMP is, how it works, who needs it, and why it matters; detailing the key players and how it connects with other federal security standards like FISMA. The idea is to help you quickly get a handle on why cloud service providers often need FedRAMP certification, what benefits it offers, and what’s involved in earning that gold star of trust from federal agencies. By the end, you’ll know exactly where to start and what to expect if you’re eyeing a spot in the federal cloud marketplace.

6 | Introducing Grant: OSS License Management

At number six on tonight’s countdown, we’re rolling out the red carpet for Grant—Anchore’s snazzy new software license-wrangling sidekick! Introducing Grant: An OSS project for inspecting and checking license compliance using SBOMs covers how Grant helps you keep track of software licenses in your projects. 

By using SBOMs, Grant can quickly show you which licenses are in play—and whether any have changed from something friendly to something more restrictive. With handy list and check commands, Grant makes it easier to spot and manage license risk, ensuring you keep shipping software without getting hit with last-minute legal surprises.

5 | An Overview of SSDF Attestation: Compliance Need-to-Knows

Landing at number five on tonight’s compliance countdown is a big wake-up call for all you software suppliers eyeing Uncle Sam’s checkbook: the SSDF Attestation Form. That’s right—starting now, if you wanna do business with the feds, you gotta show off those DevSecOps chops, no exceptions! In Using the Common Form for SSDF Attestation: What Software Producers Need to Know we break down the new Secure Software Development Attestation Form—released in March 2024—that’s got everyone talking in the federal software space. 

In short, if you’re a software vendor wanting to sell to the US government, you now have to “show your work” when it comes to secure software development. The form builds on the SSDF framework, turning it from a nice-to-have into a must-do. It covers which software is included, the timelines you need to know, and what happens if you don’t shape up.

There are real financial risks if you can’t meet the deadlines or if you fudge the details (hello, criminal penalties!). With this new rule, it’s time to get serious about locking down your dev practices or risk losing out on government contracts.

4 | Prep your Containers for STIG

At number four, we’re diving headfirst into the STIG compliance world—the DoD’s ultimate ‘tough crowd’ when it comes to security! If you’re feeling stressed about locking down those container environments—we’ve got you covered. 4 Ways to Prepare your Containers for the STIG Process is all about tackling the often complicated STIG process for containers in DoD projects. 

You’ll learn how to level up your team by cross-training cybersecurity pros in container basics and introducing your devs and architects to STIG fundamentals. It also suggests going straight to the official DISA source for current STIG info, making the STIG Viewer a must-have tool on everyone’s workstation, and looking into automation to speed up compliance checks. 

Bottom line: stay informed, build internal expertise, and lean on the right tools to keep the STIG process from slowing you down.

3 | Syft Graduates to v1.0!

Give it up for number three on our countdown—Syft’s big graduation announcement! In Syft Reaches v1.0! we celebrate Syft hitting the big 1.0 milestone!

Syft is Anchore’s OSS tool for generating SBOMs, helping you figure out exactly what’s inside your software, from container images to source code. Over the years, it’s grown to support over 40 package types, outputting SBOMs in various formats like SPDX and CycloneDX. With v1.0, Syft’s CLI and API are now stable, so you can rely on it for consistent results and long-term compatibility. 

But don’t worry—development doesn’t stop here. The team plans to keep adding support for more ecosystems and formats, and they invite the community to join in, share ideas, and contribute to the future of Syft.

2 | RAISE 2.0 Overview: RMF and ATO for the US Navy

Next up at number two is the lowdown on RAISE 2.0—your backstage pass to lightning-fast software approvals with the US Navy! In RMF and ATO with RAISE 2.0 — Navy’s DevSecOps solution for Rapid Delivery, we break down what RAISE 2.0 means for teams working with the Department of the Navy’s containerized software. The key takeaway? By using an approved DevSecOps platform—known as an RPOC—you can skip getting separate ATOs for every new app.

The guidelines encourage a “shift left” approach, focusing on early and continuous security checks throughout development. Tools like Anchore Enforce Federal Edition can help automate the required security gates, vulnerability scans, and policy checks, making it easier to keep everything compliant. 

In short, RAISE 2.0 is all about streamlining security, speeding up approvals, and helping you deliver secure code faster.

1 | Introduction to the DoD Software Factory

Taking our top spot at number one, we’ve got the DoD software factory—the true VIP of the DevSecOps world! We’re talking about a full-blown, high-security software pipeline that cranks out code for the defense sector faster than a fighter jet screaming across the sky. In Introduction to the DoD Software Factory we break down what a DoD software factory really is—think of it as a template to build a DoD-approved DevSecOps pipeline. 

The blog post details how concepts like shifting security left, using microservices, and leveraging automation all come together to meet the DoD’s sky-high security standards. Whether you choose an existing DoD software factory (like Platform One) or build your own, the goal is to streamline development without sacrificing security. 

Tools like Anchore Enforce Federal Edition can help with everything from SBOM generation to continuous vulnerability scanning, so you can meet compliance requirements and keep your mission-critical apps protected at every stage.

Wrap-Up

That wraps up the top ten Anchore blog posts of 2024! We covered it all: next-level software supply chain best practices, military-grade compliance tactics, and all the open-source goodies that keep your DevSecOps pipeline firing on all cylinders. 

The common thread throughout them all is the recognition that security and speed can work hand-in-hand. With SBOM-driven approaches, modern vulnerability management, and automated compliance checks, organizations can achieve the rapid, secure, and compliant software delivery required in the DevSecOps era. We hope these posts will serve as a guide and inspiration as you continue to refine your DevSecOps practice, embrace new technologies, and steer your organization toward a more secure and efficient future.

If you enjoyed our greatest hits album of 2024 but need more immediacy in your life, follow along in 2025 by subscribing to the Anchore Newsletter or following Anchore on your favorite social platform:


ModuleQ reduces vulnerability management time by 80% with Anchore Secure

ModuleQ, an AI-driven enterprise knowledge platform, knows only too well the stakes for a company providing software solutions in the highly regulated financial services sector. In this world where data breaches are cause for termination of a vendor relationship and evolving cyberthreats loom large, proactive vulnerability management is not just a best practice—it’s a necessity. 

ModuleQ required a vulnerability management platform that could automatically identify and remediate vulnerabilities, maintain airtight security, and meet stringent compliance requirements—all without slowing down their development velocity.

Learn the essential container security best practices to reduce the risk of software supply chain attacks in this white paper.

The Challenge: Scaling Security in a High-Stakes Environment

ModuleQ found itself drowning in a flood of newly released vulnerabilities—over 25,000 in 2023 alone. Operating in a heavily regulated industry meant any oversight could have severe repercussions. High-profile incidents like the Log4j exploit underscored the importance of supply chain security, yet the manual, resource-intensive nature of ModuleQ’s vulnerability management process made it hard to keep pace.

The mandate that no critical vulnerabilities reached production was a particularly high bar to meet with the existing manual review process. Each time engineers stepped away from their coding environment to check a separate security dashboard, they lost context, productivity, and confidence. The fear of accidentally letting something slip through the cracks was ever present.

The Solution: Anchore Secure for Automated, Integrated Vulnerability Management

ModuleQ chose Anchore Secure to simplify, automate, and fully integrate vulnerability management into their existing DevSecOps workflows. Instead of relying on manual security reviews, Anchore Secure injected security measures seamlessly into ModuleQ’s Azure DevOps pipelines, .NET, and C# environment. Every software build—staged nightly through a multi-step pipeline—was automatically scanned for vulnerabilities. Any critical issues triggered immediate notifications and halted promotions to production, ensuring that potential risks were addressed before they could ever reach customers.

Equally important, Anchore’s platform was built to operate in on-prem or air-gapped environments. This guaranteed that ModuleQ’s clients could maintain the highest security standards without the need for external connectivity. For an organization whose customers demand this level of diligence, Anchore’s design provided peace of mind and strengthened client relationships.

Results: Faster, More Secure Deployments

By adopting Anchore Secure, ModuleQ dramatically accelerated and enhanced its vulnerability management approach:

  • 80% Reduction in Vulnerability Management Time: Automated scanning, triage, and reporting freed the team from manual checks, letting them focus on building new features rather than chasing down low-priority issues.
  • 50% Less Time on Security Tasks During Deployments: Proactive detection of high-severity vulnerabilities streamlined deployment workflows, enabling ModuleQ to deliver software faster—without compromising security.
  • Unwavering Confidence in Compliance: With every new release automatically vetted for critical vulnerabilities, ModuleQ’s customers in the financial sector gained renewed trust. Anchore’s support for fully on-prem deployments allowed ModuleQ to meet stringent security demands consistently.

Looking Ahead

In an era defined by unrelenting cybersecurity threats, ModuleQ proved that speed and security need not be at odds. Anchore Secure provided a turnkey solution that integrated seamlessly into their workflow, saving time, strengthening compliance, and maintaining the agility to adapt to future challenges. By adopting an automated security backbone, ModuleQ has positioned itself as a trusted and reliable partner in the financial services landscape.

Looking for more details? Read the ModuleQ case study in full. If you’re ready to move forward see all of the features on Anchore Secure’s product page or reach out to our team to schedule a demo.

The Evolution of SBOMs in the DevSecOps Lifecycle: Part 2

Welcome back to the second installment of our two-part series on “The Evolution of SBOMs in the DevSecOps Lifecycle”. In our first post, we explored how Software Bills of Materials (SBOMs) evolve over the first 4 stages of the DevSecOps pipeline—Plan, Source, Build & Test—and how each type of SBOM serves different purposes. Some of those use-cases include: shift left vulnerability detection, regulatory compliance automation, OSS license risk management and incident root cause analysis.

In this part, we’ll continue our exploration with the final 4 stages of the DevSecOps lifecycle, examining:

  • Analyzed SBOMs at the Release (Registry) stage
  • Deployed SBOMs during the Deployment phase
  • Runtime SBOMs in Production (Operate & Monitor stages)

As applications migrate down the pipeline, design decisions made at the beginning begin to ossify, becoming more difficult to change; this influences the challenges that arise and the role SBOMs play in overcoming them. New challenges at these stages include pipeline leaks, vulnerabilities in third-party packages, and runtime injection, all of which introduce significant risk. Understanding how SBOMs evolve across these stages helps organizations mitigate these risks effectively.

Whether you’re aiming to enhance your security posture, streamline compliance reporting, or improve incident response times, this comprehensive guide will equip you with the knowledge to leverage SBOMs effectively from Release to Production. Additionally, we’ll offer pro tips to help you maximize the benefits of SBOMs in your DevSecOps practices.

So, let’s continue our journey through the DevSecOps pipeline and discover how SBOMs can transform the latter stages of your software development lifecycle.

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Release (or Registry) => Analyzed SBOM

After development is completed and the new release of the software is declared a “golden” image, the build system pushes the release artifact to a registry for storage until it is deployed. At this stage, an SBOM generated from these container images, binaries, etc. is named an “Analyzed SBOM” by CISA. The name is a little confusing, since all SBOMs should be analyzed regardless of the stage at which they are generated. A more appropriate name might be “Release SBOM,” but we’ll stick with CISA’s name for now.

At first glance, it would seem that Analyzed SBOMs and the final Build SBOMs should be identical since they describe the same software, but that doesn’t hold up in practice. DevSecOps pipelines aren’t hermetically sealed systems; they can be “leaky”. You might be surprised what finds its way into this storage repository and eventually gets deployed, bypassing your carefully constructed build and test setup.

On top of that, the registry holds more than just first-party applications that are built in-house. It also stores 3rd-party container images like operating systems and any other self-contained applications used by the organization.

The additional metadata that is collected for an Analyzed SBOM includes:

  • Release images that bypass the happy path build and test pipeline
  • 3rd-party container images, binaries and applications

Pros and Cons

Pros:

  • Comprehensive Artifact Inventory: A more holistic view of all software—both 1st- and 3rd-party—that is utilized in production.
  • Enhanced Security and Compliance Posture: Catches vulnerabilities and non-compliant images for all software that will be deployed to production. This reduces the risk of security incidents and compliance violations.
  • Third-Party Supply Chain Risk Management: Provides insights into the vulnerabilities and compliance status of third-party components.
  • Ease of implementation: This stage is typically the lowest lift for implementation given that most SBOM generators can be deployed standalone and pointed at the registry to scan all images.

Cons:

  • High Risk for Release Delays: Scanning images at this stage is akin to traditional waterfall-style development patterns. Most design decisions are baked in, and changes typically incur a steep penalty.
  • Difficult to Push Feedback into Existing Workflow: The registry sits outside of typical developer workflows, and creating a feedback loop that seamlessly reports issues without changing the developer’s process is a non-trivial amount of work.
  • Complexity in Management: Managing SBOMs for both internally developed and third-party components adds complexity to the software supply chain.

Use-Cases

  • Software Supply Chain Security: Organizations can detect vulnerabilities in both their internally developed software and external software to prevent supply chain injections from leading to a security incident.
  • Compliance Reporting: Reporting on both 1st- and 3rd-party software is necessary for industries with strict regulatory requirements.
  • Detection of Leaky Pipelines: Identifies release images that have bypassed the standard build and test pipeline, allowing teams to take corrective action.
  • Third-Party Risk Analysis: Assesses the security and compliance of third-party container images, binaries, and applications before they are deployed.

Example: An organization subject to strict compliance standards like FedRAMP or cATO uses Analyzed SBOMs to verify that all artifacts in their registry, including third-party applications, comply with security policies and licensing requirements. This practice not only enhances their security posture but also streamlines the audit process.
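Leaky-pipeline detection from the use-cases above reduces to comparing what the registry holds against what the build pipeline has SBOMs for. The image digests below are hypothetical placeholders:

```python
# Digests of images with a build SBOM on record (hypothetical)
pipeline_sboms = {"sha256:aaa1", "sha256:bbb2"}

# Digests of everything actually sitting in the registry
registry_images = {"sha256:aaa1", "sha256:bbb2", "sha256:ccc3"}

# Anything in the registry with no build SBOM bypassed the pipeline
leaked = registry_images - pipeline_sboms
for digest in sorted(leaked):
    print(f"no build SBOM on record for {digest}: flag for review")
```

Generating an Analyzed SBOM for each flagged image then restores visibility into artifacts that skipped the happy path.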

Pro Tip

A registry is an easy and non-invasive way to test and evaluate potential SBOM generators. It won’t give you a full picture of what can be found in your DevSecOps pipeline but it will at least give you an initial idea of its efficacy and help you make the decision on whether to go through the effort of integrating it into your build pipeline where it will produce deeper insights.

Deploy => Deployed SBOM

As your container orchestrator deploys an image from your registry into production, it also orchestrates any production dependencies, such as sidecar containers. At this stage, a generated SBOM is named a “Deployed SBOM” by CISA.

The ideal scenario is that your operations team is storing all of these images in the same central registry as your engineering team but—as we’ve noted before—reality diverges from the ideal.

The additional metadata that is collected for a Deployed SBOM includes:

  • Any additional sidecar containers or production dependencies that are injected or modified through a release controller.

Pros and Cons

Pros:

  • Enhanced Security Posture: The final gate to prevent vulnerabilities from being deployed into production. This reduces the risk of security incidents and compliance violations.
  • Leaky Pipeline Detection: Another location to increase visibility into the happy path of the DevSecOps pipeline being circumvented.
  • Compliance Enforcement: Some compliance standards require a deployment breaking enforcement gate before any software is deployed to production. A container orchestrator release controller is the ideal location to implement this.

Cons:

Essentially the same issues that come up during the release phase.

  • High Risk for Release Delays: Scanning images at this stage happens later than even traditional waterfall-style development patterns and will incur a steep penalty if an issue is uncovered.
  • Difficult to Push Feedback into Existing Workflow: A deployment release controller sits outside of typical developer workflows, and creating a feedback loop that seamlessly reports issues without changing the developer’s process is a non-trivial amount of work.

Use-Cases

  • Strict Software Supply Chain Security: Implementing a pipeline breaking gating mechanism is typically reserved for only the most critical security vulnerabilities (think: an actively exploitable known vulnerability).
  • High-Stakes Compliance Enforcement: Industries like defense, financial services and critical infrastructure will require vendors to implement a deployment gate for specific risk scenarios beyond actively exploitable vulnerabilities.
  • Compliance Audit Automation: Most regulatory compliance frameworks mandate audit artifacts at deploy time; these documents can be automatically generated and stored for future audits.

Example: A Deployed SBOM can be used as the source of truth for generating a report that demonstrates that no HIGH or CRITICAL vulnerabilities were deployed to production during an audit period.

Pro Tip

Combine a Deployed SBOM with a container vulnerability scanner that cross-checks all vulnerabilities against CISA’s Known Exploited Vulnerabilities (KEV) catalog. In the scenario where a matching KEV is found for a software component, you can configure your vulnerability scanner to return a FAIL response to your release controller and abort the deployment.

This strategy strikes an ideal balance: it avoids adding delays to software delivery while still blocking the rare deployments that carry an extremely high probability of causing a security incident.
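The gating logic in this tip can be sketched in a few lines. This is a hypothetical sketch, not a specific vendor’s API: it assumes you have already parsed your scanner’s findings into a list of CVE IDs and loaded CISA’s KEV feed (published as JSON) into a set of CVE IDs.

```python
def kev_gate(found_cves, kev_catalog):
    """Cross-check scanner findings against the KEV catalog.

    found_cves: CVE IDs discovered in the image's Deployed SBOM.
    kev_catalog: set of CVE IDs loaded from CISA's KEV feed.
    Returns ("FAIL", matches) when any finding is a known exploited
    vulnerability, so the release controller can abort the deployment.
    """
    matches = set(found_cves) & set(kev_catalog)
    return ("FAIL" if matches else "PASS", matches)
```

A FAIL here would be wired to whatever abort mechanism your release controller exposes; only the set-intersection decision is shown.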

Operate & Monitor (or Production) => Runtime SBOM

After your container orchestrator has deployed an application into your production environment, it is live and serving customer traffic. SBOMs generated at this stage don’t have a name specified by CISA; they are sometimes referred to as “Runtime SBOMs”. SBOMs are still a relatively new standard and will continue to evolve.

The additional metadata that is collected for a Runtime SBOM includes:

  • Modifications (i.e., intentional hotfixes or malicious code injection) made to running applications in your production environment.

Pros and Cons

Pros:

  • Continuous Security Monitoring: Identifies new vulnerabilities that emerge after deployment.
  • Active Runtime Inventory: Provides a canonical view into an organization’s active software landscape.
  • Low Lift Implementation: Deploying SBOM generation into a production environment typically only requires deploying the scanner as another container and giving it permission to access the rest of the production environment.

Cons:

  • No Shift Left Security: Runtime scanning is, by definition, excluded from a shift left security posture; issues are only found after deployment.
  • Potential for Release Rollbacks: Scanning images at this stage is the worst possible place for proactive remediation. Discovering a vulnerability could potentially cause a security incident and force a release rollback.

Use-Cases

  • Rapid Incident Management: When new critical vulnerabilities are discovered and announced by the community, the first priority for an organization is to determine exposure. An accurate production inventory, down to the component-level, is needed to answer this critical question.
  • Threat Detection: Continuously monitoring for anomalous activity linked to specific components. Sealing your system off completely from advanced persistent threats (APTs) is an unfeasible goal. Instead, quick detection and rapid intervention is the scalable solution to limit the impact of these adversaries.
  • Patch Management: As new releases of 3rd-party components and applications are released an inventory of impacted production assets provides helpful insights that can direct the prioritization of engineering efforts.

Example: When the XZ Utils vulnerability was announced in the spring of 2024, organizations that already automatically generated a Runtime SBOM inventory ran a simple search query against their SBOM database and knew within minutes—or even seconds—whether they were impacted.
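As a sketch of what that search can look like: assuming Runtime SBOMs are stored as SPDX JSON files in a single directory (the directory layout is an illustrative assumption), a few lines of Python answer the exposure question for the xz backdoor (CVE-2024-3094).

```python
import json
from pathlib import Path

# Versions of xz/liblzma affected by the 2024 backdoor (CVE-2024-3094).
AFFECTED = {("xz-utils", "5.6.0"), ("xz-utils", "5.6.1"),
            ("liblzma", "5.6.0"), ("liblzma", "5.6.1")}

def impacted_sboms(sbom_dir):
    """Return the names of Runtime SBOM files containing an affected package."""
    hits = []
    for path in sorted(Path(sbom_dir).glob("*.spdx.json")):
        doc = json.loads(path.read_text())
        found = {(p.get("name"), p.get("versionInfo"))
                 for p in doc.get("packages", [])}
        if found & AFFECTED:
            hits.append(path.name)
    return hits
```

At real scale the SBOMs would live in a database rather than a directory of files, but the query is the same set-membership check.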

Pro Tip

If you want to learn how Google went from an all-hands-on-deck security incident when the XZ Utils vulnerability was announced to an all clear in under 10 minutes, watch our webinar with the lead of Google’s SBOM initiative.

Wrap-Up

As the SBOM standard has evolved, the subject has grown considerably. What started as a structured way to store information about open source licenses has expanded to cover numerous use-cases. A clear understanding of the evolution of SBOMs throughout the DevSecOps lifecycle is essential for organizations aiming to solve problems ranging from software supply chain security to regulatory compliance to legal risk management.

SBOMs are a powerful tool in the arsenal of modern software development. By recognizing their importance and integrating them thoughtfully across the DevSecOps lifecycle, you position your organization at the forefront of secure, efficient, and compliant software delivery.

Ready to secure your software supply chain and automate compliance tasks with SBOMs? Anchore is here to help. We offer SBOM management, vulnerability scanning and compliance automation enforcement solutions. If you still need some more information before looking at solutions, check out our webinar below on scaling a secure software supply chain with Kubernetes. 👇👇👇

Learn how Spectro Cloud secured their Kubernetes-based software supply chain and the pivotal role SBOMs played.

The Evolution of SBOMs in the DevSecOps Lifecycle: From Planning to Production

The software industry has wholeheartedly adopted the practice of building new software on the shoulders of the giants that came before. To accomplish this, developers piece together a foundation of pre-built, 3rd-party components, then wrap custom 1st-party code around this structure to create novel applications. It is an extraordinarily innovative and productive practice, but it also introduces challenges ranging from security vulnerabilities to compliance headaches to legal risk nightmares. Software bills of materials (SBOMs) have emerged to provide solutions for these wide-ranging problems.

An SBOM provides a detailed inventory of all the components that make up an application at a point in time. However, it’s important to recognize that not all SBOMs are the same—even for the same piece of software! SBOMs evolve throughout the DevSecOps lifecycle, just as an application evolves from source code to a container image to a running application. The Cybersecurity and Infrastructure Security Agency (CISA) has codified this idea by differentiating between the different types of SBOMs. Each type serves different purposes and captures information about an application through its evolutionary process.

In this 2-part blog series, we’ll deep dive into each stage of the DevSecOps process and the associated SBOM, highlighting the differences, the benefits and disadvantages, and the use-cases that each type of SBOM supports. Whether you’re just beginning your SBOM journey or looking to deepen your understanding of how SBOMs can be integrated into your DevSecOps practices, this comprehensive guide will provide valuable insights and advice from industry experts.

Learn about the role that SBOMs play in the security, including open source software (OSS) security, of your organization in this white paper.

Types of SBOMs and the DevSecOps Pipeline

Over the past decade, the US government got serious about software supply chain security and began advocating for SBOMs as the standardized approach to the problem. As part of this initiative, CISA created the Types of Software Bill of Material (SBOM) Documents white paper, which codified the definitions of the different types of SBOMs and mapped them to each stage of the DevSecOps lifecycle. We will discuss each in turn, but before we do, let’s anchor on some terminology to prevent confusion or misunderstanding.

Below is a diagram that lays out each stage of the DevSecOps lifecycle as well as the naming convention we will use going forward.

With that out of the way, let’s get started!

Plan => Design SBOM

As the DevSecOps paradigm has spread across the software industry, a notable best practice known as the security architecture review has become integral to the development process. This practice embodies the DevSecOps goal of integrating security into every phase of the software lifecycle, aligning perfectly with the concept of Shift-Left Security—addressing security considerations as early as possible.

At this stage, the SBOM documents the planned components of the application. The CISA refers to SBOMs generated during this phase as Design SBOMs. These SBOMs are preliminary and outline the intended components and dependencies before any code is written.

The metadata that is collected for a Design SBOM includes:

  • Component Inventory: Identifying potential OSS libraries and frameworks to be used as well as the dependency relationship between the components.
  • Licensing Information: Understanding the licenses associated with selected components to ensure compliance.
  • Risk Assessment Data: Evaluating known vulnerabilities and security risks associated with each component.

This might sound like a lot of extra work, but if you’re already performing DevSecOps-style planning that incorporates a security and legal review, as is best practice, you’re already surfacing all of this information. The only difference is that this preliminary data is formatted and stored in a standardized data structure, namely an SBOM.

Pros and Cons

Pros:

  • Maximal Shift-Left Security: Vulnerabilities cannot be found any earlier in the software development process. Design time security decisions are the peak of a proactive security posture and preempt bad design decisions before they become ingrained into the codebase.
  • Cost Efficiency: Resolving security issues at this stage is generally less expensive and less disruptive than during later stages of development or—worst of all—after deployment.
  • Legal and Compliance Risk Mitigation: Ensures that all selected components meet necessary compliance standards, avoiding legal complications down the line.

Cons:

  • Upfront Investment: Gathering detailed information on potential components and maintaining an SBOM at this stage requires a non-trivial commitment of time and effort.
  • Incomplete Information: Projects are not static; they will adapt as unplanned challenges surface. A Design SBOM likely won’t stay relevant for long.

Use-Cases

There are a number of use-cases that are enabled by a Design SBOM:

  • Security Policy Enforcement: Automatically checking proposed components against organizational security policies to prevent the inclusion of disallowed libraries or frameworks.
  • License Compliance Verification: Ensuring that all components comply with the project’s licensing requirements, avoiding potential legal issues.
  • Vendor and Third-Party Risk Management: Assessing the security posture of third-party components before they are integrated into the application.
  • Enhance Transparency and Collaboration: A well-documented SBOM provides a clear record of the software’s components and, more importantly, evidence that the project aligns with the goals of all of the stakeholders (engineering, security, legal, etc.). This builds trust and creates a collaborative environment that increases the chances that each individual stakeholder’s outcome will be achieved.
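The first two use-cases above lend themselves to simple automation. The sketch below is illustrative only; the policy lists and the component dictionary shape are assumptions for this example, not a standard schema.

```python
# Hypothetical organizational policy lists.
LICENSE_ALLOWLIST = {"Apache-2.0", "MIT", "BSD-3-Clause"}
DISALLOWED_COMPONENTS = {"log4j-core"}  # e.g., banned pending security review

def review_design_sbom(components):
    """Check planned components, as captured in a Design SBOM, against policy.

    components: list of dicts with 'name' and 'license' keys.
    Returns a list of human-readable policy violations (empty = pass).
    """
    violations = []
    for c in components:
        if c["name"] in DISALLOWED_COMPONENTS:
            violations.append(f"{c['name']}: component is disallowed by policy")
        if c["license"] not in LICENSE_ALLOWLIST:
            violations.append(f"{c['name']}: license {c['license']} is not on the allowlist")
    return violations
```

A check like this can run against the Design SBOM during the security architecture review, before any code is written.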

Example:

A financial services company operating within a strict regulatory environment uses SBOMs during planning to ensure that all components meet standards like PCI DSS. By doing so, they prevent the incorporation of insecure components that won’t pass PCI compliance. This reduces the risk of the financial penalties associated with security breaches and regulatory non-compliance.

Pro Tip

If your organization is still early in the maturity of its SBOM initiative, then we generally recommend moving the integration of design time SBOMs to the back of the queue. As we mentioned at the beginning of this section, the information stored in a Design SBOM is naturally surfaced during the DevSecOps process; as long as that information is being recorded and stored, much of the value of a Design SBOM will be captured in the artifact. This level of SBOM integration is best saved for later maturity stages when your organization is ready to begin exploring deeper levels of insights that have a higher risk-to-reward ratio.

Alternatively, if your organization is having difficulty getting your teams to adopt a collaborative DevSecOps planning process, mandating an SBOM as a requirement can act as a forcing function to catalyze a cultural shift.

Source => Source SBOM

During the development stage, engineers implement the selected 3rd-party components into the codebase. CISA refers to SBOMs generated during this phase as Source SBOMs. The SBOMs generated here capture the actual implemented components and additional information that is specific to the developer who is doing the work.

The additional metadata that is collected for a Source SBOM includes:

  • Dependency Mapping: Documenting direct and transitive dependencies.
  • Identity Metadata: Adding contributor and commit information.
  • Developer Environment: Captures information about the development environment.

Unlike Design SBOMs, which are typically created manually, these SBOMs can be generated programmatically with a software composition analysis (SCA) tool—like Syft. They are usually packaged as command line interfaces (CLIs) since this is the preferred interface for developers.

If you’re looking for an SBOM generation tool (SCA embedded), we have a comprehensive list of options to make this decision easier.

Pros and Cons

Pros:

  • Accurate and Timely Component Inventory: Reflects the actual components used in the codebase and tracks changes as the codebase is actively being developed.
  • Shift-Left Vulnerability Detection: Identifies vulnerabilities as components are integrated but requires commit level automation and feedback mechanisms to be effective.
  • Facilitates Collaboration and Visibility: Keeps all stakeholders informed about divergence from the original plan and provokes conversations as needed. This is also dependent on automation to record changes during development and on notification systems to broadcast the updates.

Example: A developer adds a new logging library, such as an outdated version of Log4j, to the project. The SBOM, paired with a vulnerability scanner, immediately flags the Log4Shell vulnerability, prompting the engineer to update to a patched version.

Cons:

  • Noise from Developer Toolchains: Developer environments are often bespoke, which creates noise for security teams by recording development-only dependencies.
  • Potential Overhead: Continuous updates to the SBOM can be resource-intensive when done manually; the only resource-efficient method is to use an SBOM generation tool that automates the process.
  • Possibility of Missing Early Risks: Issues not identified during planning may surface here, requiring code changes.

Use-Cases

  • Faster Root Cause Analysis: During service incident retrospectives, questions arise about where, when, and by whom a specific component was introduced into an application. Source SBOMs are the programmatic record that can provide answers and decrease manual root cause analysis.
  • Real-Time Security Alerts: Immediate notification of vulnerabilities upon adding new components, decreasing time to remediation and keeping security teams informed.
  • Automated Compliance Checks: Ensuring added components comply with security or license policies to manage compliance risk.
  • Effortless Collaboration: Stakeholders can subscribe to a live feed of changes and immediately know when implementation diverges from the plan.
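As an illustration of the root cause analysis use-case above, assume Source SBOMs are generated per commit and stored as SPDX JSON with a filename convention of timestamp-commit.spdx.json (the convention is an assumption for this sketch, not a standard):

```python
import json
from pathlib import Path

def first_introduced(component, sbom_dir):
    """Walk per-commit Source SBOMs in chronological order and return the
    filename of the earliest SBOM containing the component, or None.

    Assumes filenames sort chronologically (e.g. <timestamp>-<commit>.spdx.json).
    """
    for path in sorted(Path(sbom_dir).glob("*.spdx.json")):
        doc = json.loads(path.read_text())
        if any(p.get("name") == component for p in doc.get("packages", [])):
            return path.name
    return None
```

The commit hash embedded in the returned filename answers the "where, when, and by whom" question without manual archaeology.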

Pro Tip

Some SBOM generators allow developers to specify development dependencies that should be ignored, similar to a .gitignore file. This can help cut down on the noise created by unique developer setups.
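If your generator of choice lacks a built-in ignore option, a post-processing filter achieves a similar effect. This is a generic sketch over an SPDX-style document, not a feature of any particular tool:

```python
def strip_dev_dependencies(sbom_doc, ignore_names):
    """Return a copy of an SPDX-style SBOM dict with packages whose names
    appear in the ignore list removed; a post-processing analog of a
    .gitignore file. The original document is left untouched."""
    ignored = set(ignore_names)
    filtered = dict(sbom_doc)
    filtered["packages"] = [p for p in sbom_doc.get("packages", [])
                            if p.get("name") not in ignored]
    return filtered
```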

Build & Test => Build SBOM

When a developer pushes a commit to the CI/CD build system an automated process initiates that converts the application source code into an artifact that can then be deployed. CISA refers to SBOMs generated during this phase as Build SBOMs. These SBOMs capture both source code dependencies and build tooling dependencies.

The additional metadata that is collected includes:

  • Build Dependencies: Build tooling such as the language compilers, testing frameworks, package managers, etc.
  • Binary Analysis Data: Metadata for compiled binaries that don’t utilize traditional container formats.
  • Configuration Parameters: Details on build configuration files that might impact security or compliance.

Pros and Cons

Pros:

  • Build Infrastructure Analysis: Captures build-specific components which may have their own vulnerability or compliance issues.
  • Reuses Existing Automation Tooling: Enables programmatic security and compliance scanning as well as policy enforcement without introducing any additional build tooling.
  • Native Developer Feedback Loop: Directly integrates with the developer workflow. Engineers receive security, compliance, and other feedback without the need to reference a new tool.
  • Reproducibility: Facilitates reproducing builds for debugging and auditing.

Cons:

  • SBOM Sprawl: Build processes run frequently; if each run generates an SBOM, you will find yourself with a glut of files that you will have to manage.
  • Delayed Detection: Vulnerabilities or non-compliance issues found at this stage may require rework.

Use-Cases

  • SBOM Drift Detection: By comparing SBOMs from two or more stages, unexpected dependency injection can be detected. This might take the form of a benign, leaky build pipeline that requires manual workarounds or a malicious actor attempting to covertly introduce malware. Either way this provides actionable and valuable information.
  • Policy Enforcement: Enables the creation of build breaking gates to enforce security or compliance. For high-stakes operating environments like defense, financial services or critical infrastructure, automating security and compliance at the expense of some developer friction is a net-positive strategy.
  • Automated Compliance Artifacts: Compliance requires proof in the form of reports and artifacts. Re-utilizing existing build tooling automation to automate this task significantly reduces the manual work required by security teams to meet compliance requirements.
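The SBOM drift detection use-case above boils down to a set difference between the package inventories of two stages. A minimal sketch, assuming both SBOMs are SPDX JSON files:

```python
import json

def package_set(sbom_path):
    """Return the set of (name, version) pairs from an SPDX JSON SBOM."""
    with open(sbom_path) as f:
        doc = json.load(f)
    return {(p.get("name"), p.get("versionInfo", ""))
            for p in doc.get("packages", [])}

def detect_drift(earlier_sbom, later_sbom):
    """Packages present in the later-stage SBOM but absent from the earlier
    one. Unexpected entries may indicate a leaky pipeline or a malicious
    dependency injection and warrant investigation."""
    return package_set(later_sbom) - package_set(earlier_sbom)
```

Comparing a Source SBOM against a Build SBOM this way surfaces anything the build environment introduced that the developers never declared.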

Example: A security scan during testing uses the Build SBOM to identify a critical vulnerability and alerts the responsible engineer. The remediation process is initiated and a patch is applied before deployment.

Pro Tip

If your organization is just beginning its SBOM journey, this is the recommended phase of the DevSecOps lifecycle to implement SBOMs first. The two primary cons of this phase are the easiest to mitigate. For SBOM sprawl, you can procure a turnkey SBOM management solution like Anchore SBOM.

As for the delay in feedback created by waiting until the build phase, if your team is utilizing DevOps best practices and breaking features up into smaller components that fit into 2-week sprints, then this tight scoping will limit the impact of any significant vulnerabilities or non-compliance discovered.

Intermission

So far we’ve covered the first half of the DevSecOps lifecycle. Next week we will publish the second part of this blog series where we’ll cover the remainder of the pipeline. Watch our socials to be sure you get notified when part 2 is published.

If you’re looking for some additional reading in the meantime, check out our container security white paper below.

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Choosing the Right SBOM Generator: A Framework for Success

Choosing the right SBOM (software bill of materials) generator is trickier than it looks at first glance. SBOMs are the foundation for a number of different uses ranging from software supply chain security to continuous regulatory compliance. Due to its cornerstone nature, the SBOM generator that you choose will either pave the way for achieving your organization’s goals or become a roadblock that delays critical initiatives.

But how do you navigate the crowded market of SBOM generation tools to find the one that aligns with your organization’s unique needs? It’s not merely about selecting a tool with the most features or the nicest CLI. It’s about identifying a solution that maps directly to your desired outcomes and use-cases, whether that’s rapid incident response, proactive vulnerability management, or compliance reporting.

We at Anchore have been enabling organizations to achieve their SBOM-related outcomes with the least amount of frustration and setbacks possible. We’ve compiled our learnings on choosing the right SBOM generation tool into a framework to help the wider community make decisions that set them up for success.

Below is a quick TL;DR of the high-level evaluation criteria that we cover in this blog post:

  • Understanding Your Use-Cases: Aligning the tool with your specific goals.
  • Ecosystem Compatibility: Ensuring support for your programming languages, operating systems, and build artifacts.
  • Data Accuracy: Evaluating the tool’s ability to provide comprehensive and precise SBOMs.
  • DevSecOps Integration: Assessing how well the tool fits into your existing DevSecOps tooling.
  • Proprietary vs. Open Source: Weighing the long-term implications of your choice.

By focusing on these key areas, you’ll be better equipped to select an SBOM generator that not only meets your current requirements but also positions your organization for future success.

Learn about the role that SBOMs play in the security, including open source software (OSS) security, of your organization in this white paper.

Know your use-cases

When choosing from the array of SBOM generation tools in the market, it is important to frame your decision with the outcome(s) that you are trying to achieve. If your goal is to improve the response time/mean time to remediation when the next Log4j-style incident occurs—and be sure that there will be a next time—an SBOM tool that excels at correctly identifying open source licenses in a code base won’t be the best solution for your use-case (even if you prefer its CLI ;-D).

What to Do:

  • Identify and prioritize the outcomes that your organization is attempting to achieve
  • Map the outcomes to the relevant SBOM use-cases
  • Review each SBOM generation tool to determine whether they are best suited to your use-cases

It can be tempting to prioritize an SBOM generator that is best suited to our preferences and workflows; we are the ones that will be using the tool regularly—shouldn’t we prioritize what makes our lives easier? If we prioritize our needs above the goal of the initiative we might end up putting ourselves into a position where our choice in tools impedes our ability to recognize the desired outcome. Using the correct framing, in this case by focusing on the use-cases, will keep us focused on delivering the best possible outcome.

SBOMs can be utilized for numerous purposes: security incident response, open source license compliance, proactive vulnerability management, compliance reporting or software supply chain risk management. We won’t address every use-case in this blog post; a more comprehensive treatment of all of the potential SBOM use-cases can be found on our website.

Example SBOM Use-Cases:

  • Security incident response: an inventory of all applications and their dependencies that can be queried quickly and easily to identify whether a newly announced zero-day impacts the organization.
  • Proactive vulnerability management: all software and dependencies are scanned for vulnerabilities as part of the DevSecOps lifecycle and remediated based on organizational priority.
  • Regulatory compliance reporting: compliance artifacts and reports are automatically generated by the DevSecOps pipeline to enable continuous compliance and prevent manual compliance work.
  • Software supply chain risk management: an inventory of software components with identified vulnerabilities used to inform organizational decision making when deciding between remediating risk versus building new features.
  • Open source license compliance: an inventory of all software components and the associated OSS license to measure potential legal exposure.

Pro tip: While you will inevitably leave many SBOM use-cases out of scope for your current project, keeping secondary use-cases in the back of your mind while making a decision on the right SBOM tool will set you up for success when those secondary use-cases eventually become a priority in the future.

Does the SBOM generator support your organization’s ecosystem of programming languages, etc?

SBOM generators aren’t just tools to ingest data and re-format it into a standardized format. They are typically paired with a software composition analysis (SCA) tool that scans an application/software artifact for metadata that will populate the final SBOM.

Support for the complete array of programming languages, build artifacts and operating system ecosystems is essentially an impossible task. This means that support varies significantly depending on the SBOM generator that you select. An SBOM generator’s ability to help you reach your organizational goals is directly related to its support for your organization’s software tooling preferences. This will likely be one of the most important qualifications when choosing between different options and will rule out many that don’t meet the needs of your organization.

Considerations:

  • Programming Languages: Does the tool support all languages used by your team?
  • Operating Systems: Can it scan the different OS environments your applications run on top of?
  • Build Artifacts: Does the tool scan containers? Binaries? Source code repositories? 
  • Frameworks and Libraries: Does it recognize the frameworks and libraries your applications depend on?

Data accuracy

This is one of the most important criteria when evaluating different SBOM tools. An SBOM generator may claim support for a particular programming language but after testing the scanner you may discover that it returns an SBOM with only direct dependencies—honestly not much better than a package.json or go.mod file that your build process spits out.

Two different tools might both generate a valid SPDX SBOM document when run against the same source artifact, but the content of those documents can vary greatly. This variation depends on what the tool can inspect, understand, and translate. Being able to fully scan an application for both direct and transitive dependencies, as well as navigate non-idiomatic patterns for how software can be structured, ends up being the true differentiator between the field of SBOM generation contenders.

Imagine using two SBOM tools on a Debian package. One tool recognizes Debian packages and includes detailed information about them in the SBOM. The other can’t fully parse the Debian .deb format and omits critical information. Both produce an SBOM, but only one provides the data you need to power use-case based outcomes like security incident response or proactive vulnerability management.

Let’s make this example more concrete by simulating this difference with Syft, Anchore’s open source SBOM generation tool:

$ syft -q -o spdx-json nginx:latest > nginx_a.spdx.json
$ grype -q nginx_a.spdx.json | grep Critical
libaom3             3.6.0-1+deb12u1          (won't fix)       deb   CVE-2023-6879     Critical    
libssl3             3.0.14-1~deb12u2         (won't fix)       deb   CVE-2024-5535     Critical    
openssl             3.0.14-1~deb12u2         (won't fix)       deb   CVE-2024-5535     Critical    
zlib1g              1:1.2.13.dfsg-1          (won't fix)       deb   CVE-2023-45853    Critical

In this example, we first generate an SBOM using Syft then run it through Grype—our vulnerability scanning tool. Syft + Grype uncover 4 critical vulnerabilities.

Now let’s try the same thing but “simulate” an SBOM generator that can’t fully parse the structure of the software artifact in question:

$ syft -q -o spdx-json --select-catalogers "-dpkg-db-cataloger,-binary-classifier-cataloger" nginx:latest > nginx_b.spdx.json 
$ grype -q nginx_b.spdx.json | grep Critical
$

In this case, none of the critical vulnerabilities found by the former configuration are returned.

This highlights the importance of careful evaluation of the SBOM generator that you decide on. It could mean the difference between effective vulnerability risk management and a security incident.

Can the SBOM tool integrate into your DevSecOps pipeline?

If the SBOM generator is packaged as a self-contained binary with a command line interface (CLI) then it should tick this box. CI/CD build tools are most amenable to this deployment model. If the SBOM generation tool in question isn’t a CLI then it should at least run as a server with an API that can be called as part of the build process.

Integrating with an organization’s DevSecOps pipeline is key to enable a scalable SBOM generation process. By implementing SBOM creation directly into the existing build tooling, organizations can leverage existing automation tools to ensure consistency and efficiency which are necessary for achieving the desired outcomes.
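As a minimal sketch of that integration, a CI step can shell out to the generator and propagate its exit code so a failure breaks the build. Syft is shown as the default command here, but the generator tuple is a swappable assumption, not a requirement:

```python
import subprocess

def generate_sbom(image, out_path, generator=("syft", "-o", "spdx-json")):
    """Run a CLI SBOM generator as a build step, writing its stdout to
    out_path. Returns the process exit code so the pipeline can fail the
    build when SBOM generation fails."""
    with open(out_path, "w") as out:
        return subprocess.run([*generator, image], stdout=out).returncode
```

The same wrapper works for any generator that follows the CLI-plus-stdout convention; server-based tools would be called over their API instead.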

Proprietary vs. open source SBOM generator?

Using an open source SBOM tool is considered an industry best practice because it guards against the risks associated with vendor lock-in. As a bonus, the ecosystem for open source SBOM generation tooling is very healthy. OSS will always have an advantage over proprietary tooling with regard to ecosystem coverage and data quality because it gets into the hands of more users, which creates a feedback loop that closes gaps in coverage and quality.

Finally, even if your organization decides to utilize a software supply chain security product that has its own proprietary SBOM generator, it is still better to create your SBOMs with an open source SBOM generator, export them to a standardized format (e.g., SPDX or CycloneDX), then have your software supply chain security platform ingest these non-proprietary data structures. All platforms will be able to ingest SBOMs from one or both of these standards-based formats.

Wrap-Up

In a landscape where the next security/compliance/legal challenge is always just around the corner, equipping your team with the right SBOM generator empowers you to act swiftly and confidently. It’s an investment not just in a tool, but in the resilience and security of your entire software supply chain. By making a thoughtful, informed choice now, you’re laying the groundwork for a more secure and efficient future.

Automate STIG Compliance with MITRE SAF: the Fastest Path to ATO

Trying to get your head around STIG (Security Technical Implementation Guides) compliance? Anchore is here to help. With the help of the MITRE Security Automation Framework (SAF), we’ll walk you through the quickest path to STIG compliance and, ultimately, the coveted Authority to Operate (ATO).

The goal for any company that aims to provide software services to the Department of Defense (DoD) is an ATO. Without this stamp of approval your software will never get into the hands of the warfighters that need it most. STIG compliance is a necessary needle that must be threaded on the path to ATO. Luckily, MITRE has developed and open-sourced SAF to smooth the often complex and time-consuming STIG compliance process.

We’ll get you up to speed on MITRE SAF and how it helps you achieve STIG compliance in this blog post, but before we jump straight into the content, be sure to bookmark our webinar with the Chief Architect of the MITRE Security Automation Framework (SAF), Aaron Lippold. Josh Bressers, VP of Security at Anchore, and Lippold provide a behind-the-scenes look at SAF and how it dramatically reduces the friction of the STIG compliance process.

What is the MITRE Security Automation Framework (SAF)?

The MITRE SAF is both a high-level cybersecurity framework and an umbrella that encompasses a set of security/compliance tools. It is designed to simplify STIG compliance by translating DISA (Defense Information Systems Agency) SRG (Security Requirements Guide) guidance into actionable steps. 

By following the Security Automation Framework, organizations can streamline and automate the hardened configuration of their DevSecOps pipeline to achieve an ATO (Authority to Operate).

The SAF offers four primary benefits:

  1. Accelerate Path to ATO: By streamlining STIG compliance, SAF enables organizations to get their applications into the hands of DoD operators faster. This acceleration is crucial for meeting operational demands without compromising on security standards.
  2. Establish Security Requirements: SAF translates SRGs and STIGs into actionable steps tailored to an organization’s specific DevSecOps pipeline. This eliminates ambiguity and ensures security controls are implemented correctly.
  3. Build Security In: The framework provides tooling that can be directly embedded into the software development pipeline. By automating STIG configurations and policy checks, it ensures that security measures are consistently applied, leaving no room for false steps.
  4. Assess and Monitor Vulnerabilities: SAF offers visualization and analysis tools that assist organizations in making informed decisions about their current vulnerability inventory. It helps chart a path toward achieving STIG compliance and ultimately an ATO.

The overarching vision of the MITRE SAF is to “implement evolving security requirements while deploying apps at speed.” In essence, it allows organizations to have their cake and eat it too—gaining the benefits of accelerated software delivery without letting cybersecurity risks grow unchecked.

How does MITRE SAF work?

MITRE SAF is segmented into 5 capabilities that map to specific stages of the DevSecOps pipeline or STIG compliance process:

  1. Plan
  2. Harden
  3. Validate
  4. Normalize
  5. Visualize

Let’s break down each of these capabilities.

Plan

There are hundreds of existing STIGs for products ranging from Microsoft Windows to Cisco routers to MySQL databases. On the off chance that a product your team wants to use doesn’t have a pre-existing STIG, SAF’s Vulcan tool helps translate the applicable SRG into a tailored STIG that can then be used to achieve compliance.

Vulcan helps streamline the process of creating STIG-ready security guidance and the accompanying InSpec automated policy that confirms a specific instance of software is configured in a compliant manner.

Vulcan does this by modeling the STIG intent form and tailoring the applicable SRG controls into a finished STIG for an application. The finished STIG is then sent to DISA for peer review and formal publishing as a STIG. Vulcan allows the author to develop both human-readable instructions and machine-readable InSpec automated validation code at the same time.

Diagram of process to map SRG controls to STIG guidelines via the MITRE SAF Vulcan CLI tool; an automated conversion tool to speed up STIG compliance process.

Harden

The hardening capability focuses on automating STIG compliance through the use of pre-built infrastructure configuration scripts. SAF hardening content allows organizations to:

  • Use their preferred configuration management tools: Chef Cookbooks, Ansible Playbooks, Terraform Modules, etc. are available as open source templates on MITRE’s GitHub page.
  • Share and collaborate: All hardening content is open source, encouraging community involvement and shared learning.
  • Cover the full development stack: Ensuring that every layer, from cloud infrastructure to applications, adheres to security standards.

Validate

The validation capability focuses on verifying that the hardened systems meet the applicable STIG compliance standard. These validation checks are automated via the SAF CLI tool, which incorporates the InSpec policies for a STIG. With SAF CLI, organizations can:

  • Automatically validate STIG compliance: By integrating SAF CLI directly into your CI/CD pipeline and invoking InSpec policy checks at every build, shifting security left by surfacing policy violations early.
  • Promote community collaboration: Like the hardening content, validation scripts are open source and accessible by the community for collaborative efforts.
  • Span the entire development stack: Validation—similar to hardening—isn’t limited to a single layer; it encompasses cloud infrastructure, platforms, operating systems, databases, web servers, and applications.
  • Incorporate manual attestation: To achieve comprehensive coverage of policy requirements that automated tools might not fully address.
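As an illustration of the validation gate described above, the sketch below parses InSpec-style JSON results and surfaces failed controls so a CI job can fail the build. The structure mirrors InSpec’s JSON reporter (profiles → controls → results), but the sample data and control IDs are invented; in a real pipeline you would run the SAF CLI’s InSpec policy checks rather than hand-rolling this.

```python
import json

# Trimmed InSpec-style JSON results (profiles -> controls -> results[].status).
# The profile name and control IDs here are invented for illustration.
results_json = """
{
  "profiles": [{
    "name": "sample-stig-baseline",
    "controls": [
      {"id": "V-1001", "results": [{"status": "passed"}]},
      {"id": "V-1002", "results": [{"status": "failed"}]},
      {"id": "V-1003", "results": [{"status": "skipped"}]}
    ]
  }]
}
"""

def failed_controls(raw: str) -> list[str]:
    """Return the IDs of controls with at least one failed result."""
    doc = json.loads(raw)
    failed = []
    for profile in doc.get("profiles", []):
        for control in profile.get("controls", []):
            if any(r.get("status") == "failed" for r in control.get("results", [])):
                failed.append(control["id"])
    return failed

# In CI, a non-empty list would fail the build (exit non-zero).
print(failed_controls(results_json))
```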

Normalize

Normalization addresses the challenge of interoperability between different security tools and data formats. SAF CLI performs double-duty by taking on the normalization function as well as validation. It is able to:

  • Translate data into OHDF: The OASIS Heimdall Data Format (OHDF) is an open standard that structures countless proprietary security metadata formats into a single universal format.
  • Leverage open source OHDF libraries: Organizations can use OHDF converters as libraries within their custom applications.
  • Automate data conversion: By incorporating SAF CLI into the DevSecOps pipeline, data is automatically standardized with each run.
  • Increase compliance efficiency: A single data format for all security data enables interoperability and facilitates efficient, automated STIG compliance.

Example: Below is an example of Burp Suite’s proprietary data format normalized to the OHDF JSON format:

Image of Burp Suite data format being mapped to MITRE SAF's OHDF to reduce manual data mapping and reduce time to STIG compliance.
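To show the shape of this normalization step without the real converters, here is a hand-rolled Python sketch that maps a hypothetical scanner’s proprietary findings into a simplified OHDF-like structure. The field names are a simplification, not the full OHDF schema, and the severity-to-impact mapping is an assumption; in practice you would use the open source OHDF converters in the SAF CLI.

```python
# Hypothetical proprietary scanner output, invented for illustration.
proprietary_findings = [
    {"issue": "TLS certificate expired", "risk": "High", "host": "10.0.0.5"},
    {"issue": "Directory listing enabled", "risk": "Low", "host": "10.0.0.7"},
]

# Assumed mapping from the scanner's risk labels to a numeric impact score.
RISK_TO_IMPACT = {"High": 0.7, "Medium": 0.5, "Low": 0.3}

def to_ohdf_like(findings: list[dict]) -> dict:
    """Normalize proprietary findings into one OHDF-like document."""
    controls = [
        {
            "id": f"finding-{i}",
            "title": f["issue"],
            "impact": RISK_TO_IMPACT[f["risk"]],
            "results": [{"status": "failed", "code_desc": f["host"]}],
        }
        for i, f in enumerate(findings, start=1)
    ]
    return {"profiles": [{"name": "hypothetical-scanner", "controls": controls}]}

doc = to_ohdf_like(proprietary_findings)
print([c["title"] for c in doc["profiles"][0]["controls"]])
```

Once every tool’s output lands in one shape like this, downstream analysis and visualization only have to understand a single format.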

Visualize

Visualization is critical for understanding security posture and making informed decisions. SAF provides an open source, self-hosted visualization tool named Heimdall. It ingests OHDF normalized security data and provides the data analysis tools to enable organizations to:

  • Aggregate security and compliance results: Compiling data into comprehensive rollups, charts, and timelines for a holistic view of security and compliance status.
  • Perform deep dives: Allowing teams to explore detailed vulnerability information to facilitate investigation and remediation, ultimately speeding up time to STIG compliance.
  • Guide risk reduction efforts: Visualized insights help prioritize security and compliance tasks, reducing risk in the most efficient manner.

How is SAF related to a DoD Software Factory?

A DoD Software Factory is the common term for a DevSecOps pipeline that meets the definition laid out in the DoD Enterprise DevSecOps Reference Design. All software that ultimately achieves an ATO has to be built on a fully implemented DoD Software Factory. You can either build your own or use a pre-existing DoD Software Factory like the US Air Force’s Platform One or the US Navy’s Black Pearl.

As we saw earlier, MITRE SAF is a framework meant to help you achieve STIG compliance and is one portion of your journey towards an ATO. STIG compliance applies to both the software that you write and the DevSecOps platform that your software is built on. Building your own DoD Software Factory means committing to going through the ATO process and STIG compliance for the DevSecOps platform first, then a second time for the end-user application.

Wrap-Up

The MITRE SAF is a huge leg up for modern, cloud-native DevSecOps software vendors that are currently navigating the labyrinth towards ATO. By providing actionable guidance, automation tooling, and a community-driven approach, SAF dramatically reduces the time to ATO. It bridges the gap between the speed of DevOps software delivery and secure, compliant applications ready for critical DoD missions with national security implications. 

Embracing SAF means more than just meeting regulatory requirements; it’s about building a resilient, efficient, and secure development pipeline that can adapt to evolving security demands. In an era where cybersecurity threats are evolving just as rapidly as software, leveraging frameworks like MITRE SAF is not just an efficient path to compliance—it’s essential for sustained success.

Introducing Anchore Data Service and Anchore Enterprise 5.10

We are thrilled to announce the release of Anchore Enterprise 5.10, our tenth release of 2024. This update brings two major enhancements that will elevate your experience and bolster your security posture: the new Anchore Data Service (ADS) and expanded AnchoreCTL ecosystem coverage. 

With ADS, we’ve built a fast and reliable solution that reduces time spent by DevSecOps teams debugging intermittent network issues from flaky services that are vital to software supply chain security.

On top of that, we have buffed our software composition analysis (SCA) scanner’s ecosystem coverage (e.g., C++, Swift, Elixir, R, etc.) for all Anchore customers. To do this we embedded Syft, our popular, open source SCA/SBOM (software bill of materials) generator, directly into Anchore Enterprise.

It’s been a fall of big releases at Anchore and we’re excited to continue delivering value to our loyal customers. Read on to get all of the gory details >>

Announcing the Anchore Data Service

Previously, customers ran the Anchore Feed Service within their environment to pull data feeds into their Anchore Enterprise deployment. To get an idea of what this looked like, see the architecture diagram of Anchore Enterprise prior to version 5.10:

Originally we did this to give customers more control over their environment. Unfortunately, this wasn’t without its issues. The data feeds are provided by the community, which means the services were designed to be accessible but run cost-efficiently. As a result, they were unreliable, with frequent accessibility issues.

We only have to stretch our memory back to the spring to recall an example that made national headlines. The National Vulnerability Database (NVD) ran into funding issues. This impacted both the publication of new vulnerability records AND the availability of their API. This created significant friction for Anchore customers—not to mention the entirety of the software industry.

Now, Anchore is running its own enterprise-grade service, named Anchore Data Service (ADS). It is a replacement for the former feed service. ADS aggregates all of the community data feeds, enriches the data (with proprietary threat data) and packages it for customers, all with the service availability guarantee expected of an enterprise service.

The new architecture with ADS as the intermediary is illustrated below:

As a bonus for our customers running air-gapped deployments of Anchore Enterprise, there is no more need to run a second deployment of Anchore Enterprise in a DMZ to pull down the data feeds. Instead, a single file is pulled from ADS and transferred to a USB thumb drive. From there, a single CLI command is run to update your air-gapped deployment of Anchore Enterprise.

Increased AnchoreCTL Ecosystem Coverage

We have increased the number of supported ecosystems (e.g., C++, Swift, Elixir, R, etc.) in Anchore Enterprise. This improves coverage and increases the likelihood that all of your organization’s applications can be scanned and protected by Anchore Enterprise.

More importantly, we have completely re-architected the process for how Anchore Enterprise supports new ecosystems. By integrating Syft—Anchore’s open source SBOM generation tool—directly into AnchoreCTL, Anchore’s customers will now get access to new ecosystem support as they are merged into Syft’s codebase.

Before, Syft and AnchoreCTL were somewhat separate, which caused AnchoreCTL’s support for new ecosystems to lag Syft’s. Now, they are fully integrated. This enables all of Anchore’s enterprise and public sector customers to take full advantage of the open source community’s development velocity.

Full list of supported ecosystems

Below is a complete list of all supported ecosystems by both Syft and AnchoreCTL (as of Anchore Enterprise 5.10; see our docs for most current info):

  • Alpine (apk)
  • C (conan)
  • C++ (conan)
  • Dart (pubs)
  • Debian (dpkg)
  • Dotnet (deps.json)
  • Objective-C (cocoapods)
  • Elixir (mix)
  • Erlang (rebar3)
  • Go (go.mod, Go binaries)
  • Haskell (cabal, stack)
  • Java (jar, ear, war, par, sar, nar, native-image)
  • JavaScript (npm, yarn)
  • Jenkins Plugins (jpi, hpi)
  • Linux kernel archives (vmlinuz)
  • Linux kernel modules (ko)
  • Nix (outputs in /nix/store)
  • PHP (composer)
  • Python (wheel, egg, poetry, requirements.txt)
  • Red Hat (rpm)
  • Ruby (gem)
  • Rust (cargo.lock)
  • Swift (cocoapods, swift-package-manager)
  • WordPress plugins

After you update to Anchore Enterprise 5.10, the SBOM inventory will display all of the new ecosystems. Any SBOMs that have been generated for a particular ecosystem will show up at the top. The screenshot below gives you an idea of what this will look like:

Wrap-Up

Anchore Enterprise 5.10 marks a new chapter in providing reliable, enterprise-ready security tooling for modern software development. The introduction of the Anchore Data Service ensures that you have consistent and dependable access to critical vulnerability and exploit data, while the expanded ecosystem support means that no part of your tech stack is left unscrutinized for latent risk. Upgrade to the latest version and experience these new features for yourself.

To update and leverage these new features, check out our docs, reach out to your Customer Success Engineer, or contact our support team. Your feedback is invaluable to us, and we look forward to continuing to support your organization’s security needs. We are offering all of our product updates as a new quarterly product update webinar series. Watch the fall webinar update in the player below to get all of the juicy tidbits from our product team.

Compliance Requirements for DISA’s Security Technical Implementation Guides (STIGs)

In the rapidly modernizing landscape of cybersecurity compliance, evolving to a continuous compliance posture is more critical than ever—particularly for organizations involved with the Department of Defense (DoD) and other government agencies. At the heart of the DoD’s modern approach to software development is the DoD Enterprise DevSecOps Reference Design, commonly implemented as a DoD Software Factory.

A key component of this framework is adhering to the Security Technical Implementation Guides (STIGs) developed by the Defense Information Systems Agency (DISA). STIG compliance within the DevSecOps pipeline not only accelerates the delivery of secure software but also embeds robust security practices directly into the development process, safeguarding sensitive data and reinforcing national security.

This comprehensive guide will walk you through what STIGs are, who should care about them, the levels of STIG compliance, key categories of STIG requirements, how to prepare for the STIG compliance process, and the tools available to automate STIG implementation and maintenance.

What are STIGs and who should care?

Understanding DISA and STIGs

The Defense Information Systems Agency (DISA) is the DoD agency responsible for delivering information technology (IT) support to ensure the security of U.S. national defense systems. To help organizations meet the DoD’s rigorous security controls, DISA develops Security Technical Implementation Guides (STIGs).

STIGs are configuration standards that provide prescriptive guidance on how to secure operating systems, network devices, software, and other IT systems. They serve as a secure configuration standard to harden systems against cyber threats.

For example, a STIG for the open source Apache web server would specify that encryption is enabled for all traffic (incoming or outgoing). This would require generating SSL/TLS certificates on the server in the correct location, updating the server’s configuration file to reference this certificate, and re-configuring the server to serve traffic from a secure port rather than the default insecure port.
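A minimal sketch of what that kind of hardening might look like in practice, assuming Apache httpd with mod_ssl: the directives below are standard mod_ssl configuration, but the paths, host name, and exact settings are illustrative placeholders rather than the actual STIG text.

```apache
# Illustrative hardening in the spirit of a web server STIG item.
# Certificate paths and ServerName are placeholders.
Listen 443
<VirtualHost *:443>
    ServerName www.example.mil
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/server.crt
    SSLCertificateKeyFile /etc/pki/tls/private/server.key
</VirtualHost>

# Refuse plaintext service on the default insecure port by
# redirecting all HTTP traffic to HTTPS.
<VirtualHost *:80>
    Redirect permanent / https://www.example.mil/
</VirtualHost>
```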

Who should care about STIG compliance?

STIG compliance is mandatory for any organization that operates within the DoD network or handles DoD information. This includes:

  • DoD Contractors and Vendors: Companies providing products or services to the DoD—a.k.a. the defense industrial base (DIB)—must ensure their systems comply with STIG requirements.
  • Government Agencies: Federal agencies interfacing with the DoD need to adhere to applicable STIGs.
  • DoD Information Technology Teams: IT professionals within the DoD responsible for system security must implement STIGs.

Connection to the RMF and NIST SP 800-53

The Risk Management Framework (RMF)—more formally NIST 800-37—is a framework that integrates security and risk management into IT systems as they are being developed. The STIG compliance process outlined below is directly integrated into the higher-level RMF process. As you follow the RMF, the individual steps of STIG compliance will be completed in turn.

STIGs are also closely connected to the NIST 800-53, colloquially known as the “Control Catalog”. NIST 800-53 outlines security and privacy controls for all federal information systems; the controls are not prescriptive about the implementation, only the best practices and outcomes that need to be achieved. 

As DISA developed the STIG compliance standard, they started with the NIST 800-53 controls then “tailored” them to meet the needs of the DoD; these customized security best practices are known as Security Requirements Guides (SRGs). In order to remove all ambiguity around how to meet these higher-level best practices, STIGs were created with implementation-specific instructions.

For example, an SRG will mandate that all systems utilize a cybersecurity best practice, such as role-based access control (RBAC), to prevent users without the correct privileges from accessing certain systems. A STIG, on the other hand, will detail exactly how to configure an RBAC system to meet the highest security standards.

Levels of STIG Compliance

The DISA STIG compliance standard uses Severity Category Codes to classify vulnerabilities based on their potential impact on system security. These codes help organizations prioritize remediation efforts. The three Severity Category Codes are:

  1. Category I (Cat I): These are the highest risk vulnerabilities, allowing an attacker immediate access to a system or network or allowing superuser access. Due to their high-risk nature, these vulnerabilities must be addressed immediately.
  2. Category II (Cat II): These vulnerabilities provide information with a high potential of giving access to intruders. These findings are considered a medium risk and should be remediated promptly.
  3. Category III (Cat III): These vulnerabilities constitute the lowest risk, providing information that could potentially lead to compromise. Although not as pressing as Cat I & II issues, it is still important to address these vulnerabilities to minimize risk and enhance overall security.

Understanding these categories is crucial in the STIG process, as they guide organizations in prioritizing remediation of vulnerabilities.
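The prioritization guidance above can be reduced to a tiny triage routine: remediate Cat I findings first, then Cat II, then Cat III. A minimal Python sketch, with invented finding IDs:

```python
# Remediation priority by Severity Category Code: Cat I first.
CATEGORY_ORDER = {"Cat I": 0, "Cat II": 1, "Cat III": 2}

# Sample findings; the IDs are invented for illustration.
findings = [
    {"id": "V-222400", "category": "Cat III"},
    {"id": "V-222388", "category": "Cat I"},
    {"id": "V-222396", "category": "Cat II"},
]

def triage(items: list[dict]) -> list[str]:
    """Return finding IDs ordered by remediation priority."""
    return [f["id"] for f in sorted(items, key=lambda f: CATEGORY_ORDER[f["category"]])]

print(triage(findings))
```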

Key categories of STIG requirements

Given the extensive range of technologies used in DoD environments, there are hundreds of STIGs applicable to different systems, devices, applications, and more. While we won’t list all STIG requirements here, it’s important to understand the key categories and who they apply to.

1. Operating System STIGs

Applies to: System Administrators and IT Teams managing servers and workstations

Examples:

  • Microsoft Windows STIGs: Provides guidelines for securing Windows operating systems.
  • Linux STIGs: Offers secure configuration requirements for various Linux distributions.

2. Network Device STIGs

Applies to: Network Engineers and Administrators

Examples:

  • Network Router STIGs: Outlines security configurations for routers to protect network traffic.
  • Network Firewall STIGs: Details how to secure firewall settings to control access to networks.

3. Application STIGs

Applies to: Software Developers and Application Managers

Examples:

  • Generic Application Security STIG: Outlines the security best practices needed to be STIG compliant
  • Web Server STIG: Provides security requirements for web servers.
  • Database STIG: Specifies how to secure database management systems (DBMS).

4. Mobile Device STIGs

Applies to: Mobile Device Administrators and Security Teams

Examples:

  • Apple iOS STIG: Provides guidance for securing Apple mobile devices used within the DoD.
  • Android OS STIG: Details security configurations for Android devices.

5. Cloud Computing STIGs

Applies to: Cloud Service Providers and Cloud Infrastructure Teams

Examples:

  • Microsoft Azure SQL Database STIG: Offers security requirements for Azure SQL Database cloud service.
  • Cloud Computing OS STIG: Details secure configurations for any operating system offered by a cloud provider that doesn’t have a specific STIG.

Each category addresses specific technologies and includes a STIG checklist to ensure all necessary configurations are applied. 

You can view an example of a STIG checklist for “Application Security and Development” by following this link.

How to Prepare for the STIG Compliance Process

Achieving DISA STIG compliance involves a structured approach. Here are the stages of the STIG process and tips to prepare:

Stage 1: Identifying Applicable STIGs

With hundreds of STIGs relevant to different organizations and technology stacks, this step should not be underestimated. First, conduct an inventory of all systems, devices, applications, and technologies in use. Then, review the complete list of STIGs to match each to your inventory to ensure that all critical areas requiring secure configuration are addressed. This step is essential to avoiding gaps in compliance.

Tip: Use automated tools to scan your environment, then match assets to relevant STIGs.
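As a toy illustration of Stage 1, the sketch below matches an asset inventory against a STIG catalog and flags any asset with no pre-existing STIG (which would need tailored guidance derived from the relevant SRG). The catalog keys and STIG names are placeholders; the authoritative list is published by DISA.

```python
# Placeholder catalog mapping asset types to STIG names.
# Keys and names are illustrative, not DISA's actual identifiers.
stig_catalog = {
    "windows-server-2022": "Microsoft Windows Server 2022 STIG",
    "rhel-9": "Red Hat Enterprise Linux 9 STIG",
    "apache-2.4": "Apache Server 2.4 STIG",
}

# A hypothetical inventory produced by an environment scan.
inventory = ["rhel-9", "apache-2.4", "internal-tool-x"]

def applicable_stigs(assets: list[str]) -> tuple[list[str], list[str]]:
    """Split assets into (matched STIG names, assets with no STIG)."""
    matched = [stig_catalog[a] for a in assets if a in stig_catalog]
    unmatched = [a for a in assets if a not in stig_catalog]
    return matched, unmatched

print(applicable_stigs(inventory))
```

Anything in the unmatched list is a compliance gap to resolve before implementation begins.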

Stage 2: Implementation

After you’ve mapped your technology to the corresponding STIGs, the process of implementing the security configurations outlined in the guides begins. This step may require collaboration between IT, security, and development teams to ensure that the configurations are compatible with the organization’s infrastructure while enforcing strict security standards. Be sure to keep detailed records of changes made.

Tip: Prioritize implementing fixes for Cat I vulnerabilities first, followed by Cat II and Cat III. Depending on the urgency and needs of the mission, ATO can still be achieved with partial STIG compliance. Prioritizing efforts increases the chances that partial compliance is permitted.

Stage 3: Auditing & Maintenance

After the STIGs have been implemented, regular auditing and maintenance are critical to ensure ongoing compliance, verifying that no deviations have occurred over time due to system updates, patches, or other changes. This stage includes periodic scans, manual reviews, and remediation of any identified gaps. Additionally, organizations should develop a plan to stay informed about new STIG releases and updates from DISA.

Tip: Establish a maintenance schedule and assign responsibilities to team members. Alternatively, adopt a policy-as-code approach to continuous compliance: by embedding STIG compliance requirements as code directly into your DevSecOps pipeline, you can automate this process.

General Preparation Tips

  • Training: Ensure your team is familiar with STIG requirements and the compliance process.
  • Collaboration: Work cross-functionally with all relevant departments, including IT, security, and compliance teams.
  • Resource Allocation: Dedicate sufficient resources, including time and personnel, to the compliance effort.
  • Continuous Improvement: Treat STIG compliance as an ongoing process rather than a one-time project.

Tools to automate STIG implementation and maintenance

Automation can significantly streamline the STIG compliance process. Here are some tools that can help:

1. Anchore STIG (Static and Runtime)

  • Purpose: Automates the process of checking container images against STIG requirements.
  • Benefits:
    • Simplifies compliance for containerized applications.
    • Integrates into CI/CD pipelines for continuous compliance.
  • Use Case: Ideal for DevSecOps teams utilizing containers in their deployments.

2. SCAP Compliance Checker

  • Purpose: Provides automated compliance scanning using the Security Content Automation Protocol (SCAP).
  • Benefits:
    • Validates system configurations against STIGs.
    • Generates detailed compliance reports.
  • Use Case: Useful for system administrators needing to audit various operating systems.

3. DISA STIG Viewer

  • Purpose: Helps in viewing and managing STIG checklists.
  • Benefits:
    • Allows for easy navigation of STIG requirements.
    • Facilitates documentation and reporting.
  • Use Case: Assists compliance officers in tracking compliance status.

4. DevOps Automation Tools

  • Infrastructure Automation Examples: Red Hat Ansible, Perforce Puppet, Hashicorp Terraform
  • Software Build Automation Examples: CloudBees CI, GitLab
  • Purpose: Automate the deployment of secure configurations that meet STIG compliance across multiple systems.
  • Benefits:
    • Ensures consistent application of secure configuration standards.
    • Reduces manual effort and the potential for errors.
  • Use Case: Suitable for large-scale environments where manual configuration is impractical.

5. Vulnerability Management Tools

  • Examples: Anchore Secure
  • Purpose: Identify vulnerabilities and compliance issues within your network.
  • Benefits:
    • Provides actionable insights to remediate security gaps.
    • Offers continuous monitoring capabilities.
  • Use Case: Critical for security teams focused on proactive risk management.

Wrap-Up

Achieving DISA STIG compliance is mandatory for organizations working with the DoD. By understanding what STIGs are, who they apply to, and how to navigate the compliance process, your organization can meet the stringent compliance requirements set forth by DISA. As a bonus, you will enhance your security posture and reduce the potential for a security breach.

Remember, compliance is not a one-time event but an ongoing effort that requires regular updates, audits, and maintenance. Leveraging automation tools like Anchore STIG and Anchore Secure can significantly ease this burden, allowing your team to focus on strategic initiatives rather than manual compliance tasks.

Stay proactive, keep your team informed, and make use of the resources available to ensure that your IT systems remain secure and compliant.

US Navy achieves ATO in days with continuous compliance and OSS risk management

Implementing secure and compliant software solutions within the Department of Defense’s (DoD) software factory framework is no small feat. 

For Black Pearl, the premier DevSecOps platform for the U.S. Navy, and Sigma Defense, a leading DoD technology contractor, the challenge was not just about meeting stringent security requirements but about empowering the warfighter. 

We’ll cover how they streamlined compliance, managed open source software (OSS) risk, and reduced vulnerability overload—all while accelerating their Authority to Operate (ATO) process.

Challenge: Navigating Complex Security and Compliance Requirements

Black Pearl and Sigma Defense faced several critical hurdles in meeting the stringent security and compliance standards of the DoD Enterprise DevSecOps Reference Design:

  • Achieving RMF Security and Compliance: Black Pearl needed to secure its own platform and help its customers achieve ATO under the Risk Management Framework (RMF). This involved meeting stringent security controls like RA-5 (Vulnerability Management), SI-3 (Malware Protection), and IA-5 (Credential Management) for both the platform and the applications built on it.
  • Maintaining Continuous Compliance: With the RAISE 2.0 memo emphasizing continuous ATO compliance, manual processes were no longer sufficient. The teams needed to automate compliance tasks to avoid the time-consuming procedures traditionally associated with maintaining ATO status.
  • Managing Open-Source Software (OSS) Risks: Open-source components are integral to modern software development but come with inherent risks. Black Pearl had to manage OSS risks for both its platform and its customers’ applications, ensuring vulnerabilities didn’t compromise security or compliance.
  • Vulnerability Overload for Developers: Developers often face an overwhelming number of vulnerabilities, many of which may not pose significant risks. Prioritizing actionable items without draining resources or slowing down development was a significant challenge.

“By using Anchore and the Black Pearl platform, applications inherit 80% of the RMF’s security controls. You can avoid all of the boring stuff and just get down to what everyone does well, which is write code.”

Christopher Rennie, Product Lead/Solutions Architect

Solution: Automating Compliance and Security with Anchore

To address these challenges, Black Pearl and Sigma Defense implemented Anchore, which provided:

“Working alongside Anchore, we have customized the compliance artifacts that come from the Anchore API to look exactly how the AOs are expecting them to. This has created a good foundation for us to start building the POA&Ms that they’re expecting.”

Josiah Ritchie, DevSecOps Staff Engineer

  • Managing OSS Risks with Continuous Monitoring: Anchore’s integrated vulnerability scanner, policy enforcer, and reporting system provided continuous monitoring of open-source software components. This proactive approach ensured vulnerabilities were detected and addressed promptly, effectively mitigating security risks.
  • Automated Prioritization of Vulnerabilities: By integrating the Anchore Developer Bundle, Black Pearl enabled automatic prioritization of actionable vulnerabilities. Developers received immediate alerts on critical issues, reducing noise and allowing them to focus on what truly matters.

Results: Accelerated ATO and Enhanced Security

The implementation of Anchore transformed Black Pearl’s compliance process and security posture:

  • Platform ATO in 3-5 days: With Anchore’s integration, Black Pearl users accessed a fully operational DevSecOps platform within days, a significant reduction from the typical six months for DIY builds.

“The DoD has four different layers of authorizing officials in order to achieve ATO. You have to figure out how to make all of them happy. We want to innovate by automating the compliance process. Anchore helps us achieve this, so that we can build a full ATO package in an afternoon rather than taking a month or more.”

Josiah Ritchie, DevSecOps Staff Engineer

  • Significantly reduced time spent on compliance reporting: Anchore automated compliance checks and artifact generation, cutting down hours spent on manual reviews and ensuring consistency in reports submitted to authorizing officials.
  • Proactive OSS risk management: By shifting security and compliance to the left, developers identified and remediated open-source vulnerabilities early in the development lifecycle, mitigating risks and streamlining the compliance process.
  • Reduced vulnerability overload with prioritized vulnerability reporting: Anchore’s prioritization of vulnerabilities prevented developer overwhelm, allowing teams to focus on critical issues without hindering development speed.

Conclusion: Empowering the Warfighter Through Efficient Compliance and Security

Black Pearl and Sigma Defense’s partnership with Anchore demonstrates how automating security and compliance processes leads to significant efficiencies. This empowers Navy divisions to focus on developing software that supports the warfighter. 

Achieving ATO in days rather than months is a game-changer in an environment where every second counts, setting a new standard for efficiency through the combination of Black Pearl’s robust DevSecOps platform and Anchore’s comprehensive security solutions.

If you’re facing similar challenges in securing your software supply chain and accelerating compliance, it’s time to explore how Anchore can help your organization achieve its mission-critical objectives.

Download the full case study below👇

DreamFactory Achieves 75% Time Savings with Anchore: A Case Study in Secure API Generation

As the popularity of APIs has swept the software industry, API security has become paramount, especially for organizations in highly regulated industries. DreamFactory, an API generation platform serving the defense industry and critical national infrastructure, required an air-gapped vulnerability scanning and management solution that didn’t slow down their productivity. Avoiding security breaches and compliance failures is non-negotiable for the team to maintain customer trust.

Challenge: Security Across the Gap

DreamFactory encountered several critical hurdles in meeting the needs of its high-profile clients, particularly those in the defense community and other highly regulated sectors:

  1. Secure deployments without cloud connectivity: Many clients, including the Department of Defense (DoD), required on-premises deployments with air-gapping, breaking the assumptions of modern cloud-based security strategies.
  2. Air-gapped vulnerability scans: Despite air-gapping, these organizations still demanded comprehensive vulnerability reporting to protect their sensitive data.
  3. Building high-trust partnerships: In industries where security breaches could have catastrophic consequences, establishing trust rapidly was crucial.

As Terence Bennett, CEO of DreamFactory, explains, “The data processed by these organizations have the highest national security implications. We needed a solution that could deliver bulletproof security without cloud connectivity.”

Solution: Anchore Enterprise On-Prem and Air-Gapped 

To address these challenges, DreamFactory implemented Anchore Enterprise, which provided:

  1. Support for on-prem and air-gapped deployments: Anchore Enterprise was designed to operate in air-gapped environments, aligning perfectly with DreamFactory’s needs.
  2. Comprehensive vulnerability scanning: DreamFactory integrated Anchore Enterprise into its build pipeline, running daily vulnerability scans on all deployment versions.
  3. Automated SBOM generation and management: Every build is now cataloged and stored (as an SBOM), providing immediate transparency into the software’s components.
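As a rough illustration of step 3, here is a minimal sketch of SBOM generation in CI using Anchore's open source sbom-action (DreamFactory's real pipeline is not shown in this case study; the step names, image reference, and output path are assumptions):

```yaml
# Hypothetical GitHub Actions step: generate an SPDX SBOM for the built
# image with Syft and archive it alongside the release artifact.
      - name: Generate SBOM
        uses: anchore/sbom-action@v0
        with:
          image: registry.example.com/app:${{ github.sha }}
          format: spdx-json           # SPDX JSON is broadly consumable
          output-file: sbom.spdx.json # stored per build for transparency
```

Cataloging an SBOM like this for every build is what makes it possible to hand a component inventory to a prospective customer's CISO on day one.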

“By catching vulnerabilities in our build pipeline, we can inform our customers and prevent any of the APIs created by a DreamFactory install from being leveraged to exploit our customer’s network,” Bennett notes. “Anchore has helped us achieve this massive value-add for our customers.”

Results: Developer Time Savings and Enhanced Trust

The implementation of Anchore Enterprise transformed DreamFactory’s security posture and business operations:

  • 75% reduction in time spent on vulnerability management and compliance requirements
  • 70% faster production deployments with integrated security checks
  • Rapid trust development through transparency

“We’re seeing a lot of traction with data warehousing use-cases,” says Bennett. “Being able to bring an SBOM to the conversation at the very beginning completely changes the conversation and allows CISOs to say, ‘let’s give this a go’.”

Conclusion: A Competitive Edge in High-Stakes Environments

By leveraging Anchore Enterprise, DreamFactory has positioned itself as a trusted partner for organizations requiring the highest levels of security and compliance in their API generation solutions. In an era where API security is more critical than ever, DreamFactory’s success story demonstrates that with the right tools and approach, it’s possible to achieve both ironclad security and operational efficiency.


Are you facing similar challenges hardening your software supply chain in order to meet the requirements of the DoD? By designing your DevSecOps pipeline to the DoD software factory standard, your organization can ensure it meets these sky-high security and compliance requirements. Learn more about the DoD software factory standard by downloading our white paper below.