Going All In: Anchore at SBOM Plugfest 2024

When we were invited to participate in Carnegie Mellon University’s Software Engineering Institute (SEI) SBOM Harmonization Plugfest 2024, we saw an opportunity to contribute to SBOM generation standardization efforts and thoroughly exercise our open-source SBOM generator, Syft.

While the Plugfest only required two SBOM submissions, we decided to go all in – and learned some valuable lessons along the way.

The Plugfest Challenge

The SBOM Harmonization Plugfest aims to understand why different tools generate different SBOMs for the same software. It’s not a competition but a collaborative study to improve SBOM implementation harmonization. The organizers selected eight diverse software projects, ranging from Node.js applications to C++ libraries, and asked participants to generate SBOMs in standard formats like SPDX and CycloneDX.

Going Beyond the Minimum

Instead of just submitting two SBOMs, we decided to:

  1. Generate SBOMs for all eight target projects
  2. Create both source and binary analysis SBOMs where possible
  3. Output in every format Syft supports
  4. Test both enriched and non-enriched versions
  5. Validate everything thoroughly

This comprehensive approach would give us (and the broader community) much more data to work with.

Automation: The Key to Scale

To handle this expanded scope, we created a suite of scripts to automate the entire process:

  1. Target acquisition
  2. Source SBOM generation
  3. Binary building
  4. Binary SBOM generation
  5. SBOM validation

The entire pipeline runs in about 38 minutes on a well-connected server, generating nearly three hundred SBOMs across different formats and configurations.
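For a sense of what that automation looks like, here is a minimal sketch of the generation loop. It is illustrative only; the target names, directory layout, and format list are placeholders rather than our actual scripts:

#!/usr/bin/env bash
# Minimal sketch of the generation loop (not the actual Plugfest scripts).
# Target names, paths, and the format list are illustrative placeholders.
for target in dependency-track httpie jq; do
  for format in spdx-json cyclonedx-json syft-json; do
    syft "dir:targets/${target}" -o "${format}=sboms/${target}/${format}.json"
    syft "dir:targets/${target}" --enrich all \
      -o "${format}=sboms/${target}/${format}_enriched.json"
  done
done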

The Power of Enrichment

One of Syft’s interesting features is its --enrich option, which can enhance SBOMs with additional metadata from online sources. Here’s a real example showing the difference in a CycloneDX SBOM for Dependency-Track:

$ wc -l dependency-track/cyclonedx-json.json dependency-track/cyclonedx-json_enriched.json
  5494 dependency-track/cyclonedx-json.json
  6117 dependency-track/cyclonedx-json_enriched.json

The enriched version contains additional information like license URLs and CPE identifiers:

{
  "license": {
    "name": "Apache 2",
    "url": "http://www.apache.org/licenses/LICENSE-2.0"
  },
  "cpe": "cpe:2.3:a:org.sonatype.oss:JUnitParams:1.1.1:*:*:*:*:*:*:*"
}

These additional identifiers are crucial for security and compliance teams – license URLs help automate legal compliance checks, while CPE identifiers enable consistent vulnerability matching across security tools.

SBOM Generation of Binaries

While source code analysis is valuable, many Syft users analyze built artifacts and containers. This reflects real-world usage where organizations must understand what’s being deployed, not just what’s in the source code. We built and analyzed binaries for most target projects:

  • Dependency-Track (Docker): The container SBOMs included ~1000 more items than source analysis, including base image components like Debian packages
  • HTTPie (pip install): Binary analysis caught runtime Python dependencies not visible in source
  • jq (Docker): Python dependencies contributed significant additional packages
  • Minecolonies (Gradle): Java runtime archives (JARs) appeared in binary analysis but not in source
  • OpenCV (CMake): Binary and source SBOMs were largely the same
  • hexyl (Cargo build): Rust static linking meant minimal difference from source
  • nodejs-goof (Docker): Node.js runtime and base image packages significantly increased the component count

Some projects, like gin-gonic (a library) and PHPMailer, weren’t built as they’re not typically used as standalone binaries.

The differences between source and binary SBOMs were striking. For example, the Dependency-Track container SBOM revealed:

  • Base image operating system packages
  • Runtime dependencies not visible in source analysis
  • Additional layers of dependencies from the build process
  • System libraries and tools included in the container

This perfectly illustrates why both source and binary analysis are important:

  • Source SBOMs show some direct development dependencies
  • Binary/container SBOMs show the complete runtime environment
  • Together, they provide a full picture of the software supply chain

Organizations can leverage these differences in their CI/CD pipelines – using source SBOMs for early development security checks and binary/container SBOMs for final deployment validation and runtime security monitoring.
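In practice, that can be as simple as running Syft twice at different pipeline stages. A minimal sketch, assuming a checked-out repository and a pushed image (all names are placeholders):

# Early in the pipeline: source SBOM from the checked-out repository.
syft dir:. -o spdx-json=sbom-source.spdx.json

# After the image is built and pushed: binary/container SBOM.
syft registry:example.com/myapp:1.2.3 -o spdx-json=sbom-image.spdx.json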

Unexpected Discovery: SBOM Generation Bug

One of the most valuable outcomes wasn’t planned at all. During our comprehensive testing, we discovered a bug in Syft’s SPDX document generation. The SPDX validators were flagging our documents as invalid due to absolute file paths:

file name must not be an absolute path starting with "/", but is: 
/.github/actions/bootstrap/action.yaml
file name must not be an absolute path starting with "/", but is: 
/.github/workflows/benchmark-testing.yaml
file name must not be an absolute path starting with "/", but is: 
/.github/workflows/dependabot-automation.yaml
file name must not be an absolute path starting with "/", but is: 
/.github/workflows/oss-project-board-add.yaml

The SPDX specification requires relative file paths in the SBOM, but Syft used absolute paths. Our team quickly developed a fix, which involved converting absolute paths to relative ones in the format model logic:

// spdx requires that the file name field is a relative filename
// with the root of the package archive or directory
// (relies on the standard library "path", "strings", and "fmt" packages,
// plus Syft's internal "log" package)
func convertAbsoluteToRelative(absPath string) (string, error) {
    // if the path is not absolute there is nothing to convert
    if !path.IsAbs(absPath) {
        // already relative
        log.Debugf("%s is already relative", absPath)
        return absPath, nil
    }
    // we use "/" here given that we're converting absolute paths from root to relative
    relPath, found := strings.CutPrefix(absPath, "/")
    if !found {
        return "", fmt.Errorf("error calculating relative path: %s", absPath)
    }
    return relPath, nil
}

The fix was simple but effective – stripping the leading “/” from absolute paths while maintaining proper error handling and logging. This change was incorporated into Syft v1.18.0, which we used for our final Plugfest submissions.

This discovery highlights the value of comprehensive testing and community engagement. What started as participation in the Plugfest ended up improving Syft for all users, ensuring more standards-compliant SPDX documents. It’s a perfect example of how collaborative efforts like the Plugfest can benefit the entire SBOM ecosystem.

SBOM Validation

We used multiple validation tools to verify our SBOMs, including pyspdxtools, the CycloneDX sbom-utility, and the NTIA online validator.

Interestingly, we found some disparities between validators. For example, some enriched SBOMs that passed sbom-utility validation failed with pyspdxtools. Further, the NTIA online validator gave us yet another result in many cases. This highlights the ongoing challenges in SBOM standardization – even the tools that check SBOM validity don’t always agree!
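For reference, the local validation runs looked roughly like the following; treat the exact flags as a sketch and check each tool’s documentation:

# SPDX validation with pyspdxtools (from the spdx-tools Python package)
pyspdxtools -i dependency-track/spdx-json.json

# CycloneDX validation with sbom-utility
sbom-utility validate --input-file dependency-track/cyclonedx-json.json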

Key Takeaways

  • Automation is crucial: Our scripted approach allowed us to efficiently generate and validate hundreds of SBOMs.
  • Real-world testing matters: Building and analyzing binaries revealed insights (and bugs!) that source-only analysis might have missed.
  • Enrichment adds value: Additional metadata can significantly enhance SBOM utility, though support varies by ecosystem.
  • Validation is complex: Different validators can give different results, showing the need for further standardization.

Looking Forward

The SBOM Harmonization Plugfest results will be analyzed in early 2025, and we’re eager to see how different tools handled the same targets. Our comprehensive submission will help identify areas where SBOM generation can be improved and standardized.

More importantly, this exercise has already improved Syft for our users through the bug fix and given us valuable insights for future development. We’re committed to continuing this thorough testing and community participation to make SBOM generation more reliable and consistent for everyone.

The final SBOMs are published in the plugfest-sboms repo, with the scripts in the plugfest-scripts repository. Consider using Syft for SBOM generation against your code and containers, and let us know how you get on in our community discourse.

ModuleQ reduces vulnerability management time by 80% with Anchore Secure

ModuleQ, an AI-driven enterprise knowledge platform, knows only too well the stakes for a company providing software solutions in the highly regulated financial services sector. In this world where data breaches are cause for termination of a vendor relationship and evolving cyberthreats loom large, proactive vulnerability management is not just a best practice—it’s a necessity. 

ModuleQ required a vulnerability management platform that could automatically identify and remediate vulnerabilities, maintain airtight security, and meet stringent compliance requirements—all without slowing down their development velocity.

Learn the essential container security best practices to reduce the risk of software supply chain attacks in this white paper.

The Challenge: Scaling Security in a High-Stakes Environment

ModuleQ found itself drowning in a flood of newly released vulnerabilities—over 25,000 in 2023 alone. Operating in a heavily regulated industry meant any oversight could have severe repercussions. High-profile incidents like the Log4j exploit underscored the importance of supply chain security, yet the manual, resource-intensive nature of ModuleQ’s vulnerability management process made it hard to keep pace.

The mandate that no critical vulnerabilities reached production was a particularly high bar to meet with the existing manual review process. Each time engineers stepped away from their coding environment to check a separate security dashboard, they lost context, productivity, and confidence. The fear of accidentally letting something slip through the cracks was ever present.

The Solution: Anchore Secure for Automated, Integrated Vulnerability Management

ModuleQ chose Anchore Secure to simplify, automate, and fully integrate vulnerability management into their existing DevSecOps workflows. Instead of relying on manual security reviews, Anchore Secure injected security measures seamlessly into ModuleQ’s Azure DevOps pipelines and .NET/C# environment. Every software build—staged nightly through a multi-step pipeline—was automatically scanned for vulnerabilities. Any critical issues triggered immediate notifications and halted promotions to production, ensuring that potential risks were addressed before they could ever reach customers.

Equally important, Anchore’s platform was built to operate in on-prem or air-gapped environments. This guaranteed that ModuleQ’s clients could maintain the highest security standards without the need for external connectivity. For an organization whose customers demand this level of diligence, Anchore’s design provided peace of mind and strengthened client relationships.

Results: Faster, More Secure Deployments

By adopting Anchore Secure, ModuleQ dramatically accelerated and enhanced its vulnerability management approach:

  • 80% Reduction in Vulnerability Management Time: Automated scanning, triage, and reporting freed the team from manual checks, letting them focus on building new features rather than chasing down low-priority issues.
  • 50% Less Time on Security Tasks During Deployments: Proactive detection of high-severity vulnerabilities streamlined deployment workflows, enabling ModuleQ to deliver software faster—without compromising security.
  • Unwavering Confidence in Compliance: With every new release automatically vetted for critical vulnerabilities, ModuleQ’s customers in the financial sector gained renewed trust. Anchore’s support for fully on-prem deployments allowed ModuleQ to meet stringent security demands consistently.

Looking Ahead

In an era defined by unrelenting cybersecurity threats, ModuleQ proved that speed and security need not be at odds. Anchore Secure provided a turnkey solution that integrated seamlessly into their workflow, saving time, strengthening compliance, and maintaining the agility to adapt to future challenges. By adopting an automated security backbone, ModuleQ has positioned itself as a trusted and reliable partner in the financial services landscape.

Looking for more details? Read the ModuleQ case study in full. If you’re ready to move forward, see all of the features on Anchore Secure’s product page or reach out to our team to schedule a demo.

Enhancing Container Security with NVIDIA’s AI Blueprint and Anchore’s Syft

Container security is critical – one breach can lead to devastating data losses and business disruption. NVIDIA’s new AI Blueprint for Vulnerability Analysis transforms how organizations handle these risks by automating vulnerability detection and analysis. This AI-powered solution is a potential game-changer for container security.

At its core, the Blueprint combines AI-driven scanning with NVIDIA’s Morpheus Cybersecurity SDK to identify vulnerabilities in seconds rather than hours or days. The system works through a straightforward process:

First, it generates a Software Bill of Materials (SBOM) using Syft, Anchore’s open-source tool, which creates a detailed inventory of all software components in a container. This SBOM feeds into an AI pipeline that leverages large language models (LLMs) and retrieval-augmented generation (RAG) to analyze potential vulnerabilities.

The AI examines multiple data sources – from code repositories to vulnerability databases – and produces a detailed analysis of each potential threat. Most importantly, it distinguishes between genuine security risks and false positives by considering environmental factors and dependency requirements.

The system then provides clear recommendations through a standardized Vulnerability Exploitability eXchange (VEX) status.

This Blueprint is particularly valuable because it automates traditionally manual security analysis. Security teams can stop spending days investigating potential vulnerabilities and focus on addressing confirmed threats. This efficiency is invaluable for organizations managing container security at scale.

Want to try it yourself? Check out the Blueprint, read more in the NVIDIA blog post, and explore the vulnerability-analysis git repo. Let us know if you’ve tried this out with Syft, over on the Anchore Community Discourse.

The Evolution of SBOMs in the DevSecOps Lifecycle: Part 2

Welcome back to the second installment of our two-part series on “The Evolution of SBOMs in the DevSecOps Lifecycle”. In our first post, we explored how Software Bills of Materials (SBOMs) evolve over the first 4 stages of the DevSecOps pipeline—Plan, Source, Build & Test—and how each type of SBOM serves different purposes. Some of those use-cases include: shift left vulnerability detection, regulatory compliance automation, OSS license risk management and incident root cause analysis.

In this part, we’ll continue our exploration with the final 4 stages of the DevSecOps lifecycle, examining:

  • Analyzed SBOMs at the Release (Registry) stage
  • Deployed SBOMs during the Deployment phase
  • Runtime SBOMs in Production (Operate & Monitor stages)

As applications migrate down the pipeline, design decisions made at the beginning begin to ossify, becoming more difficult to change; this influences the challenges that are experienced and the role that SBOMs play in overcoming these novel problems. Some of the new challenges that come up include pipeline leaks, vulnerabilities in third-party packages, and runtime injection, all of which introduce significant risk. Understanding how SBOMs evolve across these stages helps organizations mitigate these risks effectively.

Whether you’re aiming to enhance your security posture, streamline compliance reporting, or improve incident response times, this comprehensive guide will equip you with the knowledge to leverage SBOMs effectively from Release to Production. Additionally, we’ll offer pro tips to help you maximize the benefits of SBOMs in your DevSecOps practices.

So, let’s continue our journey through the DevSecOps pipeline and discover how SBOMs can transform the latter stages of your software development lifecycle.

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Release (or Registry) => Analyzed SBOM

After development is completed and the new release of the software is declared a “golden” image, the build system will push the release artifact to a registry for storage until it is deployed. At this stage, an SBOM that is generated based on these container images, binaries, etc. is named an “Analyzed SBOM” by CISA. The name is a little confusing since all SBOMs should be analyzed regardless of the stage at which they are generated. A more appropriate name might be a “Release SBOM”, but we’ll stick with CISA’s name for now.

At first glance, it would seem that Analyzed SBOMs and the final Build SBOMs should be identical since it is the same software, but that doesn’t hold up in practice. DevSecOps pipelines aren’t hermetically sealed systems; they can be “leaky”. You might be surprised what finds its way into this storage repository and eventually gets deployed, bypassing your carefully constructed build and test setup.

On top of that, the registry holds more than just first-party applications that are built in-house. It also stores 3rd-party container images like operating systems and any other self-contained applications used by the organization.

The additional metadata that is collected for an Analyzed SBOM includes:

  • Release images that bypass the happy path build and test pipeline
  • 3rd-party container images, binaries and applications

Pros and Cons

Pros:

  • Comprehensive Artifact Inventory: A more holistic view of all software—both 1st- and 3rd-party—that is utilized in production.
  • Enhanced Security and Compliance Posture: Catches vulnerabilities and non-compliant images for all software that will be deployed to production. This reduces the risk of security incidents and compliance violations.
  • Third-Party Supply Chain Risk Management: Provides insights into the vulnerabilities and compliance status of third-party components.
  • Ease of implementation: This stage is typically the lowest lift for implementation given that most SBOM generators can be deployed standalone and pointed at the registry to scan all images.

Cons:

  • High Risk for Release Delays: Scanning images at this stage is akin to traditional waterfall-style development patterns. Most design decisions are baked in and changes typically incur a steep penalty.
  • Difficult to Push Feedback into Existing Workflows: The registry sits outside of typical developer workflows, and creating a feedback loop that seamlessly reports issues without changing the developer’s process is a non-trivial amount of work.
  • Complexity in Management: Managing SBOMs for both internally developed and third-party components adds complexity to the software supply chain.

Use-Cases

  • Software Supply Chain Security: Organizations can detect vulnerabilities in both their internally developed software and external software to prevent supply chain injections from leading to a security incident.
  • Compliance Reporting: Reporting on both 1st- and 3rd-party software is necessary for industries with strict regulatory requirements.
  • Detection of Leaky Pipelines: Identifies release images that have bypassed the standard build and test pipeline, allowing teams to take corrective action.
  • Third-Party Risk Analysis: Assesses the security and compliance of third-party container images, binaries, and applications before they are deployed.

Example: An organization subject to strict compliance standards like FedRAMP or cATO uses Analyzed SBOMs to verify that all artifacts in their registry, including third-party applications, comply with security policies and licensing requirements. This practice not only enhances their security posture but also streamlines the audit process.

Pro Tip

A registry is an easy and non-invasive way to test and evaluate potential SBOM generators. It won’t give you a full picture of what can be found in your DevSecOps pipeline but it will at least give you an initial idea of its efficacy and help you make the decision on whether to go through the effort of integrating it into your build pipeline where it will produce deeper insights.
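For example, Syft can scan an image straight from a registry without touching your build pipeline at all (a sketch; the registry path is a placeholder):

# Pull and scan directly from the registry, no Docker daemon required.
syft registry:registry.example.com/team/app:latest -o cyclonedx-json=app.cdx.json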

Deploy => Deployed SBOM

As your container orchestrator deploys an image from your registry into production, it will also orchestrate any production dependencies, such as sidecar containers. At this stage, the SBOM that is generated is named a “Deployed SBOM” by CISA.

The ideal scenario is that your operations team is storing all of these images in the same central registry as your engineering team but—as we’ve noted before—reality diverges from the ideal.

The additional metadata that is collected for a Deployed SBOM includes:

  • Any additional sidecar containers or production dependencies that are injected or modified through a release controller.

Pros and Cons

Pros:

  • Enhanced Security Posture: The final gate to prevent vulnerabilities from being deployed into production. This reduces the risk of security incidents and compliance violations.
  • Leaky Pipeline Detection: Another location to increase visibility into the happy path of the DevSecOps pipeline being circumvented.
  • Compliance Enforcement: Some compliance standards require a deployment breaking enforcement gate before any software is deployed to production. A container orchestrator release controller is the ideal location to implement this.

Cons:

Essentially the same issues that come up during the release phase.

  • High Risk for Release Delays: Scanning images at this stage is even later than traditional waterfall-style development patterns and will incur a steep penalty if an issue is uncovered.
  • Difficult to Push Feedback into Existing Workflows: A deployment release controller sits outside of typical developer workflows, and creating a feedback loop that seamlessly reports issues without changing the developer’s process is a non-trivial amount of work.

Use-Cases

  • Strict Software Supply Chain Security: Implementing a pipeline breaking gating mechanism is typically reserved for only the most critical security vulnerabilities (think: an actively exploitable known vulnerability).
  • High-Stakes Compliance Enforcement: Industries like defense, financial services and critical infrastructure will require vendors to implement a deployment gate for specific risk scenarios beyond actively exploitable vulnerabilities.
  • Compliance Audit Automation: Most regulatory compliance frameworks mandate audit artifacts at deploy time; these documents can be automatically generated and stored for future audits.

Example: A Deployed SBOM can be used as the source of truth for generating a report that demonstrates that no HIGH or CRITICAL vulnerabilities were deployed to production during an audit period.

Pro Tip

Combine a Deployed SBOM with a container vulnerability scanner that cross-checks all vulnerabilities against CISA’s Known Exploitable Vulnerability (KEV) database. In the scenario where a matching KEV is found for a software component you can configure your vulnerability scanner to return a FAIL response to your release controller to abort the deployment.

This strategy creates an ideal balance between not adding delays to software delivery and guarding against the vulnerabilities most likely to cause a security incident.
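A rough sketch of such a gate is below, using Grype’s JSON output and CISA’s public KEV feed. The jq paths and feed URL reflect those formats at the time of writing; treat the whole script as an assumption-laden starting point, not a turnkey implementation:

#!/usr/bin/env bash
set -euo pipefail
# Collect CVE IDs that Grype finds in the release image (image name is a placeholder).
grype -q -o json registry.example.com/team/app:latest \
  | jq -r '.matches[].vulnerability.id' | sort -u > found-cves.txt
# Collect CVE IDs from CISA's Known Exploited Vulnerabilities catalog.
curl -s https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json \
  | jq -r '.vulnerabilities[].cveID' | sort -u > kev-cves.txt
# Abort the deployment if any detected CVE appears in the KEV catalog.
if [ -n "$(comm -12 found-cves.txt kev-cves.txt)" ]; then
  echo "KEV match found; aborting deployment" >&2
  exit 1
fi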

Operate & Monitor (or Production) => Runtime SBOM

After your container orchestrator has deployed an application into your production environment, it is live and serving customer traffic. SBOMs generated at this stage don’t have a name specified by CISA; they are sometimes referred to as “Runtime SBOMs”. SBOMs are still a new-ish standard and will continue to evolve.

The additional metadata that is collected for a Runtime SBOM includes:

  • Modifications (i.e., intentional hotfixes or malicious malware injection) made to running applications in your production environment. 

Pros and Cons

Pros:

  • Continuous Security Monitoring: Identifies new vulnerabilities that emerge after deployment.
  • Active Runtime Inventory: Provides a canonical view into an organization’s active software landscape.
  • Low Lift Implementation: Deploying SBOM generation into a production environment typically only requires deploying the scanner as another container and giving it permission to access the rest of the production environment.

Cons:

  • No Shift Left Security: Runtime scanning is, by definition, excluded from a shift-left security posture.
  • Potential for Release Rollbacks: Scanning images at this stage is the worst possible place for proactive remediation. Discovering a vulnerability here could potentially cause a security incident and force a release rollback.

Use-Cases

  • Rapid Incident Management: When new critical vulnerabilities are discovered and announced by the community the first priority for an organization is to determine exposure. An accurate production inventory, down to the component-level, is needed to answer this critical question.
  • Threat Detection: Continuously monitoring for anomalous activity linked to specific components. Sealing your system off completely from advanced persistent threats (APTs) is an unfeasible goal. Instead, quick detection and rapid intervention is the scalable solution to limit the impact of these adversaries.
  • Patch Management: As new releases of 3rd-party components and applications are released an inventory of impacted production assets provides helpful insights that can direct the prioritization of engineering efforts.

Example: When the XZ Utils vulnerability was announced in the spring of 2024, organizations that already automatically generated a Runtime SBOM inventory ran a simple search query against their SBOM database and knew within minutes—or even seconds—whether they were impacted.
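Even a plain directory of runtime SBOM files, without a dedicated database, can answer that question in one line. A minimal sketch, assuming Syft-generated SPDX JSON documents stored under /var/sboms (a placeholder path):

# Which runtime SBOMs contain an xz/liblzma component?
grep -rlE '"name": *"(xz|xz-utils|liblzma)' /var/sboms/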

Pro Tip

If you want to learn about how Google was able to go from an all-hands on deck security incident when the XZ Utils vulnerability was announced to an all clear under 10 minutes, watch our webinar with the lead of Google’s SBOM initiative.

Wrap-Up

As the SBOM standard has evolved, its scope has grown considerably. What started as a structured way to store information about open source licenses has expanded to include numerous use-cases. A clear understanding of the evolution of SBOMs throughout the DevSecOps lifecycle is essential for organizations aiming to solve problems ranging from software supply chain security to regulatory compliance to legal risk management.

SBOMs are a powerful tool in the arsenal of modern software development. By recognizing their importance and integrating them thoughtfully across the DevSecOps lifecycle, you position your organization at the forefront of secure, efficient, and compliant software delivery.

Ready to secure your software supply chain and automate compliance tasks with SBOMs? Anchore is here to help. We offer SBOM management, vulnerability scanning and compliance automation enforcement solutions. If you still need some more information before looking at solutions, check out our webinar below on scaling a secure software supply chain with Kubernetes. 👇👇👇

Learn how Spectro Cloud secured their Kubernetes-based software supply chain and the pivotal role SBOMs played.

The Evolution of SBOMs in the DevSecOps Lifecycle: From Planning to Production

The software industry has wholeheartedly adopted the practice of building new software on the shoulders of the giants that came before. To accomplish this, developers assemble a foundation of pre-built, 3rd-party components, then wrap custom 1st-party code around this structure to create novel applications. It is an extraordinarily innovative and productive practice, but it also introduces challenges ranging from security vulnerabilities to compliance headaches to legal risk nightmares. Software bills of materials (SBOMs) have emerged to provide solutions for these wide-ranging problems.

An SBOM provides a detailed inventory of all the components that make up an application at a point in time. However, it’s important to recognize that not all SBOMs are the same—even for the same piece of software! SBOMs evolve throughout the DevSecOps lifecycle, just as an application evolves from source code to a container image to a running application. The Cybersecurity and Infrastructure Security Agency (CISA) has codified this idea by differentiating between all of the different types of SBOMs. Each type serves different purposes and captures information about an application through its evolutionary process.

In this 2-part blog series, we’ll deep dive into each stage of the DevSecOps process and the associated SBOM, highlighting the differences, the benefits and disadvantages, and the use-cases that each type of SBOM supports. Whether you’re just beginning your SBOM journey or looking to deepen your understanding of how SBOMs can be integrated into your DevSecOps practices, this comprehensive guide will provide valuable insights and advice from industry experts.

Learn about the role that SBOMs play in the security, including open source software (OSS) security, of your organization in this white paper.

Types of SBOMs and the DevSecOps Pipeline

Over the past decade, the US government got serious about software supply chain security and began advocating for SBOMs as the standardized approach to the problem. As part of this initiative, CISA created the Types of Software Bill of Material (SBOM) Documents white paper that codified the definitions of the different types of SBOMs and mapped them to each stage of the DevSecOps lifecycle. We will discuss each in turn, but before we do, let’s anchor on some terminology to prevent confusion or misunderstanding.

Below is a diagram that lays out each stage of the DevSecOps lifecycle as well as the naming convention we will use going forward.

With that out of the way, let’s get started!

Plan => Design SBOM

As the DevSecOps paradigm has spread across the software industry, a notable best practice known as the security architecture review has become integral to the development process. This practice embodies the DevSecOps goal of integrating security into every phase of the software lifecycle, aligning perfectly with the concept of Shift-Left Security—addressing security considerations as early as possible.

At this stage, the SBOM documents the planned components of the application. The CISA refers to SBOMs generated during this phase as Design SBOMs. These SBOMs are preliminary and outline the intended components and dependencies before any code is written.

The metadata that is collected for a Design SBOM includes:

  • Component Inventory: Identifying potential OSS libraries and frameworks to be used as well as the dependency relationship between the components.
  • Licensing Information: Understanding the licenses associated with selected components to ensure compliance.
  • Risk Assessment Data: Evaluating known vulnerabilities and security risks associated with each component.

This might sound like a lot of extra work but luckily if you’re already performing DevSecOps-style planning that incorporates a security and legal review—as is best practice—you’re already surfacing all of this information. The only thing that is different is that this preliminary data is formatted and stored in a standardized data structure, namely an SBOM.

Pros and Cons

Pros:

  • Maximal Shift-Left Security: Vulnerabilities cannot be found any earlier in the software development process. Design time security decisions are the peak of a proactive security posture and preempt bad design decisions before they become ingrained into the codebase.
  • Cost Efficiency: Resolving security issues at this stage is generally less expensive and less disruptive than during later stages of development or—worst of all—after deployment.
  • Legal and Compliance Risk Mitigation: Ensures that all selected components meet necessary compliance standards, avoiding legal complications down the line.

Cons:

  • Upfront Investment: Gathering detailed information on potential components and maintaining an SBOM at this stage requires a non-trivial commitment of time and effort.
  • Incomplete Information: Projects are not static, they will adapt as unplanned challenges surface. A design SBOM likely won’t stay relevant for long.

Use-Cases

There are a number of use-cases that are enabled by a Design SBOM:

  • Security Policy Enforcement: Automatically checking proposed components against organizational security policies to prevent the inclusion of disallowed libraries or frameworks.
  • License Compliance Verification: Ensuring that all components comply with the project’s licensing requirements, avoiding potential legal issues.
  • Vendor and Third-Party Risk Management: Assessing the security posture of third-party components before they are integrated into the application.
  • Enhance Transparency and Collaboration: A well-documented SBOM provides a clear record of the software’s components and, more importantly, shows that the project aligns with the goals of all of the stakeholders (engineering, security, legal, etc.). This builds trust and creates a collaborative environment that increases the chances that each individual stakeholder’s outcome will be achieved.

Example:

A financial services company operating within a strict regulatory environment uses SBOMs during planning to ensure that all components comply with compliance standards like PCI DSS. By doing so, they prevent the incorporation of insecure components that won’t meet PCI compliance. This reduces the risk of the financial penalties associated with security breaches and regulatory non-compliance.

Pro Tip

If your organization is still early in the maturity of its SBOM initiative, then we generally recommend moving the integration of design time SBOMs to the back of the queue. As we mentioned at the beginning of this section, the information stored in a design SBOM is naturally surfaced during the DevSecOps process; as long as that information is being recorded and stored, much of the value of a design SBOM will be captured in the artifact. This level of SBOM integration is best saved for later maturity stages, when your organization is ready to begin exploring deeper levels of insight that have a higher risk-to-reward ratio.

Alternatively, if your organization is having difficulty getting your teams to adopt a collaborative DevSecOps planning process, mandating an SBOM as a requirement can act as a forcing function to catalyze a cultural shift.

Source => Source SBOM

During the development stage, engineers implement the selected 3rd-party components into the codebase. CISA refers to SBOMs generated during this phase as Source SBOMs. The SBOMs generated here capture the actual implemented components and additional information that is specific to the developer who is doing the work.

The additional metadata that is collected for a Source SBOM includes:

  • Dependency Mapping: Documenting direct and transitive dependencies.
  • Identity Metadata: Adding contributor and commit information.
  • Developer Environment: Captures information about the development environment.

Unlike Design SBOMs, which are typically created manually, these SBOMs can be generated programmatically with a software composition analysis (SCA) tool—like Syft. They are usually packaged as command line interfaces (CLIs), since this is the preferred interface for developers.

If you’re looking for an SBOM generation tool (SCA embedded), we have a comprehensive list of options to make this decision easier.

Pros and Cons

Pros:

  • Accurate and Timely Component Inventory: Reflects the actual components used in the codebase and tracks changes as the codebase is actively being developed.
  • Shift-Left Vulnerability Detection: Identifies vulnerabilities as components are integrated but requires commit level automation and feedback mechanisms to be effective.
  • Facilitates Collaboration and Visibility: Keeps all stakeholders informed about divergence from the original plan and provokes conversations as needed. This is also dependent on automation to record changes during development and on notification systems to broadcast the updates.

Example: A developer adds a new logging library to the project like an outdated version of Log4j. The SBOM, paired with a vulnerability scanner, immediately flags the Log4Shell vulnerability, prompting the engineer to update to a patched version.

Cons:

  • Noise from Developer Toolchains: A lot of times developer environments are bespoke. This creates noise for security teams by recording development dependencies.
  • Potential Overhead: Continuous updates to the SBOM can be resource-intensive when done manually; the only resource efficient method is by using an SBOM generation tool that automates the process.
  • Possibility of Missing Early Risks: Issues not identified during planning may surface here, requiring code changes.

Use-Cases

  • Faster Root Cause Analysis: During service incident retrospectives, questions arise about where, when, and by whom a specific component was introduced into an application. Source SBOMs are the programmatic record that can provide answers and decrease manual root cause analysis.
  • Real-Time Security Alerts: Immediate notification of vulnerabilities upon adding new components, decreasing time to remediation and keeping security teams informed.
  • Automated Compliance Checks: Ensuring added components comply with security or license policies to manage compliance risk.
  • Effortless Collaboration: Stakeholders can subscribe to a live feed of changes and immediately know when implementation diverges from the plan.

Pro Tip

Some SBOM generators allow developers to specify development dependencies that should be ignored, similar to a .gitignore file. This can help cut down on the noise created by unique developer setups.
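With Syft, for example, path-based exclusions can prune developer-only directories from a source scan (the glob patterns here are illustrative):

# Exclude test fixtures and local dev tooling from the source SBOM.
syft dir:. --exclude './test/**' --exclude './tools/dev/**' -o syft-json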

Build & Test => Build SBOM

When a developer pushes a commit to the CI/CD build system an automated process initiates that converts the application source code into an artifact that can then be deployed. CISA refers to SBOMs generated during this phase as Build SBOMs. These SBOMs capture both source code dependencies and build tooling dependencies.

The additional metadata that is collected includes:

  • Build Dependencies: Build tooling such as the language compilers, testing frameworks, package managers, etc.
  • Binary Analysis Data: Metadata for compiled binaries that don’t utilize traditional container formats.
  • Configuration Parameters: Details on build configuration files that might impact security or compliance.

Pros and Cons

Pros:

  • Build Infrastructure Analysis: Captures build-specific components which may have their own vulnerability or compliance issues.
  • Reuses Existing Automation Tooling: Enables programmatic security and compliance scanning as well as policy enforcement without introducing any additional build tooling.
  • Integrates with Developer Workflows: Engineers receive security, compliance, and other feedback in the tools they already use, without the need to reference a new one.
  • Reproducibility: Facilitates reproducing builds for debugging and auditing.

Cons:

  • SBOM Sprawl: Build processes run frequently; if each run generates an SBOM, you will find yourself with a glut of files to manage.
  • Delayed Detection: Vulnerabilities or non-compliance issues found at this stage may require rework.

Use-Cases

  • SBOM Drift Detection: By comparing SBOMs from two or more stages, unexpected dependency injection can be detected. This might take the form of a benign, leaky build pipeline that requires manual workarounds or a malicious actor attempting to covertly introduce malware. Either way, this provides actionable and valuable information.
  • Policy Enforcement: Enables the creation of build breaking gates to enforce security or compliance. For high-stakes operating environments like defense, financial services or critical infrastructure, automating security and compliance at the expense of some developer friction is a net-positive strategy.
  • Automated Compliance Artifacts: Compliance requires proof in the form of reports and artifacts. Re-utilizing existing build tooling automation to automate this task significantly reduces the manual work required by security teams to meet compliance requirements.

Example: A security scan during testing uses the Build SBOM to identify a critical vulnerability and alerts the responsible engineer. The remediation process is initiated and a patch is applied before deployment.
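The SBOM drift detection use-case above can start as simply as diffing the package inventories from two stages. A minimal sketch, assuming SPDX JSON documents (file names are placeholders):

# Compare package names between the Source SBOM and the Build SBOM.
jq -r '.packages[].name' source.spdx.json | sort -u > source-pkgs.txt
jq -r '.packages[].name' build.spdx.json | sort -u > build-pkgs.txt
# Packages that appear only on the build side warrant a closer look.
diff source-pkgs.txt build-pkgs.txt || true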

Pro Tip

If your organization is just beginning its SBOM journey, this is the recommended phase of the DevSecOps lifecycle in which to implement SBOMs first, because the two primary cons of this phase are the easiest to mitigate. For SBOM sprawl, you can procure a turnkey SBOM management solution like Anchore SBOM.

As for the delay in feedback created by waiting until the build phase, if your team is utilizing DevOps best practices and breaking features up into smaller components that fit into 2-week sprints, then this tight scoping will limit the impact of any significant vulnerabilities or non-compliance discovered.

Intermission

So far we’ve covered the first half of the DevSecOps lifecycle. Next week we will publish the second part of this blog series where we’ll cover the remainder of the pipeline. Watch our socials to be sure you get notified when part 2 is published.

If you’re looking for some additional reading in the meantime, check out our container security white paper below.

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Choosing the Right SBOM Generator: A Framework for Success

Choosing the right SBOM (software bill of materials) generator is trickier than it looks at first glance. SBOMs are the foundation for a number of different uses, ranging from software supply chain security to continuous regulatory compliance. Due to its cornerstone nature, the SBOM generator that you choose will either pave the way for achieving your organization’s goals or become a roadblock that delays critical initiatives.

But how do you navigate the crowded market of SBOM generation tools to find the one that aligns with your organization’s unique needs? It’s not merely about selecting a tool with the most features or the nicest CLI. It’s about identifying a solution that maps directly to your desired outcomes and use-cases, whether that’s rapid incident response, proactive vulnerability management, or compliance reporting.

We at Anchore have been enabling organizations to achieve their SBOM-related outcomes and do it with the least amount of frustration and setbacks. We’ve compiled our learnings on choosing the right SBOM generation tool into a framework to help the wider community make decisions that set them up for success.

Below is a quick TL;DR of the high-level evaluation criteria that we cover in this blog post:

  • Understanding Your Use-Cases: Aligning the tool with your specific goals.
  • Ecosystem Compatibility: Ensuring support for your programming languages, operating systems, and build artifacts.
  • Data Accuracy: Evaluating the tool’s ability to provide comprehensive and precise SBOMs.
  • DevSecOps Integration: Assessing how well the tool fits into your existing DevSecOps tooling.
  • Proprietary vs. Open Source: Weighing the long-term implications of your choice.

By focusing on these key areas, you’ll be better equipped to select an SBOM generator that not only meets your current requirements but also positions your organization for future success.

Learn about the role that SBOMs play in the security, including open source software (OSS) security, of your organization in this white paper.

Know your use-cases

When choosing from the array of SBOM generation tools in the market, it is important to frame your decision with the outcome(s) that you are trying to achieve. If your goal is to improve the response time/mean time to remediation when the next Log4j-style incident occurs—and be sure that there will be a next time—an SBOM tool that excels at correctly identifying open source licenses in a code base won’t be the best solution for your use-case (even if you prefer its CLI ;-D).

What to Do:

  • Identify and prioritize the outcomes that your organization is attempting to achieve
  • Map the outcomes to the relevant SBOM use-cases
  • Review each SBOM generation tool to determine whether they are best suited to your use-cases

It can be tempting to prioritize an SBOM generator that is best suited to our preferences and workflows; we are the ones that will be using the tool regularly—shouldn’t we prioritize what makes our lives easier? If we prioritize our needs above the goal of the initiative, we might end up putting ourselves into a position where our choice in tools impedes our ability to realize the desired outcome. Using the correct framing, in this case by focusing on the use-cases, will keep us focused on delivering the best possible outcome.

SBOMs can be utilized for numerous purposes: security incident response, open source license compliance, proactive vulnerability management, compliance reporting, or software supply chain risk management. We won’t address all use-cases/outcomes in this blog post; a more comprehensive treatment of all of the potential SBOM use-cases can be found on our website.

Example SBOM Use-Cases:

  • Security incident response: an inventory of all applications and their dependencies that can be queried quickly and easily to identify whether a newly announced zero-day impacts the organization.
  • Proactive vulnerability management: all software and dependencies are scanned for vulnerabilities as part of the DevSecOps lifecycle and remediated based on organizational priority.
  • Regulatory compliance reporting: compliance artifacts and reports are automatically generated by the DevSecOps pipeline to enable continuous compliance and prevent manual compliance work.
  • Software supply chain risk management: an inventory of software components with identified vulnerabilities used to inform organizational decision making when deciding between remediating risk versus building new features.
  • Open source license compliance: an inventory of all software components and the associated OSS license to measure potential legal exposure.

Pro tip: While you will inevitably leave many SBOM use-cases out of scope for your current project, keeping secondary use-cases in the back of your mind while making a decision on the right SBOM tool will set you up for success when those secondary use-cases eventually become a priority in the future.

Does the SBOM generator support your organization’s ecosystem of programming languages, etc?

SBOM generators aren’t just tools to ingest data and re-format it into a standardized format. They are typically paired with a software composition analysis (SCA) tool that scans an application/software artifact for metadata that will populate the final SBOM.

Support for the complete array of programming languages, build artifacts and operating system ecosystems is essentially an impossible task. This means that support varies significantly depending on the SBOM generator that you select. An SBOM generator’s ability to help you reach your organizational goals is directly related to its support for your organization’s software tooling preferences. This will likely be one of the most important qualifications when choosing between different options and will rule out many that don’t meet the needs of your organization.

Considerations:

  • Programming Languages: Does the tool support all languages used by your team?
  • Operating Systems: Can it scan the different OS environments your applications run on top of?
  • Build Artifacts: Does the tool scan containers? Binaries? Source code repositories? 
  • Frameworks and Libraries: Does it recognize the frameworks and libraries your applications depend on?

Data accuracy

This is one of the most important criteria when evaluating different SBOM tools. An SBOM generator may claim support for a particular programming language but after testing the scanner you may discover that it returns an SBOM with only direct dependencies—honestly not much better than a package.json or go.mod file that your build process spits out.

Two different tools might both generate a valid SPDX SBOM document when run against the same source artifact, but the content of those documents can vary greatly. This variation depends on what the tool can inspect, understand, and translate. Being able to fully scan an application for both direct and transitive dependencies, as well as navigate non-idiomatic patterns for how software can be structured, ends up being the true differentiator between the field of SBOM generation contenders.

Imagine using two SBOM tools on a Debian package. One tool recognizes Debian packages and includes detailed information about them in the SBOM; the other can’t fully parse the Debian .deb format and omits critical information. Both produce an SBOM, but only one provides the data you need to power use-case based outcomes like security incident response or proactive vulnerability management.

Let’s make this example more concrete by simulating this difference with Syft, Anchore’s open source SBOM generation tool:

$ syft -q -o spdx-json nginx:latest > nginx_a.spdx.json
$ grype -q nginx_a.spdx.json | grep Critical
libaom3             3.6.0-1+deb12u1          (won't fix)       deb   CVE-2023-6879     Critical    
libssl3             3.0.14-1~deb12u2         (won't fix)       deb   CVE-2024-5535     Critical    
openssl             3.0.14-1~deb12u2         (won't fix)       deb   CVE-2024-5535     Critical    
zlib1g              1:1.2.13.dfsg-1          (won't fix)       deb   CVE-2023-45853    Critical

In this example, we first generate an SBOM using Syft then run it through Grype—our vulnerability scanning tool. Syft + Grype uncover 4 critical vulnerabilities.

Now let’s try the same thing but “simulate” an SBOM generator that can’t fully parse the structure of the software artifact in question:

$ syft -q -o spdx-json --select-catalogers "-dpkg-db-cataloger,-binary-classifier-cataloger" nginx:latest > nginx_b.spdx.json 
$ grype -q nginx_b.spdx.json | grep Critical
$

In this case, none of the critical vulnerabilities found by the first tool are returned.

This highlights the importance of careful evaluation of the SBOM generator that you decide on. It could mean the difference between effective vulnerability risk management and a security incident.

Can the SBOM tool integrate into your DevSecOps pipeline?

If the SBOM generator is packaged as a self-contained binary with a command line interface (CLI) then it should tick this box. CI/CD build tools are most amenable to this deployment model. If the SBOM generation tool in question isn’t a CLI then it should at least run as a server with an API that can be called as part of the build process.

Integrating with an organization’s DevSecOps pipeline is key to enable a scalable SBOM generation process. By implementing SBOM creation directly into the existing build tooling, organizations can leverage existing automation tools to ensure consistency and efficiency which are necessary for achieving the desired outcomes.
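Concretely, a CLI-based generator drops into a build step in a couple of lines (a sketch; failing on critical severity is an illustrative policy choice):

# Typical CI step: generate the SBOM, then gate the build on vulnerability severity.
syft dir:. -o spdx-json=app.spdx.json
grype app.spdx.json --fail-on critical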

Proprietary vs. open source SBOM generator?

Using an open source SBOM tool is considered an industry best practice. This is because it guards against the risks associated with vendor lock-in. As a bonus, the ecosystem for open source SBOM generation tooling is very healthy. OSS will always have an advantage over proprietary in regards to ecosystem coverage and data quality because it will get into the hands of more users which will create a feedback loop that closes gaps in coverage or quality.

Finally, even if your organization decides to utilize a software supply chain security product that has its own proprietary SBOM generator, it is still better to create your SBOMs with an open source SBOM generator, export to a standardized format (e.g., SPDX or CycloneDX) then have your software supply chain security platform ingest these non-proprietary data structures. All platforms will be able to ingest SBOMs from one or both of these standards-based formats.

Wrap-Up

In a landscape where the next security/compliance/legal challenge is always just around the corner, equipping your team with the right SBOM generator empowers you to act swiftly and confidently. It’s an investment not just in a tool, but in the resilience and security of your entire software supply chain. By making a thoughtful, informed choice now, you’re laying the groundwork for a more secure and efficient future.

Automate STIG Compliance with MITRE SAF: the Fastest Path to ATO

Trying to get your head around STIG (Security Technical Implementation Guides) compliance? Anchore is here to help. With the help of the MITRE Security Automation Framework (SAF), we’ll walk you through the quickest path to STIG compliance and, ultimately, the coveted Authority to Operate (ATO).

The goal for any company that aims to provide software services to the Department of Defense (DoD) is an ATO. Without this stamp of approval, your software will never get into the hands of the warfighters that need it most. STIG compliance is a necessary needle that must be threaded on the path to ATO. Luckily, MITRE has developed and open-sourced SAF to smooth the often complex and time-consuming STIG compliance process.

We’ll get you up to speed on MITRE SAF and how it helps you achieve STIG compliance in this blog post, but before we jump straight into the content, be sure to bookmark our webinar with the Chief Architect of the MITRE Security Automation Framework (SAF), Aaron Lippold. Josh Bressers, VP of Security at Anchore, and Lippold provide a behind-the-scenes look at SAF and how it dramatically reduces the friction of the STIG compliance process.

What is the MITRE Security Automation Framework (SAF)?

The MITRE SAF is both a high-level cybersecurity framework and an umbrella that encompasses a set of security/compliance tools. It is designed to simplify STIG compliance by translating DISA (Defense Information Systems Agency) SRG (Security Requirements Guide) guidance into actionable steps. 

By following the Security Automation Framework, organizations can streamline and automate the hardened configuration of their DevSecOps pipeline to achieve an ATO (Authority to Operate).

The SAF offers four primary benefits:

  1. Accelerate Path to ATO: By streamlining STIG compliance, SAF enables organizations to get their applications into the hands of DoD operators faster. This acceleration is crucial for meeting operational demands without compromising on security standards.
  2. Establish Security Requirements: SAF translates SRGs and STIGs into actionable steps tailored to an organization’s specific DevSecOps pipeline. This eliminates ambiguity and ensures security controls are implemented correctly.
  3. Build Security In: The framework provides tooling that can be directly embedded into the software development pipeline. By automating STIG configurations and policy checks, it ensures that security measures are consistently applied, leaving no room for false steps.
  4. Assess and Monitor Vulnerabilities: SAF offers visualization and analysis tools that assist organizations in making informed decisions about their current vulnerability inventory. It helps chart a path toward achieving STIG compliance and ultimately an ATO.

The overarching vision of the MITRE SAF is to “implement evolving security requirements while deploying apps at speed.” In essence, it allows organizations to have their cake and eat it too—gaining the benefits of accelerated software delivery without letting cybersecurity risks grow unchecked.

How does MITRE SAF work?

MITRE SAF is segmented into 5 capabilities that map to specific stages of the DevSecOps pipeline or STIG compliance process:

  1. Plan
  2. Harden
  3. Validate
  4. Normalize
  5. Visualize

Let’s break down each of these capabilities.

Plan

There are hundreds of existing STIGs for products ranging from Microsoft Windows to Cisco routers to MySQL databases. On the off chance that a product your team wants to use doesn’t have a pre-existing STIG, SAF’s Vulcan tool helps translate the applicable SRG into a tailored STIG that can then be used to achieve compliance.

Vulcan helps streamline the process of creating STIG-ready security guidance and the accompanying InSpec automated policy that confirms a specific instance of software is configured in a compliant manner.

Vulcan does this by modeling the STIG intent form and tailoring the applicable SRG controls into a finished STIG for an application. The finished STIG is then sent to DISA for peer review and formal publishing as a STIG. Vulcan allows the author to develop both human-readable instructions and machine-readable InSpec automated validation code at the same time.

Diagram of the process to map SRG controls to STIG guidance via the MITRE SAF Vulcan tool, an automated conversion tool that speeds up the STIG compliance process.

Harden

The hardening capability focuses on automating STIG compliance through the use of pre-built infrastructure configuration scripts. SAF hardening content allows organizations to:

  • Use their preferred configuration management tools: Chef Cookbooks, Ansible Playbooks, Terraform Modules, etc. are available as open source templates on MITRE’s GitHub page.
  • Share and collaborate: All hardening content is open source, encouraging community involvement and shared learning.
  • Cover the full development stack: Ensuring that every layer, from cloud infrastructure to applications, adheres to security standards.

Validate

The validation capability focuses on verifying that the hardened configuration meets the applicable STIG compliance standard. These validation checks are automated via the SAF CLI tool, which incorporates the InSpec policies for a STIG. With SAF CLI, organizations can:

  • Automatically validate STIG compliance: Integrate SAF CLI directly into your CI/CD pipeline and invoke InSpec policy checks at every build, shifting security left by surfacing policy violations early (see the sketch after this list).
  • Promote community collaboration: Like the hardening content, validation scripts are open source and accessible by the community for collaborative efforts.
  • Span the entire development stack: Validation—similar to hardening—isn’t limited to a single layer; it encompasses cloud infrastructure, platforms, operating systems, databases, web servers, and applications.
  • Incorporate manual attestation: To achieve comprehensive coverage of policy requirements that automated tools might not fully address.
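Here’s a rough sketch of what those two steps can look like in a CI job. The baseline profile, target host, and threshold file are illustrative, and the exact SAF CLI flag spellings should be double-checked against saf --help:

# Run a MITRE-published InSpec STIG baseline against a target host
inspec exec https://github.com/mitre/redhat-enterprise-linux-8-stig-baseline \
  -t ssh://user@target-host --reporter cli json:rhel8_results.json

# Fail the pipeline if the results fall below your compliance threshold
saf validate threshold -i rhel8_results.json -T threshold.yaml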

Normalize

Normalization addresses the challenge of interoperability between different security tools and data formats. SAF CLI performs double-duty by taking on the normalization function as well as validation. It is able to:

  • Translate data into OHDF: The OASIS Heimdall Data Format (OHDF) is an open standard that structures countless proprietary security metadata formats into a single universal format.
  • Leverage open source OHDF libraries: Organizations can use OHDF converters as libraries within their custom applications.
  • Automate data conversion: By incorporating SAF CLI into the DevSecOps pipeline, data is automatically standardized with each run.
  • Increase compliance efficiency: A single data format for all security data enables interoperability and facilitates efficient, automated STIG compliance.

Example: Below is an example of Burp Suite’s proprietary data format normalized to the OHDF JSON format:

Image of Burp Suite data format being mapped to MITRE SAF's OHDF to reduce manual data mapping and reduce time to STIG compliance.
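As a rough sketch, the conversion itself is a single SAF CLI command (the file names here are illustrative; check saf convert --help for the exact flags):

# Convert a Burp Suite XML export into OHDF JSON
saf convert burpsuite2hdf -i burpsuite_scan.xml -o burpsuite_scan.ohdf.json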

Visualize

Visualization is critical for understanding security posture and making informed decisions. SAF provides an open source, self-hosted visualization tool named Heimdall. It ingests OHDF normalized security data and provides the data analysis tools to enable organizations to:

  • Aggregate security and compliance results: Compiling data into comprehensive rollups, charts, and timelines for a holistic view of security and compliance status.
  • Perform deep dives: Allowing teams to explore detailed vulnerability information to facilitate investigation and remediation, ultimately speeding up time to STIG compliance.
  • Guide risk reduction efforts: Visualized insights help prioritize security and compliance tasks, reducing risk in the most efficient manner.

How is SAF related to a DoD Software Factory?

A DoD Software Factory is the common term for a DevSecOps pipeline that meets the definition laid out in the DoD Enterprise DevSecOps Reference Design. All software that ultimately achieves an ATO has to be built on a fully implemented DoD Software Factory. You can either build your own or use a pre-existing DoD Software Factory like the US Air Force’s Platform One or the US Navy’s Black Pearl.

As we saw earlier, MITRE SAF is a framework meant to help you achieve STIG compliance and is a portion of your journey towards an ATO. STIG compliance applies to both the software that you write and the DevSecOps platform that your software is built on. Building your own DoD Software Factory means committing to going through the ATO process and STIG compliance for the DevSecOps platform first, then a second time for the end-user application.

Wrap-Up

The MITRE SAF is a huge leg up for modern, cloud-native DevSecOps software vendors that are currently navigating the labyrinth towards ATO. By providing actionable guidance, automation tooling, and a community-driven approach, SAF dramatically reduces the time to ATO. It bridges the gap between the speed of DevOps software delivery and secure, compliant applications ready for critical DoD missions with national security implications. 

Embracing SAF means more than just meeting regulatory requirements; it’s about building a resilient, efficient, and secure development pipeline that can adapt to evolving security demands. In an era where cybersecurity threats are evolving just as rapidly as software, leveraging frameworks like MITRE SAF is not just an efficient path to compliance—it’s essential for sustained success.

Introducing Anchore Data Service and Anchore Enterprise 5.10

We are thrilled to announce the release of Anchore Enterprise 5.10, our tenth release of 2024. This update brings two major enhancements that will elevate your experience and bolster your security posture: the new Anchore Data Service (ADS) and expanded AnchoreCTL ecosystem coverage. 

With ADS, we’ve built a fast and reliable solution that reduces time spent by DevSecOps teams debugging intermittent network issues from flaky services that are vital to software supply chain security.

On top of that, we have buffed our software composition analysis (SCA) scanner’s ecosystem coverage (e.g., C++, Swift, Elixir, R, etc.) for all Anchore customers. To do this we embedded Syft, our popular open source SCA/SBOM (software bill of materials) generator, directly into Anchore Enterprise.

It’s been a fall of big releases at Anchore and we’re excited to continue delivering value to our loyal customers. Read on to get all of the gory details >>

Announcing the Anchore Data Service

Before, customers ran the Anchore Feed Service within their environment to pull data feeds into their Anchore Enterprise deployment. To get an idea of what this looked like, see the architecture diagram of Anchore Enterprise prior to version 5.10:

Originally we did this to give customers more control over their environment. Unfortunately, this wasn’t without its issues. The data feeds are provided by the community, which means the services were designed to be accessible but cost-efficient. As a result, they were unreliable and frequently suffered availability issues.

We only have to stretch our memory back to the spring to recall an example that made national headlines. The National Vulnerability Database (NVD) ran into funding issues. This impacted both the analysis of new vulnerabilities AND the availability of their API. This created significant friction for Anchore customers—not to mention the entirety of the software industry.

Now, Anchore is running its own enterprise-grade service, named Anchore Data Service (ADS). It is a replacement for the former feed service. ADS aggregates all of the community data feeds, enriches the data (with proprietary threat data), and packages it for customers—all with the service availability guarantee expected of an enterprise service.

The new architecture with ADS as the intermediary is illustrated below:

As a bonus for our customers running air-gapped deployments of Anchore Enterprise, there is no longer a need to run a second deployment of Anchore Enterprise in a DMZ to pull down the data feeds. Instead, a single file is pulled from ADS and transferred to a USB thumb drive. From there, a single CLI command updates your air-gapped deployment of Anchore Enterprise.

Increased AnchoreCTL Ecosystem Coverage

We have increased the number of supported ecosystems (e.g., C++, Swift, Elixir, R, etc.) in Anchore Enterprise. This improves coverage and increases the likelihood that all of your organization’s applications can be scanned and protected by Anchore Enterprise.

More importantly, we have completely re-architected the process for how Anchore Enterprise supports new ecosystems. By integrating Syft—Anchore’s open source SBOM generation tool—directly into AnchoreCTL, Anchore’s customers will now get access to new ecosystem support as they are merged into Syft’s codebase.

Before, Syft and AnchoreCTL were somewhat separate, which caused AnchoreCTL’s support for new ecosystems to lag Syft’s. Now, they are fully integrated. This enables all of Anchore’s enterprise and public sector customers to take full advantage of the open source community’s development velocity.

Full list of supported ecosystems

Below is a complete list of all ecosystems supported by both Syft and AnchoreCTL (as of Anchore Enterprise 5.10; see our docs for the most current info):

  • Alpine (apk)
  • C (conan)
  • C++ (conan)
  • Dart (pubs)
  • Debian (dpkg)
  • Dotnet (deps.json)
  • Objective-C (cocoapods)
  • Elixir (mix)
  • Erlang (rebar3)
  • Go (go.mod, Go binaries)
  • Haskell (cabal, stack)
  • Java (jar, ear, war, par, sar, nar, native-image)
  • JavaScript (npm, yarn)
  • Jenkins Plugins (jpi, hpi)
  • Linux kernel archives (vmlinuz)
  • Linux kernel modules (ko)
  • Nix (outputs in /nix/store)
  • PHP (composer)
  • Python (wheel, egg, poetry, requirements.txt)
  • Red Hat (rpm)
  • Ruby (gem)
  • Rust (cargo.lock)
  • Swift (cocoapods, swift-package-manager)
  • WordPress plugins

After you update to Anchore Enterprise 5.10, the SBOM inventory will display all of the new ecosystems. Any SBOMs that have been generated for a particular ecosystem will show up at the top. The screenshot below gives you an idea of what this will look like:

Wrap-Up

Anchore Enterprise 5.10 marks a new chapter in providing reliable, enterprise-ready security tooling for modern software development. The introduction of the Anchore Data Service ensures that you have consistent and dependable access to critical vulnerability and exploit data, while the expanded ecosystem support means that no part of your tech stack is left unscrutinized for latent risk. Upgrade to the latest version and experience these new features for yourself.

To update and leverage these new features, check out our docs, reach out to your Customer Success Engineer, or contact our support team. Your feedback is invaluable to us, and we look forward to continuing to support your organization’s security needs. We are offering all of our product updates as a new quarterly product update webinar series. Watch the fall webinar update in the player below to get all of the juicy tidbits from our product team.

Preparing for a critical vulnerability

One morning, you wake up and see a tweet like the one above. The immediate response is often panic. This sounds bad; it probably affects everyone, and nobody knows for certain what to do next. Eventually, the panic subsides, but we still have a problem that needs to be dealt with. So the question to ask is: What can we do?

Don’t panic

A vague statement about a situation that will probably affect everyone sounds like a problem we can’t possibly prepare for. Waiting is generally the worst option in situations like this, but there are some things we can do in the meantime to help us out.

One of the biggest challenges in modern infrastructure is just understanding what you have. This sounds silly, but it’s a tough problem because of how most software is built now. You depend on a few open-source projects, and those projects depend on a few other open-source projects, which rely on even more open-source projects. And before you know it, you have 300 open-source projects instead of the 6 you think you installed.

Our goal is to create an inventory of all our software. If you have an accurate, up-to-date inventory, you don’t have to wonder whether you’re running some random application, like CUPS. You will know beyond a reasonable doubt. Knowing what software you do (or don’t) have deployed brings an amazing amount of peace of mind.

This is not new

This was the same story during the Log4j and xz emergencies. What induced panic in everyone wasn’t the vulnerabilities themselves but the scramble to find where those libraries were deployed. In many instances, we observed engineers manually connecting to servers over SSH and dissecting container images.

These security emergencies will never end, but they all play out similarly. There is a gap between when something is announced and when good, actionable guidance appears. Once the security community comes to a better understanding of the issue, we can figure out the best way to deal with the problem. This could mean updating packages, adjusting firewall rules, or changing a configuration option.

While we wait for the best guidance, what if we were going through our software inventory? When Log4Shell happened almost everyone spent the first few days or weeks (or months) just figuring out if they had Log4j anywhere. If you have an inventory, those first few days can be spent putting together a plan for applying the guidance you know is coming. It’s a much nicer way to spend time than frantically searching!

The inventory

Creating an inventory sounds like it shouldn’t be that hard. How much software do you REALLY have? It’s hard, and software poses a unique challenge because there are nearly infinite ways to build and assemble any given application. Do you have OpenSSL as an operating system package? Or is it a library in a random directory? Maybe it’s statically compiled into the binary. Maybe we download a copy off the internet when the application starts. Maybe it’s all of these … at the same time.

This complexity is taken to a new level when you consider how many computers, servers, containers, and apps are deployed. The scale means automation is the only way we can do this. Humans cannot handcraft an inventory. They are too slow and make too many mistakes for this work, but robots are great at it!

But the automation we have today isn’t perfect. It’s early days for many of these scanners and inventory formats (such as a Software Bill of Materials or SBOM). We must grasp what possible blind spots our inventories may have. For example, some scanners do a great job finding operating system packages but aren’t as good at finding Java archives (jar files). This is part of what makes the current inventory process difficult. The tooling is improving at an impressive rate; don’t write anything off as too incomplete. It will change and get better in the future.

Enter the SBOM

Now that we have mentioned SBOMs, we should briefly explain how they fit into this inventory universe. An SBOM does nothing by itself; it’s just a file format for capturing information, such as a software inventory.

Anchore developers have written plenty over the years about what an SBOM is, but here is the tl;dr:

An SBOM is a detailed list of all software project components, libraries, and dependencies. It serves as a comprehensive inventory that helps understand the software’s structure and the origins of its components.

An SBOM in your project enhances security by quickly identifying and mitigating vulnerabilities in third-party components. Additionally, it ensures compliance with regulatory standards and provides transparency, which is essential for maintaining trust with stakeholders and users.
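To make that concrete, here is a trimmed sketch of the kind of record a CycloneDX SBOM keeps for a single component (a real SBOM lists every component and many more fields):

{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {
      "type": "library",
      "name": "log4j-core",
      "version": "2.17.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1"
    }
  ]
}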

An example

To explain what all this looks like and some of the difficulties, let’s go over an example using the eclipse-temurin Java runtime container image. It would be very common for a developer to build on top of this image. It also shows many of the challenges in trying to pin down a software inventory.

The Dockerfile we’re going to reference can be found on GitHub, and the container image can be found on Docker Hub.

The first observation is that this container uses Ubuntu as the underlying container image.

This is great: Ubuntu has a very nice packaging system, and it’s no trouble to see what’s installed. We can easily do this with Syft.

bress@anchore   ~ syft ubuntu:24.04
  Parsed image sha256:61b2756d6fa9d6242fafd5b29f674404779be561db2d0bd932aa3640ae67b9e1
  Cataloged contents 74f92a6b3589aa5cac6028719aaac83de4037bad4371ae79ba362834389035aa
   ├──  Packages                        [91 packages]
   ├──  File digests                    [2,259 files]
   ├──  File metadata                   [2,259 locations]
   └──  Executables                     [722 executables]
NAME                 VERSION                      TYPE
apt                  2.7.14build2                 deb
base-files           13ubuntu10.1                 deb
base-passwd          3.6.3build1                  deb
bash                 5.2.21-2ubuntu4              deb
bsdutils             1:2.39.3-9ubuntu6.1          deb
coreutils            9.4-3ubuntu6                 deb
dash                 0.5.12-6ubuntu5              deb
debconf              1.5.86ubuntu1                deb
debianutils          5.17build1                   deb
diffutils            1:3.10-1build1               deb
dpkg                 1.22.6ubuntu6.1              deb
e2fsprogs            1.47.0-2.4~exp1ubuntu4.1     deb
findutils            4.9.0-5build1                deb
gcc-14-base          14-20240412-0ubuntu1         deb
gpgv                 2.4.4-2ubuntu17              deb
grep                 3.11-4build1                 deb
gzip                 1.12-1ubuntu3                deb
hostname             3.23+nmu2ubuntu2             deb
init-system-helpers  1.66ubuntu1                  deb

There has been nothing exciting so far. But if we look a little deeper at the eclipse-temurin Dockerfile, we see it’s installing the Java JDK using wget. That’s not something we’ll find just by looking at Ubuntu packages.

If we scan this image with Syft, we can see a few different types of packages installed.

bress@anchore ~ syft eclipse-temurin:8u422-b05-jre-noble
  Parsed image sha256:d2c2442dea2a2b1164bd6dd39af673db2215ff680910aff7417432b00a3c8e4d
  Cataloged contents 805b45dee2c503f1cca36e1ecc6e8625538592e2db32cc04e317a246fb86d0fc
   ├──  Packages                        [142 packages]
   ├──  File digests                    [3,856 files]
   ├──  File metadata                   [3,856 locations]
   └──  Executables                     [809 executables]
NAME                 VERSION                            TYPE

hostname             3.23+nmu2ubuntu2                   deb
init-system-helpers  1.66ubuntu1                        deb
jaccess              UNKNOWN                            java-archive
jce                  1.8.0_422                          java-archive
jfr                  1.8.0_422                          java-archive
jsse                 1.8.0_422                          java-archive
libacl1              2.3.2-1build1                      deb
libapt-pkg6.0t64     2.7.14build2                       deb
libassuan0           2.5.6-1build1                      deb
libattr1             1:2.5.2-1build1                    deb
libaudit-common      1:3.1.2-2.1build1                  deb

The JDK and JRE are binaries in the image, as are some Java archives. This is a gotcha to watch for when you’re building an inventory: many inventories and scanners only look for packages from known package managers, not binaries and other files installed on the system. In a perfect world, our SBOM tells us details about everything in the image, not just one package type.

At this point, you can imagine a developer adding more things to the container: code they wrote, Java Archives, data files, and maybe even a few more binary files, probably installed with wget or curl.

What next

This sounds pretty daunting, but it’s not that hard to start building an inventory. You don’t need a fancy system. The easiest way is to find an open source SBOM generator, like Syft, and put the SBOMs in a directory. It’s not perfect, but even searching through those files is faster than manually finding every version of CUPS in your infrastructure.
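A minimal sketch of that starting point, assuming Syft is installed and using illustrative image names:

# Generate an SBOM for each image you run and keep them in one place
mkdir -p sboms
for image in ubuntu:24.04 eclipse-temurin:8u422-b05-jre-noble; do
  syft "$image" -o json > "sboms/$(echo "$image" | tr ':/' '__').json"
done

# Months later, during an emergency: which images contain cups?
grep -il '"name": *"cups' sboms/*.json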

Once you have an initial inventory, you can investigate more complete solutions. There are countless open source projects, products (such as Anchore Enterprise), and services that can help here. Just don’t expect to go from zero to complete overnight; big projects need immense patience.

It’s like the old proverb that the best time to plant a tree was twenty years ago; the second best time is now. The best time to start an inventory system was a decade ago; the second best time is now.

If you’d like to discuss any topics raised in this post, join us on this discourse thread.

Compliance Requirements for DISA’s Security Technical Implementation Guides (STIGs)

In the rapidly modernizing landscape of cybersecurity compliance, evolving to a continuous compliance posture is more critical than ever—particularly for organizations involved with the Department of Defense (DoD) and other government agencies. At the heart of the DoD’s modern approach to software development is the DoD Enterprise DevSecOps Reference Design, commonly implemented as a DoD Software Factory.

A key component of this framework is adhering to the Security Technical Implementation Guides (STIGs) developed by the Defense Information Systems Agency (DISA). STIG compliance within the DevSecOps pipeline not only accelerates the delivery of secure software but also embeds robust security practices directly into the development process, safeguarding sensitive data and reinforcing national security.

This comprehensive guide will walk you through what STIGs are, who should care about them, the levels of STIG compliance, key categories of STIG requirements, how to prepare for the STIG compliance process, and the tools available to automate STIG implementation and maintenance.

What are STIGs and who should care?

Understanding DISA and STIGs

The Defense Information Systems Agency (DISA) is the DoD agency responsible for delivering information technology (IT) support to ensure the security of U.S. national defense systems. To help organizations meet the DoD’s rigorous security controls, DISA develops Security Technical Implementation Guides (STIGs).

STIGs are configuration standards that provide prescriptive guidance on how to secure operating systems, network devices, software, and other IT systems. They serve as a secure configuration standard to harden systems against cyber threats.

For example, a STIG for the open source Apache web server would specify that encryption is enabled for all traffic (incoming or outgoing). This would require the generation of SSL/TLS certificates on the server in the correct location, updating the server’s configuration file to reference this certificate and re-configuration of the server to serve traffic from a secure port rather than the default insecure port.
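As a purely illustrative sketch (not the literal STIG text), the corresponding Apache httpd configuration might look something like this:

Listen 443 https
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/server.crt
    SSLCertificateKeyFile /etc/pki/tls/private/server.key
</VirtualHost>

# Redirect the default insecure port to the secure one
<VirtualHost *:80>
    Redirect permanent / https://www.example.com/
</VirtualHost>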

Who should care about STIG compliance?

STIG compliance is mandatory for any organization that operates within the DoD network or handles DoD information. This includes:

  • DoD Contractors and Vendors: Companies providing products or services to the DoD—a.k.a. the defense industrial base (DIB)—must ensure their systems comply with STIG requirements.
  • Government Agencies: Federal agencies interfacing with the DoD need to adhere to applicable STIGs.
  • DoD Information Technology Teams: IT professionals within the DoD responsible for system security must implement STIGs.

Connection to the RMF and NIST SP 800-53

The Risk Management Framework (RMF)—more formally NIST 800-37—is a framework that integrates security and risk management into IT systems as they are being developed. The STIG compliance process outlined below is directly integrated into the higher-level RMF process. As you follow the RMF, the individual steps of STIG compliance will be completed in turn.

STIGs are also closely connected to the NIST 800-53, colloquially known as the “Control Catalog”. NIST 800-53 outlines security and privacy controls for all federal information systems; the controls are not prescriptive about the implementation, only the best practices and outcomes that need to be achieved. 

As DISA developed the STIG compliance standard, they started with the NIST 800-53 controls and then “tailored” them to meet the needs of the DoD; these customized security best practices are known as Security Requirements Guides (SRGs). In order to remove all ambiguity around how to meet these higher-level best practices, STIGs were created with implementation-specific instructions.

For example, an SRG will mandate that all systems utilize a cybersecurity best practice, such as, role-based access control (RBAC) to prevent users without the correct privileges from accessing certain systems. A STIG, on the other hand, will detail exactly how to configure an RBAC system to meet the highest security standards.

Levels of STIG Compliance

The DISA STIG compliance standard uses Severity Category Codes to classify vulnerabilities based on their potential impact on system security. These codes help organizations prioritize remediation efforts. The three Severity Category Codes are:

  1. Category I (Cat I): These are the highest risk vulnerabilities, allowing an attacker immediate access to a system or network or allowing superuser access. Due to their high-risk nature, these vulnerabilities must be addressed immediately.
  2. Category II (Cat II): These vulnerabilities provide information with a high potential of giving access to intruders. These findings are considered a medium risk and should be remediated promptly.
  3. Category III (Cat III): These vulnerabilities constitute the lowest risk, providing information that could potentially lead to compromise. Although not as pressing as Cat I & II issues, it is still important to address these vulnerabilities to minimize risk and enhance overall security.

Understanding these categories is crucial in the STIG process, as they guide organizations in prioritizing remediation of vulnerabilities.

Key categories of STIG requirements

Given the extensive range of technologies used in DoD environments, there are hundreds of STIGs applicable to different systems, devices, applications, and more. While we won’t list all STIG requirements here, it’s important to understand the key categories and who they apply to.

1. Operating System STIGs

Applies to: System Administrators and IT Teams managing servers and workstations

Examples:

  • Microsoft Windows STIGs: Provides guidelines for securing Windows operating systems.
  • Linux STIGs: Offers secure configuration requirements for various Linux distributions.

2. Network Device STIGs

Applies to: Network Engineers and Administrators

Examples:

  • Network Router STIGs: Outlines security configurations for routers to protect network traffic.
  • Network Firewall STIGs: Details how to secure firewall settings to control access to networks.

3. Application STIGs

Applies to: Software Developers and Application Managers

Examples:

  • Generic Application Security STIG: Outlines the security best practices needed to be STIG compliant.
  • Web Server STIG: Provides security requirements for web servers.
  • Database STIG: Specifies how to secure database management systems (DBMS).

4. Mobile Device STIGs

Applies to: Mobile Device Administrators and Security Teams

Examples:

  • Apple iOS STIG: Provides guidance for securing Apple mobile devices used within the DoD.
  • Android OS STIG: Details security configurations for Android devices.

5. Cloud Computing STIGs

Applies to: Cloud Service Providers and Cloud Infrastructure Teams

Examples:

  • Microsoft Azure SQL Database STIG: Offers security requirements for Azure SQL Database cloud service.
  • Cloud Computing OS STIG: Details secure configurations for any operating system offered by a cloud provider that doesn’t have a specific STIG.

Each category addresses specific technologies and includes a STIG checklist to ensure all necessary configurations are applied. 

You can view an example of a STIG checklist for “Application Security and Development” by following this link.

How to Prepare for the STIG Compliance Process

Achieving DISA STIG compliance involves a structured approach. Here are the stages of the STIG process and tips to prepare:

Stage 1: Identifying Applicable STIGs

With hundreds of STIGs relevant to different organizations and technology stacks, this step should not be underestimated. First, conduct an inventory of all systems, devices, applications, and technologies in use. Then, review the complete list of STIGs to match each to your inventory to ensure that all critical areas requiring secure configuration are addressed. This step is essential to avoiding gaps in compliance.

Tip: Use automated tools to scan your environment, then match assets to the relevant STIGs.

Stage 2: Implementation

After you’ve mapped your technology to the corresponding STIGs, the process of implementing the security configurations outlined in the guides begins. This step may require collaboration between IT, security, and development teams to ensure that the configurations are compatible with the organization’s infrastructure while enforcing strict security standards. Be sure to keep detailed records of changes made.

Tip: Prioritize implementing fixes for Cat I vulnerabilities first, followed by Cat II and Cat III. Depending on the urgency and needs of the mission, ATO can still be achieved with partial STIG compliance. Prioritizing efforts increases the chances that partial compliance is permitted.

Stage 3: Auditing & Maintenance

After the STIGs have been implemented, regular auditing and maintenance are critical to ensure ongoing compliance, verifying that no deviations have occurred over time due to system updates, patches, or other changes. This stage includes periodic scans, manual reviews, and remediation of any identified gaps. Additionally, organizations should develop a plan to stay informed about new STIG releases and updates from DISA.

Tip: Establish a maintenance schedule and assign responsibilities to team members. Alternatively, adopt a policy-as-code approach to continuous compliance: by embedding STIG compliance requirements as code directly into your DevSecOps pipeline, you can automate this process.

General Preparation Tips

  • Training: Ensure your team is familiar with STIG requirements and the compliance process.
  • Collaboration: Work cross-functionally with all relevant departments, including IT, security, and compliance teams.
  • Resource Allocation: Dedicate sufficient resources, including time and personnel, to the compliance effort.
  • Continuous Improvement: Treat STIG compliance as an ongoing process rather than a one-time project.

Tools to automate STIG implementation and maintenance

Automation can significantly streamline the STIG compliance process. Here are some tools that can help:

1. Anchore STIG (Static and Runtime)

  • Purpose: Automates the process of checking container images against STIG requirements.
  • Benefits:
    • Simplifies compliance for containerized applications.
    • Integrates into CI/CD pipelines for continuous compliance.
  • Use Case: Ideal for DevSecOps teams utilizing containers in their deployments.

2. SCAP Compliance Checker

  • Purpose: Provides automated compliance scanning using the Security Content Automation Protocol (SCAP).
  • Benefits:
    • Validates system configurations against STIGs.
    • Generates detailed compliance reports.
  • Use Case: Useful for system administrators needing to audit various operating systems.

3. DISA STIG Viewer

  • Purpose: Helps in viewing and managing STIG checklists.
  • Benefits:
    • Allows for easy navigation of STIG requirements.
    • Facilitates documentation and reporting.
  • Use Case: Assists compliance officers in tracking compliance status.

4. DevOps Automation Tools

  • Infrastructure Automation Examples: Red Hat Ansible, Perforce Puppet, HashiCorp Terraform
  • Software Build Automation Examples: CloudBees CI, GitLab
  • Purpose: Automate the deployment of secure configurations that meet STIG compliance across multiple systems.
  • Benefits:
    • Ensures consistent application of secure configuration standards.
    • Reduces manual effort and the potential for errors.
  • Use Case: Suitable for large-scale environments where manual configuration is impractical.

5. Vulnerability Management Tools

  • Examples: Anchore Secure
  • Purpose: Identify vulnerabilities and compliance issues within your network.
  • Benefits:
    • Provides actionable insights to remediate security gaps.
    • Offers continuous monitoring capabilities.
  • Use Case: Critical for security teams focused on proactive risk management.

Wrap-Up

Achieving DISA STIG compliance is mandatory for organizations working with the DoD. By understanding what STIGs are, who they apply to, and how to navigate the compliance process, your organization can meet the stringent compliance requirements set forth by DISA. As a bonus, you will enhance your security posture and reduce the potential for a security breach.

Remember, compliance is not a one-time event but an ongoing effort that requires regular updates, audits, and maintenance. Leveraging automation tools like Anchore STIG and Anchore Secure can significantly ease this burden, allowing your team to focus on strategic initiatives rather than manual compliance tasks.

Stay proactive, keep your team informed, and make use of the resources available to ensure that your IT systems remain secure and compliant.

Navigating Open Source Software Compliance in Regulated Industries

Open source software (OSS) brings a wealth of benefits; speed, innovation, cost savings. But when serving customers in highly regulated industries like defense, energy, or finance, a new complication enters the picture—compliance.

Imagine your DevOps-fluent engineering team has been leveraging OSS to accelerate product delivery, and suddenly, a major customer hits you with a security compliance questionnaire. What now? 

Regulatory compliance isn’t just about managing the risks of OSS for your business anymore; it’s about providing concrete evidence that you meet standards like FedRAMP and the Secure Software Development Framework (SSDF).

The tricky part is that the OSS “suppliers” making up 70-90% of your software supply chain aren’t traditional vendors—they don’t have the same obligations or accountability, and they’re not necessarily aligned with your compliance needs. 

So, who bears the responsibility? You do.

The OSS your engineering team consumes is your resource and your responsibility. This means you’re not only tasked with managing the security risks of using OSS but also with proving that both your applications and your OSS supply chain meet compliance standards. 

In this post, we’ll explore why you’re ultimately responsible for the OSS you consume and outline practical steps to help you use OSS while staying compliant.

Learn about CISA’s SSDF attestation form and how to meet compliance.

What does it mean to use open source software in a regulated environment?

Highly regulated environments add a new wrinkle to the OSS security narrative. The OSS developers who author the software dependencies that make up the vast majority of modern software supply chains aren’t vendors in the traditional sense. They are more of a volunteer force that allows you to re-use their work, but it is a take-it-or-leave-it agreement. You have no recourse if it doesn’t work as expected or, worse, has vulnerabilities in it.

So, how do you meet compliance standards when your software supply chain is built on top of a foundation of OSS?

Who is the vendor? You are!

Whether you have internalized this or not, the open source software that your developers consume is your resource and thus your responsibility.

This means that you shoulder the burden of not only managing the security risk of consuming OSS but also proving that both your applications and your OSS supply chain meet compliance standards.

Open source software is a natural resource

Before we jump into how to accomplish the task set forth in the previous section, let’s take some time to understand why you are the vendor when it comes to open source software.

The common idea is that OSS is produced by a 3rd-party that isn’t part of your organization, so they are the software supplier. Shouldn’t they be the ones required to secure their code? They control and maintain what goes in, right? How are they not responsible?

To answer that question, let’s think about OSS as a natural resource that is shared by the public at large, for instance the public water supply.

This shouldn’t be too much of a stretch. We already use terms like upstream and downstream to think about the relationship between software dependencies and the global software supply chain.

Using this mental model, it becomes easier to understand that a public good isn’t a supplier. You can’t ask a river or a lake for an audit report attesting that it is contaminant-free and safe to drink.

Instead the organization that processes and provides the water to the community is responsible for testing the water and guaranteeing its safety. In this metaphor, your company is the one processing the water and selling it as pristine bottled water. 

How do you pass the buck to your “supplier”? You can’t. That’s the point.

This probably has you asking yourself: if I am responsible for my own OSS supply chain, then how do I meet a compliance standard for something that I don’t control? Keep reading and you’ll find out.

How do I use OSS and stay compliant?

While compliance standards are often thought of as rigid, the reality is much more nuanced. Just because your organization doesn’t own/control the open source projects that you consume doesn’t mean that you can’t use OSS and meet compliance requirements.

There are a few different steps that you need to take in order to build a “reasonably secure” OSS supply chain that will pass a compliance audit. We’ll walk you through the steps below:

Step 1 — Know what you have (i.e., an SBOM inventory)

The foundation of the global software supply chain is the SBOM (software bill of materials) standard. Each of the security and compliance functions outlined in the steps below use or manipulate an SBOM.

SBOMs are the foundational component of the global software supply chain because they record the ingredients that were used to produce the application an end-user will consume. If you don’t have a good grasp of the ingredients of your applications there isn’t much hope for producing any upstream security or compliance guarantees.

The best way to create observability into your software supply chain is to generate an SBOM for every single application in your DevSecOps build pipeline—at each stage of the pipeline!

Step 2 — Maintain a historical record of application source code

To meet compliance standards like FedRAMP and SSDF, you need to be able to maintain a historical record of the source code of your applications, including: 

  • Where it comes from, 
  • Who created it, and 
  • Any modifications made to it over time.

SBOMs were designed to meet these requirements. They act as a record of how applications were built and when/where OSS dependencies were introduced. They also double as compliance artifacts that prove you are compliant with regulatory standards.

Governments aren’t content with self-attestation (at least not for long); they need hard evidence to verify that you are trustworthy. Even though SSDF is currently self-attestation only, the federal government is known for rolling out compliance frameworks in stages: first advising on best practices, then requiring self-attestation, and finally mandating external validation via a certification process.

The Cybersecurity Maturity Model Certification (CMMC) is a good example of this dynamic process. It recently transitioned from self-attestation to external validation with the introduction of the 2.0 release of the framework.

Step 3 — Manage your OSS vulnerabilities

Not only do you need to keep a record of applications as they evolve over time, you have to track the known vulnerabilities of your OSS dependencies to achieve compliance. Just as SBOMs prove provenance, vulnerability scans are proof that your application and its dependencies aren’t vulnerable. These scans are a crucial piece of the evidence that you will need to provide to your compliance officer as you go through the certification process. 

Remember: the buck stops with you! If the OSS that your application consumes doesn’t supply an SBOM and vulnerability scan (which describes essentially all OSS projects), then you are responsible for creating them. There is no vendor to pass the blame to for proving that your supply chain is reasonably secure and thus compliant.

Step 4 — Continuous compliance of open source software supply chain

It is important to recognize that modern compliance standards are no longer sprints but marathons. Not only do you have to prove that your application(s) are compliant at the time of the audit, but you also have to demonstrate that they remain secure continuously in order to maintain your certification.

This can be challenging to scale but it is made easier by integrating SBOM generation, vulnerability scanning and policy checks directly into the DevSecOps pipeline. This is the approach that modern, SBOM-powered SCAs advocate for.

By embedding compliance policy-as-code into your DevSecOps pipeline as policy gates, compliance can be maintained over time. Developers are alerted when their code doesn’t meet a compliance standard and are directed to take corrective action. These compliance checks can also be used to automatically generate the needed compliance artifacts.

You already have an automated DevSecOps pipeline that produces and delivers applications with minimal human intervention. Why not take advantage of this existing tooling to automate open source software compliance in the same way that security was integrated directly into DevOps?

Real-world Examples

To help bring these concepts to life, we’ve outlined some real-world examples of how open source software and compliance intersect:

Open source project has unfixed vulnerabilities

This is far and away the most common issue that comes up during compliance audits. One of your application’s OSS dependencies has a known vulnerability that has been sitting in the backlog for months or even years!

There are several reasons why an open source software developer might leave a known vulnerability unresolved:

  • They prioritize a feature over fixing a vulnerability
  • The vulnerability is from a third-party dependency they don’t control and can’t fix
  • They don’t like fixing vulnerabilities and choose to ignore it
  • They reviewed the vulnerability and decided it’s not likely to be exploited, so it’s not worth their time
  • They’re planning a codebase refactor that will address the vulnerability in the future

These are all rational reasons for vulnerabilities to persist in a codebase. Remember, OSS projects are owned and maintained by 3rd-party developers who control the repository; they make no guarantees about its quality. They are not vendors.

You, on the other hand, are a vendor and must meet compliance requirements. The responsibility falls on you. An OSS vulnerability management program is how you meet your compliance requirements while enjoying the benefits of OSS.

Need to fill out a supplier questionnaire

Imagine you’re a cloud service provider or software vendor. Your sales team is trying to close a deal with a significant customer. As the contract nears signing, the customer’s legal team requests a security questionnaire. They’re in the business of protecting their organization from financial risk stemming from their supply chain, and your company is about to become part of that supply chain.

These forms are usually from lawyers, very formal, and not focused on technical attacks. They just want to know what you’re using. The quick answer? “Here’s our SBOM.” 

Compliance comes in the form of public standards like FedRAMP, SSDF, NIST, etc., and these less formal security questionnaires. Either way, being unable to provide a full accounting of the risks in your software supply chain can be a speed bump to your organization’s revenue growth and success.

Integrating SBOM scanning, generation, and management deeply into your DevSecOps pipeline is key to accelerating the sales process and your organization’s overall success.

Prove provenance

CISA’s SSDF Attestation form requires that enterprises selling software to the federal government can produce a historical record of their applications. Quoting directly: “The software producer [must] maintain provenance for internal code and third-party components incorporated into the software to the greatest extent feasible.”

If you want access to the revenue opportunities the U.S. federal government offers, SSDF attestation is the needle you have to thread. Meeting this requirement without hiring an army of compliance engineers to manually review your entire DevSecOps pipeline demands an automated OSS component observability and management system.

Often, we jump to cryptographic signatures, encryption keys, trust roots—this quickly becomes a mess. Really, just a hash of the files in a database (read: SBOM inventory) satisfies the requirement. Sometimes, simpler is better. 
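As a minimal sketch of that idea (the file names are illustrative), a provenance record can be as simple as content hashes stored alongside the SBOM for the same build:

# Hash the release artifacts and record an SBOM for the same build
sha256sum dist/* > release-1.2.3.sha256
syft dir:dist -o json > release-1.2.3.sbom.json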

Discover the “easy button” to SSDF Attestation and OSS supply chain compliance in our previous blog post.

Takeaways

OSS Is Not a Vendor—But You Are! The best way to have your OSS cake and eat it too (without the indigestion) is to:

  1. Know Your Ingredients: Maintain an SBOM inventory of your OSS supply chain.
  2. Maintain a Complete Historical Record: Keep track of your application’s source code and build process.
  3. Scan for Known Vulnerabilities: Regularly check your OSS dependencies.
  4. Continuous Compliance through Automation: Generate compliance records automatically to scale your compliance process.

There are numerous reasons to aim for open source software compliance, especially for your software supply chain:

  • Balance Gains Against Risks: Leverage OSS benefits while managing associated risks.
  • Reduce Financial Risk: Protect your organization’s existing revenue.
  • Increase Revenue Opportunities: Access new markets that mandate specific compliance standards.
  • Avoid Becoming a Cautionary Tale: Stay ahead of potential security incidents.

Regardless of your motivation for wanting to use OSS and use it responsibly (i.e., securely and compliantly), Anchore is here to help. Reach out to our team to learn more about how to build and manage a secure and compliant OSS supply chain.

Learn the container security best practices, including open source software (OSS) security, to reduce the risk of software supply chain attacks.

US Navy achieves ATO in days with continuous compliance and OSS risk management

Implementing secure and compliant software solutions within the Department of Defense’s (DoD) software factory framework is no small feat. 

For Black Pearl, the premier DevSecOps platform for the U.S. Navy, and Sigma Defense, a leading DoD technology contractor, the challenge was not just about meeting stringent security requirements but to empower the warfighter. 

We’ll cover how they streamlined compliance, managed open source software (OSS) risk, and reduced vulnerability overload—all while accelerating their Authority to Operate (ATO) process.

Challenge: Navigating Complex Security and Compliance Requirements

Black Pearl and Sigma Defense faced several critical hurdles in meeting the stringent security and compliance standards of the DoD Enterprise DevSecOps Reference Design:

  • Achieving RMF Security and Compliance: Black Pearl needed to secure its own platform and help its customers achieve ATO under the Risk Management Framework (RMF). This involved meeting stringent security controls like RA-5 (Vulnerability Management), SI-3 (Malware Protection), and IA-5 (Credential Management) for both the platform and the applications built on it.
  • Maintaining Continuous Compliance: With the RAISE 2.0 memo emphasizing continuous ATO compliance, manual processes were no longer sufficient. The teams needed to automate compliance tasks to avoid the time-consuming procedures traditionally associated with maintaining ATO status.
  • Managing Open-Source Software (OSS) Risks: Open-source components are integral to modern software development but come with inherent risks. Black Pearl had to manage OSS risks for both its platform and its customers’ applications, ensuring vulnerabilities didn’t compromise security or compliance.
  • Vulnerability Overload for Developers: Developers often face an overwhelming number of vulnerabilities, many of which may not pose significant risks. Prioritizing actionable items without draining resources or slowing down development was a significant challenge.

“By using Anchore and the Black Pearl platform, applications inherit 80% of the RMF’s security controls. You can avoid all of the boring stuff and just get down to what everyone does well, which is write code.”

Christopher Rennie, Product Lead/Solutions Architect

Solution: Automating Compliance and Security with Anchore

To address these challenges, Black Pearl and Sigma Defense implemented Anchore, which provided:

“Working alongside Anchore, we have customized the compliance artifacts that come from the Anchore API to look exactly how the AOs are expecting them to. This has created a good foundation for us to start building the POA&Ms that they’re expecting.”

Josiah Ritchie, DevSecOps Staff Engineer

  • Managing OSS Risks with Continuous Monitoring: Anchore’s integrated vulnerability scanner, policy enforcer, and reporting system provided continuous monitoring of open-source software components. This proactive approach ensured vulnerabilities were detected and addressed promptly, effectively mitigating security risks.
  • Automated Prioritization of Vulnerabilities: By integrating the Anchore Developer Bundle, Black Pearl enabled automatic prioritization of actionable vulnerabilities. Developers received immediate alerts on critical issues, reducing noise and allowing them to focus on what truly matters.

Results: Accelerated ATO and Enhanced Security

The implementation of Anchore transformed Black Pearl’s compliance process and security posture:

  • Platform ATO in 3-5 days: With Anchore’s integration, Black Pearl users accessed a fully operational DevSecOps platform within days, a significant reduction from the typical six months for DIY builds.

“The DoD has four different layers of authorizing officials in order to achieve ATO. You have to figure out how to make all of them happy. We want to innovate by automating the compliance process. Anchore helps us achieve this, so that we can build a full ATO package in an afternoon rather than taking a month or more.”

Josiah Ritchie, DevSecOps Staff Engineer

  • Significantly reduced time spent on compliance reporting: Anchore automated compliance checks and artifact generation, cutting down hours spent on manual reviews and ensuring consistency in reports submitted to authorizing officials.
  • Proactive OSS risk management: By shifting security and compliance to the left, developers identified and remediated open-source vulnerabilities early in the development lifecycle, mitigating risks and streamlining the compliance process.
  • Reduced vulnerability overload with prioritized vulnerability reporting: Anchore’s prioritization of vulnerabilities prevented developer overwhelm, allowing teams to focus on critical issues without hindering development speed.

Conclusion: Empowering the Warfighter Through Efficient Compliance and Security

Black Pearl and Sigma Defense’s partnership with Anchore demonstrates how automating security and compliance processes leads to significant efficiencies. This empowers Navy divisions to focus on developing software that supports the warfighter. 

Achieving ATO in days rather than months is a game-changer in an environment where every second counts, setting a new standard for efficiency through the combination of Black Pearl’s robust DevSecOps platform and Anchore’s comprehensive security solutions.

If you’re facing similar challenges in securing your software supply chain and accelerating compliance, it’s time to explore how Anchore can help your organization achieve its mission-critical objectives.

Download the full case study below👇

How to build an OSS risk management program

In previous blog posts we have covered the risks of open source software (OSS) and the security best practices to manage that risk. From there we zoomed in on the benefits of tightly coupling two of those best practices (SBOMs and vulnerability scanning).

Now, we’ll dig deeper into the practical considerations of integrating this paired solution into a DevSecOps pipeline. By examining the design and implementation of SBOMs and vulnerability scanning, we’ll illuminate the path to creating a holistic open source software (OSS) risk management program.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

How do I integrate SBOM management and vulnerability scanning into my development process?

Ideally, you want to generate an SBOM at each stage of the software development process (see image below). By generating an SBOM and scanning for vulnerabilities at each stage, you unlock a number of novel use-cases and benefits that we covered previously.

DevSecOps lifecycle diagram with all stages to integrate SBOM generation and vulnerability scanning.

Let’s break down how to integrate SBOM generation and vulnerability scanning into each stage of the development pipeline:

Source (PLAN & CODE)

The easiest way to integrate SBOM generation and vulnerability scanning into the design and coding phases is to provide CLI (command-line interface) tools to your developers. Engineers are already used to these tools—and have a preference for them!

If you’re going the open source route, we recommend both Syft (SBOM generation) and Grype (vulnerability scanner) as easy options to get started. If you’re interested in an integrated enterprise tool then you’ll want to look at AnchoreCTL.

Developers can generate SBOMs and run vulnerability scans right from their workstations. By doing this at design or commit time, developers can shift security left and know immediately about the security implications of their design decisions.
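As a minimal sketch, the workstation workflow with the open-source tools might look like this (the directory path and output filename are illustrative):

$ syft dir:. -o cyclonedx-json > sbom.cdx.json   # generate an SBOM for the current project
$ grype sbom:sbom.cdx.json                       # scan that SBOM for known vulnerabilities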

If existing vulnerabilities are found, developers can immediately pivot to OSS dependencies that are clean or start a conversation with their security team to understand if their preferred framework will be a deal breaker. Either way, security risk is addressed early before any design decisions are made that will be difficult to roll back.

Build (BUILD + TEST)

The ideal location to integrate SBOM generation and vulnerability scanning during the build and test phases is directly in the organization’s continuous integration (CI) pipeline.

The same self-contained CLI tools used during the source stage are integrated as additional steps into CI scripts/runbooks. When a developer pushes a commit that triggers the build process, the new steps are executed and both an SBOM and vulnerability scan are created as outputs. 
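As a hedged sketch, the added CI steps might look like this (the image name, GIT_SHA variable, and severity threshold are all illustrative):

$ syft myapp:${GIT_SHA} -o spdx-json > app-sbom.spdx.json   # SBOM for the image the pipeline just built
$ grype sbom:app-sbom.spdx.json --fail-on high              # fail the build on high-severity findings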

Check out our docs site to see how AnchoreCTL (running in distributed mode) makes this integration a breeze.

If you’re having trouble convincing your developers to jump on the SBOM train, we recommend that developers think about all security scans as just another unit test that is part of their testing suite.

Running these steps in the CI pipeline delays feedback slightly compared to checking each incremental commit as an application is being written, but it is still light years better than waiting until a release is code complete.

If you are unable to enforce vulnerability scanning of OSS dependencies by your engineering team, a CI-based strategy can be a happy medium. It is much easier to ensure every build runs exactly the same way each time than it is to standardize the workflow of every developer.

Release (aka Registry)

Another integration option is the container registry. This option will require you to either roll your own service that will regularly call the registry and scan new containers or use a service that does this for you.

See how Anchore Enterprise can automate this entire process by reviewing our integration docs.

Regardless of the path you choose, you will end up creating an IAM service account within your CI application that gives your SBOM and vulnerability scanning solution access to your registries.

The release stage tends to be fairly far along in the development process and is not an ideal location for these functions to run. Most of the benefits of a shift-left security posture won’t be available anymore.

If this is an additional vulnerability scanning stage—rather than the sole stage—then this is a fantastic environment to integrate into. Software supply chain attacks that target registries are popular and can be prevented with a continuous scanning strategy.
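If you do roll your own, a minimal sketch of a scheduled re-scan over a list of registry images might look like this (images.txt and the sboms/ directory are illustrative):

# Re-scan every image in the registry list on a schedule (e.g., via cron)
while read -r image; do
  name=$(echo "$image" | tr '/:' '__')
  syft "registry:${image}" -o cyclonedx-json > "sboms/${name}.json"
  grype "sbom:sboms/${name}.json"
done < images.txt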

Deploy

This is the traditional stage of the SDLC (software development lifecycle) to run vulnerability scans. SBOM generation can be added on as another step in an organization’s continuous deployment (CD) runbook.

Similar to the build stage, the best integration method is by calling CLI tools directly in the deploy script to generate the SBOM and then scan it for vulnerabilities.
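As an illustrative sketch, a deploy script might scan the exact image reference being shipped and gate on critical findings (the image reference is hypothetical):

IMAGE="registry.example.com/app:release-1.2.3"                 # hypothetical release image
syft "registry:${IMAGE}" -o cyclonedx-json > deploy-sbom.json  # SBOM for the image being deployed
grype sbom:deploy-sbom.json --fail-on critical || exit 1       # block the deploy on critical findings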

Alternatively, if you utilize a container orchestrator like Kubernetes, you can also configure an admission controller to act as a deployment gate. The admission controller should be configured to make a call out to a standalone SBOM generator and vulnerability scanner.

If you’d like to understand how this is implemented with Anchore Enterprise, see our docs.

While this is the traditional location for running vulnerability scans, it should not be the only stage that scans for vulnerabilities. Feedback about security issues would arrive very late in the development process, and prior design decisions may prevent vulnerabilities from being easily remediated. Don’t do this unless you have no other option.

Production (OPERATE + MONITOR)

This is not a traditional stage to run vulnerability scans since the goal is to prevent vulnerabilities from getting to production. Regardless, this is still an important environment to scan. Production containers have a tendency to drift from their pristine build states (DevSecOps pipelines are leaky!).

Also, new vulnerabilities are discovered all of the time and being able to prioritize remediation efforts to the most vulnerable applications (i.e., runtime containers) considerably reduces the risk of exploitation.

The recommended way to run SBOM generation and vulnerability scans in production is to run an independent container with the SBOM generator and vulnerability scanner installed. Most container orchestrators have SDKs that will allow you to integrate an SBOM generator and vulnerability scanner into the preferred administration CLI (e.g., kubectl for k8s clusters).
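As a rough sketch of the DIY approach on Kubernetes, an in-cluster job could enumerate the images currently running and scan each one (assuming the job has read access to the cluster and the registry):

kubectl get pods --all-namespaces \
  -o jsonpath='{range .items[*].spec.containers[*]}{.image}{"\n"}{end}' \
  | sort -u \
  | while read -r image; do
      grype "registry:${image}"   # scan each unique running image
    done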

Read how Anchore Enterprise integrates these components together into a single container for both Kubernetes and Amazon ECS.

How do I manage all of the SBOMs and vulnerability scans?

Tightly coupling SBOM generation and vulnerability scanning creates a number of benefits, but it also creates one problem: a firehose of data. This unintended side effect is named SBOM sprawl, and it inevitably becomes a headache in and of itself.

The concise solution to this problem is to create a centralized SBOM repository. The brevity of this answer downplays the challenges that go along with building and managing a new data pipeline.

We’ll walk you through the high-level steps below but if you’re looking to understand the challenges and solutions of SBOM sprawl in more detail, we have a separate article that covers that.

Integrating SBOMs and vulnerability scanning for better OSS risk management

Assuming you’ve deployed an SBOM generator and vulnerability scanner into at least one of your development stages (as detailed above in “How do I integrate SBOM management and vulnerability scanning into my development process?”) and have an SBOM repository for storing your SBOMs and/or vulnerability scans, we can now walk through how to tie these systems together.

  1. Create a system to pull vulnerability feeds from reputable sources. If you’re looking for a way to get started, read our post on the topic.
  2. Regularly scan your catalog of SBOMs for vulnerabilities, storing the results alongside the SBOMs (see the sketch after this list).
  3. Implement a query system to extract insights from your inventory of SBOMs.
  4. Create a dashboard to visualize your software supply chain’s health.
  5. Build alerting automation to ping your team as newly discovered vulnerabilities are announced.
  6. Maintain all of these DIY security applications and tools. 
  7. Continue to incrementally improve on these tools as new threats emerge, technologies evolve and development processes change.
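As a minimal sketch of step 2, assuming your SBOMs are stored as JSON files in a central directory:

# Re-scan every stored SBOM against the latest vulnerability data
for sbom in sbom-repository/*.json; do
  grype "sbom:${sbom}" -o json > "scan-results/$(basename "$sbom" .json).scan.json"
done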

If this feels like more work than you’re willing to take on, this is why security vendors exist. See the benefits of a managed SBOM-powered SCA below.

Prefer not to DIY? Evaluate Anchore Enterprise

Anchore Enterprise was designed from the ground up to provide a reliable software supply chain security platform that requires the least amount of work to integrate and maintain. Included in the product is:

  • Out-of-the-box integrations for popular CI/CD software (e.g., GitHub, Jenkins, GitLab, etc.)
  • End-to-end SBOM management
  • Enterprise-grade vulnerability scanning with a best-in-class false-positive rate
  • Built-in SBOM drift detection
  • Remediation recommendations
  • Continuous visibility and monitoring of software supply chain health

Enterprises like NVIDIA, Cisco, and Infoblox have chosen Anchore Enterprise as their “easy button” to achieve open source software security with the least amount of lift.

If you’re interested to learn more about how to roll out a complete OSS security solution without the blood, sweat and tears that come with the DIY route—reach out to our team to get a demo or try Anchore Enterprise yourself with a 15-day free trial.

Learn the container security best practices, including open source software (OSS) security, to reduce the risk of software supply chain attacks.

SBOMs and Vulnerability Management: OSS Security in the DevSecOps Era

The rise of open-source software (OSS) development and DevOps practices has unleashed a paradigm shift in OSS security. As traditional approaches to OSS security have proven inadequate in the face of rapid development cycles, the Software Bill of Materials (SBOM) has re-made OSS vulnerability management in the era of DevSecOps.

This blog post zooms in on two best practices from our introductory article on OSS security and the software supply chain:

  1. Maintain a Software Dependency Inventory
  2. Implement Vulnerability Scanning

These two best practices are set apart from the rest because they are a natural pair. We’ll cover how this novel approach:

  • Scales OSS vulnerability management under the pressure of rapid software delivery
  • Is set apart from legacy SCAs
  • Unlocks new use-cases in software supply chain security, OSS risk management, etc.
  • Benefits software engineering orgs
  • Benefits an organization’s overall security posture
  • Has measurably impacted modern enterprises such as NVIDIA and Infoblox

Whether you’re a seasoned DevSecOps professional or just beginning to tackle the challenges of securing your software supply chain, this blog post offers insights into how SBOMs and vulnerability management can transform your approach to OSS security.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

Why do I need SBOMs for OSS vulnerability management?

The TL;DR is that SBOMs enable DevSecOps teams to scale OSS vulnerability management programs in a modern, cloud native environment. Legacy security tools (i.e., SCA platforms) weren’t built to handle the pace of software delivery after a DevOps face lift.

Answering this question in full requires some historical context. Below is a speed-run of how we got to a place where SBOMs became the clear solution for vulnerability management after the rise of DevOps and OSS; the original longform is found on our blog.

If you’re not interested in a history lesson, skip to the next section, “What new use-cases are unlocked with an SBOM inventory?”, to get straight to the impact of this evolution on software supply chain security (SSCS).

A short history on software composition analysis (SCA)

  • SCAs were originally designed to solve the problem of OSS licensing risk
  • Remember that Microsoft made a big fuss about the dangers of OSS at the turn of the millennium
  • Vulnerability scanning and management was tacked-on later
  • These legacy SCAs worked well enough until DevOps and OSS popularity hit critical mass

How the rise of OSS and DevOps principles broke legacy SCAs

  • DevOps and OSS movements hit traction in the 2010s
  • Software development and delivery transitioned from major updates with long development times to incremental updates with frequent releases
  • Modern engineering organizations are measured and optimized for delivery speed
  • Legacy SCAs were designed to scan a golden image once and take as much time as needed to do it; upwards of weeks in some cases
  • This wasn’t compatible with the DevOps promise and created friction between engineering and security
  • This meant not all software could be scanned, and much was scanned after release, increasing the risk of a security breach

SBOMs as the solution

  • SBOMs were introduced as a standardized data structure that comprised a complete list of all software dependencies (OSS or otherwise)
  • These lightweight files created a reliable way to scan software for vulnerabilities without the slow performance of scanning the entire application—soup to nuts
  • Modern SCAs utilize SBOMs as the foundational layer to power vulnerability scanning in DevSecOps pipelines
  • SBOMs + SCAs deliver on the performance of DevOps without compromising security

What is the difference between SBOMs and legacy SCA scanning?

SBOMs offer two functional innovations over the legacy model:

  1. Deeper visibility into an organization’s application inventory, and
  2. A record of changes to applications over time.

The deeper visibility comes from the fact that modern SCA scanners identify software dependencies recursively and build a complete software dependency tree (both direct and transitive). The record of changes comes from the fact that the OSS ecosystem has begun to standardize the contents of SBOMs to allow interoperability between OSS consumers and producers.

Legacy SCAs typically only scan for direct software dependencies and don’t recursively scan for dependencies of dependencies. Also, legacy SCAs don’t generate standardized scans that can then be used to track changes over time.

What new use-cases are unlocked with an SBOM inventory?

The innovations brought by SBOMs (see above) have unlocked new use-cases that benefit both the software supply chain security niche and the greater DevSecOps world. See the list below:

OSS Dependency Drift Detection

Ideally, software dependencies are only injected into source code, but the reality is that CI/CD pipelines are leaky and both automated and one-off modifications are made at all stages of development. Plugging 100% of the leaks is a strategy with diminishing returns. Application drift detection is a scalable solution to this challenge.

SBOMs unlock drift detection by creating a point-in-time record of the composition of an application at each stage of the development process. This creates an auditable record of when software builds are modified, how they were changed, and who changed them.

Software Supply Chain Attack Detection

Not all dependency injections are performed by benevolent 1st-party developers. Malicious threat actors who gain access to your organization’s DevSecOps pipeline or the pipeline of one of your OSS suppliers can inject malicious code into your applications.

An SBOM inventory creates the historical record that can identify anomalous behavior and catch these security breaches before organizational damage is done. This is a particularly important strategy for dealing with advanced persistent threats (APTs) that are experts at infiltration and stealth. For a real-world example, see our blog on the recent XZ supply chain attack.

OSS Licensing Risk Management

OSS licenses are at the beginning of a new transformation. The highly permissive licenses that came into fashion over the last 20 years are proving to be unsustainable. As prominent open source startups amend their licenses (e.g., HashiCorp, Elastic, Redis, etc.), organizations need to evaluate these changes and how they impact their OSS supply chain strategy.

Similar to the benefits during a security incident, an SBOM inventory acts as the source of truth for OSS licensing risk. As licenses are amended, an organization can quickly evaluate their risk by querying their inventory and identifying who their “critical” OSS suppliers are. 

Domain Expertise Risk Management

Another emerging use-case of software dependency inventories is the management of domain expertise of developers in your organization. A comprehensive inventory of software dependencies allows organizations to map critical software to individual employees’ domain knowledge. This creates a measurement of how well resourced your engineering organization is and who owns the knowledge that could impact business operations.

While losing an employee with a particular set of skills might not have the same urgency as a security incident, over time this gap can create instability. An SBOM inventory allows organizations to maintain a list of critical OSS suppliers and get ahead of any structural risks in their organization.

What are the benefits of a software dependency inventory?

SBOM inventories create a number of benefits for tangential domains, such as software supply chain security and risk management, but there is one big benefit for the core practice of software development.

Reduced engineering and QA time for debugging

A software dependency inventory stores metadata about applications and their OSS dependencies over time in a centralized repository. This datastore is a simple and efficient way to search and answer critical questions about the state of an organization’s software development pipeline.

Previously, engineering and QA teams had to manually search codebases and commits in order to determine the source of a rogue dependency being added to an application. A software dependency inventory combines a centralized repository of SBOMs with an intuitive search interface. Now, these time-consuming investigations can be accomplished in minutes rather than hours.

What are the benefits of scanning SBOMs for vulnerabilities?

There are a number of security benefits that can be achieved by integrating SBOMs and vulnerability scanning. We’ve highlighted the most important below:

Reduce risk by scaling vulnerability scanning for complete coverage

One of the side effects of transitioning to DevOps practices was that legacy SCAs couldn’t keep up with the software output of modern engineering orgs. This meant that not all applications were scanned before being deployed to production—a risky security practice!

Modern SCAs solved this problem by scanning SBOMs rather than applications or codebases. These lightweight SBOM scans are so efficient that they can keep up with the pace of DevOps output. Scanning 100% of applications reduces risk by preventing unscanned software from being deployed into vulnerable environments.

Prevent delays in software delivery

Overall organizational productivity can be increased by adopting modern, SBOM-powered SCAs that allow organizations to shift security left. When vulnerabilities are uncovered during application design, developers can make informed decisions about the OSS dependencies that they choose. 

This prevents the situation where engineering creates a new application or feature, but right before it is deployed into production, the security team scans the dependencies and finds a critical vulnerability. These last-minute security scans can delay a release and create frustration across the organization. Scanning early and often prevents this productivity drain from occurring at the worst possible time.

Reduced financial risk during a security incident

The faster a security incident is resolved, the less risk an organization is exposed to. The primary metric that organizations track is mean-time-to-recovery (MTTR). SBOM inventories are utilized to significantly reduce this metric and improve incident outcomes.

An application inventory with full details on the software dependencies is a prerequisite for rapid security response in the event of an incident. A single SQL query to an SBOM inventory will return a list of all applications that have exploitable dependencies installed. Recent examples include Log4j and XZ. This prevents the need for manual scanning of codebases or production containers. This is the difference between a zero-day incident lasting a few hours versus weeks.
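As a minimal sketch, assuming a hypothetical inventory schema with applications and components tables, the query might look like this:

$ sqlite3 sbom-inventory.db \
    "SELECT DISTINCT a.name, a.version
     FROM applications a
     JOIN components c ON c.application_id = a.id
     WHERE c.package_name = 'log4j-core';"   # every application that ships log4j-core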

Reduce hours spent on compliance with automation

Compliance certifications are powerful growth levers for organizations; they open up new market opportunities. The downside is that they create a lot of work for organizations. Manually confirming that each compliance control is met and providing evidence for the compliance officer to review discourages organizations from pursuing these certifications.

Providing automated vulnerability scans from DevSecOps pipelines that integrate SBOM inventories and vulnerability scanners significantly reduces the hours needed to generate and collect evidence for compliance audits.

How impactful are these benefits?

Many modern enterprises are adopting SBOM-powered SCAs and reaping the benefits outlined above. The quantifiable benefits are unique to each enterprise, but anecdotal evidence is still helpful when weighing how to prioritize a software supply chain security initiative, like the adoption of an SBOM-powered SCA, against other organizational priorities.

As a leading SBOM-powered SCA, Anchore has helped numerous organizations achieve the benefits of this evolution in the software industry. To get an estimate of what your organization can expect, see the case studies below:

NVIDIA

  • Reduced time to production by scanning SBOMs instead of full applications
  • Scaled vulnerability scanning and management program to 100% coverage across 1000s of containerized applications and 100,000s of containers

Read the full NVIDIA case study here >>

Infoblox

  • 75% reduction in engineering hours spent performing manual vulnerability detection
  • 55% reduction in hours allocated to retroactive remediation of vulnerabilities
  • 60% reduction in hours spent on manual compliance discovery and documentation

Read the full Infoblox case study here >>

DreamFactory

  • 75% reduction in engineering hours spent on vulnerability management and compliance
  • 70% faster production deployments with automated vulnerability scanning and management

Read the full DreamFactory case study here >>

Next Steps

Hopefully you now have a better understanding of the power of integrating an SBOM inventory into OSS vulnerability management. This “one-two” combo has unlocked novel use-cases, numerous benefits and measurable results for modern enterprises.

If you’re interested in learning more about how Anchore can help your organization achieve similar results, reach out to our team.

Learn the container security best practices, including open source software (OSS) security, to reduce the risk of software supply chain attacks.

DreamFactory Achieves 75% Time Savings with Anchore: A Case Study in Secure API Generation

As the popularity of APIs has swept the software industry, API security has become paramount, especially for organizations in highly regulated industries. DreamFactory, an API generation platform serving the defense industry and critical national infrastructure, required an air-gapped vulnerability scanning and management solution that didn’t slow down their productivity. Avoiding security breaches and compliance failures are non-negotiables for the team to maintain customer trust.

Challenge: Security Across the Gap

DreamFactory encountered several critical hurdles in meeting the needs of its high-profile clients, particularly those in the defense community and other highly regulated sectors:

  1. Secure deployments without cloud connectivity: Many clients, including the Department of Defense (DoD), required on-premises deployments with air-gapping, breaking the assumptions of modern cloud-based security strategies.
  2. Air-gapped vulnerability scans: Despite air-gapping, these organizations still demanded comprehensive vulnerability reporting to protect their sensitive data.
  3. Building high-trust partnerships: In industries where security breaches could have catastrophic consequences, establishing trust rapidly was crucial.

As Terence Bennett, CEO of DreamFactory, explains, “The data processed by these organizations have the highest national security implications. We needed a solution that could deliver bulletproof security without cloud connectivity.”

Solution: Anchore Enterprise On-Prem and Air-Gapped 

To address these challenges, DreamFactory implemented Anchore Enterprise, which provided:

  1. Support for on-prem and air-gapped deployments: Anchore Enterprise was designed to operate in air-gapped environments, aligning perfectly with DreamFactory’s needs.
  2. Comprehensive vulnerability scanning: DreamFactory integrated Anchore Enterprise into its build pipeline, running daily vulnerability scans on all deployment versions.
  3. Automated SBOM generation and management: Every build is now cataloged and stored (as an SBOM), providing immediate transparency into the software’s components.

“By catching vulnerabilities in our build pipeline, we can inform our customers and prevent any of the APIs created by a DreamFactory install from being leveraged to exploit our customer’s network,” Bennett notes. “Anchore has helped us achieve this massive value-add for our customers.”

Results: Developer Time Savings and Enhanced Trust

The implementation of Anchore Enterprise transformed DreamFactory’s security posture and business operations:

  • 75% reduction in time spent on vulnerability management and compliance requirements
  • 70% faster production deployments with integrated security checks
  • Rapid trust development through transparency

“We’re seeing a lot of traction with data warehousing use-cases,” says Bennett. “Being able to bring an SBOM to the conversation at the very beginning completely changes the conversation and allows CISOs to say, ‘let’s give this a go’.”

Conclusion: A Competitive Edge in High-Stakes Environments

By leveraging Anchore Enterprise, DreamFactory has positioned itself as a trusted partner for organizations requiring the highest levels of security and compliance in their API generation solutions. In an era where API security is more critical than ever, DreamFactory’s success story demonstrates that with the right tools and approach, it’s possible to achieve both ironclad security and operational efficiency.


Are you facing similar challenges hardening your software supply chain in order to meet the requirements of the DoD? By designing your DevSecOps pipeline to the DoD software factory standard, your organization can guarantee to meet these sky-high security and compliance requirements. Learn more about the DoD software factory standard by downloading our white paper below.

How is Open Source Software Security Managed in the Software Supply Chain?

Open source software has revolutionized the way developers build applications, offering a treasure trove of pre-built software “legos” that dramatically boost productivity and accelerate innovation. By leveraging the collective expertise of a global community, developers can create complex, feature-rich applications in a fraction of the time it would take to build everything from scratch. However, this incredible power comes with a significant caveat: the open source model introduces risk.

Organizations inherit both the good and bad parts of the OSS source code they don’t own. This double-edged sword of open source software necessitates a careful balance between harnessing its productivity benefits and managing the risks. A comprehensive OSS security program is the industry standard best practice for managing the risk of open source software within an organization’s software supply chain.

Learn the container security best practices, including open source software security, to reduce the risk of software supply chain attacks.

What is open source software security?

Open source software security is the ecosystem of security tools (some of it being OSS!) that has developed to compensate for the inherent risk of OSS development. The security of the OSS environment was founded on the idea that “given enough eyeballs, all bugs are shallow”. The reality of OSS is that the majority of it is written and maintained by single contributors. The percentage of open source software that passes the qualifier of “enough eyeballs” is minuscule.

Does that mean open source software isn’t secure? Fortunately, no. The OSS community still produces secure software, but an entire ecosystem of tools ensures that this is verified—not only trusted implicitly.

What is the difference between closed source and open source software security?

The primary difference between open source software security and closed source software security is how much control you have over the source code. Open source code is public and can have many contributors that are not employees of your organization while proprietary source code is written exclusively by employees of your organization. The threat models required to manage risk for each of these software development methods are informed by these differences.

Due to the fact that open source software is publicly accessible and can be contributed to by a diverse, often anonymous community, its threat model must account for the possibility of malicious code contributions, unintentional vulnerabilities introduced by inexperienced developers, and potential exploitation of disclosed vulnerabilities before patches are applied. This model emphasizes continuous monitoring, rigorous code review processes, and active community engagement to mitigate risks. 

In contrast, proprietary software’s threat model centers around insider threats, such as disgruntled employees or lapses in secure coding practices, and focuses heavily on internal access controls, security audits, and maintaining strict development protocols. 

The need for external threat intelligence is also greater in OSS, as the public nature of the code makes it a target for attackers seeking to exploit weaknesses, while proprietary software relies on obscurity and controlled access as a first line of defense against potential breaches.

What are the risks of using open source software?

  1. Vulnerability exploitation of your application
    • The bargain that is struck when utilizing OSS is that your organization gives up significant control over the quality of the software. When you use OSS, you inherit both good AND bad (read: insecure) code. Any known or latent vulnerabilities in the software become your problem.
  2. Access to source code increases the risk of vulnerabilities being discovered by threat actors
    • OSS development is unique in that both the defenders and the attackers have direct access to the source code. This gives the threat actors a leg up. They don’t have to break through perimeter defenses before they get access to source code that they can then analyze for vulnerabilities.
  3. Increased maintenance costs for DevSecOps function
    • Adopting OSS into an engineering organization is another function that requires management. Data has to be collected about the OSS that is embedded in your applications. That data has to be stored and made available in the event of a security incident. These maintenance costs are typically incurred by the DevOps and security teams.
  4. OSS license legal exposure
    • OSS licenses are mostly permissive for use within commercial applications, but a non-trivial subset are not, or, worse, are highly adversarial when used by a commercial enterprise. Organizations that don’t manage this risk increase the potential for legal action to be taken against them.

How serious are the risks associated with the use of open source software?

Current estimates are that 70-90% of modern applications are composed of open source software. This means that only 10-30% of the code in applications developed by organizations is written by developers employed by the organization. Without significant visibility into the security of OSS, organizations are handing over the keys to the castle to the community and hoping for the best.

Not only is OSS a significant footprint in modern application composition but its growth is accelerating. This means the associated risks are growing just as fast. This is part of the reason we see an acceleration in the frequency of software supply chain attacks. Organizations that aren’t addressing these realities are getting caught on their back foot when zero-days are announced like the recent XZ utils backdoor.

Why are SBOMs important to open source software security?

Software Bills of Materials (SBOMs) serve as the foundation of software supply chain security by providing a comprehensive “ingredient list” of all components within an application. This transparency is crucial in today’s software landscape, where modern applications are a complex web of mostly open source software dependencies that can harbor hidden vulnerabilities. 

SBOMs enable organizations to quickly identify and respond to security threats, as demonstrated during incidents like Log4Shell, where companies with centralized SBOM repositories were able to locate vulnerable components in hours rather than days. By offering a clear view of an application’s composition, SBOMs form the bedrock upon which other software supply chain security measures can be effectively built and validated.

The importance of SBOMs in open source software security cannot be overstated. Open source projects often involve numerous contributors and dependencies, making it challenging to maintain a clear picture of all components and their potential vulnerabilities. By implementing SBOMs, organizations can proactively manage risks associated with open source software, ensure regulatory compliance, and build trust with customers and partners. 

SBOMs enable quick responses to newly discovered vulnerabilities, facilitate automated vulnerability management, and support higher-level security abstractions like cryptographically signed images or source code. In essence, SBOMs provide the critical knowledge needed to navigate the complex world of open source dependencies by enabling us to channel our inner GI Joe—”knowing is half the battle” in software supply chain security.

What are the best practices for securing open source software?

Open source software has become an integral part of modern development practices, offering numerous benefits such as cost-effectiveness, flexibility, and community-driven innovation. However, with these advantages come unique security challenges. To mitigate risks and ensure the safety of your open source components, consider implementing the following best practices:

1. Model Security Scans as Unit Tests

Re-branding security checks as another type of unit test helps developers orient to DevSecOps principles. This approach helps developers re-imagine security as an integral part of their workflow rather than a separate, post-development concern. By modeling security checks as unit tests, you can:

  • Catch vulnerabilities earlier in the development process
  • Reduce the time between vulnerability detection and remediation
  • Empower developers to take ownership of security issues
  • Create a more seamless integration between development and security teams

Remember, the goal is to make security an integral part of the development process, not a bottleneck. By treating security checks as unit tests, you can achieve a balance between rapid development and robust security practices.

2. Review Code Quality

Assessing the quality of open source code is crucial for identifying potential vulnerabilities and ensuring overall software reliability. Consider the following steps:

  • Conduct thorough code reviews, either manually or using automated tools
  • Look for adherence to coding standards and best practices
  • Look for projects developed with secure-by-default principles
  • Evaluate the overall architecture and design patterns used

Remember, high-quality code is generally more secure and easier to maintain.

3. Assess Overall Project Health

A vibrant, active community and committed maintainers are crucial indicators of a well-maintained open source project. When evaluating a project’s health and security:

  • Examine community involvement:
    • Check the number of contributors and frequency of contributions
    • Review the project’s popularity metrics (e.g., GitHub stars, forks, watchers)
    • Assess the quality and frequency of discussions in forums or mailing lists
  • Evaluate maintainer(s) commitment:
    • Check the frequency of commits, releases, and security updates
    • Check for active engagement between maintainers and contributors
    • Review the time taken to address reported bugs and vulnerabilities
    • Look for a clear roadmap or future development plans

4. Maintain a Software Dependency Inventory

Keeping track of your open source dependencies is crucial for managing security risks. To create and maintain an effective inventory:

  • Use tools like Syft or Anchore SBOM to automatically scan your application source code for OSS dependencies
    • Include both direct and transitive dependencies in your scans
  • Generate a Software Bill of Materials (SBOM) from the dependency scan
    • Your dependency scanner should also do this for you
  • Store your SBOMs in a central location that can be searched and analyzed
  • Scan your entire DevSecOps pipeline regularly (ideally every build and deploy)

An up-to-date inventory allows for quicker responses to newly discovered vulnerabilities.
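A minimal sketch of this workflow with Syft (the upload endpoint is hypothetical; substitute whatever interface your central SBOM store exposes):

$ syft dir:./my-app -o spdx-json > my-app-sbom.spdx.json   # SBOM covering direct and transitive dependencies
$ curl -X POST -F "sbom=@my-app-sbom.spdx.json" \
    https://sbom-store.internal.example/api/v1/sboms       # hypothetical central SBOM store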

5. Implement Vulnerability Scanning

Regular vulnerability scanning helps identify known security issues in your open source components. To effectively scan for vulnerabilities:

  • Use tools like Grype or Anchore Secure to automatically scan your SBOMs for vulnerabilities
  • Automate vulnerability scanning directly in your CI/CD pipeline (see the sketch after this list)
    • At minimum implement vulnerability scanning as containers are built
    • Ideally scan container registries, container orchestrators and even each time a new dependency is added during design
  • Set up alerts for newly discovered vulnerabilities in your dependencies
  • Establish a process for addressing identified vulnerabilities promptly
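As a hedged sketch of the alerting step, a scheduled job could re-scan a stored SBOM and notify the team when critical findings appear (the notification script is hypothetical):

$ grype sbom:my-app-sbom.spdx.json -o json \
    | jq -e '[.matches[] | select(.vulnerability.severity == "Critical")] | length == 0' \
    || ./notify-security-team.sh   # hypothetical alert hook, runs only when criticals are found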

6. Implement Version Control Best Practices

Version control practices are crucial for securing all DevSecOps pipelines that utilize open source software:

  • Implement branch protection rules to prevent unauthorized changes
  • Require code reviews and approvals before merging changes
  • Use signed commits to verify the authenticity of contributions (see the sketch below)
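For the last point, commit signing takes only a couple of git settings (assuming a signing key is already configured):

$ git config commit.gpgsign true   # sign every commit in this repository
$ git commit -S -m "Pin OSS dependency versions"
$ git log --show-signature -1      # verify the signature on the latest commit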

By implementing these best practices, you can significantly enhance the security of your software development pipeline and reduce the risk intrinsic to open source software. In other words, you can have your cake (the productivity boost of OSS) and eat it too (without the inherent risk).

How do I integrate open source software security into my development process?

DIY a comprehensive OSS security system

We’ve written about the steps to build an OSS security system from scratch in a previous blog post—below is the TL;DR:

  • Integrate dependency scanning, SBOM generation and vulnerability scanning into your DevSecOps pipeline
  • Implement a data pipeline to manage the influx of security metadata
  • Use automated policy-as-code “security tests” to provide rapid feedback to developers
  • Automate remediation recommendations to reduce cognitive load on developers

Outsource OSS security to a turnkey vendor

Modern software composition analysis (SCA) tools, like Anchore Enterprise, are purpose-built to provide you with a comprehensive OSS security system out-of-the-box: all of the same features as the DIY approach, without the hassle of building and maintaining it yourself.

  • Anchore SBOM: comprehensive dependency scanning, SBOM generation and management
  • Anchore Secure: vulnerability scanning and management
  • Anchore Enforce: automated security enforcement and compliance

Whether you want to scale an understaffed security team to increase its reach across your organization or free your team up to focus on different priorities, the buy-versus-build opportunity cost is a straightforward decision.

Next Steps

Hopefully, you now have a strong understanding of the risks associated with adopting open source software. If you’re looking to continue your exploration into the intricacies of software supply chain security, Anchore has a catalog of deep dive content on our website. If you’d prefer to get your hands dirty, we also offer a 15-day free trial of Anchore Enterprise.

Learn about the role that SBOMs play in the security of your organization, including open source software security, in this white paper.

Anchore Enterprise 5.8 Adds KEV Enrichment Feed

Today we have released Anchore Enterprise 5.8, featuring the integration of the U.S. Cybersecurity and Infrastructure Security Agency’s (CISA) Known Exploited Vulnerabilities (KEV) catalog as a new vulnerability feed.

Previously, Anchore Enterprise matched software libraries and frameworks inside applications against vulnerability databases such as the National Vulnerability Database (NVD), the GitHub Advisory Database, or individual vendor feeds. With Anchore Enterprise 5.8, customers can augment their vulnerability feeds with the KEV catalog without having to leave the dashboard. In addition, teams can automatically flag exploitable vulnerabilities as software is being developed or gate build artifacts from being released into production.

Before we jump into what all of this means, let’s take a step back and get some context to KEV and its impact on DevSecOps pipelines.

What is CISA KEV?

The KEV (Known Exploited Vulnerabilities) catalog is a critical cybersecurity resource maintained by the U.S. Cybersecurity and Infrastructure Security Agency (CISA). It is a database of vulnerabilities that are currently being actively exploited in the wild. While addressing these vulnerabilities is mandatory for U.S. federal agencies under Binding Operational Directive 22-01, the KEV catalog serves as an essential public resource for improving cybersecurity for any organization.

The primary difference between CISA KEV and a standard vulnerability feed (e.g., the CVE program) is the qualifier “actively exploited”. Actively exploited vulnerabilities are being used by attackers to compromise systems in real time, meaning now. They are real, and your organization may be standing in the line of fire, whereas the CVE program lists vulnerabilities that may or may not have any available exploits currently. Due to the imminent threat of actively exploited vulnerabilities, they are considered the highest risk outside of an active security incident.

The benefits of KEV enrichment

The KEV catalog offers significant benefits to organizations striving to improve their cybersecurity posture. One of its primary advantages is its high signal-to-noise ratio. By focusing exclusively on vulnerabilities that are actively being exploited in the wild, the KEV cuts through the noise of countless potential vulnerabilities, allowing developers and security teams to prioritize their efforts on the most critical and immediate threats. This focused approach ensures that limited resources are allocated to addressing the vulnerabilities that pose the greatest risk, significantly enhancing an organization’s security efficiency.

Moreover, the KEV can be leveraged as a powerful tool in an organization’s development and deployment processes. By using the KEV as a trigger for build pipeline gates, companies can prevent exploitable vulnerabilities from being promoted to production environments. This proactive measure adds an extra layer of security to the software development lifecycle, reducing the risk of deploying vulnerable code. 

Additionally, while adherence to the KEV is not yet a universal compliance requirement, it represents a security best practice that forward-thinking organizations are adopting. Given the trend of such practices evolving into compliance mandates, integrating the KEV into security protocols can be seen as a form of future-proofing, potentially easing the transition if and when such practices inevitably become compliance requirements.

How Anchore Enterprise delivers KEV enrichment

With Anchore Enterprise, CISA KEV is now a vulnerability feed similar to any other data feed that gets imported into the system. Anchore Enterprise can be configured to pull this directly from the source as part of the deployment feed service.

To make use of the new KEV data, we have an additional rule option in the Anchore Policy Engine that allows a STOP or WARN action to be configured when a vulnerability on the KEV list is detected. When any application build, registry store, or runtime deploy occurs, Anchore Enterprise evaluates the artifact’s SBOM against the security policy. If the SBOM has been annotated with a KEV entry, the Anchore policy engine can return a STOP value to inform the build pipeline to fail the step and return the KEV entry as the source of the violation.

To configure the KEV feed as a trigger in the policy engine, first select vulnerabilities as the gate, then kev list as the trigger. Finally, choose an action.
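In a policy bundle, that rule would look roughly like the JSON below. This is a sketch only; the “kev_list” trigger identifier is our assumption based on the UI label, so consult the policy documentation for the canonical name.

{
  "gate": "vulnerabilities",
  "trigger": "kev_list",
  "action": "STOP",
  "params": []
}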

Anchore Enterprise dashboard policy engine rule set configuration showing vulnerabilities as the gate value and the CISA KEV catalog as the trigger value.

After you save the new rule, you will see the kev list rule as part of the entire policy.

Anchore Enterprise 5.8 policy engine dashboard showing all rules for the default policy including the CISA KEV catalog rule at the top (highlighted in the red square).

After scanning a container with the policy that has the kev list rule in it, you can view all dependencies that match the kev list vulnerability feed.

Anchore Enterprise 5.8 vulnerability scan report with policy enrichment and policy actions. All software dependencies are matched against the CISA KEV catalog of known exploitable vulnerabilities and the assigned action is reported in the dashboard.

Next Steps

To stay on top of our releases, sign up for our monthly newsletter or follow our LinkedIn account. If you are already an Anchore customer, please reach out to your account manager to upgrade to 5.8 and gain access to KEV support. We also offer a 15-day free trial to get hands-on with Anchore Enterprise, or you can reach out to us for a guided tour.