Going All In: Anchore at SBOM Plugfest 2024

When we were invited to participate in Carnegie Mellon University’s Software Engineering Institute (SEI) SBOM Harmonization Plugfest 2024, we saw an opportunity to contribute to SBOM generation standardization efforts and thoroughly exercise our open-source SBOM generator, Syft.

While the Plugfest only required two SBOM submissions, we decided to go all in – and learned some valuable lessons along the way.

The Plugfest Challenge

The SBOM Harmonization Plugfest aims to understand why different tools generate different SBOMs for the same software. It’s not a competition but a collaborative study to improve SBOM implementation harmonization. The organizers selected eight diverse software projects, ranging from Node.js applications to C++ libraries, and asked participants to generate SBOMs in standard formats like SPDX and CycloneDX.

Going Beyond the Minimum

Instead of just submitting two SBOMs, we decided to:

  1. Generate SBOMs for all eight target projects
  2. Create both source and binary analysis SBOMs where possible
  3. Output in every format Syft supports
  4. Test both enriched and non-enriched versions
  5. Validate everything thoroughly

This comprehensive approach would give us (and the broader community) much more data to work with.

Automation: The Key to Scale

To handle this expanded scope, we created a suite of scripts to automate the entire process:

  1. Target acquisition
  2. Source SBOM generation
  3. Binary building
  4. Binary SBOM generation
  5. SBOM validation

The entire pipeline runs in about 38 minutes on a well-connected server, generating nearly three hundred SBOMs across different formats and configurations.
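
As a rough sketch, the generation stages of that pipeline boil down to a loop over targets and output formats like the one below. The directory layout, format list, and use of Syft’s --enrich flag here are illustrative; the real scripts live in the plugfest-scripts repository.

# illustrative sketch of the SBOM generation loop – not the actual plugfest script
FORMATS="spdx-json cyclonedx-json syft-json"
for target in targets/*; do
    name=$(basename "$target")
    mkdir -p "sboms/$name"
    for fmt in $FORMATS; do
        # plain and enriched variants for each format
        syft "dir:$target" -o "$fmt" > "sboms/$name/$fmt.json"
        syft "dir:$target" -o "$fmt" --enrich all > "sboms/$name/${fmt}_enriched.json"
    done
done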

The Power of Enrichment

One of Syft’s interesting features is its --enrich option, which can enhance SBOMs with additional metadata from online sources. Here’s a real example showing the difference in a CycloneDX SBOM for Dependency-Track:

$ wc -l dependency-track/cyclonedx-json.json dependency-track/cyclonedx-json_enriched.json
  5494 dependency-track/cyclonedx-json.json
  6117 dependency-track/cyclonedx-json_enriched.json
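
For reference, the two variants compared above can be produced with commands along these lines (the source path is illustrative, and flag behaviour may vary between Syft versions):

$ syft dir:./dependency-track -o cyclonedx-json > dependency-track/cyclonedx-json.json
$ syft dir:./dependency-track -o cyclonedx-json --enrich all > dependency-track/cyclonedx-json_enriched.json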

The enriched version contains additional information like license URLs and CPE identifiers:

{
  "license": {
    "name": "Apache 2",
    "url": "http://www.apache.org/licenses/LICENSE-2.0"
  },
  "cpe": "cpe:2.3:a:org.sonatype.oss:JUnitParams:1.1.1:*:*:*:*:*:*:*"
}

These additional identifiers are crucial for security and compliance teams – license URLs help automate legal compliance checks, while CPE identifiers enable consistent vulnerability matching across security tools.

SBOM Generation of Binaries

While source code analysis is valuable, many Syft users analyze built artifacts and containers. This reflects real-world usage where organizations must understand what’s being deployed, not just what’s in the source code. We built and analyzed binaries for most target projects:

Package          | Build Method | Key Findings
Dependency-Track | Docker       | The container SBOMs included ~1000 more items than source analysis, including base image components like Debian packages
HTTPie           | pip install  | Binary analysis caught runtime Python dependencies not visible in source
jq               | Docker       | Python dependencies contributed significant additional packages
Minecolonies     | Gradle       | Java runtime archives (JARs) appeared in binary analysis but not in source
OpenCV           | CMake        | Binary and source SBOMs were largely the same
hexyl            | Cargo build  | Rust static linking meant minimal difference from source
nodejs-goof      | Docker       | Node.js runtime and base image packages significantly increased the component count

Some projects, like gin-gonic (a library) and PHPMailer, weren’t built as they’re not typically used as standalone binaries.

The differences between source and binary SBOMs were striking. For example, the Dependency-Track container SBOM revealed:

  • Base image operating system packages
  • Runtime dependencies not visible in source analysis
  • Additional layers of dependencies from the build process
  • System libraries and tools included in the container

This perfectly illustrates why both source and binary analysis are important:

  • Source SBOMs show some direct development dependencies
  • Binary/container SBOMs show the complete runtime environment
  • Together, they provide a full picture of the software supply chain

Organizations can leverage these differences in their CI/CD pipelines – using source SBOMs for early development security checks and binary/container SBOMs for final deployment validation and runtime security monitoring.

Unexpected Discovery: SBOM Generation Bug

One of the most valuable outcomes wasn’t planned at all. During our comprehensive testing, we discovered a bug in Syft’s SPDX document generation. The SPDX validators were flagging our documents as invalid due to absolute file paths:

file name must not be an absolute path starting with "/", but is: 
/.github/actions/bootstrap/action.yaml
file name must not be an absolute path starting with "/", but is: 
/.github/workflows/benchmark-testing.yaml
file name must not be an absolute path starting with "/", but is: 
/.github/workflows/dependabot-automation.yaml
file name must not be an absolute path starting with "/", but is: 
/.github/workflows/oss-project-board-add.yaml

The SPDX specification requires relative file paths in the SBOM, but Syft used absolute paths. Our team quickly developed a fix, which involved converting absolute paths to relative ones in the format model logic:

// spdx requires that the file name field is a relative filename
// with the root of the package archive or directory
func convertAbsoluteToRelative(absPath string) (string, error) {
    // Ensure the absolute path is absolute (although it should already be)
    if !path.IsAbs(absPath) {
        // already relative
        log.Debugf("%s is already relative", absPath)
        return absPath, nil
    }
    // we use "/" here given that we're converting absolute paths from root to relative
    relPath, found := strings.CutPrefix(absPath, "/")
    if !found {
        return "", fmt.Errorf("error calculating relative path: %s", absPath)
    }
    return relPath, nil
}

The fix was simple but effective – stripping the leading “/” from absolute paths while maintaining proper error handling and logging. This change was incorporated into Syft v1.18.0, which we used for our final Plugfest submissions.

This discovery highlights the value of comprehensive testing and community engagement. What started as participation in the Plugfest ended up improving Syft for all users, ensuring more standards-compliant SPDX documents. It’s a perfect example of how collaborative efforts like the Plugfest can benefit the entire SBOM ecosystem.

SBOM Validation

We used multiple validation tools to verify our SBOMs, including sbom-utility, pyspdxtools, and the NTIA online validator.
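
As a rough guide, the command-line validators can be invoked along these lines (the file names are illustrative and exact flags may differ between tool versions):

$ sbom-utility validate --input-file dependency-track/cyclonedx-json.json
$ pyspdxtools -i dependency-track/spdx-json.json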

Interestingly, we found some disparities between validators. For example, some enriched SBOMs that passed sbom-utility validation failed with pyspdxtools. Further, the NTIA online validator gave us yet another result in many cases. This highlights the ongoing challenges in SBOM standardization – even the tools that check SBOM validity don’t always agree!

Key Takeaways

  • Automation is crucial: Our scripted approach allowed us to efficiently generate and validate hundreds of SBOMs.
  • Real-world testing matters: Building and analyzing binaries revealed insights (and bugs!) that source-only analysis might have missed.
  • Enrichment adds value: Additional metadata can significantly enhance SBOM utility, though support varies by ecosystem.
  • Validation is complex: Different validators can give different results, showing the need for further standardization.

Looking Forward

The SBOM Harmonization Plugfest results will be analyzed in early 2025, and we’re eager to see how different tools handled the same targets. Our comprehensive submission will help identify areas where SBOM generation can be improved and standardized.

More importantly, this exercise has already improved Syft for our users through the bug fix and given us valuable insights for future development. We’re committed to continuing this thorough testing and community participation to make SBOM generation more reliable and consistent for everyone.

The final SBOMs are published in the plugfest-sboms repo, with the scripts in the plugfest-scripts repository. Consider using Syft for SBOM generation against your code and containers, and let us know how you get on in our community discourse.

ModuleQ reduces vulnerability management time by 80% with Anchore Secure

ModuleQ, an AI-driven enterprise knowledge platform, knows only too well the stakes for a company providing software solutions in the highly regulated financial services sector. In this world where data breaches are cause for termination of a vendor relationship and evolving cyberthreats loom large, proactive vulnerability management is not just a best practice—it’s a necessity. 

ModuleQ required a vulnerability management platform that could automatically identify and remediate vulnerabilities, maintain airtight security, and meet stringent compliance requirements—all without slowing down their development velocity.

Learn the essential container security best practices to reduce the risk of software supply chain attacks in this white paper.

The Challenge: Scaling Security in a High-Stakes Environment

ModuleQ found itself drowning in a flood of newly released vulnerabilities—over 25,000 in 2023 alone. Operating in a heavily regulated industry meant any oversight could have severe repercussions. High-profile incidents like the Log4j exploit underscored the importance of supply chain security, yet the manual, resource-intensive nature of ModuleQ’s vulnerability management process made it hard to keep pace.

The mandate that no critical vulnerabilities reached production was a particularly high bar to meet with the existing manual review process. Each time engineers stepped away from their coding environment to check a separate security dashboard, they lost context, productivity, and confidence. The fear of accidentally letting something slip through the cracks was ever present.

The Solution: Anchore Secure for Automated, Integrated Vulnerability Management

ModuleQ chose Anchore Secure to simplify, automate, and fully integrate vulnerability management into their existing DevSecOps workflows. Instead of relying on manual security reviews, Anchore Secure embedded security checks directly into ModuleQ’s Azure DevOps pipelines and .NET/C# environment. Every software build—staged nightly through a multi-step pipeline—was automatically scanned for vulnerabilities. Any critical issues triggered immediate notifications and halted promotion to production, ensuring that potential risks were addressed before they could ever reach customers.

Equally important, Anchore’s platform was built to operate in on-prem or air-gapped environments. This guaranteed that ModuleQ’s clients could maintain the highest security standards without the need for external connectivity. For an organization whose customers demand this level of diligence, Anchore’s design provided peace of mind and strengthened client relationships.

Results: Faster, More Secure Deployments

By adopting Anchore Secure, ModuleQ dramatically accelerated and enhanced its vulnerability management approach:

  • 80% Reduction in Vulnerability Management Time: Automated scanning, triage, and reporting freed the team from manual checks, letting them focus on building new features rather than chasing down low-priority issues.
  • 50% Less Time on Security Tasks During Deployments: Proactive detection of high-severity vulnerabilities streamlined deployment workflows, enabling ModuleQ to deliver software faster—without compromising security.
  • Unwavering Confidence in Compliance: With every new release automatically vetted for critical vulnerabilities, ModuleQ’s customers in the financial sector gained renewed trust. Anchore’s support for fully on-prem deployments allowed ModuleQ to meet stringent security demands consistently.

Looking Ahead

In an era defined by unrelenting cybersecurity threats, ModuleQ proved that speed and security need not be at odds. Anchore Secure provided a turnkey solution that integrated seamlessly into their workflow, saving time, strengthening compliance, and maintaining the agility to adapt to future challenges. By adopting an automated security backbone, ModuleQ has positioned itself as a trusted and reliable partner in the financial services landscape.

Looking for more details? Read the ModuleQ case study in full. If you’re ready to move forward see all of the features on Anchore Secure’s product page or reach out to our team to schedule a demo.

Enhancing Container Security with NVIDIA’s AI Blueprint and Anchore’s Syft

Container security is critical – one breach can lead to devastating data losses and business disruption. NVIDIA’s new AI Blueprint for Vulnerability Analysis transforms how organizations handle these risks by automating vulnerability detection and analysis. This AI-powered solution is a potential game-changer for container security.

At its core, the Blueprint combines AI-driven scanning with NVIDIA’s Morpheus Cybersecurity SDK to identify vulnerabilities in seconds rather than hours or days. The system works through a straightforward process:

First, it generates a Software Bill of Materials (SBOM) using Syft, Anchore’s open-source tool, which creates a detailed inventory of all software components in a container. This SBOM feeds into an AI pipeline that leverages large language models (LLMs) and retrieval-augmented generation (RAG) to analyze potential vulnerabilities.
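
Generating that input SBOM for a container image is a single Syft invocation, for example (the image name is illustrative):

$ syft nginx:latest -o cyclonedx-json > sbom.cdx.json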

The AI examines multiple data sources – from code repositories to vulnerability databases – and produces a detailed analysis of each potential threat. Most importantly, it distinguishes between genuine security risks and false positives by considering environmental factors and dependency requirements.

The system then provides clear recommendations through a standardized Vulnerability Exploitability eXchange (VEX) status, as illustrated below.

This Blueprint is particularly valuable because it automates traditionally manual security analysis. Security teams can stop spending days investigating potential vulnerabilities and focus on addressing confirmed threats. This efficiency is invaluable for organizations managing container security at scale.

Want to try it yourself? Check out the Blueprint, read more in the NVIDIA blog post, and explore the vulnerability-analysis git repo. Let us know if you’ve tried this out with Syft, over on the Anchore Community Discourse.

Survey Data Shows 200% Increase in Software Supply Chain Focus

Data found in the recent Anchore 2024 Software Supply Chain Security Report shows that there has been a 200% increase in the priority of software supply chain security. As attacks continue to increase, organizations are doubling their focus in this area. There is much for the industry to understand about the nuances and intensity of software supply chain attacks over the past twelve months.

Below we’ve compiled a graphical representation of the insights gathered in the Anchore 2024 Software Supply Chain Security Report, providing a visual overview of the experiences and practices of over 100 organizations that are the targets of software supply chain attacks.

The Anchore 2024 Software Supply Chain Security Report is now available. This report provides a unique set of insights into the experiences and practices of over 100 organizations that are the targets of software supply chain attacks.

The Evolution of SBOMs in the DevSecOps Lifecycle: Part 2

Welcome back to the second installment of our two-part series on “The Evolution of SBOMs in the DevSecOps Lifecycle”. In our first post, we explored how Software Bills of Materials (SBOMs) evolve over the first 4 stages of the DevSecOps pipeline—Plan, Source, Build & Test—and how each type of SBOM serves different purposes. Some of those use-cases include: shift left vulnerability detection, regulatory compliance automation, OSS license risk management and incident root cause analysis.

In this part, we’ll continue our exploration with the final 4 stages of the DevSecOps lifecycle, examining:

  • Analyzed SBOMs at the Release (Registry) stage
  • Deployed SBOMs during the Deployment phase
  • Runtime SBOMs in Production (Operate & Monitor stages)

As applications move down the pipeline, design decisions made at the beginning begin to ossify and become more difficult to change; this shapes the challenges that arise and the role that SBOMs play in overcoming these novel problems. Some of the new challenges that come up include pipeline leaks, vulnerabilities in third-party packages, and runtime injection, all of which introduce significant risk. Understanding how SBOMs evolve across these stages helps organizations mitigate these risks effectively.

Whether you’re aiming to enhance your security posture, streamline compliance reporting, or improve incident response times, this comprehensive guide will equip you with the knowledge to leverage SBOMs effectively from Release to Production. Additionally, we’ll offer pro tips to help you maximize the benefits of SBOMs in your DevSecOps practices.

So, let’s continue our journey through the DevSecOps pipeline and discover how SBOMs can transform the latter stages of your software development lifecycle.

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Release (or Registry) => Analyzed SBOM

After development is completed and the new release of the software is declared a “golden” image, the build system pushes the release artifact to a registry for storage until it is deployed. At this stage, an SBOM that is generated from these container images, binaries, etc. is named an “Analyzed SBOM” by CISA. The name is a little confusing since all SBOMs should be analyzed regardless of the stage at which they are generated. A more appropriate name might be a Release SBOM, but we’ll stick with CISA’s name for now.

At first glance, it would seem that Analyzed SBOMs and the final Build SBOMs should be identical since they describe the same software, but that doesn’t hold up in practice. DevSecOps pipelines aren’t hermetically sealed systems; they can be “leaky”. You might be surprised what finds its way into this storage repository and eventually gets deployed, bypassing your carefully constructed build and test setup.

On top of that, the registry holds more than just first-party applications that are built in-house. It also stores 3rd-party container images like operating systems and any other self-contained applications used by the organization.

The additional metadata that is collected for an Analyzed SBOM includes:

  • Release images that bypass the happy path build and test pipeline
  • 3rd-party container images, binaries and applications

Pros and Cons

Pros:

  • Comprehensive Artifact Inventory: A more holistic view of all software—both 1st- and 3rd-party—that is utilized in production.
  • Enhanced Security and Compliance Posture: Catches vulnerabilities and non-compliant images for all software that will be deployed to production. This reduces the risk of security incidents and compliance violations.
  • Third-Party Supply Chain Risk Management: Provides insights into the vulnerabilities and compliance status of third-party components.
  • Ease of implementation: This stage is typically the lowest lift for implementation given that most SBOM generators can be deployed standalone and pointed at the registry to scan all images.

Cons:

  • High Risk for Release Delays: Scanning images at this stage is akin to traditional waterfall-style development patterns. Most design decisions are baked in and changes typically incur a steep penalty.
  • Difficult to Push Feedback into Existing Workflows: The registry sits outside of typical developer workflows, and creating a feedback loop that seamlessly reports issues without changing the developer’s process is a non-trivial amount of work.
  • Complexity in Management: Managing SBOMs for both internally developed and third-party components adds complexity to the software supply chain.

Use-Cases

  • Software Supply Chain Security: Organizations can detect vulnerabilities in both their internally developed software and external software to prevent supply chain injections from leading to a security incident.
  • Compliance Reporting: Reporting on both 1st- and 3rd-party software is necessary for industries with strict regulatory requirements.
  • Detection of Leaky Pipelines: Identifies release images that have bypassed the standard build and test pipeline, allowing teams to take corrective action.
  • Third-Party Risk Analysis: Assesses the security and compliance of third-party container images, binaries, and applications before they are deployed.

Example: An organization subject to strict compliance standards like FedRAMP or cATO uses Analyzed SBOMs to verify that all artifacts in their registry, including third-party applications, comply with security policies and licensing requirements. This practice not only enhances their security posture but also streamlines the audit process.

Pro Tip

A registry is an easy and non-invasive way to test and evaluate potential SBOM generators. It won’t give you a full picture of what can be found in your DevSecOps pipeline but it will at least give you an initial idea of its efficacy and help you make the decision on whether to go through the effort of integrating it into your build pipeline where it will produce deeper insights.
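
For instance, most generators can be pointed directly at an image already pushed to a registry; with Syft that looks roughly like this (the registry, image, and file names are illustrative):

$ syft registry:registry.example.com/myapp:1.4.2 -o spdx-json > myapp-analyzed.spdx.json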

Deploy => Deployed SBOM

As your container orchestrator deploys an image from your registry into production, it will also orchestrate any production dependencies such as sidecar containers. At this stage, the SBOM that is generated is named a “Deployed SBOM” by CISA.

The ideal scenario is that your operations team is storing all of these images in the same central registry as your engineering team but—as we’ve noted before—reality diverges from the ideal.

The additional metadata that is collected for a Deployed SBOM includes:

  • Any additional sidecar containers or production dependencies that are injected or modified through a release controller.

Pros and Cons

Pros:

  • Enhanced Security Posture: The final gate to prevent vulnerabilities from being deployed into production. This reduces the risk of security incidents and compliance violations.
  • Leaky Pipeline Detection: Another location to increase visibility into the happy path of the DevSecOps pipeline being circumvented.
  • Compliance Enforcement: Some compliance standards require a deployment breaking enforcement gate before any software is deployed to production. A container orchestrator release controller is the ideal location to implement this.

Cons:

Essentially the same issues that come up during the release phase.

  • High Risk for Release Delays: Scanning images at this stage is even later than traditional waterfall-style development patterns and will incur a steep penalty if an issue is uncovered.
  • Difficult to Push Feedback into Existing Workflows: A deployment release controller sits outside of typical developer workflows, and creating a feedback loop that seamlessly reports issues without changing the developer’s process is a non-trivial amount of work.

Use-Cases

  • Strict Software Supply Chain Security: Implementing a pipeline breaking gating mechanism is typically reserved for only the most critical security vulnerabilities (think: an actively exploitable known vulnerability).
  • High-Stakes Compliance Enforcement: Industries like defense, financial services and critical infrastructure will require vendors to implement a deployment gate for specific risk scenarios beyond actively exploitable vulnerabilities.
  • Compliance Audit Automation: Most regulatory compliance frameworks mandate audit artifacts at deploy time, these documents can be automatically generated and stored for future audits.

Example: A Deployed SBOM can be used as the source of truth for generating a report that demonstrates that no HIGH or CRITICAL vulnerabilities were deployed to production during an audit period.

Pro Tip

Combine a Deployed SBOM with a container vulnerability scanner that cross-checks all vulnerabilities against CISA’s Known Exploited Vulnerabilities (KEV) catalog. In the scenario where a matching KEV is found for a software component, you can configure your vulnerability scanner to return a FAIL response to your release controller and abort the deployment.

This strategy strikes a practical balance: it avoids adding delays to software delivery while still blocking the deployments that carry an extremely high probability of causing a security incident.
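
A minimal sketch of such a gate, assuming Grype as the scanner and the public CISA KEV JSON feed (the feed URL, file names, and exact JSON fields should be verified for your environment):

# sketch of a KEV-based deployment gate
curl -s https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json -o kev.json
grype -q sbom:deployed.spdx.json -o json > findings.json
# collect CVE IDs found in the Deployed SBOM and in the KEV catalog
jq -r '.matches[].vulnerability.id' findings.json | sort -u > found.txt
jq -r '.vulnerabilities[].cveID' kev.json | sort -u > kev.txt
# any overlap means an actively exploited vulnerability is about to ship – fail the gate
if comm -12 found.txt kev.txt | grep -q .; then
    echo "KEV match found – aborting deployment"
    exit 1
fi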

Operate & Monitor (or Production) => Runtime SBOM

After your container orchestrator has deployed an application into your production environment, it is live and serving customer traffic. SBOMs generated at this stage don’t have a name specified by CISA; they are sometimes referred to as “Runtime SBOMs”. SBOMs are still a new-ish standard and will continue to evolve.

The additional metadata that is collected for a Runtime SBOM includes:

  • Modifications (i.e., intentional hotfixes or malicious code injection) made to running applications in your production environment.

Pros and Cons

Pros:

  • Continuous Security Monitoring: Identifies new vulnerabilities that emerge after deployment.
  • Active Runtime Inventory: Provides a canonical view into an organization’s active software landscape.
  • Low Lift Implementation: Deploying SBOM generation into a production environment typically only requires deploying the scanner as another container and giving it permission to access the rest of the production environment.

Cons:

  • No Shift-Left Security: By definition, runtime scanning is excluded from a shift-left security posture.
  • Potential for Release Rollbacks: Scanning images at this stage is the worst possible place for proactive remediation. Discovering a vulnerability could potentially cause a security incident and force a release rollback.

Use-Cases

  • Rapid Incident Management: When new critical vulnerabilities are discovered and announced by the community the first priority for an organization is to determine exposure. An accurate production inventory, down to the component-level, is needed to answer this critical question.
  • Threat Detection: Continuously monitoring for anomalous activity linked to specific components. Sealing your system off completely from advanced persistent threats (APTs) is an unfeasible goal. Instead, quick detection and rapid intervention is the scalable solution to limit the impact of these adversaries.
  • Patch Management: As new releases of 3rd-party components and applications are released an inventory of impacted production assets provides helpful insights that can direct the prioritization of engineering efforts.

Example: When the XZ Utils vulnerability was announced in the spring of 2024, organizations that already automatically generated a Runtime SBOM inventory ran a simple search query against their SBOM database and knew within minutes—or even seconds—whether they were impacted.
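
As a simplified illustration, if runtime SBOMs are stored as JSON documents, a first-pass exposure check can be as basic as a text search across the store (the path and match pattern are illustrative; a real SBOM platform would expose a proper query API):

$ grep -rl '"name": "xz' sboms/runtime/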

Pro Tip

If you want to learn how Google went from an all-hands-on-deck security incident when the XZ Utils vulnerability was announced to an all-clear in under 10 minutes, watch our webinar with the lead of Google’s SBOM initiative.

Wrap-Up

As the SBOM standard has evolved, its scope has grown considerably. What started as a structured way to store information about open source licenses has expanded to include numerous use-cases. A clear understanding of the evolution of SBOMs throughout the DevSecOps lifecycle is essential for organizations aiming to solve problems ranging from software supply chain security to regulatory compliance to legal risk management.

SBOMs are a powerful tool in the arsenal of modern software development. By recognizing their importance and integrating them thoughtfully across the DevSecOps lifecycle, you position your organization at the forefront of secure, efficient, and compliant software delivery.

Ready to secure your software supply chain and automate compliance tasks with SBOMs? Anchore is here to help. We offer SBOM management, vulnerability scanning and compliance automation enforcement solutions. If you still need some more information before looking at solutions, check out our webinar below on scaling a secure software supply chain with Kubernetes. 👇👇👇

Learn how Spectro Cloud secured their Kubernetes-based software supply chain and the pivotal role SBOMs played.

The Evolution of SBOMs in the DevSecOps Lifecycle: From Planning to Production

The software industry has wholeheartedly adopted the practice of building new software on the shoulders of the giants that came before. To accomplish this, developers assemble a foundation of pre-built, 3rd-party components and then wrap custom 1st-party code around this structure to create novel applications. It is an extraordinarily innovative and productive practice, but it also introduces challenges ranging from security vulnerabilities to compliance headaches to legal risk nightmares. Software bills of materials (SBOMs) have emerged to provide solutions for these wide-ranging problems.

An SBOM provides a detailed inventory of all the components that make up an application at a point in time. However, it’s important to recognize that not all SBOMs are the same—even for the same piece of software! SBOMs evolve throughout the DevSecOps lifecycle, just as an application evolves from source code to a container image to a running application. The Cybersecurity and Infrastructure Security Agency (CISA) has codified this idea by differentiating between the different types of SBOMs. Each type serves different purposes and captures information about an application through its evolutionary process.

In this 2-part blog series, we’ll deep dive into each stage of the DevSecOps process and the associated SBOM, highlighting the differences, the benefits and disadvantages, and the use-cases that each type of SBOM supports. Whether you’re just beginning your SBOM journey or looking to deepen your understanding of how SBOMs can be integrated into your DevSecOps practices, this comprehensive guide will provide valuable insights and advice from industry experts.

Learn about the role that SBOMs play in the security, including open source software (OSS) security, of your organization in this white paper.

Types of SBOMs and the DevSecOps Pipeline

Over the past decade the US government got serious about software supply chain security and began advocating for SBOMs as the standardized approach to the problem. As part of this initiative, CISA created the Types of Software Bill of Materials (SBOM) Documents white paper that codified the definitions of the different types of SBOMs and mapped them to each stage of the DevSecOps lifecycle. We will discuss each in turn, but before we do, let’s anchor on some terminology to prevent confusion or misunderstanding.

Below is a diagram that lays out each stage of the DevSecOps lifecycle as well as the naming convention we will use going forward.

With that out of the way, let’s get started!

Plan => Design SBOM

As the DevSecOps paradigm has spread across the software industry, a notable best practice known as the security architecture review has become integral to the development process. This practice embodies the DevSecOps goal of integrating security into every phase of the software lifecycle, aligning perfectly with the concept of Shift-Left Security—addressing security considerations as early as possible.

At this stage, the SBOM documents the planned components of the application. The CISA refers to SBOMs generated during this phase as Design SBOMs. These SBOMs are preliminary and outline the intended components and dependencies before any code is written.

The metadata that is collected for a Design SBOM includes:

  • Component Inventory: Identifying potential OSS libraries and frameworks to be used as well as the dependency relationship between the components.
  • Licensing Information: Understanding the licenses associated with selected components to ensure compliance.
  • Risk Assessment Data: Evaluating known vulnerabilities and security risks associated with each component.

This might sound like a lot of extra work but luckily if you’re already performing DevSecOps-style planning that incorporates a security and legal review—as is best practice—you’re already surfacing all of this information. The only thing that is different is that this preliminary data is formatted and stored in a standardized data structure, namely an SBOM.

Pros and Cons

Pros:

  • Maximal Shift-Left Security: Vulnerabilities cannot be found any earlier in the software development process. Design time security decisions are the peak of a proactive security posture and preempt bad design decisions before they become ingrained into the codebase.
  • Cost Efficiency: Resolving security issues at this stage is generally less expensive and less disruptive than during later stages of development or—worst of all—after deployment.
  • Legal and Compliance Risk Mitigation: Ensures that all selected components meet necessary compliance standards, avoiding legal complications down the line.

Cons:

  • Upfront Investment: Gathering detailed information on potential components and maintaining an SBOM at this stage requires a non-trivial commitment of time and effort.
  • Incomplete Information: Projects are not static, they will adapt as unplanned challenges surface. A design SBOM likely won’t stay relevant for long.

Use-Cases

There are a number of use-cases that are enabled by a Design SBOM:

  • Security Policy Enforcement: Automatically checking proposed components against organizational security policies to prevent the inclusion of disallowed libraries or frameworks.
  • License Compliance Verification: Ensuring that all components comply with the project’s licensing requirements, avoiding potential legal issues.
  • Vendor and Third-Party Risk Management: Assessing the security posture of third-party components before they are integrated into the application.
  • Enhance Transparency and Collaboration: A well-documented SBOM provides a clear record of the software’s components and, more importantly, shows that the project aligns with the goals of all of the stakeholders (engineering, security, legal, etc.). This builds trust and creates a collaborative environment that increases the chances that each individual stakeholder’s outcome will be achieved.

Example:

A financial services company operating within a strict regulatory environment uses SBOMs during planning to ensure that all components comply with standards like PCI DSS. By doing so, they prevent the incorporation of insecure components that won’t meet PCI compliance. This reduces the risk of the financial penalties associated with security breaches and regulatory non-compliance.

Pro Tip

If your organization is still early in the maturity of its SBOM initiative then we generally recommend moving the integration of design time SBOMs to the back of the queue. As we mentioned at the beginning of this section, the information stored in a design SBOM is naturally surfaced during the DevSecOps process; as long as that information is being recorded and stored, much of the value of a design SBOM will be captured in the artifact. This level of SBOM integration is best saved for later maturity stages, when your organization is ready to begin exploring deeper levels of insight that have a higher risk-to-reward ratio.

Alternatively, if your organization is having difficulty getting your teams to adopt a collaborative DevSecOps planning process, mandating an SBOM as a requirement can act as a forcing function to catalyze a cultural shift.

Source => Source SBOM

During the development stage, engineers implement the selected 3rd-party components into the codebase. CISA refers to SBOMs generated during this phase as Source SBOMs. The SBOMs generated here capture the actual implemented components and additional information that is specific to the developer who is doing the work.

The additional metadata that is collected for a Source SBOM includes:

  • Dependency Mapping: Documenting direct and transitive dependencies.
  • Identity Metadata: Adding contributor and commit information.
  • Developer Environment: Captures information about the development environment.

Unlike Design SBOMs, which are typically created manually, these SBOMs can be generated programmatically with a software composition analysis (SCA) tool—like Syft. They are usually packaged as command line interfaces (CLIs), since this is the preferred interface for developers.

If you’re looking for an SBOM generation tool (SCA embedded), we have a comprehensive list of options to make this decision easier.

Pros and Cons

Pros:

  • Accurate and Timely Component Inventory: Reflects the actual components used in the codebase and tracks changes as the codebase is actively being developed.
  • Shift-Left Vulnerability Detection: Identifies vulnerabilities as components are integrated but requires commit level automation and feedback mechanisms to be effective.
  • Facilitates Collaboration and Visibility: Keeps all stakeholders informed about divergence from the original plan and provokes conversations as needed. This is also dependent on automation to record changes during development and on notification systems to broadcast the updates.

Example: A developer adds a new logging library to the project like an outdated version of Log4j. The SBOM, paired with a vulnerability scanner, immediately flags the Log4Shell vulnerability, prompting the engineer to update to a patched version.

Cons:

  • Noise from Developer Toolchains: Developer environments are often bespoke. This creates noise for security teams by recording development dependencies.
  • Potential Overhead: Continuous updates to the SBOM can be resource-intensive when done manually; the only resource efficient method is by using an SBOM generation tool that automates the process.
  • Possibility of Missing Early Risks: Issues not identified during planning may surface here, requiring code changes.

Use-Cases

  • Faster Root Cause Analysis: During service incident retrospectives, questions arise about where, when and by whom a specific component was introduced into an application. Source SBOMs are the programmatic record that can provide answers and reduce manual root cause analysis.
  • Real-Time Security Alerts: Immediate notification of vulnerabilities upon adding new components, decreasing time to remediation and keeping security teams informed.
  • Automated Compliance Checks: Ensuring added components comply with security or license policies to manage compliance risk.
  • Effortless Collaboration: Stakeholders can subscribe to a live feed of changes and immediately know when implementation diverges from the plan.

Pro Tip

Some SBOM generators allow developers to specify development dependencies that should be ignored, similar to a .gitignore file. This can help cut down on the noise created by unique developer setups.
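
With Syft, for example, paths can be excluded from a source scan with glob patterns, roughly like this (the excluded paths and output file are illustrative):

$ syft dir:. --exclude './node_modules/**' --exclude './**/.venv/**' -o cyclonedx-json > source-sbom.cdx.json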

Build & Test => Build SBOM

When a developer pushes a commit to the CI/CD build system an automated process initiates that converts the application source code into an artifact that can then be deployed. CISA refers to SBOMs generated during this phase as Build SBOMs. These SBOMs capture both source code dependencies and build tooling dependencies.

The additional metadata that is collected includes:

  • Build Dependencies: Build tooling such as the language compilers, testing frameworks, package managers, etc.
  • Binary Analysis Data: Metadata for compiled binaries that don’t utilize traditional container formats.
  • Configuration Parameters: Details on build configuration files that might impact security or compliance.
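
In practice this usually means adding a single step to the CI job after the artifact is built; a simplified sketch (the image tag and file names are illustrative):

$ docker build -t myapp:${BUILD_ID} .
$ syft myapp:${BUILD_ID} -o spdx-json > build-sbom.spdx.json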

Pros and Cons

Pros:

  • Build Infrastructure Analysis: Captures build-specific components which may have their own vulnerability or compliance issues.
  • Reuses Existing Automation Tooling: Enables programmatic security and compliance scanning as well as policy enforcement without introducing any additional build tooling.
  • Developer Workflow Integration: Directly integrates with the developer workflow. Engineers receive security, compliance, etc. feedback without the need to reference a new tool.
  • Reproducibility: Facilitates reproducing builds for debugging and auditing.

Cons:

  • SBOM Sprawl: Build processes run frequently; if each run generates an SBOM, you will find yourself with a glut of files that you have to manage.
  • Delayed Detection: Vulnerabilities or non-compliance issues found at this stage may require rework.

Use-Cases

  • SBOM Drift Detection: By comparing SBOMs from two or more stages, unexpected dependency injection can be detected. This might take the form of a benign, leaky build pipeline that requires manual workarounds or a malicious actor attempting to covertly introduce malware. Either way this provides actionable and valuable information.
  • Policy Enforcement: Enables the creation of build breaking gates to enforce security or compliance. For high-stakes operating environments like defense, financial services or critical infrastructure, automating security and compliance at the expense of some developer friction is a net-positive strategy.
  • Automated Compliance Artifacts: Compliance requires proof in the form of reports and artifacts. Re-utilizing existing build tooling automation to automate this task significantly reduces the manual work required by security teams to meet compliance requirements.

Example: A security scan during testing uses the Build SBOM to identify a critical vulnerability and alerts the responsible engineer. The remediation process is initiated and a patch is applied before deployment.

Pro Tip

If your organization is just beginning its SBOM journey, this is the recommended phase of the DevSecOps lifecycle to implement SBOMs first. The two primary cons of this phase are the easiest to mitigate. For SBOM sprawl, you can procure a turnkey SBOM management solution like Anchore SBOM.

As for the delay in feedback created by waiting until the build phase, if your team is utilizing DevOps best practices and breaking features up into smaller components that fit into 2-week sprints, then this tight scoping will limit the impact of any significant vulnerabilities or non-compliance discovered.

Intermission

So far we’ve covered the first half of the DevSecOps lifecycle. Next week we will publish the second part of this blog series where we’ll cover the remainder of the pipeline. Watch our socials to be sure you get notified when part 2 is published.

If you’re looking for some additional reading in the meantime, check out our container security white paper below.

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Choosing the Right SBOM Generator: A Framework for Success

Choosing the right SBOM (software bill of materials) generator is trickier than it looks at first glance. SBOMs are the foundation for a number of different uses, ranging from software supply chain security to continuous regulatory compliance. Due to its cornerstone nature, the SBOM generator that you choose will either pave the way for achieving your organization’s goals or become a roadblock that delays critical initiatives.

But how do you navigate the crowded market of SBOM generation tools to find the one that aligns with your organization’s unique needs? It’s not merely about selecting a tool with the most features or the nicest CLI. It’s about identifying a solution that maps directly to your desired outcomes and use-cases, whether that’s rapid incident response, proactive vulnerability management, or compliance reporting.

We at Anchore have been enabling organizations to achieve their SBOM-related outcomes and do it with the least amount of frustration and setbacks. We’ve compiled our learnings on choosing the right SBOM generation tool into a framework to help the wider community make decisions that set them up for success.

Below is a quick TL;DR of the high-level evaluation criteria that we cover in this blog post:

  • Understanding Your Use-Cases: Aligning the tool with your specific goals.
  • Ecosystem Compatibility: Ensuring support for your programming languages, operating systems, and build artifacts.
  • Data Accuracy: Evaluating the tool’s ability to provide comprehensive and precise SBOMs.
  • DevSecOps Integration: Assessing how well the tool fits into your existing DevSecOps tooling.
  • Proprietary vs. Open Source: Weighing the long-term implications of your choice.

By focusing on these key areas, you’ll be better equipped to select an SBOM generator that not only meets your current requirements but also positions your organization for future success.

Learn about the role that SBOMs play in the security, including open source software (OSS) security, of your organization in this white paper.

Know your use-cases

When choosing from the array of SBOM generation tools in the market, it is important to frame your decision with the outcome(s) that you are trying to achieve. If your goal is to improve the response time/mean time to remediation when the next Log4j-style incident occurs—and be sure that there will be a next time—an SBOM tool that excels at correctly identifying open source licenses in a code base won’t be the best solution for your use-case (even if you prefer its CLI ;-D).

What to Do:

  • Identify and prioritize the outcomes that your organization is attempting to achieve
  • Map the outcomes to the relevant SBOM use-cases
  • Review each SBOM generation tool to determine whether they are best suited to your use-cases

It can be tempting to prioritize an SBOM generator that is best suited to our preferences and workflows; we are the ones that will be using the tool regularly—shouldn’t we prioritize what makes our lives easier? But if we prioritize our needs above the goal of the initiative, we might end up in a position where our choice of tools impedes our ability to realize the desired outcome. Using the correct framing, in this case by focusing on the use-cases, will keep us focused on delivering the best possible outcome.

SBOMs can be utilized for numerous purposes: security incident response, open source license compliance, proactive vulnerability management, compliance reporting or software supply chain risk management. We won’t address all use-cases/outcomes in this blog post; a more comprehensive treatment of all of the potential SBOM use-cases can be found on our website.

Example SBOM Use-Cases:

  • Security incident response: an inventory of all applications and their dependencies that can be queried quickly and easily to identify whether a newly announced zero-day impacts the organization.
  • Proactive vulnerability management: all software and dependencies are scanned for vulnerabilities as part of the DevSecOps lifecycle and remediated based on organizational priority.
  • Regulatory compliance reporting: compliance artifacts and reports are automatically generated by the DevSecOps pipeline to enable continuous compliance and prevent manual compliance work.
  • Software supply chain risk management: an inventory of software components with identified vulnerabilities used to inform organizational decision making when deciding between remediating risk versus building new features.
  • Open source license compliance: an inventory of all software components and the associated OSS license to measure potential legal exposure.

Pro tip: While you will inevitably leave many SBOM use-cases out of scope for your current project, keeping secondary use-cases in the back of your mind while making a decision on the right SBOM tool will set you up for success when those secondary use-cases eventually become a priority in the future.

Does the SBOM generator support your organization’s ecosystem of programming languages, etc?

SBOM generators aren’t just tools to ingest data and re-format it into a standardized format. They are typically paired with a software composition analysis (SCA) tool that scans an application/software artifact for metadata that will populate the final SBOM.

Support for the complete array of programming languages, build artifacts and operating system ecosystems is essentially an impossible task. This means that support varies significantly depending on the SBOM generator that you select. An SBOM generator’s ability to help you reach your organizational goals is directly related to its support for your organization’s software tooling preferences. This will likely be one of the most important qualifications when choosing between different options and will rule out many that don’t meet the needs of your organization.

Considerations:

  • Programming Languages: Does the tool support all languages used by your team?
  • Operating Systems: Can it scan the different OS environments your applications run on top of?
  • Build Artifacts: Does the tool scan containers? Binaries? Source code repositories? 
  • Frameworks and Libraries: Does it recognize the frameworks and libraries your applications depend on?

Data accuracy

This is one of the most important criteria when evaluating different SBOM tools. An SBOM generator may claim support for a particular programming language but after testing the scanner you may discover that it returns an SBOM with only direct dependencies—honestly not much better than a package.json or go.mod file that your build process spits out.

Two different tools might both generate a valid SPDX SBOM document when run against the same source artifact, but the content of those documents can vary greatly. This variation depends on what the tool can inspect, understand, and translate. Being able to fully scan an application for both direct and transitive dependencies, as well as navigate non-idiomatic patterns in how software can be structured, ends up being the true differentiator between the field of SBOM generation contenders.

Imagine using two SBOM tools on a Debian package. One tool recognizes Debian packages and includes detailed information about them in the SBOM. The other can’t fully parse the Debian .deb format and omits critical information. Both produce an SBOM, but only one provides the data you need to power use-case based outcomes like security incident response or proactive vulnerability management.

Let’s make this example more concrete by simulating this difference with Syft, Anchore’s open source SBOM generation tool:

$ syft -q -o spdx-json nginx:latest > nginx_a.spdx.json
$ grype -q nginx_a.spdx.json | grep Critical
libaom3             3.6.0-1+deb12u1          (won't fix)       deb   CVE-2023-6879     Critical    
libssl3             3.0.14-1~deb12u2         (won't fix)       deb   CVE-2024-5535     Critical    
openssl             3.0.14-1~deb12u2         (won't fix)       deb   CVE-2024-5535     Critical    
zlib1g              1:1.2.13.dfsg-1          (won't fix)       deb   CVE-2023-45853    Critical

In this example, we first generate an SBOM using Syft then run it through Grype—our vulnerability scanning tool. Syft + Grype uncover 4 critical vulnerabilities.

Now let’s try the same thing but “simulate” an SBOM generator that can’t fully parse the structure of the software artifact in question:

$ syft -q -o spdx-json --select-catalogers "-dpkg-db-cataloger,-binary-classifier-cataloger" nginx:latest > nginx_b.spdx.json 
$ grype -q nginx_b.spdx.json | grep Critical
$

In this case, we are returned none of the critical vulnerabilities found with the former tool.

This highlights the importance of careful evaluation of the SBOM generator that you decide on. It could mean the difference between effective vulnerability risk management and a security incident.

Can the SBOM tool integrate into your DevSecOps pipeline?

If the SBOM generator is packaged as a self-contained binary with a command line interface (CLI) then it should tick this box. CI/CD build tools are most amenable to this deployment model. If the SBOM generation tool in question isn’t a CLI then it should at least run as a server with an API that can be called as part of the build process.

Integrating with an organization’s DevSecOps pipeline is key to enable a scalable SBOM generation process. By implementing SBOM creation directly into the existing build tooling, organizations can leverage existing automation tools to ensure consistency and efficiency which are necessary for achieving the desired outcomes.

Proprietary vs. open source SBOM generator?

Using an open source SBOM tool is considered an industry best practice because it guards against the risks associated with vendor lock-in. As a bonus, the ecosystem for open source SBOM generation tooling is very healthy. OSS will always have an advantage over proprietary tooling in regards to ecosystem coverage and data quality, because it gets into the hands of more users, which creates a feedback loop that closes gaps in coverage or quality.

Finally, even if your organization decides to utilize a software supply chain security product that has its own proprietary SBOM generator, it is still better to create your SBOMs with an open source SBOM generator, export to a standardized format (e.g., SPDX or CycloneDX) then have your software supply chain security platform ingest these non-proprietary data structures. All platforms will be able to ingest SBOMs from one or both of these standards-based formats.

Wrap-Up

In a landscape where the next security/compliance/legal challenge is always just around the corner, equipping your team with the right SBOM generator empowers you to act swiftly and confidently. It’s an investment not just in a tool, but in the resilience and security of your entire software supply chain. By making a thoughtful, informed choice now, you’re laying the groundwork for a more secure and efficient future.

Anchore Arrives on AWS Marketplace and Joins ISV Accelerate

We are excited to announce two significant milestones in our partnership with Amazon Web Services (AWS) today:  

  • Anchore Enterprise can now be purchased through the AWS marketplace and 
  • Anchore has joined the Amazon Partner Network (APN) ISV Accelerate Program

Organizations like Nvidia, Cisco Umbrella and Infoblox validate our commitment to delivering trusted solutions for SBOM management, secure software supply chains, and automated compliance enforcement, and they can now benefit from a stronger partnership between AWS and Anchore.

Anchore’s best-in-breed container security solution was chosen by Cisco Umbrella as it seamlessly integrated in their AWS infrastructure and accelerated meeting all six FedRAMP requirements. They deployed Anchore into an environment that had to meet a number of high-security and compliance standards. Chief among those was STIG compliance for Amazon EC2 nodes that backed the Amazon EKS deployment. 

In addition, Anchore Enterprise supports high-security requirements such as IL4/IL6, FIPS, SSDF attestation and EO 14028 compliance. 

Contact Anchore’s sales team today for a pricing quote or demo that suits your unique needs.

Anchore Enterprise is now available on AWS Marketplace

The AWS Marketplace offers a convenient and efficient way for AWS customers to procure Anchore. It simplifies the procurement process, provides greater control and governance, and fosters innovation by offering a rich selection of tools and services that seamlessly integrate with their existing AWS infrastructure. 

Anchore Enterprise on AWS Marketplace benefits DevSecOps teams by enabling:

  • Self-procurement via the AWS console
  • Faster procurement with applicable legal terms provided and standardized security review
  • Easier budget management with a single consolidated AWS bill for all infrastructure spend
  • Spend on Anchore Enterprise partially counts towards EDP (Enterprise Discount Program) committed spend

By strengthening our collaboration with AWS, customers can now feel at ease that Anchore Enterprise integrates and operates seamlessly on AWS infrastructure. Joining the ISV Accelerate Program allows us to work closely with AWS account teams to ensure seamless support and exceptional service for our joint clients. 

Purchase Anchore Enterprise on the AWS Marketplace or contact our sales team for a pricing quote that meets your organization’s needs.

Anchore Survey 2024: Only 1 in 5 organizations have full visibility of open source

The Anchore 2024 Software Supply Chain Security Report is now available. This report provides a unique set of insights into the experiences and practices of over 100 organizations that are the targets of software supply chain attacks.

Survey Highlights

The survey shows that amid growing software supply chain risks:

  • The intensity of software supply chain attacks is increasing.
  • 200% increase in the priority of software supply chain security.
  • Only 1 in 5 have full visibility of open source.
  • Third-party software joins open source as a top security challenge.
  • Organizations must comply with an average of 4.9 standards. 
  • 78% plan to increase SBOM usage.
  • Respondents worry about AI’s impact on software supply chain security.

The intensity of software supply chain attacks is increasing.

The survey shows that the intensity of software supply chain attacks is increasing, with 21% of successful supply chain attacks having a significant impact, more than doubling from 10% in 2022. 

200% increase in the priority of software supply chain security.

As a result of increased attacks, organizations are increasing their focus on software supply chain security, with a 200% increase in organizations making it a top priority. 

Only 1 in 5 have full visibility of open source.

Amid growing software supply chain risks, only 21% of respondents are very confident that they have complete visibility into all the dependencies of the applications their organization builds. Without this critical foundation, organizations are unaware of vulnerabilities that leave them open to supply chain attacks.

Third-party software joins open source as a top security challenge.

Organizations are looking to secure all elements of their software supply chain, including open source software and 3rd party libraries. While the security of open source software continues to be identified as a significant challenge, in this year’s report, 46% of respondents chose the security of 3rd party software as a significant challenge.

Organizations must comply with an average of 4.9 different standards.

Compliance is a significant driver in supply chain security. As software supply chain risks grow, governments and industry groups are responding with new guidelines and regulations. Respondents reported the need to comply with an average of almost five separate standards per organization. Many must comply with new regulatory requirements, including CISA’s directive on Known Exploited Vulnerabilities, the Secure Software Development Framework (SSDF), and the EU Cyber Resilience Act.

78% plan to increase SBOM usage.

The software bill-of-materials (SBOM) is now a critical component of software supply chain security. An SBOM provides visibility into software ingredients and is a foundation for understanding software vulnerabilities and risks. While just under half of respondents currently leverage SBOMs, a large majority plan to increase SBOM use over the next 18 months.

Respondents worry about AI’s impact on software supply chain security.

A large majority of respondents are concerned about AI’s impact on software supply chain security, and as many as a third are very concerned. The highest concerns are with code tested with AI and code generated with AI or with Copilot tools. 

Let’s design an action plan

Join us on December 10, 2024, for a live discussion with VP of Security Josh Bressers on the latest trends. Hear practical steps for building a more resilient software supply chain. Register Now.

To minimize risk, avoid reputational damage, and protect downstream users and customers, software supply chain security must become a new practice for every organization that uses or builds software. SBOMs are a critical foundation of this new practice, providing visibility into the dependencies and risks of the software you use.  

Here are seven steps to take your software supply chain security to the next level:

  1. Assess your software supply chain maturity against best practices
  2. Identify key challenges and create a plan to make tangible improvements over the coming months.
  3. Develop a methodology to document and assess the impact of supply chain attacks on your organization, along with improvements to be made.
  4. Create a plan to generate, manage, and share SBOMs as a key pillar of your supply chain security initiative. Learn more with the Expert Guide on SBOMs in Cybersecurity and 6 Ways to Prevent SBOM sprawl.
  5. Delve into existing and emerging compliance requirements and create a plan to automate compliance checks. Learn how to meet compliance standards like NIST, SSDF, and FedRAMP.
  6. Identify gaps in tooling and create plans to address them. See how Anchore can help. Trying open source tools like Syft for SBOM generation and Grype for vulnerability scanning is a good way to get started (see the quick-start sketch after this list).
  7. Create an organizational structure and define responsibilities to address software supply chain security and risk.
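If you want a concrete starting point for steps 4 and 6, a minimal sketch with the open source tools mentioned above looks like this (the nginx image is just an arbitrary example):

$ syft nginx:latest -o syft-json > nginx.syft.json    # generate and keep an SBOM
$ grype sbom:./nginx.syft.json                        # scan that SBOM for known vulnerabilities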

Tonight’s Movie: The Terminal (of your laptop)

A picture paints a thousand words, but a GIF shows every typo in motion – though it doesn’t have to! GIFs have long been the go-to in technical docs, capturing real-time terminal output and letting readers watch workflows unfold as if sitting beside you.

I recently needed to make some terminal GIFs, so I tried three of the best available tools, and here are my findings.

Requirements

We recently attended All Things Open, where a TV on our stand needed a rolling demo video. I wanted to add a few terminal usage examples for Syft, Grype, and Grant – our Open-Source, best-in-class container security tools. I tried a few tools to generate the GIFs, which I embedded in a set of Google Slides (for ease) and then captured and rendered as a video that played in a loop on a laptop running VLC.

To summarise, this was the intended flow:

Typing in a terminal → 
↳ Recording
↳ GIF
↳ Google Slides
↳ Video Capture
↳ VLC playlist
↳ Success 🎉

We decided to render it as a video to mitigate conference WiFi issues. Nobody wants to walk past your exhibitor stand and see a 404 or “Network Connectivity Problems” on the Jumbotron®️!

The goal was for attendees passing our stand to see the command-line utilities in action. It also allowed us to discuss the demos with interested conferencegoers without busting out a laptop and crouching around it. We just pointed to the screen as a terminal appeared and talked through it.

Below is an early iteration of what I was aiming for, taken as a frame grab from a video – hence the slight blur.

My requirements were for a utility which:

  • Records a terminal running commands
  • Runs on Linux and macOS because I use both
  • Reliably captures output from the commands being run
  • Renders out a high-quality GIF
  • Is preferably open source
  • Is actively maintained

The reason for requiring a GIF rather than a traditional video, such as MP4, is to embed the GIF easily in a Google Slides presentation. While I could create an MP4 and then use a video editor to cut together the videos, I wanted something simple and easily reproducible. I may use MP4s in other situations – such as posting to social media – so if a tool can export to that format easily, I consider that a bonus.

It is worth noting that Google Slides supports GIFs up to 1000 frames in length. So, if you have a long-running command captured at a high frame rate, this limit is easy to hit. If that is the case, perhaps render an MP4 and use the right tool for the job, a video editor.
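Some quick arithmetic puts that limit in perspective: at 30 frames per second, 1,000 frames is only about 33 seconds of animation, and even at 10 frames per second it is roughly 100 seconds. A smooth, multi-minute demo simply won’t fit.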

“High quality” GIF is a subjective term, but I’m after something that looks pleasing (to me), doesn’t distract from the tool being demonstrated, and doesn’t visibly stutter.

Feature Summary

I’ve put the full summary up here near the top of the article to save wear & tear on your mouse wheel or while your magic mouse is upside down, on charge. The details are underneath the conclusion for those interested and equipped with a fully-charged mouse.

† asciinema requires an additional tool such as agg to convert the recorded output to a GIF.
◊ t-rec supports X11 on Linux, but currently does not support Wayland sessions.
* t-rec development appears to have stalled.

Conclusion

All three tools are widely used and work fine in many cases. Asciinema is often recommended because it’s straightforward to install, and almost no configuration is required. The resulting recordings can be published online and rendered on a web page.

While t-rec is interesting, as it records the actual terminal window rather than just the session text (as asciinema does), it is a touch heavyweight, and its 4 fps frame rate makes videos made with t-rec look jerky.

I selected vhs for a few reasons.

It runs easily on macOS and Linux, so I can create GIFs on my work or personal computer with the same tool. vhs is very configurable, supports higher frame rates than other tools, and is scriptable, making it ideal for creating GIFs for documentation in CI pipelines.

vhs being scriptable is, I think, the real superpower here. For example, vhs can be part of a documentation site build system. One configuration file can specify a particular font family, size and color scheme to generate a GIF suitable for embedding in documentation.

Another almost identical configuration file might use a different font size or color, which is more suitable for a social media post. The same commands will be run, but the color, font family, font size, and even GIF resolution can be different, making for a very flexible and reliable way to create a terminal GIF for any occasion!
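As a rough sketch of what that looks like (the Set commands come from the vhs documentation; the specific theme name, font, and sizes here are placeholders you would adjust), two tapes can share the same commands and differ only in their settings:

$ cat docs.tape
Output grype-docs.gif
Set Theme "catppuccin-macchiato"
Set FontFamily "Iosevka Term"
Set FontSize 18
Set Width 1200
Set Height 600
Type "grype ubuntu:latest"
Enter
Sleep 5s

A social-media variant might change only FontSize, Width, and Height; both are rendered the same way, with vhs docs.tape.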

vhs ships with a broad default theme set that matches typical desktop color schemes, such as the familiar purple-hue terminal on Ubuntu, as seen below. This GIF uses the “BlexMono Nerd Font Mono” font (a modified version of IBM Plex font), part of the nerd-fonts project.

If this GIF seems slow, that’s intentional. The vhs configuration can “type” at a configurable speed and slow the resulting captured output down (or speed it up).

There are also popular Catppuccin themes that are pretty appealing. The following GIF uses the “catppuccin-macchiato” theme with “Iosevka Term” font, which is part of the Iosevka project. I also added a PS1 environment variable to the configuration to simulate a typical console prompt.

vhs can also take a still screenshot during the recording, which can be helpful as a thumbnail image, or to capture a particular frame from the middle of the recording. Below is the final frame from the previous GIF.

Here is one of the final (non-animated) slides from the video. I tried to put as little as possible on screen simultaneously, just the title, video, and a QR code for more information. It worked well, with someone even asking how the terminal videos were made. This blog is for them.

I am very happy with the results from vhs, and will likely continue using it in documentation, and perhaps social posts – if I can get the font to a readable size on mobile devices.

Alternatives

I’m aware of OBS Studio and other screen (and window) recording tools that could be used to create an initial video, which could be converted into a GIF.

Are there other, better ways to do this?

Let me know on our community discourse, or leave a comment wherever you read this blog post.

Below are the details about each of the three tools I tested.


t-rec

t-rec is a “Blazingly fast terminal recorder that generates animated gif images for the web written in rust.” This was my first choice, as I had played with it before my current task came up.

I initially quite liked that t-rec recorded the entire terminal window, so when running on Linux, I could use a familiar desktop theme indicating to the viewer that the command is running on a Linux host. On a macOS host, I could use a native terminal (such as iTerm2) to hint that the command is run on an Apple computer.

However, I eventually decided this wasn’t that important at all. Especially given that vhs can be used to theme the terminal so it looks close to a particular host OS. Plus, most of the commands I’m recording are platform agnostic, producing the same output no matter what they’re running on.

t-rec Usage

  • Configure the terminal to be the size you require with the desired font and any other settings before you start t-rec.
  • Run t-rec.
$ t-rec --quiet --output grant

The terminal will clear, and recording will begin.

  • Type each command as you normally would.
  • Press CTRL+D to end recording.
  • t-rec will then generate the GIF using the specified name.
🎆 Applying effects to 118 frames (might take a bit)
💡 Tip: To add a pause at the end of the gif loop, use e.g. option `-e 3s`
🎉 🚀 Generating grant.gif
Time: ~9s
 alan@Alans-MacBook-Pro  ~ 

The output GIF will be written in the current directory by stitching together all the bitmap images taken during the recording. Note the recording below contains the entire terminal user interface and the content.

t-rec Benefits

t-rec records the video by taking actual bitmap screenshots of the entire terminal on every frame. So, if you’re keen on having a GIF that includes the terminal UI, including the top bar and other window chrome, then this may be for you.

t-rec Limitations

t-rec records at 4 frames per second, which may be sufficient but can look jerky with long commands. There is an unmerged draft PR to allow user-configurable recording frame rates, but it hasn’t been touched for a couple of years.

I found t-rec would frequently just stop adding frames to a GIF. So the resulting GIF would start okay, then randomly miss out most of the frames, abruptly end, and loop back to the start. I didn’t have time to debug why this happened, which got me looking for a different tool.

asciinema

“Did you try asciinema?” was a common question asked of me when I mentioned to fellow nerds what I was trying to achieve. Yes.

asciinema is the venerable grand-daddy of terminal recording. It’s straightforward to install and set up, and it has a very simple recording and publishing pipeline. Perhaps too simple.

When I wandered around the various exhibitor stands at All Things Open last week, it was obvious who spent far too long fiddling with these tools (me), and which vendors recorded a window, or published an asciinema, with some content blurred out.

One even had an ugly demo of our favorite child, grype (don’t tell syft I said that), in such a video! Horror of horrors!

asciinema doesn’t create GIFs directly but instead creates “cast” files, JSON formatted text representations of the session, containing both the user-entered text and the program output. A separate utility, agg (asciinema gif generator), converts the “cast” to a GIF. In addition, another tool, asciinema-edit, can be used to edit the cast file post-recording.

asciinema Usage

  • Start asciinema rec, and optionally specify a target file to save as.
asciinema rec ./grype.cast
  • Run commands.
  • Type exit when finished.
  • Play back the cast file

asciinema play ./grype.cast

  • Convert asciinema recording to GIF.
agg --font-family "BlexMono Nerd Font Mono" grype.cast grype.gif

Here’s the resulting GIF, using the above options. Overall, it looks fine – very much like my terminal appears. Some of the characters are missing or incorrectly displayed, however – for example, the animated braille characters shown while grype is parsing the container image.

asciinema – or rather agg (the cast-to-GIF converter) has a few options for customizing the resulting video. There are a small number of themes, the ability to configure the window size (in rows/columns), font family, and size, and set various speed and delay-related options.
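For example, a slightly more customized conversion might look like the following (flag and theme names taken from agg’s help output at the time of writing – double-check agg --help on your version):

agg --theme dracula --font-family "BlexMono Nerd Font Mono" --font-size 20 --speed 1.25 grype.cast grype.gif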

Overall, asciinema is very capable, fast, and easy to use. The upstream developers are currently porting it from Python to Rust, so I’d consider this an active project. But it wasn’t entirely giving me all the options I wanted. It’s still a useful utility to keep in your toolbelt.

vhs

vhs has a novel approach using ‘tape’ files which describe the recording as a sequence of Type, Enter and Sleep statements.

The initial tape file can be created with vhs record and then edited in any standard text editor to modify commands, choice of shell, sleep durations, and other configuration settings. The vhs cassette.tape command will configure the session, then run the commands in a virtual (hidden) terminal.

Once the end of the ‘tape’ is reached, vhs generates the GIF, and optionally, an MP4 video. The tape file can be iterated on to change the theme, font family, size, and other settings, then re-running vhs cassette.tape creates a whole new GIF.

vhs Usage

  • Create a .tape file with vhs record --shell bash > cassette.tape.
  • Run commands.
  • Type exit when finished.

vhs will write the commands and timings to the cassette.tape file, for example:

$ cat cassette.tape
Sleep 1.5s
Type "./grype ubuntu:latest"
Enter
Sleep 3s
  • Optionally edit the tape file
  • Generate the GIF
$ vhs cassette.tape
File: ./cassette.tape
Sleep 1.5s
Type ./grype ubuntu:latest
Enter 1
Sleep 3s
Creating ...
Host your GIF on vhs.charm.sh: vhs publish <file>.gif

Below is the resulting default GIF, which looks fantastic out of the box, even before playing with themes, fonts and prompts.

vhs Benefits

vhs is very configurable, with some useful supported commands in the .tape file. Its support for themes, fonts, resolution, and ‘special’ key presses makes it very flexible for scripting a recording of a terminal-based application.

vhs Limitations

vhs requires the tape author to specify how long to Sleep after each command – or assume the initial values created with vhs record are correct. vhs does not (yet) auto-advance when a command finishes. This may not be a problem if the command you’re recording has a reliable runtime. Still, it might be a problem if the duration of a command is dependent on prevailing conditions such as the network or disk performance.


What do you think? Do you like animated terminal output, or would you prefer a video, an interactive tool, or just a plain README.md? Let me know on our community discourse, or leave a comment wherever you read this blog post.

Automate STIG Compliance with MITRE SAF: the Fastest Path to ATO

Trying to get your head around STIG (Security Technical Implementation Guides) compliance? Anchore is here to help. With the help of the MITRE Security Automation Framework (SAF), we’ll walk you through the quickest path to STIG compliance and, ultimately, the coveted Authority to Operate (ATO).

The goal for any company that aims to provide software services to the Department of Defense (DoD) is an ATO. Without this stamp of approval, your software will never get into the hands of the warfighters that need it most. STIG compliance is a needle that must be threaded on the path to ATO. Luckily, MITRE has developed and open-sourced SAF to smooth the often complex and time-consuming STIG compliance process.

We’ll get you up to speed on MITRE SAF and how it helps you achieve STIG compliance in this blog post, but before we jump straight into the content, be sure to bookmark our webinar with Aaron Lippold, Chief Architect of the MITRE Security Automation Framework (SAF). Josh Bressers, VP of Security at Anchore, and Lippold provide a behind-the-scenes look at SAF and how it dramatically reduces the friction of the STIG compliance process.

What is the MITRE Security Automation Framework (SAF)?

The MITRE SAF is both a high-level cybersecurity framework and an umbrella that encompasses a set of security/compliance tools. It is designed to simplify STIG compliance by translating DISA (Defense Information Systems Agency) SRG (Security Requirements Guide) guidance into actionable steps. 

By following the Security Automation Framework, organizations can streamline and automate the hardened configuration of their DevSecOps pipeline to achieve an ATO (Authority to Operate).

The SAF offers four primary benefits:

  1. Accelerate Path to ATO: By streamlining STIG compliance, SAF enables organizations to get their applications into the hands of DoD operators faster. This acceleration is crucial for meeting operational demands without compromising on security standards.
  2. Establish Security Requirements: SAF translates SRGs and STIGs into actionable steps tailored to an organization’s specific DevSecOps pipeline. This eliminates ambiguity and ensures security controls are implemented correctly.
  3. Build Security In: The framework provides tooling that can be directly embedded into the software development pipeline. By automating STIG configurations and policy checks, it ensures that security measures are consistently applied, leaving no room for false steps.
  4. Assess and Monitor Vulnerabilities: SAF offers visualization and analysis tools that assist organizations in making informed decisions about their current vulnerability inventory. It helps chart a path toward achieving STIG compliance and ultimately an ATO.

The overarching vision of the MITRE SAF is to “implement evolving security requirements while deploying apps at speed.” In essence, it allows organizations to have their cake and eat it too—gaining the benefits of accelerated software delivery without letting cybersecurity risks grow unchecked.

How does MITRE SAF work?

MITRE SAF is segmented into 5 capabilities that map to specific stages of the DevSecOps pipeline or STIG compliance process:

  1. Plan
  2. Harden
  3. Validate
  4. Normalize
  5. Visualize

Let’s break down each of these capabilities.

Plan

There are hundreds of existing STIGs for products ranging from Microsoft Windows to Cisco routers to MySQL databases. On the off chance that a product your team wants to use doesn’t have a pre-existing STIG, SAF’s Vulcan tool helps translate the application SRG into a tailored STIG that can then be used to achieve compliance.

Vulcan helps streamline the process of creating STIG-ready security guidance and the accompanying InSpec automated policy that confirms a specific instance of software is configured in a compliant manner.

Vulcan does this by modeling the STIG intent form and tailoring the applicable SRG controls into a finished STIG for an application. The finished STIG is then sent to DISA for peer review and formal publishing as a STIG. Vulcan allows the author to develop both human-readable instructions and machine-readable InSpec automated validation code at the same time.

Diagram of the process to map SRG controls to STIG guidelines via the MITRE SAF Vulcan tool, an automated conversion tool that speeds up the STIG compliance process.

Harden

The hardening capability focuses on automating STIG compliance through the use of pre-built infrastructure configuration scripts. SAF hardening content allows organizations to:

  • Use their preferred configuration management tools: Chef Cookbooks, Ansible Playbooks, Terraform Modules, etc. are available as open source templates on MITRE’s GitHub page.
  • Share and collaborate: All hardening content is open source, encouraging community involvement and shared learning.
  • Cover the full development stack: Ensure that every layer, from cloud infrastructure to applications, adheres to security standards.

Validate

The validation capability focuses on verifying the hardening meets the applicable STIG compliance standard. These validation checks are automated via the SAF CLI tool that incorporates the InSpec policies for a STIG. With SAF CLI, organizations can:

  • Automatically validate STIG compliance: By integrating SAF CLI directly into your CI/CD pipeline and invoking InSpec policy checks at every build, you shift security left by surfacing policy violations early (see the sketch after this list).
  • Promote community collaboration: Like the hardening content, validation scripts are open source and accessible by the community for collaborative efforts.
  • Span the entire development stack: Validation—similar to hardening—isn’t limited to a single layer; it encompasses cloud infrastructure, platforms, operating systems, databases, web servers, and applications.
  • Incorporate manual attestation: To achieve comprehensive coverage of policy requirements that automated tools might not fully address.
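As a minimal sketch of that CI validation step (the baseline repository, target host, and file names are illustrative; only the general InSpec invocation pattern is assumed), a pipeline job might run a published STIG baseline profile against a target and emit machine-readable results for downstream SAF tooling:

$ inspec exec https://github.com/mitre/redhat-enterprise-linux-8-stig-baseline \
    -t ssh://ec2-user@10.0.0.5 \
    --reporter cli json:stig-results.json    # JSON results can then be normalized and threshold-checked with SAF CLI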

Normalize

Normalization addresses the challenge of interoperability between different security tools and data formats. SAF CLI performs double-duty by taking on the normalization function as well as validation. It is able to:

  • Translate data into OHDF: The OASIS Heimdall Data Format (OHDF) is an open standard that structures countless proprietary security metadata formats into a single universal format.
  • Leverage open source OHDF libraries: Organizations can use OHDF converters as libraries within their custom applications.
  • Automate data conversion: By incorporating SAF CLI into the DevSecOps pipeline, data is automatically standardized with each run.
  • Increase compliance efficiency: A single data format for all security data allows interoperability and facilitates efficient, automated STIG compliance.

Example: Below is an example of Burp Suite’s proprietary data format normalized to the OHDF JSON format:

Image of Burp Suite data format being mapped to MITRE SAF's OHDF to reduce manual data mapping and reduce time to STIG compliance.
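In practice this normalization is a single SAF CLI command; the subcommand and flags below follow the pattern in the SAF CLI documentation, but verify the exact names with saf convert --help for your installed version (file names are illustrative):

saf convert burpsuite2hdf -i burp-scan.xml -o burp-scan.ohdf.json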

Visualize

Visualization is critical for understanding security posture and making informed decisions. SAF provides an open source, self-hosted visualization tool named Heimdall. It ingests OHDF normalized security data and provides the data analysis tools to enable organizations to:

  • Aggregate security and compliance results: Compiling data into comprehensive rollups, charts, and timelines for a holistic view of security and compliance status.
  • Perform deep dives: Allowing teams to explore detailed vulnerability information to facilitate investigation and remediation, ultimately speeding up time to STIG compliance.
  • Guide risk reduction efforts: Visualization of insights helps prioritize security and compliance tasks, reducing risk in the most efficient manner.

How is SAF related to a DoD Software Factory?

A DoD Software Factory is the common term for a DevSecOps pipeline that meets the definition laid out in DoD Enterprise DevSecOps Reference Design. All software that ultimately achieves an ATO has to be built on a fully implemented DoD Software Factory. You can either build your own or use a pre-existing DoD Software Factory like the US Air Force’s Platform One or the US Navy’s Black Pearl.

As we saw earlier, MITRE SAF is a framework meant to help you achieve STIG compliance and is one portion of your journey towards an ATO. STIG compliance applies to both the software that you write and the DevSecOps platform that your software is built on. Building your own DoD Software Factory means committing to going through the ATO process and STIG compliance for the DevSecOps platform first, then a second time for the end-user application.

Wrap-Up

The MITRE SAF is a huge leg up for modern, cloud-native DevSecOps software vendors that are currently navigating the labyrinth towards ATO. By providing actionable guidance, automation tooling, and a community-driven approach, SAF dramatically reduces the time to ATO. It bridges the gap between the speed of DevOps software delivery and secure, compliant applications ready for critical DoD missions with national security implications. 

Embracing SAF means more than just meeting regulatory requirements; it’s about building a resilient, efficient, and secure development pipeline that can adapt to evolving security demands. In an era where cybersecurity threats are evolving just as rapidly as software, leveraging frameworks like MITRE SAF is not just an efficient path to compliance; it’s essential for sustained success.

Grype Support for Azure Linux 3 released

On September 26, 2024 the OSS team at Anchore released general support for Azure Linux 3, Microsoft’s new cloud-focused Linux distribution. This blog post will share some of the technical details of what goes into supporting a new Linux distribution in Grype.

Step 1: Make sure Syft identifies the distro correctly

In this case, this step happened automatically. Syft is pretty smart about parsing /etc/os-release in an image, and Microsoft has labeled Azure Linux in a standard way. Even before this release, if you’d run the following command, you would see Azure Linux 3 correctly identified.

syft -q -o json mcr.microsoft.com/azurelinux/base/core:3.0 | jq .distro
{
  "prettyName": "Microsoft Azure Linux 3.0",
  "name": "Microsoft Azure Linux",
  "id": "azurelinux",
  "version": "3.0.20241005",
  "versionID": "3.0",
  "homeURL": "https://aka.ms/azurelinux",
  "supportURL": "https://aka.ms/azurelinux",
  "bugReportURL": "https://aka.ms/azurelinux"
}

Step 2: Build a vulnerable image

You can’t test a vulnerability scanner without an image that has known vulnerabilities in it. So just about the first thing to do is make a test image that is known to have some problems.

In this case, we started with Azure’s base image and intentionally installed an old version of the golang RPM:

FROM mcr.microsoft.com/azurelinux/base/core:3.0@sha256:9c1df3923b29a197dc5e6947e9c283ac71f33ef051110e3980c12e87a2de91f1

RUN tdnf install -y golang-1.22.5-1.azl3

This has a couple of CVEs against it, so we can use it to test whether Grype is working end to end.

$ docker build -t azuretest:latest .
$ docker image save azuretest:latest > azuretest.tar
$ grype ./azuretest.tar
  Parsed image sha256:49edd6d1eff19d2b34c27a6ad11a4a8185d2764ae1182c17c563a597d173b8
  Cataloged contents e649de5ff4361e49e52ecdb8fe8acb854cf064247e377ba92669e7a33a228a00
   ├──  Packages                        [122 packages]
   ├──  File digests                    [11,141 files]
   ├──  File metadata                   [11,141 locations]
   └──  Executables                     [426 executables]
  Scanned for vulnerabilities     [84 vulnerability matches]
   ├── by severity: 3 critical, 57 high, 3 medium, 0 low, 0 negligible (21 unknown)
   └── by status:   84 fixed, 0 not-fixed, 0 ignored
NAME          INSTALLED      FIXED-IN         TYPE       VULNERABILITY   SEVERITY
coreutils     9.4-3.azl3     0:9.4-5.azl3     rpm        CVE-2024-0684   Medium
curl          8.8.0-1.azl3   0:8.8.0-2.azl3   rpm        CVE-2024-6197   High
curl-libs     8.8.0-1.azl3   0:8.8.0-2.azl3   rpm        CVE-2024-6197   High
expat         2.6.2-1.azl3   0:2.6.3-1.azl3   rpm        CVE-2024-45492  High
expat         2.6.2-1.azl3   0:2.6.3-1.azl3   rpm        CVE-2024-45491  High
expat         2.6.2-1.azl3   0:2.6.3-1.azl3   rpm        CVE-2024-45490  High
expat-libs    2.6.2-1.azl3   0:2.6.3-1.azl3   rpm        CVE-2024-45492  High
expat-libs    2.6.2-1.azl3   0:2.6.3-1.azl3   rpm        CVE-2024-45491  High
expat-libs    2.6.2-1.azl3   0:2.6.3-1.azl3   rpm        CVE-2024-45490  High
golang        1.22.5-1.azl3  0:1.22.7-2.azl3  rpm        CVE-2023-29404  Critical
golang        1.22.5-1.azl3  0:1.22.7-2.azl3  rpm        CVE-2023-29402  Critical
golang        1.22.5-1.azl3  0:1.22.7-2.azl3  rpm        CVE-2022-41722  High
krb5          1.21.2-1.azl3  0:1.21.3-1.azl3  rpm        CVE-2024-37371  Critical

Normally, we like to build test images with CVEs from 2021 or earlier, because that set of vulnerabilities changes slowly. But hats off to the team at Microsoft – we could not find an easy way to get a three-year-old vulnerability into their distro. So, in this case, the team did some behind-the-scenes work to make it easier to add test images that only have newer vulnerabilities as part of this release.

Step 3: Write the vunnel provider

Vunnel is Anchore’s “vulnerability funnel,” the open-source project that downloads vulnerability data from many different sources and collects and normalizes them so that grype can match them. This step was pretty straightforward because Microsoft publishes complete and up-to-date OVAL XML, so the Vunnel provider can just download it, parse it into our own format, and pass it along.

Step 4: Wire it up in Grype, and scan away

Now that Syft identifies the distro, we have test images in our CI/CD pipelines to make sure we don’t regress, and Vunnel is downloading the Azure Linux 3 vulnerability data from Microsoft, we’re ready to release the Grype change. In this case, it was a simple change telling Grype where to look in its database for vulnerabilities affecting the new distro.

Conclusion

There are two big upshots of this post:

First, anyone running Grype v0.81.0 or later can scan images built from Azure Linux 3 and get accurate vulnerability information today, for free.
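For example, pointing Grype at Microsoft’s base image is now a one-liner:

$ grype mcr.microsoft.com/azurelinux/base/core:3.0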

Second, Anchore’s tools make it possible to add a new Linux distro to Syft and Grype in just a few pull requests. All the work we did for this support was open source – you can go read the pull requests on GitHub if you’d like (vunnel, grype-db, grype, test-images). And that means that if your favorite Linux distribution isn’t covered yet, you can let us know or send us a PR.

If you’d like to discuss any topics this post raises, join us on discourse.

Introducing Anchore Data Service and Anchore Enterprise 5.10

We are thrilled to announce the release of Anchore Enterprise 5.10, our tenth release of 2024. This update brings two major enhancements that will elevate your experience and bolster your security posture: the new Anchore Data Service (ADS) and expanded AnchoreCTL ecosystem coverage. 

With ADS, we’ve built a fast and reliable solution that reduces time spent by DevSecOps teams debugging intermittent network issues from flaky services that are vital to software supply chain security.

On top of that, we have buffed our software composition analysis (SCA) scanner’s ecosystem coverage (e.g., C++, Swift, Elixir, R, etc.) for all Anchore customers. To do this we embedded Syft, our popular open source SCA/SBOM (software bill of materials) generator, directly into Anchore Enterprise.

It’s been a fall of big releases at Anchore and we’re excited to continue delivering value to our loyal customers. Read on to get all of the gory details >>

Announcing the Anchore Data Service

Before, customers ran the Anchore Feed Service within their environment to pull data feeds into their Anchore Enterprise deployment. To get an idea of what this looked like, see the architecture diagram of Anchore Enterprise prior to version 5.10:

Originally we did this to give customers more control over their environment. Unfortunately, this wasn’t without its issues. The data feeds are provided by the community, which means the services were designed to be accessible but cost-efficient. As a result, they were unreliable and frequently had availability issues.

We only have to stretch our memory back to the spring to recall an example that made national headlines. The National Vulnerability Database (NVD) ran into funding issues. This impacted both the processing of new vulnerabilities AND the availability of their API. This created significant friction for Anchore customers—not to mention the entire software industry.

Now, Anchore is running its own enterprise-grade service, named Anchore Data Service (ADS). It is a replacement for the former feed service. ADS aggregates all of the community data feeds, enriches the data (with proprietary threat data), and packages it for customers, all with the service availability guarantee expected of an enterprise service.

The new architecture with ADS as the intermediary is illustrated below:

As a bonus for our customers running air-gapped deployments of Anchore Enterprise, there is no more need to run a second deployment of Anchore Enterprise in a DMZ to pull down the data feeds. Instead, a single file is pulled from ADS and transferred to a USB thumb drive. From there, a single CLI command updates your air-gapped deployment of Anchore Enterprise.

Increased AnchoreCTL Ecosystem Coverage

We have increased the number of supported ecosystems (e.g., C++, Swift, Elixir, R, etc.) in Anchore Enterprise. This improves coverage and increases the likelihood that all of your organization’s applications can be scanned and protected by Anchore Enterprise.

More importantly, we have completely re-architected the process for how Anchore Enterprise supports new ecosystems. By integrating Syft—Anchore’s open source SBOM generation tool—directly into AnchoreCTL, Anchore’s customers will now get access to new ecosystem support as they are merged into Syft’s codebase.

Before, Syft and AnchoreCTL were somewhat separate, which caused AnchoreCTL’s support for new ecosystems to lag behind Syft’s. Now, they are fully integrated. This enables all of Anchore’s enterprise and public sector customers to take full advantage of the open source community’s development velocity.

Full list of supported ecosystems

Below is a complete list of all supported ecosystems by both Syft and AnchoreCTL (as of Anchore Enterprise 5.10; see our docs for most current info):

  • Alpine (apk)
  • C (conan)
  • C++ (conan)
  • Dart (pubs)
  • Debian (dpkg)
  • Dotnet (deps.json)
  • Objective-C (cocoapods)
  • Elixir (mix)
  • Erlang (rebar3)
  • Go (go.mod, Go binaries)
  • Haskell (cabal, stack)
  • Java (jar, ear, war, par, sar, nar, native-image)
  • JavaScript (npm, yarn)
  • Jenkins Plugins (jpi, hpi)
  • Linux kernel archives (vmlinuz)
  • Linux kernel modules (ko)
  • Nix (outputs in /nix/store)
  • PHP (composer)
  • Python (wheel, egg, poetry, requirements.txt)
  • Red Hat (rpm)
  • Ruby (gem)
  • Rust (cargo.lock)
  • Swift (cocoapods, swift-package-manager)
  • WordPress plugins

After you update to Anchore Enterprise 5.10, the SBOM inventory will display all of the new ecosystems. Any SBOMs that have been generated for a particular ecosystem will show up at the top. The screenshot below gives you an idea of what this will look like:

Wrap-Up

Anchore Enterprise 5.10 marks a new chapter in providing reliable, enterprise-ready security tooling for modern software development. The introduction of the Anchore Data Service ensures that you have consistent and dependable access to critical vulnerability and exploit data, while the expanded ecosystem support means that no part of your tech stack is left unscrutinized for latent risk. Upgrade to the latest version and experience these new features for yourself.

To update and leverage these new features, check out our docs, reach out to your Customer Success Engineer, or contact our support team. Your feedback is invaluable to us, and we look forward to continuing to support your organization’s security needs. We are now offering all of our product updates as a quarterly product update webinar series. Watch the fall webinar update in the player below to get all of the juicy tidbits from our product team.

Who watches the watchmen? Introducing yardstick validate

Grype scans images for vulnerabilities, but who tests Grype? If Grype does or doesn’t find a given vulnerability in a given artifact, is it right? In this blog post, we’ll dive into yardstick, an open-source tool by Anchore for comparing the results of different vulnerability scans, both against each other and against data hand-labeled by security researchers.

Quality Gates

In Anchore’s CI pipelines, we have a concept we call a “quality gate.” A quality gate’s job is to ensure that each change to each of our tools results in matching that is at least as good as it was before the change. To talk about quality gates, we need a couple of terms:

  • Reference tool configuration, or just “reference tool” for short – this is an invocation of the tool (Grype, in this case) as it works today, without the change we are considering making
  • Candidate tool configuration, or just “candidate tool” for short – this is an invocation of the tool with the change we’re trying to verify. Maybe we changed Vunnel, or the Grype source code itself, for example.
  • Test images are images that Anchore has built that are known to have vulnerabilities
  • Label data is data our researchers have labeled, essentially writing down, “for image A, for package B at version C, vulnerability X is really present (or is not really present)”

The important thing about the quality gate is that it’s an experiment – it changes only one thing to test the hypothesis. The hypothesis is always, “the candidate tool is as good or better than the reference tool,” and the one thing that’s different is the difference between the candidate tool and the reference tool. For example, if we’re testing a code change in Grype itself, the only difference between reference tool and candidate tool is the code change; the database of vulnerabilities will be the same for both runs. On the other hand, if we’re testing a change to how we build the database, the code for both Grypes will be the same, but the database used by the candidate tool will be built by the new code.

Now let’s talk through the logic of a quality gate:

  1. Run the reference tool and the candidate tool to get reference matches and candidate matches
  2. If both tools found nothing, the test is invalid. (Remember we’re scanning images that intentionally have vulnerabilities to test a scanner.)
  3. If both tools find exactly the same vulnerabilities, the test passes, because the candidate tool can’t be worse than the reference tool if they find the same things
  4. Finally, if both the reference tool and the candidate tool find at least one vulnerability, but not the same set of vulnerabilities, then we need to do some matching math

Matching Math: Precision, Recall, and F1

The first math problem we do is easy: Did we add too many False Negatives? (Usually one is too many.) For example, if the reference tool found a vulnerability, and the candidate tool didn’t, and the label data says it’s really there, then the gate should fail – we can’t have a vulnerability matcher that misses things we know about!

The second math problem is also pretty easy: did we leave too many matches unlabeled? If we did, we can’t do a comparison – if the reference tool and the candidate tool both found a lot of vulnerabilities, but we don’t know whether those vulnerabilities are really present or not, we can’t say which set of results is better. So the gate should fail, and the engineer making the change will go and label more vulnerabilities.

Now, we get to the harder math. Let’s say the reference tool and the candidate tool both find vulnerabilities, but not exactly the same ones, and the candidate tool doesn’t introduce any false negatives. But let’s say the candidate tool does introduce a false positive or two, but it also fixes false positives and false negatives that the reference tool was wrong about. Is it better? Now we have to borrow some math from science class:

  • Precision is the fraction of matches that are true positives. So if one of the tools found 10 vulnerabilities, and 8 of them are true positives, the precision is 0.8.
  • Recall is the fraction of vulnerabilities that the tool found. So if there were 10 vulnerabilities present in the image and Grype found 9 of them, the recall is 0.9.
  • F1 score is a calculation based on precision and recall that tries to reward high precision and high recall, while penalizing low precision and penalizing low recall. I won’t type out the calculation but you can read about it on Wikipedia or see it calculated in yardstick’s source code.
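As a quick worked example: a tool with precision 0.8 and recall 0.9 has F1 = 2 × (0.8 × 0.9) / (0.8 + 0.9) ≈ 0.85, so a change that nudges precision up slightly while sacrificing a lot of recall still drags the F1 score down.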

So what’s new in yardstick

Recently, the Anchore OSS team released the yardstick validate subcommand. This subcommand encapsulates the above work in a single command, centralizing a bunch of test Python code that had been spread across the different OSS repositories.

Now, to add a quality gate with a set of images, we just need to add some yaml like:

pr_vs_latest_via_sbom_2022:
    description: "same as 'pr_vs_latest_via_sbom', but includes vulnerabilities from 2022 and before, instead of 2021 and before"
    max_year: 2022
    validations:
      - max-f1-regression: 0.1 # allowed to regress 0.1 on f1 score
        max-new-false-negatives: 10
        max-unlabeled-percent: 0
        max_year: 2022
        fail_on_empty_match_set: false
    matrix:
      images:
        - docker.io/anchore/test_images:azurelinux3-63671fe@sha256:2d761ba36575ddd4e07d446f4f2a05448298c20e5bdcd3dedfbbc00f9865240d
      tools:
        - name: syft
          # note: we want to use a fixed version of syft for capturing all results (NOT "latest")
          version: v0.98.0
          produces: SBOM
          refresh: false
        - name: grype
          version: path:../../+import-db=db.tar.gz
          takes: SBOM
          label: candidate # is candidate better than the current baseline?
        - name: grype
          version: latest+import-db=db.tar.gz
          takes: SBOM
          label: reference # this run is the current baseline

We think this change will make it easier to contribute to Grype and Vunnel. We know it helped out in the recent work to release Azure Linux 3 support.

If you’d like to discuss any topics this post raises, join us on discourse.

Preparing for a critical vulnerability

One morning, you wake up and see a tweet like the one above. The immediate response is often panic. This sounds bad; it probably affects everyone, and nobody knows for certain what to do next. Eventually, the panic subsides, but we still have a problem that needs to be dealt with. So the question to ask is: What can we do?

Don’t panic

Having only a vague statement about a situation that will apparently affect everyone sounds like a problem we can’t possibly prepare for. Waiting is generally the worst option in situations like this, but there are some things we can do to help ourselves out.

One of the biggest challenges in modern infrastructure is just understanding what you have. This sounds silly, but it’s a tough problem because of how most software is built now. You depend on a few open-source projects, and those projects depend on a few other open-source projects, which rely on even more open-source projects. And before you know it, you have 300 open-source projects instead of the 6 you think you installed.

Our goal is to create an inventory of all our software. If we have an accurate and updated inventory, you don’t have to wonder if you’re running some random application, like CUPS. You will know beyond a reasonable doubt. Knowing what software you do (or don’t) have deployed can bring an amazing amount of peace of mind.

This is not new

This was the same story during the Log4j and xz emergencies. What induced panic wasn’t the vulnerabilities themselves but the scramble to find where those libraries were deployed. In many instances, we observed engineers manually connecting to servers with SSH and dissecting container images.

These security emergencies will never end, but they all play out similarly. There is a gap between when something is announced and when good, actionable guidance appears. Once the security community comes to a better understanding of the issue, we can figure out the best way to deal with the problem. That could mean updating packages, adjusting firewall rules, or changing a configuration option.

While we wait for the best guidance, what if we spent that time going through our software inventory? When Log4Shell happened, almost everyone spent the first few days or weeks (or months) just figuring out if they had Log4j anywhere. If you have an inventory, those first few days can instead be spent putting together a plan for applying the guidance you know is coming. It’s a much nicer way to spend the time than frantically searching!

The inventory

Creating an inventory sounds like it shouldn’t be that hard. How much software do you REALLY have? It’s hard, and software poses a unique challenge because there are nearly infinite ways to build and assemble any given application. Do you have OpenSSL as an operating system package? Or is it a library in a random directory? Maybe it’s statically compiled into the binary. Maybe we download a copy off the internet when the application starts. Maybe it’s all of these … at the same time.

This complexity is taken to a new level when you consider how many computers, servers, containers, and apps are deployed. The scale means automation is the only way we can do this. Humans cannot handcraft an inventory. They are too slow and make too many mistakes for this work, but robots are great at it!

But the automation we have today isn’t perfect. It’s early days for many of these scanners and inventory formats (such as a Software Bill of Materials or SBOM). We must grasp what possible blind spots our inventories may have. For example, some scanners do a great job finding operating system packages but aren’t as good at finding Java archives (jar files). This is part of what makes the current inventory process difficult. The tooling is improving at an impressive rate; don’t write anything off as too incomplete. It will change and get better in the future.

Enter the SBOM

Now that we have mentioned SBOMs, we should briefly explain how they fit into this inventory universe. An SBOM does nothing by itself; it’s just a file format for capturing information, such as a software inventory.

Anchore developers have written plenty over the years about what an SBOM is, but here is the tl;dr:

An SBOM is a detailed list of all software project components, libraries, and dependencies. It serves as a comprehensive inventory that helps understand the software’s structure and the origins of its components.

An SBOM in your project enhances security by quickly identifying and mitigating vulnerabilities in third-party components. Additionally, it ensures compliance with regulatory standards and provides transparency, which is essential for maintaining trust with stakeholders and users.

An example

To explain what all this looks like and some of the difficulties, let’s go over an example using the eclipse-temurin Java runtime container image. It would be very common for a developer to build on top of this image. It also shows many of the challenges in trying to pin down a software inventory.

The Dockerfile we’re going to reference can be found on GitHub, and the container image can be found on Docker Hub.

The first observation is that this container uses Ubuntu as the underlying container image.

This is great: Ubuntu has a very nice packaging system, and it’s no trouble to see what’s installed. We can easily do this with Syft.

bress@anchore   ~ syft ubuntu:24.04
  Parsed image sha256:61b2756d6fa9d6242fafd5b29f674404779be561db2d0bd932aa3640ae67b9e1
  Cataloged contents 74f92a6b3589aa5cac6028719aaac83de4037bad4371ae79ba362834389035aa
   ├──  Packages                        [91 packages]
   ├──  File digests                    [2,259 files]
   ├──  File metadata                   [2,259 locations]
   └──  Executables                     [722 executables]
NAME                 VERSION                      TYPE
apt                  2.7.14build2                 deb
base-files           13ubuntu10.1                 deb
base-passwd          3.6.3build1                  deb
bash                 5.2.21-2ubuntu4              deb
bsdutils             1:2.39.3-9ubuntu6.1          deb
coreutils            9.4-3ubuntu6                 deb
dash                 0.5.12-6ubuntu5              deb
debconf              1.5.86ubuntu1                deb
debianutils          5.17build1                   deb
diffutils            1:3.10-1build1               deb
dpkg                 1.22.6ubuntu6.1              deb
e2fsprogs            1.47.0-2.4~exp1ubuntu4.1     deb
findutils            4.9.0-5build1                deb
gcc-14-base          14-20240412-0ubuntu1         deb
gpgv                 2.4.4-2ubuntu17              deb
grep                 3.11-4build1                 deb
gzip                 1.12-1ubuntu3                deb
hostname             3.23+nmu2ubuntu2             deb
init-system-helpers  1.66ubuntu1                  deb

There has been nothing exciting so far. But if we look a little deeper at the eclipse-temurin Dockerfile, we see it installs the Java JDK using wget. That’s not something we’ll find just by looking at Ubuntu packages.

If we scan this image with Syft, we can see a few different types of packages installed.

bress@anchore ~ syft eclipse-temurin:8u422-b05-jre-noble
  Parsed image sha256:d2c2442dea2a2b1164bd6dd39af673db2215ff680910aff7417432b00a3c8e4d
  Cataloged contents 805b45dee2c503f1cca36e1ecc6e8625538592e2db32cc04e317a246fb86d0fc
   ├──  Packages                        [142 packages]
   ├──  File digests                    [3,856 files]
   ├──  File metadata                   [3,856 locations]
   └──  Executables                     [809 executables]
NAME                 VERSION                            TYPE

hostname             3.23+nmu2ubuntu2                   deb
init-system-helpers  1.66ubuntu1                        deb
jaccess              UNKNOWN                            java-archive
jce                  1.8.0_422                          java-archive
jfr                  1.8.0_422                          java-archive
jsse                 1.8.0_422                          java-archive
libacl1              2.3.2-1build1                      deb
libapt-pkg6.0t64     2.7.14build2                       deb
libassuan0           2.5.6-1build1                      deb
libattr1             1:2.5.2-1build1                    deb
libaudit-common      1:3.1.2-2.1build1                  deb

The jdk and jre are binaries in the image, as are some Java archives. This is a gotcha to watch for when you’re building an inventory. Many inventories and scanners may only look for known-good packages, not binaries and other files installed on the system. In a perfect world, our SBOM tells us details about everything in the image, not just one package type.

At this point, you can imagine a developer adding more things to the container: code they wrote, Java Archives, data files, and maybe even a few more binary files, probably installed with wget or curl.

What next

This sounds pretty daunting, but it’s not that hard to start building an inventory. You don’t need a fancy system. The easiest way is to find an open source SBOM generator, like Syft, and put the SBOMs in a directory. It’s not perfect, but even searching through those files is faster than manually finding every version of CUPS in your infrastructure.
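Here is a rough sketch of that “SBOMs in a directory” approach, assuming only Syft and standard shell tools (the image names are arbitrary examples):

mkdir -p sboms
for img in ubuntu:24.04 nginx:latest eclipse-temurin:8u422-b05-jre-noble; do
  syft -q -o json "$img" > "sboms/$(echo "$img" | tr '/:' '_').json"   # one SBOM file per image
done
grep -il cups sboms/*.json    # crude but effective: which images mention cups at all?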

Once you have an initial inventory in hand, you can investigate more complete solutions. There are countless open-source projects, products (such as Anchore Enterprise), and services that can help here. Whichever route you take, don’t expect to go from zero to complete overnight. Big projects need immense patience.

It’s like the old proverb that the best time to plant a tree was twenty years ago; the second best time is now. The best time to start an inventory system was a decade ago; the second best time is now.

If you’d like to discuss any topics raised in this post, join us on this discourse thread.

Compliance Requirements for DISA’s Security Technical Implementation Guides (STIGs)

In the rapidly modernizing landscape of cybersecurity compliance, evolving to a continuous compliance posture is more critical than ever—particularly for organizations involved with the Department of Defense (DoD) and other government agencies. At the heart of the DoD’s modern approach to software development is the DoD Enterprise DevSecOps Reference Design, commonly implemented as a DoD Software Factory.

A key component of this framework is adhering to the Security Technical Implementation Guides (STIGs) developed by the Defense Information Systems Agency (DISA). STIG compliance within the DevSecOps pipeline not only accelerates the delivery of secure software but also embeds robust security practices directly into the development process, safeguarding sensitive data and reinforcing national security.

This comprehensive guide will walk you through what STIGs are, who should care about them, the levels of STIG compliance, key categories of STIG requirements, how to prepare for the STIG compliance process, and the tools available to automate STIG implementation and maintenance.

What are STIGs and who should care?

Understanding DISA and STIGs

The Defense Information Systems Agency (DISA) is the DoD agency responsible for delivering information technology (IT) support to ensure the security of U.S. national defense systems. To help organizations meet the DoD’s rigorous security controls, DISA develops Security Technical Implementation Guides (STIGs).

STIGs are configuration standards that provide prescriptive guidance on how to secure operating systems, network devices, software, and other IT systems. They serve as a secure configuration standard to harden systems against cyber threats.

For example, a STIG for the open source Apache web server would specify that encryption is enabled for all traffic (incoming or outgoing). This would require generating SSL/TLS certificates on the server in the correct location, updating the server’s configuration file to reference those certificates, and reconfiguring the server to serve traffic from a secure port rather than the default insecure port.

Who should care about STIG compliance?

STIG compliance is mandatory for any organization that operates within the DoD network or handles DoD information. This includes:

  • DoD Contractors and Vendors: Companies providing products or services to the DoD—a.k.a. the defense industrial base (DIB)—must ensure their systems comply with STIG requirements.
  • Government Agencies: Federal agencies interfacing with the DoD need to adhere to applicable STIGs.
  • DoD Information Technology Teams: IT professionals within the DoD responsible for system security must implement STIGs.

Connection to the RMF and NIST SP 800-53

The Risk Management Framework (RMF)—more formally, NIST SP 800-37—is a framework that integrates security and risk management into IT systems as they are being developed. The STIG compliance process outlined below is directly integrated into the higher-level RMF process. As you follow the RMF, the individual steps of STIG compliance will be completed in turn.

STIGs are also closely connected to NIST SP 800-53, colloquially known as the “Control Catalog”. NIST SP 800-53 outlines security and privacy controls for all federal information systems; the controls are not prescriptive about implementation, only about the best practices and outcomes that need to be achieved.

As DISA developed the STIG compliance standard, it started with the NIST SP 800-53 controls and then “tailored” them to meet the needs of the DoD; these customized security best practices are known as Security Requirements Guides (SRGs). To remove any ambiguity around how to meet these higher-level best practices, STIGs were created with implementation-specific instructions.

For example, an SRG will mandate that all systems utilize a cybersecurity best practice, such as, role-based access control (RBAC) to prevent users without the correct privileges from accessing certain systems. A STIG, on the other hand, will detail exactly how to configure an RBAC system to meet the highest security standards.

Levels of STIG Compliance

The DISA STIG compliance standard uses Severity Category Codes to classify vulnerabilities based on their potential impact on system security. These codes help organizations prioritize remediation efforts. The three Severity Category Codes are:

  1. Category I (Cat I): These are the highest risk vulnerabilities, allowing an attacker immediate access to a system or network or allowing superuser access. Due to their high-risk nature, these vulnerabilities must be addressed immediately.
  2. Category II (Cat II): These vulnerabilities provide information with a high potential of giving access to intruders. These findings are considered a medium risk and should be remediated promptly.
  3. Category III (Cat III): These vulnerabilities constitute the lowest risk, providing information that could potentially lead to compromise. Although not as pressing as Cat I & II issues, it is still important to address these vulnerabilities to minimize risk and enhance overall security.

Understanding these categories is crucial in the STIG process, as they guide organizations in prioritizing remediation of vulnerabilities.

Key categories of STIG requirements

Given the extensive range of technologies used in DoD environments, there are hundreds of STIGs applicable to different systems, devices, applications, and more. While we won’t list all STIG requirements here, it’s important to understand the key categories and who they apply to.

1. Operating System STIGs

Applies to: System Administrators and IT Teams managing servers and workstations

Examples:

  • Microsoft Windows STIGs: Provides guidelines for securing Windows operating systems.
  • Linux STIGs: Offers secure configuration requirements for various Linux distributions.

2. Network Device STIGs

Applies to: Network Engineers and Administrators

Examples:

  • Network Router STIGs: Outlines security configurations for routers to protect network traffic.
  • Network Firewall STIGs: Details how to secure firewall settings to control access to networks.

3. Application STIGs

Applies to: Software Developers and Application Managers

Examples:

  • Generic Application Security STIG: Outlines the security best practices applications must follow to be STIG compliant.
  • Web Server STIG: Provides security requirements for web servers.
  • Database STIG: Specifies how to secure database management systems (DBMS).

4. Mobile Device STIGs

Applies to: Mobile Device Administrators and Security Teams

Examples:

  • Apple iOS STIG: Provides guidance for securing Apple mobile devices used within the DoD.
  • Android OS STIG: Details security configurations for Android devices.

5. Cloud Computing STIGs

Applies to: Cloud Service Providers and Cloud Infrastructure Teams

Examples:

  • Microsoft Azure SQL Database STIG: Offers security requirements for Azure SQL Database cloud service.
  • Cloud Computing OS STIG: Details secure configurations for any operating system offered by a cloud provider that doesn’t have a specific STIG.

Each category addresses specific technologies and includes a STIG checklist to ensure all necessary configurations are applied. 

You can view an example of a STIG checklist for “Application Security and Development” by following this link.

How to Prepare for the STIG Compliance Process

Achieving DISA STIG compliance involves a structured approach. Here are the stages of the STIG process and tips to prepare:

Stage 1: Identifying Applicable STIGs

With hundreds of STIGs relevant to different organizations and technology stacks, this step should not be underestimated. First, conduct an inventory of all systems, devices, applications, and technologies in use. Then, review the complete list of STIGs to match each to your inventory to ensure that all critical areas requiring secure configuration are addressed. This step is essential to avoiding gaps in compliance.

Tip: Use automated tools to scan your environment, then match assets to the relevant STIGs.

Stage 2: Implementation

After you’ve mapped your technology to the corresponding STIGs, the process of implementing the security configurations outlined in the guides begins. This step may require collaboration between IT, security, and development teams to ensure that the configurations are compatible with the organization’s infrastructure while enforcing strict security standards. Be sure to keep detailed records of changes made.

Tip: Prioritize implementing fixes for Cat I vulnerabilities first, followed by Cat II and Cat III. Depending on the urgency and needs of the mission, ATO can still be achieved with partial STIG compliance. Prioritizing efforts increases the chances that partial compliance is permitted.

Stage 3: Auditing & Maintenance

After the STIGs have been implemented, regular auditing and maintenance are critical to ensure ongoing compliance, verifying that no deviations have occurred over time due to system updates, patches, or other changes. This stage includes periodic scans, manual reviews, and remediation of any identified gaps. Additionally, organizations should develop a plan to stay informed about new STIG releases and updates from DISA.

Tip: Establish a maintenance schedule and assign responsibilities to team members. Alternatively, adopt a policy-as-code approach to continuous compliance: by embedding STIG compliance requirements directly into your DevSecOps pipeline as code, you can automate this process.

General Preparation Tips

  • Training: Ensure your team is familiar with STIG requirements and the compliance process.
  • Collaboration: Work cross-functionally with all relevant departments, including IT, security, and compliance teams.
  • Resource Allocation: Dedicate sufficient resources, including time and personnel, to the compliance effort.
  • Continuous Improvement: Treat STIG compliance as an ongoing process rather than a one-time project.

Tools to automate STIG implementation and maintenance

Automation can significantly streamline the STIG compliance process. Here are some tools that can help:

1. Anchore STIG (Static and Runtime)

  • Purpose: Automates the process of checking container images against STIG requirements.
  • Benefits:
    • Simplifies compliance for containerized applications.
    • Integrates into CI/CD pipelines for continuous compliance.
  • Use Case: Ideal for DevSecOps teams utilizing containers in their deployments.

2. SCAP Compliance Checker

  • Purpose: Provides automated compliance scanning using the Security Content Automation Protocol (SCAP).
  • Benefits:
    • Validates system configurations against STIGs.
    • Generates detailed compliance reports.
  • Use Case: Useful for system administrators needing to audit various operating systems.

3. DISA STIG Viewer

  • Purpose: Helps in viewing and managing STIG checklists.
  • Benefits:
    • Allows for easy navigation of STIG requirements.
    • Facilitates documentation and reporting.
  • Use Case: Assists compliance officers in tracking compliance status.

4. DevOps Automation Tools

  • Infrastructure Automation Examples: Red Hat Ansible, Perforce Puppet, Hashicorp Terraform
  • Software Build Automation Examples: CloudBees CI, GitLab
  • Purpose: Automate the deployment of secure configurations that meet STIG compliance across multiple systems.
  • Benefits:
    • Ensures consistent application of secure configuration standards.
    • Reduces manual effort and the potential for errors.
  • Use Case: Suitable for large-scale environments where manual configuration is impractical.

5. Vulnerability Management Tools

  • Examples: Anchore Secure
  • Purpose: Identify vulnerabilities and compliance issues within your network.
  • Benefits:
    • Provides actionable insights to remediate security gaps.
    • Offers continuous monitoring capabilities.
  • Use Case: Critical for security teams focused on proactive risk management.

Wrap-Up

Achieving DISA STIG compliance is mandatory for organizations working with the DoD. By understanding what STIGs are, who they apply to, and how to navigate the compliance process, your organization can meet the stringent compliance requirements set forth by DISA. As a bonus, it will enhance its security posture and reduce the potential for a security breach.

Remember, compliance is not a one-time event but an ongoing effort that requires regular updates, audits, and maintenance. Leveraging automation tools like Anchore STIG and Anchore Secure can significantly ease this burden, allowing your team to focus on strategic initiatives rather than manual compliance tasks.

Stay proactive, keep your team informed, and make use of the resources available to ensure that your IT systems remain secure and compliant.

Navigating Open Source Software Compliance in Regulated Industries

Open source software (OSS) brings a wealth of benefits: speed, innovation, and cost savings. But when serving customers in highly regulated industries like defense, energy, or finance, a new complication enters the picture—compliance.

Imagine your DevOps-fluent engineering team has been leveraging OSS to accelerate product delivery, and suddenly, a major customer hits you with a security compliance questionnaire. What now? 

Regulatory compliance isn’t just about managing the risks of OSS for your business anymore; it’s about providing concrete evidence that you meet standards like FedRAMP and the Secure Software Development Framework (SSDF).

The tricky part is that the OSS “suppliers” making up 70-90% of your software supply chain aren’t traditional vendors—they don’t have the same obligations or accountability, and they’re not necessarily aligned with your compliance needs. 

So, who bears the responsibility? You do.

The OSS your engineering team consumes is your resource and your responsibility. This means you’re not only tasked with managing the security risks of using OSS but also with proving that both your applications and your OSS supply chain meet compliance standards. 

In this post, we’ll explore why you’re ultimately responsible for the OSS you consume and outline practical steps to help you use OSS while staying compliant.

Learn about CISA’s SSDF attestation form and how to meet compliance.

What does it mean to use open source software in a regulated environment?

Highly regulated environments add a new wrinkle to the OSS security narrative. The OSS developers who author the software dependencies that make up the vast majority of modern software supply chains aren’t vendors in the traditional sense. They are more of a volunteer force that allows you to re-use their work, but it is a take-it-or-leave-it agreement. You have no recourse if the software doesn’t work as expected or, worse, has vulnerabilities in it.

So, how do you meet compliance standards when your software supply chain is built on top of a foundation of OSS?

Who is the vendor? You are!

Whether you have internalized this or not, the open source software that your developers consume is your resource and thus your responsibility.

This means that you shoulder the burden of not only managing the security risk of consuming OSS but also proving that both your applications and your OSS supply chain meet compliance standards.

Open source software is a natural resource

Before we jump into how to accomplish the task set forth in the previous section, let’s take some time to understand why you are the vendor when it comes to open source software.

The common idea is that OSS is produced by a 3rd-party that isn’t part of your organization, so they are the software supplier. Shouldn’t they be the ones required to secure their code? They control and maintain what goes in, right? How are they not responsible?

To answer that question, let’s think about OSS as a natural resource that is shared by the public at large, for instance, the public water supply.

This shouldn’t be too much of a stretch. We already use terms like upstream and downstream to think about the relationship between software dependencies and the global software supply chain.

Using this mental model, it becomes easier to understand that a public good isn’t a supplier. You can’t ask a river or a lake for an audit report certifying that it is contaminant-free and safe to drink.

Instead the organization that processes and provides the water to the community is responsible for testing the water and guaranteeing its safety. In this metaphor, your company is the one processing the water and selling it as pristine bottled water. 

How do you pass the buck to your “supplier”? You can’t. That’s the point.

This probably has you asking yourself: if I am responsible for my own OSS supply chain, how do I meet a compliance standard for something I don’t have control over? Keep reading and you’ll find out.

How do I use OSS and stay compliant?

While compliance standards are often thought of as rigid, the reality is much more nuanced. Just because your organization doesn’t own/control the open source projects that you consume doesn’t mean that you can’t use OSS and meet compliance requirements.

There are a few different steps that you need to take in order to build a “reasonably secure” OSS supply chain that will pass a compliance audit. We’ll walk you through the steps below:

Step 1 — Know what you have (i.e., an SBOM inventory)

The foundation of the global software supply chain is the SBOM (software bill of materials) standard. Each of the security and compliance functions outlined in the steps below use or manipulate an SBOM.

SBOMs are the foundational component of the global software supply chain because they record the ingredients that were used to produce the application an end-user will consume. If you don’t have a good grasp of the ingredients of your applications there isn’t much hope for producing any upstream security or compliance guarantees.

The best way to create observability into your software supply chain is to generate an SBOM for every single application in your DevSecOps build pipeline—at each stage of the pipeline!

Step 2 — Maintain a historical record of application source code

To meet compliance standards like FedRAMP and SSDF, you need to be able to maintain a historical record of the source code of your applications, including: 

  • Where it comes from, 
  • Who created it, and 
  • Any modifications made to it over time.

SBOMs were designed to meet these requirements. They act as a record of how applications were built and when/where OSS dependencies were introduced. They also double as compliance artifacts that prove you are compliant with regulatory standards.

Governments aren’t content with self-attestation (at least not for long); they need hard evidence to verify that you are trustworthy. Even though SSDF is currently self-attestation only, the federal government is known for rolling out compliance frameworks in stages: first advising on best practices, then requiring self-attestation, and finally mandating external validation via a certification process.

The Cybersecurity Maturity Model Certification (CMMC) is a good example of this dynamic process. It recently transitioned from self-attestation to external validation with the introduction of the 2.0 release of the framework.

Step 3 — Manage your OSS vulnerabilities

Not only do you need to keep a record of applications as they evolve over time, you also have to track the known vulnerabilities of your OSS dependencies to achieve compliance. Just as SBOMs prove provenance, vulnerability scans are evidence that your application and its dependencies are free of known vulnerabilities. These scans are a crucial piece of the evidence that you will need to provide to your compliance officer as you go through the certification process.

Remember: the buck stops with you! If the OSS that your application consumes doesn’t supply an SBOM and vulnerability scan (which is the case for essentially all OSS projects), then you are responsible for creating them. There is no vendor to pass the blame to for proving that your supply chain is reasonably secure and thus compliant.
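
As a rough sketch of what that looks like with open source tooling like Syft and Grype (the image reference and file names below are placeholders):

syft registry.example.com/vendor/lib:3.2.0 -o cyclonedx-json > lib-3.2.0.sbom.json
grype sbom:./lib-3.2.0.sbom.json -o json > lib-3.2.0.vuln-report.json   # keep as a compliance artifact

Stored alongside your build records, the SBOM and the scan output become the evidence you hand to an auditor.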

Step 4 — Continuous compliance of open source software supply chain

It is important to recognize that modern compliance standards are no longer sprints but marathons. Not only do you have to prove that your applications are compliant at the time of the audit, you also have to demonstrate that they remain secure continuously in order to maintain your certification.

This can be challenging to scale, but it is made easier by integrating SBOM generation, vulnerability scanning, and policy checks directly into the DevSecOps pipeline. This is the approach that modern, SBOM-powered SCAs (software composition analysis platforms) advocate for.

By embedding compliance policy-as-code into your DevSecOps pipeline as policy gates, compliance can be maintained over time. Developers are alerted when their code doesn’t meet a compliance standard and are directed to take corrective action. These compliance checks can also be used to automatically generate the compliance artifacts you need.

You already have an automated DevSecOps pipeline that produces and delivers applications with minimal human intervention. Why not take advantage of this existing tooling to automate open source software compliance in the same way that security was integrated directly into DevOps?

Real-world Examples

To help bring these concepts to life, we’ve outlined some real-world examples of how open source software and compliance intersect:

Open source project has unfixed vulnerabilities

This is far and away the most common issue that comes up during compliance audits. One of your application’s OSS dependencies has a known vulnerability that has been sitting in the backlog for months or even years!

There are several reasons why an open source software developer might leave a known vulnerability unresolved:

  • They prioritize a feature over fixing a vulnerability
  • The vulnerability is from a third-party dependency they don’t control and can’t fix
  • They don’t like fixing vulnerabilities and choose to ignore it
  • They reviewed the vulnerability and decided it’s not likely to be exploited, so it’s not worth their time
  • They’re planning a codebase refactor that will address the vulnerability in the future

These are all rational reasons for vulnerabilities to persist in a codebase. Remember, OSS projects are owned and maintained by 3rd-party developers who control the repository; they make no guarantees about its quality. They are not vendors.

You, on the other hand, are a vendor and must meet compliance requirements. The responsibility falls on you. An OSS vulnerability management program is how you meet your compliance requirements while enjoying the benefits of OSS.

Need to fill out a supplier questionnaire

Imagine you’re a cloud service provider or software vendor. Your sales team is trying to close a deal with a significant customer. As the contract nears signing, the customer’s legal team requests a security questionnaire. They’re in the business of protecting their organization from financial risk stemming from their supply chain, and your company is about to become part of that supply chain.

These forms are usually from lawyers, very formal, and not focused on technical attacks. They just want to know what you’re using. The quick answer? “Here’s our SBOM.” 

Compliance comes in the form of public standards like FedRAMP, SSDF, NIST, etc., and these less formal security questionnaires. Either way, being unable to provide a full accounting of the risks in your software supply chain can be a speed bump to your organization’s revenue growth and success.

Integrating SBOM scanning, generation, and management deeply into your DevSecOps pipeline is key to accelerating the sales process and your organization’s overall success.

Prove provenance

CISA’s SSDF Attestation form requires that enterprises selling software to the federal government can produce a historical record of their applications. Quoting directly: “The software producer [must] maintain provenance for internal code and third-party components incorporated into the software to the greatest extent feasible.”

If you want access to the revenue opportunities the U.S. federal government offers, SSDF attestation is the needle you have to thread. Meeting this requirement without hiring an army of compliance engineers to manually review your entire DevSecOps pipeline demands an automated OSS component observability and management system.

Often, we jump to cryptographic signatures, encryption keys, trust roots—this quickly becomes a mess. Really, just a hash of the files in a database (read: SBOM inventory) satisfies the requirement. Sometimes, simpler is better. 
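
Syft already catalogs file digests as part of SBOM generation (note the “File digests” line in the scan output earlier in this document), so much of this record comes for free. A minimal, hypothetical sketch of rounding that out with a dated provenance log (all paths are placeholders):

# Record digests of the SBOMs and the artifacts they describe at release time.
date -u >> provenance-log.txt
sha256sum sboms/*.spdx.json dist/app-api-1.4.2.tar.gz >> provenance-log.txt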

Discover the “easy button” to SSDF Attestation and OSS supply chain compliance in our previous blog post.

Takeaways

OSS Is Not a Vendor—But You Are! The best way to have your OSS cake and eat it too (without the indigestion) is to:

  1. Know Your Ingredients: Maintain an SBOM inventory of your OSS supply chain.
  2. Maintain a Complete Historical Record: Keep track of your application’s source code and build process.
  3. Scan for Known Vulnerabilities: Regularly check your OSS dependencies.
  4. Continuous Compliance through Automation: Generate compliance records automatically to scale your compliance process.

There are numerous reasons to aim for open source software compliance, especially for your software supply chain:

  • Balance Gains Against Risks: Leverage OSS benefits while managing associated risks.
  • Reduce Financial Risk: Protect your organization’s existing revenue.
  • Increase Revenue Opportunities: Access new markets that mandate specific compliance standards.
  • Avoid Becoming a Cautionary Tale: Stay ahead of potential security incidents.

Regardless of your motivation for wanting to use OSS and use it responsibly (i.e., securely and compliantly), Anchore is here to help. Reach out to our team to learn more about how to build and manage a secure and compliant OSS supply chain.

Learn the container security best practices, including open source software (OSS) security, to reduce the risk of software supply chain attacks.

US Navy achieves ATO in days with continuous compliance and OSS risk management

Implementing secure and compliant software solutions within the Department of Defense’s (DoD) software factory framework is no small feat. 

For Black Pearl, the premier DevSecOps platform for the U.S. Navy, and Sigma Defense, a leading DoD technology contractor, the challenge was not just about meeting stringent security requirements but about empowering the warfighter.

We’ll cover how they streamlined compliance, managed open source software (OSS) risk, and reduced vulnerability overload—all while accelerating their Authority to Operate (ATO) process.

Challenge: Navigating Complex Security and Compliance Requirements

Black Pearl and Sigma Defense faced several critical hurdles in meeting the stringent security and compliance standards of the DoD Enterprise DevSecOps Reference Design:

  • Achieving RMF Security and Compliance: Black Pearl needed to secure its own platform and help its customers achieve ATO under the Risk Management Framework (RMF). This involved meeting stringent security controls like RA-5 (Vulnerability Management), SI-3 (Malware Protection), and IA-5 (Credential Management) for both the platform and the applications built on it.
  • Maintaining Continuous Compliance: With the RAISE 2.0 memo emphasizing continuous ATO compliance, manual processes were no longer sufficient. The teams needed to automate compliance tasks to avoid the time-consuming procedures traditionally associated with maintaining ATO status.
  • Managing Open-Source Software (OSS) Risks: Open-source components are integral to modern software development but come with inherent risks. Black Pearl had to manage OSS risks for both its platform and its customers’ applications, ensuring vulnerabilities didn’t compromise security or compliance.
  • Vulnerability Overload for Developers: Developers often face an overwhelming number of vulnerabilities, many of which may not pose significant risks. Prioritizing actionable items without draining resources or slowing down development was a significant challenge.

“By using Anchore and the Black Pearl platform, applications inherit 80% of the RMF’s security controls. You can avoid all of the boring stuff and just get down to what everyone does well, which is write code.”

Christopher Rennie, Product Lead/Solutions Architect

Solution: Automating Compliance and Security with Anchore

To address these challenges, Black Pearl and Sigma Defense implemented Anchore, which provided:

“Working alongside Anchore, we have customized the compliance artifacts that come from the Anchore API to look exactly how the AOs are expecting them to. This has created a good foundation for us to start building the POA&Ms that they’re expecting.”

Josiah Ritchie, DevSecOps Staff Engineer

  • Managing OSS Risks with Continuous Monitoring: Anchore’s integrated vulnerability scanner, policy enforcer, and reporting system provided continuous monitoring of open-source software components. This proactive approach ensured vulnerabilities were detected and addressed promptly, effectively mitigating security risks.
  • Automated Prioritization of Vulnerabilities: By integrating the Anchore Developer Bundle, Black Pearl enabled automatic prioritization of actionable vulnerabilities. Developers received immediate alerts on critical issues, reducing noise and allowing them to focus on what truly matters.

Results: Accelerated ATO and Enhanced Security

The implementation of Anchore transformed Black Pearl’s compliance process and security posture:

  • Platform ATO in 3-5 days: With Anchore’s integration, Black Pearl users accessed a fully operational DevSecOps platform within days, a significant reduction from the typical six months for DIY builds.

“The DoD has four different layers of authorizing officials in order to achieve ATO. You have to figure out how to make all of them happy. We want to innovate by automating the compliance process. Anchore helps us achieve this, so that we can build a full ATO package in an afternoon rather than taking a month or more.”

Josiah Ritchie, DevSecOps Staff Engineer

  • Significantly reduced time spent on compliance reporting: Anchore automated compliance checks and artifact generation, cutting down hours spent on manual reviews and ensuring consistency in reports submitted to authorizing officials.
  • Proactive OSS risk management: By shifting security and compliance to the left, developers identified and remediated open-source vulnerabilities early in the development lifecycle, mitigating risks and streamlining the compliance process.
  • Reduced vulnerability overload with prioritized vulnerability reporting: Anchore’s prioritization of vulnerabilities prevented developer overwhelm, allowing teams to focus on critical issues without hindering development speed.

Conclusion: Empowering the Warfighter Through Efficient Compliance and Security

Black Pearl and Sigma Defense’s partnership with Anchore demonstrates how automating security and compliance processes leads to significant efficiencies. This empowers Navy divisions to focus on developing software that supports the warfighter. 

Achieving ATO in days rather than months is a game-changer in an environment where every second counts, setting a new standard for efficiency through the combination of Black Pearl’s robust DevSecOps platform and Anchore’s comprehensive security solutions.

If you’re facing similar challenges in securing your software supply chain and accelerating compliance, it’s time to explore how Anchore can help your organization achieve its mission-critical objectives.

Download the full case study below👇

Mark Your Calendars: Anchore’s Must-Attend Events and Webinars in October

Are you ready for cooler temperatures and the changing of the leaves? Anchore is! We are excited to announce a series of events and webinars next month. From in-person conferences to insightful webinars, we have a lineup designed to keep you informed about the latest developments in software supply chain security, DevSecOps, and compliance. Join us to learn, connect, and explore how Anchore can help your organization navigate the evolving landscape of software security.

EVENT: TD Synnex Inspire

Date: October 9-12, 2024

Location: Booth T84 | Greenville Convention Center in Greenville, SC

Anchore is thrilled to be exhibiting at the 2024 TD SYNNEX Inspire. Visit us at Booth T84 in the Pavilion to discover how Anchore secures containers for AI and machine learning applications—with a special emphasis on high-performance computing (HPC).

Anchore has helped many Fortune 50 enterprises scale their container security and vulnerability management programs for their entire software supply chain including luminaries like NVIDIA. If you’d like to book dedicated time to speak with our team, drop by our booth or email us at [email protected].

WEBINAR: Introducing the Anchore Data Service

Date: October 15, 2024 at 10am PT

We will showcase the exciting new features introduced in Anchore Enterprise 5.8, 5.9, and 5.10, all designed to effortlessly secure your software supply chain and reduce risk for your organization. Highlights include:

  • Version 5.10: New Anchore Data Service which automatically updates your vulnerability feeds—even in air-gapped environments!
  • Version 5.9: Improved SBOM generation with native integration of Syft, etc.
  • Version 5.8: CISA Known Exploited Vulnerabilities (KEV) feed, etc.

We will be demoing all of the new features, sharing pro tips, and providing takeaways on how to best utilize the new releases. Don’t miss out!

EVENT: All Things Open Conference

Date: October 27-29, 2024

Location: Booth #95 | Raleigh Convention Center in Raleigh, NC

Anchore is excited to participate in the 2024 All Things Open Conference—one of the largest open source software events in the U.S. Drop by and visit us at Booth #95 to learn how our open source tools, Syft and Grype, can help you start your journey to a more secure DevSecOps pipeline. 

Anchore employees will be on hand to answer your questions and help you get started.

WEBINAR: Accelerate FedRAMP Compliance on Amazon EKS with Anchore

Date: October 29, 2024 at 10am PT

Navigating FedRAMP compliance can be challenging, but Anchore and AWS are here to simplify the process. Join Luis Morales, Solutions Architect at AWS, and Brian Thomason, Manager of Partner and Solutions Engineering at Anchore, as they explain how Cisco achieved FedRAMP compliance in weeks rather than months.

In this live session, we’ll share actionable guidance and insights that address:

  • How to meet six FedRAMP vulnerability scanning requirements
  • Automating STIG and FIPS compliance for Amazon EC2 virtual machines
  • Securing containers end-to-end across CI/CD, Amazon EKS, and ECS

We’ll also discuss the architecture of Anchore running in an AWS customer environment, demonstrating how to leverage AWS tools and services to enhance your cloud security posture.

WEBINAR: Expert Series: Solving Real-World Challenges in FedRAMP Compliance

Date: October 30, 2024 at 10am PT

Navigating the path to FedRAMP authorization can be daunting, especially with the evolving landscape of federal security requirements. In this Expert Series webinar, Neil Levine, SVP of Product at Anchore, and Mike Strohecker, Director of Cloud Operations at InfusionPoints, will share real-world stories of how we’ve helped our FedRAMP customers overcome key challenges—from achieving compliance faster to meeting the latest FedRAMP Rev 5 requirements.

We’ll dive into practical solutions, including:

  • Overcoming common FedRAMP compliance hurdles
  • Meeting Rev 5 security hardening standards like STIG and CIS (CM-6)
  • Effectively shifting security left in the CI/CD pipeline
  • Automating policy enforcement and continuous monitoring

We’ll also explore the future impact of the July 2024 FedRAMP modernization memo, highlighting how increased automation with OSCAL is transforming the compliance process.

Wrap-Up

With a brimming schedule of events, October promises to be a jam-packed month for Anchore and our community. Whether you’re interested in our latest product updates, exploring strategies for FedRAMP compliance, or connecting at industry-leading events, there’s something for everyone. Mark your calendars and join us to stay ahead in the evolving world of software supply chain security.

Stay informed about upcoming events and developments at Anchore by bookmarking our Events Page and checking back regularly!

We migrated from S3 to R2. Thankfully nobody noticed

Sometimes, the best changes are the ones that you don’t notice. Well, some of you reading this may not have noticed, but there’s a good chance that many of you did notice a hiccup or two in Grype database availability before things suddenly became a lot more stable.

One of the greatest things about Anchore is that we are empowered to make changes quickly when needed. This is the story of doing just that: identifying issues in our database distribution mechanism and making a change to improve the experience for all our users.

A Heisenbug is born

It all started some time ago, in a galaxy far away. As early as 2022, we received reports that some users had issues downloading the Grype database. These issues included general slowness and timeouts, with users receiving the dreaded “context deadline exceeded” error; manually downloading the database from a browser could show similar behavior.

Debugging these transient issues among thousands of legitimate, successful downloads was problematic for the team; no one could reproduce them reliably, so the cause remained unclear. A few more reports trickled in here and there, but everything seemed to work well whenever we tested this ourselves. Without further information, we had to chalk this up to something like unreliable network transfers in specific regions or under certain conditions, exacerbated by the moderately large size of the database: about 200 MB, compressed.

To identify any patterns and provide feedback to our CDN provider that users were having issues downloading the files, we set up a job to download the database periodically and added DataDog monitoring across many regions to do the same thing. We noticed a few things: there were periodic and regular issues downloading the database, and the failures seemed to correlate with high-volume periods – just after a new database was built, for example. We continued monitoring, but the intermittent failures didn’t seem frequent enough to cause great concern.

Small things matter

At some point leading up to August, we also began to get reports of users experiencing issues downloading the Grype database listing file. When Grype downloads the database, it first downloads a listing file to determine if a newer database exists. At the time, this file contained a historical record of 450 databases worth of metadata (90 days × each of the 5 Grype database versions), so the listing file clocked in around 200 KB. 

Grype only really needs the latest database, so the first thing we did was trim this file down to only the last few days; once we shrank this file to under 5 KB, the issues downloading the listing file itself went away. This was our first clue about the problem: smaller files worked fine.

Fast forward to August 16, 2024: we awoke to multiple reports from people worldwide indicating they had the same issues downloading the database. After many months of being unable to reproduce the failures meaningfully, we finally started to see the same thing ourselves. What happened? We had reached an inflection point of traffic that was causing issues with the CDN’s ability to deliver these files reliably to end users. Interestingly, the traffic was not from Grype but rather from Syft invocations checking for application updates: 1 million times per hour, approximately double what we saw previously. Because these requests were served from the same endpoint, the volume was beginning to adversely affect Grype users, possibly due to throttling by the CDN provider.

The right tool for the job

As a team, we had individually investigated these database failures, but we decided it was time for all of us to strap on our boots and solve this. The clue we had from decreasing the size of the listing file was crucial to understanding what was going on. We were using a standard CDN offering backed by AWS S3 storage. 

Searching for documentation about the CDN usage turned up vague information that didn’t help us understand whether we were doing something wrong. However, much of the documentation clearly focused on web traffic, and based on our experience with the smaller, more web-friendly listing file, we could assume this is what the service is optimized for. After much reading, it started to sound like larger files should be served using the Cloudflare R2 Object Storage offering instead…

So that’s what we did: the team collaborated via a long, caffeine-fuelled Zoom call over an entire day. We updated our database publishing jobs to additionally publish the databases and updated listing files to a second location backed by the Cloudflare R2 Object Storage service, served from grype.anchore.io instead of toolbox-data.anchore.io/grype.

We verified this was working as expected with Grype and finally updated the main listing file to point to this new location. The traffic load moved to the new service precisely as expected. This was completely transparent for Grype end-users, and our monitoring jobs have been green since!

While this wasn’t fun to scramble to fix, it’s great to know that our tools are popular enough to cause problems with a really good CDN service. Because of all the automated testing we have in place, our autonomy to operate independently, and robust publishing jobs, we were able to move quickly to address these issues. After letting this change operate over the weekend, we composed a short announcement for our community discourse to keep everyone informed. 

Many projects experience growing pains as they see increased usage; our tools are no exception. Still, we were able to quickly and almost seamlessly provide everyone with a more reliable experience, and we have had reports that the change has solved issues for them. Hopefully, we won’t have to make any more changes even when usage grows another 100x…

If you have any feedback for the Syft & Grype developers, head over to our community discourse.

How to build an OSS risk management program

In previous blog posts we have covered the risks of open source software (OSS) and the security best practices to manage that risk. From there we zoomed in on the benefits of tightly coupling two of those best practices (SBOMs and vulnerability scanning).

Now, we’ll dig deeper into the practical considerations of integrating this paired solution into a DevSecOps pipeline. By examining the design and implementation of SBOMs and vulnerability scanning, we’ll illuminate the path to creating a holistic open source software (OSS) risk management program.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

How do I integrate SBOM management and vulnerability scanning into my development process?

Ideally, you want to generate an SBOM at each stage of the software development process (see image below). By generating an SBOM and scanning for vulnerabilities at each stage, you unlock a number of novel use-cases and benefits that we covered previously.

DevSecOps lifecycle diagram with all stages to integrate SBOM generation and vulnerability scanning.

Let’s break down how to integrate SBOM generation and vulnerability scanning into each stage of the development pipeline:

Source (PLAN & CODE)

The easiest way to integrate SBOM generation and vulnerability scanning into the design and coding phases is to provide CLI (command-line interface) tools to your developers. Engineers are already used to these tools—and have a preference for them!

If you’re going the open source route, we recommend both Syft (SBOM generation) and Grype (vulnerability scanner) as easy options to get started. If you’re interested in an integrated enterprise tool then you’ll want to look at AnchoreCTL.

Developers can generate SBOMs and run vulnerability scans right from their workstations. By doing this at design or commit time, developers shift security left and know immediately about the security implications of their design decisions.
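
A minimal sketch of what that looks like from a project checkout, assuming Syft and Grype are installed locally (the file name is a placeholder):

syft dir:. -o cyclonedx-json > app.sbom.json   # inventory the project's dependencies
grype sbom:./app.sbom.json                     # check that inventory for known vulnerabilities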

If existing vulnerabilities are found, developers can immediately pivot to OSS dependencies that are clean or start a conversation with their security team to understand if their preferred framework will be a deal breaker. Either way, security risk is addressed early before any design decisions are made that will be difficult to roll back.

Build (BUILD + TEST)

The ideal place to integrate SBOM generation and vulnerability scanning during the build and test phases is directly in the organization’s continuous integration (CI) pipeline.

The same self-contained CLI tools used during the source stage are integrated as additional steps into CI scripts/runbooks. When a developer pushes a commit that triggers the build process, the new steps are executed and both an SBOM and vulnerability scan are created as outputs. 
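
A hypothetical CI step might look like the following; the environment variable names and the severity threshold are illustrative, not prescriptive:

syft "$CI_IMAGE" -o cyclonedx-json > "sbom-$CI_COMMIT_SHA.json"   # archive the SBOM as a build artifact
grype "sbom:./sbom-$CI_COMMIT_SHA.json" --fail-on high            # a non-zero exit fails the build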

Check out our docs site to see how AnchoreCTL (running in distributed mode) makes this integration a breeze.

If you’re having trouble convincing your developers to jump on the SBOM train, we recommend that developers think about all security scans as just another unit test that is part of their testing suite.

Running these steps in the CI pipeline delays feedback a little compared to checking each incremental commit as an application is being coded, but it is still light years better than waiting until a release is code complete.

If you are unable to get your engineering team to scan OSS dependencies for vulnerabilities at their workstations, a CI-based strategy can be a happy medium. It is much easier to ensure every build runs exactly the same way each time than it is to do the same for developers.

Release (aka Registry)

Another integration option is the container registry. This option will require you to either roll your own service that will regularly call the registry and scan new containers or use a service that does this for you.

See how Anchore Enterprise can automate this entire process by reviewing our integration docs.

Regardless of the path you choose, you will end up creating an IAM service account within your CI application, which will give your SBOM and vulnerability scanning solution access to your registries.

The release stage tends to be fairly far along in the development process and is not an ideal location for these functions to run. Most of the benefits of a shift left security posture won’t be available anymore.

If this is an additional vulnerability scanning stage—rather than the sole stage—then this is a fantastic environment to integrate into. Software supply chain attacks that target registries are popular and can be prevented with a continuous scanning strategy.

Deploy

This is the traditional stage of the SDLC (software development lifecycle) to run vulnerability scans. SBOM generation can be added on as another step in an organization’s continuous deployment (CD) runbook.

Similar to the build stage, the best integration method is by calling CLI tools directly in the deploy script to generate the SBOM and then scan it for vulnerabilities.
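
Sketched as part of a deploy script (the image digest and file names are placeholders):

IMAGE="registry.example.com/app-api@sha256:…"                            # the exact artifact being shipped
syft "$IMAGE" -o spdx-json > "deploy-$(date +%Y%m%d)-app-api.sbom.json"
grype "$IMAGE" --fail-on critical || exit 1                              # block the deploy on critical findings
# ...continue with the normal deployment steps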

Alternatively, if you utilize a container orchestrator like Kubernetes, you can also configure an admission controller to act as a deployment gate. The admission controller should be configured to call out to a standalone SBOM generator and vulnerability scanner. 

If you’d like to understand how this is implemented with Anchore Enterprise, see our docs.

While this is the traditional location for running vulnerability scans, it is not recommended that this be the only stage where you scan for vulnerabilities. Feedback about security issues would arrive very late in the development process, and prior design decisions may prevent vulnerabilities from being easily remediated. Don’t do this unless you have no other option.

Production (OPERATE + MONITOR)

This is not a traditional stage to run vulnerability scans since the goal is to prevent vulnerabilities from getting to production. Regardless, this is still an important environment to scan. Production containers have a tendency to drift from their pristine build states (DevSecOps pipelines are leaky!).

Also, new vulnerabilities are discovered all of the time and being able to prioritize remediation efforts to the most vulnerable applications (i.e., runtime containers) considerably reduces the risk of exploitation.

The recommended way to run SBOM generation and vulnerability scans in production is to run an independent container with the SBOM generator and vulnerability scanner installed. Most container orchestrators have SDKs that will allow you to integrate an SBOM generator and vulnerability scanner with the preferred administration CLI (e.g., kubectl for k8s clusters). 
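
As a rough, hypothetical sketch using kubectl plus the open source tools mentioned above (the output directory is a placeholder, and a real job would run on a schedule inside the cluster):

mkdir -p runtime-sboms
kubectl get pods --all-namespaces -o jsonpath='{.items[*].spec.containers[*].image}' \
  | tr ' ' '\n' | sort -u \
  | while read -r image; do
      # pull each image straight from the registry, record its SBOM, and scan it
      syft "registry:$image" -o cyclonedx-json > "runtime-sboms/$(echo "$image" | tr '/:@' '___').json"
      grype "registry:$image"
    done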

Read how Anchore Enterprise integrates these components together into a single container for both Kubernetes and Amazon ECS.

How do I manage all of the SBOMs and vulnerability scans?

Tightly coupling SBOM generation and vulnerability scanning creates a number of benefits, but it also creates one problem: a firehose of data. This unintended side effect is known as SBOM sprawl, and it inevitably becomes a headache in and of itself.

The concise solution to this problem is to create a centralized SBOM repository. The brevity of this answer downplays the challenges that go along with building and managing a new data pipeline.

We’ll walk you through the high-level steps below but if you’re looking to understand the challenges and solutions of SBOM sprawl in more detail, we have a separate article that covers that.

Integrating SBOMs and vulnerability scanning for better OSS risk management

Assuming you’ve deployed an SBOM generator and vulnerability scanner into at least one of your development stages (as detailed above in “How do I integrate SBOM management and vulnerability scanning into my development process?”) and have an SBOM repository for storing your SBOMs and/or vulnerability scans, we can now walk through how to tie these systems together:

  1. Create a system to pull vulnerability feeds from reputable sources. If you’re not sure where to begin, read our post on how to get started.
  2. Regularly scan your catalog of SBOMs for vulnerabilities, storing the results alongside the SBOMs (see the sketch after this list).
  3. Implement a query system to extract insights from your inventory of SBOMs.
  4. Create a dashboard to visualize your software supply chain’s health.
  5. Build alerting automation to ping your team as newly discovered vulnerabilities are announced.
  6. Maintain all of these DIY security applications and tools. 
  7. Continue to incrementally improve on these tools as new threats emerge, technologies evolve and development processes change.
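
For step 2, a minimal sketch of a scheduled job using Grype might look like the following; the catalog path is a placeholder:

grype db update                                             # refresh vulnerability data first
for sbom in /var/sboms/*.json; do
  grype "sbom:$sbom" -o json > "${sbom%.json}.vulns.json"   # store results alongside the SBOM
done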

If this feels like more work than you’re willing to take on, this is why security vendors exist. See the benefits of a managed SBOM-powered SCA below.

Prefer not to DIY? Evaluate Anchore Enterprise

Anchore Enterprise was designed from the ground up to provide a reliable software supply chain security platform that requires the least amount of work to integrate and maintain. Included in the product are:

  • Out-of-the-box integrations for popular CI/CD software (e.g., GitHub, Jenkins, GitLab, etc.)
  • End-to-end SBOM management
  • Enterprise-grade vulnerability scanning with a best-in-class false-positive rate
  • Built-in SBOM drift detection
  • Remediation recommendations
  • Continuous visibility and monitoring of software supply chain health

Enterprises like NVIDIA, Cisco, Infoblox, etc. have chosen Anchore Enterprise as their “easy button” to achieve open source software security with the least amount of lift.

If you’re interested to learn more about how to roll out a complete OSS security solution without the blood, sweat and tears that come with the DIY route—reach out to our team to get a demo or try Anchore Enterprise yourself with a 15-day free trial.

Learn the container security best practices, including open source software (OSS) security, to reduce the risk of software supply chain attacks.

SBOMs and Vulnerability Management: OSS Security in the DevSecOps Era

The rise of open-source software (OSS) development and DevOps practices has unleashed a paradigm shift in OSS security. As traditional approaches to OSS security have proven inadequate in the face of rapid development cycles, the Software Bill of Materials (SBOM) has re-made OSS vulnerability management in the era of DevSecOps.

This blog post zooms in on two best practices from our introductory article on OSS security and the software supply chain:

  1. Maintain a Software Dependency Inventory
  2. Implement Vulnerability Scanning

These two best practices are set apart from the rest because they are a natural pair. We’ll cover how this novel approach:

  • Scales OSS vulnerability management under the pressure of rapid software delivery
  • Is set apart from legacy SCAs
  • Unlocks new use-cases in software supply chain security, OSS risk management, and more
  • Benefits software engineering orgs
  • Benefits an organization’s overall security posture
  • Has measurably impacted modern enterprises such as NVIDIA and Infoblox

Whether you’re a seasoned DevSecOps professional or just beginning to tackle the challenges of securing your software supply chain, this blog post offers insights into how SBOMs and vulnerability management can transform your approach to OSS security.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

Why do I need SBOMs for OSS vulnerability management?

The TL;DR is that SBOMs enable DevSecOps teams to scale OSS vulnerability management programs in a modern, cloud native environment. Legacy security tools (i.e., SCA platforms) weren’t built to handle the pace of software delivery after a DevOps face lift.

Answering this question in full requires some historical context. Below is a speed-run of how we got to a place where SBOMs became the clear solution for vulnerability management after the rise of DevOps and OSS; the original longform is found on our blog.

If you’re not interested in a history lesson, skip to the section “What new use-cases are unlocked with an SBOM inventory?” to get straight to the impact of this evolution on software supply chain security (SSCS).

A short history on software composition analysis (SCA)

  • SCAs were originally designed to solve the problem of OSS licensing risk
  • Remember that Microsoft made a big fuss about the dangers of OSS at the turn of the millennium
  • Vulnerability scanning and management was tacked-on later
  • These legacy SCAs worked well enough until DevOps and OSS popularity hit critical mass

How the rise of OSS and DevOps principles broke legacy SCAs

  • DevOps and OSS movements hit traction in the 2010s
  • Software development and delivery transitioned from major updates with long development times to incremental updates with frequent releases
  • Modern engineering organizations are measured and optimized for delivery speed
  • Legacy SCAs were designed to scan a golden image once and take as much time as needed to do it; upwards of weeks in some cases
  • This wasn’t compatible with the DevOps promise and created friction between engineering and security
  • This meant not all software could be scanned, and much of it was scanned only after release, increasing the risk of a security breach

SBOMs as the solution

  • SBOMs were introduced as a standardized data structure that comprised a complete list of all software dependencies (OSS or otherwise)
  • These lightweight files created a reliable way to scan software for vulnerabilities without the slow performance of scanning the entire application—soup to nuts
  • Modern SCAs utilize SBOMs as the foundational layer to power vulnerability scanning in DevSecOps pipelines
  • SBOMs + SCAs deliver on the performance of DevOps without compromising security

What is the difference between SBOMs and legacy SCA scanning?

SBOMs offer two functional innovations over the legacy model:

  1. Deeper visibility into an organization’s application inventory, and
  2. A record of changes to applications over time.

The deeper visibility comes from the fact that modern SCA scanners identify software dependencies recursively and build a complete software dependency tree (both direct and transitive). The record of changes comes from the fact that the OSS ecosystem has begun to standardize the contents of SBOMs to allow interoperability between OSS consumers and producers.

Legacy SCAs typically only scan for direct software dependencies and don’t recursively scan for dependencies of dependencies. Also, legacy SCAs don’t generate standardized scans that can then be used to track changes over time.

What new use-cases are unlocked with an SBOM inventory?

The innovations brought by SBOMs (see above) have unlocked new use-cases that benefit both the software supply chain security niche and the greater DevSecOps world. See the list below:

OSS Dependency Drift Detection

Ideally, software dependencies are only injected in source code, but the reality is that CI/CD pipelines are leaky and both automated and one-off modifications are made at all stages of development. Plugging 100% of the leaks is a strategy with diminishing returns. Application drift detection is a scalable solution to this challenge.

SBOMs unlock drift detection by creating a point-in-time record of an application’s composition at each stage of the development process. This creates an auditable record of when software builds are modified, how they were changed, and who changed them.
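
As a minimal sketch of that workflow (the image tags, file paths, and package names below are placeholders, not an Anchore-prescribed layout), you could generate an SBOM for every build with Syft and diff consecutive builds to surface unexpected dependency changes:

# Generate an SBOM for each CI build (placeholder image tags)
syft registry:example.org/my-app:build-101 -o syft-json > sboms/my-app-build-101.json
syft registry:example.org/my-app:build-102 -o syft-json > sboms/my-app-build-102.json

# Diff the normalized package lists of two builds to surface drift
diff \
  <(jq -r '.artifacts[] | "\(.name) \(.version)"' sboms/my-app-build-101.json | sort) \
  <(jq -r '.artifacts[] | "\(.name) \(.version)"' sboms/my-app-build-102.json | sort)

Any package that appears in one build but not the other shows up in the diff, which is the raw signal a drift detection policy can alert on.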

Software Supply Chain Attack Detection

Not all dependency injections are performed by benevolent 1st-party developers. Malicious threat actors who gain access to your organization’s DevSecOps pipeline or the pipeline of one of your OSS suppliers can inject malicious code into your applications.

An SBOM inventory creates the historical record that can identify anomalous behavior and catch these security breaches before organizational damage is done. This is a particularly important strategy for dealing with advanced persistent threats (APTs) that are experts at infiltration and stealth. For a real-world example, see our blog on the recent XZ supply chain attack.

OSS Licensing Risk Management

OSS licensing is at the beginning of a new transformation. The highly permissive licenses that came into fashion over the last 20 years are proving to be unsustainable. As prominent open source startups amend their licenses (e.g., HashiCorp, Elastic, Redis), organizations need to evaluate these changes and how they impact their OSS supply chain strategy.

Similar to the benefits during a security incident, an SBOM inventory acts as the source of truth for OSS licensing risk. As licenses are amended, an organization can quickly evaluate its risk by querying its inventory and identifying its “critical” OSS suppliers.

Domain Expertise Risk Management

Another emerging use-case of software dependency inventories is the management of domain expertise of developers in your organization. A comprehensive inventory of software dependencies allows organizations to map critical software to individual employees’ domain knowledge. This creates a measurement of how well resourced your engineering organization is and who owns the knowledge that could impact business operations.

While losing an employee with a particular set of skills might not have the same urgency as a security incident, over time this gap can create instability. An SBOM inventory allows organizations to maintain a list of critical OSS suppliers and get ahead of any structural risks in their organization.

What are the benefits of a software dependency inventory?

SBOM inventories create a number of benefits for tangential domains such as software supply chain security and risk management, but there is one big benefit for the core practice of software development.

Reduced engineering and QA time for debugging

A software dependency inventory stores metadata about applications and their OSS dependencies over time in a centralized repository. This datastore is a simple and efficient way to search and answer critical questions about the state of an organization’s software development pipeline.

Previously, engineering and QA teams had to manually search codebases and commits in order to determine the source of a rogue dependency being added to an application. A software dependency inventory combines a centralized repository of SBOMs with an intuitive search interface. Now, these time-consuming investigations can be accomplished in minutes instead of hours.
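
As an illustration (assuming SBOMs in Syft’s JSON format collected into a sboms/ directory, which is a hypothetical layout), locating a rogue dependency across every stored application takes one loop rather than a codebase hunt:

# List every stored SBOM that contains a given package (placeholder package name)
for sbom in sboms/*.json; do
  if jq -e '.artifacts[] | select(.name == "left-pad")' "$sbom" > /dev/null; then
    echo "left-pad found in: $sbom"
  fi
done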

What are the benefits of scanning SBOMs for vulnerabilities?

There are a number of security benefits that can be achieved by integrating SBOMs and vulnerability scanning. We’ve highlighted the most important below:

Reduce risk by scaling vulnerability scanning for complete coverage

One of the side effects of transitioning to DevOps practices was that legacy SCAs couldn’t keep up with the software output of modern engineering orgs. This meant that not all applications were scanned before being deployed to production—a risky security practice!

Modern SCAs solved this problem by scanning SBOMs rather than applications or codebases. These lightweight SBOM scans are so efficient that they can keep up with the pace of DevOps output. Scanning 100% of applications reduces risk by preventing unscanned software from being deployed into vulnerable environments.

Prevent delays in software delivery

Overall organizational productivity can be increased by adopting modern, SBOM-powered SCAs that allow organizations to shift security left. When vulnerabilities are uncovered during application design, developers can make informed decisions about the OSS dependencies that they choose. 

This prevents the situation where engineering creates a new application or feature but right before it is deployed into production the security team scans the dependencies and finds a critical vulnerability. These last minute security scans can delay a release and create frustration across the organization. Scanning early and often prevents this productivity drain from occurring at the worst possible time.

Reduced financial risk during a security incident

The faster a security incident is resolved, the less risk an organization is exposed to. The primary metric that organizations track is mean-time-to-recovery (MTTR). SBOM inventories significantly reduce this metric and improve incident outcomes.

An application inventory with full details on the software dependencies is a prerequisite for rapid security response in the event of an incident. A single SQL query to an SBOM inventory will return a list of all applications that have exploitable dependencies installed. Recent examples include Log4j and XZ. This prevents the need for manual scanning of codebases or production containers. This is the difference between a zero-day incident lasting a few hours versus weeks.
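
As a hedged sketch of that kind of inventory query (the directory layout is an assumption; CVE-2021-44228 is the Log4Shell identifier), scanning every stored SBOM with Grype and filtering for one CVE looks roughly like this:

# Which applications in the SBOM inventory are exposed to a specific CVE?
for sbom in sboms/*.json; do
  if grype "sbom:$sbom" -o json | jq -e '.matches[] | select(.vulnerability.id == "CVE-2021-44228")' > /dev/null; then
    echo "CVE-2021-44228 affects: $sbom"
  fi
done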

Reduce hours spent on compliance with automation

Compliance certifications are powerful growth levers for organizations; they open up new market opportunities. The downside is that they create a lot of work for organizations. Manually confirming that each compliance control is met and providing evidence for the compliance officer to review discourages organizations from pursuing these certifications.

Providing automated vulnerability scans from DevSecOps pipelines that integrate SBOM inventories and vulnerability scanners significantly reduces the hours needed to generate and collect evidence for compliance audits.

How impactful are these benefits?

Many modern enterprises are adopting SBOM-powered SCAs and reaping the benefits outlined above. The quantifiable benefits are unique to each enterprise, but anecdotal evidence is still helpful when weighing how to prioritize a software supply chain security initiative, like the adoption of an SBOM-powered SCA, against other organizational priorities.

As a leading SBOM-powered SCA, Anchore has helped numerous organizations achieve the benefits of this evolution in the software industry. To get an estimate of what your organization can expect, see the case studies below:

NVIDIA

  • Reduced time to production by scanning SBOMs instead of full applications
  • Scaled vulnerability scanning and management program to 100% coverage across 1000s of containerized applications and 100,000s of containers

Read the full NVIDIA case study here >>

Infoblox

  • 75% reduction in engineering hours spent performing manual vulnerability detection
  • 55% reduction in hours allocated to retroactive remediation of vulnerabilities
  • 60% reduction in hours spent on manual compliance discovery and documentation

Read the full Infoblox case study here >>

DreamFactory

  • 75% reduction in engineering hours spent on vulnerability management and compliance
  • 70% faster production deployments with automated vulnerability scanning and management

Read the full DreamFactory case study here >>

Next Steps

Hopefully you now have a better understanding of the power of integrating an SBOM inventory into OSS vulnerability management. This “one-two” combo has unlocked novel use-cases, numerous benefits and measurable results for modern enterprises.

If you’re interested in learning more about how Anchore can help your organization achieve similar results, reach out to our team.

Learn the container security best practices, including open source software (OSS) security, to reduce the risk of software supply chain attacks.

DreamFactory Achieves 75% Time Savings with Anchore: A Case Study in Secure API Generation

As the popularity of APIs has swept the software industry, API security has become paramount, especially for organizations in highly regulated industries. DreamFactory, an API generation platform serving the defense industry and critical national infrastructure, required an air-gapped vulnerability scanning and management solution that didn’t slow down their productivity. Avoiding security breaches and compliance failures are non-negotiables for the team to maintain customer trust.

Challenge: Security Across the Gap

DreamFactory encountered several critical hurdles in meeting the needs of its high-profile clients, particularly those in the defense community and other highly regulated sectors:

  1. Secure deployments without cloud connectivity: Many clients, including the Department of Defense (DoD), required on-premises deployments with air-gapping, breaking the assumptions of modern cloud-based security strategies.
  2. Air-gapped vulnerability scans: Despite air-gapping, these organizations still demanded comprehensive vulnerability reporting to protect their sensitive data.
  3. Building high-trust partnerships: In industries where security breaches could have catastrophic consequences, establishing trust rapidly was crucial.

As Terence Bennett, CEO of DreamFactory, explains, “The data processed by these organizations have the highest national security implications. We needed a solution that could deliver bulletproof security without cloud connectivity.”

Solution: Anchore Enterprise On-Prem and Air-Gapped 

To address these challenges, DreamFactory implemented Anchore Enterprise, which provided:

  1. Support for on-prem and air-gapped deployments: Anchore Enterprise was designed to operate in air-gapped environments, aligning perfectly with DreamFactory’s needs.
  2. Comprehensive vulnerability scanning: DreamFactory integrated Anchore Enterprise into its build pipeline, running daily vulnerability scans on all deployment versions.
  3. Automated SBOM generation and management: Every build is now cataloged and stored (as an SBOM), providing immediate transparency into the software’s components.

“By catching vulnerabilities in our build pipeline, we can inform our customers and prevent any of the APIs created by a DreamFactory install from being leveraged to exploit our customer’s network,” Bennett notes. “Anchore has helped us achieve this massive value-add for our customers.”

Results: Developer Time Savings and Enhanced Trust

The implementation of Anchore Enterprise transformed DreamFactory’s security posture and business operations:

  • 75% reduction in time spent on vulnerability management and compliance requirements
  • 70% faster production deployments with integrated security checks
  • Rapid trust development through transparency

“We’re seeing a lot of traction with data warehousing use-cases,” says Bennett. “Being able to bring an SBOM to the conversation at the very beginning completely changes the conversation and allows CISOs to say, ‘let’s give this a go’.”

Conclusion: A Competitive Edge in High-Stakes Environments

By leveraging Anchore Enterprise, DreamFactory has positioned itself as a trusted partner for organizations requiring the highest levels of security and compliance in their API generation solutions. In an era where API security is more critical than ever, DreamFactory’s success story demonstrates that with the right tools and approach, it’s possible to achieve both ironclad security and operational efficiency.


Are you facing similar challenges hardening your software supply chain in order to meet the requirements of the DoD? By designing your DevSecOps pipeline to the DoD software factory standard, your organization can guarantee to meet these sky-high security and compliance requirements. Learn more about the DoD software factory standard by downloading our white paper below.

How is Open Source Software Security Managed in the Software Supply Chain?

Open source software has revolutionized the way developers build applications, offering a treasure trove of pre-built software “legos” that dramatically boost productivity and accelerate innovation. By leveraging the collective expertise of a global community, developers can create complex, feature-rich applications in a fraction of the time it would take to build everything from scratch. However, this incredible power comes with a significant caveat: the open source model introduces risk.

Organizations inherit both the good and bad parts of the OSS source code they don’t own. This double-edged sword of open source software necessitates a careful balance between harnessing its productivity benefits and managing the risks. A comprehensive OSS security program is the industry standard best practice for managing the risk of open source software within an organization’s software supply chain.

Learn the container security best practices, including open source software security, to reduce the risk of software supply chain attacks.

What is open source software security?

Open source software security is the ecosystem of security tools (some of them OSS!) that has developed to compensate for the inherent risk of OSS development. The security of the OSS environment was founded on the idea that “given enough eyeballs, all bugs are shallow”. The reality of OSS is that the majority of it is written and maintained by single contributors. The percentage of open source software that passes the qualifier of “enough eyeballs” is minuscule.

Does that mean open source software isn’t secure? Fortunately, no. The OSS community still produces secure software, but an entire ecosystem of tools ensures that this is verified—not only trusted implicitly.

What is the difference between closed source and open source software security?

The primary difference between open source software security and closed source software security is how much control you have over the source code. Open source code is public and can have many contributors that are not employees of your organization while proprietary source code is written exclusively by employees of your organization. The threat models required to manage risk for each of these software development methods are informed by these differences.

Due to the fact that open source software is publicly accessible and can be contributed to by a diverse, often anonymous community, its threat model must account for the possibility of malicious code contributions, unintentional vulnerabilities introduced by inexperienced developers, and potential exploitation of disclosed vulnerabilities before patches are applied. This model emphasizes continuous monitoring, rigorous code review processes, and active community engagement to mitigate risks. 

In contrast, proprietary software’s threat model centers around insider threats, such as disgruntled employees or lapses in secure coding practices, and focuses heavily on internal access controls, security audits, and maintaining strict development protocols. 

The need for external threat intelligence is also greater in OSS, as the public nature of the code makes it a target for attackers seeking to exploit weaknesses, while proprietary software relies on obscurity and controlled access as a first line of defense against potential breaches.

What are the risks of using open source software?

  1. Vulnerability exploitation of your application
    • The bargain that is struck when utilizing OSS is that your organization gives up significant control over the quality of the software. When you use OSS you inherit both good AND bad (read: insecure) code. Any known or latent vulnerabilities in the software become your problem.
  2. Access to source code increases the risk of vulnerabilities being discovered by threat actors
    • OSS development is unique in that both the defenders and the attackers have direct access to the source code. This gives the threat actors a leg up. They don’t have to break through perimeter defenses before they get access to source code that they can then analyze for vulnerabilities.
  3. Increased maintenance costs for DevSecOps function
    • Adopting OSS into an engineering organization is another function that requires management. Data has to be collected about the OSS that is embedded in your applications. That data has to be stored and made available in the event of a security incident. These maintenance costs are typically incurred by the DevOps and Security teams.
  4. OSS license legal exposure
    • OSS licenses are mostly permissive for use within commercial applications but a non-trivial subset are not, or worse they are highly adversarial when used by a commercial enterprise. Organizations that don’t manage this risk increase the potential for legal action to be taken against them.

How serious are the risks associated with the use of open source software?

Current estimates are that 70-90% of modern applications are composed of open source software. This means that only 10-30% of applications developed by organizations are written by developers employed by the organization. Without significant visibility into the security of OSS, organizations are handing over the keys to the castle to the community and hoping for the best.

Not only is OSS a significant footprint in modern application composition, but its growth is accelerating, which means the associated risks are growing just as fast. This is part of the reason we see an acceleration in the frequency of software supply chain attacks. Organizations that aren’t addressing these realities get caught on the back foot when zero-days like the recent XZ Utils backdoor are announced.

Why are SBOMs important to open source software security?

Software Bills of Materials (SBOMs) serve as the foundation of software supply chain security by providing a comprehensive “ingredient list” of all components within an application. This transparency is crucial in today’s software landscape, where modern applications are a complex web of mostly open source software dependencies that can harbor hidden vulnerabilities. 

SBOMs enable organizations to quickly identify and respond to security threats, as demonstrated during incidents like Log4Shell, where companies with centralized SBOM repositories were able to locate vulnerable components in hours rather than days. By offering a clear view of an application’s composition, SBOMs form the bedrock upon which other software supply chain security measures can be effectively built and validated.

The importance of SBOMs in open source software security cannot be overstated. Open source projects often involve numerous contributors and dependencies, making it challenging to maintain a clear picture of all components and their potential vulnerabilities. By implementing SBOMs, organizations can proactively manage risks associated with open source software, ensure regulatory compliance, and build trust with customers and partners. 

SBOMs enable quick responses to newly discovered vulnerabilities, facilitate automated vulnerability management, and support higher-level security abstractions like cryptographically signed images or source code. In essence, SBOMs provide the critical knowledge needed to navigate the complex world of open source dependencies by enabling us to channel our inner GI Joe—”knowing is half the battle” in software supply chain security.

What are the best practices for securing open source software?

Open source software has become an integral part of modern development practices, offering numerous benefits such as cost-effectiveness, flexibility, and community-driven innovation. However, with these advantages come unique security challenges. To mitigate risks and ensure the safety of your open source components, consider implementing the following best practices:

1. Model Security Scans as Unit Tests

Re-branding security checks as another type of unit test helps developers orient to DevSecOps principles. This approach helps developers re-imagine security as an integral part of their workflow rather than a separate, post-development concern. By modeling security checks as unit tests, you can:

  • Catch vulnerabilities earlier in the development process
  • Reduce the time between vulnerability detection and remediation
  • Empower developers to take ownership of security issues
  • Create a more seamless integration between development and security teams

Remember, the goal is to make security an integral part of the development process, not a bottleneck. By treating security checks as unit tests, you can achieve a balance between rapid development and robust security practices.
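
A minimal example of treating a vulnerability scan like a failing unit test (a sketch using Grype; the severity threshold is a policy choice, not a mandate):

# "Security unit test": fail the build when a fixable high-severity vulnerability is present
grype dir:. --fail-on high --only-fixed || {
  echo "Security test failed: address the findings above before merging."
  exit 1
}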

2. Review Code Quality

Assessing the quality of open source code is crucial for identifying potential vulnerabilities and ensuring overall software reliability. Consider the following steps:

  • Conduct thorough code reviews, either manually or using automated tools
  • Look for adherence to coding standards and best practices
  • Look for projects developed with secure-by-default principles
  • Evaluate the overall architecture and design patterns used

Remember, high-quality code is generally more secure and easier to maintain.

3. Assess Overall Project Health

A vibrant, active community and committed maintainers are crucial indicators of a well-maintained open source project. When evaluating a project’s health and security:

  • Examine community involvement:
    • Check the number of contributors and frequency of contributions
    • Review the project’s popularity metrics (e.g., GitHub stars, forks, watchers)
    • Assess the quality and frequency of discussions in forums or mailing lists
  • Evaluate maintainer(s) commitment:
    • Check the frequency of commits, releases, and security updates
    • Check for active engagement between maintainers and contributors
    • Review the time taken to address reported bugs and vulnerabilities
    • Look for a clear roadmap or future development plans

4. Maintain a Software Dependency Inventory

Keeping track of your open source dependencies is crucial for managing security risks. To create and maintain an effective inventory:

  • Use tools like Syft or Anchore SBOM to automatically scan your application source code for OSS dependencies
    • Include both direct and transitive dependencies in your scans
  • Generate a Software Bill of Materials (SBOM) from the dependency scan
    • Your dependency scanner should also do this for you
  • Store your SBOMs in a central location that can be searched and analyzed
  • Scan your entire DevSecOps pipeline regularly (ideally every build and deploy)

An up-to-date inventory allows for quicker responses to newly discovered vulnerabilities.
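
A minimal sketch of the inventory workflow described above, using Syft (the application paths, image names, and storage bucket are placeholders):

# Scan source and a built image, capturing direct and transitive dependencies as SBOMs
syft dir:./my-app -o cyclonedx-json > sboms/my-app-source.cdx.json
syft registry:example.org/my-app:latest -o cyclonedx-json > sboms/my-app-image.cdx.json

# Store the SBOMs in a central, searchable location (object storage is one option)
aws s3 cp sboms/ s3://example-sbom-inventory/my-app/ --recursive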

5. Implement Vulnerability Scanning

Regular vulnerability scanning helps identify known security issues in your open source components. To effectively scan for vulnerabilities:

  • Use tools like Grype or Anchore Secure to automatically scan your SBOMs for vulnerabilities
  • Automate vulnerability scanning tools directly into your CI/CD pipeline
    • At a minimum, implement vulnerability scanning as containers are built
    • Ideally scan container registries, container orchestrators and even each time a new dependency is added during design
  • Set up alerts for newly discovered vulnerabilities in your dependencies
  • Establish a process for addressing identified vulnerabilities promptly
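
For example (a sketch; the SBOM path and image name are placeholders), the same Grype scan can run against a stored SBOM on every build and against the registry on a schedule:

# Scan a stored SBOM (fast; no image pull or rebuild required)
grype sbom:sboms/my-app-image.cdx.json

# Periodically re-check the image straight from the registry and gate on severity
grype registry:example.org/my-app:latest --fail-on critical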

6. Implement Version Control Best Practices

Version control practices are crucial for securing all DevSecOps pipelines that utilize open source software:

  • Implement branch protection rules to prevent unauthorized changes
  • Require code reviews and approvals before merging changes
  • Use signed commits to verify the authenticity of contributions
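
As a small illustration of the signed-commit item above (this assumes a GPG key has already been generated and registered with your Git host):

# Sign all commits by default with an existing GPG key
git config --global user.signingkey <YOUR_KEY_ID>
git config --global commit.gpgsign true

# Verify the signature on the most recent commit
git log --show-signature -1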

By implementing these best practices, you can significantly enhance the security of your software development pipeline and reduce the risk intrinsic to open source software. By doing this, you will be able to have your cake (the productivity boost of OSS) and eat it too (without the inherent risk).

How do I integrate open source software security into my development process?

DIY a comprehensive OSS security system

We’ve written about the steps to build an OSS security system from scratch in a previous blog post—below is the TL;DR:

  • Integrate dependency scanning, SBOM generation and vulnerability scanning into your DevSecOps pipeline
  • Implement a data pipeline to manage the influx of security metadata
  • Use automated policy-as-code “security tests” to provide rapid feedback to developers
  • Automate remediation recommendations to reduce cognitive load on developers
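
A compressed sketch of the first and third bullets chained into a single pipeline stage (the image name is a placeholder; the flags shown are standard Syft/Grype options):

# One pipeline stage: generate the SBOM, keep it as the build artifact, and gate on policy
syft registry:example.org/my-app:latest -o cyclonedx-json > my-app.cdx.json
grype sbom:my-app.cdx.json --fail-on high --only-fixed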

Outsource OSS security to a turnkey vendor

Modern software composition analysis (SCA) tools, like Anchore Enterprise, are purpose-built to provide you with a comprehensive OSS security system out-of-the-box. You get all of the same features as the DIY approach without the hassle of building a new system while still maintaining your current manual process.

  • Anchore SBOM: comprehensive dependency scanning, SBOM generation and management
  • Anchore Secure: vulnerability scanning and management
  • Anchore Enforce: automated security enforcement and compliance

Whether you want to scale an understaffed security team to increase its reach across your organization or free your team up to focus on different priorities, the buy versus build opportunity cost is a straightforward decision.

Next Steps

Hopefully, you now have a strong understanding of the risks associated with adopting open source software. If you’re looking to continue your exploration into the intricacies of software supply chain security, Anchore has a catalog of deep dive content on our website. If you’d prefer to get your hands dirty, we also offer a 15-day free trial of Anchore Enterprise.

Learn about the role that SBOMs play in the security of your organization, including open source software security, in this white paper.

SSDF Attestation Template: Battle-tested Compliance Guidance

The CISA Secure Software Development Attestation form, commonly referred to as SSDF attestation, was released earlier this year, and as with any new compliance framework, knowing the exact wording and details to provide in order to meet the compliance requirements can be difficult.

We feel you here. Anchore is heavily invested in the public sector and had to generate our own SSDF attestation for our platform, Anchore Enterprise. Having gone through the process ourselves and working with a number of customers that requested our expertise on this matter, we developed a document that helps you put together an SSDF attestation that will make a compliance officer’s heart sing.

Our goal with this document is to make SSDF attestation as easy as possible and demonstrate how Anchore Enterprise is an “easy button” that you can utilize to satisfy the majority of evidence needed to achieve compliance. We have already submitted our own SSDF attestation and been approved, so we are confident these answers will help get you over the line. You can find our SSDF attestation guide on our docs site.

Explore SSDF attestation in-depth with this eBook. Learn the benefits of the framework and how you can benefit from it.

How do I fill out the SSDF attestation form?

This is the difficult part, isn’t it? The SSDF attestation form looks very simple at a glance, but it has a number of sections that expect evidence to be attached that details how your organization secures both your development environments and production systems. Like all compliance standards, it doesn’t specify what will or won’t meet compliance for your organization, hence the importance of the evidence.

At Anchore, we both experienced this ourselves and helped our customers navigate this ambiguity. Out of these experiences we created a document that breaks down each item and what evidence was able to achieve compliance without being rejected by a compliance officer.

We have published this document on our Docs site for all other organizations to use as a template when attempting to meet SSDF attestation compliance.

Structure of the SSDF attestation form

The SSDF attestation is divided into 3 sections:

Section I

The first section is very short; it is where you list the type of attestation you are submitting and information about the product you are attesting to.

Section II

This section is also short; it collects contact information. CISA wants to know how to get in contact with your organization and who is responsible for any questions or concerns that need to be addressed.

Section III

For all intents and purposes, Section III is the SSDF attestation form. This is where you will provide all of the technical supporting information to demonstrate that your organization complies with the requirements set out in the SSDF attestation form. 

The guide that Anchore has developed is focused specifically on how to fill out this section in a way that will meet the expectations of CISA compliance officers.

Where do I submit the SSDF attestation form?

If you are a US government vendor you can submit your organization’s completed form on the Repository for Software Attestations and Artifacts. You will need an account that can be requested on the login page. It normally takes a few days for the account to be created. Be sure to give yourself at least a week for it to be created. This can be done ahead of time while you’re gathering the information to fill out your form.

It’s also possible you will receive requests directly to pass along the form. Not every agency will use the repository. It’s even possible you will have non-government customers asking for the form. While it’s being mandated by the government, there’s a lot of good evidence in the document.

What tooling do I need to meet SSDF attestation compliance?

There are many ways to meet the technical requirements of SSDF attestation, but there is also a well-worn path. Anchore utilizes modern DevSecOps practices and assumes that the majority of our customers do as well. Below is a list of common DevSecOps tools that are typically used to help meet SSDF compliance.

Endpoint Protection

Description: Endpoint protection tools secure individual devices (endpoints) that connect to a network. They protect against malware, detect and prevent intrusions, and provide real-time monitoring and response capabilities.

SSDF Requirement: [3.1] — “Separating and protecting each environment involved in developing and building software”

Examples: Jamf, Elastic, SentinelOne, etc.

Source Control

Description: Source control systems manage changes to source code over time. They help track modifications, facilitate collaboration among developers, and maintain different versions of code.

SSDF Requirement: [3.1] — “Separating and protecting each environment involved in developing and building software”

Examples: GitHub, GitLab, etc.

CI/CD Build Pipeline

Description: Continuous Integration/Continuous Deployment (CI/CD) pipelines automate the process of building, testing, and deploying software. They help ensure consistent and reliable software delivery.

SSDF Requirement: [3.1] — “Separating and protecting each environment involved in developing and building software”

Examples: Jenkins, GitLab, GitHub Actions, etc.

Single Sign-on (SSO)

Description: SSO allows users to access multiple applications with one set of login credentials. It enhances security by centralizing authentication and reducing the number of attack vectors.

SSDF Requirement: [3.1] — “Enforcing multi-factor authentication and conditional access across the environments relevant to developing and building software in a manner that minimizes security risk;”

Examples: Okta, Google Workspace, etc.

Security Information and Event Management (SIEM)

Description: Monitoring tools provide real-time visibility into the performance and security of systems and applications. They can detect anomalies, track resource usage, and alert on potential issues.

SSDF Requirement: [3.1] — “Implementing defensive cybersecurity practices, including continuous monitoring of operations and alerts and, as necessary, responding to suspected and confirmed cyber incidents;”

Examples: Elasticsearch, Splunk, Panther, RunReveal, etc.

Audit Logging

Description: Audit logging captures a record of system activities, providing a trail of actions performed within the software development and build environments.

SSDF Requirement: [3.1] — “Regularly logging, monitoring, and auditing trust relationships used for authorization and access: i) to any software development and build environments; and ii) among components within each environment;”

Examples: Typically a built-in feature of CI/CD, SCM, SSO, etc.

Secrets Encryption

Description: Secrets encryption tools secure sensitive information such as passwords, API keys, and certificates used in the development and build processes.

SSDF Requirement: [3.1] — “Encrypting sensitive data, such as credentials, to the extent practicable and based on risk;”

Examples: Typically a built-in feature of CI/CD and SCM

Secrets Scanning

Description: Secrets scanning tools automatically detect and alert on exposed secrets in code repositories, preventing accidental leakage of sensitive information.

SSDF Requirement: [3.1] — “Encrypting sensitive data, such as credentials, to the extent practicable and based on risk;”

Examples: Anchore Secure or other container security platforms

OSS Component Inventory (+ Provenance)

Description: These tools maintain an inventory of open-source software components used in a project, including their origins and lineage (provenance).

SSDF Requirement: [3.3] — “The software producer maintains provenance for internal code and third-party components incorporated into the software to the greatest extent feasible;”

Examples: Anchore SBOM or other SBOM generation and management platform

Vulnerability Scanning

Description: Vulnerability scanning tools automatically detect security weaknesses in code, dependencies, and infrastructure.

SSDF Requirement: [3.4] — “The software producer employs automated tools or comparable processes that check for security vulnerabilities. In addition: a) The software producer operates these processes on an ongoing basis and prior to product, version, or update releases;”

Examples: Anchore Secure or other software composition analysis (SCA) platform

Vulnerability Management and Remediation Runbook

Description: This is a process and set of guidelines for addressing discovered vulnerabilities, including prioritization and remediation steps.

SSDF Requirement: [3.4] — “The software producer has a policy or process to address discovered security vulnerabilities prior to product release; and The software producer operates a vulnerability disclosure program and accepts, reviews, and addresses disclosed software vulnerabilities in a timely fashion and according to any timelines specified in the vulnerability disclosure program or applicable policies.”

Examples: This is not necessarily a tool but an organizational SLA on security operations. For reference Anchore has included a screenshot from our vulnerability management guide.

Next Steps

If your organization currently provides software services to a federal agency or is looking to in the future, Anchore is here to help you in your journey. Reach out to our team and learn how you can integrate continuous and automated compliance directly into your CI/CD build pipeline with Anchore Enterprise.

Learn about the importance of both FedRAMP and SSDF compliance for selling to the federal government.

Ad for webinar by Anchore about how to sell software services to the federal government by achieving FedRAMP or SSDF Compliance

FedRAMP & FISMA Compliance: Key Differences Explained

This blog post has been archived and replaced by a supporting pillar page; visiting it will redirect you there.

Anchore at Billington CyberSecurity Summit: Automating Defense in the AI Era

Are you gearing up for the 15th Annual Billington CyberSecurity Summit? So are we! The Anchore team will be front and center in the exhibition hall throughout the event, ready to showcase how we’re revolutionizing cybersecurity in the age of AI.

This year’s summit promises to be a banger, highlighting the evolution in cybersecurity as the latest iteration of AI takes center stage. While large language models (LLMs) like ChatGPT have been making waves across industries, the cybersecurity realm is still charting its course in this new AI-driven landscape. But make no mistake – this is no time to rest on our laurels.

As blue teams explore innovative ways to harness LLMs, cybercriminals are working overtime to weaponize the same technology. If there’s one lesson we’ve learned from every software and AI hype cycle, it’s that automation is key. As adversaries incorporate novel automations into their tactics, defenders must not just keep pace—they need to get ahead.

At Anchore, we’re all-in with this strategy. The Anchore Enterprise platform is purpose-built to automate and scale cybersecurity across your entire software development lifecycle. By automating continuous vulnerability scanning and compliance in your DevSecOps pipeline, we’re equipping warfighters with the tools they need to outpace adversaries that never sleep.

Ready to see how Anchore can transform your cybersecurity posture in the AI era? Stop by our booth for a live demo. Don’t miss this opportunity to stay ahead of the curve—book a meeting (below) with our team and take the first step towards a more secure tomorrow.

Anchore at the Billington CyberSecurity Summit

Date: September 3–6, 2024

Location: The Ronald Reagan Building and International Trade Center in Washington, DC

Our team is looking forward to meeting you! Book a demo session in advance to ensure a preferred slot.

Anchore’s Showcase: DevSecOps and Automated Compliance

We will be demonstrating the Anchore Enterprise platform at the event. Our showcase will focus on:

  1. Software Composition Analysis (SCA) for Cloud-Native Environments: Learn how our tools can help you gain visibility into your software supply chain and manage risk effectively.
  2. Automated SBOM Generation and Management: Discover how Anchore simplifies the creation and maintenance of Software Bills of Materials (SBOMs), the foundational component in software supply chain security.
  3. Continuous Scanning for Vulnerabilities, Secrets, and Malware: See our advanced scanning capabilities in action, designed to protect your applications across the DevSecOps pipeline or DoD software factory.
  4. Automated Compliance Enforcement: Experience how Anchore can streamline compliance with key standards such as cATO, RAISE 2.0, NIST, CISA, and FedRAMP, saving time and reducing human error.

We invite all attendees to visit our booth to learn more about how Anchore’s DevSecOps and automated compliance solutions can enhance your organization’s security posture in the age of AI and cloud computing.

Event Highlights

Still on the fence about whether to attend? Here is a quick run-down to help get you off the fence. This year’s summit, themed “Advancing Cybersecurity in the AI Age,” will feature more than 40 sessions and breakouts, covering critical topics such as:

  • The increasing impact of artificial intelligence on cybersecurity
  • Cloud security challenges and solutions
  • Proactive approaches to technical risk management
  • Emerging cyber risks and defense strategies
  • Data protection against breaches and insider threats
  • The intersection of cybersecurity and critical infrastructure

The event will showcase fireside chats with top government officials, including FBI Deputy Director Paul Abbate, Chairman of the Joint Chiefs of Staff General CQ Brown, Jr., and U.S. Cyber Command Commander General Timothy D. Haugh, among others.

Next Steps and Additional Resources

Join us at the Billington Cybersecurity Summit to network with industry leaders, gain valuable insights, and explore innovative technologies that are shaping the future of cybersecurity. We look forward to seeing you there!

If you are interested in the Anchore Enterprise platform and can’t wait till the show, here are some resources to help get you started:

Learn about best practices that are setting new standards for security in DoD software factories.

Enhancing Software Security: August Webinars on DevSecOps, DoD Software Factories, and CMMC Compliance

This August Anchore’s webinar series is coming in hot with blazing hot topics on software development and cybersecurity. Stay informed on the latest trends and best practices with our full docket of live webinars that address critical aspects of software supply chain security, DevSecOps, and compliance. Whether you’re interested in adopting the Department of Defense (DoD) software factory model, automating CMMC compliance, or optimizing DevSecOps practices, these webinars offer valuable insights from engineers in the field.

WEBINAR: Adopting the DoD Software Factory Model: Insights & How Tos

Date: August 13, 2024, 10 am PT | 1 pm ET

The DoD software factory model has become a cornerstone of innovation and security in national defense and cybersecurity. This webinar will explore the building blocks needed to successfully adopt a software factory model, drawing insights from both Platform One and Black Pearl.

This session is perfect for those looking to enhance their understanding of DoD-compliant software development practices and learn how to implement them effectively. Topics covered will include how to standardize secure software development and deployment along with a demo of Anchore Enterprise’s capabilities in automating policy enforcement, security checks, and vulnerability scans.

WEBINAR: Automated Policy Enforcement for CMMC with Anchore Enterprise

Date: August 27, 2024, 2-2:30 pm ET

For organizations required to comply with the Cybersecurity Maturity Model Certification (CMMC), this webinar will offer crucial insights into automating compliance measures. With CMMC’s importance in hardening the cybersecurity posture of the defense industrial base, timely compliance is critical for software vendors that work with the federal government.

This webinar is invaluable for teams working to achieve CMMC compliance efficiently and effectively. Attendees will learn about the implementation and automation of compliance controls, how to leverage automation for vulnerability scans, and the specific controls automated by Anchore Enterprise for CMMC and NIST.

WEBINAR: DevSecOps Editorial Roundtable

Date: August 12, 2024, 1-2 pm ET

As the software industry increasingly adopts and refines practices for secure software development, optimizing DevSecOps processes has become crucial. This roundtable discussion will bring together application development and cybersecurity experts to explore strategies for shifting application security left in the development process.

This webinar is ideal for those looking to enhance their DevSecOps practices and create more robust and efficient software supply chains. Those that attend will gain insights on effective DevSecOps integration without slowing down application deployment and how to optimize security measures that developers will embrace.

Wrap-Up

Don’t miss these opportunities to deepen your understanding of modern software security topics and learn from industry experts. Each webinar offers unique perspectives and practical strategies that can be applied to improve your organization’s approach to software security.

Also, if you want to stay up-to-date on all of the events that Anchore hosts or participates in be sure to bookmark our events page and check back often!

Anchore Awarded DoD ESI DevSecOps Phase II Agreement

The Department of Defense (DoD) Enterprise Software Initiative (ESI) has awarded Anchore inclusion in its DevSecOps program, which is part of the ESI’s DevSecOps Phase II enterprise agreements.

The DoD ESI’s main objective is to streamline the acquisition process for software and services across the DoD, in order to gain significant cost savings and improve efficiency. Admittance into the ESI program validates Anchore’s commitment to be a trusted partner to the DoD, delivering advanced container vulnerability scanning as well as SBOM management solutions that meet the most stringent compliance and security requirements.

Anchore’s inclusion in the DoD ESI DevSecOps Phase II agreement is a testament to our commitment to delivering cutting-edge software supply chain security solutions. This milestone enables us to more efficiently support the DoD’s critical missions by providing them with the tools they need to secure their software development pipelines. Our continued partnership with the DoD reinforces Anchore’s position as a trusted leader in SBOM-powered DevSecOps and container security.

—Tim Zeller, EVP Sales & Marketing

The agreements also included DevSecOps luminaries HashiCorp and Rancher Government, as well as CloudBees, Infoblox, GitLab, CrowdStrike, and F5 Networks; all are now part of the preferred vendor list for all DoD missions that require cybersecurity solutions generally, and software supply chain security specifically.

Anchore is steadily growing its presence on federal contracts and catalogs such as Iron Patriot & Minerva, GSA, 2GIT, NASA SEWP, ITES and, most recently, JFAC (Joint Federated Assurance Center).

What does this mean?

Similar to the GSA Advantage marketplace, DoD missions can now procure Anchore through the fully negotiated and approved ESI Agreements on the Solutions for Enterprise-Wide Procurement (SEWP) Marketplace. 

Anchore’s History with DoD

This award continues Anchore’s deepening relationship with the DoD. Starting in 2020, the DoD has vetted and approved Anchore’s container vulnerability scanning tools. Anchore is named in both the DoD Container Image Creation and Deployment Guide and the DoD Container Hardening Process Guide as recommended solutions.

The same year, Anchore was selected by the US Air Force’s Platform One to become the software supply chain vendor to implement the best practices in the above guides for all software built on the platform. Read our case study on how Anchore partnered with Platform One to build the premier DevSecOps platform for the DoD.

The following year, Anchore won the Small Business Innovation Research (SBIR) Phase III contract with Platform One to integrate directly into the Iron Bank container image process. If your image has achieved Iron Bank certification it is because Anchore’s solution has given it a passing grade. Read more about this DevSecOps success story in our case study with the Iron Bank.

Due to the success of Platform One within the US Air Force, in 2022 Anchore partnered with the US Navy to secure the Black Pearl DevSecOps platform. Similar to Platform One, Black Pearl is the go-to standard for modern software development within the Department of the Navy (DON).

As Anchore continued to expand its relationship with the DoD and federal agencies, its offerings became available for purchase through the online government marketplaces and contracts such as GSA Advantage and Second Generation IT Blanket Purchase Agreements (2GIT), NASA SEWP, Iron Patriot/Minerva, ITES and JFAC. The ESI’s DevSecOps Phase II award was built on the back of all of the previous success stories that came before it. 

Achieving ATO is now easier with the inclusion of Anchore into the DoD ESI. Read our white paper on DoD software factory best practices to reach cATO or RAISE 2.0 compliance in days versus months.

We advise on best practices that are setting new standards for security and efficiency in DoD software factories, such as hardening container images, automating policy enforcement, and continuously monitoring for vulnerabilities.

Anchore Previews Grype Support for Azure Linux 3.0

The Anchore OSS team was on the Microsoft community call for Mariner users last week. At this meeting, we got a chance to demo some new grype capabilities for when Azure Linux 3.0 becomes generally available.

The Anchore OSS team builds its vulnerability feeds and data sourcing out in the open. It’s important to note that an update to support a new distro release (or naming migration for past releases) can require pull requests in up to three different repositories. Let’s look at the pull requests supporting this new release of Azure Linux and walk through how we can build a local copy of the demo on our machines.

Grype ecosystem changes that support new Linux distributions

Here are the three pull requests required to get Azure Linux 3.0 working with grype.

  • Grype-db: this change asserts that the new data shape and data mapping is being done correctly when processing the new Azure Linux 3.0 vulnerability data
  • Vunnel: this change sources the vulnerability data from Microsoft and transforms it into a common schema that grype-db can distribute
  • Grype: this change adds the base distro types used by grype-db, vunnel, and grype so that matching can be correctly associated with both the old mariner and new Azure Linux 3.0 data

For this preview, let’s do a quick walkthrough of how a user can test this new functionality locally and get a grype database set up for just Azure Linux 3.0. When Azure Linux 3.0 is released as generally available, readers can look forward to a more technical post on how the vunnel and grype-db data pipeline works in GitHub Actions, what matching looks like, and how syft/grype can discern the different distribution versions.

Let’s get our demo working locally in anticipation of the coming release!

Setting up the Demo

To get the demo set up, readers will want to make sure they have the following installed:

  • Git to clone and interact with the repositories
  • The latest version of Golang
  • A managed version of Python running at 3.12.x. If you need help getting a managed version of Python set up, we recommend mise.
  • The poetry python dependency manager 
  • Make is also required as part of developing and bootstrapping commands in the three development environments.

After the dev dependencies are installed, clone the three repositories listed above (grype, grype-db, and vunnel) into a local development folder and check out the branches listed in the above pull requests. I have included a script to do all this for you below.

#!/bin/bash

# Define the repositories and the branch
REPOS=(
    "https://github.com/anchore/grype.git"
    "https://github.com/anchore/grype-db.git"
    "https://github.com/anchore/vunnel.git"
)
BRANCH="feat-azure-linux-3-support"
FOLDER="demo"

# Create the folder if it doesn't exist
mkdir -p "$FOLDER"

# Change to the folder
cd "$FOLDER" || exit

# Clone each repository, checkout the branch, and run make bootstrap
for REPO in "${REPOS[@]}"; do
    # Extract the repo name from the URL
    REPO_NAME=$(basename -s .git "$REPO")

    # Clone the repository
    git clone "$REPO"

    # Change to the repository directory
    cd "$REPO_NAME" || exit

    # Checkout the branch
    git checkout "$BRANCH"

    # Run make bootstrap
    make bootstrap

	# Special handling for grype-db repository
    if [ "$REPO_NAME" == "grype-db" ]; then
        # Add the replace directive to go.mod
        echo 'replace github.com/anchore/grype v0.78.0 => ../grype' >> go.mod

        # Run go mod tidy
        go mod tidy
    fi

    # Special handling for grype repository
    if [ "$REPO_NAME" == "grype" ]; then
        # Run go mod tidy
        go mod tidy
    fi

    # Change back to the parent folder
    cd ..

done

echo "All repositories have been cloned, checked out, and built."

Pulling the new Azure Linux 3.0 vulnerability data

We will be doing all of our work in the vunnel repository. We needed to pull the other two repositories since vunnel can orchestrate and build those binaries to accomplish its data aggregation goals. 

To get all the repositories built and usable in vunnel, run the following commands:

cd demo/vunnel
poetry shell
make dev provider="mariner"
make update-db

That should produce output similar to the following:

Entering vunnel development shell...
• Configuring with providers: mariner ...
• Writing grype config: ./demo/vunnel/.grype.yaml ...
• Writing grype-db config: ./demo/vunnel/.grype-db.yaml ...
• Activating poetry virtual env: /Library/Caches/pypoetry/virtualenvs/vunnel-0PTQ8JOw-py3.12 ...
• Installing editable version of vunnel ...
• Building grype ...
• Building grype-db ...
mkdir -p ./bin

Note: development builds grype and grype-db are now available in your path.
To update these builds run 'make build-grype' and 'make build-grype-db' respectively.
To run your provider and update the grype database run 'make update-db'.
Type 'exit' to exit the development shell.

.....Records being processed

This should lead to a local vulnerability db being built for just the Azure Linux 3.0 data. You can interact with this data and use the locally built grype to see how the data can be used against an older preview image of Azure Linux 3.0.

Let’s run the following command to interact with the new Azure Linux 3.0 data and preview grype against an older dev build of the container image to make sure everything is working correctly:

./bin/grype azurelinuxpreview.azurecr.io/public/azurelinux/base/core:3.0.20240401-amd64

  Loaded image azurelinuxpreview.azurecr.io/public/azurelinux/base/core:3.0.20240401-amd64
  Parsed image sha256:3017b52132fb240b9c714bd09e88c4bc1f8e55860de23c74fe2431b8f75981dd
  Cataloged contents 9b4fcfdd3a247b97e02cda6011cd6d6858dcdf98d1f95fb8af54d57d2da89d5f
   ├──  Packages                        [75 packages]
   ├──  File digests                    [1,495 files]
   ├──  File metadata                   [1,495 locations]
   └──  Executables                     [380 executables]
  Scanned for vulnerabilities     [17 vulnerability matches]
   ├── by severity: 0 critical, 8 high, 7 medium, 2 low, 0 negligible
   └── by status:   17 fixed, 0 not-fixed, 0 ignored
NAME          INSTALLED      FIXED-IN         TYPE  VULNERABILITY   SEVERITY
expat         2.5.0-1.azl3   0:2.6.2-1.azl3   rpm   CVE-2024-28757  High
expat         2.5.0-1.azl3   0:2.6.2-1.azl3   rpm   CVE-2023-52425  High
expat         2.5.0-1.azl3   0:2.6.2-1.azl3   rpm   CVE-2023-52426  Medium
expat-libs    2.5.0-1.azl3   0:2.6.2-1.azl3   rpm   CVE-2024-28757  High
expat-libs    2.5.0-1.azl3   0:2.6.2-1.azl3   rpm   CVE-2023-52425  High
expat-libs    2.5.0-1.azl3   0:2.6.2-1.azl3   rpm   CVE-2023-52426  Medium
glibc         2.38-3.azl3    0:2.38-6.azl3    rpm   CVE-2023-6779   High
glibc         2.38-3.azl3    0:2.38-6.azl3    rpm   CVE-2023-6246   High
glibc         2.38-3.azl3    0:2.38-6.azl3    rpm   CVE-2023-5156   High
glibc         2.38-3.azl3    0:2.38-6.azl3    rpm   CVE-2023-4911   High
glibc         2.38-3.azl3    0:2.38-6.azl3    rpm   CVE-2023-6780   Medium
libgcc        13.2.0-3.azl3  0:13.2.0-7.azl3  rpm   CVE-2023-4039   Medium
libstdc++     13.2.0-3.azl3  0:13.2.0-7.azl3  rpm   CVE-2023-4039   Medium
openssl       3.1.4-3.azl3   0:3.3.0-1.azl3   rpm   CVE-2023-6237   Medium
openssl       3.1.4-3.azl3   0:3.3.0-1.azl3   rpm   CVE-2024-2511   Low
openssl-libs  3.1.4-3.azl3   0:3.3.0-1.azl3   rpm   CVE-2023-6237   Medium
openssl-libs  3.1.4-3.azl3   0:3.3.0-1.azl3   rpm   CVE-2024-2511   Low
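
If you want to slice this data further, grype can also emit JSON for post-processing. As a quick sketch (the field names are taken from grype's JSON output, so double-check them against your grype version), the following lists each package with its CVE and the first fixed version:

# Emit the scan as JSON and print package, CVE, and fixed version as tab-separated values
./bin/grype azurelinuxpreview.azurecr.io/public/azurelinux/base/core:3.0.20240401-amd64 -o json \
  | jq -r '.matches[] | [.artifact.name, .vulnerability.id, (.vulnerability.fix.versions[0] // "none")] | @tsv'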

Updating the image

Many vulnerable container images can be remediated by consuming the upstream security team’s fixes. Let’s run the same command against the latest preview version released from Microsoft:

./bin/grype azurelinuxpreview.azurecr.io/public/azurelinux/base/core:3.0

  Loaded image azurelinuxpreview.azurecr.io/public/azurelinux/base/core:3.0
  Parsed image sha256:234cac9f296dd1d336eecde7a97074bec0d691c6fd87bd4ff098b5968e579ce1
  Cataloged contents 9964aca715152fb6b14bfb57be5e27c655fb7d733a33dd995a5ba72157c54ee7
   ├──  Packages                        [76 packages]
   ├──  File digests                    [1,521 files]
   ├──  File metadata                   [1,521 locations]
   └──  Executables                     [380 executables]
  Scanned for vulnerabilities     [0 vulnerability matches]
   ├── by severity: 0 critical, 0 high, 0 medium, 0 low, 0 negligible
   └── by status:   0 fixed, 0 not-fixed, 0 ignored
No vulnerabilities found

Awesome! Microsoft’s security team has been highly responsive with the Azure Linux 3 preview images, publishing up-to-date images that contain fixes or remediations for any security findings.

We’re excited to see the new Azure Linux 3 release when it’s ready! In the meantime, you can grab our latest Grype release and try it on all your other containers. If you have questions or problems, join the Anchore Open Source Team on Discourse or check out one of our weekly Live Streams on YouTube.

Automate your SBOM management with Anchore Enterprise. Get instant access with a 15-day free trial.

Anchore Enterprise 5.8 Adds KEV Enrichment Feed

Today we have released Anchore Enterprise 5.8, featuring the integration of the U.S. Cybersecurity and Infrastructure Security Agency’s (CISA) Known Exploited Vulnerabilities (KEV) catalog as a new vulnerability feed.

Previously, Anchore Enterprise matched software libraries and frameworks inside applications against vulnerability databases such as the National Vulnerability Database (NVD), the GitHub Advisory Database, or individual vendor feeds. With Anchore Enterprise 5.8, customers can augment their vulnerability feeds with the KEV catalog without having to leave the dashboard. In addition, teams can automatically flag exploitable vulnerabilities as software is being developed, or gate build artifacts from being released into production.

Before we jump into what all of this means, let’s take a step back and get some context to KEV and its impact on DevSecOps pipelines.

What is CISA KEV?

The KEV (Known Exploited Vulnerabilities) catalog is a critical cybersecurity resource maintained by the U.S. Cybersecurity and Infrastructure Security Agency (CISA). It is a curated database of vulnerabilities that are known to be actively exploited in the wild. While addressing these vulnerabilities is mandatory for U.S. federal agencies under Binding Operational Directive 22-01, the KEV catalog serves as an essential public resource for improving cybersecurity for any organization.

The primary difference between CISA KEV and a standard vulnerability feed (e.g., the CVE program) is the qualifier “actively exploited”. Actively exploited vulnerabilities are being used by attackers to compromise systems right now. They are real, and your organization may be standing in the line of fire, whereas the CVE program lists vulnerabilities that may or may not have any available exploits. Due to the imminent threat they pose, actively exploited vulnerabilities are considered the highest risk outside of an active security incident.

The benefits of KEV enrichment

The KEV catalog offers significant benefits to organizations striving to improve their cybersecurity posture. One of its primary advantages is its high signal-to-noise ratio. By focusing exclusively on vulnerabilities that are actively being exploited in the wild, the KEV cuts through the noise of countless potential vulnerabilities, allowing developers and security teams to prioritize their efforts on the most critical and immediate threats. This focused approach ensures that limited resources are allocated to addressing the vulnerabilities that pose the greatest risk, significantly enhancing an organization’s security efficiency.

Moreover, the KEV can be leveraged as a powerful tool in an organization’s development and deployment processes. By using the KEV as a trigger for build pipeline gates, companies can prevent exploitable vulnerabilities from being promoted to production environments. This proactive measure adds an extra layer of security to the software development lifecycle, reducing the risk of deploying vulnerable code. 

Additionally, while adherence to the KEV is not yet a universal compliance requirement, it represents a security best practice that forward-thinking organizations are adopting. Given the trend of such practices evolving into compliance mandates, integrating the KEV into security protocols can be seen as a form of future-proofing, potentially easing the transition if and when such practices inevitably become compliance requirements.

How Anchore Enterprise delivers KEV enrichment

With Anchore Enterprise, CISA KEV is now a vulnerability feed like any other data feed that gets imported into the system. Anchore Enterprise can be configured to pull the catalog directly from the source as part of the deployment’s feed service.

To make use of the new KEV data, we have added a rule option to the Anchore Policy Engine that allows a STOP or WARN to be configured when a detected vulnerability appears on the KEV list. When any application build, registry store, or runtime deploy occurs, Anchore Enterprise evaluates the artifact’s SBOM against the security policy. If the SBOM has been annotated with a KEV entry, the policy engine can return a STOP value that tells the build pipeline to fail the step and reports the KEV entry as the source of the violation.
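
As a rough sketch of what such a rule looks like inside a policy bundle, the snippet below shows the general shape; the gate and trigger identifiers are assumptions based on the UI labels described next, so confirm the exact names against the policy documentation for your release.

{
  "gate": "vulnerabilities",
  "trigger": "kev_list",
  "action": "STOP",
  "params": []
}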

To configure the KEV feed as a trigger in the policy engine, first select vulnerabilities as the gate, then kev list as the trigger. Finally, choose an action.

Anchore Enterprise dashboard policy engine rule set configuration showing vulnerabilities as the gate value and the CISA KEV catalog as the trigger value.

After you save the new rule, you will see the kev list rule as part of the entire policy.

Anchore Enterprise 5.8 policy engine dashboard showing all rules for the default policy including the CISA KEV catalog rule at the top (highlighted in the red square).

After scanning a container with the policy that has the kev list rule in it, you can view all dependencies that match the kev list vulnerability feed.

Anchore Enterprise 5.8 vulnerability scan report with policy enrichment and policy actions. All software dependencies are matched against the CISA KEV catalog of known exploitable vulnerabilities and the assigned action is reported in the dashboard.

Next Steps

To stay on top of our releases, sign up for our monthly newsletter or follow our LinkedIn account. If you are already an Anchore customer, please reach out to your account manager to upgrade to 5.8 and gain access to KEV support. We also offer a 15-day free trial to get hands-on with Anchore Enterprise, or you can reach out to us for a guided tour.

A Guide to FedRAMP in 2024: FAQs & Key Takeaways

This blog post has been archived; its content now lives in a supporting pillar page on FedRAMP.

DevSecOps Evolution: How DoD Software Factories Are Reshaping Federal Compliance

Anchore’s Vice President of Security, Josh Bressers, recently did an interview with Fed Gov Today about the role of automation in DevSecOps and how it is impacting the US federal government. We’ve condensed the highlights of the interview into a snackable blog post below.

Automation is the foundation of DevSecOps

Automation isn’t just a buzzword; it is the foundation of DevSecOps. It is what gives meaning to marketing taglines like “shift left”. The point of DevSecOps is to create automated workflows that provide feedback to software developers as they are writing the application. This unwinds the previous practice of artificially grouping all of the “compliance” or “security” tasks into large blocks at the end of the development process. The challenge with that pattern is that feedback is delayed, and design decisions become difficult to undo after development has completed. By inverting the narrative and automating feedback as design decisions are made, developers are able to prevent compliance or security issues before they become deeply embedded in the software.

DoD Software Factories are leading the way in DevSecOps adoption

The US Department of Defense (DoD) is at the forefront of implementing DevSecOps through its DoD software factory model. The US Navy’s Black Pearl and the Air Force’s Platform One are perfect examples of this program. These organizations are leveraging automation to streamline compliance work. Instead of relying on manual documentation ahead of Authority to Operate (ATO) reviews, automated workflows built directly into the software development pipeline provide direct feedback to developers. This approach has proven highly effective, as Bressers emphasizes in his interview:

It’s obvious why the DoD software factory model is catching on. It’s because they work. It’s not just talk, it’s actually working. There’s many organizations that have been talking about DevSecOps for a long time. There’s a difference between talking and doing. Software factories are doing and it’s amazing.

—Josh Bressers, VP of Security, Anchore

Benefits of compliance automation

By giving compliance the same treatment as security (i.e., automate all the things), tasks that once took weeks or even months can now be completed in minutes or hours. This dramatic reduction in time-to-compliance not only accelerates development cycles but also allows teams to focus on collaboration and solution delivery rather than getting bogged down in procedural details. The result is a “shift left” approach that extends beyond security to compliance as well. When compliance is integrated early in the development process, the benefits cascade down the entire development waterfall.

Compliance automation is shifting the policy checks left into the software development process. What this means is that once your application is finished; instead of the compliance taking weeks or months, we’re talking hours or minutes.

—Josh Bressers, VP of Security, Anchore

Areas for improvement

While automation is crucial, there are still several areas for improvement in DevSecOps environments. Key focus areas include ensuring developers fully understand the automated processes, improving communication between team members and agencies, and striking the right balance between automation and human oversight. Bressers emphasizes the importance of letting “people do people things” while leveraging computers for tasks they excel at. This approach fosters genuine collaboration and allows teams to focus on meaningful work rather than simply checking boxes to meet compliance requirements.

Standardizing communication workflows with integrated developer tools

Software development pipelines are primarily platforms to coordinate the work of distributed teams of developers. They act like old-fashioned switchboard operators that connect one member of the development team to the next as they hand off work in the development production line. Leveraging developer tooling like GitLab or GitHub standardizes communication workflows. These platforms provide mechanisms for different team members to interact across various stages of the development pipeline. Teams can easily file and track issues, automatically pass or fail tests (e.g., compliance tests), and maintain a searchable record of discussions. This approach facilitates better understanding between developers and those identifying issues, leading to more efficient problem-solving and collaboration.

The government getting ahead of the private sector: an unexpected narrative inversion

In a surprising turn of events, Bressers points out that government agencies are now leading the way in DevSecOps implementation by integrating automated compliance. Historically often seen as technologically behind, federal agencies, through the DoD software factory model, are setting new standards that are likely to influence the private sector. As these practices become more widespread, contractors and private companies working with the government will need to adapt to these new requirements. This shift is evident in recent initiatives like the SSDF attestation questionnaire and White House Executive Order (EO) 14028. These initiatives are setting new expectations for federal contractors, signaling a broader move towards making compliance a native pillar of DevSecOps.

This is one of the few instances in recent memory where the government is truly leading the way. Historically the government has often been the butt of jokes about being behind in technology but these DoD software factories are absolutely amazing. The next thing that we’re going to see is the compliance expectations that are being built into these DoD software factories will seep out into the private sector. The SSDF attestation and the White House Executive Order are only the beginning. Ironically my expectation is everyone is going to have to start paying attention to this, not just federal agencies.

—Josh Bressers, VP of Security, Anchore

Next Steps

If you’re interested in learning more about how to future-proof your software supply chain with compliance automation via the DoD software factory model, be sure to read our white paper.

If you’d like to hear the interview in full, be sure to watch it on Fed Gov Today’s Youtube channel.

Automate Container Vulnerability Scanning in CI with Anchore

Achieve container vulnerability scanning nirvana in your CI pipeline with Anchore Enterprise and your preferred CI platform, whether it’s GitHub, GitLab, or Jenkins. Identifying vulnerabilities, security issues, and compliance policy failures early in the software development process is crucial. It’s certainly preferable to uncover these issues during development rather than having them discovered by a customer or during an external audit.

Early detection of vulnerabilities ensures that security and compliance are integrated into your development workflow, reducing the risk of breaches and compliance violations. This proactive approach not only protects your software but also saves time and resources by addressing issues before they escalate.

Enabling CI Integration

At a high level, the steps to connect any CI platform to Enterprise are broadly the same, with implementation details differing between each vendor.

  • Enable network connectivity between CI and Enterprise
  • Capture Enterprise configuration for AnchoreCTL
  • Craft an automation script to operate after the build process
    • Install AnchoreCTL
    • Capture built container details
    • Use AnchoreCTL to submit container details to Enterprise

Once SBOM generation is integrated into the CI pipeline and the SBOMs are submitted to Anchore Enterprise, the following features can quickly be leveraged:

  • Known vulnerabilities with severity, and fix availability
  • Search for accidental ‘secrets’ sharing such as private API keys
  • Scan for malware like trojans and viruses
  • Policy enforcement to comply with standards like FedRAMP, CISA and DISA
  • Remediation by notifying developers and other agents via standard tools like GitHub issues, JIRA, and Slack
  • Scheduled reporting on container insights

CI Integration by Example

Taking GitHub Actions as an example, we can outline the requirements and settings to get up and running with automated SBOM generation and vulnerability management.

Network connectivity

AnchoreCTL uses port 8228 for communication with the Anchore Enterprise SBOM ingest and management API. Ensure the Anchore Enterprise host where this is configured is accessible on that port from GitHub. This is site specific and may require firewall, VLAN, or other network changes.
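
Before wiring up the workflow, it can be worth confirming the network path from a host on the same network as your runners; a simple port check is enough (the hostname below is an example):

# Verify the Anchore Enterprise API port is reachable
nc -zv anchore-enterprise.example.com 8228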

Required configuration

AnchoreCTL requires only three environment variables, typically set as GitHub Actions variables and secrets.

  • ANCHORECTL_URL – the URL of the Anchore Enterprise API endpoint, e.g. http://anchore-enterprise.example.com:8228
  • ANCHORECTL_USERNAME – the user account in Anchore Enterprise that anchorectl will authenticate as
  • ANCHORECTL_PASSWORD – the password for that account, set on the Anchore Enterprise instance

On the GitHub repository go to Settings -> Secrets and Variables -> Actions.

Under the ‘Variables’ tab, add ANCHORECTL_URL & ANCHORECTL_USERNAME, and set their values. In the ‘Secrets’ tab, add ANCHORECTL_PASSWORD and set the value.
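
If you prefer working from the command line, the same values can be set with the GitHub CLI (a recent gh release is assumed; the URL and username below are examples):

# Set the repository variables and secret used by the workflow
gh variable set ANCHORECTL_URL --body "http://anchore-enterprise.example.com:8228"
gh variable set ANCHORECTL_USERNAME --body "admin"
gh secret set ANCHORECTL_PASSWORD    # gh prompts for the secret value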

Automation script

Below are sample snippets from a GitHub action that should be placed in the repository under .github/workflows to enable SBOM generation in Anchore Enterprise.

First, our action needs a name:

name: Anchore Enterprise Centralized Scan

Pick one or more from this next section, depending on when you require the action to be triggered. It could be based on pushes to the main or other named branches, on a timed schedule, or manually.

Commonly when configuring an action for the first time, manual triggering is used until proven working, then timed or branch automation is enabled later.

on:
  ## Action runs on a push to the branches listed
  push:
    branches:
      - main
  ## Action runs on a regular schedule
  schedule: 
      ## Run at midnight every day
    - cron: '0 0 * * *'
  ## Action runs on demand build
  workflow_dispatch:
    inputs:
      mode:
        description: 'On-Demand Build'  

In the env section we pass in the settings gathered and configured inside the GitHub web UI earlier, and also set REGISTRY so the build and scan jobs know which container registry to publish to. Additionally, the optional ANCHORECTL_FAIL_BASED_ON_RESULTS boolean defines (if true) whether we want the entire action to fail based on scan results. This may be desirable, to block further processing if any vulnerabilities, secrets or malware are identified.

env:
  ANCHORECTL_URL: ${{ vars.ANCHORECTL_URL }}
  ANCHORECTL_USERNAME: ${{ vars.ANCHORECTL_USERNAME }}
  ANCHORECTL_PASSWORD: ${{ secrets.ANCHORECTL_PASSWORD }}
  ANCHORECTL_FAIL_BASED_ON_RESULTS: false
  ## Container registry the built image is pushed to; referenced as env.REGISTRY below
  REGISTRY: ghcr.io

Now we start the actual body of the action, which comprises two jobs, ‘Build’ and ‘Anchore’. The ‘Build’ example here uses externally defined steps to check out the code in the repo, build a container using docker, then push the resulting image to the container registry. In this case we build and publish to the GitHub Container Registry (ghcr); however, we could publish elsewhere.

jobs:

  Build:
    runs-on: ubuntu-latest
    steps:

    - name: "Set IMAGE environmental variables"
      run: |
        echo "IMAGE=${REGISTRY}/${GITHUB_REPOSITORY}:${GITHUB_REF_NAME}" >> $GITHUB_ENV

    - name: Checkout Code
      uses: actions/checkout@v3

    - name: Log in to the Container registry
      uses: docker/login-action@v2
      with:
        registry: ${{ env.REGISTRY }}
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}      

    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v2

    - name: build local container
      uses: docker/build-push-action@v3
      with:
        tags: ${{ env.IMAGE }}
        push: true
        load: false

The next job actually generates the SBOM, so let’s break it down. First, the usual boilerplate; note that this job depends on the previous ‘Build’ job having already run.

  Anchore:
    runs-on: ubuntu-latest
    needs: Build

    steps:

The same registry settings are used here as in the ‘Build’ job above, then we check out the code onto the action runner. The IMAGE variable will be used by the anchorectl command later to submit into Anchore Enterprise.

    - name: "Set IMAGE environment variables"
      run: |
        echo "IMAGE=${REGISTRY}/${GITHUB_REPOSITORY}:${GITHUB_REF_NAME}" >> $GITHUB_ENV

    - name: Checkout Code
      uses: actions/checkout@v3

Installing the AnchoreCTL binary inside the action runner is required to send requests to the Anchore Enterprise API. Note that the version number specified as the last parameter should match the version of your Anchore Enterprise.

    - name: Install Latest anchorectl Binary
      run: |
        curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b ${HOME}/.local/bin v5.7.0
        export PATH="${HOME}/.local/bin/:${PATH}"

The Connectivity check is a good way to ensure anchorectl is installed correctly, and configured to connect to the right Anchore Enterprise instance.

    - name: Connectivity Check
      run: |
        anchorectl version
        anchorectl system status
        anchorectl feed list

Now we actually queue the image up for scanning by our Enterprise instance. Note the use of --wait to ensure the GitHub Action pauses until the backend Enterprise instance completes the scan. Otherwise the next steps would likely fail, as the scan would not yet be complete.

    - name: Queue Image for Scanning by Anchore Enterprise
      run: |
        anchorectl image add --no-auto-subscribe --wait --dockerfile ./Dockerfile --force ${IMAGE} 

Once the backend Anchore Enterprise has completed the vulnerability, malware, and secrets scan, we use anchorectl to pull the list of vulnerabilities and display them as a table. This can be viewed in the GitHub Action log, if required.

    - name: Pull Vulnerability List
      run: |
        anchorectl image vulnerabilities ${IMAGE} 

Finally, the image check will pull down the results of the policy compliance as defined in your Anchore Enterprise. This will likely be a significantly shorter output than the full vulnerability list, depending on your policy bundle.

If the environment variable ANCHORECTL_FAIL_BASED_ON_RESULTS was set true earlier in the action, or -f is added to the command below, the action will return as a ‘failed’ run.

    - name: Pull Policy Evaluation
      run: |
        anchorectl image check --detail ${IMAGE}

That’s everything. If configured correctly, the action will run as required, and directly leverage the vulnerability, malware and secrets scanning of Anchore Enterprise.

Not just GitHub

While the example above is clearly GitHub specific, a similar configuration can be used in GitLab pipelines, Jenkins, or indeed any CI system that supports arbitrary shell scripts in automation.
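
As an illustration, a minimal GitLab CI job covering the same steps might look like the following. This is a sketch: it assumes ANCHORECTL_URL, ANCHORECTL_USERNAME and ANCHORECTL_PASSWORD are defined as masked CI/CD variables, and that an earlier job has already built and pushed the image referenced by $IMAGE.

anchore_scan:
  stage: test
  image: ubuntu:22.04
  script:
    # Install anchorectl, matching the version of the Enterprise deployment
    - apt-get update && apt-get install -y curl
    - curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b /usr/local/bin v5.7.0
    # Confirm connectivity, then queue the image and pull the results
    - anchorectl system status
    - anchorectl image add --no-auto-subscribe --wait --force ${IMAGE}
    - anchorectl image vulnerabilities ${IMAGE}
    - anchorectl image check --detail ${IMAGE}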

Conclusion

By integrating Anchore Enterprise into your CI pipeline, you can achieve a higher level of security and compliance for your software development process. Automating vulnerability scanning and SBOM management ensures that your software is secure, compliant, and ready for deployment.

Automate your SBOM management with Anchore Enterprise. Get instant access with a 15-day free trial.

High volume image scanning and vulnerability management at the Iron Bank (Platform One)

The Iron Bank provides Platform One and any US Department of Defense (DoD) agency with a hardened and centralized container image repository that supports the end-to-end lifecycle needed for secure software development. Anchore and the Iron Bank have been collaborating since 2020 to balance deployment velocity and policy compliance while maintaining rigorous security standards and adapting to new security threats.

The Challenge

The Iron Bank development team is responsible for the integrity and security of 1,800 base images that are provided to build and create software applications across the DoD. They face difficult tasks such as:

  • Providing hardened components for downstream applications across the DoD
  • Meeting rigorous security standards crucial for military systems
  • Improving deployment frequency while maintaining policy compliance
  • Reducing the burden of false positives on the development team

Camdon Cady, Chief Technology Officer at Platform One:

People want to be security minded, and they want to do the right thing. But what they really want is tooling that helps them to do that with all the necessary information in one place. That’s why we looked to Anchore for help.

The Solution

Anchore’s engineering team is deeply embedded with the Iron Bank infrastructure and development team to improve and maintain DevSecOps standards. Anchore Enterprise is the software supply chain security tool of choice as it provides: 

The Results: Sustainable security at DevOps speed

The partnership between Iron Bank and Anchore has yielded impressive results:

  • Reduced False Positives: The introduction of an exclusion feed captured over 12,000 known false positives, significantly reducing the security assessment load.
  • Improved SBOM Accuracy: Custom capabilities like SBOM Hints and SBOM Corrections allow for more precise component identification and vulnerability mapping.
  • Standardized Compliance: A jointly developed custom policy enforces the DoD Container Hardening requirements consistently across all images.
  • Enhanced Scanning Capabilities: Additions like time-based allowlisting, content hints, and image scanning have expanded Iron Bank’s security coverage.
  • Streamlined Processes: The standardized scanning process adheres to the DoD’s Container Hardening Guide while delivering high-quality vulnerability and compliance findings.

Even though security is important for all organizations, the stakes are higher for the DoD. What we need is a repeatable development process. It’s imperative that we have a standardized way of building secure software across our military agencies.

Camdon Cady, Chief Technology Officer at Platform One

Download the full case study to learn more about how Anchore Enterprise can help your organization achieve a proactive security stance while maintaining development velocity.

How Infoblox Scaled Product Security and Compliance with Anchore Enterprise

In today’s fast-paced software development world, maintaining the highest levels of security and compliance is a daunting challenge. Our new case study highlights how Infoblox, a leader in Enterprise DDI (DNS, DHCP, IPAM), successfully scaled their product security and compliance efforts using Anchore Enterprise. Let’s dive into their journey and the impressive results they achieved.

The Challenge: Scaling security in high-velocity environments

Infoblox faced several critical challenges in their product security efforts:

  • Implementing “shift-left” security at scale for 150 applications developed by over 600 engineers with a security team of 15 (a 40:1 ratio!)
  • Managing vulnerabilities across thousands of containers produced monthly
  • Maintaining multiple compliance certifications (FedRAMP, SOC 2, StateRAMP, ISO 27001)
  • Integrating seamlessly into existing DevOps workflows

“When I first started, I was manually searching GitHub repos for references to vulnerable libraries,” recalls Sukhmani Sandhu, Product Security Engineer at Infoblox. This manual approach was unsustainable and prone to errors.

The Solution: Anchore Enterprise

To address these challenges, Infoblox turned to Anchore Enterprise to provide:

  • Container image scanning with low false positives
  • Comprehensive vulnerability and CVE management
  • Native integrations with Amazon EKS, Harbor, and Jenkins CI
  • A FedRAMP, SOC 2, StateRAMP, and ISO compliant platform

Chris Wallace, Product Security Engineering Manager at Infoblox, emphasizes the importance of accuracy: “We’re not trying to waste our team or other team’s time. We don’t want to report vulnerabilities that don’t exist. A low false-positive rate is paramount.”

Impressive Results

The implementation of Anchore Enterprise transformed Infoblox’s product security program:

  • 75% reduction in time for manual vulnerability detection tasks
  • 55% reduction in hours allocated to retroactive vulnerability remediation
  • 60% reduction in hours spent on compliance tasks
  • Empowered the product security team to adopt a proactive, “shift-left” security posture

These improvements allowed the Infoblox team to focus on higher-value initiatives like automating policy and remediation. Developers even began self-adopting scanning tools during development, catching vulnerabilities before they entered the build pipeline.

“We effectively had no tooling before Anchore. Everything was manual. We reduced the amount of time on vulnerability detection tasks by 75%,” says Chris Wallace.

Conclusion: Scaling security without compromise

Infoblox’s success story demonstrates that it’s possible to scale product security and compliance efforts without compromising on development speed or accuracy. By leveraging Anchore Enterprise, they transformed their security posture from reactive to proactive, significantly reduced manual efforts, and maintained critical compliance certifications.

Are you facing similar challenges in your organization? Download the full case study and take the first step towards a secure, compliant, and efficient development environment. Or learn more about how Anchore’s container security platform can help your organization.

Introduction to the DoD Software Factory

In the rapidly evolving landscape of national defense and cybersecurity, the concept of a Department of Defense (DoD) software factory has emerged as a cornerstone of innovation and security. These software factories represent an integration of the principles and practices found within the DevSecOps movement, tailored to meet the unique security requirements of the DoD and Defense Industrial Base (DIB). 

By fostering an environment that emphasizes continuous monitoring, automation, and cyber resilience, DoD Software Factories are at the forefront of the United States Government’s push towards modernizing its software and cybersecurity capabilities. This initiative not only aims to enhance the velocity of software development but also ensures that these advancements are achieved without compromising on security, even against the backdrop of an increasingly sophisticated threat landscape.

Building and running a DoD software factory is so central to the future of software development that “Establish a Software Factory” is one of the explicitly named plays from the DoD DevSecOps Playbook. On top of that, the compliance capstone of the authorization to operate (ATO), or its DevSecOps-infused cousin the continuous ATO (cATO), effectively requires a software factory in order to meet the requirements of the standard. In this blog post, we’ll break down the concept of a DoD software factory and give a high-level overview of the components that make one up.

What is a DoD software factory?

A Department of Defense (DoD) Software Factory is a software development pipeline that embodies the principles and tools of the larger DevSecOps movement with a few choice modifications that conform to the extremely high threat profile of the DoD and DIB. It is part of the larger software and cybersecurity modernization trend that has been a central focus for the United States Government in the last two decades.

A DoD Software Factory aims to create an ecosystem that enables continuous delivery of secure software that meets the needs of end users while ensuring cyber resilience (a DoD catchphrase that emphasizes the transition from point-in-time security compliance to continuous security compliance). In other words, the goal is to leverage automation of software security tasks in order to fulfill the promise of the DevSecOps movement to increase the velocity of software development.

What is an example of a DoD software factory?

Platform One is the canonical example of a DoD software factory. Run by the US Air Force, it offers a comprehensive portfolio of software development tools and services. It has come to prominence due to its hosted services like Repo One for source code hosting and collaborative development, Big Bang for an end-to-end DevSecOps CI/CD platform, and Iron Bank for centralized container storage (i.e., a container registry). These services have led the way in demonstrating that the principles of DevSecOps can be integrated into mission-critical systems while still preserving the highest levels of security to protect the most classified information.

If you’re interested in learning more about how Platform One has unlocked the productivity bonus of DevSecOps while still maintaining DoD levels of security, watch our webinar with Camdon Cady, Chief of Operations and Chief Technology Officer of Platform One.

Who does it apply to?

Federal Service Integrators (FSI)

Any organization that works with the DoD as a federal service integrator will want to be intimately familiar with DoD software factories, as they will either have to build on top of existing software factories or, if the mission/program wants full control over its software factory, build their own for the agency.

Department of Defense (DoD) Mission

Any Department of Defense (DoD) mission will need to be well-versed in DoD software factories, as all of their software and systems will be required to run on a software factory and to both reach and maintain a cATO.

What are the components of a DoD Software Factory?

A DoD software factory is composed of both high-level principles and specific technologies that meet those principles. Below is a list of some of the most significant principles of a DoD software factory:

Principles of DevSecOps embedded into a DoD software factory

  1. Break down organizational silos
    • This principle is borrowed directly from the DevSecOps movement; specifically, the DoD aims to integrate software development, test, deployment, security and operations into a single culture within the organization.
  2. Open source and reusable code
    • Composable software building blocks are another DevSecOps principle; they increase productivity and reduce the security errors that come from developers writing security-sensitive code they are not experts in.
  3. Immutable Infrastructure-as-Code (IaC)
    • This principle focuses on treating the infrastructure that software runs on as ephemeral and managed via configuration rather than manual systems operations. Enabled by cloud computing (i.e., hardware virtualization), this principle increases the security of the underlying infrastructure through templated secure-by-design defaults, and improves reliability because all infrastructure has to be designed to fail at any moment.
  4. Microservices architecture (via containers)
    • Microservices are a design pattern that creates smaller software services that can be built and scaled independently of each other. This principle allows for less complex software that only performs a limited set of behaviors.
  5. Shift Left
    • Shift left is the DevSecOps principle that re-frames when and how security testing is done in the software development lifecycle. The goal is to begin security testing while software is being written and tested rather than after the software is “complete”. This prevents insecure practices from cascading into significant issues right as software is ready to be deployed.
  6. Continuous improvement through key capabilities
    • The principle of continuous improvement is a primary characteristic of the DevSecOps ethos, but the specific key capabilities defined in the DoD DevSecOps Playbook are what make this unique to the DoD.
  7. Define a DevSecOps pipeline
    • A DevSecOps pipeline is the system that utilizes all of the preceding principles in order to create the continuously improving security outcomes that are the goal of the DoD software factory program.
  8. Cyber resilience
    • Cyber resiliency is the goal of a DoD software factory. It is defined as “the ability to anticipate, withstand, recover from, and adapt to adverse conditions, stresses, attacks, or compromises on the systems that include cyber resources.”

Common tools and systems of a DoD software factory

  1. Code Repository (e.g., Repo One)
    • Where software source code is stored, managed and collaborated on.
  2. CI/CD Build Pipeline (e.g., Big Bang)
    • The system that automates the creation of software build artifacts, tests the software and packages the software for deployment.
  3. Artifact Repository (e.g., Iron Bank)
    • The storage system for software components used in development and the finished software artifacts that are produced from the build process.
  4. Runtime Orchestrator and Platform (e.g., Big Bang)
    • The deployment system that hosts the software artifacts pulled from the registry and keeps the software running so that users can access it.

How do I meet the security requirements for a DoD Software Factory? (Best Practices)

Use a pre-existing software factory

The benefit of using a pre-existing DoD software factory is the same as using a public cloud provider; someone else manages the infrastructure and systems. What you lose is the ability to highly customize your infrastructure to your specific needs. What you gain is the simplicity of only having to write software and allow others with specialized skill sets to deal with the work of building and maintaining the software infrastructure. When you are a car manufacturer, you don’t also want to be a civil engineering firm that designs roads.

To view existing DoD software factories, visit the Software Factory Ecosystem Coalition website.

Map of all DoD software factories in the US.

Roll your own by following DoD best practices 

If you need the flexibility and customization of managing your own software factory then we’d recommend following the DoD Enterprise DevSecOps Reference Design as the base framework. There are a few software supply chain security recommendations that we would make in order to ensure that things go smoothly during the authorization to operate (ATO) process:

  1. Continuous vulnerability scanning across all stages of CI/CD pipeline
    • Use a cloud-native vulnerability scanner that can be directly integrated into your CI/CD pipeline and called automatically during each phase of the SDLC (see the pipeline sketch after this list)
  2. Automated policy checks to enforce requirements and achieve ATO
    • Use a cloud-native policy engine in tandem with your vulnerability scanner in order to automate the reporting and blocking of software that is a security threat and a compliance risk
  3. Remediation feedback
    • Use a cloud-native policy engine that can provide automated remediation feedback to developers in order to maintain a high velocity of software development
  4. Compliance (Trust but Verify)
    • Use a reporting system that can be directly integrated with your CI/CD pipeline to create and collect the compliance artifacts that can prove compliance with DoD frameworks (e.g., CMMC and cATO)
  5. Air-gapped system
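
To make the first two recommendations more concrete, here is a simplified sketch of a pipeline step using Anchore's open source tools; the image reference and severity threshold are placeholders you would adapt to your own pipeline.

# Generate an SBOM for the built image and keep it as a compliance artifact
syft registry:registry.example.mil/my-app:1.2.3 -o spdx-json > my-app-1.2.3.spdx.json

# Scan the SBOM and fail the pipeline if any finding is high severity or worse
grype sbom:./my-app-1.2.3.spdx.json --fail-on high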

Is a software factory required in order to achieve cATO?

Technically, no. Effectively, yes. A cATO requires that your software is deployed on an Approved DoD Enterprise DevSecOps Reference Design, not a software factory specifically. If you build your own DevSecOps platform that meets the criteria for the reference design, then you have effectively rolled your own software factory.

How Anchore can help

The easiest and most effective method for achieving the security guarantees that a software factory is required to meet for its software supply chain is to use:

  1. An SBOM generation tool that integrates directly into your software development pipeline
  2. A container vulnerability scanner that integrates directly into your software development pipeline
  3. A policy engine that integrates directly into your software development pipeline
  4. A centralized database to store all of your software supply chain security logs
  5. A query engine that can continuously monitor your software supply chain and automate the creation of compliance artifacts

These are the primary components of both Anchore Enterprise and Anchore Federal, cloud-native, SBOM-powered software composition analysis (SCA) platforms that provide end-to-end software supply chain security to holistically protect your DevSecOps pipeline and automate compliance. This approach has been validated by the DoD; in fact, the DoD’s Container Hardening Process Guide specifically names Anchore Federal as a recommended container hardening solution.

Learn more about how Anchore fuses DevSecOps and DoD software factories.

Conclusion and Next Steps

DoD software factories can come off as intimidating at first, but hopefully we have broken them down into a more digestible form. At their core, they reflect the best of the DevSecOps movement, with specific adaptations for the extreme threat environment the DoD operates in, as well as the intersecting trend of modernizing federal security compliance standards.

If you’re looking to dive deeper into all things DoD software factory, we have a white paper that lays out the 6 best practices for container images in highly secure environments. Download the white paper below.

AnchoreCTL Setup and Top Tips

Introduction

Welcome to the beginner’s guide to AnchoreCTL, a powerful command-line tool designed for seamless interaction with Anchore Enterprise via the Anchore API. Whether you’re wrangling SBOMs, managing Kubernetes runtime inventories, or ensuring compliance at scale, AnchoreCTL is your go-to companion.

Overview

AnchoreCTL enables you to efficiently manage and inspect all aspects of your Anchore Enterprise deployments. It serves both as a human-readable configuration tool and a CLI for automation in CI/CD environments, making it indispensable for DevOps, security engineers, and developers.

If you’re familiar with Syft and Grype, AnchoreCTL will be a valuable addition to your toolkit. It offers enhanced capabilities to manage tens, hundreds, or even thousands of images and applications across your organization.

In this blog series, we’ll explore top tips and practical use cases to help you leverage AnchoreCTL to its fullest potential. In this part, we’ll review the basics of getting started with AnchoreCTL. In subsequent posts, we will dive deep on container scanning, SBOM Management and Vulnerability Management.

We’ll start by getting AnchoreCTL installed and learning about its configuration and use. I’ll be using AnchoreCTL on my macOS laptop, connected to a demo of Anchore Enterprise running on another machine.

Get AnchoreCTL

AnchoreCTL is a command-line tool available for macOS, Linux and Windows. The AnchoreCTL Deployment docs cover installation and deployment in detail. Grab the release of AnchoreCTL that matches your Anchore Enterprise install.

At the time of writing, the current release of AnchoreCTL and Anchore Enterprise is v5.6.0. Both are updated on a monthly cadence, and yours may be newer or older than what we’re using here. The AnchoreCTL Release Notes contain details about the latest and all historical releases of the utility.

You may have more than one Anchore Enterprise deployment on different releases. As AnchoreCTL is a single binary, you can install multiple versions on a system to support all the deployments in your landscape.

macOS / Linux

The following snippet will install the binary in a directory of your choosing. On my personal workstation, I use $HOME/bin, but anywhere in your $PATH is fine. Placing the application binary in /usr/local/bin/ makes sense in a shared environment.

$ # Download the macOS or Linux build of anchorectl
$ curl -sSfL  https://anchorectl-releases.anchore.io/anchorectl/install.sh  | sh -s -- -b $HOME/bin v5.6.0
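
If you need to keep several releases side by side, as mentioned above, one purely illustrative approach is to install each into its own directory and point a symlink at whichever one matches the deployment you’re working with:

$ # Install two anchorectl releases into versioned directories (versions are examples)
$ mkdir -p $HOME/bin/anchorectl-5.6.0 $HOME/bin/anchorectl-5.5.0
$ curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b $HOME/bin/anchorectl-5.6.0 v5.6.0
$ curl -sSfL https://anchorectl-releases.anchore.io/anchorectl/install.sh | sh -s -- -b $HOME/bin/anchorectl-5.5.0 v5.5.0

$ # Select the release to use by pointing a symlink on your $PATH at it
$ ln -sf $HOME/bin/anchorectl-5.6.0/anchorectl $HOME/bin/anchorectl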

Windows

The Windows install snippet grabs the zip file containing the binary. Once downloaded, unpack the zip and copy the anchorectl command somewhere appropriate.

$ # Download the Windows build of anchorectl
$ curl -o anchorectl.zip https://anchorectl-releases.anchore.io/anchorectl/v5.6.0/anchorectl_5.6.0_windows_amd64.zip

Setup

Quick check

Once AnchoreCTL is installed, check it’s working with a simple anchorectl version. It should print output similar to this:

$ # Show the version of the anchorectl command line tool
$ anchorectl version
Application:        anchorectl
Version:            5.6.0
SyftVersion:        v1.4.1
BuildDate:          2024-05-27T18:28:23Z
GitCommit:          7c134b46b7911a5a17ba1fa5f5ffa4e3687f170b
GitDescription:     v5.6.0
Platform:           darwin/arm64
GoVersion:          go1.21.10
Compiler:           gc

Configure

The anchorectl command has a --help option that displays a lot of useful information beyond just the command line options reference. Below are the first 15 lines to illustrate what you should see. The actual output is over 80 lines, so we’ve snipped it down here.

$ # Show the top 15 lines of the help
$ anchorectl --help | head -n 15
Usage:
  anchorectl [command]

Application Config:

  (search locations: .anchorectl.yaml, anchorectl.yaml, .anchorectl/config.yaml, ~/.anchorectl.yaml, ~/anchorectl.yaml, $XDG_CONFIG_HOME/anchorectl/config.yaml)

  # the URL to the Anchore Enterprise API (env var: "ANCHORECTL_URL")
  url: ""

  # the Anchore Enterprise username (env var: "ANCHORECTL_USERNAME")
  username: ""

  # the Anchore Enterprise user's login password (env var: "ANCHORECTL_PASSWORD")

On launch, the anchorectl binary will search for a yaml configuration file in a series of locations shown in the help above. For a quick start, just create .anchorectl.yaml in your home directory, but any of the listed locations are fine.

Here is my very basic .anchorectl.yaml which has been configured with the minimum values of url, username and password to get started. I’ve pointed anchorectl at the Anchore Enterprise v5.6.0 running on my Linux laptop ‘ziggy’, using the default port, username and password. We’ll see later how we can create new accounts and users.

$ # Show the basic config file
$ cat .anchorectl.yml
url: "http://ziggy.local:8228"
username: "admin"
password: "foobar"

Config Check

The configuration can be validated with anchorectl -v. If the configuration is syntactically correct, you’ll see the online help displayed, and the command will exit with return code 0. In this example, I have truncated the lengthy anchorectl -v output.

$ # Good config
$ cat .anchorectl.yml
url: "http://ziggy.local:8228"
username: "admin"
password: "foobar"

$ anchorectl -v
[0000]  INFO 
anchorectl version: 5.6.0
Usage:  anchorectl [command]



      --version         version for anchorectl
Use "anchorectl [command] --help" for more information about a command.

$ echo $?
0
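
Because the exit code reflects whether the configuration parsed cleanly, it can also be used as a guard in scripts or CI jobs, for example:

$ # Fail fast if the anchorectl configuration is invalid
$ anchorectl -v > /dev/null 2>&1 || { echo "anchorectl config is invalid"; exit 1; }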

In this example, I omitted a closing quotation mark on the url: line, to force an error.

$ # Bad config
$ cat .anchorectl.yml
url: "http://ziggy.local:8228
username: "admin"
password: "foobar"

$ anchorectl -v


error: invalid application config: unable to parse config="/Users/alan/.anchorectl.yml": While parsing config: yaml: line 1: did not find expected key

$ echo $?
1

Connectivity Check

Assuming the configuration file is syntactically correct, we can now validate the correct url, username and password are set for the Anchore Enterprise system with an anchorectl system status. If all is going well, we’ll get a report similar to this:

The output of anchorectl system status shows the services running on my Anchore Enterprise.

Multiple Configurations

You may also use the -c or --config option to specify the path to a configuration file. This is useful if you communicate with multiple Anchore Enterprise systems.

$ # Show the production configuration file
$ cat ./production.anchorectl.yml
url: "http://remotehost.anchoreservers.com:8228"
username: "admin"
password: "foobar"

$ # Show the development configuration file, which points to a different machine
$ cat ./development.anchorectl.yml
url: "http://workstation.local:8228"
username: "admin"
password: "foobar"

$ # Connect to remote production instance
$ anchorectl -c ./production.anchorectl.yml system status 
 Status system⋮

$ # Connect to developer workstation
$ anchorectl -c ./development.anchorectl.yml system status 
 Status system⋮

Environment Variables

Note from the --help further up that AnchoreCTL can be configured with environment variables instead of the configuration file. This can be useful when the tool is deployed in CI/CD environments, where these can be set using the platform ‘secret storage’.

So, without any configuration file, we can issue the same command but set the options via environment variables. I’ve truncated the output below, but note the ✔ Status system indicating a successful call to the remote system.

$ # Delete the configuration to prove we aren't using it
$ rm .anchorectl.yml
$ anchorectl system status 

error: 1 error occurred:  * no enterprise URL provided

$ # Use environment variables instead
$ ANCHORECTL_URL="http://ziggy.local:8228" \
  ANCHORECTL_USERNAME="admin" \
  ANCHORECTL_PASSWORD="foobar" \
  anchorectl system status 
 Status system⋮

Of course, in a CI/CD environment such as GitHub, GitLab, or Jenkins, these environment variables would be set in a secure store and only injected when the job running anchorectl is initiated.

Users

Viewing Accounts & Users

In the examples above, I’ve been using the default username and password for a demo Anchore Enterprise instance. AnchoreCTL can be used to query and manage the system’s accounts and users. Documentation for these activities can be found in the user management section of the docs.

$ # Show list of accounts on the remote instance
$ anchorectl account list 
 Fetched accounts
┌───────┬─────────────────┬─────────┐
 NAME   EMAIL            STATE   
├───────┼─────────────────┼─────────┤
 admin  admin@myanchore  enabled 
└───────┴─────────────────┴─────────┘

We can also list existing users on the system:

$ # Show list of users (if any) in the admin account
$ anchorectl user list --account admin 
 Fetched users
┌──────────┬──────────────────────┬───────────────────────┬────────┬──────────┬────────┐
 USERNAME  CREATED AT            PASSWORD LAST UPDATED  TYPE    IDP NAME  SOURCE 
├──────────┼──────────────────────┼───────────────────────┼────────┼──────────┼────────┤
 admin     2024-06-10T11:48:32Z  2024-06-10T11:48:32Z   native │          │        
└──────────┴──────────────────────┴───────────────────────┴────────┴──────────┴────────┘

Managing Accounts

AnchoreCTL can be used to add (account add), enable (account enable), disable (account disable) and remove (account delete) accounts from the system:

$ # Create a new account
$ anchorectl account add dev_team_alpha 
 Added account
Name: dev_team_alpha
Email:
State: enabled

$ # Get a list of accounts
$ anchorectl account list 
 Fetched accounts
┌────────────────┬─────────────────┬─────────┐
 NAME            EMAIL            STATE   
├────────────────┼─────────────────┼─────────┤
 admin           admin@myanchore  enabled 
 dev_team_alpha                   enabled 
 dev_team_beta                    enabled 
└────────────────┴─────────────────┴─────────┘

$ # Disable an account before deleting it
$ anchorectl account disable dev_team_alpha 
 Disabled account
State: disabled

$ # Delete the account
$ anchorectl account delete dev_team_alpha 
 Deleted account
No results

$ # Get a list of accounts
$ anchorectl account list 
 Fetched accounts
┌────────────────┬─────────────────┬──────────┐
 NAME            EMAIL            STATE    
├────────────────┼─────────────────┼──────────┤
 admin           admin@myanchore  enabled  
 dev_team_alpha                   deleting 
 dev_team_beta                    enabled  
└────────────────┴─────────────────┴──────────┘

Managing Users

Users exist within accounts, but usernames are globally unique since they are used for authenticating API requests. Any user in the admin account can perform user management in the default Anchore Enterprise configuration using the native authorizer. 

For more information on configuring other authorization plugins, see Authorization Plugins and Configuration in our documentation.

Users can also be managed via AnchoreCTL. Here we create a new dev_admin_beta user under the dev_team_beta account and give them the role full-control as an administrator of the team. We’ll set a password of CorrectHorseBatteryStable for this user, but pass it via the environment rather than echoing it on the command line.

$ # Create a new user from the dev_team_beta account
$ ANCHORECTL_USER_PASSWORD=CorrectHorseBatteryStable \
  anchorectl user add --account dev_team_beta dev_admin_beta \
  --role full-control 
  
   Added user      dev_admin_beta
  Username: dev_admin_beta
  Created At: 2024-06-12T10:25:23Z
  Password Last Updated: 2024-06-12T10:25:23Z
  Type: native
  IDP Name:
  Source:

Let’s check that worked:

$ # Check that the new user was created
$ anchorectl user list --account dev_team_beta 
 Fetched users
┌────────────────┬──────────────────────┬───────────────────────┬────────┬──────────┬────────┐
 USERNAME        CREATED AT            PASSWORD LAST UPDATED  TYPE    IDP NAME  SOURCE 
├────────────────┼──────────────────────┼───────────────────────┼────────┼──────────┼────────┤
 dev_admin_beta  2024-06-12T10:25:23Z  2024-06-12T10:25:23Z   native │          │        
└────────────────┴──────────────────────┴───────────────────────┴────────┴──────────┴────────┘

That user is now able to use the API.

$ # List users from the dev_team_beta account
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl user list 
   Fetched users
  ┌────────────────┬──────────────────────┬───────────────────────┬────────┬──────────┬────────┐
   USERNAME        CREATED AT            PASSWORD LAST UPDATED  TYPE    IDP NAME  SOURCE 
  ├────────────────┼──────────────────────┼───────────────────────┼────────┼──────────┼────────┤
   dev_admin_beta  2024-06-12T10:25:23Z  2024-06-12T10:25:23Z   native │          │        
  └────────────────┴──────────────────────┴───────────────────────┴────────┴──────────┴────────┘

Using AnchoreCTL

We now have AnchoreCTL set up to talk to our Anchore Enterprise, and a user other than admin to connect as. Let’s actually use it to scan a container. We have two options here: ‘Centralized Analysis’ and ‘Distributed Analysis’.

In Centralized Analysis, any container we request will be downloaded and analyzed by our Anchore Enterprise. If we choose Distributed Analysis, the image will be analyzed by anchorectl itself. This is covered in much more detail in the Vulnerability Management section of the docs.
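
In this post we’ll use the centralized approach. For comparison, a distributed run looks something like the sketch below, where anchorectl pulls and analyzes the image locally and only uploads the resulting SBOM; confirm the exact flag for your release with anchorectl image add --help.

$ # Analyze the image locally (distributed analysis) and upload the SBOM to Enterprise
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image add docker.io/library/debian:latest --from registry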

Currently we have no images submitted for analysis:

$ # Query Enterprise to get a list of container images and their status
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image list 
  
   Fetched images
  ┌─────┬────────┬──────────┬────────┐
  │ TAG │ DIGEST │ ANALYSIS │ STATUS │
  ├─────┼────────┼──────────┼────────┤
  └─────┴────────┴──────────┴────────┘

Let’s submit the latest Debian container from Docker Hub to Anchore Enterprise for analysis. The backend Anchore Enterprise deployment will then pull (download) the image and analyze it.

$ # Request that enterprise downloads and analyzes the debian:latest image
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image add docker.io/library/debian:latest 
  
 ✔ Added Image      
 docker.io/library/debian:latest
 Image:  
 status:           not-analyzed (active)  
 tag:              docker.io/library/debian:latest  
 digest:           sha256:820a611dc036cb57cee7...  
 id:               7b34f2fc561c06e26d69d7a5a58...

Initially the image starts in a state of not-analyzed. Once it’s been downloaded, it’ll be queued for analysis. When the analysis begins, the status will change to analyzing, after which it will change to analyzed. We can check the status with anchorectl image list.

$ # Check the status of the container image we requested 
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image list 
  
 Fetched images
┌─────────────────────────────────┬────────────────────────────────┬───────────┬────────┐
│ TAG                             │ DIGEST                         │ ANALYSIS  │ STATUS │
├─────────────────────────────────┼────────────────────────────────┼───────────┼────────┤
│ docker.io/library/debian:latest │ sha256:820a611dc036cb57cee7... │ analyzing │ active │
└─────────────────────────────────┴────────────────────────────────┴───────────┴────────┘

After a short while, the image has been analyzed.

$ # Check the status of the container image we requested 
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image list 
  Fetched images
┌─────────────────────────────────┬────────────────────────────────┬───────────┬────────┐
│ TAG                             │ DIGEST                         │ ANALYSIS  │ STATUS │
├─────────────────────────────────┼────────────────────────────────┼───────────┼────────┤
│ docker.io/library/debian:latest │ sha256:820a611dc036cb57cee7... │ analyzed  │ active │
└─────────────────────────────────┴────────────────────────────────┴───────────┴────────┘
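In a script or CI job you won’t want to re-run that command by hand, so a small polling loop is handy. This is just a sketch built on the anchorectl image list output shown above; the grep pattern assumes the table layout stays as shown.

$ # Wait until the debian:latest row reports "analyzed"
$ # (the leading space in ' analyzed ' avoids matching "not-analyzed")
$ until ANCHORECTL_USERNAME=dev_admin_beta \
        ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
        ANCHORECTL_ACCOUNT=dev_team_beta \
        anchorectl image list | grep 'debian:latest' | grep -q ' analyzed '; do
    echo "still analyzing..."
    sleep 10
  done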

Results

Once analysis is complete, we can inspect the results, again with anchorectl.

Container contents

First, let’s see what Operating System packages Anchore found in this container by running anchorectl image content docker.io/library/debian:latest -t os.

anchorectl reporting the full OS package list from this Debian image. (the list is too large to show here)

SBOM

We can also pull the Software Bill of Materials (SBOM) for this image from Anchore with anchorectl image sbom docker.io/library/debian:latest -o table. We can use -f to write this to a file, and -o syft-json (for example) to output in a different format.

$ # Get the SBOM for the image as a table
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image sbom docker.io/library/debian:latest -o table 
  
 Fetched SBOM  docker.io/library/debian:latest
NAME                    VERSION                TYPE
adduser                 3.134                  deb
apt                     2.6.1                  deb
base-files              12.4+deb12u6           deb

util-linux              2.38.1-5+deb12u1       deb
util-linux-extra        2.38.1-5+deb12u1       deb
zlib1g                  1:1.2.13.dfsg-1        deb
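To keep the SBOM around rather than just printing it, the same command can write a machine-readable file using the -o and -f options mentioned above. A minimal sketch; the output filename is just an example:

$ # Save the SBOM as Syft JSON for use with other tools
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image sbom docker.io/library/debian:latest \
  -o syft-json -f debian-latest.syft.json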

Vulnerabilities

Finally, let’s have a quick look to see if any OS vulnerabilities were found in this image with anchorectl image vulnerabilities docker.io/library/debian:latest -t os. This produces a lot of very wide output.
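For completeness, here’s that invocation written out in full, following the same pattern as the earlier commands:

$ # List OS vulnerabilities found in the analyzed image
$ ANCHORECTL_USERNAME=dev_admin_beta \
  ANCHORECTL_PASSWORD=CorrectHorseBatteryStable \
  ANCHORECTL_ACCOUNT=dev_team_beta \
  anchorectl image vulnerabilities docker.io/library/debian:latest -t os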

Conclusion

So far we’ve introduced AnchoreCTL and shown how easy it is to install, configure and test. It can be used both locally on developer workstations and in CI/CD environments such as GitHub, GitLab and Jenkins. We’ll cover the integration of AnchoreCTL with source forges in a later post.

AnchoreCTL is a powerful tool that can be used to automate scanning container contents, generating SBOMs, and analyzing them for vulnerabilities.

Find out more about AnchoreCTL in our documentation, and request a demo of Anchore Enterprise.

Modernizing FedRAMP: GSA’s Roadmap to Streamline Authorization

If you’ve ever thought that the FedRAMP (Federal Risk and Authorization Management Program) authorization process is challenging and laborious, things may be getting better. The General Services Administration (GSA) has publicly committed to improving the authorization process by publishing a public roadmap to modernize FedRAMP.

The purpose of FedRAMP is to act as a central intermediary between federal agencies and cloud service providers (CSP) in order to make it easier for agencies to purchase software services and for CSPs to sell software services to agencies. By being the middleman, FedRAMP creates a single marketplace that reduces the amount of time it takes for an agency to select and purchase a product. From the CSP perspective, FedRAMP becomes a single standard that they can target for compliance and after achieving authorization they get access to 200+ agencies that they can sell to—a win-win.

Unfortunately, that promised land wasn’t the typical experience for either side of the exchange. Since FedRAMP’s inception in 2011, the demand for cloud services has increased significantly. Cloud was still in its infancy at the birth of FedRAMP and the majority of federal agencies still procured software with perpetual licenses rather than as a cloud service (e.g., SaaS). In the following 10+ years that have passed, that preference has inverted and now the predominant delivery model is infrastructure/platform/software-as-a-service.

This has led to an environment where new cloud services are popping up every year but federal agencies don’t have access to them via the streamlined FedRAMP marketplace. On the other side of the coin, CSPs want access to the market of federal agencies that are only able to procure software via FedRAMP, but the process of becoming FedRAMP authorized is so complex and laborious that it undercuts the value of gaining access to this market.

Luckily, the GSA isn’t resting on its laurels. Due to feedback from all stakeholders, it is prioritizing a revamp of the FedRAMP authorization process to take into account the shifting preferences in the market. To help you get a sense of what is happening, how quickly you can expect changes, and the benefits of the initiative, we have compiled a comprehensive FAQ.

Frequently Asked Questions (FAQ)

How soon will the benefits of FedRAMP modernization be realized?

Optimistically, changes will be rolling out over the next 18 months and be completed by the end of 2025. See the full rollout schedule on the public roadmap.

Who does this impact?

  • Federal agencies
  • Cloud service providers (CSP)
  • Third-party assessment organizations (3PAOs)

What are the benefits of the FedRAMP modernization initiative?

TL;DR—For agencies

  • Increased vendor options within the FedRAMP marketplace
  • Reduced wait time for CSPs in the authorization process

TL;DR—For CSPs

  • Reduced friction during the authorization process
  • More clarity on how to meet security requirements
  • Less time and cost spent on the authorization process

TL;DR—For 3PAOs

  • Reduced friction between 3PAOs and CSPs during the authorization process
  • Increased clarity on how to evaluate CSPs

What prompted the GSA to improve FedRAMP now?

GSA is modernizing FedRAMP because of feedback from stakeholders. Both federal agencies and CSPs levied complaints about the current FedRAMP process. Agencies wanted more CSPs in the FedRAMP marketplace that they could then easily procure. CSPs wanted a more streamlined process so that they could get into the FedRAMP marketplace faster. The point of friction was the FedRAMP authorization process that hasn’t evolved at the same pace as the transition from the on-premise, perpetual license delivery model to the rapid, cloud services model.

How will GSA deliver on its promises to modernize FedRAMP?

The full list of initiatives can be found in their public product roadmap document but the highlights are:

  • Taking a customer-centric approach that reduces friction in the authorization process based on customer interviews
  • Publishing clear guidance on how to meet core security requirements
  • Streamlining authorization process to reduce bottlenecks based on best practices from agencies that have developed a strong authorization process
  • Moving away from lengthy documents and towards a data-first foundation to enable automation of the authorization process for CSPs and 3PAOs

Wrap-Up

The GSA has made a commitment to being transparent about the improvements to the modernization process. Anchore, as well as the rest of the public sector stakeholders, will be watching and holding the GSA accountable. Follow this blog or the Anchore LinkedIn page to stay updated on progress.

If the 18-month timeline is longer than you’re willing to wait, Anchore is already an expert in supporting organizations that are seeking FedRAMP authorization. Anchore Enterprise is a modern, cloud-native software composition analysis (SCA) platform that both meets FedRAMP compliance standards and helps evaluate whether your software supply chain is FedRAMP compliant. If you’re interested in learning more, download “FedRAMP Requirements Checklist for Container Vulnerability Scanning” or learn more about how Anchore Enterprise has helped organizations like Cisco achieve FedRAMP compliance in weeks versus months.

Add SBOM Generation to Your GitHub Project with Syft

According to the latest figures, GitHub has over 100 million developers working on over 420 million repositories, with at least 28M being public repos. Unfortunately, very few software repos contain a Software Bill of Materials (SBOM) inventory of what’s been released.

SBOMs (Software Bill of Materials) are crucial in a repository as they provide a comprehensive inventory of all components, improving transparency and traceability in the software supply chain. This allows developers and security teams to quickly identify and address vulnerabilities, enhancing overall security and compliance with regulatory standards.

Anchore developed the sbom-action GitHub Action to automatically generate an SBOM using Syft. Developers can quickly add the action via the GitHub Marketplace and pretty much fire and forget the setup.

What is an SBOM?

Anchore developers have written plenty over the years about What is an SBOM, but here is the tl;dr:

An SBOM (Software Bill of Materials) is a detailed list of all software project components, libraries, and dependencies. It serves as a comprehensive inventory that helps understand the software’s structure and the origins of its components.

An SBOM in your project enhances security by quickly identifying and mitigating vulnerabilities in third-party components. Additionally, it ensures compliance with regulatory standards and provides transparency, essential for maintaining trust with stakeholders and users.

Introducing Anchore’s SBOM GitHub Action

Adding an SBOM is a cinch with the GitHub Action for SBOM Generation provided by Anchore. Once added to a repo, the action will execute a Syft scan in the workspace directory and upload the SBOM as a workflow artifact in SPDX format.

The SBOM Action can scan a Docker image directly from the container registry with or without registry credentials specified. Alternatively, it can scan a directory full of artifacts or a specific single file.

The action will also detect if it’s being run during the GitHub release and upload the SBOM as a release asset. Easy!

How to Add the SBOM GitHub Action to Your Project

Assuming you already have a GitHub account and repository setup, adding the SBOM action is straightforward.

Anchore SBOM Action in the GitHub Marketplace.
  • Navigate to the GitHub Marketplace
  • Search for “Anchore SBOM Action” or visit Anchore SBOM Action directly
  • Add the action to your repository by clicking the green “Use latest version” button
  • Configure the action in your workflow file

That’s it!

Example Workflow Configuration

Here’s a bare-bones configuration for running the Anchore SBOM Action on each push to the repo.

  name: Generate SBOM

  on: [push]

  jobs:
    build:
      runs-on: ubuntu-latest
      steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Anchore SBOM Action
        uses: anchore/sbom-action@v0

There are further options detailed on the GitHub Marketplace page for the action. For example, use output-file to specify the resulting SBOM file name, and format to select whether to build an SPDX- or CycloneDX-formatted SBOM.

Results and Benefits

After the GitHub action is set up, the SBOM will start being generated on each push or with every release – depending on your configuration.

Once the SBOM is published on your GitHub repo, users can analyze it to identify and address vulnerabilities in third-party components. They can also use it to ensure compliance with security and regulatory standards, maintaining the integrity of the software supply chain.

Additional Resources

The SBOM action is open source and is available under the Apache 2.0 License in the sbom-action repository. It relies on Syft which is available under the same license, also on GitHub. We welcome contributions to both sbom-action and Syft, as well as Grype, which can consume and process these generated SBOMs.

Join us on Discourse to discuss all our open source tools.

Reduce risk in your software supply chain: 5 tips for container security

Rising threats to the software supply chain and increasing use of containers are causing organizations to focus on container security. Containers bring many unique security challenges due to their layered dependencies and the fact that many container images come from public repositories.

Our new white paper, Reduce Risk for Software Supply Chain Attacks: Best Practices for Container Security, digs into 5 tips for securing containers. It also describes how Anchore Enterprise simplifies implementation of these critical best practices, so you don’t have to.

5 best practices to instantly strengthen container security

  1. Use SBOMs to build a transparent foundation

SBOMs—Software Bill of Materials—create a trackable inventory of the components you use, which is a precursor for identifying security risks, meeting regulatory requirements and assessing license compliance. Get recommendations on the best way to generate, store, search and share SBOMs for better transparency.  

  2. Identify vulnerabilities early with continuous scanning

Security issues can arise at any point in the software supply chain. Learn why shifting left is necessary, but not sufficient, for container security. Understand why SBOMs are critical when responding to zero-day vulnerabilities.

  3. Automate policy enforcement and security gates

Find out how to use automated policies to identify which vulnerabilities should be fixed and enforce regulatory requirements. Learn how a customizable policy engine and out-of-the-box policy packs streamline your compliance efforts. 

  4. Reduce toil in the developer experience

Integrating with the tools developers use, minimizing false positives, and providing a path to faster remediation will keep developers happy and your software development moving efficiently. See how Anchore Enterprise makes it easy to provide a good developer experience.

  5. Protect your software supply chain with security controls

To protect your software supply chain, you must ensure that the code you bring in from third-party sources is trusted and vetted. Implement vetting processes for open-source code that you use.

Four Years of Syft Development in 4 Minutes at 4K

Our open-source SBOM and vulnerability scanning tools, Syft and Grype, recently turned four years old. So I did what any nerd would do: render an animated visualization of their development using the now-venerable Gource. Initially, I wanted to render these videos at a 120Hz framerate, but that didn’t go well. Read on to find out how that panned out.

My employer (perhaps foolishly) gave me the keys to our Anchore YouTube and Anchore Vimeo accounts. You can find the video I rendered on YouTube or embedded below.

For those unaware, Gource is a popular open-source project by Andrew Caudwell. Its purpose is to visualize development with pretty OpenGL-rendered videos. You may have seen these animated glowing renders before, as Gource has been around for a while now.

Syft is Anchore’s command-line tool and library for generating a software bill of materials (SBOM) from container images and filesystems. Grype is our vulnerability scanner for container images and filesystems. They’re both fundamental components of our Anchore Enterprise platform but are also independently famous.

Generating the video

Plenty of guides online cover how to build Gource visualizations, which are pretty straightforward. Gource analyses the git log of changes in a repository to generate frames of animation which can be viewed or saved to a video. There are settings to control various aspects of the animation, which are well documented in the Gource Wiki.

By default, while Gource is running, a window displaying the animation will appear on your screen. So, if you want to see what the render will look like, most of the defaults are fine when running Gource directly.

Tweak the defaults

I wanted to limit the video duration, and render at a higher resolution than my laptop panel supports. I also wanted the window to be hidden while the process runs.

tl;dr Here’s the full command line I used to generate and encode the 4K video in the background.

$ /usr/bin/xvfb-run --server-num=99 -e /dev/stdout \
  -s '-screen 0 4096x2160x24 ' /usr/bin/gource \
  --max-files 0 --font-scale 4 --output-framerate 60 \
  -4096x2160 --auto-skip-seconds 0.1 --seconds-per-day 0.16 \
  --bloom-multiplier 0.9 --fullscreen --highlight-users \
  --multi-sampling --stop-at-end --high-dpi \
  --user-image-dir ../faces/ --start-date 2020-05-07 \
  --title 'Syft Development https://github.com/anchore/syft' \
  -o - | \
  ffmpeg -y -r 60 -f image2pipe -vcodec ppm -i - \
  -vcodec libx264 -preset veryfast -pix_fmt yuv420p \
  -crf 1 -threads 0 -bf 0 ../syft-4096x2160-60.mkv

Let’s take a step back and examine the preparatory steps and some interesting points to note.

Preparation

The first thing to do is to get Gource and ffmpeg. I’m using Ubuntu 24.04 on my ThinkPad Z13, so a simple sudo apt install gource ffmpeg works.

Grab the Syft and/or Grype source code.

$ mkdir -p ~/Videos/gource/
$ cd ~/Videos/gource
$ git clone https://github.com/anchore/syft
$ git clone https://github.com/anchore/grype

Gource can use avatar images in the videos which represent the project contributors. I used gitfaces for this. Gitfaces is available from PyPI, so can be installed with pip install -U gitfaces or similar. Once installed, generate the avatars from within the project folder.

$ cd ~/Videos/gource/syft
$ mkdir ../faces
$ gitfaces . ../faces

Do this for each project you wish to render out. I used a central ../faces folder as there would be some duplication between the projects I’m rendering. However, not everyone has an avatar, so they’ll show up as an anonymous “head and shoulders” in the animation.

Test render

Perform a quick test to ensure Gource is installed correctly and the avatars are working.

$ cd ~/Videos/gource/syft
$ /usr/bin/gource --user-image-dir ../faces/ 

A default-sized window of 1052×834 should appear with nicely rendered blobs and lines. If you watch it for any appreciable length, you’ll notice it can be boring in the gaps between commits. Gource has some options to improve this.

The --auto-skip-seconds option defines when Gource will skip to the next entry in the git log while there is no activity. The default is 3 seconds, which can be reduced. With --seconds-per-day we can set the render speed so we don’t get a very long video.

I used 0.1 and 0.16, respectively. The result is a shorter, faster, more dynamic video. The Gource Wiki details many other options for Gource.
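For a quick preview with just those two pacing tweaks, before committing to a full render, something like this is enough:

$ # Faster-paced preview in the default window size
$ cd ~/Videos/gource/syft
$ /usr/bin/gource --user-image-dir ../faces/ \
  --auto-skip-seconds 0.1 --seconds-per-day 0.16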

Up the resolution!

While the default 1052×834 video size is fine for a quick render, I wanted something much bigger. Using the ‘4 years in 4 minutes at 4K’ heading would be fun, so I went for 4096×2160. My laptop doesn’t have a 4K display (it’s 2880×1800 natively), so I decided to render it in the background, saving it to a video.

To run it in the background, I used xvfb-run from the xvfb package on my Ubuntu system. A quick sudo apt install xvfb installed it. To run Gource inside xvfb we simply prefix the command line like this:

(this is not the full command, just a snippet to show the xvfb syntax)

$ /usr/bin/xvfb-run --server-num=99 -e /dev/stdout \
  -s '-screen 0 4096x2160x24 ' /usr/bin/gource -4096x2160

Note that the XServer’s resolution matches the video’s, and we use the fullscreen option in Gource to use the whole virtual display. Here we also specify the color bit-depth of the XServer – in this case 24.

Create the video

Using ffmpeg—the Swiss army knife of video encoding—we can turn Gource’s output into a video. I used the x264 codec with some reasonable options. We can run these as two separate commands: one to generate a (huge) series of ppm images and the second to compress that into a reasonable file size.

$ /usr/bin/xvfb-run --server-num=99 -e /dev/stdout \
  -s '-screen 0 4096x2160x24 ' /usr/bin/gource \
  --max-files 0 --font-scale 4 --output-framerate 60 \
  -4096x2160 --auto-skip-seconds 0.1 --seconds-per-day 0.16 \
  --bloom-multiplier 0.9 --fullscreen --highlight-users \
  --multi-sampling --stop-at-end --high-dpi \
  --user-image-dir ../faces/ --start-date 2020-05-07 \
  --title 'Syft Development: https://github.com/anchore/syft' \
  -o ../syft-4096x2160-60.ppm

$ ffmpeg -y -r 60 -f image2pipe -vcodec ppm \
  -i ../syft-4096x2160-60.ppm -vcodec libx264 \
  -preset veryfast -pix_fmt yuv420p -crf 1 \
  -threads 0 -bf 0 ../syft-4096x2160-60.mkv

Four years of commits as uncompressed 4K60 images will fill the disk pretty fast. So it’s preferable to chain the two commands together so we save time and don’t waste too much disk space.

$ /usr/bin/xvfb-run --server-num=99 -e /dev/stdout \
  -s '-screen 0 4096x2160x24 ' /usr/bin/gource \
  --max-files 0 --font-scale 4 --output-framerate 60 \
  -4096x2160 --auto-skip-seconds 0.1 --seconds-per-day 0.16 \
  --bloom-multiplier 0.9 --fullscreen --highlight-users \
  --multi-sampling --stop-at-end --high-dpi \
  --user-image-dir ../faces/ --start-date 2020-05-07 \
  --title 'Syft Development: https://github.com/anchore/syft' \
  -o - | ffmpeg -y -r 60 -f image2pipe -vcodec ppm -i - \
  -vcodec libx264 -preset veryfast -pix_fmt yuv420p \
  -crf 1 -threads 0 -bf 0 ../syft-4096x2160-60.mkv

On my ThinkPad Z13 equipped with an AMD Ryzen 7 PRO 6860Z CPU, this takes around 42 minutes and generates a ~10GB mkv video. Here’s what the resource utilisation looks like while this is running. Fully maxed out all the CPU cores. Toasty!

Screenshot of 'bottom' running in a terminal window on Linux.

Challenges

More frames

Initially, I considered creating a video at 120fps rather than the default 60fps that Gource generates. However, Gource is limited in code to 25, 30, and 60fps. As an academic exercise, I patched Gource (diff below) to generate visualizations at the higher frame rate.

I’m not a C++ developer, nor do I play one on TV! But with a bit of grep and a small amount of trial and error, I modified and rebuilt Gource to add support for 120fps.

diff --git a/src/core b/src/core
--- a/src/core
+++ b/src/core
@@ -1 +1 @@
-Subproject commit f7fa400ec164f6fb36bcca5b85d2d2685cd3c7e8
+Subproject commit f7fa400ec164f6fb36bcca5b85d2d2685cd3c7e8-dirty
diff --git a/src/gource.cpp b/src/gource.cpp
index cf86c4f..755745f 100644
--- a/src/gource.cpp
+++ b/src/gource.cpp
@@ -153,7 +153,7 @@ Gource::Gource(FrameExporter* exporter) {
     root = 0;
 
     //min physics rate 60fps (ie maximum allowed delta 1.0/60)
-    max_tick_rate = 1.0 / 60.0;
+    max_tick_rate = 1.0 / 120.0;
     runtime = 0.0f;
     frameskip = 0;
     framecount = 0;
@@ -511,7 +511,7 @@ void Gource::setFrameExporter(FrameExporter* exporter, int video_framerate) {
     this->frameskip  = 0;
 
     //calculate appropriate tick rate for video frame rate
-    while(gource_framerate<60) {
+    while(gource_framerate<120) {
         gource_framerate += video_framerate;
         this->frameskip++;
     }

I then re-ran Gource with --output-framerate 120 and ffmpeg with -r 120, which successfully generated the higher frame-rate files.

$ ls -lh
-rw-rw-r-- 1 alan alan 7.3G Jun 15 21:42 syft-2560x1440-60.mkv
-rw-rw-r-- 1 alan alan 8.9G Jun 15 22:14 grype-2560x1440-60.mkv
-rw-rw-r-- 1 alan alan  13G Jun 16 22:56 syft-2560x1440-120.mkv
-rw-rw-r-- 1 alan alan  16G Jun 16 22:33 grype-2560x1440-120.mkv

As you can see (and probably expected) from these test renders, with these settings double the frames means double the file size. I could have fiddled with ffmpeg to use better-optimized options, or a different codec, but decided against it.
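For the curious, a second encoding pass along these lines (plain x264 options; the output filename is just an example) would have brought the file sizes down considerably:

$ # Re-encode an existing render with a higher CRF to shrink the file
$ ffmpeg -i syft-2560x1440-120.mkv -vcodec libx264 -preset slow \
  -crf 20 -pix_fmt yuv420p syft-2560x1440-120-small.mkv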

There’s an even more significant issue here. There are precious few places to host high-frame-rate videos; few people have the hardware, bandwidth, and motivation to watch them. So, I rolled back to 60fps for subsequent renders.

More pixels

While 4K (4096×2160) is fun and fits the story of “4 years in 4 minutes at 4K”, I did consider trying to render out at 8K (7680×4320). After all, I had time on my hands at the weekend and spare CPU cycles, so why not?

Sadly, the hardware x264 encoder in my ThinkPad Z13 has a maximum canvas size of 4096×4096, which is far too small for 8K. I could have encoded using software rather than hardware acceleration, but that would have been ludicrously more time-consuming.

I do have an NVIDIA card but don’t believe it’s new enough to do 8K either, being a ‘lowly’ (these days) RTX 2080 Ti. My work laptop is an M3 MacBook Pro. I didn’t attempt rendering there because I couldn’t fathom getting xvfb working to do off-screen rendering in Gource on macOS.

I have another four years to figure this out before my ‘8 years of Syft in 8 minutes at 8K’ video, though!

Minor edits

Once Gource and ffmpeg did their work, I used Kdenlive to add some music and our stock “top and tail” animated logo to the video and then rendered it for upload. The default compression settings in Kdenlive dramatically reduced the file size to something more manageable and uploadable!

Conclusion

Syft and Grype are – in open source terms – relatively young, with a small, dedicated team working on them. As such, the Gource renders aren’t as busy or complex as those of more well-established projects with bigger teams.

We certainly welcome external contributions over on the Syft and Grype repositories. We also have a new Anchore Community Discourse where you can discuss the projects and this article.

If you’d like to see how Syft and Grype power SBOM generation, vulnerability scanning and policy enforcement in Anchore Enterprise, contact us and watch the guided tour.

I always find these renders technically neat, beautiful and relaxing to watch. The challenges of rendering them also led me down some interesting technical paths. I’d love to hear feedback and suggestions over on the Anchore Community Discourse.

Balancing the Scale: Software Supply Chain Security and APTs

Note: This is a multi-part series primer on the intersection of advanced persistent threats (APTs) and software supply chain security (SSCS). This blog post is the first in the series. We will update this blog post with links to the additional parts of the series as they are published.
• Part 1 | With Great Power Comes Great Responsibility: APTs & Software Supply Chain Security
• Part 2 | David and Goliath: the Intersection of APTs and Software Supply Chain Security
• Part 3 (This blog post)

Last week we dug into the details of why an organization’s software supply chain is a ripe target for well-resourced groups like APTs and the potential avenues that companies have to combat this threat. This week we’re going to highlight the Anchore Enterprise platform and how it provides a turnkey solution for combating threats to any software supply chain.

How Anchore Can Help

Anchore was founded on the belief that a security platform that delivers deep, granular insights into an organization’s software supply chain, covers the entire breadth of the SDLC and integrates automated feedback from the platform will create a holistic security posture to detect advanced threats and allow for human interaction to remediate security incidents. Anchore is trusted by Fortune 100 companies and the most exacting federal agencies across the globe because it has delivered on this promise.

The rest of the blog post will detail how Anchore Enterprise accomplishes this.

Depth: Automating Software Supply Chain Threat Detection

Having deeper visibility into an organization’s software supply chain is crucial for security purposes because it enables the identification and tracking of every component in the software’s construction. This comprehensive understanding helps in pinpointing vulnerabilities, understanding dependencies, and identifying potential security risks. It allows for more effective management of these risks by enabling targeted security measures and quicker response to potential threats. Essentially, deeper visibility equips an organization to better protect itself against complex cyber threats, including those that exploit obscure or overlooked aspects of the software supply chain.

Anchore Enterprise accomplishes this by generating a comprehensive software bill of materials (SBOM) for every piece of software (even down to the component/library/framework-level). It then compares this detailed ingredients list against vulnerability and active exploit databases to identify exactly where in the software supply chain there are security risks. These surgically precise insights can then be fed back to the original software developers, rolled-up into reports for the security team to better inform risk management or sent directly into an incident management workflow if the vulnerability is evaluated as severe enough to warrant an “all-hands on deck” response.

Developers shouldn’t have to worry about manually identifying threats and risks inside the software supply chain. Having deep insights into your software supply chain, and being able to automate detection and response, is vital to creating a resilient and scalable defense against the risk of APTs.

Breadth: Continuous Monitoring in Every Step of Your Software Supply Chain

The breadth of instrumentation in the Software Development Lifecycle (SDLC) is crucial for securing the software supply chain because it ensures comprehensive security coverage across all stages of software development. This broad instrumentation facilitates early detection and mitigation of vulnerabilities, ensures consistent application of security policies, and allows for a more agile response to emerging threats. It provides a holistic view of the software’s security posture, enabling better risk management and enhancing the overall resilience of the software against cyber threats.

Powered by a 100% feature complete platform API, Anchore Enterprise integrates into your existing DevOps pipeline.

Anchore has been supporting the DoD in this effort since 2019, acting as what is commonly referred to as “overwatch” for the DoD’s software supply chain. Anchore Enterprise continuously monitors how risk is evolving by ingesting tens of thousands of runtime containers and hundreds of source code repositories, and by alerting on malware-laced images submitted to the registry. It monitors every stage of the DevOps pipeline, from source to build to registry to deploy, to gain a holistic view of when and where threats enter the software development lifecycle.

Feedback: Alerting on Breaches or Critical Events in Your Software Supply Chain

Integrating feedback from your software supply chain and SDLC into your overall security program is important because it allows for real-time insights and continuous improvement in security practices. This integration ensures that lessons learned and vulnerabilities identified at any stage of the development or deployment process are quickly communicated and addressed. It enhances the ability to preemptively manage risks and adapt to new threats, thereby strengthening the overall security posture of the organization.

How would you know if something is wrong in a system? Create high-quality feedback loops, of course. If there is a fire in your house, you typically have a fire alarm. That is a great source of feedback: it’s loud and creates urgency. When you investigate to confirm the fire is legitimate and not a false alarm, you can see the fire and feel its heat.

Software supply chain breaches are more similar to carbon monoxide leaks. Silent, often undetected, and potentially lethal. If you don’t have anything in place to specifically alert for that kind of threat then you could pay severely. 

Anchore Enterprise was designed specifically as both a set of sensors that can be deployed deeply and broadly into your software supply chain AND a feedback system that uses those sensors to detect and alert on the potential threats that are silently emitting carbon monoxide in your warehouse.

Anchore Enterprise’s feedback mechanisms come in three flavors: automatic, recommendations and informational. Anchore Enterprise utilizes a policy engine to enable automatic action based on the feedback provided by the software supply chain sensors. If you want to make sure that no software is ever deployed into production (or any environment) with an exploitable version of Log4j, the Anchore policy engine can review the security metadata created by the sensors for the existence of this software component and stop a deployment in progress before it ever becomes accessible to attackers.

Anchore Enterprise can also be configured to make recommendations and provide opinionated actions based on security signals. If a vulnerability is discovered in a software component but it isn’t considered urgent, Anchore Enterprise can instead provide a recommendation to the software developer to fix the vulnerability but still allow them to continue to test and deploy their software. This allows developers to become aware of security issues very early in the SDLC but also provide flexibility for them to fix the vulnerability based on their own prioritization.

Finally, Anchore Enterprise offers informational feedback that alerts developers, the security team or even the executive team to potential security risks but doesn’t offer a specific solution. These types of alerts can be integrated into any development, support or incident management systems the organization utilizes. Often these alerts are for high risk vulnerabilities that require deeper organizational analysis to determine the best course of action in order to remediate.

Conclusion

Due to the asymmetry between APTs and under-resourced security teams, the goal isn’t to create an impenetrable fortress that can never be breached. The goal is instead to follow security best practices and litter your SDLC with sensors and automated feedback mechanisms. APTs may have significantly more resources than your security team, but they are still human and all humans make mistakes. By placing low-effort tripwires in as many locations as possible, you reverse the asymmetry of resources and allow the well-resourced adversary to become their own worst enemy.

APTs are still software developers at the end of the day, and no one writes bug-free code in the long run. By transforming your software supply chain into a minefield of best practices, you create a battlefield that requires your adversaries to slow down and carefully disable each individual security mechanism. None are impossible to disarm, but each speed bump creates another opportunity for your adversary to make a mistake and reveal themselves. If the zero-trust architecture has taught us anything, it is that an impenetrable perimeter was never the best strategy.

Improving Syft’s Binary Detection

You, too, can help make Syft better! As you’re probably aware, Syft is a software composition analysis tool which is able to scan a number of sources to find software packages in container images and the local filesystem. Syft detects packages from a number of things such as source code and package manager metadata, but also from arbitrary files it encounters such executable binaries. Today we’re going to talk about how some of Syft’s binary detection works and how easy it is to improve.

Just recently, we were made aware of this vulnerability and it seemed like something we’d want to surface in Syft’s companion tool, Grype… but Fluent Bit wasn’t something that Syft was already detecting. Let’s look at how we added support for it!

Syft binary matching

Before we get into the details, it’s important to understand how Syft’s binary detection works today: Syft scans a filesystem, and a binary cataloger looks for files matching a particular name pattern and uses a regular expression to find a version string in the binary. Although this isn’t the only thing Syft does, this has proven to be a simple pattern that works fairly well for finding information about arbitrary binaries, such as the Fluent Bit binary we’re interested in.
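As a rough illustration of that idea (this is not Syft’s code, just the concept), you can approximate it by hand with grep’s binary-friendly options against any binary path of your choosing; the path below is hypothetical:

$ # Treat the binary as text (-a) and print only semver-looking matches (-oE)
$ grep -aoE '[0-9]+\.[0-9]+\.[0-9]+' /path/to/some-binary | sort -u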

In order to add support for additional binary types in Syft, the basic process is this:

  1. Find a binary
  2. Add a matching rule
  3. Add tests

Getting started

Starting with a local fork of the Syft repository, let’s work in the binary cataloger’s test-fixtures directory:

$ cd syft/pkg/cataloger/binary/test-fixtures

Here you’ll find a Makefile and a config.yaml. These are the main things of importance — and you can run make help to see extra commands available.

The first thing we need to do is find somewhere to get one of the binaries from — we need something to test that we’re actually detecting the right thing! The best way to do this is using a publicly available container image. Once we know an image to use, the Makefile has some utilities to make the next steps fairly straightforward.

After a short search online, we found that there is indeed a public docker image with exactly what we were looking for: https://hub.docker.com/r/fluent/fluent-bit. Although we could pick just about any version, we somewhat arbitrarily chose this one. We can use more than one, but for now we’re just going to use this as a starting point.

Adding a reference to the binary

After finding an image, we need to identify the particular binary file to look at. Luckily, the Fluent Bit documentation gave a pretty good pointer – this was part of the docker command the documentation said to run: /fluent-bit/bin/fluent-bit! It may take a little more sleuthing to figure out what file(s) within the image we need; often you can run an image with a shell to figure this out… but chances are, if you can run the command with a --version flag and get the version printed out, we can figure out how to find it in the binary.
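If you’d like to poke at the file by hand before touching Syft at all, one way is to copy it out of a created (but never started) container and check that the version string really is in there. This is just a sketch using standard Docker commands; the container name is arbitrary.

$ # Extract the binary from the image and look for the version string
$ docker create --name fb-tmp fluent/fluent-bit:3.0.2-amd64
$ docker cp fb-tmp:/fluent-bit/bin/fluent-bit ./fluent-bit
$ docker rm fb-tmp
$ grep -aoE '3\.0\.2' ./fluent-bit | head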

For now, let’s continue on with this binary. We need to add an entry that describes where to find the file in question in the syft/pkg/cataloger/binary/test-fixtures/config.yaml:

  - version: 3.0.2
    images:
      - ref: fluent/fluent-bit:3.0.2-amd64@sha256:7e6fe8efd51dda0739e355f58bf5e3b1623cbf2d4a23c06c7a365d9553e2d242
        platform: linux/amd64
    paths:
      - /fluent-bit/bin/fluent-bit

There are lots of examples in that file already, and hopefully the fields are straightforward, but note the version — this is what we’ve ascertained should be reported, and it will drive some functions later. Also, we’ve included the full sha256 hash, so even if the tags change, we’ll get the expected image. Then just run make:

$ make

go run ./manager download  --skip-if-covered-by-snippet
...
fluent-bit@3.0.2
  ✔  pull image fluent/fluent-bit:3.0.2-amd64@sha256:7e6fe8efd51dda0739e355f58bf5e3b1623cbf2d4a23c06c7a365d9553e2d242 (linux/amd64)
  ✔  extract /fluent-bit/bin/fluent-bit

This pulled the image locally and extracted the file we told it to…but so far we haven’t really done much that you couldn’t do with standard container tools.

Finding the version

Now we need to figure out what type of expression should reliably find the version. There are a number of binary inspection tools, many of which can make this easier, and perhaps you have some favorites — by all means use those! But we’re going to stick with the tools at hand. Let’s take a look at what the binary has matching the version we indicated earlier by running make add-snippet:

$ make add-snippet

go run ./manager add-snippet
running: ./capture-snippet.sh classifiers/bin/fluent-bit/3.0.2/linux-amd64/fluent-bit 3.0.2 --search-for 3\.0\.2 --group fluent-bit --length 100 --prefix-length 20
Using binary file:      classifiers/bin/fluent-bit/3.0.2/linux-amd64/fluent-bit
Searching for pattern:  3\.0\.2
Capture length:         120 bytes
Capture prefix length:  20 bytes
Multiple string matches found in the binary:

1) 3.0.2
2) 3.0.2
3) CONNECT {"verbose":false,"pedantic":false,"ssl_required":false,"name":"fluent-bit","lang":"c","version":"3.0.2"}

Please select a match: 

Follow the prompts to inspect the different sections of the binary. Each of these actually looks like it could be something usable, but we want one that hopefully is simple to match across different versions. The third match has JSON, which possibly could get reordered. Looking at the second we can see something that has a string containing only 3.0.2 but let’s take a closer look at the first match. If we look at 1, we see something like the second that has a string containing only the version, <NULL>3.0.2<NULL>, but we also see %sFluent Bit, nearby. This looks promising! Let’s capture this snippet by following the prompts:

Please select a match: 1

006804fc: 2525 2e25 6973 0a00 252a 733e 2074 7970  %%.%is..%*s> typ
0068050c: 653a 2000 332e 302e 3200 2573 466c 7565  e: .3.0.2.%sFlue
0068051c: 6e74 2042 6974 2076 2573 2573 0a00 2a20  nt Bit v%s%s..* 
0068052c: 6874 7470 733a 2f2f 666c 7565 6e74 6269  https://fluentbi
0068053c: 742e 696f 0a0a 0069 6e76 616c 6964 2063  t.io...invalid c
0068054c: 7573 746f 6d20 706c 7567 696e 2027 2573  ustom plugin '%s
0068055c: 2700 696e 7661 6c69 6420 696e 7075 7420  '.invalid input 
0068056c: 706c 7567 696e 2027                      plugin '

Does this snippet capture what you need? (Y/n/q) y
wrote snippet to "classifiers/snippets/fluent-bit/3.0.2/linux-amd64/fluent-bit"

How could we tell those were NULL terminators? What’s going on here? Looking at the readable text on the right, we see .3.0.2., but the bytes are also displayed in the same position: 00 332e 302e 3200, and we know 00 is a NULL character because we’ve done quite a lot of these expressions. This is the hardest part, believe me! But if you’re still following along, let’s wrap this up by putting everything we’ve found together in a rule.

Adding a rule to Syft

Edit the syft/pkg/cataloger/binary/classifiers.go and add an entry for this binary:

                {
                        Class:    "fluent-bit-binary",
                        FileGlob: "**/fluent-bit",
                        EvidenceMatcher: FileContentsVersionMatcher(
                                // [NUL]3.0.2[NUL]%sFluent Bit
                                `\x00(?P<version>[0-9]+\.[0-9]+\.[0-9]+)\x00%sFluent Bit`,
                        ),
                        Package: "fluent-bit",
                        PURL:    mustPURL("pkg:github/fluent/fluent-bit@version"),
                        CPEs:    singleCPE("cpe:2.3:a:treasuredata:fluent_bit:*:*:*:*:*:*:*:*"),
                },

We’ve put the information we know about this in the entry: the FileGlob should find the file, as we’ve seen earlier, and the FileContentsVersionMatcher takes a regular expression to extract the version. I went ahead and looked up the format for the CPE and PURL this package should use and included these here, too.

Once we’ve added this, you can test it out right away by running your modified Syft code from the base directory of your git clone:

$ go run ./cmd/syft fluent/fluent-bit:3.0.2-amd64

 ✔ Pulled image                    
 ✔ Loaded image                                                                                                                                                          fluent/fluent-bit:3.0.2-amd64
 ✔ Parsed image                                                                                                                sha256:2007231667469ee1d653bdad65e55cc5f300985f10d7c4dffd6de0a5e76ff078
 ✔ Cataloged contents                                                                                                                 d3a6e4b5bc02c65caa673a2eb3508385ab27bb22252fa684061643dbedabf9c7
   ├── ✔ Packages                        [39 packages]  
   ├── ✔ File digests                    [1,771 files]  
   ├── ✔ File metadata                   [1,771 locations]  
   └── ✔ Executables                     [313 executables]  
NAME              VERSION                  TYPE     
base-files        11.1+deb11u9             deb       
ca-certificates   20210119                 deb       
fluent-bit        3.0.2                    binary    
libatomic1        10.2.1-6                 deb       
...

Great! It worked! If we try this out on some different versions, it looks like 3.0.1-amd64 works as well, but this definitely did not work for 2.2.1-arm64 or 2.1.10, so we repeat the process a bit and find out that we need to make our expression a bit better to account for the variance: the arm64 versions have a couple of extra NULL characters and the older versions don’t have the %s part. Eventually, this expression seemed to do the trick for the images I tried: \x00(?P<version>[0-9]+\.[0-9]+\.[0-9]+)\x00[^\d]*Fluent.
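To spot-check a handful of published tags in one go, a quick loop over the locally modified Syft works; the tag list here is just the set of versions mentioned above.

$ # Run the modified Syft against several fluent-bit tags and show its row
$ for tag in 3.0.2-amd64 3.0.1-amd64 2.2.1-arm64 2.1.10; do
    echo "== fluent/fluent-bit:$tag =="
    go run ./cmd/syft "fluent/fluent-bit:$tag" 2>/dev/null | grep fluent-bit
  done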

We could have made this simpler — to just find <NULL><version><NULL>, but there are quite a few strings in the various binaries that match this pattern and we want to try our best to find the one that looks like it’s the specific version string we want. When we looked at the various bytes across a number of versions both the version and the name of the project showed up together like this. Having done a number of these classifiers in the past, I can say this is a fairly common type of thing to look for.

Testing

Since we already captured a test snippet, the last thing to do is add a test. If you recall, when we used the add-snippet command, it told us: 

wrote snippet to "classifiers/snippets/fluent-bit/3.0.2/linux-amd64/fluent-bit"

This is what we’re going to want to reference. So let’s add a test case to syft/pkg/cataloger/binary/classifier_cataloger_test.go, the very large Test_Cataloger_PositiveCases test:

                {
                        logicalFixture: "fluent-bit/3.0.2/linux-amd64",
                        expected: pkg.Package{
                                Name:      "fluent-bit",
                                Version:   "3.0.2",
                                Type:      "binary",
                                PURL:      "pkg:github/fluent/fluent-bit@3.0.2",
                                Locations: locations("fluent-bit"),
                                Metadata:  metadata("fluent-bit-binary"),
                        },
                },

Wrapping up

Now that we have: 1) identified a binary, 2) added a rule to Syft, and 3) added a test case with a small snippet, we’re done coding! Submit a pull request and sit back, knowing you’ve made the world a better place!

David and Goliath: the Intersection of APTs and Software Supply Chain Security

Note: This is a multi-part series primer on the intersection of advanced persistent threats (APTs) and software supply chain security (SSCS). This blog post is the second in the series. If you’d like to start from the beginning, you can find the first blog post here.

Last week we set the stage for discussing APTs and the challenges they pose for software supply chain security by giving a quick overview of each topic. This week we will dive into the details of how the structure of the open source software supply chain is a uniquely ripe target for APTs.

The Intersection of APTs and Software Supply Chain Security

The Software Ecosystem: A Ripe Target

APT groups often prioritize the exploitation of software supply chain vulnerabilities. This is due to the asymmetric structure of the software ecosystem. By breaching a single component, such as a build system, they can gain access to any organization using the compromised software component. This inverts the cost-benefit calculation of the research and development needed to discover a vulnerability and craft an exploit for it. Previously, APTs focused primarily on targets where the payoff warranted the investment, or on vulnerabilities so widespread that the attack could be automated. The complex web of software dependencies now allows APTs to scale their attacks thanks to the structure of the ecosystem.

The Software Supply Chain Security Dynamic: An Unequal Playing Ground

The interesting challenge with software supply chain security is that securing the supply chain requires even more effort than an APT would take to exploit it. The rub comes because each company that consumes software has to build a software supply chain security system to protect their organization. An APT investing in exploiting a popular component or system gets the benefit of access to all of the software built on top of it.

Given that security organizations are at a structural disadvantage, how can organizations even the odds?

How Do I Secure My Software Supply Chain from APTs?

An organization’s ability to detect the threat of APTs in its internal software supply chain comes down to three core themes that can be summed up as “go deep, go wide and integrate feedback”. Specifically, this means: the deeper the visibility into your organization’s software supply chain, the less surface area an attacker has to slip in malicious software. The wider this visibility is deployed across the software development lifecycle, the earlier an attacker will be caught. And neither of the first two points matters if the feedback produced isn’t integrated into an overall security program that can act on the signals surfaced.

By applying these three core principles to the design of a secure software supply chain, an organization can ensure that they balance the playing field against the structural advantage APTs possess.

How Can I Easily Implement a Strategy for Securing My Software Supply Chain?

The core principles of depth, breadth and feedback are powerful touchstones to utilize when designing a secure software supply chain that can challenge APTs but they aren’t specific rules that can be easily implemented. To address this, Anchore has created the open source VIPERR Framework to provide specific guidance on how to achieve the core principles of software supply chain security.

VIPERR is a free software supply chain security framework that Anchore created for organizations to evaluate and improve the security posture of their software supply chain. VIPERR stands for visibility, inspection, policy enforcement, remediation, and reporting. 

Utilizing the VIPERR Framework an organization can satisfy the three core principles of software supply chain security; depth, breadth and feedback. By following this guide, numerous Fortune 500 enterprises and top federal agencies have transformed their software supply chain security posture and become harder targets for advanced persistent threats. If you’re looking to design and run your own secure software supply chain system, this framework will provide a shortcut to ensure the developed system will be resilient. 

How Can I Comprehensively Implement a Strategy for Securing My Software Supply Chain?

There are a number of comprehensive initiatives that define best practices for software supply chain security. Organizations ranging from the National Institute of Standards and Technology (NIST), with standards such as SP 800-53, SP 800-218, and SP 800-161, to the Cloud Native Computing Foundation (CNCF) and the Open Source Security Foundation (OpenSSF) have created detailed documentation on their recommendations for achieving a comprehensive supply chain security program, such as the SLSA framework and the Secure Supply Chain Consumption Framework (S2C2F) Project. Be aware that these are not quick and dirty solutions for achieving a “reasonably” secure software supply chain. They are large undertakings for any organization and should be given the resources needed to achieve success.

We don’t have the time to go over each in this blog post but we have broken each down in our complete guide to software supply chain security.

This is the second in a series of blog posts focused on the intersection of APTs and software supply chain security. This blog post highlighted the reasons that APTs focus their efforts on software supply chain exploits and the potential avenues that companies have to combat this threat. Next week we will discuss the Anchore Enterprise solution as a turnkey platform to implement the strategies outlined above.

Anchore Enterprise 5.6: Improved Remediation & Visibility with Account Context Switcher

The Anchore Enterprise 5.6 release features updates to account management that enable administrators to switch context quickly, analyzing and troubleshooting multiple datasets across multiple accounts, and that allow users to share data across accounts easily and safely.

Improve data isolation and performance with accounts and role-based access controls 

Accounts are the highest level object in the Anchore Enterprise system. Each account has its own SBOM assets, users, policies, and reports that are siloed from other accounts in the system. Admins can separate their environment into different accounts based on teams, business units, projects, or products. With accounts, admins can isolate data to meet data security requirements or create workflows that are customized to the data flowing into that account. 

Accounts allow secure data sharing in a single system. On top of that, they enable performance improvements by reducing the total amount of data that is processed when updating records or generating reports.

Each account can have users and roles assigned. Admins create users and set identification as well as permissions. Users have roles assigned that may have custom rights or privileges to data that can be viewed and managed within the account.

Leveraging account context to improve remediation and visibility  

In Anchore Enterprise an account object is a collection of settings and permissions that allow a user to access, maintain and manage data. Anchore Enterprise is a multi-tenancy system that consists of three logical components (accounts, users and permissions) providing flexibility for users to access and manage their data.

On occasion, users may need to access information that resides outside of their own account. To investigate or troubleshoot issues and to manage data visibility across teams, the ability to switch account context is crucial. Within the Anchore Enterprise UI, the Account Context option enables “context switching” to view the SBOMs, analyses, and reports of different accounts while still retaining the user’s own profile.

Standard users are also provided with an additional level and vector of access control.

Adding Account Context in the URL

Until now, the URLs in Anchore did not include account context, which limited data sharing across accounts. Different users within the same account, or users who were not part of the same account, had to manually navigate to resources that were shared.

In Anchore 5.6, account context is now included in the URL. This simplifies the workflow for sharing reports among users who have access to shared resources within the same or across different accounts.

Example Scenario

1. Create an account TestAccount and add a user TestUser1

2. Analyze the latest tag for Ubuntu under TestAccount context as username admin.

http://localhost:3000/TestAccount/artifacts/image/docker.io/ubuntu/latest/sha256:d21429c4635332e96a4baae3169e3f02ac8e24e6ae3d89a86002d49a1259a4f7

3. Log out of username admin

4. Paste the URL for the image analysis page above

5. Log in as username TestUser1

6. You will now be directly navigated to the Image analysis page

7. Verify in the top right that you are logged in as username TestUser1

8. If you are trying to access a link without having access to the resource, you will receive an error message on the top right corner of the UI.
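
Reading the example URL in step 2, the account context appears as the first path segment. The general shape (shown here as a sketch with placeholder segments) is:

http://<anchore-ui-host>/<account-name>/artifacts/image/<registry>/<repository>/<tag>/<image-digest>

Any user with access to the referenced account can follow such a link straight to the same analysis page, which is what steps 3 through 7 demonstrate.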

Please feel free to review our release notes for other notable updates and bug fixes in Anchore Enterprise 5.6.

How Cisco Umbrella Achieved FedRAMP Compliance in Weeks

Implementing compliance standards can be a daunting task for IT and security teams. The complexity and volume of requirements, increased workload, and resource constraints make it challenging to ensure compliance without overwhelming those responsible. Our latest case study, “How Cisco Umbrella Achieved FedRAMP Compliance in Weeks,” provides a roadmap for overcoming these challenges, leading to a world of streamlined compliance with low cognitive overhead.

Challenges Faced by Cisco Umbrella

Cisco Umbrella for Government, a cloud-native cybersecurity solution tailored for federal, state, and local government agencies, faced a tight deadline to meet FedRAMP vulnerability scanning requirements. They needed to integrate multiple security functions into a single, manageable solution while ensuring comprehensive protection across various environments, including remote work settings. Key challenges included:

  • Meeting all six FedRAMP vulnerability scanning requirements
  • Maintaining and automating STIG & FIPS compliance for Amazon EC2 virtual machines
  • Integrating end-to-end container security across the CI/CD pipeline, Amazon EKS, and Amazon ECS
  • Meeting SBOM requirements for White House Executive Order (EO 14028)

Solutions Implemented

To overcome these challenges, Cisco Umbrella leveraged Anchore Enterprise, a leading software supply chain security platform specializing in container security and vulnerability management. Anchore Enterprise integrated seamlessly with Cisco's existing infrastructure, providing the capabilities needed to address each of these challenges.

These capabilities enabled Cisco Umbrella to secure their software supply chain, ensuring compliance with FedRAMP, STIG, FIPS, and EO 14028 within a short timeframe.

Remarkable Results

By integrating Anchore Enterprise, Cisco Umbrella achieved:

  • FedRAMP, FIPS, and STIG compliance in weeks versus months
  • Reduced implementation time and improved developer experience
  • Proactive vulnerability detection in development, saving hours of developer time
  • Simplified security data management with a complete SBOM management solution

Download the Case Study Today

Navigating the complexity and volume of compliance requirements can be overwhelming for IT and security teams, especially with increased workloads and resource constraints. Cisco Umbrella’s experience shows that with the right tools, achieving compliance can be streamlined and manageable. Discover how you can implement these strategies in your organization by downloading our case study, “How Cisco Umbrella Achieved FedRAMP Compliance in Weeks,” and take the first step towards streamlined compliance today.

Using the Common Form for SSDF Attestation: What Software Producers Need to Know

The release of the long-awaited Secure Software Development Attestation Form on March 18, 2024 by the Cybersecurity and Infrastructure Security Agency (CISA) increases the focus on cybersecurity compliance for software used by the US government. With the release of the SSDF attestation form, the clock is now ticking for software vendors and federal systems integrators to comply with and attest to secure software development practices.

This initiative is rooted in the cybersecurity challenges highlighted by Executive Order 14028, including the SolarWinds attack and the Colonial Pipeline ransomware attack, which clearly demonstrated the need for a coordinated national response to the emerging threats of a complex software supply chain. Attestation to Secure Software Development Framework (SSDF) requirements using the new Common Form is the most recent, and likely not the final, step towards a more secure software supply chain for both the United States and the world at large. We will take you through the details of what this form means for your organization and how to best approach it.

Overview of the SSDF attestation

SSDF attestation is part of a broader effort derived from the Cybersecurity EO 14028 (formally titled “Improving the Nation’s Cybersecurity”). As a result of this EO, the Office of Management and Budget (OMB) issued two memorandums, M-22-18 “Enhancing the Security of the Software Supply Chain through Secure Software Development Practices” and M-23-16 “Update to Memorandum M-22-18”.

These memos require federal agencies to obtain self-attestation forms from software suppliers, who must attest to complying with a subset of the Secure Software Development Framework (SSDF).

Before the publication of the attestation form, the SSDF was a software development best practices standard published by the National Institute of Standards and Technology (NIST), drawing on industry frameworks like BSIMM and OWASP SAMM. It was a useful resource for organizations that valued security intrinsically and wanted to run a secure software development program without external incentives like formal compliance requirements.

Now, the SSDF attestation form requires software providers to self-attest to having met a subset of the SSDF best practices. This transition of secure software development from an aspirational standard to a compliance standard has a number of implications that we cover below. The most important thing to keep in mind is that while the attestation form doesn't require a software provider to be formally certified before they can transact with a federal agency, as FedRAMP does, there are retroactive penalties that can be applied in cases of non-compliance.

Who/What is Affected?

  1. Software providers to federal agencies
    • Federal service integrators
    • Independent software vendors
    • Cloud service providers
  2. Federal agencies and DoD programs that use any of the above software providers

Included

  • New software: Any software developed after September 14, 2022
  • Major updates to existing software: A major version change after September 14, 2022
  • Software-as-a-Service (SaaS)

Exclusions

  • First-party software: Software developed in-house by federal agencies. SSDF is still considered a best practice but does not require self-attestation
  • Free and open-source software (FOSS): Even though FOSS components and end-user products are excluded from self-attestation, the SSDF requires that specific controls are in place to protect against software supply chain security breaches

Key Requirements of the Attestation Form

There are two high-level requirements for meeting compliance with the SSDF attestation form:

  1. Meet the technical requirements of the form
    • Note: NIST SSDF has 19 categories and 42 total requirements. The self-attestation form has 4 categories which are a subset of the full SSDF
  2. Self-attest to compliance with the subset of SSDF
    • Sign and return the form

Timeline

The timeline for compliance with the SSDF self-attestation form involves two critical dates:

  • Critical software: Jun 11, 2024 (3 months after approval on March 11)
  • All software: Sep 11, 2024 (6 months after approval on March 11)

Implications

Now that CISA has published the final version of the SSDF attestation form there are a number of implications to this transition. One is financial and the other is potentially criminal.

The financial penalty of not attesting to secure software development practices via the form can be significant. Federal agencies are required to stop using the software, potentially impacting your revenue,  and any future agencies you want to work with will ask to see your SSDF attestation form before procurement. Sign the form or miss out on this revenue.

The second penalty is a bit scarier from an individual perspective. An officer of the company has to sign the attestation form to state that they are responsible for attesting to the fact that all of the form’s requirements have been met. Here is the relevant quote from the form:

“Willfully providing false or misleading information may constitute a violation of 18 U.S.C. § 1001, a criminal statute.”

It is also important to realize that this isn’t an unenforceable threat. There is evidence that the DOJ Civil Cyber Fraud Initiative is trying to crack down on government contractors failing to meet cybersecurity requirements. They are bringing False Claims Act investigations and enforcement actions. This will likely weigh heavily on both the individual that signs the form and who is chosen at the organization to sign the form.

Given this, most organizations will likely opt to utilize a third-party assessment organization (3PAO) to sign the form in order to shift liability off of any individual in the organization.

Challenges and Considerations

Do I still have to sign if I have a 3PAO do the technical assessment?

No, as long as the 3PAO is FedRAMP-certified.

What if I can’t comply in time?

You can draft a plan of action and milestones (POA&M) that covers how you will address the gaps between your current system and the system required by the attestation form. If the agency is satisfied with the POA&M, it can continue to use your software, but it has to request either an extension of the deadline from OMB or a waiver in order to do so.

Can only the CEO and COO sign the form?

The draft form that was published required either the CEO or COO to sign, but new language was added to the final form that allows a different company employee to sign the attestation form.

Conclusion

Cybersecurity compliance is a journey, not a destination. SSDF attestation is the next step in that journey for secure software development. With the release of the SSDF attestation form, the SSDF standard is now transformed from a recommendation into a requirement. Given the overall trend of cybersecurity modernization that was kickstarted with FISMA in 2002, it would be prudent to assume that this SSDF attestation form is an intermediate step before the requirements become a hard gate where compliance will have to be demonstrated as a prerequisite to utilizing the software.

If you're interested in a deep dive into what is technically required to meet the requirements of the SSDF attestation form, read all of the nitty-gritty details in our eBook, “SSDF Attestation 101: A Practical Guide for Software Producers”.

If you’re looking for a solution to help you achieve the technical requirements of SSDF attestation quickly, take a look at Anchore Enterprise. We have helped hundreds of enterprises achieve SSDF attestation in days versus months with our automated compliance platform.

With Great Power Comes Great Responsibility: APTs & Software Supply Chain Security

Note: This is a multi-part series primer on the intersection of advanced persistent threats (APTs) and software supply chain security (SSCS). This blog post is the first in the series. We will update this blog post with links to the additional parts of the series as they are published.
• Part 1 (This blog post)
• Part 2
• Part 3

In the realm of cybersecurity, the convergence of Advanced Persistent Threats (APTs) and software supply chain security presents a uniquely daunting challenge for organizations. APTs, characterized by their sophisticated, state-sponsored or well-funded nature, focus on stealthy, long-term data theft, espionage, or sabotage, targeting specific entities. Their effectiveness is amplified by the asymmetric power dynamics of a well-funded attacker versus a resource-constrained security team.

Modern supply chains inadvertently magnify the impact of APTs due to the complex and interconnected dependency network of software and hardware components. The exploitation of this weakness by APTs not only compromises the targeted organizations but also poses a systemic risk to all users of the compromised downstream components. The infamous SolarWinds exploit exemplifies the far-reaching consequences of such breaches.

This landscape underscores the necessity for an integrated approach to cybersecurity, emphasizing depth, breadth, and feedback to create a holistic software supply chain security program that can withstand even adversaries as motivated and well-resourced as APTs. Before we jump into how to create a secure software supply chain that can resist APTs, let’s understand our adversary a bit better first.

Know Your Adversary: Advanced Persistent Threats (APTs)

What is an Advanced Persistent Threat (APT)?

An Advanced Persistent Threat (APT) is a sophisticated, prolonged cyberattack, usually state-sponsored or executed by well-funded criminal groups, targeting specific organizations or nations. Characterized by advanced techniques, APTs exploit zero-day vulnerabilities and custom malware, focusing on stealth and long-term data theft, espionage, or sabotage. Unlike broad, indiscriminate cyber threats, APTs are highly targeted, involving extensive research into the victim’s vulnerabilities and tailored attack strategies.

APTs are marked by their persistence, maintaining a foothold in a target’s network for extended periods, often months or years, to continually gather information. They are designed to evade detection, blending in with regular network traffic, and using methods like encryption and log deletion. Defending against APTs requires robust, advanced security measures, continuous monitoring, and a proactive cybersecurity approach, often necessitating collaboration with cybersecurity experts and authorities.

High-Profile APT Example: Operation Triangulation

The recent Operation Triangulation campaign disclosed by Kaspersky researchers is an extraordinary example of an APT in both its sophistication and depth. The campaign made use of four separate zero-day vulnerabilities, took a highly targeted approach towards specific individuals at Kaspersky, combined a multi-phase attack pattern, and persisted over a four-year period. Its complexity, the significant resources it implies (possibly from a nation-state), and the stealthy, methodical progression of the attack align closely with the hallmarks of APTs. Famed security researcher Bruce Schneier, writing on his blog, Schneier on Security, wasn't able to contain his surprise upon reading the details of the campaign: “[t]his is nation-state stuff, absolutely crazy in its sophistication.”

What is the impact of APTs on organizations?

Ignoring the threat posed by Advanced Persistent Threats (APTs) can lead to significant impact for organizations, including extensive data breaches and severe financial losses. These sophisticated attacks can disrupt operations, damage reputations, and, in cases involving government targets, even compromise national security. APTs enable long-term espionage and strategic disadvantage due to their persistent nature. Thus, overlooking APTs leaves organizations exposed to continuous, sophisticated cyber espionage and the multifaceted damages that follow.

Now that we have a good grasp on the threat of APTs, we turn our attention to the world of software supply chain security to understand the unique features of this landscape.

Setting the Stage: Software Supply Chain Security

What is Software Supply Chain Security?

Software supply chain security is focused on protecting the integrity of software through its development and distribution. Specifically it aims to prevent the introduction of malicious code into software that is utilized as components to build widely-used software services.

The open source software ecosystem is a complex supply chain that solves the problem of redundancy of effort. By creating a single open source version of a web server and distributing it, new companies that want to operate a business on the internet can re-use the generic open source web server instead of having to build their own before they can do business. These companies can instead focus their efforts on building new bespoke software on top of a web server that does new, useful things for previously unserved users. This is typically referred to as composable software building blocks, and it is one of the most important outcomes of the open source software movement.

But as they say, “there are no free lunches”. While open source software has created this incredible productivity boon, that boon comes with responsibility.

What is the Key Vulnerability of the Modern Software Supply Chain Ecosystem?

The key vulnerability in the modern software supply chain is the structure of how software components are re-used, each with its own set of dependencies, creating a complex web of interlinked parts. This intricate dependency network can lead to significant security risks if even a single component is compromised, as vulnerabilities can cascade throughout the entire network. This interconnected structure makes it challenging to ensure comprehensive security, as a flaw in any part of the supply chain can affect the entire system.

Modern software is particularly vulnerable to software supply chain attacks because 70-90% of a modern application is made up of open source software components, with the remaining 10-30% being the proprietary code that implements company-specific features. This means that by breaching popular open source software frameworks and libraries, an attacker can amplify the blast radius of their attack to effectively reach significant portions of internet-based services with a single attack.

If you’re looking for a deeper understanding of software supply chain security we have written a comprehensive guide to walk you through the topic in full.

High-Profile Software Supply Chain Exploit Example: SolarWinds

In one of the most sophisticated supply chain attacks, malicious actors compromised the update mechanism of SolarWinds’ Orion software. This breach allowed the attackers to distribute malware to approximately 18,000 customers. The attack had far-reaching consequences, affecting numerous government agencies, private companies, and critical infrastructure.

Looking at the example of SolarWinds, the lesson we should take away is not to focus solely on prevention. APTs have a wealth of resources to draw upon. Instead, the focus should be on monitoring the software we consume, build, and ship for unexpected changes. Modern software supply chains come with a great deal of responsibility. The software we use and ship needs to be understood and monitored.

This is the first in a series of blog posts focused on the intersection of APTs and software supply chain security. This blog post highlighted the contextual background to set the stage for the unique consequences of these two larger forces. Next week, we will discuss the implications of the collision of these two spheres in the second blog post in this series.

Anchore’s June Line-Up: Essential Events for Software Supply Chain Security and DevSecOps Enthusiasts

Summer is beginning to ramp up, but before we all check out for the holidays, Anchore has a sizzling hot line-up of events to keep you engaged and informed. This June, we are excited to host and participate in a number of events that cater to the DevSecOps crowd and the public sector. From insightful webinars to hands-on workshops and major conferences, there’s something for everyone looking to enhance their knowledge and connect with industry leaders. Join us at these events to learn more about how we are driving innovation in the software supply chain security industry.

WEBINAR: How the US Navy is enabling software delivery from lab to fleet

Date: Jun 4, 2024

The US Navy’s DevSecOps platform, Party Barge, has revolutionized feature delivery by significantly reducing onboarding time from 5 weeks to just 1 day. This improvement enhances developer experience and productivity through actionable findings and fewer false positives, while maintaining high security standards with inherent policy enforcement and Authorization to Operate (ATO). As a result, development teams can efficiently ship applications that have achieved cyber-readiness for Navy Authorizing Officials (AOs).

In an upcoming webinar, Sigma Defense and Anchore will provide an in-depth look at the secure pipeline automation and security artifacts that expedite application ATO and production timelines. Topics will include strategies for removing silos in DevSecOps, building efficient development pipeline roles and component templates, delivering critical security artifacts for ATO (such as SBOMs, vulnerability reports, and policy evidence), and streamlining operations with automated policy checks on container images.

WORKSHOP: VIPERR — Actionable Framework for Software Supply Chain Security

Date: Jun 17, 2024 from 8:30am – 2:00pm ET

Location: Carahsoft office in Reston, VA

Anchore, in partnership with Carahsoft, is offering an exclusive in-person workshop to walk security practitioners through the principles of the VIPERR framework. Learn the framework hands-on from the team that originally developed the industry leading software supply chain security framework. In case you’re not familiar, the VIPERR framework enhances software supply chain security by enabling teams to evaluate and improve their security posture. It offers a structured approach to meet popular compliance standards. VIPERR stands for visibility, inspection, policy enforcement, remediation, and reporting, focusing on actionable strategies to bolster supply chain security.

The workshop covers building a software bill of materials (SBOM) for visibility, performing security checks for vulnerabilities and malware during inspection, enforcing compliance with both external and internal standards, and providing recommendations and automation for quick issue remediation. Additionally, timely reporting at any development stage is emphasized, along with a special topic on achieving STIG compliance.

EVENT: Carahsoft DevSecOps Conference 2024

Date: Jun 18, 2024

Location: The Ronald Reagan Building and International Trade Center in Washington, DC

If you’re planning to be at the show, our team is looking forward to meeting you.  You can book a demo session with us in advance!

On top of offering the VIPERR workshop, the Anchore team will be attending Carahsoft’s 2nd annual DevSecOps Conference in Washington, DC, a comprehensive forum designed to address the pressing technological, security, and innovation challenges faced by government agencies today. The event aims to explore innovative approaches such as DoD software factories, which drive efficiency and enhance the delivery of citizen-centric services, and DevSecOps, which integrates security into the software development lifecycle to combat evolving cybersecurity threats. Through a series of panels and discussions, attendees will gain valuable knowledge on how to leverage these cutting-edge strategies to improve their operations and service delivery.

EVENT: AWS Summit Washington, DC

Dates:  June 26-27, 2024

Location: Walter E. Washington Convention Center in Washington, DC

If you’re planning to be at the show, our team is looking forward to meeting you.  You can book a demo session with us in advance!

To round out June, Anchore will also be attending AWS Summit Washington, DC. The event highlights how AWS partners can help public sector organizations meet the needs of federal agencies. Anchore is an AWS Public Sector Partner and a graduate of the AWS ISV Accelerate program.

See how Anchore helped Cisco Umbrella for Government achieve FedRAMP compliance by reading the co-authored blog post on the AWS Partner Network (APN) Blog. Or better yet, drop by our booth and the team can give you a live demo of the product.

VIRTUAL EVENT: Life after the xz utils backdoor hack with Josh Bressers

Date: Wednesday, June 5, from 12:35 PM – 1:20 PM EDT

The xz utils hack was a significant breach that profoundly undermined trust within the open source community. The discovery of the backdoor revealed vulnerabilities in the software supply chain. As a member of the open source community and a solution provider in the software supply chain security field, we at Anchore have strong opinions about xz specifically and open source security generally. Anchore's VP of Security, Josh Bressers, will be speaking publicly about this topic at Upstream 2024.

Be sure to catch the live stream of “Life after the xz utils backdoor hack,” a panel discussion featuring Josh Bressers. The panel will cover the implications of the recent xz utils backdoor hack and how the attack deeply impacted trust within the open source community. In keeping with the Upstream 2024 theme of “Unusual Ideas to Solve the Usual Problems”, Josh will be presenting the “unusual” solution that Anchore has developed to keep these types of hacks from impacting the industry. The discussion will include insights from industry experts such as Shaun Martin of BlackIce, Jordan Harband, prolific JavaScript maintainer, Rachel Stephens from RedMonk, and Terrence Fischer from Boeing.

Wrap-Up

Don’t miss out on these exciting opportunities to connect with Anchore and learn about the latest advancements in software supply chain security and DevSecOps. Whether you join us for a webinar, participate in our in-person VIPERR workshop, or visit us at one of the major conferences, you’ll gain valuable insights and practical knowledge to enhance your organization’s security posture. We’re looking forward to engaging with you and helping you navigate the evolving digital landscape. See you in June!

Also, if you want to stay up-to-date on all of the events that Anchore hosts or participates in be sure to bookmark our events page and check back often!

Navigating the Updates to cATO: Critical Changes & Practical Advice for DoD Programs

On April 11, the US Department of Defense (DoD)’s Chief Information Officer (CIO) released the DevSecOps Continuous Authorization Implementation Guide, marking the next step in the evolution of the DoD’s efforts to modernize its security and compliance ecosystem. This guide is part of a larger trend of compliance modernization that is transforming the US public sector and the global public sector as a whole. It aims to streamline and enhance the processes for achieving continuous authorization to operate (cATO), reflecting a continued push to shift from traditional, point-in-time authorizations to operate (ATOs) to a more dynamic and ongoing compliance model.

The new guide introduces several significant updates, including the introduction of specific security and development metrics required to achieve cATO, comprehensive evaluation criteria, practical advice on how to meet cATO requirements and a special emphasis on software supply chain security via software bills of material (SBOMs).

We break down the updates that are important to highlight if you’re already familiar with the cATO process. If you’re looking for a primer on cATO to get yourself up to speed, read our original blog post or click below to watch our webinar on-demand.

Continuous Authorization Metrics

A new addition to the corpus of information on cATO is the introduction of specific security and software development metrics that are required to be continuously monitored. Many of these come from the private sector DevSecOps best practices that have been honed by organizations at the cutting edge of this field, such as Google, Microsoft, Facebook and Amazon.

We’ve outlined the major ones below.

  1. Mean Time to Patch Vulnerabilities:
    • Description: Average time between the identification of a vulnerability in the DevSecOps Platform (DSOP) or application and the successful production deployment of a patch (see the calculation sketch after this list).
    • Focus: Emphasis on vulnerabilities with high to moderate impact on the application or mission.
  2. Trend Metrics:
    • Description: Metrics associated with security guardrails and control gates PASS/FAIL ratio over time.
    • Focus: Show improvements in development team efforts at developing secure code with each new sprint and the system’s continuous improvement in its security posture.
  3. Feedback Communication Frequency:
    • Description: Metrics to ensure feedback loops are in place, being used, and trends showing improvement in security posture.
  4. Effectiveness of Mitigations:
    • Description: Metrics associated with the continued effectiveness of mitigations against a changing threat landscape.
  5. Security Posture Dashboard Metrics:
    • Description: Metrics showing the stage of application and its security posture in the context of risk tolerances, security control compliance, and security control effectiveness results.
  6. Container Metrics:
    • Description: Measure the age of containers against the number of times they have been used in a subsystem and the residual risk based on the aggregate set of open security issues.
  7. Test Metrics:
    • Description: Percentage of test coverage passed, percentage of passing functional tests, count of various severity level findings, percentage of threat actor actions mitigated, security findings compared to risk tolerance, and percentage of passing security control compliance.
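
To make the first metric concrete, here is a minimal sketch of computing mean time to patch from a hypothetical CSV of vulnerability timestamps; the file name, column layout, and epoch-second format are illustrative assumptions rather than anything the guide prescribes:

# patch_times.csv columns (hypothetical): cve,identified_epoch,patched_epoch
$ awk -F, 'NR > 1 { sum += $3 - $2; n++ } END { printf "Mean time to patch: %.1f days\n", sum / n / 86400 }' patch_times.csv

Tracking this number per sprint or per release produces the kind of trend line the guide asks programs to monitor.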

The common thread across these required metrics is being able to quickly understand whether the overall security of the application is improving. If it isn't, that is a sign that something within the system is out of balance and needs attention.

Comprehensive and detailed evaluation criteria

Tucked away in Appendix B, “Requirements”, is a detailed table that spells out the individual requirements that need to be met in order to achieve a cATO. This table is meant to improve the cATO process so that the individuals in a program who are implementing the requirements know the criteria they will be evaluated against, reducing the back-and-forth between the program and the Authorizing Official (AO) who evaluates them.

Practical Implementation Advice

The ecosystem for DSOPs has evolved significantly since cATO was first announced in February 2022. Over the past 2+ years, a number of early adopters, such as Platform One, have blazed a trail and learned all of the painful lessons in order to smooth the path for other organizations that are now looking to modernize their development practices. The advice in the implementation guide is a high-signal, low-noise distillation of these hard-won lessons.

DevSecOps Platform (DSOP) Advice

If you're more interested in writing software than in operating a DSOP, then you'll want to focus your attention on pre-existing DSOPs, commonly called DoD software factories.

We have written both a primer for understanding DoD software factories and an index of additional content that can quickly direct you to deep dives in specific content you’re interested in.

If you love to get your hands dirty and would rather have full control over your development environment, just be aware that this is specifically recommended against:

Build a new DSOP using hardened components (this is the most time-consuming approach and should be avoided if possible).

DevSecOps Culture Advice

While the DevSecOps culture and process advice is well-known in the private sector, it is still important to emphasize in the federal context that is currently transitioning to the modern software development paradigm.

  1. Bring the security team in at the start of development and keep them involved throughout.
  2. Create secure agile processes to support the continued delivery of value without introducing unnecessary risk.

Continuous Monitoring (ConMon) Advice

Ensure that all environments are continuously monitored (e.g., development, test and production). Utilize the security data collected from these environments to power and inform thresholds and triggers for active incident response. ConMon and ACD are separate pillars of cATO but need to be integrated so that information is flowing to the systems that can make best use of it. It is this integrated approach that delivers on the promise of significantly improved security and risk outcomes.

Active Cyber Defense (ACD) Advice

Both a Security Operations Center (SOC) and an external CSSP are needed in order to achieve the Active Cyber Defense (ACD) pillar of cATO. On top of that, there also has to be a detailed incident response plan and personnel trained on it. While cATO's goal is to automate as much of the security and incident response system as possible to reduce the burden of manual intervention, humans in the loop are still an important component for tuning the system and reacting with appropriate urgency.

Software Supply Chain Security (SSCS) Advice

The new implementation guide is very clear that a DSOP must create SBOMs for itself and for any applications that pass through it. This is a mega-trend that has been sweeping over the software supply chain security industry for the past decade. It is now the consensus that SBOMs are the best abstraction and practice for securing software development in the age of composable and complex software.

The 3 (+1) Pillars of cATO

While the 3 pillars of cATO and its recommendation for SBOMs as the preferred software supply chain security tool were called out in the original cATO memo, the recently published implementation guide again emphasizes the importance of the 3 (+1) pillars of cATO.

The guide quotes directly from the memo:

In order to prevent any combination of human errors, supply chain interdictions, unintended code, and support the creation of a software bill of materials (SBOM), the adoption of an approved software platform and development pipeline(s) are critical.

This is a continuation of the DoD specifically, and the federal government generally, highlighting the importance of software supply chain security and software bills of material (SBOMs) as “critical” for achieving the 3 pillars of cATO. This is why Anchore refers to this as the “3 (+1) Pillars of cATO“.

  1. Continuous Monitoring (ConMon)
  2. Active Cyber Defense (ACD)
  3. DevSecOps (DSO) Reference Design
  4. Secure Software Supply Chain (SSSC)

Wrap-up

The release of the new DevSecOps Continuous Authorization Implementation Guide marks a significant advancement in the DoD’s approach to cybersecurity and compliance. With a focus on transitioning from traditional point-in-time Authorizations to Operate (ATOs) to a continuous authorization model, the guide introduces comprehensive updates designed to streamline the cATO process. The goal being to ease the burden of the process and help more programs modernize their security and compliance posture.

If you're interested in learning more about the benefits and best practices of utilizing a DSOP (i.e., a DoD software factory) to transform cATO compliance into a “switch flip”, be sure to pick up a copy of our “DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images” white paper. Click below to download.

A Guide to Air Gapping: Balancing Security and Efficiency in Classified Environments

Not every organization needs to protect its data from spies who can infiltrate Pentagon-level security networks with printable masks, but more organizations than you might think use air gapping to protect their most classified information. This blog dives into the concept of air gapping, its benefits, and its drawbacks. Finally, we will highlight how Anchore Enterprise/Federal can be deployed into an air gapped network, giving organizations that require this level of security the best of both worlds: the extraordinary security of an air gapped network and the automation of a cloud-native software composition analysis tool to protect their software supply chain.

What is an air gap?

An air gap is exactly what it sounds like: a literal gap filled with air between a network and the greater internet. It is a network security practice where a computer (or network) is physically isolated from any external networks. This isolation is achieved by ensuring that the system has no physical or wireless connections to other networks, creating a secure environment that is highly resistant to external cyber threats.

By maintaining this separation, air gapped systems protect sensitive data and critical operations from potential intrusions, malware, and other forms of cyber attacks. This method is commonly used in high-security environments such as government agencies, financial institutions, and critical infrastructure facilities to safeguard highly confidential and mission-critical information.

If you count yourself as part of the defense industrial base (DIB), then you’re likely very familiar with IT systems that have an air-gapped component. Typically, highly classified data storage systems require air-gapping as part of their defense-in-depth strategy.

Benefits of air gapping

The primary benefit of utilizing air gapping is that it eliminates an entire class of security threats. This brings a significant reduction in risk to the systems and the data that the system processes. 

Beyond that, there are a number of secondary benefits that come from running an air gapped network. The first is that it gives the DevSecOps team running the system complete control over the security and architecture of the system. Also, any software that is run on an air gapped network inherits both the security and the compliance of the network. For 3rd-party software where the security of the software is in question, being able to run it in a fully isolated environment reduces the risk associated with this ambiguity. Similar to how anti-virus software creates sandboxes to safely examine potentially malicious files, an air gapped network creates a physical sandbox that both protects the isolated system from remote threats and prevents internal data from being pushed out to remote systems.

Drawbacks of air gapping

As with most security solutions, there is always a tradeoff between security and convenience, and air gapping is no different. While air gapping provides a high level of security by isolating systems from external networks, it comes at the cost of significant inconvenience. For instance, accessing and transferring data on air gapped systems requires physical presence and manual procedures, such as using USB drives or other removable media. This can be time-consuming and labor-intensive, especially in environments where data needs to be frequently updated or accessed by multiple users.

Additionally, the lack of network connectivity means that software updates, patches, and system maintenance tasks cannot be performed remotely or automatically. This increases the complexity of maintaining the system and ensuring it remains up-to-date with the latest security measures.

How Anchore enables air gapping for the DoD and DIB

As the leading Cloud-Native software composition analysis (SCA) platform, Anchore Enterprise/Federal offers an on-prem deployment model that integrates seamlessly with air gapped networks. This is a proven model with countless deployments into IL4 to IL6  (air-gapped and classified) environments. Anchore supports public sector customers like the US Department of Defense (DoD), North Atlantic Treaty Organization (NATO), US Air Force, US Navy, US Customs and Border Protection, Australian Government Department of Defence and more. 

Architectural Overview

In order to deploy Anchore Enterprise or Federal in an air gapped environment, a few architectural considerations need to be made. At a very basic level, two deployments of Anchore need to be provisioned. 

The deployment in the isolated environment is deployed as normal, with some slight modifications. When Anchore is deployed normally, the feed service reaches out to a dozen or so specified endpoints to compile and normalize vulnerability data, then compiles that data into a single feed dataset and stores it in the feeds database, which is either deployed in the cluster or on an external instance or managed database if configured.

When deploying in an air gapped environment, it's necessary to set Anchore to run in “apiOnly” mode. This prevents the feeds service from making unnecessary calls to the internet that would inevitably time out in an isolated environment.
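
As a rough illustration only, this mode could be toggled at install time with a Helm override along the lines of the sketch below; the chart name and key path are hypothetical and will vary with your Anchore Helm chart version, so treat this as a placeholder rather than the documented setting:

# Chart name and key path are illustrative; check your Anchore chart's values reference.
$ helm upgrade --install anchore anchore/enterprise \
    --namespace anchore \
    --set feeds.apiOnly=true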

A second skeleton deployment of Anchore will need to be deployed in a connected environment to sync the feeds. This deployment doesn’t need to spin up any of the services that will scan images or generate policy results and can therefore be done on a smaller instance. This deployment will reach out to the internet and build the feeds database. 

The only major consideration for this connected deployment is that it needs to run the same version of PostgreSQL as the disconnected deployment in order to ensure complete compatibility.
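
A quick way to confirm both deployments are on the same PostgreSQL version before any dump is taken (the host, user, and database names below are placeholders) is to query each feeds database directly:

$ psql -h <feeds-db-host> -U <db-user> -d <feeds-db-name> -c 'SHOW server_version;'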

Workflow

Once the two deployments are provisioned, the general workflow will be the following:

First, allow the feeds to sync on the connected environment. Once the feeds are synced, dump out the entire feeds database and transfer it to the disconnected environment through the method typically used for your environment. 

When the file is available on the high-side, transfer it to the feed database instance Anchore uses. When it’s finished transferring, perform a PostgreSQL restore and the feeds will be available.
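
Here is a minimal sketch of the dump-and-restore step using standard PostgreSQL tooling; the hosts, user, and database name are placeholders, and the transfer across the air gap happens through whatever approved mechanism your environment uses:

# On the connected (low-side) deployment: dump the synced feeds database
$ pg_dump -Fc -h <connected-feeds-db-host> -U <db-user> <feeds-db-name> > feeds.dump

# After transferring feeds.dump across the air gap, restore it into the
# feeds database instance used by the disconnected (high-side) deployment
$ pg_restore --clean -h <airgapped-feeds-db-host> -U <db-user> -d <feeds-db-name> feeds.dump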

Automation Process

To mimic how Anchore syncs feeds in a connected environment, many people choose to automate some or all of the workflow described previously. A cron job can be set up to run the backup along with a push to an automated cross-domain solution like those available in AWS GovCloud. From there, the air gapped deployment can be scheduled to look at that location and perform the feed restore on a regular cadence.
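
For example, a crontab entry on the connected side could invoke a wrapper script that performs the dump and pushes it to the cross-domain transfer location; the script path and schedule below are hypothetical:

# Dump and push the feeds database every Sunday at 02:00 (script name is illustrative)
0 2 * * 0 /usr/local/bin/anchore-feeds-sync.sh >> /var/log/anchore-feeds-sync.log 2>&1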

Wrap-Up

While not every organization counts the Impossible Missions Force (IMF) as an adversary, the practice of air gapping remains a vital strategy for many entities that handle highly sensitive information. By deploying Anchore Enterprise/Federal in an air gapped network, organizations can achieve an optimal balance between the robust security of air gapping and the efficiency of automated software composition analysis. This combination ensures that even the most secure environments can benefit from cutting-edge software supply chain security.

Air gapping is only one of a catalog of security practices that are necessary for systems that handle classified government data. If you're interested in learning more about the other requirements, check out our white paper on best practices for container images in DoD-grade environments.

Best Practices for DevSecOps in DoD Software Factories: A White Paper

The Department of Defense’s (DoD) Software Modernization Implementation Plan, unveiled in March 2023, represents a significant stride towards transforming software delivery timelines from years to days. This ambitious plan leverages the power of containers and modern DevSecOps practices within a DoD software factory.

Our latest white paper, titled “DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images,” dives deep into the security practices for securing container images in a DoD software factory. It also details how Anchore Federal—a pivotal tool within this framework—supports these best practices to enhance security and compliance across multiple DoD software factories including the US Air Force’s Platform One, Iron Bank, and the US Navy’s Black Pearl.

Key Insights from the White Paper

  • Securing Container Images: The paper outlines six essential best practices ranging from using trusted base images to continuous vulnerability scanning and remediation. Each practice is backed by both DoD guidance and relevant NIST standards, ensuring alignment with federal requirements.
  • Role of Anchore Federal: As a proven tool in the arena of container image security, Anchore Federal facilitates these best practices by integrating seamlessly into DevSecOps workflows, providing continuous scanning, and enabling automated policy enforcement. It’s designed to meet the stringent security needs of DoD software factories, ready for deployment even in classified and air-gapped environments.
  • Supporting Rapid and Secure Software Delivery: With the DoD’s shift towards software factories, the need for robust, secure, and agile software delivery mechanisms has never been more critical. Anchore Federal is the turnkey solution for automating security processes and ensuring that all container images meet the DoD’s rigorous security and compliance requirements.

Download the White Paper Today

Empower your organization with the insights and tools needed for secure software delivery within the DoD ecosystem. Download our white paper now and take a significant step towards implementing best-in-class DevSecOps practices in your operations. Equip your teams with the knowledge and technology to not just meet, but exceed the modern security demands of the DoD’s software modernization efforts.

RMF and ATO with RAISE 2.0 — Navy’s DevSecOps solution for Rapid Delivery

In November 2022, the US Department of Navy (DON) released the RAISE 2.0 Implementation Guide to support its Cyber Ready initiative, pivoting from a time-intensive compliance checklist and mindset to continuous cyber-readiness with real-time visibility. RAISE 2.0 embraces a “shift left” philosophy, performing security checks at all stages of the SDLC and ensuring security issues are found as early as possible when they are faster and easier to fix.

3 Things to Know about RAISE 2.0 

RAISE 2.0 provides a way to automate the Implement, Assess, and Monitor phases of the Risk Management Framework (RMF). New containerized applications can be developed using an existing DevSecOps platform (DSOP) that already has an Authorization to Operate (ATO) and meets the RAISE requirements. This eliminates the need for a new application to get its own separate ATO because it inherits the ATO of the DSOP.

There are three primary things you need to know about RAISE 2.0:

  1. Who must follow RAISE 2.0? 
    • RAISE 2.0 only applies to applications delivered via containers
    • RAISE 2.0 requires all teams starting or upgrading containerized software applications to use a pre-vetted DevSecOps platform called a RAISE Platform of Choice (RPOC)
  2. What is an RPOC?
    • RPOC is a designation that a DevSecOps Platform (DSOP) receives after its inheritable security controls have received an ATO
    • A DSOP that wants to become an RPOC must follow the process outlined in the Implementation Guide
  3. How does an RPOC accelerate ATO?
    • The containerized application will be assessed and incorporated into the existing RPOC ATO and is not required to have a separate ATO

What are the requirements of RAISE?

The RAISE 2.0 Implementation Guidelines outline the requirements for an RPOC. They represent best practices for DevSecOps platforms and incorporate processes and documentation required by the US government.  

The guidelines also define the application owner’s ongoing responsibilities, which include overseeing the entire software development life cycle (SDLC) of the application running in the RPOC. 

RPOC Requirements

The requirements for the RPOC include standard DevSecOps best practices and specifically call out tooling, such as:

  • CI/CD pipeline
  • Container Orchestrator
  • Container Repository
  • Code Repository

It also requires specific processes, such as:

  • Continuous Monitoring (ConMon)
  • Ongoing vulnerability scans
  • Ad-hoc vulnerability reporting and dashboard
  • Penetration testing
  • Ad-hoc auditing via a cybersecurity dashboard

If you’re looking for a comprehensive list of RPOC requirements, you can find them in the RAISE 2.0 Implementation Guide on the DON website.

Application Owner Requirements

Separate from the RPOC, the application owner that deploys their software via an RPOC has their own set of requirements. The application owner’s requirements are focused less on the DevSecOps tooling and more on the implementation of the tooling specific to their application. Important callouts are:

  • Maintain a Security Requirements Guide (SRG) and Security Technical Implementation Guide (STIG) for their application
  • CI/CD implementation
  • Repository of vulnerability scans (e.g., SBOMs, container security scans, SAST scans, etc.)

Again, if looking for a comprehensive list of application owner requirements, you can find them in the RAISE 2.0 Implementation Guide on the DON website.

Meet Raise 2.0 requirements with Anchore Federal

Anchore Federal is the leading software supply chain security platform supporting public sector agencies and military programs worldwide. It helps RPOCs and applications owners meet RAISE 2.0 requirements by offering an SBOM management platform, container vulnerability scanner, flexible policy engine and comprehensive reporting dashboard.

Anchore Federal integrates seamlessly with CI/CD pipelines and enables teams to meet essential RPOC requirements, including RPOC requirements 4, 5, 15, 19, 20, and 24. The reporting capabilities also assist in the Quarterly Reviews, fulfilling QREV requirements 6 and 7. Finally it helps application owners to meet requirements APPO 7, 8, 9, and 10.  

Continuous container monitoring in production

Anchore Federal includes the anchore-k8s-inventory tool, which continuously monitors selected Kubernetes clusters and namespaces and identifies the container images running in production. This automated tool, which integrates directly with any standard Kubernetes distribution, allows RPOCs to meet “RPOC 4: Must support continuous monitoring”.

The Anchore Federal UI provides dashboards summarizing vulnerabilities and your compliance posture across all your clusters. You can drill down into the details for an individual cluster or namespace and filter the dashboard. For example, selecting only high and critical vulnerabilities will only show the clusters and namespaces with those vulnerabilities.

If a zero-day or new vulnerability arises, you can easily see the impacted clusters and containers to assess the blast radius.

Security gates that directly integrate into CI/CD pipeline

Anchore Federal automates required security gates for RPOCs and application owners (i.e., “RPOC 5: Must support the execution of Security Gates” and “APPO 9: Application(s) deployments must successfully pass through all Security Gates”). It integrates seamlessly into the CI/CD pipeline and the container registry, automatically generates SBOMs, and scans containers for vulnerabilities and secrets/keys (Gates 2, 3, and 4 in the RAISE 2.0 Implementation Guide).

Anchore's security gating feature is backed by a flexible policy engine that allows RPOCs and application owners to collaborate on the security gates that every container going through a CI/CD pipeline must pass, as well as customized gates specific to the application. The Anchore Policy Engine accelerates the move to RAISE 2.0 by offering pre-configured policy packs, including NIST SP 800-53, and extensive docs for building custom policies. It is already used by DevSecOps platforms across the DON, including Black Pearl, Party Barge, and Lighthouse installations, and is ready to deploy in classified and air-gapped environments, including IL4 and IL6.
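
As a rough sketch of what a pipeline security gate can look like, the commands below use Anchore's open-source tools, Syft and Grype, to generate an SBOM and fail the build on high-severity findings; this illustrates the pattern rather than the Anchore Federal integration itself, and the image reference is a placeholder:

# Generate a CycloneDX SBOM for the built image and archive it as a pipeline artifact
$ syft registry.example.com/myapp:1.2.3 -o cyclonedx-json > sbom.cyclonedx.json

# Scan the image and fail this pipeline step if anything of severity "high" or above is found
$ grype registry.example.com/myapp:1.2.3 --fail-on high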

Production container inventory

On top of continuously monitoring production for vulnerabilities in containers, the anchore-k8s-inventory tool can be used to create an inventory of all containers in production, which can then be checked against the container registry in order to meet “RPOC 15: Must retain every image that is currently deployed”. This automates the requirement and keeps DON programs compliant with no additional manual work.

Single-pane-of-glass security dashboard

Anchore Federal maintains a complete repository of SBOMs, vulnerabilities, and policy evaluations. Dashboards and reports provide auditability that containers have met the necessary security gates to meet “RPOC 20: Must ensure that the CI/CD pipeline meets security gates”.

Customizable vulnerability reporting and comprehensive dashboard

Anchore Federal comes equipped with a customizable report builder for regular vulnerability reporting and a vulnerability dashboard for ad hoc reviews. This ensures that RPOCs and application owners meet the “RPOC 24: Must provide vulnerability reports and/or vulnerability dashboards for Application Owner review” requirement.

Aggregated vulnerability reporting

Not only can Anchore Federal generate application-specific reporting, it can also provide aggregated vulnerability reporting for all applications and containers on the RPOC. This is vital functionality for meeting the “QREV 6: Consolidated vulnerabilities report of all applications deployed, via the RAISE process” requirement.

Generate and store security artifacts

Anchore Federal both generates and stores security artifacts from the entire CI/CD pipeline. This meets the software supply chain security requirements of “QREV 7: Application Deployment Artifacts”. Specifically, this includes SBOMs (i.e., dependency reports), container security scans, and SRG/STIG scans.

Anchore Federal produces a number of the required reports, including STIG results, container security scan reports, and SBOMs, along with pass/fail reports of policy controls.

STIG results are generated by Anchore’s Static STIG Checker tool. This automates STIG scanning and makes STIG compliance a breeze. 

Remediation recommendations and automated allowlists

RAISE 2.0 requires that Application Owners remediate or mitigate all vulnerabilities with a severity of high or above within 21 calendar days. Anchore Federal helps speed remediation efforts to meet the 21-day deadline. It suggests recommended fixes and includes an Action Workbench where security reviewers can specify an Action Plan detailing what developers should do to remediate each vulnerability. 

Anchore Federal also provides time-based allowlists to enforce the 21-day timeline. With the assistance of remediation recommendations and automated allowlists, application owners know that their applications are always in compliance with “APPO 7: Must review and remediate, or mitigate, all findings listed in the vulnerabilities report per the timelines defined within this guide”.

Automate STIG checks in CI/CD pipeline

The RPOC ISSM and the Application Owner must work together to determine what SRGs and STIGs to implement. Anchore’s Static STIG Checker tool automates STIG checks on Kubernetes environments and then uploads the results to Anchore Federal through a compliance API. That allows users to leverage additional functionality like reporting and correlating the runtime environment to images that have been analyzed.

This automated workflow specifically addresses the “APPO 10: Must provide and continuously maintain a complete Security Requirements Guide (SRG) and Security Technical Implementation Guide (STIG)” requirement.

Getting to RAISE 2.0 with Anchore Federal

Anchore Federal is a complete software supply chain security solution that gets RPOCs and application owners to RAISE 2.0 faster and easier. It is already proven in many Navy programs and across the DoD. With Anchore Federal you can:

  • Automate RAISE requirements
  • Meet required security gates
  • Generate recurring reports
  • Integrate with existing DSOPs

Anchore Federal was designed specifically to automate compliance with federal and Department of Defense (DoD) compliance standards (e.g., RAISE, cATO, NIST SSDF, NIST 800-53, etc.) for application owners and DSOPs that would rather use a turnkey software supply chain security solution than build one themselves. The easiest way to achieve RAISE 2.0 is by adopting a DoD software factory, which is the generalized version of the Navy's RPOC.

If you're interested in learning more about how Anchore can help your organization embed DevSecOps tooling and principles into your software development process, click below to read our white paper.

Navigate SSDF Attestation with this Practical Guide

The clock is ticking again for software producers selling to federal agencies. In the second half of 2024, CEOs or their designees must begin providing an SSDF attestation that their organization adheres to the secure software development practices documented in NIST SSDF 800-218.

Download our latest ebook to navigate through SSDF attestation quickly and adhere to timelines. 

SSDF attestation covers four main areas from NIST SSDF including: 

  1. Securing development environments
  2. Using automated tools to maintain trusted source code supply chains
  3. Maintaining provenance (e.g., via SBOMs) for internal code and third-party components
  4. Using automated tools to check for security vulnerabilities

This new requirement is not to be taken lightly. It applies to all software producers selling to any federal agency, regardless of whether they provide their software as SaaS or on-prem. The SSDF attestation deadline is June 11, 2024, for critical software and September 11, 2024, for all software. However, on-prem software developed before September 14, 2022, will only require SSDF attestation when a new major version is released. The bottom line is that most organizations will need to comply by 2024.

Companies will make their SSDF attestation through an online Common Form that covers the minimum secure software development requirements that software producers must meet. Individual agencies can add agency-specific instructions outside of the Common Form. 

Organizations that want to ensure they meet all relevant requirements can submit a third-party assessment instead of a CEO attestation. You must use a Third-Party Assessment Organization (3PAO) that is FedRAMP-certified or approved by an agency official.  This option is a no-brainer for cloud software producers who use a 3PAO for FedRAMP.

There are a lot of details to keep track of, so we put together a practical guide to the SSDF attestation requirements and how to meet them: “SSDF Attestation 101: A Practical Guide for Software Producers”. We also cover how Anchore Enterprise automates SSDF attestation compliance by integrating directly into your software development pipeline and utilizing continuous policy scanning to detect issues before they hit production.

Anchore Enterprise 5.5: Vulnerability Feed Service Improvements

Today, we are announcing the release of Anchore Enterprise 5.5, the fifth in our regular minor monthly releases. There are a number of improvements to GUI performance, multi-tenancy support and AnchoreCTL, but with the uncertainty at NVD continuing, the main news is a set of updates to our feed drivers that help customers adapt to the current situation at NIST.

The NIST NVD interruption and backlog 

A few weeks ago, Anchore alerted the security community to the changes at the National Vulnerability Database (NVD) run by the US National Institute of Standards and Technology (NIST). As of February 18th, there was a massive decline in the number of records published with metadata such as severity and CVSS scores. Less publicized, but no less problematic, is that the availability of the API that provides access to NVD records has also been erratic during the same period.


While the uncertainty around NVD continues, and recognizing that it remains a federally mandated data source (e.g., within FedRAMP), Anchore has updated its feed drivers to give customers flexibility in how they consume NVD data.

In a typical Anchore Enterprise deployment, NVD data has served two functions. The first is a catalog of CVEs that can be correlated with advisories from other vendors to help provide more context around an issue. The second is as a matching source of last resort for surfacing vulnerabilities where no other vulnerability data exists. This comes with a caveat: the expansiveness of the NVD database means data quality is variable, which can lead to false positives.

See it in action!

To see the latest features in action, please join our webinar “Adapting to the new normal at NVD with Anchore Vulnerability Feed” on May 7th at 10am PT/1pm EST. Register Now.

Improvements to the Anchore Vulnerability Feed Service

The Exclusion feed

Anchore released its own Vulnerability Feed with v4.1 which provided an ‘Exclusion Feed’ to avoid NVD false positives by preventing matches against vulnerability records that were known to be inaccurate. 

As of v5.5, we have extended the Anchore Vulnerability Feed service to provide two additional data features. 

Simplify network management with 3rd party vulnerability feed 

The first is the ability to download a copy of 3rd party vulnerability feed data sets, including NVD, directly from Anchore, so that transient service availability issues don’t generate alerts. This simplifies network management by requiring only one firewall rule to enable the retrieval of any vulnerability data for use with Anchore Enterprise. This proxy mode is the default in 5.5 for the majority of feeds, but customers who prefer autonomous operations that don’t rely on Anchore can continue to use the direct mode as before, contacting NVD and other vendor endpoints directly.

Enriched data

The second change is that Anchore continues to add new CVE records from the NVD database while enriching those records, specifically with CPE information that helps map affected versions. While Anchore can’t provide NVD severity or CVSS scores, which by definition have to come from NVD itself, these enriched records will continue to allow customers to reference CVEs. This data is available by default in the proxy mode mentioned above, or as a configuration option in direct mode.

How it works

For the 3rd party vulnerability data, Anchore runs the same feed service software that customers would run in their local site. It periodically retrieves data from all of its available sources and structures it into a format ready to be downloaded by customers. Using the existing feed drivers in their local feed service (NVD, GitHub, etc.), customers download the entire database in one atomic operation. This contrasts with the API-based approach of the direct mode, where individual records are retrieved one at a time, which can be slow. This can be enabled on a driver-by-driver basis.

For the enriched data, Anchore runs a service that watches for new CVE records from upstream sources, for example the CVE5 database hosted by MITRE, along with custom data added by Anchore engineers. The database tables that host the NVD records are then populated with these CVE updates. CPE data is sourced from vendor records (e.g., Red Hat Security Advisories) and added to the NVD records to enable matching logic.

Looking forward

Throughout the year, we will be making additional improvements to the Anchore Vulnerability Feed to help customers not just navigate the uncertainty at NVD but also reduce their false positive and false negative counts in general.

See it in action!

Join our webinar Adapting to the new normal at NVD with Anchore Vulnerability Feed.
May 7th at 10am PT/1pm EST

Modeling Software Security as Unit Tests: A Mental Model for Developers

Modern software development is complex, to say the least. Vulnerabilities often lurk within the vast networks of dependencies that underpin applications. A typical scenario involves a simple app.go source file that is underpinned by a sprawling tree of external libraries and frameworks (check the go.mod file for the receipts). As developers incorporate these dependencies into their applications, the security risks escalate, often surpassing the complexity of the original source code. This real-world challenge highlights a critical concern: hidden vulnerabilities are magnified by the dependencies themselves, making the task of securing software increasingly daunting.

Addressing this challenge requires reimagining software supply chain security through a different lens. In a recent webinar, the famed Kelsey Hightower provided an apt analogy to help bring the sometimes opaque world of security into focus for a developer: software security can be thought of as just another test in the software testing suite, and the system that manages the tests and the associated metadata is a data pipeline. We’ll explore this analogy in more depth in this blog post, and by the end we will have created a bridge between developers and security.

The Problem: Modern software is built on a tower

Modern software is built from a tower of libraries and dependencies that increase the productivity of developers, but with these boosts comes the risk of increased complexity. Below is a simple ‘ping-pong’ (i.e., request-response) application written in Go that imports a single HTTP web framework:

package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default()
	r.GET("/ping", func(c *gin.Context) {
		c.JSON(http.StatusOK, gin.H{
			"message": "pong",
		})
	})
	r.Run()
}

With this single framework comes a laundry list of dependencies that it needs in order to work. This is the go.mod file that accompanies the application:

module app

go 1.20

require github.com/gin-gonic/gin v1.7.2

require (
	github.com/gin-contrib/sse v0.1.0 // indirect
	github.com/go-playground/locales v0.13.0 // indirect
	github.com/go-playground/universal-translator v0.17.0 // indirect
	github.com/go-playground/validator/v10 v10.4.1 // indirect
	github.com/golang/protobuf v1.3.3 // indirect
	github.com/json-iterator/go v1.1.9 // indirect
	github.com/leodido/go-urn v1.2.0 // indirect
	github.com/mattn/go-isatty v0.0.12 // indirect
	github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421 // indirect
	github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742 // indirect
	github.com/ugorji/go/codec v1.1.7 // indirect
	golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 // indirect
	golang.org/x/sys v0.0.0-20200116001909-b77594299b42 // indirect
	gopkg.in/yaml.v2 v2.2.8 // indirect
)

The dependencies for the application end up being larger than the application source code. And in each of these dependencies is the potential for a vulnerability that could be exploited by a determined adversary. Kelsey Hightower summed this up well: “this is software security in the real world”. Below is an example of a Java app that hides vulnerable dependencies inside the frameworks that the application is built on.

As much as we might want to put the genie back in the bottle, the productivity boosts of building on top of frameworks are too good to reverse this trend. Instead we have to look for different ways to manage security in this more complex world of software development.

If you’re looking for a solution to the complexity of modern software vulnerability management, be sure to take a look at the Anchore Enterprise platform and the included container vulnerability scanner.

The Solution: Modeling software supply chain security as a data pipeline

Software supply chain security is a meta problem of software development. The solution to most meta problems in software development is data pipeline management. 

Developers have learned this lesson before. When they first build an application and something goes wrong, they add a log statement to capture the error. This is a great solution until you’ve written your first hundred logging statements. Suddenly the solution has become its own problem, and the developer is buried under a mountain of logging data. This is where a logging (read: data) pipeline steps in. The pipeline manages the mountain of log data and helps developers sift the signal from the noise.

The same pattern emerges in software supply chain security. From the first run of a vulnerability scanner on almost any modern software, a developer will find themselves buried under a mountain of security metadata. 

$ grype dir:~/webinar-demo/examples/app:v2.0.0

 ✔ Vulnerability DB                [no update available]  
 ✔ Indexed file system                                                                            ~/webinar-demo/examples/app:v2.0.0
 ✔ Cataloged contents                                                         889d95358bbb68b88fb72e07ba33267b314b6da8c6be84d164d2ed425c80b9c3
   ├── ✔ Packages                        [16 packages]  
   └── ✔ Executables                     [0 executables]  
 ✔ Scanned for vulnerabilities     [11 vulnerability matches]  
   ├── by severity: 1 critical, 5 high, 5 medium, 0 low, 0 negligible
   └── by status:   11 fixed, 0 not-fixed, 0 ignored 

NAME                      INSTALLED                           FIXED-IN                           TYPE          VULNERABILITY        SEVERITY 
github.com/gin-gonic/gin  v1.7.2                              1.7.7                              go-module     GHSA-h395-qcrw-5vmq  High      
github.com/gin-gonic/gin  v1.7.2                              1.9.0                              go-module     GHSA-3vp4-m3rf-835h  Medium    
github.com/gin-gonic/gin  v1.7.2                              1.9.1                              go-module     GHSA-2c4m-59x9-fr2g  Medium    
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.0.0-20211202192323-5770296d904e  go-module     GHSA-gwc9-m7rh-j2ww  High      
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.0.0-20220314234659-1baeb1ce4c0b  go-module     GHSA-8c26-wmh5-6g9v  High      
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.0.0-20201216223049-8b5274cf687f  go-module     GHSA-3vm4-22fp-5rfm  High      
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.17.0                             go-module     GHSA-45x7-px36-x8w8  Medium    
golang.org/x/sys          v0.0.0-20200116001909-b77594299b42  0.0.0-20220412211240-33da011f77ad  go-module     GHSA-p782-xgp4-8hr8  Medium    
log4j-core                2.15.0                              2.16.0                             java-archive  GHSA-7rjr-3q55-vv33  Critical  
log4j-core                2.15.0                              2.17.0                             java-archive  GHSA-p6xc-xr62-6r2g  High      
log4j-core                2.15.0                              2.17.1                             java-archive  GHSA-8489-44mv-ggj8  Medium

All of this from a single innocuous import statement for your favorite application framework.

Again, the data pipeline comes to the rescue and helps manage the flood of security metadata. In this blog post, we’ll step through the major functions of a data pipeline customized for solving the problem of software supply chain security.

Modeling SBOMs and vulnerability scans as unit tests

I like to think of security tools as just another test. A unit test might test the behavior of my code. I think this falls in the same quality bucket as linters to make sure you are following your company’s style guide. This is a way to make sure you are following your company’s security guide.

Kelsey Hightower

This idea from renowned developer Kelsey Hightower is apt, particularly for software supply chain security. Tests are a mental model that developers use on a daily basis. Security tools are functions run against your application that produce security data about it, rather than the behavioral information a unit test produces. The first two foundational functions of software supply chain security are identifying all of the software dependencies and scanning those dependencies for known vulnerabilities (i.e., ‘testing’ an application for vulnerabilities).

This is typically accomplished by running an SBOM generation tool like Syft to create an inventory of all dependencies followed by running a vulnerability scanner like Grype to compare the inventory of software components in the SBOM against a database of vulnerabilities. Going back to the data pipeline model, the SBOM and vulnerability database are the data sources and the vulnerability report is the transformed security metadata that will feed the rest of the pipeline.
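
To make those two steps concrete, here is a minimal sketch of the workflow with the open-source CLIs, using the same demo directory as the scan below (the sbom.json file name is just an illustrative choice): Syft writes the SBOM to a file, and Grype reads it back via its sbom: source, or the two commands can simply be piped together.

# Step 1: generate the SBOM (the dependency inventory) with Syft
$ syft dir:~/webinar-demo/examples/app:v2.0.0 -o json > sbom.json

# Step 2: scan the SBOM for known vulnerabilities with Grype
$ grype sbom:./sbom.json

# Or chain both steps as a single pipeline
$ syft dir:~/webinar-demo/examples/app:v2.0.0 -o json | grype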

$ grype dir:~/webinar-demo/examples/app:v2.0.0 -o json

 ✔ Vulnerability DB                [no update available]  
 ✔ Indexed file system                                                                            ~/webinar-demo/examples/app:v2.0.0
 ✔ Cataloged contents                                                         889d95358bbb68b88fb72e07ba33267b314b6da8c6be84d164d2ed425c80b9c3
   ├── ✔ Packages                        [16 packages]  
   └── ✔ Executables                     [0 executables]  
 ✔ Scanned for vulnerabilities     [11 vulnerability matches]  
   ├── by severity: 1 critical, 5 high, 5 medium, 0 low, 0 negligible
   └── by status:   11 fixed, 0 not-fixed, 0 ignored 

{
 "matches": [
  {
   "vulnerability": {
    "id": "GHSA-h395-qcrw-5vmq",
    "dataSource": "https://github.com/advisories/GHSA-h395-qcrw-5vmq",
    "namespace": "github:language:go",
    "severity": "High",
    "urls": [
     "https://github.com/advisories/GHSA-h395-qcrw-5vmq"
    ],
    "description": "Inconsistent Interpretation of HTTP Requests in github.com/gin-gonic/gin",
    "cvss": [
     {
      "version": "3.1",
      "vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:L/A:N",
      "metrics": {
       "baseScore": 7.1,
       "exploitabilityScore": 2.8,
       "impactScore": 4.2
      },
      "vendorMetadata": {
       "base_severity": "High",
       "status": "N/A"
      }
     }
    ],
    . . . 

This was previously done just prior to pushing an application to production, as a release gate that had to be passed before software could be shipped. Just as unit tests have moved earlier in the software development lifecycle as DevOps principles won the mindshare of the industry, security testing has “shifted left” in the development cycle. With self-contained, open source CLI tooling like Syft and Grype, developers can now incorporate security testing into their development environment and test for vulnerabilities before even pushing a commit to a continuous integration (CI) server.
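
In practice, this looks a lot like a failing unit test. Grype’s --fail-on flag, for example, returns a non-zero exit code when any vulnerability at or above the given severity is found, so a local run or a CI step fails in exactly the same way a broken test would (the path below reuses the demo directory from earlier):

# Treat the scan like a test: exit non-zero if anything high or critical is found
$ grype dir:~/webinar-demo/examples/app:v2.0.0 --fail-on high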

From a security perspective this is a huge win. Security vulnerabilities are caught earlier in the development process and fixed before they come up against a delivery due date. But with all of this new data being created, the problem of data overload has led to a different set of problems.

Vulnerability overload: Uncovering the signal in the noise

Like the world of application logs that came before it, at some point there is so much information that an automated process generates that finding the signal in the noise becomes its own problem.

How Anchore Enterprise manages SBOMs and vulnerability scans

Centrally managing SBOMs and vulnerability scans on your own can end up being a massive headache. There is no need to come up with your own storage and data management solution: just configure the AnchoreCTL CLI tool to automatically submit SBOMs and vulnerability scans as you run them locally, and Anchore Enterprise stores all of this data for you.

On top of this, Anchore Enterprise offers data analysis tools so that you can search and filter SBOMs and vulnerability scans by version, build stage, vulnerability type, etc.

Combining local developer tooling with centralized data management creates a best of both worlds environment where engineers can still get their tasks done locally with ease but offload the arduous management tasks to a server.
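
As a rough sketch of what that looks like day to day (the image reference is illustrative, and exact subcommands and flags can vary by AnchoreCTL version), a developer or CI job submits an image for analysis and then pulls the centrally stored results back down:

# Submit an image so Anchore Enterprise generates and stores its SBOM and scan results
$ anchorectl image add registry.example.com/myapp:v2.0.0 --wait

# Later, retrieve the centrally stored vulnerability results for that same image
$ anchorectl image vulnerabilities registry.example.com/myapp:v2.0.0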

Added benefit: SBOM drift detection

Another benefit of distributed SBOM generation and vulnerability scanning is that this security check can be run at each stage of the build process. It would be nice to believe that the software written in a developer’s local environment always makes it through to production in an untouched, pristine state, but this is rarely the case.

Running SBOM generation and vulnerability scanning at development, on the build server, in the artifact registry, at deploy time, and during runtime creates a full picture of where and when software is modified in the development process. This simplifies post-incident investigations and, even better, catches issues well before they make it to a production environment.

This historical record is a feature provided by Anchore Enterprise called Drift Detection. In the same way that an HTTP cookie creates state between individual HTTP requests, Drift Detection is security metadata about security metadata (recursion, much?) that allows state to be created between each stage of the build pipeline. Being the central store for all of the associated security metadata makes the Anchore Enterprise platform the ideal location to aggregate and scan for these particular anomalies.
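
The underlying idea can be approximated with the open-source tooling alone: generate an SBOM for the same application at two different stages and compare them. The sketch below uses illustrative image tags and a plain diff; Drift Detection performs this correlation automatically across every stage and keeps the history for you.

# SBOM of the image as it left the build server
$ syft registry.example.com/myapp:build-1234 -o table > packages-build.txt

# SBOM of the image actually running in the cluster
$ syft registry.example.com/myapp:prod -o table > packages-prod.txt

# Any difference means the artifact changed somewhere between build and deploy
$ diff packages-build.txt packages-prod.txt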

Policy as a lever

Being able to filter through all of the noise created by integrating security checks across the software development process creates massive leverage when searching for a particular issue, but it is still a manual process, and being a full-time investigator isn’t part of the software engineer’s job description. Wouldn’t it be great if we could automate some, if not most, of these investigations?

I’m glad we’re of like minds, because this is where policy comes into the picture. Returning to Kelsey Hightower’s comparison of security tools to linters, policy is the security guide codified by your security team that allows you to quickly check whether the commit you put together meets the standards for secure software.

By running these checks and automating the feedback, developers quickly learn about any potential security issues discovered in their commit. This allows developers to polish their code before it is flagged, and potentially failed, by the security check on the CI server. No more waiting on the security team to review your commit before it can proceed to the next stage. Developers are empowered to solve the security risks and feel confident that their code won’t be held up downstream.

Policies-as-code supports existing developer workflows

Anchore Enterprise designed its policy engine to ingest the individual policies as JSON objects that can be integrated directly into the existing software development tooling. Create a policy in the UI or CLI, generate the JSON and commit it directly to the repo.

This prevents painful context switching between different interfaces for developers and allows engineering and security teams to easily reap the rewards of versioning and rollbacks that come pre-baked into tools like Git. Anchore Enterprise was designed by engineers for engineers, which made policy-as-code the obvious choice when designing the platform.
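
To give a feel for the shape of these JSON policies, here is a heavily trimmed, illustrative rule in the style of an Anchore policy bundle: it stops an evaluation when a package has a known vulnerability of high severity or above with a fix available. Field names are simplified for readability; the policy documentation has the authoritative schema.

{
  "gate": "vulnerabilities",
  "trigger": "package",
  "action": "STOP",
  "params": [
    { "name": "package_type", "value": "all" },
    { "name": "severity_comparison", "value": ">=" },
    { "name": "severity", "value": "high" },
    { "name": "fix_available", "value": "true" }
  ]
}

Because the rule is just JSON, it can live in the same repository as the application code, be reviewed in a pull request, and roll back with everything else.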

Remediation automation integrated into the development workflow

Being alerted when a commit violates your company’s security guidelines is better than pushing insecure code and finding out from a breach that you forgot to sanitize user input. But even after you get alerted to a problem, you still need to understand what is insecure and how to fix it. This can be done by trying to Google the issue or starting a conversation with your security team, but that just creates more work for you before you can get your commit into the build pipeline. What if you could get the answer to how to make your commit secure directly in your normal workflow?

Anchore Enterprise provides remediation recommendations to help create actionable advice on how to resolve security alerts that are flagged by a policy. This helps point developers in the right direction so that they can resolve their vulnerabilities quickly and easily without the manual back and forth of opening a ticket with the security team or Googling aimlessly to find the correct solution. The recommendations can be integrated directly into GitHub Issues or Jira tickets in order to blend seamlessly into the workflows that teams depend on to coordinate work across the organization.

Wrap-Up

From the perspective of a developer, it can sometimes feel like the security team is primarily a frustration that only slows down your ability to ship code. Anchore has internalized this feedback and has built a platform that allows developers to move at DevOps speed while producing high quality, secure code. By integrating directly into developer workflows (e.g., CLI tooling, CI/CD integrations, source code repository integrations, etc.) and providing actionable feedback, Anchore Enterprise removes the traditional roadblock mentality that has typically described the relationship between development and security.

If you’re interested to see all of the features described in this blog post via a hands-on demo, check out the webinar by clicking on the screenshot below and going to the workshop hosted on GitHub.

If you’re looking to go further in-depth with how to build and secure containers in the software supply chain, be sure to read our white paper: The Fundamentals of Container Security.

Streamlining FedRAMP Compliance: How Anchore Enterprise Simplifies the Process

FedRAMP compliance is hard, not only because there are hundreds of controls that need to be reviewed and verified, but also because the controls can be interpreted and satisfied in multiple different ways. It is admirable to see an enterprise achieve FedRAMP compliance from scratch, but most of us want to achieve compliance without spending more than a year debating the interpretation of specific controls. This is where turnkey solutions like Anchore Enterprise come in.

Anchore Enterprise is a cloud-native software composition analysis platform that integrates SBOM generation, vulnerability scanning and policy enforcement into a single platform to provide a comprehensive solution for software supply chain security.

Overview of FedRAMP, who it applies to and the challenges of compliance

FedRAMP, or the Federal Risk and Authorization Management Program, is a federal compliance program that standardizes security assessment, authorization, and continuous monitoring for cloud products and services. As with any compliance standard, FedRAMP is modeled from the “Trust but Verify” security principle. FedRAMP standardizes how security is verified for Cloud Service Providers (CSP).

One of the biggest challenges with achieving FedRAMP compliance comes from sorting through the vast volumes of data that make up the standard. Depending on the level of FedRAMP compliance you are attempting to meet, this could mean complying with 125 controls in the case of a FedRAMP low certification or up to 425 for FedRAMP high compliance.

While we aren’t going to go through the entire FedRAMP standard in this blog post, we will be focusing on the container security controls that are interleaved into FedRAMP.

FedRAMP container security requirements

1) Hardened Images

FedRAMP requires CSPs to adhere to strict security standards for hardened images used by government agencies. The standard mandates that:

  • Images include only essential services and software
  • Images are updated with the latest security patches
  • Configuration settings meet secure baselines
  • Unnecessary ports and services are disabled
  • User accounts are managed securely
  • Encryption is implemented
  • Logging and monitoring practices are maintained
  • Vulnerability scanning is performed regularly with prompt remediation

If you want to go in-depth with how to create hardened images that meet FedRAMP compliance, download our white papers:

DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images

Complete Guide to Hardening Containers with STIG

2) Container Build, Test, and Orchestration Pipelines

FedRAMP sets stringent requirements for container build, test, and orchestration pipelines to protect federal agencies. These include:

  • Hardened base images (see above) 
  • Automated build processes with integrity checks
  • Strict configuration management
  • Immutable containers
  • Secure artifact management
  • Container security testing
  • Comprehensive logging and monitoring

3) Vulnerability Scanning for Container Images

FedRAMP mandates rigorous vulnerability scanning protocols for container images to ensure their security within federal cloud deployments. This includes: 

  • Comprehensive scans integrated into CI/CD pipelines 
  • Prioritized remediation based on severity
  • Re-scanning policy post-remediation 
  • Detailed audit and compliance reports 
  • Checks against secure baselines (e.g., CIS or STIG)

4) Secure Sensors

FedRAMP requires continuous management of the security of machines, applications, and systems by identifying vulnerabilities. Requirements include:

  • Authorized scanning tools
  • Authenticated security scans to simulate threats
  • Reporting and remediation
  • Scanning independent of developers
  • Direct integration with configuration management to track vulnerabilities

5) Registry Monitoring

While not explicitly called out in FedRAMP as either a control or a control family, there is still a requirement that images stored in a container registry are scanned at least every 30 days if those images are deployed to production.

6) Asset Management and Inventory Reporting for Deployed Containers

FedRAMP mandates thorough asset management and inventory reporting for deployed containers to ensure security and compliance. Organizations must maintain detailed inventories including:

  • Container images
  • Source code
  • Versions
  • Configurations 
  • Continuous monitoring of container state 

7) Encryption

FedRAMP mandates robust encryption standards to secure federal information, requiring the use of NIST-approved cryptographic methods for both data at rest and data in transit. It is important that any containers that store data or move data through the system meet FIPS standards.

How Anchore helps organizations comply with these requirements

Anchore is the leading software supply chain security platform for meeting FedRAMP compliance. We have helped hundreds of organizations meet FedRAMP compliance by deploying Anchore Enterprise as the solution for achieving container security compliance. Below you can see an overview of how Anchore Enterprise integrates into a FedRAMP-compliant environment. For more details on how each of these integrations meets FedRAMP compliance, keep reading.

1) Hardened Images

Anchore Enterprise integrates multiple tools in order to meet the FedRAMP requirements for hardened container images. We provide compliance policies that scan specifically for compliance with container hardening standards such as STIG and CIS. These tools were custom built to perform the checks necessary to meet either relevant standard, or both.

2) Container Build, Test, and Orchestration Pipelines

Anchore integrates directly into your CI/CD pipelines via either the Anchore Enterprise API or pre-built plug-ins. This tight integration meets the FedRAMP standards that require that all container images are hardened, all security checks are automated within the build process, and all actions are logged and audited. Anchore’s FedRAMP policy specifically checks that any container, at any stage of the pipeline, is checked for compliance.

3) Vulnerability Scanning for Container Images

Anchore Enterprise can be integrated into each stage of the development pipeline, offer remediation recommendations based on severity (e.g., CISA KEV vulnerabilities can be flagged and prioritized for immediate action), enforce re-scanning of containers after remediation, and produce compliance artifacts to automate compliance. This is accomplished with Anchore’s container scanner, direct pipeline integration, and FedRAMP policy.

4) Secure Sensors

Anchore Enterprise’s container vulnerability scanner and Kubernetes inventory agent are both authorized scanning tools. The container vulnerability scanner is integrated directly into the build pipeline whereas the k8s agent is run in production and scans for non-compliant containers at runtime.

5) Registry Monitoring

Anchore Enterprise is able to scan an artifact registry continuously for potentially non-compliant containers. It is configured to watch each unique image in image registries. It will automatically scan images that get pushed to these registries.

6) Asset Management and Inventory Reporting for Deployed Containers

Anchore Enterprise includes a full software component inventory workflow. It can scan all software components, generate Software Bills of Materials (SBOMs) to keep track of those components, and centrally store all SBOMs for analysis. Anchore Enterprise’s Kubernetes inventory agent can perform the same service for the runtime environment.

7) Encryption

Anchore Enterprise’s Static STIG tool can ensure that all containers maintain NIST and FIPS encryption standards. Verifying that each of thousands of containers encrypts data at rest and in transit is a difficult chore, but one that is easily automated with Anchore Enterprise.

The benefits of the shift left approach of Anchore Enterprise

Shift compliance left and prevent violations

Detect and remediate FedRAMP compliance violations early in the development lifecycle to prevent production/high-side violations that would threaten your hard-earned compliance. Use Anchore’s “developer-bundle” in the integration phase to take immediate action on potential compliance violations. This ensures vulnerabilities with fixes available and CISA KEV vulnerabilities are addressed before they make it to the registry and you have to report these non-compliance issues.

Below is an example of a workflow in GitLab of how Anchore Enterprise’s SBOM generation, vulnerability scanning and policy enforcement can catch issues early and keep your compliance record clean.
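
Stripped down to its essence, such a job runs a handful of commands (the image reference is illustrative, and the exact anchorectl invocation and failure behavior depend on how your policy bundle and CLI are configured):

# Build and push the candidate image from the pipeline
$ docker build -t registry.example.com/myapp:${CI_COMMIT_SHA} .
$ docker push registry.example.com/myapp:${CI_COMMIT_SHA}

# Submit it to Anchore Enterprise for SBOM generation and vulnerability scanning
$ anchorectl image add registry.example.com/myapp:${CI_COMMIT_SHA} --wait

# Evaluate the image against the active policy; a violation should fail the job
$ anchorectl image check registry.example.com/myapp:${CI_COMMIT_SHA}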

Automate Compliance Reporting

Automate monthly and annual reporting using Anchore’s reporting features. Set these reports up to auto-generate based on the compliance reporting needs of FedRAMP.

Manage POA&Ms

Given that Anchore Enterprise centrally stores and manages vulnerability information for an organization, it can also be utilized to manage Plans of Action & Milestones (POA&Ms) for any portions of the system that aren’t yet FedRAMP compliant but have a planned due date. Use Allowlists in Anchore Enterprise to centrally manage POA&Ms and assessed or justifiable findings.

Prevent Production Compliance Violations

Practice good production registry hygiene by utilizing Anchore Enterprise to scan stored images regularly. Anchore Enterprise’s Kubernetes runtime inventory will identify images that do not meet FedRAMP compliance or have not been used in the last ~7 days (company defined) so they can be removed from your production registry.

Conclusion

Achieving FedRAMP compliance from scratch is an arduous process and not a key differentiator for many organizations. In order to maintain organizational priority on the aspects of the business that differentiate an organization from its competitors, strategic outsourcing of non-core competencies is always a recommended strategy. Anchore Enterprise aims to be that turnkey solution for organizations that want the benefits of FedRAMP compliance without developing the internal expertise, specifically for the container security aspect.

By integrating SBOM generation, vulnerability scanning, and policy enforcement into a single platform, Anchore Enterprise not only simplifies the path to compliance but also enhances overall software supply chain security. Through the deployment of Anchore Enterprise, companies can achieve and maintain compliance more quickly and with greater assurance. If you’re looking for an even deeper look at how to achieve all 7 of the container security requirements of FedRAMP with Anchore Enterprise, read our playbook: FedRAMP Pre-Assessment Playbook For Containers.

From Chaos to Compliance: Revolutionizing License Management with Automation

The ascent of both containerized applications and open-source software component building blocks has dramatically escalated the complexity of software and the burden of managing all of the associated licenses. Modern applications are often built from a mosaic of hundreds, if not thousands, of individual software components, each bound by its own potential licensing pitfalls. This intricate web of dependencies, akin to a supply chain, poses significant challenges not only for legal teams tasked with mitigating financial risks but also for developers who manage these components’ inventory and compliance.

Previously, license management was primarily a manual affair; software wasn’t as complex, and more of it was proprietary first-party code that didn’t have the same license compliance issues. These original license management techniques haven’t kept up with the needs of modern, cloud-native application development. In this blog post, we discuss how automation is needed to address the challenges of managing licensing risk in modern software.

The Problem

Modern software is complex. This is fairly well known at this point, but in case you need a quick visual reminder, we’ve inserted two images to quickly reinforce this idea:

Applications can be constructed from tens, hundreds, or even thousands of individual software components, each with its own license for how it can be used. Modern software is so complex that this endlessly nested collection of dependencies is typically referred to as a metaphorical supply chain, and an entire industry has grown up to provide security solutions for this quagmire, called software supply chain security.

This is a complexity nightmare for legal teams that are tasked with managing the financial risk of an organization. It’s also a nightmare for the developers who are tasked with maintaining an inventory of all of the software dependencies in an organization and the associated license for each component.

Let’s provide an example of how this normally manifests in a software startup. Assuming business is going well, you have a product and there are customers out in the world that are interested in purchasing your software. During the procurement cycle, your customer’s legal team will be tasked with assessing the risk of using your software. To create this assessment they will do a number of things, one of which is determining whether your software is safe to use from a licensing perspective. To do this, they will normally send over a document that looks like this:

As a software vendor, it will be your job to fill this out so that legal can approve the purchasing of your software and you can take that revenue to the bank.

Let’s say you manually fill this entire spreadsheet out. A developer would need to go through each dependency that is utilized in the software that you sell and “scan” the codebase for all of the licensing metadata: component name, version number, OSS license (e.g., MIT, GPL, BSD), and so on. It would take some time and be quite tedious, but not an insurmountable task. In the end they would produce something like this:

This is great in the world of once-in-a-while deployments and updates. It becomes exhausting in the world of continuous integration and delivery that the DevOps movement has created. Imagine having to produce a new document like this every time you push to production. DevOps has allowed some teams to push to production multiple times per day. Requiring that a document be manually created for all of your customers’ legal teams for each release would almost eliminate the velocity gains that moving to DevOps created.

The Solution

The solution to this problem is the automation of the license discovery process. If software can scan your codebase and produce a document that exhaustively covers all of the building blocks of your application, this unlocks the potential to both have your DevOps cake and eat it too.

To this end, Anchore has created and open sourced a tool that does just this.

Introducing Grant: Automated License Discovery

Grant is an open-source command line tool that scans and discovers the software licenses of all dependencies in a piece of open-source software. If you want to get a quick primer on what you can do with Grant, read our announcement blog post. Or if you’re ready to dive straight in, you can view all of the Grant documentation on its GitHub repo.

How does Grant Integrate into my Development Workflow?

As a software license scanner, Grant operates on a software inventory artifact like an SBOM or directly on a container image. Let’s continue with the example from above to bring this to life. In the legal review example above, you are a software developer who has been tasked with manually searching for and finding all of the OSS license files to provide to your customer’s legal team for review.

Not wanting to do this by hand, you instead open up your CLI and install Grant. From there you navigate to your artifact registry and pull down the latest image of your application’s production build. Right before you run the Grant license scan on your production container image, you notice that your team has been following software supply chain best practices and has already created an SBOM with a popular open-source tool called Syft. Instead of running the container through Grant, which could take some time to scan the image, you pipe in the SBOM, which is already a JSON inventory of the application’s entire dependency tree. A few seconds later you have a full report of all of the licenses in your application.
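
A sketch of that workflow at the command line (file and image names are illustrative; see the Grant documentation for the full set of options):

# The SBOM your team already produced with Syft
$ syft registry.example.com/myapp:v1.2.3 -o json > sbom.json

# Point Grant at the SBOM instead of re-scanning the whole image
$ grant check sbom.json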

From here you export the full component inventory with the license enrichment into a spreadsheet and send this off to the customer’s legal team for review. A process that might have taken a full day or even multiple days to do by hand was finished in seconds with the power of open-source tooling.

Automating License Compliance with Policy

Grant is an amazing tool that can automate much of the most tedious work of protecting an organization from legal consequences, but when used by a developer as a CLI tool there is still a human in the loop, which can cause traffic jams. With this in mind, our OSS team made sure to launch Grant with support for policy-based filters that can automate the execution and alerting of license scanning.

Let’s say that your organization’s legal team has decided that using any GPL components in 1st party software is too risky. By writing a policy that fails any build that includes GPL-licensed components and integrating that check as early as the staging CI environment, or even letting developers run Grant in a one-off fashion as they prototype an initial idea, the potential for legally risky dependencies infiltrating production software drops precipitously.
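
As a sketch of what that policy can look like, Grant reads rules from a configuration file; the fields below follow the pattern/mode/reason style shown in the Grant README, but treat the exact schema as something to confirm against the project documentation for your version.

$ cat .grant.yaml
rules:
  - pattern: "*gpl*"
    name: "deny-gpl"
    mode: "deny"
    reason: "GPL-licensed components are not allowed in 1st party software"

# With the rule in place, a scan that finds a GPL license fails the check
$ grant check sbom.json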

How Anchore Can Help

Grant is an amazing tool that automates the license compliance discovery process. This is great for small projects or software that does irregular releases. Things get much more complicated in the cloud-native, continuous integration and deployment paradigm of DevSecOps, where there are new releases multiple times per day. Having Grant generate the license data is great, but suddenly you have an explosion of data that itself needs to be managed.

This is where Anchore Enterprise steps in to fill the gap. The Anchore Enterprise platform is an end-to-end data management solution that incorporates all of Anchore’s open-source tooling for generating artifacts like SBOMs, vulnerability scans and license scans. It also manages the massive amount of data that a high-speed DevSecOps pipeline creates as part of its regular operation and, on top of that, applies a highly customizable policy engine that can automate decision-making around the insights derived from those software supply chain artifacts.

Want to make sure that no GPL-licensed OSS components ever make it into your SDLC? No problem. Grant will uncover all components that carry this license, Anchore Enterprise will centralize these scans, and the Anchore policy engine will alert the developer who just integrated a new GPL-licensed OSS component into their development environment that they need to find a different component or they won’t be able to push their branch to staging. The shift left principle of DevSecOps can be applied to LegalOps as well.

Conclusion

The advent of tools like Grant, an open-source license discovery solution developed by Anchore, marks a significant advancement in the realm of open-source license management. By automating the tedious process of license verification, Grant not only enhances operational efficiency but also integrates seamlessly into continuous integration/continuous delivery (CI/CD) environments. This capability is crucial in modern DevOps practices, which demand frequent and fast-paced updates. Grant’s ability to quickly generate comprehensive licensing reports transforms a potentially day-long task into a matter of seconds.

Anchore Enterprise extends this functionality by managing the deluge of data from continuous deployments and integrating a policy engine that automates compliance decisions. This ecosystem not only streamlines the process of license management but also empowers developers and legal teams to preemptively address compliance issues, thereby embedding legal safeguards directly into the software development lifecycle. This proactive approach ensures that as the technological landscape evolves, businesses remain agile yet compliant, ready to capitalize on opportunities without being bogged down by legal liabilities.

If you’re interested to hear about the topics covered in this blog post directly from the lips of Anchore’s CTO, Dan Nurmi, and the maintainer of Grant, Christopher Phillips, you can watch the on-demand webinar here. Or join the Anchore Community Discourse forum to speak with our team directly. We look forward to hearing from you and reviewing your pull requests!

An Outline for Getting Up to Speed on the DoD Software Factory

This blog post is meant as a gateway to all things DoD software factory. We highlight content from across the Anchore universe that can help anyone get up to speed on what a DoD software factory is, why to use one and how to build one. Treat it as an index: scan for the topics that are most interesting to you and follow the links to more detailed content.

What is a DoD Software Factory?

The short answer is a DoD Software Factory is an implementation of the DoD Enterprise DevSecOps Reference Design. A slightly longer answer comes from our DoD software factory primer:

A Department of Defense (DoD) Software Factory is a software development pipeline that embodies the principles and tools of the larger DevSecOps movement with a few choice modifications that conform to the extremely high threat profile of the DoD and DIB.

Note that the diagram below looks like a traditional DevOps pipeline. The difference is that there are security controls layered into this environment that automate software component inventory, vulnerability scanning and policy enforcement to meet the requirements to be considered a DoD software factory.

Got the basics down? Go deeper and learn how Anchore can help you put the Sec into DevSecOps Reference Design by reading our DoD Software Factory Best Practices white paper.

Why do I want to utilize a DoD Software Factory?

For DoD programs, the primary reason to utilize a DoD software factory is that it is a requirement for achieving a continuous authorization to operate (cATO). The cATO standard specifically calls out that software must be developed in a system that meets the DoD Enterprise DevSecOps Reference Design. A DoD software factory is the generic implementation of this design standard.

For Federal Service Integrators (FSIs), the biggest reason to utilize a DoD software factory is that it is a standard approach to meeting DoD compliance and certification standards. By meeting a standard, such as CMMC Level 2, you expand your opportunity to work with DoD programs.

Continuous Authorization to Operate (cATO)

If you’re looking for more information on cATO, Anchore has written a comprehensive guide on navigating the cATO process that can be found on our blog:

DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images

The shift from traditional software delivery to DevSecOps in the Department of Defense (DoD) represents a crucial evolution in how software is built, secured, and deployed with a focus on efficiencies and speed. Our white paper advises on best practices that are setting new standards for security and efficiency in DoD software factories.

Cybersecurity Maturity Model Certification (CMMC)

The CMMC is the certification standard that is used by the DoD to vet FSIs from the defense industrial base (DIB). This is the gold standard for demonstrating to the DoD that your organization takes security seriously enough to work with the highest standards of any DoD program. The security controls that the CMMC references when determining certification are outlined in NIST 800-171. There are 17 total families of security controls that an organization has to satisfy in order to achieve CMMC Level 2 certification, and a DoD software factory can help check a number of these off the list.

The specific families of controls that a DoD software factory helps meet are:

  • Access Control (AC)
  • Audit and Accountability (AU)
  • Configuration Management (CM)
  • Incident Response (IR)
  • Maintenance (MA)
  • Risk Assessment (RA)
  • Security Assessment and Authorization (CA)
  • System and Communications Protection (SC)
  • System and Information Integrity (SI)
  • Supply Chain Risk Management (SR)

If you’re looking for more information on how to apply software supply chain security to meet the CMMC, Anchore has published two blog posts on the topic:

NIST SP 800-171 & Controlled Unclassified Data: A Guide in Plain English

  • NIST SP 800-171 is the canonical list of security controls for meeting CMMC Level 2 certification. Anchore has broken down the entire 800-171 standard to give you an easy to understand overview.

Automated Policy Enforcement for CMMC with Anchore Enterprise

  • Policy Enforcement is the backbone of meeting the monitoring, enforcement and reporting requirements of the CMMC. In this blog post, we break down how Anchore Federal can meet a number of the controls specifically related to software supply chain security that are outlined in NIST 800-171.

How do I meet the DevSecOps Reference Design requirements?

The easy answer is by utilizing a DoD Software Factory Managed Service Provider (MSP). Below in the User Stories section, we take a deep dive into the US Air Force’s Platform One, given that it is the preeminent DoD software factory.

The DIY answer involves carefully reading and implementing the DoD Enterprise DevSecOps Reference Design. This document is massive but there are a few shortcuts you can utilize to help expedite your journey. 

Container Hardening

Deciding to utilize software containers in a DevOps pipeline is almost a foregone conclusion at this point. What is less well known is how to secure your containers, especially to meet the standards of a DoD software factory.

The DoD has published two guides that can help with this. The first is the DoD Container Hardening Guide, and the second is the Container Image Creation and Deployment Guide. Both name Anchore Federal as an approved container hardening scanner.

Anchore has published a number of blogs and even a white paper that condense the information in both of these guides into more digestible content. See below:

Container Security for U.S. Government Information Systems

  • This comprehensive white paper breaks down how to achieve a container build and deployment system that is hardened to the standards of a DoD software factory.

Enforcing the DoD Container Image and Deployment Guide with Anchore Federal

  • This blog post is great for those who are interested to see how Anchore Federal can turn all of the requirements of the DoD Container Hardening Guide and the Container Image Creation and Deployment Guide into an easy button.

Deep Dive into Anchore Federal’s Container Image Inspection and Vulnerability Management

  • This blog post deep dives into how to utilize Anchore Federal to find container vulnerabilities and alert or report on whether they are violating the security compliance required to be a DoD software factory.

Policy-based Software Supply Chain Security and Compliance

The power of a policy-based approach to software supply chain security is that it can be integrated directly into a DevOps pipeline and automate a significant amount of alerting, reporting and enforcement work. The blog posts below go into depth on how this automated approach to security and compliance can uplevel a DoD software factory:

A Policy Based Approach to Container Security & Compliance

  • This blog details how a policy-based platform works and how it can benefit both software supply chain security and compliance. 

The Power of Policy-as-Code for the Public Sector

  • This follow-up to the post above shows how the policy-based security platform outlined in the first blog post can have significant benefits to public sector organizations that have to focus on both internal information security and how to prove they are compliant with government standards.

Benefits of Static Image Inspection and Policy Enforcement

  • Getting a bit more technical this blog details how a policy-based development workflow can be utilized as a security gate with deployment orchestration systems like Kubernetes.

Getting Started With Anchore Policy Bundles

  • An even deeper dive into what is possible with the policy-based security system provided by Anchore Enterprise, this blog gets into the nitty-gritty on how to configure policies to achieve specific security outcomes.

Unpacking the Power of Policy at Scale in Anchore

  • This blog shows how a security practitioner can extend the security signals that Anchore Enterprise collects with the assistance of a more flexible data platform like New Relic to derive more actionable insights.

Security Technical Implementation Guide (STIG)

The Security Technical Implementation Guides (STIGs) are fantastic technical guides for configuring off-the-shelf software to DoD hardening standards. Anchore, being a company focused on making security and compliance as simple as possible, has written a significant amount about how to utilize STIGs and achieve STIG compliance, especially for container-based DevSecOps pipelines. These are exactly the kind of software development environments that meet the standards of a DoD software factory. View our previous content below:

4 Ways to Prepare your Containers for the STIG Process

  • In this blog post, we give you four quick tips to help you prepare for the STIG process for software containers. Think of this as the amuse bouche to prepare you for the comprehensive white paper that comes next.

Navigating STIG Compliance for Containers

  • As promised, this is the extensive document that walks you through how to build a DevSecOps pipeline based on containers that is both high velocity and secure. Perfect for organizations that are aiming to roll their own DoD software factory.

User Stories

For the past decade, Anchore has been supporting FSIs and DoD programs in building DevSecOps practices that meet the criteria to be called a DoD software factory. We can write technical guides and best practices documents till the end of time, but sometimes the best lessons are learned from real-life stories. Below are user stories that help fill in the details about how a DoD software factory can be built from scratch:

DoD’s Pathway to Secure Software

  • Join Major Camdon Cady of Platform One and Anchore’s VP of Security, Josh Bressers as they discuss the lessons learned from building a DoD software factory from the ground up. Watch this on-demand webinar to get all of the details in a laid back and casual conversation between two luminaries in their field.

Development at Mach Speed

  • If you prefer a written format over video, this case study highlights how Platform One utilized Red Hat OpenShift and Anchore Federal to build their DoD software factory that has become the leading Managed Service Provider for DoD programs.

Conclusion

Similar to how Cloud has taken over the infrastructure discussion in the enterprise world, DoD software factories are quickly becoming the go-to solution for DoD programs and the FSIs that support them. Delivering on the promise of the DevOps movement of high velocity development without compromising security, a DoD software factory is the one-stop shop to upgrade your software development practice into the modern age and become compliant as a bonus! If you’re looking for an easy button to infuse your DevOps pipeline with security and compliance without the headache of building it yourself, take a look at Anchore Federal and how it helps organizations layer software supply chain security into a DoD software factory and achieve a cATO.

4 Ways to Prepare your Containers for the STIG Process

The Security Technical Implementation Guide (STIG) is a Department of Defense (DoD) technical guidance standard that captures the cybersecurity requirements for a specific product, such as a cloud application going into production to support the warfighter. System integrators (SIs), government contractors, and independent software vendors know the STIG process as a well-governed process that all of their technology products must pass. The Defense Information Systems Agency (DISA) released the Container Platform Security Requirements Guide (SRG) in December 2020 to direct how software containers go through the STIG process. 

STIGs are notorious for their complexity and the hurdle that STIG compliance poses for technology project success in the DoD. Here are some tips to help your team prepare for your first STIG or to fine-tune your existing internal STIG processes.

4 Ways to Prepare for the STIG Process for Containers

Here are four ways to prepare your teams for containers entering the STIG process:

1. Provide your Team with Container and STIG Cross-Training

DevSecOps and containers, in particular, are still gaining ground in DoD programs. You may very well find your team in a situation where your cybersecurity/STIG experts may not have much container experience. Likewise, your programmers and solution architects may not have much STIG experience. Such a situation calls for some manner of formal or informal cross-training for your team on at least the basics of containers and STIGs. 

Look for ways to provide your cybersecurity specialists involved in the STIG process with training about containers if necessary. There are several commercial and free training options available. Check with your corporate training department to see what resources they might have available such as seats for online training vendors like A Cloud Guru and Cloud Academy.

There’s a lot of out-of-date and conflicting information about the STIG process on the web today. System integrators and government contractors need to build STIG expertise across their DoD project teams to cut through such noise.

Including STIG expertise as an essential part of your cybersecurity team is the first step. While contract requirements dictate this proficiency, it only helps if your organization can build a “bench” of STIG experts. 

Here are three tips for building up your STIG talent base:

  • Make STIG experience a “plus” or “bonus” in your cybersecurity job requirements for roles, even if they may not be directly billable to projects with STIG work (at least in the beginning)
  • Develop internal training around STIG practices led by your internal experts and make it part of employee onboarding and DoD project kickoffs
  • Create a “reach back” channel from your project teams to other parts of your company with STIG expertise, such as corporate teams or other project teams, so they can get support for any issues and challenges with the STIG process

Depending on the size of your company, the clearance requirements of the project, and other situational factors, the temptation might be there to bring in outside contractors to shore up your STIG expertise internally. For example, the Container Platform Security Requirements Guide (SRG) is still new, so it makes sense to bring in an outside contractor with some experience managing containers through the STIG process. If you go this route, prioritize knowledge transfer from the contractor to your internal team. Otherwise, their container and STIG knowledge walks out the door at the end of the contract term.

2. Validate your STIG Source Materials

When researching the latest STIG requirements, you need to validate the source materials. There are many vendors and educational sites that publish STIG content. Some of that content is outdated and incomplete. It’s always best to go straight to the source. DISA provides authoritative and up-to-date STIG content online that you should consider as your single source of truth on the STIG process for containers.

3. Make the STIG Viewer part of your Approved Desktop

Working on DoD and other public sector projects requires secure environments for developers, solution architects, cybersecurity specialists, and other team members. The STIG Viewer should become a part of your DoD project team’s secure desktop environment. Save the extra step of your DoD security teams putting in a service desk ticket to request the STIG Viewer installation.

4. Look for Tools that Automate time-intensive Steps in the STIG process

The STIG process is time-intensive, with much of that time spent documenting policy controls. Look for tools that’ll help you automate compliance checks before you proceed into an audit of your STIG controls. The right tool can save you from audit surprises and rework that’ll slow down your application going live.

Parting Thought

The STIG process for containers is still very new to DoD programs. Being proactive and preparing your teams upfront in tandem with ongoing collaboration are the keys to navigating the STIG process for containers.

Learn more about putting your containers through the STIG process in our new white paper entitled Navigating STIG Compliance for Containers!

We don’t know how to fix the xz problem, but we can detect it

A very impressive and scary attack against the xz library was uncovered on Friday, which made for an interesting weekend for many of us.

There has been a lot written about the technical aspects of this attack, and more will be uncovered over the next few days and weeks. It’s likely we’re not done learning new details about this attack. This doesn’t appear to affect as many organizations as Log4Shell did, but it’s a pretty big deal, especially given what this sort of attack means for the larger ecosystem of open source. Trying to explain the details isn’t the point of this blog post. There’s another angle of this story that’s not getting much attention: how can we solve problems like this (we can’t), and what can we do going forward?

The unsolvable problem

Sometimes reality can be harsh, but the painful truth about this sort of attack is that there is no solution. Many projects and organizations are happy to explain how they keep you safe, or how you can prevent supply chain attacks by doing this one simple thing. However, the industry as it stands today lacks the ability to prevent an attack created by a motivated and well-resourced threat actor. If we want to use an analogy, preventing an attack like xz is the equivalent of the pre-crime of science fiction dystopian stories. The idea behind pre-crime is to use data or technology to predict when a crime is going to happen, then stop it before it does. As one can probably imagine, this leads to a number of problems in any society that adopts such a thing.

If there is a malicious open source maintainer, we lack the tools and knowledge to prevent this sort of attack; you can’t actually stop such behavior until after it happens. It may well be that there is no way to stop something like this before it happens.

HOWEVER, that doesn’t mean we are helpless. We can take a page out of the playbook of the observability industry. Sometimes we’re able to see problems as they happen or after they happen, then use that knowledge of the past to improve the future. That is a problem we can solve, and it’s a solution we can measure. If you have a solid inventory of your software, looking for affected versions of xz becomes simple and effective.

Today and Tomorrow

Of course, looking for a vulnerable version of xz, specifically versions 5.6.0 and 5.6.1, is something we should all be doing. If you’ve not gone through the software you’re running, you should do this right now. See below for instructions on how to use Syft and Anchore Enterprise to accomplish this.

Finding two specific versions of xz is a very important task right now, but there’s also what happens tomorrow. We’re all very worried about these two specific versions of xz, but we should prepare for what happens next. It’s very possible there will be other versions of xz that end up having questionable code that needs to be removed or downgraded. There could be other libraries with problems (everyone is looking for similar attacks now). We don’t really know what’s coming next. The worst part of being in the middle of attacks like this is the unknowns. But there are some things we do know: if you have an accurate inventory of your software, figuring out which software or packages you need to update becomes trivial.

Creating an inventory

If you’re running Anchore Enterprise the good news is you already have an inventory of your software. You can create a report that will look for images affected by CVE-2024-3094.

The above image shows how a report in Anchore Enterprise can be created. Another feature of Anchore Enterprise allows you to query all of your SBOMs for instances of specified software, by package name, via an API call. This is useful for gaining insight into the location, ubiquity, and version spread of that software across your environment.

The package names in question are the liblzma5 and xz-libs packages, which cover the common naming across rpm, dpkg, and apk based Linux distributions.

See the Anchore Enterprise API Browser for more information about the API, and the Anchore Enterprise Documentation for more details on reporting, vulnerability scanning, and other functions of the system.

If you’re using Syft, it’s a little bit more complicated, but still a very solvable problem. The first thing to do is generate SBOMs for the software you’re using. Let’s create SBOMs for some container images in this example:
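A minimal sketch of what that can look like, assuming a recent Syft release and the Syft JSON output format (the image names and the sboms/ directory are only illustrative):

mkdir -p sboms
for image in ubuntu:22.04 alpine:3.19 myapp:latest; do
  # Write one Syft JSON SBOM per image, e.g. sboms/alpine_3.19.json
  syft scan "${image}" -o syft-json > "sboms/$(echo "${image}" | tr '/:' '__').json"
done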

It’s important to keep those SBOMs you just created somewhere safe. If we then find out in a few days or weeks that other versions of xz shouldn’t be trusted, or if a different open source library has a similar problem, we can just run a query against those files to understand how or if we are affected.

Now that we have a directory full of SBOMs, we can run a query similar to these to figure out which SBOMs have a version of xz in them.
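As a hedged example of the kind of search you could run, assuming the SBOMs are in Syft JSON format and jq is installed:

for f in sboms/*.json; do
  # Print any package whose name looks like xz or liblzma, along with its version
  jq -r --arg f "$f" \
    '.artifacts[] | select(.name | test("xz|liblzma")) | "\($f): \(.name) \(.version)"' "$f"
done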

While the example looks for xz, if the need to quickly look for other packages arises in the near future, it’s easy to adjust your query. It’s even easier if you store the SBOMs in some sort of searchable database.

What now?

There’s no doubt it’s going to be a very interesting couple of weeks for many of us. Anyone who has been watching the news is probably wondering what wild supply chain story will happen next. Our ability to solve open source security problems hasn’t kept pace with the growth of open source. There will be no quick fixes. The only way we get out of this one is a lot of hard questions and even more hard work.

But in the meantime, we can focus on understanding and defending what we have. Sometimes the best defense is a good defense.

Want a primer on software supply chain security? Get our free white paper here.

Navigating the NVD Quagmire

The global cybersecurity community has been in a state of uncertainty since the National Vulnerability Database (NVD) has degraded service starting in mid-February. There has been a lot of coverage of this incident this month and Anchore has been at the center of much of it. If you haven’t been keeping up, this blog post is here to recap what has happened so far and how the community has been responding to this incident.

Our VP of Security, Josh Bressers, has been leading the charge to educate and organize the community: first with his Open Source Security podcast, which goes through what is happening with NVD and why it is important. On top of that, last week he participated in a livestream with Chainguard co-founder Dan Lorenc on the Resilient Cyber Show, hosted by Chris Hughes, about the implications of the current delay in NVD service.

We’ve condensed the topics from these resources into a blog post that will cover the issues created by the delay in NVD service, a background on what has happened so far, a potential open-source solution to the problem and a call to action for advocacy. Continue reading for the good stuff.

The problem

Federal agencies mandate that NVD be used as the primary source of truth even when higher quality data sources are available. This mainly comes down to the fact that severity scores, meaning the Common Vulnerability Scoring System (CVSS), determine when an agency or organization is out of compliance with a federal security standard. Given that compliance standards are created by the US government, only NVD can score a vulnerability and determine the appropriate action to stay in compliance.

That’s where the problem starts to come in, you’ve got a whole bunch of government agencies on one hand saying, ‘you must use this data’. And then another government agency that says, “No, you can’t rely on this for anything”. This leaves folks working with the government in a bit of a pickle.

–Dan Lorenc, Co-Founder, Chainguard

If NVD isn’t assigning severities to vulnerabilities, it’s not clear what that means for maintaining compliance, and organizations could be exposing themselves to significant risk. For example, high severity vulnerabilities could be published that organizations are unaware of because this vital review and scoring process has been removed.

Background on NVD and the current state of affairs

NVD is the canonical source of truth for software vulnerabilities for the federal government, specifically for 10+ federal compliance standards. It has also become a go-to resource for the worldwide security community even if individual organizations in the wider community aren’t striving to meet a United States compliance standard.

NVD adds a number of enrichments to CVE data, but two are of particular importance: first, it adds a severity score to all CVEs; second, it adds information about which versions of the software are impacted by the CVE. The National Institute of Standards and Technology (NIST) has been providing this service to the security community for over 20 years through the NVD. That changed last month:

Timeline

  • Feb 12: NVD dramatically reduces the number of CVEs that are being enriched
  • Feb 15: NVD posts message about delay of enrichment on NVD Website

Read a comprehensive background in our original blog post, National Vulnerability Database: Opaque changes and unanswered questions.

Developing an Open-Source Solution

The Anchore team develops and maintains Grype, an open-source vulnerability scanner that utilizes the NVD as one of many vulnerability feeds, as well as Anchore Enterprise, a software supply chain security platform that incorporates Grype. Given that both products use data from NVD, it was particularly important for Anchore to engage in the current crisis.

While there is nothing Anchore can do about the missing severity scores, the other missing enrichment highlighted above is the information about which software versions are impacted by a CVE, i.e., the Common Platform Enumeration (CPE). This matching data ends up being the more important signal during impact analysis because it is an objective measure of impact, unlike severity scoring, which can be debated (and is, at length).

Given Anchore’s history with the open-source software community, creating an OSS project to fill a gap in the NVD enrichment seemed the logical choice. The goal of going the OSS route is to leverage the transparent process and rapid iteration that comes from building software publicly. Anchore is excited to collaborate with the community to:

  • Ingest CVE data
  • Analyze CVEs
  • Improve the CVE-to-versioning mapping process 

Everyone is being crushed by the unrelenting influx of vulnerabilities. It’s not just NVD. It’s not one organization. We can either sit in our silos and be crushed to death or we can work together.

–Josh Bressers, VP of Security at Anchore

If you’re looking to utilize this data and software as a backfill while NVD continues delaying analysis, or you want to contribute to the project, please join us on GitHub.

Cybersecurity Awareness and Advocacy

It might seem strange that the cybersecurity community would need to convince the US government that investing in the cybersecurity ecosystem is a net positive investment given that the federal government is the largest purchaser of software in the world and is probably the largest target for threat actors. But given how NIST has decided to degrade the service of NVD and provide only opaque guidance on how to fill the gap in the meantime, it doesn’t appear that the right hand is talking with the left.

Whether the federal government intended to or not, by requiring that organizations and agencies utilize NVD in order to meet a number of federal compliance standards, it effectively became the authority on the severity of software vulnerabilities for the global cybersecurity ecosystem. By providing a valuable and reliable service to the community, the US garnered the trust of the ecosystem. The current state of NVD and the manner in which it was rolled out has degraded that trust. 

It is unknown whether the US will cede its authority to another organization; the EU may attempt to fill this vacuum with its own authoritative database. In the meantime, advocacy for cybersecurity awareness within the government is paramount. It is up to the community to create the pressure that demonstrates the urgency of rethinking the current strategy around a vital community resource like NVD.

Conclusion

Anchore is committed to keeping the community up-to-date on this incident as it unfolds. To stay informed, be sure to follow us on LinkedIn or Twitter/X.

If you’d like to watch the livestream in all its glory, click on the image below to go to the VOD.

Also, if you’re looking for more in-depth coverage of the NVD incident, Josh Bressers has a security podcast called Open Source Security that covers the NVD incident and the history of NVD.

Spring Webinar Update: Expand Your Knowledge with Our Expert-Led Sessions

In our continuous effort to bring valuable insights and tools to the world of software supply chain security, we are thrilled to announce two upcoming webinars and one recently held webinar now available for on-demand access. Whether you’re looking to enhance your understanding of software security, explore open-source tools to automate OSS licensing management, or navigate the complexities of compliance with federal standards, our expert-led webinars are designed to equip you with the knowledge you need. Here’s what’s on the agenda:

Tracking License Compliance Made Easy: Intro to Grant (OSS)

Date: Mar 26, 2024 at 2pm EDT  (11am PDT)

Join us as Anchore CTO Dan Nurmi and Grant maintainer Christopher Phillips discuss the challenges of managing software licenses within production environments, highlighting the complexity and ongoing nature of tracking open-source licenses.

They will introduce Grant, an open-source tool designed to alleviate the burden of OSS license inspection by demonstrating how to scan for licenses within SBOMs or container images, simplifying a typically manual process. The session will cover the current landscape of software licenses, the difficulties of compliance checks, and a live demo of Grant’s features that automate this previously laborious process.

Software Security in the Real World with Kelsey Hightower and Dan Perry

Date: April 4th, 2024 at 2pm EDT  (11am PDT)

In our upcoming webinar, experts Kelsey Hightower and Dan Perry will delve into the nuances of securing software in cloud-native, containerized applications. This in-depth session will explore the criteria for vulnerability testing success or failure, offering insights into security testing and compliance for modern software environments. 

Through a live demonstration of Anchore Enterprise, they’ll provide a comprehensive look at visibility, inspection, policy enforcement, and vulnerability remediation, equipping attendees with a deeper understanding of software supply chain security, proactive security strategies, and practical steps to embark on a software security journey. 

The discussion will continue after the webinar on X/Twitter with Kelsey Hightower.

FedRAMP and SSDF Compliance: How to Sell to the Federal Government

This webinar explores how Anchore aids in navigating the complex compliance requirements for selling software to the federal government, focusing on FedRAMP vulnerability scanning and SSDF compliance. Led by Josh Bressers, VP of Security, and Connor Wynveen, Senior Solutions Engineer, it will detail how to evaluate FedRAMP controls for software containers and adhere to SSDF guidelines.

Key takeaways include strategies to streamline FedRAMP and SSDF compliance efforts, leveraging SBOMs for efficiency, the critical role of automated vulnerability scans, and how Anchore’s policy pack can assist organizations in meeting compliance standards.

Accessing the Webinars

Don’t miss out on the opportunity to expand your knowledge and skills with these sessions. To register for the upcoming webinars or to access the on-demand webinar, visit our webinar landing page. Whether you’re looking to stay ahead of the curve in software security, explore funding opportunities for your open-source projects, or break into the federal market, our webinars are designed to provide you with the insights and tools you need.

We look forward to welcoming you to our upcoming webinars. Stay informed, stay ahead!

National Vulnerability Database: Opaque changes and unanswered questions

A short history lesson on the NVD

Founded in 2005, the National Vulnerability Database, or NVD, is a collection of vulnerability data maintained by the National Institute of Standards and Technology (NIST) in the United States. As of today, many companies rely on NVD data for their security operations and vulnerability research.

NVD describes itself as:

The NVD is the U.S. government repository of standards based vulnerability management data represented using the Security Content Automation Protocol (SCAP). This data enables automation of vulnerability management, security measurement, and compliance. The NVD includes databases of security checklist references, security related software flaws, product names, and impact metrics.

The primary role of the NVD is adding data to vulnerabilities that have been assigned a CVE ID, including additional metadata such as severity levels via the Common Vulnerability Scoring System (CVSS) and affected product data via Common Platform Enumeration (CPE). NIST is responsible for maintaining the NVD, and each CVE ID can require additional modifications or maintenance because the nature of vulnerabilities can change daily. This is a service the NVD has been providing for nearly 20 years.

The graph below shows a historical trend of CVE IDs that have been published in the CVE program (green), alongside the analysis data provided by NVD (red), since 2005.

Key: Green is all CVE IDs in NVD. Red is IDs with a CPE attached

We can see nearly every CVE has been enriched by NVD during this time.

A problematic website notice from the NVD

On February 15th 2024, a banner appeared on the NVD website stating:

NIST is currently working to establish a consortium to address challenges in the NVD program and develop improved tools and methods. You will temporarily see delays in analysis efforts during this transition. We apologize for the inconvenience and ask for your patience as we work to improve the NVD program.

It’s not entirely clear what this message means for the data provided by NVD or what the public should expect. 

While attempting to research the meaning behind this statement, Anchore engineers have discovered that as of February 15, 2024, NIST has almost completely stopped updating NVD with analysis for CVE IDs. The graph below shows the trend of CVE IDs that have been published in the CVE program (green), alongside the analysis data provided by NVD (red), since early January 1, 2024.

Key: Green is all CVE IDs in NVD. Red is IDs with a CPE attached

Starting February 12th, thousands of CVE IDs have been published without any record of analysis by NVD. Since the start of 2024, there have been 6,171 CVE IDs in total, with only 3,625 enriched by NVD. That leaves a gap of 2,546 (42%!) IDs.

NVD has become an industry standard: organizations and security products rely on its data for security operations such as prioritizing vulnerability remediation and securing infrastructure. CVE IDs are constantly being added and updated, but those IDs are now missing the key analytical data provided by NVD. Any organization that depends on NVD for vulnerability data such as CVSS scores is no longer receiving updates to the CVE data. This means organizations relying on this data are left in the dark about new vulnerabilities, imposing greater risk and an unmanaged attack surface on their environments.

Wait and see?

How to fill the gap of this missing data has not yet been addressed by NVD. There are other vulnerability databases such as the GitHub Advisory Database and the CVE5 database that contain severity ratings and affected products, but by definition, those databases cannot provide NVD severity scores.

Anchore is investigating options to create a public repository of identifiers to fill this gap. We invite members of the security community to join us at our next meetup on March 14th 2024 as we research options. Details for the meetup are available on GitHub.

In the meantime, we will continue to look for updates from NIST and hope that they are more transparent about their service situation soon.

Syft Reaches v1.0!

Early in 2020 we started work on Syft, an SBOM generator that is easy to install and use and supports a variety of SBOM formats. In late September of 2020 we released the first version of Syft with seven ecosystems supported. Since then we’ve had 168 releases made up of the work from 134 contributors, collectively adding an additional 18 ecosystems — a total of over 40 package catalogers. 

Today we’re going one step further: we’re thrilled to announce the release of Syft v1.0.0! 🎉

What is Syft?

At a high level, Syft answers the seemingly simple question: “what is in my application?” To a finer point, Syft is a standalone CLI tool and library that scans container images and file systems in order to generate an SBOM: the Software Bill of Materials that describes what software components were found.

While Syft’s capability to generate a detailed SBOM is useful in its own right, we consider Syft to be a foundational tool that supports both our growing list of open-source offerings as well as dozens of other OSS projects. Ultimately it delivers a way for users to both generate SBOM material for their projects as well as use those same SBOMs for other important functions such as vulnerability scanning, OSS license reporting, tracking components at various stages of the application lifecycle, and other custom use cases.

Some of the important dimensions of Syft are:

  • Sources: types of software applications and artifacts that can be scanned by Syft to produce an SBOM, such as docker container images, source code directories, and more.
  • Catalogers: functions that are implemented for a given type of software artifact / packaging ecosystem, such as Java JAR files, RPMs, Go modules, and more.
  • Output Formats: interchangeable formats to put the SBOM material into, post-SBOM generation, such as the Syft-native JSON, SPDX (multiple versions), CycloneDX (multiple versions), and user-customizable formats via templating.

Over time, we’ve seen the list of sources, catalogers, and output formats continue to grow, and we are looking forward to adding even more capabilities as the project continues forward.  

Capabilities of Syft

To start with, Syft needs to know how to read what you’re giving it. There are a number of different types of sources Syft supports: directories, individual files, archives, and – of course – container images! And there are a lot of options such as letting you choose what scope you want to be cataloged within an image. By default we consider a squashed layer scope, which is most like what the filesystem will look like when a container is created from the image. But we also allow for looking at all layers within the container image, which would include contents that are distributed but not available when a container is created. In future versions we want to expand these selections to support even more use cases.
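As a quick sketch, switching between the two scopes from the command line looks roughly like this (using Syft’s --scope flag; alpine:latest is just an example image):

# Default behavior: catalog the squashed filesystem, i.e. what a running container would see
syft scan alpine:latest --scope squashed

# Also catalog contents from intermediate layers that don't appear in the final filesystem
syft scan alpine:latest --scope all-layers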

When scanning a source we have several catalogers scouring for packaging artifacts, covering several ecosystems:

  • Alpine (apk)
  • C/C++ (conan)
  • Dart (pubs)
  • Debian (dpkg)
  • Dotnet (deps.json)
  • Objective-C (cocoapods)
  • Elixir (mix)
  • Erlang (rebar3)
  • Go (go.mod, Go binaries)
  • Haskell (cabal, stack)
  • Java (jar, ear, war, par, sar, nar, native-image)
  • JavaScript (npm, yarn)
  • Jenkins Plugins (jpi, hpi)
  • Linux kernel modules and archives (ko & vmlinuz)
  • Nix (outputs in /nix/store)
  • PHP (composer)
  • Python (wheel, egg, poetry, requirements.txt)
  • Red Hat (rpm)
  • Ruby (gem)
  • Rust (cargo.lock)
  • Swift (cocoapods, swift-package-manager)
  • WordPress Plugins

We don’t blindly run all catalogers against all kinds of input sources though – we tailor the catalogers used to create the most accurate SBOM possible based on what is being analyzed. For example, when scanning images, we enable the alpm-db-cataloger but don’t enable the cocoapods-cataloger.

If there is a set of catalogers that you must or must not use for whatever reason, you can always tailor the set that runs with --override-default-catalogers and --select-catalogers to meet your needs (see the example below). If you’re using Syft as an API, you can go further and implement your own package cataloger and provide it to Syft when creating an SBOM.
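A rough sketch of what that tailoring can look like on the command line (the cataloger names here are only examples; check the list Syft reports for your version):

# Show every available cataloger and the tags used to group them
syft cataloger list

# Ignore the default selection for the source type and run only the named catalogers
syft scan <image> --override-default-catalogers alpm-db-cataloger,cocoapods-cataloger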

When it comes to SBOM formats, we are unopinionated about which format might be best for your use case — so we support as many as we can, including the most popular formats: SPDX and CycloneDX. This is a core decision point for us — this way you can pivot between SBOM formats and not be locked into a format based on a tooling decision. You can even select which version of a format to output (even output multiple at once!):

syft scan alpine:latest -o spdx-json@2.2
syft scan alpine:latest -o spdx-json@2.3
syft scan alpine:latest -o cyclonedx-json@1.5
syft scan alpine:latest -o spdx-json@2.3=./alpine.spdx.json -o cyclonedx-json=./alpine.cdx.json

From the beginning, Syft was designed to lean into the Unix philosophy: do one thing and do it well, allowing it to plug into downstream tooling easily. For instance, to use your SBOM to get a vulnerability analysis, pipe the results to Grype:
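For example, something like the following one-liner (alpine:latest is just a stand-in image):

syft scan alpine:latest -o syft-json | grype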

Or to discover software licenses and check for compliance, pipe the results to Grant:
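A similar sketch for license checks, assuming Grant is installed and accepts a Syft JSON SBOM on standard input:

syft scan alpine:latest -o syft-json | grant check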

What does v1 mean to us? 

Version 1 primarily signals a stabilization of the CLI and API. Moving forward you can expect that breaking changes will always coincide with a major version bump of Syft. Some specific guarantees we wanted to call out explicitly are:

  • We version the JSON schema of the Syft JSON output separately from Syft, the application. In the past, in a v0 state, this has meant that breaking changes to the JSON schema would only result in a minor version bump of Syft. Moving forward, the JSON schema version used for the default JSON output of Syft will never include breaking changes.
  • We’ve implicitly supported the ability to decode any previous version of the Syft JSON model; moving forward this will be an explicit guarantee — if you have a Syft JSON document of any version, then you will be able to convert it into the latest Syft JSON version (see the sketch after this list).
  • Stereoscope, our library for parsing container images and crafting squashed filesystem representations (critical to the functionality of Syft), will now be versioned at each release and have release tags.
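As a brief sketch of that conversion guarantee in practice, using the syft convert subcommand (file names are illustrative):

# Re-emit an SBOM produced by an older Syft release using the current Syft JSON schema
syft convert ./old-app.syft.json -o syft-json > ./app.latest.syft.json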

What does this mean for the average user? In short: probably nothing! We will continue to ship enhancements and fixes every few weeks while remaining compatible with legacy SBOM document schemas.

So are we done?…

…Not even close 🤓. As the SBOM landscape keeps changing our goal is to grow Syft in terms of what can be cataloged, the accuracy of the results, the formats that can be expressed, and its usefulness in more use cases. 

Have an ecosystem you need that we don’t support yet? Let us know, and let’s build it together! Found an issue or have a feature idea? Open an issue and let’s talk about it! Looking to contribute and are looking for a good place to start? We have some issues set aside just for you! Curious as to what we’re looking to work on next? Check out our ever-growing roadmap! And always, come join us every-other week at our community office hours chats if you want to meet with us and talk about topical issues regarding our OSS projects.

Anchore Enterprise 5.1: Token-Based Authentication

In Anchore Enterprise 5.1, we have added support for token-based authentication using API keys. An administrator can now create API keys for a user so that they can authenticate with a key rather than a username and credential. Let’s dive into the details of what this means.

Token-based authentication is a protocol that provides an extra layer of security when users want to access an API. It allows users to verify their identity, and in return receive a unique access token for the specific service. Tokens have a lifespan and as long as it is used within that duration, users can access the application with that same token without having to continuously log in.

We list the step-by-step mechanism for a token-based authentication protocol:

  1. A user sends their credentials to the client.
  2. The client sends the credentials to an authorization server, which generates a unique token for that specific user’s credentials.
  3. The authorization server sends the token to the client.
  4. The client sends the token to the resource server.
  5. The resource server sends data/resources to the client for the duration of the token’s lifespan.

Token-Based Authentication in Anchore 5.1

Now that we’ve laid the groundwork, in the following sections we’ll walk through how to create API keys and use them in AnchoreCTL.

Creating API Keys

In order to generate an API key, navigate to the Enterprise UI and click on the top right button and select ‘API Keys’:


Clicking ‘API Keys’ will present a dialog that lists your active, expired and revoked keys:


To create a new API key, click on the ‘Create New API Key’ on the top right. This will open another dialog where it asks you for relevant details for the API key:


You can specify the following fields:

  • Name: The name of your API key. This is mandatory and the name should be unique (you cannot have two API keys with the same name).
  • Description: An optional text descriptor for your API key.
  • Expiry Date: An expiry date for your API key; you cannot specify a date in the past, and it cannot exceed 365 days by default. This is the lifespan you are configuring for your token.

Click save and the UI will generate a Key Value and display the following output of the operation:


NOTE: Make sure you copy the Key Value as there is no way to get this back once you close out of this window.

Revoking API Keys

If there is a situation where you feel your API key has been compromised, you can revoke an active key. This prevents the key from being used for authentication. To revoke a key, click on the ‘Revoke’ button next to a key:


NOTE: Be careful revoking a key, as this is an irreversible operation, i.e., you cannot mark it as active later.

The UI by default only displays active API keys. If you want to see your revoked and expired keys as well, turn off the ‘Show only active API keys’ toggle on the top right:


Managing API Keys as an Admin

As an account admin, you can manage API keys for all users in the account you administer. A global admin can manage API keys across all accounts and all users.

To access the API keys as an admin, click on the ‘System’ icon and navigate to ‘Accounts’:


Click ‘Edit’ for the account you want to manage keys for, then click the ‘Tools’ button next to the user whose keys you wish to manage:


Using API Keys in AnchoreCTL

Generating API Keys as a SAML (SSO) User

API keys for SAML (SSO) users are disabled by default. To enable API keys for SAML users, please update your helm chart values file with the following:

user_authentication:
  allow_api_keys_for_saml_users: true

NOTE: API keys are an additional authentication mechanism for SAML users that bypasses the authentication control of the IdP. When access has been revoked at the IdP, it does not automatically disable the user or revoke all API keys for that user. Therefore, when access has been revoked for a user, the system admin is responsible for manually deleting the Anchore user or revoking any API keys created for that user.

Using API Keys

API keys are authenticated using basic auth. In order to use an API key, you need to use the special username _api_key, and the password is the Key Value that was output when you created the API key. For example:

curl -u '_api_key:<API key value>' http://localhost:8228/v2/images

To use an API key with AnchoreCTL, set the equivalent values in your AnchoreCTL configuration:

url: "http://localhost:8228"
username: "_api_key"
password: <API key value>

Caveats for API Keys

API Keys generally inherit the permissions and roles of the user they were generated for, but there are certain operations you cannot perform using API keys regardless of which user they were generated for:

  • You cannot Add/Edit/Remove Accounts, Users and Credentials.
  • You cannot Add/Edit/Remove Roles and Role Members.
  • You cannot Add/Edit/Revoke API Keys.

We invite you to learn more about Anchore Enterprise 5.0 with a free 15 day trial. Or, if you’ve got other questions, set up a call with one of our specialists.

Learn more from Anchore:

  1. User Management in Anchore Enterprise 
  2. User Authentication with API Keys
  3. AnchoreCTL Configurations 

Introducing Grant: A new OSS project from Anchore for inspecting and checking license compliance from SBOMs

Today Anchore has released Grant, a tool to help users discover and reason about software licenses. Grant represents our latest efforts to build OSS tools oriented around the fundamental idea of creating SBOMs as part of your development process, so that they can be used for multiple purposes. We maintain Syft for generating SBOMs, and Grype for taking an SBOM and performing a vulnerability scan. Now with Grant, that same SBOM can also be used for performing license checks.

Knowing what licenses you have is critical to ensuring the delivery of your software. It’s important to account for all of the licenses that you are beholden to in your transitive dependencies before introducing new dependencies (and well before shipping to production!). For example, you might want to know which package among a piece of software’s dependencies was found to carry a GPL (General Public License) before releasing, or what license a new software library might require. This evaluation tends not to be a one-time decision: you need to continually ensure that the dependencies you are using have not switched to different licenses after you initially brought them into your codebase. Grant aims to aid in solving these issues through the use of its check and list commands.

Grant – Design and Launch Features

Grant takes either a Software Bill of Materials (SBOM), a filesystem with a collection of license files, or a container image as input. Grant can either generate an SBOM for the provided input or read an SBOM provided as input. Grant has two primary commands:

  • list: show me discovered licenses
  • check: evaluate a list of licenses to ensure your artifacts are compliant with a given policy

List 

The list command can display licenses discovered from the SBOM and their related packages. It can also filter licenses to show which licenses are not OSI approved, which licenses are not associated with a package, and which packages were found with no associated license. Users can use the list command to show the discovered licenses for a given container image. Here is an example using Grant to display all the licenses found for the latest redis image:
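The command itself looks roughly like this (redis:latest pulled from a public registry):

grant list redis:latest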

You can also get more detailed information about the licenses by using the -o json flag. This provides JSON-formatted output that shows the locations of each discovered license, its SPDX expression (if it has one), and the packages and locations of those packages for which it was discovered:
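For example, the same image with JSON output:

grant list redis:latest -o json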

Check

The check command can evaluate given inputs for license compliance relative to the provided license rules. One thing users can look out for when trying to understand their license posture is if a license that was previously permissive has changed. Check allows the user to express simple policies that will pass/fail the command depending on what Grant discovers:

rules:
    - pattern: "*gpl*"
      name: "deny-gpl"
      mode: "deny"
      reason: "GPL licenses are not allowed"

Here is Grant running check with the above configuration against the same redis image as in the list example:
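The invocation would look something like the following, assuming the rules above are saved to a .grant.yaml file in the working directory (the file name and --config flag are assumptions about Grant’s configuration handling):

grant check redis:latest --config .grant.yaml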

Users can also use the -o json option to return a more detailed, machine-readable view. This output lets users see more detailed license information when a license is matched to an official SPDX ID, including the text of the license, references to its clauses, and sections or rules showing when a license was not found for a package.

Grant can be configured to accept multiple sources, including those from container images, directories, licenses, and software bills of materials. Grant stitches these sources into a report that can then display the license contents of each source. This report of licenses can be optionally paired with a policy file that gates/limits an execution (pass/fail) on the presence or lack of a predetermined set of licenses. Policy files have the ability to include exceptions for packages that are not used or that organizations deem are not applicable to their compliance policy.

Grant’s approach to license identification

Grant takes a simplified approach to asserting on and establishing a license’s presence. It first tries to match all discoveries, when possible, to the SPDX license list. For sources with more complex expressions, Grant will try to break those expressions into the individual license IDs. This should allow the user to make decisions about which licenses are relevant to the project they are working on. An example of this is an SPDX expression like `LGPL-2.1-only OR MIT OR BSD-3-Clause` discovered for the package `foobar`. If this expression was discovered, then the three different licenses would be associated with the package. If users need an exception, they can then construct a policy that allows them to exclude non-permissive licenses from the statement:

pattern: "LGPL-2.1"
name: "deny-LGPL-2.1"
mode: "deny"
reason: "LGPL-2.1 licenses are not allowed by new corporate policy"
exclusions:
    - "foobar"

One notable inclusion in the code is Grant’s use of Google’s license classifier to recognize licenses that are input to the tool and assign them a confidence level. Further enhancements are planned to allow SBOMs that provide the original license text to also be run against the classifier.

SPDX license compatibility

The landscape of open source licenses is vast and complex, with various licenses having different terms and conditions. It can be challenging to understand and manage the obligations associated with each license. While Grant makes no claim about establishing whether licenses are compatible with each other, it can establish whether a discovered license has an ID found in the latest SPDX license list. Users can use the grant list command with the --non-spdx flag to get an idea of which licenses could not be matched to a corresponding SPDX ID. This should allow users to find gaps where Grant might not be making the right choice for SPDX ID association, while also pointing out interesting one-off licenses and their locations that might be private or custom to a certain vendor:
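For example, to list only the licenses Grant could not match to an SPDX ID for the same redis image:

grant list redis:latest --non-spdx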

Conclusion

Grant is another open source tool from Anchore that can help improve your software processes. Simply by generating SBOMs for your software and making them available, you can now easily gate incoming code changes and steps of your delivery pipeline that are sensitive to licensing concerns.

Go download Grant here and try it for yourself!

Introducing VIPERR: The First Software Supply Chain Security Framework for All

Today Anchore announces the VIPERR software supply chain security framework. This framework distills our lessons learned from supporting the most challenging software supply chain environments across government agencies and the defense industrial base. The framework is a blueprint that organizations can implement to reliably create secure software supply chains with the least possible lift.

Previously, security teams had to wade through massive amounts of literature on software supply chain security and spend countless hours of their team’s time digesting those learnings into concrete processes and controls that integrated with their specific software development process. This typically absorbed massive amounts of time and resources, and it wasn’t always clear at the end that an organization’s security had improved significantly.

Now organizations can utilize the VIPERR framework as a trusted industry resource to confidently improve their security posture and reduce the possibility of a breach due to a supply chain incident without the lengthy research cycle that frequently comes before the implementation phase of a software supply chain solution.

If you’re interested to see how Anchore works with customers to apply the framework via the Anchore Enterprise platform, take the free guided walkthrough of the VIPERR framework. Alternatively, you can view our on-demand webinar for a thorough walkthrough of the framework. Finally, if you would like a more hands-on experience with the VIPERR framework, you can try our Anchore Enterprise Free Trial.

Frequently Asked Questions

What is the VIPERR framework?

VIPERR is a free software supply chain security framework that Anchore created for organizations to evaluate and improve the security posture of their software supply chain. VIPERR stands for visibility, inspection, policy enforcement, remediation, and reporting. 

While working alongside developers and security teams within some of the most challenging architectures and threat landscapes, Anchore field engineers developed the VIPERR framework to both contextualize lessons learned and prevent live threats. VIPERR is meant to be a practical self-assessment playbook that organizations can regularly reuse to evaluate and harden their software supply chain security posture. By following this guide, numerous Fortune 500 enterprises and top federal agencies have transformed their software supply chain security posture and become hard targets for advanced persistent threats.

Why did Anchore create the VIPERR framework?

There are already a number of frameworks that have been developed to help organizations improve their software supply chain security but most of them focus on giving general guidance that is flexible enough that most specific implementations of the guidance will yield compliance. This is great for general standards because it allows organizations to find the best fit for their environment, but by keeping guidance general, it is difficult to always know the best specific implementation that delivers real-world results. The VIPERR framework was designed to fill this gap.

VIPERR is a framework that can be used to fulfill compliance standards for software supply chain security that is opinionated on how to achieve the controls of most of the popular standards. It can be paired with Anchore’s turnkey offering Anchore Enterprise so that organizations can opt for a solution that accomplishes the entire VIPERR framework without having to build a system from scratch.

Access an interactive 50 point checklist or a pdf version to guide you through each step of the VIPERR framework, or share it with your team to introduce the model for learning and awareness.

How do I begin identifying the gaps in my software supply chain security program? 

“I have no budget. My boss doesn’t think it’s a priority. I lack resources.” These are all common refrains when we speak with security teams working with us via our open source community or commercial enterprise offering. There is a ton of guidance available between SSDF, SLSA, NIST, and S2C2F, but a lot of it is contextualized in a manner that is difficult to digest. As mentioned in the previous question, VIPERR was created to be highly actionable by finding the right balance between giving flexible guidance and providing opinions that reduce options, helping organizations make decisions faster.

The VIPERR framework will be available as a formatted 50-point self-assessment checklist in the coming weeks; check back here for updates. By completing the forthcoming checklist you will produce a prioritized list of action items to harden your organization’s software supply chain security with the least amount of effort.

How do I build a solution that closes the gaps that VIPERR uncovers?

As stated, VIPERR is a framework, not a solution. Anchore has worked with companies that have implemented VIPERR by building an in-house solution from a collection of open source tools (e.g., Syft, Grype, and others) or by combining multiple security tools. If you want to get an idea of the components involved in building a solution by self-hosting open-source tools and tying all of these systems together with first-party code, we wrote about that approach here.

If I don’t want to build a solution, are there any turnkey solutions available?

Yes. Anchore Enterprise was designed as a turnkey solution to implement the VIPERR framework. Anchore Enterprise also automates the 50 security controls of the framework by integrating directly into an organization’s SDLC toolset (i.e., developer environments, CI/CD build pipelines, artifact registry and production environments). This provides the ability for organizations to know at any point in time if their software supply chain has been compromised and how to remediate the exploit.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

NIST CSF 2.0: Key Takeaways and Implementation Strategies

This blog post has been archived. Its content has been incorporated into the supporting pillar page, and visitors are automatically redirected there.

Anchore Enterprise 5.0: New, Free Self-Service Trial

This week we’re proud to announce the immediate availability of the Anchore Enterprise 5.0 free trial.  If you’ve been curious about how the platform works, or wondering how it can complement your application security measures, now you can have access to a 15 day free trial. 


To get started, just click here, fill out a short form, and you will immediately receive instructions via email on how to spin up your free 15 day trial in your own AWS account.  Please note that only AWS is supported at this time, so if you’d like to launch a trial on-prem or with another cloud provider, please reach out to us directly.

With just a few clicks, you’ll be up and running with the latest release of Anchore Enterprise which includes a new API, improvements to our reporting interface, and so much more.  In fact, we have pre-populated the trial with data that will allow you to explore the many features Anchore Enterprise has to offer. Take a look at the below screenshots for a glimpse behind the scenes.

Malware Scanning

Kubernetes Runtime Integration

Vulnerability Reports

We invite you to learn more about Anchore Enterprise 5.0 with a free 15 day trial here. Or, if you’ve got other questions, set up a call with one of our specialists here.

Unpacking the Power of Policy at Scale in Anchore

Generating a software bill of materials (SBOM) is starting to become common practice. Is your organization using them to their full potential? Here are a few questions Anchore can help you answer with SBOMs and the power of our policy engine:

  • How far off are we from meeting the security requirements that Iron Bank, NIST, CIS, and DISA put out around container images?
  • How can I standardize the way our developers build container images to improve security without disrupting the development team’s output?
  • How can I best prioritize this endless list of security issues for my container images?
  • I’m new to containers. Where do I start on securing them?

If any of those questions still need answering at your organization and you have five minutes, you’re in the right place. Let’s dive in.

If you’re reading this you probably already know that Anchore creates developer tools to generate SBOMs, and has been since 2016. Beyond just SBOM generation, Anchore truly shines when it comes to its policy capabilities. Every company operates differently — some need to meet strict compliance standards while others are focused on refining their software development practices for enhanced security. No matter where you’re at in your container security journey today, Anchore’s policy framework can help improve your security practices.

Anchore Enterprise takes a tailored approach to policy and enforcement, which means that whether you’re a healthcare provider abiding by stringent regulations or a startup eager to fortify its digital defenses, Anchore has you covered. Our granular controls allow teams to craft policies that align perfectly with their security goals.

Exporting Policy Reports with Ease

Anchore also has a nifty command line tool called anchorectl that allows you to grab SBOMs and policy results related to those SBOMs. There are a lot of cool things you can do with a little bit of scripting and all the data that Anchore Enterprise stores. We are going to cover one example in this blog.

Once Anchore has created and stored an SBOM for a container image, you can quickly get policy results related to that image. The following anchorectl command will evaluate an image against the docker-cis-benchmark policy bundle:

anchorectl image details <image-id> -p docker-cis-benchmark

That command will return the policy result in a few seconds. But let’s say your organization develops 100 images and you want to meet the CIS benchmark standard. You wouldn’t want to assess each of these images individually; that sounds exhausting.

To solve this problem, we have created a script that can iterate over any number of images, merge the results into a single policy report, and export that report to a CSV file. This allows you to make strategic decisions about how to most effectively move toward compliance with the CIS benchmark (or any other standard).
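The script itself isn’t reproduced here, but conceptually it does something like the following simplified shell sketch (images.txt is a hypothetical file listing one image per line; the real policy-report.py script also merges and normalizes the results):

# Conceptual sketch only -- not the actual policy-report.py script
while read -r image; do
  echo "=== ${image} ==="
  anchorectl image details "${image}" -p docker-cis-benchmark
done < images.txt > combined-policy-results.txt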

In this example, we ran the script against 30 images in our Anchore deployment. Now we can take a holistic look at how far off we are from CIS compliance. Here are a few metrics that stand out:

  • 26 of the 30 images are running as ‘root’
  • 46.9% of our total vulnerabilities have fixes available (4,978/10,611)
  • ADD instructions are being used in 70% of our images
  • Health checks missing in 80% of our images
  • 14 secrets (all from the same application team)
  • 1 malware hit (Cryptominer Casey is at it again)

As a security team member, I didn’t write any of this code myself, which means I need to work with my developer colleagues on the product and application teams to clear up these security issues. Usually this means an email that educates my colleagues on how to utilize health checks, prefer COPY over ADD in Dockerfiles, declare a non-privileged user instead of root, and upgrade packages with fixes available (e.g., via Dependabot). Finally, I would prioritize investigating how that malware made its way into that image myself.

This example illustrates how storing SBOMs and applying policy rules against them at scale can streamline your path to your container security goals.

Visualizing Your Aggregated Policy Reports

While this raw data is useful in and of itself, there are times when you may want to visualize the data in a way that is easier to understand. Anchore Enterprise does provide some dashboarding capabilities, but it is not, and does not aim to be, a general-purpose dashboarding tool. This is where an observability vendor comes in handy.

In this example, I’ll be using New Relic as they provide a free tier that you can sign up for and begin using immediately. However, other providers such as Datadog and Grafana would also work quite well for this use case. 

Importing your Data

  1. Download the tsv-to-json.py script
  2. Save the data produced by the policy-report.py script as a TSV file
    • We use TABs as the separator because commas are used in many of the items contained in the report.
  3. Run the tsv-to-json.py script against the TSV file (a minimal sketch of such a conversion script appears after these steps):
python3 tsv-to-json.py aggregated_output.tsv > test.json
  4. Sign up for a New Relic account here
  5. Find your New Relic Account ID and License Key
    • Your New Relic Account ID can be seen in your browser’s address bar upon logging in to New Relic, and your New Relic License Key can be found on the right-hand side of the screen upon initial login to your New Relic account.
  6. Use curl to push the data to New Relic:
gzip -c test.json | curl \
-X POST \
-H "Content-Type: application/json" \
-H "Api-Key: <YOUR_NEWRELIC_LICENSE_KEY>" \
-H "Content-Encoding: gzip" \
https://insights-collector.newrelic.com/v1/accounts/<YOUR_NEWRELIC_ACCOUNT_ID>/events \
--data-binary @-
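
As promised above, here is a minimal sketch of what a conversion script along these lines might look like. It assumes the first TSV row contains column headers and simply turns each row into a New Relic event whose eventType combines “Anchore” with a timestamp (as described in the next section); the actual tsv-to-json.py script may differ in its details.

#!/usr/bin/env python3
# Minimal sketch of a TSV-to-New-Relic-events conversion (illustrative only;
# the real tsv-to-json.py may handle more edge cases).
import csv
import json
import sys
import time

# Event type name combining "Anchore" with a timestamp, e.g. Anchore1698686488
event_type = f"Anchore{int(time.time())}"

with open(sys.argv[1], newline="") as tsv_file:
    # The first row of the TSV file is treated as the column names
    rows = csv.DictReader(tsv_file, delimiter="\t")
    events = [{"eventType": event_type, **row} for row in rows]

# Print a JSON array of events, ready to be gzipped and POSTed to the Event API
print(json.dumps(events))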

Visualizing Your Data

New Relic uses the New Relic Query Language (NRQL) to perform queries and render charts based on the resulting data set.  The tsv-to-json.py script you ran earlier converted your TSV file into a JSON file compatible with New Relic’s event data type.  You can think of each collection of events as a table in a SQL database.  The tsv-to-json.py script will automatically create an event type for you, combining the string “Anchore” with a timestamp.

To create a dashboard in New Relic containing charts, you’ll need to write some NRQL queries.  Here is a quick example:

FROM Anchore1698686488 SELECT count(*) FACET severity

This query will count the total number of entries in the event type named Anchore1698686488 and group them by the associated vulnerability’s severity. You can experiment with creating your own, or start by importing a template we have created for you here.

Wrap-Up

The security data that your tools create is only as good as the insights you are able to derive from it. In this blog post, we covered a way to help security practitioners turn a mountain of security data into actionable and prioritized security insights, which can help your organization improve its security posture and meet compliance standards more quickly. That being said, this blog does depend on you already being a customer of Anchore Enterprise.

Looking to learn more about how to utilize a policy-based security posture to meet DoD compliance standards like cATO or CMMC? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories. Get caught up with the content below:

Your Guide to Software Compliance, from Federal Policy to Industry Standards

Let’s be real, cybersecurity compliance is like health insurance: massively complicated, mind-numbing to learn about and really important when something goes wrong. Complying with cybersecurity laws has only become more challenging in the past few years as the US federal government and European Union have both been accelerating their efforts to modernize cybersecurity legislation and regulations.

This accelerating pace of influence and involvement of governments worldwide is impacting all businesses that use software to operate (which is to say, all businesses). Not only because the government is being more prescriptive with the requirements that have to be met in order to operate a business but also because of the financial penalties involved with non-compliance.

This guide will help you understand how cybersecurity laws and regulations impact your businesses and how to think about cybersecurity compliance so you don’t run afoul of non-compliance fines.

What is Cybersecurity Compliance?

Cybersecurity compliance is the practice of conforming to established standards, regulations, and laws to protect digital information and systems from cybersecurity threats. By implementing specific policies, procedures, and controls, organizations meet the requirements set by various governing bodies. This enables these organizations to demonstrate their commitment to cybersecurity best practices and legal mandates.

Consider the construction of a house. Just as architects and builders follow blueprints and building codes to ensure the house is safe, sturdy, and functional, cybersecurity compliance serves as the “blueprint” for organizations in the digital world. These guidelines and standards ensure that the organization’s digital “structure” is secure, resilient, and trustworthy. By adhering to these blueprints, organizations not only protect their assets but also create a foundation of trust with their stakeholders, much like a well-built house stands strong and provides shelter for its inhabitants.

Why is Cybersecurity Compliance Important?

At its core, the importance of cybersecurity compliance can be distilled into one critical aspect: the financial well-being of an organization. Typically when we list the benefits of cybersecurity compliance, we are forced to use imprecise ideas like “enhanced trust” or “reputational safeguarding,” but the common thread connecting all these benefits is the tangible and direct impact on an organization’s bottom line. In this case, it is easier to understand the benefits of cybersecurity compliance by instead looking at the consequences of non-compliance.

  • Direct Financial Penalties: Regulatory bodies can impose substantial fines on organizations that neglect cybersecurity standards. According to the IBM Cost of a Data Breach Report 2023, the average company can expect to pay approximately $40,000 USD in fines due to a data breach. The emphasis here is that this figure is an average; a black swan event can lead to a significantly different outcome. A prime example of this is the TJX Companies data breach in 2006. TJX faced a staggering fine of $40.9 million for non-compliance with PCI DSS standards after the exposure of credit card information of more than 45 million customers.
  • Operational Disruptions: Incidents like ransomware attacks can halt operations, leading to significant revenue loss.
  • Loss of Customer Trust: A single data breach can result in a mass exodus of clientele, leading to decreased revenue.
  • Reputational Damage: The long-term financial effects of a tarnished reputation can be devastating, from stock price drops to reduced market share.
  • Legal Fees: Lawsuits from affected parties can result in additional financial burdens.
  • Recovery Costs: Addressing a cyber incident, from forensic investigations to public relations efforts, can be expensive.
  • Missed Opportunities: Non-compliance can lead to lost contracts and business opportunities, especially with entities that mandate cybersecurity standards.

An Overview of Cybersecurity Laws and Legislation

This section will give a high-level overview of cybersecurity laws, standards and the governing bodies that exert their influence on these laws and standards.

Government Agencies that Influence Cybersecurity Regulations

Navigating the complex terrain of cybersecurity regulations in the United States is akin to understanding a vast network of interlinked agencies, each with its own charter to protect various facets of the nation’s digital and physical infrastructure. This ecosystem is a tapestry woven with the threads of policy, enforcement, and standardization, where agencies like the Cybersecurity and Infrastructure Security Agency (CISA), the National Institute of Standards and Technology (NIST), and the Department of Defense (DoD) play pivotal roles in crafting the guidelines and directives that shape the nation’s defense against cyber threats.

The White House and legislative bodies contribute to this web by issuing executive orders and laws that direct the course of cybersecurity policy, while international standards bodies such as the International Organization for Standardization (ISO) offer a global perspective on best practices. Together, these entities form a collaborative framework that influences the development, enforcement, and evolution of cybersecurity laws and standards, ensuring a unified approach to protecting the integrity, confidentiality, and availability of information systems and data.

  1. Cybersecurity and Infrastructure Security Agency (CISA)
    • Branch of the Department of Homeland Security (DHS) that oversees cybersecurity for critical infrastructure for the US federal government
    • Houses critical cybersecurity services, such as the National Cybersecurity and Communications Integration Center (NCCIC), United States Computer Emergency Readiness Team (US-CERT), National Coordinating Center for Communications (NCC) and NCCIC Operations & Integration (NO&I)
    • Issues Binding Operational Directives, such as BOD 22-01: Reducing the Significant Risk of Known Exploited Vulnerabilities, which require federal agencies to take action
  2. National Institute of Standards and Technology (NIST)
  3. Department of Defense (DoD)
    • Enforces the Defense Federal Acquisition Regulation Supplement (DFARS), which mandates NIST SP 800-171 compliance for defense contractors
    • Introduced the Cybersecurity Maturity Model Certification (CMMC) for the defense industrial base (DIB), which builds a certification around the security controls in NIST SP 800-171
    • Releases memorandums that amend other cybersecurity laws and standards specific to the defense industrial base (DIB), such as the Continuous Authorization To Operate (cATO) memo
  4. The White House
    • Issues executive orders (EOs) that direct federal agencies to take specific actions related to cybersecurity (e.g., in May 2021, President Biden issued the “Executive Order on Improving the Nation’s Cybersecurity”)
    • Launches policy initiatives that prioritize cybersecurity, leading to the development of new regulations or the enhancement of existing ones
    • Releases strategy documents to align agencies around a national vision for cybersecurity (e.g., the National Cybersecurity Strategy)
  5. International Organization for Standardization (ISO)
    • Develops and publishes international standards, including those related to information security
    • Roughly equivalent to NIST but for European countries
    • Influence extends beyond Europe in practice, though not officially
  6. European Union Agency for Cybersecurity (ENISA)
    • EU’s agency dedicated to achieving a high common level of cybersecurity across member states
    • Roughly equivalent to CISA but for European states
  7. The Federal Bureau of Investigation (FBI)
    • Investigates cyber attacks, including those by nation-states, hacktivists, and criminals; investigations can set legal precedent
    • Leads the National Cyber Investigative Joint Task Force (NCIJTF) to coordinate interagency investigation efforts
    • Collaborates with businesses, academic institutions, and other organizations to share threat intelligence and best practices through the InfraGard program
  8. Federal Trade Commission (FTC)
    • Takes legal action against companies failing to protect consumer data
    • Publishes guidance for businesses on how to protect consumer data and ensure privacy
    • Recommends new legislation or changes to existing laws related to consumer data protection and cybersecurity
  9. U.S. Secret Service
    • Investigates cyber crimes, specifically financial crimes; investigations can set legal precedent
    • Manages the Electronic Crimes Task Forces (ECTFs) focusing on cyber intrusions, bank fraud, and data breaches
  10. National Security Agency (NSA)
    • Collects and analyzes signals intelligence (SIGINT) related to cyber threats
    • Established the Cybersecurity Directorate to unify foreign intelligence and cyber defense missions for national security systems and the defense industrial base (DIB)
    • Conducts extensive research in cybersecurity, cryptography, and related fields; innovations and findings from this research often influence broader cybersecurity standards and practices
  11. Department of Health and Human Services (HHS)
    • Enforces the Health Insurance Portability and Accountability Act (HIPAA), ensuring the protection of health information
    • Oversees the Office for Civil Rights (OCR), which enforces HIPAA’s Privacy and Security Rules
  12. Food and Drug Administration (FDA)
    • Regulates the cybersecurity of medical devices, specifically Internet of Things (IoT) devices
    • Provides guidance to manufacturers on cybersecurity considerations for medical devices
  13. Securities and Exchange Commission (SEC)
    • Requires public companies to disclose material cybersecurity risks and incidents
    • Enforces the Sarbanes-Oxley Act (SOX) implications for cybersecurity, ensuring the integrity of financial data

U.S. Cybersecurity Laws and Standards to Know

Navigating the complex web of U.S. cybersecurity regulations can often feel like wading through an alphabet soup of acronyms. We have tried to highlight some of the most important and give context on how the laws, standards and regulations interact, overlap or build on each other.

  1. Federal Information Security Management Act (FISMA)
    • Law that requires federal agencies and their contractors to implement comprehensive cybersecurity measures
    • Many of the standards and recommendations of the NIST Special Publication series on cybersecurity are a response to the mandate of FISMA
  2. Federal Risk and Authorization Management Program (FedRAMP)
    • Standard for assessing the security of cloud/SaaS products and services used by federal agencies
    • Certification is the manifestation of the FISMA law
  3. Defense Federal Acquisition Regulation Supplement (DFARS)
  4. Cybersecurity Maturity Model Certification (CMMC)
    • Certification to prove that DoD contractors are in compliance with the cybersecurity practices and processes required in DFARS
    • For many years DFARS was not enforced; CMMC is the certification process created to close this gap
  5. SOC 2 (System and Organization Controls 2)
    • Compliance framework for auditing and reporting on controls related to the security, availability, confidentiality, and privacy of a system
    • Very popular certification for cloud/SaaS companies to maintain as a way to assure clients that their information is managed in a secure and compliant manner
  6. Payment Card Industry Data Security Standard (PCI DSS)
    • Establishes security standards for organizations that handle credit cards
    • Organizations must comply with this security standard in order to process or store payment data
  7. Health Insurance Portability and Accountability Act (HIPAA)
    • Protects the privacy and security of health information for consumers
    • Organizations must comply with this security standard in order to process or store electronic health records
  8. NIST Cybersecurity Framework
    • Provides a policy framework to guide private sector organizations in the U.S. to assess and improve their ability to prevent, detect, and respond to cyber incidents
    • While voluntary, many organizations adopt this framework to enhance their cybersecurity posture
  9. NIST Secure Software Development Framework
    • Standardized, industry-agnostic set of best practices that can be integrated into any software development process to mitigate the risk of vulnerabilities and improve the security of software products
    • More specific security controls than NIST 800-53 that still meet the controls outlined in the Control Catalog regarding secure software development practices
  10. CCPA (California Consumer Privacy Act)
    • Statute to enhance privacy rights and consumer protection to prevent the misuse of consumer data
    • While only applicable to businesses operating in California, it is considered the most likely candidate to be adopted by other states
  11. Gramm-Leach-Bliley Act (GLBA)
    • Protects consumers’ personal financial information held by financial institutions
    • Financial institutions must explain their information-sharing practices and safeguard sensitive data
  12. Sarbanes-Oxley Act (SOX)
    • Addresses corporate accounting scandals and mandates accurate financial reporting
    • Public companies must implement stringent measures to ensure the accuracy and integrity of financial data
  13. Children’s Online Privacy Protection Act (COPPA)
    • Protects the online privacy of children under 13
    • Websites and online services targeting children must obtain parental consent before collecting personally identifiable information (PII)

EU Cybersecurity Laws and Standards to Know

  1. EU 881/2019 (Cybersecurity Act)
    • The law that codifies the mandate for ENISA to assist EU member states in dealing with cybersecurity issues and promote cooperation
    • Creates an EU-wide cybersecurity certification framework for member states to aim for when creating their own local legislation
  2. NIS2 (Revised Directive on Security of Network and Information Systems)
    • A law that requires a high level of security for network and information systems across various sectors in the EU
    • A more specific set of security requirements than the cybersecurity certification framework of the Cybersecurity Act
  3. ISO/IEC 27001
    • An international standard that provides the criteria for establishing, implementing, maintaining, and continuously improving an information security management system (ISMS)
    • Roughly equivalent to NIST 800-37, the Risk Management Framework
    • Also includes a compliance and certification component; when combined with ISO/IEC 27002 it is roughly equivalent to FedRAMP
  4. ISO/IEC 27002
    • An international standard that provides more specific controls and best practices that assist in meeting the more general requirements outlined in ISO/IEC 27001
    • Roughly equivalent to NIST 800-53, the Control Catalog
  5. General Data Protection Regulation (GDPR)
    • A comprehensive data protection and privacy law
    • Non-compliance can result in significant fines, up to 4% of an organization’s annual global turnover or €20 million (whichever is greater)

How to Streamline Cybersecurity Compliance in your Organization

Ensuring cybersecurity compliance is a multifaceted challenge that requires a strategic approach tailored to an organization’s unique operational landscape. The first step is to identify the specific laws and regulations applicable to your organization, which can vary based on geography, industry, and business model. Whether it’s adhering to financial regulations like GLBA and SOX, healthcare standards such as HIPAA, or public sector requirements like FedRAMP and CMMC, understanding your compliance obligations is crucial. 

While this guide can’t give prescriptive steps for any organization to meet their individual needs, we have put together a high-level set of steps to consider when developing a cybersecurity compliance program.

Determine Which Laws and Regulations Apply to Your Organization

  1. Geography
    • US-only; if your business only operates in the United States then you only need to be focused on compliance with US laws
    • EU-only; if your business only operates in the European Union then you only need to be focused on compliance with EU laws
    • Global; if your business operates in both jurisdictions then you’ll need to consider compliance with both laws
  2. Industry
    • Financial Services; financial services firms have to comply with the GLBA and SOX laws but if they don’t process credit card payments they might not need to be concerned with PCI-DSS
    • E-commerce; any organization that processes payments, especially via credit card, will need to adhere to PCI-DSS but likely not many other compliance frameworks
    • Healthcare; any organization that processes or stores data that is defined as protected health information (PHI) will need to comply with HIPAA requirements
    • Federal; any organization that wants to do business with a federal agency will need to be FedRAMP compliant
    • Defense; any defense contractor that wants to do business with the DoD will need to maintain CMMC compliance
    • B2B; there isn’t a law that mandates cybersecurity compliance for B2B relationships but many companies will only do business with companies that maintain SOC2 compliance
  3. Business Model
    • Data storage; if your organization stores data but does not process or transmit the data then your requirements will differ. For example, if you offer a cloud-based data storage service and a customer uses your service to store PHI, they are required to be HIPAA-compliant but you are considered a Business Associate and do not need to comply with HIPAA specifically
    • Data processing; if your organization processes data but does not store the data then your requirements will differ. For example, if you process credit card transactions but don’t store the credit card information you will probably need to comply with PCI-DSS but maybe not GLBA and SOX
    • Data transmission; if your organization transmits data but does not process or store the data then your requirements will differ. For example, if you run an internet service provider (ISP), credit card transactions and PHI will traverse your network, but you won’t need to be HIPAA or PCI-DSS compliant

Conduct a Gap Analysis

Current State Assessment: Evaluate the current cybersecurity posture and practices against the required standards and regulations.

Identify Gaps: Highlight areas where the organization does not meet required standards.

These steps can either be done manually or automatically. Anchore Enterprise offers organizations an automated, policy-based approach to scanning their entire application ecosystem and identifying which software is non-compliant with a specific framework.

If you’re interested to learn more, check out our webinar titled “Policy-Based Compliance for Containers: CIS, NIST, and More”.

Prioritize Compliance Needs

Risk-based Approach: Prioritize gaps based on risk. Address high-risk areas first.

Business Impact: Consider the potential business impact of non-compliance, such as fines, reputational damage, or business disruption.

Develop a Compliance Roadmap

Short-term Goals: Address immediate compliance requirements and any quick wins.

Long-term Goals: Plan for ongoing compliance needs, continuous monitoring, and future regulatory changes.

Implement Controls and Solutions

Technical Controls: Deploy cybersecurity solutions that align with compliance requirements, such as encryption, firewalls, intrusion detection systems, etc.

Procedural Controls: Establish and document processes and procedures that support compliance, such as incident response plans or data handling procedures.

Another important security solution, specifically targeting software supply chain security, is a vulnerability scanner. Anchore Enterprise is a modern, SBOM-based software composition analysis platform that combines software vulnerability scanning with a monitoring solution and a policy-based component to automate the management of software vulnerabilities and regulation compliance.

If you’re interested to learn more, we have detailed our strategy in a blog, titled “A Policy Based Approach to Container Security & Compliance” and spelled out the benefits in a separate blog post called, “The Power of Policy-as-Code for the Public Sector”.

Monitor and Audit

Continuous Monitoring: Use tools and solutions to continuously monitor the IT environment for compliance.

Regular Audits: Conduct internal and external audits to ensure compliance and identify areas for improvement.

Being able to find vulnerabilities with a scanner at a point in time or evaluate a system against specific compliance policies is a great first step for a security program. Being able to do each of these things continuously in an automated fashion and be able to know the exact state of your system at any point in time is even better. Anchore Enterprise is capable of integrating security and compliance features into a continuously updated dashboard enabling minute by minute insight into the security and compliance of a software system.

Document Everything

Maintain comprehensive documentation of all compliance-related activities, decisions, and justifications. This is crucial for demonstrating compliance during audits.

Engage with Stakeholders

Regularly communicate with internal stakeholders (e.g., executive team, IT, legal) and external ones (e.g., regulators, auditors) to ensure alignment and address concerns.

Review and Adapt

Stay Updated: Regulatory landscapes and cybersecurity threats evolve. Stay updated on changes to ensure continued compliance.

Feedback Loop: Use insights from audits, incidents, and feedback to refine the compliance strategy.

How Anchore Can Help

Anchore is a leading software supply chain security company that has built a modern, SBOM-powered software composition analysis (SCA) platform that helps organizations meet and exceed the security standards in the above guide.

As we have learned working with Fortune 100 enterprises and federal agencies, including the Department of Defense, an organization’s supply chain security can only be as good as the depth of their data on their supply chain and the automation of processing the raw data into actionable insights. Anchore Enterprise provides an end-to-end software supply chain security system with total visibility, deep inspection, automated enforcement, expedited remediation and trusted reporting to deliver the actionable insights to make a software system compliant.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Additional Compliance Resources

Introducing Anchore Enterprise 5.0

Today, we are pleased to announce the release of Anchore Enterprise 5.0 which is now Generally Available for download. This is a major release representing a step change from the Anchore Enterprise 4.x code base, which contains several new features and improvements to the foundational API. 

It’s been over a year and a half since Anchore Enterprise 4.0 was released, and it’s been a tumultuous time in software security. The rate of critical flaws being discovered in open source software is ever increasing, and the regulatory response driven by the U.S. government’s Executive Order is now being felt in the market. We’ve always been proud of our customer base and have worked hard to ensure that we are delivering real value to them in response to these dynamics. The improvements to security posture usually come less from novel techniques than from making the existing hard tasks easier. We’d like to thank all our customers who contributed their feedback and insights into making 5.0 the foundation for their security workflows.

Anchore Enterprise 5.0 continues our mission of delivering new features on a fast and regular cadence to our customers while also giving us the opportunity to redesign some of the core parts of the product to make the day to day life of operators and users easier. 5.0 will now be the foundation for a series of major new features we are planning over the next 12 months. 

Simplified Reporting 

We have introduced a new design to the reporting section of the graphical user interface. The underlying functionality is the same as with 4.x but the UI now features a more intuitive layout to create and manage your reports. New reports start with a clean layout where new filters can be added and then saved as a template. Scheduled and unscheduled reports are summarized in a single clean overview.

Single, unified API and Helm Chart for simpler operations

Previously, Anchore exposed its capabilities through multiple API endpoints, making it hard to create integration workflows. 5.0 now unifies them under the new v2 API and makes them available under a single endpoint. This new API makes it easier to create scripts and code without coding to different endpoints. In addition, we’ve created a new streamlined Helm chart for deploying Anchore in Kubernetes environments to ensure that all configuration options are easily accessed in a single location.

Easier Vulnerability Matching Logic

Reducing false positives is an ever-present goal for every security team. Based on Syft and Grype, our flagship open source projects, we are continually evaluating the best logic and vulnerability feeds for the highest quality results. With 5.0, we’ve made it easier for users to control which vulnerability feeds should be used for which language ecosystems. New sources such as GitHub’s Advisory Database often provide a higher quality experience for Java, which continues to be ubiquitous in the enterprise.

We invite you to learn more about Anchore Enterprise 5.0 in a demo call with one of our specialists or sign up for a free 15 day trial here.

SBOMs & Vulnerability Scanners: Better Together

In the world of software development, two mega-trends have emerged in the past decade that have reshaped the industry. First the practice of building applications with a foundation of open-source software components and, second, the adoption of DevOps principles to automate the build and delivery of software. While these innovations have accelerated the pace of software getting into the hands of users, they’ve also introduced new challenges, particularly in the realm of security. 

As software teams race to deliver applications at breakneck speeds, security often finds itself playing catch-up, leading to potential vulnerabilities and risks. But what if there was a way to harmonize rapid software delivery with robust security measures? 

In this post, we’ll explore the tension between engineering and security, the transformative role of Software Bill of Materials (SBOMs), and how modern approaches to software composition analysis (SCA) are paving the way for a secure, efficient, and integrated software development lifecycle.

The rise of open-source software ushered in an era where developers had innumerable off-the-shelf components to construct their applications from. These building blocks eliminated the need to reinvent the wheel, allowing developers to focus on innovating on top of the already existing foundation that had been built by others. By leveraging pre-existing, community-tested components, software teams could drastically reduce development time, ensuring faster product releases and more efficient engineering cycles. However, this boon also brought about a significant challenge: blindspots. Developers often found themselves unaware of all the ingredients that made up their software.

Enter the second mega-trend: DevOps tools, with special emphasis on CI/CD build pipelines. These tools promised (and delivered) faster, more reliable software testing, building, and delivery. This ultimately meant that not only was the creation of software accelerated via open-source components, but the build process of manufacturing the software into a state that a user could consume was also sped up. But, as Uncle Ben reminds us, “with great power comes great responsibility”. The accelerated delivery meant that any security issues, especially those lurking in the blindspots, found their way into production at the new accelerated pace that was enabled through open-source software components and DevOps tooling.

The Strain on Legacy Security Tools in the Age of Rapid Development

This double-shot of productivity boosts to engineering teams began to strain their security-oriented counterparts. The legacy security tools that security teams had been relying on were designed for a different era. They were created when software development lifecycles were measured in quarters or years rather than weeks or months. Because of this, they could afford to be leisurely with their process.

The tools originally developed to ensure that an application’s supply chain was secure were called software composition analysis (SCA) platforms. They began as a method for scanning open source software for licensing information, to prevent corporations from running into legal issues as their developers used open-source components. They scanned every software artifact in its entirety, a painstakingly slow process, especially if you wanted to run a scan during every step of software integration and delivery (e.g. source, build, stage, delivery, production).

As the wave of open-source software and DevOps principles took hold, a tug-of-war began to form between security teams, who wanted thoroughness, and software teams, who were racing against time. Organizations found themselves at a crossroads, choosing between slowing down software delivery to manage security risks or pushing ahead and addressing security issues reactively.

SBOMs to the Rescue!

But what if there was a way to bridge this gap? Enter the Software Bill of Materials (SBOM). An SBOM is essentially a comprehensive list of components, libraries, and modules that make up a software application. Think of it as an ingredient list for your software, detailing every component and its origin.

In the past, security teams had to scan each software artifact during the build process for vulnerabilities, a method that was not only time-consuming but also less efficient. With the sheer volume and complexity of modern software, this approach was akin to searching for a needle in a haystack.

SBOMs, on the other hand, provide a clear and organized view of all software components. This clarity allows security teams to swiftly scan their software component inventory, pinpointing potential vulnerabilities with precision. The result? A revolution in the vulnerability scanning process. Faster scans meant more frequent checks. And with the ability to re-scan their entire catalog of applications whenever a new vulnerability is discovered, organizations are always a step ahead, ensuring they’re not just reactive but proactive in their security approach.

In essence, organizations could now enjoy the best of both worlds: rapid software delivery without compromising on security. With SBOMs, the balance between speed and security isn’t just achievable; it’s the new standard.

How do I Implement an SBOM-powered Vulnerability Scanning Program?

Okay, we have the context (i.e. the history of how the problem came about) and we have a solution; the next question becomes how you bring this all together to integrate this vision of the future with the reality of your software development lifecycle.

Below, we outline the high-level steps of how an organization might begin to adopt this solution into their software integration and delivery processes (a minimal, hedged sketch of a few of these steps follows the list):

  1. Research and select the best SBOM generation and vulnerability scanning tools. (Hint: We have some favorites!)
  2. Educate your developers about SBOMs. Need guidance? Check out our detailed post on getting started with SBOMs.
  3. Store the generated SBOMs in a centralized repository.
  4. Create a system to pull vulnerability feeds from reputable sources. If you’re looking for a way to get started here, read our post on how to get started.
  5. Regularly scan your catalog of SBOMs for vulnerabilities, storing the results alongside the SBOMs.
  6. Integrate your SBOM generation and vulnerability scanning tooling into your CI/CD build pipeline to automate this process.
  7. Implement a query system to extract insights from your catalog of SBOMs.
  8. Create a tool to visualize your software supply chain’s security health.
  9. Create a system to alert on newly discovered vulnerabilities in your application ecosystem.
  10. Integrate a policy enforcement system into your developers’ workflows, CI/CD pipelines, and container orchestrators to automatically prevent vulnerabilities from leaking into build or production environments.
  11. Maintain the entire system and continue to improve on it as new vulnerabilities are discovered, new technologies emerge and development processes evolve.
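
As a rough illustration of a few of these steps (tool selection, SBOM storage, scanning, and a simple policy gate), the sketch below generates an SBOM for a container image with Syft, stores it on disk, and scans the stored SBOM with Grype. The image name, storage directory, and the “fail on critical” rule are placeholders, not a prescription for how your pipeline should look.

#!/usr/bin/env python3
# Sketch: generate an SBOM once per image, store it, then scan the stored SBOM.
# The image name, storage directory, and failure rule below are illustrative.
import json
import pathlib
import subprocess
import sys

IMAGE = "registry.example.com/app:1.2.3"   # placeholder image reference
SBOM_DIR = pathlib.Path("sbom-store")      # stand-in for a real SBOM repository
SBOM_DIR.mkdir(exist_ok=True)
sbom_path = SBOM_DIR / (IMAGE.replace("/", "_").replace(":", "_") + ".cdx.json")

# Generate and store the SBOM (CycloneDX JSON via Syft)
sbom = subprocess.run(["syft", IMAGE, "-o", "cyclonedx-json"],
                      capture_output=True, text=True, check=True).stdout
sbom_path.write_text(sbom)

# Scan the stored SBOM with Grype and gate the build on the result
scan = subprocess.run(["grype", f"sbom:{sbom_path}", "-o", "json"],
                      capture_output=True, text=True, check=True).stdout
matches = json.loads(scan).get("matches", [])
critical = [m for m in matches
            if m["vulnerability"]["severity"].lower() == "critical"]
print(f"{len(matches)} findings, {len(critical)} critical")
sys.exit(1 if critical else 0)             # simple example of a policy gate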

Alternatively, consider investing in a comprehensive platform that offers all these features, either as a SaaS or on-premise solution, instead of building this entire system yourself. If you need some guidance trying to determine whether it makes more sense to build or buy, we have put together a post outlining the key signs to watch for when considering whether to outsource this function.

How Anchore can Help you Achieve your Vulnerability Scanning Dreams

The previous section is a bit tongue-in-cheek but it is also a realistic portrait of how to build a scalable vulnerability scanning program in the Cloud Native-era. Open-source software and container pipelines have changed the face of the software industry for the better but as with any complex system there are always unintended side effects. Being able to deliver software more reliably at a faster cadence was an amazing step forward but doing it securely got left behind. 

Anchore Enterprise was built specifically to address this challenge. It is the manifestation of the list of steps outlined in the previous section on how to build an SBOM-powered software composition analysis (SCA) platform. Integrating into your existing DevOps tools, Anchore Enterprise is a turnkey solution for the management of software supply chain security. If you’d rather buy than build and save yourself the blood, sweat and tears that goes into designing an end-to-end SCA platform, we’re looking forward to talking to you.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Guide to SBOMs: What They are and Their Role in Cybersecurity

In the dynamic landscape of software development, the past decade has witnessed two transformative shifts that have redefined the industry’s trajectory. The first is the widespread adoption of open-source software components, providing developers with a vast repository of pre-built modules to streamline their work. The second is the embrace of DevOps principles, automating and accelerating the software build and delivery process. Together, these shifts promised unprecedented efficiency and speed. However, they also introduced a labyrinth of complexity, with software compositions becoming increasingly intricate and opaque. 

This complexity, coupled with the relentless pace of modern development cycles, created a pressing need for a solution that could offer clarity amidst the chaos. This is the backdrop against which the Software Bill of Materials (SBOM) emerged. This guide delves into the who, what, why and how of SBOMs. Whether you’re a developer, a security professional, or simply someone keen on understanding the backbone of modern software security, this guide offers insights that will equip you with the knowledge to navigate all of the gory details of SBOMs.

What is a Software Bill of Materials (SBOM)? 

A software bill of materials (SBOM) is a structured list of software components, modules, and libraries that are included in an application. Similar to the nutrition labels on the back of the foods that you buy, SBOMs are a list of ingredients that the software is composed of. We normally think of SBOMs as an artifact of the software development process. As a developer is building an application using different open-source components they are also creating a list of ingredients, an SBOM is the digital artifact of this list.

To fully extend the metaphor, creating a modern software application is analogous to crafting a gourmet dish. When you savor a dish at a restaurant, what you experience is the final, delicious result. Behind that dish, however, is a complex blend of ingredients sourced from various producers, each contributing to the dish’s unique flavor profile. Just as a dish might have tomatoes from Italy, spices from India, olive oil from Spain, and fresh herbs from a local garden, a software application is concocted from individual software components (i.e., software dependencies). These components, like ingredients in a dish, are meticulously combined to create the final product. Similarly, while you interact with a seamless software interface, behind the scenes, it’s an intricate assembly of diverse open source software components working in harmony.

Why are SBOMs important?

SBOMs are one of the most powerful security tools that you can use. Large-scale software supply chain attacks that affected SolarWinds, Codecov, and Log4j highlight the need for organizations to understand the software components—and the associated risk—of the software they create or use. SBOMs are critical not only for identifying security vulnerabilities and risks in software. They are also key for understanding how that software changes over time, potentially introducing new risks or threats. 

Knowing what’s in software is the first step to securing it. Increasingly organizations are developing and using cloud-native software that runs in containers. Consider the complexity of these containerized applications that have hundreds—sometimes thousands—of components from commercial vendors, partners, custom-built software, and open source software (OSS). Each of these pieces is a potential source of risk and vulnerabilities.

Generating SBOMs enables you to create a trackable inventory of these components. Yet, despite the importance of SBOMs for container security practices, only 36% of the respondents to the Anchore 2022 Software Supply Chain Report produce an SBOM for the containerized apps they build, and only 27% require an SBOM from their software suppliers.

SBOMs Use-Cases

An organization can use SBOMs for many purposes. The data inside an SBOM has internal uses such as:

  • Compliance review
  • Security assessments
  • License compliance
  • Quality assurance

Additionally, you can share an SBOM externally for compliance and customer audits. Within the security and development role, SBOMs serve a similar purpose as a bill of materials in other industries. For example, automotive manufacturers must track the tens of thousands of parts coming from a wide range of suppliers when manufacturing a modern car. All it takes is one faulty part to ruin the final product.

Cloud-native software faces similar challenges. Modern applications use significant amounts of open source software that depends on other open source software components which in turn incorporate further open source components. They also include internally developed code, commercial software, and custom software developed by partners. 

Combining components and code from such a wide range of sources introduces additional risks and potential for vulnerabilities at each step in the software development lifecycle. As a result, SBOMs become a critical foundation for getting a full picture of the “ingredients” in any software application over the course of the development lifecycle.

Collecting SBOMs from software suppliers and generating SBOMs throughout the process to track component inventory changes and identify security issues is an integral first step to ensuring the overall security of your applications.

Security and development teams can either request SBOMs from their software suppliers or generate an SBOM themselves. Having the ability to generate SBOMs internally is currently the more optimal approach. This way teams can produce multiple SBOMs throughout the development process to track component changes and search for vulnerabilities as new issues become known in software.

SBOMs can help to alleviate the challenges faced by both developers and security teams by:

  • Understanding risk exposure inherent in open source and third-party tools
  • Reducing development time and cost by exposing and remediating issues earlier in the cycle
  • Identifying license and compliance requirements

Ultimately, SBOMs are a source of truth. To ensure product integrity, development and security teams must quickly and accurately establish:

  • Specific versions of the software in use
  • Location of the software in their builds and existing products

SBOM Security Benefits

There are many SBOM security benefits for your organization. At the core of any effective approach to securing your software supply chain is transparency. Let’s dive into what SBOM security means with regards to these ingredients and why transparency is so vital.

Transparency = Discovering What is in There

It all starts with knowing what software is being used. You need an accurate list of “ingredients” (such as libraries, packages, and files) that are included in a piece of software. This list of “ingredients” is known as a software bill of materials. Once you have an SBOM for any piece of software you create or use, you can begin to answer critical questions about the security of your software supply chain.

It’s important to note that SBOMs themselves can also serve as input to other types of analyses. A noteworthy example of this is vulnerability scanning. Typically, vulnerability scanning is a term for discovering known security problems with a piece of software based on previously published vulnerability reports. Detecting and mitigating vulnerabilities goes a long way toward preventing security incidents.

In the case of software deployed in containers, developers can use SBOMs and vulnerability scans together to provide better transparency into container images. When performing these two types of analyses within a CI/CD pipeline, you need to realize two things:

Each time you create a new container image (i.e. an image with a unique digest), you only need to generate an SBOM once. And that SBOM can be forever associated with that unique image. 

Even though that unique image never changes, it’s vital to continually scan for vulnerabilities. Many people scan for vulnerabilities once an image is built, and then move on. But new vulnerabilities are discovered and published every day (literally) — so it’s vital to periodically scan any existing images you’re already consuming or distributing to identify if they are impacted by new vulnerabilities. Using an SBOM means you can quickly and confidently scan an application for new vulnerabilities.
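
As a brief, hedged illustration of that second point, the loop below re-scans a directory of previously generated SBOMs with Grype so that existing images are checked against the latest vulnerability data. The “sbom-store” directory name is a placeholder; in practice this would run on a schedule (nightly, for example) rather than by hand.

#!/usr/bin/env python3
# Sketch: periodically re-scan stored SBOMs against the latest vulnerability data.
# The "sbom-store" directory is a placeholder for wherever your SBOMs are kept.
import pathlib
import subprocess

for sbom_path in sorted(pathlib.Path("sbom-store").glob("*.json")):
    print(f"== {sbom_path.name} ==")
    # Grype scans the stored SBOM directly; no need to pull or rebuild the image
    subprocess.run(["grype", f"sbom:{sbom_path}", "-o", "table"], check=False)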

Why SBOMs Matter for Software Supply Chain Security

Today’s software is complex; that is why SBOMs have become the foundation of software supply chain security. The role of an SBOM is to provide transparency about the software components of an application, providing a foundation for vulnerability analysis and other security assessments.

For example, organizations that have a comprehensive SBOM for every software application they buy or build can instantly identify the impact of new zero-day vulnerabilities, such as the Log4Shell vulnerability in Log4j, and discern its exact location for faster remediation. Similarly, they can evaluate the provenance and operational risk of open source components to comply with internal policies or industry standards. These are critical capabilities when it comes to maintaining and actively managing a secure software supply chain. 

The importance of the SBOM was highlighted in the 2021 U.S. Executive Order to Improve the Nation’s Cybersecurity. The Executive Order directs federal agencies to “publish minimum SBOM standard” and define criteria regarding “providing a purchaser a software bill of materials (SBOM) directly or publish to a public website.” This Executive Order is having a ripple effect across the industry, as software suppliers that sell to the U.S. federal government will increasingly need to provide SBOMs for the software they deliver. Over time these standards will spread as companies in other industries begin to mirror the federal requirements in their own software procurement efforts.

If you’re looking for a deep dive into the world of software supply chain security, we have written a comprehensive guide to the subject.

What is an SBOM made of? What’s inside? 

Each modern software application typically includes a large number of open source and commercial components coming from a wide range of sources. An SBOM is a structured list of components, modules, and libraries that are included in a given piece of software that provides the developer with visibility into that application. Think of an SBOM like a list of ingredients that evolves throughout the software development lifecycle as you add new code or components.

Examples of items included in an SBOM are:

  • A data format that catalogs all the software in an application, including deeply nested dependencies
  • Data tracked includes details such as dependency name, version, and language ecosystem
  • SBOM data files can catalog files not associated with the operating system or included dependency

Anchore Enterprise supports SPDX, CycloneDX, and Syft formats. This is a continually evolving space with new formats introduced periodically. To learn about the latest on SBOM formats, see the Anchore blog here.
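
For instance, a generator like Syft (assumed here; any SBOM tool that supports multiple output formats works similarly) can emit the same component inventory in each of these formats. A minimal sketch, with a placeholder image reference and Syft's current format identifiers:

#!/usr/bin/env python3
# Sketch: emit one SBOM per format for a single image using Syft.
# The image reference is a placeholder; format names are Syft's identifiers.
import subprocess

IMAGE = "registry.example.com/app:1.2.3"
FORMATS = ["spdx-json", "cyclonedx-json", "syft-json"]

for fmt in FORMATS:
    out_file = f"sbom.{fmt}.json"
    with open(out_file, "w") as f:
        subprocess.run(["syft", IMAGE, "-o", fmt], stdout=f, check=True)
    print(f"wrote {out_file}")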

Who needs SBOMs? 

When it comes to who needs SBOMs, they are mainly used by DevSecOps practitioners and compliance teams for audits, license monitoring, and to comply with industry-specific regulations. However, with the rise of software supply chain attacks (like the SolarWinds hack and the recent Log4Shell vulnerability in Log4j) SBOM use is now on the radar for both security and development teams alike.

Security Teams

SBOMs play a critical role for security teams, especially when it comes to vulnerability scanning. It is much quicker and easier to scan a library of SBOMs than it is to scan all of your software applications, and in the event of a zero-day vulnerability, every minute counts. 

SBOMs can also be leveraged by security teams to prioritize issues for remediation based on their presence and location and to create policies specific to software component attributes such as vendor, version, or package type.

Development Teams

Development teams use SBOMs to track the open source, commercial, and custom-built software components that they use across the applications they develop, manage, and operate. This assists development teams by reducing time spent on rework by helping to manage dependencies, identify security issues for remediation early, and ensure that developers are using approved code and sources.

Fueling the cross-functional use of SBOMs is the Executive Order on Improving the Nation’s Cybersecurity, in which President Biden issued an SBOM requirement that plays a prominent role in securing software supply chains.

The Current State of SBOMs

The current state of SBOMs is complex and evolving. The risks of software supply chain attacks are real, with almost two-thirds of enterprises impacted by a software supply chain attack in the last year, according to the Anchore 2022 Software Supply Chain Report.

To stem these rising threats, the Executive Order outlines new requirements for SBOMs along with other security measures for software used by federal agencies. Until now, the use of SBOMs by cybersecurity teams has been limited to the largest, most advanced organizations. However, as a consequence of these two forces, the use of SBOMs is on the cusp of a rapid transformation.

With governments and large enterprises leading the way, standardized SBOMs are poised to become a baseline requirement for all software as it moves through the supply chain. As a result, organizations that produce or consume software need the ability to generate, consume, manage, and leverage SBOMs as a foundational element of their cybersecurity efforts.

In recent years we have seen threat actors shift their focus to third-party software suppliers. Rather than attacking their targets directly, they aim to compromise software at the build level, introducing malicious code that can later be executed once that software has been deployed, giving the attacker access to new corporate networks. Now, instead of taking down one target, supply chain attacks can potentially create a ripple effect that could affect hundreds, even thousands, of unsuspecting targets. 

Open source software can also be an attack vector if it contains un-remediated vulnerabilities.

SBOMs are a critical foundation for securing against software supply chain attacks. By integrating SBOM generation into the development cycle, developers and security teams can identify and manage the software in their supply chains and catch these bad actors early, before they reach runtime and wreak havoc. Additionally, SBOMs allow organizations to create a data trail that can provide an extended view of the supply chain history of a particular product.

Additional SBOM Resources

Say Goodbye to False Positives

You might be in for a bit of a surprise when running the latest version of Grype – potential vulnerabilities you may have become accustomed to seeing are no longer there! Keep calm. This is a good thing – we made your life easier! Today, we released an improvement to Grype that is the culmination of months of work and testing, and it will dramatically improve the results you see; in fact, some ecosystems can see up to an 80% reduction in false positives! If you’re reading this, you may have used Grype in the past and seen things you weren’t expecting, or you may just be curious to see how we’ve achieved an improvement like this. Let’s dig in.

The surprising source of false positives

The process of scanning for vulnerabilities involves several different factors, but, without a doubt, one of the most important is for Grype to have accurate data: both when identifying software artifacts and also when applying vulnerability data against those artifacts. To address the latter, Anchore provides a database (GrypeDB), which aggregates multiple data sources that Grype uses to assess whether components are vulnerable or not. This data includes the GitHub Advisory Database and the National Vulnerability Database (NVD), along with several other more specific data sources like those provided by Debian, Red Hat, Alpine, and more.

Once Grype has a set of artifacts identified, vulnerability matching can take place. This matching works well, but it inevitably may result in certain vulnerabilities being incorrectly excluded (false negatives) or incorrectly included (false positives). False results are not great either way, and false positives account for a good number of the issues reported against Grype over the years.

One of the biggest problems we’ve encountered is the fact that the data sources used to build the Grype database use different identifiers – for example, the GitHub Advisory Database uses data that includes a package’s ecosystem, name, and version, while NVD uses the Common Platform Enumeration (CPE). These identifiers come with trade-offs, the most important of which is how accurately a package can be matched against a vulnerability record. In particular, the GitHub Advisory Database data is partitioned by ecosystems such as npm or Python, whereas the NVD data does not generally have this distinction. The result is a situation where a Python package named “foo” might match vulnerabilities against another “foo” in a different ecosystem. When taking a closer look at reports from the community, it is apparent that the most common cause of reported false positives is CPE matching.
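
To make that failure mode concrete, here is a toy illustration (not Grype’s actual matching logic): a lookup keyed only on package name, which is roughly what naive CPE-style matching amounts to, flags a Python package because an npm package happens to share its name, while an ecosystem-aware lookup does not. The record and package below are invented for the example.

# Toy illustration of cross-ecosystem false positives (not Grype's real logic).
# The vulnerability record and package below are made up for the example.
vulns = [
    {"id": "EXAMPLE-0001", "ecosystem": "npm", "name": "foo", "affected": "< 1.2.0"},
]
package = {"ecosystem": "python", "name": "foo", "version": "1.0.0"}

# Name-only matching (roughly what naive CPE matching amounts to)
name_only_hits = [v for v in vulns if v["name"] == package["name"]]

# Ecosystem-aware matching (how ecosystem-partitioned data is keyed)
ecosystem_hits = [v for v in vulns
                  if (v["ecosystem"], v["name"]) == (package["ecosystem"], package["name"])]

print(len(name_only_hits))   # 1 -> a false positive
print(len(ecosystem_hits))   # 0 -> correctly excluded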

Focusing on the negative

After experimenting with a number of options for improving vulnerability matching, ultimately one of the simplest solutions proved most effective: stop matching with CPEs.

The first question you might ask is: won’t this result in a lot of false negatives? And, secondly, if we’re not matching against CPEs, what are we matching against? Grype has already been using GitHub Advisory Database data for vulnerability matching, so we simply leaned into this. Thankfully, we already have a way to test that this change isn’t resulting in a significant change in false negatives: the Grype quality gate.

One of the things we’ve put in place for Grype is a quality gate, which uses manually labeled vulnerability information to validate that a change in Grype hasn’t significantly affected the vulnerability match results. Every pull request and push to main runs the quality gate, which compares the previously released version of Grype against the newly introduced changes to ensure the matching hasn’t become worse. In our set of test data, we have been able to reduce false positive matches by 2,000+, while only seeing 11 false negatives.
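Conceptually – and this is only a rough sketch, not the actual gate implementation – the comparison boils down to set arithmetic over labeled (package, vulnerability) pairs:

# Conceptual sketch of the quality-gate comparison: given a manually labeled
# ground-truth set of (package, vulnerability) pairs, count false positives
# and false negatives for a tool's reported matches.
def score(matches: set[tuple[str, str]], labels: set[tuple[str, str]]) -> tuple[int, int]:
    false_positives = len(matches - labels)   # reported, but not in the labeled data
    false_negatives = len(labels - matches)   # in the labeled data, but not reported
    return false_positives, false_negatives

# The gate then compares (fp, fn) for the candidate build against the previously
# released Grype and fails if matching has become meaningfully worse.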

Instead of focusing on how we reduce the false positives, we can now focus on a much smaller set of false negatives to see why they were missed. In our sample data set, this is due to 11 Java JARs that don’t have Maven group, artifact, or version information, which brings up the next area of improvement: Java artifact identification.

When we first explored the option of turning off CPE matching there were a lot more than 11 false negatives, but it was still a manageable number – fewer than 200 false negatives are a lot easier to handle than thousands of false positives. Focusing on these, we found that almost all of them were cases where Java JARs were not being identified properly, so we improved that, too. Today, it’s still not perfect – the main reason being that some JARs simply don’t have enough information to identify accurately without using some sort of external data (and we have some ideas for handling these cases, too). However, the majority of JARs do have enough information to be identified accurately. To make sure we weren’t regressing on this front, we downloaded more than 25 GB of JARs, scanned them, and validated that we extract the correct names and versions. Much of this information ends up being included in the labeled vulnerability data we use to test every commit to Grype.

This change doesn’t mean all CPE matching is turned off by default, however. There are some types of artifacts that Grype still needs to use CPE matching for. Binaries, for example, are not present in the GitHub Advisory Database and Alpine only provides entries for things that are fixed, so we need to continue using CPE matching to determine the vulnerabilities before querying for fix information there. But, for ecosystems supported by the GitHub Advisory Database, we can confidently use this data and prevent the plethora of false positives associated with CPE matching.

GitHub + Grype for the win

The next question you might ask is: how is the GitHub Advisory Database better? There are many reasons that the GitHub data is great, but the things that are most important for Grype are data quality, updatability, and community involvement.

The GitHub Advisory Database is already a well-curated, machine-readable collection of vulnerability data. A surprising amount of the public vulnerability data that exists isn’t very machine readable or high quality, and while a large volume of data that needs updates isn’t a problem by itself, it becomes a problem when providing those updates is nearly impossible. GitHub can review the existing public vulnerability data and update it with relevant details – correcting descriptions, package names, version information, and inaccurate severities, along with all the rest of the captured information. Being able to update the data quickly and easily is vital to maintaining a quality data set.

And it’s not just GitHub that can contribute these corrections – because the GitHub Advisory Database is stored in a public GitHub repository, anyone with a GitHub account can submit updates. If you notice an incorrect version or a spelling mistake in a description, the fix is one pull request away. And since GitHub repositories are historical archives, in addition to submitting fixes, anyone can look back in time at discussions, decisions, and questions. Much of the public vulnerability data today lacks that transparency: decisions might be made in private or by a single person, with no record of why. With the GitHub Advisory Database, we can see who did what, when, and why. A strong community is what makes open source work, and applying the open source model to vulnerability data works great too.

We’ve got your back

We believe this change will be a significant improvement for all Grype users, but we don’t know everyone’s situation. Since Grype is a versatile tool, it’s easy to re-enable CPE matching if that’s something you still want to do. Just add the appropriate options to your .grype.yaml file or set the corresponding environment variables (see the Grype configuration for all the options).
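For example, re-enabling CPE matching for the Java ecosystem might look something like the snippet below in .grype.yaml – this is a sketch, and the exact keys can vary between Grype versions, so confirm them against the Grype configuration reference:

# .grype.yaml (illustrative) – re-enable CPE-based matching for Java packages;
# other ecosystems have equivalent settings. The same option can be set with an
# environment variable, e.g. GRYPE_MATCH_JAVA_USING_CPES=true.
match:
  java:
    using-cpes: true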

We want to ensure Grype is the best vulnerability scanner that exists, which is a lofty goal. Today we made a big stride towards this goal. There will always be more work to do: better package detection, better vulnerability detection, and better vulnerability data. Grype and the GrypeDB are open source projects, so if you would like to help please join us.

But today, we celebrate saying goodbye to lots of false positives. Keep calm and scan on – your list of vulnerabilities just got shorter!

The Complete Guide to Software Supply Chain Security

The mega-trends of the containerization of applications and the rise of open-source software components have sped up the velocity of software delivery. This evolution, while offering significant benefits, has also introduced complexity and challenges to traditional software supply chain security. 

Anchore was founded on the belief that the legacy security solutions of the monolith era could be re-built to deliver on the promises of speed without sacrificing security. Anchore is trusted by Fortune 100 companies and the most exacting federal agencies across the globe because it has delivered on this promise.

If you’d like to learn more about how the Anchore Enterprise platform is able to accomplish this, feel free to book a time to speak with one of our specialists.

If you’re looking to get a better understanding of how software supply chains operate, where the risks lie and best practices on how to manage the risks, then keep reading.

An Overview of Software Supply Chains 

Before you can understand how to secure the software supply chain, it’s important to understand what the software supply chain is in the first place. A software supply chain is all of the individual software components that make up a software application. 

Software supply chains are similar to physical supply chains. When you purchase an iPhone all you see is the finished product. Behind the final product is a complex web of component suppliers that are then assembled together to produce an iPhone. Displays and camera lenses from a Japanese company, CPUs from Arizona, modems from San Diego, lithium ion batteries from a Canadian mine; all of these pieces come together in a Shenzhen assembly plant to create a final product that is then shipped straight to your door. In the same way that an iPhone is made up of a screen, a camera, a CPU, a modem, and a battery, modern applications are composed of individual software components (i.e. dependencies) that are bundled together to create the finished product. 

With the rise of open source software, most of these components are open source frameworks, libraries, and operating systems. Specifically, 70-90% of modern applications are built utilizing open source software components. Before the ascent of open source software, applications were typically developed with proprietary, in-house code without a large and diverse set of software “suppliers”. In that environment the entire “supply chain” consisted of employees of the company, which reduced the complexity of managing all of these teams. The move to cloud-native and DevSecOps design patterns dramatically sped up the delivery of software, with the complication that the complexity of coordinating all of the open source software suppliers increased significantly.

This shift in the way that software is developed impacts essentially all modern software that is written. This means that all businesses and government agencies are waking up to the realization that they are building a software supply chain whether they want it or not.

One of the ways this new supply chain complexity is being tamed is with the software bill of materials (SBOM). A software bill of materials (SBOM) is a structured list of software components, modules, and libraries that are included in a given piece of software. Similar to the nutrition labels on the back of the foods that you buy, SBOMs are a list of ingredients that go into the software that your applications consume. We normally think of SBOMs as an artifact of the development process. As a developer is “manufacturing” their application using different dependencies they are also building a “recipe” based on the ingredients.
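For illustration, generating such a list of ingredients from a container image is a one-line operation with an open-source SBOM generator like Syft (the image name below is only a placeholder):

$ syft nginx:latest -o cyclonedx-json > sbom.cdx.json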

What is software supply chain security? 

Software supply chain security is the process of finding vulnerabilities in software components and preventing them from impacting the applications built from those components. Going back to the iPhone analogy from the previous section, in the same way that an attacker could target one of the iPhone suppliers to modify a component before the iPhone is assembled, a software supply chain threat actor could do the same but target an open source package that is then built into a commercial application.

Given the size and prevalence of open source software components in modern applications, the supply chain is only as secure as its weakest link. The image below of the iceberg has become a somewhat overused meme of software supply chain security but it has become overused precisely because it explains the situation so well.

A different analogy is to view the open source software components that your application is built on as a pyramid. Your application’s supply chain is all of the open source components that your proprietary business logic sits on top of. The rub is that each of these components has its own pyramid of dependencies that it is built with. The foundation of your app might look solid, but there is always the potential that if you follow the dependency chain far enough down you will find a vulnerability that could topple the entire structure.

This gives adversaries their opening. A single compromised package allows attackers to manipulate all of the packages “downstream” of their entry point.

This reality was viscerally felt by the software industry (and all industries that rely on the software industry, meaning all industries) during the Log4j incident. 

Common Software Supply Chain Risks

Software development is a multifaceted process, encompassing various components and stages. From the initial lines of code to the final deployment in a production environment, each step presents potential risks for vulnerabilities to creep in. As organizations increasingly integrate third-party components and open-source libraries into their applications, understanding the risks associated with the software supply chain becomes paramount. This section delves into the common risks that permeate the software supply chain, offering insights into their origins and implications.

Source Code

Supply chain risks start with the code itself. Below are the most common risks associated with a software supply chain when generating custom first-party code:

  1. Insecure first-party code

Custom code is the first place to be aware of risk in the supply chain. If the code written by your developers isn’t secure then your application will be vulnerable at its foundation. Insecure code is any application logic that can be manipulated to perform a function that wasn’t originally intended by the developer.

For example, suppose a developer writes a login function that checks the user database for a username and password matching the ones provided by the user. If an attacker can craft a payload that instead causes the function to delete the entire user database, that is insecure code.
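As a concrete (and deliberately simplified) illustration of that login scenario, the sketch below shows the same check written unsafely and then safely in Python; the table and column names are hypothetical:

import sqlite3

def login_vulnerable(conn: sqlite3.Connection, username: str, password: str) -> bool:
    # BAD: user input is pasted directly into the SQL string, so input such as
    # "' OR '1'='1" turns the check into a query that matches every row,
    # bypassing authentication entirely.
    query = f"SELECT 1 FROM users WHERE username = '{username}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

def login_safer(conn: sqlite3.Connection, username: str, password: str) -> bool:
    # BETTER: parameterized queries keep user input as data, not as SQL.
    # (A real system would also hash passwords rather than store them in plain text.)
    query = "SELECT 1 FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone() is not None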

  2. Source code management (SCM) compromise

Source code is typically stored in a centralized repository so that all of your developers can collaborate on the same codebase. An SCM is itself software and can be vulnerable in the same way as your first-party code. If an adversary gains access to your SCM through a vulnerability in the software or through social engineering, they will be able to manipulate your source code at the foundation.

  3. Developer environments

Developer environments are powerful productivity tools for your engineers, but they are another potential source of risk for an organization. Most integrated development environments come with a plug-in system so that developers can customize their workflows for maximum efficiency. These plug-in systems typically also have a marketplace associated with them. In the same way that a malicious Chrome browser plug-in can compromise a user’s laptop, a malicious developer plug-in can gain access to a “trusted” engineer’s development system and piggyback on this trusted access to manipulate the source code of an application.

3rd-Party Dependencies (Open source or otherwise)

Third-party software is really just first-party software written by someone else, in the same way that the cloud is just servers run by someone else. Third-party software dependencies are potentially subject to all of the same risks as your own first-party code described in the section above. Since it isn’t your code, you have to deal with the risks in a different way. Below we lay out the two risks associated with third-party dependencies:

  1. Known vulnerabilities (CVEs, etc)

Known vulnerabilities are insecure or malicious code that has been identified in a third-party dependency. Typically the maintainer of a third-party dependency will fix their insecure code when they are notified and publish an update. Sometimes if the vulnerability isn’t a priority they won’t address it for a long time (if ever). If your developers rely on this dependency for your application then you have to assume the risk.

  2. Unknown vulnerabilities (zero-days)

Unknown vulnerabilities are insecure or malicious code that hasn’t yet been discovered. These vulnerabilities can lie dormant in a codebase for months, years, or even decades. When they are finally uncovered and announced, there is typically a scramble across the world by any business that uses software (i.e. almost all businesses) to figure out whether they utilize the dependency and how to protect themselves from exploitation. Attackers scramble, too, to determine who is using the vulnerable software and to craft exploits that take advantage of businesses that are slow to react.

Build Pipeline & Artifact Repository

  1. Build pipeline compromise

A software build pipeline is a system that pulls the original source code from an SCM, pulls all of the third-party dependencies from their source repositories, and then creates and optimizes the code into a binary that can be stored in an artifact repository. It is similar to an SCM in that it is itself software composed of both first- and third-party code, which means it carries all of the same risks to its source code and software dependencies.

Organizations deal with these risks differently than the developers of the build systems because they do not control this code. Instead the risks are around managing who has access to the build system and what they can do with their access. Risks range from modifying where the build system is pulling source code from to modifying the build instructions to inject malicious or vulnerable code into previously secure source.

  2. Artifact registry compromise

An artifact registry is a centralized repository of fully built applications (typically in the form of a container image) that a deployment orchestrator pulls software from in order to run it in a production environment. It is also software, similar to a build pipeline or SCM, and has the same associated risks as mentioned before.

Typically, the risks of registries are managed by controlling trust between the registry and the build system or any other system or person that has access to it. Risks range from an attacker poisoning the registry with an untrusted container to an attacker gaining privileged access to the registry and modifying a container in place.

Production

  1. Deployment orchestrator compromise

A deployment orchestrator is a system that pulls pre-built software binaries and runs the applications on servers. It is another type of software system similar to a build pipeline or SCM and has the same associated risks as mentioned before.

Typically, the risks of orchestrators are managed through trust relationships between the orchestrator and the artifact registry or any other system or person that has access to it. Risks range from an attacker manipulating the orchestrator into deploying an untrusted container to an attacker gaining privileged access to the orchestrator and modifying a running container or manifest.

  2. Production environment compromise

The production environment is the application running on a server that was deployed by an orchestrator. It is the software system built from the original source code that fulfills user requests – the final product of the software supply chain. The risks associated with this system differ from most other systems because it typically serves users outside of the organization, and far less is known about external users than internal ones.

Examples of software supply chain attacks

As reliance on third-party components and open-source libraries grows, so does the potential for vulnerabilities in the software supply chain. Several notable incidents have exposed these risks, emphasizing the need for proactive security and a deep understanding of software dependencies. In this section, we explore significant software supply chain attacks and the lessons they impart.

SolarWinds (2020)

In one of the most sophisticated supply chain attacks, malicious actors compromised the update mechanism of SolarWinds’ Orion software. This breach allowed the attackers to distribute malware to approximately 18,000 customers. The attack had far-reaching consequences, affecting numerous government agencies, private companies, and critical infrastructure.

Lessons Learned: The SolarWinds attack underscored the importance of securing software update mechanisms and highlighted the need for continuous monitoring and validation of software components.

Log4j (2021)

In late 2021, a critical vulnerability was discovered in the Log4j logging library, a widely used Java-based logging utility. Dubbed “Log4Shell,” this vulnerability allowed attackers to execute arbitrary code remotely, potentially gaining full control over vulnerable systems. Given the ubiquity of Log4j in various software applications, the potential impact was massive, prompting organizations worldwide to scramble for patches and mitigation strategies.

Lessons Learned: The Log4j incident underscored the risks associated with ubiquitous open-source components. It highlighted the importance of proactive vulnerability management, rapid response to emerging threats, and the need for organizations to maintain an updated inventory of third-party components in their software stack.

NotPetya (2017)

Originating from the compromised software update mechanism of a Ukrainian accounting software package, NotPetya spread rapidly across the globe. Masquerading as ransomware, its primary intent was data destruction. Major corporations, including Maersk, FedEx, and Merck, faced disruptions, leading to financial losses amounting to billions.

Lessons Learned: NotPetya highlighted the dangers of nation-state cyber warfare and the need for robust cybersecurity measures, even in seemingly unrelated software components.

Node.js Packages coa and rc

In November 2021, two widely used npm packages, coa and rc, were compromised. Malicious versions of these packages were published to the npm registry, attempting to run a script to access sensitive information from users’ .npmrc files. The compromised versions were downloaded thousands of times before being identified and removed.

Lessons Learned: This incident emphasized the vulnerabilities in open-source repositories and the importance of continuous monitoring of dependencies. It also highlighted the need for developers and organizations to verify the integrity of packages before installation and to be wary of unexpected package updates.

JuiceStealer Malware

JuiceStealer is malware spread through a technique known as typosquatting on PyPI (the Python Package Index). Malicious packages were seeded on PyPI, intending to infect users with the JuiceStealer malware, which is designed to steal sensitive browser data. The attack involved a complex chain, including phishing emails to PyPI developers.

Lessons Learned: JuiceStealer showcased the risks of typosquatting in package repositories and the importance of verifying package names and sources. It also underscored the need for repository maintainers to have robust security measures in place to detect and remove malicious packages promptly.

Node.js Packages colors and faker

In January 2022, the developer behind popular npm libraries colors and faker intentionally sabotaged both packages in an act of “protestware.” This move affected thousands of applications, leading to broken builds and potential security risks. The compromised versions were swiftly removed from the npm registry.

Lessons Learned: This incident highlighted the potential risks associated with relying heavily on open-source libraries and the actions of individual developers. It underscored the importance of diversifying dependencies, having backup plans, and the need for the open-source community to address developer grievances constructively.

Standards and Best Practices for Preventing Attacks

There are a number of different initiatives to define best practices for software supply chain security. Organizations ranging from the National Institute of Standards and Technology (NIST) to the Cloud Native Computing Foundation (CNCF) to the Open Source Security Foundation (OpenSSF) have created fantastically detailed documentation on their recommendations for achieving an optimally secure supply chain.

Choosing any of these standards is better than choosing none; you can even cherry-pick from each of the standards to create a program that is best tailored to the risk profile of your organization. If you’d prefer to stick to one for simplicity’s sake and need some help deciding, Anchore has detailed our thoughts on the pros and cons of each software supply chain standard here.

Below is a concise summary of each of the major standards to help get you started:

National Institute of Standards and Technology (NIST)

NIST has a few different standards that are worth noting. We’ve ordered them from the broadest to the most specific and, coincidentally, chronologically as well.

NIST SP 800-53, “Security and Privacy Controls for Information Systems and Organizations”

NIST 800-53, aka the Control Catalog, is the granddaddy of NIST security standards. It has had a long life and evolved alongside the security landscape. Typically paired with NIST 800-37, the Risk Management Framework (RMF), this pair of standards creates a one-two punch that not only produces a highly secure environment for protecting classified and confidential information but also sets organizations up to more easily comply with federal compliance standards like FedRAMP.

Software supply chain security (SSCS) topics first began filtering into NIST 800-53 in 2013, but it wasn’t until 2020 that the Control Catalog was updated to break SSCS out into its own section. If your concern is to get up and running with SSCS as quickly as possible, this standard will be overkill. If your goal is to build toward FedRAMP and NIST 800-53 compliance as well as build a secure software development process, then this standard is for you. If you’re looking for something more specific, one of the next two standards might be for you.

If you need a comprehensive guide to NIST 800-53 or its spiritual sibling, NIST 800-37, we have put together both. You can find a detailed but comprehensible guide to the Control Catalog here and the same plain-English, deep-dive treatment of NIST 800-37 here.

NIST SP 800-161, “Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations”

NIST 800-161 is an interesting application of both the RMF and the Control Catalog for supply chain security specifically. The controls in NIST 800-161 take the base controls from NIST 800-53 and provide guidance on how to achieve more specific outcomes for the controls. For the framework, NIST 800-161 takes the generic RMF and creates a version that is tailored to SSCS. 

NIST 800-161 is a comprehensive standard that will guide your organization to create a development process with its primary output being highly secure software and systems. 

NIST SP 800-218, “Secure Software Development Framework (SSDF)”

NIST 800-218, the SSDF, is an even more refined standard than NIST 800-161. The SSDF targets the software developer as the audience and gives even more tailored recommendations on how to create secure software systems.

If you’re a developer attempting to build secure software that complies with all of these standards, we have an ongoing blog series that breaks down the individual controls that are part of the SSDF.

NIST SP 800-204D, “Strategies for the Integration of Software Supply Chain Security in DevSecOps CI/CD Pipelines”

Focused specifically on Cloud-native architectures and Continuous Integration/Continuous Delivery (CI/CD) pipelines, NIST 800-204D is a significantly more specific standard than any of the previous standards. That being said, if the primary insertion point for software supply chain security in your organization is via the DevOps team then this standard will have the greatest impact on your overall software supply chain security.

Also, it is important to note that this standard is still a draft and will likely change as it is finalized.

Open Source Security Foundation (OpenSSF)

A project of the Linux Foundation, the Open Source Security Foundation is a cross-industry organization that focuses on the security of the open source ecosystem. Since most third-party dependencies are open source, the foundation carries a lot of weight in the software supply chain security domain.

Supply-chain Levels for Software Artifacts (SLSA)

If an SBOM is an ingredients label for a product, then SLSA (pronounced ‘salsa’) is the set of food safety handling guidelines for the factory where the product is made. It focuses primarily on updating traditional DevOps workflows with signed attestations about the quality of the software that is produced.

Google originally donated the framework and has been using an internal version of SLSA since 2013, which it requires for all of its production workloads.

You can view the entire framework on its dedicated website here.

Secure Supply Chain Consumption Framework (S2C2F) 

The S2C2F is similar to SLSA but much broader in scope. It gives recommendations on securing the entire software supply chain, including traditional security practices such as scanning for vulnerabilities. It touches on signed attestations, but not at the same level of depth as SLSA.

The S2C2F was built and donated by Microsoft, where it has been used and refined internally since 2019.

You can view the entire list of recommendations on its GitHub repository.

Cloud Native Computing Foundation (CNCF)

The CNCF is also a project of the Linux Foundation but is focused on the entire ecosystem of open-source, cloud-native software. The Security Technical Advisory Group at the CNCF has a vested interest in supply chain security because the majority of the software that is incubated and matured at the CNCF is part of the software development lifecycle.

Software Supply Chain Best Practices White Paper

The Security Technical Advisory Group at CNCF created a best practices white paper that was heralded as a huge step forward for the security of software supply chains. The document’s creation was led by the CTO of Docker and the Chief Open Source Officer at Isovalent. It captures over 50 recommended practices to secure the software supply chain.

You can view the full document here.

Types of Supply Chain Compromise

This document isn’t a standard or a set of best practices; instead, it supports the best practices white paper by defining a full list of types of supply chain compromise.

Catalog of Supply Chain Compromises

This isn’t a standard or best practices document either. It is instead a detailed history of the significant supply chain breaches that have occurred over the years – helpful for understanding the history that informed the best practices detailed in the accompanying white paper.

How Anchore Can Help 

Anchore is a leading software supply chain security company that has built a modern, SBOM-powered software composition analysis (SCA) platform that helps organizations incorporate many of the software supply chain best practices that are defined in the above guides.

As we have learned working with Fortune 100 enterprises and federal agencies, including the Department of Defense, an organization’s supply chain security can only be as good as the depth of their data on their supply chain and the automation of processing the raw data into actionable insights. Anchore Enterprise provides an end-to-end software supply chain security system with total visibility, deep inspection, automated enforcement, expedited remediation and trusted reporting to deliver the actionable insights to make a supply chain as secure as possible.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Detecting Exploits within your Software Supply Chain

SBOMs. What are they good for? At Anchore, we see SBOMs (software bills of materials) as the foundation of an application’s supply chain hierarchy. Upon this foundation you can build a lot of powerful features, such as the ability to detect vulnerabilities in your open source dependencies before they are pushed to production. An unintended side effect of giving users the power to easily see deeply into their application’s dependencies and detect the vulnerabilities in those dependencies is that there can sometimes be hundreds of vulnerabilities discovered in the process.

We’ve seen customer applications that generate 400+ known vulnerabilities! This creates an information overload that typically ends in the application developer ignoring the results because it is too much effort to triage and remediate each one. Knowing that an application is riddled with vulnerabilities is better than not knowing, but excessive information does not lead to actionable insights.

Anchore Enterprise solves this challenge by pairing vulnerability data (e.g. CVEs, etc) with exploit data (e.g. KEV, etc). By combining these two data sources we can create actionable insight by showing users both the vulnerabilities in their applications and which vulnerabilities are actually being exploited. Actively exploited vulnerabilities are significantly higher risk and can be prioritized for triage and remediation first. In this blog post, we’ll discuss how we do that and how it can save both your security team and application developers time.

How Does Anchore Enterprise Help You Find Exploits in Your Application Dependencies?

What is an Exploited Vulnerability?

“Exploited” is an important distinction because it means that not only does a vulnerability exist but a payload also exists that can reliably trigger the vulnerability and cause an application to execute unintended functionality (e.g. leaking all of the contents of a database or deleting all of the data in a database). For instance, almost all bank vaults in the world are vulnerable to an asteroid strike “deleting” all of the contents of the safe but no one has developed a system to reliably cause an asteroid to strike bank vaults. Maybe Elon Musk can make this happen in a few more years but today this vulnerability isn’t exploitable. It is important for organizations to prioritize exploited vulnerabilities because the potential for damage is significantly greater.

Source High-Quality Data on Exploits

In order to find vulnerabilities that are exploitable, you need high-quality data from security researchers who are either crafting exploits for known vulnerabilities or analyzing attack data for payloads that trigger an exploit in a live application. Thankfully, there are two exceedingly high-quality databases that publish this information publicly and regularly: the Known Exploited Vulnerabilities (KEV) Catalog and the Exploit Database (Exploit-DB).

The KEV Catalog is a database of known exploited vulnerabilities that is published and maintained by the US government through the Cybersecurity and Infrastructure Security Agency, CISA. It is updated regularly; they typically add 1-5 new KEVs every week. 

While not an exploit database itself, the National Vulnerability Database (NVD) is an important source of exploit data because it checks all of the vulnerabilities that it publishes and maintains against the Exploit-DB and embeds the relevant identifiers when a match is found.

Anchore Enterprise ingests both of these data feeds and stores the data in a centralized repository. Once this data is structured and available to your organization it can then be used to determine which applications and their associated dependencies are exploitable.

Map Data on Exploits to Your Application Dependencies

Now that you have a quality source of data on known exploited vulnerabilities, you need to determine if any of these exploits exist in your applications and/or the dependencies that they are built with. The industry-standard method for storing information on applications and their dependency supply chain is a software bill of materials (SBOM).

After you have an SBOM for your application you can then cross-reference the dependencies against both a list of known vulnerabilities and a list of known exploited vulnerabilities. The output of this is a list of all of the applications in your organization that are vulnerable to exploits.

If done manually, via something like a spreadsheet, this can quickly become a tedious process. Anchore Enterprise automates this process by generating SBOMs for all of your applications and running scans of the SBOMs against vulnerability and exploit databases.
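To make the cross-referencing step concrete, here is a minimal Python sketch that flags entries in a Grype JSON report that also appear in the CISA KEV catalog. The feed URL and JSON field names are assumptions based on the public catalog and Grype’s JSON output at the time of writing; Anchore Enterprise keeps this check continuously up to date rather than running it as a one-off script.

import json
import sys
import urllib.request

# Public KEV feed location (may change; check cisa.gov for the current URL).
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def load_kev_ids() -> set[str]:
    # The KEV catalog is a JSON document with a "vulnerabilities" list,
    # where each entry carries a "cveID" field.
    with urllib.request.urlopen(KEV_URL) as resp:
        catalog = json.load(resp)
    return {entry["cveID"] for entry in catalog["vulnerabilities"]}

def main(report_path: str) -> None:
    # Grype's JSON output ("grype <image> -o json > report.json") has a
    # top-level "matches" list pairing a vulnerability with an artifact.
    with open(report_path) as f:
        report = json.load(f)

    kev_ids = load_kev_ids()
    for match in report.get("matches", []):
        vuln_id = match["vulnerability"]["id"]
        if vuln_id in kev_ids:
            artifact = match["artifact"]
            print(f"ACTIVELY EXPLOITED: {vuln_id} in "
                  f"{artifact['name']}@{artifact['version']}")

if __name__ == "__main__":
    main(sys.argv[1])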

How Does Anchore Enterprise Help You Prioritize Remediation of Exploits in Your Application Dependencies?

Once we’ve used Anchore Enterprise to detect CVEs in our containers that are also exploitable through the KEV or ExploitDB lists, then we can take the severity score back into account with more contextual evidence. We need to know two things for each finding: what is the severity of the finding and can I accept the risk associated with leaving that vulnerable code in my application or container. 

If we look back to the Log4J event in December of 2021, that particular vulnerability scored a 10 on the CVSS. That score alone provides us little detail on how dangerous that vulnerability is. If a CVE is discovered against any given piece of software and the NVD researchers cannot reach the authors of the code, then it’s assigned a score of 10 and the worst case is assumed. 

However, if we have applied our KEV and ExploitDB bundles and determined that we do indeed have a critical vulnerability that has active known exploits and evidence that it is being exploited in the wild AND the severity exceeds our personal or organizational risk thresholds then we know that we need to take action immediately. 

Everyone has questioned the utility of the SBOM, but Anchore Enterprise is making that question an afterthought. Moving past the basics of just generating an SBOM and detecting CVEs, Anchore Enterprise automatically maps exploit data to specific packages in your software supply chain, allowing you to generate reports and notifications for your teams. By analyzing this higher quality information, you can determine which vulnerabilities actually pose a threat to your organization and in turn make more intelligent decisions about which to fix and which to accept, saving your organization time and money.

Wrap Up

Returning to our original question, “what are SBOMs good for”? It turns out the answer is scaling the process of finding and prioritizing vulnerabilities in your organization’s software supply chain.

In today’s increasingly complex software landscape, the importance of securing your application’s supply chain cannot be overstated. Traditional SBOMs have empowered organizations to identify vulnerabilities but often left them inundated with too much information, rendering the data less actionable. Anchore Enterprise revolutionizes this process by not only automating the generation of SBOMs but also cross-referencing them against reputable databases like the KEV Catalog and Exploit-DB to isolate actively exploited vulnerabilities. By focusing on the vulnerabilities that are actually being exploited in the wild, your security team can prioritize remediation efforts more effectively, saving both time and resources.

Anchore Enterprise moves beyond merely detecting vulnerabilities to providing actionable insights, enabling organizations to make intelligent decisions on which risks to address immediately and which to monitor. Don’t get lost in the sea of vulnerabilities; let Anchore Enterprise be your compass in navigating the choppy waters of software security.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Introducing Grype Explain

Since releasing Grype 3 years ago (in September 2020), one of the most frequent questions we’ve gotten is, “why is image X vulnerable to vulnerability Y?” Today, we’re introducing a new sub-command to help users answer this question: Grype Explain.

Now, when users are surprised to see some CVE they’ve never heard of in their Grype output, they can ask Grype to explain itself: grype -o json alpine:3.7 | grype explain --id CVE-2021-42374. We’re asking the community to please give it a try, and if you have feedback or questions, let us know.

The goal of Grype Explain is to help operators evaluate a reported vulnerability so that they can decide what, if any, action to take. To demonstrate, let’s look at a simple scenario.

First, an operator who deploys a file called fireline.hpi into production sees some vulnerabilities:

❯ grype fireline.hpi | grep Critical

✔ Vulnerability DB                [no update available]
✔ Indexed file system
✔ Cataloged packages              [35 packages]
✔ Scanned for vulnerabilities     [36 vulnerabilities]
├── 10 critical, 14 high, 9 medium, 3 low, 0 negligible
└── 14 fixed

bcel                 6.0-SNAPSHOT  6.6.0     java-archive    GHSA-97xg-phpr-rg8q  Critical
commons-collections  3.1           3.2.2     java-archive    GHSA-fjq5-5j5f-mvxh  Critical
dom4j                1.6.1         2.0.3     java-archive    GHSA-hwj3-m3p6-hj38  Critical
fastjson             1.2.9         1.2.31    java-archive    GHSA-xjrr-xv9m-4pw5  Critical
fastjson             1.2.9                   java-archive    CVE-2022-25845       Critical
fastjson             1.2.9                   java-archive    CVE-2017-18349       Critical
log4j-core           2.11.1        2.12.2    java-archive    GHSA-jfh8-c2jp-5v3q  Critical
log4j-core           2.11.1        2.12.2    java-archive    GHSA-7rjr-3q55-vv33  Critical
log4j-core           2.11.1                  java-archive    CVE-2021-45046       Critical
log4j-core           2.11.1                  java-archive    CVE-2021-44228       Critical

Wait, isn’t CVE-2021-44228 log4shell? I thought we patched that! The operator asks for an explanation of the vulnerability:

❯ grype -q -o json fireline.hpi | grype explain --id CVE-2021-44228

[0000]  WARN grype explain is a prototype feature and is subject to change

CVE-2021-44228 from nvd:cpe (Critical)

Apache Log4j2 2.0-beta9 through 2.15.0 (excluding security releases 2.12.2, 2.12.3, and 2.3.1)
JNDI features used in configuration, log messages, and parameters do not protect against attacker
controlled LDAP and other JNDI related endpoints. An attacker who can control log messages or log
message parameters can execute arbitrary code loaded from LDAP servers when message lookup
substitution is enabled. From log4j 2.15.0, this behavior has been disabled by default. From
version 2.16.0 (along with 2.12.2, 2.12.3, and 2.3.1), this functionality has been completely
removed. Note that this vulnerability is specific to log4j-core and does not affect log4net,
log4cxx, or other Apache Logging Services projects.

Related vulnerabilities:
    - github:language:java GHSA-jfh8-c2jp-5v3q (Critical)
Matched packages:
    - Package: log4j-core, version: 2.11.1
      PURL: pkg:maven/org.apache.logging.log4j/[email protected]
      Match explanation(s):
          - github:language:java:GHSA-jfh8-c2jp-5v3q Direct match (package name, version, and
            ecosystem) against log4j-core (version 2.11.1).
          - nvd:cpe:CVE-2021-44228 CPE match on `cpe:2.3:a:apache:log4j:2.11.1:*:*:*:*:*:*:*`.
      Locations:
          - /fireline.hpi:WEB-INF/lib/fireline.jar:lib/firelineJar.jar:log4j-core-2.11.1.jar
URLs:
    - https://nvd.nist.gov/vuln/detail/CVE-2021-44228
    - https://github.com/advisories/GHSA-jfh8-c2jp-5v3q

Right away this gives us some information an operator might need:

  • Where’s the vulnerable file?
    • /fireline.hpi:WEB-INF/lib/fireline.jar:lib/firelineJar.jar:log4j-core-2.11.1.jar
    • Seeing the location inside a jar inside the .hpi file tells the operator that a jar inside a jar inside the .hpi file is responsible for the vulnerability.
  • How was it matched?
    • Seeing both a CPE match on cpe:2.3:a:apache:log4j:2.11.1:*:*:*:*:*:*:* and a GHSA match on pkg:maven/org.apache.logging.log4j/[email protected] gives the operator confidence that this is a real match. 
  • What’s the URL where I can read more about it?
    • Links to the NVD and GHSA sites for the vulnerability are printed out so the operator can easily learn more.

Based on this information, the operator can assess the severity of the issue, and know what to patch.

We hope that Grype Explain will help users better understand and respond faster to vulnerabilities in their applications. Do you have feedback on how Grype Explain could be improved? Please let us know!

How to Scan Your Containers for Vulnerabilities with Free Open Source Tools

This blog post has been archived; its content has been incorporated into a supporting pillar page on the Anchore blog.


NIST’s Comprehensive Approach to Software Supply Chain Security

The National Institute of Standards and Technology (NIST) has always been at the forefront of setting benchmarks and standards for industry. They recently released a draft publication, 800-204D, titled “Strategies for the Integration of Software Supply Chain Security in DevSecOps CI/CD Pipelines.” This document is exciting as it’s a testament to their commitment to evolving with the times and addressing challenges with supply chain security.

It should be noted that this document is currently a draft. NIST is seeking feedback from stakeholders in order to write the final version. Anyone who has input on this topic can and should contribute suggestions. NIST guidance is not produced in a bubble; it’s important that we all help collaborate on these documents.

Understanding the Significance of the Supply Chain

Before we explain the purpose of the document, it’s important to understand the software supply chain’s complexity. When we think of “supply chain” we have historically imagined lines of code, software packages, and developer tools. However, it’s a complex system that spans the foundational hardware, the operating systems that run on it, the developer workstations where software is crafted, and even the systems that distribute our software to users worldwide. Each node in this chain presents unique security challenges.

A great deal of previous guidance has been heavily focused on the development and procurement of software that goes into products. NIST 800-204D is a document that focuses on continuous integration and continuous delivery (CI/CD) systems. The security of a CI/CD system is no less important than the security of the packages that go into your software.

NIST’s Holistic Approach

With 800-204D, NIST isn’t merely adding another document to the pile. NIST recently released 800-218, the Secure Software Development Framework, and they maintain NIST 800-53, the granddaddy of most other cybersecurity compliance frameworks. NIST is signaling that they want to help move the needle on how the industry approaches software supply chain security. In this instance, by emphasizing CI/CD pipelines, NIST is highlighting the importance of the processes that drive software development and deployment, rather than just the end product.

While there’s no shortage of guidance on CI/CD pipelines, much of the existing literature is either outdated or too narrow in scope. This is where NIST’s intervention should make us pay attention. Their comprehensive approach ensures that every aspect of the software supply chain, from code creation to deployment, is under scrutiny.

Comparing with Existing Content

The CNCF supply chain security white paper serves as an example. A few years ago, this document was hailed as a significant step forward. It provided a detailed overview of supply chain concerns and offered solutions to secure them. However, the document hasn’t seen an update in over two years. The tech landscape is ever-evolving. What was relevant two years ago might not hold today. This rapid evolution underscores the need for regularly updated guidance.

Maintaining and updating such comprehensive documents is no small feat. It requires expertise, resources, and a commitment to staying on top of industry developments. NIST, who has been providing guidance like this for decades, is uniquely positioned to take on this challenge. Their track record of maintaining and updating documents over extended periods is unparalleled.

The Promise of Modern Initiatives

Modern projects like SLSA and S2C2F have shown promise. They represent the industry’s proactive approach to addressing supply chain security challenges. However, they face inherent challenges that NIST does not. The lack of consistent funding and a clear mandate means that their future is less certain than a NIST document. Key personnel changes, shifts in organizational priorities, or a myriad of other factors could unexpectedly derail their progress.

NIST, with its government backing, doesn’t face these challenges. NIST guidance is not only assured of longevity but also of regular updates to stay relevant. This longevity ensures that even as projects like SLSA or S2C2F evolve or new initiatives emerge, there’s a stable reference point that the industry can rely on. Of course, something becoming a NIST standard doesn’t solve all problems, sometimes NIST guidance can become outdated and isn’t updated as often as it should be. Given the rash of government mandates around security lately, this is not expected to happen for supply chain related guidance.

The NIST Advantage

NIST’s involvement goes beyond just providing guidance. Their reputation and credibility mean that their publications carry significant weight. Organizations, both public and private, pay attention when NIST speaks. The guidance NIST has been providing to the United States since its inception has helped the industry in countless ways. Everything from safety, to measurements, even keeping our clocks running! This influence ensures that best practices and recommendations are more likely to be adopted, leading to a more secure and robust software supply chain.

However, it’s essential to temper expectations. While NIST’s guidance is invaluable, it’s not magic. Some NIST standards become outdated, some are difficult for small businesses or individuals to follow. Not all recommendations can be universally applicable. However given the current global focus on supply chain security, we can expect NIST to be proactive in updating their guidance.

It should also be noted that NIST guidance has a feedback mechanism. In the case of 800-204D, the document is a draft, and NIST wants feedback. The document will change between the current draft and the final version. Good feedback is a way we can all ensure the guidance is high quality.

Looking Ahead

The broader message from NIST’s involvement is clear: broad supply chain security is important. It’s not about isolated solutions or patchwork fixes. The industry needs a comprehensive approach that addresses risk at every stage of the software supply chain.

In NIST’s proactive approach, there is hope. Their commitment to providing long-lasting, influential guidance, combined with their holistic view of the supply chain, promises a future where supply chain security is not just an afterthought but an integral part of software development and deployment.

NIST’s 800-204D is more than just a publication. It’s a call for the industry to come together, adopt best practices, and work towards a future where software supply chain security is robust, reliable, and resilient.

If you’d like to learn more about how Anchore can help with NIST compliance, feel free to book a time to speak with one of our specialists.

Josh Bressers

Josh Bressers is vice president of security at Anchore where he guides security feature development for the company’s commercial and open source solutions. He serves on the Open Source Security Foundation technical advisory council and is a co-founder of the Global Security Database project, which is a Cloud Security Alliance working group that is defining the future of security vulnerability identifiers.

Scaling Software Security with NVIDIA

Personal computing and Apple in the 80s. The modern internet and Netscape in the 90s. Open source and Red Hat in the 2000s. Cloud and Amazon Web Services in the 2010s. Certain companies tend to define the computing paradigm of a decade. And so it is with AI and Nvidia in the 2020s. With its advanced GPU hardware, NVIDIA has enabled stunning advances in machine learning and AI models. That, in turn, has enabled services such as GitHub CoPilot and ChatGPT. 

However, AI/ML is not just a hardware and data story. Software continues to be the glue that enables the use of large data sets with high-performance hardware. Like Intel before them, NVIDIA is as much a software solution vendor as a hardware company, with applications like CUDA and others being how developers interact with NVIDIA’s GPUs. Building on the trends of previous decades, much of this software is built from open source and designed to run on the cloud. 

Unfortunately, the less welcome trend over the past decade has been increased software insecurity and novel attack vectors targeted at open source and the supply chain in general. For the past few years, we’ve been proud to partner with NVIDIA to ensure that the software they produce is secure and also secure for end users to run on their NVIDIA GPU Cloud (NGC). This has not only been a question of high-quality security scanning but ensuring that scanning can happen at scale and in a cost-effective manner. 

We’re inviting the Anchore community to join us for a webinar with NVIDIA where we cover the use case, architecture, and policies used by one of the most cutting-edge companies in technology. Those interested can learn more and save a seat here.

Automated Policy Enforcement for CMMC with Anchore Enterprise

The Cybersecurity Maturity Model Certification (CMMC) is an important program to harden the cybersecurity posture of the defense industrial base. Its purpose is to validate that appropriate safeguards are in place to protect controlled unclassified information (CUI). Many of the organizations that are required to comply with CMMC are Anchore customers. They have the responsibility to protect the sensitive but unclassified data of US military and government agencies as they support the various missions of the United States.

CMMC 2.0 Levels

  • Level 1 Foundation: Safeguard federal contract information (FCI); not critical to national security.
  • Level 2 Advanced:  This maps directly to NIST Special Publication (SP) 800-171. Its primary goal is to ensure that government contractors are properly protecting controlled unclassified information (CUI).
  • Level 3 Expert: This maps directly to NIST Special Publication (SP) 800-172. Its primary goal is to go beyond the base-level security requirements defined in NIST 800-171. NIST 800-172 provides security requirements that specifically defend against advanced persistent threats (APTs).

This is of critical importance as these organizations leverage commonplace DevOps tooling to build their software. Additionally, these large organizations may be working with smaller subcontractors or suppliers who are building software in tandem or partnership.

For example, imagine a mega-defense contractor working alongside a small mom-and-pop shop to develop software for a classified government system. There are lots of questions we should ask here:

  1. How can my company, as a mega-defense contractor, validate that software built by my partner is not using blacklisted software packages?
  2. How can my company validate software supplied to me is free of malware?
  3. How can I validate that the software supplied to me is in compliance with licensing standards and vulnerability compliance thresholds of my security team?
  4. How do I validate that the software I’m supplying is compliant not only against NIST 800-171 and CMMC, but against the compliance standards of my government end user (Such as NIST 800-53 or NIST 800-161)?

Validating Security between DevSecOps Pipelines and Software Supply Chain

At any major or small contractor alike, everyone has taken steps to build internal DevSecOps (DSO) pipelines. However, the defense industrial base (DIB) commonly involves relationships in which smaller defense contractors supply software to a larger defense contractor for a program or DSO pipeline that consumes and implements that software. With Anchore Enterprise, we can now validate whether that supplied software is compliant with CMMC controls as specified in NIST 800-171.

Looking to learn more about how to achieve CMMC Level 2 or NIST 800-171 compliance? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.

Which Controls does Anchore Enterprise Automate?

3.1.7 – Restrict Non-Privileged Users and Log Privileged Actions

Related NIST 800-53 Controls: AC-6 (10)

Description: Prevent non-privileged users from executing privileged functions and capture the execution of such functions in audit logs. 

Implementation: Anchore Enterprise can scan container manifests to determine if the user is being given root privileges and apply an automated policy to prevent such containers from being promoted to a runtime environment. This prevents a scenario where privileged functions can be used in a runtime environment.
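As a rough illustration, a rule along the lines of the following – modeled on the open-source Anchore policy bundle format, so the gate, trigger, and action names may differ slightly in your Anchore Enterprise version – stops images whose Dockerfile leaves the effective user as root:

{
  "gate": "dockerfile",
  "trigger": "effective_user",
  "action": "stop",
  "params": [
    { "name": "users", "value": "root" },
    { "name": "type", "value": "blacklist" }
  ]
}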

3.4.1 – Maintain Baseline Configurations & Inventories

Related NIST 800-53 Controls: CM-2(1), CM-8(1), CM-6

Description: Establish and maintain baseline configurations and inventories of organizational systems (including hardware, software, firmware, and documentation) throughout the respective system development life cycles.

Implementation: Anchore Enterprise provides a centralized inventory of all containers and their associated manifests at each stage of the development pipeline. All manifests, images and containers are automatically added to the central tracking inventory so that a complete list of all artifacts of the build pipeline can be tracked at any moment in time.

3.4.2 – Enforce Security Configurations

Related NIST 800-53 Controls: CM-2 (1) & CM-8(1) & CM-6

Description: Establish and enforce security configuration settings for information technology products employed in organizational systems.

Implementation: Anchore Enterprise scans all container manifest files for security configurations and publishes found vulnerabilities to a centralized database that can be used for monitoring, ad-hoc reporting, alerting and/or automated policy enforcement.

3.4.3 – Monitor and Log System Changes with Approval Process

Related NIST 800-53 Controls: CM-3

Description: Track, review, approve or disapprove, and log changes to organizational systems.

Implementation: Anchore Enterprise provides a centralized dashboard that tracks all changes to applications which makes scheduled reviews simple. It also provides an automated controller that can apply policy-based decision making to either automatically approve or reject changes to applications based on security rules.

3.4.4 – Run Security Analysis on All System Changes

Related NIST 800-53 Controls: CM-4

Description: Analyze the security impact of changes prior to implementation.

Implementation: Anchore Enterprise can scan changes to applications for security vulnerabilities during the build pipeline to determine the security impact of the changes.

3.4.6 – Apply Principle of Least Functionality

Related NIST 800-53 Controls: CM-7

Description: Employ the principle of least functionality by configuring organizational systems to provide only essential capabilities.

Implementation: Anchore Enterprise can scan all applications to ensure that they are uniformly applying the principle of least functionality to individual applications. If an application does not meet this standard then Anchore Enterprise can be configured to prevent an application from being deployed to a production environment.

3.4.7 – Limit Use of Nonessential Programs, Ports, and Services

Related NIST 800-53 Controls: CM-7(1), CM-7(2)

Description: Restrict, disable, or prevent the use of nonessential programs, functions, ports, protocols, and services.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for specific security violations and prevent these applications from being deployed until the violations are remediated.

3.4.8 – Implement Blacklisting and Whitelisting Software Policies

Related NIST 800-53 Controls: CM-7(4), CM-7(5)

Description: Apply deny-by-exception (blacklisting) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (whitelisting) policy to allow the execution of authorized software.

Implementation: Anchore Enterprise can be configured as a gating agent that will apply a security policy to all scanned software. The policies can be configured in a black- or white-listing manner.

3.4.9 – Control and Monitor User-Installed Software

Related NIST 800-53 Controls: CM-11

Description: Control and monitor user-installed software.

Implementation: Anchore Enterprise scans all software in the development pipeline and records all user-installed software. The scans can be monitored in the provided dashboard. User-installed software can be controlled (allowed or denied) via the gating agent.

3.5.10 – Store and Transmit Only Cryptographically-Protected Passwords

Related NIST 800-53 Controls: IA-5(1)

Description: Store and transmit only cryptographically-protected passwords.

Implementation: Anchore Enterprise can scan for plain-text secrets in build artifacts and prevent exposed secrets from being promoted to the next environment until the violation is remediated. This prevents unauthorized storage or transmission of unencrypted passwords or secrets. See screenshot below to see this protection in action.

3.11.2 – Scan for Vulnerabilities

Related NIST 800-53 Controls: RA-5, RA-5(5)

Description: Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified.

Implementation: Anchore Enterprise is designed to scan all systems and applications for vulnerabilities continuously and alert when any changes introduce new vulnerabilities. See screenshot below to see this protection in action.
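As a small sketch of how this continuous monitoring can be wired up, Anchore deployments support subscriptions that trigger notifications when new vulnerability data affects an already-analyzed image; the image tag below is a placeholder and the vuln_update subscription type should be confirmed against your deployment’s documentation.

# Illustrative only: activate a vulnerability-update subscription for an
# already-analyzed image tag (placeholder tag shown).
anchorectl subscription activate docker.io/myorg/myapp:latest vuln_update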

3.11.3 – Remediate Vulnerabilities Respective to Risk Assessments

Related NIST 800-53 Controls: RA-5, RA-5(5)

Description: Remediate vulnerabilities in accordance with risk assessments.

Implementation: Anchore Enterprise can be tuned to allow or deny changes based on a risk scoring system.

3.12.2 – Implement Plans to Address System Vulnerabilities

Related NIST 800-53 Controls: CA-5

Description: Develop and implement plans of action designed to correct deficiencies and reduce or eliminate vulnerabilities in organizational systems.

Implementation: Anchore Enterprise automates the process of ensuring all software and systems are in compliance with the security policy of the organization. 

3.13.4 – Block Unauthorized Information Transfer via Shared Resources

Related NIST 800-53 Controls: SC-4

Description: Prevent unauthorized and unintended information transfer via shared system resources.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for unauthorized and unintended information transfer and prevent violations from being transferred between shared system resources until the violations are remediated.

3.13.8 – Use Cryptography to Safeguard CUI During Transmission

Related NIST 800-53 Controls: SC-8

Description: Implement cryptographic mechanisms to prevent unauthorized disclosure of CUI during transmission unless otherwise protected by alternative physical safeguards.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for CUI and prevent violations of organization defined policies regarding CUI from being disclosed between systems.

3.14.5 – Periodically Scan Systems and Real-time Scan External Files

Related NIST 800-53 Controls: SI-2

Description: Perform periodic scans of organizational systems and real-time scans of files from external sources as files are downloaded, opened, or executed.

Implementation: Anchore Enterprise can be configured to scan all external dependencies that are built into software and provide information about relevant security vulnerabilities in the software development pipeline. See screenshot below to see this protection in action.

Wrap-Up

In a world increasingly defined by software solutions, the cybersecurity posture of defense-related industries stands paramount. The CMMC, a framework with its varying levels of compliance, underscores the commitment of the defense industrial base to fortify its cyber defenses. 

As a multitude of organizations, ranging from the largest defense contractors to smaller mom-and-pop shops, work in tandem to support U.S. missions, the intricacies of maintaining cybersecurity standards grow. The questions posed exemplify the necessity to validate software integrity, especially in complex collaborations. 

Anchore Enterprise solves these problems by automating software supply chain security best practices. It not only automates a myriad of crucial controls, ranging from user privilege restrictions to vulnerability scanning, but it also empowers organizations to meet and exceed the benchmarks set by CMMC and NIST. 

In essence, as defense entities navigate the nuanced web of software development and partnerships, tools like Anchore Enterprise are indispensable in safeguarding the nation’s interests, ensuring the integrity of software supply chains, and championing the highest levels of cybersecurity.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Breaking Down NIST SSDF: Spotlight on PO.1 – Prepare the Organization

After the last blog post about SSDF, I decided to pick something much easier to write about, and it happens to be at the top of the list. We’ll cover PO.1 this time; it’s the very first control in the SSDF. PO stands for Prepare the Organization. The description is:

Define Security Requirements for Software Development (PO.1): Ensure that security requirements for software development are known at all times so that they can be taken into account throughout the SDLC and duplication of effort can be minimized because the requirements information can be collected once and shared. This includes requirements from internal sources (e.g., the organization’s policies, business objectives, and risk management strategy) and external sources (e.g., applicable laws and regulations).

How hard can it be to prepare the organization? Just tell them we’re doing it and it’s a job well done!

This is actually one of the most important steps, and one of the hardest, when creating a secure development program. When we create a secure development program, we really only get one chance with developers. What I mean by that is that if we keep changing what we’re asking developers to do, we create an environment that lacks trust, empathy, and cooperation. This is why the preparation stage is such an important step when trying to deploy the SSDF in your organization.

We all work for companies whose primary mission isn’t to write secure software; the actual mission is to provide a product or service to our customers. Writing secure software is one of the tools that can help with the primary mission. Sometimes as security professionals we forget this very important point. Security isn’t the purpose; security is part of what we do, or at least it should be part of what we do. It’s important that we integrate into the existing processes and procedures that our organization has. One of the reference documents for PO.1 is NIST 800-161, or Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations. It’s worth reading the first few sections of NIST 800-161 not for the security advice, but for the organizational aspects. It stresses the importance of cooperation and of getting the company to buy into a supply chain risk management program. We could say the days of security dictating mandates are over, but those days probably never really existed.

The steps to prepare the organization

The PO.1 control of the SSDF is broken into three pieces. The first two sections explain how to document the processes we’re going to create and implement. The third section revolves around communicating those documented processes. This seems obvious, but the reality is that it doesn’t happen on a regular basis. It’s really easy for a security team to create a process but never tell anyone about it. In many instances, telling people about the process is harder than creating it, and it’s harder still to bring that process and policy to another group, collect their feedback, and make sure they buy into it.

PO.1.1: Infrastructure security requirements

The first section is about documenting the internal infrastructure and processes. There’s also a mention in the first control to maintain this documentation over time. It’s possible your organization already has these documents in place; if so, good job! If not, there’s no better time to start than now. SANS has a nice library of existing security policy documents that can help get things moving. The intention isn’t to take these as-is and declare them your new security policy. You have to use existing documents as a guide and make sure you have agreement and understanding from the business. Security can’t show up with a canned document they didn’t write and declare it the new policy. That won’t work.

PO.1.2: Software security requirements

The second section revolves around how we’re going to actually secure our software and services. It’s important to note this control isn’t about the actual process of securing the software, but about documenting what that process will look like. One of the difficulties of securing the software you build is that no two organizations are the same. Documenting how we’re going to build secure software or how we’re going to secure our environment is a lot of work. OWASP has a nice toolbox they call SAMM, or the Software Assurance Maturity Model, that can help with this stage.

There’s no shortcut to building a secure development program. It’s a lot of hard work and there will be plenty of trial and error. The most important aspect will be getting cooperation and buy in from all the stakeholders. Security can’t do this alone.

PO.1.3: Communicate the requirements

The third section talks about communicating these requirements to the organization. How hard can communication be? A security team can create documentation and policy that is fantastic, but then put it somewhere the rest of the company might not know exists or, in some cases, might not even be able to access.

This is obviously a problem: everyone in the organization needs to be aware of the policy, know where to find it, know how to get help, and understand what it is. It can’t be stressed enough how important this stage is. If you tell your developers they have to follow your policy but it’s poorly written, hard to find, or never explained, those developers aren’t going to engage and they aren’t going to want to work with you.

Next steps

If you’re on a journey to implement the SSDF, or you’re just someone looking to start formalizing your secure development program, these are some steps you can start taking today. This blog series uses the SSDF as our secure development standard. You can start by reading the content NIST has published on its Secure Software Development Framework site. You can follow up on the SSDF content with a tour of the SANS policy templates. Most of these templates shouldn’t be used without customization; every company and every development team has unique processes and needs. The CSA has a nice document called the CAIQ, or Consensus Assessment Initiative Questionnaire, that can help create some focus on what you need. Combining this with a SAMM assessment would be a great place to start.

And lastly, whatever standard you choose, and whatever internal process you create, it’s important to keep in mind this is a fluid and ever-changing space. You will have to revisit decisions and assumptions on a regular basis. The SSDF standard will change, your business will change, everything will change. It’s the old joke that the only constant is change.

Want to better understand how Anchore can help? Schedule a demo here.

Josh Bressers

Josh Bressers is vice president of security at Anchore where he guides security feature development for the company’s commercial and open source solutions. He serves on the Open Source Security Foundation technical advisory council and is a co-founder of the Global Security Database project, which is a Cloud Security Alliance working group that is defining the future of security vulnerability identifiers.

NIST SP 800-53, the Control Catalog: A Guide in Plain English

This blog post has been archived and replaced by a supporting pillar page; visitors are automatically redirected there.

NIST 800-37, the Risk Management Framework: A Guide in Plain English

This blog post has been archived and replaced by a supporting pillar page; visitors are automatically redirected there.

Four Signs You’re Ready to Upgrade from DIY Supply Chain Security to Anchore Enterprise

Build versus buy is always a complex decision for most organizations. Typically there is a tipping point when the friction of building and running your own tooling outweighs the cost benefit of abstaining from adding yet another vendor to your SaaS bill. The signals that point to when an organization is approaching this moment vary based on the tool you’re considering.

In this blog post, we will outline some of the common signals that your organization is approaching this point for managing software supply chain risk. Whether your developers have self-adopted software development best practices like creating software bills of materials (SBOMs) and you’re now drowning in an ocean of valuable but scattered security data, or you’re ready to start scaling your shift-left security strategy across your entire software development life cycle, we will cover these scenarios and more.

Challenge Type: Scaling SBOM Management

Managing SBOMs is getting out of hand. Each day there is more SBOM data to sort and store. SBOM generation is by far the easiest capability to implement today. It’s free, extremely lightweight (low learning curve for engineers to adopt, unlike some enterprise products), and it’s fast…blazing fast! As a result, teams can quickly generate hundreds, thousands (even millions!) of SBOMs over the course of a fiscal year. This is great from a security perspective but creates its own data management problems.

Once the friction of creating SBOMs becomes trivial, teams typically struggle to find good ways to store and manage all of this new data. As with any other data source, questions arise about how long to retain the data, how to query it for security-related issues, and how to integrate it with third-party tooling to glean actionable security insights. Once teams have fully adopted SBOM generation in a few areas, it is good practice to consider the best way to manage the data so your developers’ time is not spent in vain.
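To make this concrete, here is a minimal sketch of the kind of ad-hoc querying teams end up doing against a directory of Syft JSON SBOMs; the sboms/ directory and the openssl package name are hypothetical.

# Illustrative only: quick ad-hoc queries over stored Syft JSON SBOMs.
jq '.artifacts | length' sboms/*.json        # how many packages does each SBOM record?
grep -l '"name": "openssl"' sboms/*.json     # which SBOMs mention a given package?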

Anchore Enterprise helps in a variety of ways: not just managing SBOMs, but also detecting SBOM drift in the build process and alerting security teams to changes in SBOMs so they can be assessed for risk or malicious activity.

Challenge Type: Regulatory Compliance

Let’s say that you just got a massive policy compliance mandate dropped in your lap by your manager. It’s your job to implement it within the allotted deadline, and you’re not sure where to start.

As we’ve talked about in other posts, meeting compliance standards is more than a full-time job. Organizations have to make the decision to either DIY compliance or work with third parties that have expertise in specific standards. With the debut of revision 5 of NIST 800-53, the “Control Catalog”, more and more compliance standards require companies to implement controls that specifically address software supply chain security. This is due to the fact that many federal compliance standards build off of the “Control Catalog” as the source of truth for secure IT systems.

Whether it’s FedRAMP, a compliance framework related to NIST 800-53, or something as simple as a CIS benchmark, Anchore can help. Anchore Enterprise’s SBOM solution offers automated policy enforcement across your software supply chain, enforcing compliance frameworks on your source code repos, images in development, and runtime Kubernetes clusters.

Challenge Type: Zero-Day Response

When a zero-day vulnerability is discovered, how do you answer the question “Am I vulnerable?” Depending on how well you have structured your security practice, answering that question can take anywhere from an hour to a week or more. The longer that window, the more risk your organization accrues. Once a zero-day incident occurs, it is very easy to spot the organizations that are prepared and those that are not.

If you haven’t figured it out yet, the retention and centralized management of SBOMs is probably one of the most useful tools in modern incident response plans for identifying and triaging zero-day incidents. Even though software teams are empowered to make decentralized decisions, they can still adhere to security principles that benefit from a centralized data store. This type of centralization allows organizations to answer critical questions with speed at critical moments in the life of an organization.

Anchore Enterprise helps answer the question “Am I vulnerable?”, and it does so in minutes rather than days or weeks. By creating a centralized store of software supply chain data (via SBOMs), Anchore Enterprise allows organizations to quickly query this information and get back precise answers on whether a vulnerable package exists within the organization and exactly where to focus remediation efforts. We also provide hands-on training that takes our customers through tabletop exercises in a controlled environment. By simulating a zero-day incident, we test how well an organization is prepared to handle an uncontrolled threat environment and identify the gaps that could lead to extended uncertainty.
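As a minimal open source sketch of the underlying workflow, a stored SBOM can be re-scanned against the latest vulnerability database without rebuilding or re-pulling the image; the file path and CVE identifier below are placeholders.

# Illustrative only: re-scan a previously stored SBOM with the current
# vulnerability database and check for a specific CVE (placeholder ID).
grype sbom:./sboms/payments-service.json -o json \
  | jq -r '.matches[].vulnerability.id' \
  | grep CVE-2024-XXXX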

Challenge Type: Scaling a Shift Left Security Culture 

The shift-left security movement was based on the principle that organizations can preempt security incidents by implementing secure development practices earlier in the software development lifecycle. The problem with this approach arises as you attempt to scale it: the more gates you put in place to catch security vulnerabilities earlier in the life cycle, the more the software development process slows and the more security resources are required.

In order to scale shift-left security practices, organizations need to adopt software-based solutions that automate these checks and allow developers to self-diagnose and remediate vulnerabilities without significant intervention from the security team. The earlier in the software development process that vulnerabilities are caught, the faster secure software can be shipped.

Anchore enables organizations to scale their shift left security strategy by automating security checks at multiple points in the development life cycle. On top of that, due to the speed that Anchore can run its security scans, organizations can check every software artifact in the development pipeline without adding significant friction. Checking every deployed image during integration (CI), storage (registry) and runtime (CD) allows Anchore to scale a continuous security program that significantly reduces the potential for a vulnerable application to find its way to production where it can be exploited by a malicious adversary. The Anchore Enterprise runtime monitoring capabilities allow you to see what is running in your environment, detect issues within those images, and prevent images that fail policy checks from being deployed in your cluster or runtime environment.
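As a rough sketch of what one of these automated checks can look like at the CI stage, using the open source tools that underpin Anchore Enterprise (the image name, environment variable, and severity threshold are illustrative):

# Illustrative CI step: generate an SBOM for the freshly built image and fail
# the pipeline if any vulnerability of high or critical severity is found.
syft myorg/myapp:"${CI_COMMIT_SHA}" -o cyclonedx-json > sbom.json
grype sbom:./sbom.json --fail-on high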

Wrap-Up

The landscape of software supply chain security is increasingly complex, underscored by the rapid proliferation of SBOMs, rising compliance standards, and evolving security threats. Organizations today face the dilemma of scaling in-house security tools or seeking more streamlined and comprehensive solutions. As highlighted in this post, many of the above signals might indicate that it’s time for your organization to transition from DIY methods to a more robust solution. 

Anchore Enterprise was developed to overcome the challenges that are most common to organizations. With its focus on aiding organizations in scaling their shift-left security strategies, Anchore not only ensures compliance but also facilitates faster and safer software deployment. Even though each organization has its own set of unique challenges pertaining to software supply chain security, Anchore Enterprise is ready to enable organizations to mitigate and respond to these challenges.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Software Supply Chain Hierarchy of Needs: SBOMs as the Foundation

Software serves as a powerful tool that simplifies complex and technical concepts, but with the incredible power of software comes an interconnected labyrinth of software dependencies that often forms the foundation for innovative applications. These dependencies are not without their pitfalls, as we’ve learned from incidents like Log4Shell. As we navigate the ever-evolving landscape of software supply chain security, we need to ensure that our applications are built on strong foundations.

In this blog post, we delve into the concept of a Software Bill of Materials (SBOM) as a foundational requirement for a secure software supply chain. Just as a physical supply chain is scrutinized to ensure the quality and safety of a product, a software supply chain also requires critical evaluation. What’s at stake isn’t just the functionality of an application, but the security of information that the application has access to. Let’s dive into the world of software supply chains and explore how SBOMs could serve as the bedrock for a more resilient future in software development and security.

What are Software Supply Chain Attacks?

Supply chain attacks are malicious strikes that target the suppliers of components of an application rather than the application itself. Software supply chains are similar to physical supply chains. When you purchase an iPhone all you see is the finished product. Behind the final product is a complex web of component suppliers that are then assembled together to produce an iPhone. Displays and camera lenses from a Japanese company, CPUs from Arizona, modems from San Diego, lithium ion batteries from a Canadian mine; all of these pieces come together in a Shenzhen assembly plant to create a final product that is then shipped straight to your door.

In the same way that an attacker could target one of the iPhone suppliers to modify a component before the iPhone is assembled, a software supply chain threat actor could do the same but target an open source package that is then built into a commercial application. This is a problem when 70-90% of modern applications are built using open source software components. Given this, the supply chain is only as secure as its weakest link. The image below of the iceberg has become a somewhat overused meme of software supply chain security, but it has become overused precisely because it explains the situation so well.

Below is the same idea but without the floating ice analogy. Each layer is another layer of abstraction that is further removed from “Your App”. All of these dependencies give your software developers the superpower to build extremely complex applications very quickly, but with the unintentional side effect that they cannot possibly understand all of the ingredients that are coming together.

This gives adversaries their opening. A single compromised package allows attackers to manipulate all of the packages “downstream” of their entrypoint.

This reality was viscerally felt by the software industry (and all industries that rely on the software industry, meaning all industries) during the Log4j incident.

Log4Shell Impact

Log4Shell is the poster child for the importance of software supply chain security. We’re not going to go deep on the vulnerability in this post; we have already done that in a number of other posts. Instead, we’re going to focus on the impact of the incident for organizations that had instances of Log4j in their applications and what they had to go through in order to remediate this vulnerability.

First let’s brush up on the timeline:

The vulnerability in Log4j was originally privately disclosed on November 24. Five days later, a pull request was published to close the vulnerability, and a week after that the new package was released. The official public disclosure happened on December 10. This is when the mayhem began and companies started the work of determining whether they were vulnerable and figuring out how to remediate the vulnerability.

On average, impacted individuals spent ~90 hours dealing with the Log4j incident. Roughly 20% of that time was spent identifying where the log4j package was deployed into an application. 

From conversations with our customers and prospects, the primary reason this step took up such a large portion of time was whether or not an organization had a central repository of metadata about the software dependencies used to build their applications. For customers that did have a central repository and a way to query it, identifying which applications contained the vulnerable Log4j package took 1-2 hours instead of the 20+ hours seen at other organizations. This is the power of having SBOMs in place for all of your software, along with a tool to help with SBOM management.
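For teams without a commercial platform, even a crude version of this capability beats searching hosts by hand; the sketch below assumes one Syft JSON SBOM per application in a sboms/ directory and searches them for the affected package.

# Illustrative only: find every stored SBOM that records log4j-core.
for f in sboms/*.json; do
  if jq -e 'any(.artifacts[]; .name == "log4j-core")' "$f" > /dev/null; then
    echo "log4j-core present in: $f"
  fi
done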

What is a Software Bill of Materials?

Similar to the nutrition labels on the back of the foods that you buy, SBOMs are a list of the ingredients that go into the software your applications consume. We normally think of SBOMs as an artifact of the development process: as developers manufacture their application from different dependencies, they are also building a recipe based on those ingredients. In reality, an SBOM can (and should) be generated at every step of the build pipeline. Source, builds, images, and production software can all be used to generate an SBOM.
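In practice, generating those SBOMs at each stage can be as simple as pointing the same tool at different targets; the paths and image tag below are placeholders.

# Illustrative only: generate SBOMs at different stages of the pipeline.
syft dir:./my-app -o spdx-json > sbom-source.json        # source checkout
syft myorg/my-app:1.2.3 -o spdx-json > sbom-image.json   # built container image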

Similar to Maslow’s hierarchy of needs, software supply chain management has an analogous hierarchy of needs.

At the base of the pyramid are the contents of the application, in other words, SBOMs about the application. The diagram below shows all of the layers of the proposed hierarchy of software supply chains.

By using an SBOM as the foundation of the pyramid, organizations can ensure that all of the additional security features they layer on top of this foundation will stand the test of time. We can only know that our software is free from known vulnerabilities if we have confidence in the process that was used to generate the “ingredients” label. Signing software to prove that a package hasn’t been tampered with is only meaningful if the signed software is also free of known vulnerabilities. Signing a vulnerable package or image only proves that the software hasn’t been tampered with from that point forward; it can’t look back retrospectively and validate that the packages that came before are secure without the help of an SBOM or a vulnerability scanner.

What are the Benefits of this Approach?

Utilizing Software Bills of Materials (SBOMs) as the foundational element of software supply chain security brings several substantial benefits:

  1. Transparency: SBOMs provide a comprehensive view of all the components used in an application. They reveal the ‘ingredients’ that make up the software, enabling teams to understand the entire composition of their applications, including all dependencies. No more black box dependencies and the associated risk that comes with it.
  2. Risk Management: With the transparency provided by SBOMs, organizations can identify potential security risks within the components of their software and address them proactively. This includes detecting vulnerabilities in dependencies or third-party components. SBOMs allow organizations to standardize their software supply chain, which enables an automated approach to vulnerability and risk management.
  3. Quick Response to Vulnerabilities: When a new vulnerability is discovered in a component used within the software, SBOMs can help quickly identify all affected applications. This significantly reduces the time taken to respond and remediate these vulnerabilities, minimizing potential damages. When an incident occurs, NOT if, the organization is able to rapidly respond to the breach and limit the impact.
  4. Regulatory Compliance: Regulations and standards are increasingly requiring SBOMs for demonstrating software integrity. By incorporating SBOMs, organizations can ensure they meet these cybersecurity compliance requirements, especially when working with highly regulated industries like the federal government, financial services, and healthcare.
  5. Trust and Verification: SBOMs facilitate trust and confidence in software products by allowing users to verify the components used. They serve as a ‘proof of integrity’ to customers, partners, and regulators, showcasing the organization’s commitment to security. They also enable higher-level security abstractions like signed images or source code to inherit the foundational security guarantees provided by SBOMs.

By putting SBOMs at the base of software supply chain security, organizations can build a robust structure that’s secure, resilient, and efficient.

Building on a Strong Foundation

The utilization of Software Bills of Materials (SBOMs) as the bedrock for secure software supply chains provides a fundamental shift towards increased transparency, improved risk management, quicker responses to vulnerabilities, heightened regulatory compliance, and stronger trust in software products. By unraveling the complex labyrinth of dependencies in software applications, SBOMs offer the necessary insight to identify and address potential weaknesses, thus creating a resilient structure capable of withstanding potential security threats. In the face of incidents like Log4Shell, the industry needs to adopt a proactive and strategic approach, emphasizing the creation of a secure foundation that can stand the test of time. By elevating the role of SBOMs, we are taking a crucial step towards a future of software development and security that is not only innovative but also secure, trustworthy, and efficient. In the realm of software supply chain security, the adage “knowing is half the battle” couldn’t be more accurate. SBOMs provide that knowledge and, as such, are an indispensable cornerstone of a comprehensive security strategy.

If you’re interested in learning about how to integrate SBOMs into your software supply chain the Anchore team of supply chain security specialists are ready and willing to discuss.

Customizing Grype Vulnerability Reports With Templates

If you’ve used Grype to scan your images and SBOMs for vulnerabilities, you might be familiar with some of the output formats that Grype can produce. The standard output is a simple tabular format. If you want more detail, or if you want to produce a machine-parsable output, you can use the “-o” option to get reports in a more comprehensive Grype JSON output, or reports based on the CycloneDX standard in either JSON or XML.

If those aren’t suitable for your needs, there is another option, “-o template”, which allows you to specify your own completely customized template based on the Go templating language. If you have developed a useful template that you’d like to share with others, we have a place for community-contributed templates in the Grype source code on GitHub.

How to Build a Template

To create a template, create a text file describing your desired output using the Go template syntax. There are a couple of simple templates included in the Grype source code that you can use as a starting point, including the standard table-based output you see when you run Grype without setting an output format. The template that generates the table is here: templates/table.tmpl

There is also a very simple template that generates CSV (comma separated values): templates/csv.tmpl

"Package","Version Installed","Vulnerability ID","Severity"
{{- range .Matches}}
"{{.Artifact.Name}}","{{.Artifact.Version}}","{{.Vulnerability.ID}}","{{.Vulnerability.Severity}}"
{{- end}}

This template produces this output:

"Package","Version Installed","Vulnerability ID","Severity"
"coreutils","8.30-3ubuntu2","CVE-2016-2781","Low"
"libc-bin","2.31-0ubuntu9","CVE-2016-10228","Negligible"
"libc-bin","2.31-0ubuntu9","CVE-2020-6096","Low"
...
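To produce output like this yourself, point Grype at an image and pass the template file with the template output format (the image shown is just an example):

grype ubuntu:20.04 -o template -t templates/csv.tmpl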

Grype templates can use the standard functions from Go’s text/template package, so if you need to do more processing in your template, you can. In addition to the standard Go functions, Grype also includes the utility functions from sprig.
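For example, a small variation on the CSV template above can use the built-in len function and sprig’s upper to add a summary line and normalize severity casing; this is only a sketch of the mechanics:

Total matches: {{ len .Matches }}
{{- range .Matches}}
"{{.Artifact.Name}}","{{.Artifact.Version}}","{{.Vulnerability.ID}}","{{.Vulnerability.Severity | upper}}"
{{- end}}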

(Please note that templates can access information about the system they are running on, such as environment variables. You should never run untrusted templates.)

Contribute Your Own Templates

Have you developed a template for Grype’s output that you have found useful? If you think other people might also benefit from it, consider sending us a pull request to include it in Grype’s user-contributed templates directory. Come chat with us on Discourse or join our community meeting every other week if you have questions or suggestions.

Anchore OSS Now Supports Microsoft’s Azure Linux

Recently, Microsoft announced the general availability of Azure Linux, a new open source Linux distribution maintained by Microsoft. A core principle of this new offering is to keep the set of OS packages small and lightweight, in support of some of the key characteristics of the distribution as described in the Azure Linux GA Announcement and Azure Linux Intro pages.

Because Azure Linux prominently highlights security in its list of core benefits, the Anchore team was thrilled to collaborate with Microsoft. Through the open source community, we’re bringing support for Azure Linux into Anchore’s open source tool set, for end-to-end support of Azure Linux that includes:

  • New capabilities in Syft to generate SBOMs from Azure Linux OS container images and hosts.
  • A new provider for Vunnel that ingests Microsoft’s open vulnerability data feed for Azure Linux, ensuring accurate package-level vulnerability matching for their new operating system.
  • Assurance that Azure Linux SBOMs, whether generated directly or shared between organizations, can be used to generate vulnerability reports with information about available fixes from Microsoft, using Grype.

Syft, Grype and Vunnel support an ever-growing set of operating systems, but since significant new OS releases are infrequent, we want to highlight the concepts and technical process involved in bringing a new operating system into Anchore’s OSS toolset. In this post, we’ll learn how Syft detects a supported OS and then catalogs that particular OS’s software package type, and, critically, how to teach Grype to recognize vulnerabilities for a new distribution.

NOTE: throughout this post, you might see ‘mariner’ used in technical areas that involve the name of the OS distribution. While the formal name of the new OS is Azure Linux, the internal data sources (/etc/os-release, vulnerability namespaces) use ‘mariner’ to identify the OS name.

How does Grype recognize vulnerabilities in a new distribution?

In order to scan an image (or other filesystem) for vulnerabilities, Syft and Grype work together. Syft generates a Software Bill of Materials (SBOM), which is a manifest of all the packages we can detect in the image, and Grype reports vulnerabilities by comparing this list with a database of known vulnerabilities. But where does this database come from?

The first piece of the puzzle is a vulnerability feed. A vulnerability feed is a feed of new information, similar in many ways to how blogs and podcasts might have RSS feeds. The maintainers of a distribution often publish vulnerabilities to one of these feeds as they learn of them. However, there is a lot of variety in how these vulnerabilities are reported: how often are new vulnerabilities posted? How is the vulnerable software described? How does the feed reflect that a vulnerability has been fixed in a newer version?

In order to use these vulnerability feeds, Grype needs an aggregated and normalized view of the data they report. Enter Vunnel. Vunnel is a new, open source vulnerability feed aggregation tool from Anchore. When we need to add data from a new set of vulnerabilities to Grype, the first step is to build a provider for the feed in Vunnel. A Vunnel provider is a Python module inside Vunnel that knows how to download and normalize the vulnerabilities from a particular feed.

So in today’s announcement, when we say Grype supports Azure Linux, what we mean is that Vunnel now has a vulnerability feed provider that pulls in vulnerabilities relating to Azure Linux and its packages, and makes them available to Grype. Let’s take a look at what went into adding this support end-to-end.

Detecting the Distro

Azure Linux is an RPM (Red Hat Package Manager) based Linux distribution. Since Syft has long-standing support for other RPM-based distributions, we need to ensure that when an Azure Linux system is scanned, the distro identifiers are picked up. This allows Syft to know how to correctly catalog the software installed using regular distro packaging mechanisms (in addition to non-distro software). This information will be used later by Grype during a vulnerability scan. Last year, support for Mariner was added to Syft via contributions from the community here, here and here! (Can we just say, the open source community is awesome!)

We can see what this looks like by inspecting the ‘distro’ section of an SBOM generated against a mariner linux container image, like so:

$ syft -q -o json mcr.microsoft.com/cbl-mariner/base/core:2.0 | jq '.distro'

{
  "prettyName": "CBL-Mariner/Linux",
  "name": "Common Base Linux Mariner",
  "id": "mariner",
  "version": "2.0.20230518",
  "versionID": "2.0",
  "homeURL": "https://aka.ms/cbl-mariner",
  "supportURL": "https://aka.ms/cbl-mariner",
  "bugReportURL": "https://aka.ms/cbl-mariner"
}

Cataloging the Distro Software

Now that we have the distro, and Syft knows that this particular distro uses RPM to manage packages, Syft will automatically find and catalog any discovered RPMs and add them to the ‘artifacts’ section of the generated SBOM. We won’t show all of the package metadata that Syft captures during analysis (it’s quite a lot!), but importantly we can see that the Mariner RPMs are being identified correctly:

$ syft -q -o json mcr.microsoft.com/cbl-mariner/base/core:2.0 | jq '.artifacts[0].metadata' | more

{
  "name": "bash",
  "version": "5.1.8",
  "epoch": null,
  "architecture": "aarch64",
  "release": "1.cm2",
  "sourceRpm": "bash-5.1.8-1.cm2.src.rpm",
  
  
}

Understanding the Vulnerability Data

Grype has logic to determine the best vulnerability data source for any given software element in a provided SBOM. In the case of operating-system-managed software, where the OS maintainer additionally provides a high quality vulnerability data feed (as is the case for Azure Linux), we need to find and understand that data source, both in terms of access and in terms of format and data characteristics. For Azure Linux, we collaborated with the Azure team, who let us know that this repository on GitHub is one place where feeds appropriate for this use case are made available.

Add a Provider to Vunnel

The last step in this journey is to implement a provider in Vunnel that takes vulnerability feed data on one side and normalizes it into an intermediate form that the Grype DB system understands, which then ultimately gets packaged up and made available to all users of Grype. Because of the way syft -> vunnel -> grype-db -> grype are designed, as long as there is agreement between the ‘distro’ that Syft is storing and the ‘namespace’ that ultimately ends up in the Grype vulnerability DB, Grype will automatically begin matching any ‘mariner distro RPMs’ against the ‘mariner vulnerability data’ to get the most accurate results available. Take a look at this thread in grype and the related links to vunnel to see the discussion and resulting provider implementation. Once completed, we can see the new Vunnel provider doing its work when asked to create a normalized set of data that grype-db can understand:

$ vunnel run mariner

[INFO ] running mariner provider
[INFO ] wrote 4025 entries
[INFO ] recording workspace state

Verify Matching in Grype

With all the pieces in hand, all that’s left is to see them come together. Using a very old Mariner Linux container image (so that we have some examples of vulnerabilities that have since been fixed) shows a successful outcome:

$ grype -q mcr.microsoft.com/cbl-mariner/base/core@sha256:e20e222517e903144f01f4503ca6d5ab5f575669f7ac402bb85e2a1917511bf0


curl            7.82.0-1.cm2   0:7.86.0-1.cm2   rpm   CVE-2022-35252  Low
curl-libs       7.82.0-1.cm2   0:7.86.0-1.cm2   rpm   CVE-2022-35252  Low

With that final step, showing the culmination of Syft, Vunnel, and Grype interoperating to provide a distro-aware vulnerability scan of mariner linux systems, our journey of adding support for a new Linux distribution is complete!

Grype Now Supports the Azure Linux Vulnerability Feed

Azure Linux, also known as Mariner Linux, is an open source Linux distribution from Microsoft, optimized for performance on Azure. In this post, we’ve described the process of adding the Azure Linux vulnerability feed to Vunnel, Anchore’s open source tool for importing and normalizing different vulnerability feeds. This allows Grype to scan for vulnerabilities from this feed. Versions of Grype 0.62.0 and later support this new feed, and can find vulnerabilities reported against Azure Linux packages.

It’s great to see Anchore’s Syft and Grype add support for Azure Linux. Having accurate vulnerability data is critical for our customers, and these tools provide comprehensive open source options.

– Jim Perrin, Principal PM, Azure Linux

So if you’re using, or planning to use, Azure Linux, try out Syft and Grype for your SBOM and vulnerability scanning needs! Finally, since the full end-to-end stack (syft, vunnel, grype-db, grype, Azure Linux itself, and the Azure Linux vulnerability feeds) is entirely open source, we hope that this post can encourage community members to build providers for new vulnerability feeds just as described here, so that Grype can detect vulnerabilities from those feeds as well.

To get started, please open an issue and read the DEVELOPING.md from Vunnel.

From Code to Cloud: Anchore Delivers SBOM-Powered SCA

Anchore was launched in 2016 to address the software complexity that was growing exponentially as a result of the increasing use of container-based applications. To provide insight into the security status of containers, Anchore focused on generating the most complete picture of the contents in the container. This includes (but is not limited to) generating a high-fidelity software bill of materials (SBOM).

Traditionally, generating SBOMs has been an implicit function of tools known as software composition analysis (SCA). These tools were originally developed in the late 90s/early 00s to focus on software licenses checks in source code. Vulnerability management for source code was bolted on later.

However, the SCA approach from this era no longer works. There is too much software, shipping too rapidly, with too much complexity for them to scan adequately.

Now, software is modified and shipped multiple times a day. Open source software now forms the majority of the content of any modern application. Attackers are using innovative supply chain attacks such as registry spoofing to obfuscate content. Containers are the default. And, finally, new compliance controls driven by the US government are putting additional burdens on software transparency.

Today, we are launching our new website to address this new reality with a modern, SBOM-powered SCA product that offers a more effective approach to the challenges of software transparency. This approach has been recognized by major Fortune 500 enterprises and leaders in the public sector across the US, UK, and Australia.

Anchore Enterprise is focused on cloud-native applications. At Anchore we recognize that SBOMs have to be generated and scanned at every step of the process from CI/CD to registry to production. That’s why we put federal compliance at the heart of our policy engine.

This approach enables a variety of solutions. Whether you are trying to modernize your team with DevSecOps practices, address board-level concerns about the software supply chain after Log4j, or sell to the U.S. government in the wake of the Biden Executive Order, we have you covered.

Contact us today for more info on how Anchore Enterprise can work for you.

Amazon ECS and Anchore Enterprise: Big Updates

Until now, Anchore’s primary runtime focus has been enabling deep vulnerability analysis through the generation of software bills of materials (SBOMs) for images that are built and deployed in Kubernetes. Kubernetes is one of the most widely used container orchestration platforms in the industry, but Anchore recognizes that our customers use other platforms as well.


Amazon Elastic Container Service (ECS) is another powerful container orchestration platform, but until now its API functionality limited container scanning outside of the AWS ecosystem. Before this major update in Anchore Enterprise 4.8, those using ECS for their runtime environment were unable to perform vulnerability analysis on their images unless the images were hosted in Amazon Elastic Container Registry (ECR).

Now that AWS has made the necessary updates to its API, users can gather their inventory of images in use from ECS and run vulnerability scans via our new anchore-ecs-inventory agent (downloadable here) for any registry.


Explore the code more in-depth here.

How to Deploy

`anchore-ecs-inventory`, just like `anchore-k8s-inventory`, can be deployed via a Helm chart into any Kubernetes environment with access to AWS and your Anchore deployment. You can install the chart via:

helm repo add anchore https://charts.anchore.io

helm install ecs-inventory anchore/ecs-inventory

An example of values.yaml can be found here.


The ECS runtime agent gathers data via the AWS API, so there is no need for it to be collocated with your runtime environment. However, should you want the agent to run directly on ECS and report back to Anchore Enterprise, we have an example task definition that can be used in the docs here.

Subscribe to Watch ECS Inventory to Auto Analyze

It’s possible to create a subscription to watch for new ECS Inventory that is reported to Anchore and automatically schedule those images for analysis. A subscription can be created by sending a POST to /v1/subscriptions with the following payload:

{
  "subscription_key": "<SUBSCRIPTION_KEY>",
  "subscription_type": "runtime_inventory"
}

Curl example:

curl -X POST -u USERNAME:PASSWORD --url ANCHORE_URL/v1/subscriptions --header 'Content-Type: application/json' --data '{
  "subscription_key": "arn:aws:ecs:eu-west-2:123456789012:cluster/myclustername",
  "subscription_type": "runtime_inventory"
}'

The subscription_key can be set to any part of an ECS ClusterARN. For example, setting the subscription_key to:

  • the full ClusterARN arn:aws:ecs:us-east-1:012345678910:cluster/telemetry will create a subscription that only watches this cluster
  • the partial ClusterARN arn:aws:ecs:eu-west-2:988505687240 will create a subscription that watches every cluster within the account 988505687240

After a subscription has been created, it needs to be activated. This can be achieved with anchorectl:

anchorectl subscription activate <SUBSCRIPTION_KEY> runtime_inventory

To verify that you are tracking ECS inventory, you can access inventory results with the command anchorectl inventory list and look for results where the TYPE is ecs.

Reporting

If you navigate to the Reports tab in the Anchore Enterprise UI, you will now be able to see the new “Vulnerabilities by ECS Container” report under the Templates section.

Once selected, you will be able to adjust the specific filters and criteria you want to set for your report.

After saving your template, you will be able to query the template and generate a report.

We hope this post provides you with the insights needed to get started with conducting vulnerability analysis in your ECS runtime environment.

Breaking Down NIST SSDF: Spotlight on PW.6 – Build Systems

This is part two of control PW.6. Part one was Breaking Down NIST SSDF: Spotlight on PW.6 Compilers and Interpreter Security.

In this part of the long-running series breaking down the NIST Secure Software Development Framework (SSDF), also known as NIST 800-218, we are going to discuss the build system portion of PW.6.

To review the text of PW.6:

PW.6.1: Use compiler, interpreter, and build tools that offer features to improve executable security.

PW.6.2: Determine which compiler, interpreter, and build tool features should be used and how each should be configured, then implement and use the approved configurations.

We covered compilers and interpreters last time; this time we will focus on build systems.

PW.6.1

Example 1: Use up-to-date versions of compiler, interpreter, and build tools.

Example 2: Follow change management processes when deploying or updating compiler, interpreter, and build tools, and audit all unexpected changes to tools.

Example 3: Regularly validate the authenticity and integrity of compiler, interpreter, and build tools. See PO.3.

PW.6.2

Example 1: Enable compiler features that produce warnings for poorly secured code during the compilation process.

Example 2: Implement the “clean build” concept, where all compiler warnings are treated as errors and eliminated except those determined to be false positives or irrelevant.

Example 3: Perform all builds in a dedicated, highly controlled build environment.

Example 4: Enable compiler features that randomize or obfuscate execution characteristics, such as memory location usage, that would otherwise be predictable and thus potentially exploitable.

Example 5: Test to ensure that the features are working as expected and are not inadvertently causing any operational issues or other problems.

Example 6: Continuously verify that the approved configurations are being used.

Example 7: Make the approved tool configurations available as configuration-as-code so developers can readily use them.

If we review the references, we find a massive swath of suggestions: everything from code signing, to obfuscating binaries, to handling compiler warnings, to threat modeling. The net was cast wide on this one. Every environment is different. Every project or product uses its own technology. There’s no way to “one size fits all” this control. This is one of the challenges that has made compliance for developers so very difficult in the past, and it remains extremely difficult today. We have to determine how this applies to our environment, and the way we apply this control will be drastically different from the way someone else applies it.

We’re splitting this topic along the lines of build environments and compiler/interpreter security. The first blog focused on compiler security, which is an easy topic to understand. For this post, we are going to focus on what the SSDF means for build tools, which is not at all well defined or obvious. Of course, you will have to review the guidance and understand what makes sense for your environment; everything we discuss here is for example purposes only.

The build system guidance isn’t very complete at all. None of the suggested standards have a strong focus on build systems. We don’t even get a definition of what a build system is, just that we need one. Many of us have a continuous integration and continuous delivery/continuous deployment (CI/CD) system now; that’s basically what counts as a build system in most instances. However, if you looked at two different CI/CD configurations, they would almost certainly be drastically different.

This is a really hard topic to discuss in a sensible manner; it’s easy to see why the SSDF sort of hand-waves this one away.

We’ve historically focused on secure development while ignoring much of what happens before and after development. This is starting to change, especially with things like SSDF, but there’s still a long way to go. There’s a reason build systems are often attacked yet have so little hardening guidance available. Build systems are incredibly complicated, poorly understood, and hard to lock down.

One of the only resources that specifically addresses the build system is the Cloud Native Computing Foundation Software Supply Chain Security Paper (CNCFSSCP). It hasn’t been updated in over two years at the time of this post. We will also use a standard known as Supply-chain Levels for Software Artifacts, or SLSA (“salsa”), for this discussion; it’s being actively worked on as part of the Open Source Security Foundation (OpenSSF). While SLSA is one way to measure build system standards, it is very new, and we will need more data on which controls are effective and which are not. Much of this guidance lacks scientific rigor at this point, but there are sensible things we can do to avoid attacks against our build systems.

The honest reality is that securing a build system is still in its infancy; that’s why the SSDF guidance is so squishy. It will get better, and many people are working on these problems. Keep this in mind as you hear advice and suggestions around securing build systems. Much of the current guidance is conjecture.

Secure the build

After that intro, what possible advice is there to give? Things sound pretty rough out there.

Let’s split this guidance into two pieces: the security of the build system itself, meaning the actual computers and software that run it, and the scripts and programs that make up the build process.

For the security of the hardware, let someone else do the hard work. This sounds like strange advice, but it’s the best there is. You can get access to many CI systems that are run by experienced professionals. If you’re a GitHub customer, which many of us are, GitHub Actions is available at no added cost. There are far too many CI systems available to even try to list them here. All of these services remove the burden of locking down our own build infrastructure, and locking down and monitoring build systems is a really hard job.

Securing the build system, the scripts and programs that build our software, is a much different and less obvious challenge. Building an application could be everything from turning source code into a binary to packaging up HTML into a container image. Even SLSA and the CNCFSSCP don’t really give concrete guidance here.

SLSA has a nugget of wisdom here which they refer to as provenance. They define provenance as “Attestation (metadata) describing how the outputs were produced, including identification of the platform and external parameters.” If we turn that into something easier to understand, let’s call it “log everything”.

We’re not really at a point, either in technology or in understanding, where we can secure build systems without gigantic teams doing a lot of heavy lifting. There are organizations that have secure build infrastructure, but that security is in spite of the tooling, not because of it.

Don’t prevent, detect

When we think about securing our build systems, preventing attacks seems like the obvious goal. Today, though, prevention is not realistic; there aren’t even standards to describe where to start. What we can focus on today is detection. If you keep a log of your build, that log can be revisited at a later date: if an indicator of compromise emerges in the future, or as new technologies and practices emerge, old logs can be analyzed. Use those logs to look for attacks, or mistakes, or just bad practices that need to be fixed.
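
To make “log everything” slightly more concrete, here is a minimal sketch of a build wrapper that records provenance-style metadata about each build. This is not SLSA tooling or anything Anchore ships; every name in it is hypothetical, and it exists only to illustrate the kind of record worth keeping:

#!/usr/bin/env python3
"""Hypothetical build wrapper: run a build command and log everything about it."""
import datetime
import hashlib
import json
import platform
import subprocess
import sys


def sha256_of(path):
    # digest the file so we can later prove (or disprove) that inputs were unchanged
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def logged_build(command, inputs, log_path="build-log.jsonl"):
    started = datetime.datetime.now(datetime.timezone.utc).isoformat()
    result = subprocess.run(command, capture_output=True, text=True)
    record = {
        "command": command,
        "started": started,
        "finished": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "exit_code": result.returncode,
        "builder": {"host": platform.node(), "platform": platform.platform()},
        "input_digests": {path: sha256_of(path) for path in inputs},
        "stdout": result.stdout,
        "stderr": result.stderr,
    }
    # append one JSON record per build so old builds can be revisited later
    with open(log_path, "a") as log:
        log.write(json.dumps(record) + "\n")
    return result.returncode


if __name__ == "__main__":
    # example: python logged_build.py make all
    sys.exit(logged_build(sys.argv[1:], inputs=["Makefile"]))

Even a crude record like this, appended on every build, gives you something to go back to when an indicator of compromise shows up months later.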

Conclusion

This post unfortunately doesn’t have a lot of concrete advice in it. There just isn’t a lot of great guidance today. It’s being worked on, and no doubt part of the reason it’s taking time is how new, hard, and broad this topic is. Sometimes it’s exciting being an early adopter and sometimes it’s frustrating.

It would be easy to end this post by making up some advice and guidance that sounds good. There are things we can do that sound good but can be dangerous due to second order problems and unexpected outcomes. Remember when everyone thought changing passwords every 30 days was a good idea? Sometimes it’s better to wait and see than it is to jump right in. This is certainly one of those instances.

Josh Bressers

Josh Bressers is vice president of security at Anchore where he guides security feature development for the company’s commercial and open source solutions. He serves on the Open Source Security Foundation technical advisory council and is a co-founder of the Global Security Database project, which is a Cloud Security Alliance working group that is defining the future of security vulnerability identifiers.

New Syft Feature: R Package Cataloging

Syft can now identify and catalog R packages and include them in the resulting Software Bill of Materials. R is a popular software environment for statistical computing and graphics. To use Syft’s new R cataloging support, point Syft at a directory, image, or container that has some R packages installed. For instance, you can scan the r-base image provided by Docker:

$ syft r-base
 ✔ Parsed image
 ✔ Cataloged packages      [305 packages]

NAME                        VERSION                         TYPE
KernSmooth                  2.23-20                         R-package
MASS                        7.3-58.2                        R-package
Matrix                      1.5-3                           R-package
apt                         2.6.0                           deb
base                        4.3.0                           R-package
...

This feature is new in Syft 0.81. Please let us know if you have any questions or problems with this new cataloger by filing an issue in our GitHub. If you want to extend Syft and write your own cataloger for a new kind of package, check out our contributor’s guide.

New Syft Feature: Location Annotations

One of Syft’s most important jobs is identifying operating system packages on your container images or filesystems. This list of packages and other software is the raw material for the resulting Software Bill of Materials (SBOM). A common question when you see a package in an SBOM is “how did Syft come to the conclusion that this package exists?” 

To answer this question, we implemented a new feature in Syft 0.78.0 that can “show its work” and include information in the SBOM of the files that Syft has detected as evidence of the package. This is called location annotation. Here is an example from an SBOM in JSON format, generated by scanning an image that uses dpkg package management:

[
  {
    "id": "3e9282034226b93f",
    "name": "adduser",
    "version": "3.118",
    "type": "deb",
    "foundBy": "dpkgdb-cataloger",
    "locations": [
      {
        "path": "/var/lib/dpkg/status",
        "layerID": "sha256:ec09eb83ea031896df916feb3a61cefba9facf449c8a55d88667927538dca2b4",
        "annotations": {
          "evidence": "primary"
        }
      },
      {
        "path": "/usr/share/doc/adduser/copyright",
        "layerID": "sha256:ec09eb83ea031896df916feb3a61cefba9facf449c8a55d88667927538dca2b4",
        "annotations": {
          "evidence": "supporting"
        }
      },
      {
        "path": "/var/lib/dpkg/info/adduser.conffiles",
        "layerID": "sha256:ec09eb83ea031896df916feb3a61cefba9facf449c8a55d88667927538dca2b4",
        "annotations": {
          "evidence": "supporting"
        }
      },
...

You can see several items in the locations array. These are some of the specific files that Syft used to identify the adduser package, version 3.118. There are two kinds of evidence in this array: primary and supporting. Primary evidence is the set of files Syft used to determine that the package exists on the system being scanned; supporting evidence is additional data that isn’t fundamental to establishing the package’s existence but provides extra context.

We additionally surface locations that are annotated as primary evidence as package-to-file relationships in the SBOM, so that this information can be used across more SBOM formats in a portable way.
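
If you want to pull that evidence back out of an SBOM, a few lines of scripting will do it. The sketch below assumes Syft’s own JSON format with packages under a top-level "artifacts" key, as in the excerpt above; treat the field names as assumptions to verify against your Syft version:

import json
import sys

# usage: python evidence.py sbom.json
with open(sys.argv[1]) as f:
    sbom = json.load(f)

for pkg in sbom.get("artifacts", []):          # assumed top-level key in Syft JSON
    for loc in pkg.get("locations", []):
        if loc.get("annotations", {}).get("evidence") == "primary":
            print(f"{pkg['name']} {pkg['version']}: {loc['path']}")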

You can read through the pull request for this feature for more technical details. If you’re interested in learning more, implementing location annotations for a new package cataloger, or if you have any questions about the new feature, please join us on Discourse!

Build Your Own Custom Data Provider for Grype with Vunnel

Several weeks ago we announced that we open sourced the process to create a vulnerability database for Grype. A new tool called Vunnel (“vulnerability data funnel”) is the first part of the pipeline. Vunnel takes vulnerability data from an external service like an OS distribution’s vulnerability database or API, transforms it into an intermediary format, and makes it available to Grype-DB. Here’s a sketch of the general architecture:

Grype’s database builder pipeline relies on Vunnel as a key component. Vunnel’s main function is to transform software vulnerability data into a standardized format that other tools can utilize. Vunnel’s Providers, written in Python, are responsible for translating vulnerability information from various sources and formats into a common format.

In this post we’ll walk through an example provider we have written, called “Awesome,” show how it is put together, and explain how to build your own. We will assume that you have some Python development knowledge and are at least somewhat familiar with Grype already.

A Quick Tour of a New Provider

First, check out the example “Awesome” provider on GitHub:

README.md

The README has some more details describing how to run the provider in a test environment, some information about code organization, and a few more tips to build a useful and robust provider. To implement your own provider for Vunnel, you will need to implement a class inheriting from vunnel.provider.Provider, and implement two functions: update() and name():

  • name() should return a unique and useful name for your provider. If you’re ingesting vulnerabilities from a Linux distribution, the name of the Linux distribution would be a good choice.
  • update() is responsible for downloading the vulnerability data from an external source and processing it. This is where all of the work is done!

Here is part of our Awesome Provider’s class that implements these two functions (slightly modified for readability):

        # this provider requires the previous state from former runs
        provider.disallow_existing_input_policy(config.runtime)

    @classmethod
    def name(cls) -> str:
        return PROVIDER_NAME

    def update(self, last_updated: datetime.datetime | None) -> tuple[list[str], int]:
        with self.results_writer() as writer:

            for vuln_id, record in self.parser.get():
                vuln_id = vuln_id.lower()

                writer.write(
                    identifier=vuln_id,
                    schema=SCHEMA,
                    payload=record,
                )

        return self.parser.urls, len(writer)

The Provider class has functions to save the processed data in Vunnel’s format, so you don’t need to worry about writing to files or managing storage underneath.

The arguments passed into writer.write include identifier, a unique indicator for a particular vulnerability, schema, the Vunnel schema for the kind of vulnerability you’re parsing (see schema.py for details), and payload, the data associated with the vulnerability:

    def update(self, last_updated: datetime.datetime | None) -> tuple[list[str], int]:
        with self.results_writer() as writer:

            for vuln_id, record in self.parser.get():
                vuln_id = vuln_id.lower()

                writer.write(
                    identifier=vuln_id,
                    schema=SCHEMA,
                    payload=record,
                )

        return self.parser.urls, len(writer)

(from vunnel/blob/main/example/awesome/__init__.py)

As you can see from the example, you may want to factor out the download and processing steps into separate classes or functions for code portability and readability. Our example has most of the parsing logic in parser.py.
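
For orientation, the only contract the parser has to satisfy is that get() yields (vuln_id, record) pairs for the provider to write out. A stripped-down, hypothetical parser might look like the sketch below; the URL and the record fields are placeholders, not the real Awesome provider code:

import requests


class Parser:
    """Hypothetical parser: downloads an advisory feed and yields normalized records."""

    _SECURITY_URL = "https://example.com/security-advisories.json"  # placeholder URL

    def __init__(self, download_timeout=125):
        self.download_timeout = download_timeout
        self.urls = [self._SECURITY_URL]

    def _download(self):
        response = requests.get(self._SECURITY_URL, timeout=self.download_timeout)
        response.raise_for_status()
        return response.json()

    def get(self):
        for advisory in self._download():
            # map the upstream advisory into the shape the provider will write out
            record = {
                "Vulnerability": {
                    "Name": advisory["id"],
                    "Severity": advisory.get("severity", "Unknown"),
                    "Description": advisory.get("summary", ""),
                }
            }
            yield advisory["id"], record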

In the Awesome example you will find some sections of code labeled “CHANGE ME!”. This is where you will need to make modifications to suit your particular provider.

Trying out the Awesome Provider

To begin, install the basic requirements by following the bootstrapping instructions outlined in Vunnel’s DEVELOPING.md document.

Once you have installed Poetry and bootstrapped the necessary project tooling, you can test the example provider by running:

poetry run python run.py

You should get an output that looks something like this:

tgerla@Timothys-MacBook-Pro example % poetry run python run.py
[DEBUG] config: Config(runtime=RuntimeConfig(on_error=OnErrorConfig(action=fail, retry_count=3, retry_delay=5, input=keep, results=keep), existing_input=keep, existing_results=delete-before-write, result_store=flat-file), request_timeout=125)
[DEBUG] using './data/my-awesome-provider' as workspace
[DEBUG] creating input workspace './data/my-awesome-provider/input'
[DEBUG] creating results workspace './data/my-awesome-provider/results'
[INFO] downloading vulnerability data from https://services.nvd.nist.gov/made-up-location
[DEBUG] clearing existing results
[INFO] wrote 2 entries
[INFO] recording workspace state
[DEBUG] wrote workspace state to ./data/my-awesome-provider/metadata.json

You can inspect the resulting output in ./data/my-awesome-provider/metadata.json:

{
  "schema": "https://raw.githubusercontent.com/anchore/vunnel/main/schema/vulnerability/os/schema-1.0.0.json",
  "identifier": "fake-sa-001",
  "item": {
      "Vulnerability": {
          "Name": "FAKE-SA-001",
          "NamespaceName": "GRYPEOSNAMESPACETHATYOUCHOOSE",
          "Link": "https://someplace.com/FAKE-SA-001",
          "Severity": "Critical",
          "Description": "Bad thing, really bad thing",
          "FixedIn": [
              {
                  "Name": "curl",
                  "VersionFormat": "apk",
                  "NamespaceName": "GRYPEOSNAMESPACETHATYOUCHOOSE",
                  "Version": "2.0"
              }
          ]
      }
  }
}

Now you are ready to modify the example provider to suit your own needs. To contribute your provider to the Vunnel project and share it with the rest of the open source community, you will need to write some tests and create a GitHub pull request. For more information on Vunnel and writing new Providers, you can find a lot more information in Vunnel’s README.md, DEVELOPING.md, and CONTRIBUTING.md documents. Please join us on Discourse if you have any questions or need any help. We will be glad to get you started!

The next post in this series will help you connect your new provider to Grype itself. Stay tuned!

Mitigating Three Popular Software Supply Chain Attacks with Anchore

Software supply chain attacks are extremely prevalent and give attackers an easy way to proliferate a single vulnerability across an entire organization for maximum impact. Thankfully, these three types of threats can be mitigated by utilizing Anchore’s automated policy enforcement throughout your software supply chain.

In this article, I’m going to walk through three types of software supply chain attacks and how Anchore helps in each scenario.

Penetrating Source Code Repositories: Exploiting a Known Vulnerability in the Software Supply Chain

The first type of attack begins in a source code repository. Attackers aren’t dumb and they don’t like to waste time, which is why advanced persistent threats typically leverage a software package with a known vulnerability that is being actively exploited. Even today, more than a year after the Log4Shell vulnerability entangled the security industry in a vice grip, there are still recent exploitations of it. Incidents like this can be mitigated automatically with Anchore’s Known Exploited Vulnerabilities (KEV) policy.

Leveraging Anchore’s policy pack, I am able to scan against all of CISA’s Known Exploited Vulnerabilities. This allows me to detect KEVs across my SDLC, whether they appear in a GitHub repo, an image registry, or at runtime. What’s more, Anchore provides a remediation recommendation and can automatically notify team members to remove that package from your development process.
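
Conceptually, a KEV check is just an intersection between the vulnerability IDs in your scan results and CISA’s published KEV catalog. The sketch below illustrates that idea in plain Python; it is not Anchore’s policy engine, and both the KEV feed URL and the Grype JSON field names are assumptions you should verify before relying on them:

import json
import sys
import urllib.request

# Assumed location of CISA's KEV catalog feed -- verify before relying on it.
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"


def load_kev_ids():
    with urllib.request.urlopen(KEV_URL) as resp:
        catalog = json.load(resp)
    return {entry["cveID"] for entry in catalog.get("vulnerabilities", [])}


def main(grype_report_path):
    # assumes `grype -o json` output with a top-level "matches" list
    with open(grype_report_path) as f:
        report = json.load(f)
    kev_ids = load_kev_ids()
    for match in report.get("matches", []):
        cve = match.get("vulnerability", {}).get("id", "")
        if cve in kev_ids:
            pkg = match.get("artifact", {}).get("name", "unknown")
            print(f"KEV hit: {cve} in package {pkg}")


if __name__ == "__main__":
    main(sys.argv[1])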

Registry Poisoning: An Image with Malware appears in a Trusted/Protected Registry

This is a popular attack vector. In 2021, the Anchore team saw threat actors use this style of attack to proliferate cryptominers and malicious software across target environments with relative ease. Anchore can detect and prevent these attacks by keeping a watchful eye on customers’ registries, continuously monitoring them for unauthorized pushes of malicious images.

Anchore Enterprise 4.4 continuously monitors repositories within a registry

Anchore Enterprise 4.4 continuously monitors images and all tags within a registry

I’m going to dig into a very oddly named Postgres image that was scanned and automatically triggered a notification about malware in that image. In this example, the attacker combined registry poisoning with typosquatting. Anchore detects the malware in the image and triggers an alert based on that image analysis for my investigation. It’s worth noting that Anchore’s inclusion of a malware scan as part of every SBOM is a unique feature.

In an ideal scenario, developers would follow a software supply chain security architecture that uses Anchore policy enforcement to scan for malware before it ever hits the registry. However, as we have seen in many supply chain attacks since, credentials tend to be left on CI servers, in plain text in pipelines, in source code repositories, or on a post-it note at a coffee shop. This is a good reminder of why we should always be continuously monitoring the registry to prevent registry poisoning attacks.

Cloud Credential Leak: Credential is Unintentionally Left Behind in a Build Artifact

Not all software supply chain attacks are malicious in nature. Some are simple human errors. While the intent isn’t to cause havoc, the leaking of a credential (or secret) can still have a lasting impact. Even the most security-minded organizations can be hit with an incident of credential leakage.

In March of ‘23, GitHub experienced a very public instance of this kind of credential leak. An accidental commit to a public git repository revealed the private key for GitHub’s entire SSH service. This created an opening for an enterprising attacker to set up a fake GitHub service and masquerade as GitHub with full legitimacy. The ultimate deep fake. Fortunately, GitHub caught it quickly and rotated the key out of an abundance of caution.

In order to prevent a potential catastrophic incident like that from happening to your organization it is important to scan your entire SDLC for credential and secret leaks. Catching the exposure before it is pushed to a publicly accessible location mitigates a potential sev 0 before it happens. Actively scanning for leaked secrets has the added benefit of helping prevent the previous attack we highlighted from occurring as well. It is difficult to poison a private container registry when the environment is protected with a secret (that hasn’t been leaked).

Anchore Enterprise offers the ability to use policies to scan images and discover leaked credentials before they are deployed to production. The policies can be configured to enforce a preset action if the presence of a secret is detected, preventing a security incident from happening before it starts.
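
At its core, secret detection is pattern matching over the files that end up in your repositories and build artifacts. The toy sketch below shows the concept only; it is not Anchore Enterprise’s implementation, and the two patterns are examples rather than a useful rule set:

import re
from pathlib import Path

# Example patterns only -- real scanners ship far more (and more precise) rules.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
}


def scan_tree(root):
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), label))
    return findings


if __name__ == "__main__":
    for file_path, label in scan_tree("."):
        print(f"{label} found in {file_path}")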

I hope this article provided insight into the three most popular types of software supply chain attacks and how they happen. Anchore’s technology platform provides the confidence and tools needed for developers and security teams to stay ahead of malicious threats to your software supply chain. If you want to learn more click here to request a demo or watch one of our recent webinars.

Navigating Continuous Authority To Operate (cATO): A Guide for Getting Started

Continuous Authority to Operate (cATO), sometimes known as Rapid ATO, is becoming necessary as the DoD and civilian agencies put more applications and data in the cloud. Speed and agility are becoming increasingly critical to the mission as the government and federal system integrators seek new features and functionalities to support the warfighter and other critical U.S. government priorities.

In this blog post, we’ll break down the concept of cATO in understandable terms, explain its benefits, explore the myths and realities of cATO and show how Anchore can help your organization meet this standard.

What is Continuous Authority To Operate (cATO)?

Continuous ATO is the merging of traditional authority to operate (ATO) risk management practices with flexible and responsive DevSecOps practices to improve software security posture.

Traditional Risk Management Framework (RMF) implementations focus on obtaining authorization to operate once every three years. The problem with this approach is that security threats aren’t static; they evolve. cATO is the evolution of this framework: it requires the continual authorization of software components, such as containers, by building security into the entire development lifecycle using DevSecOps practices. All software development processes need to ensure that the application and its components meet security levels equal to or greater than what an ATO requires.

You authorize once and use the software component many times. With a cATO, you gain complete visibility into all assets, software security, and infrastructure as code.

By automating security, you are then able to obtain and maintain cATO. There’s no better statement about the current process for obtaining an ATO than this commentary from Mary Lazzeri with Federal Computer Week:

“The muddled, bureaucratic process to obtain an ATO and launch an IT system inside government is widely maligned — but beyond that, it has become a pervasive threat to system security. The longer government takes to launch a new-and-improved system, the longer an old and potentially insecure system remains in operation.”

The Three Pillars of cATO

To achieve cATO, an Authorizing Official (AO) must demonstrate three main competencies:

  1. Ongoing visibility: A robust continuous monitoring strategy for RMF controls must be in place, providing insight into key cybersecurity activities within the system boundary.
  2. Active cyber defense: Software engineers and developers must be able to respond to cyber threats in real-time or near real-time, going beyond simple scanning and patching to deploy appropriate countermeasures that thwart adversaries.
  3. Adoption of an approved DevSecOps reference design: This involves integrating development, security, and operations to close gaps, streamline processes, and ensure a secure software supply chain.

Looking to learn more about the DoD DevSecOps Reference Design? It’s commonly referred to as a DoD Software Factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.

Continuous ATO vs. ATO

The primary difference between traditional ATOs and continuous ATOs is the frequency at which a system seeks to prove the validity of its security claims. ATOs require that a system can prove its security once every three years whereas cATO systems prove their security every moment that the system is running.

The Benefits of Continuous ATO

Continuous ATO is essentially the process of applying DevSecOps principles to the compliance framework of Authority to Operate. Automating the individual compliance processes speeds up development work by avoiding repetitive tasks to obtain permission. Next, we’ll explore additional (and sometimes unexpected) benefits of cATO.

Increase Velocity of System Deployment

CI/CD systems and the DevSecOps design pattern were created to increase the velocity at which new software can be deployed from development to production. On top of that, Continuous ATOs can be more easily scaled to accommodate changes in the system or the addition of new systems, thanks to the automation and flexibility offered by DevSecOps environments.

Reduce Time and Complexity to Achieve an ATO

With the cATO approach, you can build a system to automate the process of generating the artifacts to achieve ATO rather than manually producing them every three years. This automation in DevSecOps pipelines helps in speeding up the ATO process, as it can generate the artifacts needed for the AO to make a risk determination. This reduces the time spent on manual reviews and approvals. Much of the same information will be requested for each ATO, and there will be many overlapping security controls. Designing the DevSecOps pipeline to produce the unique authorization package for each ATO from the corpus of data and information available can lead to increased efficiency via automation and re-use.

No Need to Reinvent AND Maintain the Wheel

When you inherit the security properties of the DevSecOps reference design or utilize an approved managed platform, the provider shoulders much of the burden. Someone else has already done the hard work of creating a framework of tools that integrate to achieve cATO; reuse their effort to achieve cATO for your system.

Alternatively, you can utilize a platform provider, such as Platform One, Kessel Run, Black Pearl, or the Army Software Factory to outsource the infrastructure management.

Learn how Anchore helped Platform One achieve cATO and become the preeminent DoD software factory:

Myths & Realities

Myth or Reality?: DevSecOps can be at Odds with cATO

Myth! DevSecOps in the DoD and civilian government agencies is still the domain of early adopters. The strict security and compliance requirements of the federal government, the ATO in particular, make it fertile ground for DevSecOps adoption. Government leaders such as Nicolas Chaillan, former chief software officer for the United States Air Force, are championing DevSecOps standards and best practices that the DoD, federal government agencies, and even the commercial sector can use to launch their own DevSecOps initiatives.

One goal of DevSecOps is to develop and deploy applications as quickly as possible. An ATO is a bureaucratic morass if you’re not proactive. When you build a DevSecOps toolchain that automates container vulnerability scanning and other areas critical to ATO compliance controls, you can put in the tools, reporting, and processes to test against ATO controls while still in your development environment.

DevSecOps, much like DevOps, suffers from a marketing problem as vendors seek to spin the definitions and use cases that best suit their products. The DoD and government agencies need more champions like Chaillan in government service who can speak to the benefits of DevSecOps in a language that government decision-makers can understand.

Myth or Reality?: Agencies need to adopt DevSecOps to prepare for the cATO 

Reality! One of the cATO requirements is to demonstrate that you are aligned with an Approved DevSecOps Reference Design. The “shift left” story that DevSecOps espouses in vendor marketing literature and sales decks isn’t necessarily one size fits all. Likewise, DoD and federal agency DevSecOps play at a different level. 

Using DevSecOps to prepare for a cATO requires upfront analysis and planning with your development and operations teams’ participation. Government program managers need to collaborate closely with their contractor teams to put the processes and tools in place upfront, including container vulnerability scanning and reporting. Break down your Continuous Integration/Continuous Delivery (CI/CD) toolchain with an eye on how you can prepare your software components for continuous authorization.

Myth or Reality?: You need to have SBOMs for everything in your environment

Myth! However… you need to be able to show your Authorizing Official (AO) that you have “the ability to conduct active cyber defense in order to respond to cyber threats in real time.” If a zero-day (like the one in Log4j) comes along, you need to demonstrate you are equipped to identify the impact on your environment and remediate the issue quickly. Showing your AO that you manage SBOMs and can quickly query them to respond to threats will have you in the clear for this requirement.
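
As a sketch of what “quickly query them” can look like in practice: if you keep SBOMs as Syft-style JSON files, asking “where are we running log4j-core?” is a few lines of scripting. The top-level "artifacts" key below is an assumption about the SBOM format; adjust for whatever format you actually store:

import json
import sys
from pathlib import Path


def find_package(sbom_dir, package_name):
    # assumes Syft-style JSON SBOMs with packages under a top-level "artifacts" key
    for sbom_path in Path(sbom_dir).glob("*.json"):
        sbom = json.loads(sbom_path.read_text())
        for pkg in sbom.get("artifacts", []):
            if package_name in pkg.get("name", ""):
                print(f"{sbom_path.name}: {pkg['name']} {pkg.get('version', '?')}")


if __name__ == "__main__":
    # example: python query_sboms.py ./sboms log4j-core
    find_package(sys.argv[1], sys.argv[2])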

Myth or Reality?: cATO is about technology and process only

Myth! As more elements of the DoD and civilian federal agencies push toward the cATO to support their missions, and a DevSecOps culture takes hold, it’s reasonable to expect that such a culture will influence the cATO process. Central tenets of a DevSecOps culture include:

  • Collaboration
  • Infrastructure as Code (IaC)
  • Automation
  • Monitoring

Each of these tenets contributes to the success of a cATO. Collaboration between the government program office, contractor’s project team leadership, third-party assessment organization (3PAO), and FedRAMP program office is at the foundation of a well-run authorization. IaC provides the tools to manage infrastructure such as virtual machines, load balancers, networks, and other infrastructure components using practices similar to how DevOps teams manage software code.

Myth or Reality?: Reusable Components Make a Difference in cATO

Reality! The growth of containers and other reusable components couldn’t come at a better time as the Department of Defense (DoD) and civilian government agencies push to the cloud driven by federal cloud initiatives and demands from their constituents.

Reusable components save time and budget when it comes to authorization because you can authorize once and use the authorized components across multiple projects. Look for more news about reusable components coming out of Platform One and other large-scale government DevSecOps and cloud projects that can help push this development model forward to become part of future government cloud procurements.

How Anchore Helps Organizations Implement the Continuous ATO Process

Anchore’s comprehensive suite of solutions is designed to help federal agencies and federal system integrators meet the three requirements of cATO.

Ongoing Visibility

Anchore Enterprise can be integrated into a build pipeline, image registry and runtime environment in order to provide a comprehensive view of the entire software development lifecycle (SDLC). On top of this, Anchore provides out-of-the-box policy packs mapped to NIST 800-53 controls for RMF, ensuring a robust continuous monitoring strategy. Real-time notifications alert users when images are out of compliance, helping agencies maintain ongoing visibility into their system’s security posture.

Active Cyber Defense

While Anchore Enterprise is integrated into the decentralized components of the SDLC, it provides a centralized database to track and monitor every component of software in all environments. This centralized datastore enables agencies to quickly triage zero-day vulnerabilities with a single database query. Remediation plans for impacted application teams can be drawn up in hours rather than days or weeks. By setting rules that flag anomalous behavior, such as image drift or blacklisted packages, Anchore supports an active cyber defense strategy for federal systems.

Adoption of an Approved DevSecOps Reference Design

Anchore aligns with the DoD DevSecOps Reference Design by offering solutions for:

  • Container hardening (Anchore DISA policy pack)
  • Container policy enforcement (Anchore Enterprise policies)
  • Container image selection (Iron Bank)
  • Artifact storage (Anchore image registry integration)
  • Release decision-making (Anchore Kubernetes Admission Controller)
  • Runtime policy monitoring (Anchore Kubernetes Automated Inventory)

Anchore is specifically mentioned in the DoD Container Hardening Process Guide, and the Iron Bank relies on Anchore technology to scan and enforce policy that ensures every image in Iron Bank is hardened and secure.

Final Thoughts

Continuous Authorization To Operate (cATO) is a vital framework for federal system integrators and agencies to maintain a strong security posture in the face of evolving cybersecurity threats. By ensuring ongoing visibility, active cyber defense, and the adoption of an approved DevSecOps reference design, software engineers and developers can effectively protect their systems in real-time. Anchore’s comprehensive suite of solutions is specifically designed to help meet the three requirements of cATO, offering a robust, secure, and agile approach to stay ahead of cybersecurity threats. 

By partnering with Anchore, federal system integrators and federal agencies can confidently navigate the complexities of cATO and ensure their systems remain secure and compliant in a rapidly changing cyber landscape. If you’re interested to learn more about how Anchore can help your organization embed DevSecOps tooling and principles into your software development process, click below to read our white paper.

Open Source is Bigger Than You Can Imagine

If we pay attention to the news lately, we hear that supply chain security is the most important topic ever and that we need to start doing something right now. But the term “supply chain security” isn’t well defined. The real challenge is understanding open source. Open source is in everything now. There is no supply chain problem; there is an “understanding open source” problem.

Log4Shell was our Keyser Söze moment

There’s a scene in the movie “The Usual Suspects” where the detective realizes everything he was just told has been a lie. His entire world changed in an instant. It was a plot twist not even the audience saw coming. Humans love plot twists and surprises in our stories, but not in real life. Log4Shell was a plot twist, but in real life. It was not a fun time.

Open source didn’t take over the world overnight. It took decades. It was a silent takeover that only the developers knew about. Until Log4Shell. When Log4Shell happened everyone started looking for Log4j and they found it, everywhere they looked. But while finding Log4j, we also found a lot more open source. And I mean A LOT more. Open source was in everything, both the software acquired from other vendors and the software built in house. Everything from what’s running on our phones to what’s running the toaster. It’s all full of open source software.

Now that we know open source is everywhere, we should start to ask what open source really is. It’s not what we’ve been told. There’s often talk of “the community”, but there is no community. Open source is a vast collection of independent projects. Some of these projects are worked on by Fortune 100 companies, some by scrappy startups, and some are just a person in their basement who can only work on their project from 9:15pm to 10:05pm every other Wednesday. And open source is big. We can steal a quote from Douglas Adams’ Hitchhiker’s Guide to the Galaxy to properly capture the magnitude of open source:

“Open source … is big. Really big. You just won’t believe how vastly hugely mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist, but that’s just peanuts to open source.”

The challenge for something like open source isn’t just claiming it’s big. We all know it’s big. The challenge is showing how mind-bogglingly big it is. Imagine the biggest thing you can; open source is bigger.

Let’s do some homework.

The size of NPM

For the rest of this post we will focus on NPM, the Node Package Manager. NPM is how we install dependencies for our Node.js applications. This data was picked because it’s very easy to work with, it has good public data, and NPM is the largest package ecosystem in the world today.

It should be said that NPM isn’t special in the context of the data below. If we compare these graphs to Python’s PyPI, for example, we see very similar shapes, just not as large. In the future we may explore other packaging ecosystems, but fundamentally they are going to look a lot like this. All of this data was generated using the scripts stored in GitHub, in a repo aptly named npm-analysis.

Let’s start with the sheer number of NPM package releases over time. It’s a very impressive and beautiful graph.

This is an incredible number of packages. At the time of capturing the data, there were 32,600,904 packages. There are of course far more now; just look at the growth. By packages, we mean every version of every package released. There are about 2.3 million unique packages, but when we count every released version of each of those packages, we end up with over 32 million.

It’s hard to imagine how big this really is. There was a proposal recently that suggested we could try to conduct a security review of 10,000 open source projects per year. That alone would need thousands of people to accomplish. But even at 10,000 projects per year, it would take more than 3,000 years to get through just NPM at its current size. And that ignores the fact that we’ve been adding more than 1 million packages per year, so doing some math … we will be done … never. The answer is never.
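
The arithmetic is easy to check for yourself:

packages = 32_600_904       # NPM releases at the time the data was captured
reviews_per_year = 10_000   # the proposed review capacity
print(packages / reviews_per_year)  # ~3260 -- over three thousand years of work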

The people

As humans, we love to start inventing reasons for this sort of growth. Maybe it’s all malicious packages, or spammers using NPM to sell vitamins. “It’s probably big projects publishing lots of little packages”, or “have you ever seen the amount of stuff in the React framework”? It turns out almost all of NPM is single-maintainer projects. The graph below shows the number of maintainers for a given project. There are more than 18 million releases that list a single maintainer in their package.json file, meaning over half of all NPM releases ever published have just one person maintaining them.
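
For the curious, the maintainer counts come straight out of each release’s package.json metadata. A sketch of that computation (a stand-in for the real npm-analysis scripts, which work against the public registry data) looks like this:

import json
from collections import Counter
from pathlib import Path


def maintainer_histogram(manifest_dir):
    """Count how many releases list 1, 2, 3, ... maintainers in their package.json."""
    histogram = Counter()
    for manifest in Path(manifest_dir).rglob("package.json"):
        data = json.loads(manifest.read_text())
        histogram[len(data.get("maintainers", []))] += 1
    return histogram


if __name__ == "__main__":
    for count, releases in sorted(maintainer_histogram("./manifests").items()):
        print(f"{count} maintainer(s): {releases} releases")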

This graph shows that a ridiculous amount of NPM is one person, or a small team. If we look at the graph on a logarithmic scale we can see what the larger projects look like; the linear graph is sort of useless because of the sheer number of one-person projects.

These graphs contain duplicate entries when it comes to maintainers. Many maintainers have more than one project; it’s quite common in fact. If we filter the graph by the number of unique maintainers, we see this chart.

It’s far fewer maintainers, but the data is still dominated by single-maintainer projects. In this data set we see 727,986 unique NPM maintainers. This is an amazing number of developers, and a true testament to the power and reach of open source.

New packages

Now that we’ve seen there are a lot of people doing an enormous amount of work, let’s talk about how things are growing. We mentioned earlier that more than one million packages and versions are being added per year.

If this continues we’re going to be adding more than one million new packages per month soon.

Now, it should be noted this graph isn’t new packages; it’s new releases. If an existing project releases five updates, it shows up in this graph all five times.

If we only look at brand new packages being added, we get the below graph. A moving average was used here because this graph is a bit jumpy otherwise. New projects don’t get added very consistently.

This shows us we’re adding less than 500,000 new projects per year, which is way better than one million! But still a lot more than 10,000.

The downloads

Unfortunately, we don’t have an impressive graph of downloads to show. The most download data we can get from NPM covers one year, and it’s a single number; it isn’t broken down by date.

In the last year, there were 130,046,251,733,027 NPM downloads. That feels like a fake number: 15 digits, 130 TRILLION downloads. Now, that’s not spread out very evenly. The median download count for a package is only 217. The bottom 5% of packages have 71 downloads or fewer, and the top 5% have more than 16,000. It’s pretty clear the downloads are very uneven; the most popular projects are getting most of them.
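
Summary numbers like the median and the 5th/95th percentiles fall directly out of the per-package download counts. Roughly, and only as a sketch rather than the exact npm-analysis code:

import statistics


def summarize_downloads(downloads):
    """Summarize one year of per-package download counts."""
    counts = sorted(downloads)
    n = len(counts)
    return {
        "total": sum(counts),
        "median": statistics.median(counts),
        "p5": counts[int(n * 0.05)],    # bottom 5% cut-off
        "p95": counts[int(n * 0.95)],   # top 5% cut-off
    }


if __name__ == "__main__":
    # toy data; the real input is one download count per NPM package
    print(summarize_downloads([71, 120, 217, 300, 16_000, 2_500_000]))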

Here is a graph of the top 100 projects by downloads. It follows a very common power distribution curve.

We probably can’t imagine what this download data over all time must look like. It’s almost certainly even more mind-boggling than the current data set.

Most of these don’t REALLY matter

Nobody would argue if someone said the vast majority of NPM packages will never see widespread use. Using the download data we can show that 95% of NPM packages aren’t widely used. But the sheer scale is what’s important: 5% of NPM is still more than 100,000 unique packages. That’s a massive number; even at our 10,000-packages-a-year review rate, that’s more than ten years of work, and this is just NPM.

If we filter our number-of-maintainers graph to only include the top 5% of downloaded packages, it basically looks the same, just with smaller numbers.

Every way we look at this data, these trends seem to hold.

Now that we know how incredibly huge this all really is, we can start to talk about this supposed supply chain and what comes next.

What we can actually do about this

First, don’t panic. Then the most important thing we can do is understand the problem. Open source is already too big to manage and is growing faster than we can keep up with. It is important to have realistic expectations. Before now, many of us didn’t know how huge NPM was. And that’s just one ecosystem. There is a lot more open source out there in the wild.

There’s another quote from Douglas Adams’ Hitchhiker’s Guide to the Galaxy that seems appropriate right now:

“I thought,” he said, “that if the world was going to end we were meant to lie down or put a paper bag over our head or something.”

“If you like, yes,” said Ford.

“Will that help?” asked the barman.

“No,” said Ford and gave him a friendly smile.

Open source isn’t a force we command; it is a resource for us to use. Open source also isn’t one thing; it’s a collection of individual projects. Open source is more like a natural resource. A recent report from the Atlantic Council titled Avoiding the success trap: Toward policy for open-source software as infrastructure compares open source to water. It’s an apt analogy on many levels, especially when we realize most of the surface of the planet is covered in water.

The first step to fixing a problem is understanding it. It’s hard to wrap our heads around just how huge open source is; humans are bad at exponential growth. We can’t have an honest conversation about the challenges of using open source without first understanding how big it is and how fast it is growing. The intent of this article isn’t to suggest open source is broken, or bad, or should be avoided. It’s to set the stage for understanding what our challenge looks like.

The importance and overall size of open source will only grow as we move forward. Trying to use the ideas of the past can’t work at this scale. We need new tools, ideas, and processes to face our new software challenges. There are many people, companies, and organizations working on this but not always with a grasp of the true scale of open source. We can and should help existing projects, but the easiest first step is to understand how big our open source use is. Do we know what open source we’re using?

Anchore is working on this problem every day. Come help with our open source projects Syft and Grype, or have a chat with us about our enterprise solution.

Josh Bressers
Josh Bressers is vice president of security at Anchore where he guides security feature development for the company’s commercial and open source solutions. He serves on the Open Source Security Foundation technical advisory council and is a co-founder of the Global Security Database project, which is a Cloud Security Alliance working group that is defining the future of security vulnerability identifiers.