ModuleQ reduces vulnerability management time by 80% with Anchore Secure

ModuleQ, an AI-driven enterprise knowledge platform, knows only too well the stakes for a company providing software solutions in the highly regulated financial services sector. In this world where data breaches are cause for termination of a vendor relationship and evolving cyberthreats loom large, proactive vulnerability management is not just a best practice—it’s a necessity. 

ModuleQ required a vulnerability management platform that could automatically identify and remediate vulnerabilities, maintain airtight security, and meet stringent compliance requirements—all without slowing down their development velocity.

Learn the essential container security best practices to reduce the risk of software supply chain attacks in this white paper.

The Challenge: Scaling Security in a High-Stakes Environment

ModuleQ found itself drowning in a flood of newly released vulnerabilities—over 25,000 in 2023 alone. Operating in a heavily regulated industry meant any oversight could have severe repercussions. High-profile incidents like the Log4j exploit underscored the importance of supply chain security, yet the manual, resource-intensive nature of ModuleQ’s vulnerability management process made it hard to keep pace.

The mandate that no critical vulnerabilities reached production was a particularly high bar to meet with the existing manual review process. Each time engineers stepped away from their coding environment to check a separate security dashboard, they lost context, productivity, and confidence. The fear of accidentally letting something slip through the cracks was ever present.

The Solution: Anchore Secure for Automated, Integrated Vulnerability Management

ModuleQ chose Anchore Secure to simplify, automate, and fully integrate vulnerability management into their existing DevSecOps workflows. Instead of relying on manual security reviews, Anchore Secure injected security measures seamlessly into ModuleQ’s Azure DevOps pipelines, .NET, and C# environment. Every software build—staged nightly through a multi-step pipeline—was automatically scanned for vulnerabilities. Any critical issues triggered immediate notifications and halted promotions to production, ensuring that potential risks were addressed before they could ever reach customers.

Equally important, Anchore’s platform was built to operate in on-prem or air-gapped environments. This guaranteed that ModuleQ’s clients could maintain the highest security standards without the need for external connectivity. For an organization whose customers demand this level of diligence, Anchore’s design provided peace of mind and strengthened client relationships.

Results: Faster, More Secure Deployments

By adopting Anchore Secure, ModuleQ dramatically accelerated and enhanced its vulnerability management approach:

  • 80% Reduction in Vulnerability Management Time: Automated scanning, triage, and reporting freed the team from manual checks, letting them focus on building new features rather than chasing down low-priority issues.
  • 50% Less Time on Security Tasks During Deployments: Proactive detection of high-severity vulnerabilities streamlined deployment workflows, enabling ModuleQ to deliver software faster—without compromising security.
  • Unwavering Confidence in Compliance: With every new release automatically vetted for critical vulnerabilities, ModuleQ’s customers in the financial sector gained renewed trust. Anchore’s support for fully on-prem deployments allowed ModuleQ to meet stringent security demands consistently.

Looking Ahead

In an era defined by unrelenting cybersecurity threats, ModuleQ proved that speed and security need not be at odds. Anchore Secure provided a turnkey solution that integrated seamlessly into their workflow, saving time, strengthening compliance, and maintaining the agility to adapt to future challenges. By adopting an automated security backbone, ModuleQ has positioned itself as a trusted and reliable partner in the financial services landscape.

Looking for more details? Read the ModuleQ case study in full. If you’re ready to move forward, see all of the features on Anchore Secure’s product page or reach out to our team to schedule a demo.

The Evolution of SBOMs in the DevSecOps Lifecycle: Part 2

Welcome back to the second installment of our two-part series on “The Evolution of SBOMs in the DevSecOps Lifecycle”. In our first post, we explored how Software Bills of Materials (SBOMs) evolve over the first 4 stages of the DevSecOps pipeline—Plan, Source, Build & Test—and how each type of SBOM serves different purposes. Some of those use-cases include shift-left vulnerability detection, regulatory compliance automation, OSS license risk management, and incident root cause analysis.

In this part, we’ll continue our exploration with the final 4 stages of the DevSecOps lifecycle, examining:

  • Analyzed SBOMs at the Release (Registry) stage
  • Deployed SBOMs during the Deployment phase
  • Runtime SBOMs in Production (Operate & Monitor stages)

As applications migrate down the pipeline, design decisions made at the beginning begin to ossify, becoming more difficult to change; this influences the challenges that are experienced and the role that SBOMs play in overcoming these novel problems. New challenges at these stages include pipeline leaks, vulnerabilities in third-party packages, and runtime injection, all of which introduce significant risk. Understanding how SBOMs evolve across these stages helps organizations mitigate these risks effectively.

Whether you’re aiming to enhance your security posture, streamline compliance reporting, or improve incident response times, this comprehensive guide will equip you with the knowledge to leverage SBOMs effectively from Release to Production. Additionally, we’ll offer pro tips to help you maximize the benefits of SBOMs in your DevSecOps practices.

So, let’s continue our journey through the DevSecOps pipeline and discover how SBOMs can transform the latter stages of your software development lifecycle.

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Release (or Registry) => Analyzed SBOM

After development is completed and the new release of the software is declared a “golden” image, the build system will push the release artifact to a registry for storage until it is deployed. At this stage, an SBOM that is generated from these container images, binaries, etc. is named an “Analyzed SBOM” by CISA. The name is a little confusing since all SBOMs should be analyzed regardless of the stage at which they are generated. A more appropriate name might be “Release SBOM”, but we’ll stick with CISA’s terminology for now.

At first glance, it would seem that Analyzed SBOMs and the final Build SBOMs should be identical since they describe the same software, but that doesn’t hold up in practice. DevSecOps pipelines aren’t hermetically sealed systems; they can be “leaky”. You might be surprised what finds its way into this storage repository and eventually gets deployed, bypassing your carefully constructed build and test setup.

On top of that, the registry holds more than just first-party applications that are built in-house. It also stores 3rd-party container images like operating systems and any other self-contained applications used by the organization.

The additional metadata that is collected for an Analyzed SBOM includes:

  • Release images that bypass the happy path build and test pipeline
  • 3rd-party container images, binaries and applications

Pros and Cons

Pros:

  • Comprehensive Artifact Inventory: A more holistic view of all software—both 1st- and 3rd-party—that is utilized in production.
  • Enhanced Security and Compliance Posture: Catches vulnerabilities and non-compliant images for all software that will be deployed to production. This reduces the risk of security incidents and compliance violations.
  • Third-Party Supply Chain Risk Management: Provides insights into the vulnerabilities and compliance status of third-party components.
  • Ease of implementation: This stage is typically the lowest lift for implementation given that most SBOM generators can be deployed standalone and pointed at the registry to scan all images.

Cons:

  • High Risk for Release Delays: Scanning images at this stage is akin to traditional waterfall-style development patterns. Most design decisions are baked in and changes typically incur a steep penalty.
  • Difficult to Push Feedback into Existing Workflows: The registry sits outside of typical developer workflows, and creating a feedback loop that seamlessly reports issues without changing the developer’s process is a non-trivial amount of work.
  • Complexity in Management: Managing SBOMs for both internally developed and third-party components adds complexity to the software supply chain.

Use-Cases

  • Software Supply Chain Security: Organizations can detect vulnerabilities in both their internally developed software and external software to prevent supply chain injections from leading to a security incident.
  • Compliance Reporting: Reporting on both 1st- and 3rd-party software is necessary for industries with strict regulatory requirements.
  • Detection of Leaky Pipelines: Identifies release images that have bypassed the standard build and test pipeline, allowing teams to take corrective action.
  • Third-Party Risk Analysis: Assesses the security and compliance of third-party container images, binaries, and applications before they are deployed.

Example: An organization subject to strict compliance standards like FedRAMP or cATO uses Analyzed SBOMs to verify that all artifacts in their registry, including third-party applications, comply with security policies and licensing requirements. This practice not only enhances their security posture but also streamlines the audit process.

Pro Tip

A registry is an easy and non-invasive way to test and evaluate potential SBOM generators. It won’t give you a full picture of what can be found in your DevSecOps pipeline, but it will at least give you an initial idea of a generator’s efficacy and help you decide whether it is worth the effort of integrating it into your build pipeline, where it will produce deeper insights.
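For instance, an evaluation with Syft could be as simple as the following; the registry and image names are placeholders for your own:

# Point the candidate SBOM generator at a few representative images already in your registry
syft registry:registry.example.com/team/web-app:1.4.2 -o spdx-json > web-app.spdx.json
syft registry:registry.example.com/vendor/postgres:16 -o spdx-json > postgres.spdx.json

# Spot-check the depth of each result, e.g., how many packages were cataloged
jq '.packages | length' web-app.spdx.json postgres.spdx.json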

Deploy => Deployed SBOM

As your container orchestrator deploys an image from your registry into production, it will also orchestrate any production dependencies, such as sidecar containers. At this stage, the SBOM that is generated is named a “Deployed SBOM” by CISA.

The ideal scenario is that your operations team is storing all of these images in the same central registry as your engineering team but—as we’ve noted before—reality diverges from the ideal.

The additional metadata that is collected for a Deployed SBOM includes:

  • Any additional sidecar containers or production dependencies that are injected or modified through a release controller.

Pros and Cons

Pros:

  • Enhanced Security Posture: The final gate to prevent vulnerabilities from being deployed into production. This reduces the risk of security incidents and compliance violations.
  • Leaky Pipeline Detection: Another location to catch artifacts that circumvented the happy path of the DevSecOps pipeline.
  • Compliance Enforcement: Some compliance standards require a deployment-breaking enforcement gate before any software is deployed to production. A container orchestrator release controller is the ideal location to implement this.

Cons:

Essentially the same issues that come up during the release phase.

  • High Risk for Release Delays: Scanning images at this stage happens even later than in traditional waterfall-style development patterns and will incur a steep penalty if an issue is uncovered.
  • Difficult to Push Feedback into Existing Workflows: A deployment release controller sits outside of typical developer workflows, and creating a feedback loop that seamlessly reports issues without changing the developer’s process is a non-trivial amount of work.

Use-Cases

  • Strict Software Supply Chain Security: Implementing a pipeline breaking gating mechanism is typically reserved for only the most critical security vulnerabilities (think: an actively exploitable known vulnerability).
  • High-Stakes Compliance Enforcement: Industries like defense, financial services and critical infrastructure will require vendors to implement a deployment gate for specific risk scenarios beyond actively exploitable vulnerabilities.
  • Compliance Audit Automation: Most regulatory compliance frameworks mandate audit artifacts at deploy time; these documents can be automatically generated and stored for future audits.

Example: A Deployed SBOM can be used as the source of truth for generating a report that demonstrates that no HIGH or CRITICAL vulnerabilities were deployed to production during an audit period.

Pro Tip

Combine a Deployed SBOM with a container vulnerability scanner that cross-checks all vulnerabilities against CISA’s Known Exploited Vulnerabilities (KEV) catalog. If a matching KEV is found for a software component, you can configure your vulnerability scanner to return a FAIL response to your release controller and abort the deployment.

This strategy strikes an ideal balance between avoiding delays to software delivery and blocking the deployments that carry an extremely high probability of causing a security incident.
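Here is a minimal sketch of that cross-check, assuming a Grype scan of the release SBOM and CISA’s public KEV feed (the file names and feed URL are assumptions to verify against current CISA documentation):

# Pull the current KEV catalog and extract its CVE IDs
curl -s https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json \
  | jq -r '.vulnerabilities[].cveID' | sort -u > kev_ids.txt

# Scan the release SBOM and extract the CVE IDs of every match
grype -q -o json sbom:release.spdx.json \
  | jq -r '.matches[].vulnerability.id' | sort -u > sbom_cves.txt

# If any overlap exists, signal FAIL (non-zero exit) so the release controller aborts the deployment
if comm -12 kev_ids.txt sbom_cves.txt | grep -q .; then
  echo "KEV match found: aborting deployment"
  exit 1
fi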

Operate & Monitor (or Production) => Runtime SBOM

After your container orchestrator has deployed an application into your production environment, it is live and serving customer traffic. SBOMs generated at this stage don’t have a name specified by CISA; they are sometimes referred to as “Runtime SBOMs”. SBOMs are still a new-ish standard and will continue to evolve.

The additional metadata that is collected for a Runtime SBOM includes:

  • Modifications (i.e., intentional hotfixes or malicious code injection) made to running applications in your production environment.

Pros and Cons

Pros:

  • Continuous Security Monitoring: Identifies new vulnerabilities that emerge after deployment.
  • Active Runtime Inventory: Provides a canonical view into an organization’s active software landscape.
  • Low Lift Implementation: Deploying SBOM generation into a production environment typically only requires deploying the scanner as another container and giving it permission to access the rest of the production environment.

Cons:

  • No Shift-Left Security: Runtime scanning is, by definition, excluded from a shift-left security posture.
  • Potential for Release Rollbacks: Scanning images at this stage is the worst possible place for proactive remediation. Discovering a vulnerability here may already constitute a security incident and force a release rollback.

Use-Cases

  • Rapid Incident Management: When new critical vulnerabilities are discovered and announced by the community, the first priority for an organization is to determine exposure. An accurate production inventory, down to the component level, is needed to answer this critical question.
  • Threat Detection: Continuously monitoring for anomalous activity linked to specific components. Sealing your system off completely from advanced persistent threats (APTs) is an unfeasible goal. Instead, quick detection and rapid intervention is the scalable solution to limit the impact of these adversaries.
  • Patch Management: As new versions of 3rd-party components and applications are released, an inventory of impacted production assets provides helpful insights that can direct the prioritization of engineering efforts.

Example: When the XZ Utils vulnerability was announced in the spring of 2024, organizations that already automatically generated a Runtime SBOM inventory ran a simple search query against their SBOM database and knew within minutes—or even seconds—whether they were impacted.
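For illustration, a minimal version of that query might look like the following, assuming runtime SBOMs are stored as SPDX JSON files in a single directory (paths and package names are illustrative; the backdoored releases were xz/liblzma 5.6.0 and 5.6.1):

# Search every stored runtime SBOM for the affected packages and report where they appear
jq -r '.packages[]
       | select(.name == "xz-utils" or .name == "liblzma")
       | "\(input_filename): \(.name)@\(.versionInfo)"' runtime-sboms/*.spdx.json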

Pro Tip

If you want to learn how Google was able to go from an all-hands-on-deck security incident when the XZ Utils vulnerability was announced to an all-clear in under 10 minutes, watch our webinar with the lead of Google’s SBOM initiative.

Wrap-Up

As the SBOM standard has evolved, its scope has grown considerably. What started as a structured way to store information about open source licenses has expanded to include numerous use-cases. A clear understanding of the evolution of SBOMs throughout the DevSecOps lifecycle is essential for organizations aiming to solve problems ranging from software supply chain security to regulatory compliance to legal risk management.

SBOMs are a powerful tool in the arsenal of modern software development. By recognizing their importance and integrating them thoughtfully across the DevSecOps lifecycle, you position your organization at the forefront of secure, efficient, and compliant software delivery.

Ready to secure your software supply chain and automate compliance tasks with SBOMs? Anchore is here to help. We offer SBOM management, vulnerability scanning, and compliance automation and enforcement solutions. If you still need some more information before looking at solutions, check out our webinar below on scaling a secure software supply chain with Kubernetes. 👇👇👇

Learn how Spectro Cloud secured their Kubernetes-based software supply chain and the pivotal role SBOMs played.

The Evolution of SBOMs in the DevSecOps Lifecycle: From Planning to Production

The software industry has wholeheartedly adopted the practice of building new software on the shoulders of the giants that came before. To accomplish this, developers assemble a foundation of pre-built, 3rd-party components, then wrap custom 1st-party code around this structure to create novel applications. It is an extraordinarily innovative and productive practice, but it also introduces challenges ranging from security vulnerabilities to compliance headaches to legal risk nightmares. Software bills of materials (SBOMs) have emerged to provide solutions for these wide-ranging problems.

An SBOM provides a detailed inventory of all the components that make up an application at a point in time. However, it’s important to recognize that not all SBOMs are the same—even for the same piece of software! SBOMs evolve throughout the DevSecOps lifecycle, just as an application evolves from source code to a container image to a running application. The Cybersecurity and Infrastructure Security Agency (CISA) has codified this idea by differentiating between the different types of SBOMs. Each type serves different purposes and captures information about an application at a distinct point in its evolution.

In this 2-part blog series, we’ll deep dive into each stage of the DevSecOps process and the associated SBOM, highlighting the differences, the benefits and disadvantages, and the use-cases that each type of SBOM supports. Whether you’re just beginning your SBOM journey or looking to deepen your understanding of how SBOMs can be integrated into your DevSecOps practices, this comprehensive guide will provide valuable insights and advice from industry experts.

Learn about the role that SBOMs play in the security, including open source software (OSS) security, of your organization in this white paper.

Types of SBOMs and the DevSecOps Pipeline

Over the past decade, the US government got serious about software supply chain security and began advocating for SBOMs as the standardized approach to the problem. As part of this initiative, CISA created the Types of Software Bill of Material (SBOM) Documents white paper, which codified the definitions of the different types of SBOMs and mapped them to each stage of the DevSecOps lifecycle. We will discuss each in turn, but before we do, let’s anchor on some terminology to prevent confusion or misunderstanding.

Below is a diagram that lays out each stage of the DevSecOps lifecycle as well as the naming convention we will use going forward.

With that out of the way, let’s get started!

Plan => Design SBOM

As the DevSecOps paradigm has spread across the software industry, a notable best practice known as the security architecture review has become integral to the development process. This practice embodies the DevSecOps goal of integrating security into every phase of the software lifecycle, aligning perfectly with the concept of Shift-Left Security—addressing security considerations as early as possible.

At this stage, the SBOM documents the planned components of the application. The CISA refers to SBOMs generated during this phase as Design SBOMs. These SBOMs are preliminary and outline the intended components and dependencies before any code is written.

The metadata that is collected for a Design SBOM includes:

  • Component Inventory: Identifying potential OSS libraries and frameworks to be used as well as the dependency relationship between the components.
  • Licensing Information: Understanding the licenses associated with selected components to ensure compliance.
  • Risk Assessment Data: Evaluating known vulnerabilities and security risks associated with each component.

This might sound like a lot of extra work, but luckily, if you’re already performing DevSecOps-style planning that incorporates a security and legal review—as is best practice—you’re already surfacing all of this information. The only difference is that this preliminary data is formatted and stored in a standardized data structure, namely an SBOM.
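To make that concrete, here is a minimal, hand-authored Design SBOM sketched as a CycloneDX JSON document; the components, versions, and licenses are purely illustrative:

{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "framework",
      "name": "spring-boot",
      "version": "3.2.0",
      "licenses": [ { "license": { "id": "Apache-2.0" } } ]
    },
    {
      "type": "library",
      "name": "log4j-core",
      "version": "2.17.1",
      "licenses": [ { "license": { "id": "Apache-2.0" } } ]
    }
  ]
}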

Pros and Cons

Pros:

  • Maximal Shift-Left Security: Vulnerabilities cannot be found any earlier in the software development process. Design time security decisions are the peak of a proactive security posture and preempt bad design decisions before they become ingrained into the codebase.
  • Cost Efficiency: Resolving security issues at this stage is generally less expensive and less disruptive than during later stages of development or—worst of all—after deployment.
  • Legal and Compliance Risk Mitigation: Ensures that all selected components meet necessary compliance standards, avoiding legal complications down the line.

Cons:

  • Upfront Investment: Gathering detailed information on potential components and maintaining an SBOM at this stage requires a non-trivial commitment of time and effort.
  • Incomplete Information: Projects are not static; they will adapt as unplanned challenges surface. A Design SBOM likely won’t stay relevant for long.

Use-Cases

There are a number of use-cases that are enabled by a Design SBOM:

  • Security Policy Enforcement: Automatically checking proposed components against organizational security policies to prevent the inclusion of disallowed libraries or frameworks.
  • License Compliance Verification: Ensuring that all components comply with the project’s licensing requirements, avoiding potential legal issues.
  • Vendor and Third-Party Risk Management: Assessing the security posture of third-party components before they are integrated into the application.
  • Enhanced Transparency and Collaboration: A well-documented SBOM provides a clear record of the software’s components and, more importantly, demonstrates that the project aligns with the goals of all of the stakeholders (engineering, security, legal, etc.). This builds trust and creates a collaborative environment that increases the chances that each individual stakeholder’s desired outcome will be achieved.

Example:

A financial services company operating within a strict regulatory environment uses SBOMs during planning to ensure that all components meet compliance standards like PCI DSS. By doing so, they prevent the incorporation of insecure components that would fail a PCI audit. This reduces the risk of the financial penalties associated with security breaches and regulatory non-compliance.

Pro Tip

If your organization is still early in the maturity of its SBOM initiative, then we generally recommend moving the integration of design-time SBOMs to the back of the queue. As we mentioned at the beginning of this section, the information stored in a Design SBOM is naturally surfaced during the DevSecOps planning process; as long as that information is being recorded and stored, much of the value of a Design SBOM will already be captured. This level of SBOM integration is best saved for later maturity stages, when your organization is ready to begin exploring deeper insights that have a higher risk-to-reward ratio.

Alternatively, if your organization is having difficulty getting your teams to adopt a collaborative DevSecOps planning process, mandating an SBOM as a requirement can act as a forcing function to catalyze a cultural shift.

Source => Source SBOM

During the development stage, engineers implement the selected 3rd-party components into the codebase. CISA refers to SBOMs generated during this phase as Source SBOMs. The SBOMs generated here capture the actual implemented components and additional information that is specific to the developer who is doing the work.

The additional metadata that is collected for a Source SBOM includes:

  • Dependency Mapping: Documenting direct and transitive dependencies.
  • Identity Metadata: Adding contributor and commit information.
  • Developer Environment: Captures information about the development environment.

Unlike Design SBOMs, which are typically created manually, these SBOMs can be generated programmatically with a software composition analysis (SCA) tool—like Syft. They are usually packaged as command line interfaces (CLIs), since this is the preferred interface for developers.
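For example, with Syft a Source SBOM for the current working tree can be generated with a single command and paired with a vulnerability scanner such as Grype (the output format and file name are a matter of preference):

# Scan the repository's working directory and write a Source SBOM
syft dir:. -o cyclonedx-json > source-sbom.cdx.json

# Optionally scan the resulting SBOM for known vulnerabilities as components are added
grype sbom:source-sbom.cdx.json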

If you’re looking for an SBOM generation tool (SCA embedded), we have a comprehensive list of options to make this decision easier.

Pros and Cons

Pros:

  • Accurate and Timely Component Inventory: Reflects the actual components used in the codebase and tracks changes as the codebase is actively being developed.
  • Shift-Left Vulnerability Detection: Identifies vulnerabilities as components are integrated but requires commit level automation and feedback mechanisms to be effective.
  • Facilitates Collaboration and Visibility: Keeps all stakeholders informed about divergence from the original plan and provokes conversations as needed. This is also dependent on automation to record changes during development and on notification systems to broadcast the updates.

Example: A developer adds a new logging library to the project, such as an outdated version of Log4j. The SBOM, paired with a vulnerability scanner, immediately flags the Log4Shell vulnerability, prompting the engineer to update to a patched version.

Cons:

  • Noise from Developer Toolchains: Developer environments are often bespoke. This creates noise for security teams by recording development-only dependencies.
  • Potential Overhead: Continuous updates to the SBOM can be resource-intensive when done manually; the only resource-efficient method is to use an SBOM generation tool that automates the process.
  • Possibility of Missing Early Risks: Issues not identified during planning may surface here, requiring code changes.

Use-Cases

  • Faster Root Cause Analysis: During service incident retrospectives, questions arise about where, when, and by whom a specific component was introduced into an application. Source SBOMs are the programmatic record that can provide answers and decrease manual root cause analysis.
  • Real-Time Security Alerts: Immediate notification of vulnerabilities upon adding new components, decreasing time to remediation and keeping security teams informed.
  • Automated Compliance Checks: Ensuring added components comply with security or license policies to manage compliance risk.
  • Effortless Collaboration: Stakeholders can subscribe to a live feed of changes and immediately know when implementation diverges from the plan.

Pro Tip

Some SBOM generators allow developers to specify development dependencies that should be ignored, similar to a .gitignore file. This can help cut down on the noise created by unique developer setups.
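With Syft, for instance, this can be expressed in a .syft.yaml configuration file; the glob patterns below are illustrative and should be adapted to your own toolchain:

# .syft.yaml: exclude paths that only contain development-time dependencies
exclude:
  - "./**/node_modules/**"
  - "./**/.venv/**"
  - "./test/**"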

Build & Test => Build SBOM

When a developer pushes a commit to the CI/CD build system, an automated process converts the application source code into an artifact that can then be deployed. CISA refers to SBOMs generated during this phase as Build SBOMs. These SBOMs capture both source code dependencies and build tooling dependencies.

The additional metadata that is collected includes:

  • Build Dependencies: Build tooling such as the language compilers, testing frameworks, package managers, etc.
  • Binary Analysis Data: Metadata for compiled binaries that don’t utilize traditional container formats.
  • Configuration Parameters: Details on build configuration files that might impact security or compliance.

Pros and Cons

Pros:

  • Build Infrastructure Analysis: Captures build-specific components which may have their own vulnerability or compliance issues.
  • Reuses Existing Automation Tooling: Enables programmatic security and compliance scanning as well as policy enforcement without introducing any additional build tooling.
  • Native Developer Workflow Integration: Directly integrates with the developer workflow. Engineers receive security, compliance, and other feedback without needing to reference a new tool.
  • Reproducibility: Facilitates reproducing builds for debugging and auditing.

Cons:

  • SBOM Sprawl: Build processes run frequently; if each run generates an SBOM, you will find yourself with a glut of files to manage.
  • Delayed Detection: Vulnerabilities or non-compliance issues found at this stage may require rework.

Use-Cases

  • SBOM Drift Detection: By comparing SBOMs from two or more stages, unexpected dependency injection can be detected (see the sketch after this list). This might take the form of a benign, leaky build pipeline that requires manual workarounds or a malicious actor attempting to covertly introduce malware. Either way, this provides actionable and valuable information.
  • Policy Enforcement: Enables the creation of build breaking gates to enforce security or compliance. For high-stakes operating environments like defense, financial services or critical infrastructure, automating security and compliance at the expense of some developer friction is a net-positive strategy.
  • Automated Compliance Artifacts: Compliance requires proof in the form of reports and artifacts. Re-utilizing existing build tooling automation to automate this task significantly reduces the manual work required by security teams to meet compliance requirements.
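Here is a minimal sketch of that drift comparison between a Build SBOM and an Analyzed SBOM, assuming both were generated as SPDX JSON (file names are illustrative):

# List the package name@version pairs captured at each stage
jq -r '.packages[] | "\(.name)@\(.versionInfo)"' build.spdx.json | sort -u > build_pkgs.txt
jq -r '.packages[] | "\(.name)@\(.versionInfo)"' analyzed.spdx.json | sort -u > release_pkgs.txt

# Anything present in the registry image but absent from the build is unexpected drift
comm -13 build_pkgs.txt release_pkgs.txt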

Example: A security scan during testing uses the Build SBOM to identify a critical vulnerability and alerts the responsible engineer. The remediation process is initiated and a patch is applied before deployment.

Pro Tip

If your organization is just beginning its SBOM journey, this is the recommended phase of the DevSecOps lifecycle in which to implement SBOMs first. The two primary cons of this phase are the easiest to mitigate. For SBOM sprawl, you can procure a turnkey SBOM management solution like Anchore SBOM.

As for the delay in feedback created by waiting until the build phase: if your team is utilizing DevOps best practices and breaking features up into smaller components that fit into 2-week sprints, this tight scoping will limit the impact of any significant vulnerabilities or non-compliance that is discovered.

Intermission

So far we’ve covered the first half of the DevSecOps lifecycle. Next week we will publish the second part of this blog series where we’ll cover the remainder of the pipeline. Watch our socials to be sure you get notified when part 2 is published.

If you’re looking for some additional reading in the meantime, check out our container security white paper below.

Learn the 5 best practices for container security and how SBOMs play a pivotal role in securing your software supply chain.

Choosing the Right SBOM Generator: A Framework for Success

Choosing the right SBOM (software bill of materials) generator is trickier than it looks at first glance. SBOMs are the foundation for a number of different uses ranging from software supply chain security to continuous regulatory compliance. Due to its cornerstone nature, the SBOM generator that you choose will either pave the way for achieving your organization’s goals or become a roadblock that delays critical initiatives.

But how do you navigate the crowded market of SBOM generation tools to find the one that aligns with your organization’s unique needs? It’s not merely about selecting a tool with the most features or the nicest CLI. It’s about identifying a solution that maps directly to your desired outcomes and use-cases, whether that’s rapid incident response, proactive vulnerability management, or compliance reporting.

We at Anchore have been enabling organizations to achieve their SBOM-related outcomes with the least amount of frustration and setbacks. We’ve compiled our learnings on choosing the right SBOM generation tool into a framework to help the wider community make decisions that set them up for success.

Below is a quick TL;DR of the high-level evaluation criteria that we cover in this blog post:

  • Understanding Your Use-Cases: Aligning the tool with your specific goals.
  • Ecosystem Compatibility: Ensuring support for your programming languages, operating systems, and build artifacts.
  • Data Accuracy: Evaluating the tool’s ability to provide comprehensive and precise SBOMs.
  • DevSecOps Integration: Assessing how well the tool fits into your existing DevSecOps tooling.
  • Proprietary vs. Open Source: Weighing the long-term implications of your choice.

By focusing on these key areas, you’ll be better equipped to select an SBOM generator that not only meets your current requirements but also positions your organization for future success.

Learn about the role that SBOMs play in the security, including open source software (OSS) security, of your organization in this white paper.

Know your use-cases

When choosing from the array of SBOM generation tools in the market, it is important to frame your decision with the outcome(s) that you are trying to achieve. If your goal is to improve the response time/mean time to remediation when the next Log4j-style incident occurs—and be sure that there will be a next time—an SBOM tool that excels at correctly identifying open source licenses in a code base won’t be the best solution for your use-case (even if you prefer its CLI ;-D).

What to Do:

  • Identify and prioritize the outcomes that your organization is attempting to achieve
  • Map the outcomes to the relevant SBOM use-cases
  • Review each SBOM generation tool to determine whether they are best suited to your use-cases

It can be tempting to prioritize an SBOM generator that is best suited to our preferences and workflows; we are the ones that will be using the tool regularly—shouldn’t we prioritize what makes our lives easier? But if we prioritize our needs above the goal of the initiative, we might end up putting ourselves into a position where our choice of tools impedes our ability to realize the desired outcome. Using the correct framing, in this case by focusing on the use-cases, will keep us focused on delivering the best possible outcome.

SBOMs can be utilized for numerous purposes: security incident response, open source license compliance, proactive vulnerability management, compliance reporting or software supply chain risk management. We won’t address all use-cases/outcomes in this blog post; a more comprehensive treatment of all of the potential SBOM use-cases can be found on our website.

Example SBOM Use-Cases:

  • Security incident response: an inventory of all applications and their dependencies that can be queried quickly and easily to identify whether a newly announced zero-day impacts the organization.
  • Proactive vulnerability management: all software and dependencies are scanned for vulnerabilities as part of the DevSecOps lifecycle and remediated based on organizational priority.
  • Regulatory compliance reporting: compliance artifacts and reports are automatically generated by the DevSecOps pipeline to enable continuous compliance and prevent manual compliance work.
  • Software supply chain risk management: an inventory of software components with identified vulnerabilities used to inform organizational decision making when deciding between remediating risk versus building new features.
  • Open source license compliance: an inventory of all software components and the associated OSS license to measure potential legal exposure.

Pro tip: While you will inevitably leave many SBOM use-cases out of scope for your current project, keeping secondary use-cases in the back of your mind while making a decision on the right SBOM tool will set you up for success when those secondary use-cases eventually become a priority in the future.

Does the SBOM generator support your organization’s ecosystem of programming languages, etc?

SBOM generators aren’t just tools to ingest data and re-format it into a standardized format. They are typically paired with a software composition analysis (SCA) tool that scans an application/software artifact for metadata that will populate the final SBOM.

Support for the complete array of programming languages, build artifacts and operating system ecosystems is essentially an impossible task. This means that support varies significantly depending on the SBOM generator that you select. An SBOM generator’s ability to help you reach your organizational goals is directly related to its support for your organization’s software tooling preferences. This will likely be one of the most important qualifications when choosing between different options and will rule out many that don’t meet the needs of your organization.

Considerations:

  • Programming Languages: Does the tool support all languages used by your team?
  • Operating Systems: Can it scan the different OS environments your applications run on top of?
  • Build Artifacts: Does the tool scan containers? Binaries? Source code repositories? 
  • Frameworks and Libraries: Does it recognize the frameworks and libraries your applications depend on?

Data accuracy

This is one of the most important criteria when evaluating different SBOM tools. An SBOM generator may claim support for a particular programming language but after testing the scanner you may discover that it returns an SBOM with only direct dependencies—honestly not much better than a package.json or go.mod file that your build process spits out.

Two different tools might both generate a valid SPDX SBOM document when run against the same source artifact, but the content of those documents can vary greatly. This variation depends on what the tool can inspect, understand, and translate. Being able to fully scan an application for both direct and transitive dependencies, as well as navigate non-idiomatic patterns for how software can be structured, ends up being the true differentiator between the field of SBOM generation contenders.

Imagine using two SBOM tools on a Debian package. One tool recognizes Debian packages and includes detailed information about them in the SBOM. The other can’t fully parse the Debian .deb format and omits critical information. Both produce an SBOM, but only one provides the data you need to power use-case-based outcomes like security incident response or proactive vulnerability management.

Let’s make this example more concrete by simulating this difference with Syft, Anchore’s open source SBOM generation tool:

$ syft -q -o spdx-json nginx:latest > nginx_a.spdx.json
$ grype -q nginx_a.spdx.json | grep Critical
libaom3             3.6.0-1+deb12u1          (won't fix)       deb   CVE-2023-6879     Critical    
libssl3             3.0.14-1~deb12u2         (won't fix)       deb   CVE-2024-5535     Critical    
openssl             3.0.14-1~deb12u2         (won't fix)       deb   CVE-2024-5535     Critical    
zlib1g              1:1.2.13.dfsg-1          (won't fix)       deb   CVE-2023-45853    Critical

In this example, we first generate an SBOM using Syft then run it through Grype—our vulnerability scanning tool. Syft + Grype uncover 4 critical vulnerabilities.

Now let’s try the same thing but “simulate” an SBOM generator that can’t fully parse the structure of the software artifact in question:

$ syft -q -o spdx-json --select-catalogers "-dpkg-db-cataloger,-binary-classifier-cataloger" nginx:latest > nginx_b.spdx.json 
$ grype -q nginx_b.spdx.json | grep Critical
$

In this case, none of the critical vulnerabilities found by the former tool are returned.

This highlights the importance of careful evaluation of the SBOM generator that you decide on. It could mean the difference between effective vulnerability risk management and a security incident.

Can the SBOM tool integrate into your DevSecOps pipeline?

If the SBOM generator is packaged as a self-contained binary with a command line interface (CLI), then it should tick this box. CI/CD build tools are most amenable to this deployment model. If the SBOM generation tool in question isn’t a CLI, then it should at least run as a server with an API that can be called as part of the build process.

Integrating with an organization’s DevSecOps pipeline is key to enable a scalable SBOM generation process. By implementing SBOM creation directly into the existing build tooling, organizations can leverage existing automation tools to ensure consistency and efficiency which are necessary for achieving the desired outcomes.
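As a rough sketch, a CLI-based generator slots into a build step like the following; the image name, tag variable, and severity threshold are all illustrative:

# CI build step: generate an SBOM for the image that was just built...
syft registry.example.com/my-app:"${BUILD_TAG}" -o spdx-json > build.spdx.json

# ...then gate the pipeline on the scan result (a non-zero exit code fails the build)
grype sbom:build.spdx.json --fail-on critical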

Proprietary vs. open source SBOM generator?

Using an open source SBOM tool is considered an industry best practice because it guards against the risks associated with vendor lock-in. As a bonus, the ecosystem for open source SBOM generation tooling is very healthy. OSS will always have an advantage over proprietary tooling in regard to ecosystem coverage and data quality because it gets into the hands of more users, which creates a feedback loop that closes gaps in coverage and quality.

Finally, even if your organization decides to utilize a software supply chain security product that has its own proprietary SBOM generator, it is still better to create your SBOMs with an open source SBOM generator, export them to a standardized format (e.g., SPDX or CycloneDX), and then have your software supply chain security platform ingest these non-proprietary data structures. All platforms are able to ingest SBOMs in one or both of these standards-based formats.
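In practice this can be as simple as emitting both standards-based formats in a single pass; the sketch below uses Syft, which can write multiple output formats at once (the image name is a placeholder):

# Generate SPDX and CycloneDX documents side by side, then hand either one to your platform of choice
syft registry.example.com/my-app:latest -o spdx-json=my-app.spdx.json -o cyclonedx-json=my-app.cdx.json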

Wrap-Up

In a landscape where the next security/compliance/legal challenge is always just around the corner, equipping your team with the right SBOM generator empowers you to act swiftly and confidently. It’s an investment not just in a tool, but in the resilience and security of your entire software supply chain. By making a thoughtful, informed choice now, you’re laying the groundwork for a more secure and efficient future.

Anchore Available on AWS Marketplace and Joins ISV Accelerate

We are excited to announce two significant milestones in our partnership with Amazon Web Services (AWS) today:  

  • Anchore Enterprise can now be purchased through the AWS Marketplace, and
  • Anchore has joined the Amazon Partner Network’s (APN) ISV Accelerate Program

Organizations like Nvidia, Cisco Umbrella, and Infoblox validate our commitment to delivering trusted solutions for SBOM management, secure software supply chains, and automated compliance enforcement, and they can now benefit from a stronger partnership between AWS and Anchore.

Anchore’s best-in-breed container security solution was chosen by Cisco Umbrella because it seamlessly integrated into their AWS infrastructure and accelerated meeting all six FedRAMP requirements. They deployed Anchore into an environment that had to meet a number of high-security and compliance standards. Chief among those was STIG compliance for the Amazon EC2 nodes that backed the Amazon EKS deployment.

In addition, Anchore Enterprise supports high-security requirements such as IL4/IL6, FIPS, SSDF attestation, and EO 14028 compliance.

Contact Anchore’s sales team today for a pricing quote or demo that suits your unique needs.

Anchore Enterprise is now available on AWS Marketplace

The AWS Marketplace offers a convenient and efficient way for AWS customers to procure Anchore. It simplifies the procurement process, provides greater control and governance, and fosters innovation by offering a rich selection of tools and services that seamlessly integrate with their existing AWS infrastructure. 

Anchore Enterprise on AWS Marketplace benefits DevSecOps teams by enabling:

  • Self-procurement via the AWS console
  • Faster procurement with applicable legal terms provided and standardized security review
  • Easier budget management with a single consolidated AWS bill for all infrastructure spend
  • Spend on Anchore Enterprise partially counts towards EDP (Enterprise Discount Program) committed spend

By strengthening our collaboration with AWS, customers can now feel at ease that Anchore Enterprise integrates and operates seamlessly on AWS infrastructure. Joining the ISV Accelerate Program allows us to work closely with AWS account teams to ensure seamless support and exceptional service for our joint clients. 

Purchase Anchore Enterprise on the AWS Marketplace or contact our sales team for a pricing quote that meets your organization’s needs.

Automate STIG Compliance with MITRE SAF: the Fastest Path to ATO

Trying to get your head around STIG (Security Technical Implementation Guides) compliance? Anchore is here to help. With the help of the MITRE Security Automation Framework (SAF), we’ll walk you through the quickest path to STIG compliance and, ultimately, the coveted Authority to Operate (ATO).

The goal for any company that aims to provide software services to the Department of Defense (DoD) is an ATO. Without this stamp of approval, your software will never get into the hands of the warfighters that need it most. STIG compliance is a necessary needle that must be threaded on the path to ATO. Luckily, MITRE has developed and open-sourced SAF to smooth the often complex and time-consuming STIG compliance process.

We’ll get you up to speed on MITRE SAF and how it helps you achieve STIG compliance in this blog post, but before we jump straight into the content, be sure to bookmark our webinar with Aaron Lippold, Chief Architect of the MITRE Security Automation Framework (SAF). Josh Bressers, VP of Security at Anchore, and Lippold provide a behind-the-scenes look at SAF and how it dramatically reduces the friction of the STIG compliance process.

What is the MITRE Security Automation Framework (SAF)?

The MITRE SAF is both a high-level cybersecurity framework and an umbrella that encompasses a set of security/compliance tools. It is designed to simplify STIG compliance by translating DISA (Defense Information Systems Agency) SRG (Security Requirements Guide) guidance into actionable steps. 

By following the Security Automation Framework, organizations can streamline and automate the hardened configuration of their DevSecOps pipeline to achieve an ATO (Authority to Operate).

The SAF offers four primary benefits:

  1. Accelerate Path to ATO: By streamlining STIG compliance, SAF enables organizations to get their applications into the hands of DoD operators faster. This acceleration is crucial for meeting operational demands without compromising on security standards.
  2. Establish Security Requirements: SAF translates SRGs and STIGs into actionable steps tailored to an organization’s specific DevSecOps pipeline. This eliminates ambiguity and ensures security controls are implemented correctly.
  3. Build Security In: The framework provides tooling that can be directly embedded into the software development pipeline. By automating STIG configurations and policy checks, it ensures that security measures are consistently applied, leaving no room for false steps.
  4. Assess and Monitor Vulnerabilities: SAF offers visualization and analysis tools that assist organizations in making informed decisions about their current vulnerability inventory. It helps chart a path toward achieving STIG compliance and ultimately an ATO.

The overarching vision of the MITRE SAF is to “implement evolving security requirements while deploying apps at speed.” In essence, it allows organizations to have their cake and eat it too—gaining the benefits of accelerated software delivery without letting cybersecurity risks grow unchecked.

How does MITRE SAF work?

MITRE SAF is segmented into 5 capabilities that map to specific stages of the DevSecOps pipeline or STIG compliance process:

  1. Plan
  2. Harden
  3. Validate
  4. Normalize
  5. Visualize

Let’s break down each of these capabilities.

Plan

There are hundreds of existing STIGs for products ranging from Microsoft Windows to Cisco routers to MySQL databases. On the off chance that a product your team wants to use doesn’t have a pre-existing STIG, SAF’s Vulcan tool helps translate the applicable SRG into a tailored STIG that can then be used to achieve compliance.

Vulcan helps streamline the process of creating STIG-ready security guidance and the accompanying InSpec automated policy that confirms a specific instance of software is configured in a compliant manner.

Vulcan does this by modeling the STIG intent form and tailoring the applicable SRG controls into a finished STIG for an application. The finished STIG is then sent to DISA for peer review and formal publishing as a STIG. Vulcan allows the author to develop both human-readable instructions and machine-readable InSpec automated validation code at the same time.

Diagram of the process to map SRG controls to STIG guidelines via the MITRE SAF Vulcan tool, an automated conversion tool that speeds up the STIG compliance process.

Harden

The hardening capability focuses on automating STIG compliance through the use of pre-built infrastructure configuration scripts. SAF hardening content allows organizations to:

  • Use their preferred configuration management tools: Chef Cookbooks, Ansible Playbooks, Terraform Modules, etc. are available as open source templates on MITRE’s GitHub page.
  • Share and collaborate: All hardening content is open source, encouraging community involvement and shared learning.
  • Coverage for the full development stack: Ensuring that every layer, from cloud infrastructure to applications, adheres to security standards.

Validate

The validation capability focuses on verifying that the hardening meets the applicable STIG compliance standard. These validation checks are automated via the SAF CLI tool, which incorporates the InSpec policies for a STIG (a minimal validation run is sketched after this list). With SAF CLI, organizations can:

  • Automatically validate STIG compliance: By integrating SAF CLI directly into your CI/CD pipeline and invoking InSpec policy checks at every build, you shift security left by surfacing policy violations early.
  • Promote community collaboration: Like the hardening content, validation scripts are open source and accessible by the community for collaborative efforts.
  • Span the entire development stack: Validation—similar to hardening—isn’t limited to a single layer; it encompasses cloud infrastructure, platforms, operating systems, databases, web servers, and applications.
  • Incorporate manual attestation: To achieve comprehensive coverage of policy requirements that automated tools might not fully address.
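Here is a minimal sketch of such a validation run, assuming MITRE’s publicly available RHEL 8 STIG baseline InSpec profile and an illustrative SSH target:

# Run the STIG baseline profile against a target host and emit machine-readable results
inspec exec https://github.com/mitre/redhat-enterprise-linux-8-stig-baseline \
  -t ssh://ec2-user@10.0.0.12 --reporter cli json:stig_results.json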

Normalize

Normalization addresses the challenge of interoperability between different security tools and data formats. SAF CLI performs double-duty by taking on the normalization function as well as validation. It is able to:

  • Translate data into OHDF: The OASIS Heimdall Data Format (OHDF) is an open standard that structures countless proprietary security metadata formats into a single universal format.
  • Leverage open source OHDF libraries: Organizations can use OHDF converters as libraries within their custom applications.
  • Automate data conversion: By incorporating SAF CLI into the DevSecOps pipeline, data is automatically standardized with each run.
  • Increase compliance efficiency: A single data format for all security data enables interoperability and facilitates efficient, automated STIG compliance.

Example: Below is an example of Burp Suite’s proprietary data format normalized to the OHDF JSON format:

Image of Burp Suite data format being mapped to MITRE SAF's OHDF to reduce manual data mapping and reduce time to STIG compliance.

Visualize

Visualization is critical for understanding security posture and making informed decisions. SAF provides an open source, self-hosted visualization tool named Heimdall. It ingests OHDF normalized security data and provides the data analysis tools to enable organizations to:

  • Aggregate security and compliance results: Compiling data into comprehensive rollups, charts, and timelines for a holistic view of security and compliance status.
  • Perform deep dives: Allowing teams to explore detailed vulnerability information to facilitate investigation and remediation, ultimately speeding up time to STIG compliance.
  • Guide risk reduction efforts: Visualized insights help with the prioritization of security and compliance tasks, reducing risk in the most efficient manner.

How is SAF related to a DoD Software Factory?

A DoD Software Factory is the common term for a DevSecOps pipeline that meets the definition laid out in DoD Enterprise DevSecOps Reference Design. All software that ultimately achieves an ATO has to be built on a fully implemented DoD Software Factory. You can either build your own or use a pre-existing DoD Software Factory like the US Air Force’s Platform One or the US Navy’s Black Pearl.

As we saw earlier, MITRE SAF is a framework meant to help you achieve STIG compliance and is a portion of your journey towards an ATO. STIG compliance applies to both the software that you write as well as the DevSecOps platform that your software is built on. Building your own DoD Software Factory means committing to going through the ATO process and STIG compliance for the DevSecOps platform first then a second time for the end-user application.

Wrap-Up

The MITRE SAF is a huge leg up for modern, cloud-native DevSecOps software vendors that are currently navigating the labyrinth towards ATO. By providing actionable guidance, automation tooling, and a community-driven approach, SAF dramatically reduces the time to ATO. It bridges the gap between the speed of DevOps software delivery and secure, compliant applications ready for critical DoD missions with national security implications. 

Embracing SAF means more than just meeting regulatory requirements; it’s about building a resilient, efficient, and secure development pipeline that can adapt to evolving security demands. In an era where cybersecurity threats are evolving just as rapidly as software, leveraging frameworks like MITRE SAF is not just an efficient path to compliance—it’s essential for sustained success.

Introducing Anchore Data Service and Anchore Enterprise 5.10

We are thrilled to announce the release of Anchore Enterprise 5.10, our tenth release of 2024. This update brings two major enhancements that will elevate your experience and bolster your security posture: the new Anchore Data Service (ADS) and expanded AnchoreCTL ecosystem coverage. 

With ADS, we’ve built a fast and reliable solution that reduces time spent by DevSecOps teams debugging intermittent network issues from flaky services that are vital to software supply chain security.

On top of that, we have buffed our software composition analysis (SCA) scanner’s ecosystem coverage (e.g., C++, Swift, Elixir, R, etc.) for all Anchore customers. To do this, we embedded Syft, our popular open source SCA/SBOM (software bill of materials) generator, directly into Anchore Enterprise.

It’s been a fall of big releases at Anchore and we’re excited to continue delivering value to our loyal customers. Read on to get all of the gory details >>

Announcing the Anchore Data Service

Before, customers ran the Anchore Feed Service within their environment to pull data feeds into their Anchore Enterprise deployment. To get an idea of what this looked like, see the architecture diagram of Anchore Enterprise prior to version 5.10:

Originally we did this to give customers more control over their environment. Unfortunately, this wasn’t without its issues. The data feeds are provided by the community, which means the services were designed to be accessible but cost-efficient. As a result, they were unreliable and frequently had availability issues.

We only have to stretch our memory back to the spring to recall an example that made national headlines. The National Vulnerability Database (NVD) ran into funding issues. This impacted both the analysis of newly published vulnerabilities AND the availability of their API. This created significant friction for Anchore customers—not to mention the entirety of the software industry.

Now, Anchore is running its own enterprise-grade service, named Anchore Data Service (ADS). It is a replacement for the former feed service. ADS aggregates all of the community data feeds, enriches the data (with proprietary threat data) and packages it for customers, all with the service availability guarantee expected of an enterprise service.

The new architecture with ADS as the intermediary is illustrated below:

As a bonus for our customers running air-gapped deployments of Anchore Enterprise, there is no more need to run a second deployment of Anchore Enterprise in a DMZ to pull down the data feeds. Instead, a single file is pulled from ADS and transferred to a USB thumb drive. From there, a single CLI command updates your air-gapped deployment of Anchore Enterprise.

Increased AnchoreCTL Ecosystem Coverage

We have increased the number of supported ecosystems (e.g., C++, Swift, Elixir, R, etc.) in Anchore Enterprise. This improves coverage and increases the likelihood that all of your organization’s applications can be scanned and protected by Anchore Enterprise.

More importantly, we have completely re-architected the process for how Anchore Enterprise supports new ecosystems. By integrating Syft—Anchore’s open source SBOM generation tool—directly into AnchoreCTL, Anchore’s customers now get access to new ecosystems as they are merged into Syft’s codebase.

Before, Syft and AnchoreCTL were somewhat separate, which caused AnchoreCTL’s support for new ecosystems to lag behind Syft’s. Now, they are fully integrated. This enables all of Anchore’s enterprise and public sector customers to take full advantage of the open source community’s development velocity.

Full list of supported ecosystems

Below is a complete list of all supported ecosystems by both Syft and AnchoreCTL (as of Anchore Enterprise 5.10; see our docs for most current info):

  • Alpine (apk)
  • C (conan)
  • C++ (conan)
  • Dart (pubs)
  • Debian (dpkg)
  • Dotnet (deps.json)
  • Objective-C (cocoapods)
  • Elixir (mix)
  • Erlang (rebar3)
  • Go (go.mod, Go binaries)
  • Haskell (cabal, stack)
  • Java (jar, ear, war, par, sar, nar, native-image)
  • JavaScript (npm, yarn)
  • Jenkins Plugins (jpi, hpi)
  • Linux kernel archives (vmlinuz)
  • Linux kernel modules (ko)
  • Nix (outputs in /nix/store)
  • PHP (composer)
  • Python (wheel, egg, poetry, requirements.txt)
  • Red Hat (rpm)
  • Ruby (gem)
  • Rust (cargo.lock)
  • Swift (cocoapods, swift-package-manager)
  • WordPress plugins

After you update to Anchore Enterprise 5.10, the SBOM inventory will display all of the new ecosystems. Any SBOMs that have been generated for a particular ecosystem will show up at the top. The screenshot below gives you an idea of what this will look like:

Wrap-Up

Anchore Enterprise 5.10 marks a new chapter in providing reliable, enterprise-ready security tooling for modern software development. The introduction of the Anchore Data Service ensures that you have consistent and dependable access to critical vulnerability and exploit data, while the expanded ecosystem support means that no part of your tech stack is left unscrutinized for latent risk. Upgrade to the latest version and experience these new features for yourself.

To update and leverage these new features, check out our docs, reach out to your Customer Success Engineer, or contact our support team. Your feedback is invaluable to us, and we look forward to continuing to support your organization’s security needs.

We are offering all of our product updates as a new quarterly product update webinar series. Watch the fall webinar update in the player below to get all of the juicy tidbits from our product team.

Compliance Requirements for DISA’s Security Technical Implementation Guides (STIGs)

In the rapidly modernizing landscape of cybersecurity compliance, evolving to a continuous compliance posture is more critical than ever—particularly for organizations involved with the Department of Defense (DoD) and other government agencies. At the heart of the DoD’s modern approach to software development is the DoD Enterprise DevSecOps Reference Design, commonly implemented as a DoD Software Factory.

A key component of this framework is adhering to the Security Technical Implementation Guides (STIGs) developed by the Defense Information Systems Agency (DISA). STIG compliance within the DevSecOps pipeline not only accelerates the delivery of secure software but also embeds robust security practices directly into the development process, safeguarding sensitive data and reinforcing national security.

This comprehensive guide will walk you through what STIGs are, who should care about them, the levels of STIG compliance, key categories of STIG requirements, how to prepare for the STIG compliance process, and the tools available to automate STIG implementation and maintenance.

What are STIGs and who should care?

Understanding DISA and STIGs

The Defense Information Systems Agency (DISA) is the DoD agency responsible for delivering information technology (IT) support to ensure the security of U.S. national defense systems. To help organizations meet the DoD’s rigorous security controls, DISA develops Security Technical Implementation Guides (STIGs).

STIGs are configuration standards that provide prescriptive guidance on how to secure operating systems, network devices, software, and other IT systems. They serve as a secure configuration standard to harden systems against cyber threats.

For example, a STIG for the open source Apache web server would specify that encryption is enabled for all traffic (incoming or outgoing). This would require generating SSL/TLS certificates on the server in the correct location, updating the server’s configuration file to reference this certificate, and re-configuring the server to serve traffic from a secure port rather than the default insecure port.
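
To make this concrete, here is a minimal sketch of how a check like that could be automated. It is illustrative only (not an official DISA STIG check) and it assumes a typical Apache httpd configuration at a hypothetical path, looking only for the handful of settings described above.

    # stig_apache_check.py -- illustrative sketch, not an official DISA STIG check.
    # Assumes a typical Apache httpd configuration; the path below is an example.
    import re
    import sys

    CONF_PATH = "/etc/httpd/conf/httpd.conf"  # hypothetical default location

    def check_apache_config(path: str) -> list[str]:
        with open(path, encoding="utf-8") as f:
            text = f.read()
        findings = []
        if not re.search(r"^\s*SSLEngine\s+on", text, re.MULTILINE | re.IGNORECASE):
            findings.append("TLS is not enabled (missing 'SSLEngine on')")
        if not re.search(r"^\s*SSLCertificateFile\s+\S+", text, re.MULTILINE | re.IGNORECASE):
            findings.append("no certificate referenced (missing 'SSLCertificateFile')")
        if not re.search(r"^\s*Listen\s+443", text, re.MULTILINE):
            findings.append("server is not listening on the secure port 443")
        if re.search(r"^\s*Listen\s+80\b", text, re.MULTILINE):
            findings.append("server still listens on the default insecure port 80")
        return findings

    if __name__ == "__main__":
        issues = check_apache_config(sys.argv[1] if len(sys.argv) > 1 else CONF_PATH)
        for issue in issues:
            print(f"FINDING: {issue}")
        sys.exit(1 if issues else 0)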

Who should care about STIG compliance?

STIG compliance is mandatory for any organization that operates within the DoD network or handles DoD information. This includes:

  • DoD Contractors and Vendors: Companies providing products or services to the DoD—a.k.a. the defense industrial base (DIB)—must ensure their systems comply with STIG requirements.
  • Government Agencies: Federal agencies interfacing with the DoD need to adhere to applicable STIGs.
  • DoD Information Technology Teams: IT professionals within the DoD responsible for system security must implement STIGs.

Connection to the RMF and NIST SP 800-53

The Risk Management Framework (RMF)—more formally NIST 800-37—is a framework that integrates security and risk management into IT systems as they are being developed. The STIG compliance process outlined below is directly integrated into the higher-level RMF process. As you follow the RMF, the individual steps of STIG compliance will be completed in turn.

STIGs are also closely connected to NIST 800-53, colloquially known as the “Control Catalog”. NIST 800-53 outlines security and privacy controls for all federal information systems; the controls are not prescriptive about implementation, only about the best practices and outcomes that need to be achieved.

As DISA developed the STIG compliance standard, they started with the NIST 800-53 controls then “tailored” them to meet the needs of the DoD; these customized security best practices are known as Security Requirements Guides (SRGs). In order to remove all ambiguity around how to meet these higher-level best practices, STIGs were created with implementation-specific instructions.

For example, an SRG will mandate that all systems utilize a cybersecurity best practice, such as role-based access control (RBAC), to prevent users without the correct privileges from accessing certain systems. A STIG, on the other hand, will detail exactly how to configure an RBAC system to meet the highest security standards.

Levels of STIG Compliance

The DISA STIG compliance standard uses Severity Category Codes to classify vulnerabilities based on their potential impact on system security. These codes help organizations prioritize remediation efforts. The three Severity Category Codes are:

  1. Category I (Cat I): These are the highest risk vulnerabilities, allowing an attacker immediate access to a system or network or allowing superuser access. Due to their high risk nature, these vulnerabilities must be addressed immediately.
  2. Category II (Cat II): These vulnerabilities provide information with a high potential of giving access to intruders. These findings are considered a medium risk and should be remediated promptly.
  3. Category III (Cat III): These vulnerabilities constitute the lowest risk, providing information that could potentially lead to compromise. Although not as pressing as Cat I & II issues, it is still important to address these vulnerabilities to minimize risk and enhance overall security.

Understanding these categories is crucial in the STIG process, as they guide organizations in prioritizing remediation of vulnerabilities.

Key categories of STIG requirements

Given the extensive range of technologies used in DoD environments, there are hundreds of STIGs applicable to different systems, devices, applications, and more. While we won’t list all STIG requirements here, it’s important to understand the key categories and who they apply to.

1. Operating System STIGs

Applies to: System Administrators and IT Teams managing servers and workstations

Examples:

  • Microsoft Windows STIGs: Provides guidelines for securing Windows operating systems.
  • Linux STIGs: Offers secure configuration requirements for various Linux distributions.

2. Network Device STIGs

Applies to: Network Engineers and Administrators

Examples:

  • Network Router STIGs: Outlines security configurations for routers to protect network traffic.
  • Network Firewall STIGs: Details how to secure firewall settings to control access to networks.

3. Application STIGs

Applies to: Software Developers and Application Managers

Examples:

  • Generic Application Security STIG: Outlines the security best practices needed for an application to be STIG compliant
  • Web Server STIG: Provides security requirements for web servers.
  • Database STIG: Specifies how to secure database management systems (DBMS).

4. Mobile Device STIGs

Applies to: Mobile Device Administrators and Security Teams

Examples:

  • Apple iOS STIG: Provides guidance for securing Apple mobile devices used within the DoD.
  • Android OS STIG: Details security configurations for Android devices.

5. Cloud Computing STIGs

Applies to: Cloud Service Providers and Cloud Infrastructure Teams

Examples:

  • Microsoft Azure SQL Database STIG: Offers security requirements for Azure SQL Database cloud service.
  • Cloud Computing OS STIG: Details secure configurations for any operating system offered by a cloud provider that doesn’t have a specific STIG.

Each category addresses specific technologies and includes a STIG checklist to ensure all necessary configurations are applied. 

You can view an example of a STIG checklist for “Application Security and Development” by following this link.

How to Prepare for the STIG Compliance Process

Achieving DISA STIG compliance involves a structured approach. Here are the stages of the STIG process and tips to prepare:

Stage 1: Identifying Applicable STIGs

With hundreds of STIGs relevant to different organizations and technology stacks, this step should not be underestimated. First, conduct an inventory of all systems, devices, applications, and technologies in use. Then, review the complete list of STIGs to match each to your inventory to ensure that all critical areas requiring secure configuration are addressed. This step is essential to avoiding gaps in compliance.

Tip: Use automated tools to scan your environment and then match assets to relevant STIGs.
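
As a rough illustration of that tip, the sketch below matches a simple asset inventory against a lookup table of applicable STIGs and flags anything without a match for manual review. The asset identifiers and STIG titles are hypothetical placeholders, not an authoritative mapping.

    # Illustrative sketch: match a simple asset inventory to applicable STIGs.
    # The asset identifiers and STIG titles are hypothetical placeholders.
    APPLICABLE_STIGS = {
        "rhel-9": ["Red Hat Enterprise Linux 9 STIG"],
        "windows-server-2022": ["Microsoft Windows Server 2022 STIG"],
        "apache-httpd": ["Apache Server 2.4 STIG"],
        "postgresql": ["PostgreSQL STIG"],
    }

    inventory = ["rhel-9", "apache-httpd", "legacy-ftp-appliance"]  # example assets

    for asset in inventory:
        stigs = APPLICABLE_STIGS.get(asset)
        if stigs:
            print(f"{asset}: apply {', '.join(stigs)}")
        else:
            print(f"{asset}: no STIG match found -- review manually for a compliance gap")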

Stage 2: Implementation

After you’ve mapped your technology to the corresponding STIGs, the process of implementing the security configurations outlined in the guides begins. This step may require collaboration between IT, security, and development teams to ensure that the configurations are compatible with the organization’s infrastructure while enforcing strict security standards. Be sure to keep detailed records of changes made.

Tip: Prioritize implementing fixes for Cat I vulnerabilities first, followed by Cat II and Cat III. Depending on the urgency and needs of the mission, ATO can still be achieved with partial STIG compliance. Prioritizing efforts increases the chances that partial compliance is permitted.

Stage 3: Auditing & Maintenance

After the STIGs have been implemented, regular auditing and maintenance are critical to ensure ongoing compliance, verifying that no deviations have occurred over time due to system updates, patches, or other changes. This stage includes periodic scans, manual reviews, and remediation of any identified gaps. Additionally, organizations should develop a plan to stay informed about new STIG releases and updates from DISA.

Tip: Establish a maintenance schedule and assign responsibilities to team members. Alternatively, you can automate this process by adopting a policy-as-code approach to continuous compliance: embed STIG compliance requirements “as code” directly into your DevSecOps pipeline.

General Preparation Tips

  • Training: Ensure your team is familiar with STIG requirements and the compliance process.
  • Collaboration: Work cross-functionally with all relevant departments, including IT, security, and compliance teams.
  • Resource Allocation: Dedicate sufficient resources, including time and personnel, to the compliance effort.
  • Continuous Improvement: Treat STIG compliance as an ongoing process rather than a one-time project.

Tools to automate STIG implementation and maintenance

Automation can significantly streamline the STIG compliance process. Here are some tools that can help:

1. Anchore STIG (Static and Runtime)

  • Purpose: Automates the process of checking container images against STIG requirements.
  • Benefits:
    • Simplifies compliance for containerized applications.
    • Integrates into CI/CD pipelines for continuous compliance.
  • Use Case: Ideal for DevSecOps teams utilizing containers in their deployments.

2. SCAP Compliance Checker

  • Purpose: Provides automated compliance scanning using the Security Content Automation Protocol (SCAP).
  • Benefits:
    • Validates system configurations against STIGs.
    • Generates detailed compliance reports.
  • Use Case: Useful for system administrators needing to audit various operating systems.

3. DISA STIG Viewer

  • Purpose: Helps in viewing and managing STIG checklists.
  • Benefits:
    • Allows for easy navigation of STIG requirements.
    • Facilitates documentation and reporting.
  • Use Case: Assists compliance officers in tracking compliance status.

4. DevOps Automation Tools

  • Infrastructure Automation Examples: Red Hat Ansible, Perforce Puppet, Hashicorp Terraform
  • Software Build Automation Examples: CloudBees CI, GitLab
  • Purpose: Automate the deployment of secure configurations that meet STIG compliance across multiple systems.
  • Benefits:
    • Ensures consistent application of secure configuration standards.
    • Reduces manual effort and the potential for errors.
  • Use Case: Suitable for large-scale environments where manual configuration is impractical.

5. Vulnerability Management Tools

  • Examples: Anchore Secure
  • Purpose: Identify vulnerabilities and compliance issues within your network.
  • Benefits:
    • Provides actionable insights to remediate security gaps.
    • Offers continuous monitoring capabilities.
  • Use Case: Critical for security teams focused on proactive risk management.

Wrap-Up

Achieving DISA STIG compliance is mandatory for organizations working with the DoD. By understanding what STIGs are, who they apply to, and how to navigate the compliance process, your organization can meet the stringent compliance requirements set forth by DISA. As a bonus, your organization will enhance its security posture and reduce the potential for a security breach.

Remember, compliance is not a one-time event but an ongoing effort that requires regular updates, audits, and maintenance. Leveraging automation tools like Anchore STIG and Anchore Secure can significantly ease this burden, allowing your team to focus on strategic initiatives rather than manual compliance tasks.

Stay proactive, keep your team informed, and make use of the resources available to ensure that your IT systems remain secure and compliant.

Navigating Open Source Software Compliance in Regulated Industries

Open source software (OSS) brings a wealth of benefits: speed, innovation, and cost savings. But when serving customers in highly regulated industries like defense, energy, or finance, a new complication enters the picture—compliance.

Imagine your DevOps-fluent engineering team has been leveraging OSS to accelerate product delivery, and suddenly, a major customer hits you with a security compliance questionnaire. What now? 

Regulatory compliance isn’t just about managing the risks of OSS for your business anymore; it’s about providing concrete evidence that you meet standards like FedRAMP and the Secure Software Development Framework (SSDF).

The tricky part is that the OSS “suppliers” making up 70-90% of your software supply chain aren’t traditional vendors—they don’t have the same obligations or accountability, and they’re not necessarily aligned with your compliance needs. 

So, who bears the responsibility? You do.

The OSS your engineering team consumes is your resource and your responsibility. This means you’re not only tasked with managing the security risks of using OSS but also with proving that both your applications and your OSS supply chain meet compliance standards. 

In this post, we’ll explore why you’re ultimately responsible for the OSS you consume and outline practical steps to help you use OSS while staying compliant.

Learn about CISA’s SSDF attestation form and how to meet compliance.

What does it mean to use open source software in a regulated environment?

Highly regulated environments add a new wrinkle to the OSS security narrative. The OSS developers that author the software dependencies that make up the vast majority of modern software supply chains aren’t vendors in the traditional sense. They are more of a volunteer force that allows you to re-use their work, but it is a take-it-or-leave-it agreement. You have no recourse if it doesn’t work as expected, or worse, has vulnerabilities in it.

So, how do you meet compliance standards when your software supply chain is built on top of a foundation of OSS?

Who is the vendor? You are!

Whether you have internalized this or not, the open source software that your developers consume is your resource and thus your responsibility.

This means that you shoulder the burden of not only managing the security risk of consuming OSS but also proving that both your applications and your OSS supply chain meet compliance standards.

Open source software is a natural resource

Before we jump into how to accomplish the task set forth in the previous section, let’s take some time to understand why you are the vendor when it comes to open source software.

The common idea is that OSS is produced by a 3rd-party that isn’t part of your organization, so they are the software supplier. Shouldn’t they be the ones required to secure their code? They control and maintain what goes in, right? How are they not responsible?

To answer that question, let’s think about OSS as a natural resource that is shared by the public at large, for instance the public water supply.

This shouldn’t be too much of a stretch. We already use terms like upstream and downstream to think about the relationship between software dependencies and the global software supply chain.

Using this mental model, it becomes easier to understand that a public good isn’t a supplier. You can’t ask a river or a lake for an audit report proving it is contaminant-free and safe to drink.

Instead the organization that processes and provides the water to the community is responsible for testing the water and guaranteeing its safety. In this metaphor, your company is the one processing the water and selling it as pristine bottled water. 

How do you pass the buck to your “supplier”? You can’t. That’s the point.

This probably has you asking yourself: if I am responsible for my own OSS supply chain, then how do I meet a compliance standard for something that I don’t control? Keep reading and you’ll find out.

How do I use OSS and stay compliant?

While compliance standards are often thought of as rigid, the reality is much more nuanced. Just because your organization doesn’t own/control the open source projects that you consume doesn’t mean that you can’t use OSS and meet compliance requirements.

There are a few different steps that you need to take in order to build a “reasonably secure” OSS supply chain that will pass a compliance audit. We’ll walk you through the steps below:

Step 1 — Know what you have (i.e., an SBOM inventory)

The foundation of the global software supply chain is the SBOM (software bill of materials) standard. Each of the security and compliance functions outlined in the steps below use or manipulate an SBOM.

SBOMs are the foundational component of the global software supply chain because they record the ingredients that were used to produce the application an end-user will consume. If you don’t have a good grasp of the ingredients of your applications there isn’t much hope for producing any upstream security or compliance guarantees.

The best way to create observability into your software supply chain is to generate an SBOM for every single application in your DevSecOps build pipeline—at each stage of the pipeline!
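
For a feel of what this looks like in practice, here is a minimal sketch that shells out to Syft, an open source SBOM generator, at a given pipeline stage and files the result in a local inventory. The directory layout, stage names, and application name are illustrative assumptions rather than a prescribed structure.

    # Illustrative sketch: generate an SBOM with Syft at a given pipeline stage
    # and file it in a local inventory. Assumes the open source `syft` CLI is
    # installed; the inventory layout, stage names, and app name are examples.
    import pathlib
    import subprocess
    from datetime import datetime, timezone

    INVENTORY_ROOT = pathlib.Path("sbom-inventory")  # hypothetical location

    def generate_sbom(app_name: str, stage: str, target: str) -> pathlib.Path:
        """Run Syft against `target` (a directory or container image) and save the SBOM."""
        timestamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        out_dir = INVENTORY_ROOT / app_name / stage
        out_dir.mkdir(parents=True, exist_ok=True)
        result = subprocess.run(
            ["syft", target, "-o", "spdx-json"],  # SPDX JSON written to stdout
            check=True, capture_output=True, text=True,
        )
        out_path = out_dir / f"{timestamp}.spdx.json"
        out_path.write_text(result.stdout)
        return out_path

    if __name__ == "__main__":
        # e.g., record the SBOM of the current source tree during the "source" stage
        print(generate_sbom("payments-api", "source", "dir:."))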

Step 2 — Maintain a historical record of application source code

To meet compliance standards like FedRAMP and SSDF, you need to be able to maintain a historical record of the source code of your applications, including: 

  • Where it comes from, 
  • Who created it, and 
  • Any modifications made to it over time.

SBOMs were designed to meet these requirements. They act as a record of how applications were built and when/where OSS dependencies were introduced. They also double as compliance artifacts that prove you are compliant with regulatory standards.

Governments aren’t content with self-attestation (at least not for long); they need hard evidence to verify that you are trustworthy. Even though SSDF is currently self-attestation only, the federal government is known for rolling out compliance frameworks in stages: first advising on best practices, then requiring self-attestation, and finally mandating external validation via a certification process.

The Cybersecurity Maturity Model Certification (CMMC) is a good example of this dynamic process. It recently transitioned from self-attestation to external validation with the introduction of the 2.0 release of the framework.

Step 3 — Manage your OSS vulnerabilities

Not only do you need to keep a record of applications as they evolve over time, but you also have to track the known vulnerabilities of your OSS dependencies to achieve compliance. Just as SBOMs prove provenance, vulnerability scans are proof that your application and its dependencies aren’t vulnerable. These scans are a crucial piece of the evidence that you will need to provide to your compliance officer as you go through the certification process.

Remember, the buck stops with you! If the OSS that your application consumes doesn’t supply an SBOM and vulnerability scan (which is the case for essentially all OSS projects), then you are responsible for creating them. There is no vendor to pass the blame to for proving that your supply chain is reasonably secure and thus compliant.

Step 4 — Continuous compliance of open source software supply chain

It is important to recognize that modern compliance standards are no longer sprints but marathons. Not only do you have to prove that your application(s) are compliant at the time of audit, but you have to demonstrate that they remain secure continuously in order to maintain your certification.

This can be challenging to scale but it is made easier by integrating SBOM generation, vulnerability scanning and policy checks directly into the DevSecOps pipeline. This is the approach that modern, SBOM-powered SCAs advocate for.

By embedding compliance policy-as-code into your DevSecOps pipeline as policy gates, compliance can be maintained over time. Developers are alerted when their code doesn’t meet a compliance standard and are directed to take corrective action. These compliance checks can also be used to automatically generate the compliance artifacts needed.

You already have an automated DevSecOps pipeline that is producing and delivering applications with minimal human intervention, so why not take advantage of this existing tooling to automate open source software compliance in the same way that security was integrated directly into DevOps?
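
As a minimal sketch of what such a policy gate can look like, the snippet below parses a vulnerability report and fails the pipeline step when findings meet or exceed a severity threshold. It assumes a Grype-style JSON report with a top-level "matches" list; the threshold, file name, and severity ordering are example policy choices, not a mandated standard.

    # Illustrative policy gate: fail the pipeline step if the vulnerability report
    # contains findings at or above a chosen severity. Assumes a Grype-style JSON
    # report with a top-level "matches" list; threshold and path are example values.
    import json
    import sys

    SEVERITY_ORDER = ["Negligible", "Low", "Medium", "High", "Critical"]
    THRESHOLD = "High"  # example policy: block High and Critical findings

    def blocking_findings(report_path: str) -> list[str]:
        with open(report_path, encoding="utf-8") as f:
            report = json.load(f)
        minimum = SEVERITY_ORDER.index(THRESHOLD)
        findings = []
        for match in report.get("matches", []):
            severity = match["vulnerability"].get("severity", "Unknown")
            if severity in SEVERITY_ORDER and SEVERITY_ORDER.index(severity) >= minimum:
                pkg = match["artifact"]
                vuln_id = match["vulnerability"].get("id", "unknown")
                findings.append(f"{vuln_id} ({severity}) in {pkg['name']} {pkg['version']}")
        return findings

    if __name__ == "__main__":
        issues = blocking_findings(sys.argv[1] if len(sys.argv) > 1 else "grype-report.json")
        for issue in issues:
            print(f"BLOCKED: {issue}")
        sys.exit(1 if issues else 0)  # a non-zero exit code fails the pipeline step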

Real-world Examples

To help bring these concepts to life, we’ve outlined some real-world examples of how open source software and compliance intersect:

Open source project has unfixed vulnerabilities

This is far and away the most common issue that comes up during compliance audits. One of your application’s OSS dependencies has a known vulnerability that has been sitting in the backlog for months or even years!

There are several reasons why an open source software developer might leave a known vulnerability unresolved:

  • They prioritize a feature over fixing a vulnerability
  • The vulnerability is from a third-party dependency they don’t control and can’t fix
  • They don’t like fixing vulnerabilities and choose to ignore it
  • They reviewed the vulnerability and decided it’s not likely to be exploited, so it’s not worth their time
  • They’re planning a codebase refactor that will address the vulnerability in the future

These are all rational reasons for vulnerabilities to persist in a codebase. Remember, OSS projects are owned and maintained by 3rd-party developers who control the repository; they make no guarantees about its quality. They are not vendors.

You, on the other hand, are a vendor and must meet compliance requirements. The responsibility falls on you. An OSS vulnerability management program is how you meet your compliance requirements while enjoying the benefits of OSS.

Need to fill out a supplier questionnaire

Imagine you’re a cloud service provider or software vendor. Your sales team is trying to close a deal with a significant customer. As the contract nears signing, the customer’s legal team requests a security questionnaire. They’re in the business of protecting their organization from financial risk stemming from their supply chain, and your company is about to become part of that supply chain.

These forms are usually from lawyers, very formal, and not focused on technical attacks. They just want to know what you’re using. The quick answer? “Here’s our SBOM.” 

Compliance comes in the form of public standards like FedRAMP, SSDF, NIST, etc., and these less formal security questionnaires. Either way, being unable to provide a full accounting of the risks in your software supply chain can be a speed bump to your organization’s revenue growth and success.

Integrating SBOM scanning, generation, and management deeply into your DevSecOps pipeline is key to accelerating the sales process and your organization’s overall success.

Prove provenance

CISA’s SSDF Attestation form requires that enterprises selling software to the federal government can produce a historical record of their applications. Quoting directly: “The software producer [must] maintain provenance for internal code and third-party components incorporated into the software to the greatest extent feasible.”

If you want access to the revenue opportunities the U.S. federal government offers, SSDF attestation is the needle you have to thread. Meeting this requirement without hiring an army of compliance engineers to manually review your entire DevSecOps pipeline demands an automated OSS component observability and management system.

Often, we jump to cryptographic signatures, encryption keys, trust roots—this quickly becomes a mess. Really, just a hash of the files in a database (read: SBOM inventory) satisfies the requirement. Sometimes, simpler is better. 
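
As a tiny illustration of that point, the sketch below walks a source tree and records a SHA-256 hash for every file, producing a simple, verifiable point-in-time record. It is a simplification of what a full provenance system would store, and the paths are example values.

    # Illustrative sketch: record a SHA-256 hash of every file in a source tree
    # as a simple, verifiable point-in-time provenance record. Paths are examples.
    import hashlib
    import json
    import pathlib

    def hash_tree(root: str) -> dict[str, str]:
        manifest = {}
        for path in sorted(pathlib.Path(root).rglob("*")):
            if path.is_file():
                manifest[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
        return manifest

    if __name__ == "__main__":
        record = hash_tree("src")  # hypothetical source directory
        pathlib.Path("provenance-manifest.json").write_text(json.dumps(record, indent=2))
        print(f"recorded hashes for {len(record)} files")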

Discover the “easy button” to SSDF Attestation and OSS supply chain compliance in our previous blog post.

Takeaways

OSS Is Not a Vendor—But You Are! The best way to have your OSS cake and eat it too (without the indigestion) is to:

  1. Know Your Ingredients: Maintain an SBOM inventory of your OSS supply chain.
  2. Maintain a Complete Historical Record: Keep track of your application’s source code and build process.
  3. Scan for Known Vulnerabilities: Regularly check your OSS dependencies.
  4. Continuous Compliance through Automation: Generate compliance records automatically to scale your compliance process.

There are numerous reasons to aim for open source software compliance, especially for your software supply chain:

  • Balance Gains Against Risks: Leverage OSS benefits while managing associated risks.
  • Reduce Financial Risk: Protect your organization’s existing revenue.
  • Increase Revenue Opportunities: Access new markets that mandate specific compliance standards.
  • Avoid Becoming a Cautionary Tale: Stay ahead of potential security incidents.

Regardless of your motivation for wanting to use OSS and use it responsibly (i.e., securely and compliantly), Anchore is here to help. Reach out to our team to learn more about how to build and manage a secure and compliant OSS supply chain.

Learn the container security best practices, including open source software (OSS) security, to reduce the risk of software supply chain attacks.

US Navy achieves ATO in days with continuous compliance and OSS risk management

Implementing secure and compliant software solutions within the Department of Defense’s (DoD) software factory framework is no small feat. 

For Black Pearl, the premier DevSecOps platform for the U.S. Navy, and Sigma Defense, a leading DoD technology contractor, the challenge was not just about meeting stringent security requirements but also about empowering the warfighter.

We’ll cover how they streamlined compliance, managed open source software (OSS) risk, and reduced vulnerability overload—all while accelerating their Authority to Operate (ATO) process.

Challenge: Navigating Complex Security and Compliance Requirements

Black Pearl and Sigma Defense faced several critical hurdles in meeting the stringent security and compliance standards of the DoD Enterprise DevSecOps Reference Design:

  • Achieving RMF Security and Compliance: Black Pearl needed to secure its own platform and help its customers achieve ATO under the Risk Management Framework (RMF). This involved meeting stringent security controls like RA-5 (Vulnerability Management), SI-3 (Malware Protection), and IA-5 (Credential Management) for both the platform and the applications built on it.
  • Maintaining Continuous Compliance: With the RAISE 2.0 memo emphasizing continuous ATO compliance, manual processes were no longer sufficient. The teams needed to automate compliance tasks to avoid the time-consuming procedures traditionally associated with maintaining ATO status.
  • Managing Open-Source Software (OSS) Risks: Open-source components are integral to modern software development but come with inherent risks. Black Pearl had to manage OSS risks for both its platform and its customers’ applications, ensuring vulnerabilities didn’t compromise security or compliance.
  • Vulnerability Overload for Developers: Developers often face an overwhelming number of vulnerabilities, many of which may not pose significant risks. Prioritizing actionable items without draining resources or slowing down development was a significant challenge.

“By using Anchore and the Black Pearl platform, applications inherit 80% of the RMF’s security controls. You can avoid all of the boring stuff and just get down to what everyone does well, which is write code.”

Christopher Rennie, Product Lead/Solutions Architect

Solution: Automating Compliance and Security with Anchore

To address these challenges, Black Pearl and Sigma Defense implemented Anchore, which provided:

“Working alongside Anchore, we have customized the compliance artifacts that come from the Anchore API to look exactly how the AOs are expecting them to. This has created a good foundation for us to start building the POA&Ms that they’re expecting.”

Josiah Ritchie, DevSecOps Staff Engineer

  • Managing OSS Risks with Continuous Monitoring: Anchore’s integrated vulnerability scanner, policy enforcer, and reporting system provided continuous monitoring of open-source software components. This proactive approach ensured vulnerabilities were detected and addressed promptly, effectively mitigating security risks.
  • Automated Prioritization of Vulnerabilities: By integrating the Anchore Developer Bundle, Black Pearl enabled automatic prioritization of actionable vulnerabilities. Developers received immediate alerts on critical issues, reducing noise and allowing them to focus on what truly matters.

Results: Accelerated ATO and Enhanced Security

The implementation of Anchore transformed Black Pearl’s compliance process and security posture:

  • Platform ATO in 3-5 days: With Anchore’s integration, Black Pearl users accessed a fully operational DevSecOps platform within days, a significant reduction from the typical six months for DIY builds.

“The DoD has four different layers of authorizing officials in order to achieve ATO. You have to figure out how to make all of them happy. We want to innovate by automating the compliance process. Anchore helps us achieve this, so that we can build a full ATO package in an afternoon rather than taking a month or more.”

Josiah Ritchie, DevSecOps Staff Engineer

  • Significantly reduced time spent on compliance reporting: Anchore automated compliance checks and artifact generation, cutting down hours spent on manual reviews and ensuring consistency in reports submitted to authorizing officials.
  • Proactive OSS risk management: By shifting security and compliance to the left, developers identified and remediated open-source vulnerabilities early in the development lifecycle, mitigating risks and streamlining the compliance process.
  • Reduced vulnerability overload with prioritized vulnerability reporting: Anchore’s prioritization of vulnerabilities prevented developer overwhelm, allowing teams to focus on critical issues without hindering development speed.

Conclusion: Empowering the Warfighter Through Efficient Compliance and Security

Black Pearl and Sigma Defense’s partnership with Anchore demonstrates how automating security and compliance processes leads to significant efficiencies. This empowers Navy divisions to focus on developing software that supports the warfighter. 

Achieving ATO in days rather than months is a game-changer in an environment where every second counts, setting a new standard for efficiency through the combination of Black Pearl’s robust DevSecOps platform and Anchore’s comprehensive security solutions.

If you’re facing similar challenges in securing your software supply chain and accelerating compliance, it’s time to explore how Anchore can help your organization achieve its mission-critical objectives.

Download the full case study below👇

Mark Your Calendars: Anchore’s Must-Attend Events and Webinars in October

Are you ready for cooler temperatures and the changing of the leaves? Anchore is! We are excited to announce a series of events and webinars next month. From in-person conferences to insightful webinars, we have a lineup designed to keep you informed about the latest developments in software supply chain security, DevSecOps, and compliance. Join us to learn, connect, and explore how Anchore can help your organization navigate the evolving landscape of software security.

EVENT: TD Synnex Inspire

Date: October 9-12, 2024

Location: Booth T84 | Greenville Convention Center in Greenville, SC

Anchore is thrilled to be exhibiting at the 2024 TD SYNNEX Inspire. Visit us at Booth T84 in the Pavilion to discover how Anchore secures containers for AI and machine learning applications—with a special emphasis on high-performance computing (HPC).

Anchore has helped many Fortune 50 enterprises scale their container security and vulnerability management programs for their entire software supply chain including luminaries like NVIDIA. If you’d like to book dedicated time to speak with our team, drop by our booth or email us at [email protected].

WEBINAR: Introducing the Anchore Data Service

Date: October 15, 2024 at 10am PT

We will showcase the exciting new features introduced in Anchore Enterprise 5.8, 5.9, and 5.10, all designed to effortlessly secure your software supply chain and reduce risk for your organization. Highlights include:

  • Version 5.10: New Anchore Data Service which automatically updates your vulnerability feeds—even in air-gapped environments!
  • Version 5.9: Improved SBOM generation with native integration of Syft, etc.
  • Version 5.8: CISA Known Exploited Vulnerabilities (KEV) feed, etc.

We will be demo-ing all of the new features, sharing pro tips and providing takeaways on how to best utilize the new releases. Don’t miss out!

EVENT: All Things Open Conference

Date: October 27-29, 2024

Location: Booth #95 | Raleigh Convention Center in Raleigh, NC

Anchore is excited to participate in the 2024 All Things Open Conference—one of the largest open source software events in the U.S. Drop by and visit us at Booth #95 to learn how our open source tools, Syft and Grype, can help you start your journey to a more secure DevSecOps pipeline. 

Anchore employees will be on hand to help you understand:

WEBINAR: Accelerate FedRAMP Compliance on Amazon EKS with Anchore

Date: October 29, 2024 at 10am PT

Navigating FedRAMP compliance can be challenging, but Anchore and AWS are here to simplify the process. Join Luis Morales, Solutions Architect at AWS, and Brian Thomason, Manager of Partner and Solutions Engineering at Anchore, as they explain how Cisco achieved FedRAMP compliance in weeks rather than months.

In this live session, we’ll share actionable guidance and insights that address:

  • How to meet six FedRAMP vulnerability scanning requirements
  • Automating STIG and FIPS compliance for Amazon EC2 virtual machines
  • Securing containers end-to-end across CI/CD, Amazon EKS, and ECS

*We’ll also discuss the architecture of Anchore running in an AWS customer environment, demonstrating how to leverage AWS tools and services to enhance your cloud security posture.

WEBINAR: Expert Series: Solving Real-World Challenges in FedRAMP Compliance

Date: October 30, 2024 at 10am PT

Navigating the path to FedRAMP authorization can be daunting, especially with the evolving landscape of federal security requirements. In this Expert Series webinar, Neil Levine, SVP of Product at Anchore, and Mike Strohecker, Director of Cloud Operations at InfusionPoints, will share real-world stories of how we’ve helped our FedRAMP customers overcome key challenges—from achieving compliance faster to meeting the latest FedRAMP Rev 5 requirements.

We’ll dive into practical solutions, including:

  • Overcoming common FedRAMP compliance hurdles
  • Meeting Rev 5 security hardening standards like STIG and CIS (CM-6)
  • Effectively shifting security left in the CI/CD pipeline
  • Automating policy enforcement and continuous monitoring

*We’ll also explore the future impact of the July 2024 FedRAMP modernization memo, highlighting how increased automation with OSCAL is transforming the compliance process.

Wrap-Up

With a brimming schedule of events, October promises to be a jam-packed month for Anchore and our community. Whether you’re interested in our latest product updates, exploring strategies for FedRAMP compliance, or connecting at industry-leading events, there’s something for everyone. Mark your calendars and join us to stay ahead in the evolving world of software supply chain security.

Stay informed about upcoming events and developments at Anchore by bookmarking our Events Page and checking back regularly!

How to build an OSS risk management program

In previous blog posts we have covered the risks of open source software (OSS) and security best practices to manage that risk. From there we zoomed in on the benefits of tightly coupling two of those best practices: SBOMs and vulnerability scanning.

Now, we’ll dig deeper into the practical considerations of integrating this paired solution into a DevSecOps pipeline. By examining the design and implementation of SBOMs and vulnerability scanning, we’ll illuminate the path to creating a holistic open source software (OSS) risk management program.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

How do I integrate SBOM management and vulnerability scanning into my development process?

Ideally, you want to generate an SBOM at each stage of the software development process (see image below). By generating an SBOM and scanning for vulnerabilities at each stage, you unlock a number of novel use-cases and benefits that we covered previously.

DevSecOps lifecycle diagram with all stages to integrate SBOM generation and vulnerability scanning.

Let’s break down how to integrate SBOM generation and vulnerability scanning into each stage of the development pipeline:

Source (PLAN & CODE)

The easiest way to integrate SBOM generation and vulnerability scanning into the design and coding phases is to provide CLI (command-line interface) tools to your developers. Engineers are already used to these tools—and have a preference for them!

If you’re going the open source route, we recommend both Syft (SBOM generation) and Grype (vulnerability scanner) as easy options to get started. If you’re interested in an integrated enterprise tool then you’ll want to look at AnchoreCTL.

Developers can generate SBOMs and run vulnerability scans right from their workstations. By doing this at design or commit time, developers can shift security left and know immediately about the security implications of their design decisions.

If existing vulnerabilities are found, developers can immediately pivot to OSS dependencies that are clean or start a conversation with their security team to understand if their preferred framework will be a deal breaker. Either way, security risk is addressed early before any design decisions are made that will be difficult to roll back.
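
Here is a minimal sketch of that local feedback loop: a pre-commit style check that scans the working tree with the open source Grype CLI. It assumes Grype is installed on the workstation; the --fail-on flag is present in recent Grype releases, but verify it against your installed version.

    # Illustrative pre-commit style check: scan the working tree for known
    # vulnerabilities. Assumes the open source `grype` CLI is installed; verify
    # the --fail-on flag against your installed version with `grype --help`.
    import subprocess
    import sys

    def scan_working_tree() -> int:
        # --fail-on high makes grype exit non-zero when High or Critical findings exist
        result = subprocess.run(["grype", "dir:.", "--fail-on", "high"])
        return result.returncode

    if __name__ == "__main__":
        sys.exit(scan_working_tree())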

Build (BUILD + TEST)

The ideal location to integrate SBOM generation and vulnerability scanning during the build and test phases is directly in the organization’s continuous integration (CI) pipeline.

The same self-contained CLI tools used during the source stage are integrated as additional steps into CI scripts/runbooks. When a developer pushes a commit that triggers the build process, the new steps are executed and both an SBOM and vulnerability scan are created as outputs. 

Check out our docs site to see how AnchoreCTL (running in distributed mode) makes this integration a breeze.

If you’re having trouble convincing your developers to jump on the SBOM train, we recommend that developers think about all security scans as just another unit test that is part of their testing suite.

Running these steps in the CI pipeline delays feedback a little compared to performing the check on incremental commits as an application is being coded, but it is still light years better than waiting until a release is code complete.

If you are unable to enforce vulnerability scanning of OSS dependencies by your engineering team, a CI-based strategy can be a good happy medium. It is much easier to ensure every build runs exactly the same each time than it is to do the same for developers.

Release (aka Registry)

Another integration option is the container registry. This option will require you to either roll your own service that will regularly call the registry and scan new containers or use a service that does this for you.

See how Anchore Enterprise can automate this entire process by reviewing our integration docs.

Regardless of the path you choose, you will end up creating an IAM service account within your CI application which will give your SBOM and vulnerability scanning solution access to your registries.

The release stage tends to be fairly far along in the development process and is not an ideal location for these functions to run. Most of the benefits of a shift left security posture won’t be available anymore.

If this is an additional vulnerability scanning stage—rather than the sole stage—then this is a fantastic environment to integrate into. Software supply chain attacks that target registries are popular and can be prevented with a continuous scanning strategy.

Deploy

This is the traditional stage of the SDLC (software development lifecycle) to run vulnerability scans. SBOM generation can be added on as another step in an organization’s continuous deployment (CD) runbook.

Similar to the build stage, the best integration method is by calling CLI tools directly in the deploy script to generate the SBOM and then scan it for vulnerabilities.

Alternatively, if you utilize a container orchestrator like Kubernetes, you can also configure an admission controller to act as a deployment gate. The admission controller should be configured to make a call out to a standalone SBOM generator and vulnerability scanner.

If you’d like to understand how this is implemented with Anchore Enterprise, see our docs.
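
For orientation, below is a heavily simplified sketch of what such an admission gate can look like. It is not Anchore’s admission controller: it assumes a hypothetical check_image() helper that consults your scanner, assumes the webhook is registered for Pod CREATE operations, and omits the TLS setup and ValidatingWebhookConfiguration that a real webhook requires.

    # Heavily simplified sketch of a validating admission webhook that gates Pod
    # creation on scan results. Not production code: a real webhook must serve
    # HTTPS and be registered with the cluster via a ValidatingWebhookConfiguration.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def check_image(image: str) -> bool:
        """Hypothetical helper: ask your SBOM generator / vulnerability scanner
        whether this image passes policy (e.g., no Critical findings)."""
        return True  # placeholder

    class AdmissionHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers["Content-Length"]))
            review = json.loads(body)
            request = review["request"]
            # assumes the webhook is registered for Pod CREATE operations
            images = [c["image"] for c in request["object"]["spec"].get("containers", [])]
            denied = [img for img in images if not check_image(img)]
            response = {
                "apiVersion": "admission.k8s.io/v1",
                "kind": "AdmissionReview",
                "response": {
                    "uid": request["uid"],
                    "allowed": not denied,
                    "status": {"message": f"blocked images: {denied}"} if denied else {},
                },
            }
            payload = json.dumps(response).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8443), AdmissionHandler).serve_forever()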

While this is the traditional location for running vulnerability scans, it is not recommended that this is the only stage to scan for vulnerabilities. Feedback about security issues would be arriving very late in the development process and prior design decisions may prevent vulnerabilities from being easily remediated. Don’t do this unless you have no other option.

Production (OPERATE + MONITOR)

This is not a traditional stage to run vulnerability scans since the goal is to prevent vulnerabilities from getting to production. Regardless, this is still an important environment to scan. Production containers have a tendency to drift from their pristine build states (DevSecOps pipelines are leaky!).

Also, new vulnerabilities are discovered all of the time and being able to prioritize remediation efforts to the most vulnerable applications (i.e., runtime containers) considerably reduces the risk of exploitation.

The recommended way to run SBOM generation and vulnerability scans in production is to run an independent container with the SBOM generator and vulnerability scanner installed. Most container orchestrators have SDKs that will allow you to integrate an SBOM generator and vulnerability scanner with the preferred administration CLI (e.g., kubectl for k8s clusters).

Read how Anchore Enterprise integrates these components together into a single container for both Kubernetes and Amazon ECS.
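
As a bare-bones sketch of the pattern, the snippet below uses the Kubernetes Python client to enumerate every unique image running in a cluster and scans each one with the open source Grype CLI. It assumes kubeconfig (or in-cluster) credentials are available; a production implementation would also generate and store SBOMs and feed the results into your reporting.

    # Bare-bones sketch: enumerate the unique images running in a cluster and scan
    # each with Grype. Assumes the `kubernetes` Python package and the `grype` CLI
    # are installed and that kubeconfig (or in-cluster) credentials are available.
    import subprocess
    from kubernetes import client, config

    def running_images() -> set[str]:
        config.load_kube_config()  # use config.load_incluster_config() inside the cluster
        pods = client.CoreV1Api().list_pod_for_all_namespaces()
        return {c.image for pod in pods.items for c in pod.spec.containers}

    if __name__ == "__main__":
        for image in sorted(running_images()):
            print(f"--- scanning {image} ---")
            subprocess.run(["grype", image])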

How do I manage all of the SBOMs and vulnerability scans?

Tightly coupling SBOM generation and vulnerability scanning creates a number of benefits, but it also creates one problem: a firehose of data. This unintended side effect is named SBOM sprawl and it inevitably becomes a headache in and of itself.

The concise solution to this problem is to create a centralized SBOM repository. The brevity of this answer downplays the challenges that go along with building and managing a new data pipeline.

We’ll walk you through the high-level steps below but if you’re looking to understand the challenges and solutions of SBOM sprawl in more detail, we have a separate article that covers that.

Integrating SBOMs and vulnerability scanning for better OSS risk management

Assuming you’ve deployed an SBOM generator and vulnerability scanner into at least one of your development stages (as detailed above in “How do I integrate SBOM management and vulnerability scanning into my development process?”) and have an SBOM repository for storing your SBOMs and/or vulnerability scans, we can now walk through how to tie these systems together.

  1. Create a system to pull vulnerability feeds from reputable sources. If you’re looking for a way to get started here, read our post on how to get started.
  2. Regularly scan your catalog of SBOMs for vulnerabilities, storing the results alongside the SBOMs.
  3. Implement a query system to extract insights from your inventory of SBOMs (see the sketch after this list).
  4. Create a dashboard to visualize your software supply chain’s health.
  5. Build alerting automation to ping your team as newly discovered vulnerabilities are announced.
  6. Maintain all of these DIY security applications and tools. 
  7. Continue to incrementally improve on these tools as new threats emerge, technologies evolve and development processes change.
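
To give a feel for step 3, here is a toy sketch that searches a directory of stored SPDX JSON SBOMs for a given package name, answering the classic “which applications contain package X?” question. The inventory layout and the SPDX field names (“packages”, “name”, “versionInfo”) follow common conventions, but treat the exact schema handling as an assumption to adapt to your own SBOM format.

    # Toy sketch for step 3: search a directory of SPDX JSON SBOMs for a package.
    # Assumes SBOMs are stored under an inventory directory as *.spdx.json files
    # with a top-level "packages" list (SPDX JSON convention); adapt to your format.
    import json
    import pathlib
    import sys

    def find_package(inventory_root: str, package_name: str):
        for sbom_path in pathlib.Path(inventory_root).rglob("*.spdx.json"):
            doc = json.loads(sbom_path.read_text(encoding="utf-8"))
            for pkg in doc.get("packages", []):
                if pkg.get("name") == package_name:
                    yield sbom_path, pkg.get("versionInfo", "unknown")

    if __name__ == "__main__":
        name = sys.argv[1] if len(sys.argv) > 1 else "log4j-core"  # example query
        for path, version in find_package("sbom-inventory", name):
            print(f"{name} {version} found in {path}")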

If this feels like more work than you’re willing to take on, this is why security vendors exist. See the benefits of a managed SBOM-powered SCA below.

Prefer not to DIY? Evaluate Anchore Enterprise

Anchore Enterprise was designed from the ground up to provide a reliable software supply chain security platform that requires the least amount of work to integrate and maintain. Included in the product are:

  • Out-of-the-box integrations for popular CI/CD software (e.g., GitHub, Jenkins, GitLab, etc.)
  • End-to-end SBOM management
  • Enterprise-grade vulnerability scanning with a best-in-class false positive rate
  • Built-in SBOM drift detection
  • Remediation recommendations
  • Continuous visibility and monitoring of software supply chain health

Enterprises like NVIDIA, Cisco, Infoblox, etc. have chosen Anchore Enterprise as their “easy button” to achieve open source software security with the least amount of lift.

If you’re interested in learning more about how to roll out a complete OSS security solution without the blood, sweat and tears that come with the DIY route—reach out to our team to get a demo or try Anchore Enterprise yourself with a 15-day free trial.

Learn the container security best practices, including open source software (OSS) security, to reduce the risk of software supply chain attacks.

SBOMs and Vulnerability Management: OSS Security in the DevSecOps Era

The rise of open-source software (OSS) development and DevOps practices has unleashed a paradigm shift in OSS security. As traditional approaches to OSS security have proven inadequate in the face of rapid development cycles, the Software Bill of Materials (SBOM) has re-made OSS vulnerability management in the era of DevSecOps.

This blog post zooms in on two best practices from our introductory article on OSS security and the software supply chain:

  1. Maintain a Software Dependency Inventory
  2. Implement Vulnerability Scanning

These two best practices are set apart from the rest because they are a natural pair. We’ll cover how this novel approach:

  • Scales OSS vulnerability management under the pressure of rapid software delivery
  • Is set apart from legacy SCAs
  • Unlocks new use-cases in software supply chain security, OSS risk management, etc.
  • Benefits software engineering orgs
  • Benefits an organization’s overall security posture
  • Has measurably impacted modern enterprises such as NVIDIA, Infoblox, etc.

Whether you’re a seasoned DevSecOps professional or just beginning to tackle the challenges of securing your software supply chain, this blog post offers insights into how SBOMs and vulnerability management can transform your approach to OSS security.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

Why do I need SBOMs for OSS vulnerability management?

The TL;DR is that SBOMs enable DevSecOps teams to scale OSS vulnerability management programs in a modern, cloud native environment. Legacy security tools (i.e., SCA platforms) weren’t built to handle the pace of software delivery after a DevOps face lift.

Answering this question in full requires some historical context. Below is a speed-run of how we got to a place where SBOMs became the clear solution for vulnerability management after the rise of DevOps and OSS; the original longform is found on our blog.

If you’re not interested in a history lesson, skip to the section “What new use-cases are unlocked with an SBOM inventory?” to get straight to the impact of this evolution on software supply chain security (SSCS).

A short history on software composition analysis (SCA)

  • SCAs were originally designed to solve the problem of OSS licensing risk
  • Remember that Microsoft made a big fuss about the dangers of OSS at the turn of the millennium
  • Vulnerability scanning and management was tacked-on later
  • These legacy SCAs worked well enough until DevOps and OSS popularity hit critical mass

How the rise of OSS and DevOps principles broke legacy SCAs

  • DevOps and OSS movements hit traction in the 2010s
  • Software development and delivery transitioned from major updates with long development times to incremental updates with frequent releases
  • Modern engineering organizations are measured and optimized for delivery speed
  • Legacy SCAs were designed to scan a golden image once and take as much time as needed to do it; upwards of weeks in some cases
  • This wasn’t compatible with the DevOps promise and created friction between engineering and security
  • This meant not all software could be scanned and much was scanned after release, increasing the risk of a security breach

SBOMs as the solution

  • SBOMs were introduced as a standardized data structure that comprised a complete list of all software dependencies (OSS or otherwise)
  • These lightweight files created a reliable way to scan software for vulnerabilities without the slow performance of scanning the entire application—soup to nuts
  • Modern SCAs utilize SBOMs as the foundational layer to power vulnerability scanning in DevSecOps pipelines
  • SBOMs + SCAs deliver on the performance of DevOps without compromising security

What is the difference between SBOMs and legacy SCA scanning?

SBOMs offer two functional innovations over the legacy model: 

  1. Deeper visibility into an organization’s application inventory, and
  2. A record of changes to applications over time.

The deeper visibility comes from the fact that modern SCA scanners identify software dependencies recursively and build a complete software dependency tree (both direct and transitive). The record of changes comes from the fact that the OSS ecosystem has begun to standardize the contents of SBOMs to allow interoperability between OSS consumers and producers.

Legacy SCAs typically only scan for direct software dependencies and don’t recursively scan for dependencies of dependencies. Also, legacy SCAs don’t generate standardized scans that can then be used to track changes over time.
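
The difference is easiest to see with a toy example. The sketch below walks a made-up dependency graph breadth-first to compute the full transitive closure that an SBOM-based scan records, next to the direct dependencies a legacy scan would stop at; the package names are invented for illustration.

    # Toy illustration of direct vs. transitive dependency visibility.
    # The dependency graph below is invented for the example.
    from collections import deque

    DEPENDENCY_GRAPH = {
        "my-app": ["web-framework", "db-driver"],
        "web-framework": ["http-parser", "logging-lib"],
        "db-driver": ["logging-lib"],
        "http-parser": [],
        "logging-lib": ["string-utils"],
        "string-utils": [],
    }

    def transitive_dependencies(package: str) -> set[str]:
        seen, queue = set(), deque(DEPENDENCY_GRAPH.get(package, []))
        while queue:
            dep = queue.popleft()
            if dep not in seen:
                seen.add(dep)
                queue.extend(DEPENDENCY_GRAPH.get(dep, []))
        return seen

    print("direct:    ", DEPENDENCY_GRAPH["my-app"])                 # what a legacy scan sees
    print("transitive:", sorted(transitive_dependencies("my-app")))  # what an SBOM records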

What new use-cases are unlocked with an SBOM inventory?

The innovations brought by SBOMs (see above) have unlocked new use-cases that benefit both the software supply chain security niche and the greater DevSecOps world. See the list below:

OSS Dependency Drift Detection

Ideally, software dependencies are only injected in source code, but the reality is that CI/CD pipelines are leaky and both automated and one-off modifications are made at all stages of development. Plugging 100% of the leaks is a strategy with diminishing returns. Application drift detection is a scalable solution to this challenge.

SBOMs unlock drift detection by creating a point-in-time record of the composition of an application at each stage of the development process. This creates an auditable record of when software builds are modified, how they are changed, and who changed them.
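As a rough illustration of the idea, the sketch below compares two point-in-time SBOM snapshots and reports any dependencies that were added, removed, or changed between pipeline stages. The file names, the stages they represent, and the CycloneDX-style JSON shape are assumptions for the example:

```python
import json

# Minimal sketch of SBOM-based drift detection, assuming two CycloneDX-style
# JSON files with a top-level "components" array of {"name", "version"} entries.
# The file names and the pipeline stages they represent are hypothetical.

def load_components(path):
    with open(path) as f:
        sbom = json.load(f)
    return {c["name"]: c.get("version", "unknown") for c in sbom.get("components", [])}

built = load_components("sbom-build-stage.json")      # snapshot taken at build time
deployed = load_components("sbom-deploy-stage.json")  # snapshot taken at deploy time

added = {name: ver for name, ver in deployed.items() if name not in built}
removed = {name: ver for name, ver in built.items() if name not in deployed}
changed = {
    name: (built[name], deployed[name])
    for name in built.keys() & deployed.keys()
    if built[name] != deployed[name]
}

if added or removed or changed:
    print("Drift detected between build and deploy stages:")
    print("  added:  ", added)
    print("  removed:", removed)
    print("  changed:", changed)
else:
    print("No drift detected.")
```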

Software Supply Chain Attack Detection

Not all dependency injections are performed by benevolent 1st-party developers. Malicious threat actors who gain access to your organization’s DevSecOps pipeline or the pipeline of one of your OSS suppliers can inject malicious code into your applications.

An SBOM inventory creates the historical record that can identify anomalous behavior and catch these security breaches before organizational damage is done. This is a particularly important strategy for dealing with advanced persistent threats (APTs) that are experts at infiltration and stealth. For a real-world example, see our blog on the recent XZ supply chain attack.

OSS Licensing Risk Management

OSS licenses are at the beginning of a new transformation. The highly permissive licenses that came into fashion over the last 20 years are proving to be unsustainable. As prominent open source startups amend their licenses (e.g., HashiCorp, Elastic, Redis, etc.), organizations need to evaluate these changes and how they impact their OSS supply chain strategy.

Similar to the benefits during a security incident, an SBOM inventory acts as the source of truth for OSS licensing risk. As licenses are amended, an organization can quickly evaluate their risk by querying their inventory and identifying who their “critical” OSS suppliers are. 

Domain Expertise Risk Management

Another emerging use-case of software dependency inventories is the management of domain expertise of developers in your organization. A comprehensive inventory of software dependencies allows organizations to map critical software to individual employees’ domain knowledge. This creates a measurement of how well resourced your engineering organization is and who owns the knowledge that could impact business operations.

While losing an employee with a particular set of skills might not have the same urgency as a security incident, over time this gap can create instability. An SBOM inventory allows organizations to maintain a list of critical OSS suppliers and get ahead of any structural risks in their organization.

What are the benefits of a software dependency inventory?

SBOM inventories create a number of benefits for tangential domains, such as software supply chain security and risk management, but there is one big benefit for the core practices of software development.

Reduced engineering and QA time for debugging

A software dependency inventory stores metadata about applications and their OSS dependencies over time in a centralized repository. This datastore is a simple and efficient way to search and answer critical questions about the state of an organization’s software development pipeline.

Previously, engineering and QA teams had to manually search codebases and commits in order to determine the source of a rogue dependency being added to an application. A software dependency inventory combines a centralized repository of SBOMs with an intuitive search interface. Now, these time-consuming investigations can be accomplished in minutes rather than hours.

What are the benefits of scanning SBOMs for vulnerabilities?

There are a number of security benefits that can be achieved by integrating SBOMs and vulnerability scanning. We’ve highlighted the most important below:

Reduce risk by scaling vulnerability scanning for complete coverage

One of the side effects of transitioning to DevOps practices was that legacy SCAs couldn’t keep up with the software output of modern engineering orgs. This meant that not all applications were scanned before being deployed to production—a risky security practice!

Modern SCAs solved this problem by scanning SBOMs rather than applications or codebases. These lightweight SBOM scans are so efficient that they can keep up with the pace of DevOps output. Scanning 100% of applications reduces risk by preventing unscanned software from being deployed into vulnerable environments.

Prevent delays in software delivery

Overall organizational productivity can be increased by adopting modern, SBOM-powered SCAs that allow organizations to shift security left. When vulnerabilities are uncovered during application design, developers can make informed decisions about the OSS dependencies that they choose. 

This prevents the situation where engineering creates a new application or feature but right before it is deployed into production the security team scans the dependencies and finds a critical vulnerability. These last minute security scans can delay a release and create frustration across the organization. Scanning early and often prevents this productivity drain from occurring at the worst possible time.

Reduced financial risk during a security incident

The faster a security incident is resolved, the less risk an organization is exposed to. The primary metric that organizations track is mean-time-to-recovery (MTTR). SBOM inventories are utilized to significantly reduce this metric and improve incident outcomes.

An application inventory with full details on the software dependencies is a prerequisite for rapid security response in the event of an incident. A single SQL query to an SBOM inventory will return a list of all applications that have exploitable dependencies installed. Recent examples include Log4j and XZ. This prevents the need for manual scanning of codebases or production containers. This is the difference between a zero-day incident lasting a few hours versus weeks.
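A minimal sketch of that kind of query, using SQLite with an invented schema and sample data (a production SBOM inventory would expose its own schema or API, and real version comparison is more nuanced than the string comparison used here):

```python
import sqlite3

# Illustrative only: a hypothetical SBOM inventory table and the kind of single
# query described above. The table name, columns, and data are invented.

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sbom_components (application TEXT, component TEXT, version TEXT)"
)
conn.executemany(
    "INSERT INTO sbom_components VALUES (?, ?, ?)",
    [
        ("payments-api", "log4j-core", "2.14.1"),
        ("billing-ui", "openssl", "3.0.7"),
        ("reporting-svc", "log4j-core", "2.17.1"),
    ],
)

# Which applications ship a Log4j build older than the patched 2.15.0 release?
rows = conn.execute(
    "SELECT application, component, version "
    "FROM sbom_components "
    "WHERE component = 'log4j-core' AND version < '2.15.0'"
).fetchall()

for app, component, version in rows:
    print(f"{app} contains {component} {version}")
```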

Reduce hours spent on compliance with automation

Compliance certifications are powerful growth levers for organizations; they open up new market opportunities. The downside is that they create a lot of work for organizations. Manually confirming that each compliance control is met and providing evidence for the compliance officer to review discourages organizations from pursuing these certifications.

Providing automated vulnerability scans from DevSecOps pipelines that integrate SBOM inventories and vulnerability scanners significantly reduces the hours needed to generate and collect evidence for compliance audits.

How impactful are these benefits?

Many modern enterprises are adopting SBOM-powered SCAs and reaping the benefits outlined above. The quantifiable benefits are unique to each enterprise, but anecdotal evidence is still helpful when weighing how to prioritize a software supply chain security initiative, like the adoption of an SBOM-powered SCA, against other organizational priorities.

As a leading SBOM-powered SCA, Anchore has helped numerous organizations achieve the benefits of this evolution in the software industry. To get an estimate of what your organization can expect, see the case studies below:

NVIDIA

  • Reduced time to production by scanning SBOMs instead of full applications
  • Scaled vulnerability scanning and management program to 100% coverage across 1000s of containerized applications and 100,000s of containers

Read the full NVIDIA case study here >>

Infoblox

  • 75% reduction in engineering hours spent performing manual vulnerability detection
  • 55% reduction in hours allocated to retroactive remediation of vulnerabilities
  • 60% reduction in hours spent on manual compliance discovery and documentation

Read the full Infoblox case study here >>

DreamFactory

  • 75% reduction in engineering hours spent on vulnerability management and compliance
  • 70% faster production deployments with automated vulnerability scanning and management

Read the full DreamFactory case study here >>

Next Steps

Hopefully you now have a better understanding of the power of integrating an SBOM inventory into OSS vulnerability management. This “one-two” combo has unlocked novel use-cases, numerous benefits and measurable results for modern enterprises.

If you’re interested in learning more about how Anchore can help your organization achieve similar results, reach out to our team.

Learn the container security best practices to reduce the risk of software supply chain attacks.


DreamFactory Achieves 75% Time Savings with Anchore: A Case Study in Secure API Generation

As the popularity of APIs has swept the software industry, API security has become paramount, especially for organizations in highly regulated industries. DreamFactory, an API generation platform serving the defense industry and critical national infrastructure, required an air-gapped vulnerability scanning and management solution that didn’t slow down their productivity. Avoiding security breaches and compliance failures are non-negotiables for the team to maintain customer trust.

Challenge: Security Across the Gap

DreamFactory encountered several critical hurdles in meeting the needs of its high-profile clients, particularly those in the defense community and other highly regulated sectors:

  1. Secure deployments without cloud connectivity: Many clients, including the Department of Defense (DoD), required on-premises deployments with air-gapping, breaking the assumptions of modern cloud-based security strategies.
  2. Air-gapped vulnerability scans: Despite air-gapping, these organizations still demanded comprehensive vulnerability reporting to protect their sensitive data.
  3. Building high-trust partnerships: In industries where security breaches could have catastrophic consequences, establishing trust rapidly was crucial.

As Terence Bennett, CEO of DreamFactory, explains, “The data processed by these organizations have the highest national security implications. We needed a solution that could deliver bulletproof security without cloud connectivity.”

Solution: Anchore Enterprise On-Prem and Air-Gapped 

To address these challenges, DreamFactory implemented Anchore Enterprise, which provided:

  1. Support for on-prem and air-gapped deployments: Anchore Enterprise was designed to operate in air-gapped environments, aligning perfectly with DreamFactory’s needs.
  2. Comprehensive vulnerability scanning: DreamFactory integrated Anchore Enterprise into its build pipeline, running daily vulnerability scans on all deployment versions.
  3. Automated SBOM generation and management: Every build is now cataloged and stored (as an SBOM), providing immediate transparency into the software’s components.

“By catching vulnerabilities in our build pipeline, we can inform our customers and prevent any of the APIs created by a DreamFactory install from being leveraged to exploit our customer’s network,” Bennett notes. “Anchore has helped us achieve this massive value-add for our customers.”

Results: Developer Time Savings and Enhanced Trust

The implementation of Anchore Enterprise transformed DreamFactory’s security posture and business operations:

  • 75% reduction in time spent on vulnerability management and compliance requirements
  • 70% faster production deployments with integrated security checks
  • Rapid trust development through transparency

“We’re seeing a lot of traction with data warehousing use-cases,” says Bennett. “Being able to bring an SBOM to the conversation at the very beginning completely changes the conversation and allows CISOs to say, ‘let’s give this a go’.”

Conclusion: A Competitive Edge in High-Stakes Environments

By leveraging Anchore Enterprise, DreamFactory has positioned itself as a trusted partner for organizations requiring the highest levels of security and compliance in their API generation solutions. In an era where API security is more critical than ever, DreamFactory’s success story demonstrates that with the right tools and approach, it’s possible to achieve both ironclad security and operational efficiency.


Are you facing similar challenges hardening your software supply chain in order to meet the requirements of the DoD? By designing your DevSecOps pipeline to the DoD software factory standard, your organization can guarantee to meet these sky-high security and compliance requirements. Learn more about the DoD software factory standard by downloading our white paper below.

How is Open Source Software Security Managed in the Software Supply Chain?

Open source software has revolutionized the way developers build applications, offering a treasure trove of pre-built software “legos” that dramatically boost productivity and accelerate innovation. By leveraging the collective expertise of a global community, developers can create complex, feature-rich applications in a fraction of the time it would take to build everything from scratch. However, this incredible power comes with a significant caveat: the open source model introduces risk.

Organizations inherit both the good and bad parts of the OSS source code they don’t own. This double-edged sword of open source software necessitates a careful balance between harnessing its productivity benefits and managing the risks. A comprehensive OSS security program is the industry standard best practice for managing the risk of open source software within an organization’s software supply chain.

Learn the container security best practices to reduce the risk of software supply chain attacks.


What is open source software security?

Open source software security is the ecosystem of security tools (some of it being OSS!) that has developed to compensate for the inherent risk of OSS development. The security of the OSS environment was founded on the idea that “given enough eyeballs, all bugs are shallow”. The reality of OSS is that the majority of it is written and maintained by single contributors. The percentage of open source software that passes the qualifier of “enough eyeballs” is minuscule.

Does that mean open source software isn’t secure? Fortunately, no. The OSS community still produces secure software, but an entire ecosystem of tools ensures that this is verified—not just trusted implicitly.

What is the difference between closed source and open source software security?

The primary difference between open source software security and closed source software security is how much control you have over the source code. Open source code is public and can have many contributors that are not employees of your organization while proprietary source code is written exclusively by employees of your organization. The threat models required to manage risk for each of these software development methods are informed by these differences.

Due to the fact that open source software is publicly accessible and can be contributed to by a diverse, often anonymous community, its threat model must account for the possibility of malicious code contributions, unintentional vulnerabilities introduced by inexperienced developers, and potential exploitation of disclosed vulnerabilities before patches are applied. This model emphasizes continuous monitoring, rigorous code review processes, and active community engagement to mitigate risks. 

In contrast, proprietary software’s threat model centers around insider threats, such as disgruntled employees or lapses in secure coding practices, and focuses heavily on internal access controls, security audits, and maintaining strict development protocols. 

The need for external threat intelligence is also greater in OSS, as the public nature of the code makes it a target for attackers seeking to exploit weaknesses, while proprietary software relies on obscurity and controlled access as a first line of defense against potential breaches.

What are the risks of using open source software?

  1. Vulnerability exploitation of your application
    • The bargain that is struck when utilizing OSS is that your organization gives up significant control over the quality of the software. When you use OSS you inherit both good AND bad (read: insecure) code. Any known or latent vulnerabilities in the software become your problem.
  2. Access to source code increases the risk of vulnerabilities being discovered by threat actors
    • OSS development is unique in that both the defenders and the attackers have direct access to the source code. This gives the threat actors a leg up. They don’t have to break through perimeter defenses before they get access to source code that they can then analyze for vulnerabilities.
  3. Increased maintenance costs for DevSecOps function
    • Adopting OSS into an engineering organization is another function that requires management. Data has to be collected about the OSS that is embedded in your applications. That data has to be stored and made available in the event of a security incident. These maintenance costs are typically incurred by the DevOps and Security teams.
  4. OSS license legal exposure
    • OSS licenses are mostly permissive for use within commercial applications, but a non-trivial subset are not; worse, some are highly adversarial when used by a commercial enterprise. Organizations that don’t manage this risk increase the potential for legal action to be taken against them.

How serious are the risks associated with the use of open source software?

Current estimates are that 70-90% of modern applications are composed of open source software. This means that only 10-30% of the code in applications developed by organizations is written by developers employed by the organization. Without significant visibility into the security of OSS, organizations are handing over the keys to the castle to the community and hoping for the best.

Not only does OSS make up a significant share of modern application composition, but its growth is accelerating. This means the associated risks are growing just as fast. This is part of the reason we see an acceleration in the frequency of software supply chain attacks. Organizations that aren’t addressing these realities get caught on the back foot when zero-days like the recent XZ Utils backdoor are announced.

Why are SBOMs important to open source software security?

Software Bills of Materials (SBOMs) serve as the foundation of software supply chain security by providing a comprehensive “ingredient list” of all components within an application. This transparency is crucial in today’s software landscape, where modern applications are a complex web of mostly open source software dependencies that can harbor hidden vulnerabilities. 

SBOMs enable organizations to quickly identify and respond to security threats, as demonstrated during incidents like Log4Shell, where companies with centralized SBOM repositories were able to locate vulnerable components in hours rather than days. By offering a clear view of an application’s composition, SBOMs form the bedrock upon which other software supply chain security measures can be effectively built and validated.

The importance of SBOMs in open source software security cannot be overstated. Open source projects often involve numerous contributors and dependencies, making it challenging to maintain a clear picture of all components and their potential vulnerabilities. By implementing SBOMs, organizations can proactively manage risks associated with open source software, ensure regulatory compliance, and build trust with customers and partners. 

SBOMs enable quick responses to newly discovered vulnerabilities, facilitate automated vulnerability management, and support higher-level security abstractions like cryptographically signed images or source code. In essence, SBOMs provide the critical knowledge needed to navigate the complex world of open source dependencies by enabling us to channel our inner GI Joe—”knowing is half the battle” in software supply chain security.

What are the best practices for securing open source software?

Open source software has become an integral part of modern development practices, offering numerous benefits such as cost-effectiveness, flexibility, and community-driven innovation. However, with these advantages come unique security challenges. To mitigate risks and ensure the safety of your open source components, consider implementing the following best practices:

1. Model Security Scans as Unit Tests

Re-branding security checks as another type of unit test helps developers orient to DevSecOps principles. This approach helps developers re-imagine security as an integral part of their workflow rather than a separate, post-development concern. By modeling security checks as unit tests, you can:

  • Catch vulnerabilities earlier in the development process
  • Reduce the time between vulnerability detection and remediation
  • Empower developers to take ownership of security issues
  • Create a more seamless integration between development and security teams

Remember, the goal is to make security an integral part of the development process, not a bottleneck. By treating security checks as unit tests, you can achieve a balance between rapid development and robust security practices.
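For illustration, here is a minimal sketch of a security check written as an ordinary test function that any Python test runner could collect. The report path and JSON shape are assumptions loosely modeled on common scanner output, not a specific tool’s contract:

```python
import json

# Sketch of a security scan modeled as a unit test. The report path and the
# {"matches": [{"vulnerability": {"severity": ...}}]} shape are assumptions.

REPORT_PATH = "vulnerability-report.json"

def load_findings(path):
    with open(path) as f:
        return json.load(f).get("matches", [])

def test_no_critical_vulnerabilities():
    """Fails the build when the latest scan report contains critical findings."""
    critical = [
        m for m in load_findings(REPORT_PATH)
        if m.get("vulnerability", {}).get("severity", "").lower() == "critical"
    ]
    assert not critical, f"{len(critical)} critical vulnerabilities found"
```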

2. Review Code Quality

Assessing the quality of open source code is crucial for identifying potential vulnerabilities and ensuring overall software reliability. Consider the following steps:

  • Conduct thorough code reviews, either manually or using automated tools
  • Look for adherence to coding standards and best practices
  • Look for projects developed with secure-by-default principles
  • Evaluate the overall architecture and design patterns used

Remember, high-quality code is generally more secure and easier to maintain.

3. Assess Overall Project Health

A vibrant, active community and committed maintainers are crucial indicators of a well-maintained open source project. When evaluating a project’s health and security:

  • Examine community involvement:
    • Check the number of contributors and frequency of contributions
    • Review the project’s popularity metrics (e.g., GitHub stars, forks, watchers)
    • Assess the quality and frequency of discussions in forums or mailing lists
  • Evaluate maintainer(s) commitment:
    • Check the frequency of commits, releases, and security updates
    • Check for active engagement between maintainers and contributors
    • Review the time taken to address reported bugs and vulnerabilities
    • Look for a clear roadmap or future development plans

4. Maintain a Software Dependency Inventory

Keeping track of your open source dependencies is crucial for managing security risks. To create and maintain an effective inventory:

  • Use tools like Syft or Anchore SBOM to automatically scan your application source code for OSS dependencies
    • Include both direct and transitive dependencies in your scans
  • Generate a Software Bill of Materials (SBOM) from the dependency scan
    • Your dependency scanner should also do this for you
  • Store your SBOMs in a central location that can be searched and analyzed
  • Scan your entire DevSecOps pipeline regularly (ideally every build and deploy)

An up-to-date inventory allows for quicker responses to newly discovered vulnerabilities.
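A minimal sketch of that workflow, assuming the Syft CLI is installed and emits CycloneDX JSON to stdout; the image name and the local storage directory are placeholders for whatever central store your organization uses:

```python
import subprocess
from datetime import datetime, timezone
from pathlib import Path

# Sketch of automated SBOM generation for an inventory, assuming the Syft CLI
# is installed. A real pipeline would push SBOMs to a central, searchable
# store instead of the local filesystem.

IMAGE = "registry.example.com/payments-api:1.4.2"  # hypothetical image
INVENTORY_DIR = Path("sbom-inventory")
INVENTORY_DIR.mkdir(exist_ok=True)

result = subprocess.run(
    ["syft", IMAGE, "-o", "cyclonedx-json"],
    check=True,            # raise if Syft is missing or the scan fails
    capture_output=True,
    text=True,
)

timestamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
out_file = INVENTORY_DIR / f"payments-api-{timestamp}.cdx.json"
out_file.write_text(result.stdout)
print(f"Stored SBOM at {out_file}")
```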

5. Implement Vulnerability Scanning

Regular vulnerability scanning helps identify known security issues in your open source components. To effectively scan for vulnerabilities:

  • Use tools like Grype or Anchore Secure to automatically scan your SBOMs for vulnerabilities (see the sketch after this list)
  • Integrate vulnerability scanning directly into your CI/CD pipeline
    • At minimum, implement vulnerability scanning as containers are built
    • Ideally, scan container registries, container orchestrators, and even each time a new dependency is added during design
  • Set up alerts for newly discovered vulnerabilities in your dependencies
  • Establish a process for addressing identified vulnerabilities promptly
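For illustration, here is a minimal pipeline step that scans a stored SBOM with Grype and fails the build on high-severity findings. The SBOM path is a placeholder; Grype’s --fail-on flag is what drives the gate:

```python
import subprocess
import sys

# Sketch of an SBOM-based vulnerability scanning step in a pipeline, assuming
# the Grype CLI is installed. "--fail-on high" makes Grype exit non-zero when
# a high (or worse) severity finding is present; the SBOM path is a placeholder.

SBOM_PATH = "sbom-inventory/payments-api-latest.cdx.json"

result = subprocess.run(
    ["grype", f"sbom:{SBOM_PATH}", "--fail-on", "high"],
    capture_output=True,
    text=True,
)

print(result.stdout)
if result.returncode != 0:
    print("High or critical severity vulnerabilities found; failing this step.")
    sys.exit(1)
```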

6. Implement Version Control Best Practices

Version control practices are crucial for securing all DevSecOps pipelines that utilize open source software:

  • Implement branch protection rules to prevent unauthorized changes
  • Require code reviews and approvals before merging changes
  • Use signed commits to verify the authenticity of contributions

By implementing these best practices, you can significantly enhance the security of your software development pipeline and reduce the risk intrinsic to open source software. That way, you get to have your cake (the productivity boost of OSS) and eat it too (without the inherent risk).

How do I integrate open source software security into my development process?

DIY a comprehensive OSS security system

We’ve written about the steps to build an OSS security system from scratch in a previous blog post—below is the TL;DR:

  • Integrate dependency scanning, SBOM generation and vulnerability scanning into your DevSecOps pipeline
  • Implement a data pipeline to manage the influx of security metadata
  • Use automated policy-as-code “security tests” to provide rapid feedback to developers (see the sketch after this list)
  • Automate remediation recommendations to reduce cognitive load on developers
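To make the policy-as-code item above concrete, here is a minimal sketch of a declarative policy evaluated against scan results; the policy keys, findings structure, and sample data are all invented for illustration:

```python
# Sketch of a policy-as-code "security test": a declarative policy evaluated
# against scan results to give developers rapid pass/fail feedback.

POLICY = {
    "max_severity_allowed": "medium",             # anything above this fails
    "denied_licenses": {"AGPL-3.0", "SSPL-1.0"},  # licenses the org disallows
}

SEVERITY_ORDER = ["negligible", "low", "medium", "high", "critical"]

def evaluate(findings, licenses, policy):
    """Return a list of human-readable policy violations."""
    violations = []
    threshold = SEVERITY_ORDER.index(policy["max_severity_allowed"])
    for finding in findings:
        if SEVERITY_ORDER.index(finding["severity"]) > threshold:
            violations.append(
                f"{finding['id']} ({finding['severity']}) in {finding['package']}"
            )
    for denied in licenses & policy["denied_licenses"]:
        violations.append(f"denied license in use: {denied}")
    return violations

# Hypothetical scan output for one build
findings = [{"id": "CVE-2024-0001", "severity": "high", "package": "example-lib"}]
licenses = {"MIT", "Apache-2.0"}

violations = evaluate(findings, licenses, POLICY)
print("FAIL" if violations else "PASS", violations)
```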

Outsource OSS security to a turnkey vendor

Modern software composition analysis (SCA) tools, like Anchore Enterprise, are purpose-built to provide you with a comprehensive OSS security system out-of-the-box. You get all of the same features as the DIY approach without the hassle of building it yourself while still maintaining your current manual process.

  • Anchore SBOM: comprehensive dependency scanning, SBOM generation and management
  • Anchore Secure: vulnerability scanning and management
  • Anchore Enforce: automated security enforcement and compliance

Whether you want to scale an understaffed security team to increase its reach across your organization or free your team up to focus on different priorities, the buy versus build opportunity cost is a straightforward decision.

Next Steps

Hopefully, you now have a strong understanding of the risks associated with adopting open source software. If you’re looking to continue your exploration into the intricacies of software supply chain security, Anchore has a catalog of deep dive content on our website. If you’d prefer to get your hands dirty, we also offer a 15-day free trial of Anchore Enterprise.

Learn about the role that SBOMs play in the security of your organization in this white paper.


SSDF Attestation Template: Battle-tested Compliance Guidance

The CISA Secure Software Development Attestation form, commonly referred to as SSDF attestation, was released earlier this year and, as with any new compliance framework, knowing the exact wording and details to provide in order to meet the compliance requirements can be difficult.

We feel you here. Anchore is heavily invested in the public sector and had to generate our own SSDF attestation for our platform, Anchore Enterprise. Having gone through the process ourselves and working with a number of customers that requested our expertise on this matter, we developed a document that helps you put together an SSDF attestation that will make a compliance officer’s heart sing.

Our goal with this document is to make SSDF attestation as easy as possible and demonstrate how Anchore Enterprise is an “easy button” that you can utilize to satisfy the majority of evidence needed to achieve compliance. We have already submitted our own SSDF attestation and been approved, so we are confident these answers will help get you over the line. You can find our SSDF attestation guide on our docs site.

Explore SSDF attestation in-depth with this eBook. Learn the benefits of the framework and how you can benefit from it.

How do I fill out the SSDF attestation form?

This is the difficult part, isn’t it? The SSDF attestation form looks very simple at a glance, but it has a number of sections that expect evidence to be attached that details how your organization secures both your development environments and production systems. Like all compliance standards, it doesn’t specify what will or won’t meet compliance for your organization, hence the importance of the evidence.

At Anchore, we both experienced this ourselves and helped our customers navigate this ambiguity. Out of these experiences we created a document that breaks down each item and what evidence was able to achieve compliance without being rejected by a compliance officer.

We have published this document on our Docs site for all other organizations to use as a template when attempting to meet SSDF attestation compliance.

Structure of the SSDF attestation form

The SSDF attestation is divided into 3 sections:

Section I

The first section is very short: it is where you list the type of attestation you are submitting and information about the product whose compliance you are attesting to.

Section II

This section is also short; it collects contact information. CISA wants to know whom to contact at your organization and who is responsible for any questions or concerns that need to be addressed.

Section III

For all intents and purposes, Section III is the SSDF attestation form. This is where you will provide all of the technical supporting information to demonstrate that your organization complies with the requirements set out in the SSDF attestation form. 

The guide that Anchore has developed is focused specifically on how to fill out this section in a way that will meet the expectations of CISA compliance officers.

Where do I submit the SSDF attestation form?

If you are a US government vendor, you can submit your organization’s completed form on the Repository for Software Attestations and Artifacts. You will need an account, which can be requested on the login page. Account creation normally takes a few days, so give yourself at least a week. This can be done ahead of time while you’re gathering the information to fill out your form.

It’s also possible you will receive requests to pass along the form directly. Not every agency will use the repository, and you may even have non-government customers asking for the form. While the mandate comes from the government, the document contains a lot of good evidence that any customer may find valuable.

What tooling do I need to meet SSDF attestation compliance?

There are many ways to meet the technical requirements of SSDF attestation, but there is also a well-worn path. Anchore utilizes modern DevSecOps practices and assumes that the majority of our customers do as well. Below is a list of common DevSecOps tools that are typically used to help meet SSDF compliance.

Endpoint Protection

Description: Endpoint protection tools secure individual devices (endpoints) that connect to a network. They protect against malware, detect and prevent intrusions, and provide real-time monitoring and response capabilities.

SSDF Requirement: [3.1] — “Separating and protecting each environment involved in developing and building software”

Examples: Jamf, Elastic, SentinelOne, etc.

Source Control

Description: Source control systems manage changes to source code over time. They help track modifications, facilitate collaboration among developers, and maintain different versions of code.

SSDF Requirement: [3.1] — “Separating and protecting each environment involved in developing and building software”

Examples: GitHub, GitLab, etc.

CI/CD Build Pipeline

Description: Continuous Integration/Continuous Deployment (CI/CD) pipelines automate the process of building, testing, and deploying software. They help ensure consistent and reliable software delivery.

SSDF Requirement: [3.1] — “Separating and protecting each environment involved in developing and building software”

Examples: Jenkins, GitLab, GitHub Actions, etc.

Single Sign-on (SSO)

Description: SSO allows users to access multiple applications with one set of login credentials. It enhances security by centralizing authentication and reducing the number of attack vectors.

SSDF Requirement: [3.1] — “Enforcing multi-factor authentication and conditional access across the environments relevant to developing and building software in a manner that minimizes security risk;”

Examples: Okta, Google Workspace, etc.

Security Information and Event Management (SIEM)

Description: Monitoring tools provide real-time visibility into the performance and security of systems and applications. They can detect anomalies, track resource usage, and alert on potential issues.

SSDF Requirement: [3.1] — “Implementing defensive cybersecurity practices, including continuous monitoring of operations and alerts and, as necessary, responding to suspected and confirmed cyber incidents;”

Examples: Elasticsearch, Splunk, Panther, RunReveal, etc.

Audit Logging

Description: Audit logging captures a record of system activities, providing a trail of actions performed within the software development and build environments.

SSDF Requirement: [3.1] — “Regularly logging, monitoring, and auditing trust relationships used for authorization and access: i) to any software development and build environments; and ii) among components within each environment;”

Examples: Typically a built-in feature of CI/CD, SCM, SSO, etc.

Secrets Encryption

Description: Secrets encryption tools secure sensitive information such as passwords, API keys, and certificates used in the development and build processes.

SSDF Requirement: [3.1] — “Encrypting sensitive data, such as credentials, to the extent practicable and based on risk;”

Examples: Typically a built-in feature of CI/CD and SCM

Secrets Scanning

Description: Secrets scanning tools automatically detect and alert on exposed secrets in code repositories, preventing accidental leakage of sensitive information.

SSDF Requirement: [3.1] — “Encrypting sensitive data, such as credentials, to the extent practicable and based on risk;”

Examples: Anchore Secure or other container security platforms

OSS Component Inventory (+ Provenance)

Description: These tools maintain an inventory of open-source software components used in a project, including their origins and lineage (provenance).

SSDF Requirement: [3.3] — “The software producer maintains provenance for internal code and third-party components incorporated into the software to the greatest extent feasible;”

Examples: Anchore SBOM or other SBOM generation and management platform

Vulnerability Scanning

Description: Vulnerability scanning tools automatically detect security weaknesses in code, dependencies, and infrastructure.

SSDF Requirement: [3.4] — “The software producer employs automated tools or comparable processes that check for security vulnerabilities. In addition: a) The software producer operates these processes on an ongoing basis and prior to product, version, or update releases;”

Examples: Anchore Secure or other software composition analysis (SCA) platform

Vulnerability Management and Remediation Runbook

Description: This is a process and set of guidelines for addressing discovered vulnerabilities, including prioritization and remediation steps.

SSDF Requirement: [3.4] — “The software producer has a policy or process to address discovered security vulnerabilities prior to product release; and The software producer operates a vulnerability disclosure program and accepts, reviews, and addresses disclosed software vulnerabilities in a timely fashion and according to any timelines specified in the vulnerability disclosure program or applicable policies.”

Examples: This is not necessarily a tool but an organizational SLA on security operations. For reference, Anchore has included a screenshot from our vulnerability management guide.

Next Steps

If your organization currently provides software services to a federal agency or is looking to in the future, Anchore is here to help you in your journey. Reach out to our team and learn how you can integrate continuous and automated compliance directly into your CI/CD build pipeline with Anchore Enterprise.

Learn about the importance of both FedRAMP and SSDF compliance for selling to the federal government.


FedRAMP & FISMA Compliance: Key Differences Explained

This blog post has been archived and replaced by the supporting pillar page on the Anchore website.


Anchore at Billington CyberSecurity Summit: Automating Defense in the AI Era

Are you gearing up for the 15th Annual Billington CyberSecurity Summit? So are we! The Anchore team will be front and center in the exhibition hall throughout the event, ready to showcase how we’re revolutionizing cybersecurity in the age of AI.

This year’s summit promises to be a banger, highlighting the evolution in cybersecurity as the latest iteration of AI takes center stage. While large language models (LLMs) like ChatGPT have been making waves across industries, the cybersecurity realm is still charting its course in this new AI-driven landscape. But make no mistake – this is no time to rest on our laurels.

As blue teams explore innovative ways to harness LLMs, cybercriminals are working overtime to weaponize the same technology. If there’s one lesson we’ve learned from every software and AI hype cycle: automation is key. As adversaries incorporate novel automations into their tactics, defenders must not just keep pace—they need to get ahead.

At Anchore, we’re all-in with this strategy. The Anchore Enterprise platform is purpose-built to automate and scale cybersecurity across your entire software development lifecycle. By automating continuous vulnerability scanning and compliance in your DevSecOps pipeline, we’re equipping warfighters with the tools they need to outpace adversaries that never sleep.

Ready to see how Anchore can transform your cybersecurity posture in the AI era? Stop by our booth for a live demo. Don’t miss this opportunity to stay ahead of the curve—book a meeting (below) with our team and take the first step towards a more secure tomorrow.

Anchore at the Billington CyberSecurity Summit

Date: September 3–6, 2024

Location: The Ronald Reagan Building and International Trade Center in Washington, DC

Our team is looking forward to meeting you! Book a demo session in advance to ensure a preferred slot.

Anchore’s Showcase: DevSecOps and Automated Compliance

We will be demonstrating the Anchore Enterprise platform at the event. Our showcase will focus on:

  1. Software Composition Analysis (SCA) for Cloud-Native Environments: Learn how our tools can help you gain visibility into your software supply chain and manage risk effectively.
  2. Automated SBOM Generation and Management: Discover how Anchore simplifies the creation and maintenance of Software Bills of Materials (SBOMs), the foundational component in software supply chain security.
  3. Continuous Scanning for Vulnerabilities, Secrets, and Malware: See our advanced scanning capabilities in action, designed to protect your applications across the DevSecOps pipeline or DoD software factory.
  4. Automated Compliance Enforcement: Experience how Anchore can streamline compliance with key standards such as cATO, RAISE 2.0, NIST, CISA, and FedRAMP, saving time and reducing human error.

We invite all attendees to visit our booth to learn more about how Anchore’s DevSecOps and automated compliance solutions can enhance your organization’s security posture in the age of AI and cloud computing.

Event Highlights

Still on the fence about whether to attend? Here is a quick run-down to help you decide. This year’s summit, themed “Advancing Cybersecurity in the AI Age,” will feature more than 40 sessions and breakouts, covering critical topics such as:

  • The increasing impact of artificial intelligence on cybersecurity
  • Cloud security challenges and solutions
  • Proactive approaches to technical risk management
  • Emerging cyber risks and defense strategies
  • Data protection against breaches and insider threats
  • The intersection of cybersecurity and critical infrastructure

The event will showcase fireside chats with top government officials, including FBI Deputy Director Paul Abbate, Chairman of the Joint Chiefs of Staff General CQ Brown, Jr., and U.S. Cyber Command Commander General Timothy D. Haugh, among others.

Next Steps and Additional Resources

Join us at the Billington Cybersecurity Summit to network with industry leaders, gain valuable insights, and explore innovative technologies that are shaping the future of cybersecurity. We look forward to seeing you there!

If you are interested in the Anchore Enterprise platform and can’t wait till the show, here are some resources to help get you started:

Learn about best practices that are setting new standards for security in DoD software factories.

Anchore Awarded DoD ESI DevSecOps Phase II Agreement

The Department of Defense (DoD) Enterprise Software Initiative (ESI) has awarded Anchore inclusion in its DevSecOps program, which is part of the ESI’s DevSecOps Phase II enterprise agreements.

The DoD ESI’s main objective is to streamline the acquisition process for software and services across the DoD, in order to gain significant cost savings and improve efficiency. Admittance into the ESI program validates Anchore’s commitment to be a trusted partner to the DoD, delivering advanced container vulnerability scanning as well as SBOM management solutions that meet the most stringent compliance and security requirements.

Anchore’s inclusion in the DoD ESI DevSecOps Phase II agreement is a testament to our commitment to delivering cutting-edge software supply chain security solutions. This milestone enables us to more efficiently support the DoD’s critical missions by providing them with the tools they need to secure their software development pipelines. Our continued partnership with the DoD reinforces Anchore’s position as a trusted leader in SBOM-powered DevSecOps and container security.

—Tim Zeller, EVP Sales & Marketing

The agreements also included DevSecOps luminaries HashiCorp and Rancher Government, as well as CloudBees, Infoblox, GitLab, CrowdStrike, and F5 Networks; all are now part of the preferred vendor list for all DoD missions that require cybersecurity solutions generally, and software supply chain security specifically.

Anchore is steadily growing its presence on federal contracts and catalogs such as Iron Patriot & Minerva, GSA, 2GIT, NASA SEWP, ITES, and most recently JFAC (Joint Federated Assurance Center).

What does this mean?

Similar to the GSA Advantage marketplace, DoD missions can now procure Anchore through the fully negotiated and approved ESI Agreements on the Solutions for Enterprise-Wide Procurement (SEWP) Marketplace. 

Anchore’s History with DoD

This award continues Anchore’s deepening relationship with the DoD. Starting in 2020, the DoD has vetted and approved Anchore’s container vulnerability scanning tools. Anchore is named in both the DoD Container Image Creation and Deployment Guide and the DoD Container Hardening Process Guide as recommended solutions.

The same year, Anchore was selected by the US Air Force’s Platform One to become the software supply chain vendor to implement the best practices in the above guides for all software built on the platform. Read our case study on how Anchore partnered with Platform One to build the premier DevSecOps platform for the DoD.

The following year, Anchore won the Small Business Innovation Research (SBIR) Phase III contract with Platform One to integrate directly into the Iron Bank container image process. If your image has achieved Iron Bank certification it is because Anchore’s solution has given it a passing grade. Read more about this DevSecOps success story in our case study with the Iron Bank.

Due to the success of Platform One within the US Air Force, in 2022 Anchore partnered with the US Navy to secure the Black Pearl DevSecOps platform. Similar to Platform One, Black Pearl is the go-to standard for modern software development within the Department of the Navy (DON) software development.

As Anchore continued to expand its relationship with the DoD and federal agencies, its offerings became available for purchase through the online government marketplaces and contracts such as GSA Advantage and Second Generation IT Blanket Purchase Agreements (2GIT), NASA SEWP, Iron Patriot/Minerva, ITES and JFAC. The ESI’s DevSecOps Phase II award was built on the back of all of the previous success stories that came before it. 

Achieving ATO is now easier with the inclusion of Anchore into the DoD ESI. Read our white paper on DoD software factory best practices to reach cATO or RAISE 2.0 compliance in days versus months.

We advise on best practices that are setting new standards for security and efficiency in DoD software factories, such as hardening container images, automating policy enforcement, and continuously monitoring for vulnerabilities.

Anchore Enterprise 5.8 Adds KEV Enrichment Feed

Today we have released Anchore Enterprise 5.8, featuring the integration of the U.S. Cybersecurity and Infrastructure Security Agency’s (CISA) Known Exploited Vulnerabilities (KEV) catalog as a new vulnerability feed.

Previously, Anchore Enterprise matched software libraries and frameworks inside applications against vulnerability databases such as the National Vulnerability Database (NVD), the GitHub Advisory Database, or individual vendor feeds. With Anchore Enterprise 5.8, customers can augment their vulnerability feeds with the KEV catalog without having to leave the dashboard. In addition, teams can automatically flag exploitable vulnerabilities as software is being developed or gate build artifacts from being released into production.

Before we jump into what all of this means, let’s take a step back and get some context to KEV and its impact on DevSecOps pipelines.

What is CISA KEV?

The KEV (Known Exploited Vulnerabilities) catalog is a critical cybersecurity resource maintained by the U.S. Cybersecurity and Infrastructure Security Agency (CISA). It is a database of vulnerabilities that are known to be actively exploited in the wild. While addressing these vulnerabilities is mandatory for U.S. federal agencies under Binding Operational Directive 22-01, the KEV catalog serves as an essential public resource for improving cybersecurity for any organization.

The primary difference between CISA KEV and a standard vulnerability feed (e.g., the CVE program) is the qualifier “actively exploited”. Actively exploited vulnerabilities are being used by attackers to compromise systems in real-time, meaning now. They are real, and your organization may be standing in the line of fire, whereas CVE lists vulnerabilities that may or may not have any available exploits currently. Due to the imminent threat of actively exploited vulnerabilities, they are considered the highest risk outside of an active security incident.

The benefits of KEV enrichment

The KEV catalog offers significant benefits to organizations striving to improve their cybersecurity posture. One of its primary advantages is its high signal-to-noise ratio. By focusing exclusively on vulnerabilities that are actively being exploited in the wild, the KEV cuts through the noise of countless potential vulnerabilities, allowing developers and security teams to prioritize their efforts on the most critical and immediate threats. This focused approach ensures that limited resources are allocated to addressing the vulnerabilities that pose the greatest risk, significantly enhancing an organization’s security efficiency.

Moreover, the KEV can be leveraged as a powerful tool in an organization’s development and deployment processes. By using the KEV as a trigger for build pipeline gates, companies can prevent exploitable vulnerabilities from being promoted to production environments. This proactive measure adds an extra layer of security to the software development lifecycle, reducing the risk of deploying vulnerable code. 

Additionally, while adherence to the KEV is not yet a universal compliance requirement, it represents a security best practice that forward-thinking organizations are adopting. Given the trend of such practices evolving into compliance mandates, integrating the KEV into security protocols can be seen as a form of future-proofing, potentially easing the transition when such practices become compliance requirements.

How Anchore Enterprise delivers KEV enrichment

With Anchore Enterprise, CISA KEV is now a vulnerability feed similar to any other data feed that gets imported into the system. Anchore Enterprise can be configured to pull this directly from the source as part of the deployment feed service.

To make use of the new KEV data, we have added a rule option in the Anchore Policy Engine that allows a STOP or WARN to be configured when a vulnerability on the KEV list is detected. When any application build, registry store, or runtime deploy occurs, Anchore Enterprise evaluates the artifact’s SBOM against the security policy; if the SBOM has been annotated with a KEV entry, the policy engine can return a STOP value to inform the build pipeline to fail the step and return the KEV as the source of the violation.

To configure the KEV feed as a trigger in the policy engine, first select vulnerabilities as the gate then kev list as a trigger. Finally choose an action.

Anchore Enterprise dashboard policy engine rule set configuration showing vulnerabilities as the gate value and the CISA KEV catalog as the trigger value.

After you save the new rule, you will see the kev list rule as part of the entire policy.

Anchore Enterprise 5.8 policy engine dashboard showing all rules for the default policy including the CISA KEV catalog rule at the top (highlighted in the red square).

After scanning a container with the policy that has the kev list rule in it, you can view all dependencies that match the kev list vulnerability feed.

Anchore Enterprise 5.8 vulnerability scan report with policy enrichment and policy actions. All software dependencies are matched against the CISA KEV catalog of known exploitable vulnerabilities and the assigned action is reported in the dashboard.
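For readers who prefer to reason about the rule outside the dashboard, the sketch below is a purely conceptual model of what the gate/trigger/action rule evaluates; the field names, data shapes, and evaluation logic are assumptions for illustration and not Anchore Enterprise’s actual policy format:

```python
# Conceptual model only: not the actual Anchore Enterprise policy format.
# The rule pairs the "vulnerabilities" gate with the KEV list trigger and an
# action, as configured in the dashboard walkthrough above.

kev_rule = {
    "gate": "vulnerabilities",  # gate chosen in the policy editor
    "trigger": "kev list",      # trigger chosen in the policy editor
    "action": "STOP",           # fail the pipeline step on a KEV match
}

def evaluate_sbom(annotated_components, rule):
    """Return the rule's action if any SBOM component carries a KEV annotation."""
    has_kev_match = any(c.get("kev") for c in annotated_components)
    return rule["action"] if has_kev_match else "PASS"

# Hypothetical SBOM entries annotated by the vulnerability feed sync
components = [
    {"package": "example-lib", "cve": "CVE-2023-1234", "kev": True},
    {"package": "other-lib", "cve": "CVE-2022-9999", "kev": False},
]
print(evaluate_sbom(components, kev_rule))  # -> "STOP"
```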

Next Steps

To stay on top of our releases, sign up for our monthly newsletter or follow our LinkedIn account. If you are already an Anchore customer, please reach out to your account manager to upgrade to 5.8 and gain access to KEV support. We also offer a 15-day free trial to get hands-on with Anchore Enterprise, or you can reach out to us for a guided tour.

A Guide to FedRAMP in 2024: FAQs & Key Takeaways

This blog post has been archived and replaced by the supporting pillar page on the Anchore website.


DevSecOps Evolution: How DoD Software Factories Are Reshaping Federal Compliance

Anchore’s Vice President of Security, Josh Bressers, recently did an interview with Fed Gov Today about the role of automation in DevSecOps and how it is impacting the US federal government. We’ve condensed the highlights of the interview into a snackable blog post below.

Automation is the foundation of DevSecOps

Automation isn’t just a buzzword; it is the foundation of DevSecOps. It is what gives meaning to marketing taglines like “shift left”. The point of DevSecOps is to create automated workflows that provide feedback to software developers as they are writing the application. This unwinds the previous practice of artificially grouping all of the “compliance” or “security” tasks into large blocks at the end of the development process. The challenge with this pattern is that it delays feedback, and design decisions are made that become difficult to undo after development has completed. By inverting the narrative and automating feedback as design decisions are made, developers are able to prevent compliance or security issues before they become deeply embedded into the software.

DoD Software Factories are leading the way in DevSecOps adoption

The US Department of Defense (DoD) is at the forefront of implementing DevSecOps through their DoD software factory model. The US Navy’s Black Pearl and the Air Force’s Platform One are perfect examples of this program. These organizations are leveraging automation to streamline compliance work. Instead of relying on manual documentation ahead of Authority to Operate (ATO) reviews, automated workflows built directly into the software development pipeline provide direct feedback to developers. This approach has proven highly effective, as Bressers emphasizes in his interview:

It’s obvious why the DoD software factory model is catching on. It’s because they work. It’s not just talk, it’s actually working. There’s many organizations that have been talking about DevSecOps for a long time. There’s a difference between talking and doing. Software factories are doing and it’s amazing.

—Josh Bressers, VP of Security, Anchore

Benefits of compliance automation

By giving compliance the same treatment as security (i.e., automate all the things), tasks that once took weeks or even months, can now be completed in minutes or hours. This dramatic reduction in time-to-compliance not only accelerates development cycles but also allows teams to focus on collaboration and solution delivery rather than getting bogged down in procedural details. The result is a “shift left” approach that extends beyond security to compliance as well. When compliance is integrated early in the development process the benefits cascade down the entire development waterfall.

Compliance automation is shifting the policy checks left into the software development process. What this means is that once your application is finished, instead of the compliance taking weeks or months, we’re talking hours or minutes.

—Josh Bressers, VP of Security, Anchore

Areas for improvement

While automation is crucial, there are still several areas for improvement in DevSecOps environments. Key focus areas include ensuring developers fully understand the automated processes, improving communication between team members and agencies, and striking the right balance between automation and human oversight. Bressers emphasizes the importance of letting “people do people things” while leveraging computers for tasks they excel at. This approach fosters genuine collaboration and allows teams to focus on meaningful work rather than simply checking boxes to meet compliance requirements.

Standardizing communication workflows with integrated developer tools

Software development pipelines are primarily platforms to coordinate the work of distributed teams of developers. They act like old-fashioned switchboard operators that connect one member of the development team to the next as they hand off work in the development production line. Leveraging developer tooling like GitLab or GitHub standardizes communication workflows. These platforms provide mechanisms for different team members to interact across various stages of the development pipeline. Teams can easily file and track issues, automatically pass or fail tests (e.g., compliance tests), and maintain a searchable record of discussions. This approach facilitates better understanding between developers and those identifying issues, leading to more efficient problem-solving and collaboration.

The government getting ahead of the private sector: an unexpected narrative inversion

In a surprising turn of events, Bressers points out that government agencies are now leading the way in DevSecOps implementation by integrating automated compliance. Historically often seen as technologically behind, federal agencies, through the DoD software factory model, are setting new standards that are likely to influence the private sector. As these practices become more widespread, contractors and private companies working with the government will need to adapt to these new requirements. This shift is evident in recent initiatives like the SSDF attestation questionnaire and White House Executive Order (EO) 14028. These initiatives are setting new expectations for federal contractors, signaling a broader move towards making compliance a native pillar of DevSecOps.

This is one of the few instances in recent memory where the government is truly leading the way. Historically the government has often been the butt of jokes about being behind in technology but these DoD software factories are absolutely amazing. The next thing that we’re going to see is the compliance expectations that are being built into these DoD software factories will seep out into the private sector. The SSDF attestation and the White House Executive Order are only the beginning. Ironically my expectation is everyone is going to have to start paying attention to this, not just federal agencies.

—Josh Bressers, VP of Security, Anchore

Next Steps

If you’re interested in learning more about how to future-proof your software supply chain with compliance automation via the DoD software factory model, be sure to read our white paper.

If you’d like to hear the interview in full, be sure to watch it on Fed Gov Today’s YouTube channel.

High volume image scanning and vulnerability management at the Iron Bank (Platform One)

The Iron Bank provides Platform One and any US Department of Defense (DoD) agency with a hardened and centralized container image repository that supports the end-to-end lifecycle needed for secure software development. Anchore and the Iron Bank have been collaborating since 2020 to balance deployment velocity and policy compliance while maintaining rigorous security standards and adapting to new security threats.

The Challenge

The Iron Bank development team is responsible for the integrity and security of 1,800 base images that are provided to build and create software applications across the DoD. They face difficult tasks such as:

  • Providing hardened components for downstream applications across the DoD
  • Meeting rigorous security standards crucial for military systems
  • Improving deployment frequency while maintaining policy compliance
  • Reducing the burden of false positives on the development team

Camdon Cady, Chief Technology Officer at Platform One:

People want to be security minded, and they want to do the right thing. But what they really want is tooling that helps them to do that with all the necessary information in one place. That’s why we looked to Anchore for help.

The Solution

Anchore’s engineering team is deeply embedded with the Iron Bank infrastructure and development team to improve and maintain DevSecOps standards. Anchore Enterprise is the Iron Bank’s software supply chain security tool of choice; the specific capabilities it provides are covered in the results below.

The Results: Sustainable security at DevOps speed

The partnership between Iron Bank and Anchore has yielded impressive results:

  • Reduced False Positives: The introduction of an exclusion feed captured over 12,000 known false positives, significantly reducing the security assessment load.
  • Improved SBOM Accuracy: Custom capabilities like SBOM Hints and SBOM Corrections allow for more precise component identification and vulnerability mapping.
  • Standardized Compliance: A jointly developed custom policy enforces the DoD Container Hardening requirements consistently across all images.
  • Enhanced Scanning Capabilities: Additions like time-based allowlisting, content hints, and image scanning have expanded Iron Bank’s security coverage.
  • Streamlined Processes: The standardized scanning process adheres to the DoD’s Container Hardening Guide while delivering high-quality vulnerability and compliance findings.

Even though security is important for all organizations, the stakes are higher for the DoD. What we need is a repeatable development process. It’s imperative that we have a standardized way of building secure software across our military agencies.

Camdon Cady, Chief Technology Officer at Platform One

Download the full case study to learn more about how Anchore Enterprise can help your organization achieve a proactive security stance while maintaining development velocity.

How Infoblox Scaled Product Security and Compliance with Anchore Enterprise

In today’s fast-paced software development world, maintaining the highest levels of security and compliance is a daunting challenge. Our new case study highlights how Infoblox, a leader in Enterprise DDI (DNS, DHCP, IPAM), successfully scaled their product security and compliance efforts using Anchore Enterprise. Let’s dive into their journey and the impressive results they achieved.

The Challenge: Scaling security in high-velocity environments

Infoblox faced several critical challenges in their product security efforts:

  • Implementing “shift-left” security at scale for 150 applications developed by over 600 engineers with a security team of 15 (a 40:1 ratio!)
  • Managing vulnerabilities across thousands of containers produced monthly
  • Maintaining multiple compliance certifications (FedRAMP, SOC 2, StateRAMP, ISO 27001)
  • Integrating seamlessly into existing DevOps workflows

“When I first started, I was manually searching GitHub repos for references to vulnerable libraries,” recalls Sukhmani Sandhu, Product Security Engineer at Infoblox. This manual approach was unsustainable and prone to errors.

The Solution: Anchore Enterprise

To address these challenges, Infoblox turned to Anchore Enterprise to provide:

  • Container image scanning with low false positives
  • Comprehensive vulnerability and CVE management
  • Native integrations with Amazon EKS, Harbor, and Jenkins CI
  • A FedRAMP, SOC 2, StateRAMP, and ISO compliant platform

Chris Wallace, Product Security Engineering Manager at Infoblox, emphasizes the importance of accuracy: “We’re not trying to waste our team or other team’s time. We don’t want to report vulnerabilities that don’t exist. A low false-positive rate is paramount.”

Impressive Results

The implementation of Anchore Enterprise transformed Infoblox’s product security program:

  • 75% reduction in time for manual vulnerability detection tasks
  • 55% reduction in hours allocated to retroactive vulnerability remediation
  • 60% reduction in hours spent on compliance tasks
  • Empowered the product security team to adopt a proactive, “shift-left” security posture

These improvements allowed the Infoblox team to focus on higher-value initiatives like automating policy and remediation. Developers even began self-adopting scanning tools during development, catching vulnerabilities before they entered the build pipeline.

“We effectively had no tooling before Anchore. Everything was manual. We reduced the amount of time on vulnerability detection tasks by 75%,” says Chris Wallace.

Conclusion: Scaling security without compromise

Infoblox’s success story demonstrates that it’s possible to scale product security and compliance efforts without compromising on development speed or accuracy. By leveraging Anchore Enterprise, they transformed their security posture from reactive to proactive, significantly reduced manual efforts, and maintained critical compliance certifications.

Are you facing similar challenges in your organization? Download the full case study and take the first step towards a secure, compliant, and efficient development environment. Or learn more about how Anchore’s container security platform can help your organization.

Introduction to the DoD Software Factory

In the rapidly evolving landscape of national defense and cybersecurity, the concept of a Department of Defense (DoD) software factory has emerged as a cornerstone of innovation and security. These software factories represent an integration of the principles and practices found within the DevSecOps movement, tailored to meet the unique security requirements of the DoD and Defense Industrial Base (DIB). 

By fostering an environment that emphasizes continuous monitoring, automation, and cyber resilience, DoD Software Factories are at the forefront of the United States Government’s push towards modernizing its software and cybersecurity capabilities. This initiative not only aims to enhance the velocity of software development but also ensures that these advancements are achieved without compromising on security, even against the backdrop of an increasingly sophisticated threat landscape.

Building and running a DoD software factory is so central to the future of software development that “Establish a Software Factory” is one of the explicitly named plays from the DoD DevSecOps Playbook. On top of that, the compliance capstone of the authorization to operate (ATO), or its DevSecOps-infused cousin the continuous ATO (cATO), effectively requires a software factory in order to meet the requirements of the standard. In this blog post, we’ll break down the concept of a DoD software factory and give a high-level overview of the components that make one up.

What is a DoD software factory?

A Department of Defense (DoD) Software Factory is a software development pipeline that embodies the principles and tools of the larger DevSecOps movement with a few choice modifications that conform to the extremely high threat profile of the DoD and DIB. It is part of the larger software and cybersecurity modernization trend that has been a central focus for the United States Government in the last two decades.

The goal of a DoD Software Factory is to create an ecosystem that enables continuous delivery of secure software that meets the needs of end-users while ensuring cyber resilience (a DoD catchphrase that emphasizes the transition from point-in-time security compliance to continuous security compliance). In other words, the goal is to leverage automation of software security tasks in order to fulfill the promise of the DevSecOps movement to increase the velocity of software development.

What is an example of a DoD software factory?

Platform One is the canonical example of a DoD software factory. Run by the US Air Force, it offers a comprehensive portfolio of software development tools and services. It has come to prominence due to its hosted services like Repo One for source code hosting and collaborative development, Big Bang for an end-to-end DevSecOps CI/CD platform, and the Iron Bank for centralized container storage (i.e., a container registry). These services have led the way in demonstrating that the principles of DevSecOps can be integrated into mission critical systems while still preserving the highest levels of security to protect the most classified information.

If you’re interested in learning more about how Platform One has unlocked the productivity bonus of DevSecOps while still maintaining DoD levels of security, watch our webinar with Camdon Cady, Chief of Operations and Chief Technology Officer of Platform One.

Who does it apply to?

Federal Service Integrators (FSI)

Any organization that works with the DoD as a federal service integrator will want to be intimately familiar with DoD software factories as they will either have to build on top of existing software factories or, if the mission/program wants to have full control over their software factory, be able to build their own for the agency.

Department of Defense (DoD) Mission

Any Department of Defense (DoD) mission will need to be well-versed in DoD software factories, as all of their software and systems will be required to run on a software factory and to both reach and maintain a cATO.

What are the components of a DoD Software Factory?

A DoD software factory is composed of both high-level principles and specific technologies that meet those principles. Below is a list of some of the most significant principles of a DoD software factory:

Principles of DevSecOps embedded into a DoD software factory

  1. Break down organizational silos
    • This principle is borrowed directly from the DevSecOps movement; specifically, the DoD aims to integrate software development, test, deployment, security and operations into a single culture within the organization.
  2. Open source and reusable code
    • Composable software building blocks are another principle of DevSecOps; they increase productivity and reduce the security implementation errors that would arise if developers had to write security-sensitive packages they are not experts in.
  3. Immutable Infrastructure-as-Code (IaC)
    • This principle focuses on treating the infrastructure that software runs on as ephemeral and managed via configuration rather than manual systems operations. Enabled by cloud computing (i.e., hardware virtualization), this principle increases the security of the underlying infrastructure through templated, secure-by-design defaults and improves reliability because all infrastructure has to be designed to fail at any moment.
  4. Microservices architecture (via containers)
    • Microservices are a design pattern that creates smaller software services that can be built and scaled independently of each other. This principle allows for less complex software that only performs a limited set of behaviors.
  5. Shift Left
    • Shift left is the DevSecOps principle that re-frames when and how security testing is done in the software development lifecycle. The goal is to begin security testing while software is being written and tested rather than after the software is “complete”. This prevents insecure practices from cascading into significant issues right as software is ready to be deployed.
  6. Continuous improvement through key capabilities
    • The principle of continuous improvement is a primary characteristic of the DevSecOps ethos but the specific key capabilities that are defined in the DoD DevSecOps playbook are what make this unique to the DoD.
  7. Define a DevSecOps pipeline
    • A DevSecOps pipeline is the system that utilizes all of the preceding principles in order to create the continuously improving security outcomes that are the goal of the DoD software factory program.
  8. Cyber resilience
    • Cyber resiliency is the goal of a DoD software factory; it is defined as “the ability to anticipate, withstand, recover from, and adapt to adverse conditions, stresses, attacks, or compromises on the systems that include cyber resources.”

Common tools and systems of a DoD software factory

  1. Code Repository (e.g., Repo One)
    • Where software source code is stored, managed and collaborated on.
  2. CI/CD Build Pipeline (e.g., Big Bang)
    • The system that automates the creation of software build artifacts, tests the software and packages the software for deployment.
  3. Artifact Repository (e.g., Iron Bank)
    • The storage system for software components used in development and the finished software artifacts that are produced from the build process.
  4. Runtime Orchestrator and Platform (e.g., Big Bang)
    • The deployment system that hosts the software artifacts pulled from the registry and keeps the software running so that users can access it.

How do I meet the security requirements for a DoD Software Factory? (Best Practices)

Use a pre-existing software factory

The benefit of using a pre-existing DoD software factory is the same as using a public cloud provider; someone else manages the infrastructure and systems. What you lose is the ability to highly customize your infrastructure to your specific needs. What you gain is the simplicity of only having to write software and allow others with specialized skill sets to deal with the work of building and maintaining the software infrastructure. When you are a car manufacturer, you don’t also want to be a civil engineering firm that designs roads.

To view existing DoD software factories, visit the Software Factory Ecosystem Coalition website.

Map of all DoD software factories in the US.

Roll your own by following DoD best practices 

If you need the flexibility and customization of managing your own software factory, then we’d recommend following the DoD Enterprise DevSecOps Reference Design as the base framework. There are a few software supply chain security recommendations that we would make in order to ensure that things go smoothly during the authorization to operate (ATO) process:

  1. Continuous vulnerability scanning across all stages of CI/CD pipeline
    • Use a cloud-native vulnerability scanner that can be directly integrated into your CI/CD pipeline and called automatically during each phase of the SDLC (a minimal pipeline-gate sketch follows this list)
  2. Automated policy checks to enforce requirements and achieve ATO
    • Use a cloud-native policy engine in tandem with your vulnerability scanner in order to automate the reporting and blocking of software that is a security threat and a compliance risk
  3. Remediation feedback
    • Use a cloud-native policy engine that can provide automated remediation feedback to developers in order to maintain a high velocity of software development
  4. Compliance (Trust but Verify)
    • Use a reporting system that can be directly integrated with your CI/CD pipeline to create and collect the compliance artifacts that can prove compliance with DoD frameworks (e.g., CMMC and cATO)
  5. Air-gapped system
    • If your deployment environment is disconnected from the internet, choose tooling that can run fully air-gapped, including keeping vulnerability data up to date without external connectivity
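
To make the first two recommendations concrete, here is a minimal pipeline-gate sketch. It assumes the open source Syft and Grype CLIs are installed on the build runner and that the CI system passes in the image tag; exact flags and report fields can vary between tool versions, and a production software factory would rely on a full policy engine rather than a single severity threshold.

```python
import json
import subprocess
import sys


def generate_sbom(image: str, output_path: str) -> None:
    """Generate a CycloneDX SBOM for the image using Syft and save it as a build artifact."""
    with open(output_path, "w") as sbom_file:
        subprocess.run(["syft", image, "-o", "cyclonedx-json"], stdout=sbom_file, check=True)


def scan_image(image: str) -> dict:
    """Scan the image with Grype and return the parsed JSON report."""
    result = subprocess.run(
        ["grype", image, "-o", "json"], capture_output=True, text=True, check=True
    )
    return json.loads(result.stdout)


def has_blocked_findings(report: dict, blocked=frozenset({"Critical"})) -> bool:
    """Return True if any vulnerability match in the report has a blocked severity."""
    return any(
        match["vulnerability"]["severity"] in blocked for match in report.get("matches", [])
    )


if __name__ == "__main__":
    image_tag = sys.argv[1]  # e.g., registry.example.com/app:build-123 (hypothetical)
    generate_sbom(image_tag, "sbom.cdx.json")
    if has_blocked_findings(scan_image(image_tag)):
        print("Critical vulnerabilities found; failing this pipeline stage.")
        sys.exit(1)
    print("No critical vulnerabilities found; build can be promoted.")
```

The same gate is also where remediation feedback (recommendation 3) and compliance artifacts (recommendation 4) would be produced, since the SBOM and scan report it generates can be stored and reported on after every run.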

Is a software factory required in order to achieve cATO?

Technically, no. Effectively, yes. A cATO requires that your software is deployed on an Approved DoD Enterprise DevSecOps Reference Design, not a software factory specifically. If you build your own DevSecOps platform that meets the criteria for the reference design, then you have effectively rolled your own software factory.

How Anchore can help

The easiest and most effective method for achieving the security guarantees that a software factory is required to meet for its software supply chain is to use:

  1. An SBOM generation tool that integrates directly into your software development pipeline
  2. A container vulnerability scanner that integrates directly into your software development pipeline
  3. A policy engine that integrates directly into your software development pipeline
  4. A centralized database to store all of your software supply chain security logs
  5. A query engine that can continuously monitor your software supply chain and automate the creation of compliance artifacts

These are the primary components of both Anchore Enterprise and Anchore Federal, cloud-native, SBOM-powered software composition analysis (SCA) platforms that provide end-to-end software supply chain security to holistically protect your DevSecOps pipeline and automate compliance. This approach has been validated by the DoD; in fact, the DoD’s Container Hardening Process Guide specifically names Anchore Federal as a recommended container hardening solution.

Learn more about how Anchore fuses DevSecOps and DoD software factories.

Conclusion and Next Steps

DoD software factories can come off as intimidating at first, but hopefully we have broken them down into a more digestible form. At their core they reflect the best of the DevSecOps movement, with specific adaptations for the extreme threat environment that the DoD has to operate in, as well as the intersecting trend of modernizing federal security compliance standards.

If you’re looking to dive deeper into all things DoD software factory, we have a white paper that lays out the 6 best practices for container images in highly secure environments. Download the white paper below.

Modernizing FedRAMP: GSA’s Roadmap to Streamline Authorization

If you’ve ever thought that the FedRAMP (Federal Risk and Authorization Management Program) authorization process is challenging and laborious, things may be getting better. The General Services Administration (GSA) has publicly committed to improving the authorization process by publishing a public roadmap to modernize FedRAMP.

The purpose of FedRAMP is to act as a central intermediary between federal agencies and cloud service providers (CSP) in order to make it easier for agencies to purchase software services and for CSPs to sell software services to agencies. By being the middleman, FedRAMP creates a single marketplace that reduces the amount of time it takes for an agency to select and purchase a product. From the CSP perspective, FedRAMP becomes a single standard that they can target for compliance and after achieving authorization they get access to 200+ agencies that they can sell to—a win-win.

Unfortunately, that promised land wasn’t the typical experience for either side of the exchange. Since FedRAMP’s inception in 2011, the demand for cloud services has increased significantly. Cloud was still in its infancy at the birth of FedRAMP and the majority of federal agencies still procured software with perpetual licenses rather than as a cloud service (e.g., SaaS). In the following 10+ years that have passed, that preference has inverted and now the predominant delivery model is infrastructure/platform/software-as-a-service.

This has led to an environment where new cloud services are popping up every year but federal agencies don’t have access to them via the streamlined FedRAMP marketplace. On the other side of the coin, CSPs want access to the market of federal agencies that can only procure software via FedRAMP, but the process of becoming FedRAMP authorized is so complex and laborious that it erodes the value of gaining access to this market.

Luckily, the GSA isn’t resting on its laurels. Due to feedback from all stakeholders, they are prioritizing a revamp of the FedRAMP authorization process to take into account the shifting preferences in the market. To help you get a sense of what is happening, how quickly you can expect changes, and the benefits of the initiative, we have compiled a comprehensive FAQ.

Frequently Asked Questions (FAQ)

How soon will the benefits of FedRAMP modernization be realized?

Optimistically, changes will roll out over the next 18 months and be completed by the end of 2025. See the full rollout schedule on the public roadmap.

Who does this impact?

  • Federal agencies
  • Cloud service providers (CSPs)
  • Third-party assessment organizations (3PAOs)

What are the benefits of the FedRAMP modernization initiative?

TL;DR—For agencies

  • Increased vendor options within the FedRAMP marketplace
  • Reduced wait times for CSPs in the authorization process

TL;DR—For CSPs

  • Reduced friction during the authorization process
  • More clarity on how to meet security requirements
  • Less time and cost spent on the authorization process

TL;DR—For 3PAOs

  • Reduced friction between 3PAOs and CSPs during the authorization process
  • Increased clarity on how to evaluate CSPs

What prompted the GSA to improve FedRAMP now?

GSA is modernizing FedRAMP because of feedback from stakeholders. Both federal agencies and CSPs levied complaints about the current FedRAMP process. Agencies wanted more CSPs in the FedRAMP marketplace that they could then easily procure. CSPs wanted a more streamlined process so that they could get into the FedRAMP marketplace faster. The point of friction was the FedRAMP authorization process that hasn’t evolved at the same pace as the transition from the on-premise, perpetual license delivery model to the rapid, cloud services model.

How will GSA deliver on its promises to modernize FedRAMP?

The full list of initiatives can be found in their public product roadmap document but the highlights are:

  • Taking a customer-centric approach that reduces friction in the authorization process based on customer interviews
  • Publishing clear guidance on how to meet core security requirements
  • Streamlining authorization process to reduce bottlenecks based on best practices from agencies that have developed a strong authorization process
  • Moving away from lengthy documents and towards a data-first foundation to enable automation of the authorization process for CSPs and 3PAOs

Wrap-Up

The GSA has made a commitment to being transparent about the improvements to the modernization process. Anchore, as well as the rest of the public sector stakeholders, will be watching and holding the GSA accountable. Follow this blog or the Anchore LinkedIn page to stay updated on progress. If the 18-month timeline is longer than you’re willing to wait, Anchore is already an expert in supporting organizations that are seeking FedRAMP authorization. Anchore Enterprise is a modern, cloud-native software composition analysis (SCA) platform that both meets FedRAMP compliance standards and helps evaluate whether your software supply chain is FedRAMP compliant. If you’re interested in learning more, download “FedRAMP Requirements Checklist for Container Vulnerability Scanning” or learn more about how Anchore Enterprise has helped organizations like Cisco achieve FedRAMP compliance in weeks versus months.

Reduce risk in your software supply chain: 5 tips for container security

Rising threats to the software supply chain and increasing use of containers are causing organizations to focus on container security. Containers bring many unique security challenges due to their layered dependencies and the fact that many container images come from public repositories.

Our new white paper, Reduce Risk for Software Supply Chain Attacks: Best Practices for Container Security, digs into 5 tips for securing containers. It also describes how Anchore Enterprise simplifies implementation of these critical best practices, so you don’t have to.

5 best practices to instantly strengthen container security

  1. Use SBOMs to build a transparent foundation

SBOMs—Software Bill of Materials—create a trackable inventory of the components you use, which is a precursor for identifying security risks, meeting regulatory requirements and assessing license compliance. Get recommendations on the best way to generate, store, search and share SBOMs for better transparency.  

  2. Identify vulnerabilities early with continuous scanning

Security issues can arise at any point in the software supply chain. Learn why shifting left is necessary, but not sufficient, for container security. Understand why SBOMs are critical when responding to zero-day vulnerabilities (a minimal SBOM-search sketch follows this list).

  3. Automate policy enforcement and security gates

Find out how to use automated policies to identify which vulnerabilities should be fixed and enforce regulatory requirements. Learn how a customizable policy engine and out-of-the-box policy packs streamline your compliance efforts. 

  4. Reduce toil in the developer experience

Integrating with the tools developers use, minimizing false positives, and providing a path to faster remediation will keep developers happy and your software development moving efficiently. See how Anchore Enterprise makes it easy to provide a good developer experience.

  5. Protect your software supply chain with security controls

To protect your software supply chain, you must ensure that the code you bring in from third-party sources is trusted and vetted. Implement vetting processes for open-source code that you use.
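
To illustrate why searchable SBOMs matter for zero-day response, here is a small, hypothetical sketch that walks a directory of stored CycloneDX SBOM files and reports which images contain a newly disclosed vulnerable component. The directory layout, package name, and affected versions are assumptions made for the example, not details from the white paper.

```python
import json
from pathlib import Path

SBOM_DIR = Path("sbom-archive")  # hypothetical: one CycloneDX JSON file per image
AFFECTED_PACKAGE = "log4j-core"  # component named in a newly disclosed advisory
AFFECTED_VERSIONS = {"2.14.1", "2.15.0"}  # example affected versions


def images_with_affected_component() -> list[tuple[str, str]]:
    """Return (image, version) pairs for every stored SBOM that lists the affected package."""
    hits = []
    for sbom_path in SBOM_DIR.glob("*.json"):
        sbom = json.loads(sbom_path.read_text())
        for component in sbom.get("components", []):
            if (
                component.get("name") == AFFECTED_PACKAGE
                and component.get("version") in AFFECTED_VERSIONS
            ):
                hits.append((sbom_path.stem, component["version"]))
    return hits


if __name__ == "__main__":
    for image, version in images_with_affected_component():
        print(f"{image} ships {AFFECTED_PACKAGE} {version}; prioritize remediation")
```

With SBOMs already generated and stored for every build, answering "are we affected?" becomes a query over existing data rather than a scramble to re-scan everything.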

Balancing the Scale: Software Supply Chain Security and APTs

Note: This is a multi-part series primer on the intersection of advanced persistent threats (APTs) and software supply chain security (SSCS). This blog post is the third in the series. If you’d like to start from the beginning, you can find the links to the earlier posts below.
• Part 1 | With Great Power Comes Great Responsibility: APTs & Software Supply Chain Security
• Part 2 | David and Goliath: the Intersection of APTs and Software Supply Chain Security
• Part 3 (This blog post) | Balancing the Scale: Software Supply Chain Security and APTs

Last week we dug into the details of why an organization’s software supply chain is a ripe target for well-resourced groups like APTs and the potential avenues that companies have to combat this threat. This week we’re going to highlight the Anchore Enterprise platform and how it provides a turnkey solution for combating threats to any software supply chain.

How Anchore Can Help

Anchore was founded on the belief that a security platform that delivers deep, granular insights into an organization’s software supply chain, covers the entire breadth of the SDLC, and integrates automated feedback will create a holistic security posture that detects advanced threats and allows for human intervention to remediate security incidents. Anchore is trusted by Fortune 100 companies and the most exacting federal agencies across the globe because it has delivered on this promise.

The rest of the blog post will detail how Anchore Enterprise accomplishes this.

Depth: Automating Software Supply Chain Threat Detection

Having deeper visibility into an organization’s software supply chain is crucial for security purposes because it enables the identification and tracking of every component in the software’s construction. This comprehensive understanding helps in pinpointing vulnerabilities, understanding dependencies, and identifying potential security risks. It allows for more effective management of these risks by enabling targeted security measures and quicker response to potential threats. Essentially, deeper visibility equips an organization to better protect itself against complex cyber threats, including those that exploit obscure or overlooked aspects of the software supply chain.

Anchore Enterprise accomplishes this by generating a comprehensive software bill of materials (SBOM) for every piece of software (even down to the component/library/framework level). It then compares this detailed ingredients list against vulnerability and active exploit databases to identify exactly where in the software supply chain there are security risks. These surgically precise insights can then be fed back to the original software developers, rolled up into reports for the security team to better inform risk management, or sent directly into an incident management workflow if the vulnerability is evaluated as severe enough to warrant an “all-hands on deck” response.
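
As a simplified illustration of the matching step described above (not Anchore Enterprise’s actual implementation), the sketch below checks the components listed in a CycloneDX SBOM against a small, hypothetical feed of known-vulnerable package versions. A real platform would match against continuously updated vulnerability and exploit databases rather than a hard-coded dictionary.

```python
import json
from pathlib import Path

# Hypothetical vulnerability feed: package name -> {affected version: advisory ID}
VULN_FEED = {
    "log4j-core": {"2.14.1": "CVE-2021-44228"},
    "openssl": {"3.0.6": "CVE-2022-3602"},
}


def match_sbom_against_feed(sbom_path: str) -> list[dict]:
    """Return a finding for every SBOM component whose version appears in the feed."""
    sbom = json.loads(Path(sbom_path).read_text())
    findings = []
    for component in sbom.get("components", []):
        name, version = component.get("name"), component.get("version")
        advisory = VULN_FEED.get(name, {}).get(version)
        if advisory:
            findings.append({"package": name, "version": version, "advisory": advisory})
    return findings


if __name__ == "__main__":
    for finding in match_sbom_against_feed("sbom.cdx.json"):
        print(f"{finding['package']} {finding['version']} is affected by {finding['advisory']}")
```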

Developers shouldn’t have to worry about manually identifying threats and risks inside the software supply chain. Having deep insights into your software supply chain and being able to automate detection and response is vital to creating a resilient and scalable defense against the risk of APTs.

Breadth: Continuous Monitoring in Every Step of Your Software Supply Chain

The breadth of instrumentation in the Software Development Lifecycle (SDLC) is crucial for securing the software supply chain because it ensures comprehensive security coverage across all stages of software development. This broad instrumentation facilitates early detection and mitigation of vulnerabilities, ensures consistent application of security policies, and allows for a more agile response to emerging threats. It provides a holistic view of the software’s security posture, enabling better risk management and enhancing the overall resilience of the software against cyber threats.

Powered by a 100% feature-complete platform API, Anchore Enterprise integrates into your existing DevOps pipeline.

Anchore has been supporting the DoD in this effort since 2019, acting as what is commonly referred to as “overwatch” for the DoD’s software supply chain. Anchore Enterprise continuously monitors how risk is evolving by ingesting tens of thousands of runtime containers and hundreds of source code repositories, and by alerting on malware-laced images submitted to the registry. It monitors every stage of the DevOps pipeline, from source to build to registry to deploy, to gain a holistic view of when and where threats enter the software development lifecycle.

Feedback: Alerting on Breaches or Critical Events in Your Software Supply Chain

Integrating feedback from your software supply chain and SDLC into your overall security program is important because it allows for real-time insights and continuous improvement in security practices. This integration ensures that lessons learned and vulnerabilities identified at any stage of the development or deployment process are quickly communicated and addressed. It enhances the ability to preemptively manage risks and adapt to new threats, thereby strengthening the overall security posture of the organization.

How would you know if something is wrong in a system? Create high-quality feedback loops, of course. If there is a fire in your house, you typically have a fire alarm. That is a great source of feedback. It’s loud and creates urgency. And when you investigate to confirm the fire is legitimate and not a false alarm, you can see the fire and feel the heat.

Software supply chain breaches are more like carbon monoxide leaks: silent, often undetected, and potentially lethal. If you don’t have anything in place to specifically alert on that kind of threat, the consequences can be severe.

Anchore Enterprise was designed specifically as both a set of sensors that can be deployed deeply and broadly into your software supply chain and a system of feedback that uses those sensors to detect and alert on potential threats that are silently leaking carbon monoxide into your environment.

Anchore Enterprise’s feedback mechanisms come in three flavors: automatic, recommendations, and informational. Anchore Enterprise utilizes a policy engine to enable automatic action based on the feedback provided by the software supply chain sensors. If you want to make sure that no software is ever deployed into production (or any environment) with an exploitable version of Log4j, the Anchore policy engine can review the security metadata created by the sensors for the existence of this software component and stop a deployment in progress before it ever becomes accessible to attackers.

Anchore Enterprise can also be configured to make recommendations and provide opinionated actions based on security signals. If a vulnerability is discovered in a software component but it isn’t considered urgent, Anchore Enterprise can instead provide a recommendation to the software developer to fix the vulnerability while still allowing them to continue to test and deploy their software. This lets developers become aware of security issues very early in the SDLC while also giving them the flexibility to fix the vulnerability based on their own prioritization.

Finally, Anchore Enterprise offers informational feedback that alerts developers, the security team or even the executive team to potential security risks but doesn’t offer a specific solution. These types of alerts can be integrated into any development, support or incident management systems the organization utilizes. Often these alerts are for high risk vulnerabilities that require deeper organizational analysis to determine the best course of action in order to remediate.
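
To make the three feedback flavors concrete, here is a minimal, generic sketch of how findings might be routed into an automatic gate, a developer recommendation, or an informational alert. It illustrates the concept only and does not reflect Anchore Enterprise’s actual policy language or API; the severity thresholds are assumptions chosen for the example.

```python
from dataclasses import dataclass


@dataclass
class Finding:
    package: str
    severity: str  # e.g., "Critical", "High", "Medium", "Low"
    exploit_known: bool


def route_finding(finding: Finding) -> str:
    """Decide which feedback flavor a finding triggers."""
    if finding.severity == "Critical" and finding.exploit_known:
        # Automatic: block the deployment before it reaches any environment.
        return "automatic-block"
    if finding.severity in {"Critical", "High"}:
        # Recommendation: notify the developer but let the pipeline continue.
        return "recommend-fix"
    # Informational: record the finding for later organizational analysis.
    return "informational-alert"


if __name__ == "__main__":
    examples = [
        Finding("log4j-core", "Critical", exploit_known=True),
        Finding("openssl", "High", exploit_known=False),
        Finding("lodash", "Medium", exploit_known=False),
    ]
    for finding in examples:
        print(f"{finding.package}: {route_finding(finding)}")
```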

Conclusion

Due to the asymmetry between APTs and under-resourced security teams, the goal isn’t to create an impenetrable fortress that can never be breached. The goal is instead to follow security best practices and litter your SDLC with sensors and automated feedback mechanisms. APTs may have significantly more resources than your security team, but they are still human, and all humans make mistakes. By placing low-effort tripwires in as many locations as possible, you reverse the asymmetry of resources and allow the well-resourced adversary to become their own worst enemy. APTs are still software developers at the end of the day, and no one writes bug-free code in the long run. By transforming your software supply chain into a minefield of best practices, you create a battlefield that requires your adversaries to slow down and carefully disable each individual security mechanism. None are impossible to disarm, but each speed bump creates another opportunity for your adversary to make a mistake and reveal themselves. If the zero-trust architecture has taught us anything, it is that an impenetrable perimeter was never the best strategy.

David and Goliath: the Intersection of APTs and Software Supply Chain Security

Note: This is a multi-part series primer on the intersection of advanced persistent threats (APTs) and software supply chain security (SSCS). This blog post is the second in the series. If you’d like to start from the beginning, you can find the first blog post here.

Last week we set the stage for discussing APTs and the challenges they pose for software supply chain security by giving a quick overview of each topic. This week we will dive into the details of how the structure of the open source software supply chain is a uniquely ripe target for APTs.

The Intersection of APTs and Software Supply Chain Security

The Software Ecosystem: A Ripe Target

APT groups often prioritize the exploitation of software supply chain vulnerabilities. This is due to the asymmetric structure of the software ecosystem. By breaching a single component, such as a build system, they can gain access to any organization using the compromised software component. This inverts the cost-benefit calculation of the research and development effort needed to discover a vulnerability and craft an exploit for it. Previously, APTs focused primarily on targets where the payoff could warrant the investment, or on vulnerabilities so widespread that the attack could be automated. Now, the complex web of software dependencies allows APTs to scale their attacks because of the structure of the ecosystem.

The Software Supply Chain Security Dynamic: An Unequal Playing Ground

The interesting challenge with software supply chain security is that securing the supply chain requires even more effort than an APT would take to exploit it. The rub comes because each company that consumes software has to build a software supply chain security system to protect their organization. An APT investing in exploiting a popular component or system gets the benefit of access to all of the software built on top of it.

Given that security organizations are at a structural disadvantage, how can organizations even the odds?

How Do I Secure My Software Supply Chain from APTs?

An organization’s ability to detect the threat of APTs in its internal software supply chain comes down to three core themes that can be summed up as “go deep, go wide and integrate feedback”. Specifically, this means the deeper the visibility into your organization’s software supply chain, the less surface area an attacker has to slip in malicious software. The wider this visibility is deployed across the software development lifecycle, the earlier an attacker will be caught. Neither of the first two points matters if the feedback produced isn’t integrated into an overall security program that can act on the signals surfaced.

By applying these three core principles to the design of a secure software supply chain, an organization can ensure that they balance the playing field against the structural advantage APTs possess.

How Can I Easily Implement a Strategy for Securing My Software Supply Chain?

The core principles of depth, breadth and feedback are powerful touchstones to utilize when designing a secure software supply chain that can challenge APTs but they aren’t specific rules that can be easily implemented. To address this, Anchore has created the open source VIPERR Framework to provide specific guidance on how to achieve the core principles of software supply chain security.

VIPERR is a free software supply chain security framework that Anchore created for organizations to evaluate and improve the security posture of their software supply chain. VIPERR stands for visibility, inspection, policy enforcement, remediation, and reporting. 

Utilizing the VIPERR Framework, an organization can satisfy the three core principles of software supply chain security: depth, breadth and feedback. By following this guide, numerous Fortune 500 enterprises and top federal agencies have transformed their software supply chain security posture and become harder targets for advanced persistent threats. If you’re looking to design and run your own secure software supply chain system, this framework will provide a shortcut to ensure the resulting system will be resilient.

How Can I Comprehensively Implement a Strategy for Securing My Software Supply Chain?

There are a number of comprehensive initiatives that define best practices for software supply chain security. Organizations ranging from the National Institute of Standards and Technology (NIST), with standards such as SP 800-53, SP 800-218, and SP 800-161, to the Cloud Native Computing Foundation (CNCF) and the Open Source Security Foundation (OpenSSF), with efforts such as the SLSA framework and the Secure Supply Chain Consumption Framework (S2C2F) Project, have created detailed documentation on their recommendations for achieving a comprehensive supply chain security program. Be aware that these are not quick and dirty solutions for achieving a “reasonably” secure software supply chain. They are large undertakings for any organization and should be given the resources needed to achieve success.

We don’t have the time to go over each in this blog post but we have broken each down in our complete guide to software supply chain security.

This is the second in a series of blog posts focused on the intersection of APTs and software supply chain security. This blog post highlighted the reasons that APTs focus their efforts on software supply chain exploits and the potential avenues that companies have to combat this threat. Next week we will discuss the Anchore Enterprise solution as a turnkey platform to implement the strategies outlined above.

How Cisco Umbrella Achieved FedRAMP Compliance in Weeks

Implementing compliance standards can be a daunting task for IT and security teams. The complexity and volume of requirements, increased workload, and resource constraints make it challenging to ensure compliance without overwhelming those responsible. Our latest case study, “How Cisco Umbrella Achieved FedRAMP Compliance in Weeks,” provides a roadmap for overcoming these challenges, leading to a world of streamlined compliance with low cognitive overhead.

Challenges Faced by Cisco Umbrella

Cisco Umbrella for Government, a cloud-native cybersecurity solution tailored for federal, state, and local government agencies, faced a tight deadline to meet FedRAMP vulnerability scanning requirements. They needed to integrate multiple security functions into a single, manageable solution while ensuring comprehensive protection across various environments, including remote work settings. Key challenges included:

  • Meeting all six FedRAMP vulnerability scanning requirements
  • Maintaining and automating STIG & FIPS compliance for Amazon EC2 virtual machines
  • Integrating end-to-end container security across the CI/CD pipeline, Amazon EKS, and Amazon ECS
  • Meeting SBOM requirements for White House Executive Order (EO 14028)

Solutions Implemented

To overcome these challenges, Cisco Umbrella leveraged Anchore Enterprise, a leading software supply chain security platform specializing in container security and vulnerability management. Anchore Enterprise integrated seamlessly with Cisco’s existing infrastructure, providing automated vulnerability scanning, policy enforcement, and SBOM management across the CI/CD pipeline and cloud environments.

These capabilities enabled Cisco Umbrella to secure their software supply chain, ensuring compliance with FedRAMP, STIG, FIPS, and EO 14028 within a short timeframe.

Remarkable Results

By integrating Anchore Enterprise, Cisco Umbrella achieved:

  • FedRAMP, FIPS, and STIG compliance in weeks versus months
  • Reduced implementation time and improved developer experience
  • Proactive vulnerability detection in development, saving hours of developer time
  • Simplified security data management with a complete SBOM management solution

Download the Case Study Today

Navigating the complexity and volume of compliance requirements can be overwhelming for IT and security teams, especially with increased workloads and resource constraints. Cisco Umbrella’s experience shows that with the right tools, achieving compliance can be streamlined and manageable. Discover how you can implement these strategies in your organization by downloading our case study, “How Cisco Umbrella Achieved FedRAMP Compliance in Weeks,” and take the first step towards streamlined compliance today.

Using the Common Form for SSDF Attestation: What Software Producers Need to Know

The release of the long-awaited Secure Software Development Attestation Form on March 18, 2024 by the Cybersecurity and Infrastructure Security Agency (CISA) increases the focus on cybersecurity compliance for software used by the US government. With the release of the SSDF attestation form, the clock is now ticking for software vendors and federal systems integrators to comply with and attest to secure software development practices.

This initiative is rooted in the cybersecurity challenges highlighted by Executive Order 14028, including the SolarWinds attack and the Colonial Pipeline ransomware attack, which clearly demonstrated the need for a coordinated national response to the emerging threats of a complex software supply chain. Attestation to Secure Software Development Framework (SSDF) requirements using the new Common Form is the most recent, and likely not the final, step towards a more secure software supply chain for both the United States and the world at large. We will take you through the details of what this form means for your organization and how to best approach it.

Overview of the SSDF attestation

SSDF attestation is part of a broader effort derived from the Cybersecurity EO 14028 (formally titled “Improving the Nation’s Cybersecurity”). As a result of this EO, the Office of Management and Budget (OMB) issued two memorandums, M-22-18 “Enhancing the Security of the Software Supply Chain through Secure Software Development Practices” and M-23-16 “Update to Memorandum M-22-18”.

These memos require federal agencies to obtain self-attestation forms from software suppliers. Software suppliers have to attest to complying with a subset of the Secure Software Development Framework (SSDF).

Before the publication of the SSDF attestation form, the SSDF was a software development best practices standard published by the National Institute of Standards and Technology (NIST), based on industry best practices like BSIMM and OWASP SAMM. It was a useful resource for organizations that valued security intrinsically and wanted to run secure software development without any external incentives like formal compliance requirements.

Now, the SSDF attestation form requires software providers to self-attest to having met a subset of the SSDF best practices. There are a number of implications to this transition from secure software development being an aspirational standard to it being a compliance standard, which we will cover below. The most important thing to keep in mind is that while the attestation form doesn’t require a software provider to be formally certified before they can transact with a federal agency, as FedRAMP does, there are retroactive punishments that can be applied in cases of non-compliance.

Who/What is Affected?

  1. Software providers to federal agencies
    • Federal service integrators
    • Independent software vendors
    • Cloud service providers
  2. Federal agencies and DoD programs that use any of the above software providers

Included

  • New software: Any software developed after September 14, 2022
  • Major updates to existing software: A major version change after September 14, 2022
  • Software-as-a-Service (SaaS)

Exclusions

  • First-party software: Software developed in-house by federal agencies. SSDF is still considered a best practice but does not require self-attestation
  • Free and open-source software (FOSS): Even though FOSS components and end-user products are excluded from self-attestation, the SSDF requires that specific controls are in place to protect against software supply chain security breaches

Key Requirements of the Attestation Form

There are two high-level requirements for meeting compliance with the SSDF attestation form:

  1. Meet the technical requirements of the form
    • Note: The NIST SSDF defines 19 practices with 42 total tasks. The self-attestation form covers 4 categories, which are a subset of the full SSDF
  2. Self-attest to compliance with the subset of SSDF
    • Sign and return the form

Timeline

The timeline for compliance with the SSDF self-attestation form involves two critical dates:

  • Critical software: Jun 11, 2024 (3 months after approval on March 11)
  • All software: Sep 11, 2024 (6 months after approval on March 11)

Implications

Now that CISA has published the final version of the SSDF attestation form, there are a number of implications to this transition. One is financial and the other is potentially criminal.

The financial penalty of not attesting to secure software development practices via the form can be significant. Federal agencies are required to stop using the software, potentially impacting your revenue, and any future agencies you want to work with will ask to see your SSDF attestation form before procurement. Sign the form or miss out on this revenue.

The second penalty is a bit scarier from an individual perspective. An officer of the company has to sign the attestation form to state that they are responsible for attesting to the fact that all of the form’s requirements have been met. Here is the relevant quote from the form:

“Willfully providing false or misleading information may constitute a violation of 18 U.S.C. § 1001, a criminal statute.”

It is also important to realize that this isn’t an unenforceable threat. There is evidence that the DOJ Civil Cyber Fraud Initiative is trying to crack down on government contractors failing to meet cybersecurity requirements, bringing False Claims Act investigations and enforcement actions. This will likely weigh heavily both on the individual who signs the form and on the organization’s choice of who is asked to sign it.

Given this, most organizations will likely opt to utilize a third-party assessment organization (3PAO) to sign the form in order to shift liability off of any individual in the organization.

Challenges and Considerations

Do I still have to sign if I have a 3PAO do the technical assessment?

No. As long as the 3PAO is FedRAMP-certified. 

What if I can’t comply in time?

You can draft a plan of action and milestones (POA&M) to cover the interim while you address the gaps between your current system and the system required by the attestation form. If the agency is satisfied with the POA&M, then they can continue to use your software. But they have to request either an extension of the deadline from OMB or a waiver in order to do that.

Can only the CEO and COO sign the form?

The wording in the draft form required either the CEO or COO to sign, but new language was added to the final form that allows a different company employee to sign the attestation form.

Conclusion

Cybersecurity compliance is a journey, not a destination. SSDF attestation is the next step in that journey for secure software development. With the release of the SSDF attestation form, the SSDF standard is now transformed from a recommendation into a requirement. Given the overall trend of cybersecurity modernization that was kickstarted with FISMA in 2002, it would be prudent to assume that this SSDF attestation form is an intermediate step before the requirements become a hard gate where compliance will have to be demonstrated as a prerequisite to utilizing the software.

If you’re interested in a deep dive into what is technically required to meet the requirements of the SSDF attestation form, read all of the nitty-gritty details in our eBook, “SSDF Attestation 101: A Practical Guide for Software Producers”.

If you’re looking for a solution to help you achieve the technical requirements of SSDF attestation quickly, take a look at Anchore Enterprise. We have helped hundreds of enterprises achieve SSDF attestation in days versus months with our automated compliance platform.

With Great Power Comes Great Responsibility: APTs & Software Supply Chain Security

Note: This is a multi-part series primer on the intersection of advanced persistent threats (APTs) and software supply chain security (SSCS). This blog post is the first in the series. We will update this blog post with links to the additional parts of the series as they are published.
• Part 1 (This blog post) | With Great Power Comes Great Responsibility: APTs & Software Supply Chain Security
• Part 2 | David and Goliath: the Intersection of APTs and Software Supply Chain Security
• Part 3 | Balancing the Scale: Software Supply Chain Security and APTs

In the realm of cybersecurity, the convergence of Advanced Persistent Threats (APTs) and software supply chain security presents a uniquely daunting challenge for organizations. APTs, characterized by their sophisticated, state-sponsored or well-funded nature, focus on stealthy, long-term data theft, espionage, or sabotage, targeting specific entities. Their effectiveness is amplified by the asymmetric power dynamics of a well-funded attacker versus a resource-constrained security team.

Modern supply chains inadvertently magnify the impact of APTs due to the complex and interconnected dependency network of software and hardware components. The exploitation of this weakness by APTs not only compromises the targeted organizations but also poses a systemic risk to all users of the compromised downstream components. The infamous SolarWinds exploit exemplifies the far-reaching consequences of such breaches.

This landscape underscores the necessity for an integrated approach to cybersecurity, emphasizing depth, breadth, and feedback to create a holistic software supply chain security program that can withstand even adversaries as motivated and well-resourced as APTs. Before we jump into how to create a secure software supply chain that can resist APTs, let’s understand our adversary a bit better first.

Know Your Adversary: Advanced Persistent Threats (APTs)

What is an Advanced Persistent Threat (APT)?

An Advanced Persistent Threat (APT) is a sophisticated, prolonged cyberattack, usually state-sponsored or executed by well-funded criminal groups, targeting specific organizations or nations. Characterized by advanced techniques, APTs exploit zero-day vulnerabilities and custom malware, focusing on stealth and long-term data theft, espionage, or sabotage. Unlike broad, indiscriminate cyber threats, APTs are highly targeted, involving extensive research into the victim’s vulnerabilities and tailored attack strategies.

APTs are marked by their persistence, maintaining a foothold in a target’s network for extended periods, often months or years, to continually gather information. They are designed to evade detection, blending in with regular network traffic, and using methods like encryption and log deletion. Defending against APTs requires robust, advanced security measures, continuous monitoring, and a proactive cybersecurity approach, often necessitating collaboration with cybersecurity experts and authorities.

High-Profile APT Example: Operation Triangulation

The recent Operation Triangulation campaign disclosed by Kaspersky researchers is an extraordinary example of an APT in both its sophistication and depth. The campaign made use of four separate zero-day vulnerabilities, took a highly targeted approach towards specific individuals at Kaspersky, combined a multi-phase attack pattern, and persisted over a four-year period. Its complexity, the significant resources it implied (possibly from a nation-state), and the stealthy, methodical progression of the attack align closely with the hallmarks of APTs. Famed security researcher Bruce Schneier, writing on his blog Schneier on Security, wasn’t able to contain his surprise upon reading the details of the campaign: “[t]his is nation-state stuff, absolutely crazy in its sophistication.”

What is the impact of APTs on organizations?

Ignoring the threat posed by Advanced Persistent Threats (APTs) can lead to significant impact for organizations, including extensive data breaches and severe financial losses. These sophisticated attacks can disrupt operations, damage reputations, and, in cases involving government targets, even compromise national security. APTs enable long-term espionage and strategic disadvantage due to their persistent nature. Thus, overlooking APTs leaves organizations exposed to continuous, sophisticated cyber espionage and the multifaceted damages that follow.

Now that we have a good grasp on the threat of APTs, we turn our attention to the world of software supply chain security to understand the unique features of this landscape.

Setting the Stage: Software Supply Chain Security

What is Software Supply Chain Security?

Software supply chain security is focused on protecting the integrity of software through its development and distribution. Specifically it aims to prevent the introduction of malicious code into software that is utilized as components to build widely-used software services.

The open source software ecosystem is a complex supply chain that solves the problem of redundancy of effort. By creating a single open source version of a web server and distributing it, new companies that want to operate a business on the internet can re-use the generic open source web server instead of having to build their own before they can do business. These new companies can instead focus their efforts on building new bespoke software on top of a web server that does new, useful functions for users that were previously unserved. This is typically referred to as composable software building blocks and it is one of the most important outcomes of the open source software movement.

But as they say, “there are no free lunches”. With the incredible productivity boon that open source software has created comes responsibility.

What is the Key Vulnerability of the Modern Software Supply Chain Ecosystem?

The key vulnerability in the modern software supply chain is the structure of how software components are re-used, each with its own set of dependencies, creating a complex web of interlinked parts. This intricate dependency network can lead to significant security risks if even a single component is compromised, as vulnerabilities can cascade throughout the entire network. This interconnected structure makes it challenging to ensure comprehensive security, as a flaw in any part of the supply chain can affect the entire system.

Modern software is particularly vulnerable to software supply chain attacks because 70-90% of a modern application is composed of open source software components, with the remaining 10-30% being the proprietary code that implements company-specific features. This means that by breaching popular open source frameworks and libraries, an attacker can amplify the blast radius of their attack to reach significant portions of internet-based services with a single exploit.

If you’re looking for a deeper understanding of software supply chain security we have written a comprehensive guide to walk you through the topic in full.

High-Profile Software Supply Chain Exploit Example: SolarWinds

In one of the most sophisticated supply chain attacks, malicious actors compromised the update mechanism of SolarWinds’ Orion software. This breach allowed the attackers to distribute malware to approximately 18,000 customers. The attack had far-reaching consequences, affecting numerous government agencies, private companies, and critical infrastructure.

Looking at the example of SolarWinds, the lesson is not to focus solely on prevention. APTs have a wealth of resources to draw upon. Instead, the focus should be on monitoring the software we consume, build, and ship for unexpected changes. Modern software supply chains come with a great deal of responsibility: the software we use and ship needs to be understood and monitored.

This is the first in a series of blog posts focused on the intersection of APTs and software supply chain security. This blog post highlighted the contextual background to set the stage for the unique consequences of these two larger forces. Next week, we will discuss the implications of the collision of these two spheres in the second blog post in this series.

Anchore’s June Line-Up: Essential Events for Software Supply Chain Security and DevSecOps Enthusiasts

Summer is beginning to ramp up, but before we all check out for the holidays, Anchore has a sizzling hot line-up of events to keep you engaged and informed. This June, we are excited to host and participate in a number of events that cater to the DevSecOps crowd and the public sector. From insightful webinars to hands-on workshops and major conferences, there’s something for everyone looking to enhance their knowledge and connect with industry leaders. Join us at these events to learn more about how we are driving innovation in the software supply chain security industry.

WEBINAR: How the US Navy is enabling software delivery from lab to fleet

Date: Jun 4, 2024

The US Navy’s DevSecOps platform, Party Barge, has revolutionized feature delivery by significantly reducing onboarding time from 5 weeks to just 1 day. This improvement enhances developer experience and productivity through actionable findings and fewer false positives, while maintaining high security standards with inherent policy enforcement and Authorization to Operate (ATO). As a result, development teams can efficiently ship applications that have achieved cyber-readiness for Navy Authorizing Officials (AOs).

In an upcoming webinar, Sigma Defense and Anchore will provide an in-depth look at the secure pipeline automation and security artifacts that expedite application ATO and production timelines. Topics will include strategies for removing silos in DevSecOps, building efficient development pipeline roles and component templates, delivering critical security artifacts for ATO (such as SBOMs, vulnerability reports, and policy evidence), and streamlining operations with automated policy checks on container images.

WORKSHOP: VIPERR — Actionable Framework for Software Supply Chain Security

Date: Jun 17, 2024 from 8:30am – 2:00pm ET

Location: Carahsoft office in Reston, VA

Anchore, in partnership with Carahsoft, is offering an exclusive in-person workshop to walk security practitioners through the principles of the VIPERR framework. Learn the framework hands-on from the team that originally developed this industry-leading software supply chain security framework. In case you're not familiar, the VIPERR framework enhances software supply chain security by enabling teams to evaluate and improve their security posture. It offers a structured approach to meeting popular compliance standards. VIPERR stands for visibility, inspection, policy enforcement, remediation, and reporting, focusing on actionable strategies to bolster supply chain security.

The workshop covers building a software bill of materials (SBOM) for visibility, performing security checks for vulnerabilities and malware during inspection, enforcing compliance with both external and internal standards, and providing recommendations and automation for quick issue remediation. Additionally, timely reporting at any development stage is emphasized, along with a special topic on achieving STIG compliance.

EVENT: Carahsoft DevSecOps Conference 2024

Date: Jun 18, 2024

Location: The Ronald Reagan Building and International Trade Center in Washington, DC

If you’re planning to be at the show, our team is looking forward to meeting you.  You can book a demo session with us in advance!

On top of offering the VIPERR workshop, the Anchore team will be attending Carahsoft’s 2nd annual DevSecOps Conference in Washington, DC, a comprehensive forum designed to address the pressing technological, security, and innovation challenges faced by government agencies today. The event aims to explore innovative approaches such as DoD software factories, which drive efficiency and enhance the delivery of citizen-centric services, and DevSecOps, which integrates security into the software development lifecycle to combat evolving cybersecurity threats. Through a series of panels and discussions, attendees will gain valuable knowledge on how to leverage these cutting-edge strategies to improve their operations and service delivery.

EVENT: AWS Summit Washington, DC

Dates:  June 26-27, 2024

Location: Walter E. Washington Convention Center in Washington, DC

If you’re planning to be at the show, our team is looking forward to meeting you.  You can book a demo session with us in advance!

To round out June, Anchore will also be attending AWS Summit Washington, DC. The event highlights how AWS partners can help public sector organizations meet the needs of federal agencies. Anchore is an AWS Public Sector Partner and a graduate of the AWS ISV Accelerate program.

See how Anchore helped Cisco Umbrella for Government achieve FedRAMP compliance by reading the co-authored blog post on the AWS Partner Network (APN) Blog. Or better yet, drop by our booth and the team can give you a live demo of the product.

VIRTUAL EVENT: Life after the xz utils backdoor hack with Josh Bressers

Date: Wednesday, June 5, from 12:35 PM – 1:20 PM EDT

The xz utils hack was a significant breach that profoundly undermined trust within the open source community. The discovery of the backdoor revealed vulnerabilities in the software supply chain. As both a member of the open source community and a solution provider in the software supply chain security field, we at Anchore have strong opinions about xz specifically and about open source security generally. Anchore's VP of Security, Josh Bressers, will be speaking publicly about this topic at Upstream 2024.

Be sure to catch the live stream of “Life after the xz utils backdoor hack,” a panel discussion featuring Josh Bressers. The panel will cover the implications of the recent xz utils backdoor hack and how the attack deeply impacted trust within the open source community. In keeping with the Upstream 2024 theme of “Unusual Ideas to Solve the Usual Problems”, Josh will be presenting the “unusual” solution that Anchore has developed to keep these types of hacks from impacting the industry. The discussion will include insights from industry experts such as Shaun Martin of BlackIce, Jordan Harband, prolific JavaScript maintainer, Rachel Stephens from RedMonk, and Terrence Fischer from Boeing.

Wrap-Up

Don’t miss out on these exciting opportunities to connect with Anchore and learn about the latest advancements in software supply chain security and DevSecOps. Whether you join us for a webinar, participate in our in-person VIPERR workshop, or visit us at one of the major conferences, you’ll gain valuable insights and practical knowledge to enhance your organization’s security posture. We’re looking forward to engaging with you and helping you navigate the evolving digital landscape. See you in June!

Also, if you want to stay up-to-date on all of the events that Anchore hosts or participates in be sure to bookmark our events page and check back often!

Navigating the Updates to cATO: Critical Changes & Practical Advice for DoD Programs

On April 11, the US Department of Defense (DoD)’s Chief Information Officer (CIO) released the DevSecOps Continuous Authorization Implementation Guide, marking the next step in the evolution of the DoD’s efforts to modernize its security and compliance ecosystem. This guide is part of a larger trend of compliance modernization that is transforming the US public sector and the global public sector as a whole. It aims to streamline and enhance the processes for achieving continuous authorization to operate (cATO), reflecting a continued push to shift from traditional, point-in-time authorizations to operate (ATOs) to a more dynamic and ongoing compliance model.

The new guide introduces several significant updates, including specific security and development metrics required to achieve cATO, comprehensive evaluation criteria, practical advice on how to meet cATO requirements, and a special emphasis on software supply chain security via software bills of materials (SBOMs).

We break down the updates that are important to highlight if you’re already familiar with the cATO process. If you’re looking for a primer on cATO to get yourself up to speed, read our original blog post or click below to watch our webinar on-demand.

Continuous Authorization Metrics

A new addition to the corpus of information on cATO is the introduction of specific security and software development metrics that are required to be continuously monitored. Many of these come from the private sector DevSecOps best practices that have been honed by organizations at the cutting edge of this field, such as Google, Microsoft, Facebook and Amazon.

We’ve outlined the major ones below.

  1. Mean Time to Patch Vulnerabilities:
    • Description: Average time between the identification of a vulnerability in the DevSecOps Platform (DSOP) or application and the successful production deployment of a patch.
    • Focus: Emphasis on vulnerabilities with high to moderate impact on the application or mission.
  2. Trend Metrics:
    • Description: Metrics associated with security guardrails and control gates PASS/FAIL ratio over time.
    • Focus: Show improvements in development team efforts at developing secure code with each new sprint and the system’s continuous improvement in its security posture.
  3. Feedback Communication Frequency:
    • Description: Metrics to ensure feedback loops are in place, being used, and trends showing improvement in security posture.
  4. Effectiveness of Mitigations:
    • Description: Metrics associated with the continued effectiveness of mitigations against a changing threat landscape.
  5. Security Posture Dashboard Metrics:
    • Description: Metrics showing the stage of application and its security posture in the context of risk tolerances, security control compliance, and security control effectiveness results.
  6. Container Metrics:
    • Description: Measure the age of containers against the number of times they have been used in a subsystem and the residual risk based on the aggregate set of open security issues.
  7. Test Metrics:
    • Description: Percentage of test coverage passed, percentage of passing functional tests, count of various severity level findings, percentage of threat actor actions mitigated, security findings compared to risk tolerance, and percentage of passing security control compliance.

The common thread in the required metrics is to quickly understand whether the overall security of the application is improving. If it isn't, that is a sign that something within the system is out of balance and needs attention.

Comprehensive and detailed evaluation criteria

Tucked away in Appendix B, “Requirements,” is a detailed table that spells out the individual requirements that need to be met in order to achieve a cATO. This table is meant to improve the cATO process so that the individuals in a program who are implementing the requirements know the criteria they will be evaluated against, reducing the back-and-forth between the program and the Authorizing Official (AO) evaluating them.

Practical Implementation Advice

The ecosystem for DSOPs has evolved significantly since cATO was first announced in February 2022. Over the past 2+ years, a number of early adopters, such as Platform One, have blazed a trail and learned the painful lessons needed to smooth the path for other organizations that are now looking to modernize their development practices. The advice in the implementation guide is a high-signal, low-noise distillation of these hard-won lessons.

DevSecOps Platform (DSOP) Advice

If you're more interested in writing software than in operating a DSOP, then you'll want to focus your attention on pre-existing DSOPs, commonly called DoD software factories.

We have written both a primer for understanding DoD software factories and an index of additional content that can quickly direct you to deep dives in specific content you’re interested in.

If you love to get your hands dirty and would rather have full control over your development environment, just be aware that this is specifically recommended against:

Build a new DSOP using hardened components (this is the most time-consuming approach and should be avoided if possible).

DevSecOps Culture Advice

While the DevSecOps culture and process advice is well known in the private sector, it is still important to emphasize in the federal context, which is currently transitioning to the modern software development paradigm.

  1. Bring in the security team at the start of development and keep them involved throughout.
  2. Create secure agile processes to support the continued delivery of value without the introduction of unnecessary risk.

Continuous Monitoring (ConMon) Advice

Ensure that all environments are continuously monitored (e.g., development, test, and production). Utilize the security data collected from these environments to power and inform thresholds and triggers for active incident response. ConMon and ACD are separate pillars of cATO, but they need to be integrated so that information flows to the systems that can make the best use of it. It is this integrated approach that delivers on the promise of significantly improved security and risk outcomes.

Active Cyber Defense (ACD) Advice

Both a Security Operations Center (SOC) and an external Cybersecurity Service Provider (CSSP) are needed in order to achieve the Active Cyber Defense (ACD) pillar of cATO. On top of that, there also has to be a detailed incident response plan and personnel trained on it. While cATO's goal is to automate as much of the security and incident response system as possible to reduce the burden of manual intervention, humans in the loop remain an important component, both to tune the system and to react with appropriate urgency.

Software Supply Chain Security (SSCS) Advice

The new implementation guide is very clear that a DSOP must create SBOMs for itself and for any applications that pass through it. This is a mega-trend that has been sweeping over the software supply chain security industry for the past decade. It is now the consensus that SBOMs are the best abstraction and practice for securing software development in the age of composable and complex software.

The 3 (+1) Pillars of cATO

While the 3 pillars of cATO and the recommendation of SBOMs as the preferred software supply chain security tool were called out in the original cATO memo, the recently published implementation guide again emphasizes the importance of the 3 (+1) pillars of cATO.

The guide quotes directly from the memo:

In order to prevent any combination of human errors, supply chain interdictions, unintended code, and support the creation of a software bill of materials (SBOM), the adoption of an approved software platform and development pipeline(s) are critical.

This is a continuation of the DoD specifically, and the federal government generally, highlighting the importance of software supply chain security and software bills of materials (SBOMs) as “critical” for achieving the 3 pillars of cATO. This is why Anchore refers to this as the “3 (+1) Pillars of cATO“.

  1. Continuous Monitoring (ConMon)
  2. Active Cyber Defense (ACD)
  3. DevSecOps (DSO) Reference Design
  4. Secure Software Supply Chain (SSSC)

Wrap-up

The release of the new DevSecOps Continuous Authorization Implementation Guide marks a significant advancement in the DoD's approach to cybersecurity and compliance. With a focus on transitioning from traditional point-in-time Authorizations to Operate (ATOs) to a continuous authorization model, the guide introduces comprehensive updates designed to streamline the cATO process. The goal is to ease the burden of the process and help more programs modernize their security and compliance posture.

If you're interested in learning more about the benefits and best practices of utilizing a DSOP (i.e., a DoD software factory) to transform cATO compliance into a "switch flip", be sure to pick up a copy of our "DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images" white paper. Click below to download.

Best Practices for DevSecOps in DoD Software Factories: A White Paper

The Department of Defense’s (DoD) Software Modernization Implementation Plan, unveiled in March 2023, represents a significant stride towards transforming software delivery timelines from years to days. This ambitious plan leverages the power of containers and modern DevSecOps practices within a DoD software factory.

Our latest white paper, titled "DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images," dives deep into the practices for securing container images in a DoD software factory. It also details how Anchore Federal, a pivotal tool within this framework, supports these best practices to enhance security and compliance across multiple DoD software factories, including the US Air Force's Platform One, Iron Bank, and the US Navy's Black Pearl.

Key Insights from the White Paper

  • Securing Container Images: The paper outlines six essential best practices ranging from using trusted base images to continuous vulnerability scanning and remediation. Each practice is backed by both DoD guidance and relevant NIST standards, ensuring alignment with federal requirements.
  • Role of Anchore Federal: As a proven tool in the arena of container image security, Anchore Federal facilitates these best practices by integrating seamlessly into DevSecOps workflows, providing continuous scanning, and enabling automated policy enforcement. It’s designed to meet the stringent security needs of DoD software factories, ready for deployment even in classified and air-gapped environments.
  • Supporting Rapid and Secure Software Delivery: With the DoD’s shift towards software factories, the need for robust, secure, and agile software delivery mechanisms has never been more critical. Anchore Federal is the turnkey solution for automating security processes and ensuring that all container images meet the DoD’s rigorous security and compliance requirements.

Download the White Paper Today

Empower your organization with the insights and tools needed for secure software delivery within the DoD ecosystem. Download our white paper now and take a significant step towards implementing best-in-class DevSecOps practices in your operations. Equip your teams with the knowledge and technology to not just meet, but exceed the modern security demands of the DoD’s software modernization efforts.

Navigate SSDF Attestation with this Practical Guide

The clock is ticking again for software producers selling to federal agencies. In the second half of 2024, CEOs or their designees must begin providing an SSDF attestation that their organization adheres to the secure software development practices documented in NIST SP 800-218, the Secure Software Development Framework (SSDF).

Download our latest ebook to navigate through SSDF attestation quickly and adhere to timelines. 

SSDF attestation covers four main areas from NIST SSDF including: 

  1. Securing development environments
  2. Using automated tools to maintain trusted source code supply chains
  3. Maintaining provenance (e.g., via SBOMs) for internal code and third-party components
  4. Using automated tools to check for security vulnerabilities

This new requirement is not to be taken lightly. It applies to all software producers, regardless of whether they provide a software end product as SaaS or on-prem, to any federal agency. The SSDF attestation deadline is June 11, 2024, for critical software and September 11, 2024, for all software. However, on-prem software developed before September 14, 2022, will only require SSDF attestation when a new major version is released. The bottom line is that most organizations will need to comply by 2024.

Companies will make their SSDF attestation through an online Common Form that covers the minimum secure software development requirements that software producers must meet. Individual agencies can add agency-specific instructions outside of the Common Form. 

Organizations that want to ensure they meet all relevant requirements can submit a third-party assessment instead of a CEO attestation. You must use a Third-Party Assessment Organization (3PAO) that is FedRAMP-certified or approved by an agency official.  This option is a no-brainer for cloud software producers who use a 3PAO for FedRAMP.

There are a lot of details here, so we put together a practical guide to the SSDF attestation requirements and how to meet them: "SSDF Attestation 101: A Practical Guide for Software Producers". We also cover how Anchore Enterprise automates SSDF attestation compliance by integrating directly into your software development pipeline and utilizing continuous policy scanning to detect issues before they hit production.

Modeling Software Security as Unit Tests: A Mental Model for Developers

Modern software development is complex to say the least. Vulnerabilities often lurk within the vast networks of dependencies that underpin applications. A typical scenario involves a simple app.go source file that is underpinned by a sprawling tree of external libraries and frameworks (check the go.mod file for the receipts). As developers incorporate these dependencies into their applications, the security risks escalate, often surpassing the complexity of the original source code. This real-world challenge highlights a critical concern: the hidden vulnerabilities that are magnified by the dependencies themselves, making the task of securing software increasingly daunting.

Addressing this challenge requires reimagining software supply chain security through a different lens. In a recent webinar with the famed Kelsey Hightower, he offered an apt analogy to help bring the sometimes opaque world of security into focus for developers: software security can be thought of as just another test in the software testing suite, and the system that manages the tests and the associated metadata is a data pipeline. We'll explore this analogy in more depth in this blog post, and by the end we will have built a bridge between developers and security.

The Problem: Modern software is built on a tower

Modern software is built from a tower of libraries and dependencies that increases the productivity of developers, but with these boosts comes the risk of increased complexity. Below is a simple 'ping-pong' (i.e., request-response) application written in Go that imports a single HTTP web framework:

package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	// Create a Gin router with the default middleware (logger and recovery).
	r := gin.Default()

	// Respond to GET /ping with a JSON "pong" message.
	r.GET("/ping", func(c *gin.Context) {
		c.JSON(http.StatusOK, gin.H{
			"message": "pong",
		})
	})

	// Start the server on the default address (:8080).
	r.Run()
}

With this single framework comes a laundry list of dependencies that are needed in order to work. This is the go.mod file that accompanies the application:

module app

go 1.20

require github.com/gin-gonic/gin v1.7.2

require (
	github.com/gin-contrib/sse v0.1.0 // indirect
	github.com/go-playground/locales v0.13.0 // indirect
	github.com/go-playground/universal-translator v0.17.0 // indirect
	github.com/go-playground/validator/v10 v10.4.1 // indirect
	github.com/golang/protobuf v1.3.3 // indirect
	github.com/json-iterator/go v1.1.9 // indirect
	github.com/leodido/go-urn v1.2.0 // indirect
	github.com/mattn/go-isatty v0.0.12 // indirect
	github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421 // indirect
	github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742 // indirect
	github.com/ugorji/go/codec v1.1.7 // indirect
	golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 // indirect
	golang.org/x/sys v0.0.0-20200116001909-b77594299b42 // indirect
	gopkg.in/yaml.v2 v2.2.8 // indirect
)

The dependencies for the application end up being larger than the application source code, and each of these dependencies carries the potential for a vulnerability that could be exploited by a determined adversary. Kelsey Hightower summed this up well: "this is software security in the real world". Below is an example of a Java app that hides vulnerable dependencies inside the frameworks that the application is built on.

As much as we might want to put the genie back in the bottle, the productivity boosts of building on top of frameworks are too good to reverse this trend. Instead we have to look for different ways to manage security in this more complex world of software development.

If you’re looking for a solution to the complexity of modern software vulnerability management, be sure to take a look at the Anchore Enterprise platform and the included container vulnerability scanner.

The Solution: Modeling software supply chain security as a data pipeline

Software supply chain security is a meta problem of software development. The solution to most meta problems in software development is data pipeline management. 

Developers have learned this lesson before: the first time they build an application and something goes wrong, they add logging around the error to solve the problem. This is a great solution until you've written your first hundred logging statements. Suddenly your solution has become its own problem, and you find yourself buried under a mountain of logging data. This is where a logging (read: data) pipeline steps in. The pipeline manages the mountain of log data and helps developers sift the signal from the noise.

The same pattern emerges in software supply chain security. From the first run of a vulnerability scanner on almost any modern software, a developer will find themselves buried under a mountain of security metadata. 

$ grype dir:~/webinar-demo/examples/app:v2.0.0

 ✔ Vulnerability DB                [no update available]  
 ✔ Indexed file system                                                                            ~/webinar-demo/examples/app:v2.0.0
 ✔ Cataloged contents                                                         889d95358bbb68b88fb72e07ba33267b314b6da8c6be84d164d2ed425c80b9c3
   ├── ✔ Packages                        [16 packages]  
   └── ✔ Executables                     [0 executables]  
 ✔ Scanned for vulnerabilities     [11 vulnerability matches]  
   ├── by severity: 1 critical, 5 high, 5 medium, 0 low, 0 negligible
   └── by status:   11 fixed, 0 not-fixed, 0 ignored 

NAME                      INSTALLED                           FIXED-IN                           TYPE          VULNERABILITY        SEVERITY 
github.com/gin-gonic/gin  v1.7.2                              1.7.7                              go-module     GHSA-h395-qcrw-5vmq  High      
github.com/gin-gonic/gin  v1.7.2                              1.9.0                              go-module     GHSA-3vp4-m3rf-835h  Medium    
github.com/gin-gonic/gin  v1.7.2                              1.9.1                              go-module     GHSA-2c4m-59x9-fr2g  Medium    
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.0.0-20211202192323-5770296d904e  go-module     GHSA-gwc9-m7rh-j2ww  High      
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.0.0-20220314234659-1baeb1ce4c0b  go-module     GHSA-8c26-wmh5-6g9v  High      
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.0.0-20201216223049-8b5274cf687f  go-module     GHSA-3vm4-22fp-5rfm  High      
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.17.0                             go-module     GHSA-45x7-px36-x8w8  Medium    
golang.org/x/sys          v0.0.0-20200116001909-b77594299b42  0.0.0-20220412211240-33da011f77ad  go-module     GHSA-p782-xgp4-8hr8  Medium    
log4j-core                2.15.0                              2.16.0                             java-archive  GHSA-7rjr-3q55-vv33  Critical  
log4j-core                2.15.0                              2.17.0                             java-archive  GHSA-p6xc-xr62-6r2g  High      
log4j-core                2.15.0                              2.17.1                             java-archive  GHSA-8489-44mv-ggj8  Medium

All of this from a single innocuous import of your favorite application framework.

Again the data pipeline comes to the rescue and helps manage the flood of security metadata. In this blog post we’ll step through the major functions of a data pipeline customized for solving the problem of software supply chain security.

Modeling SBOMs and vulnerability scans as unit tests

I like to think of security tools as just another test. A unit test might test the behavior of my code. I think this falls in the same quality bucket as linters to make sure you are following your company’s style guide. This is a way to make sure you are following your company’s security guide.

Kelsey Hightower

This idea from renowned developer Kelsey Hightower is apt, particularly for software supply chain security. Tests are a mental model that developers use on a daily basis. Security tools are functions that run against your application to produce security data about it, rather than behavioral information like a unit test. The first two foundational functions of software supply chain security are identifying all of the software dependencies and scanning those dependencies for known vulnerabilities (i.e., 'testing' for vulnerabilities in an application).

This is typically accomplished by running an SBOM generation tool like Syft to create an inventory of all dependencies followed by running a vulnerability scanner like Grype to compare the inventory of software components in the SBOM against a database of vulnerabilities. Going back to the data pipeline model, the SBOM and vulnerability database are the data sources and the vulnerability report is the transformed security metadata that will feed the rest of the pipeline.
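As a minimal sketch of this two-step pipeline (the image name is a placeholder, and flags may vary slightly between tool versions), Syft produces the SBOM and Grype consumes it:

$ syft my-registry.example.com/app:v2.0.0 -o json > sbom.json
$ grype sbom:./sbom.json

The report below was produced by pointing Grype directly at a directory and requesting machine-readable JSON output.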

$ grype dir:~/webinar-demo/examples/app:v2.0.0 -o json

 ✔ Vulnerability DB                [no update available]  
 ✔ Indexed file system                                                                            ~/webinar-demo/examples/app:v2.0.0
 ✔ Cataloged contents                                                         889d95358bbb68b88fb72e07ba33267b314b6da8c6be84d164d2ed425c80b9c3
   ├── ✔ Packages                        [16 packages]  
   └── ✔ Executables                     [0 executables]  
 ✔ Scanned for vulnerabilities     [11 vulnerability matches]  
   ├── by severity: 1 critical, 5 high, 5 medium, 0 low, 0 negligible
   └── by status:   11 fixed, 0 not-fixed, 0 ignored 

{
 "matches": [
  {
   "vulnerability": {
    "id": "GHSA-h395-qcrw-5vmq",
    "dataSource": "https://github.com/advisories/GHSA-h395-qcrw-5vmq",
    "namespace": "github:language:go",
    "severity": "High",
    "urls": [
     "https://github.com/advisories/GHSA-h395-qcrw-5vmq"
    ],
    "description": "Inconsistent Interpretation of HTTP Requests in github.com/gin-gonic/gin",
    "cvss": [
     {
      "version": "3.1",
      "vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:L/A:N",
      "metrics": {
       "baseScore": 7.1,
       "exploitabilityScore": 2.8,
       "impactScore": 4.2
      },
      "vendorMetadata": {
       "base_severity": "High",
       "status": "N/A"
      }
     }
    ],
    . . . 

This was previously done just prior to pushing an application to production, as a release gate that had to be passed before software could be shipped. As DevOps principles have won the mindshare of the industry and unit tests have moved earlier in the software development lifecycle, security testing has also "shifted left" in the development cycle. With self-contained, open source CLI tooling like Syft and Grype, developers can now incorporate security testing into their development environment and test for vulnerabilities before even pushing a commit to a continuous integration (CI) server.
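As a quick illustration of what this local, shift-left check can look like (a sketch only; the severity threshold and hook location are assumptions you would adapt to your own workflow), a Git pre-commit hook can refuse the commit whenever Grype finds anything at or above high severity:

$ cat .git/hooks/pre-commit
#!/bin/sh
# Block the commit if Grype finds a vulnerability of high severity or worse in the working tree.
exec grype dir:. --fail-on high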

From a security perspective this is a huge win. Security vulnerabilities are caught earlier in the development process and fixed before they come up against a delivery due date. But with all of this new data being created, the problem of data overload has led to a different set of problems.

Vulnerability overload: Uncovering the signal in the noise

Like the world of application logs that came before it, at some point there is so much information that an automated process generates that finding the signal in the noise becomes its own problem.

How Anchore Enterprise manages SBOMs and vulnerability scans

Centralized management of SBOMs and vulnerability scans can end up being a massive headache. No need to come up with your own storage and data management solution. Just configure the AnchoreCTL CLI tool to automatically submit SBOMs and vulnerability scans as you run them locally. Anchore Enterprise stores all of this data for you.

On top of this, Anchore Enterprise offers data analysis tools so that you can search and filter SBOMs and vulnerability scans by version, build stage, vulnerability type, etc.

Combining local developer tooling with centralized data management creates a best of both worlds environment where engineers can still get their tasks done locally with ease but offload the arduous management tasks to a server.
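As a rough sketch of that handoff (the subcommands and flags shown are assumptions for illustration; consult the AnchoreCTL documentation for the exact syntax in your version), submitting an image for centralized analysis and then evaluating it against policy might look like:

$ anchorectl image add my-registry.example.com/app:v2.0.0 --wait
$ anchorectl image check my-registry.example.com/app:v2.0.0 --detail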

Added benefit: SBOM drift detection

Another benefit of distributed SBOM generation and vulnerability scanning is that this security check can be run at each stage of the build process. It would be nice to believe that the software written in a developer's local environment always makes it through to production in an untouched, pristine state, but this is rarely the case.

Running SBOM generation and vulnerability scanning at development, on the build server, in the artifact registry, at deploy time, and during runtime creates a full picture of where and when software is modified in the development process, simplifying post-incident investigations or, even better, catching issues well before they make it to a production environment.

This historical record is a feature of Anchore Enterprise called Drift Detection. In the same way that an HTTP cookie creates state between individual HTTP requests, Drift Detection is security metadata about security metadata (recursion, much?) that creates state between each stage of the build pipeline. Being the central store for all of the associated security metadata makes the Anchore Enterprise platform the ideal location to aggregate and scan for these particular anomalies.
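To make the underlying idea concrete, here is a conceptual sketch of drift detection (not Anchore's implementation) that diffs the package inventories of two Syft SBOMs captured at different pipeline stages, using jq:

$ syft dir:. -o json > sbom-dev.json
$ syft my-registry.example.com/app:v2.0.0 -o json > sbom-prod.json
$ jq -r '.artifacts[] | "\(.name)@\(.version)"' sbom-dev.json | sort > dev.txt
$ jq -r '.artifacts[] | "\(.name)@\(.version)"' sbom-prod.json | sort > prod.txt
$ diff dev.txt prod.txt

Any output from the final diff indicates a package that was added, removed, or changed version somewhere between the two stages.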

Policy as a lever

Being able to filter through all of the noise created by integrating security checks across the software development process creates massive leverage when searching for a particular issue, but it is still a manual process, and being a full-time investigator isn't part of the software engineer's job description. Wouldn't it be great if we could automate some, if not most, of these investigations?

I'm glad we're of like minds, because this is where policy comes into the picture. Returning to Kelsey Hightower's comparison of security tools to linters, policy is the security guide codified by your security team that lets you quickly check whether the commit you put together meets the standards for secure software.

By running these checks and automating the feedback, developers can quickly receive feedback on any potential security issues discovered in their commit. This allows developers to polish their code before it is flagged by the security check in the CI server and potentially failed. No more waiting on the security team to review your commit before it can proceed to the next stage. Developers are empowered to solve the security risks and feel confident that their code won’t be held up downstream.

Policies-as-code supports existing developer workflows

Anchore Enterprise designed its policy engine to ingest the individual policies as JSON objects that can be integrated directly into the existing software development tooling. Create a policy in the UI or CLI, generate the JSON and commit it directly to the repo.

This prevents the painful context switching of moving between different interfaces and allows engineering and security teams to reap the rewards of versioning and rollbacks that come pre-baked into tools like Git. Anchore Enterprise was designed by engineers for engineers, which made policy-as-code the obvious choice when designing the platform.
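To give a feel for the shape of such a rule (a simplified illustration based on our assumptions about a typical vulnerability rule, not a drop-in policy; generate real policies from the Anchore Enterprise UI or CLI), a rule that stops a build on high-severity vulnerabilities with a fix available might look roughly like:

{
  "gate": "vulnerabilities",
  "trigger": "package",
  "action": "stop",
  "params": [
    { "name": "package_type", "value": "all" },
    { "name": "severity_comparison", "value": ">=" },
    { "name": "severity", "value": "high" },
    { "name": "fix_available", "value": "true" }
  ]
}

Because it is plain JSON, a rule like this can live in the same repository as the application code and move through review, versioning, and rollback alongside it.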

Remediation automation integrated into the development workflow

Being alerted when a commit violates your company's security guidelines is better than pushing insecure code and finding out from a breach that you forgot to sanitize user input. But even after you get alerted to a problem, you still need to understand what is insecure and how to fix it. You can try to Google the issue or start a conversation with your security team, but that just creates more work before you can get your commit into the build pipeline. What if the answer to how to make your commit secure arrived directly in your normal workflow?

Anchore Enterprise provides remediation recommendations to help create actionable advice on how to resolve security alerts that are flagged by a policy. This helps point developers in the right direction so that they can resolve their vulnerabilities quickly and easily without the manual back and forth of opening a ticket with the security team or Googling aimlessly to find the correct solution. The recommendations can be integrated directly into GitHub Issues or Jira tickets in order to blend seamlessly into the workflows that teams depend on to coordinate work across the organization.

Wrap-Up

From the perspective of a developer it can sometimes feel like the security team is primarily a frustration that only slows down your ability to ship code. Anchore has internalized this feedback and has built a platform that allows developers to still move at DevOps speeds and do so while producing high quality, secure code. By integrating directly into developer workflows (e.g., CLI tooling, CI/CD integrations, source code repository integrations, etc.) and providing actionable feedback Anchore Enterprise removes the traditional roadblock mentality that has typically described the relationship between development and security.

If you're interested in seeing all of the features described in this blog post via a hands-on demo, check out the webinar by clicking on the screenshot below and going to the workshop hosted on GitHub.

If you’re looking to go further in-depth with how to build and secure containers in the software supply chain, be sure to read our white paper: The Fundamentals of Container Security.

Streamlining FedRAMP Compliance: How Anchore Enterprise Simplifies the Process

FedRAMP compliance is hard, not only because there are hundreds of controls that need to be reviewed and verified, but also because those controls can be interpreted and satisfied in multiple different ways. It is admirable to see an enterprise achieve FedRAMP compliance from scratch, but most of us want to achieve compliance without spending more than a year debating the interpretation of specific controls. This is where turnkey solutions like Anchore Enterprise come in.

Anchore Enterprise is a cloud-native software composition analysis platform that integrates SBOM generation, vulnerability scanning and policy enforcement into a single platform to provide a comprehensive solution for software supply chain security.

Overview of FedRAMP, who it applies to and the challenges of compliance

FedRAMP, or the Federal Risk and Authorization Management Program, is a federal compliance program that standardizes security assessment, authorization, and continuous monitoring for cloud products and services. As with any compliance standard, FedRAMP is modeled from the “Trust but Verify” security principle. FedRAMP standardizes how security is verified for Cloud Service Providers (CSP).

One of the biggest challenges with achieving FedRAMP compliance comes from sorting through the vast volumes of data that make up the standard. Depending on the level of FedRAMP compliance you are attempting to meet, this could mean complying with 125 controls in the case of a FedRAMP low certification or up to 425 for FedRAMP high compliance.

While we aren’t going to go through the entire FedRAMP standard in this blog post, we will be focusing on the container security controls that are interleaved into FedRAMP.

FedRAMP container security requirements

1) Hardened Images

FedRAMP requires CSPs to adhere to strict security standards for hardened images used by government agencies. The standard mandates that:

  • Only essential services and software are included in the images
  • Images are updated with the latest security patches
  • Configuration settings meet secure baselines
  • Unnecessary ports and services are disabled
  • User accounts are managed securely
  • Encryption is implemented
  • Logging and monitoring practices are maintained
  • Vulnerabilities are scanned for regularly and remediated promptly

If you want to go in-depth with how to create hardened images that meet FedRAMP compliance, download our white papers:

DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images

Complete Guide to Hardening Containers with STIG

2) Container Build, Test, and Orchestration Pipelines

FedRAMP sets stringent requirements for container build, test, and orchestration pipelines to protect federal agencies. These include:

  • Hardened base images (see above) 
  • Automated build processes with integrity checks
  • Strict configuration management
  • Immutable containers
  • Secure artifact management
  • Container security testing
  • Comprehensive logging and monitoring

3) Vulnerability Scanning for Container Images

FedRAMP mandates rigorous vulnerability scanning protocols for container images to ensure their security within federal cloud deployments. This includes: 

  • Comprehensive scans integrated into CI/CD pipelines 
  • Prioritize remediation based on severity
  • Re-scanning policy post-remediation 
  • Detailed audit and compliance reports 
  • Checks against secure baselines (i.e., CIS or STIG)

4) Secure Sensors

FedRAMP requires continuous management of the security of machines, applications, and systems by identifying vulnerabilities. 

  • Authorized scanning tools
  • Authenticated security scans to simulate threats
  • Reporting and remediation
  • Scanning independent of developers
  • Direct integration with configuration management to track vulnerabilities

5) Registry Monitoring

While not explicitly called out in FedRAMP as either a control or a control family, there is still a requirement that images stored in a container registry are scanned at least every 30 days if the images are deployed to production.

6) Asset Management and Inventory Reporting for Deployed Containers

FedRAMP mandates thorough asset management and inventory reporting for deployed containers to ensure security and compliance. Organizations must maintain detailed inventories including:

  • Container images
  • Source code
  • Versions
  • Configurations 
  • Continuous monitoring of container state 

7) Encryption

FedRAMP mandates robust encryption standards to secure federal information, requiring the use of NIST-approved cryptographic methods for both data at rest and data in transit. It is important that any containers that store data or move data through the system meet FIPS standards.

How Anchore helps organizations comply with these requirements

Anchore is the leading software supply chain security platform for meeting FedRAMP compliance. We have helped hundreds of organizations meet FedRAMP compliance by deploying Anchore Enterprise as the solution for achieving container security compliance. Below you can see an overview of how Anchore Enterprise integrates into a FedRAMP compliant environment. For more details on how each of these integrations meets FedRAMP compliance, keep reading.

1) Hardened Images

Anchore Enterprise integrates multiple tools in order to meet the FedRAMP requirements for hardened container images. We provide compliance policies that scan specifically for compliance with container hardening standards such as STIG and CIS. These policies were custom-built to perform the checks necessary to meet either of the relevant standards, or both.

2) Container Build, Test, and Orchestration Pipelines

Anchore integrates directly into your CI/CD pipelines via either the Anchore Enterprise API or pre-built plug-ins. This tight integration meets the FedRAMP standards that require all container images to be hardened, all security checks to be automated within the build process, and all actions to be logged and audited. Anchore's FedRAMP policy specifically checks that any container, at any stage of the pipeline, is evaluated for compliance.

3) Vulnerability Scanning for Container Images

Anchore Enterprise can be integrated into each stage of the development pipeline, offer remediation recommendations based on severity (e.g., CISA KEV vulnerabilities can be flagged and prioritized for immediate action), enforce re-scanning of containers after remediation, and produce compliance artifacts to automate reporting. This is accomplished with Anchore's container scanner, direct pipeline integration, and FedRAMP policy.

4) Secure Sensors

Anchore Enterprise’s container vulnerability scanner and Kubernetes inventory agent are both authorized scanning tools. The container vulnerability scanner is integrated directly into the build pipeline whereas the k8s agent is run in production and scans for non-compliant containers at runtime.

5) Registry Monitoring

Anchore Enterprise can continuously scan an artifact registry for potentially non-compliant containers. It is configured to watch each unique image in the registries it monitors and automatically scans images as they are pushed.

6) Asset Management and Inventory Reporting for Deployed Containers

Anchore Enterprise includes a full software component inventory workflow. It can scan all software components, generate software bills of materials (SBOMs) to keep track of each component, and centrally store all SBOMs for analysis. Anchore Enterprise's Kubernetes inventory agent performs the same service for the runtime environment.

7) Encryption

Anchore Enterprise's static STIG tool can verify that containers maintain NIST and FIPS encryption standards. Checking that each of thousands of containers encrypts data at rest and in transit is a difficult chore by hand, but it is easily automated via Anchore Enterprise.

The benefits of the shift left approach of Anchore Enterprise

Shift compliance left and prevent violations

Detect and remediate FedRAMP compliance violations early in the development lifecycle to prevent production/high-side violations that would threaten your hard-earned compliance. Use Anchore's "developer-bundle" in the integration phase to take immediate action on potential compliance violations. This ensures that vulnerabilities with fixes available and CISA KEV vulnerabilities are addressed before they make it to the registry and become non-compliance issues you have to report.

Below is an example of a workflow in GitLab of how Anchore Enterprise’s SBOM generation, vulnerability scanning and policy enforcement can catch issues early and keep your compliance record clean.
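A stripped-down version of such a GitLab job might look like the following (the stage name, image variables, and anchorectl flags are illustrative assumptions rather than a drop-in configuration):

container_security_scan:
  stage: test
  script:
    # Generate an SBOM for the freshly built image.
    - syft $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -o json > sbom.json
    # Fail the job if any high-severity (or worse) vulnerability is found.
    - grype sbom:./sbom.json --fail-on high
    # Submit the image to Anchore Enterprise for centralized policy evaluation.
    - anchorectl image add $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --wait
    - anchorectl image check $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --detail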

Automate Compliance Reporting

Automate monthly and annual reporting using Anchore's reporting features. Set these reports up to auto-generate based on FedRAMP's compliance reporting requirements.

Manage POA&Ms

Given that Anchore Enterprise centrally stores and manages vulnerability information for an organization, it can also be utilized to manage Plans of Action & Milestones (POA&Ms) for any portions of the system that aren't yet FedRAMP compliant but have a planned remediation date. Use Allowlists in Anchore Enterprise to centrally manage POA&Ms and assessed or justifiable findings.

Prevent Production Compliance Violations

Practice good production registry hygiene by utilizing Anchore Enterprise to scan stored images regularly. Anchore Enterprise's Kubernetes runtime inventory identifies images that do not meet FedRAMP compliance, or that have not been used in the last ~7 days (a company-defined window), so they can be removed from your production registry.

Conclusion

Achieving FedRAMP compliance from scratch is an arduous process and not a key differentiator for many organizations. In order to keep organizational focus on the aspects of the business that differentiate it from competitors, strategically outsourcing non-core competencies is a sound strategy. Anchore Enterprise aims to be that turnkey solution for organizations that want the benefits of FedRAMP compliance, specifically for the container security aspects, without developing the internal expertise.

By integrating SBOM generation, vulnerability scanning, and policy enforcement into a single platform, Anchore Enterprise not only simplifies the path to compliance but also enhances overall software supply chain security. Through the deployment of Anchore Enterprise, companies can achieve and maintain compliance more quickly and with greater assurance. If you’re looking for an even deeper look at how to achieve all 7 of the container security requirements of FedRAMP with Anchore Enterprise, read our playbook: FedRAMP Pre-Assessment Playbook For Containers.

From Chaos to Compliance: Revolutionizing License Management with Automation

The ascent of both containerized applications and open-source software component building blocks has dramatically escalated the complexity of software and the burden of managing all of the associated licenses. Modern applications are often built from a mosaic of hundreds, if not thousands, of individual software components, each bound by its own potential licensing pitfalls. This intricate web of dependencies, akin to a supply chain, poses significant challenges not only for legal teams tasked with mitigating financial risks but also for developers who manage these components’ inventory and compliance.

Previously, license management was primarily a manual affair: software wasn't as complex, and more of it was proprietary first-party code that didn't have the same license compliance issues. Those original license management techniques haven't kept up with the needs of modern, cloud-native application development. In this blog post, we discuss how automation is needed to address the challenge of managing licensing risk in modern software.

The Problem

Modern software is complex. This is fairly well known at this point, but in case you need a quick visual reminder, we've inserted two images to quickly reinforce this idea:

Applications can be constructed from tens, hundreds, or even thousands of individual software components, each with its own license governing how it can be used. Modern software is so complex that this endlessly nested collection of dependencies is typically referred to as a metaphorical supply chain, and an entire industry, software supply chain security, has grown up to provide security solutions for this quagmire.

This is a complexity nightmare for legal teams that are tasked with managing the financial risk of an organization. It’s also a nightmare for the developers who are tasked with maintaining an inventory of all of the software dependencies in an organization and the associated license for each component.

Let's provide an example of how this normally manifests in a software startup. Assuming business is going well, you have a product and there are customers out in the world interested in purchasing your software. During the procurement cycle, your customer's legal team will be tasked with assessing the risk of using your software. In order to create this assessment they will do a number of things, one of which is determining whether your software is safe to use from a licensing perspective. To do this, they will normally send over a document that looks like this:

As a software vendor, it will be your job to fill this out so that legal can approve the purchasing of your software and you can take that revenue to the bank.

Let's say you manually fill this entire spreadsheet out. A developer would need to go through each dependency utilized in the software that you sell and "scan" the codebase for all of the licensing metadata: component name, version number, OSS license (e.g., MIT, GPL, BSD), and so on. It would take some time and be quite tedious, but it's not an insurmountable task. In the end they would produce something like this:

This is fine in a world of once-in-a-while deployments and updates. It becomes exhausting in the world of continuous integration and delivery that the DevOps movement has created. Imagine having to produce a new document like this every time you push to production; DevOps has enabled some teams to push to production multiple times per day. Requiring a manually created document for all of your customers' legal teams on each release would eliminate most of the velocity gains of moving to a DevOps model.

The Solution

The solution to this problem is automating the license discovery process. If software can scan your codebase and produce a document that exhaustively covers all of the building blocks of your application, this unlocks the potential to have your DevOps cake and eat it too.

To this end, Anchore has created and open sourced a tool that does just this.

Introducing Grant: Automated License Discovery

Grant is an open-source command line tool that scans and discovers the software licenses of all dependencies in a piece of open-source software. If you want to get a quick primer on what you can do with Grant, read our announcement blog post. Or if you’re ready to dive straight in, you can view all of the Grant documentation on its GitHub repo.

How does Grant Integrate into my Development Workflow?

As a software license scanner, Grant operates on a software inventory artifact like an SBOM or directly on a container image. Let's continue with the example from above to bring this to life. In the legal review example above, you are a software developer who has been tasked with manually searching for and finding all of the OSS license files to provide to your customer's legal team for review.

Not wanting to do this by hand, you instead open up your CLI and install Grant. From there you navigate to your artifact registry and pull down the latest image of your application’s production build. Right before you run the Grant license scan on your production container image, you notice that your team has been following software supply chain best practices and has already created an SBOM with a popular open-source tool called Syft. Instead of running the container image through Grant, which could take some time, you pipe in the SBOM, which is already a JSON object describing the entire dependency inventory of the application. A few seconds later you have a full report of all of the licenses in your application.

From here you export the full component inventory with the license enrichment into a spreadsheet and send this off to the customer’s legal team for review. A process that might have taken a full day or even multiple days to do by hand was finished in seconds with the power of open-source tooling.
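If you’re curious what this kind of automation looks like under the hood, below is a minimal sketch (illustrative only, not Grant itself) that walks a CycloneDX-style JSON SBOM, such as one generated by Syft, and writes the component, version, and license inventory out to a CSV for a legal team. The field names assume the CycloneDX layout, so adjust them for your SBOM format of choice.

```python
#!/usr/bin/env python3
"""Minimal sketch: extract component/license pairs from a CycloneDX-style JSON SBOM
and write them to a CSV. Field names assume the CycloneDX layout; adjust as needed."""
import csv
import json
import sys


def licenses_for(component: dict) -> str:
    names = []
    for entry in component.get("licenses", []):
        # CycloneDX allows either a named/ID'd license or a free-form expression.
        lic = entry.get("license", {})
        names.append(lic.get("id") or lic.get("name") or entry.get("expression", "unknown"))
    return "; ".join(names) or "unknown"


def main(sbom_path: str, csv_path: str) -> None:
    with open(sbom_path) as f:
        sbom = json.load(f)

    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["component", "version", "licenses"])
        for component in sbom.get("components", []):
            writer.writerow([component.get("name"), component.get("version"), licenses_for(component)])


if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```

Run it as, for example, `python sbom_to_license_csv.py sbom.cdx.json licenses.csv` (file names here are hypothetical) and you have the spreadsheet described above without any manual digging.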

Automating License Compliance with Policy

Grant is an amazing tool that can automate much of the most tedious work of protecting an organization from legal consequences, but when used by a developer as a CLI tool there is still a human in the loop, which can cause traffic jams. With this in mind, our OSS team made sure to launch Grant with support for policy-based filters that can automate the execution and alerting of license scanning.

Let’s say that your organization’s legal team has decided that using any GPL components in 1st-party software is too risky. You can write a policy that fails any software that includes GPL-licensed components and integrate the policy check as early as the staging CI environment, or even let developers run Grant in a one-off fashion as they prototype an initial idea. With that in place, the potential for legally risky dependencies infiltrating production software drops precipitously.
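As a minimal, illustrative sketch of the general shape of such a CI license gate (this is not Grant’s policy format; the deny list and field names are assumptions), the script below reads a CycloneDX-style SBOM, checks each component’s license against a deny list, and exits non-zero so the pipeline stage fails:

```python
"""Minimal sketch of a CI license gate: fail the pipeline when any SBOM component
carries a license on the deny list. Illustrative only; not Grant's policy format."""
import json
import sys

# Licenses legal has ruled out for 1st-party software (illustrative).
DENIED = ("GPL-2.0", "GPL-3.0", "AGPL-3.0")


def main(sbom_path: str) -> int:
    with open(sbom_path) as f:
        components = json.load(f).get("components", [])

    violations = []
    for component in components:
        for entry in component.get("licenses", []):
            lic = entry.get("license", {}).get("id", "") or entry.get("expression", "")
            if any(lic.startswith(denied) for denied in DENIED):
                violations.append((component.get("name"), component.get("version"), lic))

    for name, version, lic in violations:
        print(f"DENY {name}@{version}: {lic}")
    return 1 if violations else 0  # a non-zero exit code fails the CI stage


if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```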

How Anchore Can Help

Grant is an amazing tool that automates the license compliance discovery process. This is great for small projects or software that does irregular releases. Things get much more complicated in the cloud-native, continuous integration/deployment paradigm of DevSecOps, where there are new releases multiple times per day. Having Grant generate the license data is great, but suddenly you have an explosion of data that itself needs to be managed.

This is where Anchore Enterprise steps in to fill the gap. The Anchore Enterprise platform is an end-to-end data management solution that incorporates all of Anchore’s open-source tooling for generating artifacts like SBOMs, vulnerability scans and license scans. It also manages the massive amount of data that a high-speed DevSecOps pipeline creates as part of its regular operation, and on top of that it applies a highly customizable policy engine that can automate decision-making around the insights derived from those software supply chain artifacts.

Want to make sure that no GPL-licensed OSS components ever make it into your SDLC? No problem. Grant will uncover all components that carry this license, Anchore Enterprise will centralize these scans, and the Anchore policy engine will alert the developer who just integrated a new GPL-licensed OSS component into their development environment that they need to find a different component or they won’t be able to push their branch to staging. The shift-left principle of DevSecOps can be applied to LegalOps as well.

Conclusion

The advent of tools like Grant, an open-source license discovery solution developed by Anchore, marks a significant advancement in the realm of open-source license management. By automating the tedious process of license verification, Grant not only enhances operational efficiency but also integrates seamlessly into continuous integration/continuous delivery (CI/CD) environments. This capability is crucial in modern DevOps practices, which demand frequent and fast-paced updates. Grant’s ability to quickly generate comprehensive licensing reports transforms a potentially day-long task into a matter of seconds.

Anchore Enterprise extends this functionality by managing the deluge of data from continuous deployments and integrating a policy engine that automates compliance decisions. This ecosystem not only streamlines the process of license management but also empowers developers and legal teams to preemptively address compliance issues, thereby embedding legal safeguards directly into the software development lifecycle. This proactive approach ensures that as the technological landscape evolves, businesses remain agile yet compliant, ready to capitalize on opportunities without being bogged down by legal liabilities.

If you’re interested in hearing about the topics covered in this blog post directly from Anchore’s CTO, Dan Nurmi, and the maintainer of Grant, Christopher Phillips, you can watch the on-demand webinar here. Or join the Anchore Community Discourse forum to speak with our team directly. We look forward to hearing from you and reviewing your pull requests!

An Outline for Getting Up to Speed on the DoD Software Factory

This blog post is meant as a gateway to all things DoD software factory. We highlight content from across the Anchore universe that can help anyone get up to speed on what a DoD software factory is, why to use one, and how to build one. Treat it as an index to be scanned for the topics that are most interesting to you, with links to more detailed content.

What is a DoD Software Factory?

The short answer is a DoD Software Factory is an implementation of the DoD Enterprise DevSecOps Reference Design. A slightly longer answer comes from our DoD software factory primer:

A Department of Defense (DoD) Software Factory is a software development pipeline that embodies the principles and tools of the larger DevSecOps movement with a few choice modifications that conform to the extremely high threat profile of the DoD and DIB.

Note that the diagram below looks like a traditional DevOps pipeline. The difference being that there are security controls layered into this environment that automate software component inventory, vulnerability scanning and policy enforcement to meet the requirements to be considered a DoD software factory.

Got the basics down? Go deeper and learn how Anchore can help you put the Sec into DevSecOps Reference Design by reading our DoD Software Factory Best Practices white paper.

Why do I want to utilize a DoD Software Factory?

For DoD programs, the primary reason to utilize a DoD software factory is that it is a requirement for achieving a continuous authorization to operate (cATO). The cATO standard specifically calls out that software must be developed in a system that meets the DoD Enterprise DevSecOps Reference Design. A DoD software factory is the generic implementation of this design standard.

For Federal Service Integrators (FSIs), the biggest reason to utilize a DoD software factory is that it is a standard approach to meeting DoD compliance and certification standards. By meeting a standard, such as CMMC Level 2, you expand your opportunity to work with DoD programs.

Continuous Authorization to Operate (cATO)

If you’re looking for more information on cATO, Anchore has written a comprehensive guide on navigating the cATO process that can be found on our blog:

DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images

The shift from traditional software delivery to DevSecOps in the Department of Defense (DoD) represents a crucial evolution in how software is built, secured, and deployed with a focus on efficiencies and speed. Our white paper advises on best practices that are setting new standards for security and efficiency in DoD software factories.

Cybersecurity Maturity Model Certification (CMMC)

The CMMC is the certification standard that is used by the DoD to vet FSIs from the defense industrial base (DIB). This is the gold standard for demonstrating to the DoD that your organization takes security seriously enough to work with the highest standards of any DoD program. The security controls that the CMMC references when determining certification are outlined in NIST 800-171. There are 17 total families of security controls that an organization has to meet in order to achieve CMMC Level 2 certification, and a DoD software factory can help check a number of these off the list.

The specific families of controls that a DoD software factory helps meet are:

  • Access Control (AC)
  • Audit and Accountability (AU)
  • Configuration Management (CM)
  • Incident Response (IR)
  • Maintenance (MA)
  • Risk Assessment (RA)
  • Security Assessment and Authorization (CA)
  • System and Communications Protection (SC)
  • System and Information Integrity (SI)
  • Supply Chain Risk Management (SR)

If you’re looking for more information on how to apply software supply chain security to meet the CMMC, Anchore has published two blog posts on the topic:

NIST SP 800-171 & Controlled Unclassified Data: A Guide in Plain English

  • NIST SP 800-171 is the canonical list of security controls for meeting CMMC Level 2 certification. Anchore has broken down the entire 800-171 standard to give you an easy to understand overview.

Automated Policy Enforcement for CMMC with Anchore Enterprise

  • Policy Enforcement is the backbone of meeting the monitoring, enforcement and reporting requirements of the CMMC. In this blog post, we break down how Anchore Federal can meet a number of the controls specifically related to software supply chain security that are outlined in NIST 800-171.

How do I meet the DevSecOps Reference Design requirements?

The easy answer is by utilizing a DoD Software Factory Managed Service Provider (MSP). Below in the User Stories section, we deep dive into the US Air Force’s Platform One given they are the preeminent DoD software factory.

The DIY answer involves carefully reading and implementing the DoD Enterprise DevSecOps Reference Design. This document is massive but there are a few shortcuts you can utilize to help expedite your journey. 

Container Hardening

Deciding to utilize software containers in a DevOps pipeline is almost a foregone conclusion at this point. What is less well known is how to secure your containers, especially to meet the standards of a DoD software factory.

The DoD has published two guides that can help with this. The first is the DoD Container Hardening Guide, and the second is the Container Image Creation and Deployment Guide. Both name Anchore Federal as an approved container hardening scanner.

Anchore has published a number of blogs and even a white paper that condense the information in both of these guides into more digestible content. See below:

Container Security for U.S. Government Information Systems

  • This comprehensive white paper breaks down how to achieve a container build and deployment system that is hardened to the standards of a DoD software factory.

Enforcing the DoD Container Image and Deployment Guide with Anchore Federal

  • This blog post is great for those who are interested to see how Anchore Federal can turn all of the requirements of the DoD Container Hardening Guide and the Container Image Creation and Deployment Guide into an easy button.

Deep Dive into Anchore Federal’s Container Image Inspection and Vulnerability Management

  • This blog post deep dives into how to utilize Anchore Federal to find container vulnerabilities and alert or report on whether they are violating the security compliance required to be a DoD software factory.

Policy-based Software Supply Chain Security and Compliance

The power of a policy-based approach to software supply chain security is that it can be integrated directly into a DevOps pipeline and automate a significant amount of alerting, reporting and enforcement work. The blog posts below go into depth on how this automated approach to security and compliance can uplevel a DoD software factory:

A Policy Based Approach to Container Security & Compliance

  • This blog details how a policy-based platform works and how it can benefit both software supply chain security and compliance. 

The Power of Policy-as-Code for the Public Sector

  • This follow-up to the post above shows how the policy-based security platform outlined in the first blog post can have significant benefits to public sector organizations that have to focus on both internal information security and how to prove they are compliant with government standards.

Benefits of Static Image Inspection and Policy Enforcement

  • Getting a bit more technical this blog details how a policy-based development workflow can be utilized as a security gate with deployment orchestration systems like Kubernetes.

Getting Started With Anchore Policy Bundles

  • An even deeper dive into what is possible with the policy-based security system provided by Anchore Enterprise, this blog gets into the nitty-gritty on how to configure policies to achieve specific security outcomes.

Unpacking the Power of Policy at Scale in Anchore

  • This blog shows how a security practitioner can extend the security signals that Anchore Enterprise collects with the assistance of a more flexible data platform like New Relic to derive more actionable insights.

Security Technical Implementation Guide (STIG)

The Security Technical Implementation Guides (STIGs) are fantastic technical guides for configuring off-the-shelf software to DoD hardening standards. Anchore, being a company focused on making security and compliance as simple as possible, has written a significant amount about how to utilize STIGs and achieve STIG compliance, especially for container-based DevSecOps pipelines, which are exactly the kind of software development environments that meet the standards of a DoD software factory. View our previous content below:

4 Ways to Prepare your Containers for the STIG Process

  • In this blog post, we give you four quick tips to help you prepare for the STIG process for software containers. Think of this as the amuse bouche to prepare you for the comprehensive white paper that comes next.

Navigating STIG Compliance for Containers

  • As promised, this is the extensive document that walks you through how to build a DevSecOps pipeline based on containers that is both high velocity and secure. Perfect for organizations that are aiming to roll their own DoD software factory.

User Stories

Anchore has been supporting FSIs and DoD programs to build DevSecOps programs that meet the criteria to be called a DoD software factory for the past decade. We can write technical guides and best practices documents till time ends but sometimes the best lessons are learned from real-life stories. Below are user stories that help fill in all of the details about how a DoD software factory can be built from scratch:

DoD’s Pathway to Secure Software

  • Join Major Camdon Cady of Platform One and Anchore’s VP of Security, Josh Bressers as they discuss the lessons learned from building a DoD software factory from the ground up. Watch this on-demand webinar to get all of the details in a laid back and casual conversation between two luminaries in their field.

Development at Mach Speed

  • If you prefer a written format over video, this case study highlights how Platform One utilized Red Hat OpenShift and Anchore Federal to build their DoD software factory that has become the leading Managed Service Provider for DoD programs.

Conclusion

Similar to how Cloud has taken over the infrastructure discussion in the enterprise world, DoD software factories are quickly becoming the go-to solution for DoD programs and the FSIs that support them. Delivering on the promise of the DevOps movement of high velocity development without compromising security, a DoD software factory is the one-stop shop to upgrade your software development practice into the modern age and become compliant as a bonus! If you’re looking for an easy button to infuse your DevOps pipeline with security and compliance without the headache of building it yourself, take a look at Anchore Federal and how it helps organizations layer software supply chain security into a DoD software factory and achieve a cATO.

Navigating the NVD Quagmire

The global cybersecurity community has been in a state of uncertainty since the National Vulnerability Database (NVD) degraded its service starting in mid-February. There has been a lot of coverage of this incident this month, and Anchore has been at the center of much of it. If you haven’t been keeping up, this blog post recaps what has happened so far and how the community has been responding to the incident.

Our VP of Security, Josh Bressers, has been leading the charge to educate and organize the community: first with his Open Source Security podcast, which goes through what is happening with NVD and why it is important, and then, last week, in a livestream with Chainguard co-founder Dan Lorenc on the Resilient Cyber Show, hosted by Chris Hughes, on the implications of the current delay in NVD service.

We’ve condensed the topics from these resources into a blog post that will cover the issues created by the delay in NVD service, a background on what has happened so far, a potential open-source solution to the problem and a call to action for advocacy. Continue reading for the good stuff.

The problem

Federal agencies mandate that NVD be used as the primary source of truth even if higher quality data sources are available. This mainly comes down to the fact that the severity scores, meaning the Common Vulnerability Scoring System (CVSS), determine when an agency or organization is out of compliance with a federal security standard. Given that compliance standards are created by the US government, only NVD can score a vulnerability and determine the appropriate action to stay in compliance.

That’s where the problem starts to come in, you’ve got a whole bunch of government agencies on one hand saying, ‘you must use this data’. And then another government agency that says, “No, you can’t rely on this for anything”. This leaves folks working with the government in a bit of a pickle.

–Dan Lorenc, Co-Founder, Chainguard

If NVD isn’t assigning severities to vulnerabilities, it’s not clear what that means for organizations trying to maintain compliance, and they could be exposing themselves to significant risk. For example, high-severity vulnerabilities could be published that organizations are unaware of because this vital review and scoring process has been removed.

Background on NVD and the current state of affairs

NVD is the canonical source of truth for software vulnerabilities for the federal government, specifically for 10+ federal compliance standards. It has also become a go-to resource for the worldwide security community even if individual organizations in the wider community aren’t striving to meet a United States compliance standard.

NVD adds a number of enrichments to CVE data, but two are of particular importance: first, it adds a severity score to all CVEs, and second, it adds information about which versions of the software are impacted by each CVE. The National Institute of Standards and Technology (NIST) has been providing this service to the security community for over 20 years through the NVD. That changed last month:

Timeline

  • Feb 12: NVD dramatically reduces the number of CVEs that are being enriched
  • Feb 15: NVD posts a message about the delay of enrichment on the NVD website

Read a comprehensive background in our original blog post, National Vulnerability Database: Opaque changes and unanswered questions.

Developing an Open-Source Solution

The Anchore team developed and maintains an open-source vulnerability scanner called Grype that utilizes the NVD as one of many vulnerability feeds as well as a software supply chain security platform called Anchore Enterprise that incorporates Grype. Given that both products use data from NVD, it was particularly important for Anchore to engage in the current crisis.

While there is nothing that Anchore can do about the missing severity scores, the other missing enrichment highlighted above is the mapping of which software versions are impacted by a CVE, i.e., the Common Platform Enumeration (CPE). This matching logic ends up being the more important signal during impact analysis because it is an objective measure of impact, whereas severity scoring can be debated (and is, at length).
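To make the distinction concrete, here is a minimal sketch of version-based impact matching, with made-up advisory data rather than real NVD/CPE parsing: given the version range an advisory marks as affected, a simple comparison tells you whether an installed package is actually impacted, independent of any severity debate. It uses the `packaging` library (`pip install packaging`).

```python
"""Minimal sketch of version-based impact matching with made-up advisory data."""
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Hypothetical advisory: the fix landed in 2.17.0, so everything below is affected.
affected = SpecifierSet("<2.17.0")

# Hypothetical inventory pulled from an SBOM.
installed = {"example-lib": "2.14.1", "other-lib": "3.1.0"}

for name, version in installed.items():
    impacted = Version(version) in affected
    print(f"{name} {version}: {'IMPACTED' if impacted else 'not impacted'}")
```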

Given Anchore’s history with the open-source software community, creating an OSS project to fill a gap in the NVD enrichment seemed the logical choice. The goal of going the OSS route is to leverage the transparent process and rapid iteration that comes from building software publicly. Anchore is excited to collaborate with the community to:

  • Ingest CVE data
  • Analyze CVEs
  • Improve the CVE-to-versioning mapping process 

Everyone is being crushed by the unrelenting influx of vulnerabilities. It’s not just NVD. It’s not one organization. We can either sit in our silos and be crushed to death or we can work together.

–Josh Bressers, VP of Security at Anchore

If you’re looking to utilize this data and software as a backfill while NVD continues to delay analysis, or if you want to contribute to the project, please join us on GitHub.

Cybersecurity Awareness and Advocacy

It might seem strange that the cybersecurity community would need to convince the US government that investing in the cybersecurity ecosystem is a net positive investment given that the federal government is the largest purchaser of software in the world and is probably the largest target for threat actors. But given how NIST has decided to degrade the service of NVD and provide only opaque guidance on how to fill the gap in the meantime, it doesn’t appear that the right hand is talking with the left.

Whether the federal government intended to or not, by requiring that organizations and agencies utilize NVD in order to meet a number of federal compliance standards, it effectively became the authority on the severity of software vulnerabilities for the global cybersecurity ecosystem. By providing a valuable and reliable service to the community, the US garnered the trust of the ecosystem. The current state of NVD and the manner in which it was rolled out has degraded that trust. 

It is unknown whether the US will cede its authority to another organization (the EU may attempt to fill this vacuum with its own authoritative database), but in the meantime, advocacy for cybersecurity awareness within the government is paramount. It is up to the community to create the pressure that demonstrates the urgency of addressing the current strategy around a vital community resource like NVD.

Conclusion

Anchore is committed to keeping the community up-to-date on this incident as it unfolds. To stay informed, be sure to follow us on LinkedIn or Twitter/X.

If you’d like to watch the livestream in all its glory, click on the image below to go to the VOD.

Also, if you’re looking for more in-depth coverage of the NVD incident, Josh Bressers has a security podcast called, Open Source Security that covers the NVD incident and the history of NVD.

Spring Webinar Update: Expand Your Knowledge with Our Expert-Led Sessions

In our continuous effort to bring valuable insights and tools to the world of software supply chain security, we are thrilled to announce two upcoming webinars and one recently held webinar now available for on-demand access. Whether you’re looking to enhance your understanding of software security, explore open-source tools to automate OSS licensing management, or navigate the complexities of compliance with federal standards, our expert-led webinars are designed to equip you with the knowledge you need. Here’s what’s on the agenda:

Tracking License Compliance Made Easy: Intro to Grant (OSS)

Date: Mar 26, 2024 at 2pm EDT  (11am PDT)

Join us as Anchore CTO Dan Nurmi and Grant maintainer Christopher Phillips discuss the challenges of managing software licenses within production environments, highlighting the complexity and ongoing nature of tracking open-source licenses.

They will introduce Grant, an open-source tool designed to alleviate the burden of OSS license inspection by demonstrating how to scan for licenses within SBOMs or container images, simplifying a typically manual process. The session will cover the current landscape of software licenses, the difficulties of compliance checks, and a live demo of Grant’s features that automate this previously laborious process.

Software Security in the Real World with Kelsey Hightower and Dan Perry

Date: April 4th, 2024 at 2pm EDT  (11am PDT)

In our upcoming webinar, experts Kelsey Hightower and Dan Perry will delve into the nuances of securing software in cloud-native, containerized applications. This in-depth session will explore the criteria for vulnerability testing success or failure, offering insights into security testing and compliance for modern software environments. 

Through a live demonstration of Anchore Enterprise, they’ll provide a comprehensive look at visibility, inspection, policy enforcement, and vulnerability remediation, equipping attendees with a deeper understanding of software supply chain security, proactive security strategies, and practical steps to embark on a software security journey. 

The discussion will continue after the webinar on X/Twitter with Kelsey Hightower.

FedRAMP and SSDF Compliance: How to Sell to the Federal Government

This webinar explores how Anchore aids in navigating the complex compliance requirements for selling software to the federal government, focusing on FedRAMP vulnerability scanning and SSDF compliance. Led by Josh Bressers, VP of Security, and Connor Wynveen, Senior Solutions Engineer, it details how to evaluate FedRAMP controls for software containers and adhere to SSDF guidelines.

Key takeaways include strategies to streamline FedRAMP and SSDF compliance efforts, leveraging SBOMs for efficiency, the critical role of automated vulnerability scans, and how Anchore’s policy pack can assist organizations in meeting compliance standards.

Accessing the Webinars

Don’t miss out on the opportunity to expand your knowledge and skills with these sessions. To register for the upcoming webinars or to access the on-demand webinar, visit our webinar landing page. Whether you’re looking to stay ahead of the curve in software security, explore funding opportunities for your open-source projects, or break into the federal market, our webinars are designed to provide you with the insights and tools you need.

We look forward to welcoming you to our upcoming webinars. Stay informed, stay ahead!

Introducing VIPERR: The First Software Supply Chain Security Framework for All

Today Anchore announces the VIPERR software supply chain security framework. This framework distills our lessons learned from supporting the most challenging software supply chain environments across government agencies and the defense industrial base. The framework is a blueprint that organizations can implement to reliably create secure software supply chains with the least possible lift.

Previously, security teams had to wade through massive amounts of literature on software supply chain security and spend countless hours of their team’s time digesting those learnings into concrete processes and controls that integrated with their specific software development process. This typically absorbed massive amounts of time and resources, and it wasn’t always clear at the end that an organization’s security had improved significantly.

Now organizations can utilize the VIPERR framework as a trusted industry resource to confidently improve their security posture and reduce the possibility of a breach due to a supply chain incident without the lengthy research cycle that frequently comes before the implementation phase of a software supply chain solution.

If you’re interested in seeing how Anchore works with customers to apply the framework via the Anchore Enterprise platform, take the free guided walkthrough of the VIPERR framework. Alternatively, you can view our on-demand webinar for a thorough walkthrough of the framework. Finally, if you would like a more hands-on experience with the VIPERR framework, you can try our Anchore Enterprise Free Trial.

Frequently Asked Questions

What is the VIPERR framework?

VIPERR is a free software supply chain security framework that Anchore created for organizations to evaluate and improve the security posture of their software supply chain. VIPERR stands for visibility, inspection, policy enforcement, remediation, and reporting. 

While working alongside developers and security teams within some of the most challenging architectures and threat landscapes, Anchore field engineers developed the VIPERR framework to both contextualize lessons learned and prevent live threats. VIPERR is meant to be a practical self-assessment playbook that organizations can regularly reuse to evaluate and harden their software supply chain security posture. By following this guide, numerous Fortune 500 enterprises and top federal agencies have transformed their software supply chain security posture and become hard targets for advanced persistent threats.

Why did Anchore create the VIPERR framework?

There are already a number of frameworks that have been developed to help organizations improve their software supply chain security but most of them focus on giving general guidance that is flexible enough that most specific implementations of the guidance will yield compliance. This is great for general standards because it allows organizations to find the best fit for their environment, but by keeping guidance general, it is difficult to always know the best specific implementation that delivers real-world results. The VIPERR framework was designed to fill this gap.

VIPERR is a framework for fulfilling software supply chain security compliance standards that is opinionated about how to achieve the controls of most of the popular standards. It can be paired with Anchore’s turnkey offering, Anchore Enterprise, so that organizations can opt for a solution that accomplishes the entire VIPERR framework without having to build a system from scratch.

Access an interactive 50 point checklist or a pdf version to guide you through each step of the VIPERR framework, or share it with your team to introduce the model for learning and awareness.

How do I begin identifying the gaps in my software supply chain security program? 

“I have no budget. My boss doesn’t think it’s a priority. I lack resources.” These are all common refrains when we speak with security teams working with us via our open source community or commercial enterprise offering. There is a ton of guidance available between SSDF, SLSA, NIST, and S2C2F. A lot of this is contextualized in a manner difficult to digest. As mentioned in the previous question VIPERR was created to be highly actionable by finding the right balance between giving guidance that is flexible and providing opinions that reduce options to help organizations make decisions faster.

The VIPERR framework will be available as a formatted 50-point self-assessment checklist in the coming weeks; check back here for updates. By completing the forthcoming checklist you will produce a prioritized list of action items to harden your organization’s software supply chain security with the least amount of effort.

How do I build a solution that closes the gaps that VIPERR uncovers?

As stated, VIPERR is a framework, not a solution. Anchore has worked with companies that have implemented VIPERR by building an in-house solution from a collection of open source tools (e.g., Syft and Grype) or by combining multiple security tools. If you want to get an idea of the components involved in building a solution by self-hosting open-source tools and tying all of these systems together with first-party code, we wrote about that approach here.

If I don’t want to build a solution, are there any turnkey solutions available?

Yes. Anchore Enterprise was designed as a turnkey solution to implement the VIPERR framework. Anchore Enterprise also automates the 50 security controls of the framework by integrating directly into an organization’s SDLC toolset (i.e., developer environments, CI/CD build pipelines, artifact registry and production environments). This provides the ability for organizations to know at any point in time if their software supply chain has been compromised and how to remediate the exploit.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

SBOMs & Vulnerability Scanners: Better Together

In the world of software development, two mega-trends have emerged in the past decade that have reshaped the industry: first, the practice of building applications on a foundation of open-source software components and, second, the adoption of DevOps principles to automate the build and delivery of software. While these innovations have accelerated the pace of software getting into the hands of users, they’ve also introduced new challenges, particularly in the realm of security.

As software teams race to deliver applications at breakneck speeds, security often finds itself playing catch-up, leading to potential vulnerabilities and risks. But what if there was a way to harmonize rapid software delivery with robust security measures? 

In this post, we’ll explore the tension between engineering and security, the transformative role of Software Bill of Materials (SBOMs), and how modern approaches to software composition analysis (SCA) are paving the way for a secure, efficient, and integrated software development lifecycle.

The rise of open-source software ushered in an era where developers had innumerable off-the-shelf components to construct their applications from. These building blocks eliminated the need to reinvent the wheel, allowing developers to focus on innovating on top of the already existing foundation that had been built by others. By leveraging pre-existing, community-tested components, software teams could drastically reduce development time, ensuring faster product releases and more efficient engineering cycles. However, this boon also brought about a significant challenge: blindspots. Developers often found themselves unaware of all the ingredients that made up their software.

Enter the second mega-trend: DevOps tools, with special emphasis on CI/CD build pipelines. These tools promised (and delivered) faster, more reliable software testing, building, and delivery. Not only was the creation of software accelerated via open-source components, but the build process of manufacturing the software into a state that a user could consume also sped up. But, as Uncle Ben reminds us, “with great power comes great responsibility”. The accelerated delivery meant that any security issues, especially those lurking in the blindspots, found their way into production at the new pace enabled by open-source software components and DevOps tooling.

The Strain on Legacy Security Tools in the Age of Rapid Development

This double shot of productivity boosts for engineering teams began to strain their security-oriented counterparts. The legacy security tools that security teams had been relying on were designed for a different era, when software development lifecycles were measured in quarters or years rather than weeks or months. Because of this, those tools could afford to be leisurely in their process.

The tools originally developed to ensure that an application’s supply chain was secure were called software composition analysis (SCA) platforms. They were initially built to scan open source software for licensing information, to prevent corporations from running into legal issues as their developers used open-source components. They scanned every software artifact in its entirety, a painstakingly slow process, especially if you wanted to run a scan during every step of software integration and delivery (e.g., source, build, stage, delivery, production).

As the wave of open-source software and DevOps principles took hold, a tug-of-war began to form between security teams, who wanted thoroughness, and software teams, who were racing against time. Organizations found themselves at a crossroads, choosing between slowing down software delivery to manage security risks or pushing ahead and addressing security issues reactively.

SBOMs to the Rescue!

But what if there was a way to bridge this gap? Enter the Software Bill of Materials (SBOM). An SBOM is essentially a comprehensive list of components, libraries, and modules that make up a software application. Think of it as an ingredient list for your software, detailing every component and its origin.

In the past, security teams had to scan each software artifact during the build process for vulnerabilities, a method that was not only time-consuming but also less efficient. With the sheer volume and complexity of modern software, this approach was akin to searching for a needle in a haystack.

SBOMs, on the other hand, provide a clear and organized view of all software components. This clarity allows security teams to swiftly scan their software component inventory, pinpointing potential vulnerabilities with precision. The result? A revolution in the vulnerability scanning process. Faster scans meant more frequent checks. And with the ability to re-scan their entire catalog of applications whenever a new vulnerability is discovered, organizations are always a step ahead, ensuring they’re not just reactive but proactive in their security approach.

In essence, organizations could now enjoy the best of both worlds: rapid software delivery without compromising on security. With SBOMs, the balance between speed and security isn’t just achievable; it’s the new standard.

How do I Implement an SBOM-powered Vulnerability Scanning Program?

Okay, we have the context (i.e., the history of how the problem came about) and we have a solution. The next question is how you bring this all together to integrate this vision of the future with the reality of your software development lifecycle.

Below we outlined the high-level steps of how an organization might begin to adopt this solution into their software integration and delivery processes:

  1. Research and select the best SBOM generation and vulnerability scanning tools. (Hint: We have some favorites!)
  2. Educate your developers about SBOMs. Need guidance? Check out our detailed post on getting started with SBOMs.
  3. Store the generated SBOMs in a centralized repository.
  4. Create a system to pull vulnerability feeds from reputable sources. If you’re looking for a way to get started here, read our post on how to get started.
  5. Regularly scan your catalog of SBOMs for vulnerabilities, storing the results alongside the SBOMs (a minimal sketch of this step follows the list).
  6. Integrate your SBOM generation and vulnerability scanning tooling into your CI/CD build pipeline to automate this process.
  7. Implement a query system to extract insights from your catalog of SBOMs.
  8. Create a tool to visualize your software supply chain’s security health.
  9. Create a system to alert on newly discovered vulnerabilities in your application ecosystem.
  10. Integrate a policy enforcement system into your developers’ workflows, CI/CD pipelines, and container orchestrators to automatically prevent vulnerabilities from leaking into build or production environments.
  11. Maintain the entire system and continue to improve on it as new vulnerabilities are discovered, new technologies emerge and development processes evolve.
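As a minimal sketch of steps 5 and 6, assuming your SBOMs are stored as JSON files in a local directory and Grype is installed on the build runner (adapt paths, naming, and tooling to your own environment), the script below re-scans every stored SBOM and keeps the vulnerability report next to it:

```python
"""Minimal sketch: re-scan every stored SBOM with Grype and store the results.
Paths and naming conventions are assumptions; swap in your own SBOM store."""
import json
import pathlib
import subprocess

SBOM_STORE = pathlib.Path("sbom-store")      # one JSON SBOM per application build
REPORT_STORE = pathlib.Path("vuln-reports")  # scan results stored alongside the SBOMs


def scan(sbom_path: pathlib.Path) -> dict:
    # `grype sbom:<path> -o json` scans an existing SBOM instead of re-analyzing the image.
    result = subprocess.run(
        ["grype", f"sbom:{sbom_path}", "-o", "json"],
        check=True,
        capture_output=True,
        text=True,
    )
    return json.loads(result.stdout)


def main() -> None:
    REPORT_STORE.mkdir(exist_ok=True)
    for sbom_path in sorted(SBOM_STORE.glob("*.json")):
        report = scan(sbom_path)
        out_path = REPORT_STORE / sbom_path.name
        out_path.write_text(json.dumps(report, indent=2))
        print(f"{sbom_path.name}: {len(report.get('matches', []))} findings -> {out_path}")


if __name__ == "__main__":
    main()
```

Schedule a job like this to run on every build and whenever the vulnerability database updates, and you have the beginnings of step 6’s CI/CD integration.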

Alternatively, consider investing in a comprehensive platform that offers all these features, either as a SaaS or on-premise solution instead of building this entire system yourself. If you need some guidance trying to determine whether it makes more sense to build or buy, we have put together a post outlining the key signs to watch for when considering when to outsource this function.

How Anchore can Help you Achieve your Vulnerability Scanning Dreams

The previous section is a bit tongue-in-cheek but it is also a realistic portrait of how to build a scalable vulnerability scanning program in the Cloud Native-era. Open-source software and container pipelines have changed the face of the software industry for the better but as with any complex system there are always unintended side effects. Being able to deliver software more reliably at a faster cadence was an amazing step forward but doing it securely got left behind. 

Anchore Enterprise was built specifically to address this challenge. It is the manifestation of the list of steps outlined in the previous section on how to build an SBOM-powered software composition analysis (SCA) platform. Integrating into your existing DevOps tools, Anchore Enterprise is a turnkey solution for the management of software supply chain security. If you’d rather buy than build and save yourself the blood, sweat and tears that goes into designing an end-to-end SCA platform, we’re looking forward to talking to you.
If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Detecting Exploits within your Software Supply Chain

SBOMs. What are they good for? At Anchore, we see SBOMs (software bills of materials) as the foundation of an application’s supply chain hierarchy. Upon this foundation you can build a lot of powerful features, such as the ability to detect vulnerabilities in your open source dependencies before they are pushed to production. An unintended side effect of giving users the power to easily see deeply into their application’s dependencies and detect the vulnerabilities in those dependencies is that there can sometimes be hundreds of vulnerabilities discovered in the process.

We’ve seen customer applications that generate 400+ known vulnerabilities! This creates an information overload that typically ends in the application developer ignoring the results because it is too much effort to triage and remediate each one. Knowing that an application is riddled with vulnerabilities is better than not knowing, but excessive information does not lead to actionable insights.

Anchore Enterprise solves this challenge by pairing vulnerability data (e.g. CVEs, etc) with exploit data (e.g. KEV, etc). By combining these two data sources we can create actionable insight by showing users both the vulnerabilities in their applications and which vulnerabilities are actually being exploited. Actively exploited vulnerabilities are significantly higher risk and can be prioritized for triage and remediation first. In this blog post, we’ll discuss how we do that and how it can save both your security team and application developers time.

How Does Anchore Enterprise Help You Find Exploits in Your Application Dependencies?

What is an Exploited Vulnerability?

“Exploited” is an important distinction because it means that not only does a vulnerability exist but a payload also exists that can reliably trigger the vulnerability and cause an application to execute unintended functionality (e.g., leaking all of the contents of a database or deleting all of the data in a database). For instance, almost all bank vaults in the world are vulnerable to an asteroid strike “deleting” the contents of the safe, but no one has developed a system to reliably cause an asteroid to strike bank vaults. Maybe Elon Musk can make this happen in a few more years, but today this vulnerability isn’t exploitable. It is important for organizations to prioritize exploited vulnerabilities because the potential for damage is significantly greater.

Source High-Quality Data on Exploits

In order to find vulnerabilities that are exploitable, you need high-quality data from security researchers that are either crafting exploits to known vulnerabilities or analyzing attack data for payloads that are triggering an exploit in a live application. Thankfully, there are two exceedingly high-quality databases that publish this information publicly and regularly; the Known Exploited Vulnerability (KEV) Catalog and the Exploit Database (Exploit-DB).

The KEV Catalog is a database of known exploited vulnerabilities that is published and maintained by the US government through the Cybersecurity and Infrastructure Security Agency, CISA. It is updated regularly; they typically add 1-5 new KEVs every week. 

While not an exploit database itself, the National Vulnerability Database (NVD) is an important source of exploit data because it checks all of the vulnerabilities that it publishes and maintains against the Exploit-DB and embeds the relevant identifiers when a match is found.

Anchore Enterprise ingests both of these data feeds and stores the data in a centralized repository. Once this data is structured and available to your organization it can then be used to determine which applications and their associated dependencies are exploitable.

Map Data on Exploits to Your Application Dependencies

Now that you have a quality source of data on known exploited vulnerabilities, you need to determine if any of these exploits exist in your applications and/or the dependencies that they are built with. The industry-standard method for storing information on applications and their dependency supply chain is a software bill of materials (SBOM).

After you have an SBOM for your application you can then cross-reference the dependencies against both a list of known vulnerabilities and a list of known exploited vulnerabilities. The output of this is a list of all of the applications in your organization that are vulnerable to exploits.

If done manually, via something like a spreadsheet, this can quickly become a tedious process. Anchore Enterprise automates it by generating SBOMs for all of your applications and running scans of the SBOMs against vulnerability and exploit databases.
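To make the cross-reference concrete, here is a minimal sketch that takes a vulnerability report for one application (e.g., Grype JSON output) and flags the findings whose CVE IDs also appear in CISA’s KEV catalog. The KEV URL and JSON field names reflect what the feed publishes today; treat them as assumptions and verify against the live feed.

```python
"""Minimal sketch: flag vulnerability findings that appear in CISA's KEV catalog."""
import json
import sys
import urllib.request

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"


def load_kev_ids() -> set[str]:
    # The KEV feed is a JSON document with a "vulnerabilities" array of entries keyed by "cveID".
    with urllib.request.urlopen(KEV_URL) as resp:
        catalog = json.load(resp)
    return {entry["cveID"] for entry in catalog.get("vulnerabilities", [])}


def main(report_path: str) -> None:
    with open(report_path) as f:
        matches = json.load(f).get("matches", [])

    kev_ids = load_kev_ids()
    for match in matches:
        cve = match.get("vulnerability", {}).get("id", "")
        if cve in kev_ids:
            pkg = match.get("artifact", {})
            print(f"KNOWN EXPLOITED: {cve} in {pkg.get('name')}@{pkg.get('version')}")


if __name__ == "__main__":
    main(sys.argv[1])
```

Findings that survive this filter are the ones worth triaging first; everything else can be queued behind them.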

How Does Anchore Enterprise Help You Prioritize Remediation of Exploits in Your Application Dependencies?

Once we’ve used Anchore Enterprise to detect CVEs in our containers that are also exploitable according to the KEV or Exploit-DB lists, we can take the severity score back into account with more contextual evidence. We need to know two things for each finding: what is the severity of the finding, and can we accept the risk associated with leaving that vulnerable code in our application or container?

If we look back to the Log4J event in December of 2021, that particular vulnerability scored a 10 on the CVSS. That score alone provides us little detail on how dangerous that vulnerability is. If a CVE is discovered against any given piece of software and the NVD researchers cannot reach the authors of the code, then it’s assigned a score of 10 and the worst case is assumed. 

However, if we have applied our KEV and ExploitDB bundles and determined that we do indeed have a critical vulnerability that has active known exploits and evidence that it is being exploited in the wild AND the severity exceeds our personal or organizational risk thresholds then we know that we need to take action immediately. 

Everyone has questioned the utility of the SBOM, but Anchore Enterprise is making that question an afterthought. Moving past the basics of just generating an SBOM and detecting CVEs, Anchore Enterprise automatically maps exploit data to specific packages in your software supply chain, allowing you to generate reports and notifications for your teams. By analyzing this higher quality information, you can determine which vulnerabilities actually pose a threat to your organization and in turn make more intelligent decisions about which to fix and which to accept, saving your organization time and money.

Wrap Up

Returning to our original question, “what are SBOMs good for”? It turns out the answer is scaling the process of finding and prioritizing vulnerabilities in your organization’s software supply chain.

In today’s increasingly complex software landscape, the importance of securing your application’s supply chain cannot be overstated. Traditional SBOMs have empowered organizations to identify vulnerabilities but often left them inundated with too much information, rendering the data less actionable. Anchore Enterprise revolutionizes this process by not only automating the generation of SBOMs but also cross-referencing them against reputable databases like KEV Catalog and Exploit-DB to isolate actively exploited vulnerabilities. By focusing on the vulnerabilities that are actually being exploited in the wild, your security team can prioritize remediation efforts more effectively, saving both time and resources. 

Anchore Enterprise moves beyond merely detecting vulnerabilities to providing actionable insights, enabling organizations to make intelligent decisions on which risks to address immediately and which to monitor. Don’t get lost in the sea of vulnerabilities; let Anchore Enterprise be your compass in navigating the choppy waters of software security.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Automated Policy Enforcement for CMMC with Anchore Enterprise

The Cybersecurity Maturity Model Certification (CMMC) is an important program to harden the cybersecurity posture of the defense industrial base. Its purpose is to validate that appropriate safeguards are in place to protect controlled unclassified information (CUI). Many of the organizations that are required to comply with CMMC are Anchore customers. They have the responsibility to protect the sensitive, but not classified, data of US military and government agencies as they support the various missions of the United States.

CMMC 2.0 Levels

  • Level 1 Foundation: Safeguard federal contract information (FCI); not critical to national security.
  • Level 2 Advanced: This maps directly to NIST Special Publication (SP) 800-171. Its primary goal is to ensure that government contractors are properly protecting controlled unclassified information (CUI).
  • Level 3 Expert: This maps directly to NIST Special Publication (SP) 800-172. Its primary goal is to go beyond the base-level security requirements defined in NIST 800-171. NIST 800-172 provides security requirements that specifically defend against advanced persistent threats (APTs).

This is of critical importance as these organizations leverage commonplace DevOps tooling to build their software. Additionally, these large organizations may be working with smaller subcontractors or suppliers who are building software in tandem or partnership.

For example, imagine a mega-defense contractor working alongside a small mom-and-pop shop to develop software for a classified government system. This raises a number of questions:

  1. How can my company, as a mega-defense contractor, validate that software built by my partner is not using blacklisted software packages?
  2. How can my company validate software supplied to me is free of malware?
  3. How can I validate that the software supplied to me is in compliance with licensing standards and vulnerability compliance thresholds of my security team?
  4. How do I validate that the software I’m supplying is compliant not only against NIST 800-171 and CMMC, but against the compliance standards of my government end user (Such as NIST 800-53 or NIST 800-161)?

Validating Security between DevSecOps Pipelines and Software Supply Chain

Major and small contractors alike have taken steps to build internal DevSecOps (DSO) pipelines. However, the defense industrial base (DIB) commonly involves relationships in which smaller defense contractors supply software to a larger defense contractor whose program or DSO pipeline consumes and implements that software. With Anchore Enterprise, we can now validate whether that supplied software is compliant with CMMC controls as specified in NIST 800-171.

Looking to learn more about how to achieve CMMC Level 2 or NIST 800-171 compliance? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.

Which Controls does Anchore Enterprise Automate?

3.1.7 – Restrict Non-Privileged Users and Log Privileged Actions

Related NIST 800-53 Controls: AC-6 (10)

Description: Prevent non-privileged users from executing privileged functions and capture the execution of such functions in audit logs. 

Implementation: Anchore Enterprise can scan the container manifests to determine if the user is being given root privileges and implement an automated policy to prevent build containers from entering a runtime environment. This prevents a scenario where any privileged functions can be utilized in a runtime environment.
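As an illustration of the kind of check involved (a minimal sketch, not Anchore’s implementation), the script below fails a build whose Dockerfile never drops root or explicitly sets a root user:

```python
"""Illustrative sketch: fail a build if its Dockerfile leaves the container running as root."""
import sys


def runs_as_root(dockerfile_path: str) -> bool:
    # Images default to root unless a USER instruction says otherwise.
    # Note: multi-stage builds would need per-stage handling; this keeps it simple.
    last_user = "root"
    with open(dockerfile_path) as f:
        for line in f:
            stripped = line.strip()
            if stripped.upper().startswith("USER "):
                last_user = stripped.split(None, 1)[1].strip()
    return last_user in ("root", "0")


if __name__ == "__main__":
    path = sys.argv[1]
    if runs_as_root(path):
        print(f"{path}: container runs as root; failing the gate")
        sys.exit(1)
    print(f"{path}: non-root user configured")
```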

3.4.1 – Maintain Baseline Configurations & Inventories

Related NIST 800-53 Controls: CM-2(1), CM-8(1), CM-6

Description: Establish and maintain baseline configurations and inventories of organizational systems (including hardware, software, firmware, and documentation) throughout the respective system development life cycles.

Implementation: Anchore Enterprise provides a centralized inventory of all containers and their associated manifests at each stage of the development pipeline. All manifests, images and containers are automatically added to the central tracking inventory so that a complete list of all artifacts of the build pipeline can be tracked at any moment in time.

3.4.2 – Enforce Security Configurations

Related NIST 800-53 Controls: CM-2 (1) & CM-8(1) & CM-6

Description: Establish and enforce security configuration settings for information technology products employed in organizational systems.

Implementation: Anchore Enterprise scans all container manifest files for security configurations and publishes found vulnerabilities to a centralized database that can be used for monitoring, ad-hoc reporting, alerting and/or automated policy enforcement.

3.4.3 – Monitor and Log System Changes with Approval Process

Related NIST 800-53 Controls: CM-3

Description: Track, review, approve or disapprove, and log changes to organizational systems.

Implementation: Anchore Enterprise provides a centralized dashboard that tracks all changes to applications which makes scheduled reviews simple. It also provides an automated controller that can apply policy-based decision making to either automatically approve or reject changes to applications based on security rules.

3.4.4 – Run Security Analysis on All System Changes

Related NIST 800-53 Controls: CM-4

Description: Analyze the security impact of changes prior to implementation.

Implementation: Anchore Enterprise can scan changes to applications for security vulnerabilities during the build pipeline to determine the security impact of the changes.

3.4.6 – Apply Principle of Least Functionality

Related NIST 800-53 Controls: CM-7

Description: Employ the principle of least functionality by configuring organizational systems to provide only essential capabilities.

Implementation: Anchore Enterprise can scan all applications to ensure that they are uniformly applying the principle of least functionality to individual applications. If an application does not meet this standard then Anchore Enterprise can be configured to prevent an application from being deployed to a production environment.

3.4.7 – Limit Use of Nonessential Programs, Ports, and Services

Related NIST 800-53 Controls: CM-7(1), CM-7(2)

Description: Restrict, disable, or prevent the use of nonessential programs, functions, ports, protocols, and services.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for specific security violations and prevent these applications from being deployed until the violations are remediated.

3.4.8 – Implement Blacklisting and Whitelisting Software Policies

Related NIST 800-53 Controls: CM-7(4), CM-7(5)

Description: Apply deny-by-exception (blacklisting) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (whitelisting) policy to allow the execution of authorized software.

Implementation: Anchore Enterprise can be configured as a gating agent that will apply a security policy to all scanned software. The policies can be configured in a black- or white-listing manner.
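The sketch below shows what a deny-by-exception or permit-by-exception check looks like when applied to the packages recorded in a CycloneDX JSON SBOM. The file name, list contents, and field access are illustrative assumptions; Anchore Enterprise expresses these rules declaratively in its policy engine rather than in ad-hoc scripts.

```python
# Minimal sketch: apply a deny-list (blacklist) or allow-list (whitelist)
# policy to the packages recorded in a CycloneDX JSON SBOM.
import json


def load_packages(sbom_path: str) -> set[str]:
    with open(sbom_path) as f:
        sbom = json.load(f)
    return {c.get("name", "") for c in sbom.get("components", [])}


def check(packages: set[str], deny: set[str] | None = None,
          allow: set[str] | None = None) -> list[str]:
    findings = []
    if deny:
        findings += [f"denied package present: {p}" for p in sorted(packages & deny)]
    if allow:
        findings += [f"package not on allow-list: {p}" for p in sorted(packages - allow)]
    return findings


if __name__ == "__main__":
    pkgs = load_packages("app-sbom.cdx.json")              # hypothetical path
    for finding in check(pkgs, deny={"netcat", "telnet"}):  # example deny-list
        print("POLICY VIOLATION:", finding)
```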

3.4.9 – Control and Monitor User-Installed Software

Related NIST 800-53 Controls: CM-11

Description: Control and monitor user-installed software.

Implementation: Anchore Enterprise scans all software in the development pipeline and records all user-installed software. The scans can be monitored in the provided dashboard. User-installed software can be controlled (allowed or denied) via the gating agent.

3.5.10 – Store and Transmit Only Cryptographically-Protected Passwords

Related NIST 800-53 Controls: IA-5(1)

Description: Store and transmit only cryptographically-protected passwords.

Implementation: Anchore Enterprise can scan for plain-text secrets in build artifacts and prevent exposed secrets from being promoted to the next environment until the violation is remediated. This prevents unauthorized storage or transmission of unencrypted passwords or secrets. See screenshot below to see this protection in action.
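To show the shape of such a check, here is a minimal sketch of a secrets gate: it walks a build artifact directory, looks for strings that resemble credentials, and fails the stage if any match. The patterns and paths are illustrative; real secret scanners use far richer rule sets.

```python
# Minimal sketch of a secrets gate: scan build artifacts for strings
# that look like credentials and fail the stage if any are found.
import re
import sys
from pathlib import Path

# Illustrative patterns only; production scanners ship many more rules.
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hard-coded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
}


def scan(root: str) -> list[str]:
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                findings.append(f"{path}: possible {label}")
    return findings


if __name__ == "__main__":
    hits = scan(sys.argv[1] if len(sys.argv) > 1 else ".")
    print("\n".join(hits) or "No exposed secrets found.")
    sys.exit(1 if hits else 0)
```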

3.11.2 – Scan for Vulnerabilities

Related NIST 800-53 Controls: RA-5, RA-5(5)

Description: Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified.

Implementation: Anchore Enterprise is designed to scan all systems and applications for vulnerabilities continuously and alert when any changes introduce new vulnerabilities. See screenshot below to see this protection in action.

3.11.3 – Remediate Vulnerabilities Respective to Risk Assessments

Related NIST 800-53 Controls: RA-5, RA-5(5)

Description: Remediate vulnerabilities in accordance with risk assessments.

Implementation: Anchore Enterprise can be tuned to allow or deny changes based on a risk scoring system.

3.12.2 – Implement Plans to Address System Vulnerabilities

Related NIST 800-53 Controls: CA-5

Description: Develop and implement plans of action designed to correct deficiencies and reduce or eliminate vulnerabilities in organizational systems.

Implementation: Anchore Enterprise automates the process of ensuring all software and systems are in compliance with the security policy of the organization. 

3.13.4 – Block Unauthorized Information Transfer via Shared Resources

Related NIST 800-53 Controls: SC-4

Description: Prevent unauthorized and unintended information transfer via shared system resources.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for unauthorized and unintended information transfer and prevent violations from being transferred between shared system resources until the violations are remediated.

3.13.8 – Use Cryptography to Safeguard CUI During Transmission

Related NIST 800-53 Controls: SC-8

Description: Implement cryptographic mechanisms to prevent unauthorized disclosure of CUI during transmission unless otherwise protected by alternative physical safeguards.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for CUI and prevent violations of organization defined policies regarding CUI from being disclosed between systems.

3.14.5 – Periodically Scan Systems and Real-time Scan External Files

Related NIST 800-53 Controls: SI-2

Description: Perform periodic scans of organizational systems and real-time scans of files from external sources as files are downloaded, opened, or executed.

Implementation: Anchore Enterprise can be configured to scan all external dependencies that are built into software and provide information about relevant security vulnerabilities in the software development pipeline. See screenshot below to see this protection in action.

Wrap-Up

In a world increasingly defined by software, the cybersecurity posture of defense-related industries is paramount. The CMMC framework, with its varying levels of compliance, underscores the commitment of the defense industrial base to fortify its cyber defenses. 

As a multitude of organizations, ranging from the largest defense contractors to smaller mom-and-pop shops, work in tandem to support U.S. missions, the intricacies of maintaining cybersecurity standards grow. The questions posed exemplify the necessity to validate software integrity, especially in complex collaborations. 

Anchore Enterprise solves these problems by automating software supply chain security best practices. It not only automates a myriad of crucial controls, ranging from user privilege restrictions to vulnerability scanning, but it also empowers organizations to meet and exceed the benchmarks set by CMMC and NIST. 

In essence, as defense entities navigate the nuanced web of software development and partnerships, tools like Anchore Enterprise are indispensable in safeguarding the nation’s interests, ensuring the integrity of software supply chains, and championing the highest levels of cybersecurity.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Four Signs You’re Ready to Upgrade from DIY Supply Chain Security to Anchore Enterprise

Build versus buy is a complex decision for most organizations. Typically there is a tipping point: the friction of building and running your own tooling starts to outweigh the cost benefit of avoiding yet another vendor on your SaaS bill. The signals that an organization is approaching this moment vary based on the tool you're considering.

In this blog post, we outline common signals that your organization is approaching this tipping point for managing software supply chain risk. Whether your developers have adopted software development best practices like creating software bills of materials (SBOMs) and you're now drowning in an ocean of valuable but scattered security data, or you're ready to start scaling your shift-left security strategy across your entire software development life cycle, we cover these scenarios and more.

Challenge Type: Scaling SBOM Management

Managing SBOMs is getting out of hand. Each day there is more SBOM data to sort and store. SBOM generation is by far the easiest capability to implement today. It’s free, extremely lightweight (low learning curve for engineers to adopt, unlike some enterprise products), and it’s fast…blazing fast! As a result of this, teams can quickly generate hundreds, thousands (even millions!) of SBOMs over the course of a fiscal year. This is great from a data security perspective but creates its own problems.

Once the friction of creating SBOMs becomes trivial, teams typically struggle to find good ways to store and manage all of this new data. As with any other data set, questions arise about how long to retain the data, how to query it for security-related issues, and how to integrate it with third-party tooling to glean actionable security insights. Once teams have fully adopted SBOM generation in a few areas, it is good practice to consider the best way to manage the data so your developers' time is not spent in vain. 

Anchore Enterprise helps in a variety of ways, not just to manage SBOMs but to detect SBOM drift in the build process and alert security teams to changes in SBOMs so they can be assessed for risks or malicious activity.
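To picture what drift detection means in practice, here is a minimal sketch that compares the component sets of two CycloneDX JSON SBOMs (say, last night's build versus tonight's) and reports packages that appeared, disappeared, or changed version. The file names and field access are illustrative; Anchore Enterprise performs this comparison automatically as part of its SBOM management.

```python
# Minimal sketch of SBOM drift detection: diff the components of two
# CycloneDX JSON SBOMs and report added, removed, and changed packages.
import json


def components(path: str) -> dict[str, str]:
    # Keyed by package name for simplicity; real tooling also keys on
    # type, namespace, and version.
    with open(path) as f:
        sbom = json.load(f)
    return {c.get("name", ""): c.get("version", "?")
            for c in sbom.get("components", [])}


def drift(old_path: str, new_path: str) -> list[str]:
    old, new = components(old_path), components(new_path)
    report = []
    report += [f"ADDED   {n} {v}" for n, v in new.items() if n not in old]
    report += [f"REMOVED {n} {v}" for n, v in old.items() if n not in new]
    report += [f"CHANGED {n} {old[n]} -> {new[n]}"
               for n in old.keys() & new.keys() if old[n] != new[n]]
    return report


if __name__ == "__main__":
    # Hypothetical file names for two consecutive nightly builds.
    for line in drift("sbom-previous.cdx.json", "sbom-current.cdx.json"):
        print(line)
```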

Challenge Type: Regulatory Compliance

Let’s say that you just got a massive policy compliance mandate dropped in your lap from your manager. It’s your job to implement the parameters within the allotted deadline, and you’re not sure where to start.

As we’ve talked about in other posts, meeting compliance standards is more than a full-time job. Organizations have to make the decision to either DIY compliance or work with third parties that have expertise in specific standards. With the debut of revision 5 of NIST 800-53, the “Control Catalog”, more and more compliance standards require companies to implement controls that specifically address software supply chain security. This is due to the fact that many federal compliance standards build off of the “Control Catalog” as the source of truth for secure IT systems.

Whether it’s FedRAMP, a compliance framework related to NIST 800-53, or something as simple as a CIS benchmark, Anchore can help. The Anchore Enterprise SBOM solutions offer automated policy enforcement in your software supply chain. It serves to enforce compliance frameworks on your source code repos, images in development, and runtime Kubernetes clusters.

Challenge Type: Zero-Day Response

When a zero-day vulnerability is discovered, how do you answer the question "Am I vulnerable?" Depending on how well you have structured your security practice, answering that question can take anywhere from an hour to a week or more. The longer that window, the more risk your organization accrues. Once a zero-day incident occurs, it is very easy to spot the organizations that are prepared and those that are not. 

If you haven’t figured it out yet, the retention and centralized management of SBOM’s are probably one of the most useful tools in modern incident response plans for identification and triage of zero-day incidents impacting organizations. Even though software teams are empowered to make decentralized decision making they can still adhere to security principles that can benefit from a centralized data storage solution. This type of centralization allows organizations to answer critical questions with speed at critical moments in the life of an organization.

Anchore Enterprise helps answer the question "Am I vulnerable?" in minutes rather than days or weeks. By creating a centralized store of software supply chain data (via SBOMs), Anchore Enterprise allows organizations to quickly query this information and get back precise answers on whether a vulnerable package exists within the organization and exactly where to focus remediation efforts. We also provide hands-on training that takes our customers through tabletop exercises in a controlled environment. By simulating a zero-day incident, we test how well an organization is prepared to handle an uncontrolled threat environment and identify the gaps that could lead to extended uncertainty.
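As a rough sketch of what that query looks like against a centralized SBOM store, the example below walks a directory of CycloneDX JSON SBOMs (one per application) and reports every application that still contains a named package below a fixed version. The directory layout, file naming, and naive dotted-version comparison are illustrative assumptions, not Anchore Enterprise's query interface.

```python
# Minimal sketch of answering "Am I vulnerable?" from a centralized SBOM
# store: report every SBOM containing a package below a fixed version.
import json
from pathlib import Path


def parse_version(version: str) -> tuple[int, ...]:
    # Naive dotted-version parser; good enough for a sketch.
    return tuple(int(part) for part in version.split(".") if part.isdigit())


def affected(sbom_dir: str, package: str, fixed_version: str) -> list[str]:
    hits = []
    for sbom_path in Path(sbom_dir).glob("*.json"):
        sbom = json.loads(sbom_path.read_text())
        for comp in sbom.get("components", []):
            if comp.get("name") != package:
                continue
            version = comp.get("version", "0")
            if parse_version(version) < parse_version(fixed_version):
                hits.append(f"{sbom_path.name}: {package} {version}")
    return hits


if __name__ == "__main__":
    # Example: find applications still shipping log4j-core before 2.17.0.
    for hit in affected("sbom-store/", "log4j-core", "2.17.0"):
        print("VULNERABLE:", hit)
```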

Challenge Type: Scaling a Shift Left Security Culture 

The shift-left security movement is based on the principle that organizations can preempt security incidents by implementing secure development practices earlier in the software development lifecycle. The problem with this approach arises as you attempt to scale it: the more gates you put in to catch security vulnerabilities earlier in the life cycle, the more the software development process slows and the more security resources are required.

To scale shift-left security practices, organizations need to adopt software-based solutions that automate these checks and allow developers to self-diagnose and remediate vulnerabilities without significant intervention from the security team. The earlier in the software development process vulnerabilities are caught, the faster secure software can be shipped. 

Anchore enables organizations to scale their shift-left security strategy by automating security checks at multiple points in the development life cycle. Because of the speed at which Anchore runs its security scans, organizations can check every software artifact in the development pipeline without adding significant friction. Checking every deployed image during integration (CI), storage (registry), and runtime (CD) allows Anchore to scale a continuous security program that significantly reduces the chance of a vulnerable application finding its way to production, where it can be exploited by a malicious adversary. The Anchore Enterprise runtime monitoring capabilities let you see what is running in your environment, detect issues within those images, and prevent images that fail policy checks from being deployed in your cluster or runtime environment.

Wrap-Up

The landscape of software supply chain security is increasingly complex, underscored by the rapid proliferation of SBOMs, rising compliance standards, and evolving security threats. Organizations today face the dilemma of scaling in-house security tools or seeking more streamlined and comprehensive solutions. As highlighted in this post, many of the above signals might indicate that it’s time for your organization to transition from DIY methods to a more robust solution. 

Anchore Enterprise was developed to overcome the challenges that are most common to organizations. With its focus on aiding organizations in scaling their shift-left security strategies, Anchore not only ensures compliance but also facilitates faster and safer software deployment. Even though each organization has its own set of unique challenges pertaining to software supply chain security, Anchore Enterprise is ready to enable organizations to mitigate and respond to these challenges.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Software Supply Chain Hierarchy of Needs: SBOMs as the Foundation

Software is a powerful tool for simplifying complex and technical concepts, but with that power comes an interconnected labyrinth of dependencies that often forms the foundation of innovative applications. These dependencies are not without their pitfalls, as we've learned from incidents like Log4Shell. As we navigate the ever-evolving landscape of software supply chain security, we need to ensure that our applications are built on strong foundations.

In this blog post, we delve into the concept of a Software Bill of Materials (SBOM) as a foundational requirement for a secure software supply chain. Just as a physical supply chain is scrutinized to ensure the quality and safety of a product, a software supply chain also requires critical evaluation. What’s at stake isn’t just the functionality of an application, but the security of information that the application has access to. Let’s dive into the world of software supply chains and explore how SBOMs could serve as the bedrock for a more resilient future in software development and security.

What are Software Supply Chain Attacks?

Supply chain attacks are malicious attacks that target the suppliers of an application's components rather than the application itself. Software supply chains are similar to physical supply chains. When you purchase an iPhone, all you see is the finished product. Behind the final product is a complex web of component suppliers whose parts are assembled together to produce an iPhone. Displays and camera lenses from a Japanese company, CPUs from Arizona, modems from San Diego, lithium-ion batteries from a Canadian mine; all of these pieces come together in a Shenzhen assembly plant to create a final product that is then shipped straight to your door.

In the same way that an attacker could target one of the iPhone suppliers to modify a component before the iPhone is assembled, a software supply chain threat actor could do the same but target an open source package that is then built into a commercial application. This is a problem given that 70-90% of modern applications are built using open source software components. The supply chain is therefore only as secure as its weakest link. The iceberg image below has become a somewhat overused meme in software supply chain security, but it is overused precisely because it explains the situation so well.

Below is the same idea but without the floating ice analogy. Each layer is another layer of abstraction, further removed from "Your App". All of these dependencies give your software developers the superpower to build extremely complex applications very quickly, with the unintentional side effect that they cannot possibly understand all of the ingredients coming together.

This gives adversaries their opening. A single compromised package allows attackers to manipulate all of the packages “downstream” of their entrypoint.

This reality was viscerally felt by the software industry (and all industries that rely on the software industry, meaning all industries) during the Log4j incident.

Log4Shell Impact

Log4Shell is the poster child for the importance of software supply chain security. We’re not going to go deep on the vulnerability in this post. We have actually done this in a number of other posts. Instead we’re going to focus on the impact of the incident for organizations that had instances of Log4j in their applications and what they had to go through in order to remediate this vulnerability.

First let’s brush up on the timeline:

The vulnerability in log4j was originally privately disclosed on November 24. Five days later, a pull request was published to close the vulnerability, and a week after that the patched package was released. The official public disclosure happened on December 10. This is when the mayhem began: companies started the work of determining whether they were vulnerable and figuring out how to remediate the vulnerability.

On average, impacted individuals spent ~90 hours dealing with the Log4j incident. Roughly 20% of that time was spent identifying where the log4j package was deployed into an application. 

From conversations with our customers and prospects, the primary factor in why this step consumed such a large portion of time was whether an organization had a central repository of metadata about the software dependencies used to build their applications. For customers that did have a central repository and a way to query it, identifying which applications contained the log4j vulnerability took 1-2 hours instead of the 20+ hours seen at other organizations. This is the power of having SBOMs in place for all of your software, along with a tool to help with SBOM management.

What is a Software Bill of Materials?

Similar to the nutrition labels on the back of the foods that you buy, SBOMs are a list of ingredients that go into the software your applications consume. We normally think of SBOMs as an artifact of the development process: as a developer manufactures their application from different dependencies, they are also building a recipe based on those ingredients. In reality, an SBOM can (and should) be generated at every step of the build pipeline; source, builds, images, and production software can all be used to generate an SBOM.
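As a concrete illustration, the sketch below generates an SBOM at two of those stages using Syft, Anchore's open source SBOM generator. The stage names, the image reference, and the output paths are hypothetical; Anchore Enterprise stores and manages these SBOMs centrally rather than as loose files on disk.

```python
# Minimal sketch: generate a CycloneDX SBOM for a source checkout and a
# built image with the open source Syft CLI (https://github.com/anchore/syft).
import subprocess

STAGES = {
    "source": "dir:./src",                        # source checkout
    "image": "registry.example.com/app:nightly",  # hypothetical built image
}

for stage, target in STAGES.items():
    out_file = f"sbom-{stage}.cdx.json"
    with open(out_file, "w") as f:
        # Syft writes the SBOM to stdout; redirect it to a file per stage.
        subprocess.run(["syft", target, "-o", "cyclonedx-json"],
                       stdout=f, check=True)
    print(f"wrote {out_file} for {target}")
```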

Similar to Maslow’s hierarchy of needs, software supply chain management has an analogous hierarchy of needs

At the base of the pyramid are the contents of the application, in other words, SBOMs about the application. The diagram below shows all of the layers of the proposed hierarchy of software supply chains.

By using an SBOM as the foundation of the pyramid, organizations can ensure that all of the additional security features they layer on top of this foundation will stand the test of time. We can only know that our software is free from known vulnerabilities if we have confidence in the process used to generate the "ingredients" label. Signing software to prove that a package hasn't been tampered with is only meaningful if the signed software is also free of known vulnerabilities. Signing a vulnerable package or image only proves that the software hasn't been tampered with from that point forward; it can't look back retrospectively and validate that the packages that came before are secure without the help of an SBOM or a vulnerability scanner.

What are the Benefits of this Approach?

Utilizing Software Bills of Materials (SBOMs) as the foundational element of software supply chain security brings several substantial benefits:

  1. Transparency: SBOMs provide a comprehensive view of all the components used in an application. They reveal the 'ingredients' that make up the software, enabling teams to understand the entire composition of their applications, including all dependencies. No more black-box dependencies and the associated risk that comes with them.
  2. Risk Management: With the transparency provided by SBOMs, organizations can identify potential security risks within the components of their software and address them proactively. This includes detecting vulnerabilities in dependencies or third-party components. SBOMs also let organizations standardize their software supply chain, which enables an automated approach to vulnerability and risk management.
  3. Quick Response to Vulnerabilities: When a new vulnerability is discovered in a component used within the software, SBOMs can help quickly identify all affected applications. This significantly reduces the time taken to respond to and remediate these vulnerabilities, minimizing potential damage. When an incident occurs, NOT if, an organization is able to rapidly respond to the breach and limit the impact. 
  4. Regulatory Compliance: Regulations and standards increasingly require SBOMs for demonstrating software integrity. By incorporating SBOMs, organizations can ensure they meet these cybersecurity compliance requirements, especially when working with highly regulated industries like the federal government, financial services, and healthcare.
  5. Trust and Verification: SBOMs facilitate trust and confidence in software products by allowing users to verify the components used. They serve as a 'proof of integrity' to customers, partners, and regulators, showcasing the organization's commitment to security. They also enable higher-level security abstractions like signed images or source code to inherit the foundational security guarantees provided by SBOMs.

By putting SBOMs at the base of software supply chain security, organizations can build a robust structure that’s secure, resilient, and efficient.

Building on a Strong Foundation

The utilization of Software Bills of Materials (SBOMs) as the bedrock for secure software supply chains provides a fundamental shift towards increased transparency, improved risk management, quicker responses to vulnerabilities, heightened regulatory compliance, and stronger trust in software products. By unraveling the complex labyrinth of dependencies in software applications, SBOMs offer the necessary insight to identify and address potential weaknesses, thus creating a resilient structure capable of withstanding potential security threats. In the face of incidents like Log4Shell, the industry needs to adopt a proactive and strategic approach, emphasizing the creation of a secure foundation that can stand the test of time. By elevating the role of SBOMs, we are taking a crucial step towards a future of software development and security that is not only innovative but also secure, trustworthy, and efficient. In the realm of software supply chain security, the adage “knowing is half the battle” couldn’t be more accurate. SBOMs provide that knowledge and, as such, are an indispensable cornerstone of a comprehensive security strategy.

If you’re interested in learning about how to integrate SBOMs into your software supply chain the Anchore team of supply chain security specialists are ready and willing to discuss.

Navigating Continuous Authority To Operate (cATO): A Guide for Getting Started

Continuous Authority to Operate (cATO), sometimes known as Rapid ATO, is becoming necessary as the DoD and civilian agencies put more applications and data in the cloud. Speed and agility are becoming increasingly critical to the mission as the government and federal system integrators seek new features and functionalities to support the warfighter and other critical U.S. government priorities.

In this blog post, we’ll break down the concept of cATO in understandable terms, explain its benefits, explore the myths and realities of cATO and show how Anchore can help your organization meet this standard.

What is Continuous Authority To Operate (cATO)?

Continuous ATO is the merging of traditional authority to operate (ATO) risk management practices with flexible and responsive DevSecOps practices to improve software security posture.

Traditional Risk Management Framework (RMF) implementations focus on obtaining authorization to operate once every three years. The problem with this approach is that security threats aren't static; they evolve. cATO is the evolution of this framework: it requires continual authorization of software components, such as containers, by building security into the entire development lifecycle using DevSecOps practices. All software development processes need to ensure that the application and its components meet security levels equal to or greater than what an ATO requires.

You authorize once and use the software component many times. With a cATO, you gain complete visibility into all assets, software security, and infrastructure as code.

By automating security, you are then able to obtain and maintain cATO. There’s no better statement about the current process for obtaining an ATO than this commentary from Mary Lazzeri with Federal Computer Week:

“The muddled, bureaucratic process to obtain an ATO and launch an IT system inside government is widely maligned — but beyond that, it has become a pervasive threat to system security. The longer government takes to launch a new-and-improved system, the longer an old and potentially insecure system remains in operation.”

The Three Pillars of cATO

To achieve cATO, an Authorizing Official (AO) must demonstrate three main competencies:

  1. Ongoing visibility: A robust continuous monitoring strategy for RMF controls must be in place, providing insight into key cybersecurity activities within the system boundary.
  2. Active cyber defense: Software engineers and developers must be able to respond to cyber threats in real-time or near real-time, going beyond simple scanning and patching to deploy appropriate countermeasures that thwart adversaries.
  3. Adoption of an approved DevSecOps reference design: This involves integrating development, security, and operations to close gaps, streamline processes, and ensure a secure software supply chain.

Looking to learn more about the DoD DevSecOps Reference Design? It’s commonly referred to as a DoD Software Factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.

Continuous ATO vs. ATO

The primary difference between traditional ATOs and continuous ATOs is the frequency at which a system seeks to prove the validity of its security claims. ATOs require that a system can prove its security once every three years whereas cATO systems prove their security every moment that the system is running.

The Benefits of Continuous ATO

Continuous ATO is essentially the process of applying DevSecOps principles to the Authority to Operate compliance framework. Automating the individual compliance processes speeds up development work by removing the repetitive tasks required to obtain authorization. Next, we'll explore additional (and sometimes unexpected) benefits of cATO.

Increase Velocity of System Deployment

CI/CD systems and the DevSecOps design pattern were created to increase the velocity at which new software can be deployed from development to production. On top of that, Continuous ATOs can be more easily scaled to accommodate changes in the system or the addition of new systems, thanks to the automation and flexibility offered by DevSecOps environments.

Reduce Time and Complexity to Achieve an ATO

With the cATO approach, you can build a system to automate the process of generating the artifacts to achieve ATO rather than manually producing them every three years. This automation in DevSecOps pipelines helps in speeding up the ATO process, as it can generate the artifacts needed for the AO to make a risk determination. This reduces the time spent on manual reviews and approvals. Much of the same information will be requested for each ATO, and there will be many overlapping security controls. Designing the DevSecOps pipeline to produce the unique authorization package for each ATO from the corpus of data and information available can lead to increased efficiency via automation and re-use.

No Need to Reinvent AND Maintain the Wheel

When you inherit the security properties of the DevSecOps reference design or utilize an approved managed platform, the provider shoulders much of the burden. Someone else has already done the hard work of creating a framework of tools that integrate to achieve cATO; re-use their effort to achieve cATO for your system. 

Alternatively, you can utilize a platform provider, such as Platform One, Kessel Run, Black Pearl, or the Army Software Factory to outsource the infrastructure management.

Learn how Anchore helped Platform One achieve cATO and become the preeminent DoD software factory:

Myths & Realities

Myth or Reality?: DevSecOps can be at Odds with cATO

Myth! DevSecOps in the DoD and civilian government agencies is still the domain of early adopters. The strict security and compliance requirements of the federal government, the ATO in particular, make it a fertile ground for DevSecOps adoption. Government leaders such as Nicolas Chaillan, former chief software officer for the United States Air Force, are championing DevSecOps standards and best practices that the DoD, federal government agencies, and even the commercial sector can use to launch their own DevSecOps initiatives.

One goal of DevSecOps is to develop and deploy applications as quickly as possible. An ATO is a bureaucratic morass if you're not proactive. When you build a DevSecOps toolchain that automates container vulnerability scanning and other areas critical to ATO compliance controls, you can put the tools, reporting, and processes in place to test against ATO controls while still in your development environment.

DevSecOps, much like DevOps, suffers from a marketing problem as vendors seek to spin the definitions and use cases that best suit their products. The DoD and government agencies need more champions like Chaillan in government service who can speak to the benefits of DevSecOps in a language that government decision-makers can understand.

Myth or Reality?: Agencies need to adopt DevSecOps to prepare for the cATO 

Reality! One of the cATO requirements is to demonstrate that you are aligned with an Approved DevSecOps Reference Design. The “shift left” story that DevSecOps espouses in vendor marketing literature and sales decks isn’t necessarily one size fits all. Likewise, DoD and federal agency DevSecOps play at a different level. 

Using DevSecOps to prepare for a cATO requires upfront analysis and planning with your development and operations teams' participation. Government program managers need to collaborate closely with their contractor teams to put the processes and tools in place upfront, including container vulnerability scanning and reporting. Break down your Continuous Integration/Continuous Delivery (CI/CD) toolchain with an eye on how you can prepare your software components for continuous authorization.

Myth or Reality?: You need to have SBOMs for everything in your environment

Myth! However, you need to be able to show your Authorizing Official (AO) that you have "the ability to conduct active cyber defense in order to respond to cyber threats in real time." If a zero-day (like log4j) comes along, you need to demonstrate that you are equipped to identify the impact on your environment and remediate the issue quickly. Showing your AO that you manage SBOMs and can quickly query them to respond to threats will have you in the clear for this requirement.

Myth or Reality?: cATO is about technology and process only

Myth! As more elements of the DoD and civilian federal agencies push toward the cATO to support their missions, and a DevSecOps culture takes hold, it’s reasonable to expect that such a culture will influence the cATO process. Central tenets of a DevSecOps culture include:

  • Collaboration
  • Infrastructure as Code (IaC)
  • Automation
  • Monitoring

Each of these tenets contributes to the success of a cATO. Collaboration between the government program office, the contractor's project team leadership, the third-party assessment organization (3PAO), and the FedRAMP program office is the foundation of a well-run authorization. IaC provides the tools to manage infrastructure such as virtual machines, load balancers, networks, and other infrastructure components using practices similar to how DevOps teams manage software code.

Myth or Reality?: Reusable Components Make a Difference in cATO

Reality! The growth of containers and other reusable components couldn't come at a better time, as the Department of Defense (DoD) and civilian government agencies push to the cloud, driven by federal cloud initiatives and demands from their constituents.

Reusable components save time and budget when it comes to authorization because you can authorize once and use the authorized components across multiple projects. Look for more news about reusable components coming out of Platform One and other large-scale government DevSecOps and cloud projects that can help push this development model forward to become part of future government cloud procurements.

How Anchore Helps Organizations Implement the Continuous ATO Process

Anchore’s comprehensive suite of solutions is designed to help federal agencies and federal system integrators meet the three requirements of cATO.

Ongoing Visibility

Anchore Enterprise can be integrated into a build pipeline, image registry and runtime environment in order to provide a comprehensive view of the entire software development lifecycle (SDLC). On top of this, Anchore provides out-of-the-box policy packs mapped to NIST 800-53 controls for RMF, ensuring a robust continuous monitoring strategy. Real-time notifications alert users when images are out of compliance, helping agencies maintain ongoing visibility into their system’s security posture.

Active Cyber Defense

While Anchore Enterprise is integrated into the decentralized components of the SDLC, it provides a centralized database to track and monitor every component of software in all environments. This centralized datastore enables agencies to quickly triage zero-day vulnerabilities with a single database query. Remediation plans for impacted application teams can be drawn up in hours rather than days or weeks. By setting rules that flag anomalous behavior, such as image drift or blacklisted packages, Anchore supports an active cyber defense strategy for federal systems.

Adoption of an Approved DevSecOps Reference Design

Anchore aligns with the DoD DevSecOps Reference Design by offering solutions for:

  • Container hardening (Anchore DISA policy pack)
  • Container policy enforcement (Anchore Enterprise policies)
  • Container image selection (Iron Bank)
  • Artifact storage (Anchore image registry integration)
  • Release decision-making (Anchore Kubernetes Admission Controller)
  • Runtime policy monitoring (Anchore Kubernetes Automated Inventory)

Anchore is specifically mentioned in the DoD Container Hardening Process Guide, and the Iron Bank relies on Anchore technology to scan and enforce policy that ensures every image in Iron Bank is hardened and secure.

Final Thoughts

Continuous Authorization To Operate (cATO) is a vital framework for federal system integrators and agencies to maintain a strong security posture in the face of evolving cybersecurity threats. By ensuring ongoing visibility, active cyber defense, and the adoption of an approved DevSecOps reference design, software engineers and developers can effectively protect their systems in real-time. Anchore’s comprehensive suite of solutions is specifically designed to help meet the three requirements of cATO, offering a robust, secure, and agile approach to stay ahead of cybersecurity threats. 

By partnering with Anchore, federal system integrators and federal agencies can confidently navigate the complexities of cATO and ensure their systems remain secure and compliant in a rapidly changing cyber landscape. If you're interested in learning more about how Anchore can help your organization embed DevSecOps tooling and principles into your software development process, click below to read our white paper.