Best Practices for DevSecOps in DoD Software Factories: A White Paper

The Department of Defense’s (DoD) Software Modernization Implementation Plan, unveiled in March 2023, represents a significant stride towards transforming software delivery timelines from years to days. This ambitious plan leverages the power of containers and modern DevSecOps practices within a DoD software factory.

Our latest white paper, “DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images,” dives deep into best practices for securing container images in a DoD software factory. It also details how Anchore Federal, a pivotal tool within this framework, supports these best practices to enhance security and compliance across multiple DoD software factories, including the US Air Force’s Platform One, Iron Bank, and the US Navy’s Black Pearl.

Key Insights from the White Paper

  • Securing Container Images: The paper outlines six essential best practices ranging from using trusted base images to continuous vulnerability scanning and remediation. Each practice is backed by both DoD guidance and relevant NIST standards, ensuring alignment with federal requirements.
  • Role of Anchore Federal: As a proven tool in the arena of container image security, Anchore Federal facilitates these best practices by integrating seamlessly into DevSecOps workflows, providing continuous scanning, and enabling automated policy enforcement. It’s designed to meet the stringent security needs of DoD software factories, ready for deployment even in classified and air-gapped environments.
  • Supporting Rapid and Secure Software Delivery: With the DoD’s shift towards software factories, the need for robust, secure, and agile software delivery mechanisms has never been more critical. Anchore Federal is the turnkey solution for automating security processes and ensuring that all container images meet the DoD’s rigorous security and compliance requirements.

Download the White Paper Today

Empower your organization with the insights and tools needed for secure software delivery within the DoD ecosystem. Download our white paper now and take a significant step towards implementing best-in-class DevSecOps practices in your operations. Equip your teams with the knowledge and technology to not just meet, but exceed the modern security demands of the DoD’s software modernization efforts.

Navigate SSDF Attestation with this Practical Guide

The clock is ticking again for software producers selling to federal agencies. In the second half of 2024, CEOs or their designees must begin providing an SSDF attestation that their organization adheres to the secure software development practices documented in NIST SSDF 800-218.

Download our latest ebook to navigate SSDF attestation quickly and meet the deadlines.

SSDF attestation covers four main areas from the NIST SSDF:

  1. Securing development environments
  2. Using automated tools to maintain trusted source code supply chains
  3. Maintaining provenance (e.g., via SBOMs) for internal code and third-party components
  4. Using automated tools to check for security vulnerabilities

This new requirement is not to be taken lightly. It applies to all software producers, regardless of whether they provide a software end product as SaaS or on-prem, to any federal agency. The SSDF attestation deadline is June 11, 2024, for critical software and September 11, 2024, for all software. However, on-prem software developed before September 14, 2022, will only require SSDF attestation when a new major version is released. The bottom line is that most organizations will need to comply by 2024.

Companies will make their SSDF attestation through an online Common Form that covers the minimum secure software development requirements that software producers must meet. Individual agencies can add agency-specific instructions outside of the Common Form. 

Organizations that want to ensure they meet all relevant requirements can submit a third-party assessment instead of a CEO attestation. You must use a Third-Party Assessment Organization (3PAO) that is FedRAMP-certified or approved by an agency official.  This option is a no-brainer for cloud software producers who use a 3PAO for FedRAMP.

That is a lot of detail to track, so we put together a practical guide to the SSDF attestation requirements and how to meet them: “SSDF Attestation 101: A Practical Guide for Software Producers”. We also cover how Anchore Enterprise automates SSDF attestation compliance by integrating directly into your software development pipeline and using continuous policy scanning to detect issues before they hit production.

Modeling Software Security as Unit Tests: A Mental Model for Developers

Modern software development is complex, to say the least. Vulnerabilities often lurk within the vast networks of dependencies that underpin applications. A typical scenario involves a simple app.go source file that sits atop a sprawling tree of external libraries and frameworks (check the go.mod file for the receipts). As developers incorporate these dependencies into their applications, the security risks escalate, often surpassing the complexity of the original source code. This real-world challenge highlights a critical concern: hidden vulnerabilities magnified by the dependencies themselves, which makes the task of securing software increasingly daunting.

Addressing this challenge requires reimagining software supply chain security through a different lens. In a recent webinar, the famed Kelsey Hightower provided an apt analogy to bring the sometimes opaque world of security into focus for developers: software security can be thought of as just another test in the software testing suite, and the system that manages the tests and the associated metadata is a data pipeline. We’ll explore this analogy in more depth in this blog post, and by the end we will have built a bridge between developers and security.

The Problem: Modern software is built on a tower of dependencies

Modern software is built from a tower of libraries and dependencies that boost developer productivity, but with those boosts comes the risk of increased complexity. Below is a simple ‘ping-pong’ (i.e., request-response) application written in Go that imports a single HTTP web framework:

package main

import (
 "net/http"

 "github.com/gin-gonic/gin"
)

func main() {
 // Create a router with gin's default logging and recovery middleware
 r := gin.Default()
 // Respond to GET /ping with a small JSON payload
 r.GET("/ping", func(c *gin.Context) {
  c.JSON(http.StatusOK, gin.H{
   "message": "pong",
  })
 })
 // Listen and serve, on 0.0.0.0:8080 by default
 r.Run()
}

With this single framework comes a laundry list of dependencies that are needed for it to work. This is the go.mod file that accompanies the application:

module app

go 1.20

require github.com/gin-gonic/gin v1.7.2

require (
 github.com/gin-contrib/sse v0.1.0 // indirect
 github.com/go-playground/locales v0.13.0 // indirect
 github.com/go-playground/universal-translator v0.17.0 // indirect
 github.com/go-playground/validator/v10 v10.4.1 // indirect
 github.com/golang/protobuf v1.3.3 // indirect
 github.com/json-iterator/go v1.1.9 // indirect
 github.com/leodido/go-urn v1.2.0 // indirect
 github.com/mattn/go-isatty v0.0.12 // indirect
 github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421 // indirect
 github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742 // indirect
 github.com/ugorji/go/codec v1.1.7 // indirect
 golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 // indirect
 golang.org/x/sys v0.0.0-20200116001909-b77594299b42 // indirect
 gopkg.in/yaml.v2 v2.2.8 // indirect
)

The dependencies for the application end up being larger than the application source code, and each of these dependencies carries the potential for a vulnerability that could be exploited by a determined adversary. Kelsey Hightower summed this up well: “this is software security in the real world”. Below is an example of a Java app that hides vulnerable dependencies inside the frameworks the application is built on.

As much as we might want to put the genie back in the bottle, the productivity boosts of building on top of frameworks are too good to reverse this trend. Instead, we have to look for different ways to manage security in this more complex world of software development.

If you’re looking for a solution to the complexity of modern software vulnerability management, be sure to take a look at the Anchore Enterprise platform and the included container vulnerability scanner.

The Solution: Modeling software supply chain security as a data pipeline

Software supply chain security is a meta problem of software development. The solution to most meta problems in software development is data pipeline management. 

Developers have learned this lesson before: when they first build an application and something goes wrong, they add a log statement to capture the error. This is a great solution until you’ve written your first hundred logging statements. Suddenly the solution has become its own problem, and the developer is buried under a mountain of logging data. This is where a logging (read: data) pipeline steps in. The pipeline manages the mountain of log data and helps developers sift the signal from the noise.

The same pattern emerges in software supply chain security. From the first run of a vulnerability scanner on almost any modern software, a developer will find themselves buried under a mountain of security metadata. 

$ grype dir:~/webinar-demo/examples/app:v2.0.0

 ✔ Vulnerability DB                [no update available]  
 ✔ Indexed file system                                                                            ~/webinar-demo/examples/app:v2.0.0
 ✔ Cataloged contents                                                         889d95358bbb68b88fb72e07ba33267b314b6da8c6be84d164d2ed425c80b9c3
   ├── ✔ Packages                        [16 packages]  
   └── ✔ Executables                     [0 executables]  
 ✔ Scanned for vulnerabilities     [11 vulnerability matches]  
   ├── by severity: 1 critical, 5 high, 5 medium, 0 low, 0 negligible
   └── by status:   11 fixed, 0 not-fixed, 0 ignored 

NAME                      INSTALLED                           FIXED-IN                           TYPE          VULNERABILITY        SEVERITY 
github.com/gin-gonic/gin  v1.7.2                              1.7.7                              go-module     GHSA-h395-qcrw-5vmq  High      
github.com/gin-gonic/gin  v1.7.2                              1.9.0                              go-module     GHSA-3vp4-m3rf-835h  Medium    
github.com/gin-gonic/gin  v1.7.2                              1.9.1                              go-module     GHSA-2c4m-59x9-fr2g  Medium    
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.0.0-20211202192323-5770296d904e  go-module     GHSA-gwc9-m7rh-j2ww  High      
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.0.0-20220314234659-1baeb1ce4c0b  go-module     GHSA-8c26-wmh5-6g9v  High      
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.0.0-20201216223049-8b5274cf687f  go-module     GHSA-3vm4-22fp-5rfm  High      
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.17.0                             go-module     GHSA-45x7-px36-x8w8  Medium    
golang.org/x/sys          v0.0.0-20200116001909-b77594299b42  0.0.0-20220412211240-33da011f77ad  go-module     GHSA-p782-xgp4-8hr8  Medium    
log4j-core                2.15.0                              2.16.0                             java-archive  GHSA-7rjr-3q55-vv33  Critical  
log4j-core                2.15.0                              2.17.0                             java-archive  GHSA-p6xc-xr62-6r2g  High      
log4j-core                2.15.0                              2.17.1                             java-archive  GHSA-8489-44mv-ggj8  Medium

All of this from a single innocuous import statement for your favorite application framework.

Again, the data pipeline comes to the rescue and helps manage the flood of security metadata. In the rest of this post we’ll step through the major functions of a data pipeline customized for solving the problem of software supply chain security.

Modeling SBOMs and vulnerability scans as unit tests

I like to think of security tools as just another test. A unit test might test the behavior of my code. I think this falls in the same quality bucket as linters to make sure you are following your company’s style guide. This is a way to make sure you are following your company’s security guide.

–Kelsey Hightower

This idea from renowned developer Kelsey Hightower is apt, particularly for software supply chain security. Tests are a mental model that developers use every day. Security tools are functions run against your application that produce security data about it, rather than the behavioral information a unit test produces. The first two foundational functions of software supply chain security are identifying all of an application’s software dependencies and scanning those dependencies for known vulnerabilities (i.e., ‘testing’ the application for vulnerabilities).

This is typically accomplished by running an SBOM generation tool like Syft to create an inventory of all dependencies followed by running a vulnerability scanner like Grype to compare the inventory of software components in the SBOM against a database of vulnerabilities. Going back to the data pipeline model, the SBOM and vulnerability database are the data sources and the vulnerability report is the transformed security metadata that will feed the rest of the pipeline.
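As a minimal sketch of that two-step flow (one way to wire it; Grype can also scan a directory directly, as shown next):

$ syft dir:~/webinar-demo/examples/app:v2.0.0 -o json > sbom.json   # catalog all dependencies into an SBOM
$ grype sbom:./sbom.json                                            # match the SBOM against the vulnerability database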

$ grype dir:~/webinar-demo/examples/app:v2.0.0 -o json

 ✔ Vulnerability DB                [no update available]  
 ✔ Indexed file system                                                                            ~/webinar-demo/examples/app:v2.0.0
 ✔ Cataloged contents                                                         889d95358bbb68b88fb72e07ba33267b314b6da8c6be84d164d2ed425c80b9c3
   ├── ✔ Packages                        [16 packages]  
   └── ✔ Executables                     [0 executables]  
 ✔ Scanned for vulnerabilities     [11 vulnerability matches]  
   ├── by severity: 1 critical, 5 high, 5 medium, 0 low, 0 negligible
   └── by status:   11 fixed, 0 not-fixed, 0 ignored 

{
 "matches": [
  {
   "vulnerability": {
    "id": "GHSA-h395-qcrw-5vmq",
    "dataSource": "https://github.com/advisories/GHSA-h395-qcrw-5vmq",
    "namespace": "github:language:go",
    "severity": "High",
    "urls": [
     "https://github.com/advisories/GHSA-h395-qcrw-5vmq"
    ],
    "description": "Inconsistent Interpretation of HTTP Requests in github.com/gin-gonic/gin",
    "cvss": [
     {
      "version": "3.1",
      "vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:L/A:N",
      "metrics": {
       "baseScore": 7.1,
       "exploitabilityScore": 2.8,
       "impactScore": 4.2
      },
      "vendorMetadata": {
       "base_severity": "High",
       "status": "N/A"
      }
     }
    ],
    . . . 

Security scanning was previously done just prior to pushing an application to production, as a release gate that had to be passed before software could ship. As DevOps principles have won the industry’s mindshare and unit tests have moved earlier in the software development lifecycle, security testing has likewise “shifted left” in the development cycle. With self-contained, open-source CLI tooling like Syft and Grype, developers can now incorporate security testing into their development environment and test for vulnerabilities before even pushing a commit to a continuous integration (CI) server.
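As a sketch of what this can look like locally, a hypothetical Git pre-commit hook might run Grype with its --fail-on flag, which returns a non-zero exit code when findings meet or exceed the given severity (the threshold here is a team choice):

#!/bin/sh
# .git/hooks/pre-commit: block the commit if high or critical vulnerabilities are found
grype dir:. --fail-on high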

From a security perspective this is a huge win. Security vulnerabilities are caught earlier in the development process and fixed before they come up against a delivery due date. But all of this newly created data has led to a different problem: data overload.

Vulnerability overload: uncovering the signal in the noise

Like the world of application logs that came before it, at some point there is so much information that an automated process generates that finding the signal in the noise becomes its own problem.

How Anchore Enterprise manages SBOMs and vulnerability scans

Centralized management of SBOMs and vulnerability scans can be a massive headache, but there is no need to build your own storage and data management solution. Just configure the AnchoreCTL CLI tool to automatically submit SBOMs and vulnerability scans as you run them locally; Anchore Enterprise stores all of this data for you.
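As a rough sketch of that workflow (assuming AnchoreCTL is installed and configured to point at your Anchore Enterprise deployment; the registry and tag are illustrative, and exact flags may vary by version):

$ anchorectl image add registry.example.com/app:v2.0.0 --wait       # submit the image; the SBOM and scan are stored centrally
$ anchorectl image vulnerabilities registry.example.com/app:v2.0.0  # query the centrally stored vulnerability results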

On top of this, Anchore Enterprise offers data analysis tools so that you can search and filter SBOMs and vulnerability scans by version, build stage, vulnerability type, etc.

Combining local developer tooling with centralized data management creates a best-of-both-worlds environment where engineers can still get their tasks done locally with ease while offloading the arduous management tasks to a server.

Added benefit: SBOM drift detection

Another benefit of distributed SBOM generation and vulnerability scanning is that this security check can be run at each stage of the build process. It would be nice to believe that the software written in a developer’s local environment always makes it through to production in an untouched, pristine state, but this is rarely the case.

Running SBOM generation and vulnerability scanning at development, on the build server, in the artifact registry, at deploy and during runtime creates a full picture of where and when software is modified in the development process. That simplifies post-incident investigations or, even better, catches issues well before they make it to a production environment.

This historical record is a feature of Anchore Enterprise called Drift Detection. In the same way that an HTTP cookie creates state between individual HTTP requests, Drift Detection is security metadata about security metadata (recursion, much?) that creates state between each stage of the build pipeline. Being the central store for all of the associated security metadata makes the Anchore Enterprise platform the ideal place to aggregate and scan for these anomalies.

Policy as a lever

Being able to filter through all of the noise created by integrating security checks across the software development process creates massive leverage when searching for a particular issue, but it is still a manual process, and being a full-time investigator isn’t part of the software engineer’s job description. Wouldn’t it be great if we could automate some, if not most, of these investigations?

I’m glad we’re of like minds, because this is where policy comes into the picture. Returning to Kelsey Hightower’s comparison of security tools to linters, policy is the security guide codified by your security team that lets you quickly check whether your commit meets the standards for secure software.

By running these checks automatically, developers receive quick feedback on any potential security issues discovered in their commit. This allows developers to polish their code before it is flagged, and potentially failed, by the security check in the CI server. No more waiting on the security team to review your commit before it can proceed to the next stage. Developers are empowered to resolve the security risks and feel confident that their code won’t be held up downstream.

Policies-as-code supports existing developer workflows

Anchore Enterprise designed its policy engine to ingest individual policies as JSON objects that can be integrated directly into existing software development tooling. Create a policy in the UI or CLI, generate the JSON, and commit it directly to the repo.

This prevents the painful context switching of moving between different interfaces and allows engineering and security teams to reap the rewards of versioning and rollbacks that come pre-baked into version control tools like Git. Anchore Enterprise was designed by engineers for engineers, which made policy-as-code the obvious choice when designing the platform. A sketch of one such policy rule is shown below.
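As an illustrative sketch of what one such rule can look like (this follows the general shape of a rule in an Anchore policy bundle; the values are made up for the example), the following stops a policy evaluation when any package carries a vulnerability of severity high or above:

{
 "gate": "vulnerabilities",
 "trigger": "package",
 "action": "STOP",
 "params": [
  { "name": "package_type", "value": "all" },
  { "name": "severity", "value": "high" },
  { "name": "severity_comparison", "value": ">=" }
 ]
}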

Remediation automation integrated into the development workflow

Being alerted when a commit violates your company’s security guidelines is better than pushing insecure code and waiting for a breach to find out that you forgot to sanitize user input. But even after you get alerted to a problem, you still need to understand what is insecure and how to fix it. You can try to Google the issue or start up a conversation with your security team, but that just creates more work for you before you can get your commit into the build pipeline. What if you could get the answer for how to fix your commit and make it secure directly in your normal workflow?

Anchore Enterprise provides remediation recommendations to help create actionable advice on how to resolve security alerts that are flagged by a policy. This helps point developers in the right direction so that they can resolve their vulnerabilities quickly and easily without the manual back and forth of opening a ticket with the security team or Googling aimlessly to find the correct solution. The recommendations can be integrated directly into GitHub Issues or Jira tickets in order to blend seamlessly into the workflows that teams depend on to coordinate work across the organization.

Wrap-Up

From the perspective of a developer, it can sometimes feel like the security team is primarily a frustration that only slows down your ability to ship code. Anchore has internalized this feedback and built a platform that allows developers to move at DevOps speed while producing high-quality, secure code. By integrating directly into developer workflows (e.g., CLI tooling, CI/CD integrations, source code repository integrations) and providing actionable feedback, Anchore Enterprise removes the roadblock mentality that has traditionally defined the relationship between development and security.

If you’re interested in seeing all of the features described in this blog post via a hands-on demo, check out the webinar and the accompanying workshop hosted on GitHub.

If you’re looking to go further in-depth with how to build and secure containers in the software supply chain, be sure to read our white paper: The Fundamentals of Container Security.

Streamlining FedRAMP Compliance: How Anchore Enterprise Simplifies the Process

FedRAMP compliance is hard, not only because there are hundreds of controls to review and verify, but also because the controls can be interpreted and satisfied in multiple different ways. It is admirable to see an enterprise achieve FedRAMP compliance from scratch, but most of us want to achieve compliance without spending more than a year debating the interpretation of specific controls. This is where turnkey solutions like Anchore Enterprise come in.

Anchore Enterprise is a cloud-native software composition analysis platform that integrates SBOM management, vulnerability scanning and policy enforcement into a single platform to provide a comprehensive solution for software supply chain security.

Overview of FedRAMP, who it applies to and the challenges of compliance

FedRAMP, the Federal Risk and Authorization Management Program, is a federal compliance program that standardizes security assessment, authorization, and continuous monitoring for cloud products and services. As with any compliance standard, FedRAMP is modeled on the “Trust but Verify” security principle: it standardizes how security is verified for Cloud Service Providers (CSPs).

One of the biggest challenges with achieving FedRAMP compliance comes from sorting through the vast volumes of data that make up the standard. Depending on the level of FedRAMP compliance you are attempting to meet, this could mean complying with 125 controls in the case of a FedRAMP low certification or up to 425 for FedRAMP high compliance.

While we aren’t going to go through the entire FedRAMP standard in this blog post, we will be focusing on the container security controls that are interleaved into FedRAMP.

FedRAMP container security requirements

1) Hardened Images

FedRAMP requires CSPs to adhere to strict security standards for hardened images used by government agencies. The standard mandates that:

  • Only essential services and software are included in images
  • Images are updated with the latest security patches
  • Configuration settings meet secure baselines
  • Unnecessary ports and services are disabled
  • User accounts are managed securely
  • Encryption is implemented
  • Logging and monitoring practices are maintained
  • Images are regularly scanned for vulnerabilities and promptly remediated

If you want to go in-depth with how to create hardened images that meet FedRAMP compliance, download our white papers:

DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images

Complete Guide to Hardening Containers with STIG

2) Container Build, Test, and Orchestration Pipelines

FedRAMP sets stringent requirements for container build, test, and orchestration pipelines to protect federal agencies. These include:

  • Hardened base images (see above) 
  • Automated build processes with integrity checks
  • Strict configuration management
  • Immutable containers
  • Secure artifact management
  • Container security testing
  • Comprehensive logging and monitoring

3) Vulnerability Scanning for Container Images

FedRAMP mandates rigorous vulnerability scanning protocols for container images to ensure their security within federal cloud deployments. This includes: 

  • Comprehensive scans integrated into CI/CD pipelines
  • Prioritized remediation based on severity
  • Re-scanning policies post-remediation
  • Detailed audit and compliance reports
  • Checks against secure baselines (e.g., CIS or STIG)

4) Secure Sensors

FedRAMP requires continuous management of the security of machines, applications, and systems by identifying vulnerabilities. This includes:

  • Authorized scanning tools
  • Authenticated security scans to simulate threats
  • Reporting and remediation
  • Scanning independent of developers
  • Direct integration with configuration management to track vulnerabilities

5) Registry Monitoring

While not explicitly called out in FedRAMP as a control or a control family, there is still a requirement that images stored in a container registry be scanned at least every 30 days if they are deployed to production.

6) Asset Management and Inventory Reporting for Deployed Containers

FedRAMP mandates thorough asset management and inventory reporting for deployed containers to ensure security and compliance. Organizations must continuously monitor container state and maintain detailed inventories including:

  • Container images
  • Source code
  • Versions
  • Configurations

7) Encryption

FedRAMP mandates robust encryption standards to secure federal information, requiring the use of NIST-approved cryptographic methods for both data at rest and data in transit. It is important that any containers that store data or move data through the system meet FIPS standards.

How Anchore helps organizations comply with these requirements

Anchore is the leading software supply chain security platform for meeting FedRAMP compliance. We have helped hundreds of organizations achieve FedRAMP compliance by deploying Anchore Enterprise as the solution for container security compliance. Below is an overview of how Anchore Enterprise integrates into a FedRAMP-compliant environment; for more details on how each of these integrations meets FedRAMP compliance, keep reading.

1) Hardened Images

Anchore Enterprise integrates multiple tools to meet the FedRAMP requirements for hardened container images. We provide compliance policies that check specifically for conformance with container hardening standards such as STIG and CIS. These policies were custom built to perform the checks necessary to meet either standard, or both.

2) Container Build, Test, and Orchestration Pipelines

Anchore integrates directly into your CI/CD pipelines via either the Anchore Enterprise API or pre-built plug-ins. This tight integration meets the FedRAMP standards requiring that all container images are hardened, all security checks are automated within the build process, and all actions are logged and audited. Anchore’s FedRAMP policy specifically checks that a container at any stage of the pipeline is evaluated for compliance.

3) Vulnerability Scanning for Container Images

Anchore Enterprise can be integrated into each stage of the development pipeline, offer remediation recommendations based on severity (e.g., CISA KEV vulnerabilities can be flagged and prioritized for immediate action), enforce re-scanning of containers after remediation, and produce compliance artifacts to automate compliance. This is accomplished with Anchore’s container scanner, direct pipeline integration and FedRAMP policy.
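As a sketch of what that pipeline gate can look like (assuming AnchoreCTL is configured against your Anchore Enterprise deployment with a FedRAMP policy bundle active; the image name is illustrative, and exact flags may vary by version):

$ anchorectl image add registry.example.com/app:v2.0.0 --wait   # submit the image for analysis
$ anchorectl image check registry.example.com/app:v2.0.0        # evaluate the image against the active policy bundle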

4) Secure Sensors

Anchore Enterprise’s container vulnerability scanner and Kubernetes inventory agent are both authorized scanning tools. The container vulnerability scanner is integrated directly into the build pipeline whereas the k8s agent is run in production and scans for non-compliant containers at runtime.

5) Registry Monitoring

Anchore Enterprise can continuously scan an artifact registry for potentially non-compliant containers. It can be configured to watch each unique image in a registry and will automatically scan images as they are pushed.

6) Asset Management and Inventory Reporting for Deployed Containers

Anchore Enterprise includes a full software component inventory workflow. It can scan all software components, generate software bills of materials (SBOMs) to keep track of those components, and centrally store all SBOMs for analysis. Anchore Enterprise’s Kubernetes inventory agent performs the same service for the runtime environment.

7) Encryption

Anchore Enterprise’s static STIG tool can verify that all containers maintain NIST and FIPS encryption standards. Confirming that each of thousands of containers encrypts data at rest and in transit is a difficult chore, but it is easily automated with Anchore Enterprise.

The benefits of the shift left approach of Anchore Enterprise

Shift compliance left and prevent violations

Detect and remediate FedRAMP compliance violations early in the development lifecycle to prevent production/high-side violations that would threaten your hard-earned compliance. Use Anchore’s “developer-bundle” in the integration phase to take immediate action on potential compliance violations. This ensures that vulnerabilities with available fixes and CISA KEV vulnerabilities are addressed before they make it to the registry and become reportable non-compliance issues.

Below is a sketch of a GitLab workflow showing how Anchore Enterprise’s SBOM generation, vulnerability scanning and policy enforcement can catch issues early and keep your compliance record clean.
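A minimal sketch of such a job (the stage name and GitLab CI variables are illustrative, and the AnchoreCTL flags may vary by version):

stages:
  - security

anchore-scan:
  stage: security
  script:
    - anchorectl image add $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --wait   # generate and store the SBOM and vulnerability scan
    - anchorectl image check $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA        # fail the job if the image violates the FedRAMP policy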

Automate Compliance Reporting

Automate monthly and annual reporting using Anchore Enterprise’s reporting features. Set these reports up to auto-generate based on FedRAMP’s compliance reporting requirements.

Manage POA&Ms

Given that Anchore Enterprise centrally stores and manages vulnerability information for an organization, it can also be used to manage Plans of Action & Milestones (POA&Ms) for any portions of the system that aren’t yet FedRAMP compliant but have a planned due date. Use Allowlists in Anchore Enterprise to centrally manage POA&Ms and assessed or justifiable findings.

Prevent Production Compliance Violations

Practice good production registry hygiene by using Anchore Enterprise to scan stored images regularly. Anchore Enterprise’s Kubernetes runtime inventory will identify images that do not meet FedRAMP compliance or have not been used within a company-defined window (e.g., the last 7 days) so they can be removed from your production registry.

Conclusion

Achieving FedRAMP compliance from scratch is an arduous process and not a key differentiator for most organizations. To keep organizational focus on the aspects of the business that set it apart from competitors, strategically outsourcing non-core competencies is a sound strategy. Anchore Enterprise aims to be that turnkey solution for organizations that want the benefits of FedRAMP compliance, specifically for the container security aspects, without developing the internal expertise.

By integrating SBOM generation, vulnerability scanning, and policy enforcement into a single platform, Anchore Enterprise not only simplifies the path to compliance but also enhances overall software supply chain security. Through the deployment of Anchore Enterprise, companies can achieve and maintain compliance more quickly and with greater assurance. If you’re looking for an even deeper look at how to achieve all 7 of the container security requirements of FedRAMP with Anchore Enterprise, read our playbook: FedRAMP Pre-Assessment Playbook For Containers.

From Chaos to Compliance: Revolutionizing License Management with Automation

The ascent of both containerized applications and open-source software component building blocks has dramatically escalated the complexity of software and the burden of managing all of the associated licenses. Modern applications are often built from a mosaic of hundreds, if not thousands, of individual software components, each bound by its own potential licensing pitfalls. This intricate web of dependencies, akin to a supply chain, poses significant challenges not only for legal teams tasked with mitigating financial risks but also for developers who manage these components’ inventory and compliance.

Previously, license management was primarily a manual affair; software wasn’t as complex, and more of it was proprietary first-party code that didn’t carry the same license compliance issues. Those original license management techniques haven’t kept up with the needs of modern, cloud-native application development. In this blog post, we discuss how automation addresses the challenge of managing licensing risk in modern software.

The Problem

Modern software is complex. This is fairly well known at this point, but in case you need a quick reminder of the scale of the problem:

Applications can be constructed from tens, hundreds, or even thousands of individual software components, each with its own license governing how it can be used. Modern software is so complex that this endlessly nested collection of dependencies is typically referred to, metaphorically, as a supply chain, and an entire industry, software supply chain security, has grown up to provide security solutions for this quagmire.

This is a complexity nightmare for legal teams that are tasked with managing the financial risk of an organization. It’s also a nightmare for the developers who are tasked with maintaining an inventory of all of the software dependencies in an organization and the associated license for each component.

Let’s look at an example of how this normally manifests in a software startup. Assuming business is going well, you have a product and there are customers interested in purchasing your software. During the procurement cycle, your customer’s legal team will be tasked with assessing the risk of using your software. To create this assessment they will do a number of things, one of which is determining whether your software is safe to use from a licensing perspective. To do this, they will normally send over a spreadsheet asking for the licensing details of every component in your product.

As a software vendor, it will be your job to fill this out so that legal can approve the purchasing of your software and you can take that revenue to the bank.

Let’s say you fill this entire spreadsheet out manually. A developer would need to go through each dependency used in the software that you sell and “scan” the codebase for all of the licensing metadata: component name, version number, OSS license (e.g., MIT, GPL, BSD), and so on. It would take some time and be quite tedious, but it is not an insurmountable task. In the end they would produce a complete inventory of components and their licenses.

This is fine in a world of once-in-a-while deployments and updates. It becomes exhausting in the world of continuous integration and delivery that the DevOps movement has created. Imagine having to produce a new document like this every time you push to production; DevOps has allowed some teams to push to production multiple times per day. Requiring a manually created document for all of your customers’ legal teams for each release would wipe out nearly all of the velocity gains of moving to DevOps.

The Solution

The solution to this problem is automation of the license discovery process. If software can scan your codebase and produce a document that exhaustively covers all of the building blocks of your application, this unlocks the potential to have your DevOps cake and eat it too.

To this end, Anchore has created and open-sourced a tool that does just this.

Introducing Grant: Automated License Discovery

Grant is an open-source command line tool that scans and discovers the software licenses of all dependencies in a piece of open-source software. If you want to get a quick primer on what you can do with Grant, read our announcement blog post. Or if you’re ready to dive straight in, you can view all of the Grant documentation on its GitHub repo.

How does Grant Integrate into my Development Workflow?

As a software license scanner, Grant operates on a software inventory artifact like an SBOM, or directly on a container image. Let’s continue with the example from above to bring this to life. In the legal review example, you are a software developer tasked with manually finding all of the OSS license information to provide to your customer’s legal team for review.

Not wanting to do this by hand, you instead open up your CLI and install Grant. From there you navigate to your artifact registry and pull down the latest image of your application’s production build. Right before you run the Grant license scan on your production container image, you notice that your team has been following software supply chain best practices and has already created an SBOM with a popular open-source tool called Syft. Instead of running the container image through Grant, which could take some time, you pass the SBOM, already a JSON inventory of the application’s entire dependency tree, to Grant. A few seconds later you have a full report of all of the licenses in your application.
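That workflow, sketched out (the SBOM file name is illustrative; see the Grant documentation for the full set of inputs it accepts):

$ grant check ./sbom.json   # evaluate every component's license against your rules
$ grant list ./sbom.json    # or simply list the licenses that were discovered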

From here you export the full component inventory with the license enrichment into a spreadsheet and send this off to the customer’s legal team for review. A process that might have taken a full day or even multiple days to do by hand was finished in seconds with the power of open-source tooling.

Automating License Compliance with Policy

Grant can automate much of the most tedious work of protecting an organization from legal consequences, but when used by a developer as a CLI tool, there is still a human in the loop, which can cause traffic jams. With this in mind, our OSS team launched Grant with support for policy-based rules that automate the execution and alerting of license scanning.

Let’s say that your organization’s legal team has decided that using any GPL components in first-party software is too risky. By writing a policy that fails any software that includes GPL-licensed components, and by integrating the policy check as early as the CI environment (or even letting developers run Grant one-off as they prototype an idea), the potential for legally risky dependencies infiltrating production software drops precipitously. A sketch of such a rule is shown below.
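As an illustrative sketch, a deny rule like this could live in a .grant.yaml at the root of the repo (the rule name and reason are made up; check the Grant documentation for the exact schema your version expects):

rules:
  - pattern: "*gpl*"
    name: deny-gpl
    mode: deny
    reason: "GPL components are not allowed in first-party software per legal policy"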

How Anchore Can Help

Grant automates the license compliance discovery process, which on its own is great for small projects or software with irregular releases. Things get much more complicated in the cloud-native, continuous integration/deployment paradigm of DevSecOps, where there are new releases multiple times per day. Having Grant generate the license data is great, but suddenly you have an explosion of data that itself needs to be managed.

This is where Anchore Enterprise steps in to fill the gap. The Anchore Enterprise platform is an end-to-end data management solution that not only incorporates all of Anchore’s open-source tooling for generating artifacts like SBOMs, vulnerability scans and license scans; it also manages the massive amount of data that a high-speed DevSecOps pipeline creates as part of its regular operation and, on top of that, applies a highly customizable policy engine that automates decision-making around the insights derived from those artifacts.

Want to make sure that no GPL-licensed OSS components ever make it into your SDLC? No problem. Grant will uncover all components that carry this license, Anchore Enterprise will centralize these scans, and the Anchore policy engine will alert the developer who just integrated a new GPL-licensed OSS component into their development environment that they need to find a different component or they won’t be able to push their branch to staging. The shift-left principle of DevSecOps can be applied to LegalOps as well.

Conclusion

The advent of tools like Grant, an open-source license discovery solution developed by Anchore, marks a significant advancement in the realm of open-source license management. By automating the tedious process of license verification, Grant not only enhances operational efficiency but also integrates seamlessly into continuous integration/continuous delivery (CI/CD) environments. This capability is crucial in modern DevOps practices, which demand frequent and fast-paced updates. Grant’s ability to quickly generate comprehensive licensing reports transforms a potentially day-long task into a matter of seconds.

Anchore Enterprise extends this functionality by managing the deluge of data from continuous deployments and integrating a policy engine that automates compliance decisions. This ecosystem not only streamlines the process of license management but also empowers developers and legal teams to preemptively address compliance issues, thereby embedding legal safeguards directly into the software development lifecycle. This proactive approach ensures that as the technological landscape evolves, businesses remain agile yet compliant, ready to capitalize on opportunities without being bogged down by legal liabilities.

If you’re interested to hear about the topics covered in this blog post directly from the lips of Anchore’s CTO, Dan Nurmi, and the maintainer of Grant, Christopher Phillips, you can watch the on-demand webinar here. Or join the Anchore Community Discourse forum to speak with our team directly. We look forward to hearing from you and reviewing your pull requests!

An Outline for Getting Up to Speed on the DoD Software Factory

This blog post is meant as a gateway to all things DoD software factory. We highlight content from across the Anchore universe that can help anyone get up to speed on what a DoD software factory is, why to use one and how to build one. Treat it as an index: scan for the topics that interest you most and follow the links to more detailed content.

What is a DoD Software Factory?

The short answer: a DoD software factory is an implementation of the DoD Enterprise DevSecOps Reference Design. A slightly longer answer comes from our DoD software factory primer:

A Department of Defense (DoD) Software Factory is a software development pipeline that embodies the principles and tools of the larger DevSecOps movement with a few choice modifications that conform to the extremely high threat profile of the DoD and DIB.

At a glance, a DoD software factory looks like a traditional DevOps pipeline. The difference is that security controls are layered into the environment to automate software component inventory, vulnerability scanning and policy enforcement, meeting the requirements to be considered a DoD software factory.

Got the basics down? Go deeper and learn how Anchore can help you put the Sec into DevSecOps Reference Design by reading our DoD Software Factory Best Practices white paper.

Why do I want to utilize a DoD Software Factory?

For DoD programs, the primary reason to utilize a DoD software factory is that it is a requirement for achieving a continuous authorization to operate (cATO). The cATO standard specifically calls out that software must be developed in a system that meets the DoD Enterprise DevSecOps Reference Design. A DoD software factory is the generic implementation of this design standard.

For Federal Service Integrators (FSIs), the biggest reason to utilize a DoD software factory is that it is a standard approach to meeting DoD compliance and certification standards. By meeting a standard, such as CMMC Level 2, you expand your opportunity to work with DoD programs.

Continuous Authorization to Operate (cATO)

If you’re looking for more information on cATO, Anchore has written a comprehensive guide on navigating the cATO process that can be found on our blog:

DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images

The shift from traditional software delivery to DevSecOps in the Department of Defense (DoD) represents a crucial evolution in how software is built, secured, and deployed with a focus on efficiencies and speed. Our white paper advises on best practices that are setting new standards for security and efficiency in DoD software factories.

Cybersecurity Maturity Model Certification (CMMC)

The CMMC is the certification standard used by the DoD to vet FSIs from the defense industrial base (DIB). It is the gold standard for demonstrating to the DoD that your organization takes security seriously enough to work with the highest standards of any DoD program. The security controls that the CMMC references when determining certification are outlined in NIST SP 800-171, which organizes its requirements into 14 families of security controls, and a DoD software factory can help check a number of these off the list.

The specific families of controls that a DoD software factory helps meet are:

  • Access Control (AC)
  • Audit and Accountability (AU)
  • Configuration Management (CM)
  • Incident Response (IR)
  • Maintenance (MA)
  • Risk Assessment (RA)
  • Security Assessment and Authorization (CA)
  • System and Communications Protection (SC)
  • System and Information Integrity (SI)
  • Supply Chain Risk Management (SR)

If you’re looking for more information on how to apply software supply chain security to meet the CMMC, Anchore has published two blog posts on the topic:

NIST SP 800-171 & Controlled Unclassified Data: A Guide in Plain English

  • NIST SP 800-171 is the canonical list of security controls for meeting CMMC Level 2 certification. Anchore has broken down the entire 800-171 standard to give you an easy-to-understand overview.

Automated Policy Enforcement for CMMC with Anchore Enterprise

  • Policy Enforcement is the backbone of meeting the monitoring, enforcement and reporting requirements of the CMMC. In this blog post, we break down how Anchore Federal can meet a number of the controls specifically related to software supply chain security that are outlined in NIST 800-171.

How do I meet the DevSecOps Reference Design requirements?

The easy answer is to utilize a DoD software factory Managed Service Provider (MSP). In the User Stories section below, we take a deep dive into the US Air Force’s Platform One, given that it is the preeminent DoD software factory.

The DIY answer involves carefully reading and implementing the DoD Enterprise DevSecOps Reference Design. This document is massive but there are a few shortcuts you can utilize to help expedite your journey. 

Container Hardening

Deciding to utilize software containers in a DevOps pipeline is almost a foregone conclusion at this point. What is less well known is how to secure your containers, especially to meet the standards of a DoD software factory.

The DoD has published two guides that can help with this. The first is the DoD Container Hardening Guide, and the second is the Container Image Creation and Deployment Guide. Both name Anchore Federal as an approved container hardening scanner.

Anchore has published a number of blogs and even a white paper that condense the information in both of these guides into more digestible content. See below:

Container Security for U.S. Government Information Systems

  • This comprehensive white paper breaks down how to achieve a container build and deployment system that is hardened to the standards of a DoD software factory.

Enforcing the DoD Container Image and Deployment Guide with Anchore Federal

  • This blog post is great for those who are interested to see how Anchore Federal can turn all of the requirements of the DoD Container Hardening Guide and the Container Image Creation and Deployment Guide into an easy button.

Deep Dive into Anchore Federal’s Container Image Inspection and Vulnerability Management

  • This blog post deep dives into how to utilize Anchore Federal to find container vulnerabilities and alert or report on whether they are violating the security compliance required to be a DoD software factory.

Policy-based Software Supply Chain Security and Compliance

The power of a policy-based approach to software supply chain security is that it can be integrated directly into a DevOps pipeline and automate a significant amount of alerting, reporting and enforcement work. The blog posts below go into depth on how this automated approach to security and compliance can uplevel a DoD software factory:

A Policy Based Approach to Container Security & Compliance

  • This blog details how a policy-based platform works and how it can benefit both software supply chain security and compliance. 

The Power of Policy-as-Code for the Public Sector

  • This follow-up to the post above shows how the policy-based security platform outlined in the first blog post can have significant benefits to public sector organizations that have to focus on both internal information security and how to prove they are compliant with government standards.

Benefits of Static Image Inspection and Policy Enforcement

  • Getting a bit more technical this blog details how a policy-based development workflow can be utilized as a security gate with deployment orchestration systems like Kubernetes.

Getting Started With Anchore Policy Bundles

  • An even deeper dive into what is possible with the policy-based security system provided by Anchore Enterprise, this blog gets into the nitty-gritty on how to configure policies to achieve specific security outcomes.

Unpacking the Power of Policy at Scale in Anchore

  • This blog shows how a security practitioner can extend the security signals that Anchore Enterprise collects with the assistance of a more flexible data platform like New Relic to derive more actionable insights.

Security Technical Implementation Guide (STIG)

The Security Technical Implementation Guides (STIGs) are fantastic technical guides for configuring off-the-shelf software to DoD hardening standards. Anchore, as a company focused on making security and compliance as simple as possible, has written a significant amount about how to use STIGs and achieve STIG compliance, especially for container-based DevSecOps pipelines: exactly the kind of software development environments that meet the standards of a DoD software factory. View our previous content below:

4 Ways to Prepare your Containers for the STIG Process

  • In this blog post, we give you four quick tips to help you prepare for the STIG process for software containers. Think of this as the amuse-bouche to prepare you for the comprehensive white paper that comes next.

Navigating STIG Compliance for Containers

  • As promised, this is the extensive document that walks you through how to build a DevSecOps pipeline based on containers that is both high velocity and secure. Perfect for organizations that are aiming to roll their own DoD software factory.

User Stories

For the past decade, Anchore has been supporting FSIs and DoD programs in building DevSecOps programs that meet the criteria of a DoD software factory. We can write technical guides and best-practices documents until the end of time, but sometimes the best lessons are learned from real-life stories. Below are user stories that fill in the details of how a DoD software factory can be built from scratch:

DoD’s Pathway to Secure Software

  • Join Major Camdon Cady of Platform One and Anchore’s VP of Security, Josh Bressers as they discuss the lessons learned from building a DoD software factory from the ground up. Watch this on-demand webinar to get all of the details in a laid back and casual conversation between two luminaries in their field.

Development at Mach Speed

  • If you prefer a written format over video, this case study highlights how Platform One utilized Red Hat OpenShift and Anchore Federal to build their DoD software factory that has become the leading Managed Service Provider for DoD programs.

Conclusion

Similar to how the cloud has taken over the infrastructure discussion in the enterprise world, DoD software factories are quickly becoming the go-to solution for DoD programs and the FSIs that support them. Delivering on the DevOps movement’s promise of high-velocity development without compromising security, a DoD software factory is the one-stop shop to upgrade your software development practice into the modern age, with compliance as a bonus! If you’re looking for an easy button to infuse your DevOps pipeline with security and compliance without the headache of building it yourself, take a look at Anchore Federal and how it helps organizations layer software supply chain security into a DoD software factory and achieve a cATO.

Navigating the NVD Quagmire

The global cybersecurity community has been in a state of uncertainty since the National Vulnerability Database (NVD) degraded its service starting in mid-February. There has been a lot of coverage of this incident this month, and Anchore has been at the center of much of it. If you haven’t been keeping up, this blog post recaps what has happened so far and how the community has been responding.

Our VP of Security, Josh Bressers, has been leading the charge to educate and organize the community: first with his Open Source Security podcast episode on what is happening with NVD and why it is important, and then last week in a livestream with Chainguard co-founder Dan Lorenc on the Resilient Cyber Show, hosted by Chris Hughes, about the implications of the ongoing delay in NVD service.

We’ve condensed the topics from these resources into a blog post that will cover the issues created by the delay in NVD service, a background on what has happened so far, a potential open-source solution to the problem and a call to action for advocacy. Continue reading for the good stuff.

The problem

Federal agencies mandate that NVD be used as the primary source of truth, even when higher quality data sources may be available. This mainly comes down to the fact that severity scores, meaning the Common Vulnerability Scoring System (CVSS), determine when an agency or organization is out of compliance with a federal security standard. Given that compliance standards are created by the US government, only NVD can score a vulnerability and determine the appropriate action for staying in compliance.

That’s where the problem starts to come in, you’ve got a whole bunch of government agencies on one hand saying, ‘you must use this data’. And then another government agency that says, “No, you can’t rely on this for anything”. This leaves folks working with the government in a bit of a pickle.

–Dan Lorenc, Co-Founder, Chainguard

If NVD isn’t assigning severities to vulnerabilities, it’s not clear what that means for maintaining compliance, and organizations could be exposing themselves to significant risk. For example, high severity vulnerabilities could be published without organizations ever becoming aware of them, because this vital review and scoring process has been removed.

Background on NVD and the current state of affairs

NVD is the canonical source of truth for software vulnerabilities for the federal government, specifically for 10+ federal compliance standards. It has also become a go-to resource for the worldwide security community even if individual organizations in the wider community aren’t striving to meet a United States compliance standard.

NVD adds a number of enrichments to CVE data, but two are of particular importance: first, it adds a severity score to each CVE; second, it adds information about which versions of the software are impacted by the CVE. The National Institute of Standards and Technology (NIST) has been providing this service to the security community for over 20 years through the NVD. That changed last month:

Timeline

  • Feb 12: NVD dramatically reduces the number of CVEs that are being enriched.
  • Feb 15: NVD posts a message about the delay in enrichment on the NVD website.

Read a comprehensive background in our original blog post, National Vulnerability Database: Opaque changes and unanswered questions.

Developing an Open-Source Solution

The Anchore team develops and maintains Grype, an open-source vulnerability scanner that utilizes NVD as one of many vulnerability feeds, as well as Anchore Enterprise, a software supply chain security platform that incorporates Grype. Given that both products use data from NVD, it was particularly important for Anchore to engage in the current crisis.

While there is nothing Anchore can do about the missing severity scores, the other missing enrichment is the mapping of each CVE to the software versions it impacts, recorded as Common Platform Enumeration (CPE) entries. This matching data ends up being the more important signal during impact analysis because it is an objective measure of impact, whereas severity scoring can be debated (and is, at length).
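
For reference, a CPE records the affected vendor, product, and version as a structured identifier; the NVD entry for Log4j 2.14.1, for example, looks roughly like this:

    cpe:2.3:a:apache:log4j:2.14.1:*:*:*:*:*:*:*

Without analysts producing these entries, a scanner cannot reliably match a CVE to the package versions it actually affects.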

Given Anchore’s history with the open-source software community, creating an OSS project to fill a gap in the NVD enrichment seemed the logical choice. The goal of going the OSS route is to leverage the transparent process and rapid iteration that comes from building software publicly. Anchore is excited to collaborate with the community to:

  • Ingest CVE data
  • Analyze CVEs
  • Improve the CVE-to-versioning mapping process 

Everyone is being crushed by the unrelenting influx of vulnerabilities. It’s not just NVD. It’s not one organization. We can either sit in our silos and be crushed to death or we can work together.

–Josh Bressers, VP of Security at Anchore

If you’re looking to utilize this data and software as a backfill while NVD continues delaying analysis, or if you want to contribute to the project, please join us on GitHub.

Cybersecurity Awareness and Advocacy

It might seem strange that the cybersecurity community would need to convince the US government that investing in the cybersecurity ecosystem is a net positive, given that the federal government is the largest purchaser of software in the world and probably the largest target for threat actors. But given how NIST has degraded the service of NVD and provided only opaque guidance on how to fill the gap in the meantime, it doesn’t appear that the right hand is talking to the left.

Whether the federal government intended to or not, by requiring that organizations and agencies utilize NVD in order to meet a number of federal compliance standards, it effectively became the authority on the severity of software vulnerabilities for the global cybersecurity ecosystem. By providing a valuable and reliable service to the community, the US garnered the trust of the ecosystem. The current state of NVD and the manner in which it was rolled out has degraded that trust. 

It is unknown whether the US will cede this authority to another organization; the EU may attempt to fill the vacuum with its own authoritative database. In the meantime, advocacy for cybersecurity awareness within the government is paramount. It is up to the community to create the pressure that demonstrates the urgency of rethinking the current strategy around a vital community resource like NVD.

Conclusion

Anchore is committed to keeping the community up-to-date on this incident as it unfolds. To stay informed, be sure to follow us on LinkedIn or Twitter/X.

If you’d like to watch the livestream in all its glory, click on the image below to go to the VOD.

Also, if you’re looking for more in-depth coverage of the NVD incident, Josh Bressers’ security podcast, Open Source Security, covers the incident and the history of NVD.

Spring Webinar Update: Expand Your Knowledge with Our Expert-Led Sessions

In our continuous effort to bring valuable insights and tools to the world of software supply chain security, we are thrilled to announce two upcoming webinars and one recently held webinar now available for on-demand access. Whether you’re looking to enhance your understanding of software security, explore open-source tools to automate OSS licensing management, or navigate the complexities of compliance with federal standards, our expert-led webinars are designed to equip you with the knowledge you need. Here’s what’s on the agenda:

Tracking License Compliance Made Easy: Intro to Grant (OSS)

Date: Mar 26, 2024 at 2pm EDT (11am PDT)

Join us as Anchore CTO Dan Nurmi and Grant maintainer Christopher Phillips discuss the challenges of managing software licenses within production environments, highlighting the complexity and ongoing nature of tracking open-source licenses.

They will introduce Grant, an open-source tool designed to alleviate the burden of OSS license inspection by demonstrating how to scan for licenses within SBOMs or container images, simplifying a typically manual process. The session will cover the current landscape of software licenses, the difficulties of compliance checks, and a live demo of Grant’s features that automate this previously laborious process.

Software Security in the Real World with Kelsey Hightower and Dan Perry

Date: April 4th, 2024 at 2pm EDT (11am PDT)

In our upcoming webinar, experts Kelsey Hightower and Dan Perry will delve into the nuances of securing software in cloud-native, containerized applications. This in-depth session will explore the criteria for vulnerability testing success or failure, offering insights into security testing and compliance for modern software environments. 

Through a live demonstration of Anchore Enterprise, they’ll provide a comprehensive look at visibility, inspection, policy enforcement, and vulnerability remediation, equipping attendees with a deeper understanding of software supply chain security, proactive security strategies, and practical steps to embark on a software security journey. 

The discussion will continue after the webinar on X/Twitter with Kelsey Hightower.

FedRAMP and SSDF Compliance: How to Sell to the Federal Government

This webinar explores how Anchore aids in navigating the complex compliance requirements for selling software to the federal government, focusing on FedRAMP vulnerability scanning and SSDF compliance. Led by Josh Bressers, VP of Security, and Connor Wynveen, Senior Solutions Engineer, it details how to evaluate FedRAMP controls for software containers and adhere to SSDF guidelines.

Key takeaways include strategies to streamline FedRAMP and SSDF compliance efforts, leveraging SBOMs for efficiency, the critical role of automated vulnerability scans, and how Anchore’s policy pack can assist organizations in meeting compliance standards.

Accessing the Webinars

Don’t miss out on the opportunity to expand your knowledge and skills with these sessions. To register for the upcoming webinars or to access the on-demand webinar, visit our webinar landing page. Whether you’re looking to stay ahead of the curve in software security, explore funding opportunities for your open-source projects, or break into the federal market, our webinars are designed to provide you with the insights and tools you need.

We look forward to welcoming you to our upcoming webinars. Stay informed, stay ahead!

Anchore Enterprise 5.1: Token-Based Authentication

Anchore Enterprise 5.1 adds support for token-based authentication via API keys. An administrator can now create API keys for a user so that the user can authenticate with a key rather than a username and credential. Let’s dive into the details of what this means.

Token-based authentication is a protocol that provides an extra layer of security when users want to access an API. It allows users to verify their identity and, in return, receive a unique access token for the specific service. Tokens have a lifespan, and as long as a token is used within that duration, users can access the application with it without having to continuously log in.

Here is the step-by-step mechanism of a typical token-based authentication protocol (a curl sketch follows the list):

  1. A user sends their credentials to the client.
  2. The client sends the credentials to an authorization server, which generates a unique token for that user’s credentials.
  3. The authorization server sends the token to the client.
  4. The client sends the token to the resource server.
  5. The resource server sends data/resources to the client for the duration of the token’s lifespan.
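
To make the flow concrete, here is a minimal curl sketch of the protocol above. The endpoints, field names, and credentials are illustrative placeholders, not Anchore’s actual API:

    # Steps 1-3: exchange user credentials for a token at the authorization server
    TOKEN=$(curl -s -X POST https://auth.example.com/token \
      -d 'username=alice' -d 'password=example-password' | jq -r '.access_token')

    # Steps 4-5: present the token to the resource server until the token expires
    curl -H "Authorization: Bearer ${TOKEN}" https://api.example.com/resource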

Token-Based Authentication in Anchore 5.1

Now that we’ve laid the groundwork, in the following sections we’ll walk through how to create API keys and use them in AnchoreCTL.

Creating API Keys

In order to generate an API key, navigate to the Enterprise UI, click the button at the top right, and select ‘API Keys’.

Clicking ‘API Keys’ will present a dialog that lists your active, expired, and revoked keys.

To create a new API key, click ‘Create New API Key’ at the top right. This opens another dialog that asks for the relevant details for the API key.

You can specify the following fields:

  • Name: The name of your API key. This is mandatory and the name should be unique (you cannot have two API keys with the same name).
  • Description: An optional text descriptor for your API key.
  • Expiry Date: An expiry date for your API key. You cannot specify a date in the past, and by default the expiry cannot exceed 365 days. This is the lifespan you are configuring for your token.

Click Save and the UI will generate a Key Value and display the result of the operation.

NOTE: Make sure you copy the Key Value, as there is no way to retrieve it once you close this window.

Revoking API Keys

If you suspect that an API key has been compromised, you can revoke the active key, which prevents it from being used for authentication. To revoke a key, click the ‘Revoke’ button next to it.

NOTE: Be careful when revoking a key, as this is an irreversible operation, i.e., you cannot mark it active later.

The UI displays only active API keys by default. To see your revoked and expired keys as well, turn off the ‘Show only active API keys’ toggle at the top right.

Managing API Keys as an Admin

As an account admin, you can manage API keys for all users in the account you administer. A global admin can manage API keys across all accounts and all users.

To access API keys as an admin, click the ‘System’ icon and navigate to ‘Accounts’.

Click ‘Edit’ for the account you want to manage keys for, then click the ‘Tools’ button next to the user whose keys you wish to manage.

Using API Keys in AnchoreCTL

Generating API Keys as a SAML (SSO) User

API keys for SAML (SSO) users are disabled by default. To enable API keys for SAML users, please update your helm chart values file with the following:

    user_authentication:
      allow_api_keys_for_saml_users: true

NOTE: API keys are an additional authentication mechanism for SAML users that bypasses the authentication controls of the IDP. When access is revoked at the IDP, the user is not automatically disabled in Anchore, nor are the user’s API keys revoked. Therefore, when access has been revoked for a user, the system admin is responsible for manually deleting the Anchore user or revoking any API keys created for that user.

Using API Keys

API keys are authenticated using basic auth. To use an API key, pass the special username _api_key and, as the password, the Key Value that was displayed when you created the API key:

    curl -u '_api_key:<API key value>' http://localhost:8228/v2/images

To use an API key with AnchoreCTL, set the same values in your AnchoreCTL configuration:

    url: "http://localhost:8228"
    username: "_api_key"
    password: <API Key Value>

Caveats for API Keys

API keys generally inherit the permissions and roles of the user they were generated for, but there are certain operations you cannot perform using API keys, regardless of that user’s privileges:

  • You cannot Add/Edit/Remove Accounts, Users and Credentials.
  • You cannot Add/Edit/Remove Roles and Role Members.
  • You cannot Add/Edit/Revoke API Keys.

We invite you to learn more about Anchore Enterprise with a free 15-day trial. Or, if you’ve got other questions, set up a call with one of our specialists.

Learn more from Anchore:

  1. User Management in Anchore Enterprise 
  2. User Authentication with API Keys
  3. AnchoreCTL Configurations 

Introducing VIPERR: The First Software Supply Chain Security Framework for All

Today Anchore announces the VIPERR software supply chain security framework. This framework distills our lessons learned from supporting the most challenging software supply chain environments across government agencies and the defense industrial base. It is a blueprint that organizations can implement to reliably create secure software supply chains with the least possible lift.

Previously, security teams had to wade through massive amounts of literature on software supply chain security and spend countless hours digesting those learnings into concrete processes and controls that integrate with their specific software development process. This absorbed large amounts of time and resources, and it wasn’t always clear at the end that an organization’s security had improved significantly.

Now organizations can utilize the VIPERR framework as a trusted industry resource to confidently improve their security posture and reduce the possibility of a breach due to a supply chain incident without the lengthy research cycle that frequently comes before the implementation phase of a software supply chain solution.

If you’re interested in seeing how Anchore works with customers to apply the framework via the Anchore Enterprise platform, take the free guided walkthrough of the VIPERR framework. Alternatively, you can view our on-demand webinar for a thorough walkthrough of the framework. Finally, if you would like a more hands-on experience with the VIPERR framework, you can try the Anchore Enterprise free trial.

Frequently Asked Questions

What is the VIPERR framework?

VIPERR is a free software supply chain security framework that Anchore created for organizations to evaluate and improve the security posture of their software supply chain. VIPERR stands for visibility, inspection, policy enforcement, remediation, and reporting. 

While working alongside developers and security teams within some of the most challenging architectures and threat landscapes, Anchore field engineers developed the VIPERR framework to both contextualize lessons learned and prevent live threats. VIPERR is meant to be a practical self-assessment playbook that organizations can regularly reuse to evaluate and harden their software supply chain security posture. By following this guide, numerous Fortune 500 enterprises and top federal agencies have transformed their software supply chain security posture and become hard targets for advanced persistent threats.

Why did Anchore create the VIPERR framework?

There are already a number of frameworks designed to help organizations improve their software supply chain security, but most of them give general guidance that is flexible enough that almost any specific implementation will yield compliance. This is great for general standards because it allows organizations to find the best fit for their environment, but keeping guidance general makes it hard to know which specific implementation delivers real-world results. The VIPERR framework was designed to fill this gap.

VIPERR is an opinionated framework: it can be used to fulfill software supply chain security compliance standards while prescribing how to achieve the controls of most of the popular standards. It can be paired with Anchore’s turnkey offering, Anchore Enterprise, so that organizations can opt for a solution that accomplishes the entire VIPERR framework without having to build a system from scratch.

Access an interactive 50-point checklist or a PDF version to guide you through each step of the VIPERR framework, or share it with your team to introduce the model for learning and awareness.

How do I begin identifying the gaps in my software supply chain security program? 

“I have no budget. My boss doesn’t think it’s a priority. I lack resources.” These are all common refrains when we speak with security teams working with us via our open source community or commercial enterprise offering. There is a ton of guidance available between SSDF, SLSA, NIST, and S2C2F, but a lot of it is contextualized in a manner that is difficult to digest. As mentioned in the previous question, VIPERR was created to be highly actionable by striking the right balance between flexible guidance and opinions that reduce options, helping organizations make decisions faster.

The VIPERR framework will be available as a formatted 50-point self-assessment checklist in the coming weeks; check back here for updates. By completing the forthcoming checklist, you will produce a prioritized list of action items to harden your organization’s software supply chain security with the least amount of effort.

How do I build a solution that closes the gaps that VIPERR uncovers?

As stated, VIPERR is a framework, not a solution. Anchore has worked with companies that have implemented VIPERR by building an in-house solution from a collection of open source tools (e.g., Syft and Grype) or by combining multiple security tools. If you want to get an idea of the components involved in building a solution by self-hosting open-source tools and tying these systems together with first-party code, we wrote about that approach here.

If I don’t want to build a solution, are there any turnkey solutions available?

Yes. Anchore Enterprise was designed as a turnkey solution to implement the VIPERR framework. Anchore Enterprise also automates the 50 security controls of the framework by integrating directly into an organization’s SDLC toolset (i.e., developer environments, CI/CD build pipelines, artifact registry and production environments). This provides the ability for organizations to know at any point in time if their software supply chain has been compromised and how to remediate the exploit.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Anchore Enterprise 5.0: New, Free Self-Service Trial

This week we’re proud to announce the immediate availability of the Anchore Enterprise 5.0 free trial. If you’ve been curious about how the platform works, or wondering how it can complement your application security measures, you can now get access to a 15-day free trial.


To get started, just click here and fill out a short form; you will immediately receive instructions via email on how to spin up your free 15-day trial in your own AWS account. Please note that only AWS is supported at this time, so if you’d like to launch a trial on-prem or with another cloud provider, please reach out to us directly.

With just a few clicks, you’ll be up and running with the latest release of Anchore Enterprise, which includes a new API, improvements to our reporting interface, and much more. In fact, we have pre-populated the trial with data that will allow you to explore the many features Anchore Enterprise has to offer, including malware scanning, the Kubernetes runtime integration, and vulnerability reports.

We invite you to learn more about Anchore Enterprise 5.0 with a free 15-day trial here. Or, if you’ve got other questions, set up a call with one of our specialists here.

SBOMs & Vulnerability Scanners: Better Together

In the world of software development, two mega-trends have emerged in the past decade that have reshaped the industry: first, the practice of building applications on a foundation of open-source software components and, second, the adoption of DevOps principles to automate the build and delivery of software. While these innovations have accelerated the pace of software getting into the hands of users, they’ve also introduced new challenges, particularly in the realm of security.

As software teams race to deliver applications at breakneck speeds, security often finds itself playing catch-up, leading to potential vulnerabilities and risks. But what if there was a way to harmonize rapid software delivery with robust security measures? 

In this post, we’ll explore the tension between engineering and security, the transformative role of Software Bill of Materials (SBOMs), and how modern approaches to software composition analysis (SCA) are paving the way for a secure, efficient, and integrated software development lifecycle.

The rise of open-source software ushered in an era where developers had innumerable off-the-shelf components from which to construct their applications. These building blocks eliminated the need to reinvent the wheel, allowing developers to focus on innovating on top of the foundation already built by others. By leveraging pre-existing, community-tested components, software teams could drastically reduce development time, ensuring faster product releases and more efficient engineering cycles. However, this boon also brought about a significant challenge: blind spots. Developers often found themselves unaware of all the ingredients that made up their software.

Enter the second mega-trend: DevOps tooling, with special emphasis on CI/CD build pipelines. These tools promised (and delivered) faster, more reliable software testing, building, and delivery. Not only was the creation of software accelerated via open-source components, but the build process of manufacturing the software into a state that a user could consume was also sped up. But, as Uncle Ben reminds us, “with great power comes great responsibility”. Accelerated delivery meant that any security issues, especially those lurking in the blind spots, found their way into production at that same accelerated pace.

The Strain on Legacy Security Tools in the Age of Rapid Development

This double shot of productivity boosts for engineering teams began to strain their security-oriented counterparts. The legacy security tools that security teams had been relying on were designed for a different era, when software development lifecycles were measured in quarters or years rather than weeks or months. Because of this, they could afford to be leisurely with their process.

The tools originally developed to ensure that an application’s supply chain was secure were called software composition analysis (SCA) platforms. They began as a method for scanning open source software for licensing information, to prevent corporations from running into legal issues as their developers used open-source components. They scanned every software artifact in its entirety—a painstakingly slow process, especially if you wanted to run a scan during every step of software integration and delivery (e.g., source, build, stage, delivery, production).

As the wave of open-source software and DevOps principles took hold, a tug-of-war began to form between security teams, who wanted thoroughness, and software teams, who were racing against time. Organizations found themselves at a crossroads, choosing between slowing down software delivery to manage security risks or pushing ahead and addressing security issues reactively.

SBOMs to the Rescue!

But what if there was a way to bridge this gap? Enter the Software Bill of Materials (SBOM). An SBOM is essentially a comprehensive list of components, libraries, and modules that make up a software application. Think of it as an ingredient list for your software, detailing every component and its origin.
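
To make that concrete, each SBOM entry records a component’s name, version, and origin. In the CycloneDX JSON format, for example, a single component looks roughly like this (the values are illustrative):

    {
      "type": "library",
      "name": "log4j-core",
      "version": "2.14.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
    }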

In the past, security teams had to scan each software artifact during the build process for vulnerabilities, a method that was not only time-consuming but also less efficient. With the sheer volume and complexity of modern software, this approach was akin to searching for a needle in a haystack.

SBOMs, on the other hand, provide a clear and organized view of all software components. This clarity allows security teams to swiftly scan their software component inventory, pinpointing potential vulnerabilities with precision. The result? A revolution in the vulnerability scanning process. Faster scans mean more frequent checks, and with the ability to re-scan their entire catalog of applications whenever a new vulnerability is discovered, organizations stay a step ahead, ensuring they’re not just reactive but proactive in their security approach.

In essence, organizations could now enjoy the best of both worlds: rapid software delivery without compromising on security. With SBOMs, the balance between speed and security isn’t just achievable; it’s the new standard.

How do I Implement an SBOM-powered Vulnerability Scanning Program?

Okay, we have the context (i.e., the history of how the problem came about) and we have a solution; the next question becomes: how do you bring this all together and integrate this vision of the future with the reality of your software development lifecycle?

Below, we outline the high-level steps an organization might follow to adopt this solution into its software integration and delivery processes:

  1. Research and select the best SBOM generation and vulnerability scanning tools. (Hint: We have some favorites!)
  2. Educate your developers about SBOMs. Need guidance? Check out our detailed post on getting started with SBOMs.
  3. Store the generated SBOMs in a centralized repository.
  4. Create a system to pull vulnerability feeds from reputable sources. If you need a starting point, read our post on how to get started.
  5. Regularly scan your catalog of SBOMs for vulnerabilities, storing the results alongside the SBOMs.
  6. Integrate your SBOM generation and vulnerability scanning tooling into your CI/CD build pipeline to automate this process (see the sketch after this list).
  7. Implement a query system to extract insights from your catalog of SBOMs.
  8. Create a tool to visualize your software supply chain’s security health.
  9. Create a system to alert on newly discovered vulnerabilities in your application ecosystem.
  10. Integrate a policy enforcement system into your developers’ workflows, CI/CD pipelines, and container orchestrators to automatically prevent vulnerabilities from leaking into build or production environments.
  11. Maintain the entire system and continue to improve it as new vulnerabilities are discovered, new technologies emerge, and development processes evolve.
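
As a minimal sketch of steps 5 and 6, here is what an SBOM-generation-plus-scan stage might look like using Anchore’s open-source tools Syft and Grype; the image name and file paths are placeholders:

    # Generate an SBOM for a container image and store it centrally
    syft registry.example.com/myapp:1.2.3 -o cyclonedx-json > sboms/myapp-1.2.3.json

    # Re-scan the stored SBOM whenever the vulnerability feeds update
    grype sbom:./sboms/myapp-1.2.3.json

    # In CI, fail the build when findings cross a severity threshold
    grype sbom:./sboms/myapp-1.2.3.json --fail-on high

Scanning the stored SBOM rather than the original image is what makes step 5 cheap enough to run against your entire application catalog on every feed update.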

Alternatively, consider investing in a comprehensive platform that offers all these features, either as a SaaS or on-premise solution, instead of building the entire system yourself. If you need some guidance determining whether it makes more sense to build or buy, we have put together a post outlining the key signs to watch for when considering whether to outsource this function.

How Anchore can Help you Achieve your Vulnerability Scanning Dreams

The previous section is a bit tongue-in-cheek but it is also a realistic portrait of how to build a scalable vulnerability scanning program in the Cloud Native-era. Open-source software and container pipelines have changed the face of the software industry for the better but as with any complex system there are always unintended side effects. Being able to deliver software more reliably at a faster cadence was an amazing step forward but doing it securely got left behind. 

Anchore Enterprise was built specifically to address this challenge. It is the manifestation of the list of steps outlined in the previous section on how to build an SBOM-powered software composition analysis (SCA) platform. Integrating into your existing DevOps tools, Anchore Enterprise is a turnkey solution for the management of software supply chain security. If you’d rather buy than build and save yourself the blood, sweat and tears that goes into designing an end-to-end SCA platform, we’re looking forward to talking to you.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Detecting Exploits within your Software Supply Chain

SBOMs. What are they good for? At Anchore, we see SBOMs (software bills of materials) as the foundation of an application’s supply chain hierarchy. Upon this foundation you can build a lot of powerful features, such as the ability to detect vulnerabilities in your open source dependencies before they are pushed to production. An unintended side effect of giving users the power to easily see deeply into their application’s dependencies and detect the vulnerabilities in those dependencies is that hundreds of vulnerabilities can sometimes be discovered in the process.

We’ve seen customer applications that generate 400+ known vulnerabilities! This creates an information overload that typically ends with the application developer ignoring the results, because it is too much effort to triage and remediate each one. Knowing that an application is riddled with vulnerabilities is better than not knowing, but excessive information does not lead to actionable insights.

Anchore Enterprise solves this challenge by pairing vulnerability data (e.g., CVEs) with exploit data (e.g., KEV). By combining these two data sources, we can create actionable insight by showing users both the vulnerabilities in their applications and which of those vulnerabilities are actually being exploited. Actively exploited vulnerabilities are significantly higher risk and can be prioritized for triage and remediation first. In this blog post, we’ll discuss how we do that and how it can save both your security team and application developers time.

How Does Anchore Enterprise Help You Find Exploits in Your Application Dependencies?

What is an Exploited Vulnerability?

“Exploited” is an important distinction because it means that not only does a vulnerability exist but a payload also exists that can reliably trigger the vulnerability and cause an application to execute unintended functionality (e.g. leaking all of the contents of a database or deleting all of the data in a database). For instance, almost all bank vaults in the world are vulnerable to an asteroid strike “deleting” all of the contents of the safe but no one has developed a system to reliably cause an asteroid to strike bank vaults. Maybe Elon Musk can make this happen in a few more years but today this vulnerability isn’t exploitable. It is important for organizations to prioritize exploited vulnerabilities because the potential for damage is significantly greater.

Source High-Quality Data on Exploits

In order to find vulnerabilities that are exploitable, you need high-quality data from security researchers who are either crafting exploits for known vulnerabilities or analyzing attack data for payloads that trigger an exploit in a live application. Thankfully, there are two exceedingly high-quality databases that publish this information publicly and regularly: the Known Exploited Vulnerabilities (KEV) Catalog and the Exploit Database (Exploit-DB).

The KEV Catalog is a database of known exploited vulnerabilities that is published and maintained by the US government through the Cybersecurity and Infrastructure Security Agency, CISA. It is updated regularly; they typically add 1-5 new KEVs every week. 

While not an exploit database itself, the National Vulnerability Database (NVD) is an important source of exploit data because it checks all of the vulnerabilities that it publishes and maintains against the Exploit-DB and embeds the relevant identifiers when a match is found.

Anchore Enterprise ingests both of these data feeds and stores the data in a centralized repository. Once this data is structured and available to your organization it can then be used to determine which applications and their associated dependencies are exploitable.
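
For a sense of what this feed looks like, the KEV Catalog is published as a plain JSON file that can be pulled with standard tooling (the URL below is current as of this writing):

    # Download the CISA KEV catalog and list the CVE IDs it tracks
    curl -s https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json \
      | jq -r '.vulnerabilities[].cveID'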

Map Data on Exploits to Your Application Dependencies

Now that you have a quality source of data on known exploited vulnerabilities, you need to determine whether any of these exploits exist in your applications and/or the dependencies they are built with. The industry-standard method for storing information on applications and their dependency supply chain is a software bill of materials (SBOM).

After you have an SBOM for your application you can then cross-reference the dependencies against both a list of known vulnerabilities and a list of known exploited vulnerabilities. The output of this is a list of all of the applications in your organization that are vulnerable to exploits.

If done manually, via something like a spreadsheet, this quickly becomes a tedious process. Anchore Enterprise automates SBOM management by generating SBOMs for all of your applications and running scans of those SBOMs against vulnerability and exploit databases.
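
The cross-reference itself is mechanical. Here is a rough sketch of the logic using Grype’s JSON output and the KEV file from the previous section; the file names are placeholders:

    # CVE IDs found in the application's SBOM
    grype sbom:./sboms/myapp-1.2.3.json -o json \
      | jq -r '.matches[].vulnerability.id' | sort -u > app-cves.txt

    # CVE IDs with known exploits
    jq -r '.vulnerabilities[].cveID' known_exploited_vulnerabilities.json \
      | sort -u > kev-cves.txt

    # The intersection: exploited vulnerabilities present in this application
    comm -12 app-cves.txt kev-cves.txt

Anchore Enterprise runs this kind of join continuously and across every application in the organization, rather than one spreadsheet at a time.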

How Does Anchore Enterprise Help You Prioritize Remediation of Exploits in Your Application Dependencies?

Once we’ve used Anchore Enterprise to detect CVEs in our containers that are also exploitable per the KEV or Exploit-DB lists, we can then consider the severity score again with more contextual evidence. We need to know two things for each finding: what is its severity, and can we accept the risk associated with leaving that vulnerable code in the application or container?

If we look back to the Log4j event in December of 2021, that particular vulnerability scored a 10 on the CVSS. That score alone provides little detail on how dangerous the vulnerability is. If a CVE is discovered against any given piece of software and the NVD researchers cannot reach the authors of the code, it is assigned a score of 10 and the worst case is assumed.

However, if we have applied our KEV and Exploit-DB bundles and determined that we do indeed have a critical vulnerability with active known exploits and evidence that it is being exploited in the wild, and the severity exceeds our personal or organizational risk thresholds, then we know that we need to take action immediately.

Everyone has questioned the utility of the SBOM, but Anchore Enterprise is making that question an afterthought. Moving past the basics of generating SBOMs and detecting CVEs, Anchore Enterprise automatically maps exploit data to specific packages in your software supply chain, allowing you to generate reports and notifications for your teams. By analyzing this higher quality information, you can determine which vulnerabilities actually pose a threat to your organization and, in turn, make more intelligent decisions about which to fix and which to accept, saving your organization time and money.

Wrap Up

Returning to our original question, “what are SBOMs good for”? It turns out the answer is scaling the process of finding and prioritizing vulnerabilities in your organization’s software supply chain.

In today’s increasingly complex software landscape, the importance of securing your application’s supply chain cannot be overstated. Traditional SBOMs have empowered organizations to identify vulnerabilities but often left them inundated with too much information, rendering the data less actionable. Anchore Enterprise revolutionizes this process by not only automating the generation of SBOMs but also cross-referencing them against reputable databases like KEV Catalog and Exploit-DB to isolate actively exploited vulnerabilities. By focusing on the vulnerabilities that are actually being exploited in the wild, your security team can prioritize remediation efforts more effectively, saving both time and resources. 

Anchore Enterprise moves beyond merely detecting vulnerabilities to providing actionable insights, enabling organizations to make intelligent decisions on which risks to address immediately and which to monitor. Don’t get lost in the sea of vulnerabilities; let Anchore Enterprise be your compass in navigating the choppy waters of software security.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Automated Policy Enforcement for CMMC with Anchore Enterprise

The Cybersecurity Maturity Model Certification (CMMC) is an important program to harden the cybersecurity posture of the defense industrial base. Its purpose is to validate that appropriate safeguards are in place to protect controlled unclassified information (CUI). Many of the organizations that are required to comply with CMMC are Anchore customers. They have the responsibility to protect the sensitive, but not classified, data of US military and government agencies as they support the various missions of the United States.

CMMC 2.0 Levels

  • Level 1 Foundation: Safeguard federal contract information (FCI); not critical to national security.
  • Level 2 Advanced:  This maps directly to NIST Special Publication (SP) 800-171. Its primary goal is to ensure that government contractors are properly protecting controlled unclassified information (CUI).
  • Level 3 Expert: This maps directly to NIST Special Publication (SP) 800-172. Its primary goal is to go beyond the base-level security requirements defined in NIST 800-171. NIST 800-172 provides security requirements that specifically defend against advanced persistent threats (APTs).

This is of critical importance as these organizations leverage commonplace DevOps tooling to build their software. Additionally, these large organizations may be working with smaller subcontractors or suppliers who are building software in tandem or partnership.

For example, suppose a mega-defense contractor is working alongside a small mom-and-pop shop to develop software for a classified government system. This raises a lot of questions:

  1. How can my company, as a mega-defense contractor, validate that software built by my partner is not using blacklisted software packages?
  2. How can my company validate software supplied to me is free of malware?
  3. How can I validate that the software supplied to me is in compliance with licensing standards and vulnerability compliance thresholds of my security team?
  4. How do I validate that the software I’m supplying is compliant not only with NIST 800-171 and CMMC, but also with the compliance standards of my government end user (such as NIST 800-53 or NIST 800-161)?

Validating Security between DevSecOps Pipelines and Software Supply Chain

At major and small contractors alike, everyone has taken steps to build internal DevSecOps (DSO) pipelines. However, the defense industrial base (DIB) commonly involves relationships in which smaller defense contractors supply software to a larger defense contractor whose program or DSO pipeline consumes and implements that software. With Anchore Enterprise, we can now validate whether supplied software is compliant with CMMC controls as specified in NIST 800-171.

Looking to learn more about how to achieve CMMC Level 2 or NIST 800-171 compliance? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.

Which Controls does Anchore Enterprise Automate?

3.1.7 – Restrict Non-Privileged Users and Log Privileged Actions

Related NIST 800-53 Controls: AC-6 (10)

Description: Prevent non-privileged users from executing privileged functions and capture the execution of such functions in audit logs. 

Implementation: Anchore Enterprise can scan the container manifests to determine if the user is being given root privileges and implement an automated policy to prevent build containers from entering a runtime environment. This prevents a scenario where any privileged functions can be utilized in a runtime environment.
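
As an illustration, here is an abbreviated rule in the style of the open-source Anchore policy bundle format; the snippet is a sketch rather than a complete, drop-in bundle:

    {
      "gate": "dockerfile",
      "trigger": "effective_user",
      "action": "STOP",
      "params": [
        { "name": "users", "value": "root" },
        { "name": "type", "value": "blacklist" }
      ]
    }

A STOP action fails policy evaluation for the image, which a CI/CD gate can use to block promotion into a runtime environment.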

3.4.1 – Maintain Baseline Configurations & Inventories

Related NIST 800-53 Controls: CM-2(1), CM-8(1), CM-6

Description: Establish and maintain baseline configurations and inventories of organizational systems (including hardware, software, firmware, and documentation) throughout the respective system development life cycles.

Implementation: Anchore Enterprise provides a centralized inventory of all containers and their associated manifests at each stage of the development pipeline. All manifests, images and containers are automatically added to the central tracking inventory so that a complete list of all artifacts of the build pipeline can be tracked at any moment in time.

3.4.2 – Enforce Security Configurations

Related NIST 800-53 Controls: CM-2 (1) & CM-8(1) & CM-6

Description: Establish and enforce security configuration settings for information technology products employed in organizational systems.

Implementation: Anchore Enterprise scans all container manifest files for security configurations and publishes found vulnerabilities to a centralized database that can be used for monitoring, ad-hoc reporting, alerting, and/or automated policy enforcement.

3.4.3 – Monitor and Log System Changes with Approval Process

Related NIST 800-53 Controls: CM-3

Description: Track, review, approve or disapprove, and log changes to organizational systems.

Implementation: Anchore Enterprise provides a centralized dashboard that tracks all changes to applications which makes scheduled reviews simple. It also provides an automated controller that can apply policy-based decision making to either automatically approve or reject changes to applications based on security rules.

3.4.4 – Run Security Analysis on All System Changes

Related NIST 800-53 Controls: CM-4

Description: Analyze the security impact of changes prior to implementation.

Implementation: Anchore Enterprise can scan changes to applications for security vulnerabilities during the build pipeline to determine the security impact of the changes.

3.4.6 – Apply Principle of Least Functionality

Related NIST 800-53 Controls: CM-7

Description: Employ the principle of least functionality by configuring organizational systems to provide only essential capabilities.

Implementation: Anchore Enterprise can scan all applications to ensure that they are uniformly applying the principle of least functionality to individual applications. If an application does not meet this standard then Anchore Enterprise can be configured to prevent an application from being deployed to a production environment.

3.4.7 – Limit Use of Nonessential Programs, Ports, and Services

Related NIST 800-53 Controls: CM-7(1), CM-7(2)

Description: Restrict, disable, or prevent the use of nonessential programs, functions, ports, protocols, and services.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for specific security violations and prevent these applications from being deployed until the violations are remediated.

3.4.8 – Implement Blacklisting and Whitelisting Software Policies

Related NIST 800-53 Controls: CM-7(4), CM-7(5)

Description: Apply deny-by-exception (blacklisting) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (whitelisting) policy to allow the execution of authorized software.

Implementation: Anchore Enterprise can be configured as a gating agent that will apply a security policy to all scanned software. The policies can be configured in a black- or white-listing manner.

3.4.9 – Control and Monitor User-Installed Software

Related NIST 800-53 Controls: CM-11

Description: Control and monitor user-installed software.

Implementation: Anchore Enterprise scans all software in the development pipeline and records all user-installed software. The scans can be monitored in the provided dashboard. User-installed software can be controlled (allowed or denied) via the gating agent.

3.5.10 – Store and Transmit Only Cryptographically-Protected Passwords

Related NIST 800-53 Controls: IA-5(1)

Description: Store and transmit only cryptographically-protected passwords.

Implementation: Anchore Enterprise can scan for plain-text secrets in build artifacts and prevent exposed secrets from being promoted to the next environment until the violation is remediated. This prevents unauthorized storage or transmission of unencrypted passwords or secrets.

3.11.2 – Scan for Vulnerabilities

Related NIST 800-53 Controls: RA-5, RA-5(5)

Description: Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified.

Implementation: Anchore Enterprise is designed to scan all systems and applications for vulnerabilities continuously and alert when any changes introduce new vulnerabilities.

3.11.3 – Remediate Vulnerabilities Respective to Risk Assessments

Related NIST 800-53 Controls: RA-5, RA-5(5)

Description: Remediate vulnerabilities in accordance with risk assessments.

Implementation: Anchore Enterprise can be tuned to allow or deny changes based on a risk scoring system.

3.12.2 – Implement Plans to Address System Vulnerabilities

Related NIST 800-53 Controls: CA-5

Description: Develop and implement plans of action designed to correct deficiencies and reduce or eliminate vulnerabilities in organizational systems.

Implementation: Anchore Enterprise automates the process of ensuring all software and systems are in compliance with the security policy of the organization. 

3.13.4 – Block Unauthorized Information Transfer via Shared Resources

Related NIST 800-53 Controls: SC-4

Description: Prevent unauthorized and unintended information transfer via shared system resources.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for unauthorized and unintended information transfer and prevent violations from being transferred between shared system resources until the violations are remediated.

3.13.8 – Use Cryptography to Safeguard CUI During Transmission

Related NIST 800-53 Controls: SC-8

Description: Transmission Confidentiality and Integrity: Implement cryptographic mechanisms to prevent unauthorized disclosure of CUI during transmission unless otherwise protected by alternative physical safeguards.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for CUI and prevent violations of organization defined policies regarding CUI from being disclosed between systems.

3.14.5 – Periodically Scan Systems and Real-time Scan External Files

Related NIST 800-53 Controls: SI-2

Description: Perform periodic scans of organizational systems and real-time scans of files from external sources as files are downloaded, opened, or executed.

Implementation: Anchore Enterprise can be configured to scan all external dependencies that are built into software and provide information about relevant security vulnerabilities in the software development pipeline.

Wrap-Up

In a world increasingly defined by software solutions, the cybersecurity posture of defense-related industries stands paramount. The CMMC, a framework with its varying levels of compliance, underscores the commitment of the defense industrial base to fortify its cyber defenses. 

As a multitude of organizations, ranging from the largest defense contractors to smaller mom-and-pop shops, work in tandem to support U.S. missions, the intricacies of maintaining cybersecurity standards grow. The questions posed exemplify the necessity to validate software integrity, especially in complex collaborations. 

Anchore Enterprise solves these problems by automating software supply chain security best practices. It not only automates a myriad of crucial controls, ranging from user privilege restrictions to vulnerability scanning, but it also empowers organizations to meet and exceed the benchmarks set by CMMC and NIST. 

In essence, as defense entities navigate the nuanced web of software development and partnerships, tools like Anchore Enterprise are indispensable in safeguarding the nation’s interests, ensuring the integrity of software supply chains, and championing the highest levels of cybersecurity.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Four Signs You’re Ready to Upgrade from DIY Supply Chain Security to Anchore Enterprise

Build versus buy is always a complex decision for most organizations. Typically there is a tipping point that is hit when the friction of building and running your own tooling outweighs the cost benefits of abstaining from adding yet another vendor to your SaaS bill. The signals that point to when an organization is approaching this moment varies based on the tool you’re considering.

In this blog post, we will outline some of the common signals that your organization is approaching this event for managing software supply chain risk. Whether your developers have self-adopted software development best practices like creating software bills of materials (SBOMs) and you’re now drowning in an ocean of valuable but scattered security data, or you’re ready to start scaling your shift left security strategy across your entire software development life cycle, we will cover these scenarios and more.

Challenge Type: Scaling SBOM Management

Managing SBOMs is getting out of hand. Each day there is more SBOM data to sort and store. SBOM generation is by far the easiest capability to implement today. It’s free, extremely lightweight (low learning curve for engineers to adopt, unlike some enterprise products), and it’s fast…blazing fast! As a result of this, teams can quickly generate hundreds, thousands (even millions!) of SBOMs over the course of a fiscal year. This is great from a data security perspective but creates its own problems.

Once the friction of creating SBOMs becomes trivial, teams typically struggle to find good ways to store and manage all of this new data. Questions arise about how long to retain the data, how to query it for security-related issues, and how to integrate it with third-party tooling to glean actionable security insights. Once teams have fully adopted SBOM generation in a few areas, it is a good practice to consider the best way to manage the data so your developers’ time is not in vain.

Anchore Enterprise helps in a variety of ways: it not only manages SBOMs but also detects SBOM drift in the build process and alerts security teams to changes in SBOMs so they can be assessed for risk or malicious activity.

Challenge Type: Regulatory Compliance

Let’s say that you just got a massive policy compliance mandate dropped in your lap by your manager. It’s your job to implement the required controls within the allotted deadline, and you’re not sure where to start.

As we’ve talked about in other posts, meeting compliance standards is more than a full-time job. Organizations have to make the decision to either DIY compliance or work with third parties that have expertise in specific standards. With the debut of revision 5 of NIST 800-53, the “Control Catalog”, more and more compliance standards require companies to implement controls that specifically address software supply chain security. This is due to the fact that many federal compliance standards build off of the “Control Catalog” as the source of truth for secure IT systems.

Whether it’s FedRAMP, a compliance framework related to NIST 800-53, or something as simple as a CIS benchmark, Anchore can help. The Anchore Enterprise SBOM management solution offers automated policy enforcement across your software supply chain, enforcing compliance frameworks on your source code repos, images in development, and runtime Kubernetes clusters.

Challenge Type: Zero-Day Response

When a zero-day vulnerability is discovered, how do you answer the question “Am I vulnerable?” Depending on how well you have structured your security practice, answering that question can take anywhere from an hour to a week or more, and the longer that window, the more risk your organization accrues. Once a zero-day incident occurs, it is very easy to spot the organizations that are prepared and those that are not.

If you haven’t figured it out yet, the retention and centralized management of SBOMs is probably one of the most useful tools in a modern incident response plan for identifying and triaging zero-day incidents. Even though software teams are empowered to make decentralized decisions, they can still adhere to security principles that benefit from a centralized data store. This kind of centralization allows organizations to answer critical questions with speed at critical moments in the life of an organization.

Anchore Enterprise helps answer the question “Am I vulnerable?” in minutes rather than days or weeks. By creating a centralized store of software supply chain data (via SBOMs), Anchore Enterprise allows organizations to quickly query this information and get back precise answers on whether a vulnerable package exists within the organization and exactly where to focus remediation efforts. We also provide hands-on training that takes our customers through tabletop exercises in a controlled environment. By simulating a zero-day incident, we test how well an organization is prepared to handle an uncontrolled threat environment and identify the gaps that could lead to extended uncertainty.
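
As a trivial illustration of why the centralized SBOM store matters, consider the Log4Shell-era question “where do we run log4j-core?” asked against a directory of stored SBOMs; the paths and package name are placeholders:

    # Crude: which stored (pretty-printed) SBOM files mention the affected package?
    grep -l '"name": "log4j-core"' sboms/*.json

    # Or re-scan every stored SBOM now that the new CVE is in the feeds
    for s in sboms/*.json; do grype "sbom:${s}"; done

Anchore Enterprise answers the same question with an indexed query across all stored SBOMs, plus alerting and reporting, instead of a filesystem crawl.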

Challenge Type: Scaling a Shift Left Security Culture 

The shift left security movement is based on the principle that organizations can preempt security incidents by implementing secure development practices earlier in the software development lifecycle. The problem with this approach arises as you attempt to scale it: every gate you add to catch security vulnerabilities earlier in the life cycle slows the software development process and requires more security resources.

In order to scale shift left security practices, organizations need to adopt software-based solutions that automate these checks and allow developers to self-diagnose and remediate vulnerabilities without significant intervention from the security team. The earlier in the software development process that vulnerabilities are caught, the faster secure software can be shipped.

Anchore enables organizations to scale their shift left security strategy by automating security checks at multiple points in the development life cycle. On top of that, due to the speed at which Anchore can run its security scans, organizations can check every software artifact in the development pipeline without adding significant friction. Checking every deployed image during integration (CI), storage (registry), and runtime (CD) allows Anchore to scale a continuous security program that significantly reduces the potential for a vulnerable application to find its way to production, where it can be exploited by a malicious adversary. The Anchore Enterprise runtime monitoring capabilities allow you to see what is running in your environment, detect issues within those images, and prevent images that fail policy checks from being deployed in your cluster or runtime environment.

Wrap-Up

The landscape of software supply chain security is increasingly complex, underscored by the rapid proliferation of SBOMs, rising compliance standards, and evolving security threats. Organizations today face the dilemma of scaling in-house security tools or seeking more streamlined and comprehensive solutions. As highlighted in this post, many of the above signals might indicate that it’s time for your organization to transition from DIY methods to a more robust solution. 

Anchore Enterprise was developed to overcome the challenges that are most common to organizations. With its focus on aiding organizations in scaling their shift-left security strategies, Anchore not only ensures compliance but also facilitates faster and safer software deployment. Even though each organization has its own set of unique challenges pertaining to software supply chain security, Anchore Enterprise is ready to enable organizations to mitigate and respond to these challenges.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Software Supply Chain Hierarchy of Needs: SBOMs as the Foundation

Software is a powerful tool that simplifies complex and technical concepts, but with that incredible power comes an interconnected labyrinth of dependencies that often forms the foundation of innovative applications. These dependencies are not without their pitfalls, as we’ve learned from incidents like Log4Shell. As we navigate the ever-evolving landscape of software supply chain security, we need to ensure that our applications are built on strong foundations.

In this blog post, we delve into the concept of a Software Bill of Materials (SBOM) as a foundational requirement for a secure software supply chain. Just as a physical supply chain is scrutinized to ensure the quality and safety of a product, a software supply chain also requires critical evaluation. What’s at stake isn’t just the functionality of an application, but the security of information that the application has access to. Let’s dive into the world of software supply chains and explore how SBOMs could serve as the bedrock for a more resilient future in software development and security.

What are Software Supply Chain Attacks?

Supply chain attacks are malicious strikes that target the suppliers of an application’s components rather than the application itself. Software supply chains are similar to physical supply chains. When you purchase an iPhone, all you see is the finished product. Behind that product is a complex web of component suppliers: displays and camera lenses from a Japanese company, CPUs from Arizona, modems from San Diego, lithium for the batteries from a Canadian mine. All of these pieces come together in a Shenzhen assembly plant to create a final product that is then shipped straight to your door.

In the same way that an attacker could target one of the iPhone suppliers to modify a component before the iPhone is assembled, a software supply chain threat actor could target an open source package that is then built into a commercial application. This is a problem given that 70-90% of modern applications are built using open source software components; the supply chain is only as secure as its weakest link. The iceberg image below has become a somewhat overused meme in software supply chain security, but it is overused precisely because it explains the situation so well.

Below is the same idea without the floating ice analogy. Each layer is another layer of abstraction further removed from “Your App”. All of these dependencies give your software developers the superpower to build extremely complex applications very quickly, but with the unintentional side effect that they cannot possibly understand all of the ingredients coming together.

This gives adversaries their opening. A single compromised package allows attackers to manipulate all of the packages “downstream” of their entrypoint.

This reality was viscerally felt by the software industry (and all industries that rely on the software industry, meaning all industries) during the Log4j incident.

Log4Shell Impact

Log4Shell is the poster child for the importance of software supply chain security. We’re not going to go deep on the vulnerability in this post; we have done that in a number of other posts. Instead, we’re going to focus on the impact of the incident on organizations that had instances of Log4j in their applications and what they had to go through to remediate the vulnerability.

First let’s brush up on the timeline:

The vulnerability in Log4j was originally disclosed privately on November 24, 2021. Five days later, a pull request to close the vulnerability was published, and a week after that the patched package was released. The official public disclosure happened on December 10. That is when the mayhem began, and companies started the work of determining whether they were vulnerable and figuring out how to remediate the vulnerability.

On average, impacted individuals spent ~90 hours dealing with the Log4j incident. Roughly 20% of that time was spent simply identifying where the log4j package was deployed across their applications.

From conversations with our customers and prospects, the primary driver of why this step consumed such a large portion of time was whether or not an organization had a central repository of metadata about the software dependencies used to build their applications. For customers that had a central repository and a way to query it, identifying which applications were affected by the log4j vulnerability took 1-2 hours instead of the 20+ hours seen at other organizations. This is the power of having SBOMs in place for all of your software, plus a tool to help manage them.
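
As a toy illustration of why the central repository matters: if every build’s package metadata lands in one table (the schema below is hypothetical), the identification step collapses into a single query.

```python
# Toy illustration of the central-repository approach: if every build's
# package metadata lands in one table (hypothetical schema), identification
# collapses into a single query.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the central SBOM store
conn.executescript("""
    CREATE TABLE packages (application TEXT, package TEXT, version TEXT);
    INSERT INTO packages VALUES
        ('billing-api',  'log4j-core', '2.14.1'),
        ('reporting-ui', 'commons-io', '2.11.0');
""")

for app, version in conn.execute(
    "SELECT application, version FROM packages WHERE package = ?",
    ("log4j-core",),
):
    print(f"{app} ships log4j-core {version} -- flag for remediation")
```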

What is a Software Bill of Materials?

Similar to the nutrition labels on the back of the foods you buy, SBOMs are a list of the ingredients that go into the software your applications consume. We normally think of SBOMs as an artifact of the development process: as developers manufacture their application from different dependencies, they are also building a recipe based on the ingredients. In reality, an SBOM can (and should) be generated at every step of the build pipeline. Source code, builds, container images, and production software can all be used to generate an SBOM.
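
A minimal sketch of what that looks like in practice, using Anchore’s open source Syft tool to generate a CycloneDX SBOM at each stage; the paths and image names are placeholders, and the `-o cyclonedx-json=<file>` output syntax assumes a recent Syft release.

```python
# Sketch: generate a CycloneDX SBOM at each pipeline stage with Anchore's
# open source Syft tool. Paths and image names are placeholders.
import subprocess

STAGES = {
    "source":  "dir:./my-app",                           # source checkout
    "build":   "registry.example.com/my-app:build-123",  # CI build image
    "release": "registry.example.com/my-app:v1.2.0",     # released image
}

for stage, target in STAGES.items():
    out = f"sbom-{stage}.cdx.json"
    subprocess.run(["syft", target, "-o", f"cyclonedx-json={out}"], check=True)
    print(f"{stage}: wrote {out}")
```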

Similar to Maslow’s hierarchy of needs, software supply chain management has an analogous hierarchy of needs.

At the base of the pyramid are the contents of the application, in other words, SBOMs about the application. The diagram below shows all of the layers of the proposed hierarchy of software supply chains.

By using SBOMs as the foundation of the pyramid, organizations can ensure that all of the additional security features they layer on top will stand the test of time. We can only know that our software is free from known vulnerabilities if we have confidence in the process used to generate the “ingredients” label. Signing software to prove that a package hasn’t been tampered with is only meaningful if the signed software was free of known vulnerabilities in the first place. Signing a vulnerable package or image only proves that the software hasn’t been tampered with from that point forward; it can’t look back retrospectively and validate that the packages that came before are secure without the help of an SBOM and a vulnerability scanner.
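
To illustrate how the two checks complement each other, here is a hedged sketch of a release gate that requires both a valid signature and a clean vulnerability scan before an image ships; the key path and image name are placeholders.

```python
# Sketch: a release gate that requires both checks. The cosign signature
# proves the image wasn't tampered with; the vulnerability scan says whether
# what was signed is free of known issues. Key path and image are placeholders.
import subprocess

IMAGE = "registry.example.com/my-app:v1.2.0"

def signed(image: str) -> bool:
    # cosign exits non-zero if the signature doesn't verify.
    proc = subprocess.run(["cosign", "verify", "--key", "cosign.pub", image])
    return proc.returncode == 0

def clean(image: str) -> bool:
    # grype exits non-zero when findings meet or exceed the --fail-on severity.
    proc = subprocess.run(["grype", image, "--fail-on", "high"])
    return proc.returncode == 0

if signed(IMAGE) and clean(IMAGE):
    print("Release gate passed: untampered and free of known high/critical vulns")
else:
    raise SystemExit("Release gate failed")
```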

What are the Benefits of this Approach?

Utilizing Software Bills of Materials (SBOMs) as the foundational element of software supply chain security brings several substantial benefits:

  1. Transparency: SBOMs provide a comprehensive view of all the components used in an application. They reveal the ‘ingredients’ that make up the software, enabling teams to understand the entire composition of their applications, including all dependencies. No more black-box dependencies and the associated risk that comes with them.
  2. Risk Management: With the transparency provided by SBOMs, organizations can identify potential security risks within the components of their software and address them proactively. This includes detecting vulnerabilities in dependencies or third-party components. SBOMs also allow organizations to standardize their software supply chain, which enables an automated approach to vulnerability and risk management.
  3. Quick Response to Vulnerabilities: When a new vulnerability is discovered in a component used within the software, SBOMs can help quickly identify all affected applications. This significantly reduces the time taken to respond to and remediate these vulnerabilities, minimizing potential damage. When an incident occurs (not if), the organization can rapidly respond to the breach and limit its impact.
  4. Regulatory Compliance: Regulations and standards increasingly require SBOMs for demonstrating software integrity. By incorporating SBOMs, organizations can ensure they meet these cybersecurity compliance requirements, especially when working with highly regulated industries like the federal government, financial services, and healthcare.
  5. Trust and Verification: SBOMs facilitate trust and confidence in software products by allowing users to verify the components used. They serve as a ‘proof of integrity’ to customers, partners, and regulators, showcasing the organization’s commitment to security. They also enable higher-level security abstractions, like signed images or source code, to inherit the foundational security guarantees provided by SBOMs.

By putting SBOMs at the base of software supply chain security, organizations can build a robust structure that’s secure, resilient, and efficient.

Building on a Strong Foundation

The utilization of Software Bills of Materials (SBOMs) as the bedrock of secure software supply chains represents a fundamental shift towards increased transparency, improved risk management, quicker responses to vulnerabilities, heightened regulatory compliance, and stronger trust in software products. By unraveling the complex labyrinth of dependencies in software applications, SBOMs offer the insight needed to identify and address potential weaknesses, creating a resilient structure capable of withstanding security threats.

In the face of incidents like Log4Shell, the industry needs to adopt a proactive and strategic approach, emphasizing the creation of a secure foundation that can stand the test of time. By elevating the role of SBOMs, we are taking a crucial step towards a future of software development and security that is not only innovative but also secure, trustworthy, and efficient. In the realm of software supply chain security, the adage “knowing is half the battle” couldn’t be more accurate. SBOMs provide that knowledge and, as such, are an indispensable cornerstone of a comprehensive security strategy.

If you’re interested in learning how to integrate SBOMs into your software supply chain, the Anchore team of supply chain security specialists is ready and willing to discuss.

Navigating Continuous Authority To Operate (cATO): A Guide for Getting Started

Continuous Authority to Operate (cATO), sometimes known as Rapid ATO, is becoming necessary as the DoD and civilian agencies put more applications and data in the cloud. Speed and agility are becoming increasingly critical to the mission as the government and federal system integrators seek new features and functionalities to support the warfighter and other critical U.S. government priorities.

In this blog post, we’ll break down the concept of cATO in understandable terms, explain its benefits, explore the myths and realities of cATO and show how Anchore can help your organization meet this standard.

What is Continuous Authority To Operate (cATO)?

Continuous ATO is the merging of traditional authority to operate (ATO) risk management practices with flexible and responsive DevSecOps practices to improve software security posture.

Traditional Risk Management Framework (RMF) implementations focus on obtaining authorization to operate once every three years. The problem with this approach is that security threats aren’t static; they evolve. cATO is the evolution of this framework, requiring continual authorization of software components, such as containers, by building security into the entire development lifecycle using DevSecOps practices. All software development processes need to ensure that the application and its components meet security levels equal to or greater than what an ATO requires.

You authorize once and use the software component many times. With a cATO, you gain complete visibility into all assets, software security, and infrastructure as code.

By automating security, you are then able to obtain and maintain cATO. There’s no better statement about the current process for obtaining an ATO than this commentary from Mary Lazzeri with Federal Computer Week:

“The muddled, bureaucratic process to obtain an ATO and launch an IT system inside government is widely maligned — but beyond that, it has become a pervasive threat to system security. The longer government takes to launch a new-and-improved system, the longer an old and potentially insecure system remains in operation.”

The Three Pillars of cATO

To achieve cATO, an Authorizing Official (AO) must demonstrate three main competencies:

  1. Ongoing visibility: A robust continuous monitoring strategy for RMF controls must be in place, providing insight into key cybersecurity activities within the system boundary.
  2. Active cyber defense: Software engineers and developers must be able to respond to cyber threats in real-time or near real-time, going beyond simple scanning and patching to deploy appropriate countermeasures that thwart adversaries.
  3. Adoption of an approved DevSecOps reference design: This involves integrating development, security, and operations to close gaps, streamline processes, and ensure a secure software supply chain.

Looking to learn more about the DoD DevSecOps Reference Design? It’s commonly referred to as a DoD Software Factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.

Continuous ATO vs. ATO

The primary difference between traditional ATOs and continuous ATOs is the frequency at which a system must prove the validity of its security claims. Traditional ATOs require that a system prove its security once every three years, whereas cATO systems prove their security every moment the system is running.

The Benefits of Continuous ATO

Continuous ATO is essentially the process of applying DevSecOps principles to the compliance framework of Authority to Operate. Automating the individual compliance processes speeds up development work by removing the repetitive manual tasks otherwise required to obtain permission. Next, we’ll explore additional (and sometimes unexpected) benefits of cATO.

Increase Velocity of System Deployment

CI/CD systems and the DevSecOps design pattern were created to increase the velocity at which new software can be deployed from development to production. On top of that, Continuous ATOs can be more easily scaled to accommodate changes in the system or the addition of new systems, thanks to the automation and flexibility offered by DevSecOps environments.

Reduce Time and Complexity to Achieve an ATO

With the cATO approach, you can build a system that automates generating the artifacts needed to achieve an ATO rather than manually producing them every three years. This automation in DevSecOps pipelines speeds up the ATO process because the pipeline can generate the artifacts the AO needs to make a risk determination, reducing the time spent on manual reviews and approvals. Much of the same information will be requested for each ATO, and there will be many overlapping security controls. Designing the DevSecOps pipeline to produce the unique authorization package for each ATO from the corpus of available data and information leads to increased efficiency through automation and re-use.
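
As a rough sketch of the idea (every file name here is hypothetical), a pipeline step can collect the artifacts it already produces into a dated authorization package on every release:

```python
# Rough sketch (all names hypothetical): collect artifacts the pipeline
# already emits on every build into a dated authorization package.
import zipfile
from datetime import date
from pathlib import Path

ARTIFACTS = [
    Path("sbom-release.cdx.json"),   # SBOM for the release image
    Path("grype-report.json"),       # vulnerability scan results
    Path("policy-evaluation.json"),  # compliance policy evaluation
]

bundle = Path(f"ato-package-{date.today()}.zip")
with zipfile.ZipFile(bundle, "w") as zf:
    for artifact in ARTIFACTS:
        if artifact.exists():
            zf.write(artifact)
print(f"Wrote {bundle} for the Authorizing Official's review")
```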

No Need to Reinvent AND Maintain the Wheel

When you inherit the security properties of the DevSecOps reference design or utilize an approved managed platform, the provider shoulders the burden. Someone else has already done the hard work of creating a framework of tools that integrate to achieve cATO; re-use their effort to achieve cATO for your system.

Alternatively, you can utilize a platform provider, such as Platform One, Kessel Run, Black Pearl, or the Army Software Factory to outsource the infrastructure management.

Learn how Anchore helped Platform One achieve cATO and become the preeminent DoD software factory:

Myths & Realities

Myth or Reality?: DevSecOps can be at Odds with cATO

Myth! DevSecOps in the DoD and civilian government agencies is still the domain of early adopters. The federal government’s strict security and compliance requirements, the ATO in particular, make it fertile ground for DevSecOps adoption. Government leaders such as Nicolas Chaillan, former chief software officer for the United States Air Force, are championing DevSecOps standards and best practices that the DoD, federal government agencies, and even the commercial sector can use to launch their own DevSecOps initiatives.

One goal of DevSecOps is to develop and deploy applications as quickly as possible, and an ATO is a bureaucratic morass if you’re not proactive. When you build a DevSecOps toolchain that automates container vulnerability scanning and other areas critical to ATO compliance controls, you can put in place the tools, reporting, and processes to test against ATO controls while still in your development environment.

DevSecOps, much like DevOps, suffers from a marketing problem as vendors seek to spin the definitions and use cases that best suit their products. The DoD and government agencies need more champions like Chaillan in government service who can speak to the benefits of DevSecOps in a language that government decision-makers can understand.

Myth or Reality?: Agencies need to adopt DevSecOps to prepare for the cATO 

Reality! One of the cATO requirements is to demonstrate alignment with an Approved DevSecOps Reference Design. The “shift left” story that DevSecOps espouses in vendor marketing literature and sales decks isn’t necessarily one-size-fits-all. Likewise, DoD and federal agency DevSecOps operates at a different level.

Using DevSecOps to prepare for a cATO requires upfront analysis and planning with the participation of your development and operations teams. Government program managers need to collaborate closely with their contractor teams to put the processes and tools in place upfront, including container vulnerability scanning and reporting. Break down your Continuous Integration/Continuous Delivery (CI/CD) toolchain with an eye on how you can prepare your software components for continuous authorization.

Myth or Reality?: You need to have SBOMs for everything in your environment

Myth! However…you need to be able to show your Authorizing Official (AO) that you have “the ability to conduct active cyber defense in order to respond to cyber threats in real time.” If a zero-day (like Log4j) comes along, you need to demonstrate that you are equipped to identify its impact on your environment and remediate the issue quickly. Showing your AO that you manage SBOMs and can quickly query them to respond to threats will have you in the clear for this requirement.

Myth or Reality?: cATO is about technology and process only

Myth! As more elements of the DoD and civilian federal agencies push toward the cATO to support their missions, and a DevSecOps culture takes hold, it’s reasonable to expect that such a culture will influence the cATO process. Central tenets of a DevSecOps culture include:

  • Collaboration
  • Infrastructure as Code (IaC)
  • Automation
  • Monitoring

Each of these tenets contributes to the success of a cATO. Collaboration between the government program office, the contractor’s project team leadership, the third-party assessment organization (3PAO), and the FedRAMP program office is the foundation of a well-run authorization. IaC provides the tools to manage infrastructure such as virtual machines, load balancers, networks, and other infrastructure components using practices similar to how DevOps teams manage software code.

Myth or Reality?: Reusable Components Make a Difference in cATO

Reality! The growth of containers and other reusable components couldn’t come at a better time, as the Department of Defense (DoD) and civilian government agencies push to the cloud, driven by federal cloud initiatives and demands from their constituents.

Reusable components save time and budget when it comes to authorization because you can authorize once and use the authorized components across multiple projects. Look for more news about reusable components coming out of Platform One and other large-scale government DevSecOps and cloud projects that can help push this development model forward to become part of future government cloud procurements.

How Anchore Helps Organizations Implement the Continuous ATO Process

Anchore’s comprehensive suite of solutions is designed to help federal agencies and federal system integrators meet the three requirements of cATO.

Ongoing Visibility

Anchore Enterprise can be integrated into a build pipeline, image registry, and runtime environment to provide a comprehensive view of the entire software development lifecycle (SDLC). On top of this, Anchore provides out-of-the-box policy packs mapped to NIST 800-53 controls for RMF, ensuring a robust continuous monitoring strategy. Real-time notifications alert users when images are out of compliance, helping agencies maintain ongoing visibility into their system’s security posture.

Active Cyber Defense

While Anchore Enterprise is integrated into the decentralized components of the SDLC, it provides a centralized database to track and monitor every component of software in all environments. This centralized datastore enables agencies to quickly triage zero-day vulnerabilities with a single database query. Remediation plans for impacted application teams can be drawn up in hours rather than days or weeks. By setting rules that flag anomalous behavior, such as image drift or blacklisted packages, Anchore supports an active cyber defense strategy for federal systems.

Adoption of an Approved DevSecOps Reference Design

Anchore aligns with the DoD DevSecOps Reference Design by offering solutions for:

  • Container hardening (Anchore DISA policy pack)
  • Container policy enforcement (Anchore Enterprise policies)
  • Container image selection (Iron Bank)
  • Artifact storage (Anchore image registry integration)
  • Release decision-making (Anchore Kubernetes Admission Controller)
  • Runtime policy monitoring (Anchore Kubernetes Automated Inventory)

Anchore is specifically mentioned in the DoD Container Hardening Process Guide, and the Iron Bank relies on Anchore technology to scan and enforce policy that ensures every image in Iron Bank is hardened and secure.

Final Thoughts

Continuous Authority to Operate (cATO) is a vital framework for federal system integrators and agencies to maintain a strong security posture in the face of evolving cybersecurity threats. By ensuring ongoing visibility, active cyber defense, and the adoption of an approved DevSecOps reference design, software engineers and developers can effectively protect their systems in real time. Anchore’s comprehensive suite of solutions is specifically designed to help meet the three requirements of cATO, offering a robust, secure, and agile approach to staying ahead of cybersecurity threats.

By partnering with Anchore, federal system integrators and federal agencies can confidently navigate the complexities of cATO and ensure their systems remain secure and compliant in a rapidly changing cyber landscape. If you’re interested to learn more about how Anchore can help your organization embed DevSecOps tooling and principles into your software development process, click below to read our white paper.