Streamlining FedRAMP Compliance: How Anchore Enterprise Simplifies the Process

FedRAMP compliance is hard, and not only because there are hundreds of controls to review and verify; the controls can also be interpreted and satisfied in multiple different ways. It is admirable to see an enterprise achieve FedRAMP compliance from scratch, but most of us want to achieve compliance without spending more than a year debating the interpretation of specific controls. This is where turnkey solutions like Anchore Enterprise come in.

Anchore Enterprise is a cloud-native software composition analysis platform that integrates SBOM generation, vulnerability scanning, and policy enforcement to provide a comprehensive solution for software supply chain security.

Overview of FedRAMP, who it applies to and the challenges of compliance

FedRAMP, or the Federal Risk and Authorization Management Program, is a federal compliance program that standardizes security assessment, authorization, and continuous monitoring for cloud products and services. As with any compliance standard, FedRAMP is modeled on the “trust but verify” security principle: it standardizes how security is verified for Cloud Service Providers (CSPs).

One of the biggest challenges with achieving FedRAMP compliance comes from sorting through the vast volumes of data that make up the standard. Depending on the level of FedRAMP compliance you are attempting to meet, this could mean complying with 125 controls for a FedRAMP Low authorization or up to 425 controls for FedRAMP High.

While we aren’t going to go through the entire FedRAMP standard in this blog post, we will be focusing on the container security controls that are interleaved into FedRAMP.

FedRAMP container security requirements

1) Hardened Images

FedRAMP requires CSPs to adhere to strict security standards for hardened images used by government agencies. The standard mandates that CSPs:

  • Include only essential services and software in images
  • Apply the latest security patches
  • Meet secure configuration baselines
  • Disable unnecessary ports and services
  • Manage user accounts securely
  • Implement encryption
  • Maintain logging and monitoring practices
  • Scan regularly for vulnerabilities and remediate promptly
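
As a concrete sketch, several of these controls show up directly in how an image is built. The base image, user name, and port below are illustrative placeholders, not a FedRAMP-approved baseline:

```dockerfile
# Minimal hardening sketch; base image, user, and port are placeholders.
FROM alpine:3.19

# Apply the latest security patches at build time
RUN apk upgrade --no-cache

# Run as a dedicated non-root system user with no password and no home dir
RUN adduser -S -D -H appuser
USER appuser

# Expose only the single port the service actually needs
EXPOSE 8443
```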

If you want to go in-depth with how to create hardened images that meet FedRAMP compliance, download our white papers:

DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images

Complete Guide to Hardening Containers with STIG

2) Container Build, Test, and Orchestration Pipelines

FedRAMP sets stringent requirements for container build, test, and orchestration pipelines to protect federal agencies. These include:

  • Hardened base images (see above) 
  • Automated build processes with integrity checks
  • Strict configuration management
  • Immutable containers
  • Secure artifact management
  • Container security testing
  • Comprehensive logging and monitoring

3) Vulnerability Scanning for Container Images

FedRAMP mandates rigorous vulnerability scanning protocols for container images to ensure their security within federal cloud deployments. This includes: 

  • Comprehensive scans integrated into CI/CD pipelines 
  • Remediation prioritized based on severity
  • Re-scanning after remediation
  • Detailed audit and compliance reports
  • Checks against secure baselines (e.g., CIS or STIG)
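
For illustration, here is what the scan-and-re-scan loop looks like with Anchore's open-source scanner Grype; the image names and severity threshold are placeholders rather than FedRAMP-mandated values:

```bash
# Fail the pipeline if anything at or above High severity is found
grype registry.example.com/myapp:1.2.3 --fail-on high

# After remediation, re-scan and keep the JSON report as a compliance artifact
grype registry.example.com/myapp:1.2.4 --fail-on high -o json > rescan-report.json
```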

4) Secure Sensors

FedRAMP requires continuous management of the security of machines, applications, and systems through the identification of vulnerabilities. Requirements include:

  • Authorized scanning tools
  • Authenticated security scans to simulate threats
  • Reporting and remediation
  • Scanning independent of developers
  • Direct integration with configuration management to track vulnerabilities

5) Registry Monitoring

While not explicitly called out in FedRAMP as either a control or a control family, there is still a requirement that images stored in a container registry are scanned at least every 30 days if they are deployed to production.
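
As a minimal sketch of that cadence using open-source tooling (Anchore Enterprise automates this natively, and production-images.txt is an assumed placeholder file):

```bash
# Re-scan every production image on a schedule (e.g., a weekly cron job),
# comfortably inside FedRAMP's 30-day window
mkdir -p scans
while read -r image; do
  grype "$image" -o json > "scans/$(echo "$image" | tr '/:' '__').json"
done < production-images.txt
```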

6) Asset Management and Inventory Reporting for Deployed Containers

FedRAMP mandates thorough asset management and inventory reporting for deployed containers to ensure security and compliance. Organizations must maintain detailed inventories including:

  • Container images
  • Source code
  • Versions
  • Configurations 
  • Continuous monitoring of container state 

7) Encryption

FedRAMP mandates robust encryption standards to secure federal information, requiring the use of NIST-approved cryptographic methods for both data at rest and data in transit. It is important that any containers that store data or move data through the system meet FIPS standards.
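
One small, concrete spot check from this family of requirements: RHEL-family kernels expose whether FIPS mode is enabled. This is a host-level check only, not a substitute for a full encryption audit:

```bash
# Prints 1 when the kernel is running in FIPS mode, 0 otherwise
cat /proc/sys/crypto/fips_enabled
```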

How Anchore helps organizations comply with these requirements

Anchore is the leading software supply chain security platform for meeting FedRAMP compliance and has helped hundreds of organizations deploy Anchore Enterprise as the solution for container security compliance. Below is an overview of how Anchore Enterprise integrates into a FedRAMP-compliant environment. For more details on how each of these integrations meets FedRAMP requirements, keep reading.

1) Hardened Images

Anchore Enterprise integrates multiple tools to meet the FedRAMP requirements for hardened container images. We provide compliance policies that scan specifically for container hardening standards such as STIG and CIS. These policies were custom built to perform the checks necessary to meet either standard, or both.

2) Container Build, Test, and Orchestration Pipelines

Anchore integrates directly into your CI/CD pipelines via either the Anchore Enterprise API or pre-built plug-ins. This tight integration meets the FedRAMP standards that require that all container images are hardened, that all security checks are automated within the build process, and that all actions are logged and audited. Anchore’s FedRAMP policy specifically checks that any container at any stage of the pipeline is checked for compliance.

3) Vulnerability Scanning for Container Images

Anchore Enterprise can be integrated into each stage of the development pipeline, offer remediation recommendations based on severity (e.g., CISA’s KEV vulnerabilities can be flagged and prioritized for immediate action), enforce re-scanning of containers after remediation, and produce compliance artifacts to automate compliance reporting. This is accomplished with Anchore’s container scanner, direct pipeline integration, and FedRAMP policy.

4) Secure Sensors

Anchore Enterprise’s container vulnerability scanner and Kubernetes inventory agent are both authorized scanning tools. The container vulnerability scanner is integrated directly into the build pipeline whereas the k8s agent is run in production and scans for non-compliant containers at runtime.

5) Registry Monitoring

Anchore Enterprise can continuously scan an artifact registry for potentially non-compliant containers. It is configured to watch each unique image in your image registries and automatically scans any image that is pushed to them.

6) Asset Management and Inventory Reporting for Deployed Containers

Anchore Enterprise includes a full software component inventory workflow. It can scan all software components, generate software bills of materials (SBOMs) to keep track of those components, and centrally store all SBOMs for analysis. Anchore Enterprise’s Kubernetes inventory agent performs the same service for the runtime environment.

7) Encryption

Anchore Enterprise’s static STIG tool can ensure that all containers maintain NIST and FIPS encryption standards. Verifying that each of thousands of containers encrypts data at rest and in transit is a difficult chore, but one that is easily automated with Anchore Enterprise.

The benefits of the shift left approach of Anchore Enterprise

Shift compliance left and prevent violations

Detect and remediate FedRAMP compliance violations early in the development lifecycle to prevent production/high-side violations that would threaten your hard-earned compliance. Use Anchore’s “developer-bundle” in the integration phase to take immediate action on potential compliance violations. This ensures that vulnerabilities with fixes available, and CISA KEV vulnerabilities, are addressed before they make it to the registry and become reportable non-compliance issues.

Below is an example of a GitLab workflow in which Anchore Enterprise’s SBOM generation, vulnerability scanning, and policy enforcement catch issues early and keep your compliance record clean.
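
Since the original diagram isn’t reproduced here, the job below sketches the same shape using Anchore’s open-source tools Syft and Grype. The stage name, variables, and severity gate are assumptions; an Anchore Enterprise deployment would call its API or use the pre-built plug-ins instead:

```yaml
# Illustrative GitLab CI job, not verbatim Anchore Enterprise configuration
sbom-and-vuln-gate:
  stage: test
  script:
    # Generate an SBOM for the image built earlier in the pipeline
    - syft "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" -o json > sbom.json
    # Scan the SBOM and fail the job on High or Critical findings
    - grype sbom:./sbom.json --fail-on high
  artifacts:
    paths:
      - sbom.json
```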

Automate Compliance Reporting

Automate monthly and annual reporting with Anchore Enterprise’s reporting feature, and set these reports up to auto-generate based on the compliance reporting cadence FedRAMP requires.

Manage POA&Ms

Given that Anchore Enterprise centrally stores and manages vulnerability information for an organization, it can also be utilized to manage Plans of Action & Milestones (POA&Ms) for any portions of the system that aren’t yet FedRAMP compliant but have a planned due date. Use Allowlists in Anchore Enterprise to centrally manage POA&Ms and assessed or justifiable findings.

Prevent Production Compliance Violations

Practice good production registry hygiene by utilizing Anchore Enterprise to scan stored images regularly. Anchore Enterprise’s Kubernetes runtime inventory will identify images that do not meet FedRAMP compliance, or that have not been used in the last ~7 days (company defined), so they can be removed from your production registry.

Conclusion

Achieving FedRAMP compliance from scratch is an arduous process and not a key differentiator for most organizations. To keep organizational focus on the aspects of the business that actually differentiate it from competitors, strategically outsourcing non-core competencies is a sound strategy. Anchore Enterprise aims to be that turnkey solution for organizations that want the benefits of FedRAMP compliance, specifically the container security aspect, without developing the internal expertise.

By integrating SBOM generation, vulnerability scanning, and policy enforcement into a single platform, Anchore Enterprise not only simplifies the path to compliance but also enhances overall software supply chain security. Through the deployment of Anchore Enterprise, companies can achieve and maintain compliance more quickly and with greater assurance. If you’re looking for an even deeper look at how to achieve all 7 of the container security requirements of FedRAMP with Anchore Enterprise, read our playbook: FedRAMP Pre-Assessment Playbook For Containers.

From Chaos to Compliance: Revolutionizing License Management with Automation

The ascent of both containerized applications and open-source software component building blocks has dramatically escalated the complexity of software and the burden of managing all of the associated licenses. Modern applications are often built from a mosaic of hundreds, if not thousands, of individual software components, each bound by its own potential licensing pitfalls. This intricate web of dependencies, akin to a supply chain, poses significant challenges not only for legal teams tasked with mitigating financial risks but also for developers who manage these components’ inventory and compliance.

Previously, license management was primarily a manual affair: software was less complex, and more of it was proprietary 1st party software that didn’t have the same license compliance issues. These original license management techniques haven’t kept up with the needs of modern, cloud-native application development. In this blog post, we discuss why automation is needed to keep managing licensing risk in modern software.

The Problem

Modern software is complex. This is fairly well known at this point, but it bears repeating just how deep the complexity goes.

Applications can be constructed from tens, hundreds, or even thousands of individual software components, each with its own license governing how it can be used. Modern software is so complex that this endlessly nested collection of dependencies is typically referred to as a metaphorical supply chain, and an entire industry, software supply chain security, has grown up to provide security solutions for this quagmire.

This is a complexity nightmare for legal teams tasked with managing the financial risk of an organization. It’s also a nightmare for the developers who must maintain an inventory of all of the software dependencies in an organization and the associated license for each component.

Let’s provide an example of how this normally manifests in a software startup. Assuming business is going well, you have a product and there are customers out in the world interested in purchasing your software. During the procurement cycle, your customer’s legal team will be tasked with assessing the risk of using your software. To create this assessment they will do a number of things, one of which is to determine whether your software is safe to use from a licensing perspective. To do this, they will normally send over a spreadsheet requesting a complete inventory of your software components and their licenses.

As a software vendor, it will be your job to fill this out so that legal can approve the purchasing of your software and you can take that revenue to the bank.

Let’s say you manually fill this entire spreadsheet out. A developer would need to go through each dependency utilized in the software that you sell and “scan” the codebase for all of the licensing metadata: component name, version number, OSS license (e.g., MIT, GPL, BSD), and so on. It would take some time and be quite tedious, but not an insurmountable task. In the end they would produce a complete component-and-license inventory.

This is manageable in the world of once-in-a-while deployments and updates. It becomes exhausting in the world of continuous integration/delivery that the DevOps movement has created. Imagine having to produce a new document like this every time you push to production; DevOps has enabled some teams to push to production multiple times per day. Requiring that a document be manually created for all of your customers’ legal teams for each release would erase almost all of the velocity gains that moving to DevOps created.

The Solution

The solution to this problem is automating the license discovery process. If software can scan your codebase and produce a document that exhaustively covers all of the building blocks of your application, you unlock the potential to both have your DevOps cake and eat it too.

To this end, Anchore has created and open sourced a tool that does just this.

Introducing Grant: Automated License Discovery

Grant is an open-source command line tool that scans and discovers the software licenses of all dependencies in a piece of open-source software. If you want to get a quick primer on what you can do with Grant, read our announcement blog post. Or if you’re ready to dive straight in, you can view all of the Grant documentation on its GitHub repo.

How does Grant Integrate into my Development Workflow?

As a software license scanner, Grant operates on a software inventory artifact like an SBOM, or directly on a container image. Let’s bring this to life by continuing the legal review example from above: you are a software developer who has been tasked with manually finding all of the OSS license files to provide to your customer’s legal team for review.

Not wanting to do this by hand, you instead open up your CLI and install Grant. From there you navigate to your artifact registry and pull down the latest image of your application’s production build. Right before you run the Grant license scan on your production container image, you notice that your team has been following software supply chain best practices and has already created an SBOM with a popular open-source tool called Syft. Instead of running the container image through Grant, which could take some time, you pipe in the SBOM, which is already a JSON inventory of the application’s entire dependency tree. A few seconds later you have a full report of all of the licenses in your application.
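
A sketch of that command-line workflow is below; the image name is a placeholder, and exact flags may differ between Grant releases, so check the documentation in the repo:

```bash
# Generate an SBOM with Syft and pipe it straight into Grant for a license report
syft -o json registry.example.com/myapp:prod | grant check
```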

From here you export the full component inventory, with the license enrichment, into a spreadsheet and send it off to the customer’s legal team for review. A process that might have taken a full day or even multiple days by hand was finished in seconds with the power of open-source tooling.

Automating License Compliance with Policy

Grant is an amazing tool that can automate much of the most tedious work of protecting an organization from legal consequences, but when it is used by a developer as a CLI tool there is still a human in the loop, which can create bottlenecks. With this in mind, our OSS team launched Grant with support for policy-based filters that can automate the execution and alerting of license scanning.

Let’s say that your organization’s legal team has decided that using any GPL components in 1st party software is too risky. By writing a policy that fails any software that includes GPL-licensed components, and integrating the policy check as early as the staging CI environment (or even letting developers run Grant one-off during design as they prototype an initial idea), the potential for legally risky dependencies infiltrating production software drops precipitously.
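
Here is a sketch of what such a rule might look like in Grant’s YAML configuration, based on its documented rule format; treat the field names as assumptions and verify against the current schema in the GitHub repo:

```yaml
# .grant.yaml: deny any GPL-family license anywhere in the scanned inventory
rules:
  - pattern: "*gpl*"   # glob match against license IDs (GPL-2.0, LGPL-3.0, ...)
    name: deny-gpl
    mode: deny
    reason: "Legal has disallowed GPL components in 1st party software"
```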

How Anchore Can Help

Grant is an amazing tool that automates the license compliance discovery process. This is great for small projects or software with irregular releases. Things get much more complicated in the cloud-native, continuous integration/deployment paradigm of DevSecOps, where there are new releases multiple times per day. Having Grant generate the license data is great, but suddenly you have an explosion of data that itself needs to be managed.

This is where Anchore Enterprise steps in to fill the gap. The Anchore Enterprise platform is an end-to-end data management solution that incorporates all of Anchore’s open-source tooling for generating artifacts like SBOMs, vulnerability scans, and license scans; manages the massive amount of data that a high-speed DevSecOps pipeline creates as part of its regular operation; and, on top of that, applies a highly customizable policy engine that automates decision-making around the insights derived from those software supply chain artifacts.

Want to make sure that no GPL-licensed OSS components ever make it into your SDLC? No problem. Grant will uncover all components with this license, Anchore Enterprise will centralize these scans, and the Anchore policy engine will alert the developer who just integrated a new GPL-licensed OSS component into their development environment that they need to find a different component or they won’t be able to push their branch to staging. The shift-left principle of DevSecOps can be applied to LegalOps as well.

Conclusion

The advent of tools like Grant, an open-source license discovery solution developed by Anchore, marks a significant advancement in the realm of open-source license management. By automating the tedious process of license verification, Grant not only enhances operational efficiency but also integrates seamlessly into continuous integration/continuous delivery (CI/CD) environments. This capability is crucial in modern DevOps practices, which demand frequent and fast-paced updates. Grant’s ability to quickly generate comprehensive licensing reports transforms a potentially day-long task into a matter of seconds.

Anchore Enterprise extends this functionality by managing the deluge of data from continuous deployments and integrating a policy engine that automates compliance decisions. This ecosystem not only streamlines the process of license management but also empowers developers and legal teams to preemptively address compliance issues, thereby embedding legal safeguards directly into the software development lifecycle. This proactive approach ensures that as the technological landscape evolves, businesses remain agile yet compliant, ready to capitalize on opportunities without being bogged down by legal liabilities.

If you’re interested to hear about the topics covered in this blog post directly from the lips of Anchore’s CTO, Dan Nurmi, and the maintainer of Grant, Christopher Phillips, you can watch the on-demand webinar here. Or join Anchore’s OSS Slack community to speak with our team directly. We look forward to hearing from you and reviewing your pull requests!

An Outline for Getting Up to Speed on the DoD Software Factory

This blog post is meant as a gateway to all things DoD software factory, highlighting content from across the Anchore universe that can help anyone get up to speed on what a DoD software factory is, why to use one, and how to build one. Treat it as an index: scan for the topics that interest you most and follow the links to more detailed content.

What is a DoD Software Factory?

The short answer is a DoD Software Factory is an implementation of the DoD Enterprise DevSecOps Reference Design. A slightly longer answer comes from our DoD software factory primer:

A Department of Defense (DoD) Software Factory is a software development pipeline that embodies the principles and tools of the larger DevSecOps movement with a few choice modifications that conform to the extremely high threat profile of the DoD and DIB.

A DoD software factory pipeline looks much like a traditional DevOps pipeline. The difference is that security controls are layered into the environment to automate software component inventory, vulnerability scanning, and policy enforcement, meeting the requirements to be considered a DoD software factory.

Got the basics down? Go deeper and learn how Anchore can help you put the Sec into DevSecOps Reference Design by reading our DoD Software Factory Best Practices white paper.

Why do I want to utilize a DoD Software Factory?

For DoD programs, the primary reason to utilize a DoD software factory is that it is a requirement for achieving a continuous authorization to operate (cATO). The cATO standard specifically calls out that software must be developed in a system that meets the DoD Enterprise DevSecOps Reference Design. A DoD software factory is the generic implementation of this design standard.

For Federal Service Integrators (FSIs), the biggest reason to utilize a DoD software factory is that it is a standard approach to meeting DoD compliance and certification standards. By meeting a standard, such as CMMC Level 2, you expand your opportunity to work with DoD programs.

Continuous Authorization to Operate (cATO)

If you’re looking for more information on cATO, Anchore has written a comprehensive guide on navigating the cATO process that can be found on our blog:

Navigating Continuous Authority To Operate (cATO): A Guide for Getting Started

  • Learn how a DoD software factory is the foundation for achieving a cATO when combined with continuous visibility and real-time incident response.

Cybersecurity Maturity Model Certification (CMMC)

The CMMC is the certification standard that is used by the DoD to vet FSIs from the defense industrial base (DIB). This is the gold standard for demonstrating to the DoD that your organization takes security seriously enough to work with the highest standards of any DoD program. The security controls that the CMMC references when determining certification are outlined in NIST 800-171. There are 17 total families of security controls that an organization has to meet in order to meet the CMMC Level 2 certification and a DoD software factory can help check a number of these off of the list.

The specific families of controls that a DoD software factory helps meet are:

  • Access Control (AC)
  • Audit and Accountability (AU)
  • Configuration Management (CM)
  • Incident Response (IR)
  • Maintenance (MA)
  • Risk Assessment (RA)
  • Security Assessment and Authorization (CA)
  • System and Communications Protection (SC)
  • System and Information Integrity (SI)
  • Supply Chain Risk Management (SR)

If you’re looking for more information on how to apply software supply chain security to meet the CMMC, Anchore has published two blog posts on the topic:

NIST SP 800-171 & Controlled Unclassified Data: A Guide in Plain English

  • NIST SP 800-171 is the canonical list of security controls for meeting CMMC Level 2 certification. Anchore has broken down the entire 800-171 standard to give you an easy to understand overview.

Automated Policy Enforcement for CMMC with Anchore Enterprise

  • Policy Enforcement is the backbone of meeting the monitoring, enforcement and reporting requirements of the CMMC. In this blog post, we break down how Anchore Federal can meet a number of the controls specifically related to software supply chain security that are outlined in NIST 800-171.

How do I meet the DevSecOps Reference Design requirements?

The easy answer is to utilize a DoD Software Factory Managed Service Provider (MSP). Below in the User Stories section, we deep dive into the US Air Force’s Platform One, given that they are the preeminent DoD software factory.

The DIY answer involves carefully reading and implementing the DoD Enterprise DevSecOps Reference Design. This document is massive but there are a few shortcuts you can utilize to help expedite your journey. 

Container Hardening

Deciding to utilize software containers in a DevOps pipeline is almost a foregone conclusion at this point. What is less well known is how to secure your containers, especially to meet the standards of a DoD software factory.

The DoD has published two guides that can help with this. The first is the DoD Container Hardening Guide, and the second is the Container Image Creation and Deployment Guide. Both name Anchore Federal as an approved container hardening scanner.

Anchore has published a number of blogs and even a white paper that condense the information in both of these guides into more digestible content. See below:

Container Security for U.S. Government Information Systems

  • This comprehensive white paper breaks down how to achieve a container build and deployment system that is hardened to the standards of a DoD software factory.

Enforcing the DoD Container Image and Deployment Guide with Anchore Federal

  • This blog post is great for those who are interested to see how Anchore Federal can turn all of the requirements of the DoD Container Hardening Guide and the Container Image Creation and Deployment Guide into an easy button.

Deep Dive into Anchore Federal’s Container Image Inspection and Vulnerability Management

  • This blog post deep dives into how to utilize Anchore Federal to find container vulnerabilities and alert or report on whether they are violating the security compliance required to be a DoD software factory.

Policy-based Software Supply Chain Security and Compliance

The power of a policy-based approach to software supply chain security is that it can be integrated directly into a DevOps pipeline and automate a significant amount of alerting, reporting and enforcement work. The blog posts below go into depth on how this automated approach to security and compliance can uplevel a DoD software factory:

A Policy Based Approach to Container Security & Compliance

  • This blog details how a policy-based platform works and how it can benefit both software supply chain security and compliance. 

The Power of Policy-as-Code for the Public Sector

  • This follow-up to the post above shows how the policy-based security platform outlined in the first blog post can have significant benefits to public sector organizations that have to focus on both internal information security and how to prove they are compliant with government standards.

Benefits of Static Image Inspection and Policy Enforcement

  • Getting a bit more technical this blog details how a policy-based development workflow can be utilized as a security gate with deployment orchestration systems like Kubernetes.

Getting Started With Anchore Policy Bundles

  • An even deeper dive into what is possible with the policy-based security system provided by Anchore Enterprise, this blog gets into the nitty-gritty on how to configure policies to achieve specific security outcomes.

Unpacking the Power of Policy at Scale in Anchore

  • This blog shows how a security practitioner can extend the security signals that Anchore Enterprise collects with the assistance of a more flexible data platform like New Relic to derive more actionable insights.

Security Technical Implementation Guide (STIG)

The Security Technical Implementation Guides (STIGs) are fantastic technical guides for configuring off-the-shelf software to DoD hardening standards. Anchore, being a company focused on making security and compliance as simple as possible, has written a significant amount about how to utilize STIGs, especially for container-based DevSecOps pipelines: exactly the kind of software development environments that meet the standards of a DoD software factory. View our previous content below:

4 Ways to Prepare your Containers for the STIG Process

  • In this blog post, we give you four quick tips to help you prepare for the STIG process for software containers. Think of this as the amuse bouche to prepare you for the comprehensive white paper that comes next.

Navigating STIG Compliance for Containers

  • As promised, this is the extensive document that walks you through how to build a DevSecOps pipeline based on containers that is both high velocity and secure. Perfect for organizations that are aiming to roll their own DoD software factory.

User Stories

Anchore has been supporting FSIs and DoD programs to build DevSecOps programs that meet the criteria to be called a DoD software factory for the past decade. We can write technical guides and best practices documents till time ends but sometimes the best lessons are learned from real-life stories. Below are user stories that help fill in all of the details about how a DoD software factory can be built from scratch:

DoD’s Pathway to Secure Software

  • Join Major Camdon Cady of Platform One and Anchore’s VP of Security, Josh Bressers as they discuss the lessons learned from building a DoD software factory from the ground up. Watch this on-demand webinar to get all of the details in a laid back and casual conversation between two luminaries in their field.

Development at Mach Speed

  • If you prefer a written format over video, this case study highlights how Platform One utilized Red Hat OpenShift and Anchore Federal to build their DoD software factory that has become the leading Managed Service Provider for DoD programs.

Conclusion

Similar to how Cloud has taken over the infrastructure discussion in the enterprise world, DoD software factories are quickly becoming the go-to solution for DoD programs and the FSIs that support them. Delivering on the promise of the DevOps movement of high velocity development without compromising security, a DoD software factory is the one-stop shop to upgrade your software development practice into the modern age and become compliant as a bonus! If you’re looking for an easy button to infuse your DevOps pipeline with security and compliance without the headache of building it yourself, take a look at Anchore Federal and how it helps organizations layer software supply chain security into a DoD software factory and achieve a cATO.

4 Ways to Prepare your Containers for the STIG Process

The Security Technical Implementation Guide (STIG) is a Department of Defense (DoD) technical guidance standard that captures the cybersecurity requirements for a specific product, such as a cloud application going into production to support the warfighter. System integrators (SIs), government contractors, and independent software vendors know the STIG process as a well-governed process that all of their technology products must pass. The Defense Information Systems Agency (DISA) released the Container Platform Security Requirements Guide (SRG) in December 2020 to direct how software containers go through the STIG process. 

STIGs are notorious for their complexity and the hurdle that STIG compliance poses for technology project success in the DoD. Here are some tips to help your team prepare for your first STIG or to fine-tune your existing internal STIG processes.

4 Ways to Prepare for the STIG Process for Containers

Here are four ways to prepare your teams for containers entering the STIG process:

1. Provide your Team with Container and STIG Cross-Training

DevSecOps and containers, in particular, are still gaining ground in DoD programs. You may very well find your team in a situation where your cybersecurity/STIG experts may not have much container experience. Likewise, your programmers and solution architects may not have much STIG experience. Such a situation calls for some manner of formal or informal cross-training for your team on at least the basics of containers and STIGs. 

Look for ways to provide your cybersecurity specialists involved in the STIG process with training about containers if necessary. There are several commercial and free training options available. Check with your corporate training department to see what resources they might have available, such as seats with online training vendors like A Cloud Guru and Cloud Academy.

There’s a lot of out-of-date and conflicting information about the STIG process on the web today. System integrators and government contractors need to build STIG expertise across their DoD project teams to cut through such noise.

Including STIG expertise as an essential part of your cybersecurity team is the first step. While contract requirements dictate this proficiency, it only helps if your organization can build a “bench” of STIG experts. 

Here are three tips for building up your STIG talent base:

  • Make STIG experience a “plus” or “bonus” in your cybersecurity job requirements, even for roles that may not be directly billable to projects with STIG work (at least in the beginning)
  • Develop internal training around STIG practices led by your internal experts and make it part of employee onboarding and DoD project kickoffs
  • Create a “reach back” channel so your project teams can draw on STIG expertise from other parts of your company, such as corporate teams or other project teams with STIG experience, for support with any issues and challenges in the STIG process

Depending on the size of your company, clearance requirements of the project, and other situational factors, the temptation might be there to bring in outside contractors to shore up your STIG expertise internally. For example, the Container Platform Security Requirements Guide (SRG) is still new, so it makes sense to bring in an outside contractor with experience managing containers through the STIG process. If you go this route, prioritize the knowledge transfer from the contractor to your internal team. Otherwise, their container and STIG knowledge walks out the door at the end of the contract term.

2. Validate your STIG Source Materials

When researching the latest STIG requirements, you need to validate the source materials. There are many vendors and educational sites that publish STIG content. Some of that content is outdated and incomplete. It’s always best to go straight to the source. DISA provides authoritative and up-to-date STIG content online that you should consider as your single source of truth on the STIG process for containers.

3. Make the STIG Viewer part of your Approved Desktop

Working on DoD and other public sector projects requires secure environments for developers, solution architects, cybersecurity specialists, and other team members. The STIG Viewer should become a part of your DoD project team’s secure desktop environment. Save the extra step of your DoD security teams putting in a service desk ticket to request the STIG Viewer installation.

4. Look for Tools that Automate time-intensive Steps in the STIG process

The STIG process is time-intensive, with much of that time spent documenting policy controls. Look for tools that will help you automate compliance checks before you proceed into an audit of your STIG controls. The right tool can save you from audit surprises and rework that would slow down your application going live.

Parting Thought

The STIG process for containers is still very new to DoD programs. Being proactive and preparing your teams upfront in tandem with ongoing collaboration are the keys to navigating the STIG process for containers.

Learn more about putting your containers through the STIG process in our new white paper entitled Navigating STIG Compliance for Containers!

The Secure Software Development Attestation Form: The Next Step in SSDF Compliance

The long-awaited Secure Software Development Attestation Form was published on March 18, 2024 by the Cybersecurity and Infrastructure Security Agency (CISA). This continues the trend toward modernization of cybersecurity compliance that the US government has been marching toward since the turn of the millennium.

This initiative is rooted in the cybersecurity challenges highlighted by Executive Order 14028, including the SolarWinds attack and the Colonial Pipeline ransomware attack, which clearly demonstrated the need for a coordinated national response to the emerging threats of a complex software supply chain.

The Secure Software Development Attestation Form is the newest, and likely not the final, step towards a more secure software supply chain for both the United States and the world at large. We will take you through the details of what this form means for your organization and how to best approach it.

While the SSDF and the new Secure Software Development Attestation Form are important compliance milestones in their own right, they are typically intermediate steps on the way to more rigorous compliance standards such as FedRAMP or continuous Authority to Operate (cATO). Regardless of your end destination, we will guide you through how to approach this step and all of the steps that come after.

Overview of the Secure Software Development Attestation Form

The Secure Software Development Attestation Form is part of a broader effort derived from Cybersecurity EO 14028 (formally titled “Improving the Nation’s Cybersecurity”). As a result of this EO, the Office of Management and Budget (OMB) issued two memorandums: M-22-18, “Enhancing the Security of the Software Supply Chain through Secure Software Development Practices,” and M-23-16, “Update to Memorandum M-22-18.”

These memos require federal agencies to obtain self-attestation forms from software suppliers, who must attest to complying with a subset of the Secure Software Development Framework (SSDF).

Before the publication of the Secure Software Development Attestation Form, the SSDF was a software development best practices standard published by the National Institute of Standards and Technology (NIST), based on industry best practices like BSIMM and OWASP’s SAMM. It was a useful resource for organizations that valued security intrinsically and wanted to run a secure software development practice without any external incentives like formal compliance requirements.

Now, the Secure Software Development Attestation Form requires software providers to self-attest to having met a subset of the SSDF best practices. There are a number of implications to this transition from secure software development as an aspirational standard to a compliance standard, which we cover below. The most important thing to keep in mind is that while the Attestation Form doesn’t require a software provider to be formally certified before they can transact with a federal agency, as FedRAMP does, there are retroactive punishments that can be applied in cases of non-compliance.

Who/What is Affected?

Who is Affected?

  1. Software providers to federal agencies:
    • Federal service integrators
    • Independent software vendors
    • Cloud service providers
  2. Federal agencies and DoD programs that use any of the above software providers

What is Included

  • New software: Any software developed after September 14, 2022
  • Major updates to existing software: A major version change after September 14, 2022
  • Software-as-a-Service (SaaS)

What is Excluded

  • First-party software: Software developed in-house by federal agencies. SSDF is still considered a best practice but does not require self-attestation.
  • Free and open-source software (FOSS): Even though FOSS components and end-user products are excluded from self-attestation, the SSDF requires that specific controls are in place to protect against software supply chain security breaches.


Key Requirements of the Attestation Form

There are two high-level requirements for meeting compliance with the Secure Software Development Attestation Form:

  1. Meet the technical requirements of the form
    • Note: NIST SSDF has 19 categories and 42 total requirements. The self-attestation form covers 4 of them, a subset of the full SSDF; see below for a breakdown of all 4 requirements.
  2. Self-attest to compliance with the subset of SSDF
    • Sign and return the form.

Timeline

The timeline for compliance with the SSDF self-attestation form involves two critical dates: compliance for critical software is required by early June, and for all other software in scope by early September. Precise dates are not yet available due to ambiguity in CISA’s public communications.

Implications

Now that CISA has published the final version of the Secure Software Development Attestation Form, there are a number of implications to this transition. One is economic and the other is potentially criminal.

The economic penalty for not self-attesting to secure software development practices via the form is that any federal agency you’re currently working with won’t be able to pay you anymore, and any future agency you want to work with will ask to see your form before procurement. Sign the form or miss out on this revenue.

The second penalty is a bit scarier from an individual perspective. An officer of the company has to sign the attestation form, taking personal responsibility for the claim that all of the form’s requirements have been met. Here is the relevant quote from the form:

“Willfully providing false or misleading information may constitute a violation of 18 U.S.C. § 1001, a criminal statute.”

It is also important to realize that this isn’t an unenforceable threat. There is evidence that the DOJ Civil Cyber-Fraud Initiative is cracking down on government contractors that fail to meet cybersecurity requirements, bringing False Claims Act investigations and enforcement actions. This will likely weigh heavily on both the individual who signs the form and on the organization’s choice of who signs it.

Challenges and Considerations

Do I still have to sign if I have a 3PAO do the technical assessment?

No. As long as the 3PAO is FedRAMP-certified. 

What if I can’t comply in time?

You can draft a plan of action and milestones (POA&M) to cover the period while you address the gaps between your current system and the system required by the attestation form. If the agency is satisfied with the POA&M, it can continue to use your software, but it has to request either an extension of the deadline from OMB or a waiver in order to do so.

Can only the CEO and COO sign the form?

The draft form required either the CEO or COO to sign, but new language added to the final form allows a different company employee to sign the attestation form.

Full Requirements of Secure Software Development Attestation Form

  1. Software is developed and built in secure environments
    • Isolation and hardening of development and build environments
    • Authorization and access monitoring, logging and auditing
    • Multi-factor authentication
    • Catalog of software components
    • Encrypt sensitive data
    • Continuous monitoring of cybersecurity alerts and incidents
  2. Maintain trusted source code supply chains
    • Make a good-faith effort to maintain trusted software supply chains by employing automated tools to address the security of the software supply chain and manage related vulnerabilities
  3. Maintain provenance for 1st- and 3rd-party components
  4. Employ automated tools for security vulnerabilities
    • Continuous scanning for vulnerabilities
    • Operate a vulnerability remediation policy
    • Operate a vulnerability disclosure program

How Anchore Can Help

Anchore Enterprise is a software supply chain security platform that can address many of the requirements of the Secure Software Development Attestation Form. By utilizing the platform, you can ensure that whoever is required to sign the attestation form has a battle-tested software platform to back up their signature.

Software is developed and built in secure environments

Catalog of software components (i.e., an SBOM)

Anchore Enterprise includes a software component scanner that can make a full inventory of all software and the 3rd party components that are integrated into the end-user software by generating a software bill of materials (SBOM).

Continuous monitoring of cybersecurity alerts and incidents

Anchore Enterprise includes a vulnerability scanner that can uncover vulnerabilities in all software components and alert on vulnerabilities as they are found and feed these alerts into an incident management system that allows for teams to manage and remediate any critical discoveries.

Maintain trusted source code supply chains

This requirement is a bit nebulous, but Anchore Enterprise meets it by providing an “automated tool to address the security of internal code and third-party components and manage related vulnerabilities”. Anchore Enterprise includes a vulnerability scanner that can scan the entire catalog of software components and a management interface for the vulnerabilities that are discovered.

Employ automated tools for security vulnerabilities

Continuous scanning for vulnerabilities

Anchore Enterprise includes a vulnerability scanner that can be integrated into all stages of the CI/CD pipeline and allow for scanning for vulnerabilities across all build environments.

Operate a vulnerability remediation policy

Anchore Enterprise includes remediation recommendations that will utilize the results of a vulnerability scanner to provide feedback to developers on how to best resolve a security vulnerability. 

Operate a vulnerability disclosure program

Anchore Enterprise includes an API that can be consumed by a vulnerability disclosure application or process. All of the vulnerability metadata can be collected from the API in order to populate the required vulnerability metadata for the disclosure.

Conclusion

Cybersecurity compliance modernization is a journey, not a destination. The Secure Software Development Attestation Form is the next step in that journey for secure software development. This stop focused on transforming the preceding SSDF standard from a recommendation into a requirement with a soft gate. Given the overall trend of cybersecurity modernization that was kickstarted with FISMA in 2002, it would be prudent to assume that this attestation form is an intermediate step before the requirements become a hard gate, where compliance will have to be demonstrated as a prerequisite to utilizing the software.

Anchore is committed to keeping you up-to-date on the latest with these changes. If you’re interested in learning more about how Anchore Federal can help you achieve compliance with the Secure Software Development Attestation Form, follow the link. We also have two webinars that go into more depth on how to achieve SSDF compliance and will expand your knowledge on this deeply technical topic.


We don’t know how to fix the xz problem, but we can detect it

A very impressive and scary attack against the xz library was uncovered on Friday, which made for an interesting weekend for many of us.

There has been a lot written about the technical aspects of this attack, and more will be uncovered over the next few days and weeks; it’s likely we’re not done learning new details. This doesn’t appear to affect as many organizations as Log4Shell did, but it’s a pretty big deal, especially for what this sort of attack means for the larger ecosystem of open source. Explaining the details isn’t the point of this blog post. There’s another angle of this story that’s not getting much attention: how can we solve problems like this (we can’t), and what can we do going forward?

The unsolvable problem

Sometimes reality can be harsh, but the painful truth about this sort of attack is that there is no solution. Many projects and organizations are happy to explain how they keep you safe, or how you can prevent supply chain attacks by doing one simple thing. However, the industry as it stands today lacks the ability to prevent an attack created by a motivated and resourced threat actor. If we want an analogy, preventing an attack like xz is the equivalent of “pre-crime” from dystopian science fiction: use data or technology to predict when a crime is going to happen, then stop it before it occurs. As one can imagine, this leads to a number of problems in any society that adopts it.

If there is a malicious open source maintainer, we lack the tools and knowledge to prevent this sort of attack; you can’t actually stop such behavior until after it happens. It may be that there’s no way to stop something like this before it happens.

HOWEVER, that doesn’t mean we are helpless. We can take a page out of the observability industry’s playbook: see problems as they happen, or after they happen, then use that knowledge from the past to improve the future. That is a problem we can solve, and a solution we can measure. If you have a solid inventory of your software, looking for affected versions of xz becomes simple and effective.

Today and Tomorrow

Of course, looking for a vulnerable version of xz, specifically versions 5.6.0 and 5.6.1, is something we should all be doing. If you’ve not gone through the software you’re running, go do this right now. See below for instructions on how to use Syft and Anchore Enterprise to accomplish this.

Finding those two specific versions of xz is the important task right now, but there’s also what happens tomorrow. We’re all very worried about these two versions of xz, yet we should prepare for what comes next. It’s very possible other versions of xz will turn out to contain questionable code that needs to be removed or downgraded, and other libraries could have problems too (everyone is looking for similar attacks now). We don’t really know what’s coming next; the worst part of being in the middle of an attack like this is the unknowns. But there are some things we do know: if you have an accurate inventory of your software, figuring out what software or packages you need to update becomes trivial.

Creating an inventory

If you’re running Anchore Enterprise, the good news is you already have an inventory of your software. You can create a report that will look for images affected by CVE-2024-3094.

Reports like this are created in the Anchore Enterprise reporting interface. Another feature of Anchore Enterprise allows you to query all of your SBOMs for instances of specified software, by package name, via an API call. This is useful for gaining insights about the location, ubiquity, and version spread of the software present in your environment.

The package names in question are the liblzma5 and xz-libs packages, which cover the common naming across rpm, dpkg, and apk based Linux distributions. For example:
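
(The call below is a hypothetical sketch only: the endpoint path and parameter names are illustrative, not the documented API. The API Browser linked below has the real route.)

```bash
# Hypothetical endpoint and parameter names, for illustration only
curl -s -u "$ANCHORE_USER:$ANCHORE_PASS" \
  "https://anchore.example.com/v2/query/packages?name=xz-libs"
```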

See the Anchore Enterprise API Browser for more information about the API, and the Anchore Enterprise Documentation for more details on reporting, vulnerability scanning, and other functions of the system.

If you’re using Syft, it’s a little more complicated, but still a very solvable problem. The first step is to generate SBOMs for the software you’re using; let’s create SBOMs for container images in this example.
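
A minimal sketch, assuming your images can be pulled from a registry (the image names are placeholders):

```bash
# Generate one SBOM file per image and keep them in a local directory
mkdir -p sboms
for image in registry.example.com/app:prod registry.example.com/worker:prod; do
  syft "$image" -o json > "sboms/$(echo "$image" | tr '/:' '__').json"
done
```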

It’s important to keep the SBOMs you just created somewhere safe. If we find out in a few days or weeks that other versions of xz shouldn’t be trusted, or that a different open source library has a similar problem, we can just run a query against those files to understand how, or if, we are affected.

Now that we have a directory full of SBOMs, we can run a query like the one below to figure out which SBOMs contain a version of xz.
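
With Syft’s JSON output, a short jq query over that directory does the job, using the distro package names discussed above:

```bash
# Report any SBOM containing xz or its library packages, with versions
for f in sboms/*.json; do
  jq -r --arg file "$f" \
    '.artifacts[]
     | select(.name == "xz" or .name == "xz-libs" or .name == "liblzma5")
     | "\($file): \(.name) \(.version)"' "$f"
done
```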

While the example looks for xz, it’s easy to adjust the query if the need to look for other packages arises in the near future. It’s even easier if you store the SBOMs in some sort of searchable database.

What now?

There’s no doubt it’s going to be a very interesting couple of weeks for many of us. Anyone who has been watching the news is probably wondering what wild supply chain story will happen next. Our ability to solve open source security problems hasn’t kept pace with the growth of open source. There will be no quick fixes. The only way we get out of this one is a lot of hard questions and even more hard work.

But in the meantime, we can focus on understanding and defending what we have. Sometimes the best defense is a good defense.

Want a primer on software supply chain security? Get our free white paper here.

Navigating the NVD Quagmire

The global cybersecurity community has been in a state of uncertainty since the National Vulnerability Database (NVD) service began to degrade in mid-February. There has been a lot of coverage of this incident, and Anchore has been at the center of much of it. If you haven’t been keeping up, this blog post recaps what has happened so far and how the community has been responding.

Our VP of Security, Josh Bressers, has been leading the charge to educate and organize the community: first with his Open Source Security podcast, which goes through what is happening with NVD and why it is important, and then, last week, in a livestream with Chainguard co-founder Dan Lorenc on the Resilient Cyber Show, hosted by Chris Hughes, about the implications of the current delay in NVD service.

We’ve condensed the topics from these resources into a blog post that will cover the issues created by the delay in NVD service, a background on what has happened so far, a potential open-source solution to the problem and a call to action for advocacy. Continue reading for the good stuff.

The problem

Federal agencies mandate that NVD be used as the primary data source of truth even when higher quality data sources are available. This mainly comes down to the fact that severity scores, meaning the Common Vulnerability Scoring System (CVSS), determine when an agency or organization is out of compliance with a federal security standard. Because these compliance standards are created by the US government, only NVD can score a vulnerability and determine the appropriate action required to stay in compliance.

That’s where the problem starts to come in, you’ve got a whole bunch of government agencies on one hand saying, ‘you must use this data’. And then another government agency that says, “No, you can’t rely on this for anything”. This leaves folks working with the government in a bit of a pickle.

–Dan Lorenc, Co-Founder, Chainguard

If NVD isn’t assigning severities to vulnerabilities, it’s not clear what that means for maintaining compliance, and organizations could be exposing themselves to significant risk. For example, high severity vulnerabilities could be published without organizations being aware of them, because this vital review and scoring process has been removed.

Background on NVD and the current state of affairs

NVD is the canonical source of truth for software vulnerabilities for the federal government, specifically for 10+ federal compliance standards. It has also become a go-to resource for the worldwide security community even if individual organizations in the wider community aren’t striving to meet a United States compliance standard.

NVD adds a number of enrichments to CVE data, but two are of particular importance: first, it adds a severity score to all CVEs; second, it adds information about which versions of the software are impacted by the CVE. The National Institute of Standards and Technology (NIST) has been providing this service to the security community for over 20 years through the NVD. That changed last month:

Timeline

  • Feb 12: NVD dramatically reduces the number of CVEs that are being enriched
  • Feb 15: NVD posts a message about the delay of enrichment on the NVD website

Read a comprehensive background in our original blog post, National Vulnerability Database: Opaque changes and unanswered questions.

Developing an Open-Source Solution

The Anchore team develops and maintains Grype, an open-source vulnerability scanner that uses NVD as one of many vulnerability feeds, as well as Anchore Enterprise, a software supply chain security platform that incorporates Grype. Given that both products use data from NVD, it was particularly important for Anchore to engage in the current crisis.

While there is nothing Anchore can do about the missing severity scores, the other missing enrichment highlighted above is the mapping from each CVE to the software versions it impacts, i.e., Common Platform Enumeration (CPE) data. This matching data ends up being the more important signal during impact analysis because it is an objective measure of impact, whereas severity scoring can be debated (and is, at length).
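
To make the CPE idea concrete, here is an illustrative Python sketch. The CPE string below follows the real CPE 2.3 layout (part, vendor, product, version, …) using xz 5.6.0 as the example; the matching logic is deliberately simplified, since real matchers also handle version ranges and more wildcard cases:

```python
# CPE 2.3 layout: cpe:2.3:part:vendor:product:version:update:edition:...
cve_affected_cpe = "cpe:2.3:a:tukaani:xz:5.6.0:*:*:*:*:*:*:*"

def cpe_fields(cpe: str) -> dict:
    parts = cpe.split(":")
    return {"vendor": parts[3], "product": parts[4], "version": parts[5]}

# A package observed in one of our SBOMs:
installed = {"vendor": "tukaani", "product": "xz", "version": "5.6.0"}

affected = cpe_fields(cve_affected_cpe)
# A field matches when the CPE wildcards it ("*") or the values agree exactly.
if all(affected[k] in ("*", installed[k]) for k in installed):
    print("this package falls within the CVE's affected products")
```

Without this mapping, a scanner has no objective way to decide whether a given CVE applies to the exact version of a package in your environment.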

Given Anchore’s history with the open-source software community, creating an OSS project to fill a gap in the NVD enrichment seemed the logical choice. The goal of going the OSS route is to leverage the transparent process and rapid iteration that comes from building software publicly. Anchore is excited to collaborate with the community to:

  • Ingest CVE data
  • Analyze CVEs
  • Improve the CVE-to-versioning mapping process 

Everyone is being crushed by the unrelenting influx of vulnerabilities. It’s not just NVD. It’s not one organization. We can either sit in our silos and be crushed to death or we can work together.

–Josh Bressers, VP of Security at Anchore

If you’re looking to utilize this data and software as a backfill while NVD analysis remains delayed, or want to contribute to the project, please join us on GitHub.

Cybersecurity Awareness and Advocacy

It might seem strange that the cybersecurity community would need to convince the US government that investing in the cybersecurity ecosystem is a net positive, given that the federal government is the largest purchaser of software in the world and probably the largest target for threat actors. But given how NIST has degraded the service of NVD and provided only opaque guidance on how to fill the gap in the meantime, it doesn’t appear that the right hand is talking to the left.

Whether the federal government intended to or not, by requiring that organizations and agencies utilize NVD in order to meet a number of federal compliance standards, it effectively became the authority on the severity of software vulnerabilities for the global cybersecurity ecosystem. By providing a valuable and reliable service to the community, the US garnered the trust of the ecosystem. The current state of NVD and the manner in which the changes were rolled out have degraded that trust.

It is unknown whether the US will cede its authority to another organization; the EU may attempt to fill this vacuum with its own authoritative database. In the meantime, advocacy for cybersecurity awareness within the government is paramount. It is up to the community to create the pressure that demonstrates the urgency of rethinking the current strategy around a vital community resource like NVD.

Conclusion

Anchore is committed to keeping the community up-to-date on this incident as it unfolds. To stay informed, be sure to follow us on LinkedIn or Twitter/X.

If you’d like to watch the livestream in all its glory, click on the image below to go to the VOD.

Also, if you’re looking for more in-depth coverage of the NVD incident, Josh Bressers hosts a security podcast called Open Source Security that covers the incident and the history of NVD.

Introduction to the DoD Software Factory

In the rapidly evolving landscape of national defense and cybersecurity, the concept of a Department of Defense (DoD) software factory has emerged as a cornerstone of innovation and security. These software factories represent an integration of the principles and practices found within the DevSecOps movement, tailored to meet the unique security requirements of the DoD and Defense Industrial Base (DIB). 

By fostering an environment that emphasizes continuous monitoring, automation, and cyber resilience, DoD Software Factories are at the forefront of the United States Government’s push towards modernizing its software and cybersecurity capabilities. This initiative not only aims to enhance the velocity of software development but also ensures that these advancements are achieved without compromising on security, even against the backdrop of an increasingly sophisticated threat landscape.

Building and running a DoD software factory is so central to the future of software development that “Establish a Software Factory” is one of the explicitly named plays in the DoD DevSecOps Playbook. On top of that, the compliance capstone of the authorization to operate (ATO), or its DevSecOps-infused cousin the continuous ATO (cATO), effectively requires a software factory in order to meet the requirements of the standard. In this blog post, we’ll break down the concept of a DoD software factory and give a high-level overview of the components that make one up.

What is a DoD software factory?

A Department of Defense (DoD) Software Factory is a software development pipeline that embodies the principles and tools of the larger DevSecOps movement with a few choice modifications that conform to the extremely high threat profile of the DoD and DIB. It is part of the larger software and cybersecurity modernization trend that has been a central focus for the United States Government in the last two decades.

A DoD Software Factory aims to create an ecosystem that enables continuous delivery of secure software that meets the needs of end-users while ensuring cyber resilience (a DoD catchphrase that emphasizes the transition from point-in-time security compliance to continuous security compliance). In other words, the goal is to leverage automation of software security tasks in order to fulfill the promise of the DevSecOps movement to increase the velocity of software development.

What is an example of a DoD software factory?

Platform One is the canonical example of a DoD software factory. Run by the US Air Force, it offers a comprehensive portfolio of software development tools and services. It has come to prominence due to its hosted services: Repo One for source code hosting and collaborative development, Big Bang for an end-to-end DevSecOps CI/CD platform and Iron Bank for centralized container storage (i.e., a container registry). These services have led the way in demonstrating that the principles of DevSecOps can be integrated into mission critical systems while still preserving the highest levels of security to protect the most classified information.

If you’re interested in learning more about how Platform One has unlocked the productivity bonus of DevSecOps while maintaining DoD levels of security, watch our webinar with Camdon Cady, Chief of Operations and Chief Technology Officer of Platform One.

Who does it apply to?

Federal Service Integrators (FSI)

Any organization that works with the DoD as a federal service integrator will want to be intimately familiar with DoD software factories, as they will either have to build on top of an existing software factory or, if the mission/program wants full control over its software factory, be able to build one for the agency.

Department of Defense (DoD) Program

Any Department of Defense (DoD) program will need to be well-versed in DoD software factories, as all of its software and systems will be required to run on a software factory as well as to both reach and maintain a cATO.

What are the components of a DoD Software Factory?

A DoD software factory is composed of both high-level principles and specific technologies that implement those principles. Below is a list of some of the most significant principles of a DoD software factory:

Principles of DevSecOps embedded into a DoD software factory

  1. Break Down Organizational Silos
    • This principle is borrowed directly from the DevSecOps movement; specifically, the DoD aims to integrate software development, test, deployment, security and operations into a single culture within the organization.
  2. Open Source and Reusable Code
    • Composable software building blocks are another DevSecOps principle. They increase productivity and reduce security implementation errors by sparing developers from writing security-critical packages they are not experts in.
  3. Immutable Infrastructure as Code (IaC)
    • This principle focuses on treating the infrastructure that software runs on as ephemeral and managed via configuration rather than manual systems operations. Enabled by cloud computing (i.e., hardware virtualization), this principle increases the security of the underlying infrastructure through templated secure-by-design defaults and improves reliability, since all infrastructure has to be designed to fail at any moment.
  4. Microservices architecture (via containers)
    • Microservices are a design pattern that creates smaller software services that can be built and scaled independently of each other. This principle allows for less complex software that performs only a limited set of behaviors.
  5. Shift Left
    • Shift left is the DevSecOps principle that re-frames when and how security testing is done in the software development lifecycle. The goal is to begin security testing while software is being written and tested rather than after the software is “complete”. This prevents insecure practices from cascading into significant issues right as software is ready to be deployed.
  6. Continuous improvement through key capabilities
    • The principle of continuous improvement is a primary characteristic of the DevSecOps ethos but the specific key capabilities that are defined in the DoD DevSecOps playbook are what make this unique to the DoD.
  7. Define a DevSecOps Pipeline
    • A DevSecOps pipeline is the system that brings together all of the preceding principles in order to create the continuously improving security outcomes that are the goal of the DoD software factory program.
  8. Cyber Resilience
    • Cyber resiliency is the goal of a DoD software factory. It is defined as “the ability to anticipate, withstand, recover from, and adapt to adverse conditions, stresses, attacks, or compromises on the systems that include cyber resources.”

Common tools and systems of a DoD software factory

  1. Code Repository (e.g., Repo One)
    • Where software source code is stored, managed and collaborated on.
  2. CI/CD Build Pipeline (e.g., Big Bang)
    • The system that automates the creation of software build artifacts, tests the software and packages the software for deployment.
  3. Artifact Repository (e.g., Iron Bank)
    • The storage system for software components used in development and the finished software artifacts that are produced from the build process.
  4. Runtime Orchestration and Platform (e.g., Big Bang)
    • The deployment system that hosts the software artifacts pulled from the registry and keeps the software running so that users can access it.

How do I meet the security requirements for a DoD Software Factory? (Best Practices)

Use a pre-existing software factory

The benefit of using a pre-existing DoD software factory is the same as using a public cloud provider; someone else manages the infrastructure and systems. What you lose is the ability to highly customize your infrastructure to your specific needs. What you gain is the simplicity of only having to write software and allow others with specialized skill sets to deal with the work of building and maintaining the software infrastructure. When you are a car manufacturer, you don’t also want to be a civil engineering firm that designs roads.

To view existing DoD software factories, visit the Software Factory Ecosystem Coalition website.

Roll your own by following DoD best practices 

If you need the flexibility and customization of managing your own software factory then we’d recommend following the DoD Enterprise DevSecOps Reference Design as the base framework. There are a few software supply chain security recommendations that we would make in order to ensure that things go smoothly during the authorization to operate (ATO) process:

  1. Continuous vulnerability scanning across all stages of CI/CD pipeline
    • Use a cloud-native vulnerability scanner that can be directly integrated into your CI/CD pipeline and called automatically during each phase of the SDLC (see the sketch after this list)
  2. Automated policy checks to enforce requirements and achieve ATO
    • Use a cloud-native policy engine in tandem with your vulnerability scanner in order to automate the reporting and blocking of software that is a security threat and a compliance risk
  3. Remediation feedback
    • Use a cloud-native policy engine that can provide automated remediation feedback to developers in order to maintain a high velocity of software development
  4. Compliance (Trust but Verify)
    • Use a reporting system that can be directly integrated with your CI/CD pipeline to create and collect the compliance artifacts that can prove compliance with DoD frameworks (e.g., CMMC and cATO)
  5. Air-gapped system
    • Utilize a cloud-native software supply chain security platform that can be deployed into an air-gapped environment in order to maintain the most strict security for classified missions
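
To illustrate the first recommendation, below is a minimal sketch of a pipeline gate written in Python. It assumes an open-source scanner like Grype is installed on the CI runner; the image name and the fail-on-critical threshold are illustrative choices, not requirements:

```python
import json
import subprocess
import sys

IMAGE = "registry.example.com/app:latest"  # hypothetical image reference

# Run the scanner and capture its JSON report (Grype's JSON output
# places findings in a top-level "matches" array).
result = subprocess.run(
    ["grype", IMAGE, "-o", "json"],
    capture_output=True, text=True, check=True,
)
matches = json.loads(result.stdout).get("matches", [])

critical = [m for m in matches
            if m["vulnerability"]["severity"].lower() == "critical"]
for m in critical:
    print(m["vulnerability"]["id"], "in", m["artifact"]["name"])

# A non-zero exit code fails this stage of the pipeline.
sys.exit(1 if critical else 0)
```

Note that Grype also ships a built-in --fail-on severity flag that implements this kind of gate without a wrapper script; the sketch simply shows where such a check sits in the pipeline.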

Is a software factory required in order to achieve cATO?

Technically, no. Effectively, yes. A cATO requires that your software is deployed on an Approved DoD Enterprise DevSecOps Reference Design, not on a software factory specifically. If you build your own DevSecOps platform that meets the criteria of the reference design, then you have effectively rolled your own software factory.

How Anchore Can Help

The easiest and most effective method for achieving the security guarantees that a software factory is required to meet for its software supply chain is to use:

  1. An SBOM generation tool that integrates directly into your software development pipeline (see the sketch after this list for how components 1 and 2 fit together)
  2. A container vulnerability scanner that integrates directly into your software development pipeline
  3. A policy engine that integrates directly into your software development pipeline
  4. A centralized database to store all of your software supply chain security logs
  5. A query engine that can continuously monitor your software supply chain and automate the creation of compliance artifacts
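
As a brief sketch of how the first two components fit into a single pipeline step, the following assumes the open-source tools Syft and Grype are installed on the build host; the image name and artifact paths are illustrative:

```python
import subprocess
from pathlib import Path

IMAGE = "registry.example.com/app:1.2.3"  # hypothetical image reference
out = Path("artifacts")
out.mkdir(exist_ok=True)

# 1. Generate an SBOM for the image at build time and keep it.
with open(out / "sbom.json", "w") as f:
    subprocess.run(["syft", IMAGE, "-o", "json"], stdout=f, check=True)

# 2. Scan from the stored SBOM rather than re-pulling the image; the same
#    file can be re-scanned later as new vulnerabilities are published.
with open(out / "scan.json", "w") as f:
    subprocess.run(["grype", f"sbom:{out / 'sbom.json'}", "-o", "json"],
                   stdout=f, check=True)
```

Storing the SBOM and scan report centrally (components 4 and 5) is what turns these point-in-time checks into continuous monitoring and automated compliance artifacts.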

These are the primary components of Anchore Enterprise and Anchore Federal, cloud-native, SBOM-powered software composition analysis (SCA) platforms that provide end-to-end software supply chain security to holistically protect your DevSecOps pipeline and automate compliance. This approach has been validated by the DoD; in fact, the DoD’s Container Hardening Process Guide specifically names Anchore Federal as a recommended container hardening solution.

Learn more about how Anchore brings DevSecOps to DoD software factories.

Conclusion and Next Steps

DoD software factories can come off as intimidating at first, but hopefully we have broken them down into a more digestible form. At their core, they reflect the best of the DevSecOps movement, with specific adaptations for the extreme threat environment that the DoD operates in, as well as the intersecting trend of modernizing federal security compliance standards.

For those who want to understand how a policy engine can reliably deliver DevSecOps developer productivity, continuous security and automated compliance, read our overview of how policy engines are the perfect complement to software supply chain security: The Power of Policy-as-Code for the Public Sector.

Spring Webinar Update: Expand Your Knowledge with Our Expert-Led Sessions

In our continuous effort to bring valuable insights and tools to the world of software supply chain security, we are thrilled to announce two upcoming webinars and one recently held webinar now available for on-demand access. Whether you’re looking to enhance your understanding of software security, explore open-source tools to automate OSS licensing management, or navigate the complexities of compliance with federal standards, our expert-led webinars are designed to equip you with the knowledge you need. Here’s what’s on the agenda:

Tracking License Compliance Made Easy: Intro to Grant (OSS)

Date: Mar 26, 2024 at 2pm EDT  (11am PDT)

Join us as Anchore CTO Dan Nurmi and Grant maintainer Christopher Phillips discuss the challenges of managing software licenses within production environments, highlighting the complexity and ongoing nature of tracking open-source licenses.

They will introduce Grant, an open-source tool designed to alleviate the burden of OSS license inspection by demonstrating how to scan for licenses within SBOMs or container images, simplifying a typically manual process. The session will cover the current landscape of software licenses, the difficulties of compliance checks, and a live demo of Grant’s features that automate this previously laborious process.

Software Security in the Real World with Kelsey Hightower and Dan Perry

Date: April 4th, 2024 at 2pm EDT  (11am PDT)

In our upcoming webinar, experts Kelsey Hightower and Dan Perry will delve into the nuances of securing software in cloud-native, containerized applications. This in-depth session will explore the criteria for vulnerability testing success or failure, offering insights into security testing and compliance for modern software environments. 

Through a live demonstration of Anchore Enterprise, they’ll provide a comprehensive look at visibility, inspection, policy enforcement, and vulnerability remediation, equipping attendees with a deeper understanding of software supply chain security, proactive security strategies, and practical steps to embark on a software security journey. 

The discussion will continue after the webinar on X/Twitter with Kelsey Hightower.

FedRAMP and SSDF Compliance: How to Sell to the Federal Government

This webinar explores how Anchore aids in navigating the complex compliance requirements for selling software to the federal government, focusing on FedRAMP vulnerability scanning and SSDF compliance. Led by Josh Bressers, VP of Security, and Connor Wynveen, Senior Solutions Engineer, it will detail how to evaluate FedRAMP controls for software containers and how to adhere to SSDF guidelines.

Key takeaways include strategies to streamline FedRAMP and SSDF compliance efforts, leveraging SBOMs for efficiency, the critical role of automated vulnerability scans, and how Anchore’s policy pack can assist organizations in meeting compliance standards.

Accessing the Webinars

Don’t miss out on the opportunity to expand your knowledge and skills with these sessions. To register for the upcoming webinars or to access the on-demand webinar, visit our webinar landing page. Whether you’re looking to stay ahead of the curve in software security, automate OSS license compliance, or break into the federal market, our webinars are designed to provide you with the insights and tools you need.

We look forward to welcoming you to our upcoming webinars. Stay informed, stay ahead!

National Vulnerability Database: Opaque changes and unanswered questions

A short history lesson on the NVD

Founded in 2005, the National Vulnerability Database, or NVD, is a collection of vulnerability data maintained by the National Institute of Standards and Technology (NIST) in the United States. Today, many companies rely on NVD data for their security operations and vulnerability research.

NVD describes itself as:

The NVD is the U.S. government repository of standards based vulnerability management data represented using the Security Content Automation Protocol (SCAP). This data enables automation of vulnerability management, security measurement, and compliance. The NVD includes databases of security checklist references, security related software flaws, product names, and impact metrics.

The primary role of the NVD is adding data to vulnerabilities that have been assigned a CVE ID. It includes additional metadata such as severity levels via the Common Vulnerability Scoring System (CVSS) and affected-product data via Common Platform Enumeration (CPE). NIST is responsible for maintaining the NVD because each CVE ID can require additional modifications or maintenance; the nature of a vulnerability can change daily. This is a service NVD has been providing for nearly 20 years.

The graph below shows a historical trend of CVE IDs that have been published in the CVE program (green), alongside the analysis data provided by NVD (red), since 2005.

Key: Green is all CVE IDs in NVD. Red is IDs with a CPE attached

We can see nearly every CVE has been enriched by NVD during this time.

A problematic website notice from the NVD

On February 15th, 2024, a banner appeared on the NVD website stating:

NIST is currently working to establish a consortium to address challenges in the NVD program and develop improved tools and methods. You will temporarily see delays in analysis efforts during this transition. We apologize for the inconvenience and ask for your patience as we work to improve the NVD program.

It’s not entirely clear what this message means for the data provided by NVD or what the public should expect. 

While attempting to research the meaning behind this statement, Anchore engineers discovered that as of February 15, 2024, NIST has almost completely stopped updating NVD with analysis for CVE IDs. The graph below shows the trend of CVE IDs that have been published in the CVE program (green), alongside the analysis data provided by NVD (red), since January 1, 2024.

Key: Green is all CVE IDs in NVD. Red is IDs with a CPE attached

Starting February 12th, thousands of CVE IDs have been published without any record of analysis by NVD. Since the start of 2024 there have been 6,171 CVE IDs published, with only 3,625 enriched by NVD. That leaves a gap of 2,546 IDs (41%!).

NVD has become an industry standard, with organizations and security products relying on its data for security operations such as prioritizing vulnerability remediation and securing infrastructure. CVE IDs are constantly being added and updated, but those IDs are now missing the key analytical data NVD provides. Organizations that depend on NVD for vulnerability data such as CVSS scores are no longer receiving updates to CVE data. This leaves them in the dark on new vulnerabilities, imposing greater risk and an unmanaged attack surface on their environments.

Wait and see?

NVD has not yet addressed how the gap left by this missing data will be filled. There are other vulnerability databases, such as the GitHub Advisory Database and the CVE5 database, that contain severity ratings and affected products, but by definition, those databases cannot provide NVD severity scores.

Anchore is investigating options to create a public repository of identifiers to fill this gap. We invite members of the security community to join us at our next meetup on March 14th, 2024 as we research options. Details for the meetup are available on GitHub.

In the meantime, we will continue to look for updates from NIST and hope that they are more transparent about their service situation soon.