Automated Policy Enforcement for CMMC with Anchore Enterprise

The Cybersecurity Maturity Model Certification (CMMC) is an important program to harden the cybersecurity posture of the defense industrial base. Its purpose is to validate that appropriate safeguards are in place to protect controlled unclassified information (CUI). Many of the organizations that are required to comply with CMMC are Anchore customers. They are responsible for protecting the sensitive but unclassified data of the US military and government agencies as they support the various missions of the United States.

CMMC 2.0 Levels

  • Level 1 Foundational: Safeguard federal contract information (FCI); not critical to national security.
  • Level 2 Advanced: This maps directly to NIST Special Publication (SP) 800-171. Its primary goal is to ensure that government contractors are properly protecting CUI.
  • Level 3 Expert: This maps directly to NIST Special Publication (SP) 800-172. Its primary goal is to go beyond the base-level security requirements defined in NIST 800-171. NIST 800-172 provides security requirements that specifically defend against advanced persistent threats (APTs).

This is of critical importance because these organizations leverage commonplace DevOps tooling to build their software. Additionally, these large organizations may be working with smaller subcontractors or suppliers who are building software in tandem or in partnership.

For example, imagine a mega-defense contractor working alongside a small mom-and-pop shop to develop software for a classified government system. This scenario raises several questions:

  1. How can my company, as a mega-defense contractor, validate that software built by my partner does not use blacklisted software packages?
  2. How can my company validate that software supplied to me is free of malware?
  3. How can I validate that the software supplied to me complies with my security team's licensing standards and vulnerability thresholds?
  4. How do I validate that the software I'm supplying is compliant not only with NIST 800-171 and CMMC, but also with the compliance standards of my government end user (such as NIST 800-53 or NIST 800-161)?

Validating Security Between DevSecOps Pipelines and the Software Supply Chain

Contractors large and small have taken steps to build internal DevSecOps (DSO) pipelines. However, the defense industrial base (DIB) routinely involves relationships in which smaller defense contractors supply software to a larger defense contractor, whose program or DSO pipeline consumes and implements that software. With Anchore Enterprise, we can now validate whether that supplied software is compliant with the CMMC controls specified in NIST 800-171.

Looking to learn more about how to achieve CMMC Level 2 or NIST 800-171 compliance? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.

Which Controls Does Anchore Enterprise Automate?

3.1.7 – Restrict Non-Privileged Users and Log Privileged Actions

Related NIST 800-53 Controls: AC-6 (10)

Description: Prevent non-privileged users from executing privileged functions and capture the execution of such functions in audit logs. 

Implementation: Anchore Enterprise can scan container manifests to determine whether the user is being given root privileges and apply an automated policy to prevent such containers from being promoted to a runtime environment. This prevents a scenario where privileged functions can be executed in a runtime environment.
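To illustrate the kind of check involved, here is a minimal sketch in Python that flags a Dockerfile whose effective user is root. This is illustrative only; Anchore Enterprise implements this class of check as a built-in policy gate rather than a standalone script, and the file path here is a placeholder.

    # Minimal sketch: flag a Dockerfile whose effective user is root.
    # Illustrative only; not Anchore's actual implementation.
    def effective_user(dockerfile_text: str) -> str:
        """Return the user set by the last USER instruction (default: root)."""
        user = "root"  # containers run as root unless a USER instruction says otherwise
        for line in dockerfile_text.splitlines():
            stripped = line.strip()
            if stripped.upper().startswith("USER "):
                user = stripped.split(None, 1)[1].strip()
        return user

    with open("Dockerfile") as f:  # path is a placeholder
        if effective_user(f.read()) in ("root", "0"):
            raise SystemExit("Policy violation: container runs as root")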

3.4.1 – Maintain Baseline Configurations & Inventories

Related NIST 800-53 Controls: CM-2(1), CM-8(1), CM-6

Description: Establish and maintain baseline configurations and inventories of organizational systems (including hardware, software, firmware, and documentation) throughout the respective system development life cycles.

Implementation: Anchore Enterprise provides a centralized inventory of all containers and their associated manifests at each stage of the development pipeline. All manifests, images and containers are automatically added to the central tracking inventory so that a complete list of all artifacts of the build pipeline can be tracked at any moment in time.

3.4.2 – Enforce Security Configurations

Related NIST 800-53 Controls: CM-2(1), CM-8(1), CM-6

Description: Establish and enforce security configuration settings for information technology products employed in organizational systems.

Implementation: Anchore Enterprise scans all container manifest files for security configurations and publishes found vulnerabilities to a centralized database that can be used for monitoring, ad-hoc reporting, alerting, and/or automated policy enforcement.

3.4.3 – Monitor and Log System Changes with Approval Process

Related NIST 800-53 Controls: CM-3

Description: Track, review, approve or disapprove, and log changes to organizational systems.

Implementation: Anchore Enterprise provides a centralized dashboard that tracks all changes to applications which makes scheduled reviews simple. It also provides an automated controller that can apply policy-based decision making to either automatically approve or reject changes to applications based on security rules.

3.4.4 – Run Security Analysis on All System Changes

Related NIST 800-53 Controls: CM-4

Description: Analyze the security impact of changes prior to implementation.

Implementation: Anchore Enterprise can scan changes to applications for security vulnerabilities during the build pipeline to determine the security impact of the changes.

3.4.6 – Apply Principle of Least Functionality

Related NIST 800-53 Controls: CM-7

Description: Employ the principle of least functionality by configuring organizational systems to provide only essential capabilities.

Implementation: Anchore Enterprise can scan all applications to ensure that they are uniformly applying the principle of least functionality. If an application does not meet this standard, Anchore Enterprise can be configured to prevent it from being deployed to a production environment.

3.4.7 – Limit Use of Nonessential Programs, Ports, and Services

Related NIST 800-53 Controls: CM-7(1), CM-7(2)

Description: Restrict, disable, or prevent the use of nonessential programs, functions, ports, protocols, and services.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for specific security violations and prevent these applications from being deployed until the violations are remediated.

3.4.8 – Implement Blacklisting and Whitelisting Software Policies

Related NIST 800-53 Controls: CM-7(4), CM-7(5)

Description: Apply deny-by-exception (blacklisting) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (whitelisting) policy to allow the execution of authorized software.

Implementation: Anchore Enterprise can be configured as a gating agent that applies a security policy to all scanned software. Policies can be configured in either a blacklisting (deny-by-exception) or whitelisting (permit-by-exception) manner.
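As a rough sketch of what such a rule can look like, the snippet below models a deny-by-exception rule loosely on the Anchore policy bundle format; the field names and evaluation logic are simplified for illustration and are not the exact product schema.

    # Simplified sketch of a deny-by-exception (blacklist) policy rule,
    # modeled loosely on the Anchore policy bundle format. Field names
    # and evaluation logic are illustrative, not the exact product schema.
    policy_rule = {
        "gate": "packages",       # evaluate installed packages
        "trigger": "blacklist",   # fire when a denylisted package is found
        "action": "STOP",         # STOP fails the policy evaluation
        "params": [{"name": "name", "value": "telnet"}],
    }

    def evaluate(installed_packages: list, rule: dict) -> str:
        denied = {p["value"] for p in rule["params"] if p["name"] == "name"}
        return rule["action"] if denied.intersection(installed_packages) else "GO"

    print(evaluate(["openssl", "telnet"], policy_rule))  # -> STOP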

3.4.9 – Control and Monitor User-Installed Software

Related NIST 800-53 Controls: CM-11

Description: Control and monitor user-installed software.

Implementation: Anchore Enterprise scans all software in the development pipeline and records all user-installed software. The scans can be monitored in the provided dashboard. User-installed software can be controlled (allowed or denied) via the gating agent.

3.5.10 – Store and Transmit Only Cryptographically-Protected Passwords

Related NIST 800-53 Controls: IA-5(1)

Description: Store and transmit only cryptographically-protected passwords.

Implementation: Anchore Enterprise can scan for plain-text secrets in build artifacts and prevent exposed secrets from being promoted to the next environment until the violation is remediated. This prevents unauthorized storage or transmission of unencrypted passwords or secrets.
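At its core, this kind of check is pattern matching over build artifacts. Here is a minimal, hypothetical sketch; the patterns and paths are illustrative, and a production scanner uses far more robust rules.

    import re
    from pathlib import Path

    # Hypothetical sketch of a plaintext-secret scan over a build context.
    # Patterns and paths are illustrative only.
    PATTERNS = {
        "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
        "private_key": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
        "password_assignment": re.compile(r"(?i)password\s*=\s*\S+"),
    }

    def scan_tree(root: str):
        findings = []
        for path in Path(root).rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for name, pattern in PATTERNS.items():
                if pattern.search(text):
                    findings.append((str(path), name))
        return findings

    if scan_tree("./build-context"):  # path is a placeholder
        raise SystemExit("Policy violation: plaintext secrets found")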

3.11.2 – Scan for Vulnerabilities

Related NIST 800-53 Controls: RA-5, RA-5(5)

Description: Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified.

Implementation: Anchore Enterprise is designed to scan all systems and applications for vulnerabilities continuously and alert when any changes introduce new vulnerabilities.
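For a sense of what continuous scanning looks like in a pipeline step, here is a minimal sketch that wraps Grype, Anchore's open source vulnerability scanner; the image reference is a placeholder.

    import subprocess

    # Minimal sketch: scan an image with Grype (Anchore's open source
    # scanner) and fail the pipeline step if anything at or above "high"
    # severity is found. The image reference is a placeholder.
    result = subprocess.run(
        ["grype", "registry.example.com/app:latest", "--fail-on", "high"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        raise SystemExit("New high-severity vulnerabilities detected")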

3.11.3 – Remediate Vulnerabilities Respective to Risk Assessments

Related NIST 800-53 Controls: RA-5, RA-5(5)

Description: Remediate vulnerabilities in accordance with risk assessments.

Implementation: Anchore Enterprise can be tuned to allow or deny changes based on a risk scoring system.

3.12.2 – Implement Plans to Address System Vulnerabilities

Related NIST 800-53 Controls: CA-5

Description: Develop and implement plans of action designed to correct deficiencies and reduce or eliminate vulnerabilities in organizational systems.

Implementation: Anchore Enterprise automates the process of ensuring all software and systems are in compliance with the security policy of the organization. 

3.13.4 – Block Unauthorized Information Transfer via Shared Resources

Related NIST 800-53 Controls: SC-4

Description: Prevent unauthorized and unintended information transfer via shared system resources.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for unauthorized and unintended information transfer and prevent violations from being transferred between shared system resources until the violations are remediated.

3.13.8 – Use Cryptography to Safeguard CUI During Transmission

Related NIST 800-53 Controls: SC-8

Description: Implement cryptographic mechanisms to prevent unauthorized disclosure of CUI during transmission unless otherwise protected by alternative physical safeguards.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for CUI and prevent violations of organization defined policies regarding CUI from being disclosed between systems.

3.14.5 – Periodically Scan Systems and Real-time Scan External Files

Related NIST 800-53 Controls: SI-2

Description: Perform periodic scans of organizational systems and real-time scans of files from external sources as files are downloaded, opened, or executed.

Implementation: Anchore Enterprise can be configured to scan all external dependencies that are built into software and provide information about relevant security vulnerabilities in the software development pipeline.

Wrap-Up

In a world increasingly defined by software solutions, the cybersecurity posture of defense-related industries is paramount. The CMMC framework, with its tiered compliance levels, underscores the commitment of the defense industrial base to fortify its cyber defenses.

As a multitude of organizations, ranging from the largest defense contractors to smaller mom-and-pop shops, work in tandem to support U.S. missions, the intricacies of maintaining cybersecurity standards grow. The questions posed exemplify the necessity to validate software integrity, especially in complex collaborations. 

Anchore Enterprise solves these problems by automating software supply chain security best practices. It not only automates a myriad of crucial controls, ranging from user privilege restrictions to vulnerability scanning, but it also empowers organizations to meet and exceed the benchmarks set by CMMC and NIST. 

In essence, as defense entities navigate the nuanced web of software development and partnerships, tools like Anchore Enterprise are indispensable in safeguarding the nation’s interests, ensuring the integrity of software supply chains, and championing the highest levels of cybersecurity.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Four Signs You’re Ready to Upgrade from DIY Supply Chain Security to Anchore Enterprise

Build versus buy is always a complex decision for most organizations. Typically, a tipping point is reached when the friction of building and running your own tooling outweighs the cost benefit of abstaining from adding yet another vendor to your SaaS bill. The signals that an organization is approaching this moment vary based on the tool you're considering.

In this blog post, we will outline some of the common signals that your organization is approaching this event for managing software supply chain risk. Whether your developers have self-adopted software development best practices like creating software bills of materials (SBOMs) and you're now drowning in an ocean of valuable but scattered security data, or you're ready to start scaling your shift left security strategy across your entire software development life cycle, we will cover all of these scenarios and more.

Challenge Type: Scaling SBOM Management

Managing SBOMs is getting out of hand; each day there is more SBOM data to sort and store. SBOM generation is by far the easiest capability to implement today. It's free, extremely lightweight (a low learning curve for engineers to adopt, unlike some enterprise products), and it's fast…blazing fast! As a result, teams can quickly generate hundreds, thousands (even millions!) of SBOMs over the course of a fiscal year. This is great from a security data perspective but creates its own problems.

Once the friction of creating SBOMs becomes trivial, teams typically struggle to find good ways to store and manage all of this new data. As with any other data source, questions arise about how long to retain the data, how to query it for security-related issues, and how to integrate it with third-party tooling to glean actionable security insights. Once teams have fully adopted SBOM generation in a few areas, it is a good practice to consider the best way to manage the data so that your developers' effort is not in vain.
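To make the storage problem concrete, here is a minimal sketch that ingests a directory of Syft-format SBOM JSON files into a queryable SQLite table. The paths and schema are illustrative, and Syft's native JSON layout (components listed under an "artifacts" key) is assumed.

    import json
    import sqlite3
    from pathlib import Path

    # Minimal sketch: load Syft-format SBOM JSON files into SQLite so the
    # data can be retained and queried. Paths and schema are illustrative.
    db = sqlite3.connect("sboms.db")
    db.execute("""CREATE TABLE IF NOT EXISTS packages
                  (sbom TEXT, name TEXT, version TEXT)""")

    for sbom_file in Path("./sboms").glob("*.json"):
        doc = json.loads(sbom_file.read_text())
        rows = [(sbom_file.name, a["name"], a["version"])
                for a in doc.get("artifacts", [])]
        db.executemany("INSERT INTO packages VALUES (?, ?, ?)", rows)

    db.commit()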

Anchore Enterprise helps in a variety of ways: not just managing SBOMs, but also detecting SBOM drift in the build process and alerting security teams to changes in SBOMs so they can be assessed for risk or malicious activity.
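Drift detection itself can be as simple as diffing the package sets of two builds. Below is a hedged sketch that reuses the Syft JSON layout assumed above; the file names are placeholders.

    import json

    def package_set(sbom_path: str) -> set:
        """Return the (name, version) pairs from a Syft-format SBOM."""
        with open(sbom_path) as f:
            doc = json.load(f)
        return {(a["name"], a["version"]) for a in doc.get("artifacts", [])}

    # Sketch of SBOM drift detection between two builds of the same image.
    previous = package_set("app-build-41.json")
    current = package_set("app-build-42.json")

    added, removed = current - previous, previous - current
    if added or removed:
        print("SBOM drift detected:")
        for name, version in sorted(added):
            print(f"  + {name} {version}")
        for name, version in sorted(removed):
            print(f"  - {name} {version}")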

Challenge Type: Regulatory Compliance

Let’s say that you just got a massive policy compliance mandate dropped in your lap from your manager. It’s your job to implement the parameters within the allotted deadline, and you’re not sure where to start.

As we've talked about in other posts, meeting compliance standards is more than a full-time job. Organizations have to decide either to DIY compliance or to work with third parties that have expertise in specific standards. With the debut of revision 5 of NIST 800-53, the "Control Catalog", more and more compliance standards require companies to implement controls that specifically address software supply chain security. This is because many federal compliance standards build on the "Control Catalog" as the source of truth for secure IT systems.

Whether it's FedRAMP, a compliance framework related to NIST 800-53, or something as simple as a CIS benchmark, Anchore can help. Anchore Enterprise offers automated policy enforcement across your software supply chain, enforcing compliance frameworks on your source code repositories, images in development, and runtime Kubernetes clusters.

Challenge Type: Zero-Day Response

When a zero-day vulnerability is discovered, how do you answer the question "Am I vulnerable?" Depending on how well you have structured your security practice, answering that question can take anywhere from an hour to a week or more. The longer that window, the more risk your organization accrues. Once a zero-day incident occurs, it is very easy to spot the organizations that are prepared and those that are not.

If you haven't figured it out yet, the retention and centralized management of SBOMs is one of the most useful tools in modern incident response plans for identifying and triaging zero-day incidents. Even though software teams are empowered to make decisions in a decentralized way, they can still adhere to security principles that benefit from a centralized data store. This type of centralization allows organizations to answer critical questions with speed at critical moments in the life of an organization.

Anchore Enterprise helps answer the question "Am I vulnerable?" in minutes rather than days or weeks. By creating a centralized store of software supply chain data (via SBOMs), Anchore Enterprise allows organizations to quickly query this information and get back precise information on whether a vulnerable package exists within the organization and exactly where to focus remediation efforts. We also provide hands-on training that takes our customers through tabletop exercises in a controlled environment. By simulating a zero-day incident, we test how well an organization is prepared to handle an uncontrolled threat environment and identify the gaps that could lead to extended uncertainty.
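Continuing the illustrative SQLite store sketched earlier, answering "Am I vulnerable?" for a zero-day like Log4Shell reduces to a single query.

    import sqlite3

    # Sketch: query the illustrative SBOM store for every application that
    # contains log4j-core. (Log4Shell affected versions 2.0-beta9 through
    # 2.14.1; version-range matching is omitted here for brevity.)
    db = sqlite3.connect("sboms.db")
    rows = db.execute(
        "SELECT sbom, name, version FROM packages WHERE name LIKE ?",
        ("%log4j-core%",),
    ).fetchall()

    for sbom, name, version in rows:
        print(f"{sbom}: {name} {version}")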

Challenge Type: Scaling a Shift Left Security Culture 

The shift left security movement was based on the principle that organizations can preempt security incidents by implementing secure development practices earlier in the software development lifecycle. The problem with this approach arises as you attempt to scale it: each gate you add to catch security vulnerabilities earlier in the life cycle slows the software development process and requires more security resources.

To scale shift left security practices, organizations need to adopt software-based solutions that automate these checks and allow developers to self-diagnose and remediate vulnerabilities without significant intervention from the security team. The earlier in the software development process that vulnerabilities are caught, the faster secure software can be shipped.

Anchore enables organizations to scale their shift left security strategy by automating security checks at multiple points in the development life cycle. On top of that, because of the speed at which Anchore runs its security scans, organizations can check every software artifact in the development pipeline without adding significant friction. Checking every image during integration (CI), storage (registry), and runtime (CD) allows Anchore to scale a continuous security program that significantly reduces the potential for a vulnerable application to find its way to production, where it can be exploited by a malicious adversary. The Anchore Enterprise runtime monitoring capabilities allow you to see what is running in your environment, detect issues within those images, and prevent images that fail policy checks from being deployed in your cluster or runtime environment.

Wrap-Up

The landscape of software supply chain security is increasingly complex, underscored by the rapid proliferation of SBOMs, rising compliance standards, and evolving security threats. Organizations today face the dilemma of scaling in-house security tools or seeking more streamlined and comprehensive solutions. As highlighted in this post, many of the above signals might indicate that it’s time for your organization to transition from DIY methods to a more robust solution. 

Anchore Enterprise was developed to overcome the challenges that are most common to organizations. With its focus on aiding organizations in scaling their shift-left security strategies, Anchore not only ensures compliance but also facilitates faster and safer software deployment. Even though each organization has its own set of unique challenges pertaining to software supply chain security, Anchore Enterprise is ready to enable organizations to mitigate and respond to these challenges.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Software Supply Chain Hierarchy of Needs: SBOMs as the Foundation

Software is a powerful tool for simplifying complex, technical concepts, but with that incredible power comes an interconnected labyrinth of software dependencies that often forms the foundation of innovative applications. These dependencies are not without their pitfalls, as we've learned from incidents like Log4Shell. As we navigate the ever-evolving landscape of software supply chain security, we need to ensure that our applications are built on strong foundations.

In this blog post, we delve into the concept of a Software Bill of Materials (SBOM) as a foundational requirement for a secure software supply chain. Just as a physical supply chain is scrutinized to ensure the quality and safety of a product, a software supply chain also requires critical evaluation. What’s at stake isn’t just the functionality of an application, but the security of information that the application has access to. Let’s dive into the world of software supply chains and explore how SBOMs could serve as the bedrock for a more resilient future in software development and security.

What are Software Supply Chain Attacks?

Supply chain attacks are malicious strikes that target the suppliers of components of an application rather than the application itself. Software supply chains are similar to physical supply chains. When you purchase an iPhone all you see is the finished product. Behind the final product is a complex web of component suppliers that are then assembled together to produce an iPhone. Displays and camera lenses from a Japanese company, CPUs from Arizona, modems from San Diego, lithium ion batteries from a Canadian mine; all of these pieces come together in a Shenzhen assembly plant to create a final product that is then shipped straight to your door.

In the same way that an attacker could target one of the iPhone suppliers to modify a component before the iPhone is assembled, a software supply chain threat actor could do the same but target an open source package that is then built into a commercial application. This is a problem when 70-90% of modern applications are built using open source software components. Given this, the supply chain is only as secure as its weakest link. The iceberg image below has become a somewhat overused meme in software supply chain security, but it has become overused precisely because it explains the situation so well.

Below is the same idea but without the floating ice analogy. Each layer is another layer of abstraction, further removed from "Your App". All of these dependencies give your software developers the superpower to build extremely complex applications very quickly, but with the unintentional side effect that they cannot possibly understand all of the ingredients that are coming together.

This gives adversaries their opening. A single compromised package allows attackers to manipulate all of the packages "downstream" of their entry point.

This reality was viscerally felt by the software industry (and all industries that rely on the software industry, meaning all industries) during the Log4j incident.

Log4Shell Impact

Log4Shell is the poster child for the importance of software supply chain security. We're not going to go deep on the vulnerability in this post; we have done that in a number of other posts. Instead, we're going to focus on the impact of the incident on organizations that had instances of Log4j in their applications and what they had to go through in order to remediate the vulnerability.

First let’s brush up on the timeline:

The vulnerability in log4j was originally disclosed privately on November 24. Five days later, a pull request was published to close the vulnerability, and a week after that the new package was released. The official public disclosure happened on December 10. This is when the mayhem began and companies started the work of determining whether they were vulnerable and figuring out how to remediate the vulnerability.

On average, impacted individuals spent ~90 hours dealing with the Log4j incident. Roughly 20% of that time was spent identifying where the log4j package was deployed into an application. 

From conversations with our customers and prospects, the primary determinant of why this step took such a large portion of time was whether an organization had a central repository of metadata about the software dependencies used to build its applications. For customers that did have a central repository and a way to query it, identifying which applications contained the log4j vulnerability took 1-2 hours instead of the 20+ hours seen at other organizations. This is the power of having SBOMs in place for all of your software, along with a tool to help with SBOM management.

What is a Software Bill of Materials?

Similar to the nutrition labels on the back of the foods that you buy, SBOMs are a list of the ingredients that go into the software that your applications consume. We normally think of SBOMs as an artifact of the development process: as developers manufacture their application using different dependencies, they are also building a recipe based on the ingredients. In reality, an SBOM can (and should) be generated at every step of the build pipeline. Source, builds, images, and production software can all be used to generate an SBOM.
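For example, Syft, Anchore's open source SBOM generator, can produce an SBOM from several of these stages. A minimal sketch follows; the targets are placeholders, and the output format name may vary by Syft version.

    import subprocess

    # Minimal sketch: generate SBOMs at two pipeline stages with Syft,
    # Anchore's open source SBOM generator. Targets are placeholders.
    stages = {
        "source": "dir:./app",                       # source checkout
        "image": "registry.example.com/app:latest",  # built container image
    }

    for stage, target in stages.items():
        with open(f"sbom-{stage}.json", "w") as out:
            subprocess.run(["syft", target, "-o", "syft-json"],
                           stdout=out, check=True)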

Similar to Maslow's hierarchy of needs, software supply chain management has an analogous hierarchy of needs.

At the base of the pyramid are the contents of the application, in other words, SBOMs describing the application. The diagram below shows all of the layers of the proposed hierarchy of software supply chain needs.

By using an SBOM as the foundation of the pyramid, organizations can ensure that all of the additional security features they layer onto this foundation will stand the test of time. We can only know that our software is free from known vulnerabilities if we have confidence in the process that was used to generate the "ingredients" label. Signing software to prove that a package hasn't been tampered with is only meaningful if the signed software is also free of known vulnerabilities. Signing a vulnerable package or image only proves that the software hasn't been tampered with from that point forward; it can't look back retrospectively and validate that the packages that came before are secure without the help of an SBOM or a vulnerability scanner.

What are the Benefits of this Approach?

Utilizing Software Bills of Materials (SBOMs) as the foundational element of software supply chain security brings several substantial benefits:

  1. Transparency: SBOMs provide a comprehensive view of all the components used in an application. They reveal the "ingredients" that make up the software, enabling teams to understand the entire composition of their applications, including all dependencies. No more black-box dependencies and the associated risk that comes with them.
  2. Risk Management: With the transparency provided by SBOMs, organizations can identify potential security risks within the components of their software and address them proactively. This includes detecting vulnerabilities in dependencies or third-party components. SBOMs allow organizations to standardize their software supply chain, which enables an automated approach to vulnerability and risk management.
  3. Quick Response to Vulnerabilities: When a new vulnerability is discovered in a component used within the software, SBOMs can help quickly identify all affected applications. This significantly reduces the time taken to respond and remediate these vulnerabilities, minimizing potential damage. When (not if) an incident occurs, an organization is able to rapidly respond to the breach and limit the impact.
  4. Regulatory Compliance: Regulations and standards increasingly require SBOMs for demonstrating software integrity. By incorporating SBOMs, organizations can ensure they meet these cybersecurity compliance requirements, especially when working with highly regulated industries like the federal government, financial services, and healthcare.
  5. Trust and Verification: SBOMs facilitate trust and confidence in software products by allowing users to verify the components used. They serve as a "proof of integrity" to customers, partners, and regulators, showcasing the organization's commitment to security. They also enable higher-level security abstractions like signed images or source code to inherit the foundational security guarantees provided by SBOMs.

By putting SBOMs at the base of software supply chain security, organizations can build a robust structure that’s secure, resilient, and efficient.

Building on a Strong Foundation

The utilization of Software Bills of Materials (SBOMs) as the bedrock for secure software supply chains provides a fundamental shift towards increased transparency, improved risk management, quicker responses to vulnerabilities, heightened regulatory compliance, and stronger trust in software products. By unraveling the complex labyrinth of dependencies in software applications, SBOMs offer the necessary insight to identify and address potential weaknesses, thus creating a resilient structure capable of withstanding potential security threats. In the face of incidents like Log4Shell, the industry needs to adopt a proactive and strategic approach, emphasizing the creation of a secure foundation that can stand the test of time. By elevating the role of SBOMs, we are taking a crucial step towards a future of software development and security that is not only innovative but also secure, trustworthy, and efficient. In the realm of software supply chain security, the adage “knowing is half the battle” couldn’t be more accurate. SBOMs provide that knowledge and, as such, are an indispensable cornerstone of a comprehensive security strategy.

If you're interested in learning how to integrate SBOMs into your software supply chain, the Anchore team of supply chain security specialists is ready and willing to discuss.

Navigating Continuous Authority To Operate (cATO): A Guide for Getting Started

Continuous Authority to Operate (cATO), sometimes known as Rapid ATO, is becoming necessary as the DoD and civilian agencies put more applications and data in the cloud. Speed and agility are becoming increasingly critical to the mission as the government and federal system integrators seek new features and functionalities to support the warfighter and other critical U.S. government priorities.

In this blog post, we’ll break down the concept of cATO in understandable terms, explain its benefits, explore the myths and realities of cATO and show how Anchore can help your organization meet this standard.

What is Continuous Authority To Operate (cATO)?

Continuous ATO is the merging of traditional authority to operate (ATO) risk management practices with flexible and responsive DevSecOps practices to improve software security posture.

Traditional Risk Management Framework (RMF) implementations focus on obtaining authorization to operate once every three years. The problem with this approach is that security threats aren't static; they evolve. cATO is the evolution of this framework, requiring continual authorization of software components, such as containers, by building security into the entire development lifecycle using DevSecOps practices. All software development processes need to ensure that the application and its components meet security levels equal to or greater than what an ATO requires.

You authorize once and use the software component many times. With a cATO, you gain complete visibility into all assets, software security, and infrastructure as code.

By automating security, you are then able to obtain and maintain cATO. There’s no better statement about the current process for obtaining an ATO than this commentary from Mary Lazzeri with Federal Computer Week:

“The muddled, bureaucratic process to obtain an ATO and launch an IT system inside government is widely maligned — but beyond that, it has become a pervasive threat to system security. The longer government takes to launch a new-and-improved system, the longer an old and potentially insecure system remains in operation.”

The Three Pillars of cATO

To achieve cATO, an Authorizing Official (AO) must demonstrate three main competencies:

  1. Ongoing visibility: A robust continuous monitoring strategy for RMF controls must be in place, providing insight into key cybersecurity activities within the system boundary.
  2. Active cyber defense: Software engineers and developers must be able to respond to cyber threats in real-time or near real-time, going beyond simple scanning and patching to deploy appropriate countermeasures that thwart adversaries.
  3. Adoption of an approved DevSecOps reference design: This involves integrating development, security, and operations to close gaps, streamline processes, and ensure a secure software supply chain.

Looking to learn more about the DoD DevSecOps Reference Design? It’s commonly referred to as a DoD Software Factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.

Continuous ATO vs. ATO

The primary difference between traditional ATOs and continuous ATOs is the frequency at which a system seeks to prove the validity of its security claims. ATOs require that a system prove its security once every three years, whereas cATO systems prove their security every moment that the system is running.

The Benefits of Continuous ATO

Continuous ATO is essentially the process of applying DevSecOps principles to the compliance framework of Authority to Operate. Automating the individual compliance processes speeds up development work by avoiding repetitive tasks to obtain permission. Next, we’ll explore additional (and sometimes unexpected) benefits of cATO.

Increase Velocity of System Deployment

CI/CD systems and the DevSecOps design pattern were created to increase the velocity at which new software can be deployed from development to production. On top of that, Continuous ATOs can be more easily scaled to accommodate changes in the system or the addition of new systems, thanks to the automation and flexibility offered by DevSecOps environments.

Reduce Time and Complexity to Achieve an ATO

With the cATO approach, you can build a system to automate the process of generating the artifacts to achieve ATO rather than manually producing them every three years. This automation in DevSecOps pipelines helps in speeding up the ATO process, as it can generate the artifacts needed for the AO to make a risk determination. This reduces the time spent on manual reviews and approvals. Much of the same information will be requested for each ATO, and there will be many overlapping security controls. Designing the DevSecOps pipeline to produce the unique authorization package for each ATO from the corpus of data and information available can lead to increased efficiency via automation and re-use.
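As a loose illustration of this idea, a pipeline step might collect scan outputs into a reusable evidence bundle keyed by the control each artifact supports. Everything below is hypothetical: the file names, control mappings, and bundle structure are illustrative only.

    import datetime
    import json

    # Hypothetical sketch: assemble an ATO evidence bundle from pipeline
    # artifacts, keyed by the NIST 800-53 control each artifact supports.
    # Control mappings and file names are illustrative only.
    evidence = {
        "generated": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "controls": {
            "RA-5": ["grype-report.json"],    # vulnerability scanning
            "CM-8": ["sbom-image.json"],      # component inventory
            "SI-7": ["cosign-verify.log"],    # software and image integrity
        },
    }

    with open("authorization-package.json", "w") as f:
        json.dump(evidence, f, indent=2)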

No Need to Reinvent AND Maintain the Wheel

When you inherit the security properties of the DevSecOps reference design or utilize an approved managed platform, the provider shoulders the burden. Someone else has already done the hard work of creating a framework of tools that integrate to achieve cATO; re-use their effort to achieve cATO for your own system.

Alternatively, you can utilize a platform provider, such as Platform One, Kessel Run, Black Pearl, or the Army Software Factory to outsource the infrastructure management.

Learn how Anchore helped Platform One achieve cATO and become the preeminent DoD software factory.

Myths & Realities

Myth or Reality?: DevSecOps can be at Odds with cATO

Myth! DevSecOps in the DoD and civilian government agencies is still the domain of early adopters. The federal government's strict security and compliance requirements, the ATO in particular, make it fertile ground for DevSecOps adoption. Government leaders such as Nicolas Chaillan, former chief software officer for the United States Air Force, are championing DevSecOps standards and best practices that the DoD, federal government agencies, and even the commercial sector can use to launch their own DevSecOps initiatives.

One goal of DevSecOps is to develop and deploy applications as quickly as possible. An ATO is a bureaucratic morass if you're not proactive. When you build a DevSecOps toolchain that automates container vulnerability scanning and other areas critical to ATO compliance controls, you can put the tools, reporting, and processes in place to test against ATO controls while still in your development environment.

DevSecOps, much like DevOps, suffers from a marketing problem as vendors seek to spin the definitions and use cases that best suit their products. The DoD and government agencies need more champions like Chaillan in government service who can speak to the benefits of DevSecOps in a language that government decision-makers can understand.

Myth or Reality?: Agencies need to adopt DevSecOps to prepare for the cATO 

Reality! One of the cATO requirements is to demonstrate that you are aligned with an approved DevSecOps reference design. The "shift left" story that DevSecOps espouses in vendor marketing literature and sales decks isn't necessarily one-size-fits-all. Likewise, DoD and federal agency DevSecOps play at a different level.

Using DevSecOps to prepare for a cATO requires upfront analysis and planning with your development and operations teams' participation. Government program managers need to collaborate closely with their contractor teams to put the processes and tools in place upfront, including container vulnerability scanning and reporting. Break down your continuous integration/continuous delivery (CI/CD) toolchain with an eye on how you can prepare your software components for continuous authorization.

Myth or Reality?: You need to have SBOMs for everything in your environment

Myth! However… you need to be able to show your Authorizing Official (AO) that you have "the ability to conduct active cyber defense in order to respond to cyber threats in real time." If a zero day (like log4j) comes along, you need to demonstrate that you are equipped to identify the impact on your environment and remediate the issue quickly. Showing your AO that you manage SBOMs and can quickly query them to respond to threats will put you in the clear for this requirement.

Myth or Reality?: cATO is about technology and process only

Myth! As more elements of the DoD and civilian federal agencies push toward the cATO to support their missions, and a DevSecOps culture takes hold, it’s reasonable to expect that such a culture will influence the cATO process. Central tenets of a DevSecOps culture include:

  • Collaboration
  • Infrastructure as Code (IaC)
  • Automation
  • Monitoring

Each of these tenets contributes to the success of a cATO. Collaboration between the government program office, the contractor's project team leadership, the third-party assessment organization (3PAO), and the FedRAMP program office is the foundation of a well-run authorization. IaC provides the tools to manage infrastructure such as virtual machines, load balancers, networks, and other infrastructure components using practices similar to how DevOps teams manage software code.

Myth or Reality?: Reusable Components Make a Difference in cATO

Reality! The growth of containers and other reusable components couldn't come at a better time, as the Department of Defense (DoD) and civilian government agencies push to the cloud, driven by federal cloud initiatives and demands from their constituents.

Reusable components save time and budget when it comes to authorization because you can authorize once and use the authorized components across multiple projects. Look for more news about reusable components coming out of Platform One and other large-scale government DevSecOps and cloud projects that can help push this development model forward to become part of future government cloud procurements.

How Anchore Helps Organizations Implement the Continuous ATO Process

Anchore’s comprehensive suite of solutions is designed to help federal agencies and federal system integrators meet the three requirements of cATO.

Ongoing Visibility

Anchore Enterprise can be integrated into a build pipeline, image registry, and runtime environment to provide a comprehensive view of the entire software development lifecycle (SDLC). On top of this, Anchore provides out-of-the-box policy packs mapped to NIST 800-53 controls for RMF, ensuring a robust continuous monitoring strategy. Real-time notifications alert users when images are out of compliance, helping agencies maintain ongoing visibility into their system's security posture.

Active Cyber Defense

While Anchore Enterprise is integrated into the decentralized components of the SDLC, it provides a centralized database to track and monitor every component of software in all environments. This centralized datastore enables agencies to quickly triage zero-day vulnerabilities with a single database query. Remediation plans for impacted application teams can be drawn up in hours rather than days or weeks. By setting rules that flag anomalous behavior, such as image drift or blacklisted packages, Anchore supports an active cyber defense strategy for federal systems.

Adoption of an Approved DevSecOps Reference Design

Anchore aligns with the DoD DevSecOps Reference Design by offering solutions for:

  • Container hardening (Anchore DISA policy pack)
  • Container policy enforcement (Anchore Enterprise policies)
  • Container image selection (Iron Bank)
  • Artifact storage (Anchore image registry integration)
  • Release decision-making (Anchore Kubernetes Admission Controller)
  • Runtime policy monitoring (Anchore Kubernetes Automated Inventory)

Anchore is specifically mentioned in the DoD Container Hardening Process Guide, and the Iron Bank relies on Anchore technology to scan and enforce policy that ensures every image in Iron Bank is hardened and secure.

Final Thoughts

Continuous Authority to Operate (cATO) is a vital framework for federal system integrators and agencies to maintain a strong security posture in the face of evolving cybersecurity threats. By ensuring ongoing visibility, active cyber defense, and the adoption of an approved DevSecOps reference design, software engineers and developers can effectively protect their systems in real time. Anchore's comprehensive suite of solutions is specifically designed to help meet the three requirements of cATO, offering a robust, secure, and agile approach to staying ahead of cybersecurity threats.

By partnering with Anchore, federal system integrators and federal agencies can confidently navigate the complexities of cATO and ensure their systems remain secure and compliant in a rapidly changing cyber landscape. If you’re interested to learn more about how Anchore can help your organization embed DevSecOps tooling and principles into your software development process, click below to read our white paper.