2025 Cybersecurity Executive Order Requires Up-Leveled Software Supply Chain Security

A few weeks ago, the Biden administration published a new Executive Order (EO) titled “Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity”. This is a follow-up to the original cybersecurity executive order—EO 14028—from May 2021. The latest EO specifically targets improvements to software supply chain security, addressing gaps and challenges that have surfaced since the release of EO 14028.

While many issues were identified, the executive order named and shamed software vendors for signing and submitting secure software development compliance documents without fully adhering to the framework. The full quote:

In some instances, providers of software to the Federal Government commit[ted] to following cybersecurity practices, yet do not fix well-known exploitable vulnerabilities in their software, which put the Government at risk of compromise. The Federal Government needs to adopt more rigorous 3rd-party risk management practices and greater assurance that software providers… are following the practices to which they attest.

In response to this behavior, the 2025 Cybersecurity EO has taken a number of additional steps to both encourage cybersecurity compliance and deter non-compliance—the carrot and the stick. These come in the form of four primary changes:

  1. Compliance verification by CISA
  2. Legal repercussions for non-compliance
  3. Contract modifications for Federal agency software acquisition
  4. Mandatory adoption of software supply chain risk management practices by Federal agencies

In this post, we’ll explore the new cybersecurity EO in detail: what has changed in software supply chain security compliance, and what both federal agencies and software providers can do right now to prepare.

What Led to the New Cybersecurity Executive Order?

In the wake of the massive growth in supply chain attacks—especially those from nation-state threat actors like China—EO 14028 made software bills of materials (SBOMs) and software supply chain security spotlight agenda items for the Federal government. As directed by the EO, the National Institute of Standards and Technology (NIST) authored the Secure Software Development Framework (SSDF) to codify the specific secure software development practices needed to protect the US and its citizens.

Following this, the Office of Management and Budget (OMB) published a memo that established a deadline for agencies to require vendor compliance with the SSDF. Importantly, the memo allowed vendors to “self-attest” to SSDF compliance.

In practice, many software providers chose to go the easy route and submitted SSDF attestations while only partially complying with the framework. Although the government initially hoped that vendors would not exploit this somewhat obvious loophole, reality intervened, leading to the issuance of the 2025 cybersecurity EO to close off these opportunities for non-compliance.

What’s Changing in the 2025 EO?

1. Rigorous verification of secure software development compliance

No longer can vendors simply self-attest to SSDF compliance. The Cybersecurity and Infrastructure Security Agency (CISA) is now tasked with validating these attestations via the additional compliance artifacts the new EO requires. Providers that fail validation risk increased scrutiny and potential consequences such as…

2. CISA authority to refer non-compliant vendors to DOJ

A major shift comes from the EO’s provision allowing CISA to forward fraudulent attestations to the Department of Justice (DOJ). In the EO’s words, officials may “refer attestations that fail validation to the Attorney General for action as appropriate.” This raises the stakes for software vendors, as submitting false information on SSDF compliance could lead to legal consequences. 

3. Explicit SSDF compliance in software acquisition contracts

The Federal Acquisition Regulatory Council (FAR Council) will issue contract modifications that explicitly require SSDF compliance and additional items to enable CISA to programmatically verify compliance. Federal agencies will incorporate updated language in their software acquisition contracts, making vendors contractually accountable for any misrepresentations in SSDF attestations.

See the “FAR council contract updates” section below for the full details.

4. Mandatory adoption of supply chain risk management

Agencies must now embed supply chain risk management (SCRM) practices agency-wide, aligning with NIST SP 800-161, which details best practices for assessing and mitigating risks in the supply chain. This elevates SCRM to a “must-have” strategy for every Federal agency.

Other Notable Changes

Updated software supply chain security compliance controls

NIST will update both NIST SP 800-53, the “Control Catalog”, and the SSDF (NIST SP 800-218) to align with the new policy. The updates will incorporate additional controls and greater detail on existing controls. Although no controls have yet been formally added or modified, NIST is tasked with modernizing these documents to align with changes in secure software development practices. Once those updates are complete, agencies and vendors will be expected to meet the revised requirements.

Policy-as-code pilot

Section 7 of the EO describes a pilot program focused on translating security controls from compliance frameworks into “rules-as-code” templates. Essentially, this adopts a policy-as-code approach, often seen in DevSecOps, to automate compliance. By publishing machine-readable security controls that can be integrated directly into DevOps, security, and compliance tooling, organizations can reduce manual overhead and friction, making it easier for both agencies and vendors to consistently meet regulatory expectations.
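
To make this concrete, here is a minimal sketch of what a “rules-as-code” control might look like, assuming a hypothetical control ID, schema, and severity threshold (the EO does not prescribe a format):

```python
# Hypothetical "rules-as-code" control expressed as data plus an executable
# check. The ID, fields, and threshold are illustrative, not from the EO or
# any published NIST catalog.
control = {
    "id": "EXAMPLE-RA-5",
    "description": "No findings above High severity in released artifacts",
    "max_severity": "High",
}

SEVERITY_ORDER = ["Negligible", "Low", "Medium", "High", "Critical"]

def evaluate(control: dict, findings: list[dict]) -> bool:
    """Return True when every finding is at or below the allowed severity."""
    limit = SEVERITY_ORDER.index(control["max_severity"])
    return all(SEVERITY_ORDER.index(f["severity"]) <= limit for f in findings)

# Findings as a vulnerability scanner might report them, normalized to dicts.
findings = [{"id": "CVE-2024-0001", "severity": "High"}]
print(f"{control['id']}: {'PASS' if evaluate(control, findings) else 'FAIL'}")
```

Because the control is data plus an executable check, the same template can run unchanged in a vendor’s CI pipeline and in an agency’s verification tooling.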

FedRAMP incentives and new key management controls

The Federal Risk and Authorization Management Program (FedRAMP), responsible for authorizing cloud service providers (CSPs) for federal use, will also see important updates. FedRAMP will develop policies that incentivize or require CSPs to share recommended security configurations, thereby promoting a standard baseline for cloud security. The EO also proposes updates to FedRAMP security controls to address cryptographic key management best practices, ensuring that CSPs properly safeguard cryptographic keys throughout their lifecycle.

How to Prepare for the New Requirements

FAR council contract updates

Within 30 days of the EO release, the FAR Council will publish recommended contract language. This updated language will mandate:

  • Machine-readable SSDF Attestations: Vendors must provide an SSDF attestation in a structured, machine-readable format.
  • Compliance Reporting Artifacts: High-level artifacts that demonstrate evidence of SSDF compliance, potentially including automated scan results, security test reports, or vulnerability assessments.
  • Customer List: A list of all civilian agencies using the vendor’s software, enabling CISA and federal agencies to understand the scope of potential supply chain risk.

Important Note: The 30-day window applies to the FAR Council to propose new contract language—not for software vendors to become fully compliant. However, once the new contract clauses are in effect, vendors who want to sell to federal agencies will need to meet these updated requirements.
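
For illustration only, a machine-readable attestation might carry fields like those below. The FAR Council has not yet published a schema, so every field name here is an assumption; the practice IDs are the real SSDF practice groups from NIST SP 800-218:

```python
import json

# Illustrative sketch of a machine-readable SSDF attestation payload. All
# field names are assumptions pending the FAR Council's actual schema; the
# practice IDs (PO, PS, PW, RV) are real SSDF practice groups.
attestation = {
    "vendor": "Example Software Co.",
    "product": "example-product",
    "version": "2.1.0",
    "framework": "NIST SP 800-218 (SSDF) v1.1",
    "attested_practices": ["PO.1", "PS.1", "PS.3", "PW.4", "RV.1"],
    "evidence": ["vuln-scan-report.json", "sbom.cdx.json"],
    "signatory": {"name": "Jane Doe", "title": "CISO"},
}

print(json.dumps(attestation, indent=2))
```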

Action Steps for Federal Agencies

Federal agencies will bear new responsibilities to ensure compliance and mitigate supply chain risk. Here’s what you should do now:

  1. Inventory 3rd-Party Software Component Suppliers
  2. Assess Visibility into Supplier Risk
    • Perform a vulnerability scan on all 3rd-party components. If you already have SBOMs, scanning them for vulnerabilities is a quick way to identify risk (see the sketch after this list).
  3. Identify Critical Suppliers
    • Determine which software vendors are mission-critical. This helps you understand where to focus your risk management efforts.
  4. Assess Data Sensitivity
    • If a vendor handles sensitive data (e.g., PII), extra scrutiny is needed. A breach here has broader implications for the entire agency.
  5. Map Potential Lateral Movement Risk
    • Understand if a vendor’s software could provide attackers with lateral movement opportunities within your infrastructure.
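
If your agency already holds vendor SBOMs, step 2 can be scripted today. A minimal sketch, assuming Grype (Anchore’s open source vulnerability scanner) is on your PATH and the SBOMs are in a format it accepts, such as SPDX or CycloneDX JSON:

```python
import json
import subprocess
from pathlib import Path

# Scan a directory of vendor SBOMs with Grype and count known vulnerabilities
# per file. The directory name is illustrative.
for sbom in sorted(Path("vendor-sboms").glob("*.json")):
    result = subprocess.run(
        ["grype", f"sbom:{sbom}", "-o", "json"],
        capture_output=True, text=True, check=True,
    )
    matches = json.loads(result.stdout).get("matches", [])
    critical = sum(
        1 for m in matches if m["vulnerability"]["severity"] == "Critical"
    )
    print(f"{sbom.name}: {len(matches)} findings ({critical} critical)")
```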

Action Steps for Software Providers

For software vendors, especially those who sell to the federal government, proactivity is key to maintaining and expanding federal contracts.

  1. Inventory Your Software Supply Chain
    • Implement an SBOM-powered SCA solution within your DevSecOps pipeline to gain real-time visibility into all 3rd-party components.
  2. Assess Supplier Risk
    • Perform vulnerability scanning on 3rd-party supplier components to identify any that could jeopardize your compliance or your customers’ security.
  3. Identify Sensitive Data Handling
    • If your software processes personally identifiable information (PII) or other sensitive data, expect increased scrutiny. On the flip side, this may make your offering “mission-critical” and less prone to replacement—but it also means compliance standards will be higher.
  4. Map Your Own Attack Surface
    • Assess whether a 3rd-party supplier breach could cascade into your infrastructure and, by extension, your customers.
  5. Prepare Compliance Evidence
    • Start collecting artifacts—such as vulnerability scan reports, secure coding guidelines, and internal SSDF compliance checklists—so you can quickly meet new attestation requirements when they come into effect (see the sketch after this list).
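
As a starting point for step 5, evidence artifacts can be indexed and fingerprinted so they are ready to attach to an attestation. A minimal sketch, assuming your artifacts sit in a local directory (the directory name and manifest fields are illustrative):

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def build_evidence_manifest(artifact_dir: str) -> dict:
    """Index compliance artifacts with SHA-256 digests for later attestation."""
    entries = [
        {"file": p.name, "sha256": hashlib.sha256(p.read_bytes()).hexdigest()}
        for p in sorted(Path(artifact_dir).iterdir())
        if p.is_file()
    ]
    return {
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "artifacts": entries,
    }

# Point this at wherever scan reports, checklists, and SBOMs are stored.
print(json.dumps(build_evidence_manifest("compliance-evidence"), indent=2))
```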

Wrap-Up

The 2025 Cybersecurity EO is a direct response to the flaws uncovered in EO 14028’s self-attestation approach. By requiring 3rd-party validation of SSDF compliance, the government aims to create tangible improvements in its cybersecurity posture—and expects the same from all who supply its agencies.

Given the rapid timeline, preparation is crucial. Both federal agencies and software providers should begin assessing their supply chain risks, implementing SBOM-driven visibility, and proactively planning for new attestation and reporting obligations. By taking steps now, you’ll be better positioned to meet the new requirements.

Learn about SSDF Attestation with this guide. The eBook will take you through everything you need to know to get started.

SSDF Attestation 101: A Practical Guide for Software Producers

Software Supply Chain Security in 2025: SBOMs Take Center Stage

In recent years, we’ve witnessed software supply chain security transition from a quiet corner of cybersecurity into a primary battlefield. This is driven by the increasing complexity of modern software, which obscures a hard truth: applications are towers of components of unknown origin. Cybercriminals have fully embraced this hidden complexity as a ripe vector to exploit.

Software Bills of Materials (SBOMs) have emerged as the focal point for achieving visibility and accountability in a software ecosystem that will only grow more complex. An SBOM is an inventory of the complex dependencies that make up a modern application. SBOMs help organizations scale vulnerability management and automate compliance enforcement. The end goal is to increase transparency in an organization’s supply chain, where open source software (OSS) dependencies make up 70–90% of a modern application. This significant source of risk demands a proactive, data-driven solution.
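
To ground that definition, here is a deliberately stripped-down illustration of what an SBOM records, loosely modeled on CycloneDX. Real SBOMs carry far more metadata, including licenses, component hashes, and dependency relationships:

```python
# A minimal, illustrative SBOM, loosely modeled on CycloneDX. Real documents
# also record licenses, hashes, suppliers, and the dependency graph.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "log4j-core", "version": "2.17.1",
         "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1"},
        {"type": "library", "name": "requests", "version": "2.32.3",
         "purl": "pkg:pypi/requests@2.32.3"},
    ],
}

# With the inventory in hand, "do we ship log4j anywhere?" becomes a simple
# lookup rather than a forensic investigation.
print(any(c["name"] == "log4j-core" for c in sbom["components"]))
```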

Looking ahead to 2025, we at Anchore see two trends for SBOMs that foreshadow their growing importance in software supply chain security:

  1. Global regulatory bodies continue to steadily drive SBOM adoption
  2. Foundational software ecosystems begin to implement build-native SBOM support

In this blog, we’ll walk you through the contextual landscape that leads us to these conclusions; keep reading if you want more background.

Global Regulatory Bodies Continue Growing Adoption of SBOMs

As supply chain attacks surged, policymakers and standards bodies recognized this new threat vector as a critical threat with national security implications. To stem the rising tide of supply chain threats, global regulatory bodies have recognized SBOMs as part of the solution.

Over the past decade, we’ve witnessed a global legislative and regulatory awakening to the utility of SBOMs. Early attempts like the US Cyber Supply Chain Management and Transparency Act of 2014 may have failed to pass, but they paved the way for more significant milestones to come. Things began to change in 2021, when the US Executive Order (EO) 14028 explicitly named SBOMs as the foundation for a secure software supply chain. The following year the European Union’s Cyber Resilience Act (CRA) pushed SBOMs from “suggested best practice” to “expected norm.”

The one-two punch of the US’s EO 14028 and the EU’s CRA has already prompted action among regulators worldwide. In the years following these mandates, numerous global bodies issued or updated their guidance on software supply chain security practices—specifically highlighting SBOMs. Cybersecurity offices in Germany, India, Britain, Australia, and Canada, along with the broader European Union Agency for Cybersecurity (ENISA), have each underscored the importance of transparent software component inventories. At the same time, industry consortiums in the US automotive (Auto-ISAC) and medical device (IMDRF) sectors recognized that SBOMs can help safeguard their own complex supply chains, as have federal agencies such as the FDA, NSA, and the Department of Commerce.

By the close of 2024, the pressure mounted further. In the US, the Office of Management and Budget (OMB) set a due date requiring all federal agencies to comply with the Secure Software Development Framework (SSDF), effectively reinforcing SBOM usage as part of secure software development. Meanwhile, across the Atlantic, the EU CRA officially became law, cementing SBOMs as a cornerstone of modern software security. This constant pressure ensures that SBOM adoption will only continue to grow. It won’t be long until SBOMs become table stakes for anyone operating an online business. We expect the steady march forward of SBOMs to continue in 2025.

In fact, this regulatory push has been noticed by the foundational ecosystems of the software industry and they are reacting accordingly.

Software Ecosystems Trial Build-Native SBOM Support

Until now, SBOM generation has been an afterthought in software ecosystems. Businesses scan their internal supply chains with software composition analysis (SCA) tools, trying to piece together a picture of their dependencies. But as SBOM adoption continues its upward momentum, this model is evolving. In 2025, we expect that leading software ecosystems will promote SBOMs to first-class citizens and integrate them natively into their build tools.

Industry experts have recently begun advocating for this change. Brandon Lum, the SBOM Lead at Google, notes, “The software industry needs to improve build tools propagating software metadata.” Rather than forcing downstream consumers to infer the software’s composition after the fact, producers will generate SBOMs as part of standard build pipelines. This approach reduces friction, makes application composition discoverable, and ensures that software supply chain security is not left behind.

We are already seeing early examples:

  • Linux Ecosystem (Yocto): The Yocto Project’s OpenEmbedded build system now includes native SBOM generation. This demonstrates the feasibility of integrating SBOM creation directly into the developer toolchain, establishing a blueprint for other ecosystems to follow.
  • Python Ecosystem: In 2024, Python maintainers explored proposals for build-native SBOM support, motivated by regulations and frameworks such as the Secure Software Development Framework (SSDF) and the EU’s CRA. They envision a future where projects, package maintainers, and contributors can easily annotate their code with software dependency metadata that automatically propagates at build time.
  • Perl Ecosystem: The Perl Security Working Group has also begun exploring internal proposals for SBOM generation, again driven by the CRA’s regulatory changes. Their goal: ensure that Perl packages have SBOM data baked into the ecosystem so that compliance and security requirements can be met more easily.
  • Java Ecosystem: The Eclipse Foundation and VMware’s Spring Boot team have introduced plug-ins for Java build tools like Maven or Gradle that streamline SBOM generation. While not fully native to the compiler or interpreter, these integrations lower the barrier to SBOM adoption within mainstream Java development workflows.

In 2025 we won’t just be talking about build-native SBOMs in abstract terms—we’ll have experimental support for them from the most forward-thinking ecosystems. This shift is still in its infancy, but it foreshadows the central role that SBOMs will play in the future of cybersecurity and software development as a whole.

Closing Remarks

The writing on the wall is clear: supply chain attacks aren’t slowing down—they are accelerating. In a world of complex, interconnected dependencies, every organization must know what’s inside its software to quickly spot and respond to risk. As SBOMs move from a nice-to-have to a fundamental part of building secure software, teams can finally gain the transparency they need over every component they use, whether open source or proprietary. This clarity is what will help them respond to the next Log4j or XZ Utils issue before it spreads, putting security teams back in the driver’s seat and ensuring that software innovation doesn’t come at the cost of increased vulnerability.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

All Things SBOM in 2025: a Weekly Webinar Series

Software Bills of Materials (SBOMs) have quickly become a critical component in modern software supply chain security. By offering a transparent view of all the components that make up your applications, SBOMs enable you to pinpoint vulnerabilities before they escalate into costly incidents.

As we enter 2025, software supply chain security and risk management for 3rd-party software dependencies are top of mind for organizations. The 2024 Anchore Software Supply Chain Security Survey notes that 76% of organizations consider these challenges top priorities. Given this, it is easy to see why understanding what SBOMs are—and how to implement them—is key to a secure software supply chain.

To help organizations achieve these top priorities, Anchore is hosting a weekly webinar series dedicated entirely to SBOMs. Beginning January 14 and running throughout Q1, our webinar line-up will explore a wide range of topics (see below). Industry luminaries like Kate Stewart (co-founder of the SPDX project) and Steve Springett (Chair of the OWASP CycloneDX Core Working Group) will be dropping in to provide unique insights and their special blend of expertise on all things SBOMs.

The series will cover:

  • SBOM basics and best practices
  • SPDX and SBOMs in-depth with Kate Stewart
  • Getting started: How to generate an SBOM
  • Software supply chain security and CycloneDX with Steve Springett
  • Scaling SBOMs for the enterprise
  • Real-world insights on applying SBOMs in high-stakes or regulated sectors
  • A look ahead at the future of SBOMs and software supply chain security with Kate Stewart
  • And more!

We invite you to learn from experts, gain practical skills, and stay ahead of the rapidly evolving world of software supply chain security. Visit our events page to register for the webinars now or keep reading to get a sneak peek at the content.

#1 Understanding SBOMs: An Intro to Modern Development

Date/Time: Tuesday, January 14, 2025 – 10am PST / 1pm EST
Featuring: 

  • Lead Developer of Syft
  • Anchore VP of Security
  • Anchore Director of Developer Relations

We are kicking off the series with an introduction to the essentials of SBOMs. This session will cover the basics of SBOMs—what they are, why they matter, and how to get started generating and managing them. Our experts will walk you through real-world examples (including Log4j) to show just how vital it is to know what’s in your software.

This webinar is perfect for both technical practitioners and business leaders looking to establish a strong SBOM foundation.

#2 Understanding SBOMs: Deep Dive with Kate Stewart

Date/Time: Wednesday, January 22, 2025 – 10am PST / 1pm EST
Featured Guest: Kate Stewart (co-founder of SPDX)

Our second session brings you a front-row seat to an in-depth conversation with Kate Stewart, co-founder of the SPDX project. Kate is a leading voice in software supply chain security and the SBOM standard. From the origins of the SPDX standard to the latest challenges in license compliance, Kate will provide an extensive behind-the-scenes look into the world of SBOMs.

Key Topics:

  • The history and evolution of SBOMs, including the creation of SPDX
  • Balancing license compliance with security requirements
  • How SBOMs support critical infrastructure with national security concerns
  • The impact of emerging technology—like open source LLMs—on SBOM generation and analysis

If you’re ready for a more advanced look at SBOMs and their strategic impact, you won’t want to miss this conversation.

#3 How to Automate, Generate, and Manage SBOMs

Date/Time: Wednesday, January 29, 2025 – 9am PST / 12pm EST
Featuring: 

  • Anchore Director of Developer Relations
  • Anchore Principal Solutions Engineer

For those seeking a hands-on approach, this webinar dives into the specifics of automating SBOM generation and management within your CI/CD pipeline. Anchore’s very own Alan Pope (Developer Relations) and Sean Fazenbaker (Solutions) will walk you through proven techniques for integrating SBOMs to reveal early vulnerabilities, minimize manual interventions, and improve overall security posture.

This is the perfect session for teams focused on shifting security left and preserving developer velocity.

What’s Next?

Beyond our January line-up, we have more exciting sessions planned throughout Q1. Each webinar will feature industry experts and dive deeper into specialized use-cases and future technologies:

  • CycloneDX & OWASP with Steve Springett – A closer look at this popular SBOM format, its technical architecture, and VEX integration.
  • SBOM at Scale: Enterprise SBOM Management – Learn from large organizations that have successfully rolled out SBOM practices across hundreds of applications.
  • SBOMs in High-Stakes Environments – Explore how regulated industries like healthcare, finance, and government handle unique compliance challenges and risk management.
  • The Future of Software Supply Chain Security – Join us in March as we look ahead at emerging standards, tools, and best practices with Kate Stewart returning as the featured guest.

Stay tuned for dates and registration details for each upcoming session. Follow us on your favorite social network (Twitter, LinkedIn, Bluesky) or visit our events page to stay up-to-date.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

The Top Ten List: The 2024 Anchore Blog

To close out 2024, we’re going to count down the top 10 hottest hits from the Anchore blog in 2024! The Anchore content team continued our tradition of delivering expert guidance, practical insights, and forward-looking strategies on DevSecOps, cybersecurity compliance, and software supply chain management.

This top ten list spotlights our most impactful blog posts from the year, each tackling a different angle of modern software development and security. Hot topics this year include: 

  • All things SBOM (software bill of materials)
  • DevSecOps compliance for the Department of Defense (DoD)
  • Regulatory compliance for federal government vendors (e.g., FedRAMP & SSDF Attestation)
  • Vulnerability scanning and management—a perennial favorite!

Our selection runs the gamut of knowledge needed to help your team stay ahead of the compliance curve, boost DevSecOps efficiency, and fully embrace SBOMs. So, grab your popcorn and settle in—it’s time to count down the blog posts that made the biggest splash this year!

The Top Ten List

10 | A Guide to Air Gapping

Kicking us off at number 10 is a blog that’s all about staying off the grid—literally. Ever wonder what it really means to keep your network totally offline? 

A Guide to Air Gapping: Balancing Security and Efficiency in Classified Environments breaks down the concept of “air gapping”—literally disconnecting a computer from the internet by leaving a “gap of air” between your computer and an ethernet cable. It is a security practice generally used to protect classified, military-grade data or similarly sensitive systems.

Our blog covers the perks, like knocking out a huge range of cyber threats, and the downsides, like having to manually update and transfer data. It also details how Anchore Enforce Federal Edition can slip right into these ultra-secure setups, blending top-notch protection with the convenience of automated, cloud-native software checks.

9 | SBOMs + Vulnerability Management == Open Source Security++

Coming in at number nine on our countdown is a blog that breaks down two of our favorite topics: SBOMs and vulnerability scanners—and how using SBOMs as your foundation for vulnerability management can level up your open source security game.

SBOMs and Vulnerability Management: OSS Security in the DevSecOps Era is all about getting a handle on: 

  • every dependency in your code, 
  • scanning for issues early and often, and 
  • speeding up the DevSecOps process so you don’t feel the drag of legacy security tools. 

By switching to this modern, SBOM-driven approach, you’ll see benefits like faster fixes, smoother compliance checks, and fewer late-stage security surprises—just ask companies like NVIDIA, Infoblox, DreamFactory and ModuleQ, who’ve saved tons of time and hassle by adopting these practices.

8 | Improving Syft’s Binary Detection

Landing at number eight, we’ve got a blog post that’s basically a backstage pass to Syft’s binary detection magic. Improving Syft’s Binary Detection goes deep on how Syft—Anchore’s open source SBOM generation tool—uncovers the details of executable files and how you can lend a hand in making it even better.

We walk you through the process of adding new binary detection rules, from finding the right binaries and testing them out, to fine-tuning the patterns that match their version strings. 

The end goal? Helping all open source contributors quickly get started making their first pull request and broadening support for new ecosystems. A thriving, community-driven approach to better securing the global open source ecosystem.
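
Syft’s classifiers are written in Go, so the snippet below is only a Python illustration of the core idea: search an executable’s bytes for a version string anchored to the product name. The pattern and sample bytes are contrived:

```python
import re

# Illustrative version-string classifier. Syft's real classifiers are Go
# regexes applied to file contents; this mimics the technique in Python.
PATTERN = re.compile(rb"python(?P<version>\d+\.\d+\.\d+)")

def classify(binary_bytes: bytes) -> str | None:
    """Return the embedded version string, or None when nothing matches."""
    match = PATTERN.search(binary_bytes)
    return match.group("version").decode() if match else None

print(classify(b"\x7fELF...python3.11.4...\x00"))  # -> 3.11.4
```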

7 | A Guide to FedRAMP in 2024: FAQs & Key Takeaways

Sliding in at lucky number seven, we’ve got the ultimate cheat sheet for FedRAMP in 2024 (and 2025😉)! Ever wonder how Uncle Sam greenlights those fancy cloud services? A Guide to FedRAMP in 2024: FAQs & Key Takeaways spills the beans on all the FedRAMP basics you’ve been struggling to find—fast answers without all the fluff. 

It covers what FedRAMP is, how it works, who needs it, and why it matters; detailing the key players and how it connects with other federal security standards like FISMA. The idea is to help you quickly get a handle on why cloud service providers often need FedRAMP certification, what benefits it offers, and what’s involved in earning that gold star of trust from federal agencies. By the end, you’ll know exactly where to start and what to expect if you’re eyeing a spot in the federal cloud marketplace.

6 | Introducing Grant: OSS License Management

At number six on tonight’s countdown, we’re rolling out the red carpet for Grant—Anchore’s snazzy new software license-wrangling sidekick! Introducing Grant: An OSS project for inspecting and checking license compliance using SBOMs covers how Grant helps you keep track of software licenses in your projects. 

By using SBOMs, Grant can quickly show you which licenses are in play—and whether any have changed from something friendly to something more restrictive. With handy list and check commands, Grant makes it easier to spot and manage license risk, ensuring you keep shipping software without getting hit with last-minute legal surprises.
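
As a sketch of how those commands might gate a build, consider the snippet below. The post confirms the list and check commands exist; the SBOM filename and the assumption that a policy violation produces a non-zero exit code should be verified against Grant’s documentation:

```python
import subprocess

# Run Grant's `check` command against an SBOM and fail the pipeline step on
# a license policy violation. Filename and exit-code behavior are assumptions.
result = subprocess.run(
    ["grant", "check", "sbom.cdx.json"],
    capture_output=True, text=True,
)
if result.returncode != 0:
    print("License policy violation detected:\n" + result.stdout)
    raise SystemExit(1)
print("All component licenses pass policy.")
```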

5 | An Overview of SSDF Attestation: Compliance Need-to-Knows

Landing at number five on tonight’s compliance countdown is a big wake-up call for all you software suppliers eyeing Uncle Sam’s checkbook: the SSDF Attestation Form. That’s right—starting now, if you wanna do business with the feds, you gotta show off those DevSecOps chops, no exceptions! In Using the Common Form for SSDF Attestation: What Software Producers Need to Know we break down the new Secure Software Development Attestation Form—released in March 2024—that’s got everyone talking in the federal software space. 

In short, if you’re a software vendor wanting to sell to the US government, you now have to “show your work” when it comes to secure software development. The form builds on the SSDF framework, turning it from a nice-to-have into a must-do. It covers which software is included, the timelines you need to know, and what happens if you don’t shape up.

There are real financial risks if you can’t meet the deadlines or if you fudge the details (hello, criminal penalties!). With this new rule, it’s time to get serious about locking down your dev practices or risk losing out on government contracts.

4 | Prep your Containers for STIG

At number four, we’re diving headfirst into the STIG compliance world—the DoD’s ultimate ‘tough crowd’ when it comes to security! If you’re feeling stressed about locking down those container environments—we’ve got you covered. 4 Ways to Prepare your Containers for the STIG Process is all about tackling the often complicated STIG process for containers in DoD projects. 

You’ll learn how to level up your team by cross-training cybersecurity pros in container basics and introducing your devs and architects to STIG fundamentals. It also suggests going straight to the official DISA source for current STIG info, making the STIG Viewer a must-have tool on everyone’s workstation, and looking into automation to speed up compliance checks. 

Bottom line: stay informed, build internal expertise, and lean on the right tools to keep the STIG process from slowing you down.

3 | Syft Graduates to v1.0!

Give it up for number three on our countdown—Syft’s big graduation announcement! In Syft Reaches v1.0! we celebrate Syft hitting the big 1.0 milestone!

Syft is Anchore’s OSS tool for generating SBOMs, helping you figure out exactly what’s inside your software, from container images to source code. Over the years, it’s grown to support over 40 package types, outputting SBOMs in various formats like SPDX and CycloneDX. With v1.0, Syft’s CLI and API are now stable, so you can rely on it for consistent results and long-term compatibility. 

But don’t worry—development doesn’t stop here. The team plans to keep adding support for more ecosystems and formats, and they invite the community to join in, share ideas, and contribute to the future of Syft.
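
For a taste of the now-stable CLI, generating an SBOM is a one-liner. A minimal sketch, assuming syft is installed and on your PATH (the image tag and output filename are illustrative):

```python
import subprocess

# Generate a CycloneDX JSON SBOM for a container image with Syft. The
# `-o format=path` form writes the report to a file; `spdx-json` works the
# same way if you prefer SPDX output.
subprocess.run(
    ["syft", "alpine:3.19", "-o", "cyclonedx-json=alpine-sbom.cdx.json"],
    check=True,
)
```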

2 | RAISE 2.0 Overview: RMF and ATO for the US Navy

Next up at number two is the lowdown on RAISE 2.0—your backstage pass to lightning-fast software approvals with the US Navy! In RMF and ATO with RAISE 2.0 — Navy’s DevSecOps solution for Rapid Delivery we break down what RAISE 2.0 means for teams working with the Department of the Navy’s containerized software. The key takeaway? By using an approved DevSecOps platform—known as an RPOC—you can skip getting separate ATOs for every new app.

The guidelines encourage a “shift left” approach, focusing on early and continuous security checks throughout development. Tools like Anchore Enforce Federal Edition can help automate the required security gates, vulnerability scans, and policy checks, making it easier to keep everything compliant. 

In short, RAISE 2.0 is all about streamlining security, speeding up approvals, and helping you deliver secure code faster.

1 | Introduction to the DoD Software Factory

Taking our top spot at number one, we’ve got the DoD software factory—the true VIP of the DevSecOps world! We’re talking about a full-blown, high-security software pipeline that cranks out code for the defense sector faster than a fighter jet screaming across the sky. In Introduction to the DoD Software Factory we break down what a DoD software factory really is—think of it as a template to build a DoD-approved DevSecOps pipeline. 

The blog post details how concepts like shifting security left, using microservices, and leveraging automation all come together to meet the DoD’s sky-high security standards. Whether you choose an existing DoD software factory (like Platform One) or build your own, the goal is to streamline development without sacrificing security. 

Tools like Anchore Enforce Federal Edition can help with everything from SBOM generation to continuous vulnerability scanning, so you can meet compliance requirements and keep your mission-critical apps protected at every stage.

Wrap-Up

That wraps up the top ten Anchore blog posts of 2024! We covered it all: next-level software supply chain best practices, military-grade compliance tactics, and all the open-source goodies that keep your DevSecOps pipeline firing on all cylinders. 

The common thread throughout them all is the recognition that security and speed can work hand-in-hand. With SBOM-driven approaches, modern vulnerability management, and automated compliance checks, organizations can achieve the rapid, secure, and compliant software delivery required in the DevSecOps era. We hope these posts will serve as a guide and inspiration as you continue to refine your DevSecOps practice, embrace new technologies, and steer your organization toward a more secure and efficient future.

If you enjoyed our greatest hits album of 2024 but need more immediacy in your life, follow along in 2025 by subscribing to the Anchore Newsletter or following Anchore on your favorite social platform.

Learn about the role that SBOMs play in the security of your organization, including open source software (OSS) security, in this white paper.

SSDF Attestation Template: Battle-tested Compliance Guidance

The CISA Secure Software Development Attestation form, commonly referred to as SSDF attestation, was released earlier this year. As with any new compliance framework, knowing the exact wording and details to provide in order to meet the requirements can be difficult.

We feel you here. Anchore is heavily invested in the public sector and had to generate our own SSDF attestation for our platform, Anchore Enterprise. Having gone through the process ourselves and working with a number of customers that requested our expertise on this matter, we developed a document that helps you put together an SSDF attestation that will make a compliance officer’s heart sing.

Our goal with this document is to make SSDF attestation as easy as possible and demonstrate how Anchore Enterprise is an “easy button” you can use to satisfy the majority of evidence needed to achieve compliance. We have already submitted our own SSDF attestation and been approved, so we are confident these answers will help get you over the line. You can find our SSDF attestation guide on our docs site.

Explore SSDF attestation in-depth with this eBook. Learn what the framework involves and how you can benefit from it.

How do I fill out the SSDF attestation form?

This is the difficult part, isn’t it? The SSDF attestation form looks very simple at a glance, but it has a number of sections that expect attached evidence detailing how your organization secures both your development environments and production systems. Like all compliance standards, it doesn’t specify what will or won’t meet compliance for your organization—hence the importance of the evidence.

At Anchore, we both experienced this ourselves and helped our customers navigate this ambiguity. Out of these experiences, we created a document that breaks down each item and the evidence that achieved compliance without being rejected by a compliance officer.

We have published this document on our Docs site for all other organizations to use as a template when attempting to meet SSDF attestation compliance.

Structure of the SSDF attestation form

The SSDF attestation form is divided into three sections:

Section I

The first section is very short: it is where you list the type of attestation you are submitting and information about the product whose compliance you are attesting to.

Section II

This section is also short; it collects contact information. CISA wants to know how to get in contact with your organization and who is responsible for any questions or concerns that need to be addressed.

Section III

For all intents and purposes, Section III is the SSDF attestation form. This is where you will provide all of the technical supporting information to demonstrate that your organization complies with the requirements set out in the SSDF attestation form. 

The guide that Anchore has developed is focused specifically on how to fill out this section in a way that will meet the expectations of CISA compliance officers.

Where do I submit the SSDF attestation form?

If you are a US government vendor, you can submit your organization’s completed form on the Repository for Software Attestations and Artifacts. You will need an account, which can be requested on the login page. Account creation normally takes a few days, so give yourself at least a week and request access ahead of time while you’re gathering the information to fill out your form.

You may also receive direct requests to pass along the form—not every agency will use the repository. You may even have non-government customers asking for the form. While it’s mandated by the government, the document contains a lot of good evidence that any customer can benefit from.

What tooling do I need to meet SSDF attestation compliance?

There are many ways to meet the technical requirements of SSDF attestation, but there is also a well-worn path. Anchore utilizes modern DevSecOps practices and assumes that the majority of our customers do as well. Below is a list of common DevSecOps tools that are typically used to help meet SSDF compliance.

Endpoint Protection

Description: Endpoint protection tools secure individual devices (endpoints) that connect to a network. They protect against malware, detect and prevent intrusions, and provide real-time monitoring and response capabilities.

SSDF Requirement: [3.1] — “Separating and protecting each environment involved in developing and building software”

Examples: Jamf, Elastic, SentinelOne, etc.

Source Control

Description: Source control systems manage changes to source code over time. They help track modifications, facilitate collaboration among developers, and maintain different versions of code.

SSDF Requirement: [3.1] — “Separating and protecting each environment involved in developing and building software”

Examples: GitHub, GitLab, etc.

CI/CD Build Pipeline

Description: Continuous Integration/Continuous Deployment (CI/CD) pipelines automate the process of building, testing, and deploying software. They help ensure consistent and reliable software delivery.

SSDF Requirement: [3.1] — “Separating and protecting each environment involved in developing and building software”

Examples: Jenkins, GitLab, GitHub Actions, etc.

Single Sign-on (SSO)

Description: SSO allows users to access multiple applications with one set of login credentials. It enhances security by centralizing authentication and reducing the number of attack vectors.

SSDF Requirement: [3.1] — “Enforcing multi-factor authentication and conditional access across the environments relevant to developing and building software in a manner that minimizes security risk;”

Examples: Okta, Google Workspace, etc.

Security Information and Event Management (SIEM)

Description: Monitoring tools provide real-time visibility into the performance and security of systems and applications. They can detect anomalies, track resource usage, and alert on potential issues.

SSDF Requirement: [3.1] — “Implementing defensive cybersecurity practices, including continuous monitoring of operations and alerts and, as necessary, responding to suspected and confirmed cyber incidents;”

Examples: Elasticsearch, Splunk, Panther, RunReveal, etc.

Audit Logging

Description: Audit logging captures a record of system activities, providing a trail of actions performed within the software development and build environments.

SSDF Requirement: [3.1] — “Regularly logging, monitoring, and auditing trust relationships used for authorization and access: i) to any software development and build environments; and ii) among components within each environment;”

Examples: Typically a built-in feature of CI/CD, SCM, SSO, etc.

Secrets Encryption

Description: Secrets encryption tools secure sensitive information such as passwords, API keys, and certificates used in the development and build processes.

SSDF Requirement: [3.1] — “Encrypting sensitive data, such as credentials, to the extent practicable and based on risk;”

Examples: Typically a built-in feature of CI/CD and SCM

Secrets Scanning

Description: Secrets scanning tools automatically detect and alert on exposed secrets in code repositories, preventing accidental leakage of sensitive information.

SSDF Requirement: [3.1] — “Encrypting sensitive data, such as credentials, to the extent practicable and based on risk;”

Examples: Anchore Secure or other container security platforms
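
For intuition, here is a toy sketch of the underlying technique. Production scanners combine many more patterns with entropy heuristics and path filters; the two patterns below are well-known public formats:

```python
import re

# Two well-known secret patterns. Real scanners ship far more patterns plus
# entropy checks; this only illustrates the matching technique.
PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

def scan(text: str) -> list[str]:
    """Return the names of any secret patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

print(scan("aws_key = 'AKIAABCDEFGHIJKLMNOP'"))  # -> ['AWS access key ID']
```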

OSS Component Inventory (+ Provenance)

Description: These tools maintain an inventory of open-source software components used in a project, including their origins and lineage (provenance).

SSDF Requirement: [3.3] — “The software producer maintains provenance for internal code and third-party components incorporated into the software to the greatest extent feasible;”

Examples: Anchore SBOM or other SBOM generation and management platform

Vulnerability Scanning

Description: Vulnerability scanning tools automatically detect security weaknesses in code, dependencies, and infrastructure.

SSDF Requirement: [3.4] — “The software producer employs automated tools or comparable processes that check for security vulnerabilities. In addition: a) The software producer operates these processes on an ongoing basis and prior to product, version, or update releases;”

Examples: Anchore Secure or other software composition analysis (SCA) platform

Vulnerability Management and Remediation Runbook

Description: This is a process and set of guidelines for addressing discovered vulnerabilities, including prioritization and remediation steps.

SSDF Requirement: [3.4] — “The software producer has a policy or process to address discovered security vulnerabilities prior to product release; and The software producer operates a vulnerability disclosure program and accepts, reviews, and addresses disclosed software vulnerabilities in a timely fashion and according to any timelines specified in the vulnerability disclosure program or applicable policies.”

Examples: This is not necessarily a tool but an organizational SLA on security operations. For reference, Anchore has included a screenshot from our vulnerability management guide.

Next Steps

If your organization currently provides software services to a federal agency or is looking to in the future, Anchore is here to help you in your journey. Reach out to our team and learn how you can integrate continuous and automated compliance directly into your CI/CD build pipeline with Anchore Enterprise.

Learn about the importance of both FedRAMP and SSDF compliance for selling to the federal government.



Anchore at Billington CyberSecurity Summit: Automating Defense in the AI Era

Are you gearing up for the 15th Annual Billington CyberSecurity Summit? So are we! The Anchore team will be front and center in the exhibition hall throughout the event, ready to showcase how we’re revolutionizing cybersecurity in the age of AI.

This year’s summit promises to be a banger, highlighting the evolution in cybersecurity as the latest iteration of AI takes center stage. While large language models (LLMs) like ChatGPT have been making waves across industries, the cybersecurity realm is still charting its course in this new AI-driven landscape. But make no mistake – this is no time to rest on our laurels.

As blue teams explore innovative ways to harness LLMs, cybercriminals are working overtime to weaponize the same technology. If there’s one lesson we’ve learned from every software and AI hype cycle, it’s that automation is key. As adversaries incorporate novel automations into their tactics, defenders must not just keep pace—they need to get ahead.

At Anchore, we’re all-in on this strategy. The Anchore Enterprise platform is purpose-built to automate and scale cybersecurity across your entire software development lifecycle. By automating continuous vulnerability scanning and compliance in your DevSecOps pipeline, we’re equipping warfighters with the tools they need to outpace adversaries that never sleep.

Ready to see how Anchore can transform your cybersecurity posture in the AI era? Stop by our booth for a live demo. Don’t miss this opportunity to stay ahead of the curve—book a meeting (below) with our team and take the first step towards a more secure tomorrow.

Anchore at the Billington CyberSecurity Summit

Date: September 3–6, 2024

Location: The Ronald Reagan Building and International Trade Center in Washington, DC

Our team is looking forward to meeting you! Book a demo session in advance to ensure a preferred slot.

Anchore’s Showcase: DevSecOps and Automated Compliance

We will be demonstrating the Anchore Enterprise platform at the event. Our showcase will focus on:

  1. Software Composition Analysis (SCA) for Cloud-Native Environments: Learn how our tools can help you gain visibility into your software supply chain and manage risk effectively.
  2. Automated SBOM Generation and Management: Discover how Anchore simplifies the creation and maintenance of Software Bills of Materials (SBOMs), the foundational component in software supply chain security.
  3. Continuous Scanning for Vulnerabilities, Secrets, and Malware: See our advanced scanning capabilities in action, designed to protect your applications across the DevSecOps pipeline or DoD software factory.
  4. Automated Compliance Enforcement: Experience how Anchore can streamline compliance with key standards such as cATO, RAISE 2.0, NIST, CISA, and FedRAMP, saving time and reducing human error.

We invite all attendees to visit our booth to learn more about how Anchore’s DevSecOps and automated compliance solutions can enhance your organization’s security posture in the age of AI and cloud computing.

Event Highlights

Still on the fence about whether to attend? Here is a quick run-down to help get you off the fence. This year’s summit, themed “Advancing Cybersecurity in the AI Age,” will feature more than 40 sessions and breakouts, covering critical topics such as:

  • The increasing impact of artificial intelligence on cybersecurity
  • Cloud security challenges and solutions
  • Proactive approaches to technical risk management
  • Emerging cyber risks and defense strategies
  • Data protection against breaches and insider threats
  • The intersection of cybersecurity and critical infrastructure

The event will showcase fireside chats with top government officials, including FBI Deputy Director Paul Abbate, Chairman of the Joint Chiefs of Staff General CQ Brown, Jr., and U.S. Cyber Command Commander General Timothy D. Haugh, among others.

Next Steps and Additional Resources

Join us at the Billington Cybersecurity Summit to network with industry leaders, gain valuable insights, and explore innovative technologies that are shaping the future of cybersecurity. We look forward to seeing you there!

If you are interested in the Anchore Enterprise platform and can’t wait till the show, here are some resources to help get you started:

Learn about best practices that are setting new standards for security in DoD software factories.

Anchore Awarded DoD ESI DevSecOps Phase II Agreement

The Department of Defense (DoD) Enterprise Software Initiative (ESI) has awarded Anchore inclusion in its DevSecOps program, which is part of the ESI’s DevSecOps Phase II enterprise agreements.

The DoD ESI’s main objective is to streamline the acquisition process for software and services across the DoD in order to gain significant cost savings and improve efficiency. Admittance into the ESI program validates Anchore’s commitment to being a trusted partner to the DoD, delivering advanced container vulnerability scanning as well as SBOM management solutions that meet the most stringent compliance and security requirements.

Anchore’s inclusion in the DoD ESI DevSecOps Phase II agreement is a testament to our commitment to delivering cutting-edge software supply chain security solutions. This milestone enables us to more efficiently support the DoD’s critical missions by providing them with the tools they need to secure their software development pipelines. Our continued partnership with the DoD reinforces Anchore’s position as a trusted leader in SBOM-powered DevSecOps and container security.

—Tim Zeller, EVP Sales & Marketing

The agreements also included DevSecOps luminaries HashiCorp and Rancher Government, as well as CloudBees, Infoblox, GitLab, CrowdStrike, and F5 Networks; all are now part of the preferred vendor list for all DoD missions that require cybersecurity solutions generally, and software supply chain security specifically.

Anchore is steadily growing its presence on federal contracts and catalogs such as Iron Patriot & Minerva, GSA, 2GIT, NASA SEWP, ITES, and most recently JFAC (Joint Federated Assurance Center).

What does this mean?

Similar to the GSA Advantage marketplace, DoD missions can now procure Anchore through the fully negotiated and approved ESI Agreements on the Solutions for Enterprise-Wide Procurement (SEWP) Marketplace. 

Anchore’s History with DoD

This award continues Anchore’s deepening relationship with the DoD. Starting in 2020, the DoD has vetted and approved Anchore’s container vulnerability scanning tools. Anchore is named in both the DoD Container Image Creation and Deployment Guide and the DoD Container Hardening Process Guide as recommended solutions.

The same year, Anchore was selected by the US Air Force’s Platform One to become the software supply chain vendor to implement the best practices in the above guides for all software built on the platform. Read our case study on how Anchore partnered with Platform One to build the premier DevSecOps platform for the DoD.

The following year, Anchore won the Small Business Innovation Research (SBIR) Phase III contract with Platform One to integrate directly into the Iron Bank container image process. If your image has achieved Iron Bank certification, it is because Anchore’s solution has given it a passing grade. Read more about this DevSecOps success story in our case study with the Iron Bank.

Due to the success of Platform One within the US Air Force, in 2022 Anchore partnered with the US Navy to secure the Black Pearl DevSecOps platform. Similar to Platform One, Black Pearl is the go-to standard for modern software development within the Department of the Navy (DON).

As Anchore continued to expand its relationship with the DoD and federal agencies, its offerings became available for purchase through the online government marketplaces and contracts such as GSA Advantage and Second Generation IT Blanket Purchase Agreements (2GIT), NASA SEWP, Iron Patriot/Minerva, ITES and JFAC. The ESI’s DevSecOps Phase II award was built on the back of all of the previous success stories that came before it. 

Achieving ATO is now easier with the inclusion of Anchore into the DoD ESI. Read our white paper on DoD software factory best practices to reach cATO or RAISE 2.0 compliance in days versus months.

We advise on best practices that are setting new standards for security and efficiency in DoD software factories, such as hardening container images, automating policy enforcement, and continuously monitoring for vulnerabilities.

DevSecOps Evolution: How DoD Software Factories Are Reshaping Federal Compliance

Anchore’s Vice President of Security, Josh Bressers, recently did an interview with Fed Gov Today about the role of automation in DevSecOps and how it is impacting the US federal government. We’ve condensed the highlights of the interview into a snackable blog post below.

Automation is the foundation of DevSecOps

Automation isn’t just a buzzword; it is the foundation of DevSecOps. It is what gives meaning to marketing taglines like “shift left”. The point of DevSecOps is to create automated workflows that provide feedback to software developers as they are writing the application. This unwinds the previous practice of artificially grouping all of the “compliance” or “security” tasks into large blocks at the end of the development process. The challenge with this pattern is that it delays feedback, and design decisions become difficult to undo after development has completed. By inverting the narrative and automating feedback as design decisions are made, developers are able to prevent compliance or security issues before they become deeply embedded into the software.

DoD Software Factories are leading the way in DevSecOps adoption

The US Department of Defense (DoD) is at the forefront of implementing DevSecOps through their DoD software factory model. The US Navy’s Black Pearl and the Air Force’s Platform One are perfect examples of this program. These organizations are leveraging automation to streamline compliance work. Instead of relying on manual documentation ahead of Authority to Operate (ATO) reviews, automated workflows built directly into the software development pipeline provide direct feedback to developers. This approach has proven highly effective, as Bressers emphasizes in his interview:

It’s obvious why the DoD software factory model is catching on. It’s because they work. It’s not just talk, it’s actually working. There’s many organizations that have been talking about DevSecOps for a long time. There’s a difference between talking and doing. Software factories are doing and it’s amazing.

—Josh Bressers, VP of Security, Anchore

Benefits of compliance automation

By giving compliance the same treatment as security (i.e., automate all the things), tasks that once took weeks or even months can now be completed in minutes or hours. This dramatic reduction in time-to-compliance not only accelerates development cycles but also allows teams to focus on collaboration and solution delivery rather than getting bogged down in procedural details. The result is a “shift left” approach that extends beyond security to compliance as well. When compliance is integrated early in the development process, the benefits cascade down the entire development waterfall.

Compliance automation is shifting the policy checks left into the software development process. What this means is that once your application is finished; instead of the compliance taking weeks or months, we’re talking hours or minutes.

—Josh Bressers, VP of Security, Anchore

Areas for improvement

While automation is crucial, there are still several areas for improvement in DevSecOps environments. Key focus areas include ensuring developers fully understand the automated processes, improving communication between team members and agencies, and striking the right balance between automation and human oversight. Bressers emphasizes the importance of letting “people do people things” while leveraging computers for tasks they excel at. This approach fosters genuine collaboration and allows teams to focus on meaningful work rather than simply checking boxes to meet compliance requirements.

Standardizing communication workflows with integrated developer tools

Software development pipelines are primarily platforms to coordinate the work of distributed teams of developers. They act like old-fashioned switchboard operators that connect one member of the development team to the next as they hand off work along the development production line. Leveraging developer tooling like GitLab or GitHub standardizes communication workflows. These platforms provide mechanisms for different team members to interact across various stages of the development pipeline. Teams can easily file and track issues, automatically pass or fail tests (e.g., compliance tests), and maintain a searchable record of discussions. This approach facilitates better understanding between developers and those identifying issues, leading to more efficient problem-solving and collaboration.
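
As one illustration of plugging compliance results into these standardized workflows, the sketch below publishes a pass/fail result as a commit status using GitHub’s commit status REST API. The repository name, token variable, and status context are placeholders:

```python
import os

import requests  # third-party HTTP client: pip install requests

def post_compliance_status(sha: str, passed: bool) -> None:
    """Publish a compliance-gate result as a commit status on GitHub.

    ORG/REPO, the GITHUB_TOKEN environment variable, and the context name
    are placeholders; the commit-status endpoint itself is part of the
    standard GitHub REST API.
    """
    url = f"https://api.github.com/repos/ORG/REPO/statuses/{sha}"
    response = requests.post(
        url,
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={
            "state": "success" if passed else "failure",
            "context": "compliance/policy-gate",
            "description": "Automated policy checks "
                           + ("passed" if passed else "failed"),
        },
        timeout=30,
    )
    response.raise_for_status()
```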

The government getting ahead of the private sector: an unexpected narrative inversion

In a surprising turn of events, Bressers points out that government agencies are now leading the way in DevSecOps implementation by integrating automated compliance. Historically often seen as technologically behind, federal agencies, through the DoD software factory model, are setting new standards that are likely to influence the private sector. As these practices become more widespread, contractors and private companies working with the government will need to adapt to these new requirements. This shift is evident in recent initiatives like the SSDF attestation questionnaire and White House Executive Order (EO) 14028. These initiatives are setting new expectations for federal contractors, signaling a broader move towards making compliance a native pillar of DevSecOps.

This is one of the few instances in recent memory where the government is truly leading the way. Historically the government has often been the butt of jokes about being behind in technology but these DoD software factories are absolutely amazing. The next thing that we’re going to see is the compliance expectations that are being built into these DoD software factories will seep out into the private sector. The SSDF attestation and the White House Executive Order are only the beginning. Ironically my expectation is everyone is going to have to start paying attention to this, not just federal agencies.

—Josh Bressers, VP of Security, Anchore

Next Steps

If you’re interested to learn more about how to future-proof your software supply chain with compliance automation via the DoD software factory model, be sure to read our white paper.

If you’d like to hear the interview in full, be sure to watch it on Fed Gov Today’s Youtube channel.

Introduction to the DoD Software Factory

In the rapidly evolving landscape of national defense and cybersecurity, the concept of a Department of Defense (DoD) software factory has emerged as a cornerstone of innovation and security. These software factories represent an integration of the principles and practices found within the DevSecOps movement, tailored to meet the unique security requirements of the DoD and Defense Industrial Base (DIB). 

By fostering an environment that emphasizes continuous monitoring, automation, and cyber resilience, DoD Software Factories are at the forefront of the United States Government’s push towards modernizing its software and cybersecurity capabilities. This initiative not only aims to enhance the velocity of software development but also ensures that these advancements are achieved without compromising on security, even against the backdrop of an increasingly sophisticated threat landscape.

Building and running a DoD software factory is so central to the future of software development that “Establish a Software Factory” is one of the explicitly named plays in the DoD DevSecOps Playbook. On top of that, the compliance capstone, the authorization to operate (ATO), and its DevSecOps-infused cousin, the continuous ATO (cATO), effectively require a software factory in order to meet the requirements of the standard. In this blog post, we’ll break down the concept of a DoD software factory and give a high-level overview of the components that make one up.

What is a DoD software factory?

A Department of Defense (DoD) Software Factory is a software development pipeline that embodies the principles and tools of the larger DevSecOps movement with a few choice modifications that conform to the extremely high threat profile of the DoD and DIB. It is part of the larger software and cybersecurity modernization trend that has been a central focus for the United States Government in the last two decades.

The goal of a DoD Software Factory is to create an ecosystem that enables continuous delivery of secure software that meets the needs of end-users while ensuring cyber resilience (a DoD catchphrase that emphasizes the transition from point-in-time security compliance to continuous security compliance). In other words, the goal is to leverage automation of software security tasks in order to fulfill the DevSecOps movement’s promise of increasing the velocity of software development.

What is an example of a DoD software factory?

Platform One is the canonical example of a DoD software factory. Run by the US Air Force, it offers a comprehensive portfolio of software development tools and services. It has come to prominence due to its hosted services like Repo One for source code hosting and collaborative development, Big Bang for an end-to-end DevSecOps CI/CD platform, and Iron Bank for centralized container storage (i.e., a container registry). These services have demonstrated that the principles of DevSecOps can be integrated into mission-critical systems while still preserving the highest levels of security to protect the most classified information.

If you’re interested to learn more about how Platform One has unlocked the productivity bonus of DevSecOps while still maintaining DoD levels of security, watch our webinar with Camdon Cady, Chief of Operations and Chief Technology Officer of Platform One.

Who does it apply to?

Federal Service Integrators (FSI)

Any organization that works with the DoD as a federal service integrator will want to be intimately familiar with DoD software factories: they will either have to build on top of existing software factories or, if the mission or program wants full control over its software factory, be able to build their own for the agency.

Department of Defense (DoD) Mission

Any Department of Defense (DoD) mission will need to be well-versed in DoD software factories, as all of their software and systems will be required to run on a software factory and to both reach and maintain a cATO.

What are the components of a DoD Software Factory?

A DoD software factory is composed of both high-level principles and specific technologies that meet those principles. Below is a list of some of the most significant principles of a DoD software factory:

Principles of DevSecOps embedded into a DoD software factory

  1. Break down organizational silos
    • This principle is borrowed directly from the DevSecOps movement: the DoD aims to integrate software development, test, deployment, security and operations into a single culture within the organization.
  2. Open source and reusable code
    • Composable software building blocks are another DevSecOps principle; reusing vetted open source components increases productivity and reduces the security implementation errors that arise when developers write security-critical code outside their area of expertise.
  3. Immutable Infrastructure-as-Code (IaC)
    • This principle focuses on treating the infrastructure that software runs on as ephemeral and managed via configuration rather than manual systems operations. Enabled by cloud computing (i.e., hardware virtualization), this principle increases the security of the underlying infrastructure through templated secure-by-design defaults and improves reliability, since all infrastructure has to be designed to fail at any moment.
  4. Microservices architecture (via containers)
    • Microservices are a design pattern that creates smaller software services that can be built and scaled independently of each other. This principle allows for less complex software that only performs a limited set of behaviors.
  5. Shift Left
    • Shift left is the DevSecOps principle that re-frames when and how security testing is done in the software development lifecycle. The goal is to begin security testing while software is being written and tested rather than after the software is “complete”. This prevents insecure practices from cascading into significant issues right as software is ready to be deployed.
  6. Continuous improvement through key capabilities
    • The principle of continuous improvement is a primary characteristic of the DevSecOps ethos but the specific key capabilities that are defined in the DoD DevSecOps playbook are what make this unique to the DoD.
  7. Define a DevSecOps pipeline
    • A DevSecOps pipeline is the system that utilizes all of the preceding principles in order to create the continuously improving security outcomes that are the goal of the DoD software factory program.
  8. Cyber resilience
    • Cyber resiliency is the goal of a DoD software factory. It is defined as “the ability to anticipate, withstand, recover from, and adapt to adverse conditions, stresses, attacks, or compromises on the systems that include cyber resources.”

Common tools and systems of a DoD software factory

  1. Code Repository (e.g., Repo One)
    • Where software source code is stored, managed and collaborated on.
  2. CI/CD Build Pipeline (e.g., Big Bang)
    • The system that automates the creation of software build artifacts, tests the software and packages the software for deployment.
  3. Artifact Repository (e.g., Iron Bank)
    • The storage system for software components used in development and the finished software artifacts that are produced from the build process.
  4. Runtime Orchestrator and Platform (e.g., Big Bang)
    • The deployment system that hosts the software artifacts pulled from the registry and keeps the software running so that users can access it.

How do I meet the security requirements for a DoD Software Factory? (Best Practices)

Use a pre-existing software factory

The benefit of using a pre-existing DoD software factory is the same as using a public cloud provider; someone else manages the infrastructure and systems. What you lose is the ability to highly customize your infrastructure to your specific needs. What you gain is the simplicity of only having to write software and allow others with specialized skill sets to deal with the work of building and maintaining the software infrastructure. When you are a car manufacturer, you don’t also want to be a civil engineering firm that designs roads.

To view existing DoD software factories, visit the Software Factory Ecosystem Coalition website.

Map of all DoD software factories in the US.

Roll your own by following DoD best practices 

If you need the flexibility and customization of managing your own software factory, then we’d recommend following the DoD Enterprise DevSecOps Reference Design as the base framework. There are a few software supply chain security recommendations that we would make in order to ensure that things go smoothly during the authorization to operate (ATO) process (see the sketch after this list for an illustration of the first two):

  1. Continuous vulnerability scanning across all stages of CI/CD pipeline
    • Use a cloud-native vulnerability scanner that can be directly integrated into your CI/CD pipeline and called automatically during each phase of the SDLC
  2. Automated policy checks to enforce requirements and achieve ATO
    • Use a cloud-native policy engine in tandem with your vulnerability scanner in order to automate the reporting and blocking of software that is a security threat and a compliance risk
  3. Remediation feedback
    • Use a cloud-native policy engine that can provide automated remediation feedback to developers in order to maintain a high velocity of software development
  4. Compliance (Trust but Verify)
    • Use a reporting system that can be directly integrated with your CI/CD pipeline to create and collect the compliance artifacts that can prove compliance with DoD frameworks (e.g., CMMC and cATO)
  5. Air-gapped system
    • Use tooling that can be deployed and updated in fully disconnected (air-gapped) environments in order to support classified networks
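
As a rough illustration of the first two recommendations, the sketch below wires Anchore’s open source CLIs, syft (SBOM generation) and grype (vulnerability scanning), into a single pipeline stage. The image reference is a placeholder, and the flags shown should be verified against the tools’ current documentation:

```python
"""Sketch: a CI/CD stage that generates an SBOM and gates on vulnerabilities.

Assumes Anchore's open source CLIs (syft and grype) are installed on the
build runner; verify flags against current documentation before use.
"""
import subprocess
import sys

IMAGE = "registry.example.mil/app:latest"  # placeholder image reference

# 1. Generate an SBOM and keep it as a compliance artifact of the build.
with open("sbom.json", "w") as sbom_file:
    subprocess.run(["syft", IMAGE, "-o", "json"], stdout=sbom_file, check=True)

# 2. Scan the image; --fail-on makes grype exit nonzero at or above the given
#    severity, which fails this pipeline stage automatically.
result = subprocess.run(["grype", IMAGE, "--fail-on", "high"])
sys.exit(result.returncode)
```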

Is a software factory required in order to achieve cATO?

Technically, no. Effectively, yes. A cATO requires that your software is deployed on an Approved DoD Enterprise DevSecOps Reference Design, not a software factory specifically. If you build your own DevSecOps platform that meets the criteria of the reference design, then you have effectively rolled your own software factory.

How Anchore can help

The easiest and most effective method for achieving the security guarantees that a software factory is required to meet for its software supply chain is by using:

  1. An SBOM generation and management tool that integrates directly into your software development pipeline
  2. A container vulnerability scanner that integrates directly into your software development pipeline
  3. A policy engine that integrates directly into your software development pipeline
  4. A centralized database to store all of your software supply chain security logs
  5. A query engine that can continuously monitor your software supply chain and automate the creation of compliance artifacts

These are the primary components of both Anchore Enterprise and Anchore Federal, cloud-native, SBOM-powered software composition analysis (SCA) platforms that provide end-to-end software supply chain security to holistically protect your DevSecOps pipeline and automate compliance. This approach has been validated by the DoD; in fact, the DoD’s Container Hardening Process Guide specifically named Anchore Federal as a recommended container hardening solution.

Learn more about how Anchore fuses DevSecOps and DoD software factories.

Conclusion and Next Steps

DoD software factories can come off as intimidating at first, but hopefully we have broken them down into a more digestible form. At their core, they reflect the best of the DevSecOps movement with specific adaptations for the extreme threat environment that the DoD operates in, as well as the intersecting trend of modernizing federal security compliance standards.

If you’re looking to dive deeper into all things DoD software factory, we have a white paper that lays out the 6 best practices for container images in highly secure environments. Download the white paper below.

Navigating the Updates to cATO: Critical Changes & Practical Advice for DoD Programs

On April 11, the US Department of Defense (DoD)’s Chief Information Officer (CIO) released the DevSecOps Continuous Authorization Implementation Guide, marking the next step in the evolution of the DoD’s efforts to modernize its security and compliance ecosystem. This guide is part of a larger trend of compliance modernization that is transforming the US public sector and the global public sector as a whole. It aims to streamline and enhance the processes for achieving continuous authorization to operate (cATO), reflecting a continued push to shift from traditional, point-in-time authorizations to operate (ATOs) to a more dynamic and ongoing compliance model.

The new guide introduces several significant updates, including specific security and development metrics required to achieve cATO, comprehensive evaluation criteria, practical advice on how to meet cATO requirements, and a special emphasis on software supply chain security via software bills of materials (SBOMs).

We break down the updates that are important to highlight if you’re already familiar with the cATO process. If you’re looking for a primer on cATO to get yourself up to speed, read our original blog post or click below to watch our webinar on-demand.

Continuous Authorization Metrics

A new addition to the corpus of information on cATO is the introduction of specific security and software development metrics that are required to be continuously monitored. Many of these come from private-sector DevSecOps best practices honed by organizations at the cutting edge of the field, such as Google, Microsoft, Facebook and Amazon.

We’ve outlined the major ones below.

  1. Mean Time to Patch Vulnerabilities:
    • Description: Average time between the identification of a vulnerability in the DevSecOps Platform (DSOP) or application and the successful production deployment of a patch.
    • Focus: Emphasis on vulnerabilities with high to moderate impact on the application or mission.
  2. Trend Metrics:
    • Description: Metrics associated with security guardrails and control gates PASS/FAIL ratio over time.
    • Focus: Show improvements in development team efforts at developing secure code with each new sprint and the system’s continuous improvement in its security posture.
  3. Feedback Communication Frequency:
    • Description: Metrics to ensure feedback loops are in place, being used, and trends showing improvement in security posture.
  4. Effectiveness of Mitigations:
    • Description: Metrics associated with the continued effectiveness of mitigations against a changing threat landscape.
  5. Security Posture Dashboard Metrics:
    • Description: Metrics showing the stage of application and its security posture in the context of risk tolerances, security control compliance, and security control effectiveness results.
  6. Container Metrics:
    • Description: Measure the age of containers against the number of times they have been used in a subsystem and the residual risk based on the aggregate set of open security issues.
  7. Test Metrics:
    • Description: Percentage of test coverage passed, percentage of passing functional tests, count of various severity level findings, percentage of threat actor actions mitigated, security findings compared to risk tolerance, and percentage of passing security control compliance.

The overall thread running through the required metrics is the ability to quickly understand whether the overall security of the application is improving. If it isn’t, this is a sign that something within the system is out of balance and needs attention.
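
To make the metrics less abstract, here is a minimal sketch of how two of them, Mean Time to Patch and the control gate PASS/FAIL trend, might be computed. The event records are hypothetical stand-ins for whatever your scanner, ticketing system, or DSOP dashboard emits:

```python
"""Sketch: computing two of the cATO metrics from pipeline event data.

The records below are hypothetical stand-ins for real monitoring data.
"""
from datetime import datetime
from statistics import mean

vulns = [  # identified -> patch deployed to production
    {"id": "CVE-2024-0001", "found": "2024-05-01", "patched": "2024-05-04"},
    {"id": "CVE-2024-0002", "found": "2024-05-02", "patched": "2024-05-03"},
]

def days_between(start: str, end: str) -> int:
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

# Mean Time to Patch: average days from identification to production fix.
mttp = mean(days_between(v["found"], v["patched"]) for v in vulns)

# Trend metric: PASS/FAIL ratio of security control gates per sprint.
gate_results = {"sprint-14": (42, 8), "sprint-15": (47, 3)}  # (pass, fail)
trend = {sprint: p / (p + f) for sprint, (p, f) in gate_results.items()}

print(f"MTTP: {mttp:.1f} days; gate pass ratio by sprint: {trend}")
```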

Comprehensive and detailed evaluation criteria

Tucked away in Appendix B, “Requirements”, is a detailed table that spells out the individual requirements that need to be met in order to achieve a cATO. This table is meant to improve the cATO process so that the individuals in a program who are implementing the requirements know the criteria they will be evaluated against, reducing the amount of back-and-forth between the program and the Authorizing Official (AO) evaluating them.

Practical Implementation Advice

The ecosystem for DSOPs has evolved significantly since cATO was first announced in February 2022. Over the past 2+ years, a number of early adopters, such as Platform One, have blazed a trail and learned all of the painful lessons in order to smooth the path for other organizations that are now looking to modernize their development practices. The advice in the implementation guide is a high-signal, low-noise distillation of these hard-won lessons.

DevSecOps Platform (DSOP) Advice

If you’re more interested in writing software than operating a DSOP, then you’ll want to focus your attention on pre-existing DSOPs, commonly called DoD software factories.

We have written both a primer for understanding DoD software factories and an index of additional content that can quickly direct you to deep dives in specific content you’re interested in.

If you love to get your hands dirty and would rather have full control over your development environment, just be aware that the guide specifically recommends against this approach:

Build a new DSOP using hardened components (this is the most time-consuming approach and should be avoided if possible).

DevSecOps Culture Advice

While the DevSecOps culture and process advice is well-known in the private sector, it is still important to emphasize in the federal context that is currently transitioning to the modern software development paradigm.

  1. Bring in the security team at the start of development and keep them involved throughout.
  2. Create secure agile processes to support the continued delivery of value without the introduction of unnecessary risk.

Continuous Monitoring (ConMon) Advice

Ensure that all environments are continuously monitored (e.g., development, test and production). Utilize the security data collected from these environments to power and inform the thresholds and triggers for active incident response. ConMon and ACD are separate pillars of cATO but need to be integrated so that information flows to the systems that can make the best use of it. It is this integrated approach that delivers on the promise of significantly improved security and risk outcomes.
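
A minimal sketch of this ConMon-to-ACD integration is below. The thresholds and the incident hook are assumptions; the point is that monitoring data from every environment should feed a system that can act on it:

```python
"""Sketch: turning ConMon findings into an Active Cyber Defense trigger.

Thresholds, data source, and incident hook are all illustrative stand-ins.
"""

CRITICAL_THRESHOLD = 1  # any critical finding opens an incident
HIGH_THRESHOLD = 5      # a burst of high-severity findings does too

def fetch_findings(env: str) -> list[dict]:
    # Stand-in for your ConMon data source (scanner API, SIEM query, etc.)
    return [{"severity": "critical"}] if env == "production" else []

def open_incident(env: str, crit: int, high: int) -> None:
    # Stand-in for paging the SOC or opening a ticket in your IR system.
    print(f"[{env}] incident opened: {crit} critical / {high} high findings")

def evaluate(env: str) -> None:
    findings = fetch_findings(env)
    crit = sum(1 for f in findings if f["severity"] == "critical")
    high = sum(1 for f in findings if f["severity"] == "high")
    if crit >= CRITICAL_THRESHOLD or high >= HIGH_THRESHOLD:
        open_incident(env, crit, high)

# ConMon covers every environment, not just production.
for environment in ("development", "test", "production"):
    evaluate(environment)
```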

Active Cyber Defense (ACD) Advice

Both a Security Operations Center (SOC) and an external Cybersecurity Service Provider (CSSP) are needed in order to achieve the Active Cyber Defense (ACD) pillar of cATO. On top of that, there also has to be a detailed incident response plan and personnel trained on it. While cATO’s goal is to automate as much of the security and incident response system as possible to reduce the burden of manual intervention, humans in the loop are still an important component in order to tune the system and react with appropriate urgency.

Software Supply Chain Security (SSCS) Advice

The new implementation guide is very clear that a DSOP must create SBOMs for itself and for any applications that pass through it. This is a mega-trend that has been sweeping over the software supply chain security industry for the past decade. It is now the consensus that SBOMs are the best abstraction and practice for securing software development in the age of composable and complex software.

The 3 (+1) Pillars of cATO

While the 3 pillars of cATO and the recommendation for SBOMs as the preferred software supply chain security tool were called out in the original cATO memo, the recently published implementation guide again emphasizes the importance of the 3 (+1) pillars of cATO.

The guide quotes directly from the memo:

In order to prevent any combination of human errors, supply chain interdictions, unintended code, and support the creation of a software bill of materials (SBOM), the adoption of an approved software platform and development pipeline(s) are critical.

This is a continuation of the DoD specifically, and the federal government generally, highlighting the importance of software supply chain security and software bills of materials (SBOMs) as “critical” for achieving the 3 pillars of cATO. This is why Anchore refers to this as the “3 (+1) Pillars of cATO”.

  1. Continuous Monitoring (ConMon)
  2. Active Cyber Defense (ACD)
  3. DevSecOps (DSO) Reference Design
  4. Secure Software Supply Chain (SSSC)

Wrap-up

The release of the new DevSecOps Continuous Authorization Implementation Guide marks a significant advancement in the DoD’s approach to cybersecurity and compliance. With a focus on transitioning from traditional point-in-time authorizations to operate (ATOs) to a continuous authorization model, the guide introduces comprehensive updates designed to streamline the cATO process. The goal is to ease the burden of the process and help more programs modernize their security and compliance posture.

If you’re interested to learn more about the benefits and best practices of utilizing a DSOP (i.e., a DoD software factory) in order to transform cATO compliance into a “switch flip”, be sure to pick up a copy of our “DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images” white paper. Click below to download.

Best Practices for DevSecOps in DoD Software Factories: A White Paper

The Department of Defense’s (DoD) Software Modernization Implementation Plan, unveiled in March 2023, represents a significant stride towards transforming software delivery timelines from years to days. This ambitious plan leverages the power of containers and modern DevSecOps practices within a DoD software factory.

Our latest white paper, titled “DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images,” dives deep into the security practices for securing container images in a DoD software factory. It also details how Anchore Federal—a pivotal tool within this framework—supports these best practices to enhance security and compliance across multiple DoD software factories including the US Air Force’s Platform One, Iron Bank, and the US Navy’s Black Pearl.

Key Insights from the White Paper

  • Securing Container Images: The paper outlines six essential best practices ranging from using trusted base images to continuous vulnerability scanning and remediation. Each practice is backed by both DoD guidance and relevant NIST standards, ensuring alignment with federal requirements.
  • Role of Anchore Federal: As a proven tool in the arena of container image security, Anchore Federal facilitates these best practices by integrating seamlessly into DevSecOps workflows, providing continuous scanning, and enabling automated policy enforcement. It’s designed to meet the stringent security needs of DoD software factories, ready for deployment even in classified and air-gapped environments.
  • Supporting Rapid and Secure Software Delivery: With the DoD’s shift towards software factories, the need for robust, secure, and agile software delivery mechanisms has never been more critical. Anchore Federal is the turnkey solution for automating security processes and ensuring that all container images meet the DoD’s rigorous security and compliance requirements.

Download the White Paper Today

Empower your organization with the insights and tools needed for secure software delivery within the DoD ecosystem. Download our white paper now and take a significant step towards implementing best-in-class DevSecOps practices in your operations. Equip your teams with the knowledge and technology to not just meet, but exceed the modern security demands of the DoD’s software modernization efforts.

An Outline for Getting Up to Speed on the DoD Software Factory

This blog post is meant as a gateway to all things DoD software factory. We highlight content from across the Anchore universe that can help anyone get up to speed on what a DoD software factory is, why to use one and how to build one. Treat it as an index: scan for the topics that are most interesting to you and follow the links to more detailed content.

What is a DoD Software Factory?

The short answer is a DoD Software Factory is an implementation of the DoD Enterprise DevSecOps Reference Design. A slightly longer answer comes from our DoD software factory primer:

A Department of Defense (DoD) Software Factory is a software development pipeline that embodies the principles and tools of the larger DevSecOps movement with a few choice modifications that conform to the extremely high threat profile of the DoD and DIB.

Note that a DoD software factory pipeline looks much like a traditional DevOps pipeline. The difference is that security controls are layered into the environment to automate software component inventory, vulnerability scanning and policy enforcement, meeting the requirements to be considered a DoD software factory.

Got the basics down? Go deeper and learn how Anchore can help you put the Sec into DevSecOps Reference Design by reading our DoD Software Factory Best Practices white paper.

Why do I want to utilize a DoD Software Factory?

For DoD programs, the primary reason to utilize a DoD software factory is that it is a requirement for achieving a continuous authorization to operate (cATO). The cATO standard specifically calls out that software must be developed in a system that meets the DoD Enterprise DevSecOps Reference Design. A DoD software factory is the generic implementation of this design standard.

For Federal Service Integrators (FSIs), the biggest reason to utilize a DoD software factory is that it is a standard approach to meeting DoD compliance and certification standards. By meeting a standard, such as CMMC Level 2, you expand your opportunity to work with DoD programs.

Continuous Authorization to Operate (cATO)

If you’re looking for more information on cATO, Anchore has written a comprehensive guide on navigating the cATO process that can be found on our blog:

DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images

The shift from traditional software delivery to DevSecOps in the Department of Defense (DoD) represents a crucial evolution in how software is built, secured, and deployed with a focus on efficiencies and speed. Our white paper advises on best practices that are setting new standards for security and efficiency in DoD software factories.

Cybersecurity Maturity Model Certification (CMMC)

The CMMC is the certification standard used by the DoD to vet FSIs from the defense industrial base (DIB). It is the gold standard for demonstrating to the DoD that your organization takes security seriously enough to work with the highest standards of any DoD program. The security controls that the CMMC references when determining certification are outlined in NIST SP 800-171. There are 14 families of security controls that an organization has to satisfy in order to achieve CMMC Level 2 certification, and a DoD software factory can help check a number of these off the list.

The specific families of controls that a DoD software factory helps meet are:

  • Access Control (AC)
  • Audit and Accountability (AU)
  • Configuration Management (CM)
  • Incident Response (IR)
  • Maintenance (MA)
  • Risk Assessment (RA)
  • Security Assessment and Authorization (CA)
  • System and Communications Protection (SC)
  • System and Information Integrity (SI)
  • Supply Chain Risk Management (SR)

If you’re looking for more information on how to apply software supply chain security to meet the CMMC, Anchore has published two blog posts on the topic:

NIST SP 800-171 & Controlled Unclassified Data: A Guide in Plain English

  • NIST SP 800-171 is the canonical list of security controls for meeting CMMC Level 2 certification. Anchore has broken down the entire 800-171 standard to give you an easy-to-understand overview.

Automated Policy Enforcement for CMMC with Anchore Enterprise

  • Policy Enforcement is the backbone of meeting the monitoring, enforcement and reporting requirements of the CMMC. In this blog post, we break down how Anchore Federal can meet a number of the controls specifically related to software supply chain security that are outlined in NIST 800-171.

How do I meet the DevSecOps Reference Design requirements?

The easy answer is by utilizing a DoD Software Factory Managed Service Provider (MSP). Below in the User Stories section, we deep dive into the US Air Force’s Platform One, given that it is the preeminent DoD software factory.

The DIY answer involves carefully reading and implementing the DoD Enterprise DevSecOps Reference Design. This document is massive, but there are a few shortcuts you can utilize to help expedite your journey.

Container Hardening

Deciding to utilize software containers in a DevOps pipeline is almost a foregone conclusion at this point. What is less well known is how to secure your containers, especially to meet the standards of a DoD software factory.

The DoD has published two guides that can help with this. The first is the DoD Container Hardening Guide, and the second is the Container Image Creation and Deployment Guide. Both name Anchore Federal as an approved container hardening scanner.

Anchore has published a number of blogs and even a white paper that condense the information in both of these guides into more digestible content. See below:

Container Security for U.S. Government Information Systems

  • This comprehensive white paper breaks down how to achieve a container build and deployment system that is hardened to the standards of a DoD software factory.

Enforcing the DoD Container Image and Deployment Guide with Anchore Federal

  • This blog post is great for those who are interested to see how Anchore Federal can turn all of the requirements of the DoD Container Hardening Guide and the Container Image Creation and Deployment Guide into an easy button.

Deep Dive into Anchore Federal’s Container Image Inspection and Vulnerability Management

  • This blog post deep dives into how to utilize Anchore Federal to find container vulnerabilities and alert or report on whether they are violating the security compliance required to be a DoD software factory.

Policy-based Software Supply Chain Security and Compliance

The power of a policy-based approach to software supply chain security is that it can be integrated directly into a DevOps pipeline and automate a significant amount of alerting, reporting and enforcement work. The blog posts below go into depth on how this automated approach to security and compliance can uplevel a DoD software factory:

A Policy Based Approach to Container Security & Compliance

  • This blog details how a policy-based platform works and how it can benefit both software supply chain security and compliance. 

The Power of Policy-as-Code for the Public Sector

  • This follow-up to the post above shows how the policy-based security platform outlined in the first blog post can have significant benefits to public sector organizations that have to focus on both internal information security and how to prove they are compliant with government standards.

Benefits of Static Image Inspection and Policy Enforcement

  • Getting a bit more technical, this blog details how a policy-based development workflow can be utilized as a security gate with deployment orchestration systems like Kubernetes.

Getting Started With Anchore Policy Bundles

  • An even deeper dive into what is possible with the policy-based security system provided by Anchore Enterprise, this blog gets into the nitty-gritty on how to configure policies to achieve specific security outcomes.

Unpacking the Power of Policy at Scale in Anchore

  • This blog shows how a security practitioner can extend the security signals that Anchore Enterprise collects with the assistance of a more flexible data platform like New Relic to derive more actionable insights.

Security Technical Implementation Guide (STIG)

The Security Technical Implementation Guides (STIGs) are fantastic technical guides for configuring off-the-shelf software to DoD hardening standards. Anchore, being a company focused on making security and compliance as simple as possible, has written a significant amount about how to utilize STIGs and achieve STIG compliance, especially for container-based DevSecOps pipelines: exactly the kind of software development environments that meet the standards of a DoD software factory. View our previous content below:

4 Ways to Prepare your Containers for the STIG Process

  • In this blog post, we give you four quick tips to help you prepare for the STIG process for software containers. Think of this as the amuse bouche to prepare you for the comprehensive white paper that comes next.

Navigating STIG Compliance for Containers

  • As promised, this is the extensive document that walks you through how to build a DevSecOps pipeline based on containers that is both high velocity and secure. Perfect for organizations that are aiming to roll their own DoD software factory.

User Stories

Anchore has been supporting FSIs and DoD programs in building DevSecOps programs that meet the criteria to be called a DoD software factory for the past decade. We can write technical guides and best practices documents till time ends, but sometimes the best lessons are learned from real-life stories. Below are user stories that fill in the details about how a DoD software factory can be built from scratch:

DoD’s Pathway to Secure Software

  • Join Major Camdon Cady of Platform One and Anchore’s VP of Security, Josh Bressers, as they discuss the lessons learned from building a DoD software factory from the ground up. Watch this on-demand webinar to get all of the details in a laid-back and casual conversation between two luminaries in their field.

Development at Mach Speed

  • If you prefer a written format over video, this case study highlights how Platform One utilized Red Hat OpenShift and Anchore Federal to build their DoD software factory that has become the leading Managed Service Provider for DoD programs.

Conclusion

Similar to how cloud has taken over the infrastructure discussion in the enterprise world, DoD software factories are quickly becoming the go-to solution for DoD programs and the FSIs that support them. Delivering on the DevOps movement’s promise of high-velocity development without compromising security, a DoD software factory is the one-stop shop for upgrading your software development practice into the modern age and becoming compliant as a bonus. If you’re looking for an easy button to infuse your DevOps pipeline with security and compliance without the headache of building it yourself, take a look at Anchore Federal and how it helps organizations layer software supply chain security into a DoD software factory and achieve a cATO.

4 Ways to Prepare your Containers for the STIG Process

The Security Technical Implementation Guide (STIG) is a Department of Defense (DoD) technical guidance standard that captures the cybersecurity requirements for a specific product, such as a cloud application going into production to support the warfighter. System integrators (SIs), government contractors, and independent software vendors know the STIG process as a well-governed process that all of their technology products must pass. The Defense Information Systems Agency (DISA) released the Container Platform Security Requirements Guide (SRG) in December 2020 to direct how software containers go through the STIG process. 

STIGs are notorious for their complexity and the hurdle that STIG compliance poses for technology project success in the DoD. Here are some tips to help your team prepare for your first STIG or to fine-tune your existing internal STIG processes.

4 Ways to Prepare for the STIG Process for Containers

Here are four ways to prepare your teams for containers entering the STIG process:

1. Provide your Team with Container and STIG Cross-Training

DevSecOps and containers, in particular, are still gaining ground in DoD programs. You may very well find your team in a situation where your cybersecurity/STIG experts may not have much container experience. Likewise, your programmers and solution architects may not have much STIG experience. Such a situation calls for some manner of formal or informal cross-training for your team on at least the basics of containers and STIGs. 

Look for ways to provide your cybersecurity specialists involved in the STIG process with training about containers if necessary. There are several commercial and free training options available. Check with your corporate training department to see what resources they might have available such as seats for online training vendors like A Cloud Guru and Cloud Academy.

There’s a lot of out-of-date and conflicting information about the STIG process on the web today. System integrators and government contractors need to build STIG expertise across their DoD project teams to cut through such noise.

Including STIG expertise as an essential part of your cybersecurity team is the first step. While contract requirements dictate this proficiency, it only helps if your organization can build a “bench” of STIG experts. 

Here are three tips for building up your STIG talent base:

  • Make STIG experience a “plus” or “bonus” in your cybersecurity job requirements for roles, even if they may not be directly billable to projects with STIG work (at least in the beginning)
  • Develop internal training around STIG practices led by your internal experts and make it part of employee onboarding and DoD project kickoffs
  • Create a “reach back” channel from your project teams to other parts of your company with STIG expertise, such as corporate teams and other project teams, so they can get support for any issues and challenges with the STIG process

Depending on the size of your company, clearance requirements of the project, and other situational factors, the temptation might be there to bring in outside contractors to shore up your STIG expertise internally. For example, the Container Platform Security Requirements Guide (SRG) is still new, so it makes sense to bring in an outside contractor with some experience managing containers through the STIG process. If you go this route, prioritize the knowledge transfer from the contractor to your internal team. Otherwise, their container and STIG knowledge walks out the door at the end of the contract term.

2. Validate your STIG Source Materials

When researching the latest STIG requirements, you need to validate the source materials. There are many vendors and educational sites that publish STIG content. Some of that content is outdated and incomplete. It’s always best to go straight to the source. DISA provides authoritative and up-to-date STIG content online that you should consider as your single source of truth on the STIG process for containers.

3. Make the STIG Viewer part of your Approved Desktop

Working on DoD and other public sector projects requires secure environments for developers, solution architects, cybersecurity specialists, and other team members. The STIG Viewer should become a part of your DoD project team’s secure desktop environment. Save the extra step of your DoD security teams putting in a service desk ticket to request the STIG Viewer installation.

4. Look for Tools that Automate time-intensive Steps in the STIG process

The STIG process is time-intensive, primarily because of the effort required to document policy controls. Look for tools that’ll help you automate compliance checks before you proceed into an audit of your STIG controls. The right tool can save you from audit surprises and rework that’ll slow down your application going live.
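
To give a flavor of what such automation looks like, here is a minimal sketch of a STIG-style configuration check run ahead of an audit. The rule ID and the check itself are illustrative examples of the kind of control a scanner can verify automatically, not items from a published STIG:

```python
"""Sketch: automating a STIG-style configuration check ahead of an audit.

The rule ID and check are illustrative, not taken from a published STIG.
"""

def check_no_sshd(installed_packages: set[str]) -> dict:
    failed = "openssh-server" in installed_packages
    return {
        "rule": "container-image-no-sshd",  # hypothetical rule ID
        "status": "fail" if failed else "pass",
        "detail": "remote shells do not belong in application containers",
    }

print(check_no_sshd({"glibc", "python3", "openssh-server"}))  # status: fail
```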

Parting Thought

The STIG process for containers is still very new to DoD programs. Being proactive and preparing your teams upfront in tandem with ongoing collaboration are the keys to navigating the STIG process for containers.

Learn more about putting your containers through the STIG process in our new white paper entitled Navigating STIG Compliance for Containers!

Automated Policy Enforcement for CMMC with Anchore Enterprise

The Cybersecurity Maturity Model Certification (CMMC) is an important program to harden the cybersecurity posture of the defense industrial base. Its purpose is to validate that appropriate safeguards are in place to protect controlled unclassified information (CUI). Many of the organizations that are required to comply with CMMC are Anchore customers. They have the responsibility to protect the sensitive but unclassified data of US military and government agencies as they support the various missions of the United States.

CMMC 2.0 Levels

  • Level 1 Foundation: Safeguard federal contract information (FCI); not critical to national security.
  • Level 2 Advanced: This maps directly to NIST Special Publication (SP) 800-171. Its primary goal is to ensure that government contractors are properly protecting controlled unclassified information (CUI).
  • Level 3 Expert: This maps directly to NIST Special Publication (SP) 800-172. Its primary goal is to go beyond the base-level security requirements defined in NIST 800-171. NIST 800-172 provides security requirements that specifically defend against advanced persistent threats (APTs).

This is of critical importance because these organizations leverage commonplace DevOps tooling to build their software. Additionally, these large organizations may be working with smaller subcontractors or suppliers who are building software in tandem or in partnership.

For example, imagine a mega-defense contractor working alongside a small mom-and-pop shop to develop software for a classified government system. This raises several questions:

  1. How can my company, as a mega-defense contractor, validate that software built by my partner is not using blacklisted software packages?
  2. How can my company validate software supplied to me is free of malware?
  3. How can I validate that the software supplied to me is in compliance with licensing standards and vulnerability compliance thresholds of my security team?
  4. How do I validate that the software I’m supplying is compliant not only against NIST 800-171 and CMMC, but against the compliance standards of my government end user (Such as NIST 800-53 or NIST 800-161)?

Validating Security between DevSecOps Pipelines and Software Supply Chain

At major and small contractors alike, everyone has taken steps to build internal DevSecOps (DSO) pipelines. However, the defense industrial base (DIB) commonly involves relationships in which smaller defense contractors supply software to a larger defense contractor for a program or DSO pipeline that consumes and implements that software. With Anchore Enterprise, we can now validate whether that supplied software is compliant with CMMC controls as specified in NIST 800-171.

Looking to learn more about how to achieve CMMC Level 2 or NIST 800-171 compliance? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.

Which Controls does Anchore Enterprise Automate?

3.1.7 – Restrict Non-Privileged Users and Log Privileged Actions

Related NIST 800-53 Controls: AC-6 (10)

Description: Prevent non-privileged users from executing privileged functions and capture the execution of such functions in audit logs. 

Implementation: Anchore Enterprise can scan container manifests to determine whether the user is being given root privileges and implement an automated policy to prevent such containers from entering a runtime environment. This prevents a scenario where privileged functions can be utilized in a runtime environment.
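
A minimal sketch of this kind of root-user gate is below. It is our own illustration rather than Anchore Enterprise’s implementation: real policy engines inspect the built image’s configuration, while this simplified version reads the Dockerfile directly:

```python
"""Sketch: a root-user gate for container images (illustrative only)."""

def runs_as_root(dockerfile_text: str) -> bool:
    user = "root"  # images default to root unless a USER instruction overrides it
    for line in dockerfile_text.splitlines():
        stripped = line.strip()
        if stripped.upper().startswith("USER "):
            user = stripped.split(maxsplit=1)[1]
    return user in ("root", "0")

dockerfile = """
FROM registry.example.mil/base/ubi9:latest
RUN dnf install -y myapp
USER appuser
"""
assert not runs_as_root(dockerfile)  # gate passes: a non-root user is configured
```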

3.4.1 – Maintain Baseline Configurations & Inventories

Related NIST 800-53 Controls: CM-2(1), CM-8(1), CM-6

Description: Establish and maintain baseline configurations and inventories of organizational systems (including hardware, software, firmware, and documentation) throughout the respective system development life cycles.

Implementation: Anchore Enterprise provides a centralized inventory of all containers and their associated manifests at each stage of the development pipeline. All manifests, images and containers are automatically added to the central tracking inventory so that a complete list of all artifacts of the build pipeline can be tracked at any moment in time.

3.4.2 – Enforce Security Configurations

Related NIST 800-53 Controls: CM-2 (1) & CM-8(1) & CM-6

Description: Establish and enforce security configuration settings for information technology products employed in organizational systems.

Implementation: Anchore Enterprise scans all container manifest files for security configurations and publishes found vulnerabilities to a centralized database that can be used for monitoring, ad-hoc reporting, alerting and/or automated policy enforcement.

3.4.3 – Monitor and Log System Changes with Approval Process

Related NIST 800-53 Controls: CM-3

Description: Track, review, approve or disapprove, and log changes to organizational systems.

Implementation: Anchore Enterprise provides a centralized dashboard that tracks all changes to applications which makes scheduled reviews simple. It also provides an automated controller that can apply policy-based decision making to either automatically approve or reject changes to applications based on security rules.

3.4.4 – Run Security Analysis on All System Changes

Related NIST 800-53 Controls: CM-4

Description: Analyze the security impact of changes prior to implementation.

Implementation: Anchore Enterprise can scan changes to applications for security vulnerabilities during the build pipeline to determine the security impact of the changes.

3.4.6 – Apply Principle of Least Functionality

Related NIST 800-53 Controls: CM-7

Description: Employ the principle of least functionality by configuring organizational systems to provide only essential capabilities.

Implementation: Anchore Enterprise can scan all applications to ensure that they are uniformly applying the principle of least functionality to individual applications. If an application does not meet this standard then Anchore Enterprise can be configured to prevent an application from being deployed to a production environment.

3.4.7 – Limit Use of Nonessential Programs, Ports, and Services

Related NIST 800-53 Controls: CM-7(1), CM-7(2)

Description: Restrict, disable, or prevent the use of nonessential programs, functions, ports, protocols, and services.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for specific security violations and prevent these applications from being deployed until the violations are remediated.

3.4.8 – Implement Blacklisting and Whitelisting Software Policies

Related NIST 800-53 Controls: CM-7(4), CM-7(5)

Description: Apply deny-by-exception (blacklisting) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (whitelisting) policy to allow the execution of authorized software.

Implementation: Anchore Enterprise can be configured as a gating agent that will apply a security policy to all scanned software. The policies can be configured in a black- or white-listing manner.
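
Here is a minimal sketch of what deny-by-exception and permit-by-exception policies look like when applied to an SBOM’s package list. The package names and policy sets are illustrative, and this is not Anchore Enterprise’s policy format:

```python
"""Sketch: blacklist and whitelist policies over an SBOM's package list."""

sbom_packages = {"openssl", "log4j-core", "requests"}

DENY = {"log4j-core"}                     # deny-by-exception (blacklist)
ALLOW = {"openssl", "requests", "glibc"}  # permit-by-exception (whitelist)

def denylist_hits(packages: set[str]) -> set[str]:
    return packages & DENY

def allowlist_misses(packages: set[str]) -> set[str]:
    return packages - ALLOW

print("denylist violations:", denylist_hits(sbom_packages))      # {'log4j-core'}
print("not on the allowlist:", allowlist_misses(sbom_packages))  # {'log4j-core'}
```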

3.4.9 – Control and Monitor User-Installed Software

Related NIST 800-53 Controls: CM-11

Description: Control and monitor user-installed software.

Implementation: Anchore Enterprise scans all software in the development pipeline and records all user-installed software. The scans can be monitored in the provided dashboard. User-installed software can be controlled (allowed or denied) via the gating agent.

3.5.10 – Store and Transmit Only Cryptographically-Protected Passwords

Related NIST 800-53 Controls: IA-5(1)

Description: Store and transmit only cryptographically-protected passwords.

Implementation: Anchore Enterprise can scan for plain-text secrets in build artifacts and prevent exposed secrets from being promoted to the next environment until the violation is remediated. This prevents unauthorized storage or transmission of unencrypted passwords or secrets.
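
A minimal sketch of this kind of secrets gate is below. The patterns are illustrative and far from exhaustive; production scanners use much larger rule sets plus entropy analysis:

```python
"""Sketch: scanning a build artifact for plain-text secrets before promotion."""
import re

PATTERNS = {
    "aws-access-key-id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private-key-block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password-assignment": re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
}

def find_secrets(text: str) -> list[str]:
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

artifact = 'db_host = "10.0.0.5"\npassword = "hunter2"\n'
hits = find_secrets(artifact)
if hits:
    raise SystemExit(f"promotion blocked, exposed secrets found: {hits}")
```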

3.11.2 – Scan for Vulnerabilities

Related NIST 800-53 Controls: RA-5, RA-5(5)

Description: Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified.

Implementation: Anchore Enterprise is designed to scan all systems and applications for vulnerabilities continuously and alert when any changes introduce new vulnerabilities.

3.11.3 – Remediate Vulnerabilities Respective to Risk Assessments

Related NIST 800-53 Controls: RA-5, RA-5(5)

Description: Remediate vulnerabilities in accordance with risk assessments.

Implementation: Anchore Enterprise can be tuned to allow or deny changes based on a risk scoring system.
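
As an illustration of risk-based gating, the sketch below weights a CVSS score by exposure and fix availability before comparing it to a tolerance. The weights and tolerance are examples we chose for the sketch, not Anchore’s scoring model:

```python
"""Sketch: gating changes on a weighted risk score (illustrative weights)."""

def risk_score(cvss: float, fix_available: bool, internet_facing: bool) -> float:
    score = cvss
    if internet_facing:
        score += 2.0  # exposure raises effective risk
    if fix_available:
        score -= 1.0  # a shipped fix shortens time-at-risk
    return score

def allow_change(findings: list[dict], tolerance: float = 7.0) -> bool:
    """Deny the change if any finding's weighted risk exceeds the tolerance."""
    return all(risk_score(**f) <= tolerance for f in findings)

findings = [
    {"cvss": 6.5, "fix_available": True, "internet_facing": False},  # score 5.5
    {"cvss": 9.8, "fix_available": False, "internet_facing": True},  # score 11.8
]
print("change allowed:", allow_change(findings))  # False
```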

3.12.2 – Implement Plans to Address System Vulnerabilities

Related NIST 800-53 Controls: CA-5

Description: Develop and implement plans of action designed to correct deficiencies and reduce or eliminate vulnerabilities in organizational systems.

Implementation: Anchore Enterprise automates the process of ensuring all software and systems are in compliance with the security policy of the organization. 

3.13.4 – Block Unauthorized Information Transfer via Shared Resources

Related NIST 800-53 Controls: SC-4

Description: Prevent unauthorized and unintended information transfer via shared system resources.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for unauthorized and unintended information transfer and prevent violations from being transferred between shared system resources until the violations are remediated.

3.13.8 – Use Cryptography to Safeguard CUI During Transmission

Related NIST 800-53 Controls: SC-8

Description: Transmission Confidentiality and Integrity: Implement cryptographic mechanisms to prevent unauthorized disclosure of CUI during transmission unless otherwise protected by alternative physical safeguards.

Implementation: Anchore Enterprise can be configured as a gating agent that will scan for CUI and prevent violations of organization defined policies regarding CUI from being disclosed between systems.

3.14.5 – Periodically Scan Systems and Real-time Scan External Files

Related NIST 800-53 Controls: SI-2

Description: Perform periodic scans of organizational systems and real-time scans of files from external sources as files are downloaded, opened, or executed.

Implementation: Anchore Enterprise can be configured to scan all external dependencies that are built into software and provide information about relevant security vulnerabilities in the software development pipeline.

Wrap-Up

In a world increasingly defined by software solutions, the cybersecurity posture of defense-related industries stands paramount. The CMMC, a framework with its varying levels of compliance, underscores the commitment of the defense industrial base to fortify its cyber defenses. 

As a multitude of organizations, ranging from the largest defense contractors to smaller mom-and-pop shops, work in tandem to support U.S. missions, the intricacies of maintaining cybersecurity standards grow. The questions posed exemplify the necessity to validate software integrity, especially in complex collaborations. 

Anchore Enterprise solves these problems by automating software supply chain security best practices. It not only automates a myriad of crucial controls, ranging from user privilege restrictions to vulnerability scanning, but it also empowers organizations to meet and exceed the benchmarks set by CMMC and NIST. 

In essence, as defense entities navigate the nuanced web of software development and partnerships, tools like Anchore Enterprise are indispensable in safeguarding the nation’s interests, ensuring the integrity of software supply chains, and championing the highest levels of cybersecurity.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Navigating Continuous Authority To Operate (cATO): A Guide for Getting Started

Continuous Authority to Operate (cATO), sometimes known as Rapid ATO, is becoming necessary as the DoD and civilian agencies put more applications and data in the cloud. Speed and agility are becoming increasingly critical to the mission as the government and federal system integrators seek new features and functionalities to support the warfighter and other critical U.S. government priorities.

In this blog post, we’ll break down the concept of cATO in understandable terms, explain its benefits, explore the myths and realities of cATO and show how Anchore can help your organization meet this standard.

What is Continuous Authority To Operate (cATO)?

Continuous ATO is the merging of traditional authority to operate (ATO) risk management practices with flexible and responsive DevSecOps practices to improve software security posture.

Traditional Risk Management Framework (RMF) implementations focus on obtaining an authorization to operate once every three years. The problem with this approach is that security threats aren’t static; they evolve. cATO is the evolution of this framework: it requires the continual authorization of software components, such as containers, by building security into the entire development lifecycle using DevSecOps practices. All software development processes need to ensure that the application and its components meet security levels equal to or greater than what an ATO requires.

You authorize once and use the software component many times. With a cATO, you gain complete visibility into all assets, software security, and infrastructure as code.

By automating security, you are then able to obtain and maintain cATO. There’s no better statement about the current process for obtaining an ATO than this commentary from Mary Lazzeri with Federal Computer Week:

“The muddled, bureaucratic process to obtain an ATO and launch an IT system inside government is widely maligned — but beyond that, it has become a pervasive threat to system security. The longer government takes to launch a new-and-improved system, the longer an old and potentially insecure system remains in operation.”

The Three Pillars of cATO

To achieve cATO, an Authorizing Official (AO) must demonstrate three main competencies:

  1. Ongoing visibility: A robust continuous monitoring strategy for RMF controls must be in place, providing insight into key cybersecurity activities within the system boundary.
  2. Active cyber defense: Software engineers and developers must be able to respond to cyber threats in real-time or near real-time, going beyond simple scanning and patching to deploy appropriate countermeasures that thwart adversaries.
  3. Adoption of an approved DevSecOps reference design: This involves integrating development, security, and operations to close gaps, streamline processes, and ensure a secure software supply chain.

Looking to learn more about the DoD DevSecOps Reference Design? It’s commonly referred to as a DoD Software Factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.

Continuous ATO vs. ATO

The primary difference between traditional ATOs and continuous ATOs is the frequency at which a system seeks to prove the validity of its security claims. ATOs require that a system can prove its security once every three years whereas cATO systems prove their security every moment that the system is running.

The Benefits of Continuous ATO

Continuous ATO is essentially the process of applying DevSecOps principles to the compliance framework of Authority to Operate. Automating the individual compliance processes speeds up development work by avoiding the repetitive tasks otherwise required to obtain permission. Next, we'll explore additional (and sometimes unexpected) benefits of cATO.

Increase Velocity of System Deployment

CI/CD systems and the DevSecOps design pattern were created to increase the velocity at which new software can be deployed from development to production. On top of that, Continuous ATOs can be more easily scaled to accommodate changes in the system or the addition of new systems, thanks to the automation and flexibility offered by DevSecOps environments.

Reduce Time and Complexity to Achieve an ATO

With the cATO approach, you can build a system to automate the process of generating the artifacts to achieve ATO rather than manually producing them every three years. This automation in DevSecOps pipelines helps in speeding up the ATO process, as it can generate the artifacts needed for the AO to make a risk determination. This reduces the time spent on manual reviews and approvals. Much of the same information will be requested for each ATO, and there will be many overlapping security controls. Designing the DevSecOps pipeline to produce the unique authorization package for each ATO from the corpus of data and information available can lead to increased efficiency via automation and re-use.
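
As one concrete illustration, a pipeline stage can emit a fresh SBOM for every build so the evidence corpus accumulates automatically. The sketch below uses Syft, Anchore's open source SBOM generator; the image name and artifact directory are hypothetical.

```python
# Emit a fresh SBOM for every build so authorization evidence accumulates
# automatically. Uses Syft, Anchore's open source SBOM generator; the
# image name and artifact directory are hypothetical.
import pathlib
import subprocess
from datetime import datetime, timezone

IMAGE = "registry.example.com/app:1.2.3"  # hypothetical image
ARTIFACT_DIR = pathlib.Path("ato-artifacts")
ARTIFACT_DIR.mkdir(exist_ok=True)

stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
sbom_path = ARTIFACT_DIR / f"sbom-{stamp}.spdx.json"

# Each build contributes one SBOM to the corpus the AO can draw on.
with sbom_path.open("w") as fh:
    subprocess.run(["syft", IMAGE, "-o", "spdx-json"], stdout=fh, check=True)

print(f"wrote {sbom_path}")
```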

No Need to Reinvent AND Maintain the Wheel

When you inherit the security properties of the DevSecOps reference design or utilize an approved managed platform, the provider shoulders the burden. Someone else has already done the hard work of creating a framework of tools that integrate to achieve cATO; re-use their effort to achieve cATO for your own system. 

Alternatively, you can utilize a platform provider, such as Platform One, Kessel Run, Black Pearl, or the Army Software Factory to outsource the infrastructure management.

Learn how Anchore helped Platform One achieve cATO and become the preeminent DoD software factory.

Myths & Realities

Myth or Reality?: DevSecOps can be at Odds with cATO

Myth! DevSecOps in the DoD and civilian government agencies is still the domain of early adopters. The strict security and compliance requirements — the ATO in particular — of the federal government make it a fertile ground for DevSecOps adoption. Government leaders such as Nicolas Chaillan, former chief software officer for the United States Air Force, are championing DevSecOps standards and best practices that the DoD, federal government agencies, and even the commercial sector can use to launch their own DevSecOps initiatives.

One goal of DevSecOps is to develop and deploy applications as quickly as possible. An ATO is a bureaucratic morass if you're not proactive. When you build a DevSecOps toolchain that automates container vulnerability scanning and other areas critical to ATO compliance controls, you can put in the tools, reporting, and processes to test against ATO controls while still in your development environment.

DevSecOps, much like DevOps, suffers from a marketing problem as vendors seek to spin the definitions and use cases that best suit their products. The DoD and government agencies need more champions like Chaillan in government service who can speak to the benefits of DevSecOps in a language that government decision-makers can understand.

Myth or Reality?: Agencies need to adopt DevSecOps to prepare for the cATO 

Reality! One of the cATO requirements is to demonstrate that you are aligned with an Approved DevSecOps Reference Design. The "shift left" story that DevSecOps espouses in vendor marketing literature and sales decks isn't necessarily one size fits all. Likewise, DoD and federal agency DevSecOps operates at a different level. 

Using DevSecOps to prepare for a cATO requires upfront analysis and planning with your development and operations teams' participation. Government program managers need to collaborate closely with their contractor teams to put the processes and tools in place upfront, including container vulnerability scanning and reporting. Break down your Continuous Integration/Continuous Delivery (CI/CD) toolchain with an eye on how you can prepare your software components for continuous authorization.

Myth or Reality?: You need to have SBOMs for everything in your environment

Myth! However, you need to be able to show your Authorizing Official (AO) that you have "the ability to conduct active cyber defense in order to respond to cyber threats in real time." If a zero-day vulnerability (like log4j) comes along, you need to demonstrate that you are equipped to identify the impact on your environment and remediate the issue quickly. Showing your AO that you manage SBOMs and can quickly query them to respond to threats will have you in the clear for this requirement.
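
To make that concrete, here is a minimal sketch of such a query; it assumes one CycloneDX JSON SBOM per application sitting in a local sboms/ directory, which is an illustrative layout rather than how any particular product stores SBOMs.

```python
# Query a local store of SBOMs for a package, e.g. during a zero-day
# response. Assumes one CycloneDX JSON SBOM per application in ./sboms/;
# the directory layout is an illustrative assumption.
import json
import pathlib

def find_package(sbom_dir: str, package: str) -> list[tuple[str, str]]:
    """Return (sbom file, version) pairs for components matching `package`."""
    hits = []
    for path in pathlib.Path(sbom_dir).glob("*.json"):
        sbom = json.loads(path.read_text())
        for component in sbom.get("components", []):
            if package in component.get("name", ""):
                hits.append((path.name, component.get("version", "unknown")))
    return hits

for sbom_file, version in find_package("sboms", "log4j"):
    print(f"{sbom_file}: log4j {version}")
```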

Myth or Reality?: cATO is about technology and process only

Myth! As more elements of the DoD and civilian federal agencies push toward the cATO to support their missions, and a DevSecOps culture takes hold, it’s reasonable to expect that such a culture will influence the cATO process. Central tenets of a DevSecOps culture include:

  • Collaboration
  • Infrastructure as Code (IaC)
  • Automation
  • Monitoring

Each of these tenets contributes to the success of a cATO. Collaboration between the government program office, the contractor's project team leadership, the third-party assessment organization (3PAO), and the FedRAMP program office is the foundation of a well-run authorization. IaC provides the tools to manage infrastructure such as virtual machines, load balancers, networks, and other infrastructure components using practices similar to how DevOps teams manage software code.

Myth or Reality?: Reusable Components Make a Difference in cATO

Reality! The growth of containers and other reusable components couldn't come at a better time, as the Department of Defense (DoD) and civilian government agencies push to the cloud, driven by federal cloud initiatives and demands from their constituents.

Reusable components save time and budget when it comes to authorization because you can authorize once and use the authorized components across multiple projects. Look for more news about reusable components coming out of Platform One and other large-scale government DevSecOps and cloud projects that can help push this development model forward to become part of future government cloud procurements.

How Anchore Helps Organizations Implement the Continuous ATO Process

Anchore’s comprehensive suite of solutions is designed to help federal agencies and federal system integrators meet the three requirements of cATO.

Ongoing Visibility

Anchore Enterprise can be integrated into a build pipeline, image registry and runtime environment in order to provide a comprehensive view of the entire software development lifecycle (SDLC). On top of this, Anchore provides out-of-the-box policy packs mapped to NIST 800-53 controls for RMF, ensuring a robust continuous monitoring strategy. Real-time notifications alert users when images are out of compliance, helping agencies maintain ongoing visibility into their system’s security posture.
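
As a sketch of how those notifications might be consumed, the snippet below runs a small webhook receiver that flags failing images; the payload fields are assumptions for illustration, not Anchore Enterprise's exact notification schema.

```python
# Minimal webhook receiver for compliance notifications. The payload
# fields ("status", "image") are hypothetical; adapt them to the actual
# notification schema configured in your environment.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class NotificationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        if event.get("status") == "fail":
            # e.g. page the on-call team or open a ticket here
            print(f"ALERT: image {event.get('image')} is out of compliance")
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), NotificationHandler).serve_forever()
```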

Active Cyber Defense

While Anchore Enterprise is integrated into the decentralized components of the SDLC, it provides a centralized database to track and monitor every component of software in all environments. This centralized datastore enables agencies to quickly triage zero-day vulnerabilities with a single database query. Remediation plans for impacted application teams can be drawn up in hours rather than days or weeks. By setting rules that flag anomalous behavior, such as image drift or blacklisted packages, Anchore supports an active cyber defense strategy for federal systems.

Adoption of an Approved DevSecOps Reference Design

Anchore aligns with the DoD DevSecOps Reference Design by offering solutions for:

  • Container hardening (Anchore DISA policy pack)
  • Container policy enforcement (Anchore Enterprise policies)
  • Container image selection (Iron Bank)
  • Artifact storage (Anchore image registry integration)
  • Release decision-making (Anchore Kubernetes Admission Controller)
  • Runtime policy monitoring (Anchore Kubernetes Automated Inventory)

Anchore is specifically mentioned in the DoD Container Hardening Process Guide, and the Iron Bank relies on Anchore technology to scan and enforce policy that ensures every image in Iron Bank is hardened and secure.

Final Thoughts

Continuous Authorization To Operate (cATO) is a vital framework for federal system integrators and agencies to maintain a strong security posture in the face of evolving cybersecurity threats. By ensuring ongoing visibility, active cyber defense, and the adoption of an approved DevSecOps reference design, software engineers and developers can effectively protect their systems in real-time. Anchore’s comprehensive suite of solutions is specifically designed to help meet the three requirements of cATO, offering a robust, secure, and agile approach to stay ahead of cybersecurity threats. 

By partnering with Anchore, federal system integrators and federal agencies can confidently navigate the complexities of cATO and ensure their systems remain secure and compliant in a rapidly changing cyber landscape. If you’re interested to learn more about how Anchore can help your organization embed DevSecOps tooling and principles into your software development process, click below to read our white paper.

The Power of Policy-as-Code for the Public Sector

As the public sector and businesses face unprecedented security challenges in light of software supply chain breaches and the move to remote, and now hybrid, work, the time for policy-as-code is now.

Here’s a look at the current and future states of policy-as-code and the potential it holds for security and compliance in the public sector:

What is Policy-as-Code?

Policy-as-code is the practice of writing code to manage the policies you create for container security and other related security concerns. Your IT staff can automate those policies to support policy compliance throughout your DevSecOps toolchain and production systems. Programmers express policy-as-code in a high-level language and store it in text files.
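
A deliberately tiny illustration of the idea: the policy below lives as a plain-text (JSON) document that can sit in version control, and a short program evaluates a system's configuration against it. The rule names and fields are invented for this example.

```python
# Toy policy-as-code example: the policy is plain text (JSON) that can be
# version-controlled, and code evaluates configurations against it. Rule
# names and fields are invented for illustration.
import json

POLICY_TEXT = """
{
  "rules": [
    {"id": "no-root-user", "setting": "user",         "forbidden": "root"},
    {"id": "no-ssh-port",  "setting": "exposed_port", "forbidden": "22"}
  ]
}
"""

def evaluate(policy: dict, system_config: dict) -> list[str]:
    """Return the ids of rules that the configuration violates."""
    return [
        rule["id"]
        for rule in policy["rules"]
        if system_config.get(rule["setting"]) == rule["forbidden"]
    ]

policy = json.loads(POLICY_TEXT)
print(evaluate(policy, {"user": "root", "exposed_port": "443"}))
# -> ['no-root-user']
```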

Your agency is most likely getting exposure to policy-as-code through cloud services providers (CSPs). Amazon Web Services (AWS) offers policy-as-code via the AWS Cloud Development Kit. Microsoft Azure supports policy-as-code through Azure Policy, a service that provides both built-in and user-defined policies across categories that map the various Azure services such as Compute, Storage, and Azure Kubernetes Services (AKS).

Benefits of Policy-as-Code

Here are some benefits your agency can realize from policy-as-code:

  • Capturing the information and logic behind your security and compliance policies as code removes the risk of "oral history," where sysadmins may or may not pass down policy information to their successors during a contract transition.
  • When you render security and compliance policies as code in plain text files, you can use various DevSecOps and cloud management tools to automate the deployment of policies into your systems.
  • Policy-as-code provides guardrails for your automated systems. As your agency moves to the cloud, the number of automated systems only grows, and a responsible growth strategy protects them from performing dangerous actions. Policy-as-code is a well-suited method for verifying the activities of your automated systems.
  • A longer-term goal would be to manage your compliance and security policies in your version control system of choice, with all the benefits of history, diffs, and pull requests for managing software code.
  • You can now test policies with automated tools in your DevSecOps toolchain (see the sketch after this list).
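
For instance, the toy evaluator from the sketch above can be exercised with pytest; the test file below inlines the evaluator so it stands alone.

```python
# Because policies are code, they can be tested like code. This pytest
# file inlines the toy evaluator from the earlier sketch so it stands
# alone; run it with `pytest test_policy.py`.
def evaluate(policy: dict, system_config: dict) -> list[str]:
    """Return the ids of rules that the configuration violates."""
    return [
        rule["id"]
        for rule in policy["rules"]
        if system_config.get(rule["setting"]) == rule["forbidden"]
    ]

POLICY = {"rules": [
    {"id": "no-root-user", "setting": "user", "forbidden": "root"},
]}

def test_root_user_is_flagged():
    assert evaluate(POLICY, {"user": "root"}) == ["no-root-user"]

def test_compliant_config_passes():
    assert evaluate(POLICY, {"user": "app"}) == []
```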

Public Sector Policy Challenges

As your agency moves to the cloud, it faces new challenges with policy compliance while adjusting to novel ways of managing and securing IT infrastructure:

Keeping Pace with Government-Wide Compliance & Cloud Initiatives

FedRAMP compliance has become a domain specialty unto itself. While the United States federal government maintains control over the policies behind FedRAMP, along with its future updates and changes, FedRAMP compliance has become its own industry, with specialized consultants and toolsets that promise to get an agency's cloud application through the FedRAMP approval process.

As government cloud initiatives such as Cloud Smart become more important, the more your agency can automate the management and testing of security policies, the better. Automation reduces human error because it does away with the manual and tedious management and testing of security policies.

Automating Cloud Migration and Management

Large cloud initiatives bring with them the automation of cloud migration and management. Cloud-native development projects that accompany cloud initiatives need to consider continuous compliance and security solutions to protect their software containers.

Maintaining Continuous Transparency and Accountability

Continuous transparency is fundamental to FedRAMP and other government compliance programs. Automation and reporting are two fundamental building blocks. The stakes for reporting are only going to increase as the mandates of the Executive Order on Improving the Nation’s Cybersecurity become reality for agencies.

Achieving continuous transparency and accountability requires that an enterprise have the right tools, processes, and frameworks in place to monitor, report, and manage employee behaviors throughout the application delivery life cycle.

Securing the Agency Software Supply Chain

Government agencies are multi-vendor environments with heterogeneous IT infrastructure, including cloud services, proprietary tools, and open source technologies. The recent release of the Container Platform SRG is going to drive more requirements for the automation of container security across Department of Defense (DoD) projects.

Looking to learn more about how to utilize a policy-based security posture to meet DoD compliance standards like cATO or CMMC? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.

Policy-as-Code: Current and Future States

The future of policy-as-code in government could go in two directions. The same technology principles that apply to security and compliance policies can also render any government policy as code. An example is the work 18F is prototyping for SNAP (Supplemental Nutrition Assistance Program) food stamp eligibility.

Policy-as-code can also serve as another automation tool for FedRAMP and Security Technical Implementation Guide (STIG) testing as more agencies move their systems to the cloud. Look for the backend tools that make this happen to gradually improve over the next few years.

Managing Cultural and Procurement Barriers

Compliance and security are integral elements of federal agency working life, whether it’s the DoD supporting warfighters worldwide or civilian government agencies managing constituent data to serve the American public better.

The concept of policy-as-code brings to mind being able to modify policy bundles on the fly and pushing changes into your DevSecOps toolchain via automation. While theoretically possible with policy-as-code in a DevSecOps toolchain, the reality is much different. Industry standards and CISO directives govern policy management at a much slower and measured cadence than the current technology stack enables.

API Integration

API integration enables you to integrate your policy-as-code solution into third-party tools such as Splunk and other operational support systems that your organization may already use as standards.
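
For example, a job in your toolchain might forward each policy evaluation result to Splunk through its HTTP Event Collector (HEC); in the sketch below, the host, token, and event fields are placeholders.

```python
# Forward a policy evaluation result to Splunk's HTTP Event Collector.
# The HEC endpoint path and Authorization header follow Splunk's HEC
# conventions; the host, token, and event contents are placeholders.
import requests

SPLUNK_HEC = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "REPLACE-WITH-HEC-TOKEN"

evaluation = {  # illustrative policy-as-code result
    "image": "registry.example.com/app:1.2.3",
    "policy_bundle": "dod-scanning-policy",
    "result": "fail",
}

resp = requests.post(
    SPLUNK_HEC,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json={"event": evaluation, "sourcetype": "_json"},
    timeout=10,
)
resp.raise_for_status()
```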

Automation

It’s best to avoid manual intervention for managing and testing compliance policies. Automation should be a top requirement for any policy-as-code solution, especially if your agency is pursuing FedRAMP or NIST certification for its cloud applications.

Enterprise Reporting

Internal and external compliance auditors bring with them varying degrees of reporting requirements. It’s essential to have a policy-as-code solution that can support a full range of reporting requirements that your auditors and other stakeholders may present to your team.

Enterprise reporting requirements range from customizable GUI reporting dashboards to APIs that enable your developers to integrate policy-as-code tools into your DevSecOps team’s toolchain.

Vendor Backing and Support

As your programs venture into policy compliance, failing a compliance audit can be a costly mistake. You want to choose a policy-as-code solution for your enterprise compliance requirements with a vendor behind it for technical support, service level agreements (SLAs), software updates, and security patches.

Vendor backing also matters for hands-on implementation help: policy-as-code isn't a technology to support using only your own internal IT staff (at least in the beginning).

With policy-as-code being a newer technology option, a fee-based solution backed by a vendor also gets you access to their product management. As a customer, you want a vendor that will let you see their product roadmap and where the technology is headed.

Interested to see how the preeminent DoD Software Factory Platform used a policy-based approach to software supply chain security in order to achieve a cATO and allow any DoD programs that built on their platform to do the same? Read our case study or watch our on-demand webinar with Major Camdon Cady.

Enforcing the DoD Container Image and Deployment Guide with Anchore Federal

The latest version of the DoD Container Image and Deployment Guide details technical and security requirements for container image creation and deployment within a DoD production environment. Sections 2 and 3 of the guide include security practices that teams must follow to limit the footprint of security flaws during the container image build process. These sections also discuss best security practices and correlate them to the corresponding security control families within the Risk Management Framework (RMF) commonly used by cybersecurity teams across the DoD.

Anchore Federal is a container scanning solution used to validate DoD compliance and security standards, such as continuous authorization to operate (cATO), across images, as explained in the DoD Container Hardening Process Guide. Anchore's policy-first approach places policy where it belongs: at the forefront of the development lifecycle, assessing compliance and security issues in a shift-left approach. Scanning policies within Anchore are fully customizable based on specific mission needs, providing more in-depth insight into compliance irregularities that may exist within a container image. This level of granularity is achieved through specific security gates and triggers that generate automated alerts, allowing teams to enforce the best practices discussed in Section 2 of the Container Image and Deployment Guide as their developers build.

Anchore Federal uses a specific DoD scanning policy that enforces a wide array of gates and triggers that map to the DoD Container Image and Deployment Guide's security practices. For example, you can configure the Dockerfile gate and its corresponding triggers to monitor for security issues such as privileged access. You can also configure the Dockerfile gate to flag unauthorized exposed ports, validate that images are built from approved base images, and check for unauthorized disclosure of secrets and sensitive files, among others.
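
A simplified sketch of what such rules can look like follows, using the gate and trigger names from the prose. The exact schema of a production Anchore policy bundle differs (parameters, for instance, are structured objects), so treat these fields as illustrative.

```python
# Simplified rendition of DoD-style scanning rules, following the gate and
# trigger names mentioned above. A real Anchore policy bundle expresses
# parameters as structured objects; this flat form is for illustration.
dockerfile_rules = [
    {   # stop the build when the image exposes a blacklisted port
        "gate": "dockerfile",
        "trigger": "exposed_ports",
        "params": {"ports": "22,23", "type": "blacklist"},
        "action": "STOP",
    },
    {   # warn when the container would run with privileged (root) access
        "gate": "dockerfile",
        "trigger": "effective_user",
        "params": {"users": "root", "type": "blacklist"},
        "action": "WARN",
    },
    {   # stop when a secret or sensitive file is discovered in the image
        "gate": "secret_scans",
        "trigger": "content_regex_checks",
        "params": {},
        "action": "STOP",
    },
]
```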

Anchore Federal’s DoD scanning policy is already enabled to validate the detailed list of best practices in Section 2 of the Container Image and Deployment Guide. 

Looking to learn more about how to achieve container hardening at DoD levels of security? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.

Next Steps

Anchore Federal is a battle-tested solution that has been deployed to secure DoD’s most critical workloads. Anchore Federal exists to provide cleared professional services and software to DoD mission partners and the US Intelligence Community in building their DevSecOps environments. Learn more about how Anchore Federal supports DoD missions.

A Policy Based Approach to Container Security & Compliance

At Anchore, we take a preventative, policy-based compliance approach, specific to organizational needs. Our philosophy of scanning and evaluating Docker images against user-defined policies as early as possible in the development lifecycle greatly reduces the chance of vulnerable, non-compliant images making their way into trusted container registries and production environments.

But what do we mean by ‘policy-based compliance’? And what are some of the best practices organizations can adopt to help achieve their own compliance needs? In this post, we will first define compliance and then cover a few steps development teams can take to help to bolster their container security.

An Example of Compliance

Before we define ‘policy-based compliance’, it helps to gain a solid understanding of what compliance means in the world of software development. Generally speaking, compliance is a set of standards for recommended security controls, laid out by a particular agency or industry, that an application must adhere to. An example of such an agency is the National Institute of Standards and Technology, or NIST. NIST is a non-regulatory government agency that develops technology, metrics, and standards to drive innovation and economic competitiveness at U.S.-based organizations in the science and technology industry. Companies providing products and services to the federal government are oftentimes required to meet the security mandates set by NIST. An example of one of these documents is NIST SP 800-218, the Secure Software Development Framework (SSDF), which specifies the security controls necessary to ensure a software development environment is secure and produces secure code.

What do we mean by ‘Policy-based’?

Now that we have a definition and an example, we can discuss the role policy plays in achieving compliance. In short, policy-based compliance means adhering to a set of compliance requirements via customizable rules defined by a user. In some cases, security software tools will contain a policy engine that allows development teams to create rules that correspond to a particular security concern addressed in a compliance publication.

Looking to learn more about how to utilize a policy-based security posture to meet DoD compliance standards like cATO or CMMC? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.

How can Organizations Achieve Compliance in Containerized Environments?

Here at Anchore, our focus is helping organizations secure their container environments by scanning and analyzing container images. Oftentimes, our customers come to us for help achieving their compliance requirements, and we can often point them to our policy engine. Anchore policies are user-defined checks that are evaluated against an analyzed image. A best practice for implementing these checks is as a step in CI/CD. By adding an Anchore image scanning step to a CI tool like Jenkins or GitLab, development teams add a layer of governance to their build pipeline, as the sketch below illustrates.
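
Here is a minimal sketch of such a gating step, shown with Grype, Anchore's open source scanner, so the example stays self-contained; the image tag is a placeholder, and Anchore Enterprise CI integrations apply the same fail-the-build principle.

```python
# Minimal CI gating step: scan the image just built and let the exit code
# decide whether the pipeline proceeds. Shown with Grype so the example
# is self-contained; the image tag is a placeholder.
import subprocess
import sys

IMAGE = "registry.example.com/app:build-1234"  # placeholder tag

# --fail-on makes Grype exit non-zero at or above the given severity,
# which stops the CI job and keeps the image out of the registry.
scan = subprocess.run(["grype", IMAGE, "--fail-on", "high"])
sys.exit(scan.returncode)
```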

Complete Approach to Image Scanning

Vulnerability scanning

Adding image scanning against a list of CVEs to a build pipeline allows developers to be proactive about security, as they get a near-immediate feedback loop on potentially vulnerable images. Anchore image scanning will identify any known vulnerabilities in the container images, enforcing a shift-left paradigm in the development lifecycle. Once vulnerabilities have been identified, reports can be generated listing information about the CVEs and vulnerable packages within the images. In addition, Anchore can be configured to send webhooks to specified endpoints when new CVEs are published that impact an image that has been previously scanned. At Anchore, we’ve seen integrations with Slack or Jira that alert teams or file tickets automatically when vulnerabilities are discovered.

Adding governance

Once an image has been analyzed and its content has been discovered, categorized, and processed, the resulting data can be evaluated against a user-defined set of rules to give a final pass or fail recommendation for the image. It is typically at this stage that security and DevOps teams want to add a layer of control to the images being scanned in order to make decisions on which images should be promoted into production environments.

Anchore policy bundles (structured as JSON documents) are the unit of policy definition and evaluation. A user may create multiple policy bundles; however, only one can be marked as ‘active’ for evaluation. A policy bundle is made up of a set of rules used to evaluate an image. These rules can define checks against an image for things such as:

  • Security vulnerabilities
  • Package whitelists and blacklists
  • Configuration file contents
  • Presence of credentials in an image
  • Image manifest changes
  • Exposed ports

Anchore policies return a pass or fail decision result.
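
To give a feel for the overall shape, here is a condensed sketch of a bundle with a single vulnerability rule; the field names are abbreviated for illustration, and Anchore's documentation remains the authoritative schema.

```python
# Condensed sketch of a policy bundle's shape: policies hold rules,
# whitelists carve out exceptions, and mappings select which policy
# applies to which images. Field names are abbreviated for illustration.
policy_bundle = {
    "id": "example-bundle",
    "version": "1_0",
    "policies": [{
        "id": "default-policy",
        "rules": [{
            # fail the evaluation on high (or worse) severity vulnerabilities
            "gate": "vulnerabilities",
            "trigger": "package",
            "params": {"severity": "high", "severity_comparison": ">="},
            "action": "STOP",
        }],
    }],
    "whitelists": [],
    "mappings": [{
        # apply default-policy to every image from every registry
        "registry": "*",
        "repository": "*",
        "policy_id": "default-policy",
    }],
}
# Evaluating an image against the active bundle yields the final
# pass/fail decision described above.
```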

Putting it Together with Compliance

Given the variance of compliance needs across different enterprises, a flexible and robust policy engine becomes a necessity for organizations needing to adhere to one or many sets of standards. In addition, managing and securing container images in CI/CD environments can be challenging without the proper workflow. However, with Anchore, development and security teams can harden their container security posture by adding an image scanning step to their CI, reporting back on CVEs, and fine-tuning policies to meet compliance requirements. With compliance checks in place, only container images that meet the standards laid out by a particular agency or industry will be allowed to make their way into production-ready environments.

Conclusion

Taking a policy-based compliance approach is a multi-team effort. Developers, testers, and security engineers should be in constant collaboration on policy creation, CI workflow, and notifications/alerts. With all of these aspects in check, compliance can simply become part of application testing and overall quality and product development. Most importantly, it allows organizations to create and ship products with a much higher level of confidence, knowing that the appropriate methods and tooling are in place to meet industry-specific compliance requirements.

Interested to see how the preeminent DoD Software Factory Platform used a policy-based approach to software supply chain security in order to achieve a cATO and allow any DoD programs that built on their platform to do the same? Read our case study or watch our on-demand webinar with Major Camdon Cady.