4 Ways to Prepare your Containers for the STIG Process
The Security Technical Implementation Guide (STIG) is a Department of Defense (DoD) technical guidance standard that captures the cybersecurity requirements for a specific product, such as a cloud application going into production to support the warfighter. System integrators (SIs), government contractors, and independent software vendors know the STIG process as a well-governed process that all of their technology products must pass. The Defense Information Systems Agency (DISA) released the Container Platform Security Requirements Guide (SRG) in December 2020 to direct how software containers go through the STIG process.
STIGs are notorious for their complexity and the hurdle that STIG compliance poses for technology project success in the DoD. Here are some tips to help your team prepare for your first STIG or to fine-tune your existing internal STIG processes.
4 Ways to Prepare for the STIG Process for Containers
Here are four ways to prepare your teams for containers entering the STIG process:
1. Provide your Team with Container and STIG Cross-Training
DevSecOps and containers, in particular, are still gaining ground in DoD programs. You may very well find your team in a situation where your cybersecurity/STIG experts may not have much container experience. Likewise, your programmers and solution architects may not have much STIG experience. Such a situation calls for some manner of formal or informal cross-training for your team on at least the basics of containers and STIGs.
Look for ways to provide your cybersecurity specialists involved in the STIG process with training about containers if necessary. There are several commercial and free training options available. Check with your corporate training department to see what resources they might have available, such as seats with online training vendors like A Cloud Guru and Cloud Academy.
There’s a lot of out-of-date and conflicting information about the STIG process on the web today. System integrators and government contractors need to build STIG expertise across their DoD project teams to cut through such noise.
Including STIG expertise as an essential part of your cybersecurity team is the first step. Contract requirements may dictate this proficiency, but it only helps if your organization can build a “bench” of STIG experts.
Here are three tips for building up your STIG talent base:
- Make STIG experience a “plus” or “bonus” in your cybersecurity job requirements for roles, even if they may not be directly billable to projects with STIG work (at least in the beginning)
- Develop internal training around STIG practices led by your internal experts and make it part of employee onboarding and DoD project kickoffs
- Create a “reach back” channel from your project teams to get STIG expertise from other parts of your company, such as corporate and other project teams with STIG expertise, to get support for any issues and challenges with the STIG process
Depending on the size of your company, the clearance requirements of the project, and other situational factors, you might be tempted to bring in outside contractors to shore up your STIG expertise internally. For example, the Container Platform Security Requirements Guide (SRG) is still new, so it makes sense to bring in an outside contractor with some experience managing containers through the STIG process. If you go this route, prioritize knowledge transfer from the contractor to your internal team. Otherwise, their container and STIG knowledge walks out the door at the end of the contract term.
2. Validate your STIG Source Materials
When researching the latest STIG requirements, you need to validate the source materials. There are many vendors and educational sites that publish STIG content. Some of that content is outdated and incomplete. It’s always best to go straight to the source. DISA provides authoritative and up-to-date STIG content online that you should consider as your single source of truth on the STIG process for containers.
3. Make the STIG Viewer part of your Approved Desktop
Working on DoD and other public sector projects requires secure environments for developers, solution architects, cybersecurity specialists, and other team members. The STIG Viewer should become a part of your DoD project team’s secure desktop environment. This saves your DoD security teams the extra step of putting in a service desk ticket to request a STIG Viewer installation.
4. Look for Tools that Automate Time-Intensive Steps in the STIG Process
The STIG process is time-intensive, especially the documentation of policy controls. Look for tools that will help you automate compliance checks before you proceed into an audit of your STIG controls. The right tool can save you from audit surprises and the rework that slows down your application going live.
Parting Thought
The STIG process for containers is still very new to DoD programs. Being proactive and preparing your teams upfront in tandem with ongoing collaboration are the keys to navigating the STIG process for containers.
Learn more about putting your containers through the STIG process in our new white paper entitled Navigating STIG Compliance for Containers!
Unpacking the Power of Policy at Scale in Anchore
Generating a software bill of materials (SBOM) is starting to become common practice. Is your organization using them to their full potential? Here are a few questions Anchore can help you answer with SBOMs and the power of our policy engine:
- How far off are we from meeting the security requirements that Iron Bank, NIST, CIS, and DISA put out around container images?
- How can I standardize the way our developers build container images to improve security without disrupting the development team’s output?
- How can I best prioritize this endless list of security issues for my container images?
- I’m new to containers. Where do I start on securing them?
If any of those questions still need answering at your organization and you have five minutes, you’re in the right place. Let’s dive in.
If you’re reading this, you probably already know that Anchore creates developer tools to generate SBOMs, and has been doing so since 2016. Beyond SBOM generation, Anchore truly shines when it comes to its policy capabilities. Every company operates differently; some need to meet strict compliance standards while others are focused on refining their software development practices for enhanced security. No matter where you are in your container security journey today, Anchore’s policy framework can help improve your security practices.
Anchore Enterprise takes a tailored approach to policy and enforcement: whether you’re a healthcare provider abiding by stringent regulations or a startup eager to fortify its digital defenses, Anchore has you covered. Our granular controls allow teams to craft policies that align with their security goals.
Exporting Policy Reports with Ease
Anchore also has a nifty command line tool called anchorectl that allows you to grab SBOMs and the policy results related to those SBOMs. There are a lot of cool things you can do with a little bit of scripting and all the data that Anchore Enterprise stores. We are going to cover one example in this blog.
Once Anchore has created and stored an SBOM for a container image, you can quickly get policy results related to that image. The following anchorectl command evaluates an image against the docker-cis-benchmark policy bundle:
anchorectl image details <image-id> -p docker-cis-benchmark
That command will return the policy result in a few seconds. Let’s say your organization develops 100 images and you want to meet the CIS benchmark standard. You wouldn’t want to assess each of these images individually; that sounds exhausting.
To solve this problem, we have created a script that can iterate over any number of images, merge the results into a single policy report, and export that into a CSV file. This allows you to make strategic decisions about how you can most effectively move toward compliance with the CIS benchmark (or any standard).
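For illustration, here is a minimal Python sketch of what such a script might look like, assuming anchorectl is installed and configured. The image list is a hypothetical stand-in, and the output is written tab-separated to match the aggregated_output.tsv file used later in this post; the actual policy-report.py script may differ in its details.

import csv
import subprocess

# Hypothetical image list; substitute the images in your own registry.
IMAGES = ["registry.example.com/app-a:1.0", "registry.example.com/app-b:2.3"]

with open("aggregated_output.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["image", "policy_output"])
    for image in IMAGES:
        # Evaluate each image against the CIS benchmark bundle,
        # reusing the anchorectl command shown above.
        result = subprocess.run(
            ["anchorectl", "image", "details", image, "-p", "docker-cis-benchmark"],
            capture_output=True, text=True,
        )
        writer.writerow([image, result.stdout.strip()])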
In this example, we ran the script against 30 images in our Anchore deployment. Now we can look holistically at how far off we are from CIS compliance. Here are a few metrics that stand out:
- 26 of the 30 images are running as ‘root’
- 46.9% of our total vulnerabilities have fixes available (4,978 of 10,611)
- ADD instructions are being used in 70% of our images
- Health checks are missing in 80% of our images
- 14 secrets (all from the same application team)
- 1 malware hit (Cryptominer Casey is at it again)
As a security team member, I didn’t write any of this code myself, which means I need to work with my developer colleagues on the product and application teams to clear up these security issues. Usually this means an email that educates my colleagues on how to utilize health checks, prefer COPY over ADD in Dockerfiles, declare a non-privileged user instead of root, and upgrade packages with fixes available (e.g., via Dependabot). Finally, I would prioritize investigating how that malware made its way into that image.
This example illustrates how storing SBOMs and applying policy rules against them at scale can streamline your path to your container security goals.
Visualizing Your Aggregated Policy Reports
While this raw data is useful in and of itself, there are times when you may want to visualize it in a way that is easier to understand. While Anchore Enterprise does provide some dashboarding capabilities, it does not aim to be a versatile dashboarding tool. This is where an observability vendor comes in handy.
In this example, I’ll be using New Relic because it provides a free tier that you can sign up for and begin using immediately. However, other providers such as Datadog and Grafana would also work quite well for this use case.
Importing your Data
- Download the tsv-to-json.py script.
- Save the data produced by the policy-report.py script as a TSV file. We use tabs as the separator because commas appear in many of the items contained in the report.
- Run the tsv-to-json.py script against the TSV file:
python3 tsv-to-json.py aggregated_output.tsv > test.json
- Sign up for a New Relic account here
- Find your New Relic Account ID and License Key
- Your New Relic Account ID can be seen in your browser’s address bar upon logging in to New Relic, and your New Relic License Key can be found on the right hand side of the screen upon initial login to your New Relic account.
- Use curl to push the data to New Relic:
gzip -c test.json | curl \
-X POST \
-H "Content-Type: application/json" \
-H "Api-Key: <YOUR_NEWRELIC_LICENSE_KEY>" \
-H "Content-Encoding: gzip" \
https://insights-collector.newrelic.com/v1/accounts/<YOUR_NEWRELIC_ACCOUNT_ID>/events \
--data-binary @-
Visualizing Your Data
New Relic uses the New Relic Query Language (NRQL) to perform queries and render charts based on the resulting data set. The tsv-to-json.py script you ran earlier converted your TSV file into a JSON file compatible with New Relic’s event data type. You can think of each collection of events as a table in a SQL database. The tsv-to-json.py script automatically creates an event type for you, combining the string “Anchore” with a timestamp.
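If you’re curious what the conversion involves, here is a rough sketch of the core logic, assuming New Relic’s event format (a JSON array of objects, each carrying an eventType field). The real tsv-to-json.py may differ in its details.

import csv
import json
import sys
import time

# Event type name: the string "Anchore" plus a timestamp, as described above.
event_type = f"Anchore{int(time.time())}"

events = []
with open(sys.argv[1], newline="") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        row["eventType"] = event_type  # required by New Relic's event API
        events.append(row)

json.dump(events, sys.stdout)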
To create a dashboard in New Relic containing charts, you’ll need to write some NRQL queries. Here is a quick example:
FROM Anchore1698686488 SELECT count(*) FACET severity
This query will count the total number of entries in the event type named Anchore1698686488 and group them by the associated vulnerability’s severity. You can experiment with creating your own, or start by importing a template we have created for you here.
Wrap-Up
The security data that your tools create is only as good as the insights you are able to derive from it. In this blog post, we covered a way to help security practitioners turn a mountain of security data into actionable and prioritized security insights. That can help your organization improve its security posture and meet compliance standards more quickly. That said, this workflow depends on your already being an Anchore Enterprise customer.
Looking to learn more about how to utilize a policy-based security posture to meet DoD compliance standards like cATO or CMMC? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories. Get caught up with the content below:
Automated Policy Enforcement for CMMC with Anchore Enterprise
The Cybersecurity Maturity Model Certification (CMMC) is an important program to harden the cybersecurity posture of the defense industrial base. Its purpose is to validate that appropriate safeguards are in place to protect controlled unclassified information (CUI). Many of the organizations that are required to comply with CMMC are Anchore customers. They have the responsibility to protect the sensitive but unclassified data of the US military and government agencies as they support the various missions of the United States.
CMMC 2.0 Levels
- Level 1 Foundational: Safeguard federal contract information (FCI); not critical to national security.
- Level 2 Advanced: This maps directly to NIST Special Publication (SP) 800-171. Its primary goal is to ensure that government contractors are properly protecting controlled unclassified information (CUI).
- Level 3 Expert: This maps directly to NIST Special Publication (SP) 800-172. Its primary goal is to go beyond the base-level security requirements defined in NIST 800-171. NIST 800-172 provides security requirements that specifically defend against advanced persistent threats (APTs).
This is of critical importance as these organizations leverage commonplace DevOps tooling to build their software. Additionally, these large organizations may be working with smaller subcontractors or suppliers who are building software in tandem or in partnership.
For example, imagine a mega-defense contractor working alongside a small mom-and-pop shop to develop software for a classified government system. This raises a number of questions:
- How can my company, as a mega-defense contractor, validate that the software built by my partner is not using blacklisted software packages?
- How can my company validate software supplied to me is free of malware?
- How can I validate that the software supplied to me is in compliance with licensing standards and vulnerability compliance thresholds of my security team?
- How do I validate that the software I’m supplying is compliant not only with NIST 800-171 and CMMC, but also with the compliance standards of my government end user (such as NIST 800-53 or NIST 800-161)?
Validating Security between DevSecOps Pipelines and Software Supply Chain
At major and small contractors alike, everyone has taken steps to build internal DevSecOps (DSO) pipelines. However, the defense industrial base (DIB) runs on daily relationships in which smaller defense contractors supply software to a larger defense contractor, whose program or DSO pipeline consumes and implements that software. With Anchore Enterprise, we can now validate whether that supplied software is compliant with CMMC controls as specified in NIST 800-171.
Looking to learn more about how to achieve CMMC Level 2 or NIST 800-171 compliance? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.
Which Controls does Anchore Enterprise Automate?
3.1.7 – Restrict Non-Privileged Users and Log Privileged Actions
Related NIST 800-53 Controls: AC-6 (10)
Description: Prevent non-privileged users from executing privileged functions and capture the execution of such functions in audit logs.
Implementation: Anchore Enterprise can scan the container manifests to determine if the user is being given root privileges and implement an automated policy to prevent build containers from entering a runtime environment. This prevents a scenario where any privileged functions can be utilized in a runtime environment.
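As a simplified illustration (not Anchore’s actual implementation), a check of this kind boils down to determining the effective user of an image, for example by inspecting its Dockerfile:

def effective_user(dockerfile_text: str) -> str:
    """Return the user set by the last USER instruction, defaulting to root."""
    user = "root"  # Docker defaults to root when no USER instruction is present
    for line in dockerfile_text.splitlines():
        parts = line.strip().split()
        if parts and parts[0].upper() == "USER":
            user = parts[1]
    return user

dockerfile = "FROM alpine:3.19\nRUN apk add --no-cache curl\n"
assert effective_user(dockerfile) == "root"  # this image would fail the policy gate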
3.4.1 – Maintain Baseline Configurations & Inventories
Related NIST 800-53 Controls: CM-2(1), CM-8(1), CM-6
Description: Establish and maintain baseline configurations and inventories of organizational systems (including hardware, software, firmware, and documentation) throughout the respective system development life cycles.
Implementation: Anchore Enterprise provides a centralized inventory of all containers and their associated manifests at each stage of the development pipeline. All manifests, images and containers are automatically added to the central tracking inventory so that a complete list of all artifacts of the build pipeline can be tracked at any moment in time.
3.4.2 – Enforce Security Configurations
Related NIST 800-53 Controls: CM-2 (1) & CM-8(1) & CM-6
Description: Establish and enforce security configuration settings for information technology products employed in organizational systems.
Implementation: Anchore Enterprise scans all container manifest files for security configurations and publishes found vulnerabilities to a centralized database that can be used for monitoring, ad-hoc reporting, alerting and/or automated policy enforcement.
3.4.3 – Monitor and Log System Changes with Approval Process
Related NIST 800-53 Controls: CM-3
Description: Track, review, approve or disapprove, and log changes to organizational systems.
Implementation: Anchore Enterprise provides a centralized dashboard that tracks all changes to applications which makes scheduled reviews simple. It also provides an automated controller that can apply policy-based decision making to either automatically approve or reject changes to applications based on security rules.
3.4.4 – Run Security Analysis on All System Changes
Related NIST 800-53 Controls: CM-4
Description: Analyze the security impact of changes prior to implementation.
Implementation: Anchore Enterprise can scan changes to applications for security vulnerabilities during the build pipeline to determine the security impact of the changes.
3.4.6 – Apply Principle of Least Functionality
Related NIST 800-53 Controls: CM-7
Description: Employ the principle of least functionality by configuring organizational systems to provide only essential capabilities.
Implementation: Anchore Enterprise can scan all applications to ensure that they are uniformly applying the principle of least functionality to individual applications. If an application does not meet this standard then Anchore Enterprise can be configured to prevent an application from being deployed to a production environment.
3.4.7 – Limit Use of Nonessential Programs, Ports, and Services
Related NIST 800-53 Controls: CM-7(1), CM-7(2)
Description: Restrict, disable, or prevent the use of nonessential programs, functions, ports, protocols, and services.
Implementation: Anchore Enterprise can be configured as a gating agent that will scan for specific security violations and prevent these applications from being deployed until the violations are remediated.
3.4.8 – Implement Blacklisting and Whitelisting Software Policies
Related NIST 800-53 Controls: CM-7(4), CM-7(5)
Description: Apply deny-by-exception (blacklisting) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (whitelisting) policy to allow the execution of authorized software.
Implementation: Anchore Enterprise can be configured as a gating agent that will apply a security policy to all scanned software. The policies can be configured in a black- or white-listing manner.
3.4.9 – Control and Monitor User-Installed Software
Related NIST 800-53 Controls: CM-11
Description: Control and monitor user-installed software.
Implementation: Anchore Enterprise scans all software in the development pipeline and records all user-installed software. The scans can be monitored in the provided dashboard. User-installed software can be controlled (allowed or denied) via the gating agent.
3.5.10 – Store and Transmit Only Cryptographically-Protected Passwords
Related NIST 800-53 Controls: IA-5(1)
Description: Store and transmit only cryptographically-protected passwords.
Implementation: Anchore Enterprise can scan for plain-text secrets in build artifacts and prevent exposed secrets from being promoted to the next environment until the violation is remediated. This prevents unauthorized storage or transmission of unencrypted passwords or secrets. See screenshot below to see this protection in action.
3.11.2 – Scan for Vulnerabilities
Related NIST 800-53 Controls: RA-5, RA-5(5)
Description: Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified.
Implementation: Anchore Enterprise is designed to scan all systems and applications for vulnerabilities continuously and alert when any changes introduce new vulnerabilities. See screenshot below to see this protection in action.
3.11.3 – Remediate Vulnerabilities Respective to Risk Assessments
Related NIST 800-53 Controls: RA-5, RA-5(5)
Description: Remediate vulnerabilities in accordance with risk assessments.
Implementation: Anchore Enterprise can be tuned to allow or deny changes based on a risk scoring system.
3.12.2 – Implement Plans to Address System Vulnerabilities
Related NIST 800-53 Controls: CA-5
Description: Develop and implement plans of action designed to correct deficiencies and reduce or eliminate vulnerabilities in organizational systems.
Implementation: Anchore Enterprise automates the process of ensuring all software and systems are in compliance with the security policy of the organization.
3.13.4 – Block Unauthorized Information Transfer via Shared Resources
Related NIST 800-53 Controls: SC-4
Description: Prevent unauthorized and unintended information transfer via shared system resources.
Implementation: Anchore Enterprise can be configured as a gating agent that will scan for unauthorized and unintended information transfer and prevent violations from being transferred between shared system resources until the violations are remediated.
3.13.8 – Use Cryptography to Safeguard CUI During Transmission
Related NIST 800-53 Controls: SC-8
Description: Transmission Confidentiality and Integrity: Implement cryptographic mechanisms to prevent unauthorized disclosure of CUI during transmission unless otherwise protected by alternative physical safeguards.
Implementation: Anchore Enterprise can be configured as a gating agent that will scan for CUI and prevent violations of organization defined policies regarding CUI from being disclosed between systems.
3.14.5 – Periodically Scan Systems and Real-time Scan External Files
Related NIST 800-53 Controls: SI-2
Description: Perform periodic scans of organizational systems and real-time scans of files from external sources as files are downloaded, opened, or executed.
Implementation: Anchore Enterprise can be configured to scan all external dependencies that are built into software and provide information about relevant security vulnerabilities in the software development pipeline. See screenshot below to see this protection in action.
Wrap-Up
In a world increasingly defined by software solutions, the cybersecurity posture of defense-related industries stands paramount. The CMMC, a framework with its varying levels of compliance, underscores the commitment of the defense industrial base to fortify its cyber defenses.
As a multitude of organizations, ranging from the largest defense contractors to smaller mom-and-pop shops, work in tandem to support U.S. missions, the intricacies of maintaining cybersecurity standards grow. The questions posed exemplify the necessity to validate software integrity, especially in complex collaborations.
Anchore Enterprise solves these problems by automating software supply chain security best practices. It not only automates a myriad of crucial controls, ranging from user privilege restrictions to vulnerability scanning, but it also empowers organizations to meet and exceed the benchmarks set by CMMC and NIST.
In essence, as defense entities navigate the nuanced web of software development and partnerships, tools like Anchore Enterprise are indispensable in safeguarding the nation’s interests, ensuring the integrity of software supply chains, and championing the highest levels of cybersecurity.
If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.
Navigating Continuous Authority To Operate (cATO): A Guide for Getting Started
Continuous Authority to Operate (cATO), sometimes known as Rapid ATO, is becoming necessary as the DoD and civilian agencies put more applications and data in the cloud. Speed and agility are becoming increasingly critical to the mission as the government and federal system integrators seek new features and functionalities to support the warfighter and other critical U.S. government priorities.
In this blog post, we’ll break down the concept of cATO in understandable terms, explain its benefits, explore the myths and realities of cATO and show how Anchore can help your organization meet this standard.
What is Continuous Authority To Operate (cATO)?
Continuous ATO is the merging of traditional authority to operate (ATO) risk management practices with flexible and responsive DevSecOps practices to improve software security posture.
Traditional Risk Management Framework (RMF) implementations focus on obtaining authorization to operate once every three years. The problem with this approach is that security threats aren’t static, they evolve. cATO is the evolution of this framework which requires the continual authorization of software components, such as containers, by building security into the entire development lifecycle using DevSecOps practices. All software development processes need to ensure that the application and its components meet security levels equal to or greater than what an ATO requires.
You authorize once and use the software component many times. With a cATO, you gain complete visibility into all assets, software security, and infrastructure as code.
By automating security, you are then able to obtain and maintain cATO. There’s no better statement about the current process for obtaining an ATO than this commentary from Mary Lazzeri with Federal Computer Week:
“The muddled, bureaucratic process to obtain an ATO and launch an IT system inside government is widely maligned — but beyond that, it has become a pervasive threat to system security. The longer government takes to launch a new-and-improved system, the longer an old and potentially insecure system remains in operation.”
The Three Pillars of cATO
To achieve cATO, an Authorizing Official (AO) must demonstrate three main competencies:
- Ongoing visibility: A robust continuous monitoring strategy for RMF controls must be in place, providing insight into key cybersecurity activities within the system boundary.
- Active cyber defense: Software engineers and developers must be able to respond to cyber threats in real-time or near real-time, going beyond simple scanning and patching to deploy appropriate countermeasures that thwart adversaries.
- Adoption of an approved DevSecOps reference design: This involves integrating development, security, and operations to close gaps, streamline processes, and ensure a secure software supply chain.
Looking to learn more about the DoD DevSecOps Reference Design? It’s commonly referred to as a DoD Software Factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.
Continuous ATO vs. ATO
The primary difference between traditional ATOs and continuous ATOs is the frequency at which a system seeks to prove the validity of its security claims. ATOs require that a system can prove its security once every three years whereas cATO systems prove their security every moment that the system is running.
The Benefits of Continuous ATO
Continuous ATO is essentially the process of applying DevSecOps principles to the compliance framework of Authority to Operate. Automating the individual compliance processes speeds up development work by removing the repetitive tasks required to obtain permission. Next, we’ll explore additional (and sometimes unexpected) benefits of cATO.
Increase Velocity of System Deployment
CI/CD systems and the DevSecOps design pattern were created to increase the velocity at which new software can be deployed from development to production. On top of that, Continuous ATOs can be more easily scaled to accommodate changes in the system or the addition of new systems, thanks to the automation and flexibility offered by DevSecOps environments.
Reduce Time and Complexity to Achieve an ATO
With the cATO approach, you can build a system to automate the process of generating the artifacts to achieve ATO rather than manually producing them every three years. This automation in DevSecOps pipelines helps in speeding up the ATO process, as it can generate the artifacts needed for the AO to make a risk determination. This reduces the time spent on manual reviews and approvals. Much of the same information will be requested for each ATO, and there will be many overlapping security controls. Designing the DevSecOps pipeline to produce the unique authorization package for each ATO from the corpus of data and information available can lead to increased efficiency via automation and re-use.
No Need to Reinvent AND Maintain the Wheel
When you inherit the security properties of the DevSecOps reference design or utilize an approved managed platform, the provider shoulders the burden. Someone else has already done the hard work of creating a framework of tools that integrate to achieve cATO; re-use their effort to achieve cATO for your system.
Alternatively, you can utilize a platform provider, such as Platform One, Kessel Run, Black Pearl, or the Army Software Factory to outsource the infrastructure management.
Learn how Anchore helped Platform One achieve cATO and become the preeminent DoD software factory:
Myths & Realities
Myth or Reality?: DevSecOps can be at Odds with cATO
Myth! DevSecOps in the DoD and civilian government agencies is still the domain of early adopters. The strict security and compliance requirements of the federal government, the ATO in particular, make it fertile ground for DevSecOps adoption. Government leaders such as Nicolas Chaillan, former chief software officer for the United States Air Force, are championing DevSecOps standards and best practices that the DoD, federal government agencies, and even the commercial sector can use to launch their own DevSecOps initiatives.
One goal of DevSecOps is to develop and deploy applications as quickly as possible. An ATO is a bureaucratic morass if you’re not proactive. When you build a DevSecOps toolchain that automates container vulnerability scanning and other areas critical to ATO compliance controls, you can put in the tools, reporting, and processes to test against ATO controls while still in your development environment.
DevSecOps, much like DevOps, suffers from a marketing problem as vendors seek to spin the definitions and use cases that best suit their products. The DoD and government agencies need more champions like Chaillan in government service who can speak to the benefits of DevSecOps in a language that government decision-makers can understand.
Myth or Reality?: Agencies need to adopt DevSecOps to prepare for the cATO
Reality! One of the cATO requirements is to demonstrate that you are aligned with an Approved DevSecOps Reference Design. The “shift left” story that DevSecOps espouses in vendor marketing literature and sales decks isn’t necessarily one-size-fits-all. Likewise, DoD and federal agency DevSecOps plays at a different level.
Using DevSecOps to prepare for a cATO requires upfront analysis and planning with your development and operations teams’ participation. Government program managers need to collaborate closely with their contractor teams to put the processes and tools in place upfront, including container vulnerability scanning and reporting. Break down your Continuous Integration/Continuous Delivery (CI/CD) toolchain with an eye on how you can prepare your software components for continuous authorization.
Myth or Reality?: You need to have SBOMs for everything in your environment
Myth! However, you need to be able to show your Authorizing Official (AO) that you have “the ability to conduct active cyber defense in order to respond to cyber threats in real time.” If a zero-day (like log4j) comes along, you need to demonstrate you are equipped to identify the impact on your environment and remediate the issue quickly. Showing your AO that you manage SBOMs and can quickly query them to respond to threats will have you in the clear for this requirement.
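For instance, if your SBOMs are stored as JSON files, a zero-day triage query can be a few lines of Python. The sketch below assumes Syft-style SBOMs with an artifacts list; adapt the directory and field names to your own tooling.

import json
from pathlib import Path

# Search every stored SBOM for log4j packages.
for sbom_path in Path("sboms").glob("*.json"):
    sbom = json.loads(sbom_path.read_text())
    for artifact in sbom.get("artifacts", []):
        if "log4j" in artifact.get("name", "").lower():
            print(f"{sbom_path.name}: {artifact['name']} {artifact.get('version', '?')}")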
Myth or Reality?: cATO is about technology and process only
Myth! As more elements of the DoD and civilian federal agencies push toward the cATO to support their missions, and a DevSecOps culture takes hold, it’s reasonable to expect that such a culture will influence the cATO process. Central tenets of a DevSecOps culture include:
- Collaboration
- Infrastructure as Code (IaC)
- Automation
- Monitoring
Each of these tenets contributes to the success of a cATO. Collaboration between the government program office, the contractor’s project team leadership, the third-party assessment organization (3PAO), and the FedRAMP program office is the foundation of a well-run authorization. IaC provides the tools to manage infrastructure such as virtual machines, load balancers, networks, and other infrastructure components using practices similar to how DevOps teams manage software code.
Myth or Reality?: Reusable Components Make a Difference in cATO
Reality! The growth of containers and other reusable components couldn’t come at a better time as the Department of Defense (DoD) and civilian government agencies push to the cloud driven by federal cloud initiatives and demands from their constituents.
Reusable components save time and budget when it comes to authorization because you can authorize once and use the authorized components across multiple projects. Look for more news about reusable components coming out of Platform One and other large-scale government DevSecOps and cloud projects that can help push this development model forward to become part of future government cloud procurements.
How Anchore Helps Organizations Implement the Continuous ATO Process
Anchore’s comprehensive suite of solutions is designed to help federal agencies and federal system integrators meet the three requirements of cATO.
Ongoing Visibility
Anchore Enterprise can be integrated into a build pipeline, image registry and runtime environment in order to provide a comprehensive view of the entire software development lifecycle (SDLC). On top of this, Anchore provides out-of-the-box policy packs mapped to NIST 800-53 controls for RMF, ensuring a robust continuous monitoring strategy. Real-time notifications alert users when images are out of compliance, helping agencies maintain ongoing visibility into their system’s security posture.
Active Cyber Defense
While Anchore Enterprise is integrated into the decentralized components of the SDLC, it provides a centralized database to track and monitor every component of software in all environments. This centralized datastore enables agencies to quickly triage zero-day vulnerabilities with a single database query. Remediation plans for impacted application teams can be drawn up in hours rather than days or weeks. By setting rules that flag anomalous behavior, such as image drift or blacklisted packages, Anchore supports an active cyber defense strategy for federal systems.
Adoption of an Approved DevSecOps Reference Design
Anchore aligns with the DoD DevSecOps Reference Design by offering solutions for:
- Container hardening (Anchore DISA policy pack)
- Container policy enforcement (Anchore Enterprise policies)
- Container image selection (Iron Bank)
- Artifact storage (Anchore image registry integration)
- Release decision-making (Anchore Kubernetes Admission Controller)
- Runtime policy monitoring (Anchore Kubernetes Automated Inventory)
Anchore is specifically mentioned in the DoD Container Hardening Process Guide, and the Iron Bank relies on Anchore technology to scan and enforce policy that ensures every image in Iron Bank is hardened and secure.
Final Thoughts
Continuous Authorization To Operate (cATO) is a vital framework for federal system integrators and agencies to maintain a strong security posture in the face of evolving cybersecurity threats. By ensuring ongoing visibility, active cyber defense, and the adoption of an approved DevSecOps reference design, software engineers and developers can effectively protect their systems in real-time. Anchore’s comprehensive suite of solutions is specifically designed to help meet the three requirements of cATO, offering a robust, secure, and agile approach to stay ahead of cybersecurity threats.
By partnering with Anchore, federal system integrators and federal agencies can confidently navigate the complexities of cATO and ensure their systems remain secure and compliant in a rapidly changing cyber landscape. If you’re interested to learn more about how Anchore can help your organization embed DevSecOps tooling and principles into your software development process, click below to read our white paper.
The Power of Policy-as-Code for the Public Sector
As the public sector and businesses face unprecedented security challenges in light of software supply chain breaches and the move to remote, and now hybrid, work, the time for policy-as-code is now.
Here’s a look at the current and future states of policy-as-code and the potential it holds for security and compliance in the public sector:
What is Policy-as-Code?
Policy-as-code is the practice of writing code to manage the policies you create for container security and other related security concerns. Your IT staff can automate those policies to support policy compliance throughout your DevSecOps toolchain and production systems. Programmers express policy-as-code in a high-level language and store it in text files.
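As a toy illustration (the schema here is invented for the example, not any particular vendor’s format), a policy rendered as code might look like this:

# Illustrative only: a policy rule set rendered as data in a version-controlled file.
policy = {
    "name": "container-baseline",
    "rules": [
        {"gate": "dockerfile", "check": "effective_user_not_root", "action": "fail"},
        {"gate": "secrets", "check": "no_cloud_keys_in_filesystem", "action": "fail"},
        {"gate": "vulnerabilities", "check": "no_critical_with_fix_available", "action": "warn"},
    ],
}

Because the policy is plain text, it can live in version control and flow through the same review and automation as application code.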
Your agency is most likely getting exposure to policy-as-code through cloud services providers (CSPs). Amazon Web Services (AWS) offers policy-as-code via the AWS Cloud Development Kit. Microsoft Azure supports policy-as-code through Azure Policy, a service that provides both built-in and user-defined policies across categories that map the various Azure services such as Compute, Storage, and Azure Kubernetes Services (AKS).
Benefits of Policy-as-Code
Here are some benefits your agency can realize from policy-as-code:
- Information and logic about your security and compliance policies as code remove the risks of “oral history” when sysadmins may or may not pass down policy information to their successors during a contract transition.
- When you render security and compliance policies as code in plain text files, you can use various DevSecOps and cloud management tools to automate the deployment of policies into your systems.
- Guardrails for your automated systems: as your agency moves to the cloud, your number of automated systems only grows. A responsible growth strategy protects your automated systems from performing dangerous actions, and policy-as-code is a suitable method to verify their activities.
- A longer-term goal would be to manage your compliance and security policies in your version control system of choice with all the benefits of history, diffs, and pull requests for managing software code.
- You can now test policies with automated tools in your DevSecOps toolchain.
Public Sector Policy Challenges
As your agency moves to the cloud, it faces new challenges with policy compliance while adjusting to novel ways of managing and securing IT infrastructure:
Keeping Pace with Government-Wide Compliance & Cloud Initiatives
FedRAMP compliance has become a domain specialty unto itself. While the United States federal government maintains control over the policies behind FedRAMP and any future updates and changes, FedRAMP compliance has become its own industry, with specialized consultants and toolsets that promise to get an agency’s cloud application through the FedRAMP approval process.
As government cloud initiatives such as Cloud Smart become more important, the more your agency can automate the management and testing of security policies, the better. Automation reduces human error because it does away with the manual and tedious management and testing of security policies.
Automating Cloud Migration and Management
Large cloud initiatives bring with them the automation of cloud migration and management. Cloud-native development projects that accompany cloud initiatives need to consider continuous compliance and security solutions to protect their software containers.
Maintaining Continuous Transparency and Accountability
Continuous transparency is fundamental to FedRAMP and other government compliance programs. Automation and reporting are two fundamental building blocks. The stakes for reporting are only going to increase as the mandates of the Executive Order on Improving the Nation’s Cybersecurity become reality for agencies.
Achieving continuous transparency and accountability requires that an enterprise have the right tools, processes, and frameworks in place to monitor, report, and manage employee behaviors throughout the application delivery life cycle.
Securing the Agency Software Supply Chain
Government agencies are multi-vendor environments with heterogeneous IT infrastructure, including cloud services, proprietary tools, and open source technologies. The recent release of the Container Platform SRG is going to drive more requirements for the automation of container security across Department of Defense (DoD) projects.
Looking to learn more about how to utilize a policy-based security posture to meet DoD compliance standards like cATO or CMMC? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories. Get caught up with the content below:
Policy-as-Code: Current and Future States
The future of policy-as-code in government could go in two directions. The same technology principles of policy-as-code that apply to technology and security policies can also render any government policy as code. An example of that is the work that 18F is prototyping for SNAP (Supplemental Nutrition Assistance Program) food stamp program eligibility.
Policy-as-code can also serve as another automation tool for FedRAMP and Security Technical Implementation Guide (STIG) testing as more agencies move their systems to the cloud. Look for the backend tools that can make this happen gradually to improve over the next few years.
Managing Cultural and Procurement Barriers
Compliance and security are integral elements of federal agency working life, whether it’s the DoD supporting warfighters worldwide or civilian government agencies managing constituent data to serve the American public better.
The concept of policy-as-code brings to mind being able to modify policy bundles on the fly and pushing changes into your DevSecOps toolchain via automation. While theoretically possible with policy-as-code in a DevSecOps toolchain, the reality is much different. Industry standards and CISO directives govern policy management at a much slower and measured cadence than the current technology stack enables.
API integration also enables you to integrate your policy-as-code solution into third-party tools such as Splunk and other operational support systems that your organization may already use as its standards.
Automation
It’s best to avoid manual intervention for managing and testing compliance policies. Automation should be a top requirement for any policy-as-code solution, especially if your agency is pursuing FedRAMP or NIST certification for its cloud applications.
Enterprise Reporting
Internal and external compliance auditors bring with them varying degrees of reporting requirements. It’s essential to have a policy-as-code solution that can support a full range of reporting requirements that your auditors and other stakeholders may present to your team.
Enterprise reporting requirements range from customizable GUI reporting dashboards to APIs that enable your developers to integrate policy-as-code tools into your DevSecOps team’s toolchain.
Vendor Backing and Support
As your programs venture into policy compliance, failing a compliance audit can be a costly mistake. You want to choose a policy-as-code solution for your enterprise compliance requirements with a vendor behind it for technical support, service level agreements (SLAs), software updates, and security patches.
You also want vendor backing for technical support. Policy-as-code isn’t a technology to support using only your own internal IT staff (at least in the beginning).
With policy-as-code being a newer technology option, a fee-based solution backed by a vendor also gets you access to their product management. As a customer, you want a vendor that will let you access their product roadmap and see the future.
Interested to see how the preeminent DoD Software Factory Platform used a policy-based approach to software supply chain security in order to achieve a cATO and allow any DoD programs that built on their platform to do the same? Read our case study or watch our on-demand webinar with Major Camdon Cady.
Enforcing the DoD Container Image and Deployment Guide with Anchore Federal
The latest version of the DoD Container Image and Deployment Guide details technical and security requirements for container image creation and deployment within a DoD production environment. Sections 2 and 3 of the guide include security practices that teams must follow to limit the footprint of security flaws during the container image build process. These sections also discuss best security practices and correlate them to the corresponding security control family with Risk Management Framework (RMF) commonly used by cybersecurity teams across DoD.
Anchore Federal is a container scanning solution used to validate DoD compliance and security standards, such as continuous authorization to operate (cATO), across images, as explained in the DoD Container Hardening Process Guide. Anchore’s policy-first approach places policy where it belongs: at the forefront of the development lifecycle, to assess compliance and security issues in a shift-left approach. Scanning policies within Anchore are fully customizable based on specific mission needs, providing more in-depth insight into compliance irregularities that may exist within a container image. This level of granularity is achieved through specific security gates and triggers that generate automated alerts. This allows teams to enforce the best practices discussed in Section 2 of the Container Image and Deployment Guide as your developers build.
Anchore Federal uses a specific DoD scanning policy that enforces a wide array of gates and triggers that provide insight into the DoD Container Image and Deployment Guide’s security practices. For example, you can configure the Dockerfile gate and its corresponding triggers to monitor for security issues such as privileged access. You can also configure the Dockerfile gate to check for unauthorized exposed ports, validate that images are built from approved base images, and check for the unauthorized disclosure of secrets and sensitive files, among others.
Anchore Federal’s DoD scanning policy is already enabled to validate the detailed list of best practices in Section 2 of the Container Image and Deployment Guide.
Looking to learn more about how to achieve container hardening at DoD levels of security? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.
Next Steps
Anchore Federal is a battle-tested solution that has been deployed to secure DoD’s most critical workloads. Anchore Federal exists to provide cleared professional services and software to DoD mission partners and the US Intelligence Community in building their DevSecOps environments. Learn more about how Anchore Federal supports DoD missions.
Benefits of Static Image Inspection and Policy Enforcement
In this post, I will dive deeper into the key benefits of a comprehensive container image inspection and policy-as-code framework.
A couple of key terms:
- Comprehensive Container Image Inspection: Complete analysis of a container image to identify its entire contents: OS & non-OS packages, libraries, licenses, binaries, credentials, secrets, and metadata. Importantly: storing this information in a Software Bill of Materials (SBOM) for later use.
- Policy-as-Code Framework: a structure and language for policy rule creation, management, and enforcement represented as code. Importantly: This allows for software development best practices to be adopted such as version control, automation, and testing.
What Exactly Comes from a Complete Static Image Inspection?
A deeper understanding. Container images are complex and require a complete analysis to fully understand all of their contents. An inspection can uncover a wealth of useful data. Some examples are:
- Ports specified via the EXPOSE instruction
- Base image / Linux distribution
- Username or UID to use when running the container
- Any environment variables set via the ENV instruction
- Secrets or keys (e.g., AWS credentials, API keys) in the container image filesystem
- Custom configurations for applications (e.g., httpd.conf for Apache HTTP Server)
In short, a deeper insight into what exactly is inside of container images allows teams to make better decisions on what configurations and security standards they would prefer their production software to have.
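For a sense of what this looks like in practice, Anchore’s open source Syft tool can generate an SBOM covering much of this data. Here is a minimal sketch, assuming Syft is installed locally and using a public image as an example:

import json
import subprocess

# Generate a Syft SBOM for an image and list the packages it contains.
out = subprocess.run(
    ["syft", "alpine:3.19", "-o", "json"],
    capture_output=True, text=True, check=True,
)
sbom = json.loads(out.stdout)
for artifact in sbom["artifacts"]:
    print(artifact["name"], artifact["version"])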
How to Use the Above Data in Context?
While we can likely agree that access to the above data for container images is a good thing from a visibility perspective, how can we use it effectively to produce higher-quality software? The answer is through policy management.
Policy management allows us to create and edit the rules we would like to enforce. Oftentimes these rules fall into one of three buckets: security, compliance, or best-practice. Typically, a policy author creates sets of rules and describes the circumstances by which certain behaviors/properties are allowed or not. Unfortunately, authors are often restricted to setting policy rules with a GUI or even a Word document, which makes rules difficult to transfer, repeat, version, or test. Policy-as-code solves this by representing policies in human-readable text files, which allow them to adopt software practices such as version control, automation, and testing. Importantly, a policy as code framework includes a mechanism to enforce the rules created.
With containers, standardizing on a common set of best practices for software vulnerabilities, package usage, secrets management, Dockerfiles, etc. is an excellent place to start. Some examples of policy rules are:
- Should all Dockerfiles have an effective USER instruction? Yes. If undefined, warn me.
- Should the FROM instruction only reference a set of “trusted” base images? Yes. If not from the approved list, fail this policy evaluation.
- Are AWS keys ever allowed inside of the container image filesystem? No. If they are found, fail this policy evaluation.
- Are containers coming from DockerHub allowed in production? No. If they attempt to be used, fail this policy evaluation.
The above examples demonstrate how the Dockerfile analysis and secrets found during the image inspection can prove extremely useful when creating policy. Most importantly, all of these policy rules are created to map to information available prior to running a container.
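To make this concrete, here is a small sketch, with invented data structures purely for illustration, of how rules like those above can be applied to static inspection results:

# Illustrative sketch: applying policy rules to static inspection results.
inspection = {
    "dockerfile_user": None,            # no USER instruction found
    "base_image": "docker.io/library/python:3.12",
    "secrets_found": ["AWS_SECRET_ACCESS_KEY"],
}
TRUSTED_BASES = {"registry.example.com/hardened/python:3.12"}

findings = []
if inspection["dockerfile_user"] is None:
    findings.append(("warn", "no effective USER instruction"))
if inspection["base_image"] not in TRUSTED_BASES:
    findings.append(("fail", "base image not on trusted list"))
if inspection["secrets_found"]:
    findings.append(("fail", "secrets present in image filesystem"))

result = "FAIL" if any(level == "fail" for level, _ in findings) else "PASS"
print(result, findings)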
Integrating Policy Enforcement
With policy rules clearly defined as code and shared across multiple teams, the enforcement component can be freely integrated into the Continuous Integration / Continuous Delivery workflow. The concept of “shifting left” is important to follow here. The principal benefit: the more testing and checks individuals and teams can incorporate further left in their software development pipelines, the less costly it will be when changes need to be made. Simply put, prevention is better than a cure.
Integration as Part of a CI Pipeline
Incorporating container image inspection and policy rule enforcement into new or existing CI pipelines immediately adds security and compliance requirements as part of the build, blocking important security risks from ever making their way into production environments. For example, if a policy rule explicitly disallows a container image that defines a root user in its Dockerfile, failing the build pipeline of a non-compliant image before it is pushed to a production registry is a fundamental quality gate to implement. Developers are then forced to remediate the issue that caused the build failure and modify their commit to reflect compliant changes.
Below depicts how this process works with Anchore:
Anchore provides an API endpoint where the CI pipeline can send an image for analysis and policy evaluation. This provides simple integration into any workflow, agnostic of the CI system being used. When the policy evaluation is complete, Anchore returns a PASS or FAIL output based on the policy rules defined. From this, the user can choose whether or not to fail the build pipeline.
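As a rough sketch of what that gate step might look like using the anchorectl CLI (the exact subcommand, flags, and output format can vary by version, and the status parsing here is deliberately simplified, so treat this as illustrative):

import subprocess
import sys

IMAGE = "registry.example.com/app:1.2.3"

# Ask Anchore for a policy evaluation of the image; substitute your own
# integration (CLI or API) here.
result = subprocess.run(
    ["anchorectl", "image", "check", IMAGE],
    capture_output=True, text=True,
)
if "fail" in result.stdout.lower():
    print(f"Policy evaluation FAILED for {IMAGE}; blocking the build.")
    sys.exit(1)  # a non-zero exit fails the CI pipeline stage
print("Policy evaluation passed; continuing pipeline.")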
Integration with Kubernetes Deployments
Adding an admission controller to gate execution of container images in Kubernetes in accordance with policy standards can be a critical method to validate what containers are allowed to run on your cluster. Very simply: admit the containers I trust, reject the ones I don’t. Some examples of this are:
- Reject an image if it is being pulled directly from DockerHub.
- Reject an image if it has high or critical CVEs that have fixes available.
This integration allows Kubernetes operators to enforce policy and security gates for any pod that is requested on their clusters before they even get scheduled.
Below depicts how this process works with Anchore and the Anchore Kubernetes Admission Controller:
The key takeaway from both of these points of integration is that they are occurring before ever running a container image. Anchore provides users with a full suite of policy checks which can be mapped to any detail uncovered during the image inspection. When discussing this with customers, we often hear, “I would like to scan my container images for vulnerabilities.” While this is a good first step to take, it is the tip of the iceberg when it comes to what is available inside of a container image.
Conclusion
With immutable infrastructure, once a container image artifact is created, it does not change. To make changes to the software, good practice tells us to build a new container image, push it to a container registry, kill the existing container, and start a new one. As explained above, containers provide us with tons of useful static information gathered during an inspection, so another good practice is to use this information, as soon as it is available, and where it makes sense in the development workflow. The more policies which can be created and enforced as code, the faster and more effective IT organizations will be able to deliver secure software to their end customers.
Looking to learn more about how to utilize a policy-based security posture to meet DoD compliance standards like cATO or CMMC? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories. Get caught up with the content below:
Bridging the Gap Between Speed and Security: A Deep Dive into Anchore Federal’s Container Image Inspection and Vulnerability Management
In today’s DevOps environment, developers and security teams are more intertwined than ever as speed to production increases. Enterprises use hundreds to thousands of Docker images, making it difficult to maintain an accurate software inventory and to track software packages and vulnerabilities across container workloads. This becomes a recurring headache for Federal DevSecOps teams trying to maintain control over the environment by monitoring for unauthorized software on the information system. Per National Security Agency (NSA) guidance, security teams should actively monitor and remove unauthorized, outdated, and potentially malicious software from the information system while simultaneously making timely updates to their software stack.
Fortunately, Anchore Federal can simplify this process for DevSecOps teams and development teams alike by inspecting Docker images in all container registries, analyzing the specific software components within a given image, and then visualizing every software package for the developer in the Anchore Federal UI. For this blog post, we will explore how we can positively impact our security posture by maintaining strong configuration control over the software in our environment using Anchore Federal to analyze, inspect, and visualize the contents of each image.
Looking to learn more about how to achieve container hardening at DoD levels of security? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.
Anchore’s Image Inspection to Support Configuration Management Best Practices
For this demo, I’ve selected Logstash version 7.2.0 from DockerHub and analyzed the image against Anchore’s DoD security policies bundle found in Anchore’s policy hub. In the Anchore Federal UI, navigating to the “Policy Bundles” tab confirms that the “anchore_dod_security_policies” bundle is set as our default policy.
After validating that the DoD policies are set, we initiate the vulnerability scan against the Logstash image. Anchore not only analyzes the image for CVEs but also evaluates the entire image contents against a comprehensive list of DoD security and compliance standards using our DoD security policies bundle. Anchore Federal automatically displays the results of the image scan in the “Image Analysis” tab as depicted below:
From the overview page, the user can easily see the compliance and vulnerability results generated against our DoD security policies. Taking this a step deeper, we can begin inspecting the content of the image itself by navigating to the “Contents” tab. This extends beyond a list of CVEs, vulnerabilities, and compliance checks: Anchore Federal presents a complete inventory of the software packages, OS packages, and files found in the selected image:
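The same inventory is available outside the UI. As a rough sketch, the anchore-cli client can pull the per-type contents of an analyzed image; the image tag below is the demo image, and the supported content types should be checked against your Anchore version.

```python
# list_contents.py -- sketch of pulling an analyzed image's package inventory.
import subprocess

IMAGE = "docker.io/library/logstash:7.2.0"

# Content types include OS packages, language packages, and a file listing.
for content_type in ("os", "python", "files"):
    print(f"--- {content_type} ---")
    subprocess.run(["anchore-cli", "image", "content", IMAGE, content_type])
```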
This provides an integral point of analysis that allows users to inventory and identify the different types of software and software packages within their environment. That visibility is sorely needed across Federal organizations aiming to comply with DoD RMF and FedRAMP configuration management security controls.
Keeping the importance of configuration management in mind, Anchore Federal seamlessly integrates configuration management with security by highlighting the specific packages tied to vulnerabilities.
Unifying Configuration Management with Container Security
Anchore Federal puts adversely impacted packages front and center. Navigating to the “Vulnerabilities” tab from the overview page shows the affected packages; in the screenshot below, Anchore clearly displays a CVE tied to an impacted Python package:
From here, the security analyst immediately wants to know which other images in the environment are impacted by the vulnerability. Anchore Federal does this automatically, linking the affected package across all of the images in your repository. Selecting “Other Images Sharing Package” generates a report of affected images. In this example, we can see that our Elasticsearch image is also impacted by the vulnerability tied to this Python package:
You can tailor the reports accordingly by using the parameters to filter on any specific package and package version. Anchore takes care of the rest and automatically informs DevSecOps teams about all of the images tied to every package containing a vulnerability. This provides teams with the vulnerability information necessary to carry out vulnerability remediation across the impacted images for their organization.
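Conceptually, the “Other Images Sharing Package” report is a join between your per-image package inventories and a package of interest. The short sketch below illustrates the idea with made-up inventory data; Anchore Federal performs the equivalent lookup automatically across your registries.

```python
# shared_package_report.py -- sketch of the "images sharing a package" idea.
# The inventory data below is illustrative, not real scan output.
inventory = {
    "logstash:7.2.0":      {"python-2.7.16", "openssl-1.1.1c"},
    "elasticsearch:7.2.0": {"python-2.7.16", "openssl-1.1.1c"},
    "myapp:1.0":           {"python-3.8.0", "openssl-1.1.1g"},
}

def images_sharing(package: str) -> list[str]:
    """Return every image whose inventory contains the given package version."""
    return sorted(img for img, pkgs in inventory.items() if package in pkgs)

print(images_sharing("python-2.7.16"))  # ['elasticsearch:7.2.0', 'logstash:7.2.0']
```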
Anchore Federal takes the burden off of the DevSecOps teams by integrating configuration management with Anchore’s deep image inspection vulnerability scanning and “policy first” compliance approach. As a result, Federal organizations don’t have to worry about sacrificing configuration management. Instead, using Anchore Federal, organizations can enhance configuration control of their environment, gain the valuable insight of software packages within each container, and remediate vulnerable software packages to closure in a timely manner.
A Policy Based Approach to Container Security & Compliance
At Anchore, we take a preventative, policy-based compliance approach tailored to organizational needs. Our philosophy of scanning and evaluating Docker images against user-defined policies as early as possible in the development lifecycle greatly reduces the chance that vulnerable, non-compliant images make their way into trusted container registries and production environments.
But what do we mean by ‘policy-based compliance’? And what are some of the best practices organizations can adopt to help achieve their own compliance needs? In this post, we will first define compliance and then cover a few steps development teams can take to help to bolster their container security.
An Example of Compliance
Before we define ‘policy-based compliance’, it helps to gain a solid understanding of what compliance means in the world of software development. Generally speaking, compliance is a set of standards for recommended security controls, laid out by a particular agency or industry, that an application must adhere to. An example of such an agency is the National Institute of Standards and Technology, or NIST. NIST is a non-regulatory government agency that develops technology, metrics, and standards to drive innovation and economic competitiveness at U.S.-based organizations in the science and technology industry. Companies providing products and services to the federal government are often required to meet the security mandates set by NIST. One example of these documents is NIST SP 800-218, the Secure Software Development Framework (SSDF), which specifies the security controls necessary to ensure a software development environment is secure and produces secure code.
What do we mean by ‘Policy-based’?
Now that we have a definition and an example, we can discuss the role policy plays in achieving compliance. In short, policy-based compliance means adhering to a set of compliance requirements via customizable rules defined by a user. In some cases, security software tools contain a policy engine that allows development teams to create rules corresponding to a particular security concern addressed in a compliance publication.
How can Organizations Achieve Compliance in Containerized Environments?
Here at Anchore, our focus is helping organizations secure their container environments by scanning and analyzing container images. Customers often come to us for help achieving specific compliance requirements, and we typically point them to our policy engine. Anchore policies are user-defined checks that are evaluated against an analyzed image. A best practice is to implement these checks as a step in CI/CD: by adding an Anchore image scanning step to a CI tool like Jenkins or GitLab, development teams add a layer of governance to their build pipeline.
Complete Approach to Image Scanning
Vulnerability scanning
Adding image scanning against a list of CVEs to a build pipeline allows developers to be proactive about security, giving them a near-immediate feedback loop on potentially vulnerable images. Anchore image scanning identifies known vulnerabilities in container images, enforcing a shift-left paradigm in the development lifecycle. Once vulnerabilities have been identified, reports can be generated listing information about the CVEs and vulnerable packages within the images. In addition, Anchore can be configured to send webhooks to specified endpoints when new CVEs are published that impact a previously scanned image. At Anchore, we’ve seen integrations with Slack or JIRA to alert teams or file tickets automatically when vulnerabilities are discovered.
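As a rough illustration of the Slack pattern, here is a minimal sketch of a receiver that forwards Anchore notifications to a Slack channel. The endpoint path, the shape of the notification payload, and the Slack webhook URL are all placeholders to adapt to your own configuration.

```python
# notify.py -- sketch of a receiver that relays Anchore webhooks to Slack.
import requests
from flask import Flask, request

app = Flask(__name__)
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

@app.route("/anchore/notifications", methods=["POST"])
def on_notification():
    event = request.get_json(force=True)
    # Relay a summary of the raw event; unpack specific fields once you know
    # which notification types you have enabled in Anchore.
    requests.post(SLACK_WEBHOOK_URL, json={"text": f"Anchore notification: {event}"})
    return "", 204

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```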
Adding governance
Once an image has been analyzed and its content has been discovered, categorized, and processed, the resulting data can be evaluated against a user-defined set of rules to give a final pass or fail recommendation for the image. It is typically at this stage that security and DevOps teams want to add a layer of control to the images being scanned in order to make decisions on which images should be promoted into production environments.
Anchore policy bundles (structured as JSON documents) are the unit of policy definition and evaluation. A user may create multiple policy bundles; however, only one can be marked as ‘active’ for evaluation. A bundle is made up of a set of rules used to evaluate an image. These rules can define checks against an image for things such as:
- Security vulnerabilities
- Package whitelists and blacklists
- Configuration file contents
- Presence of credentials in an image
- Image manifest changes
- Exposed ports
Anchore policies return a pass or fail decision result.
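To give a feel for the format, below is a pared-down sketch of a policy bundle expressed as a Python dictionary mirroring the JSON structure. Real bundles carry additional sections (such as mappings and whitelists), and while the gate and trigger names follow Anchore’s conventions, verify them against your engine version. The first rule implements the root-user Dockerfile check discussed earlier; the second fails on fixable high-severity vulnerabilities.

```python
# policy_bundle.py -- pared-down sketch of an Anchore policy bundle.
import json

bundle = {
    "id": "example-bundle",
    "version": "1_0",
    "name": "Example CI policy",
    "policies": [{
        "id": "default-policy",
        "version": "1_0",
        "name": "Default policy",
        "rules": [
            {   # STOP the evaluation if the effective user is root.
                "gate": "dockerfile",
                "trigger": "effective_user",
                "action": "STOP",
                "params": [{"name": "users", "value": "root"},
                           {"name": "type", "value": "blacklist"}],
            },
            {   # STOP on high-or-worse vulnerabilities with a fix available.
                "gate": "vulnerabilities",
                "trigger": "package",
                "action": "STOP",
                "params": [{"name": "severity_comparison", "value": ">="},
                           {"name": "severity", "value": "high"},
                           {"name": "fix_available", "value": "true"}],
            },
        ],
    }],
}

print(json.dumps(bundle, indent=2))
```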
Putting it Together with Compliance
Given the variance of compliance needs across enterprises, a flexible and robust policy engine becomes a necessity for organizations that must adhere to one or many sets of standards. Managing and securing container images in CI/CD environments is challenging without the proper workflow. With Anchore, however, development and security teams can harden their container security posture by adding an image scanning step to their CI, reporting back on CVEs, and fine-tuning policies to meet compliance requirements. With compliance checks in place, only container images that meet the standards laid out by a particular agency or industry will be allowed to make their way into production-ready environments.
Conclusion
Taking a policy-based compliance approach is a multi-team effort. Developers, testers, and security engineers should collaborate continuously on policy creation, CI workflow, and notifications and alerting. With all of these aspects in check, compliance can simply become part of application testing and overall quality and product development. Most importantly, it allows organizations to create and ship products with a much higher level of confidence, knowing that the appropriate methods and tooling are in place to meet industry-specific compliance requirements.
Interested to see how the preeminent DoD Software Factory Platform used a policy-based approach to software supply chain security in order to achieve a cATO and allow any DoD programs that built on their platform to do the same? Read our case study or watch our on-demand webinar with Major Camdon Cady.