Balancing the Scale: Software Supply Chain Security and APTs

Note: This is a multi-part series primer on the intersection of advanced persistent threats (APTs) and software supply chain security (SSCS). This blog post is the third in the series. If you’d like to start from the beginning, you can find the first blog post here.
• Part 1 | With Great Power Comes Great Responsibility: APTs & Software Supply Chain Security
• Part 2 | David and Goliath: the Intersection of APTs and Software Supply Chain Security
• Part 3 (This blog post)

Last week we dug into the details of why an organization’s software supply chain is a ripe target for well-resourced groups like APTs and the potential avenues that companies have to combat this threat. This week we’re going to highlight the Anchore Enterprise platform and how it provides a turnkey solution for combating threats to any software supply chain.

How Anchore Can Help

Anchore was founded on the belief that a security platform that delivers deep, granular insights into an organization’s software supply chain, covers the entire breadth of the SDLC, and integrates automated feedback will create a holistic security posture that can detect advanced threats and allow for human intervention to remediate security incidents. Anchore is trusted by Fortune 100 companies and the most exacting federal agencies across the globe because it has delivered on this promise.

The rest of the blog post will detail how Anchore Enterprise accomplishes this.

Depth: Automating Software Supply Chain Threat Detection

Having deeper visibility into an organization’s software supply chain is crucial for security purposes because it enables the identification and tracking of every component in the software’s construction. This comprehensive understanding helps in pinpointing vulnerabilities, understanding dependencies, and identifying potential security risks. It allows for more effective management of these risks by enabling targeted security measures and quicker response to potential threats. Essentially, deeper visibility equips an organization to better protect itself against complex cyber threats, including those that exploit obscure or overlooked aspects of the software supply chain.

Anchore Enterprise accomplishes this by generating a comprehensive software bill of materials (SBOM) for every piece of software (even down to the component/library/framework level). It then compares this detailed ingredients list against vulnerability and active exploit databases to identify exactly where in the software supply chain there are security risks. These surgically precise insights can then be fed back to the original software developers, rolled up into reports for the security team to better inform risk management, or sent directly into an incident management workflow if the vulnerability is evaluated as severe enough to warrant an “all-hands on deck” response.

Developers shouldn’t have to worry about manually identifying threats and risks inside the software supply chain. Having deep insights into your software supply chain and being able to automate detection and response is vital to creating a resilient and scalable defense against the risk of APTs.

Breadth: Continuous Monitoring in Every Step of Your Software Supply Chain

The breadth of instrumentation in the Software Development Lifecycle (SDLC) is crucial for securing the software supply chain because it ensures comprehensive security coverage across all stages of software development. This broad instrumentation facilitates early detection and mitigation of vulnerabilities, ensures consistent application of security policies, and allows for a more agile response to emerging threats. It provides a holistic view of the software’s security posture, enabling better risk management and enhancing the overall resilience of the software against cyber threats.

Powered by a 100% feature-complete platform API, Anchore Enterprise integrates into your existing DevOps pipeline.

Anchore has been supporting the DoD in this effort since 2019, acting as what is commonly referred to as “overwatch” for the DoD’s software supply chain. Anchore Enterprise continuously monitors how risk evolves by ingesting tens of thousands of runtime containers and hundreds of source code repositories, and by alerting on malware-laced images submitted to the registry. It monitors every stage of the DevOps pipeline, from source to build to registry to deploy, to gain a holistic view of when and where threats enter the software development lifecycle.

Feedback: Alerting on Breaches or Critical Events in Your Software Supply Chain

Integrating feedback from your software supply chain and SDLC into your overall security program is important because it allows for real-time insights and continuous improvement in security practices. This integration ensures that lessons learned and vulnerabilities identified at any stage of the development or deployment process are quickly communicated and addressed. It enhances the ability to preemptively manage risks and adapt to new threats, thereby strengthening the overall security posture of the organization.

How would you know if something is wrong in a system? Create high-quality feedback loops, of course. If there is a fire in your house, you typically have a fire alarm. That is a great source of feedback: it’s loud and creates urgency. And when you investigate to confirm the fire is real and not a false alarm, the feedback is unmistakable: you can see the fire and feel its heat.

Software supply chain breaches are more like carbon monoxide leaks: silent, often undetected, and potentially lethal. If you don’t have anything in place to specifically alert on that kind of threat, you could pay severely.

Anchore Enterprise was designed specifically as both a set of sensors that can be deployed deeply and broadly into your software supply chain and a feedback system that uses those sensors to detect and alert on potential threats that are silently leaking carbon monoxide into your environment.

Anchore Enterprise’s feedback mechanisms come in three flavors: automatic, recommendations, and informational. Anchore Enterprise utilizes a policy engine to enable automatic action based on the feedback provided by the software supply chain sensors. If you want to make sure that no software is ever deployed into production (or any environment) with an exploitable version of Log4j, the Anchore policy engine can review the security metadata created by the sensors for the existence of this software component and stop a deployment in progress before it ever becomes accessible to attackers.
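
To make this concrete, here is a minimal sketch in Go of what such a policy gate does conceptually. This is not Anchore Enterprise’s actual policy format or API; the Package and Rule types, the evaluate function, and the log4j-core denylist below are hypothetical, used only to illustrate the described behavior of stopping a deployment when a blocked component shows up in an SBOM.

package main

import "fmt"

// Package is a minimal, hypothetical SBOM entry.
type Package struct {
    Name    string
    Version string
}

// Rule is a hypothetical deny rule: block a deployment when any
// listed version of the named component appears in the SBOM.
type Rule struct {
    Name            string
    BlockedVersions map[string]bool
}

// evaluate returns the packages that violate the rule; a non-empty
// result is what a real policy gate would treat as a "stop" action.
func evaluate(sbom []Package, rule Rule) []Package {
    var violations []Package
    for _, p := range sbom {
        if p.Name == rule.Name && rule.BlockedVersions[p.Version] {
            violations = append(violations, p)
        }
    }
    return violations
}

func main() {
    sbom := []Package{
        {Name: "spring-core", Version: "5.3.20"},
        {Name: "log4j-core", Version: "2.14.1"},
    }
    rule := Rule{Name: "log4j-core", BlockedVersions: map[string]bool{"2.14.1": true}}

    if violations := evaluate(sbom, rule); len(violations) > 0 {
        fmt.Printf("policy gate FAILED, deployment stopped: %v\n", violations)
    } else {
        fmt.Println("policy gate passed")
    }
}

In Anchore Enterprise itself, this kind of logic is expressed as rules evaluated by the policy engine against the SBOM and vulnerability metadata produced by the sensors, rather than as hand-written code like this.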

Anchore Enterprise can also be configured to make recommendations and provide opinionated actions based on security signals. If a vulnerability is discovered in a software component but isn’t considered urgent, Anchore Enterprise can instead provide a recommendation to the software developer to fix the vulnerability while still allowing them to continue to test and deploy their software. This lets developers become aware of security issues very early in the SDLC while giving them the flexibility to fix the vulnerability based on their own prioritization.

Finally, Anchore Enterprise offers informational feedback that alerts developers, the security team, or even the executive team to potential security risks but doesn’t offer a specific solution. These types of alerts can be integrated into any development, support, or incident management systems the organization utilizes. Often these alerts are for high-risk vulnerabilities that require deeper organizational analysis to determine the best course of action for remediation.

Conclusion

Due to the asymmetry between APTs and under-resourced security teams, the goal isn’t to create an impenetrable fortress that can never be breached. The goal is instead to follow security best practices and litter your SDLC with sensors and automated feedback mechanisms. APTs may have significantly more resources than your security team, but they are still human, and all humans make mistakes. By placing low-effort tripwires in as many locations as possible, you reverse the asymmetry of resources and allow the well-resourced adversary to become their own worst enemy. APTs are still software developers at the end of the day, and no one writes bug-free code in the long run. By transforming your software supply chain into a minefield of best practices, you create a battlefield that requires your adversaries to slow down and carefully disable each individual security mechanism. None are impossible to disarm, but each speed bump creates another opportunity for your adversary to make a mistake and reveal themselves. If zero-trust architecture has taught us anything, it is that an impenetrable perimeter was never the best strategy.

Improving Syft’s Binary Detection

You, too, can help make Syft better! As you’re probably aware, Syft is a software composition analysis tool that can scan a number of sources to find software packages in container images and on the local filesystem. Syft detects packages from sources such as source code and package manager metadata, but also from arbitrary files it encounters, such as executable binaries. Today we’re going to talk about how some of Syft’s binary detection works and how easy it is to improve.

Just recently, we were made aware of this vulnerability and it seemed like something we’d want to surface in Syft’s companion tool, Grype… but Fluent Bit wasn’t something that Syft was already detecting. Let’s look at how we added support for it!

Syft binary matching

Before we get into the details, it’s important to understand how Syft’s binary detection works today: Syft scans a filesystem, and a binary cataloger looks for files matching a particular name pattern and uses a regular expression to find a version string in the binary. Although this isn’t the only thing Syft does, this has proven to be a simple pattern that works fairly well for finding information about arbitrary binaries, such as the Fluent Bit binary we’re interested in.
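
To illustrate the idea, here is a minimal, standalone Go sketch of that glob-plus-regex approach. It is not Syft’s actual implementation; the directory argument, the fluent-bit file name, and the NUL-delimited version pattern are assumptions used only to show the concept.

package main

import (
    "fmt"
    "io/fs"
    "os"
    "path/filepath"
    "regexp"
)

// versionRE looks for a semver-like string delimited by NUL bytes,
// similar in spirit to the classifier patterns discussed later.
var versionRE = regexp.MustCompile(`\x00(?P<version>[0-9]+\.[0-9]+\.[0-9]+)\x00`)

func main() {
    if len(os.Args) < 2 {
        fmt.Println("usage: versionsniff <dir>")
        return
    }
    root := os.Args[1] // e.g. an unpacked container filesystem

    _ = filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
        if err != nil || d.IsDir() {
            return nil
        }
        // crude stand-in for a "**/fluent-bit" style file glob
        if ok, _ := filepath.Match("fluent-bit", d.Name()); !ok {
            return nil
        }
        data, err := os.ReadFile(path)
        if err != nil {
            return nil
        }
        if m := versionRE.FindSubmatch(data); m != nil {
            fmt.Printf("%s: version %s\n", path, m[versionRE.SubexpIndex("version")])
        }
        return nil
    })
}

Syft’s real binary cataloger does quite a bit more (multiple patterns per class, snippet-driven tests, PURL and CPE generation), but the glob-plus-regex core is the same idea.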

In order to add support for additional binary types in Syft, the basic process is this:

  1. Find a binary
  2. Add a matching rule
  3. Add tests

Getting started

Starting with a local fork of the Syft repository, let’s work in the binary cataloger’s test-fixtures directory:

$ cd syft/pkg/cataloger/binary/test-fixtures

Here you’ll find a Makefile and a config.yaml. These are the main things of importance — and you can run make help to see extra commands available.

The first thing we need to do is find somewhere to get one of the binaries from — we need something to test that we’re actually detecting the right thing! The best way to do this is using a publicly available container image. Once we know an image to use, the Makefile has some utilities to make the next steps fairly straightforward.

After a short search online, we found that there is indeed a public docker image with exactly what we were looking for: https://hub.docker.com/r/fluent/fluent-bit. Although we could pick just about any version, we somewhat arbitrarily chose this one. We can use more than one, but for now we’re just going to use this as a starting point.

Adding a reference to the binary

After finding an image, we need to identify the particular binary file to look at. Luckily, the Fluent Bit documentation gave a pretty good pointer – this was part of the docker command the documentation said to run: /fluent-bit/bin/fluent-bit! It may take a little more sleuthing to figure out what file(s) within the image we need; often you can run an image with a shell to figure this out… but chances are, if you can run the command with a --version flag and get the version printed out, we can figure out how to find it in the binary.

For now, let’s continue on with this binary. We need to add an entry that describes where to find the file in question in the syft/pkg/cataloger/binary/test-fixtures/config.yaml:

- version: 3.0.2
  images:
    - ref: fluent/fluent-bit:3.0.2-amd64@sha256:7e6fe8efd51dda0739e355f58bf5e3b1623cbf2d4a23c06c7a365d9553e2d242
      platform: linux/amd64
  paths:
    - /fluent-bit/bin/fluent-bit

There are lots of examples in that file already, and hopefully the fields are straightforward, but note the version — this is what we’ve ascertained should be reported and it will drive some functions later. Also, we’ve included the full sha256 hash, so even if the tags change, we’ll get the expected image. Then just run make:

$ make

go run ./manager download  --skip-if-covered-by-snippet
...
fluent-bit@3.0.2
  ✔  pull image fluent/fluent-bit:3.0.2-amd64@sha256:7e6fe8efd51dda0739e355f58bf5e3b1623cbf2d4a23c06c7a365d9553e2d242 (linux/amd64)
  ✔  extract /fluent-bit/bin/fluent-bit

This pulled the image locally and extracted the file we told it to…but so far we haven’t really done much that you couldn’t do with standard container tools.

Finding the version

Now we need to figure out what type of expression should reliably find the version. There are a number of binary inspection tools, many of which can make this easier, and perhaps you have some favorites — by all means use those! But we’re going to stick with the tools at hand. Let’s take a look at what the binary has matching the version we indicated earlier by running make add-snippet:

$ make add-snippet

go run ./manager add-snippet
running: ./capture-snippet.sh classifiers/bin/fluent-bit/3.0.2/linux-amd64/fluent-bit 3.0.2 --search-for 3\.0\.2 --group fluent-bit --length 100 --prefix-length 20
Using binary file:      classifiers/bin/fluent-bit/3.0.2/linux-amd64/fluent-bit
Searching for pattern:  3\.0\.2
Capture length:         120 bytes
Capture prefix length:  20 bytes
Multiple string matches found in the binary:

1) 3.0.2
2) 3.0.2
3) CONNECT {"verbose":false,"pedantic":false,"ssl_required":false,"name":"fluent-bit","lang":"c","version":"3.0.2"}

Please select a match: 

Follow the prompts to inspect the different sections of the binary. Each of these actually looks like it could be usable, but we want one that is hopefully simple to match across different versions. The third match has JSON, which could possibly get reordered. Looking at the second, we can see a string containing only 3.0.2, but let’s take a closer look at the first match. If we look at 1, we see something like the second, a string containing only the version, <NULL>3.0.2<NULL>, but we also see %sFluent Bit nearby. This looks promising! Let’s capture this snippet by following the prompts:

Please select a match: 1

006804fc: 2525 2e25 6973 0a00 252a 733e 2074 7970  %%.%is..%*s> typ
0068050c: 653a 2000 332e 302e 3200 2573 466c 7565  e: .3.0.2.%sFlue
0068051c: 6e74 2042 6974 2076 2573 2573 0a00 2a20  nt Bit v%s%s..* 
0068052c: 6874 7470 733a 2f2f 666c 7565 6e74 6269  https://fluentbi
0068053c: 742e 696f 0a0a 0069 6e76 616c 6964 2063  t.io...invalid c
0068054c: 7573 746f 6d20 706c 7567 696e 2027 2573  ustom plugin '%s
0068055c: 2700 696e 7661 6c69 6420 696e 7075 7420  '.invalid input 
0068056c: 706c 7567 696e 2027                      plugin '

Does this snippet capture what you need? (Y/n/q) y
wrote snippet to "classifiers/snippets/fluent-bit/3.0.2/linux-amd64/fluent-bit"

How could we tell those were NULL terminators? What’s going on here? Looking at the readable text on the right, we see .3.0.2., and the bytes displayed in the same position are 00 332e 302e 3200; 00 is the NULL character, something we’ve learned to spot from writing quite a lot of these expressions. This is the hardest part, believe me! But if you’re still following along, let’s wrap this up by putting everything we’ve found together in a rule.

Adding a rule to Syft

Edit syft/pkg/cataloger/binary/classifiers.go and add an entry for this binary:

                {
                        Class:    "fluent-bit-binary",
                        FileGlob: "**/fluent-bit",
                        EvidenceMatcher: FileContentsVersionMatcher(
                                // [NUL]3.0.2[NUL]%sFluent Bit
                                `\x00(?P<version>[0-9]+\.[0-9]+\.[0-9]+)\x00%sFluent Bit`,
                        ),
                        Package: "fluent-bit",
                        PURL:    mustPURL("pkg:github/fluent/fluent-bit@version"),
                        CPEs:    singleCPE("cpe:2.3:a:treasuredata:fluent_bit:*:*:*:*:*:*:*:*"),
                },

We’ve put the information we know about this in the entry: the FileGlob should find the file, as we’ve seen earlier, and the FileContentsVersionMatcher takes a regular expression to extract the version. And I went ahead and looked up the format for the CPE and PURL this package should use and included these here, too.
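
If you want to sanity-check the expression on its own before running all of Syft, a small scratch program is enough. This is not part of Syft’s test suite, and the sample byte string below is hand-transcribed from the hex dump above, so treat it as illustrative.

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // Hand-transcribed from the snippet's hex dump: "> type: ", a NUL,
    // the version, another NUL, then "%sFluent Bit v%s%s".
    sample := []byte("%*s> type: \x003.0.2\x00%sFluent Bit v%s%s\n")

    re := regexp.MustCompile(`\x00(?P<version>[0-9]+\.[0-9]+\.[0-9]+)\x00%sFluent Bit`)
    if m := re.FindSubmatch(sample); m != nil {
        fmt.Printf("version: %s\n", m[re.SubexpIndex("version")]) // prints: version: 3.0.2
    } else {
        fmt.Println("no match")
    }
}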

Once we’ve added this, you can test it out right away by running your modified Syft code from the base directory of your git clone:

$ go run ./cmd/syft fluent/fluent-bit:3.0.2-amd64

 ✔ Pulled image                    
 ✔ Loaded image                                                                                                                                                          fluent/fluent-bit:3.0.2-amd64
 ✔ Parsed image                                                                                                                sha256:2007231667469ee1d653bdad65e55cc5f300985f10d7c4dffd6de0a5e76ff078
 ✔ Cataloged contents                                                                                                                 d3a6e4b5bc02c65caa673a2eb3508385ab27bb22252fa684061643dbedabf9c7
   ├── ✔ Packages                        [39 packages]  
   ├── ✔ File digests                    [1,771 files]  
   ├── ✔ File metadata                   [1,771 locations]  
   └── ✔ Executables                     [313 executables]  
NAME              VERSION                  TYPE     
base-files        11.1+deb11u9             deb       
ca-certificates   20210119                 deb       
fluent-bit        3.0.2                    binary    
libatomic1        10.2.1-6                 deb       
...

Great! It worked! If we try this out on some different versions, it looks like 3.0.1-amd64 works as well, but this definitely did not work for 2.2.1-arm64 or 2.1.10. So we repeat the process a bit and find that we just need to make our expression account for the variance: the arm64 versions have a couple of extra NULL characters and the older versions don’t have the %s part. Eventually, this expression seemed to do the trick for the images I tried: \x00(?P<version>[0-9]+\.[0-9]+\.[0-9]+)\x00[^\d]*Fluent.

We could have made this simpler — to just find <NULL><version><NULL>, but there are quite a few strings in the various binaries that match this pattern and we want to try our best to find the one that looks like it’s the specific version string we want. When we looked at the various bytes across a number of versions both the version and the name of the project showed up together like this. Having done a number of these classifiers in the past, I can say this is a fairly common type of thing to look for.
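
Here is another small scratch program showing why the looser [^\d]* form helps. The three samples are simplified, hypothetical stand-ins for the variations described above (extra NUL bytes in the arm64 builds, no %s in the older versions); they are not bytes copied from those images.

package main

import (
    "fmt"
    "regexp"
)

func main() {
    re := regexp.MustCompile(`\x00(?P<version>[0-9]+\.[0-9]+\.[0-9]+)\x00[^\d]*Fluent`)

    // Simplified, made-up stand-ins for the kinds of variation seen above.
    samples := map[string][]byte{
        "3.0.2 amd64 style":       []byte("\x003.0.2\x00%sFluent Bit v%s%s"),
        "arm64 style, extra NULs": []byte("\x002.2.1\x00\x00\x00%sFluent Bit v%s%s"),
        "older style, no %s":      []byte("\x002.1.10\x00Fluent Bit v%s"),
    }

    for name, data := range samples {
        if m := re.FindSubmatch(data); m != nil {
            fmt.Printf("%-25s -> %s\n", name, m[re.SubexpIndex("version")])
        } else {
            fmt.Printf("%-25s -> no match\n", name)
        }
    }
}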

Testing

Since we already captured a test snippet, the last thing to do is add a test. If you recall, when we used the add-snippet command, it told us: 

wrote snippet to "classifiers/snippets/fluent-bit/3.0.2/linux-amd64/fluent-bit"

This is what we’re going to want to reference. So let’s add a test case to syft/pkg/cataloger/binary/classifier_cataloger_test.go, in the very large Test_Cataloger_PositiveCases test:

                {
                        logicalFixture: "fluent-bit/3.0.2/linux-amd64",
                        expected: pkg.Package{
                                Name:      "fluent-bit",
                                Version:   "3.0.2",
                                Type:      "binary",
                        PURL:      "pkg:github/fluent/fluent-bit@3.0.2",
                                Locations: locations("fluent-bit"),
                                Metadata:  metadata("fluent-bit-binary"),
                        },
                },

Wrapping up

Now that we have 1) identified a binary, 2) added a rule to Syft, and 3) added a test case with a small snippet, we’re done coding! Submit a pull request and sit back, knowing you’ve made the world a better place!

David and Goliath: the Intersection of APTs and Software Supply Chain Security

Note: This is a multi-part series primer on the intersection of advanced persistent threats (APTs) and software supply chain security (SSCS). This blog post is the second in the series. If you’d like to start from the beginning, you can find the first blog post here.

Last week we set the stage for discussing APTs and the challenges they pose for software supply chain security by giving a quick overview of each topic. This week we will dive into the details of how the structure of the open source software supply chain is a uniquely ripe target for APTs.

The Intersection of APTs and Software Supply Chain Security

The Software Ecosystem: A Ripe Target

APT groups often prioritize the exploitation of software supply chain vulnerabilities. This is due to the asymmetric structure of the software ecosystem. By breaching a single component, such as a build system, they can gain access to any organization using the compromised software component. This inverts the cost-benefit calculation of the research and development needed to discover a vulnerability and craft an exploit for it. Previously, APTs focused primarily on targets where the payoff could warrant the investment, or on vulnerabilities that were so widespread that the attack could be automated. The complex web of software dependencies now allows APTs to scale their attacks due to the structure of the ecosystem.

The Software Supply Chain Security Dynamic: An Unequal Playing Ground

The interesting challenge with software supply chain security is that securing the supply chain requires even more effort than an APT would take to exploit it. The rub comes because each company that consumes software has to build a software supply chain security system to protect their organization. An APT investing in exploiting a popular component or system gets the benefit of access to all of the software built on top of it.

Given that security organizations are at a structural disadvantage, how can organizations even the odds?

How Do I Secure My Software Supply Chain from APTs?

An organization’s ability to detect the threat of APTs in its internal software supply chain comes down to three core themes that can be summed up as “go deep, go wide and integrate feedback”. Specifically, this means: the deeper the visibility into your organization’s software supply chain, the less surface area an attacker has to slip in malicious software; the wider this visibility is deployed across the software development lifecycle, the earlier an attacker will be caught. Neither of the first two points matters if the feedback produced isn’t integrated into an overall security program that can act on the signals surfaced.

By applying these three core principles to the design of a secure software supply chain, an organization can ensure that they balance the playing field against the structural advantage APTs possess.

How Can I Easily Implement a Strategy for Securing My Software Supply Chain?

The core principles of depth, breadth and feedback are powerful touchstones to utilize when designing a secure software supply chain that can challenge APTs, but they aren’t specific rules that can be easily implemented. To address this, Anchore has created the open source VIPERR Framework to provide specific guidance on how to achieve the core principles of software supply chain security.

VIPERR is a free software supply chain security framework that Anchore created for organizations to evaluate and improve the security posture of their software supply chain. VIPERR stands for visibility, inspection, policy enforcement, remediation, and reporting. 

Utilizing the VIPERR Framework, an organization can satisfy the three core principles of software supply chain security: depth, breadth and feedback. By following this guide, numerous Fortune 500 enterprises and top federal agencies have transformed their software supply chain security posture and become harder targets for advanced persistent threats. If you’re looking to design and run your own secure software supply chain system, this framework will provide a shortcut to ensure the developed system will be resilient.

How Can I Comprehensively Implement a Strategy for Securing My Software Supply Chain?

There are a number of comprehensive initiatives that define best practices for software supply chain security. They range from the National Institute of Standards and Technology (NIST), with standards such as SP 800-53, SP 800-218, and SP 800-161, to the Cloud Native Computing Foundation (CNCF) and the Open Source Security Foundation (OpenSSF), which have created detailed documentation on their recommendations for achieving a comprehensive supply chain security program, such as the SLSA framework and the Secure Supply Chain Consumption Framework (S2C2F) Project. Be aware that these are not quick and dirty solutions for achieving a “reasonably” secure software supply chain. They are large undertakings for any organization and should be given the resources needed to achieve success.

We don’t have the time to go over each in this blog post but we have broken each down in our complete guide to software supply chain security.

This is the second in a series of blog posts focused on the intersection of APTs and software supply chain security. This blog post highlighted the reasons that APTs focus their efforts on software supply chain exploits and the potential avenues that companies have to combat this threat. Next week we will discuss the Anchore Enterprise solution as a turnkey platform to implement the strategies outlined above.

Anchore Enterprise 5.6: Improved Remediation & Visibility with Account Context Switcher

The Anchore Enterprise 5.6 release features updates to account management that enable administrators to switch context quickly, analyzing and troubleshooting multiple datasets across multiple accounts, and that allow users to share data across accounts easily and safely.

Improve data isolation and performance with accounts and role-based access controls 

Accounts are the highest level object in the Anchore Enterprise system. Each account has its own SBOM assets, users, policies, and reports that are siloed from other accounts in the system. Admins can separate their environment into different accounts based on teams, business units, projects, or products. With accounts, admins can isolate data to meet data security requirements or create workflows that are customized to the data flowing into that account. 

Accounts allow secure data sharing in a single system. On top of that, they enable performance improvements by reducing the total amount of data that is processed when updating records or generating reports.

Each account can have users and roles assigned. Admins create users and set their identification as well as their permissions. Users are assigned roles that may carry custom rights or privileges to the data that can be viewed and managed within the account.

Leveraging account context to improve remediation and visibility  

In Anchore Enterprise an account object is a collection of settings and permissions that allow a user to access, maintain and manage data. Anchore Enterprise is a multi-tenancy system that consists of three logical components (accounts, users and permissions) providing flexibility for users to access and manage their data.

On occasion, users may need to access information that resides outside of their own account. Account context is crucial for investigating or troubleshooting issues and for managing data visibility across teams. Within the Anchore Enterprise UI, the Account Context option enables “context switching” to view the SBOMs, analyses, and reports of different accounts while still retaining the specific user profile.

Standard users are also provided with an additional level and vector of access control.

Adding Account Context in the URL

Until now, the URLs in Anchore did not include account context, which limited the sharing of data across accounts. Different users within the same account, or users who were not part of the same account, had to manually navigate to resources that were shared.

In Anchore 5.6, account context is now included in the URL. This simplifies the workflow for sharing reports among users who have access to shared resources within the same or across different accounts.

Example Scenario

1. Create an account TestAccount and add a user TestUser1

2. Analyze the latest tag for Ubuntu under TestAccount context as username admin.

http://localhost:3000/TestAccount/artifacts/image/docker.io/ubuntu/latest/sha256:d21429c4635332e96a4baae3169e3f02ac8e24e6ae3d89a86002d49a1259a4f7

3. Log out of username admin

4. Paste the URL for the image analysis page above

5. Log in as username TestUser1

6. You will now be directly navigated to the Image analysis page

7. Verify in the top right that you are logged in as username TestUser1

8. If you are trying to access a link without having access to the resource, you will receive an error message on the top right corner of the UI.

Please feel free to review our release notes for other notable updates and bug fixes in Anchore Enterprise 5.6.

How Cisco Umbrella Achieved FedRAMP Compliance in Weeks

Implementing compliance standards can be a daunting task for IT and security teams. The complexity and volume of requirements, increased workload, and resource constraints make it challenging to ensure compliance without overwhelming those responsible. Our latest case study, “How Cisco Umbrella Achieved FedRAMP Compliance in Weeks,” provides a roadmap for overcoming these challenges, leading to a world of streamlined compliance with low cognitive overhead.

Challenges Faced by Cisco Umbrella

Cisco Umbrella for Government, a cloud-native cybersecurity solution tailored for federal, state, and local government agencies, faced a tight deadline to meet FedRAMP vulnerability scanning requirements. They needed to integrate multiple security functions into a single, manageable solution while ensuring comprehensive protection across various environments, including remote work settings. Key challenges included:

  • Meeting all six FedRAMP vulnerability scanning requirements
  • Maintaining and automating STIG & FIPS compliance for Amazon EC2 virtual machines
  • Integrating end-to-end container security across the CI/CD pipeline, Amazon EKS, and Amazon ECS
  • Meeting SBOM requirements for White House Executive Order (EO 14028)

Solutions Implemented

To overcome these challenges, Cisco Umbrella leveraged Anchore Enterprise, a leading software supply chain security platform specializing in container security and vulnerability management. Anchore Enterprise integrated seamlessly with Cisco’s existing infrastructure, providing the container security and vulnerability management capabilities the team needed.

These capabilities enabled Cisco Umbrella to secure their software supply chain, ensuring compliance with FedRAMP, STIG, FIPS, and EO 14028 within a short timeframe.

Remarkable Results

By integrating Anchore Enterprise, Cisco Umbrella achieved:

  • FedRAMP, FIPS, and STIG compliance in weeks versus months
  • Reduced implementation time and improved developer experience
  • Proactive vulnerability detection in development, saving hours of developer time
  • Simplified security data management with a complete SBOM management solution

Download the Case Study Today

Navigating the complexity and volume of compliance requirements can be overwhelming for IT and security teams, especially with increased workloads and resource constraints. Cisco Umbrella’s experience shows that with the right tools, achieving compliance can be streamlined and manageable. Discover how you can implement these strategies in your organization by downloading our case study, “How Cisco Umbrella Achieved FedRAMP Compliance in Weeks,” and take the first step towards streamlined compliance today.

Using the Common Form for SSDF Attestation: What Software Producers Need to Know

The release of the long-awaited Secure Software Development Attestation Form on March 18, 2024 by the Cybersecurity and Infrastructure Security Agency (CISA) increases the focus on cybersecurity compliance for software used by the US government. With the release of the SSDF attestation form, the clock is now ticking for software vendors and federal systems integrators to comply with and attest to secure software development practices.

This initiative is rooted in the cybersecurity challenges highlighted by Executive Order 14028, including the SolarWinds attack and the Colonial Pipeline ransomware attack, which clearly demonstrated the need for a coordinated national response to the emerging threats of a complex software supply chain. Attestation to Secure Software Development Framework (SSDF) requirements using the new Common Form is the most recent, and likely not the final, step towards a more secure software supply chain for both the United States and the world at large. We will take you through the details of what this form means for your organization and how to best approach it.

Overview of the SSDF attestation

SSDF attestation is part of a broader effort derived from the Cybersecurity EO 14028 (formally called “Improving the Nation’s Cybersecurity”). As a result of this EO, the Office of Management and Budget (OMB) issued two memorandums, M-22-18 “Enhancing the Security of the Software Supply Chain through Secure Software Development Practices” and M-23-16 “Update to Memorandum M-22-18”.

These memos require federal agencies to obtain self-attestation forms from software suppliers. Software suppliers have to attest to complying with a subset of the Secure Software Development Framework (SSDF).

Before the publication of the SSDF attestation form, the SSDF was a software development best practices standard published by the National Institute of Standards and Technology (NIST), based on industry best practices like BSIMM and OWASP SAMM. It was a useful resource for organizations that valued security intrinsically and wanted to run secure software development without any external incentives like formal compliance requirements.

Now, the SSDF attestation form requires software providers to self-attest to having met a subset of the SSDF best practices. There are a number of implications to this transition of secure software development from an aspirational standard to a compliance standard, which we will cover below. The most important thing to keep in mind is that while the attestation form doesn’t require a software provider to be formally certified before they can transact with a federal agency, as FedRAMP does, there are retroactive punishments that can be applied in cases of non-compliance.

Who/What is Affected?

  1. Software providers to federal agencies
    • Federal service integrators
    • Independent software vendors
    • Cloud service providers
  2. Federal agencies and DoD programs that use any of the above software providers

Included

  • New software: Any software developed after September 14, 2022
  • Major updates to existing software: A major version change after September 14, 2022
  • Software-as-a-Service (SaaS)

Exclusions

  • First-party software: Software developed in-house by federal agencies. SSDF is still considered a best practice but does not require self-attestation
  • Free and open-source software (FOSS): Even though FOSS components and end-user products are excluded from self-attestation, the SSDF requires that specific controls are in place to protect against software supply chain security breaches

Key Requirements of the Attestation Form

There are two high-level requirements for meeting compliance with the SSDF attestation form:

  1. Meet the technical requirements of the form
    • Note: The NIST SSDF has 19 practices and 42 total tasks. The self-attestation form has 4 categories, which are a subset of the full SSDF
  2. Self-attest to compliance with the subset of SSDF
    • Sign and return the form

Timeline

The timeline for compliance with the SSDF self-attestation form involves two critical dates:

  • Critical software: Jun 11, 2024 (3 months after approval on March 11)
  • All software: Sep 11, 2024 (6 months after approval on March 11)

Implications

Now that CISA has published the final version of the SSDF attestation form there are a number of implications to this transition. One is financial and the other is potentially criminal.

The financial penalty of not attesting to secure software development practices via the form can be significant. Federal agencies are required to stop using the software, potentially impacting your revenue,  and any future agencies you want to work with will ask to see your SSDF attestation form before procurement. Sign the form or miss out on this revenue.

The second penalty is a bit scarier from an individual perspective. An officer of the company has to sign the attestation form to state that they are responsible for attesting to the fact that all of the form’s requirements have been met. Here is the relevant quote from the form:

“Willfully providing false or misleading information may constitute a violation of 18 U.S.C. § 1001, a criminal statute.”

It is also important to realize that this isn’t an unenforceable threat. There is evidence that the DOJ Civil Cyber Fraud Initiative is trying to crack down on government contractors failing to meet cybersecurity requirements by bringing False Claims Act investigations and enforcement actions. This will likely weigh heavily both on the individual who signs the form and on the organization’s choice of who signs it.

Given this, most organizations will likely opt to utilize a third-party assessment organization (3PAO) to sign the form in order to shift liability off of any individual in the organization.

Challenges and Considerations

Do I still have to sign if I have a 3PAO do the technical assessment?

No, as long as the 3PAO is FedRAMP-certified.

What if I can’t comply in time?

You can draft a plan of action and milestones (POA&M) to cover the interim while you address the gaps between your current system and the system required by the attestation form. If the agency is satisfied with the POA&M, it can continue to use your software, but it has to request either an extension of the deadline from OMB or a waiver in order to do so.

Can only the CEO and COO sign the form?

The wording in the published draft form required either the CEO or COO, but new language was added to the final form that allows a different company employee to sign the attestation form.

Conclusion

Cybersecurity compliance is a journey, not a destination. SSDF attestation is the next step in that journey for secure software development. With the release of the SSDF attestation form, the SSDF standard is now transformed from a recommendation into a requirement. Given the overall trend of cybersecurity modernization that was kickstarted with FISMA in 2002, it would be prudent to assume that this SSDF attestation form is an intermediate step before the requirements become a hard gate, where compliance will have to be demonstrated as a prerequisite to utilizing the software.

If you’re interested in a deep dive into what is technically required to meet the requirements of the SSDF attestation form, read all of the nitty-gritty details in our eBook, “SSDF Attestation 101: A Practical Guide for Software Producers”.

If you’re looking for a solution to help you achieve the technical requirements of SSDF attestation quickly, take a look at Anchore Enterprise. We have helped hundreds of enterprises achieve SSDF attestation in days versus months with our automated compliance platform.

With Great Power Comes Great Responsibility: APTs & Software Supply Chain Security

Note: This is a multi-part series primer on the intersection of advanced persistent threats (APTs) and software supply chain security (SSCS). This blog post is the first in the series. We will update this blog post with links to the additional parts of the series as they are published.
• Part 1 (This blog post)
• Part 2
• Part 3

In the realm of cybersecurity, the convergence of Advanced Persistent Threats (APTs) and software supply chain security presents a uniquely daunting challenge for organizations. APTs, characterized by their sophisticated, state-sponsored or well-funded nature, focus on stealthy, long-term data theft, espionage, or sabotage, targeting specific entities. Their effectiveness is amplified by the asymmetric power dynamics of a well-funded attacker versus a resource-constrained security team.

Modern supply chains inadvertently magnify the impact of APTs due to the complex and interconnected dependency network of software and hardware components. The exploitation of this weakness by APTs not only compromises the targeted organizations but also poses a systemic risk to all users of the compromised downstream components. The infamous SolarWinds exploit exemplifies the far-reaching consequences of such breaches.

This landscape underscores the necessity for an integrated approach to cybersecurity, emphasizing depth, breadth, and feedback to create a holistic software supply chain security program that can withstand even adversaries as motivated and well-resourced as APTs. Before we jump into how to create a secure software supply chain that can resist APTs, let’s understand our adversary a bit better first.

Know Your Adversary: Advanced Persistent Threats (APTs)

What is an Advanced Persistent Threat (APT)?

An Advanced Persistent Threat (APT) is a sophisticated, prolonged cyberattack, usually state-sponsored or executed by well-funded criminal groups, targeting specific organizations or nations. Characterized by advanced techniques, APTs exploit zero-day vulnerabilities and custom malware, focusing on stealth and long-term data theft, espionage, or sabotage. Unlike broad, indiscriminate cyber threats, APTs are highly targeted, involving extensive research into the victim’s vulnerabilities and tailored attack strategies.

APTs are marked by their persistence, maintaining a foothold in a target’s network for extended periods, often months or years, to continually gather information. They are designed to evade detection, blending in with regular network traffic, and using methods like encryption and log deletion. Defending against APTs requires robust, advanced security measures, continuous monitoring, and a proactive cybersecurity approach, often necessitating collaboration with cybersecurity experts and authorities.

High-Profile APT Example: Operation Triangulation

The recent Operation Triangulation campaign disclosed by Kaspersky researchers is an extraordinary example of an APT in both its sophistication and depth. The campaign made use of four separate zero-day vulnerabilities, utilized a highly targeted approach towards specific individuals at Kaspersky, combined a multi-phase attack pattern, and persisted over a four-year period. Its complexity, the significant resources it implied (possibly from a nation-state), and the stealthy, methodical progression of the attack align closely with the hallmarks of APTs. Famed security researcher Bruce Schneier, writing on his blog, Schneier on Security, wasn’t able to contain his surprise upon reading the details of the campaign: “[t]his is nation-state stuff, absolutely crazy in its sophistication.”

What is the impact of APTs on organizations?

Ignoring the threat posed by Advanced Persistent Threats (APTs) can lead to significant impact for organizations, including extensive data breaches and severe financial losses. These sophisticated attacks can disrupt operations, damage reputations, and, in cases involving government targets, even compromise national security. APTs enable long-term espionage and strategic disadvantage due to their persistent nature. Thus, overlooking APTs leaves organizations exposed to continuous, sophisticated cyber espionage and the multifaceted damages that follow.

Now that we have a good grasp on the threat of APTs, we turn our attention to the world of software supply chain security to understand the unique features of this landscape.

Setting the Stage: Software Supply Chain Security

What is Software Supply Chain Security?

Software supply chain security is focused on protecting the integrity of software through its development and distribution. Specifically it aims to prevent the introduction of malicious code into software that is utilized as components to build widely-used software services.

The open source software ecosystem is a complex supply chain that solves the problem of redundancy of effort. By creating a single open source version of a web server and distributing it, new companies that want to operate a business on the internet can re-use the generic open source web server instead of having to build their own before they can do business. These new companies can instead focus their efforts on building new, bespoke software on top of a web server that provides new, useful functions for users that were previously unserved. This is typically referred to as composable software building blocks, and it is one of the most important outcomes of the open source software movement.

But as they say, “there are no free lunches”. With the incredible productivity boon that open source software has created comes responsibility.

What is the Key Vulnerability of the Modern Software Supply Chain Ecosystem?

The key vulnerability in the modern software supply chain is the structure of how software components are re-used, each with its own set of dependencies, creating a complex web of interlinked parts. This intricate dependency network can lead to significant security risks if even a single component is compromised, as vulnerabilities can cascade throughout the entire network. This interconnected structure makes it challenging to ensure comprehensive security, as a flaw in any part of the supply chain can affect the entire system.

Modern software is particularly vulnerable to software supply chain attacks because 70-90% of a modern application is made up of open source software components, with the remaining 10-30% being the proprietary code that implements company-specific features. This means that by breaching popular open source software frameworks and libraries, an attacker can amplify the blast radius of their attack to effectively reach significant portions of internet-based services with a single attack.

If you’re looking for a deeper understanding of software supply chain security we have written a comprehensive guide to walk you through the topic in full.

High-Profile Software Supply Chain Exploit Example: SolarWinds

In one of the most sophisticated supply chain attacks, malicious actors compromised the update mechanism of SolarWinds’ Orion software. This breach allowed the attackers to distribute malware to approximately 18,000 customers. The attack had far-reaching consequences, affecting numerous government agencies, private companies, and critical infrastructure.

Looking at the example of SolarWinds, the lesson we should take away is not to put our focus solely on prevention. APTs have a wealth of resources to draw upon. Instead, the focus should be on monitoring the software we consume, build, and ship for unexpected changes. Modern software supply chains come with a great deal of responsibility. The software we use and ship needs to be understood and monitored.

This is the first in a series of blog posts focused on the intersection of APTs and software supply chain security. This blog post highlighted the contextual background to set the stage for the unique consequences of these two larger forces. Next week, we will discuss the implications of the collision of these two spheres in the second blog post in this series.

Anchore’s June Line-Up: Essential Events for Software Supply Chain Security and DevSecOps Enthusiasts

Summer is beginning to ramp up, but before we all check out for the holidays, Anchore has a sizzling hot line-up of events to keep you engaged and informed. This June, we are excited to host and participate in a number of events that cater to the DevSecOps crowd and the public sector. From insightful webinars to hands-on workshops and major conferences, there’s something for everyone looking to enhance their knowledge and connect with industry leaders. Join us at these events to learn more about how we are driving innovation in the software supply chain security industry.

WEBINAR: How the US Navy is enabling software delivery from lab to fleet

Date: Jun 4, 2024

The US Navy’s DevSecOps platform, Party Barge, has revolutionized feature delivery by significantly reducing onboarding time from 5 weeks to just 1 day. This improvement enhances developer experience and productivity through actionable findings and fewer false positives, while maintaining high security standards with inherent policy enforcement and Authorization to Operate (ATO). As a result, development teams can efficiently ship applications that have achieved cyber-readiness for Navy Authorizing Officials (AOs).

In an upcoming webinar, Sigma Defense and Anchore will provide an in-depth look at the secure pipeline automation and security artifacts that expedite application ATO and production timelines. Topics will include strategies for removing silos in DevSecOps, building efficient development pipeline roles and component templates, delivering critical security artifacts for ATO (such as SBOMs, vulnerability reports, and policy evidence), and streamlining operations with automated policy checks on container images.

WORKSHOP: VIPERR — Actionable Framework for Software Supply Chain Security

Date: Jun 17, 2024 from 8:30am – 2:00pm ET

Location: Carahsoft office in Reston, VA

Anchore, in partnership with Carahsoft, is offering an exclusive in-person workshop to walk security practitioners through the principles of the VIPERR framework. Learn the framework hands-on from the team that originally developed the industry leading software supply chain security framework. In case you’re not familiar, the VIPERR framework enhances software supply chain security by enabling teams to evaluate and improve their security posture. It offers a structured approach to meet popular compliance standards. VIPERR stands for visibility, inspection, policy enforcement, remediation, and reporting, focusing on actionable strategies to bolster supply chain security.

The workshop covers building a software bill of materials (SBOM) for visibility, performing security checks for vulnerabilities and malware during inspection, enforcing compliance with both external and internal standards, and providing recommendations and automation for quick issue remediation. Additionally, timely reporting at any development stage is emphasized, along with a special topic on achieving STIG compliance.

EVENT: Carahsoft DevSecOps Conference 2024

Date: Jun 18, 2024

Location: The Ronald Reagan Building and International Trade Center in Washington, DC

If you’re planning to be at the show, our team is looking forward to meeting you.  You can book a demo session with us in advance!

On top of offering the VIPERR workshop, the Anchore team will be attending Carahsoft’s 2nd annual DevSecOps Conference in Washington, DC, a comprehensive forum designed to address the pressing technological, security, and innovation challenges faced by government agencies today. The event aims to explore innovative approaches such as DoD software factories, which drive efficiency and enhance the delivery of citizen-centric services, and DevSecOps, which integrates security into the software development lifecycle to combat evolving cybersecurity threats. Through a series of panels and discussions, attendees will gain valuable knowledge on how to leverage these cutting-edge strategies to improve their operations and service delivery.

EVENT: AWS Summit Washington, DC

Dates:  June 26-27, 2024

Location: Walter E. Washington Convention Center in Washington, DC

If you’re planning to be at the show, our team is looking forward to meeting you.  You can book a demo session with us in advance!

To round out June, Anchore will also be attending AWS Summit Washington, DC. The event highlights how AWS partners can help public sector organizations meet the needs of federal agencies. Anchore is an AWS Public Sector Partner and a graduate of the AWS ISV Accelerate program.

See how Anchore helped Cisco Umbrella for Government achieve FedRAMP compliance by reading the co-authored blog post on the AWS Partner Network (APN) Blog. Or better yet, drop by our booth and the team can give you a live demo of the product.

VIRTUAL EVENT: Life after the xz utils backdoor hack with Josh Bressers

Date: Wednesday, June 5, from 12:35 PM – 1:20 PM EDT

The xz utils hack was a significant breach that profoundly undermined trust within the open source community. The discovery of the backdoor revealed vulnerabilities in the software supply chain. As both a member of the open source community and a solution provider in the software supply chain security field, we at Anchore have strong opinions about xz specifically and open source security generally. Anchore’s VP of Security, Josh Bressers, will be speaking publicly about this topic at Upstream 2024.

Be sure to catch the live stream of “Life after the xz utils backdoor hack,” a panel discussion featuring Josh Bressers. The panel will cover the implications of the recent xz utils backdoor hack and how the attack deeply impacted trust within the open source community. In keeping with the Upstream 2024 theme of “Unusual Ideas to Solve the Usual Problems”, Josh will be presenting the “unusual” solution that Anchore has developed to keep these types of hacks from impacting the industry. The discussion will include insights from industry experts such as Shaun Martin of BlackIce, Jordan Harband, prolific JavaScript maintainer, Rachel Stephens from RedMonk, and Terrence Fischer from Boeing.

Wrap-Up

Don’t miss out on these exciting opportunities to connect with Anchore and learn about the latest advancements in software supply chain security and DevSecOps. Whether you join us for a webinar, participate in our in-person VIPERR workshop, or visit us at one of the major conferences, you’ll gain valuable insights and practical knowledge to enhance your organization’s security posture. We’re looking forward to engaging with you and helping you navigate the evolving digital landscape. See you in June!

Also, if you want to stay up-to-date on all of the events that Anchore hosts or participates in be sure to bookmark our events page and check back often!

Navigating the Updates to cATO: Critical Changes & Practical Advice for DoD Programs

On April 11, the US Department of Defense (DoD)’s Chief Information Officer (CIO) released the DevSecOps Continuous Authorization Implementation Guide, marking the next step in the evolution of the DoD’s efforts to modernize its security and compliance ecosystem. This guide is part of a larger trend of compliance modernization that is transforming the US public sector and the global public sector as a whole. It aims to streamline and enhance the processes for achieving continuous authorization to operate (cATO), reflecting a continued push to shift from traditional, point-in-time authorizations to operate (ATOs) to a more dynamic and ongoing compliance model.

The new guide introduces several significant updates, including the introduction of specific security and development metrics required to achieve cATO, comprehensive evaluation criteria, practical advice on how to meet cATO requirements, and a special emphasis on software supply chain security via software bills of materials (SBOMs).

We break down the updates that are important to highlight if you’re already familiar with the cATO process. If you’re looking for a primer on cATO to get yourself up to speed, read our original blog post or click below to watch our webinar on-demand.

Continuous Authorization Metrics

A new addition to the corpus of information on cATO is the introduction of specific security and software development metrics that are required to be continuously monitored. Many of these come from the private sector DevSecOps best practices that have been honed by organizations at the cutting edge of this field, such as Google, Microsoft, Facebook and Amazon.

We’ve outlined the major ones below.

  1. Mean Time to Patch Vulnerabilities:
    • Description: Average time between the identification of a vulnerability in the DevSecOps Platform (DSOP) or application and the successful production deployment of a patch.
    • Focus: Emphasis on vulnerabilities with high to moderate impact on the application or mission.
  2. Trend Metrics:
    • Description: Metrics associated with security guardrails and control gates PASS/FAIL ratio over time.
    • Focus: Demonstrate that development teams are writing more secure code with each new sprint and that the system’s security posture is continuously improving.
  3. Feedback Communication Frequency:
    • Description: Metrics to ensure feedback loops are in place, being used, and trends showing improvement in security posture.
  4. Effectiveness of Mitigations:
    • Description: Metrics associated with the continued effectiveness of mitigations against a changing threat landscape.
  5. Security Posture Dashboard Metrics:
    • Description: Metrics showing the stage of application and its security posture in the context of risk tolerances, security control compliance, and security control effectiveness results.
  6. Container Metrics:
    • Description: Measure the age of containers against the number of times they have been used in a subsystem and the residual risk based on the aggregate set of open security issues.
  7. Test Metrics:
    • Description: Percentage of test coverage passed, percentage of passing functional tests, count of various severity level findings, percentage of threat actor actions mitigated, security findings compared to risk tolerance, and percentage of passing security control compliance.

The common thread among the required metrics is the ability to quickly understand whether the overall security of the application is improving. If these metrics aren’t trending in the right direction, it is a sign that something within the system is out of balance and in need of attention.
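To make the first of these metrics concrete, here is a minimal sketch of how mean time to patch could be computed from a set of vulnerability findings. The record fields (identified_at, patched_at, severity) and the severity filter are illustrative assumptions, not a schema prescribed by the guide.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean
from typing import Optional

@dataclass
class Finding:
    identified_at: datetime         # when the vulnerability was identified in the DSOP/application
    patched_at: Optional[datetime]  # when the patch was successfully deployed to production
    severity: str                   # e.g. "high", "moderate", "low"

def mean_time_to_patch(findings: list[Finding],
                       severities: tuple[str, ...] = ("high", "moderate")) -> float:
    """Average days between identification and production patch for the given severities."""
    deltas = [
        (f.patched_at - f.identified_at).total_seconds() / 86400
        for f in findings
        if f.severity in severities and f.patched_at is not None
    ]
    return mean(deltas) if deltas else 0.0

# Example: two high-severity findings patched in 3 and 7 days -> MTTP of 5.0 days
findings = [
    Finding(datetime(2024, 5, 1), datetime(2024, 5, 4), "high"),
    Finding(datetime(2024, 5, 2), datetime(2024, 5, 9), "high"),
]
print(f"Mean time to patch: {mean_time_to_patch(findings):.1f} days")
```

Tracking this number per sprint is one simple way to surface the trend that the guide asks programs to demonstrate.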

Comprehensive and Detailed Evaluation Criteria

Tucked away in Appendix B, “Requirements,” is a detailed table that spells out the individual requirements that need to be met in order to achieve a cATO. This table is meant to improve the cATO process so that the individuals in a program who are implementing the requirements know the criteria they will be evaluated against. The goal is to reduce the amount of back-and-forth between the program and the Authorizing Official (AO) who is evaluating them.

Practical Implementation Advice

The ecosystem for DSOPs has evolved significantly since cATO was first announced in February 2022. Over the past 2+ years, a number of early adopters, such as Platform One, have blazed a trail and learned all of the painful lessons in order to smooth the path for organizations that are now looking to modernize their development practices. The advice in the implementation guide is a high-signal, low-noise distillation of these hard-won lessons.

DevSecOps Platform (DSOP) Advice

If you’re more interested in writing software than operating a DSOP, then you’ll want to focus your attention on pre-existing DSOPs, commonly called DoD software factories.

We have written both a primer for understanding DoD software factories and an index of additional resources that can quickly direct you to deep dives on the specific topics you’re interested in.

If you love to get your hands dirty and would rather have full control over your development environment, just be aware that the guide specifically recommends against this approach:

Build a new DSOP using hardened components (this is the most time-consuming approach and should be avoided if possible).

DevSecOps Culture Advice

While the DevSecOps culture and process advice is well-known in the private sector, it is still important to emphasize in the federal context, where many programs are only now transitioning to the modern software development paradigm.

  1. Bring in the security team at the start of development and keep them involved throughout.
  2. Create secure agile processes that support the continued delivery of value without introducing unnecessary risk.

Continuous Monitoring (ConMon) Advice

Ensure that all environments are continuously monitored (e.g., development, test and production). Utilize the security data collected from these environments to power and inform the thresholds and triggers for active incident response. ConMon and Active Cyber Defense (ACD) are separate pillars of cATO, but they need to be integrated so that information flows to the systems that can make best use of it. It is this integrated approach that delivers on the promise of significantly improved security and risk outcomes.
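To illustrate the kind of integration described above, here is a minimal sketch of a ConMon-to-ACD hand-off: open finding counts collected from an environment are compared against risk tolerances, and an alert is pushed to an incident workflow when a tolerance is exceeded. The tolerance values and the webhook URL are placeholders, not values from the implementation guide.

```python
import json
import urllib.request

# Illustrative risk tolerances per severity (placeholder values, not from the guide)
RISK_TOLERANCE = {"critical": 0, "high": 5, "medium": 25}

# Hypothetical incident-management webhook monitored by the ACD team
INCIDENT_WEBHOOK = "https://incident.example.mil/api/v1/alerts"

def check_thresholds(open_findings: dict[str, int]) -> dict[str, int]:
    """Return the severities whose open-finding counts exceed the risk tolerance."""
    return {
        sev: count
        for sev, count in open_findings.items()
        if count > RISK_TOLERANCE.get(sev, 0)
    }

def trigger_incident(breaches: dict[str, int], environment: str) -> None:
    """Push a breach notification so the incident response workflow can engage."""
    payload = json.dumps({"environment": environment, "breaches": breaches}).encode()
    req = urllib.request.Request(
        INCIDENT_WEBHOOK, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)

# Example: counts pulled from continuous monitoring of the production environment
findings = {"critical": 1, "high": 3, "medium": 40}
breaches = check_thresholds(findings)
if breaches:
    trigger_incident(breaches, environment="production")
```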

Active Cyber Defense (ACD) Advice

Both a Security Operations Center (SOC) and an external Cybersecurity Service Provider (CSSP) are needed in order to achieve the ACD pillar of cATO. On top of that, there also has to be a detailed incident response plan and personnel trained on it. While cATO’s goal is to automate as much of the security and incident response system as possible to reduce the burden of manual intervention, humans in the loop are still an important component for tuning the system and reacting with appropriate urgency.

Software Supply Chain Security (SSCS) Advice

The new implementation guide is very clear that a DSOP must create SBOMs for itself and for any applications that pass through it. This is part of a mega-trend that has been sweeping over the software supply chain security industry for the past decade. It is now the consensus that SBOMs are the best abstraction and practice for securing software development in the age of composable and complex software.
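As a concrete example of what SBOM generation in a pipeline can look like, the sketch below shells out to Syft, Anchore’s open source SBOM tool, to produce an SPDX JSON document for a container image. The guide doesn’t mandate a particular tool, and the registry and image name here are placeholders.

```python
import json
import subprocess

def generate_sbom(image: str, output_path: str) -> dict:
    """Generate an SPDX JSON SBOM for a container image using the Syft CLI."""
    result = subprocess.run(
        ["syft", image, "-o", "spdx-json"],
        check=True,
        capture_output=True,
        text=True,
    )
    sbom = json.loads(result.stdout)
    with open(output_path, "w") as f:
        json.dump(sbom, f, indent=2)
    return sbom

# Placeholder image name; in a DSOP this would run for every image the pipeline builds
sbom = generate_sbom("registry1.example.mil/myapp:1.0.0", "myapp-1.0.0.spdx.json")
print(f"SBOM contains {len(sbom.get('packages', []))} packages")
```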

The 3 (+1) Pillars of cATO

While the 3 pillars of cATO and the recommendation for SBOMs as the preferred software supply chain security tool were called out in the original cATO memo, the recently published implementation guide again emphasizes the importance of the 3 (+1) pillars of cATO.

The guide quotes directly from the memo:

In order to prevent any combination of human errors, supply chain interdictions, unintended code, and support the creation of a software bill of materials (SBOM), the adoption of an approved software platform and development pipeline(s) are critical.

This is a continuation of the DoD specifically, and the federal government generally, highlighting the importance of software supply chain security and software bills of materials (SBOMs) as “critical” for achieving the 3 pillars of cATO. This is why Anchore refers to them as the “3 (+1) Pillars of cATO”:

  1. Continuous Monitoring (ConMon)
  2. Active Cyber Defense (ACD)
  3. DevSecOps (DSO) Reference Design
  4. Secure Software Supply Chain (SSSC)

Wrap-up

The release of the new DevSecOps Continuous Authorization Implementation Guide marks a significant advancement in the DoD’s approach to cybersecurity and compliance. With a focus on transitioning from traditional point-in-time Authorizations to Operate (ATOs) to a continuous authorization model, the guide introduces comprehensive updates designed to streamline the cATO process. The goal is to ease the burden of the process and help more programs modernize their security and compliance posture.

If you’re interested in learning more about the benefits and best practices of utilizing a DSOP (i.e., a DoD software factory) to transform cATO compliance into a “switch flip,” be sure to pick up a copy of our “DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images” white paper. Click below to download.

A Guide to Air Gapping: Balancing Security and Efficiency in Classified Environments

Not every organization needs to protect its data from spies who can infiltrate Pentagon-level security networks with printable masks, but more organizations than you might think utilize air gapping to protect their most classified data. This blog dives into the concept of air gapping, its benefits, and its drawbacks. Finally, we will highlight how Anchore Enterprise/Federal can be deployed into an air gapped network, allowing organizations that require this level of security to get the best of both worlds: the extraordinary security of an air gapped network and the automation of a Cloud-Native software composition analysis tool to protect their software supply chain.

What is an air gap?

An air gap is exactly what it sounds like: a literal gap filled with air between a network and the greater internet. It is a network security practice where a computer (or network) is physically isolated from any external networks. This isolation is achieved by ensuring that the system has no physical or wireless connections to other networks, creating a secure environment that is highly resistant to external cyber threats.

By maintaining this separation, air gapped systems protect sensitive data and critical operations from potential intrusions, malware, and other forms of cyber attacks. This method is commonly used in high-security environments such as government agencies, financial institutions, and critical infrastructure facilities to safeguard highly confidential and mission-critical information.

If you count yourself as part of the defense industrial base (DIB), then you’re likely very familiar with IT systems that have an air-gapped component. Typically, highly classified data storage systems require air-gapping as part of their defense-in-depth strategy.

Benefits of air gapping

The primary benefit of air gapping is that it eliminates an entire class of security threats, which brings a significant reduction in risk to the system and the data it processes.

Beyond that, there are also a number of secondary benefits that come from running an air gapped network. The first is that it gives the DevSecOps team running the system complete control over its security and architecture. Also, any software that runs on an air gapped network inherits both the security and the compliance of the network. For 3rd-party software where the security of the software is in question, being able to run it in a fully isolated environment reduces the risk associated with this ambiguity. Similar to how anti-virus software creates sandboxes to safely examine potentially malicious files, an air gapped network creates a physical sandbox that both protects the isolated system from remote threats and prevents internal data from being pushed out to remote systems.

Drawbacks of air gapping

As with most security solutions, there is always a tradeoff between security and convenience, and air gapping is no different. While air gapping provides a high level of security by isolating systems from external networks, it comes at the cost of significant inconvenience. For instance, accessing and transferring data on air gapped systems requires physical presence and manual procedures, such as using USB drives or other removable media. This can be time-consuming and labor-intensive, especially in environments where data needs to be frequently updated or accessed by multiple users.

Additionally, the lack of network connectivity means that software updates, patches, and system maintenance tasks cannot be performed remotely or automatically. This increases the complexity of maintaining the system and ensuring it remains up-to-date with the latest security measures.

How Anchore enables air gapping for the DoD and DIB

As the leading Cloud-Native software composition analysis (SCA) platform, Anchore Enterprise/Federal offers an on-prem deployment model that integrates seamlessly with air gapped networks. This is a proven model with countless deployments into IL4 to IL6 (air gapped and classified) environments. Anchore supports public sector customers like the US Department of Defense (DoD), the North Atlantic Treaty Organization (NATO), the US Air Force, the US Navy, US Customs and Border Protection, the Australian Government Department of Defence, and more.

Architectural Overview

In order to deploy Anchore Enterprise or Federal in an air gapped environment, a few architectural decisions need to be made. At a very basic level, two deployments of Anchore need to be provisioned.

The deployment in the isolated environment is deployed as normal, with some slight modifications. When Anchore is deployed normally, the feed service reaches out to a dozen or so specified endpoints to gather and normalize vulnerability data, compiles it into a single feed dataset, and stores it in the feeds database, which can run in the cluster, on an external instance, or on a managed database if configured.

When deploying in an air gapped environment, it’s necessary to set Anchore to run in “apiOnly” mode. This prevents the feed service from making calls to the internet that would inevitably time out in an isolated environment.

A second, skeleton deployment of Anchore needs to run in a connected environment to sync the feeds. It doesn’t need to spin up any of the services that scan images or generate policy results, so it can run on a smaller instance. Its only job is to reach out to the internet and build the feeds database.

The only major requirement for this connected deployment is that it runs the same version of PostgreSQL as the disconnected deployment in order to ensure complete compatibility.
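As a quick sanity check before moving data between the two deployments, something like the sketch below can be run against each feeds database to confirm the PostgreSQL server versions match. It assumes the psycopg2 driver; the connection details are placeholders for your own environment.

```python
import psycopg2

def postgres_version(host: str, dbname: str, user: str, password: str) -> str:
    """Return the server version string of a PostgreSQL instance."""
    with psycopg2.connect(host=host, dbname=dbname, user=user, password=password) as conn:
        with conn.cursor() as cur:
            cur.execute("SHOW server_version;")
            return cur.fetchone()[0]

# Run once against the connected feeds database and once against the
# disconnected one (placeholder connection details), then compare the output.
print(postgres_version("feeds-db.internal", "anchore-feeds", "anchore", "example-password"))
```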

Workflow

Once the two deployments are provisioned, the general workflow will be the following:

First, allow the feeds to sync on the connected environment. Once the feeds are synced, dump out the entire feeds database and transfer it to the disconnected environment using whatever transfer method is approved for your environment.

When the file is available on the high side, transfer it to the feeds database instance that Anchore uses. Once the transfer is finished, perform a PostgreSQL restore and the feeds will be available.
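Here is a minimal sketch of the dump-and-restore step using standard PostgreSQL tooling, orchestrated from Python. The hostnames, database name, and user are placeholders (authentication is assumed to be handled via PGPASSWORD or .pgpass), and the physical transfer from the low side to the high side is intentionally out of scope.

```python
import subprocess

FEEDS_DB = "anchore-feeds"  # placeholder feeds database name

def dump_feeds(host: str, user: str, outfile: str) -> None:
    """Run on the connected (low-side) deployment: dump the feeds database
    in PostgreSQL custom format so it can be restored with pg_restore."""
    subprocess.run(
        ["pg_dump", "-Fc", "-h", host, "-U", user, "-d", FEEDS_DB, "-f", outfile],
        check=True,
    )

def restore_feeds(host: str, user: str, infile: str) -> None:
    """Run on the disconnected (high-side) deployment after the dump file
    has been transferred: restore it into the feeds database Anchore uses."""
    subprocess.run(
        ["pg_restore", "--clean", "--if-exists", "-h", host, "-U", user,
         "-d", FEEDS_DB, infile],
        check=True,
    )

# Low side:
# dump_feeds("feeds-db.low.internal", "anchore", "/tmp/feeds.dump")
# High side (after the file has been transferred):
# restore_feeds("feeds-db.high.internal", "anchore", "/transfer/feeds.dump")
```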

Automation Process

To mimic how Anchore syncs feeds in a connected environment, many people choose to automate some or all of the workflow described previously. A cronjob can be set up to run the backup along with a push to an automated cross domain solution like those available in AWS GovCloud. From there, the air gapped deployment can be scheduled to look at that location and perform the feed restore on a regular cadence.
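For the connected side, the sketch below shows one way such a scheduled job might look: dump the feeds database and stage the file where a cross domain solution can pick it up, here an S3 bucket via boto3. The bucket, hostnames, and credentials handling are assumptions; your cross domain solution dictates the actual hand-off mechanism.

```python
import datetime
import subprocess

import boto3  # assumes AWS credentials are configured in the environment

STAGING_BUCKET = "example-feeds-staging"  # placeholder hand-off bucket for the CDS

def sync_and_stage() -> None:
    """Intended to run from a cronjob on the connected deployment:
    dump the feeds database and stage it for the cross domain transfer."""
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d")
    outfile = f"/tmp/feeds-{stamp}.dump"

    # Dump the feeds database (same pg_dump invocation as the sketch above)
    subprocess.run(
        ["pg_dump", "-Fc", "-h", "feeds-db.low.internal", "-U", "anchore",
         "-d", "anchore-feeds", "-f", outfile],
        check=True,
    )

    # Stage the dump where the cross domain solution picks it up
    boto3.client("s3").upload_file(outfile, STAGING_BUCKET, f"feeds/feeds-{stamp}.dump")

if __name__ == "__main__":
    sync_and_stage()
```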

Wrap-Up

While not every organization counts the Impossible Missions Force (IMF) as an adversary, the practice of air gapping remains a vital strategy for many entities that handle highly sensitive information. By deploying Anchore Enterprise/Federal in an air gapped network, organizations can achieve an optimal balance between the robust security of air gapping and the efficiency of automated software composition analysis. This combination ensures that even the most secure environments can benefit from cutting-edge software supply chain security.

Air gapping is only one of a catalog of security practices that are necessary for systems that handle classified government data. If you’re interested in learning more about the other requirements, check out our white paper on best practices for container images in DoD-grade environments.