Introduction to the DoD Software Factory

In the rapidly evolving landscape of national defense and cybersecurity, the concept of a Department of Defense (DoD) software factory has emerged as a cornerstone of innovation and security. These software factories represent an integration of the principles and practices found within the DevSecOps movement, tailored to meet the unique security requirements of the DoD and Defense Industrial Base (DIB). 

By fostering an environment that emphasizes continuous monitoring, automation, and cyber resilience, DoD Software Factories are at the forefront of the United States Government’s push towards modernizing its software and cybersecurity capabilities. This initiative not only aims to enhance the velocity of software development but also ensures that these advancements are achieved without compromising on security, even against the backdrop of an increasingly sophisticated threat landscape.

Building and running a DoD software factory is so central to the future of software development that “Establish a Software Factory” is one of the explicitly named plays from the DoD DevSecOps Playbook. On top of that, the compliance capstone of the authorization to operate (ATO), or its DevSecOps-infused cousin the continuous ATO (cATO), effectively requires a software factory in order to meet the requirements of the standard. In this blog post, we’ll break down the concept of a DoD software factory and give a high-level overview of the components that make up one.

What is a DoD software factory?

A Department of Defense (DoD) Software Factory is a software development pipeline that embodies the principles and tools of the larger DevSecOps movement with a few choice modifications that conform to the extremely high threat profile of the DoD and DIB. It is part of the larger software and cybersecurity modernization trend that has been a central focus for the United States Government in the last two decades.

The goal of a DoD Software Factory is to create an ecosystem that enables continuous delivery of secure software that meets the needs of end-users while ensuring cyber resilience (a DoD catchphrase that emphasizes the transition from point-in-time security compliance to continuous security compliance). In other words, the goal is to leverage automation of software security tasks in order to fulfill the promise of the DevSecOps movement to increase the velocity of software development.

What is an example of a DoD software factory?

Platform One is the canonical example of a DoD software factory. Run by the US Air Force, it offers a comprehensive portfolio of software development tools and services. It has come to prominence due to its hosted services like Repo One for source code hosting and collaborative development, Big Bang for an end-to-end DevSecOps CI/CD platform, and Iron Bank for centralized container storage (i.e., a container registry). These services have led the way in demonstrating that the principles of DevSecOps can be integrated into mission critical systems while still preserving the highest levels of security to protect the most classified information.

If you’re interested in learning more about how Platform One has unlocked the productivity bonus of DevSecOps while still maintaining DoD levels of security, watch our webinar with Camdon Cady, Chief of Operations and Chief Technology Officer of Platform One.

Who does it apply to?

Federal Service Integrators (FSI)

Any organization that works with the DoD as a federal service integrator will want to be intimately familiar with DoD software factories, as they will either have to build on top of existing software factories or, if the mission or program wants full control over its software factory, be able to build their own for the agency.

Department of Defense (DoD) Program

Any Department of Defense (DoD) program will need to be well-versed in DoD software factories, as all of their software and systems will be required to run on a software factory and to both reach and maintain a cATO.

What are the components of a DoD Software Factory?

A DoD software factory is composed of both high-level principles and specific technologies that meet those principles. Below is a list of some of the most significant principles of a DoD software factory:

Principles of DevSecOps embedded into a DoD software factory

  1. Break Down Organizational Silos
    • This principle is borrowed directly from the DevSecOps movement; specifically, the DoD aims to integrate software development, test, deployment, security and operations into a single culture within the organization.
  2. Open Source and Reusable Code
    • Composable software building blocks are another DevSecOps principle. They increase productivity and reduce the security implementation errors that arise when developers write security-sensitive software packages outside their area of expertise.
  3. Immutable Infrastructure as Code (IaC)
    • This principle focuses on treating the infrastructure that software runs on as ephemeral and managed via configuration rather than manual systems operations. Enabled by cloud computing (i.e., hardware virtualization), this principle increases the security of the underlying infrastructure through templated, secure-by-design defaults and improves reliability, since all infrastructure has to be designed to fail at any moment.
  4. Microservices Architecture (via Containers)
    • Microservices are a design pattern that creates smaller software services that can be built and scaled independently of each other. This principle allows for less complex software that only performs a limited set of behaviors.
  5. Shift Left
    • Shift left is the DevSecOps principle that re-frames when and how security testing is done in the software development lifecycle. The goal is to begin security testing while software is being written and tested rather than after the software is “complete”. This prevents insecure practices from cascading into significant issues right as software is ready to be deployed.
  6. Continuous Improvement through Key Capabilities
    • The principle of continuous improvement is a primary characteristic of the DevSecOps ethos, but the specific key capabilities defined in the DoD DevSecOps Playbook are what make this unique to the DoD.
  7. Define a DevSecOps Pipeline
    • A DevSecOps pipeline is the system that utilizes all of the preceding principles in order to create the continuously improving security outcomes that are the goal of the DoD software factory program.
  8. Cyber Resilience
    • Cyber resiliency is the goal of a DoD software factory. It is defined as “the ability to anticipate, withstand, recover from, and adapt to adverse conditions, stresses, attacks, or compromises on the systems that include cyber resources.”

Common tools and systems of a DoD software factory

  1. Code Repository (e.g., Repo One)
    • Where software source code is stored, managed and collaborated on.
  2. CI/CD Build Pipeline (e.g., Big Bang)
    • The system that automates the creation of software build artifacts, tests the software and packages the software for deployment.
  3. Artifact Repository (e.g., Iron Bank)
    • The storage system for software components used in development and the finished software artifacts that are produced from the build process.
  4. Runtime Orchestration and Platform (e.g., Big Bang)
    • The deployment system that hosts the software artifacts pulled from the registry and keeps the software running so that users can access it.

How do I meet the security requirements for a DoD Software Factory? (Best Practices)

Use a pre-existing software factory

The benefit of using a pre-existing DoD software factory is the same as using a public cloud provider; someone else manages the infrastructure and systems. What you lose is the ability to highly customize your infrastructure to your specific needs. What you gain is the simplicity of only having to write software and allow others with specialized skill sets to deal with the work of building and maintaining the software infrastructure. When you are a car manufacturer, you don’t also want to be a civil engineering firm that designs roads.

To view existing DoD software factories, visit the Software Factory Ecosystem Coalition website.

Roll your own by following DoD best practices 

If you need the flexibility and customization of managing your own software factory then we’d recommend following the DoD Enterprise DevSecOps Reference Design as the base framework. There are a few software supply chain security recommendations that we would make in order to ensure that things go smoothly during the authorization to operate (ATO) process:

  1. Continuous vulnerability scanning across all stages of CI/CD pipeline
    • Use a cloud-native vulnerability scanner that can be directly integrated into your CI/CD pipeline and called automatically during each phase of the SDLC (see the sketch after this list)
  2. Automated policy checks to enforce requirements and achieve ATO
    • Use a cloud-native policy engine in tandem with your vulnerability scanner in order to automate the reporting and blocking of software that is a security threat and a compliance risk
  3. Remediation feedback
    • Use a cloud-native policy engine that can provide automated remediation feedback to developers in order to maintain a high velocity of software development
  4. Compliance (Trust but Verify)
    • Use a reporting system that can be directly integrated with your CI/CD pipeline to create and collect the compliance artifacts that can prove compliance with DoD frameworks (e.g., CMMC and cATO)
  5. Air-gapped system
    • Utilize a cloud-native software supply chain security platform that can be deployed into an air-gapped environment in order to maintain the most strict security for classified missions
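As a minimal sketch of what recommendation #1 can look like in practice, here is a pipeline step built from open source tooling; the image name and severity threshold are illustrative assumptions, not requirements:

# generate an SBOM for the image under test (image name is hypothetical)
syft registry.example.com/app:latest -o syft-json > sbom.json

# scan the SBOM and fail the pipeline stage on high-severity findings
grype sbom:./sbom.json --fail-on high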

Is a software factory required in order to achieve cATO?

Technically, no. Effectively, yes. A cATO requires that your software is deployed on an approved DoD Enterprise DevSecOps Reference Design, not on a software factory specifically. If you build your own DevSecOps platform that meets the criteria of the reference design, then you have effectively rolled your own software factory.

How Anchore Can Help

The easiest and most effective method for achieving the security guarantees that a software factory is required to meet for its software supply chain is to use:

  1. An SBOM generation tool that integrates directly into your software development pipeline
  2. A container vulnerability scanner that integrates directly into your software development pipeline
  3. A policy engine that integrates directly into your software development pipeline
  4. A centralized database to store all of your software supply chain security logs
  5. A query engine that can continuously monitor your software supply chain and automate the creation of compliance artifacts

These are the primary components of both Anchore Enterprise and Anchore Federal, cloud-native, SBOM-powered software composition analysis (SCA) platforms that provide end-to-end software supply chain security to holistically protect your DevSecOps pipeline and automate compliance. This approach has been validated by the DoD; in fact, the DoD’s Container Hardening Process Guide specifically named Anchore Federal as a recommended container hardening solution.

Learn more about how Anchore brings DevSecOps to DoD software factories.

Conclusion and Next Steps

DoD software factories can come off as intimidating at first, but hopefully we have broken them down into a more digestible form. At their core, they reflect the best of the DevSecOps movement with specific adaptations relevant to the extreme threat environment that the DoD has to operate in, as well as the intersecting trend of the modernization of federal security compliance standards.

For those that want to understand how a policy engine can reliably deliver DevSecOps developer productivity, continuous security and automated compliance, read our overview of how policy engines are the perfect complement to software supply chain security: The Power of Policy-as-Code for the Public Sector.

Spring Webinar Update: Expand Your Knowledge with Our Expert-Led Sessions

In our continuous effort to bring valuable insights and tools to the world of software supply chain security, we are thrilled to announce two upcoming webinars and one recently held webinar now available for on-demand access. Whether you’re looking to enhance your understanding of software security, explore open-source tools to automate OSS licensing management, or navigate the complexities of compliance with federal standards, our expert-led webinars are designed to equip you with the knowledge you need. Here’s what’s on the agenda:

Tracking License Compliance Made Easy: Intro to Grant (OSS)

Date: Mar 26, 2024 at 2pm EDT  (11am PDT)

Join us as Anchore CTO Dan Nurmi and Grant maintainer Christopher Phillips discuss the challenges of managing software licenses within production environments, highlighting the complexity and ongoing nature of tracking open-source licenses.

They will introduce Grant, an open-source tool designed to alleviate the burden of OSS license inspection by demonstrating how to scan for licenses within SBOMs or container images, simplifying a typically manual process. The session will cover the current landscape of software licenses, the difficulties of compliance checks, and a live demo of Grant’s features that automate this previously laborious process.

Software Security in the Real World with Kelsey Hightower and Dan Perry

Date: April 4th, 2024 at 2pm EDT  (11am PDT)

In our upcoming webinar, experts Kelsey Hightower and Dan Perry will delve into the nuances of securing software in cloud-native, containerized applications. This in-depth session will explore the criteria for vulnerability testing success or failure, offering insights into security testing and compliance for modern software environments. 

Through a live demonstration of Anchore Enterprise, they’ll provide a comprehensive look at visibility, inspection, policy enforcement, and vulnerability remediation, equipping attendees with a deeper understanding of software supply chain security, proactive security strategies, and practical steps to embark on a software security journey. 

The discussion will continue after the webinar on X/Twitter with Kelsey Hightower.

FedRAMP and SSDF Compliance: How to Sell to the Federal Government

This webinar explores how Anchore aids in navigating the complex compliance requirements for selling software to the federal government, focusing on FedRAMP vulnerability scanning and SSDF compliance. Led by Josh Bressers, VP of Security, and Connor Wynveen, Senior Solutions Engineer, it will detail evaluating FedRAMP controls for software containers and adhering to SSDF guidelines.

Key takeaways include strategies to streamline FedRAMP and SSDF compliance efforts, leveraging SBOMs for efficiency, the critical role of automated vulnerability scans, and how Anchore’s policy pack can assist organizations in meeting compliance standards.

Accessing the Webinars

Don’t miss out on the opportunity to expand your knowledge and skills with these sessions. To register for the upcoming webinars or to access the on-demand webinar, visit our webinar landing page. Whether you’re looking to stay ahead of the curve in software security, explore funding opportunities for your open-source projects, or break into the federal market, our webinars are designed to provide you with the insights and tools you need.

We look forward to welcoming you to our upcoming webinars. Stay informed, stay ahead!

National Vulnerability Database: Opaque changes and unanswered questions

A short history lesson on the NVD

Founded in 2005, the National Vulnerability Database, or NVD, is a collection of vulnerability data maintained by the National Institute of Standards and Technology (NIST) in the United States. As of today, many companies rely on NVD data for their security operations and vulnerability research.

NVD describes itself as:

The NVD is the U.S. government repository of standards based vulnerability management data represented using the Security Content Automation Protocol (SCAP). This data enables automation of vulnerability management, security measurement, and compliance. The NVD includes databases of security checklist references, security related software flaws, product names, and impact metrics.

The primary role of the NVD is adding data to vulnerabilities that have been assigned a CVE ID. It includes additional metadata such as severity levels via the Common Vulnerability Scoring System (CVSS) and affected-product data via Common Platform Enumeration (CPE). NIST is responsible for maintaining the NVD, as each CVE ID can require additional modifications or maintenance as the nature of vulnerabilities changes daily. This is a service the NVD has been providing for nearly 20 years.

The graph below shows a historical trend of CVE IDs that have been published in the CVE program (green), alongside the analysis data provided by NVD (red), since 2005.

Key: Green is all CVE IDs in NVD. Red is IDs with a CPE attached

We can see nearly every CVE has been enriched by NVD during this time.

A problematic website notice from the NVD

On February 15th 2024, a banner appeared on the NVD website stating:

NIST is currently working to establish a consortium to address challenges in the NVD program and develop improved tools and methods. You will temporarily see delays in analysis efforts during this transition. We apologize for the inconvenience and ask for your patience as we work to improve the NVD program.

It’s not entirely clear what this message means for the data provided by NVD or what the public should expect. 

While attempting to research the meaning behind this statement, Anchore engineers discovered that as of February 15, 2024, NIST has almost completely stopped updating NVD with analysis for CVE IDs. The graph below shows the trend of CVE IDs that have been published in the CVE program (green), alongside the analysis data provided by NVD (red), since January 1, 2024.

Key: Green is all CVE IDs in NVD. Red is IDs with a CPE attached

Starting February 12th, thousands of CVE IDs have been published without any record of analysis by NVD. Since the start of 2024 there have been a total of 6,171 CVE IDs, with only 3,625 enriched by NVD. That leaves a gap of 2,546 (42%!) IDs.

NVD data has become an industry standard that organizations and security products rely on for security operations, such as prioritizing vulnerability remediation and securing infrastructure. CVE IDs are constantly being added and updated, but those IDs are now missing the key analytical data provided by NVD. Any organization that depends on NVD for vulnerability data such as CVSS scores is no longer receiving updates to the CVE data. This means that organizations relying on this data are left in the dark about new vulnerabilities, imposing greater risk and an unmanaged attack surface on their environments.

Wait and see?

How to fill the gap of this missing data has not yet been addressed by NVD. There are other vulnerability databases such as the GitHub Advisory Database and the CVE5 database that contain severity ratings and affected products, but by definition, those databases cannot provide NVD severity scores.

Anchore is investigating options to create a public repository of identifiers to fill this gap. We invite members of the security community to join us at our next meetup on March 14th, 2024 as we research options. Details for the meetup are available on GitHub.

In the meantime, we will continue to look for updates from NIST and hope that they are more transparent about their service situation soon.

Syft Reaches v1.0!

Early in 2020 we started work on Syft, an SBOM generator that is easy to install and use and supports a variety of SBOM formats. In late September of 2020 we released the first version of Syft with seven ecosystems supported. Since then we’ve had 168 releases made up of the work from 134 contributors, collectively adding an additional 18 ecosystems — a total of over 40 package catalogers. 

Today we’re going one step further: we’re thrilled to announce the release of Syft v1.0.0! 🎉

What is Syft?

At a high level, Syft answers the seemingly simple question: “what is in my application?”. To put a finer point on it, Syft is a standalone CLI tool and library that scans container images and file systems in order to generate an SBOM: the Software Bill of Materials that describes what software components were found.

While Syft’s capability to generate a detailed SBOM is useful in its own right, we consider Syft to be a foundational tool that supports both our growing list of open-source offerings as well as dozens of other OSS projects. Ultimately it delivers a way for users to both generate SBOM material for their projects as well as use those same SBOMs for other important functions such as vulnerability scanning, OSS license reporting, tracking components at various stages of the application lifecycle, and other custom use cases.

Some of the important dimensions of Syft are:

  • Sources: types of software applications and artifacts that can be scanned by Syft to produce an SBOM, such as docker container images, source code directories, and more.
  • Catalogers: functions that are implemented for a given type of software artifact / packaging ecosystem, such as Java JAR files, RPMs, Go modules, and more.
  • Output Formats: interchangeable formats to put the SBOM material into, post-SBOM generation, such as the Syft-native JSON, SPDX (multiple versions), CycloneDX (multiple versions), and user-customizable formats via templating.

Over time, we’ve seen the list of sources, catalogers, and output formats continue to grow, and we are looking forward to adding even more capabilities as the project continues forward.  
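As a quick illustration of the sources and output formats dimensions, Syft can scan a source directory and emit a chosen format in one invocation (the directory path here is hypothetical):

syft dir:./my-project -o cyclonedx-json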

Capabilities of Syft

To start with, Syft needs to know how to read what you’re giving it. There are a number of different types of sources Syft supports: directories, individual files, archives, and – of course – container images! And there are a lot of options such as letting you choose what scope you want to be cataloged within an image. By default we consider a squashed layer scope, which is most like what the filesystem will look like when a container is created from the image. But we also allow for looking at all layers within the container image, which would include contents that are distributed but not available when a container is created. In future versions we want to expand these selections to support even more use cases.
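A short sketch of that scope selection:

# default behavior: catalog the squashed filesystem, as a running container would see it
syft alpine:latest --scope squashed

# catalog software in every layer, including files hidden by later layers
syft alpine:latest --scope all-layers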

When scanning a source we have several catalogers scouring for packaging artifacts, covering several ecosystems:

  • Alpine (apk)
  • C/C++ (conan)
  • Dart (pubs)
  • Debian (dpkg)
  • Dotnet (deps.json)
  • Objective-C (cocoapods)
  • Elixir (mix)
  • Erlang (rebar3)
  • Go (go.mod, Go binaries)
  • Haskell (cabal, stack)
  • Java (jar, ear, war, par, sar, nar, native-image)
  • JavaScript (npm, yarn)
  • Jenkins Plugins (jpi, hpi)
  • Linux kernel modules and archives (ko & vmlinuz)
  • Nix (outputs in /nix/store)
  • PHP (composer)
  • Python (wheel, egg, poetry, requirements.txt)
  • Red Hat (rpm)
  • Ruby (gem)
  • Rust (cargo.lock)
  • Swift (cocoapods, swift-package-manager)
  • WordPress Plugins

We don’t blindly run all catalogers against all kinds of input sources though – we tailor the catalogers used to create the most accurate SBOM possible based on what is being analyzed. For example, when scanning images, we enable the alpm-db-cataloger but don’t enable the cocoapods-cataloger.

If there is a set of catalogers that you must or must not use for whatever reason, you can always tailor the set that runs with --override-default-catalogers and --select-catalogers to meet your needs. If you’re using Syft as an API, you can go further and implement your own package cataloger and provide it to Syft when creating an SBOM.
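A couple of hedged examples of that tailoring, reusing cataloger names mentioned above (the image name is a placeholder; check syft --help for the exact selection syntax):

# run only the named cataloger, ignoring the defaults for the source type
syft some-image:latest --override-default-catalogers alpm-db-cataloger

# keep the defaults, but additionally enable the cocoapods cataloger
syft some-image:latest --select-catalogers "+cocoapods-cataloger"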

When it comes to SBOM formats, we are unopinionated about which format might be best for your use case, so we support as many as we can, including the most popular formats: SPDX and CycloneDX. This is a core decision point for us: this way you can pivot between SBOM formats and not be locked into a format based on a tooling decision. You can even select which version of a format to output (and even output multiple at once!):

syft scan alpine:latest -o spdx-json@2.2

syft scan alpine:latest -o spdx-json@2.3

syft scan alpine:latest -o cyclonedx-json@1.4

syft scan alpine:latest -o spdx-json@2.3=./alpine.spdx.json -o cyclonedx-json=./alpine.cdx.json

From the beginning, Syft was designed to lean into the Unix philosophy: do one thing and do it well, allowing it to plug into downstream tooling easily. For instance, to use your SBOM to get a vulnerability analysis, pipe the results to Grype.
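A minimal sketch of that handoff, reusing the alpine image from the examples above:

syft alpine:latest -o syft-json | grype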

Or to discover software licenses and check for compliance, pipe the results to Grant.
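A similar sketch for license checks; note that piping an SBOM into grant check is our assumption of the supported flow (see the Grant README for specifics):

syft alpine:latest -o syft-json | grant check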

What does v1 mean to us? 

Version 1 primarily signals a stabilization of the CLI and API. Moving forward you can expect that breaking changes will always coincide with a major version bump of Syft. Some specific guarantees we wanted to call out explicitly are:

  • We version the JSON schema of the Syft JSON output separately from Syft, the application. In the past, in a v0 state, this meant that breaking changes to the JSON schema would only result in a minor version bump of Syft. Moving forward, the JSON schema version used for the default JSON output of Syft will never include breaking changes.
  • We’ve implicitly supported the ability to decode any previous version of the Syft JSON model; moving forward this will be an explicit guarantee: if you have a Syft JSON document of any version, then you will be able to convert it into the latest Syft JSON version.
  • Stereoscope, our library for parsing container images and crafting squashed filesystem representations (critical to the functionality of Syft), will now be versioned at each release and have release tags.

What does this mean for the average user? In short: probably nothing! We will continue to ship enhancements and fixes every few weeks while remaining compatible with legacy SBOM document schemas.

So are we done?…

…Not even close 🤓. As the SBOM landscape keeps changing our goal is to grow Syft in terms of what can be cataloged, the accuracy of the results, the formats that can be expressed, and its usefulness in more use cases. 

Have an ecosystem you need that we don’t support yet? Let us know and let’s build it together! Found an issue or have a feature idea? Open an issue and let’s talk about it! Looking to contribute and are looking for a good place to start? We have some issues set aside just for you! Curious as to what we’re looking to work on next? Check out our ever-growing roadmap! And always, come join us every-other week at our community office hours chats if you want to meet with us and talk about topical issues regarding our OSS projects.

Balancing the Scale: Software Supply Chain Security and APTs

Note: This is a multi-part series primer on the intersection of advanced persistent threats (APTs) and software supply chain security (SSCS). This blog post is the first in the series. We will update this blog post with links to the additional parts of the series as they are published.
  • Part 1 | With Great Power Comes Great Responsibility: APTs & Software Supply Chain Security
  • Part 2 | David and Goliath: the Intersection of APTs and Software Supply Chain Security
  • Part 3 (This blog post)

Last week we dug into the details of why an organization’s software supply chain is a ripe target for well-resourced groups like APTs and the potential avenues that companies have to combat this threat. This week we’re going to highlight the Anchore Enterprise platform and how it provides a turnkey solution for combating threats to any software supply chain.

How Anchore Can Help

Anchore was founded on the belief that a security platform that delivers deep, granular insights into an organization’s software supply chain, covers the entire breadth of the SDLC and integrates automated feedback from the platform will create a holistic security posture to detect advanced threats and allow for human interaction to remediate security incidents. Anchore is trusted by Fortune 100 companies and the most exacting federal agencies across the globe because it has delivered on this promise.

The rest of the blog post will detail how Anchore Enterprise accomplishes this.

Depth: Automating Software Supply Chain Threat Detection

Having deeper visibility into an organization’s software supply chain is crucial for security purposes because it enables the identification and tracking of every component in the software’s construction. This comprehensive understanding helps in pinpointing vulnerabilities, understanding dependencies, and identifying potential security risks. It allows for more effective management of these risks by enabling targeted security measures and quicker response to potential threats. Essentially, deeper visibility equips an organization to better protect itself against complex cyber threats, including those that exploit obscure or overlooked aspects of the software supply chain.

Anchore Enterprise accomplishes this by generating a comprehensive software bill of materials (SBOM) for every piece of software (even down to the component/library/framework-level). It then compares this detailed ingredients list against vulnerability and active exploit databases to identify exactly where in the software supply chain there are security risks. These surgically precise insights can then be fed back to the original software developers, rolled-up into reports for the security team to better inform risk management or sent directly into an incident management workflow if the vulnerability is evaluated as severe enough to warrant an “all-hands on deck” response.

Developers shouldn’t have to worry about manually identifying threats and risks inside the software supply chain. Having deep insights into your software supply chain and being able to automate detection and response is vital to creating a resilient and scalable solution to the risk of APTs.

Breadth: Continuous Monitoring in Every Step of Your Software Supply Chain

The breadth of instrumentation in the Software Development Lifecycle (SDLC) is crucial for securing the software supply chain because it ensures comprehensive security coverage across all stages of software development. This broad instrumentation facilitates early detection and mitigation of vulnerabilities, ensures consistent application of security policies, and allows for a more agile response to emerging threats. It provides a holistic view of the software’s security posture, enabling better risk management and enhancing the overall resilience of the software against cyber threats.

Powered by a 100% feature complete platform API, Anchore Enterprise integrates into your existing DevOps pipeline.

Anchore has been supporting the DoD in this effort since 2019, acting as what is commonly referred to as “overwatch” for the DoD’s software supply chain. Anchore Enterprise continuously monitors how risk is evolving by ingesting tens of thousands of runtime containers and hundreds of source code repositories, and by alerting on malware-laced images submitted to the registry. It monitors every stage of the DevOps pipeline (source, build, registry, deploy) to gain a holistic view of when and where threats enter the software development lifecycle.

Feedback: Alerting on Breaches or Critical Events in Your Software Supply Chain

Integrating feedback from your software supply chain and SDLC into your overall security program is important because it allows for real-time insights and continuous improvement in security practices. This integration ensures that lessons learned and vulnerabilities identified at any stage of the development or deployment process are quickly communicated and addressed. It enhances the ability to preemptively manage risks and adapt to new threats, thereby strengthening the overall security posture of the organization.

How would you know if something is wrong in a system? Create high-quality feedback loops, of course. If there is a fire in your house, you typically have a fire alarm. That is a great source of feedback. It’s loud and creates urgency. And when you investigate to confirm the fire is legitimate and not a false alarm, you can see the fire and feel its heat.

Software supply chain breaches are more similar to carbon monoxide leaks. Silent, often undetected, and potentially lethal. If you don’t have anything in place to specifically alert for that kind of threat then you could pay severely. 

Anchore Enterprise was designed specifically as both a set of sensors that can be deployed deeply and broadly into your software supply chain and a system of feedback that uses those sensors to detect and alert on potential threats that are silently emitting carbon monoxide in your warehouse.

Anchore Enterprise’s feedback mechanisms come in three flavors: automatic, recommendations and informational. Anchore Enterprise utilizes a policy engine to enable automatic action based on the feedback provided by the software supply chain sensors. If you want to make sure that no software is ever deployed into production (or any environment) with an exploitable version of Log4j, the Anchore policy engine can review the security metadata created by the sensors for the existence of this software component and stop a deployment in progress before it ever becomes accessible to attackers.
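Anchore Enterprise expresses this as policy, but the underlying idea can be sketched with open source tools. The following is an illustration of the concept only, not the product’s policy syntax, and the image name is hypothetical:

# generate an SBOM, then refuse to deploy if a known-bad component is present
syft registry.example.com/app:latest -o syft-json > sbom.json

if jq -e 'any(.artifacts[]; .name == "log4j-core")' sbom.json > /dev/null; then
  echo "deploy blocked: log4j-core found in image" && exit 1
fi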

Anchore Enterprise can also be configured to make recommendations and provide opinionated actions based on security signals. If a vulnerability is discovered in a software component but it isn’t considered urgent, Anchore Enterprise can instead provide a recommendation to the software developer to fix the vulnerability but still allow them to continue to test and deploy their software. This allows developers to become aware of security issues very early in the SDLC but also provide flexibility for them to fix the vulnerability based on their own prioritization.

Finally, Anchore Enterprise offers informational feedback that alerts developers, the security team or even the executive team to potential security risks but doesn’t offer a specific solution. These types of alerts can be integrated into any development, support or incident management systems the organization utilizes. Often these alerts are for high risk vulnerabilities that require deeper organizational analysis to determine the best course of action in order to remediate.

Conclusion

Due to the asymmetry between APTs and under-resourced security teams, the goal isn’t to create an impenetrable fortress that can never be breached. The goal is instead to follow security best practices and litter your SDLC with sensors and automated feedback mechanisms. APTs may have significantly more resources than your security team, but they are still human and all humans make mistakes. By placing low-effort tripwires in as many locations as possible, you reverse the asymmetry of resources and allow the well-resourced adversary to become their own worst enemy. APTs are still software developers at the end of the day, and no one writes bug-free code in the long run. By transforming your software supply chain into a minefield of best practices, you create a battlefield that requires your adversaries to slow down and carefully disable each individual security mechanism. None are impossible to disarm, but each speed bump creates another opportunity for your adversary to make a mistake and reveal themselves. If zero-trust architecture has taught us anything, it is that an impenetrable perimeter was never the best strategy.

David and Goliath: the Intersection of APTs and Software Supply Chain Security

Note: This is a multi-part series primer on the intersection of advanced persistent threats (APTs) and software supply chain security (SSCS). This blog post is the second in the series. If you’d like to start from the beginning, you can find the first blog post here.

Last week we set the stage for discussing APTs and the challenges they pose for software supply chain security by giving a quick overview of each topic. This week we will dive into the details of how the structure of the open source software supply chain is a uniquely ripe target for APTs.

The Intersection of APTs and Software Supply Chain Security

The Software Ecosystem: A Ripe Target

APT groups often prioritize the exploitation of software supply chain vulnerabilities. This is due to the asymmetric structure of the software ecosystem. By breaching a single component, such as a build system, they can gain access to any organization using the compromised software component. This inverts the cost-benefit calculus of the research and development needed to discover a vulnerability and craft an exploit for it. Previously, APTs focused primarily on targets where the payoff could warrant the investment, or on vulnerabilities so widespread that the attack could be automated. The complex interactions of software dependencies now allow APTs to scale their attacks due to the structure of the ecosystem.

The Software Supply Chain Security Dynamic: An Unequal Playing Ground

The interesting challenge with software supply chain security is that securing the supply chain requires even more effort than an APT would take to exploit it. The rub comes because each company that consumes software has to build a software supply chain security system to protect their organization. An APT investing in exploiting a popular component or system gets the benefit of access to all of the software built on top of it.

Given that security organizations are at a structural disadvantage, how can organizations even the odds?

How Do I Secure My Software Supply Chain from APTs?

An organization’s ability to detect the threat of APTs in its internal software supply chain comes down to three core themes that can be summed up as “go deep, go wide and integrate feedback”. Specifically, this means: the deeper the visibility into your organization’s software supply chain, the less surface area an attacker has to slip in malicious software; the wider this visibility is deployed across the software development lifecycle, the earlier an attacker will be caught; and neither of the first two points matter if the feedback produced isn’t integrated into an overall security program that can act on the signals surfaced.

By applying these three core principles to the design of a secure software supply chain, an organization can ensure that they balance the playing field against the structural advantage APTs possess.

How Can I Easily Implement a Strategy for Securing My Software Supply Chain?

The core principles of depth, breadth and feedback are powerful touchstones to utilize when designing a secure software supply chain that can challenge APTs but they aren’t specific rules that can be easily implemented. To address this, Anchore has created the open source VIPERR Framework to provide specific guidance on how to achieve the core principles of software supply chain security.

VIPERR is a free software supply chain security framework that Anchore created for organizations to evaluate and improve the security posture of their software supply chain. VIPERR stands for visibility, inspection, policy enforcement, remediation, and reporting. 

Utilizing the VIPERR Framework, an organization can satisfy the three core principles of software supply chain security: depth, breadth and feedback. By following this guide, numerous Fortune 500 enterprises and top federal agencies have transformed their software supply chain security posture and become harder targets for advanced persistent threats. If you’re looking to design and run your own secure software supply chain system, this framework will provide a shortcut to ensure the developed system will be resilient.

How Can I Comprehensively Implement a Strategy for Securing My Software Supply Chain?

There are a number of comprehensive initiatives to define best practices for software supply chain security. Organizations ranging from the National Institute of Standards and Technology (NIST), with standards such as SP 800-53, SP 800-218, and SP 800-161, to the Cloud Native Computing Foundation (CNCF) and the Open Source Security Foundation (OpenSSF) have created detailed documentation of their recommendations for achieving a comprehensive supply chain security program, such as the SLSA framework and the Secure Supply Chain Consumption Framework (S2C2F) Project. Be aware that these are not quick and dirty solutions for achieving a “reasonably” secure software supply chain. They are large undertakings for any organization and should be given the resources needed to achieve success.

We don’t have the time to go over each in this blog post but we have broken each down in our complete guide to software supply chain security.

This is the second in a series of blog posts focused on the intersection of APTs and software supply chain security. This blog post highlighted the reasons that APTs focus their efforts on software supply chain exploits and the potential avenues that companies have to combat this threat. Next week we will discuss the Anchore Enterprise solution as a turnkey platform to implement the strategies outlined above.

With Great Power Comes Great Responsibility: APTs & Software Supply Chain Security

Note: This is a multi-part series primer on the intersection of advanced persistent threats (APTs) and software supply chain security (SSCS). This blog post is the first in the series. We will update this blog post with links to the additional parts of the series as they are published.
  • Part 1 (This blog post)
  • Part 2
  • Part 3

In the realm of cybersecurity, the convergence of Advanced Persistent Threats (APTs) and software supply chain security presents a uniquely daunting challenge for organizations. APTs, characterized by their sophisticated, state-sponsored or well-funded nature, focus on stealthy, long-term data theft, espionage, or sabotage, targeting specific entities. Their effectiveness is amplified by the asymmetric power dynamics of a well funded attacker versus a resource constrained security team.

Modern supply chains inadvertently magnify the impact of APTs due to the complex and interconnected dependency network of software and hardware components. The exploitation of this weakness by APTs not only compromises the targeted organizations but also poses a systemic risk to all users of the compromised downstream components. The infamous SolarWinds exploit exemplifies the far-reaching consequences of such breaches.

This landscape underscores the necessity for an integrated approach to cybersecurity, emphasizing depth, breadth, and feedback to create a holistic software supply chain security program that can withstand even adversaries as motivated and well-resourced as APTs. Before we jump into how to create a secure software supply chain that can resist APTs, let’s understand our adversary a bit better first.

Know Your Adversary: Advanced Persistent Threats (APTs)

What is an Advanced Persistent Threat (APT)?

An Advanced Persistent Threat (APT) is a sophisticated, prolonged cyberattack, usually state-sponsored or executed by well-funded criminal groups, targeting specific organizations or nations. Characterized by advanced techniques, APTs exploit zero-day vulnerabilities and custom malware, focusing on stealth and long-term data theft, espionage, or sabotage. Unlike broad, indiscriminate cyber threats, APTs are highly targeted, involving extensive research into the victim’s vulnerabilities and tailored attack strategies.

APTs are marked by their persistence, maintaining a foothold in a target’s network for extended periods, often months or years, to continually gather information. They are designed to evade detection, blending in with regular network traffic, and using methods like encryption and log deletion. Defending against APTs requires robust, advanced security measures, continuous monitoring, and a proactive cybersecurity approach, often necessitating collaboration with cybersecurity experts and authorities.

High-Profile APT Example: Operation Triangulation

The recent Operation Triangulation campaign disclosed by Kaspersky researchers is an extraordinary example of an APT in both its sophistication and depth. The campaign made use of four separate zero-day vulnerabilities, utilized a highly targeted approach towards specific individuals at Kaspersky, combined a multi-phase attack pattern, and persisted over a four-year period. Its complexity implied significant resources, possibly from a nation-state, and the stealthy, methodical progression of the attack aligns closely with the hallmarks of APTs. Famed security researcher Bruce Schneier, writing on his blog, Schneier on Security, wasn’t able to contain his surprise upon reading the details of the campaign: “[t]his is nation-state stuff, absolutely crazy in its sophistication.”

What is the impact of APTs on organizations?

Ignoring the threat posed by Advanced Persistent Threats (APTs) can lead to significant impact for organizations, including extensive data breaches and severe financial losses. These sophisticated attacks can disrupt operations, damage reputations, and, in cases involving government targets, even compromise national security. APTs enable long-term espionage and strategic disadvantage due to their persistent nature. Thus, overlooking APTs leaves organizations exposed to continuous, sophisticated cyber espionage and the multifaceted damages that follow.

Now that we have a good grasp on the threat of APTs, we turn our attention to the world of software supply chain security to understand the unique features of this landscape.

Setting the Stage: Software Supply Chain Security

What is Software Supply Chain Security?

Software supply chain security is focused on protecting the integrity of software through its development and distribution. Specifically it aims to prevent the introduction of malicious code into software that is utilized as components to build widely-used software services.

The open source software ecosystem is a complex supply chain that solves the problem of redundancy of effort. By creating a single open source version of a web server and distributing it, new companies that want to operate a business on the internet can re-use the generic open source web server instead of having to build their own before they can do business. These companies can instead focus their efforts on building new bespoke software on top of the web server that performs new, useful functions for previously unserved users. This is typically referred to as composable software building blocks, and it is one of the most important outcomes of the open source software movement.

But as they say, “there are no free lunches”. With the incredible productivity boon that open source software has created comes responsibility.

What is the Key Vulnerability of the Modern Software Supply Chain Ecosystem?

The key vulnerability in the modern software supply chain is the structure of how software components are re-used, each with its own set of dependencies, creating a complex web of interlinked parts. This intricate dependency network can lead to significant security risks if even a single component is compromised, as vulnerabilities can cascade throughout the entire network. This interconnected structure makes it challenging to ensure comprehensive security, as a flaw in any part of the supply chain can affect the entire system.

Modern software is particularly vulnerable to software supply chain attacks because 70-90% of modern applications are composed of open source software components, with the remaining 10-30% being the proprietary code that implements company-specific features. This means that by breaching popular open source software frameworks and libraries, an attacker can amplify the blast radius of their attack to effectively reach significant portions of internet-based services with a single attack.

If you’re looking for a deeper understanding of software supply chain security we have written a comprehensive guide to walk you through the topic in full.

High-Profile Software Supply Chain Exploit Example: SolarWinds

In one of the most sophisticated supply chain attacks, malicious actors compromised the update mechanism of SolarWinds’ Orion software. This breach allowed the attackers to distribute malware to approximately 18,000 customers. The attack had far-reaching consequences, affecting numerous government agencies, private companies, and critical infrastructure.

Looking at the example of SolarWinds, the lesson we should take away is not to put all of our focus on prevention. APTs have a wealth of resources to draw upon. Instead, the focus should be on monitoring the software we consume, build, and ship for unexpected changes. Modern software supply chains come with a great deal of responsibility. The software we use and ship needs to be understood and monitored.

This is the first in a series of blog posts focused on the intersection of APTs and software supply chain security. This blog post highlighted the contextual background to set the stage for the unique consequences of these two larger forces. Next week, we will discuss the implications of the collision of these two spheres in the second blog post in this series.

Anchore Enterprise 5.1: Token-Based Authentication

In Anchore Enterprise 5.1, we have added support for token-based authentication via API keys. An administrator can now create API keys for a user so that the user can authenticate with a key rather than a username and credential. Let’s dive into the details of what this means.

Token-based authentication is a protocol that provides an extra layer of security when users want to access an API. It allows users to verify their identity and, in return, receive a unique access token for the specific service. Tokens have a lifespan, and as long as a token is used within that duration, users can access the application with that same token without having to continuously log in.

We list the step-by-step mechanism for a token-based authentication protocol:

  1. A user sends their credentials to the client.
  2. The client sends the credentials to an authorization server, which generates a unique token for that specific user’s credentials.
  3. The authorization server sends the token to the client.
  4. The client sends the token to the resource server.
  5. The resource server sends data/resources to the client for the duration of the token’s lifespan.

Token-Based Authentication in Anchore 5.1

Now that we’ve laid the groundwork, in the following sections we’ll walk through how to create API keys and use them in AnchoreCTL.

Creating API Keys

In order to generate an API key, navigate to the Enterprise UI, click the button at the top right, and select ‘API Keys’.

Clicking ‘API Keys’ will present a dialog that lists your active, expired and revoked keys.

To create a new API key, click ‘Create New API Key’ on the top right. This will open another dialog that asks you for the relevant details for the API key.

You can specify the following fields:

  • Name: The name of your API key. This is mandatory and the name should be unique (you cannot have two API keys with the same name).
  • Description: An optional text descriptor for your API key.
    • Expiry Date: An expiry date for your API key. You cannot specify a date in the past, and it cannot exceed 365 days by default. This is the lifespan you are configuring for your token.

Click save, and the UI will generate a Key Value and display the output of the operation.

NOTE: Make sure you copy the Key Value as there is no way to get this back once you close out of this window.

Revoking API Keys

If there is a situation where you feel your API key has been compromised, you can revoke an active key. This prevents the key from being used for authentication. To revoke a key, click the ‘Revoke’ button next to it.

NOTE: Be careful revoking a key, as this is an irreversible operation, i.e., you cannot mark it active later.

The UI by default only displays active API keys. If you want to see your revoked and expired keys, use the ‘Show only active API keys’ toggle on the top right.

Managing API Keys as an Admin

As an account admin, you can manage API keys for all users in the account you administer. A global admin can manage API keys across all accounts and all users.

To access the API keys as an admin, click on the ‘System’ icon and navigate to ‘Accounts’.

Click ‘Edit’ for the account you want to manage keys for, then click the ‘Tools’ button next to the user whose keys you wish to manage.

Using API Keys in AnchoreCTL

Generating API Keys as a SAML (SSO) User

API keys for SAML (SSO) users are disabled by default. To enable API keys for SAML users, please update your helm chart values file with the following:

    user_authentication: 

        allow_api_keys_for_saml_users: true

NOTE: API keys are an additional authentication mechanism for SAML users that bypasses the authentication control of the IDP. When access has been revoked at the IDP, it does not automatically disable the user or revoke all API keys for the user. Therefore, when access has been revoked for a user, the system admin is responsible for manually deleting the Anchore user or revoking any API keys that were created for that user.

Using API Keys

API keys are authenticated using basic auth. In order to use API keys, you need to use the special username _api_key, and the password is the Key Value that was output when you created the API key.

curl -u '_api_key:<API key value>' http://localhost:8228/v2/images

To use an API key with AnchoreCTL, set the equivalent credentials in your AnchoreCTL configuration:

  url: "http://localhost:8228"

  username: "_api_key"

  password: <API Key Value>

Caveats for API Keys

API Keys generally inherit the permissions and roles of the user they were generated for, but there are certain operations you cannot perform using API keys regardless of which user they were generated for:

  • You cannot Add/Edit/Remove Accounts, Users and Credentials.
  • You cannot Add/Edit/Remove Roles and Role Members.
  • You cannot Add/Edit/Revoke API Keys.

We invite you to learn more about Anchore Enterprise 5.0 with a free 15 day trial. Or, if you’ve got other questions, set up a call with one of our specialists.

Learn more from Anchore:

  1. User Management in Anchore Enterprise 
  2. User Authentication with API Keys
  3. AnchoreCTL Configurations 

Introducing Grant: A new OSS project from Anchore for inspecting and checking license compliance from SBOMs

Today Anchore has released Grant, a tool to help users discover and reason about software licenses. Grant represents our latest efforts to build OSS tools oriented around the fundamental idea of creating SBOMs as part of your development process, so that they can be used for multiple purposes. We maintain Syft for generating SBOMs, and Grype for taking an SBOM and performing a vulnerability scan. Now with Grant, that same SBOM can also be used for performing license checks.

Knowing what licenses you have is critical to ensuring the delivery of your software. It’s important to account for all of the licenses you are beholden to in your transitive dependencies before introducing new dependencies (and well before shipping to production!). For example, you might want to know which package in your dependency tree carries a GPL (General Public License) before releasing, or what license a new software library requires. This evaluation tends not to be a one-time decision: you need to continually ensure that the dependencies you are using do not swap to different licenses after you initially bring a dependency into your codebase. Grant aims to help solve these issues through its check and list commands.

Grant – Design and Launch Features

Grant takes either a Software Bill of Materials (SBOM), a filesystem with a collection of license files, or a container image as input. Grant can either generate an SBOM for the provided input or read an SBOM provided as input. Grant has two primary commands:

  • list: show discovered licenses
  • check: evaluate a list of licenses to ensure your artifacts are compliant with a given policy

List 

The list command displays the licenses discovered in the SBOM and their related packages. It can also filter the results to show which licenses are not OSI approved, which licenses are not associated with a package, and which packages were found with no associated license. Users can run list against a container image to show its discovered licenses. Here is an example using grant to display all the licenses found for the latest redis image:
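
(The original post showed the output as a screenshot.) The invocation itself is a one-liner; a sketch, assuming Grant can resolve the image reference directly:

grant list redis:latest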

You can also get more detailed information about the licenses by using the -o json flag. This produces JSON-formatted output that shows the locations of each discovered license, whether it has an SPDX expression, and the packages (and their locations) for which it was discovered.
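
For example, the same listing in machine-readable form (same assumptions as above):

grant list redis:latest -o json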

Check

The check command evaluates the given inputs for license compliance against a set of user-provided license rules. One common thing to watch for when assessing your license posture is whether a license that was previously permissive has changed. Check lets the user express simple policies that pass or fail the command depending on what Grant discovers:

rules:
    - pattern: "*gpl*"
      name: "deny-gply"
      mode: "deny"
      reason: "GPL licenses are not allowed"

Here is grant running check with the above configuration against the same redis image as in the list example:
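
(The output appeared as a screenshot in the original post.) A sketch of the invocation, assuming the rules above are saved to a config file, here a hypothetical .grant.yaml, passed via --config:

grant check redis:latest --config .grant.yaml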

Users can also pass the -o json option for a more detailed, machine-readable view. This output surfaces fuller license information when a license is matched to an official SPDX ID, including the text of the license, references to its clauses, and the rules or sections showing where no license was found for a package.

Grant can be configured to accept multiple sources at once, including container images, directories, license files, and software bills of materials. Grant stitches these sources into a report that displays the license contents of each source. This report can optionally be paired with a policy file that passes or fails the execution based on the presence or absence of a predetermined set of licenses. Policy files can include exceptions for packages that are not used or that an organization deems out of scope for its compliance policy.

Grant’s approach to license identification

Grant takes a simplified approach to establishing a license’s presence. It first tries to match each discovery, where possible, to the SPDX license list. For sources with more complex license expressions, Grant breaks those expressions into their individual license IDs, which allows the user to decide which licenses are relevant to the project they are working on. For example, take the SPDX expression `LGPL-2.1-only OR MIT OR BSD-3-Clause` discovered for the package `foobar`: all three licenses would be associated with the package. If users need an exception, they can construct a policy that excludes the package from a rule denying non-permissive licenses:

pattern: "LGPL-2.1"
name: "deny-LGPL-2.1"
mode: "deny"
reason: "LGPL-2.1 licenses are not allowed by new corporate policy"
exclusions:
    - "foobar"

One notable inclusion in the code is Grant’s use of Google’s license classifier to recognize licenses given as input and assign a confidence level to each match. Further enhancements are planned to allow SBOMs that include the original license text to also be run through the classifier.

SPDX license compatibility

The landscape of open source licenses is vast and complex, with different licenses carrying different terms and conditions, and it can be challenging to understand and manage the obligations associated with each one. While Grant makes no claims about whether licenses are compatible with each other, it can establish whether a discovered license has an ID in the latest SPDX license list. Users can run the grant list command with the --non-spdx flag to see which licenses could not be matched to a corresponding SPDX ID. This helps users find gaps where Grant might not be making the right SPDX ID association, while also surfacing one-off licenses (and their locations) that might be private or custom to a particular vendor:
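
As with the earlier examples, the output was shown as a screenshot in the original post; the command itself, assuming the same redis image, would be:

grant list redis:latest --non-spdx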

Conclusion

Grant is another open source tool from Anchore that can help improve your software processes. Simply by generating SBOMs for your software and making them available, you can easily gate incoming code changes and delivery pipeline steps that are sensitive to licensing concerns.

Go download Grant here and try it for yourself!

Introducing VIPERR: The First Software Supply Chain Security Framework for All

Today Anchore announces the VIPERR software supply chain security framework. This framework distills our lessons learned from supporting the most challenging software supply chain environments across government agencies and the defense industrial base. It is a blueprint that organizations can implement to reliably create secure software supply chains with the least possible lift.

Previously, security teams had to wade through massive amounts of literature on software supply chain security and spend countless hours of their team’s time digesting that material into concrete processes and controls that integrated with their specific software development process. This absorbed significant time and resources, and it wasn’t always clear at the end that the organization’s security had improved significantly.

Now organizations can use the VIPERR framework as a trusted industry resource to confidently improve their security posture and reduce the likelihood of a breach from a supply chain incident, without the lengthy research cycle that frequently precedes the implementation phase of a software supply chain solution.

If you’re interested in seeing how Anchore works with customers to apply the framework via the Anchore Enterprise platform, take the free guided walkthrough of the VIPERR framework. Alternatively, you can view our on-demand webinar for a thorough walkthrough of the framework. Finally, if you would like a more hands-on experience with the VIPERR framework, you can try our Anchore Enterprise free trial.

Frequently Asked Questions

What is the VIPERR framework?

VIPERR is a free software supply chain security framework that Anchore created for organizations to evaluate and improve the security posture of their software supply chain. VIPERR stands for visibility, inspection, policy enforcement, remediation, and reporting. 

While working alongside developers and security teams within some of the most challenging architectures and threat landscapes, Anchore field engineers developed the VIPERR framework to both contextualize lessons learned and prevent live threats. VIPERR is meant to be a practical self-assessment playbook that organizations can regularly reuse to evaluate and harden their software supply chain security posture. By following this guide, numerous Fortune 500 enterprises and top federal agencies have transformed their software supply chain security posture and become hard targets for advanced persistent threats.

Why did Anchore create the VIPERR framework?

A number of frameworks already exist to help organizations improve their software supply chain security, but most of them focus on general guidance that is flexible enough that almost any specific implementation will yield compliance. This is great for general standards because it allows organizations to find the best fit for their environment, but by keeping the guidance general, it becomes difficult to know which specific implementation delivers real-world results. The VIPERR framework was designed to fill this gap.

VIPERR is a framework that can be used to fulfill compliance standards for software supply chain security, and it is opinionated about how to achieve the controls of most popular standards. It can be paired with Anchore’s turnkey offering, Anchore Enterprise, so that organizations can opt for a solution that implements the entire VIPERR framework without having to build a system from scratch.

Access an interactive 50-point checklist or a PDF version to guide you through each step of the VIPERR framework, or share it with your team to introduce the model for learning and awareness.

How do I begin identifying the gaps in my software supply chain security program? 

“I have no budget. My boss doesn’t think it’s a priority. I lack resources.” These are all common refrains when we speak with security teams working with us via our open source community or commercial enterprise offering. There is a ton of guidance available across SSDF, SLSA, NIST, and S2C2F, but much of it is contextualized in a manner that is difficult to digest. As mentioned in the previous question, VIPERR was created to be highly actionable by striking the right balance between flexible guidance and opinions that narrow the options, helping organizations make decisions faster.

The VIPERR framework will be available as a formatted 50-point self-assessment checklist in the coming weeks; check back here for updates. By completing the forthcoming checklist, you will produce a prioritized list of action items to harden your organization’s software supply chain security with the least amount of effort.

How do I build a solution that closes the gaps that VIPERR uncovers?

As stated, VIPERR is a framework, not a solution. Anchore has worked with companies that have implemented VIPERR by building an in-house solution from a collection of open source tools (e.g., Syft and Grype) or by combining multiple security tools. If you want an idea of the components involved in building a solution by self-hosting open source tools and tying these systems together with first-party code, we wrote about that approach here.

If I don’t want to build a solution, are there any turnkey solutions available?

Yes. Anchore Enterprise was designed as a turnkey solution that implements the VIPERR framework. It automates the 50 security controls of the framework by integrating directly into an organization’s SDLC toolset (i.e., developer environments, CI/CD build pipelines, artifact registries, and production environments). This enables organizations to know at any point in time whether their software supply chain has been compromised and how to remediate the exploit.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.