The Secure Software Development Attestation Form: The Next Step in SSDF Compliance

The long-awaited Secure Software Development Attestation Form was published on March 18, 2024 by the Cybersecurity and Infrastructure Security Agency (CISA). This is a continuation of the trend toward modernization of cybersecurity compliance that the US government has been marching toward since the turn of the millennium.

This initiative is rooted in the cybersecurity challenges that prompted Executive Order 14028, including the SolarWinds attack and the Colonial Pipeline ransomware attack, which clearly demonstrated the need for a coordinated national response to the emerging threats of a complex software supply chain.

The Secure Software Development Attestation Form is the newest, and likely not the final, step towards a more secure software supply chain for both the United States and the world at large. We will take you through the details of what this form means for your organization and how to best approach it.

While the SSDF and the new Secure Software Development Attestation Form are important compliance milestones in their own right, they are typically intermediate steps on the way to more rigorous compliance standards such as FedRAMP or continuous Authority to Operate (cATO). Regardless of your end destination, we will guide you through how to approach this step and all of the steps that come after.

Overview of the Secure Software Development Attestation Form

The Secure Software Development Attestation Form is part of a broader effort derived from the Cybersecurity EO 14028 (formally titled “Improving the Nation’s Cybersecurity”). As a result of this EO, the Office of Management and Budget (OMB) issued two memoranda, M-22-18 “Enhancing the Security of the Software Supply Chain through Secure Software Development Practices” and M-23-16 “Update to Memorandum M-22-18”.

These memos require federal agencies to obtain self-attestation forms from software suppliers, who must attest to complying with a subset of the Secure Software Development Framework (SSDF).

Before the publication of the Secure Software Development Attestation Form, the SSDF was a secure software development standard published by the National Institute of Standards and Technology (NIST), drawing on established industry maturity models like BSIMM and OWASP SAMM. It was a useful resource for organizations that valued security intrinsically and wanted to run secure software development without any external incentives like formal compliance requirements.

Now, the Secure Software Development Attestation Form requires software providers to self-attest to having met a subset of the SSDF best practices. This transition from secure software development as an aspirational standard to a compliance standard carries a number of implications that we will cover below. The most important thing to keep in mind is that while the Attestation Form doesn’t require a software provider to be formally certified before they can transact with a federal agency, as FedRAMP does, there are retroactive punishments that can be applied in cases of non-compliance.

Who/What is Affected?

Who is Affected?

  1. Software providers to federal agencies:
    • Federal service integrators
    • Independent software vendors
    • Cloud service providers
  2. Federal agencies and DoD programs that use any of the above software providers

What is Included

  • New software: Any software developed after September 14, 2022
  • Major updates to existing software: A major version change after September 14, 2022
  • Software-as-a-Service (SaaS)

What is Excluded

  • First-party software: Software developed in-house by federal agencies. SSDF is still considered a best practice but does not require self-attestation.
  • Free and open-source software (FOSS): Even though FOSS components and end-user products are excluded from self-attestation, the SSDF still requires that specific controls be in place to protect against software supply chain security breaches.

Anchore is the leading open-source software supply chain security platform and has helped numerous DoD programs achieve cATO through its Anchore Federal offering.

Key Requirements of the Attestation Form

There are two high-level requirements for meeting compliance with the Secure Software Development Attestation Form:

  1. Meet the technical requirements of the form
    • Note: The NIST SSDF has 19 categories and 42 total requirements; the self-attestation form has four, which are a subset of the full SSDF. See below for a breakdown of all four requirements.
  2. Self-attest to compliance with the subset of SSDF
    • Sign and return the form.

Timeline

The timeline for compliance with the SSDF self-attestation form involves two critical dates: for critical software, compliance is required by early June 2024, and for all other software in scope, by early September 2024. Precise dates are not yet available due to ambiguity in CISA’s public communications.

Implications

Now that CISA has published the final version of the Secure Software Development Attestation Form, there are a number of implications to this transition. One is economic; the other is potentially criminal.

The economic penalty for not self-attesting to secure software development practices via the form is that any federal agencies you’re currently working with will no longer be able to pay you, and any future agencies you want to work with will ask to see your form before procurement. Sign the form or miss out on this revenue.

The second penalty is a bit scarier from an individual perspective. An officer of the company has to sign the attestation form, taking personal responsibility for the claim that all of the form’s requirements have been met. Here is the relevant quote from the form:

“Willfully providing false or misleading information may constitute a violation of 18 U.S.C. § 1001, a criminal statute.”

It is also important to realize that this isn’t an unenforceable threat. There is evidence that the DOJ Civil Cyber-Fraud Initiative is cracking down on government contractors that fail to meet cybersecurity requirements, bringing False Claims Act investigations and enforcement actions. This will likely weigh heavily both on the individual who signs the form and on the organization’s choice of signer.

Challenges and Considerations

Do I still have to sign if I have a 3PAO do the technical assessment?

No, as long as the 3PAO is FedRAMP-certified.

What if I can’t comply in time?

You can draft a plan of action and milestones (POA&M) to cover the interim while you address the gaps between your current system and the system required by the attestation form. If the agency is satisfied with the POA&M, it can continue to use your software, but it has to request either an extension of the deadline from OMB or a waiver in order to do that.

Can only the CEO and COO sign the form?

The wording in the draft form required either the CEO or COO, but new language was added to the final form that allows a different company employee to sign the attestation form.

Full Requirements of Secure Software Development Attestation Form

  1. Software is developed and built in secure environments
    • Isolation and hardening of development and build environments
    • Authorization and access monitoring, logging and auditing
    • Multi-factor authentication
    • Catalog of software components
    • Encrypt sensitive data
    • Continuous monitoring of cybersecurity alerts and incidents
  2. Maintain trusted source code supply chains
    • Make a good-faith effort to maintain trusted software supply chains by employing automated tools to address the security supply chain and manage related vulnerabilities
  3. Maintain provenance for 1st- and 3rd-party components
  4. Employ automated tools for security vulnerabilities
    • Continuous scanning for vulnerabilities
    • Operate a vulnerability remediation policy
    • Operate a vulnerability disclosure program

How Anchore Can Help

Anchore Enterprise is a software supply chain security platform that can address many of the requirements of the Secure Software Development Attestation Form. By utilizing the platform, you can ensure that whoever is required to sign the attestation form has a battle-tested software platform to back up their signature.

Software is developed and built in secure environments

Catalog of software components (i.e., an SBOM)

Anchore Enterprise includes a software component scanner that can make a full inventory of all software and the third-party components that are integrated into the end-user software by generating a software bill of materials (SBOM).
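For a minimal illustration of the underlying workflow, here is how an SBOM can be generated with Syft, the open-source generator that underpins Anchore’s component scanning; the image name is a placeholder:

# Generate an SPDX JSON SBOM for a container image (image name is illustrative)
syft scan myorg/my-app:1.0 -o spdx-json > my-app.spdx.json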

Continuous monitoring of cybersecurity alerts and incidents

Anchore Enterprise includes a vulnerability scanner that can uncover vulnerabilities in all software components, alert on them as they are found, and feed these alerts into an incident management system that allows teams to manage and remediate any critical discoveries.

Maintain trusted source code supply chains

This requirement is a bit nebulous, but Anchore Enterprise is able to meet the requirement of providing an “automated tool to address the security of internal code and third-party components and manage related vulnerabilities”. Anchore Enterprise includes a vulnerability scanner that can scan the entire catalog of software components and a management interface for the vulnerabilities that are discovered.

Employ automated tools for security vulnerabilities

Continuous scanning for vulnerabilities

Anchore Enterprise includes a vulnerability scanner that can be integrated into all stages of the CI/CD pipeline, allowing for vulnerability scanning across all build environments.

Operate a vulnerability remediation policy

Anchore Enterprise includes remediation recommendations that utilize the results of the vulnerability scanner to provide feedback to developers on how to best resolve a security vulnerability.

Operate a vulnerability disclosure program

Anchore Enterprise includes an API that can be consumed by a vulnerability disclosure application or process. All of the vulnerability metadata required to populate a disclosure can be collected from the API.

Conclusion

Cybersecurity compliance modernization is a journey, not a destination. The Secure Software Development Attestation Form is the next step in that journey for secure software development. This stop was focused on transforming the preceding SSDF standard from a recommendation into a requirement with a soft gate. Given the overall trend of cybersecurity modernization that was kickstarted with FISMA in 2002, it would be prudent to assume that this attestation form is an intermediate step before the requirements become a hard gate, where compliance will have to be demonstrated as a prerequisite to utilizing the software.

Anchore is committed to keeping you up-to-date on the latest with these changes. If you’re interested in learning more about how Anchore Federal can help you achieve compliance with the Secure Software Development Attestation Form, follow the link. Also, we have two webinars that go into more depth on how to achieve SSDF compliance and will expand your knowledge on this deeply technical topic, links below:

Webinars

We don’t know how to fix the xz problem, but we can detect it

A very impressive and scary attack against the xz library was uncovered on Friday, which made for an interesting weekend for many of us.

There has been a lot written about the technical aspects of this attack, and more will be uncovered over the next few days and weeks. It’s likely we’re not done learning new details about this attack. This doesn’t appear to affect as many organizations as Log4Shell did, but it’s a pretty big deal, especially for what this sort of attack means for the larger ecosystem of open source. Trying to explain the details isn’t the point of this blog post. There’s another angle of this story that’s not getting much attention: how can we solve problems like this (we can’t), and what can we do going forward?

The unsolvable problem

Sometimes reality can be harsh, but the painful truth about this sort of attack is that there is no solution. Many projects and organizations are happy to explain how they keep you safe, or how you can prevent supply chain attacks by doing this one simple thing. However, the industry as it stands today lacks the ability to prevent an attack created by a motivated and resourced threat actor. If we want to use an analogy, preventing an attack like xz is the equivalent of the “pre-crime” of dystopian science fiction. The idea behind pre-crime is to use data or technology to predict when a crime is going to happen, then stop it before it does. As one can imagine, this leads to a number of problems in any society that adopts such a thing.

If there is a malicious open source maintainer, we lack the tools and knowledge to prevent this sort of attack; you can’t actually stop such behavior until after it happens. It may be that there’s no way to stop something like this before it happens.

HOWEVER, that doesn’t mean we are helpless. We can take a page out of the playbook of the observability industry. Sometimes we’re able to see problems as they happen or after they happen, then use that knowledge from the past to improve the future. That is a problem we can solve, and it’s a solution we can measure. If you have a solid inventory of your software, looking for affected versions of xz becomes simple and effective.

Today and Tomorrow

Of course, looking for a vulnerable version of xz, specifically versions 5.6.0 and 5.6.1, is something we should all be doing. If you’ve not gone through the software you’re running, you should go do this right now. See below for instructions on how to use Syft and Anchore Enterprise to accomplish this.

Finding two specific versions of xz is a very important task right now, but there’s also what happens tomorrow. We’re all very worried about these two specific versions of xz, but we should prepare for what happens next. It’s very possible there will be other versions of xz that end up having questionable code that needs to be removed or downgraded. There could be other libraries that have problems (everyone is looking for similar attacks now). We don’t really know what’s coming next. The worst part of being in the middle of attacks like this is the unknowns. But there are some things we do know. If you have an accurate inventory of your software, figuring out what software or packages you need to update becomes trivial.

Creating an inventory

If you’re running Anchore Enterprise the good news is you already have an inventory of your software. You can create a report that will look for images affected by CVE-2024-3094.

The above image shows how a report can be created in Anchore Enterprise. Another feature of Anchore Enterprise allows you to query all of your SBOMs for instances of specified software, by package name, via an API call. This is useful for gaining insight into the location, ubiquity, and version spread of the software present in your environment.

The package names in question are the liblzma5 and xz-libs packages, which cover the common naming across rpm-, dpkg-, and apk-based Linux distributions. For example:
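Here is a rough sketch of what such an API call can look like; the endpoint path and parameters are assumptions based on older Anchore API versions, so verify the exact route in the API Browser referenced below:

# Illustrative only: query images containing a given package by name
curl -u "$ANCHORE_USER:$ANCHORE_PASS" \
  "https://anchore.example.com/v1/query/images/by_package?name=liblzma5"
curl -u "$ANCHORE_USER:$ANCHORE_PASS" \
  "https://anchore.example.com/v1/query/images/by_package?name=xz-libs"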

   


See the Anchore Enterprise API Browser for more information about the API, and the Anchore Enterprise Documentation for more details on reporting, vulnerability scanning, and other functions of the system.

If you’re using Syft, it’s a little bit more complicated, but still a very solvable problem. The first thing to do is generate SBOMs for the software you’re using. Let’s create SBOMs for container images in this example.
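A minimal sketch, assuming the images listed are the ones you actually run (the names here are placeholders):

# Create a Syft JSON SBOM for each image and store them for later queries
mkdir -p sboms
for image in alpine:latest ubuntu:22.04 myorg/my-app:1.2.3; do
  syft scan "$image" -o syft-json > "sboms/$(echo "$image" | tr '/:' '__').json"
done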

It’s important to keep those SBOMs you just created somewhere safe. If we then find out in a few days or weeks that other versions of xz shouldn’t be trusted, or if a different open source library has a similar problem, we can just run a query against those files to understand how or if we are affected.

Now that we have a directory full of SBOMs, we can run a query similar to these to figure out which SBOMs have a version of xz in them.
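Two quick approaches, assuming the Syft JSON SBOMs from the previous step live in ./sboms:

# Cheap and cheerful: list SBOM files that mention an xz package at all
grep -l '"name": *"xz' sboms/*.json

# More precise, with jq: print file, package name, and version for xz/liblzma hits
for f in sboms/*.json; do
  jq -r --arg f "$f" \
    '.artifacts[] | select(.name | test("^(xz|liblzma)")) | "\($f): \(.name) \(.version)"' \
    "$f"
done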

While the example looks for xz, if the need to quickly look for other packages arises in the near future, it’s easy to adjust your query. It’s even easier if you store the SBOMs in some sort of searchable database.

What now?

There’s no doubt it’s going to be a very interesting couple of weeks for many of us. Anyone who has been watching the news is probably wondering what wild supply chain story will happen next. The pace of solving open source security problems hasn’t kept up with the growth of open source. There will be no quick fixes. The only way we get out of this one is a lot of hard questions and even more hard work.

But in the meantime, we can focus on understanding and defending what we have. Sometimes the best defense is a good defense.

Want a primer on software supply chain security? Get our free white paper here.

Navigating the NVD Quagmire

The global cybersecurity community has been in a state of uncertainty since the National Vulnerability Database (NVD) began degrading its service in mid-February. There has been a lot of coverage of this incident this month, and Anchore has been at the center of much of it. If you haven’t been keeping up, this blog post is here to recap what has happened so far and how the community has been responding to this incident.

Our VP of Security, Josh Bressers, has been leading the charge to educate and organize the community, first with his Open Source Security podcast, which goes through what is happening with NVD and why it is important. On top of that, last week he participated in a livestream with Chainguard co-founder Dan Lorenc on the Resilient Cyber Show, hosted by Chris Hughes, on the implications of the current delay with NVD service.

We’ve condensed the topics from these resources into a blog post that will cover the issues created by the delay in NVD service, a background on what has happened so far, a potential open-source solution to the problem and a call to action for advocacy. Continue reading for the good stuff.

The problem

Federal compliance standards mandate that NVD be used as the primary data source of truth, even if higher quality data sources are available. This mainly comes down to the fact that severity scores, meaning the Common Vulnerability Scoring System (CVSS), determine when an agency or organization is out of compliance with a federal security standard. Given that these compliance standards are created by the US government, only NVD can score a vulnerability and determine the appropriate action required to stay in compliance.

That’s where the problem starts to come in, you’ve got a whole bunch of government agencies on one hand saying, ‘you must use this data’. And then another government agency that says, “No, you can’t rely on this for anything”. This leaves folks working with the government in a bit of a pickle.

–Dan Lorenc, Co-Founder, Chainguard

If NVD isn’t assigning severities to vulnerabilities, it’s not clear what that means for organizations maintaining compliance, and they could be exposing themselves to significant risk. For example, high severity vulnerabilities could be published without organizations ever becoming aware of them, because this vital review and scoring process has been removed.

Background on NVD and the current state of affairs

NVD is the canonical source of truth for software vulnerabilities for the federal government, specifically for 10+ federal compliance standards. It has also become a go-to resource for the worldwide security community even if individual organizations in the wider community aren’t striving to meet a United States compliance standard.

NVD adds a number of enrichments to CVE data, but two are of particular importance: first, it adds a severity score to all CVEs; second, it adds information about which versions of the software are impacted by the CVE. The National Institute of Standards and Technology (NIST) has been providing this service to the security community for over 20 years through the NVD. That changed last month:

Timeline

  • Feb 12: NVD dramatically reduces the number of CVEs that are being enriched
  • Feb 15: NVD posts message about delay of enrichment on NVD Website

Read a comprehensive background in our original blog post, National Vulnerability Database: Opaque changes and unanswered questions.

Developing an Open-Source Solution

The Anchore team develops and maintains an open-source vulnerability scanner called Grype, which utilizes the NVD as one of many vulnerability feeds, as well as a software supply chain security platform called Anchore Enterprise that incorporates Grype. Given that both products use data from NVD, it was particularly important for Anchore to engage in the current crisis.

While there is nothing that Anchore can do about the missing severity scores, the other missing enrichment highlighted above is the versions of the software impacted by the CVE, i.e., Common Platform Enumeration (CPE) data. This matching data ends up being the more important signal during impact analysis because it is an objective measure of impact, rather than severity scoring, which can be debated (and is, at length).

Given Anchore’s history with the open-source software community, creating an OSS project to fill a gap in the NVD enrichment seemed the logical choice. The goal of going the OSS route is to leverage the transparent process and rapid iteration that comes from building software publicly. Anchore is excited to collaborate with the community to:

  • Ingest CVE data
  • Analyze CVEs
  • Improve the CVE-to-version mapping process

Everyone is being crushed by the unrelenting influx of vulnerabilities. It’s not just NVD. It’s not one organization. We can either sit in our silos and be crushed to death or we can work together.

–Josh Bressers, VP of Security at Anchore

If you’re looking to utilize this data and software as a backfill while NVD continues delaying analysis, or you want to contribute to the project, please join us on GitHub.

Cybersecurity Awareness and Advocacy

It might seem strange that the cybersecurity community would need to convince the US government that investing in the cybersecurity ecosystem is a net positive, given that the federal government is the largest purchaser of software in the world and probably the largest target for threat actors. But given how NIST has degraded the service of NVD and provided only opaque guidance on how to fill the gap in the meantime, it doesn’t appear that the right hand is talking to the left.

Whether the federal government intended to or not, by requiring that organizations and agencies utilize NVD in order to meet a number of federal compliance standards, it effectively became the authority on the severity of software vulnerabilities for the global cybersecurity ecosystem. By providing a valuable and reliable service to the community, the US garnered the trust of the ecosystem. The current state of NVD and the manner in which it was rolled out has degraded that trust. 

It is unknown whether the US will cede its authority to another organization; the EU may attempt to fill this vacuum with its own authoritative database. In the meantime, advocacy for cybersecurity awareness within the government is paramount. It is up to the community to create the pressure that will demonstrate the urgency of rethinking the current strategy around a vital community resource like NVD.

Conclusion

Anchore is committed to keeping the community up-to-date on this incident as it unfolds. To stay informed, be sure to follow us on LinkedIn or Twitter/X.

If you’d like to watch the livestream in all its glory, click on the image below to go to the VOD.

Also, if you’re looking for more in-depth coverage of the NVD incident, Josh Bressers has a security podcast called Open Source Security that covers the NVD incident and the history of NVD.

Introduction to the DoD Software Factory

In the rapidly evolving landscape of national defense and cybersecurity, the concept of a Department of Defense (DoD) software factory has emerged as a cornerstone of innovation and security. These software factories represent an integration of the principles and practices found within the DevSecOps movement, tailored to meet the unique security requirements of the DoD and Defense Industrial Base (DIB). 

By fostering an environment that emphasizes continuous monitoring, automation, and cyber resilience, DoD Software Factories are at the forefront of the United States Government’s push towards modernizing its software and cybersecurity capabilities. This initiative not only aims to enhance the velocity of software development but also ensures that these advancements are achieved without compromising on security, even against the backdrop of an increasingly sophisticated threat landscape.

Building and running a DoD software factory is so central to the future of software development that “Establish a Software Factory” is one of the explicitly named plays from the DoD DevSecOps Playbook. On top of that, the compliance capstone of the authorization to operate (ATO), or its DevSecOps-infused cousin the continuous ATO (cATO), effectively requires a software factory in order to meet the requirements of the standard. In this blog post, we’ll break down the concept of a DoD software factory and give a high-level overview of the components that make one up.

What is a DoD software factory?

A Department of Defense (DoD) Software Factory is a software development pipeline that embodies the principles and tools of the larger DevSecOps movement with a few choice modifications that conform to the extremely high threat profile of the DoD and DIB. It is part of the larger software and cybersecurity modernization trend that has been a central focus for the United States Government in the last two decades.

The goal of a DoD Software Factory is to create an ecosystem that enables continuous delivery of secure software that meets the needs of end-users while ensuring cyber resilience (a DoD catchphrase that emphasizes the transition from point-in-time security compliance to continuous security compliance). In other words, the goal is to leverage automation of software security tasks in order to fulfill the promise of the DevSecOps movement to increase the velocity of software development.

What is an example of a DoD software factory?

Platform One is the canonical example of a DoD software factory. Run by the US Air Force, it offers a comprehensive portfolio of software development tools and services. It has come to prominence due to its hosted services like Repo One for source code hosting and collaborative development, Big Bang for an end-to-end DevSecOps CI/CD platform, and Iron Bank for centralized container storage (i.e., a container registry). These services have led the way in demonstrating that the principles of DevSecOps can be integrated into mission-critical systems while still preserving the highest levels of security to protect the most classified information.

If you’re interested in learning more about how Platform One has unlocked the productivity bonus of DevSecOps while still maintaining DoD levels of security, watch our webinar with Camdon Cady, Chief of Operations and Chief Technology Officer of Platform One.

Who does it apply to?

Federal Service Integrators (FSI)

Any organization that works with the DoD as a federal service integrator will want to be intimately familiar with DoD software factories as they will either have to build on top of existing software factories or, if the mission/program wants to have full control over their software factory, be able to build their own for the agency.

Department of Defense (DoD) Program

Any Department of Defense (DoD) program will need to be well-versed in DoD software factories, as all of their software and systems will be required to run on a software factory as well as both reach and maintain a cATO.

What are the components of a DoD Software Factory?

A DoD software factory is composed of both high-level principles and specific technologies that meet these principles. Below is a list of some of the most significant principles of a DoD software factory:

Principles of DevSecOps embedded into a DoD software factory

  1. Breakdown Organizational Silos
    • This principle is borrowed directly from the DevSecOps movement; specifically, the DoD aims to integrate software development, test, deployment, security and operations into a single culture within the organization.
  2. Open Source and Reusable Code
    • Composable software building blocks are another DevSecOps principle; they increase productivity and reduce security implementation errors by sparing developers from writing security-critical software packages that they are not experts in.
  3. Immutable Infrastructure as Code (IaC)
    • This principle focuses on treating the infrastructure that software runs on as ephemeral and managed via configuration rather than manual systems operations. Enabled by cloud computing (i.e., hardware virtualization), this principle increases the security of the underlying infrastructure through templated secure-by-design defaults and improves reliability, since all infrastructure has to be designed to fail at any moment.
  4. Microservices architecture (via containers)
    • Microservices are a design pattern that creates smaller software services that can be built and scaled independently of each other. This principle allows for less complex software that only performs a limited set of behaviors.
  5. Shift Left
    • Shift left is the DevSecOps principle that re-frames when and how security testing is done in the software development lifecycle. The goal is to begin security testing while software is being written and tested rather than after the software is “complete”. This prevents insecure practices from cascading into significant issues right as software is ready to be deployed.
  6. Continuous improvement through key capabilities
    • The principle of continuous improvement is a primary characteristic of the DevSecOps ethos but the specific key capabilities that are defined in the DoD DevSecOps playbook are what make this unique to the DoD.
  7. Define a DevSecOps Pipeline
    • A DevSecOps pipeline is the system that utilizes all of the preceding principles in order to create the continuously improving security outcomes that are the goal of the DoD software factory program.
  8. Cyber Resilience
    • Cyber resiliency is the goal of a DoD software factory. It is defined as “the ability to anticipate, withstand, recover from, and adapt to adverse conditions, stresses, attacks, or compromises on the systems that include cyber resources.”

Common tools and systems of a DoD software factory

  1. Code Repository (e.g., Repo One)
    • Where software source code is stored, managed and collaborated on.
  2. CI/CD Build Pipeline (e.g., Big Bang)
    • The system that automates the creation of software build artifacts, tests the software and packages the software for deployment.
  3. Artifact Repository (e.g., Iron Bank)
    • The storage system for software components used in development and the finished software artifacts that are produced from the build process.
  4. Runtime Orchestration and Platform (e.g., Big Bang)
    • The deployment system that hosts the software artifacts pulled from the registry and keeps the software running so that users can access it.

How do I meet the security requirements for a DoD Software Factory? (Best Practices)

Use a pre-existing software factory

The benefit of using a pre-existing DoD software factory is the same as using a public cloud provider; someone else manages the infrastructure and systems. What you lose is the ability to highly customize your infrastructure to your specific needs. What you gain is the simplicity of only having to write software and allow others with specialized skill sets to deal with the work of building and maintaining the software infrastructure. When you are a car manufacturer, you don’t also want to be a civil engineering firm that designs roads.

To view existing DoD software factories, visit the Software Factory Ecosystem Coalition website.

Roll your own by following DoD best practices 

If you need the flexibility and customization of managing your own software factory then we’d recommend following the DoD Enterprise DevSecOps Reference Design as the base framework. There are a few software supply chain security recommendations that we would make in order to ensure that things go smoothly during the authorization to operate (ATO) process:

  1. Continuous vulnerability scanning across all stages of the CI/CD pipeline
    • Use a cloud-native vulnerability scanner that can be directly integrated into your CI/CD pipeline and called automatically during each phase of the SDLC (see the sketch after this list)
  2. Automated policy checks to enforce requirements and achieve ATO
    • Use a cloud-native policy engine in tandem with your vulnerability scanner in order to automate the reporting and blocking of software that is a security threat and a compliance risk
  3. Remediation feedback
    • Use a cloud-native policy engine that can provide automated remediation feedback to developers in order to maintain a high velocity of software development
  4. Compliance (Trust but Verify)
    • Use a reporting system that can be directly integrated with your CI/CD pipeline to create and collect the compliance artifacts that can prove compliance with DoD frameworks (e.g., CMMC and cATO)
  5. Air-gapped system
    • Utilize a cloud-native software supply chain security platform that can be deployed into an air-gapped environment in order to maintain the most strict security for classified missions
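As a minimal sketch of recommendation #1 above, here is what a scan-and-gate step can look like in a pipeline script, using the open-source Syft and Grype as stand-ins for the scanner components; the registry URL, image tag variable, and severity threshold are placeholder choices:

# CI step: generate an SBOM, keep it as a compliance artifact, and gate the build
syft scan "registry.example.com/my-app:${CI_COMMIT_SHA}" -o syft-json > sbom.json
grype sbom:sbom.json --fail-on high   # non-zero exit code fails the pipeline stage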

Is a software factory required in order to achieve cATO?

Technically, no. Effectively, yes. A cATO requires that your software is deployed on an approved DoD Enterprise DevSecOps Reference Design, not a software factory specifically. If you build your own DevSecOps platform that meets the criteria of the reference design, then you have effectively rolled your own software factory.

How Anchore Can Help

The easiest and most effective method for achieving the security guarantees that a software factory is required to meet for its software supply chain is to use:

  1. An SBOM generation tool that integrates directly into your software development pipeline
  2. A container vulnerability scanner that integrates directly into your software development pipeline
  3. A policy engine that integrates directly into your software development pipeline
  4. A centralized database to store all of your software supply chain security logs
  5. A query engine that can continuously monitor your software supply chain and automate the creation of compliance artifacts

These are the primary components of both Anchore Enterprise and Anchore Federal, cloud-native, SBOM-powered software composition analysis (SCA) platforms that provide end-to-end software supply chain security to holistically protect your DevSecOps pipeline and automate compliance. This approach has been validated by the DoD; in fact, the DoD’s Container Hardening Process Guide specifically named Anchore Federal as a recommended container hardening solution.

Learn more about how Anchore brings DevSecOps to DoD software factories.

Conclusion and Next Steps

DoD software factories can come off as intimidating at first, but hopefully we have broken them down into a more digestible form. At their core, they reflect the best of the DevSecOps movement with specific adaptations relevant to the extreme threat environment that the DoD has to operate in, as well as the intersecting trend of the modernization of federal security compliance standards.

For those that want to understand how a policy engine can reliably deliver DevSecOps developer productivity, continuous security and automated compliance, read our overview of how policy engines are the perfect complement to software supply chain security: The Power of Policy-as-Code for the Public Sector.

Spring Webinar Update: Expand Your Knowledge with Our Expert-Led Sessions

In our continuous effort to bring valuable insights and tools to the world of software supply chain security, we are thrilled to announce two upcoming webinars and one recently held webinar now available for on-demand access. Whether you’re looking to enhance your understanding of software security, explore open-source tools to automate OSS licensing management, or navigate the complexities of compliance with federal standards, our expert-led webinars are designed to equip you with the knowledge you need. Here’s what’s on the agenda:

Tracking License Compliance Made Easy: Intro to Grant (OSS)

Date: Mar 26, 2024 at 2pm EDT  (11am PDT)

Join us as Anchore CTO Dan Nurmi and Grant maintainer Christopher Phillips discuss the challenges of managing software licenses within production environments, highlighting the complexity and ongoing nature of tracking open-source licenses.

They will introduce Grant, an open-source tool designed to alleviate the burden of OSS license inspection by demonstrating how to scan for licenses within SBOMs or container images, simplifying a typically manual process. The session will cover the current landscape of software licenses, the difficulties of compliance checks, and a live demo of Grant’s features that automate this previously laborious process.

Software Security in the Real World with Kelsey Hightower and Dan Perry

Date: April 4th, 2024 at 2pm EDT  (11am PDT)

In our upcoming webinar, experts Kelsey Hightower and Dan Perry will delve into the nuances of securing software in cloud-native, containerized applications. This in-depth session will explore the criteria for vulnerability testing success or failure, offering insights into security testing and compliance for modern software environments. 

Through a live demonstration of Anchore Enterprise, they’ll provide a comprehensive look at visibility, inspection, policy enforcement, and vulnerability remediation, equipping attendees with a deeper understanding of software supply chain security, proactive security strategies, and practical steps to embark on a software security journey. 

The discussion will continue after the webinar on X/Twitter with Kelsey Hightower.

FedRAMP and SSDF Compliance: How to Sell to the Federal Government

This webinar explores how Anchore aids in navigating the complex compliance requirements for selling software to the federal government, focusing on FedRAMP vulnerability scanning and SSDF compliance. Led by Josh Bressers, VP of Security, and Connor Wynveen, Senior Solutions Engineer, it details how to evaluate FedRAMP controls for software containers and adhere to SSDF guidelines.

Key takeaways include strategies to streamline FedRAMP and SSDF compliance efforts, leveraging SBOMs for efficiency, the critical role of automated vulnerability scans, and how Anchore’s policy pack can assist organizations in meeting compliance standards.

Accessing the Webinars

Don’t miss out on the opportunity to expand your knowledge and skills with these sessions. To register for the upcoming webinars or to access the on-demand webinar, visit our webinar landing page. Whether you’re looking to stay ahead of the curve in software security, explore funding opportunities for your open-source projects, or break into the federal market, our webinars are designed to provide you with the insights and tools you need.

We look forward to welcoming you to our upcoming webinars. Stay informed, stay ahead!

National Vulnerability Database: Opaque changes and unanswered questions

A short history lesson on the NVD

Founded in 2005, the National Vulnerability Database, or NVD, is a collection of vulnerability data maintained by the National Institute of Standards and Technology (NIST) in the United States. As of today, many companies rely on NVD data for their security operations and vulnerability research.

NVD describes itself as:

The NVD is the U.S. government repository of standards based vulnerability management data represented using the Security Content Automation Protocol (SCAP). This data enables automation of vulnerability management, security measurement, and compliance. The NVD includes databases of security checklist references, security related software flaws, product names, and impact metrics.

The primary role of the NVD is adding data to vulnerabilities that have been assigned a CVE ID. It includes additional metadata such as severity levels via the Common Vulnerability Scoring System (CVSS) and affected product data via Common Platform Enumeration (CPE). NIST is responsible for maintaining the NVD, as each CVE ID can require additional modifications or maintenance, since the nature of vulnerabilities can change daily. This is a service NVD has been providing for nearly 20 years.

The graph below shows a historical trend of CVE IDs that have been published in the CVE program (green), alongside the analysis data provided by NVD (red), since 2005.

Key: Green is all CVE IDs in NVD. Red is IDs with a CPE attached

We can see nearly every CVE has been enriched by NVD during this time.

A problematic website notice from the NVD

On February 15th 2024, a banner appeared on the NVD website stating:

NIST is currently working to establish a consortium to address challenges in the NVD program and develop improved tools and methods. You will temporarily see delays in analysis efforts during this transition. We apologize for the inconvenience and ask for your patience as we work to improve the NVD program.

It’s not entirely clear what this message means for the data provided by NVD or what the public should expect. 

While attempting to research the meaning behind this statement, Anchore engineers discovered that as of February 15, 2024, NIST has almost completely stopped updating NVD with analysis for CVE IDs. The graph below shows the trend of CVE IDs that have been published in the CVE program (green), alongside the analysis data provided by NVD (red), since January 1, 2024.

Key: Green is all CVE IDs in NVD. Red is IDs with a CPE attached

Starting February 12th, thousands of CVE IDs have been published without any record of analysis by NVD. Since the start of 2024 there have been a total of 6,171 CVE IDs, with only 3,625 enriched by NVD. That leaves a gap of 2,546 (41%!) IDs.

NVD has become an industry standard, with organizations and security products relying on its data for security operations such as prioritizing vulnerability remediation and securing infrastructure. CVE IDs are constantly being added and updated, but those IDs are missing key analytical data provided by NVD. Any organization that depends on NVD for vulnerability data such as CVSS scores is no longer receiving updates to the CVE data. This means that organizations relying on this data are left in the dark about new vulnerabilities, imposing greater risk and an unmanaged attack surface on their environments.

Wait and see?

How to fill the gap left by this missing data has not yet been addressed by NVD. There are other vulnerability databases, such as the GitHub Advisory Database and the CVE5 database, that contain severity ratings and affected products, but by definition, those databases cannot provide NVD severity scores.

Anchore is investigating options to create a public repository of identifiers to fill this gap. We invite members of the security community to join us at our next meetup on March 14th, 2024 as we research options. Details for the meetup are available on GitHub.

In the meantime, we will continue to look for updates from NIST and hope that they are more transparent about their service situation soon.

Syft Reaches v1.0!

Early in 2020 we started work on Syft, an SBOM generator that is easy to install and use and supports a variety of SBOM formats. In late September of 2020 we released the first version of Syft with seven ecosystems supported. Since then we’ve had 168 releases made up of the work from 134 contributors, collectively adding an additional 18 ecosystems — a total of over 40 package catalogers. 

Today we’re going one step further: we’re thrilled to announce the release of Syft v1.0.0! 🎉

What is Syft?

At a high level, Syft answers the seemingly simple question: “what is in my application?”. To a finer point, Syft is a standalone CLI tool and library that scans container images and file systems in order to generate an SBOM: the Software Bill of Materials that describes what software components were found.
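For the simplest cases, here is a sketch of basic usage (the image and directory names are placeholders):

# Scan a container image and print a human-readable package table
syft scan alpine:latest

# Scan a local source directory instead of an image
syft scan dir:./my-project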

While Syft’s capability to generate a detailed SBOM is useful in its own right, we consider Syft to be a foundational tool that supports both our growing list of open-source offerings as well as dozens of other OSS projects. Ultimately it delivers a way for users to both generate SBOM material for their projects as well as use those same SBOMs for other important functions such as vulnerability scanning, OSS license reporting, tracking components at various stages of the application lifecycle, and other custom use cases.

Some of the important dimensions of Syft are:

  • Sources: types of software applications and artifacts that can be scanned by Syft to produce an SBOM, such as docker container images, source code directories, and more.
  • Catalogers: functions that are implemented for a given type of software artifact / packaging ecosystem, such as Java JAR files, RPMs, Go modules, and more.
  • Output Formats: interchangeable formats to put the SBOM material into, post-SBOM generation, such as the Syft-native JSON, SPDX (multiple versions), CycloneDX (multiple versions), and user-customizable formats via templating.

Over time, we’ve seen the list of sources, catalogers, and output formats continue to grow, and we are looking forward to adding even more capabilities as the project continues forward.  

Capabilities of Syft

To start with, Syft needs to know how to read what you’re giving it. There are a number of different types of sources Syft supports: directories, individual files, archives, and – of course – container images! And there are a lot of options such as letting you choose what scope you want to be cataloged within an image. By default we consider a squashed layer scope, which is most like what the filesystem will look like when a container is created from the image. But we also allow for looking at all layers within the container image, which would include contents that are distributed but not available when a container is created. In future versions we want to expand these selections to support even more use cases.
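The scope selection mentioned above maps directly to a CLI flag; a quick sketch:

# Default: catalog the squashed filesystem (what a running container sees)
syft scan alpine:latest --scope squashed

# Catalog every layer, including contents hidden by later layers
syft scan alpine:latest --scope all-layers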

When scanning a source we have several catalogers scouring for packaging artifacts, covering several ecosystems:

  • Alpine (apk)
  • C/C++ (conan)
  • Dart (pubs)
  • Debian (dpkg)
  • Dotnet (deps.json)
  • Objective-C (cocoapods)
  • Elixir (mix)
  • Erlang (rebar3)
  • Go (go.mod, Go binaries)
  • Haskell (cabal, stack)
  • Java (jar, ear, war, par, sar, nar, native-image)
  • JavaScript (npm, yarn)
  • Jenkins Plugins (jpi, hpi)
  • Linux kernel module and archives (ko & vmlinz)
  • Nix (outputs in /nix/store)
  • PHP (composer)
  • Python (wheel, egg, poetry, requirements.txt)
  • Red Hat (rpm)
  • Ruby (gem)
  • Rust (cargo.lock)
  • Swift (cocoapods, swift-package-manager)
  • WordPress Plugins

We don’t blindly run all catalogers against all kinds of input sources though – we tailor the catalogers used to create the most accurate SBOM possible based on what is being analyzed. For example, when scanning images, we enable the alpm-db-cataloger but don’t enable the cocoapods-cataloger.

If there is a set of catalogers that you must / must not use for whatever reason, you can always tailor the set that runs with --override-default-catalogers and --select-catalogers to meet your needs. If you’re using Syft as an API, you can go further and implement your own package cataloger and provide it to Syft when creating an SBOM.
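For instance, a sketch of what those flags look like in practice; the cataloger names are examples, so check the available catalogers for your Syft version:

# Add a specific cataloger on top of the defaults chosen for this source type
syft scan alpine:latest --select-catalogers "+go-module-binary-cataloger"

# Or ignore the defaults entirely and name the exact set to run
syft scan alpine:latest --override-default-catalogers "apk-db-cataloger"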

When it comes to SBOM formats, we are unopinionated about which format might be best for your use case — so we support as many as we can, including the most popular formats: SPDX and CycloneDX. This is a core decision point for us — this way you can pivot between SBOM formats and not be locked into a format based on a tooling decision. You can even select which version of a format to output (even output multiple at once!):

syft scan alpine:latest -o spdx-json@2.2

syft scan alpine:latest -o spdx-json@2.3

syft scan alpine:latest -o cyclonedx-json@1.4

syft scan alpine:latest -o spdx-json@2.3=./alpine.spdx.json -o cyclonedx-json=./alpine.cdx.json

From the beginning, Syft was designed to lean into the Unix philosophy: do one thing and one thing well, allowing it to plug into downstream tooling easily. For instance, to use your SBOM to get a vulnerability analysis, pipe the results to Grype:
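A sketch of that pipeline:

# Generate an SBOM and hand it straight to Grype for vulnerability matching
syft scan alpine:latest -o syft-json | grype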

Or to discover software licenses and check for compliance, pipe the results to Grant:
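A sketch of the equivalent Grant pipeline, assuming a Grant release that reads SBOMs from stdin:

# Check the licenses found in the SBOM against your license rules
syft scan alpine:latest -o syft-json | grant check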

What does v1 mean to us? 

Version 1 primarily signals a stabilization of the CLI and API. Moving forward you can expect that breaking changes will always coincide with a major version bump of Syft. Some specific guarantees we wanted to call out explicitly are:

  • We version the JSON schema of the Syft JSON output separately from that of Syft, the application. In the past, in a v0 state, this meant that breaking changes to the JSON schema would only result in a minor version bump of Syft. Moving forward, the JSON schema version used for the default JSON output of Syft will never include breaking changes.
  • We’ve implicitly supported the ability to decode any previous version of the Syft JSON model; however, moving forward this will be an explicit guarantee: if you have a Syft JSON document of any version, then you will be able to convert it into the latest Syft JSON version.
  • Stereoscope, our library for parsing container images and crafting squashed filesystem representations (critical to the functionality of Syft), will now be versioned at each release and have release tags.

What does this mean for the average user? In short: probably nothing! We will continue to ship enhancements and fixes every few weeks while remaining compatible with legacy SBOM document schemas.

So are we done?…

…Not even close 🤓. As the SBOM landscape keeps changing, our goal is to grow Syft in terms of what can be cataloged, the accuracy of the results, the formats that can be expressed, and its usefulness in more use cases.

Have an ecosystem you need that we don’t support yet? Let us know and let’s build it together! Found an issue or have a feature idea? Open an issue and let’s talk about it! Looking to contribute and searching for a good place to start? We have some issues set aside just for you! Curious as to what we’re looking to work on next? Check out our ever-growing roadmap! And always, come join us every other week at our community office hours chats if you want to meet with us and talk about topical issues regarding our OSS projects.

Balancing the Scale: Software Supply Chain Security and APTs

Note: This is a multi-part series primer on the intersection of advanced persistent threats (APTs) and software supply chain security (SSCS). This blog post is the third in the series. We will update this blog post with links to the additional parts of the series as they are published.
• Part 1 | With Great Power Comes Great Responsibility: APTs & Software Supply Chain Security
• Part 2 | David and Goliath: the Intersection of APTs and Software Supply Chain Security
• Part 3 (This blog post)

Last week we dug into the details of why an organization’s software supply chain is a ripe target for well-resourced groups like APTs and the potential avenues that companies have to combat this threat. This week we’re going to highlight the Anchore Enterprise platform and how it provides a turnkey solution for combating threats to any software supply chain.

How Anchore Can Help

Anchore was founded on the belief that a security platform that delivers deep, granular insights into an organization’s software supply chain, covers the entire breadth of the SDLC, and integrates automated feedback from the platform will create a holistic security posture capable of detecting advanced threats and allowing for human interaction to remediate security incidents. Anchore is trusted by Fortune 100 companies and the most exacting federal agencies across the globe because it has delivered on this promise.

The rest of the blog post will detail how Anchore Enterprise accomplishes this.

Depth: Automating Software Supply Chain Threat Detection

Having deeper visibility into an organization’s software supply chain is crucial for security purposes because it enables the identification and tracking of every component in the software’s construction. This comprehensive understanding helps in pinpointing vulnerabilities, understanding dependencies, and identifying potential security risks. It allows for more effective management of these risks by enabling targeted security measures and quicker response to potential threats. Essentially, deeper visibility equips an organization to better protect itself against complex cyber threats, including those that exploit obscure or overlooked aspects of the software supply chain.

Anchore Enterprise accomplishes this by generating a comprehensive software bill of materials (SBOM) for every piece of software (even down to the component/library/framework level). It then compares this detailed ingredients list against vulnerability and active exploit databases to identify exactly where in the software supply chain there are security risks. These surgically precise insights can then be fed back to the original software developers, rolled up into reports for the security team to better inform risk management, or sent directly into an incident management workflow if the vulnerability is evaluated as severe enough to warrant an “all-hands on deck” response.

Developers shouldn’t have to worry about manually identifying threats and risks inside the software supply chain. Having deep insights into your software supply chain and being able to automate detection and response is vital to creating a resilient and scalable solution to the risk of APTs.

Breadth: Continuous Monitoring in Every Step of Your Software Supply Chain

The breadth of instrumentation in the Software Development Lifecycle (SDLC) is crucial for securing the software supply chain because it ensures comprehensive security coverage across all stages of software development. This broad instrumentation facilitates early detection and mitigation of vulnerabilities, ensures consistent application of security policies, and allows for a more agile response to emerging threats. It provides a holistic view of the software’s security posture, enabling better risk management and enhancing the overall resilience of the software against cyber threats.

Powered by a 100% feature-complete platform API, Anchore Enterprise integrates into your existing DevOps pipeline.

Anchore has been supporting the DoD in this effort since 2019, acting as what is commonly referred to as “overwatch” for the DoD’s software supply chain. Anchore Enterprise continuously monitors how risk is evolving by ingesting tens of thousands of runtime containers and hundreds of source code repositories, and by alerting on malware-laced images submitted to the registry. It monitors every stage of the DevOps pipeline (source to build to registry to deploy) to gain a holistic view of when and where threats enter the software development lifecycle.

Feedback: Alerting on Breaches or Critical Events in Your Software Supply Chain

Integrating feedback from your software supply chain and SDLC into your overall security program is important because it allows for real-time insights and continuous improvement in security practices. This integration ensures that lessons learned and vulnerabilities identified at any stage of the development or deployment process are quickly communicated and addressed. It enhances the ability to preemptively manage risks and adapt to new threats, thereby strengthening the overall security posture of the organization.

How would you know if something is wrong in a system? Create high-quality feedback loops, of course. If there is a fire in your house, you typically have a fire alarm. That is a great source of feedback. It’s loud and creates urgency. And when you investigate to confirm the fire is legitimate and not a false alarm, you can see the fire and feel its heat.

Software supply chain breaches are more like carbon monoxide leaks: silent, often undetected, and potentially lethal. If you don’t have anything in place to specifically alert on that kind of threat, you could pay severely.

Anchore Enterprise was designed as both a set of sensors that can be deployed deeply and broadly across your software supply chain and a feedback system that uses those sensors to detect and alert on threats that would otherwise leak, silently and carbon monoxide-like, through your environment.

Anchore Enterprise’s feedback mechanisms come in three flavors: automatic, recommendation, and informational. Anchore Enterprise utilizes a policy engine to take automatic action based on the feedback provided by the software supply chain sensors. If you want to make sure that no software is ever deployed into production (or any environment) with an exploitable version of Log4j, the Anchore policy engine can review the security metadata created by the sensors for the existence of this software component and stop a deployment in progress before it ever becomes accessible to attackers.
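
Stripped of the declarative policy language a real engine provides, the Log4j gate described above reduces to logic like the following sketch; version 2.17.1 is used as the cutoff because it closed out the Log4Shell series of CVEs.

```python
# A toy policy rule in the spirit of the Log4j gate: block any deployment
# whose SBOM lists log4j-core older than 2.17.1. The version parsing is
# deliberately naive; a real policy engine uses proper version-range matching.
def version_tuple(version):
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def violates_log4shell_policy(components):
    return any(
        c.get("name") == "log4j-core"
        and version_tuple(c.get("version", "0")) < (2, 17, 1)
        for c in components
    )

# This SBOM fragment would stop the deployment in progress.
assert violates_log4shell_policy([{"name": "log4j-core", "version": "2.14.1"}])
```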

Anchore Enterprise can also be configured to make recommendations and suggest actions based on security signals. If a vulnerability is discovered in a software component but isn’t considered urgent, Anchore Enterprise can instead recommend that the software developer fix the vulnerability while still allowing them to continue testing and deploying their software. This makes developers aware of security issues very early in the SDLC while giving them the flexibility to fix vulnerabilities according to their own prioritization.

Finally, Anchore Enterprise offers informational feedback that alerts developers, the security team, or even the executive team to potential security risks without prescribing a specific solution. These alerts can be integrated into any development, support, or incident management systems the organization utilizes. Often these alerts concern high-risk vulnerabilities that require deeper organizational analysis to determine the best course of remediation.
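
One way to picture the three flavors together is as a severity-based routing table. The sketch below is an illustration of that idea only; the thresholds and actions are placeholders, not Anchore Enterprise’s actual policy syntax or defaults.

```python
from enum import Enum

class Feedback(Enum):
    AUTOMATIC = "stop the deployment via the policy engine"
    RECOMMENDATION = "notify the developer but allow progress"
    INFORMATIONAL = "route to security/incident tooling for analysis"

# Hypothetical routing table from finding severity to feedback flavor.
ROUTES = {
    "Critical": Feedback.AUTOMATIC,
    "High": Feedback.RECOMMENDATION,
    "Medium": Feedback.RECOMMENDATION,
    "Low": Feedback.INFORMATIONAL,
}

def route_finding(finding):
    """Pick a feedback flavor for a vulnerability finding."""
    return ROUTES.get(finding.get("severity"), Feedback.INFORMATIONAL)

print(route_finding({"id": "CVE-2021-44228", "severity": "Critical"}))
# Feedback.AUTOMATIC
```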

Conclusion

Due to the asymmetry between APTs and under-resourced security teams, the goal isn’t to create an impenetrable fortress that can never be breached. The goal is instead to follow security best practices and litter your SDLC with sensors and automated feedback mechanisms. APTs may have significantly more resources than your security team, but they are still human, and all humans make mistakes. By placing low-effort tripwires in as many locations as possible, you reverse the asymmetry of resources and allow the well-resourced adversary to become their own worst enemy. APTs are still software developers at the end of the day, and no one writes bug-free code in the long run. By transforming your software supply chain into a minefield of best practices, you create a battlefield that requires your adversaries to slow down and carefully disable each individual security mechanism. None are impossible to disarm, but each speed bump creates another opportunity for your adversary to make a mistake and reveal themselves. If zero-trust architecture has taught us anything, it is that an impenetrable perimeter was never the best strategy.

David and Goliath: the Intersection of APTs and Software Supply Chain Security

Note: This is a multi-part series primer on the intersection of advanced persistent threats (APTs) and software supply chain security (SSCS). This blog post is the second in the series. If you’d like to start from the beginning, you can find the first blog post here.

Last week we set the stage for discussing APTs and the challenges they pose for software supply chain security by giving a quick overview of each topic. This week we will dive into the details of how the structure of the open source software supply chain makes it a uniquely ripe target for APTs.

The Intersection of APTs and Software Supply Chain Security

The Software Ecosystem: A Ripe Target

APT groups often prioritize the exploitation of software supply chain vulnerabilities. This is due to the asymmetric structure of the software ecosystem: by breaching a single component, such as a build system, they can gain access to any organization using the compromised software component. This inverts the cost-benefit calculation of the research and development needed to discover a vulnerability and craft an exploit for it. Previously, APTs focused primarily on targets where the payoff warranted the investment, or on vulnerabilities so widespread that the attack could be automated. The complex web of software dependencies now allows APTs to scale their attacks because of the structure of the ecosystem itself.

The Software Supply Chain Security Dynamic: An Unequal Playing Ground

The interesting challenge with software supply chain security is that securing the supply chain requires even more effort than an APT expends exploiting it. The rub is that each company that consumes software has to build its own software supply chain security system to protect its organization, while an APT that invests in exploiting a popular component or system gets the benefit of access to all of the software built on top of it.

Given that security organizations are at a structural disadvantage, how can organizations even the odds?

How Do I Secure My Software Supply Chain from APTs?

An organization’s ability to detect the threat of APTs in its internal software supply chain comes down to three core themes that can be summed up as “go deep, go wide, and integrate feedback”. Specifically: the deeper the visibility into your organization’s software supply chain, the less surface area an attacker has to slip in malicious software; the wider this visibility is deployed across the software development lifecycle, the earlier an attacker will be caught; and neither of the first two points matter if the feedback produced isn’t integrated into an overall security program that can act on the signals surfaced.

By applying these three core principles to the design of a secure software supply chain, an organization can ensure that they balance the playing field against the structural advantage APTs possess.

How Can I Easily Implement a Strategy for Securing My Software Supply Chain?

The core principles of depth, breadth and feedback are powerful touchstones to utilize when designing a secure software supply chain that can challenge APTs but they aren’t specific rules that can be easily implemented. To address this, Anchore has created the open source VIPERR Framework to provide specific guidance on how to achieve the core principles of software supply chain security.

VIPERR is a free software supply chain security framework that Anchore created for organizations to evaluate and improve the security posture of their software supply chain. VIPERR stands for visibility, inspection, policy enforcement, remediation, and reporting. 

Utilizing the VIPERR Framework, an organization can satisfy the three core principles of software supply chain security: depth, breadth, and feedback. By following this guide, numerous Fortune 500 enterprises and top federal agencies have transformed their software supply chain security posture and become harder targets for advanced persistent threats. If you’re looking to design and run your own secure software supply chain system, this framework provides a shortcut to ensuring the resulting system is resilient.

How Can I Comprehensively Implement a Strategy for Securing My Software Supply Chain?

There are a number of comprehensive initiatives that define best practices for software supply chain security. Organizations ranging from the National Institute of Standards and Technology (NIST), with standards such as SP 800-53, SP 800-218, and SP 800-161, to the Cloud Native Computing Foundation (CNCF) and the Open Source Security Foundation (OpenSSF) have created detailed recommendations for achieving a comprehensive supply chain security program, such as the SLSA framework and the Secure Supply Chain Consumption Framework (S2C2F) Project. Be aware that these are not quick and dirty solutions for achieving a “reasonably” secure software supply chain; they are large undertakings for any organization and should be given the resources needed to achieve success.

We don’t have the space to go over each of them in this blog post, but we have broken each one down in our complete guide to software supply chain security.

This is the second in a series of blog posts focused on the intersection of APTs and software supply chain security. This blog post highlighted the reasons that APTs focus their efforts on software supply chain exploits and the potential avenues that companies have to combat this threat. Next week we will discuss the Anchore Enterprise solution as a turnkey platform to implement the strategies outlined above.

With Great Power Comes Great Responsibility: APTs & Software Supply Chain Security

Note: This is a multi-part series primer on the intersection of advanced persistent threats (APTs) and software supply chain security (SSCS). This blog post is the first in the series. We will update this blog post with links to the additional parts of the series as they are published.
• Part 1 (This blog post)
• Part 2
• Part 3

In the realm of cybersecurity, the convergence of Advanced Persistent Threats (APTs) and software supply chain security presents a uniquely daunting challenge for organizations. APTs, characterized by their sophisticated, state-sponsored or well-funded nature, focus on stealthy, long-term data theft, espionage, or sabotage, targeting specific entities. Their effectiveness is amplified by the asymmetric power dynamic of a well-funded attacker versus a resource-constrained security team.

Modern supply chains inadvertently magnify the impact of APTs due to the complex and interconnected dependency network of software and hardware components. The exploitation of this weakness by APTs not only compromises the targeted organizations but also poses a systemic risk to all users of the compromised downstream components. The infamous SolarWinds exploit exemplifies the far-reaching consequences of such breaches.

This landscape underscores the necessity for an integrated approach to cybersecurity, emphasizing depth, breadth, and feedback to create a holistic software supply chain security program that can withstand even adversaries as motivated and well-resourced as APTs. Before we jump into how to create a secure software supply chain that can resist APTs, let’s understand our adversary a bit better first.

Know Your Adversary: Advanced Persistent Threats (APTs)

What is an Advanced Persistent Threat (APT)?

An Advanced Persistent Threat (APT) is a sophisticated, prolonged cyberattack, usually state-sponsored or executed by well-funded criminal groups, targeting specific organizations or nations. Characterized by advanced techniques, APTs exploit zero-day vulnerabilities and custom malware, focusing on stealth and long-term data theft, espionage, or sabotage. Unlike broad, indiscriminate cyber threats, APTs are highly targeted, involving extensive research into the victim’s vulnerabilities and tailored attack strategies.

APTs are marked by their persistence, maintaining a foothold in a target’s network for extended periods, often months or years, to continually gather information. They are designed to evade detection, blending in with regular network traffic, and using methods like encryption and log deletion. Defending against APTs requires robust, advanced security measures, continuous monitoring, and a proactive cybersecurity approach, often necessitating collaboration with cybersecurity experts and authorities.

High-Profile APT Example: Operation Triangulation

The recent Operation Triangulation campaign disclosed by Kaspersky researchers is an extraordinary example of an APT in both its sophistication and depth. The campaign chained four separate zero-day vulnerabilities, took a highly targeted approach toward specific individuals at Kaspersky, followed a multi-phase attack pattern, and persisted over a four-year period. Its complexity implied significant resources, possibly from a nation-state, and the stealthy, methodical progression of the attack aligns closely with the hallmarks of APTs. Famed security researcher Bruce Schneier, writing on his blog, Schneier on Security, wasn’t able to contain his surprise upon reading the details of the campaign: “[t]his is nation-state stuff, absolutely crazy in its sophistication.”

What is the impact of APTs on organizations?

Ignoring the threat posed by Advanced Persistent Threats (APTs) can lead to significant consequences for organizations, including extensive data breaches and severe financial losses. These sophisticated attacks can disrupt operations, damage reputations, and, in cases involving government targets, even compromise national security. Because of their persistent nature, APTs enable long-term espionage and leave their targets at a strategic disadvantage. Overlooking APTs thus leaves organizations exposed to continuous, sophisticated cyber espionage and the multifaceted damages that follow.

Now that we have a good grasp on the threat of APTs, we turn our attention to the world of software supply chain security to understand the unique features of this landscape.

Setting the Stage: Software Supply Chain Security

What is Software Supply Chain Security?

Software supply chain security is focused on protecting the integrity of software through its development and distribution. Specifically, it aims to prevent the introduction of malicious code into the software components that are used to build widely-used software services.

The open source software ecosystem is a complex supply chain that solves the problem of redundant effort. By creating a single open source version of a web server and distributing it, new companies that want to operate a business on the internet can re-use the generic open source web server instead of having to build their own before they can do business. These companies can instead focus their efforts on building new bespoke software on top of the web server that delivers new, useful functions to previously unserved users. This is typically referred to as composable software building blocks, and it is one of the most important outcomes of the open source software movement.

But as they say, “there are no free lunches.” With the incredible productivity boon of open source software comes responsibility.

What is the Key Vulnerability of the Modern Software Supply Chain Ecosystem?

The key vulnerability in the modern software supply chain is the structure of how software components are re-used, each with its own set of dependencies, creating a complex web of interlinked parts. This intricate dependency network can lead to significant security risks if even a single component is compromised, as vulnerabilities can cascade throughout the entire network. This interconnected structure makes it challenging to ensure comprehensive security, as a flaw in any part of the supply chain can affect the entire system.
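
The cascade effect is easy to see in miniature. In the sketch below, a toy dependency graph shows how compromising one library exposes every downstream consumer; the package names are invented for illustration.

```python
from collections import deque

# Toy dependency graph: edges point from a package to its downstream
# consumers, i.e., the packages that depend on it.
DOWNSTREAM = {
    "compromised-lib": ["web-framework", "cli-tool"],
    "web-framework": ["storefront-app", "billing-service"],
    "cli-tool": [],
    "storefront-app": [],
    "billing-service": [],
}

def blast_radius(root):
    """Every package transitively exposed when `root` is compromised."""
    exposed, queue = set(), deque([root])
    while queue:
        for consumer in DOWNSTREAM.get(queue.popleft(), []):
            if consumer not in exposed:
                exposed.add(consumer)
                queue.append(consumer)
    return exposed

print(blast_radius("compromised-lib"))
# {'web-framework', 'cli-tool', 'storefront-app', 'billing-service'}
```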

Modern software is particularly vulnerable to software supply chain attacks because 70-90% of a modern application is composed of open source software components, with the remaining 10-30% being the proprietary code that implements company-specific features. This means that by breaching a popular open source framework or library, an attacker can amplify the blast radius of their attack, effectively reaching significant portions of internet-based services with a single exploit.
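
If you want to estimate this ratio for your own application, an SBOM makes it straightforward. The sketch below counts third-party components given a hypothetical list of first-party package names; note that the 70-90% figure is usually quoted by code volume, so component counts are only a rough proxy.

```python
import json

# Hypothetical: package names your organization authored in-house.
FIRST_PARTY = {"acme-billing", "acme-auth"}

def third_party_ratio(sbom_path):
    """Fraction of SBOM components that are third party (by count)."""
    with open(sbom_path) as f:
        components = json.load(f).get("components", [])
    if not components:
        return 0.0
    third_party = [c for c in components if c.get("name") not in FIRST_PARTY]
    return len(third_party) / len(components)

print(f"{third_party_ratio('sbom.json'):.0%} of components are third party")
```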

If you’re looking for a deeper understanding of software supply chain security we have written a comprehensive guide to walk you through the topic in full.

High-Profile Software Supply Chain Exploit Example: SolarWinds

In one of the most sophisticated supply chain attacks, malicious actors compromised the update mechanism of SolarWinds’ Orion software. This breach allowed the attackers to distribute malware to approximately 18,000 customers. The attack had far-reaching consequences, affecting numerous government agencies, private companies, and critical infrastructure.

Looking at the example of SolarWinds, the lesson we should take away is not to focus solely on prevention. APTs have a wealth of resources to draw upon. Instead, the focus should be on monitoring the software we consume, build, and ship for unexpected changes. Modern software supply chains come with a great deal of responsibility: the software we use and ship needs to be understood and monitored.

This is the first in a series of blog posts focused on the intersection of APTs and software supply chain security. This blog post highlighted the contextual background to set the stage for the unique consequences of these two larger forces. Next week, we will discuss the implications of the collision of these two spheres in the second blog post in this series.