Build Your Own Grype Database

When running vulnerability scans against your software dependencies, it's important to have the most up-to-date vulnerability information available. New vulnerabilities are found all the time, so the data goes stale quickly. For current Grype users, we have a daily pipeline that builds and publishes a Grype database with the latest vulnerability data. Until now, the tooling that drives this pipeline has not been available as open source, since it was originally designed as an embedded aspect of Anchore's commercial products. Today that's changing!

How does this help the average Grype user? By making the framework and code that are used to prepare vulnerability data sources open, the entire open source community (even you!) can contribute improvements and new vulnerability data sources, enhancing both the breadth and quality of vulnerability scanning for all.

We’re happy to announce two new open source projects: Vunnel and Grype-DB.

Vunnel (short for "vulnerability data funnel") understands how to pull and process vulnerability data from various upstream sources, such as NVD, GitHub Security Advisories, and multiple Linux distribution providers. This allows you to prepare a data directory with indexed and normalized vulnerability data. That sounds simple, but all of this vulnerability data is different and varies widely in quality and composition. Vunnel gives us some control to normalize this data for better consistency.
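
If you want a rough feel for what this looks like in practice, here is a minimal sketch of preparing data for a couple of providers. The subcommand names are taken from the Vunnel README at the time of writing and may change, so treat this as illustrative and check `vunnel --help` for the current interface:

vunnel run nvd      # download and normalize NVD data into a local data directory
vunnel run wolfi    # do the same for the Wolfi provider
ls data/            # by default, each provider gets its own indexed subdirectory

Each provider's output follows the same normalized layout, which is what makes the downstream database build possible.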

Demo of Vunnel

Grype-DB builds an SQLite database that Grype can use, based on the data that Vunnel outputs. Better yet, Grype-DB can invoke Vunnel to prepare a data directory for multiple providers, allowing you to orchestrate and tailor which providers you want to include in the database.
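
As a rough sketch of how the two tools fit together (the subcommand names are from the Grype-DB README at the time of writing; consult the project docs for the authoritative flags):

grype-db pull       # invoke the configured providers (via Vunnel) to prepare a data directory
grype-db build      # build the SQLite database from the pulled data
grype-db package    # archive the database so it can be distributed and served to Grype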

Running Grype-DB

This puts the entire Grype vulnerability data pipeline and its surrounding tooling into open source! That includes all of the providers that drive Grype today: Alpine, Amazon Linux, CentOS, Debian, GitHub Security Advisories, NVD, Oracle Linux, Red Hat Enterprise Linux, SUSE Linux Enterprise, Ubuntu, and Wolfi. Anyone can now fully participate in the data processing for the Grype ecosystem and expand Grype's vulnerability matching capabilities (for example, by adding support for new Linux distributions).

We’re excited to see what community contributions arise from this effort! Stay tuned for a tutorial to show you how to implement a new Vunnel provider. 

If you’d like to learn more feel free to reach out to us on our Discourse forum, drop into our community meetings for live Q&A (every other Thursday), or see the docs:

Syft and Grype Community Momentum

Hello open source supply chain security fans! A lot has happened with Syft and Grype in the last couple of months, so I want to share some of the new features and improvements we've been working on. If you're not familiar with Syft and/or Grype, here's the short version: both tools are open source and maintained by Anchore.

Syft is our Software Bill of Materials generator. It scans container images and filesystems and makes an inventory of contents including files and software packages. Grype, in turn, takes that information and analyzes it for vulnerabilities from a variety of sources including the GitHub Advisory Database, the National Vulnerability Database, and others.

Syft and Grype development happen at a rapid pace and I want to share a couple of recent improvements. 

Syft Performance Improvements

In Syft 0.71, released in early February 2023, we spent some time focusing on improving scanning performance. If you use Syft to scan large, multi-GB images or big directories, you will definitely see some improvement. These improvements are passed through to Grype as well, since Grype uses Syft under the hood to extract the list of packages to be analyzed for vulnerabilities. In one of our tests, a scan that took six minutes before the optimizations completed in just 23 seconds afterward. If you scan large images or containers, make sure you are up to date on Syft, because you will probably see big improvements.

Syft Binary Detection

Syft gained new capabilities in version v0.62.3, released in late 2022: this version introduced a way to scan for and detect binary packages of common open source components, so we can detect things like embedded copies of Apache HTTP Server or PostgreSQL that might not have been installed using a package manager.

Our development community quickly started adding new classifiers for a lot of different open source components, including Python, PHP, Perl, and Go runtimes, Redis, HAProxy, and others. It’s pretty easy to extend the binary detection mechanism for new things, so if you want to learn how to add a new classifier, let us know and we can point you in the right direction.
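
If you want a quick, illustrative look at what the classifiers find on an image you already use, filtering Syft's output down to the binary package type works well (the image name here is just an example, and the exact packages reported depend on the image):

syft postgres:15 | grep binary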

Good First Issues

Are you interested in contributing to Syft or Grype? We would be happy to have you. We’ve labeled some issues in GitHub with “Good First Issue” if you want to look for something to start with. If you want to talk about a possible implementation or ask questions to help you get started, you can find the developers on Discourse or join the community meeting once every two weeks on Thursday at noon Eastern Time.

Good First Issues for Syft

Good First Issues for Grype

Developers: smaller binaries and improved build times

Finally, we’ve made some changes to the dependencies we are using, which has resulted in significantly smaller binary sizes for both Syft and Grype, along with improvements to our build and release process. We now have the ability to get changes released much faster – from PR to release, the pipeline is less than 30 minutes instead of hours.

Thank you to everyone who contributes to and helps our team advance Syft and Grype for the open source community. We’re excited about the future of open source software security and hope that you are too.

Breaking Down NIST SSDF: Spotlight on PW.6 Compilers and Interpreter Security

In this part of the long-running series breaking down the NIST Secure Software Development Framework (SSDF), also known as the standard NIST 800-218, we are going to discuss PW.6. This control is broken into two parts, PW.6.1 and PW.6.2, which are related and defined as:

PW.6.1: Use compiler, interpreter, and build tools that offer features to improve executable security.
PW.6.2: Determine which compiler, interpreter, and build tool features should be used and how each should be configured, then implement and use the approved configurations.

We’re going to lump both of these together for the purpose of this post. It doesn’t make sense to split these two controls apart when we are reviewing what this actually means, but there will be two posts for PW.6, this is part one. Let’s start by looking at the examples for some hints on what the standard is looking for:

PW.6.1
Example 1: Use up-to-date versions of compiler, interpreter, and build tools.
Example 2: Follow change management processes when deploying or updating compiler, interpreter, and build tools, and audit all unexpected changes to tools.
Example 3: Regularly validate the authenticity and integrity of compiler, interpreter, and build tools. See PO.3.

PW.6.2
Example 1: Enable compiler features that produce warnings for poorly secured code during the compilation process.
Example 2: Implement the “clean build” concept, where all compiler warnings are treated as errors and eliminated except those determined to be false positives or irrelevant.
Example 3: Perform all builds in a dedicated, highly controlled build environment.
Example 4: Enable compiler features that randomize or obfuscate execution characteristics, such as memory location usage, that would otherwise be predictable and thus potentially exploitable.
Example 5: Test to ensure that the features are working as expected and are not inadvertently causing any operational issues or other problems.
Example 6: Continuously verify that the approved configurations are being used.
Example 7: Make the approved tool configurations available as configuration-as-code so developers can readily use them.

If you review the references, you will find a massive swath of suggestions, covering everything from code signing to obfuscating binaries, handling compiler warnings, and threat modeling. The net was cast wide on this one. Every environment is different. Every project or product uses its own technology. There's no way to "one size fits all" this control, and that is one of the challenges that has made compliance so difficult for developers in the past. We have to determine how this applies to our environment, and the way we apply it will be drastically different from the way someone else does.

We’re going to split this topic along the lines of build environments and compiler/interpreter security. For this blog, we are going to focus on using modern protection technology, specifically in compiler security and runtimes. Of course, you will have to review the guidance and understand what makes sense for your environment, everything we discuss here is for example purposes only.

Compiler security
When we think about the security of applications, we tend to focus on the code itself. Security vulnerabilities are the result of attackers causing unexpected behavior in the code: printing an unescaped string, adding or subtracting a very large integer, maybe even getting the application to open a file it shouldn't. We've all heard about memory safety problems and how hard they are to avoid in certain languages; C and C++ are legendary for their lack of memory protection. Our intent should be to write code that doesn't have security vulnerabilities. The NSA and even Consumer Reports have recently come out against using memory unsafe languages. We can also lean on technology to help reduce the severity of memory safety bugs when we can't abandon memory unsafe languages just yet, and maybe never will. There's still a lot of COBOL out there, after all.

While attackers can exploit some bugs in ways that cause unexpected behavior, there are technologies, especially in compilers, that can lower the severity of, or even eliminate the danger of, certain bug classes. For example, stack buffer overflows in C used to be a huge problem; then we created stack canaries, which have reduced the severity of these bugs substantially.

Every compiler is different, every operating system is different, and every application is different, so all of this has to be decided for each individual application. For the sake of simplicity, we will use gcc to show how some of these technologies work and how to enable them. The Debian Wiki Hardening page has a huge amount of detail; we'll just cover some of the quick, easy things.
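
The test-overflow.c program used in the examples below is nothing special. The exact source isn't shown here, but a minimal sketch of what it might look like (reconstructed to match the compiler output shown later) is:

user@debian:~/test$ cat test-overflow.c
#include <string.h>

void function(void)
{
    char s[9];                             /* destination buffer is only 9 bytes */
    strcpy(s, "This string is too long");  /* copies 24 bytes, smashing the stack */
}

int main(void)
{
    function();
    return 0;
}
user@debian:~/test$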

user@debian:~/test$
user@debian:~/test$ gcc -o overflow test-overflow.c
user@debian:~/test$ ./overflow
Segmentation fault
user@debian:~/test$ gcc -fstack-protector -o overflow test-overflow.c
user@debian:~/test$ ./overflow
*** stack smashing detected ***: terminated
Aborted
user@debian:~/test$

In the above example, we can see that enabling the gcc stack protector feature causes the program to detect the stack smashing at runtime and terminate immediately, rather than silently overflowing the buffer and crashing with a segmentation fault (or worse, being exploited).

Most of these protections will only reduce the severity of a very narrow group of bugs. These languages still have many other problems and moving away from a memory unsafe language is the best path forward. Not everyone can move to a memory safe language, so compiler flags can help.

Compiler warnings are bugs
There was once a time when compiler warnings were ignored because they were just warnings. It didn't really matter, or so we thought. Compiler warnings were just suggestions from the compiler; if there was time later, those warnings could be fixed. Except there is never time later. It turns out that sometimes those warnings are really important: they can be hints that a serious bug is waiting to be exploited. It's hard to know which warnings are harmless and which are serious, so the current best practice is to fix them all to minimize vulnerabilities in your code.

If we use our example code, we can see:

user@debian:~/test$
user@debian:~/test$ gcc -o overflow test-overflow.c
test-overflow.c: In function 'function':
test-overflow.c:6:2: warning: '__builtin_memcpy' writing 24 bytes into a region of size 9 overflows the destination [-Wstringop-overflow=]
6 | strcpy(s, "This string is too long");
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
user@debian:~/test$

We see a warning telling us our string is too long. The build doesn’t fail, but that’s not a warning you should ignore.
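
To enforce the "clean build" concept from PW.6.2 mechanically, you can promote warnings to errors so the build fails until they are addressed. A sketch, with the caveat that the exact diagnostics vary by gcc version:

user@debian:~/test$ gcc -Wall -Werror -o overflow test-overflow.c
(the same stringop-overflow diagnostic is now reported as an error, and no binary is produced)
user@debian:~/test$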

Interpreted languages

The SSDF's suggestion for interpreted languages is to use the latest interpreter. These languages are memory safe, but they are still vulnerable to logic bugs. Many of the interpreters themselves are written in C or C++, so you could double-check that they are built with the various compiler hardening features enabled.
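
One way to do that spot check on a Debian-based system is the hardening-check script from the devscripts package. This is just an illustration; the interpreter path and the exact checks reported will vary by distribution and build:

user@debian:~$ hardening-check /usr/bin/python3
/usr/bin/python3:
 Position Independent Executable: yes
 Stack protected: yes
 Fortify Source functions: yes
 (output abridged)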

There aren’t often protections built into the interpreter itself. This goes back to the wide swath of guidance for this control. Programming languages have an infinite number of possible use cases, the problem set is too large to accurately protect. Memory safety is a very narrow set of problems that we still can’t get right. General purpose programming is an infinitely wide set of problems.

There were some attempts to secure interpreted languages in the past, but the hardening proved to be too easy to break to rely on as a security feature. PHP and Ruby used to have safe mode, but it turned out they weren’t actually safe. Compiler and interpreter protections are hard to make effective in meaningful ways.

The best way to secure interpreted languages is to run the code in sandboxes using things like virtualization and containers. That guidance won't be covered in this post; in fact, the SSDF doesn't have guidance on how to run applications securely, since it focuses on development. There is plenty of other guidance on that topic, and we'll make sure to cover it once the SSDF series is complete.

This complexity and difficulty are almost certainly why the SSDF guidance is simply to run the latest interpreter. The latest interpreter version will ensure that any bugs, security or otherwise, are fixed.

Wrapping up
As we can see from this post, optimizing compiler and runtime security isn't a simple task. It's one of those things that can feel easy, but it really isn't; the devil is in the details. The only real guidance here is to figure out what works best in your environment and go with that.

If you missed the first post in this series, you can view it here. Next time we will discuss build systems. Build systems have been a popular topic over the last few years as they have been targets for attackers. Luckily for us there is some solid guidance we can draw upon for securing a build system.

Josh Bressers
Josh Bressers is vice president of security at Anchore where he guides security feature development for the company’s commercial and open source solutions. He serves on the Open Source Security Foundation technical advisory council and is a co-founder of the Global Security Database project, which is a Cloud Security Alliance working group that is defining the future of security vulnerability identifiers.

Finding and Fixing the jsonwebtoken Vulnerabilities

There’s a new vulnerability in the Node.js jsonwebtoken library getting attention. CVE-2022-23529, or GHSA-27h2-hvpr-p74q are the identifiers this vulnerability has been assigned. It’s not just one thing though, there were four fixes to jsonwebtoken version 9.0.0. For now we’re going to focus on CVE-2022-23529 which is getting the most attention due to a high CVSS score. There’s added complexity around the score as NVD scores it a 9.8 (critical), the researcher that found it scored it a 7.6 (high), and the GHSA advisory rates it medium. 

Note: How you use the library determines the actual severity. It's unlikely that most users of this library are vulnerable, because the vulnerability only affects unusual usage patterns.

The jsonwebtoken library is a Node.js library for working with JSON Web Tokens, or JWTs. JWT is used as a way to authenticate web services. This library is downloaded more than 9 million times per week, which is a pretty substantial number; it's easy to say this is a widely used library.

Whenever there’s an issue like what we’re seeing in the jsonwebtoken library, there’s questions about how to actually find and fix it. A lot of articles and advisories will say “just upgrade the package”, but that’s meaningless advice. How do we know if we’re using jsonwebtoken? How do we know if we’re using a vulnerable version? And how do we actually upgrade it? Let’s go over all of these questions, and we will use a Software Bill of Materials (SBOM) as our vehicle for helping 

First we have to find jsonwebtoken in our projects and products. We're going to use Syft and a very simple example for this. We start out in a directory named "jwt", then create a directory called "project", install jsonwebtoken version 8.5.0 in it, and generate an SBOM with Syft.

➜  jwt$ mkdir project
➜  jwt$ cd project
➜  project$ npm install jsonwebtoken@8.5.0

added 15 packages, and audited 16 packages in 629ms

➜  project$ cd ..
➜  jwt$ syft dir:project -o json > sbom.json
 ✔ Indexed project

 ✔ Cataloged packages      [15 packages]

➜  jwt$

We can see that jsonwebtoken version 8.5.0 is installed if we look at the SBOM file:

➜  jwt$ cat sbom.json | jq '.artifacts[] | select(.name == "jsonwebtoken")'
{
  "id": "94476b6a2fbdc8c5",
  "name": "jsonwebtoken",
  "version": "8.5.0",
  "type": "npm",
  "foundBy": "javascript-lock-cataloger",
  "locations": [
    {
      "path": "/path/to/project/package-lock.json"
    }
  ],
  "licenses": [],
  "language": "javascript",
  "cpes": [
    "cpe:2.3:a:jsonwebtoken:jsonwebtoken:8.5.0:*:*:*:*:*:*:*",
    "cpe:2.3:a:*:jsonwebtoken:8.5.0:*:*:*:*:*:*:*"
  ],
  "purl": "pkg:npm/jsonwebtoken@8.5.0"
}
➜  jwt$

We could create an SBOM for every one of our projects and look through them for versions of jsonwebtoken older than 9.0.0, but that can be a chore, especially if you have a lot of projects. It's far easier to rely on a vulnerability scanner such as Grype to scan the projects and report back the findings. Here's what we get if we scan our project with Grype:

➜  jwt$  ls
project  sbom.json
➜  jwt$ grype sbom:sbom.json
 ✔ Vulnerability DB    [no update available]
 ✔ Scanned image       [4 vulnerabilities]
NAME          INSTALLED  FIXED-IN  TYPE  VULNERABILITY        SEVERITY
jsonwebtoken  8.5.0      9.0.0     npm   GHSA-27h2-hvpr-p74q  High
jsonwebtoken  8.5.0      9.0.0     npm   GHSA-8cf7-32gw-wr33  Medium
jsonwebtoken  8.5.0      9.0.0     npm   GHSA-hjrf-2m68-5959  Medium
jsonwebtoken  8.5.0      9.0.0     npm   GHSA-qwph-4952-7xr6  Medium
➜  jwt$

Remember back when I said there were four fixes? Now we can see all four findings with a scan like this. Grype would also report other findings if they existed in our application.

In the example above we use Grype to scan the SBOM rather than the directory itself. Grype can scan a directory directly, but SBOMs scan faster and give us a nice point-in-time snapshot of which packages are installed.
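
For completeness, the equivalent direct scan is just:

grype dir:project

which catalogs the directory with Syft internally and then matches vulnerabilities in one step.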

Now that we know we have jsonwebtoken, and we know we have a vulnerable version, how can we upgrade the package? We know we have to move to version 9.0.0 and we have version 8.5.0 installed. There have been some breaking changes between those versions; you can see them in the changelog document. For the sake of simplicity we will ignore the possible breaking changes.

We can run `npm update` to try to upgrade our jsonwebtoken package:

➜  project$ npm update

changed 1 package, and audited 16 packages in 4s

➜  project$ cd ..
➜  jwt$ syft dir:project -o json > upgraded-sbom.json
 ✔ Indexed project
 ✔ Cataloged packages      [15 packages]
➜  jwt$ grype sbom:upgraded-sbom.json
 ✔ Vulnerability DB        [no update available]
 ✔ Scanned image           [4 vulnerabilities]

NAME          INSTALLED  FIXED-IN  TYPE  VULNERABILITY        SEVERITY
jsonwebtoken  8.5.1      9.0.0     npm   GHSA-27h2-hvpr-p74q  High
jsonwebtoken  8.5.1      9.0.0     npm   GHSA-8cf7-32gw-wr33  Medium
jsonwebtoken  8.5.1      9.0.0     npm   GHSA-hjrf-2m68-5959  Medium
jsonwebtoken  8.5.1      9.0.0     npm   GHSA-qwph-4952-7xr6  Medium
➜  jwt$

Notice that if we just run `npm update` we end up at version 8.5.1 instead of 9.0.0. This is because npm update respects the semver range recorded in package.json (^8.5.0 in this case), so it won't cross a major version boundary with breaking changes. You have to run `npm install jsonwebtoken@9.0.0` to update to the version we want.

➜  project$ npm install jsonwebtoken@9.0.0

added 3 packages, removed 7 packages, changed 2 packages, and audited 12 packages in 5s

➜  project$ cd ..
➜  jwt$ syft dir:project -o json > 9.0-sbom.json
 ✔ Indexed project
 ✔ Cataloged packages      [11 packages]
➜  jwt$ grype sbom:9.0-sbom.json

 ✔ Vulnerability DB        [no update available]
 ✔ Scanned image           [0 vulnerabilities]

No vulnerabilities found

➜  jwt$

Our package is updated and no vulnerabilities are found! In the real world it's rarely this simple; most modern applications are large and often have library versions that can't be easily upgraded. For our oversimplified example, however, this is the output we expect.

This may be a simple, contrived example, but the fundamental concept applies to any package and vulnerability. We can use tools like Syft and Grype to know what we have and which vulnerabilities affect our projects, and then to verify that we upgraded everything correctly. I hope this deep dive into the example scenario provides the groundwork to help you confidently approach the jsonwebtoken vulnerability and keep your organization safe from susceptible libraries.

Josh Bressers

Why is this massive supply chain attack being ignored?

If you read security news, you may have heard about a recent attack that resulted in 144,000 (that's one hundred and forty-four THOUSAND) packages being uploaded to NuGet, PyPI, and NPM. That's a mind-boggling number; with all the supply chain news lately, it seems like it would be all anyone is talking about. Instead, it seems to have flared up quickly and then died right down.

The discovery of this attack was made by Checkmarx. Essentially what happened is that attackers created a number of accounts in the NuGet, PyPI, and NPM packaging ecosystems. Those fake accounts then uploaded a huge number of packages that linked to phishing sites in the package description. The intention seems to have been to improve the search ranking of those sites as well as to track users who enter sensitive details.

Supply chain security is an overused term

This concept called "supply chain security" is very overused these days. What we tend to call software supply chain security is really an umbrella over many other things that are sometimes hard to describe: reproducible builds, attestation, source code control, and slim containers, to name a few. An attack like this can't be solved with the current toolset we have, which is almost certainly why it's not getting the attention it deserves. It's easy to talk about something an exciting project or startup can help with; it's much harder to understand and fix systemic problems.

Why this one is different

To understand why this is so different and hard, let's break the problem down into its pieces. The first part of the attack is the packaging ecosystems. The accounts in question were valid; they weren't hacked or trying to impersonate someone else. The various packaging ecosystems have low barriers to entry, which is why we all use them and why they are so incredible. In these ecosystems we see new accounts and packages all the time. In fact, thousands of new packages are added every day, so there's nothing unexpected about an attacker creating many accounts, and no alarm bells would be expected. Once an account exists, it can start adding packages.

The second piece of this attack is that someone has to download the package in question. It should be pointed out that in this particular instance the actual package content isn't malicious, but it's safe to say nobody wants any of these packages in their application. The volume of bad packages is important for this part of the attack. Developers will accidentally typo package names, or they might stumble on a package thinking it solves whatever their problem is, or they might just have bad luck and install something by accident. Again, this is working within the constraints of the system; so far nothing happening is outside of everyday operations.

The last part of this attack is how it gets cleaned up. The packaging ecosystems have stellar security teams working behind the scenes, and as soon as they find a bad package it gets delisted. It's rare for these malicious packages to last more than a few days once they are discovered, and quickly removing packages is the best course of action. But again, the existing supply chain security solutions won't pick up any of these happenings today. When a package is delisted, it just vanishes. How do you know if any of the packages you already installed are a problem? What if your artifact registry cached a malicious package? It can be difficult to know whether you have a malicious package installed.

How should this work?

How we detect these problems is where things start to get really hard. There will be calls for the packaging ecosystems to lock down their environments; that's probably a bad idea. The power of open source is how fast and easy it is to collaborate. Putting up walls won't solve this; it just moves the problem somewhere else, often in a way that hides the real issues.

We have existing databases that track vulnerabilities and bad packages, but they can't handle this scale today. There are examples of malicious packages listed in OSV and GitHub's vulnerability database, while other databases like CVE have explicitly stated they don't want to track this sort of malware. Just knowing where to look and how to catalog these malicious packages isn't simple, yet it's an ongoing problem; there have been several instances of malicious packages just this year.

To understand the scale of this data: the CVE project has existed since 1999 and had about 200,000 IDs in total at the end of 2022. Adding 144,000 new IDs would be significant.

At the end of the day, the vulnerability databases are where this data needs to live. Creating a new way to track malicious packages and expecting everyone to watch it just creates new problems. We are already good at finding and fixing vulnerabilities in our software, and this is fundamentally the same problem: malicious packages are no different than vulnerabilities. We also need to keep in mind that this will continue to happen.

There is a huge number of existing tools that parse vulnerability databases and then alert developers. Alerting developers is exactly what these datasets and tools were built for, but none of them are picking up this type of supply chain problem today. If we add this data to the existing data, all the pieces can fall into place with minimal disruption.

What can we do right now?

A knee-jerk reaction to an event like this is to put constraints on developers in an attempt to allow only trusted packages. While that can work, it's always important to remember that when you create constraints for a person, they become more creative. Curated open source repositories need ongoing maintenance, and if you just make pulling new packages harder without the ability to quickly add new ones, the developers will find another way.

At the moment there’s no good solution for detecting these packages. The best option is to generate a software bill of materials (SBOM) for all of your software, then look for the list of known bad packages against what’s in the SBOMs. In this particular case even if you have one of these packages in your environment, it will be harmless. But the purpose of this post is to explain the problem so the community can have informed conversations. This is about starting to work together to solve hard problems.

In the future we need to see lists of these known malicious packages cataloged somewhere. It’s boring and difficult work though, so it’s unlikely to get much attention. This is the equivalent of buried utilities that let modern society function. Extremely important, but not something that turns many heads unless it goes terribly wrong.

There’s no way any one group can solve this problem. We will need a community effort. Everyone from the packaging ecosystems, to the vulnerability databases, to the tool manufacturers, and even the security researchers all need to be on the same page. There are efforts underway to help with this. OSV and GitHub allow community contributions. The OpenSSF has a Securing Software Repos working group. The Cloud Security Alliance has the Global Security Database. These are some of the places to find or generate productive and collaborative conversations that can drive progress that hinders use of malicious packages in the software supply chain.

Josh Bressers

Breaking Down NIST SSDF: Spotlight on PS.3.2

This is the second post in a long-running series explaining the details of the NIST Secure Software Development Framework (SSDF), also known as the standard NIST 800-218. You can find more details about the SSDF on the NIST website.

Today we’re going to cover control PS.3.2 which is defined as

PS.3.2: Collect, safeguard, maintain, and share provenance data for all components of each software release (e.g., in a software bill of materials [SBOM]).

This one sounds really simple: we just need an SBOM, right? But nothing is ever that easy, especially in the world of cybersecurity compliance.

Let’s break this down into multiple parts. Nearly every word in this framework is important for a different reason. The short explanation is we need data that describes our software release. Then we need to safely store that data. It sounds simple, but like many things in our modern world of technology, the devil is in the details.

Start with the SBOM

Let’s start with an SBOM. Yes, you need an SBOM. That’s the provenance data. There are many ways to store release data, but the current expectation across the industry is that SBOMs will be the primary document. The intent is we have the ability to receive and give out SBOMs. For the rest of this post we will put a focus on how to meet this control using an SBOM and SBOM management.

It doesn’t matter how fast or slow the release process is, every time you ship or deploy software, you need an SBOM. For most of us the days of putting out a release every few years are long gone, almost everyone is releasing software at a breakneck pace. Humans cannot be a part of this process, because humans are slow and make mistakes. To solve the challenge of SBOM automation, we need, well, automation. SBOMs should be generated automatically during stages of the development process. There are many different ways to accomplish this, here at Anchore we’re pretty partial to the Syft SBOM generator. We will be using Syft in our examples,  but there are many ways to create this data.

Breaking it Down

Creating an SBOM is the easiest step in meeting this control. Suppose we need an SBOM for a container; let's use the Grype container image as our example. It can be as easy as running:

syft -o spdx-json anchore/grype:latest

and we have an SBOM of the Grype container image in the SPDX format. In this example we generated an SBOM from a container in the Docker registry, but there's no reason to wait for a container to be pushed to a registry to generate an SBOM. You can add Syft into the build process; for example, there is a Syft GitHub action that does this step automatically on every build. There are even ways to include the SBOM in the registry metadata now.

Once we have our SBOMs generated (keep in mind the 's' is important, you are going to have a lot of SBOMs), we need to keep track of them. Some applications will have one, some will have multiple; for example, if you ship three container images for an application you will end up with at least three SBOMs. This is why the word "collect" exists in the control. Collecting all the SBOMs for a release is important, and really just means making sure you can find the SBOMs that were automatically generated. In our case, we would collect and store the SBOMs in Anchore Enterprise, a tool that does a great job of keeping track of a lot of SBOMs. More details can be found on the Anchore Enterprise website.
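
As an illustration of the "collect" step, a release that ships three container images might generate and gather its SBOMs with something as simple as the loop below (the image names are hypothetical):

for image in example/api:1.4.0 example/worker:1.4.0 example/ui:1.4.0; do
  syft -o spdx-json "$image" > "$(echo "$image" | tr '/:' '--').spdx.json"
done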

Protect the Data Integrity

After the SBOMs are collected, we have to safeguard their contents. The word safeguard isn't very clear on its own, but one of the examples states "Example 3: Protect the integrity of provenance data, and provide a way for recipients to verify provenance data integrity." This seems pretty straightforward. It would be dishonest to claim "just sign the SBOM and you're done," because digital signatures are still hard.

It’s probably best to use whatever mechanisms you use to safeguard your application artifacts to also safeguard the SBOM. This could be digital signatures. It could be a read only bucket storage over HTTPS. It could be checksum data available out of band. Maybe just a system that provides audit logs of when data changes. There’s no single way to do this and unfortunately there’s no good advice that can be handed out for this step. Be wary of anyone claiming this is a solved problem today. The smart folks working on Syft have some ideas on how to deal with this.

We are also expected to maintain the SBOMs we are now collecting and safeguarding. This one seems easy, as in theory an SBOM is a static document, but it can be interpreted in several ways. NIST has a glossary; it doesn't define "maintain," but it does define maintenance as "Any act that either prevents the failure or malfunction of equipment or restores its operating capability." It's safe to say the intent is to make sure the SBOMs are available now and in perpetuity. In a fast-moving industry it's easy to forget that two or more years from now the data in an SBOM could be needed by customers, auditors, or even forensic investigators. On the other side of that coin, it's just as possible that in a few years what passes as an SBOM today won't be considered an SBOM. Maintaining SBOMs should not be dismissed as unimportant or simple; you should find an SBOM management system that can store and convert SBOM formats as a way to future-proof the documents.

There are new products coming to market that can help with this maintenance stage; they are being touted as SBOM management platforms. Anchore Enterprise is a product that does this, and there are also open source alternatives such as Dependency-Track. There will no doubt be even more of these tools in the future as SBOM use increases and the market matures.

Lastly, and possibly most importantly, we have to share the SBOMs.

One idea that keeps coming up is that every SBOM needs to be available to the public. This is specifically covered by CISA in their SBOM FAQ; it comes up on a pretty regular basis and is a point of confusion. You get to decide who can access an SBOM: you can distribute SBOMs only to your customers, you can distribute them to the public, or you can keep them internal. Today there isn't a well-defined way to distribute SBOM data. Many ecosystems have their own ways of including SBOM data; for example, in the world of containers, registries are putting them in metadata, and even GoReleaser lets you create SBOMs. Depending on how your product or service is accessed, there may not be a simple answer to this question.

One solution could be having customers email support to ask for a specific SBOM. Maybe you make the SBOM available in the same place customers download your application or log in to your service. You can even just package the SBOM into the application itself, like a file in a zip archive. Once again, the guidance does not specifically tell us how to accomplish this.
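
For example, bundling the SBOM into the same archive your customers already download keeps distribution simple (the file names here are hypothetical):

zip myapp-1.4.0.zip myapp README.md myapp-1.4.0.spdx.json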

Pro Tip: Make sure you include instructions for anyone downloading the SBOM on how to verify the integrity of your application and your SBOM. PS.3.1 talks about how to secure the integrity of your application, and we'll cover that in a future blog post.

Final Thoughts

This is one control out of 42. It's important to remember this is a journey, not a one-and-done sort of event. We have many more blog posts to share on this topic, and a lot of SBOMs to generate. Like any epic journey, there's not one right way to get to the destination.

Everyone has to figure out how they want to meet each NIST SSDF control, ideally in a way that is valuable to the organization as well as customers. Processes that create unnecessary burden always end up being worked around, while processes integrated into existing workflows are far less cumbersome. Let's aim high and produce verifiable components that not only meet NIST compliance but also ease the process for downstream consumers.

To sum it all up: you need to create SBOMs for every release, safeguard them the same way you safeguard your application, store them in a future-proof manner, and be able to share them. There's no one way to do any of this. If you have questions, subscribe to our newsletter for monthly updates on software supply chain security insights and trends.

Josh Bressers

Meet Quill: A cross platform code signing tool for macOS

We generate a lot of tooling at Anchore. We chose to write most of these tools in Go for a few reasons: the development process is delightful, cross-platform builds are easy, and the distribution of artifacts is very simple (curl the binary). 

Since we release tools for macOS, we are beholden to the requirements put forth by Apple, something we've written about at length in the past. Tools like gon have made the process of signing and notarizing our releases much easier by wrapping the xcrun and codesign utilities and hiding some of the inherent complexities. However, since gon shells out to these tools, you still must be on a Mac to sign and notarize your binaries. This nullifies one of the reasons why we chose Go in the first place: having simple cross-platform builds from any platform.

We’ve reworked our release process a few times over to account for this, all with unpleasant tradeoffs. It seems to come down to a couple of points:

  1. Running macOS in CI is more expensive than running on Linux.
  2. Using Docker on macOS in CI is annoying. Due to licensing restrictions, Docker is not included on the default Mac runners. This is problematic since we use goreleaser to perform the build and release steps in one shot, which means we need to be able to sign/notarize our binaries at the same time as we package and release them for all platforms. This has only very recently been alleviated with the addition of Colima on the default Mac runner, but before that it caused us to slice and dice our release pipeline in awkward ways.

After a while we started to wonder: is it intrinsically necessary for the signing and notarization steps to run on a Mac? The more we looked, the more we were certain the answer was "no".

What’s in a signed binary anyway?

When you run codesign to sign your binary a new payload is added at the end of the binary with (usually) the following sections:

  • A Code Directory: essentially a table of hashes. Each hash is a digest of each page in the binary before the new payload. The code directory is “what” gets signed.
  • A PKCS7 (CMS) envelope: contains the cryptographic signature made against the Code Directory.
  • A Set of Requirements (optional): expressions that are evaluated against the signature that should hold true. “Designated Requirements” are a special set of requirements that describe how to determine the identity of the code being signed.
  • A Set of Entitlements (optional): a list of key-value pairs in XML that represent privileges an executable can request, e.g. com.apple.developer.avfoundation.multitasking-camera-access=true requests camera access while running alongside another foreground app.

There is nothing inherent about any of these payload elements that requires the signing process to run on a Mac.

What about notarization? What’s involved to get your binary notarized by Apple? This is an easier answer:

  1. Put your binary in a zip
  2. Upload the zip to Apple via their Notarization API
  3. Poll their Notarization API until there is a result

It seems that the only reason why we are signing and notarizing our releases on macOS is because Apple does not yet provide cross-platform tooling to do so…

Introducing Quill

We created a new tool called Quill to sign and notarize your macOS binary from any platform. 

This works quite well with goreleaser as a post-build step:

using quill with goreleaser

In this way you can use a single goreleaser file for local builds and production builds:

  • Signing and notarization are performed for production builds.
  • Ad-hoc signing is done for snapshot builds (notarization is skipped). This means that no cryptographic material is needed as input, so the Code Directory is added to the binary but there is no signature attached.

You can additionally use Quill to:

  • View your previous notarization submissions with “quill submission list”
  • Get the logs from Apple about the details of a submission result with “quill submission logs <submission-id>”
  • Parse and describe a macOS binary (including all signing details) with “quill describe ./path/to/binary”
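
In shell form, a release step might look roughly like the following. The sign-and-notarize subcommand name is taken from the Quill README at the time of writing, the signing and notarization credentials are supplied via environment variables described in the Quill docs, and the binary path is just an example:

quill sign-and-notarize ./dist/myapp_darwin_amd64/myapp
quill describe ./dist/myapp_darwin_amd64/myapp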

We are now using Quill for our production releases of Syft and Grype, and we have room to implement more features in the future to expand Quill's capabilities to match those of codesign. Quill is an open source project, and we would love feedback, bug reports, and pull requests!

Measuring Vulnerability Scanner Quality with Grype and Yardstick

Introducing Yardstick

As we build Grype, our open source container vulnerability scanner, we are constantly thinking about the quality of our results and how to improve them. We have developed a number of methods to measure our results at development time, so that our scanner doesn’t regress as we change our scanning logic and how we parse sources of external data. We’ve incorporated some of these methods into a new tool: Yardstick, which inspects and compares the results of vulnerability scans between different scanner versions.

The most important thing for any vulnerability scanning software is the quality of its results. How do you measure the quality of vulnerability scan data? How do you know if your scanner quality is improving or declining? What impact do code changes have on the quality of your results? How about new or updated sources of external data? Can we incorporate these and prove that the scanner results will get better?

Yardstick aims to answer these questions by characterizing matching performance quantitatively.

Vulnerability Scan Quality

A basic approach to measuring the quality of a vulnerability scan over time might be to simply compare the results from one version to another for the same container image. But this will only tell us whether the results changed, not whether they got better or worse, at least not without manually reviewing all of the results. There are a number of factors that could change the results of a scan:

  • Code changes in the scanner itself
  • New vulnerabilities added to upstream data sources
  • Existing vulnerabilities might be changed or removed from upstream data sources
  • The artifacts being scanned might have changed in some way, or our understanding of the contents of those artifacts might change because of changes to Syft (our SBOM generator, which Grype uses to analyze the contents of artifacts being scanned.)

To move beyond simple result change detection, we've hand-curated an ever-growing set of examples of labeled data from real container images. These "labels" are used as ground truth to compare against vulnerability results. We use the F1 score (a combination of True Positive, False Positive, and False Negative counts) and a few simple rules to make up Grype's quality gate. Get more technical information on our scoring.

Positives and Negatives

For the most accurate results, we want to maximize “True Positives” while minimizing “False Negatives” and “False Positives”:

True Positive: A vulnerability that the scanner correctly identifies. (good!)

False Positive: A vulnerability that was reported but should not have been. (bad, but not as bad as a false negative.)

False Negative: A vulnerability that the scanner should have reported, but didn’t. (bad!)
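
For reference, the standard way those counts roll up into an F1 score is shown below; Yardstick's full quality gate layers a few additional rules on top of this:

precision = TP / (TP + FP)
recall    = TP / (TP + FN)
F1        = 2 * precision * recall / (precision + recall)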

We have integrated Yardstick into our test and build infrastructure to compare the scan results from different versions of Grype, so that we can identify regressions in our vulnerability matching techniques. We also integrate a lot of external data from various sources, and our goal is to open the process by which the Grype vulnerability database is populated so that our community can add additional sources of data. All of this means that we need robust and comprehensive tools to ensure that our quality stays high.

Right now, Yardstick only has a driver for Grype, but it is extensible, so it’s possible to add support for other vulnerability matchers. We would be happy to see pull requests from the community to improve Yardstick’s capabilities, and we’d be happy to hear if Yardstick is useful when you use a vulnerability scanning tool.

What does it look like?

Here are some screenshots and an animation to show you what Yardstick looks like in operation:

A screenshot of a Yardstick quality gate in operation.

Want to try it out? You can find instructions in our GitHub repository, and please feel free to visit our Discourse forum to ask questions and chat with our developers and security experts.

Frequently Asked Questions:

Q: Why didn’t you call it “Meterstick”?

A: In 1793, a ship sailing from Paris to America carrying objects to be used as references for a standard kilogram and meter was thrown off course by a storm, washed up in the Caribbean, and raided by British pirates who stole the objects. By the time a second ship with new reference pieces set sail, the United States had already decided to use the Imperial system of measurement. So, we have Yardstick. (source)

Q: If I just want to scan my containers for vulnerabilities, do I need to use Yardstick?

A: No, Yardstick is intended more as a tool for developers of vulnerability scanners. If you just want to scan your own images, you should just use Grype. If you want to participate in the development of Grype, you might want to explore Yardstick.

Q: Can Yardstick compare the quality of SBOMs (Software Bill of Materials)?

A: Not yet, but we have designed the tool with this goal in mind. If you're interested in working on it, chat with us! PRs appreciated!

Q: Can Yardstick process results from other vulnerability scanners besides Grype?

A: Not yet, but PRs accepted!

Anchore Enterprise and the new OpenSSL vulnerabilities

Today the OpenSSL project released an advisory for two new vulnerabilities that were initially rated as critical severity but have been lowered to high. These vulnerabilities only affect OpenSSL versions 3.0.0 to 3.0.6. As OpenSSL version 3 was only released in September 2021, it is not expected to be widely deployed at this time; OpenSSL is one of those libraries that isn't a simple upgrade. OpenSSL version 1 is much more common at the time of this writing and is not affected by CVE-2022-3786 or CVE-2022-3602.

The issues in question are not expected to be exploitable beyond a crash caused by a malicious actor, because the vulnerabilities are stack buffer overflows. Stack buffer overflows generally result in crashes on modern systems thanks to a security feature known as stack canaries, which has become commonplace in recent years.

Detecting OpenSSL with Anchore Enterprise

Anchore Enterprise easily detects OpenSSL when it is packaged by a Linux distribution; that is, when a package manager installs a pre-built binary package, commonly referred to as an APK, DEB, or RPM package. Below is an example of searching a Fedora image for OpenSSL and determining that it has OpenSSL 3.0.2 installed:

Anchore Enterprise search for OpenSSL

This is the most common way OpenSSL is shipped in container images today.

That’s not the entire story though. It is possible to include OpenSSL when shipping a binary application. For example the Node.js upstream binary statically links the OpenSSL library into the executable. That means OpenSSL is present in Node.js, but there are no OpenSSL files on disk for a scanner to detect. In such an instance it is necessary to review which applications will include OpenSSL and look for those.

In the case of Node.js it is necessary to look for the node binary located somewhere on the disk. We can examine the files contained in the SBOM to identify /usr/local/bin/node, for example:

Searching for Node.js in Anchore Enterprise

If Node.js is installed as a package, it will get picked up without issue. If Node.js is installed as a binary, either from source or from Node.js itself, it’s slightly more work to detect as it is necessary to review all of the installed files, not just a package named “node”.

We have an update coming in Anchore Enterprise 4.2 that will be able to identify Node.js as a binary install, you can read more about how this will work below where we explain detecting OpenSSL with Syft.

Detecting OpenSSL with Syft

Anchore has an open source SBOM scanner called Syft, which is part of the core technology in Anchore Enterprise. It's possible to use Syft to detect instances of OpenSSL in your applications and containers. Syft has no issues detecting OpenSSL packages installed by operating systems; running it against a container image or application directory works as expected.
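
For example, a quick check of an image for packaged OpenSSL can be as simple as the following (the image name is illustrative, and the package names and versions reported depend entirely on what the image ships):

syft fedora:latest | grep -i openssl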

There’s also a new trick Syft just learned, that’s detecting a version of Node.js installed as a binary. This is a brand new feature you can read about in a Syft blog post. You can expect this detection in Anchore Enterprise very soon.

Using Anchore policy to automatically alert on CVE-2022-3786 and CVE-2022-3602

Anchore Enterprise has a robust policy and reporting engine that can be used to ease the burden of finding instances of CVE-2022-3786 and CVE-2022-3602. There is a "Quick Report" feature that allows you to search for a CVE. Part of what makes a report like this so powerful is that you can search back in time: any SBOM ever stored in Anchore Enterprise can be queried. This means that even if you no longer have old containers available to scan, if you have the SBOM stored you can know whether that image or application was ever affected by this issue, without the need to rescan anything.

Quickreport window in Anchore Enterprise

It should be noted that you may want to search for the CVE and also the GitHub GHSA IDs. While the GHSA does refer to the CVE, at this time Anchore Enterprise treats them differently when creating policy and reports.

Planning for the future

We will probably see CVE-2022-3786 and CVE-2022-3602 showing up in container images for years to come. It's OK to spend some time at the beginning manually looking for OpenSSL in our applications and images, but this isn't a long-term strategy. Long term, it will be important to rely on automation to detect, alert on, and prevent vulnerable OpenSSL usage. Even if you aren't using OpenSSL version 3 today, it could be accidentally included at a future date. And while we're all busy looking for OpenSSL today, it will be something else tomorrow. Automation can help detect past, present, and future issues.

The extensive use of OpenSSL means security professionals and development teams are going to be dealing with the issue for many months to come. Getting immediate visibility into your risk using open source tools is the fastest way to get going. But as we get ready for the long haul, prepare for the next inevitable issue that surfaces. Perhaps you’ve already found some as you’ve addressed OpenSSL. Anchore Enterprise can get you ready for a quick and full assessment of the impact, immediate controls to prevent vulnerable versions from moving further toward production, and streamlined remediation processes. Please contact us if you want to know how we can help you get started on your SBOM journey.

Detecting binary artifacts with Syft

Actions speak louder than words

It’s no secret that SBOM scanners have primarily put a focus on returning results from packaging managers and struggle with binary applications installed via a side channel. If you’re installing software from a Linux distribution, NPM, or PyPI those packages are tracked with package manager data. Syft picks those packages up without any problems because it finds evidence in the package manager metadata to determine what was installed. However, if we  install a binary, such as Node.js without a package manager, Syft won’t pick it up. Until now!

There’s a new update to Syft, version 0.60.1, that now gives us the ability to look for binaries installed outside of a package manager. The initial focus is on Node.js because the latest version of Node.js includes OpenSSL 3, which is affected by recently released security vulnerabilities. Node.js is an application that includes this latest version of OpenSSL 3, which makes it important to be able to find it at this time.

In the future we will be adding many other binary types to detect, check back to see all the new capabilities of Syft soon.

We can show this behavior using the node container image. If we scan the container with Syft version 0.59.0, we can see that the Node.js binary is not detected. We are filtering the results to only show things with 'node' in their name; the official node container is quite large and contains many packages, so the unfiltered output would run several pages long.

Syft scanning for the node binary and not finding it

There is no binary named 'node' in that list. However, we know this binary is installed; it is the official node container, after all. Now if we try again using Syft version 0.60.1, the node binary appears in the output of Syft with a type of binary.

Syft detecting the node binary
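
For reference, the command-line equivalent of the two screenshots is roughly the following, with <version> standing in for whatever Node.js version the image ships:

syft node:latest | grep node    # with Syft 0.59.0: the node binary itself is absent from the results
syft node:latest | grep node    # with Syft 0.60.1: a 'node <version> binary' row now appears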


How does this work?

The changes to Syft are very specific and apply only to the Node.js binary. We added the ability for Syft to look for binaries that could be node, starting with the names of the binary files on disk. This was done to avoid scanning through every single binary file on the system, which would be very slow and consume a great deal of resources.

Once we find something that might be a Node.js binary, we extract the plaintext strings data from it. This is comparable to running the 'strings' command in a UNIX environment: we look for strings of plain text and ignore the binary data. In our case we are looking for a string of text that contains version information in a Node.js binary. If we determine the binary is indeed Node.js, we then extract the version details.
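
Conceptually, it is similar to doing something like this by hand, though the exact embedded version string format shown here is an assumption and Syft's real classifier uses a more careful pattern match:

strings /usr/local/bin/node | grep -o 'node\.js/v[0-9.]*' | head -n 1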

These packages show up in Syft output with a type of ‘binary’. If you look at Syft output you will see the different types of packages that were detected; these could be npm, deb, or python, for example. Now you will also see the new binary type. As mentioned, the only binary that can be found today is node, but more are coming soon.

Final Thoughts

Given how new this feature is, there is a known drawback. This patch could cause the Node.js binary to show up twice in an SBOM. If Node.js is installed via a package manager, such as rpm, the RPM classifier will find ‘node’ and so will the binary classifier. The same node binary will be listed twice. We know this is a bug and we are going to fix it soon. Given the importance of being able to detect Node.js, we believe this addition is too important to not include even with this drawback.

As already mentioned, this update only detects the Node.js binary. We are also working on binary classifiers for Python and Go in the short term, and long term we expect many binary classifiers to exist. This is an example of not letting perfect get in the way of good enough.

Please keep in mind this is the first step in a very long journey. There will be bugs in the binary classifiers as they are written. There are many new things to classify in the future, and we don’t yet know everything we will be looking for, which is exciting. Syft is an open source project, and we love bug reports, pull requests, and questions. We would love you to join our community!

It is essential that we all remain vigilant and proactive in our software supply chain security as new vulnerabilities like OpenSSL and malicious code are inevitable. Please contact us if you want to know how we can help you get started on your SBOM journey and detect OpenSSL in your environment.

An Introduction to the Secure Software Development Framework

It’s very likely you’ve heard of a new software supply chain memo from the US White House that came out in September 2022. The content of the memo has been discussed at length by others. The actual memo is quite short and easy to read; you wouldn’t regret reading it yourself.

The very quick summary of this document is that everyone working with the US Government will need to start following NIST 800-218, also known as the NIST Secure Software Development Framework, or SSDF. This is a good opportunity to talk about how we can start to do something with SSDF today. For the rest of this post we’re going to review the actual SSDF standard and start creating a plan for tackling what’s in it. The memo isn’t the interesting part; SSDF is.

This is going to be the first of many, many blog posts, as there’s a lot to cover in the SSDF. Some of the controls are dealt with by policy, some are configuration management, and some are even software architecture. Depending on the control, there will be many different ways to meet the requirements. No one way is right, but some solutions are easier than others. This series will put extra emphasis on the portions of SSDF that deal with software bill of materials (SBOM) specifically, but we are not going to ignore the other parts.

An Introduction to the Secure Software Development Framework (SSDF)

If this is your first time trying to comply with a NIST standard, keep in mind this will be a marathon. Nobody starts following the entire compliance standard on day one. Make sure to set expectations with yourself and your organization appropriately; complying with a standard will often take months. There’s also no end state. These standards need to be treated as continuous projects, not one-and-done efforts.

If you’re looking to start this journey I would suggest you download a spreadsheet NIST has put together that details the controls and standards for SSDF. It looks a little scary the first time you load it up, but it’s really not that bad. There are 42 controls. That’s actually a REALLY small number as far as NIST standards go. Usually you will see hundreds or even thousands.

An Overview of the NIST SSDF Spreadsheet

There are four columns: Practices, Tasks, Notional Implementation Examples, and References.

If we break it down further, we see there are 19 practices and 42 tasks. While this can all be intimidating, 19 practices and 42 tasks are something we can work with. The practices are the logical groupings of tasks, and the tasks are the actual controls we have to meet. The SSDF document covers all this in greater detail, but the spreadsheet makes everything more approachable and easier to group together.

The Examples Column

The examples column is where the spreadsheet really shines. The examples are how we can better understand the intent of a given control. Every control has multiple examples and they are written in a way anyone can understand. The idea here isn’t to force a rigid policy on anyone, but to show there are many ways to accomplish these tasks. Most of us learn better from examples than we do from technical control text, so be sure to refer to the examples often.

The References Section

The references section looks scary. There are a lot of references, and anyone who tries to read them all will be stuck for weeks or months. It’s OK though; they aren’t something you have to actively read. They exist to give us additional guidance if something isn’t clear. There’s already a lot of security guidance out there, and it can be easier to cross-reference work that already exists than to write all new content. This is how you can get clarifying guidance on the tasks. It’s also possible you’re already following one or more of these standards, which means you’ve already started your SSDF journey.

The Tasks

Every task has a certain theme. There’s no product you can buy that will solve all of these requirements. Some tasks can only be met with policy, and some are secure software development processes. Most will have multiple ways to meet them; some can be met with commercial tools, and some can be met with open source tools.

Interpreting the Requirements

Let’s cover a very brief example (we will cover this in far more detail in a future blog post): PO.1.3, third-party requirements. The text of this control reads:

PO.1.3: Communicate requirements to all third parties who will provide commercial software components to the organization for reuse by the organization’s own software. [Formerly PW.3.1]

This requirement revolves around communicating your own requirements to your suppliers. But today the definition of supplier isn’t always obvious. You could be working with a company. But what if you’re working with open source? What if the company you’re working with is using open source? The important part is better explained in the examples. Example 3: Require third parties to attest that their software complies with the organization’s security requirements.

It’s easier to understand this in the context of having your supplier prove they are in compliance with your requirements. Proving compliance can be difficult in the best situations. Keep in mind you can’t just do this in one step. You probably first need to know what you have (an SBOM is a great way to do this). Once you know what you have, you can start to define expectations for others. And once you have expectations and an SBOM, you can hand out an attestation.

One of the references for this one is NIST 800-160. If we look at section 3.1.1, there are multiple pages that explain the expectations. There isn’t a simple solution as you will see if you read through NIST 800-160. This is an instance where a combination of policy, technology, and process will all come together to ensure the components used are held to a certain standard.

This is a lot to try to take in all at once, so we should think about how to break this down. Many of us already have existing components. How we tackle this with existing components is not the same approach we would take with a brand new application security project. One way to think about this is you will first need an inventory of your components before you can even try to create expectations for your suppliers.

We could go on explaining how to meet this control, but for now let’s just leave this discussion here. The intent was to show what this challenge looks like, not to try to solve it today. We will revisit this in another blog post when we can dive deep into the requirements and some ideas on how to meet the control requirements, and even define what those requirements are!

Your Next Steps

Make sure you check back for the next post in this series where we will take a deep dive into every control specified by the SSDF. New compliance requirements are a challenge, but they exist to help us improve what we are already doing in terms of secure software development practices. Securing the software supply chain is not just a popular topic, it’s a real challenge we all have to meet now. It’s easy to talk about securing the software supply chain, it’s a lot of hard work to actually secure it. But luckily for us there is more information and examples to build off of than ever before. Open source isn’t about code, it’s about sharing information and building communities. Anchore has several ways to help you on this journey. You can contact us, join our community Discourse forum, and check out our open source projects: Syft and Grype.

Josh Bressers
Josh Bressers is vice president of security at Anchore where he guides security feature development for the company’s commercial and open source solutions. He serves on the Open Source Security Foundation technical advisory council and is a co-founder of the Global Security Database project, which is a Cloud Security Alliance working group that is defining the future of security vulnerability identifiers.

NSA Securing the supply chain for developers: the past, present, and future of supply chain security

Last week the NSA, CISA, and ODNI released a guide that lays out supply chain security with a focus on developers. This was a welcome break from much of the existing guidance, which mostly focuses on deployment and integration rather than on software developers. The software supply chain is a large space, and that space includes developers.

The guide is very consumable. It’s short and written in a way anyone can understand. The audience on this one is not compliance professionals. It also provides fantastic references. Re-explaining the document isn’t needed; just go read it.

However, even though the guide is very readable, it could be considered immature compared to much of the other guidance we have seen come from the government recently. That immaturity likely comes through because developer-focused supply chain guidance is, in fact, an immature space. Developer compliance has never been successful outside of some highly regulated industries, and this guide reminds us why. Much of the guidance presented carries themes of the old, heavy-handed way of doing security, while also attempting to incorporate some new and interesting concepts being pioneered by groups such as the Open Source Security Foundation (OpenSSF).

For example, the guide suggests that developer systems not be connected to the Internet. That sort of guidance was common a decade ago, but few developers today could imagine operating a development environment without Internet access; it’s a non-starter in most organizations. The old way of security was to create heavy-handed rules that developers would find ways to work around. The new way is to empower developers while avoiding catastrophic mistakes.

But next to the outdated guidance, we see modern guidance such as using Supply-chain Levels for Software Artifacts, or SLSA. SLSA is a series of levels that can be attained when creating software to help ensure the integrity of the built artifacts. It is an open source project under the OpenSSF that is working to create controls to help secure our software artifacts.

If we look at SLSA Level 1 (there are four levels), it’s clearly the first step in a journey. All we need to do for SLSA Level 1 is keep metadata about how an artifact was built and what is in it. Many of us are already doing that today! The levels then get increasingly structured and strict until we have a build system that cannot connect to the internet, is version controlled, and signs artifacts. This gradual progression makes SLSA very approachable.

There are also modern suggestions that are very bleeding edge and aren’t quite ready yet. Reproducible builds are mentioned, but there is a lack of actionable guidance on how to accomplish them. Reproducible builds are the idea that you can build the source code for a project on two different systems and get the exact same output, bit for bit. Today, everyone doing reproducible builds does so through enormous effort, not because the build systems make it easy. It’s not realistic guidance for the general public yet.

The guide also expands the current integrator guidance around SBOMs, and verifying components is an important point. It seems to be widely accepted at this point that generating and consuming SBOMs are table stakes in the software world. The guide reflects this new reality.

Overall, this guide contains an enormous amount of advice. Nobody could do all of it even if they wanted to, so don’t treat this as an all-or-nothing effort. This is a great starting point for developer supply chain security. We need to better define the guidance we can give to developers to secure the supply chain. This guide is the first step; the first draft is never perfect, but the first draft is where the journey begins.

Understand what you are doing today, figure out what you can easily do tomorrow, and plan for some of the big things well into the future. And most importantly, ignore the guidance that doesn’t fit into your environment. When guidance doesn’t match what you’re doing, it doesn’t mean you’re doing it wrong. Sometimes the guidance needs to be adjusted. The world often changes faster than compliance does.

The most important takeaway isn’t to view this guide as an end state. This guide is the start of something much bigger. We have to start somewhere, and developer supply chain security starts here. Both how we protect the software supply chain and how we create guidance are part of this journey. As we grow and evolve our supply chain security, we will grow and evolve the guidance and best practices.

Anchore Enterprise 4.1 Introduces Curated Vulnerability Feed, AnchoreCTL 1.0, and Source to Build SBOM Drift Management

We are pleased to announce the release of Anchore Enterprise v4.1 which contains a major new service to help reduce false positives as well as improvements to our SBOM Drift capability, RHEL 9 support, and updates to the AnchoreCTL command line tool. Read on to learn more!

Reducing False Positives with the new curated Anchore Vulnerability Feed

For most security teams doing vulnerability management, handling false positives is the biggest source of frustration and wasted time. A large number of false positives affect every user, independent of their environment, for one of two major reasons: software contents that are incorrectly identified and appear to be vulnerable, or incomplete data in the vulnerability feed itself.

In 2021, to address the challenge of misidentified components, Anchore introduced two features, SBOM Hints and SBOM Corrections, that allow users to adjust the metadata to ensure more accurate generation of the SBOM. This, in turn, provides better mapping to the list of vulnerabilities.

With Anchore Enterprise 4.1, we are excited to offer the Anchore Vulnerability Feed which addresses the second issue of incomplete data in public feeds, especially from the National Vulnerability Database (NVD). The Anchore Vulnerability Feed uses data gathered from Anchore’s user community, customer environments, and research done by the Anchore Security Team. This data is used to identify inaccurate metadata in public vulnerability feeds. Once problematic metadata is identified, the Anchore Vulnerability Feed prevents matches against a software component either through a managed exclusion list or by enhancing the metadata itself.

All customers can request an assessment of a potential false positive through the Anchore support portal. As Anchore discovers and adds new data to the feed, customers will benefit from live updates which immediately reduce false positives on the customer site without any need for administration changes or software updates. This feature is available to all existing customers across all tiers.

Detect Malicious Activity and Misconfiguration with SBOM Drift Enhancements

Ever since the SolarWinds compromise, companies have become aware that malicious components can be added during development to create attack vectors. To help detect this type of attack, Anchore added a capability in Anchore Enterprise 4.0 called SBOM Drift, which detects when components are added, changed, or removed during the software development life cycle. The initial feature enabled users to detect and alert on changes between builds of container images. Anchore Enterprise 4.1 further expands on this capability by adding the ability to detect drift between the SBOM generated from a source code repository and the SBOM generated from the resulting build. While some drift is normal as packages are added as dependencies or included from the base operating system, some drift is not.

New policy rules can catch changes such as downgrades in version numbers which may be a result of either tampering or misconfigurations. Drift alerts are configurable and can be set to either warn or fail a build based on your requirements. The underlying API to the service allows users to query the changes for reporting and to track dependency usage.

Unified and improved command line experience with AnchoreCTL 1.0

Part of the power of Anchore Enterprise is the extensive API coverage and the flexibility of integrating with 3rd party tools and platforms. Since the first launch of our product, the main tool for interacting with any of Anchore Enterprise’s functions via the command line has been anchore-cli. This tool was used to request operations, check status, or pull data from the backend. At the beginning of the year, we introduced a next-generation tool called AnchoreCTL, written in Go and provided as a standalone client tool. AnchoreCTL allowed a user to interact with Anchore Enterprise application grouping and source code/image SBOM features.

Along with Anchore Enterprise 4.1, we are releasing AnchoreCTL v1.0 which now has all of the capabilities previously provided by anchore-cli, but in a simple, unified experience. Provided as a Go binary, it reduces the environment requirements to run the tool on systems such as runners in a CI/CD environment and simplifies the administrative experience of working with Anchore Enterprise.

Additionally, the user experience for interacting with operations like SBOM management and application management has been massively simplified. Operations that once took multiple command line invocations can now be performed with a single command.

RHEL 9 and clone support

Finally, Anchore Enterprise 4.1 can now scan and continuously monitor RHEL 9 and CentOS Stream 9 container images for any security issues present in installed packages for these operating systems. These packages are now included in generated SBOMs, and customers can apply Anchore’s customizable policy enforcement to them.

For more information about the product or to get started with a trial license, please contact Anchore.

3 Myths of Open Source Software Risk and the One Nobody Is Discussing

Open source software is being vilified once again and, in some circles, even considered a national security threat. Open source software risk has been a recurring theme: First it was classified as dangerous because anyone could work on it and then it was called insecure because nobody was in charge. After that, the concern was that open source licenses were risky because they would require you to make your entire product open source.

Let’s consider where open source stands today. It’s running at minimum 80% of the world. Probably more. Some of the most mission-critical applications and services on the planet (and on Mars) are open source. The reality is, open source software isn’t inherently more risky than anything else. It’s simply misunderstood, so it’s easy to pick on.

Myth 1: Open source software is a risk because it isn’t secure

Open source software may not be as risky as you have been led to believe, but that doesn’t mean it gets a free pass either.

The most recent and top-of-mind example is the Log4Shell vulnerability in Log4j. It’s easy to put the blame on open source, but the fundamental issue is a lack of proper insight into our infrastructure.

The question, “Are we running Log4j?” took many of us weeks to answer when we needed that answer in a few minutes. The key to managing our software risk (and that’s all software, not just open source) is to have the ability to know what is running and where it’s running. This is the literal purpose for a software bill of materials (SBOM).

The foundation for managing open source risk begins with knowing what we have in our software supply chain. Any software can be a potential risk if you don’t know you’re running it. You should be generating and receiving an SBOM for every piece of software used and have the capability to store and search the data. Not knowing what you’re running in your software supply chain is a far greater risk than actually running it.

The reality is that open source software is just software. It’s when we do a poor job of incorporating it into our products, deploying it, and tracking it that creates this mythic “security risk” we often hear about.

Myth 2: Open source software is a risk because it isn’t high quality

It was easier a decade ago to claim that open source software was inferior because there wasn’t a lot of open source in use. Today too much of the world runs on top of open source software to make the claim that it is low quality — the idea is simply laughable.

The real message behind the claim that open source software is not suitable for enterprise use, a claim you’ll often hear from legacy software vendors, is that open source software is inferior to commercially developed software.

In actuality, we’re not in a place to measure the quality of any of our software. While work is ongoing to fill this need, your best option today is to find the open source software that solves your problem and then make sure that it is up to date and has no major bugs that can leave your software supply chain susceptible to vulnerabilities.

Myth 3: Open source software is a risk because you can’t trust the people writing it

Myth 3 is loosely tied to the first myth that open source software is not secure. There are efforts to measure open source quality, which is a noble cause; not all open source is created equal. It’s a common misconception that open source projects with only one maintainer are of lower quality (see myth 2) and that you can’t trust the people who build them.

There are plenty of projects in wide use where nobody really knows who is working on them. It’s a GitHub ID and that’s about it. So it’s possible the maintainer is an adversary. It’s also possible the intern that your endpoint vendor just hired is an adversary. The only difference is that in the open source world, we can at least figure it out.

Although there are open source projects that are nefarious, there are also many people working to uncover the malicious activity. They include a wide range of individuals from end users pointing out strange behavior to researchers scanning repositories and endpoint teams looking for active threats. The global community is a mighty power when it turns its attention to finding malicious open source software.

Again, open source software risk is less about trust than it is about having insight into what we are using and how we are using it. Trying to find malicious code is not realistic for many of us, but when it does get found, we need the ability to quickly pinpoint it in our software and remove it.

The true risk of open source software

In an era where the use of open source software is only increasing, the true risk in using open source — or any software for that matter — is failing to understand how it works. In the early days of open source, we could only understand our software by creating it. There wasn’t a difference between being an open source user and an open source contributor.

Open source is very different today. The number of open source users is huge (the population of the world, to be exact), while the number of open source contributors is much smaller. And this is OK, because not everyone should be expected to be an open source contributor. There’s nothing wrong with taking in open source packages and using them to build something else. That’s the whole point!

If there’s one piece of advice I can give, it’s that consuming open source can help you create better software faster as long as you manage risk. There are many good tools that scan for vulnerabilities and there are SBOM-driven solutions to help you identify security issues in all your software components. Open source is an experience where we will all have a different journey. But like any journey, we have to pay attention along the way or we could find ourselves off course.

Josh Bressers
Josh Bressers is vice president of security at Anchore where he guides security feature development for the company’s commercial and open source solutions. He serves on the Open Source Security Foundation technical advisory council and is a co-founder of the Global Security Database project, which is a Cloud Security Alliance working group that is defining the future of security vulnerability identifiers.

Docker Security Best Practices: A Complete Guide

When Docker was first introduced, Docker container security best practices primarily consisted of scanning Docker container images for vulnerabilities. Now that container use is widespread and container orchestration platforms have matured, a much more comprehensive approach to security is standard practice.

This post covers best practices for three foundational pillars of Docker container security and the best practices within each pillar:

  1. Securing the Host OS
    1. Choosing an OS
    2. OS Vulnerabilities and Updates
    3. User Access Rights
    4. Host File System
    5. Audit Considerations for Docker Runtime Environments
  2. Securing the Container Images
    1. Continuous Approach
    2. Image Vulnerabilities
    3. Policy Enforcement
    4. Create a User for the Container Image
    5. Use Trusted Base Images for Container Images
    6. Do Not Install Unnecessary Packages in the Container
    7. Add the HEALTHCHECK Instruction to the Container Image
    8. Do Not Use Update Instructions Alone in the Dockerfile
    9. Use COPY Instead of ADD When Writing Dockerfiles
    10. Do Not Store Secrets in Dockerfiles
    11. Only Install Verified Packages in Containers
  3. Securing the Container Runtime
    1. Consider AppArmor and Docker
    2. Consider SELinux and Docker
    3. Seccomp and Docker
    4. Do Not Use Privileged Containers
    5. Do Not Expose Unused Ports
    6. Do Not Run SSH Within Containers
    7. Do Not Share the Host’s Network Namespace
    8. Manage Memory and CPU Usage of Containers
    9. Set On-Failure Container Restart Policy
    10. Mount Containers’ Root Filesystems as Read-Only
    11. Vulnerabilities in Running Containers
    12. Unbounded Network Access from Containers

What Are Containers?

Containers are a method of operating system virtualization that enable you to run an application and its dependencies in resource-isolated processes. These isolated processes can run on a single host without visibility into each others’ processes, files, and network. Typically each container instance provides a single service or discrete functionality (called a microservice) that constitutes one component of the application.

Containers themselves are immutable, which means that any changes to a running container instance should be made in the container image and then redeployed. This capability allows for more streamlined development and a higher degree of confidence when deploying containerized applications.

Securing the Host Operating System

Container security starts at the infrastructure layer and is only as strong as this layer. If attackers compromise the host operating system (OS), they may compromise all processes on the OS, including the container runtime. For the most secure infrastructure, you should design the base OS to run the container engine only, with no other processes that could be compromised.

For the vast majority of container users, the preferred host operating system is a Linux distribution. Using a container-specific host OS to reduce the surface area for attack is generally a best practice. Modern container platforms like Red Hat OpenShift run on Red Hat Enterprise Linux CoreOS, which is hardened with SELinux and offers process, network, and storage separation. To further strengthen the infrastructure layer of your container stack and improve your overall security posture, you should always keep the host operating system patched and updated.

Best Practices for Securing the Host OS

The following list outlines some best practices to consider when securing the host OS:

1. Choosing an OS

If you are running containers on a general-purpose operating system, you should instead consider using a container-specific operating system, because they typically include security features by default, such as SELinux enabled out of the box, automated updates, and image hardening. Bottlerocket from AWS is one such OS designed for hosting containers; it is free, open source, and Linux based.

With a general-purpose OS, you will need to manage every security feature independently. Hosts that run containers should not run any unnecessary system services or non-containerized applications. And you should consistently scan and monitor your host operating system for vulnerabilities. If you find vulnerabilities, apply patches and update the OS.

2. OS Vulnerabilities and Updates

Once you choose an operating system, it’s important to standardize on best practices and tooling to validate the versioning of packages and components contained within the base OS. Note that if you choose to use a container-specific OS, it will contain components that may become vulnerable and require remediation. You should use tools provided by the OS vendor or other trusted organizations to regularly scan and check for updates to components.

Even though security vulnerabilities may not be present in a particular OS package, you should update components if the vendor recommends an update. If it’s simpler for you to redeploy an up-to-date OS, that is also an option. With containerized applications, the host should remain immutable in the same manner containers should be. You should not be persisting data uniquely within the OS. Following this best practice will greatly reduce the attack surface and avoid drift. Lastly, container runtime engines such as Docker frequently update their software with fixes and features. You can mitigate vulnerabilities by applying the latest updates.

3. User Access Rights

All authentication directly to the OS should be audited and logged. You should only grant access to the appropriate users and use keys for remote logins. And you should implement firewalls and allow access only on trusted networks. You should also implement a robust log monitoring and management process that terminates in a dedicated log storage host with restricted access.

Additionally, the Docker daemon requires ‘root’ privileges. You must explicitly add a user to the ‘docker’ group to grant that user access rights. Remove any users from the ‘docker’ group who are not trusted or do not need privileges.
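A couple of hedged examples of managing that group membership on a typical Linux host (the user names are placeholders):

sudo usermod -aG docker alice      # grant alice access to the Docker daemon
sudo gpasswd -d bob docker         # remove bob from the docker group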

4. Host File System

Make sure containers are run with the minimal required set of file system permissions. Containers should not be able to mount sensitive directories on a host’s file system, especially when those directories contain configuration settings for the OS. This is a practice you should avoid: because the Docker service runs as root, an attacker could execute any command that the Docker service can run and potentially gain access to the entire host system.
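As a hedged illustration of what to avoid (the image name is a placeholder), a bind mount like the following hands the container the host’s OS configuration:

docker run -v /etc:/host-etc <image>     # avoid: exposes host configuration inside the container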

5. Audit Considerations for Docker Runtime Environments

You should conduct audits on the following (a minimal auditd sketch follows the list):

  • Container daemon activities
  • These files and directories:
    • /var/lib/docker
    • /etc/docker
    • docker.service
    • docker.socket
    • /etc/default/docker
    • /etc/docker/daemon.json
    • /usr/bin/docker-containerd
    • /usr/bin/docker-runc
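A minimal sketch of what these audits might look like with Linux auditd; the watch paths mirror the list above, the service file location varies by distribution, and the key name is just an illustrative label:

auditctl -w /var/lib/docker -p rwxa -k docker
auditctl -w /etc/docker -p rwxa -k docker
auditctl -w /etc/docker/daemon.json -p rwxa -k docker
auditctl -w /etc/default/docker -p rwxa -k docker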

Securing Docker Images

You should know exactly what’s inside a Docker container before deploying it. Many of the challenges associated with ensuring Docker image security can be addressed simply by following best practices for securing Docker images.

What Are Docker Images?

So first of all, what are Docker images? Simply put, a Docker container image is a collection of data that includes all files, software packages, and metadata needed to create a running instance of a container. In essence, an image is a template from which a container can be instantiated. Images are immutable, which means that once they’ve been built, they cannot be changed. If someone were to make a change, a new image would be built as a result.

Container images are built in layers. The base layer contains the core components of an image and is the foundation upon which all other components and layers are added. Commonly, base layers are minimal and typically representative of common OSes.

Container images are most often stored in a central location called a registry. With registries like Docker Hub, developers can store their own images or find and download images that have already been created.

Docker Image Security

Incorporating the mechanisms to conduct static analysis on your container images provides insight into any potential vulnerable OS and non-OS packages. You can use an automated tool like Anchore to control whether you would like to promote non-compliant images into trusted registries through policy checks within a secure container build pipeline.

Policy enforcement is essential because vulnerable images that make their way into production environments pose significant threats that can be costly to remediate and can damage your organization’s reputation. Within these images, focus on the security of the applications that will run.

Explore the benefits of containerization and how they extend to security in our latest whitepaper.

Docker Image Security Best Practices

The following list outlines some best practices to consider when implementing Docker image security:

1. Continuous Approach

A fundamental approach to securing container images is to automate building and testing. You should set up the tooling to analyze images continuously. For container image-specific pipelines, you should employ tools that are purpose-built to uncover vulnerabilities and configuration defects. Your tooling should let developers create governance around the images being scanned so that, based on your configurable policy rules, images can pass or fail the image scan step in the pipeline and not progress further. In short, development teams need a structured and reliable process for building and testing container images.

Here’s how this process might look (a minimal command-line sketch follows the list):

  1. Developer commits code changes to source control
  2. CI platform builds container image
  3. CI platform pushes container image to staging registry
  4. CI platform calls a tool to scan the image
  5. The tool passes or fails the images based on the policy mapped to the image
  6. If the image passes the policy evaluation and all other tests defined in the pipeline, the image is pushed to a production registry
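As a hedged command-line sketch of steps 2 through 6 using Syft and Grype (the registry names, image tags, and severity threshold are all placeholder assumptions, not a prescribed configuration):

docker build -t registry.example.com/staging/myapp:${GIT_SHA} .
docker push registry.example.com/staging/myapp:${GIT_SHA}
syft registry.example.com/staging/myapp:${GIT_SHA} -o json > sbom.json
grype sbom:sbom.json --fail-on high     # a non-zero exit code fails this pipeline step
docker tag registry.example.com/staging/myapp:${GIT_SHA} registry.example.com/prod/myapp:${GIT_SHA}
docker push registry.example.com/prod/myapp:${GIT_SHA}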

2. Image Vulnerabilities

As part of a continuous approach to securing container images, you should scan packages and components within the image for common and known vulnerabilities. Image scanning should be able to uncover vulnerabilities contained within all layers of the image, not just the base layer.

Moreover, because vulnerable third-party libraries are often part of the application code, image inspection and analysis must be able to detect vulnerabilities for OS and non-OS packages contained within the images. Should a new vulnerability for a package be published after the image has been scanned, the tool should retrieve new vulnerability info for the applicable component and alert the developers so that remediation can begin.

3. Policy Enforcement

You should create and enforce policy rules based on the severity of the vulnerability as defined by the Common Vulnerability Scoring System.

Example policy rule: If the image contains any vulnerable packages with a severity greater than medium, stop this build.

4. Create a User for the Container Image

Containers should be run as a non-root user whenever possible. The USER instruction within the Dockerfile defines this.

Docker container image policy rule
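A hedged Dockerfile fragment showing the pattern on a Debian-based image; the user and group names are arbitrary examples:

RUN addgroup --system app && adduser --system --ingroup app app
USER app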

5. Use Trusted Base Images for Container Images

Ensure that the container image is based on another established and trusted base image downloaded over a secure channel. Official repositories are Docker images curated and optimized by the Docker community or associated vendor. Developers should be connecting and downloading images from secure, trusted, private registries. These trusted images should be selected from minimalistic technologies whenever possible to reduce attack surface areas.

Docker Content Trust and Notary can be configured to give developers the ability to verify image tags and enforce client-side signing for data sent to and received from remote Docker registries. Content trust is disabled by default.

For more info see Docker Content Trust and Notary. In the context of Kubernetes, see Connaisseur, which supports Notary/Docker Content Trust.
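For example, content trust can be switched on for a shell session with an environment variable; the image reference below is a placeholder:

export DOCKER_CONTENT_TRUST=1
docker pull <registry>/<image>:<tag>     # fails if the tag has no signed trust data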

6. Do Not Install Unnecessary Packages in the Container

To reduce container size and minimize the attack surface, do not install packages outside the scope and purpose of the container.

7. Add the HEALTHCHECK Instruction to the Container Image

The HEALTHCHECK instruction tells Docker how to determine whether the state of the container is normal. Add this instruction to your Dockerfiles; based on the result of the health check (unhealthy), a non-working container can be stopped and a new one instantiated.
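A hedged example; the endpoint, port, and timing values are assumptions for a typical web service:

HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1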

8. Do Not Use Update Instructions Alone in the Dockerfile

To help avoid duplication of packages and make updates easier, do not use update instructions such as apt-get update alone or in a single line in the Dockerfile. Instead, run the following:

RUN apt-get update && apt-get install -y \
    bzr \
    cvs \
    git \
    mercurial \
    subversion

Also, see leveraging the build cache for insight on how to reduce the number of layers and for other Dockerfile best practices.

9. Use COPY Instead of ADD When Writing Dockerfiles

The COPY instruction copies files from the local host machine to the container file system. The ADD instruction can potentially retrieve files from remote URLs and perform unpacking operations. Since ADD could bring in files remotely, the risk of malicious packages and vulnerabilities from remote URLs is increased.
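A hedged Dockerfile contrast; the paths, URL, and checksum are placeholders:

# prefer COPY for files in the local build context
COPY ./app /usr/src/app

# if remote content is truly needed, fetch and verify it explicitly rather than using ADD
RUN curl -fsSL https://example.com/tool.tar.gz -o /tmp/tool.tar.gz \
 && echo "<expected-sha256>  /tmp/tool.tar.gz" | sha256sum -c - \
 && tar -xzf /tmp/tool.tar.gz -C /usr/local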

10. Do Not Store Secrets in Dockerfiles

Do not store any secrets within container images. Developers may sometimes leave AWS keys, API keys, or other secrets inside of images. If attackers were to grab these keys, they could be exploited. Secrets should always be stored outside of images and provided dynamically at runtime as needed.
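If a secret is needed during the build itself, BuildKit can mount it for a single RUN step without writing it into an image layer; the secret id and source file below are hypothetical:

# in the Dockerfile
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci

# at build time (requires BuildKit)
docker build --secret id=npm_token,src=$HOME/.npm_token -t myapp .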

11. Only Install Verified Packages in Containers

Download and install verified packages from trusted sources, such as those available via apt-get from official Debian repositories. To verify Debian packages within a Dockerfile, see Redis Dockerfile.

Implementing Container Image Security

One way to implement Docker image security best practices is with Anchore, a solution that conducts static analysis on container images and evaluates these images against user-defined checks. With Anchore, you can identify vulnerabilities within packages for OS and non-OS components and use policy rules to enforce the image configuration best practices described above.

Docker Security Best Practices Using Anchore

With Anchore, you can configure policies to check for the following:

  • Vulnerabilities
  • Packages
  • Secrets
  • Image metadata
  • Exposed ports
  • Effective users
  • Dockerfile instructions
  • Password files
  • Files

A popular implementation is to use the open source Jenkins CI tool along with Anchore for scanning and policy checks to build secure and compliant container images in a CI pipeline.

Securing Docker Container Runtime

Docker runtime security is critical to your overall container security strategy. It’s important to set up tooling to monitor the containers that are running. If new vulnerabilities get published that are impactful to a particular container, the alerting mechanisms need to be in place to stop and replace the vulnerable container quickly.

The first step in securing the container runtime is securing the registries where the images reside. It’s considered best practice to pull and run images only from trusted container registries. For an added layer of security, you should only promote trusted and signed images into production registries. Vulnerable, non-compliant images should not live in container registries where images are staged for production deployments.

The container engine hosts and runs containers built from container images that are pulled from registries. Namespaces and Control Groups are two critical aspects of container runtime security:

  • Namespaces provide the first and most straightforward form of isolation: Processes running within a container cannot see and affect processes running in another container or in the host system. You should always activate Namespaces.
  • Control Groups implement resource accounting and limiting. Always set resource limits for each container so that the single container does not hog all resources and bring down the system.

Only trusted users should control the container engine. For example, if Docker is the container runtime, root privileges are required to run Docker commands, and you should exercise caution when changing the Docker group.

You should deploy cloud-native security tools to detect such network traffic anomalies as unexpected traffic flows within the network, scanning of ports, or outbound access retrieving information from questionable locations. In addition, your security tools should monitor for invalid process execution or system calls as well as for writes and changes to protected configuration locations and file types. Typically, you should run containers with their root filesystems in read-only mode to isolate writes to specific directories.

If you are using Kubernetes to manage containers, your workload configurations are declarative and described as code in YAML files. These files can describe insecure configurations that can potentially be exploited by an attacker. It is generally good practice to incorporate Infrastructure as Code (IaC) scanning as part of a deployment and configuration workflow prior to applying the configuration in a live environment.

Why Is Docker Container Runtime Security So Important?

One of the last stages of a container’s lifecycle is deployment to production. For many organizations, this stage is the most critical. Often a production deployment is the longest period of a container’s lifecycle, and therefore it needs to be consistently monitored for threats, misconfigurations, and other weaknesses. Once your containers are live and running, it is vital to be able to take action quickly and in real time to mitigate potential attacks. Simply put, production deployments must be protected because they are valuable assets for organizations whose existence depends on them.

Docker Container Runtime Best Practices

The following list outlines some best practices to follow when implementing Docker container runtime security:

1. Consider AppArmor and Docker

From the Docker documentation:

AppArmor (Application Armor) is a Linux security module that protects an operating system and its applications from security threats. To use it, a system administrator associates an AppArmor security profile with each program. Docker expects to find an AppArmor policy loaded and enforced.

AppArmor is available on Debian and Ubuntu by default. In short, you should not disable Docker’s default AppArmor profile; alternatively, you can create your own custom security profile for containers specific to your organization. Once a profile is applied, the container has a defined set of restrictions and capabilities, such as network access or file read/write/execute permissions. Read the official Docker documentation on AppArmor.
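For example, a profile can be selected per container at run time; my-custom-profile is a placeholder for a profile already loaded on the host:

docker run --security-opt apparmor=docker-default <image>
docker run --security-opt apparmor=my-custom-profile <image>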

2. Consider SELinux and Docker

SELinux is a security module that provides mandatory access control, greatly augmenting the Discretionary Access Control model. If it’s available on the Linux host OS that you are using, you can start the Docker daemon with SELinux enabled. The container would then have a set of restrictions as defined in the SELinux policy. Read more about SELinux.
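A hedged example of enabling it; the same setting can also be placed in the daemon configuration file:

dockerd --selinux-enabled
# or in /etc/docker/daemon.json: { "selinux-enabled": true }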

3. Seccomp and Docker

Seccomp (secure computing mode) is a Linux kernel feature that you can use to restrict the actions available within a container. The default seccomp profile disables about 44 system calls out of more than 300. At a minimum, you should ensure that containers are run with the default seccomp profile. Get more information on seccomp.
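If you need something stricter than the default, a custom profile can be supplied at run time (the profile path is a placeholder); never set seccomp to unconfined in production:

docker run --security-opt seccomp=/path/to/custom-profile.json <image>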

4. Do Not Use Privileged Containers

Do not allow containers to be run with the --privileged flag because it gives all capabilities to the container and also lifts all the limitations enforced by the device cgroup controller. In short, the container can then do nearly everything the host can do.
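If a workload genuinely needs a specific capability, grant only that capability instead; the capability shown is an example:

docker run --cap-drop ALL --cap-add NET_BIND_SERVICE <image>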

5. Do Not Expose Unused Ports

The Dockerfile defines which ports will be exposed by default on a running container. Only the ports that are needed and relevant to the application should be open. If you have access to the Dockerfile, look for the EXPOSE instruction to determine which ports are opened.

6. Do Not Run SSH Within Containers

SSH server should not be running within a container. Read this blog post for details.

7. Do Not Share the Host’s Network Namespace

When the networking mode on a container is set to --net=host, the container will not be placed inside a separate network stack. In other words, this flag tells Docker not to containerize the container’s networking. This is potentially dangerous because it allows the container to open low-numbered ports like any other root process. Additionally, a container could potentially do unexpected things such as terminate the Docker host. Bottom line: Do not add the --net=host option when running a container.

8. Manage Memory and CPU Usage of Containers

By default, a container has no resource constraints and can use as much of a given resource as the host’s kernel allows. Additionally, all containers on a Docker host share the host’s resources, and no memory limits are enforced. A running container that begins to consume too much memory on the host machine is a major risk. For Linux hosts, if the kernel detects that there is not enough memory to perform important system functions, it will kill processes to free up memory, which could potentially bring down an entire system if the wrong process is killed.

Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory. Docker can also enforce soft memory limits, which allow the container to use as much memory as needed unless certain conditions are met. For a running container, the --memory flag defines the maximum amount of memory the container can use. When managing container CPU, the --cpus and related flags give you more control over the container’s access to the host machine’s CPU cycles.
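A hedged example combining a hard memory limit, a soft memory reservation, and a CPU limit; the values are placeholders to tune per workload:

docker run --memory=512m --memory-reservation=256m --cpus=0.5 <image>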

9. Set On-Failure Container Restart Policy

By using the --restart flag when running a container, you can specify how a container should or should not be restarted on exit. If a container keeps exiting and attempting to restart, it could possibly lead to a denial of service on the host. Additionally, ignoring the exit status of a container and always attempting to restart it can mean the root cause of the termination is never investigated. You should always investigate when a container is restarted on exit. Configure the on-failure restart policy with a retry limit rather than restarting unconditionally.
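For example (the retry count of 5 is an arbitrary illustration):

docker run --restart=on-failure:5 <image>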

10. Mount Containers’ Root Filesystems as Read-Only

You should run containers with their root filesystems in read-only mode to isolate writes to specifically defined directories, which you can easily monitor. Using read-only filesystems makes containers more resilient to being compromised. Additionally, because containers are immutable, you should not write data within them. Instead, designate an explicitly defined volume for writes.
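A hedged example; the tmpfs mount and named volume illustrate where writes remain allowed:

docker run --read-only --tmpfs /tmp -v app-data:/var/lib/app <image>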

11. Vulnerabilities in Running Containers

You should monitor containers for existing vulnerabilities, and when problems are detected, patch or remediate them. If vulnerabilities exist, container scanning should produce an inventory of vulnerable packages (CVEs) at the operating system and application layers. You should also implement container-aware tools designed to operate with the same elasticity and agility as containers.

Checks you should be looking for include:

  • Invalid or unexpected process execution
  • Invalid or unexpected system calls
  • Changes to protected configs
  • Writes to unexpected locations or file types
  • Malware execution
  • Traffic sent to unexpected network destinations

12. Unbounded Network Access from Containers

Controlling the egress network traffic sent by containers is critical. Tools for monitoring the inter-container traffic should at the very least accomplish the following:

  • Automated determination of proper container networking surfaces, including inbound and process-port bindings
  • Detection of traffic flow both between containers and other network entities
  • Detection of network anomalies, such as port scanning and unexpected traffic flows within your organization’s network

A Final Word on Container Security Best Practices

Containerized applications and environments present additional security concerns not present with non-containerized applications. But by adhering to the fundamentally basic concepts for host and application security outlined here, you can achieve a stronger security posture for your cloud-native environment.

And while host security, container image scanning, and runtime monitoring are great places to start, adopting additional security best practices like scanning application source code (both open source and proprietary) for vulnerabilities and coding errors along with following a policy-based compliance approach can vastly improve your container security. To see how continuous security embedded at each step in the software lifecycle can help you improve your container security, request a demo of Anchore.

Docker Image Security in 5 Minutes or Less

Updated post as of May 2022

Containerized software has become the de facto choice for new development with a recent survey showing that over 80% of organizations claim they will increase container adoption over the next 24 months.

While container adoption can ease the development process and increase velocity, it also has the potential to increase an organization’s attack surface and make it susceptible to vulnerabilities. With developers now using both proprietary and open source components in their container environments, visibility into software containers and their dependencies is paramount to securing Docker images and ultimately avoiding data breaches.

An SBOM, or a Software Bill of Materials, is a vital tool for securing the software supply chain. Used by both security and development teams alike, SBOMs provide visibility into all the components in a container image, including both direct and transitive dependencies. They can be used to identify vulnerabilities and risks such as misconfigurations and embedded secrets so teams can quickly locate and remediate issues before they reach runtime and continue to monitor for new vulnerabilities post-deployment.

In this blog post, we’ll show you how you can easily get started generating docker SBOMs and analyzing them for vulnerabilities using the open source projects Syft and Grype, maintained by Anchore.

Shifting Docker Image Security Left

Getting started with comprehensive Docker image security is easy to do with Syft and Grype. These projects are lightweight, flexible, and stateless command line tools for developers that make it possible to generate a Software Bill of Materials (SBOM) from container images and analyze that SBOM for vulnerabilities.

First, you start by running Syft to generate an SBOM to identify all of your components including dependencies, package details, and filesystem metadata plus malware and risks like secrets and misconfigurations. This level of granularity will make sure you are identifying and accurately matching any potential vulnerabilities.

Once that SBOM is generated, it can be fed into Grype which will scan it for vulnerabilities. Re-analyzing images on a regular basis to identify newly discovered vulnerabilities is fast and easy because you only need to generate one SBOM for each version of an image. This is particularly useful in the event of a zero-day, when time is of the essence and you don’t have a minute to spare rescanning your environment from scratch.

Using Syft and Grype for Docker Image Analysis

Generating an SBOM

Step 1: Download & Install Syft

Go to the Syft releases page and download the latest version of Syft or follow installation instructions for your system here.

Step 2: Generate SBOM

Run Syft against your Docker image to output a comprehensive SBOM:

syft <docker image>

You will see an output similar to this:

$ syft debian:10

 ✔ Pulled image

 ✔ Loaded image

 ✔ Parsed image

 ✔ Cataloged packages      [91 packages]


NAME                    VERSION                  TYPE

adduser                 3.118                    deb

apt                     1.8.2.3                  deb

base-files              10.3+deb10u12            deb

base-passwd             3.5.46                   deb

bash                    5.0-4                    deb

…

Step 3: Save Your SBOM

You can easily generate an SBOM and save it in multiple formats depending on your needs by following the steps outlined here. For this example, we’ll use JSON via the -o json option.
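For example, the following writes the SBOM for the image scanned above to a file that Grype can read later (the file name is just a convenient choice):

syft debian:10 -o json > debian_10_SBOM.json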

Finding Vulnerabilities

Step 1: Download & Install Grype

Go to the Grype releases page and download the latest version of Grype or follow installation instructions for your system here.

Step 2: Generate a Vulnerability Report

You can pipe an SBOM file directly from Syft into Grype:

syft <yourimage>:tag -o json | grype

Or scan an existing SBOM:

grype sbom:path/to/sbom.json

You will see an output similar to this:

 $ grype sbom:./debian_10_SBOM.json

 ✔ Vulnerability DB        [updated]

 ✔ Loaded image

 ✔ Parsed image

 ✔ Cataloged packages      [91 packages]

 ✔ Scanned image           [137 vulnerabilities]

 

NAME            INSTALLED            FIXED-IN      TYPE  VULNERABILITY     SEVERITY

apt             1.8.2.3                            deb   CVE-2011-3374     Negligible

bash            5.0-4                              deb   CVE-2019-18276    Negligible

bsdutils        1:2.33.1-0.1                       deb   CVE-2022-0563     Negligible

bsdutils        1:2.33.1-0.1        (won't fix)    deb   CVE-2021-37600    Low

coreutils       8.30-3              (won't fix)    deb   CVE-2016-2781     Low

…

Note: To output the vulnerability report as a file, follow the config options here.

Grype uses multiple vulnerability data sources to optimize vulnerability matching and reduce noise from false positives so that developers don’t waste as much time when fixing vulnerabilities in their Docker images.

Docker Image Security at Scale

While conducting scans of Docker images is quick and easy, automating such scans and implementing Docker image security best practices at scale across multiple teams and applications requires an enterprise-level solution that goes beyond what Syft and Grype provide. Anchore Enterprise adds powerful functionality to the intuitive features of Syft and Grype. With features such as SBOM Management, policy and compliance controls and global reporting and notifications, Anchore Enterprise helps organizations secure their entire software supply chain.

Conclusion

It is critically important for developers to know exactly what is inside a software container before using it and to enforce company-wide policy and compliance regulations throughout the build process. Simple image analysis tools like Syft and Grype are a great way to get up and running quickly with Docker image security before graduating to an enterprise-level software supply chain management solution like Anchore Enterprise. By using Anchore, you can know more about the building blocks used in your applications and prepare for the ever-growing set of industry best practices that are quickly becoming standards and mandates.

Anchore Enterprise Now Supports SBOM Import From ‘docker sbom’

Recently, Docker and Anchore worked together to deliver a new operation within Docker Desktop for generating a container image software bill of materials (SBOM) using native Docker tools. The core functionality for generating an SBOM comes from Anchore’s open-source Syft project, which can be accessed as a command line tool or used as a library for other tools to integrate with (as is the case with our collaboration with Docker).

Anchore provides a number of open-source and commercially available software tools for managing SBOMs and providing security/compliance insights and enforcement capabilities against those generated SBOMs. Our general approach to securing modern software development systems embraces the user’s automation and development flexibility objectives, handling large and dynamic software production flows. To facilitate this, the Anchore Enterprise platform effectively conforms to a pattern where:

  1. Existing software development infrastructure is instrumented with light-weight tooling that is pointed at a software element (source code checkout, container image, etc.) to generate an SBOM, and then
  2. The tooling imports that SBOM into a deployment of Anchore Enterprise which stores the SBOM for further processing, at which point the full capabilities of Anchore Enterprise can be applied to the software SBOM.

The Anchore Enterprise client that implements the SBOM generation and import steps is named ‘anchorectl’, a lightweight CLI tool that is included with the Anchore Enterprise platform.

As part of our ongoing commitment to support integration with Docker’s native tooling and approach to SBOM generation, we’ve recently released a new version of anchorectl, available to all Anchore Enterprise users, with added support for importing an SBOM directly from the new ‘docker sbom’ command. With this capability, users who have access to an existing Anchore Enterprise deployment and prefer to use native ‘docker’ commands in their development environments can easily connect the two systems in a typical UNIX-like fashion. The following example shows an abstract ‘checkout, build container image, import image sbom to Anchore Enterprise’ flow using this new interface.

# git clone <somerepo>

# docker build -t <someimage> -f <somerepo>/Dockerfile <somerepo>/

# docker sbom --format syft-json <someimage> | anchorectl sbom upload -

With this simple process invoked either manually or scripted as part of an automated build, users can be assured that new container image SBOMs are being imported to their Anchore Enterprise deployment, so that the full capabilities of Anchore Enterprise – vulnerability scanning (on demand, historical), compliance checks using Anchore’s full policy subsystem, SBOM drift detection, global reporting and notifications, and many others – can be applied.

Learn more about generating SBOMs for Docker images with Syft. 

Conclusion

As we continue to explore new areas for building SBOM generation and consumption capabilities in collaboration with the Docker community, we remain committed to ensuring that all of Anchore’s products, open-source tools and partnership collaboration efforts are interoperable. As we move forward, we’re looking forward to moving SBOM generation capabilities even closer to the ‘build’ process, continuing support for open standards atop the existing native, SPDX, CycloneDX and other formats, and providing integrations with a wide variety of development environments.

Gartner Innovation Insight for SBOMs

The software bill of materials, or SBOM, is foundational for end-to-end software supply chain management and security. Knowing what’s in software is the first step to securing it. Think of an SBOM like an ingredients label on packaged food: If there’s a toxic chemical in your can of soup, you’d want to know before eating it.

SBOMs are critical not only for identifying security vulnerabilities and risks in software but also for understanding how that software changes over time and potentially becomes vulnerable to new threats. In Innovation Insight for SBOMs, Gartner recommends integrating SBOMs throughout the software development lifecycle to improve the visibility, transparency, security, and integrity of proprietary and open-source code in software supply chains.

The Role of SBOMs in Securing Software Supply Chains

Gartner estimates that by 2025, 60 percent of organizations building or procuring critical infrastructure software will mandate and standardize SBOMs in their software engineering practice — a significant increase from less than 20 percent in 2022. However, organizations that are using open-source software and reusable components to simplify and accelerate software development are challenged with gaining visibility into the software they consume, build, and operate. And without visibility, they become vulnerable to the security and licensing compliance risks associated with software components.


To achieve software supply chain security at scale, Gartner recommends that software engineering leaders integrate SBOMs into their DevSecOps pipelines to:

  • Automatically generate SBOMs for all software produced
  • Automatically verify SBOMs for all open source and proprietary software consumed
  • Continuously assess security and compliance risks using SBOM data before and after deployment

Gartner underscores the importance of integrating SBOM workflows across the software development lifecycle, noting that “SBOMs are an essential tool in your security and compliance toolbox. They help continuously verify software integrity and alert stakeholders to security vulnerabilities and policy violations.”

Who Should Use SBOMs

Citing U.S. National Telecommunications and Information Administration (NTIA) recommendations, Gartner identifies three primary entities that benefit from SBOM adoption:

  1. Software producers: Use SBOMs to assist in the building and maintenance of their supplied software
  2. Software procurers: Use SBOMs to inform pre-purchase assurance, negotiate discounts, and plan implementation strategies
  3. Software operators: Use SBOMs to inform vulnerability management and asset management, to manage licensing and compliance, and to quickly identify software and component dependencies and supply chain risks

SBOM Tools Evaluation

Gartner cautions that SBOMs are not intended to be static documents and that every new release of a component should include a new SBOM. When evaluating open-source and commercial SBOM tools for SBOM generation and management, Gartner advises organizations to select tools that provide the following capabilities:

  • Create SBOMs during the build process
  • Analyze source code and binaries (like container images)
  • Generate SBOMs for those artifacts
  • Edit SBOMs
  • View, compare, import, and validate SBOMs in a human-readable format
  • Merge and translate SBOM contents from one format or file type to another
  • Support use of SBOM manipulation in other tools via APIs and libraries

By generating SBOMs in the build phase, developers and security teams can identify and manage the software in their supply chains and catch bad actors early before they reach runtime and wreak havoc.

How to Generate an SBOM with Free Open Source Tools

Generating a Software Bill of Materials (SBOM) as part of your DevOps process is an essential technique to help secure your software supply chain. SBOMs are becoming critical due to the growing prominence of supply chain attacks such as SolarWinds, maintainers intentionally adding malware as in node-ipc, and severe vulnerabilities like Log4Shell.

SBOMs can help identify the software components used within a system as well as licenses and vulnerabilities. SBOMs also can be used to comply with the Executive Order Improving the Nation’s Cybersecurity.

Fortunately, there are a number of tools that can help create SBOMs and generating your first one takes just a few easy steps:

  1. Choose your SBOM generation tool – we’ll use Syft here
  2. Download and install Syft
  3. Determine the SBOM output format you need
  4. Run Syft against the desired source: syft <source> -o <format> (see the example just after this list)
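For instance, a first run against a public image, using SPDX JSON output, might look like this:

syft alpine:latest -o spdx-json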

Hold on! Before you jump into using open source tools for SBOMs, note that you can get instant access to a free trial of the Anchore Enterprise platform here.

Open Source Tools for Generating SBOMs

There are many tools available for generating SBOMs, so the first thing you’ll need to do is pick one to use. SBOM generators are often specific to a particular ecosystem such as Python or Go. Some are capable of generating SBOMs for a number of different ecosystems and environments. Some of the more popular SBOM tools are:

  1. Syft by Anchore
  2. Tern
  3. Kubernetes BOM tool
  4. spdx-sbom-generator

For this example we’ll focus on Syft, since it is easy to use in many different scenarios and supports a variety of ecosystems. Syft can run on your desktop, in CI systems, or as a Docker container, and can scan a wide variety of ecosystems, from Linux distributions to many types of build dependency specifications.

Getting Syft

The first thing to do is download Syft. There are a number of ways to do this:

Using curl

The recommended method to get Syft for macOS and Linux is by using curl:

curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b <SOME_BIN_PATH> <RELEASE_VERSION>

For example:

curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin

 

Homebrew

For macOS, you can install Syft using Homebrew:

brew tap anchore/syft
brew install syft

 

Direct Download

You can directly download Syft binaries for many platforms including Windows from the GitHub releases page.

Docker

There is also a Syft Docker image with every release: anchore/syft, which can be run like this:

docker run -it --rm anchore/syft <args>
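For example, to scan a public image pulled straight from a registry (alpine:latest is just a convenient example image):

docker run --rm anchore/syft alpine:latest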

 

Validate the Syft Installation

To confirm Syft was installed correctly, simply run:

syft version

You should see output similar to:

Application:        syft
Version:            0.43.2
JsonSchemaVersion:  3.2.2
BuildDate:          2022-04-06T21:49:04Z
GitCommit:          e415bb21e7a609c12dc37a2d6395796fb675e3fe
GitDescription:     v0.43.2
Platform:           linux/amd64
GoVersion:          go1.18
Compiler:           gc

Note: Syft was version 0.43.2 at the time of this writing

Generating Your First SBOM

Once you have Syft available, creating your first SBOM is simple. Syft supports multiple sources to scan when generating an SBOM using both the local filesystem and container images.

Scanning Images

To generate an SBOM for a Docker or OCI image (even without a Docker daemon), simply run:

syft <image>

By default, output includes only software that is included in the final layer of the container. To include software from all image layers in the SBOM, regardless of its presence in the final image, use the --scope all-layers option:

syft --scope all-layers <image>

 

Scanning the Filesystem

To generate an SBOM for the local filesystem, use the dir: and file: prefixes with either absolute or relative paths. For example, to scan the current directory:

syft dir:.

Or a specific file:

syft file:/my-go-binary

Syft can generate SBOMs from a variety of other sources, such as Podman, tar archives, or directly from an OCI registry even when Docker is not available. Check out the full list of sources.
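A few illustrative invocations (the image names and archive path are placeholders):

syft registry:alpine:latest

syft podman:alpine:latest

syft oci-archive:path/to/image.tar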

Basic Example

For example, to scan the latest Alpine image, simply run:

syft alpine:latest

You should see output similar to this:

 ✔ Loaded image            
 ✔ Parsed image            
 ✔ Cataloged packages      [14 packages]
NAME                    VERSION      TYPE 
alpine-baselayout       3.2.0-r18    apk   
alpine-keys             2.4-r1       apk   
apk-tools               2.12.7-r3    apk   
busybox                 1.34.1-r3    apk   
ca-certificates-bundle  20191127-r7  apk   
libc-utils              0.7.2-r3     apk   
libcrypto1.1            1.1.1l-r7    apk   
libretls                3.3.4-r2     apk   
libssl1.1               1.1.1l-r7    apk   
musl                    1.2.2-r7     apk   
musl-utils              1.2.2-r7     apk   
scanelf                 1.3.3-r0     apk   
ssl_client              1.34.1-r3    apk   
zlib                    1.2.11-r3    apk

By default, the SBOM you’ll see will be a nicely formatted table rather than any standardized SBOM format, which leads us to…

Choose Your SBOM Format

Depending on your use cases, it may be important to use a particular SBOM format. The most common ones are Software Package Data Exchange (SPDX) and CycloneDX, both of which Syft supports. Syft also has a format which interoperates losslessly with the Grype vulnerability scanner.

While Syft supports these different formats, they have slightly different goals and features. It may be important to pick SPDX or CycloneDX for interoperability with other tools or as a standardized format to distribute to downstream consumers.

Generating an SBOM in SPDX format

If your use case requires an SBOM in SPDX format, Syft has you covered. SPDX has been around the longest of all the formats mentioned here, and there are multiple variants of it: Syft supports SPDX tag-value (spdx-tag-value) and SPDX JSON (spdx-json). For SPDX JSON, simply add the -o spdx-json argument. For example, running this against a Docker image, again using the latest Alpine:

syft alpine:latest -o spdx-json

You’ll see there is a lot more data than the table view allows! You should see something resembling:

{
 "SPDXID": "SPDXRef-DOCUMENT",
 "name": "alpine-latest",
 "spdxVersion": "SPDX-2.2",
 "creationInfo": {
  "created": "2022-04-12T01:47:03.011148Z",
  "creators": [
   "Organization: Anchore, Inc",
   "Tool: syft-0.42.4"
  ],
  "licenseListVersion": "3.16"
 },
 "dataLicense": "CC0-1.0",
 "documentNamespace": "https://anchore.com/syft/image/alpine-latest-31e0e940-da83-4ea2-8a0c-fbba76371667",
 "packages": [
  {
   "SPDXID": "SPDXRef-8039c8621bcc1383",
   "name": "alpine-baselayout",
   "licenseConcluded": "GPL-2.0-only",
   "description": "Alpine base dir structure and init scripts",
   "downloadLocation": "https://git.alpinelinux.org/cgit/aports/tree/main/alpine-baselayout",
   "externalRefs": [
    {
     "referenceCategory": "SECURITY",
     "referenceLocator": "cpe:2.3:a:alpine:alpine-baselayout:3.2.0-r18:*:*:*:*:*:*:*",
     "referenceType": "cpe23Type"
    },
    {
     "referenceCategory": "PACKAGE_MANAGER",
     "referenceLocator": "pkg:alpine/[email protected]?arch=x86_64&upstream=alpine-baselayout&distro=alpine-3.15.0",
     "referenceType": "purl"
    }
   ],
   "filesAnalyzed": false,
   "licenseDeclared": "GPL-2.0-only",
   "originator": "Person: Natanael Copa <[email protected]>",
   "sourceInfo": "acquired package info from APK DB: /lib/apk/db/installed",
   "versionInfo": "3.2.0-r18"
  }
 ],
 "files": [
  {
   "SPDXID": "SPDXRef-2eaa15c5fc625ebe",
   "comment": "layerID: sha256:8d3ac3489996423f53d6087c81180006263b79f206d3fdec9e66f0e27ceb8759",
   "licenseConcluded": "NOASSERTION",
   "fileName": "/etc/crontabs/root"
  }
 ],
 "relationships": [
  {
   "spdxElementId": "SPDXRef-8039c8621bcc1383",
   "relationshipType": "CONTAINS",
   "relatedSpdxElement": "SPDXRef-2eaa15c5fc625ebe"
  }
 ]
}

Not only does this format contain the package names, but also Package URLs, license information, and a host of other things such as files Syft identified associated with a package.

Generating an SBOM in CycloneDX format

Similarly, if you need to generate an SBOM in CycloneDX format use a CycloneDX format option. Syft supports CycloneDX XML (cyclonedx-xml) and JSON (cyclonedx-json). For CycloneDX XML:

syft <source> -o cyclonedx-xml

To run this against the same latest Alpine image, run:

syft alpine:latest -o cyclonedx-xml

And you should see a result resembling this:

<?xml version="1.0" encoding="UTF-8"?>
<bom xmlns="http://cyclonedx.org/schema/bom/1.4" serialNumber="urn:uuid:fb2a4dac-b62b-4d78-b209-40bd09388022" version="1">
  <metadata>
    <timestamp>2022-04-11T22:01:51-04:00</timestamp>
    <tools>
      <tool>
        <vendor>anchore</vendor>
        <name>syft</name>
        <version>0.42.4</version>
      </tool>
    </tools>
    <component bom-ref="27f24e002ab47c1b" type="container">
      <name>alpine:latest</name>
      <version>sha256:a3f8ca28888378e4880b3f73504c78278a9038dccf906760a1afd4a08c81c1c1</version>
    </component>
  </metadata>
  <components>
    <component type="library">
      <publisher>Natanael Copa &lt;[email protected]&gt;</publisher>
      <name>alpine-baselayout</name>
      <version>3.2.0-r18</version>
      <description>Alpine base dir structure and init scripts</description>
      <licenses>
        <license>
          <id>GPL-2.0-only</id>
        </license>
      </licenses>
      <cpe>cpe:2.3:a:alpine-baselayout:alpine-baselayout:3.2.0-r18:*:*:*:*:*:*:*</cpe>
      <purl>pkg:alpine/alpine-baselayout@3.2.0-r18?arch=x86_64&amp;upstream=alpine-baselayout&amp;distro=alpine-3.15.0</purl>
      <externalReferences>
        <reference type="distribution">
          <url>https://git.alpinelinux.org/cgit/aports/tree/main/alpine-baselayout</url>
        </reference>
      </externalReferences>
      <properties>
        <property name="syft:package:foundBy">apkdb-cataloger</property>
        <property name="syft:package:metadataType">ApkMetadata</property>
        <property name="syft:package:type">apk</property>
        <property name="syft:cpe23">cpe:2.3:a:alpine:alpine-baselayout:3.2.0-r18:*:*:*:*:*:*:*</property>
        <property name="syft:location:0:layerID">sha256:8d3ac3489996423f53d6087c81180006263b79f206d3fdec9e66f0e27ceb8759</property>
        <property name="syft:location:0:path">/lib/apk/db/installed</property>
        <property name="syft:metadata:gitCommitOfApkPort">dfa1379357a321e638feef1cd8d55ab03d020f45</property>
        <property name="syft:metadata:installedSize">413696</property>
        <property name="syft:metadata:originPackage">alpine-baselayout</property>
        <property name="syft:metadata:pullChecksum">Q1EymS6rAgmGs7XYhqdyEoiWgEZ6A=</property>
        <property name="syft:metadata:pullDependencies">/bin/sh so:libc.musl-x86_64.so.1</property>
        <property name="syft:metadata:size">21101</property>
      </properties>
    </component>
    <component type="operating-system">
      <name>alpine</name>
      <version>3.15.0</version>
      <description>Alpine Linux v3.15</description>
      <swid tagId="alpine" name="alpine" version="3.15.0"></swid>
      <externalReferences>
        <reference type="issue-tracker">
          <url>https://bugs.alpinelinux.org/</url>
        </reference>
        <reference type="website">
          <url>https://alpinelinux.org/</url>
        </reference>
      </externalReferences>
      <properties>
        <property name="syft:distro:id">alpine</property>
        <property name="syft:distro:prettyName">Alpine Linux v3.15</property>
        <property name="syft:distro:versionID">3.15.0</property>
      </properties>
    </component>
  </components>
</bom>

Again, there is a lot more data than the table allows, but a different set of data than the SPDX format because there simply is not a one-to-one mapping of properties between the two.

Generating an SBOM in Syft Lossless format

The last format we’ll talk about is Syft’s own JSON format. If you don’t need to provide an SBOM to other tools and you plan to use Grype to scan it, the Syft JSON format offers the highest fidelity. Both SPDX and CycloneDX lose some amount of information from the initial Syft data model, whereas the Syft format does not.

Although Grype works great with SPDX and CycloneDX, data can be lost when converting to one of those formats, and Grype matching uses some of that extra data, so the Syft JSON may make the most sense. To use the Syft JSON format, pass the -o json argument.
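As a minimal sketch (the file name below is just an illustration), you can save the Syft JSON SBOM and then point Grype at it:

syft <your image> -o json > image-sbom.syft.json

grype sbom:image-sbom.syft.json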

Additional Syft Features

There’s a lot more that Syft can do, with quite a few configuration options. A few things to note include:

  • Output the SBOM to a file using --file path/to/file (see the examples after this list)
  • Exclude paths from scanning using --exclude path/**/*.txt
  • Specify configuration in a .syft.yaml file
  • Connect to private OCI registries
  • Cryptographically sign and attest SBOMs
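A couple of quick sketches of the first two options (the output file name and exclude pattern are placeholders; quoting the glob keeps your shell from expanding it):

syft alpine:latest -o spdx-json --file alpine-sbom.spdx.json

syft dir:. --exclude './out/**/*.txt'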

Next Steps

Now that you’ve got an SBOM, what’s next? A logical next step would be to integrate with your build pipeline to have SBOMs generated automatically. In fact, there could be more than one location where it makes sense to generate SBOMs such as build time and after a container is built or during a release process.

The SBOMs could then be scanned for license compliance and continuously scanned for vulnerabilities. In fact, if you are using GitHub Actions, there are a couple of actions to do just that: sbom-action to generate SBOMs using Syft and scan-action to perform vulnerability scanning. Setting these up for a few repositories is very simple, but it can become challenging when there are many repositories to keep track of.

Managing SBOMs at scale

As we’ve talked about, using SBOMs as a central part of securing your software supply chain is increasingly important. Integrating automated SBOM generation into your DevOps process is vital. Storing, managing, and analyzing those SBOMs to inform security measures should be an important consideration for you and your organization.

For more comprehensive SBOM management, an enterprise level solution like Anchore Enterprise will enable you to generate comprehensive SBOMs with every build, detect drift from one build to the next, share SBOMs internally or externally, and quickly identify risk such as vulnerabilities, secrets, malware, and misconfiguration. To learn more about Anchore Enterprise, schedule a demo with one of our specialists here.

Conclusion

Now that you understand the many reasons to generate SBOMs (whether for compliance or vulnerability analysis), using Syft to do so is a flexible and simple process with many options to tailor SBOMs to your specific use cases.

If you’d like to explore using Anchore Enterprise for its robust features like continuous visibility, SBOM monitoring, drift detection, and policy enforcement then access a free 15 day trial here.

Anchore and Docker Release ‘docker sbom’ to Create Comprehensive SBOMs Based on Syft

Today Anchore and Docker released the first feature in what we anticipate will be an ongoing initiative to bring the value of the software bill of materials (SBOM) to all container-oriented build and publication systems. Now included in the latest Docker Desktop version is an operation called ‘docker sbom’ that is available via the ‘docker’ command. This new operation, which is built on top of Anchore’s open source Syft project, enables Docker users to quickly generate detailed SBOM documents against container images using the native Docker CLI.

SBOMs are quickly becoming foundational data sources for a variety of DevSecOps use cases ranging from basic software development hygiene all the way to unlocking more complex security and compliance capabilities such as tamper and drift detection, zero-day response support, and post-security-event forensic analysis. While security scanning tools need to identify software components, they often don’t make an SBOM accessible to users or include the level of detail needed to support a variety of use cases. With this open source collaboration between Anchore and Docker, we are giving users the ability to create and store an SBOM independently from running any higher-level function like vulnerability scanning or license detection.

By enabling SBOM creation to be an independent operation, it can be decoupled from the multitude of individual use cases that rely on SBOM data. This approach gives users the ability to generate an SBOM once and then use it for a variety of use cases. We believe that the availability of SBOM data is foundational when developing processes and technologies to improve software supply chain security. With ‘docker sbom’, we’re excited to engage with the Docker and Anchore communities together on the topic of SBOM creation, usage, and future directions.

How the ‘docker sbom’ command works

The new ‘docker sbom’ command is simple to use and leverages the power of Syft to provide rich content and data formats. In the following quick example, we show how the ‘docker sbom’ command can be used to generate a comprehensive SBOM document in a user-chosen format and then used as input for other tools that are capable of consuming an SBOM to provide higher-level functions such as vulnerability scanning.

As the discussion on the best way to create and consume SBOM data continues to evolve, we’re committed to supporting industry standard formats like SPDX, CycloneDX, Syft-JSON, and others in order to promote the idea of creating and storing SBOMs in forms that interoperate with evolving security and DevOps infrastructure tools.

Here, we show using the ‘docker sbom’ command against a test image that combines regular distro-provided packages (Alpine in this case) with multiple vulnerable versions of Log4j that are packaged in a variety of different forms ranging from simple top-level jars to many-levels-deep jars within compressed Java archives:

% docker sbom dnurmi/testrepo:jarjar

Syft v0.42.2

 ✔ Loaded image
 ✔ Parsed image
 ✔ Cataloged packages      [217 packages]

NAME                         VERSION                        TYPE

alpine-baselayout            3.2.0-r18                      apk
alpine-keys                  2.4-r1                         apk
…
…
log4j-core                   2.12.1                         java-archive
log4j-core                   2.11.0                         java-archive
log4j-core                   2.11.1                         java-archive
log4j-core                   2.13.2                         java-archive
log4j-core                   2.12.0                         java-archive
…

While the default output is in human-readable form for quick review, the command supports a growing set of output formats that can be used more directly for integration into other systems and tools that can analyze SBOMs:

% docker sbom --help


Usage:  docker sbom [OPTIONS] COMMAND

…
      --format string         report output format, options=[syft-json cyclonedx-xml cyclonedx-json github-json spdx-tag-value spdx-json table text]

                              (default "table")
…

To demonstrate this flow, let’s look at a simple use case where ‘docker sbom’ is used to produce its data as SPDX JSON, which is then consumed by another tool. We’ll use Grype, Anchore’s open source vulnerability scanner, to produce a vulnerability report without needing to contact any remote scanning services:

% docker sbom --format spdx-json docker.io/dnurmi/testrepo:jarjar | grype

Syft v0.42.2

 ✔ Loaded image
 ✔ Parsed image
 ✔ Cataloged packages      [217 packages]

NAME                 INSTALLED     FIXED-IN      VULNERABILITY        SEVERITY

…
log4j-core           2.12.1                      CVE-2021-45046       Critical
log4j-core           2.11.0                      CVE-2021-45105       Medium
log4j-core           2.13.2        2.16.0        GHSA-7rjr-3q55-vv33  Critical
log4j-core           2.12.1        2.12.4        GHSA-8489-44mv-ggj8  Medium
log4j-core           2.13.0                      CVE-2020-9488        Low
log4j-core           2.12.1        2.12.3        GHSA-p6xc-xr62-6r2g  High
log4j-core           2.12.0        2.12.3        GHSA-p6xc-xr62-6r2g  High
log4j-core           2.12.0                      CVE-2021-44832       Medium
…

What’s next for Anchore and Docker collaboration

The process above demonstrates a vulnerability scan as just one example use case where SBOMs can prove valuable. At Anchore, we’ve built our open source projects and our commercial Anchore Enterprise solution using SBOMs as the foundation for enabling best practices for software development, security, and compliance. We’re committed to continuing this work by making available both open source projects and commercial Anchore Enterprise products that are built to create, ingest, store, analyze, and output SBOM data across all stages in the development cycle.

With Docker, we’re looking forward to further collaboration to deliver deeper integration between Syft’s SBOM generating technology and the Docker build and store processes. We are working toward a near future where every container image stored in a registry has an associated SBOM that can be inspected and consumed for further processing. And beyond that, we’re looking to explore ideas that couple build directives with SBOM content to drive concepts like reproducible builds and build time security rule enforcement. We’re excited to continue the discussion and to have you join us in this effort!

Grype now supports CycloneDX and SPDX

In the world of software bills of materials (SBOM) there are currently two major standards: Software Package Data Exchange (SPDX) and CycloneDX. SPDX is a product of the Linux Foundation. It’s been a standard for over ten years now. CycloneDX is brought to us by the OWASP project. It’s a bit newer than SPDX, and just as capable. If you’re following the SBOM news, these two formats are often topics of discussion.

It is expected that anyone who is creating or consuming SBOMs will probably use one of these two formats to ensure a certain amount of interoperability. If you expect the consumers of your software to keep track of your SBOM, you need a standard way of communicating. Likewise, if we are expecting an SBOM from our vendors, we want to make sure it’s in a format we can actually use. This is one of those cases where more isn’t better; two is plenty.

If you’re familiar with Anchore’s open source projects Syft and Grype, there’s also another format you’ve probably seen known as the Syft lossless SBOM. This format has been tailored specifically to the needs of Syft and Grype when the projects were just starting out. It’s a great format and contains a huge amount of information, but there aren’t a lot of tools out there that can generate or consume this SBOM format today.

When we think about vulnerability scanners, we tend to think about pointing a scanner at a container, a directory, or even a source repo, then scanning that location to find vulnerabilities in the dependencies. Grype has a neat trick though: it can scan an SBOM for vulnerabilities. Instead of first scanning the files to identify them and then figuring out whether any have vulnerabilities, Grype can skip the identification step entirely by using an SBOM. Since most of the time a vulnerability scanner spends is in this identification stage, scanning an SBOM for vulnerabilities is incredibly fast.

Initially Grype was only able to use a Syft-format SBOM to scan for vulnerabilities. This is awesome, but we come back to the problem of what happens when a vendor gives us an SBOM in SPDX or CycloneDX format? The easy answer is to support those formats too, of course. The next obvious question is which format Grype should support next: SPDX or CycloneDX? Since making a decision is hard (and SBOM formats are like children: you can’t really pick a favorite), we decided to support both!

If you download the latest version of Grype you can now use it to scan your SPDX and CycloneDX SBOMs for vulnerabilities. If a vendor ships you an SBOM, it can be fed directly into Grype. We’re pretty sure Grype is the first open source vulnerability scanner that supports both SPDX and CycloneDX at the time of writing this. We think that’s a pretty big deal!

Now, it should be noted that this functionality is very new. There are going to be bugs and difficulties scanning SPDX and CycloneDX SBOMs. We would be fools to pretend the features are perfect. However, Grype is also an open source project; you don’t have to sit on the sidelines and watch. Open source is a team sport. If you scan an SBOM with Grype and run into any problems, please file a bug here. You can even submit a patch if that’s more your style; we love pull requests from our community.

Stay tuned for even more awesome features coming soon. We’re just getting started!

Anchore Enterprise 4.0 Delivers SBOM-Powered Software Supply Chain Management

With significant attacks against the software supply chain over the last year, securing the software supply chain is top of mind for organizations of all sizes. Anchore Enterprise 4.0 is designed specifically to meet this growing need, delivering the first SBOM-powered software supply chain management tool.

Powered By SBOMs

Anchore Enterprise 4.0 builds on Anchore’s existing SBOM capabilities, placing comprehensive SBOMs as the foundational element to protect against threats that can arise at every step in the software development lifecycle. Anchore can now spot risks in source code dependencies and watch for suspicious SBOM drift in each software build, as well as monitor applications for new vulnerabilities that arise post-deployment.

New Key Features:

Track SBOM drift to detect suspicious activity, new malware, or compromised software

Anchore Enterprise 4.0 introduces an innovative new capability to detect SBOM drift in the build process, alerting users to changes in SBOMs so they can be assessed for new risks or malicious activity. With SBOM drift detection, security teams can now set policy rules that alert them when components are added, changed, or removed so that they can quickly identify new vulnerabilities, developer errors, or malicious efforts to infiltrate builds.

End-to-end SBOM management reduces risk and increases transparency in software supply chains

Building on Anchore’s existing SBOM-centric design, Anchore Enterprise 4.0 now leverages SBOMs as the foundational element for end-to-end software supply chain management and security. Anchore automatically generates and analyzes comprehensive SBOMs at each step of the development lifecycle. SBOMs are stored in a repository to provide visibility into your components and dependencies as well as continuous monitoring for new vulnerabilities and risks, even post-deployment. Additionally, users can now meet customer or federal compliance requirements such as those described in the Executive Order On Improving the Nation’s Cybersecurity by producing application-level SBOMs to be shared with downstream users.

Track the security profile of open source dependencies in source code repositories and throughout the development process

With the ever-expanding use of open source software by developers, it has become imperative to identify and track the many dependencies that come with each piece of open source at every step of the development cycle to ensure the security of your software supply chain. Anchore Enterprise 4.0 extends scanning for dependencies to include source code repositories on top of existing support for CI/CD systems and container registries. Anchore Enterprise can now generate comprehensive SBOMs that include both direct and transitive dependencies from source code repositories to pinpoint relevant open source vulnerabilities, and enforce policy rules.

Gain an application-level view of software supply chain risk

Securing the software supply chain requires visibility into risk for each and every application. With Anchore Enterprise 4.0, users can tag and group all of the artifacts associated with a particular application, release, or service. This enables users to report on vulnerabilities and risks at an application level and monitor each application release for new vulnerabilities that arise. In the case of a new vulnerability or zero-day, users can quickly identify impacted applications solely from the SBOM repository and respond quickly to protect and remediate those applications.

Looking Forward

Anchore believes that SBOMs are the foundation of software supply chain management and security. The Anchore team will continue to build on these capabilities and advance the use of SBOMs to secure and manage the ever-evolving software supply chain landscape.

Trusting SBOMs in the Software Supply Chain: Syft Now Creates Attestations Using Sigstore

This blog post has been archived and replaced by a supporting pillar page.

Helping Entrepreneurs Take Flight

The Kindness Campaign, inspired by Anchore’s core values, focuses on spreading kindness throughout our local communities. With Anchorenauts distributed across the US and UK, our quarterly volunteer program enables and encourages Anchorenauts to connect with local organizations and give back. In addition to direct support for various causes throughout the year, Anchore empowers team members to get involved with eight (8) paid volunteer hours per quarter.

This month, we are excited to partner with Ashley Goldstein from the Santa Barbara-based organization Women’s Economic Ventures (WEV). WEV, in partnership with the Mixteco Indigena Community Organization Project (MICOP), programmatically supports aspiring entrepreneurs within the Indigenous and Latinx community in Santa Barbara and Ventura Counties.

Budding entrepreneurs hold up their Women’s Economic Ventures certification.

Through the Los Emprendedores Program, Ashley firmly believes in WEV’s and MICOP’s ability to empower members with the skills they need to launch their own businesses and to effect change in the most marginalized populations.

As part of the Kindness Campaign, Anchore has donated gently used Apple MacBooks to support budding entrepreneurs with the tools needed to kick-start their businesses and to enable their tremendous entrepreneurship training in the Los Emprendedores Program. In the program, participants develop highly valuable business skills ranging from business planning and grant writing to digital marketing and key ESG (Environmental, Social, & Governance) practices.

As a tech company, we deeply believe in the responsibility to give back to our community by widening access not only to basic technology, but also to business and career opportunities in the technology sector. At Anchore, we feel a great sense of pride in contributing to that in our community, and we are grateful for the opportunity to support Ashley, WEV, and MICOP.

How You Can Take Action

If your company has gently used computer equipment that is ready to be donated, we encourage you to reach out to WEV, and other organizations doing amazing work in their communities such as Boys & Girls Clubs of America (that have local chapters nationwide) to learn more about the ways you can help.

Be sure to check back next quarter to hear about new activity with Anchore’s Kindness Campaign.

Gartner’s 12 Things to Get Right for Successful DevSecOps: A Study in DevSecOps Best Practices

This blog post has been archived and replaced by a supporting pillar page.

2022 Security Trends: Software Supply Chain Survey

In January 2022, Anchore published its Software Supply Chain Security Survey of the latest security trends, with a focus on the platforms, tools, and processes used by large enterprises to secure their software supply chains, including the growing volume of software containers.

What Are the 2022 Top Security Trends?

The top 2022 security trends related to software supply chain security are:

  1. Supply chain attacks are impacting 62 percent of organizations
  2. Securing the software supply chain is a top priority
  3. The software bill of materials (SBOM) emerges as a best practice to secure the software supply chain
  4. Open source and internally developed code both pose security challenges
  5. Increased container adoption is driving the need for better container security
  6. Scanning containers for vulnerabilities and quickly remediating them is a top challenge
  7. The need to secure containers across diverse environments is growing as organizations adopt multiple CI/CD tools and container platforms

Software Supply Chain Security Survey: Key Findings

The Anchore Software Supply Chain Security Survey is the first survey of respondents exclusively from large enterprises rather than solely from open source and developer communities or smaller organizations. The survey asked 428 executives, directors, and managers in IT, security, development, and DevOps functions about their security practices and concerns and use of technologies for securing containerized applications. Their answers provide a comprehensive perspective on the state of software supply chain security with a focus on the impact of increased use of software containers.

2022 Software Supply Chain Security Survey Respondent Demographics

We highlight several key findings from the survey in this blog post. For the complete survey results, download the Anchore 2022 Software Supply Chain Security Report.

1. Supply chain attacks impacted 62% of organizations

Such widespread attacks as SolarWinds, MIMECAST, and HAFNIUM as well as the recent Log4j vulnerability have brought the realities of the risk associated with software supply chains to the forefront. As a result, organizations are quickly mobilizing to understand and reduce software supply chain security risk.

Software supply chain attack impacts

A combined 62 percent of respondents were impacted by at least one software supply chain attack during 2021, with 6 percent reporting the attacks as having a significant impact and 25 percent indicating a moderate impact.

2. Organizations focus on securing the software supply chain

More than half of survey respondents (54 percent) indicate that securing the software supply chain is a top or significant focus, while an additional 29 percent report that it is somewhat of a focus. This indicates that recent, high-profile attacks have put software supply chain security on the radar for the vast majority of organizations. Very few (3 percent) indicate that it is not a priority at all.

pie chart showing organizations focusing on securing the software supply chain

3. SBOM practices must mature to improve supply chain security

The software bill-of-materials (SBOM) is a key part of President Biden’s executive order on improving national cybersecurity because it is the foundation for many security and compliance regulations and best practices. Despite the foundational role of SBOMs in providing visibility into the software supply chain, fewer than a third of organizations are following SBOM best practices. In fact, only 18 percent of respondents have a complete SBOM for all applications.

Bar chart with a breakdown of SBOM practices to improve software supply chain security

Despite these low numbers, respondents do report, however, that they plan to increase their SBOM usage in 2022, so these trends may change as adoption continues to grow.

4. The shift to containers continues unabated

Enterprises plan to continue expanding container adoption over the next 24 months with 88 percent planning to increase container use and 31 percent planning to increase use significantly.

Container use statistics from Anchore 2022 Software Supply Chain Security Survey

A related trend of note is that more than half of organizations are now running employee- and customer-facing applications in containers.

5. Securing containers focuses on supply chain and open source

Developers incorporate a significant amount of open source software (OSS) in the containerized applications they build. As a result, the security of OSS containers is ranked as the number one challenge by 24 percent of respondents, with almost half (45 percent) ranking it among their top three challenges. Ranked next was the security of the code we write, with 18 percent of respondents choosing it as their top container security challenge, followed by understanding the full SBOM at 17 percent.

Bar chart showing top security challenges

6. Organizations face challenges in scanning containers

As organizations continue to expand their container use, a large majority face critical challenges related to identifying and remediating security issues within containers. Top challenges include identifying vulnerabilities in containers (89 percent), the time it takes to remediate issues (72 percent), and identifying secrets in containers (78 percent). Organizations will need to adopt more accurate container scanning tools that can accurately pinpoint vulnerabilities and provide recommendations for quick remediation.

Bar chart showing top container scanning challenges

7. Organizations must secure across diverse environments

Survey respondents use a median of 5 container platforms. The most popular method of deployment is standalone Kubernetes clusters based on the open source package, which 75 percent of respondents use. These environments are run on-premises, via hosting providers, or on infrastructure-as-a-service from a cloud provider. The second most popular container platform is Azure Kubernetes Service (AKS), used by 53 percent of respondents, and Red Hat OpenShift ranks third at 50 percent. Respondents leverage the top container platforms in both their production and development environments.

Bar chart showing types of container platforms used by enterprises

For more insights to help you build and maintain a secure software supply chain, download the full Anchore 2022 Software Supply Chain Security Report.

Attribution Requirements for Sharing Charts

Anchore encourages the reuse of charts, data, and text published in this report under the terms of the Creative Commons Attribution 4.0 International License.

You may copy and redistribute the report content according to the terms of the license, but you must provide attribution to the Anchore 2022 Software Supply Chain Security Report.

Key Things to Know about SBOMs and SBOM Standards

This blog post has been archived and replaced by a supporting pillar page.

How to Find and Fix Log4j with Open Source and Enterprise Tools from Anchore

Updated 01/07/22. As new information about the Log4j vulnerability becomes available, we will update this blog with the latest information.

As the Log4j zero-day vulnerability continues to have widespread effects across nearly every industry and sector, it’s becoming increasingly evident that there is still a long remediation road ahead. With new vulnerabilities associated with Log4j continuing to emerge, it is imperative to find and remediate this Log4j vulnerability across all your applications to ensure the security of your software supply chains.

This blog provides step-by-step instructions for finding and fixing the Log4Shell vulnerability found in Log4j using Anchore open source solutions (Syft and Grype) and commercial solution (Anchore Enterprise).

Summary of the Known Log4j Vulnerabilities

Log4Shell was originally published on Dec 10, 2021 as a zero-day vulnerability found in Apache Log4j 2, a widely used Java library. The vulnerability enables a remote attacker to take control of a device on the internet if the device is running certain versions of Log4j 2. After the original vulnerability was reported and fixed, subsequent vulnerabilities were identified, which resulted in new patched versions. We will update the table below for any further Log4j vulnerabilities or versions.

Vulnerability IDs                    Affected Package and Version   Patched Version                                    Date Published  References
CVE-2021-44228, GHSA-jfh8-c2jp-5v3q  log4j-core 2.14 and earlier    2.15.0 (Java 8)                                    10 Dec 2021     NVD, GHSA
CVE-2021-45046, GHSA-7rjr-3q55-vv33  log4j-core 2.15 and earlier    2.16.0 (Java 8), 2.12.2 (Java 7)                   14 Dec 2021     NVD, GHSA
CVE-2021-45105, GHSA-p6xc-xr62-6r2g  log4j-core 2.16 and earlier    2.17.0 (Java 8), 2.12.3 (Java 7), 2.3.1 (Java 6)   18 Dec 2021     NVD, GHSA
CVE-2021-44832, GHSA-8489-44mv-ggj8  log4j-core 2.17.0 and earlier  2.17.1 (Java 8), 2.12.4 (Java 7), 2.3.2 (Java 6)   28 Dec 2021     NVD, GHSA

Steps to Find Log4j Using Anchore Open Source Tools

Anchore provides two lightweight, command-line open source tools. Syft scans filesystems or container images to produce a comprehensive software bill of materials (SBOM). Grype identifies known vulnerabilities by scanning an SBOM generated by Syft or by scanning filesystems or container images directly.

In the context of Log4j, Syft-generated SBOMs enable you to find out if Log4j is present in a source code repository, a filesystem, or a container image. Grype can then be used to perform a vulnerability scan on that SBOM to determine if the detected versions of any given software package match any of the known Log4j vulnerabilities.

As the Log4j incident continues to evolve, both Syft and Grype are useful in identifying which Log4j versions you are running and determining the presence of any new Log4j vulnerabilities as they are announced.

The following examples provide step-by-step instructions for using Syft and Grype to locate and remediate Log4j.

Syft

Watch a video demonstration

Download Syft.

Example 1: Run syft to generate an SBOM and filter the results to identify any vulnerable versions of Log4j.

Command Example:

# syft dir://tmp/jarsinjars/ | grep log4j

Command Example:

# syft docker.io/dnurmi/testrepo:jarjar | grep log4j

Example 2: Run syft with the JSON output option to get more detailed information on the locations of the Log4j dependencies in your source code repositories and/or container images.

The example below shows the top-level ‘fireline.hpi’ package, which contains a deeply embedded Log4j jar, as shown by the ‘VirtualPath’ element of the JSON record.

Command Example:

# syft -o json docker.io/dnurmi/testrepo:jarjar
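If you have jq installed, a quick way to pull out just the Log4j entries and where they were found might look like the following (the .artifacts field names reflect the Syft JSON schema at the time of writing and may differ in newer releases):

# syft -o json docker.io/dnurmi/testrepo:jarjar | jq '.artifacts[] | select(.name | test("log4j")) | {name, version, locations}'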

Example 3: Run syft and create a JSON file so you can save it as an SBOM artifact.

Command Example:

# syft -o json docker.io/dnurmi/testrepo:jarjar > log4j-sbom.json

Grype

Download Grype.

Example 4: Run grype on an SBOM produced by Syft and filter to find components that contain Log4j vulnerability IDs.

The example below filters for the ID GHSA-jfh8-c2jp-5v3q but you can also filter for any of the new CVEs affecting Log4j (see above for a list of vulnerability identifiers).

Command Example:

# grype docker.io/dnurmi/testrepo:jarjar | grep GHSA-jfh8-c2jp-5v3q

Steps to Find and Fix Log4j Using Anchore Enterprise

Watch a video demonstration.

Anchore Enterprise scans and performs a deep inspection of container images to identify the presence of Log4j and related vulnerabilities. However, Anchore Enterprise also maintains a repository of SBOMs across applications and teams, so users can query their entire SBOM catalog to search for any scanned images that include Log4j.

Anchore Enterprise users can leverage policy-based reporting and alert services to create automatic notifications when existing or new scans identify vulnerable versions of Log4j. The customizable policy engine can provide a “stop” signal to your build systems, and Anchore’s out-of-the-box policies also contain a rule that blocks critical vulnerabilities, such as those registered against Log4j, from being deployed into production.

Using the Anchore Enterprise Runtime Inventory system, users can also quickly search for and detect instances of vulnerable Log4j software that is present in containers running in your Kubernetes deployments.

All Anchore Enterprise vulnerability detection features continuously update using the latest vulnerability data from a variety of public sources, and send notifications as the Log4j incident evolves, to keep teams abreast of newly discovered vulnerabilities and fix versions pertaining to Log4j software.

Instructions below are provided for the Anchore Enterprise command-line interface (CLI) and the user interface.

Scanning with Anchore CLI to Find Log4j

Watch a video demonstration. 
The Anchore CLI provides a command-line interface on top of the Anchore Enterprise REST API. The Anchore CLI is published as a Python package that can be installed from the Python PyPI package repository on any platform supporting PyPI. Anchore CLI users can manage and inspect images, policies, subscriptions, and registries.

Example 1: Run anchore-cli to scan container images, produce an SBOM, and use it to identify any vulnerable versions of Log4j, along with the location of any detected, installed Log4j packages.

Command Example:

# anchore-cli image content docker.io/dnurmi/testrepo:jarjar java | grep log4j

Example 2: Run anchore-cli to search a specific container image for vulnerabilities matching vulnerability IDs.

The example below filters for CVE-2021-44228 but you can also filter for any of the new CVEs affecting Log4j (see above for a list of CVE identifiers).

Command Example:

# anchore-cli image vuln docker.io/dnurmi/testrepo:jarjar all | grep CVE-2021-44228

Example 3: Run anchore-cli to query all of the analyzed image SBOMs in your catalog for instances of vulnerable software matching particular vulnerability IDs.

The example below filters for ID GHSA-jfh8-c2jp-5v3q but you can also filter for any of the new CVEs affecting Log4j (see above for a list of CVE identifiers).

Command Example:

# anchore-cli query images-by-vulnerability --vulnerability-id GHSA-jfh8-c2jp-5v3q

Policy Enforcement and Reporting Using Anchore Enterprise

Example 1: In the Anchore Enterprise user interface navigate to View Reports -> Quick Report -> Images By Vulnerability to perform a query to retrieve all images with vulnerable software matching particular vulnerability IDs, such as CVE-2021-44228.

Example 2:  In Anchore Enterprise, navigate to Image Analysis > Select Image > Vulnerabilities Tab. Select View Images Affected on the left hand side of the screen. This will generate a report of all other images that share the specified CVE.

Example 3:  Ensure that the active policy has a rule that specifies a STOP action on any vulnerability marked with a severity level of Critical.

Finding Log4j in Running Containers Using Anchore Enterprise

Example 4: Use Anchore Enterprise’s Runtime Inventory to detect any container that is running in your Kubernetes cluster and includes a vulnerable version of Log4j. You will first need to install KAI (an agent that runs in your Kubernetes cluster) which can be downloaded here.

Once KAI is installed, select the Kubernetes tab in Anchore Enterprise and select Vulnerabilities.

Next query for the known vulnerability identifiers for Log4j such as CVE-2021-44228 or others listed above.

Any currently running images with the Log4j vulnerability will populate in the table below when you execute the query. From there, you can drill down on any impacted image, by selecting the Vulnerabilities tab within the impacted runtime image, and View Images Affected to identify other images that share the Log4j vulnerability.

Remediation Workflows Using Anchore Enterprise

Example: If Log4j is detected in any of your applications, you must begin the fix process. Anchore Enterprise users can trigger notifications and remediation workflows based on rules set through the policy engine. Tickets can be automatically created in Jira or GitHub, sent as emails, or posted to Slack or Microsoft Teams channels. These notifications provide not only the details of the Log4j version discovered or the policy rules that were violated but also include explicit instructions on what versions should be used to resolve the issue.

Contact Us

Anchore is continuing to monitor the Log4j incident as it evolves. If you need assistance or want to learn more about how Anchore can help, please contact us.

Understanding SBOM Management and The Six Ways It Prevents SBOM Sprawl

This blog post has been archived and replaced by a supporting pillar page on anchore.com.

How to Detect and Remediate Log4J at Scale with Anchore Enterprise

Responding to Log4Shell, the Log4j zero-day that disrupted the lives of security teams around the globe, is not a one-weekend or one-week event. While organizations may have put immediate responses in place to try to prevent exploits, the problem won’t be resolved until all of the applications that use Log4j have been remediated. This will require a long-term response that remediates the impacted applications while preventing any more vulnerable components from making it through to production or being delivered to customers.

Since the Log4Shell vulnerability disclosure, we’ve seen a huge interest in our open source projects, Syft and Grype. These simple yet powerful CLI utilities generate a Software Bill of Materials (SBOM) for your software artifacts (Syft), so you can see whether you are using Log4j, and tell you if the versions in use are vulnerable (Grype). Our VP of Security, Josh Bressers, wrote an Infoworld article explaining how you can get going with them quickly.

Syft and Grype are very convenient for ephemeral, one-time scans, but with a fast-moving situation and new versions of Log4j coming out quickly to address the vulnerability (we’ve already seen two), tracking, enforcing, and managing the SBOMs and vulnerability data they generate can quickly become challenging. Anchore Enterprise provides users with a number of features that reduce the pain of the current response frenzy and, over the long haul, help you get to a place where the vulnerability has been fully remediated.

Detecting Log4Shell at Scale

Applications containing Log4j may be going through your development pipeline, sitting in your registry, or actively running in Kubernetes. Anchore Enterprise customers already have all of this information about the possible locations of the vulnerable package in a single repository, so they can easily search across their entire environment to assess the impact.

Anchore Enterprise customers already get a fully supported version of the functionality in Syft and Grype combined into a single tool called AnchoreCTL. Whether used on the command line on a desktop or integrated into your CI/CD pipelines, AnchoreCTL pushes all of the SBOM data to Anchore Enterprise’s centralized data store. Combined with data that Anchore Enterprise gathers from artifact registries or Kubernetes environments, all SBOM data is managed and accessible in a single place.

Not only does this allow security teams to detect whether vulnerable versions of Log4j are being used anywhere across their environments but also allows them to check when new versions of Log4j are being deployed and put into production by developers.

Many CEOs and boards of directors are demanding daily updates from the CISO and security teams on the business impact of the Log4j vulnerability, and Anchore Enterprise’s reporting system allows security and response teams to report accurately on their exposure to the ongoing issue.

Using Policies for Enforcement at Scale

While identifying if and where you are vulnerable is the essential first step to triage the problem, customers quickly need to reduce risk. Anchore Enterprise contains a sophisticated policy engine that can provide a “stop” signal to the platforms in your development environment. By default, the out-of-the-box policies in Anchore Enterprise contain a rule disallowing critical CVEs so all customers already received necessary protections as soon as the issue was flagged in public databases on December 9, even if they had not yet crafted a specific response.

For users who are using AnchoreCTL to scan builds in their CI/CD systems, as soon as the policy rule about critical CVEs was triggered, build and deployment jobs would have been halted for affected software. Going further along the deployment process, users who had Anchore Enterprise connected to Anchore’s Kubernetes Admission Controller would have also been unable to deploy vulnerable applications as a result of the policy rule.

Beyond the default policies provided by Anchore Enterprise, customizing more granular policy rules can help your organization to further pinpoint your efforts. For example, users may run very old versions of Log4j that are not vulnerable to the Log4Shell exploit. Users can easily add an access list rule in Anchore Enterprise to disallow the impacted versions (2.0 to 2.15) but allow others (versions lower than 2.0) to ensure the dragnet doesn’t catch more than it needs. As we have recently seen, an updated version of Log4j (2.15) was itself a concern. Some more advanced users create their own hot fix packages to avoid waiting for upstream security responses. Temporary policy rules can be created to enforce the presence of a specific hash for a custom-built package to ensure developers have used the internally created hot fix until the organization is comfortable using the upstream public package.

Beyond just looking at the version string, a number of mitigation strategies have emerged such as using environment variables to modify the behavior of the Log4j code. A policy rule can be added that ensures these variables are in place. Combined with a temporary allow-list entry for the version of Log4j you are using, this can be a more practical solution while you work on your upgrade strategy.
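For illustration only, one widely publicized mitigation at the time was the LOG4J_FORMAT_MSG_NO_LOOKUPS flag, which could be set when launching a container (the image name here is a placeholder):

docker run -e LOG4J_FORMAT_MSG_NO_LOOKUPS=true myapp:latest

A policy rule that checks for the presence of a variable like this turns the mitigation into something you can verify rather than assume.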

Using the Anchore Enterprise policy engine for multiple pipeline stages enables a defense-in-depth approach to ensure you are catching all entry points for vulnerable content. The single point of command and control for your security rules across any component found in an SBOM allows customers to adjust as new information comes to light.

Remediating Log4Shell At Scale

Finally, chances are you are detecting Log4j in multiple applications maintained by multiple development teams. To start the fix process, Anchore Enterprise users can trigger notifications and remediation workflows based on rules that are triggered from the policy engine. Tickets can be automatically created in Jira or GitHub, sent as emails, or posted to Slack or Microsoft Teams channels. These notifications provide not only the details of the Log4j version discovered or the policy rules that were violated but also include explicit instructions on what versions should be used to resolve the issue.

From Sprint to Marathon

The extensive use of Log4j and the severity of the exploit means security professionals and development teams are going to be dealing with the issue for many months to come. Getting immediate visibility into your risk using open source tools is the fastest way to get going. But as we get ready for the long haul, prepare for the next inevitable critical issue that surfaces. Perhaps you’ve already found some as you’ve addressed Log4j. Anchore Enterprise can get you ready for a quick and full assessment of the impact, immediate controls to prevent vulnerable versions from moving further toward production, and streamlined remediation processes. Please contact us if you want to know how we can help you get started on your SBOM journey.

Anchore Enterprise 3.3 Increases Vulnerability Visibility and Adds UI Enhancements

Visibility into cloud-native software applications is essential for securing the software supply chain. Today’s applications include code and components from many different sources, including internal developers, open source projects, and commercial software. With the release of 3.3, Anchore Enterprise now provides richer insight into your software components to identify more vulnerabilities and security risks along with UI enhancements to streamline the management of your container image analysis. 

Discover and Mitigate Vulnerabilities and Security Risks Found in Rocky Linux Containers

Anchore Enterprise 3.3 can now scan and continuously monitor Rocky Linux container images for any security issues present in installed Rocky Linux packages to improve their security posture and reduce threats. Rocky Linux packages are now also included in SBOMs. Additionally, customers can apply Anchore’s customizable policy enforcement to Rocky Linux packages and vulnerabilities.

Create Customizable Login Messages to Share Info with Your Team

A customizable banner can now be added to the login page. This can be used to provide Anchore Enterprise users with information such as instructions on how to log in (i.e., SSO or email address) or which administrator to contact in the event of an issue. Both end-users and administrators will benefit from this new feature as it enables collaboration and communication between internal teams that are using Anchore Enterprise.

Delete Multiple Items at Once in the Repository View Through the UI

Anchore Enterprise UI users can now select and delete multiple repo tags and “failed” images from the Repository View. When an image is analyzed, a list of repo tags is generated. These tags are alphanumeric identifiers that are attached to an image name. Depending on the content of the image, hundreds of these tags can be generated, many of which are superfluous to the user. Now, rather than having to click on and delete each tag individually, users can delete these unnecessary tags in bulk. Additionally, users can delete multiple images at once that have failed analysis due to either policy requirements or a misconfiguration.

Evaluate Policy Bundle Changes Without Having to Leave the Edit Screen

Anchore Enterprise UI users will now be able to view their policy evaluation as they edit their policy bundles without having to leave the edit screen in the UI. Policy bundle evaluations provide users with a pass or fail status for their images based on user-defined allowlists and blocklists. The ability to view the evaluation while editing the policy bundle enables users to see how their changes are affecting the evaluation without having to leave the screen they are working in.

Viewpoint: The Future of Software Supply Chain Security

Hello friends. My name is Josh and I’ve just started at Anchore as the Vice President of Security. I’ll talk more about what the role means in future posts, but for the moment I want to answer a few questions that are top of mind right now. Namely, what do I think the future of software supply chain security looks like and why did I choose to work with Anchore to help organizations better protect their software supply chains.

Back in 2004 I started working on the Red Hat Product Security Team. My focus has always been on securing the open source we all use every day. Back then we were securing the open source supply chain, but there wasn’t really a name for this practice yet. I have always felt very strongly about the integrity and security of software products as well as the security of open source. It’s a happy coincidence that these two topics have merged in the last few years!

Today open source software makes up a majority of the code in almost every software application we use. Combining open source with cloud platforms and modern development technologies has completely changed the way we build and deliver software applications. These changes have helped us to fundamentally transform how we interact with technology in our jobs and our lives. But now, this dependence on software, and the supply chain that produces it, has created a new set of attack points for bad actors. I believe this industry-wide and economy-wide realization will change the foundations of how we build, deliver, and use technology.

What’s different now?

There was once a time when the security team would end every conversation with “if you don’t listen to us, someday you’ll be sorry.” Nobody was ever sorry. But the world has changed a lot, and that “someday” may be now. There are many new threats and attacks that create significant and measurable losses. Breaches are expensive, ransomware is expensive, and personal data has monetary value now. Anchore’s 2021 Software Supply Chain Security Report found that 64% of organizations had been impacted by a software supply chain attack in the past year. We exist at a nexus point that has made the risk very real. Every company has gone digital with almost everything online now, and DevOps has made the number of services uncountable and the pace of change almost unmeasurable. Meanwhile, the adversaries are organized and highly motivated. Separately, any one of these factors might be manageable, but when you put it all together we need to completely rethink the approach to software supply chain security. Big problems need bold new ideas.

We are also in a period of disruptive change that allows new ideas and real change to happen much faster than normal. The explosion of ransomware and the increase in supply chain attacks, set against the backdrop of a global pandemic that has changed the very foundations of society, are creating the imperative to act. As software supply chain security gets growing attention, it’s important to notice that we are no longer just talking about it; we’re actually taking concrete steps to solve the problems.

What will the future look like?

Now we are starting to understand what the future will look like as we move toward solutions that will help better protect the software supply chain against these growing risks. In the past it was very common to conduct a security review once a product was “done”. This often resulted in many security vulnerabilities being missed and making their way into production or into customers’ hands. Modern development has changed such that security is expected to be tightly integrated into every step of the development process. This is where the term “shift left” originated.

We are already seeing the beginning of this change with the growing attention paid to the software bill of materials (SBOM) and vulnerability scanning as critical components of software supply chain security. Neither of these ideas is new, but we are seeing convergence around SBOM standards. Groups like The Linux Foundation’s OpenSSF and the Cloud Native Computing Foundation (CNCF) are working in the open source ecosystem to create a common understanding of the problems and define potential solutions. The United States Cybersecurity and Infrastructure Security Agency (CISA) has a supply chain task force. Conferences have entire supply chain tracks to share emerging best practices. The time to address software supply chain security is here.

There are new practices, processes, and tools that will need to be put into place to protect the software supply chain. While the importance of SBOM and vulnerability scanning is well understood, the critical challenge is in using the data to improve security. I think what we do with this data is the biggest area for improvement. Having an SBOM by itself isn’t useful. You need the ability to store it, track it over time, to aggregate the data, search the data, and get back actionable answers to questions. The same holds true for vulnerability scanning. Just scanning software after it has been built isn’t enough. What happens after the scan runs? How do you use the data to identify and remediate problems to reduce risk?

I want to use the Heartbleed vulnerability as a great example of where we started, where we are today, and where I want to see us go next. If you were around for Heartbleed, it was an eye opening experience. Just determining which systems you had running a vulnerable version of OpenSSL was a herculean task. Most of us had to manually go looking for files on a disk. Today with our ability to generate and distribute SBOMs, it’s not hard to figure out what systems are using OpenSSL. We can even construct policies now that could prevent a new build or deployment that contains an old version of OpenSSL.

The future I want to see is having insight into your end-to-end software supply chain and the security of the software you create and use. Being able to craft policies that can be enforced about what your software should look like. Not just having the ability to ask what a vulnerability or bug means for your application, but having tools that tell you before you even ask the question.

Why Anchore?

This all brings us to Anchore. I’ve known about Anchore for quite some time. In my previous role in Product Security, I worked with a large number of organizations focused on software supply chain issues. This included open source projects, software vendors, consultants, and even supply chain working groups. It became very obvious that while there was increasing focus on software supply chain security, there wasn’t always a consensus on the best practices or tooling needed.

The current state of tools is very uneven. Few tools provide comprehensive SBOMs with all of the relevant metadata needed to make accurate security assessments and decisions. Some scanning tools aim to report zero false positives, resulting in lots of false negatives. Other tools simplistically report every possible vulnerability, which results in lots of irrelevant false positives. I’m not looking to point any fingers here; this is all very new and everyone is continuing to learn. In my experience the sweet spot is somewhere in the middle: some false positives should be expected, but too many or too few are both bad. The purpose of tooling is to help provide data to make decisions. Bad data results in bad decisions.

Every time I interacted with any organization in the software supply chain space, I kept seeing Anchore as occupying the sweet spot in the middle over and over again. Anchore starts from a foundation of open source tools that are easy for developers to integrate and use. Syft, an open source SBOM generator, is incredibly useful and accurate. Grype, an open source vulnerability scanner, is one of the best vulnerability scanners I’ve ever used. Anchore’s commercial product, Anchore Enterprise, builds on that open source foundation and adds some powerful features for cataloging SBOMs, remediating vulnerabilities, and enforcing policies. Everywhere I looked it seemed that Anchore was the one company that “got it.” Anchore was doing all the things that were important to me in a way that made sense. Relevant scanning results, easy SBOM creation and use, and the ability to leverage existing policies (like CIS) instead of trying to build new ones.

And lastly, open source. Open source isn’t just something I think is a good idea; it’s part of who I am. My entire life has been shaped and built within the open source community. I know anywhere I work has to be extremely open, very open source friendly, and have a culture that mirrors the way open source thinks and works. Anchore has the open source culture and open source focus that I know is so very important. They have a whole blog dedicated to their culture; give it a read, it’s fantastic!

What’s next?

The easiest way to see what’s next is to give the Anchore open source tools a spin. Generate an SBOM with Syft. Then scan the SBOM file for vulnerabilities with Grype. It’s all open, try them out, file some bugs, submit pull requests. Open source works best when everyone works together.
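If you want a concrete starting point, a minimal run against a public image looks like this (swap in any image you care about for ubuntu:latest):

syft ubuntu:latest -o json > sbom.json
grype sbom:./sbom.json

The first command writes an SBOM to sbom.json; the second scans that SBOM for known vulnerabilities without re-examining the image.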

If you want to pull it all together for an end-to-end solution for securing your software supply chain, check out Anchore Enterprise. It’s a nice way to tie the tools together in one place to meet the needs of a larger organization or multiple teams.

I love to talk about these topics. If you’re interested in having a chat or even just saying hi feel free to reach out. Watch this space, there’s a lot to talk about, and even more work to do. It’s going to be a truly epic adventure!

How to Check for CISA Catalog of Exploited Vulnerabilities

Last week the United States Cybersecurity and Infrastructure Security Agency (CISA) published a binding operational directive describing a list of security vulnerabilities that all federal agencies are required to fix. Read the directive here: https://cyber.dhs.gov/bod/22-01/ 

The directive establishes a CISA-managed catalog of known exploited vulnerabilities that carry significant risk to federal agencies. The list can be found here: https://www.cisa.gov/known-exploited-vulnerabilities-catalog

While CISA’s directive is binding only on U.S. federal agencies, companies can also leverage this catalog to prioritize vulnerabilities that may put their organization at risk.

There has been a lot of discussion about this directive and what it will mean. Rather than add commentary about the directive itself, let’s discuss what’s actually inside this list of vulnerabilities and what actions you can take to check if you are using any of the software in question.

It’s important to understand that the list of vulnerabilities in this catalog will not be static. CISA has stated in their directive that the list will be modified in the future, meaning that we can expect more vulnerabilities to be added. Even if a federal agency is not currently running any of the vulnerable software versions, as the list grows and evolves and the software that is running evolves, it will be important to have a plan for the future. Think about handling vulnerabilities like delivering the mail. Even if you finish all your work by the end of the day, there will be more tomorrow.

If you work with lists of vulnerabilities you will be used to vulnerabilities having a severity assigned by the National Vulnerability Database (NVD). The NVD is a U.S. government repository of vulnerability data that is managed by the National Institute of Standards and Technology (NIST). The data in NVD enriches the CVE data set with additional product information as well as a severity rating for the vulnerability based on the CVSS scoring system.

It is very common for policy decisions to be made based on the NVD CVSS severity rating. Any vulnerability with a CVSS score of critical or important is expected to be fixed very quickly, while more time is allowed to fix medium and low severity vulnerabilities. The idea is that these severity ratings can help us decide which vulnerabilities are the most dangerous, and those should be fixed right away.

However, this new list of must-fix vulnerabilities from CISA goes beyond just considering the CVSS score. At the time of writing, the CISA list contains 291 vulnerabilities that require special attention. But why these 291 when there is an almost immeasurable number of vulnerabilities in the wild? The directive indicates that these vulnerabilities are being actively exploited, which means attackers are using them to break into systems right now.

Not all vulnerabilities are created equally

Examining the catalog of vulnerabilities from CISA, many of the IDs have received a rating of critical or important from NVD, but not all. For example, CVE-2019-9978, a vulnerability in a WordPress plugin, carries a severity of medium. Why would a medium severity rating make this list? Attackers don’t pay attention to severity.

Remember this list isn’t based on the NVD CVSS severity rating, it’s based on which vulnerabilities are being actively exploited. CISA has information that organizations do not and is aware of attackers using these particular vulnerabilities to attack systems. The CVSS rating does not indicate if a vulnerability is being actively attacked, it only scores on potential risk. Just because a vulnerability is rated as medium doesn’t mean it can’t be attacked. The severity only describes the potential risk; low risk does not mean zero risk.

How Anchore can help

There are a few options Anchore provides that can help you handle this list. Anchore has an open source tool called Grype which is capable of scanning containers, archives, and directories for security vulnerabilities. For example, you can use Grype to scan the latest Ubuntu image by running
docker run anchore/grype ubuntu:latest
You will have to manually compare the output of Grype to the list from CISA to determine if you are vulnerable to any of the issues; luckily, CISA has provided a CSV of all the CVE IDs here:
https://www.cisa.gov/sites/default/files/csv/known_exploited_vulnerabilities.csv

Here’s a simplified example you can use right now to check if a container is vulnerable to any of the items on the CISA list.

First, use Grype to scan a container image (this example reuses ubuntu:latest from above). You can also scan a directory or archive; this example just uses a container because it’s simple. Extract just the CVE IDs, sort them, then store the sorted list in a file called scan_ids.txt in /tmp.

docker run anchore/grype ubuntu:latest | sed -rn 's/.*(CVE-[0-9]{4}-[0-9]{4,}).*/\1/p' | sort > /tmp/scan_ids.txt

Next, download the CISA CSV file, extract the CVE IDs, sort them, and store the results in a file called “cisa_ids.txt” in /tmp/.

curl https://www.cisa.gov/sites/default/files/csv/known_exploited_vulnerabilities.csv | sed -rn 's/.*(CVE-[0-9]{4}-[0-9]{4,}).*/\1/p' | sort > /tmp/cisa_ids.txt

Then compare the two lists, looking for any IDs that appear on both:

comm -1 -2 /tmp/cisa_ids.txt /tmp/scan_ids.txt

The “comm” utility when run with the “-1 -2” flags only returns things it finds in both lists. This command will return the overlap between the vulnerabilities found by Grype and those on the CISA list. If the container doesn’t contain any CVE IDs on the CISA list, then nothing is returned.

Users of Anchore Enterprise can take advantage of a pre-built, curated CISA policy pack that will scan container images and identify any vulnerabilities found that are on the CISA list.

Download the CISA policy pack for Anchore Enterprise here.

Once downloaded, Anchore customers can upload the policy pack to Anchore Enterprise by selecting the Policy Bundles tab as seen below:

Anchore policy tab

Next, upload the policy pack by selecting the Paste Bundle button.

Upload policy bundle to Anchore

If done correctly, you should see something very similar to what is depicted below, where you can see the raw json file loaded into the policy editor:

Loaded policy bundle

Lastly, activate the bundle by clicking its radio button so that it can be used in your CI/CD pipelines and/or runtime scans to detect the relevant CVEs from the CISA catalog that are specified within the policy.

Activate a policy on Anchore

You can now see the results generated by the CISA policy pack against any of your images, as demonstrated below against an image that contains Apache Struts vulnerabilities that are included within the CISA vulnerability list.

Policy results

From here, you can easily generate automated reports listing which CVEs from the CISA policy exist within your environments.

Looking ahead

Organizations should expect new vulnerabilities to be added to the CISA catalog in the future. Attackers are always changing tactics, finding new ways to exploit existing vulnerabilities, and finding new vulnerabilities. Security is a moving target and security teams must remain vigilant. Anchore will continue to follow the guidance coming out of organizations such as CISA and enable customers and users to take action to secure their environments based on that guidance.

Creating a FedRAMP Compliance Checklist

Creating a FedRAMP compliance checklist can be vital to approaching compliance methodically. While government contracting is full of stories about FedRAMP challenges, the move to cloud-native development provides new tools, technologies, and methodologies that can better set your projects up for FedRAMP compliance success. It’s up to you to capture these best practices in a checklist or process flow for your teams to follow.

Considerations for your FedRAMP Compliance Checklist

Here are some concerns to include in your checklist:

1. Shift Security Left

Shifting left means using tools and practices that deliver rapid feedback from security stakeholders about security and compliance during the early stages of development. However, the objective is always to hand bugs and fixes back to developers as part of a smooth, ongoing, continuous development process.

Unit testing is a familiar example of shifting left: it delivers early feedback on functionality. Shifting unit testing left ensures that most problems are caught early, during the development stage, where it is quicker and simpler to remedy them.

By shifting security left, the handling of each vulnerability becomes an integral part of the CI/CD pipeline. This prevents a mass of vulnerabilities from appearing as a single irritating blockage before your team admits a system into production. More frequent vulnerability scanning during development ensures bugs and other issues can be dealt with quickly and efficiently as they arise, and security becomes a part of the development process.

With the primary focus of CI/CD environments on fast, efficient development and innovation, security has to work efficiently as part of this process. Anchore advises that DoD and federal security teams use tools that can deliver rapid feedback into development. Security tools must integrate with typical CI/CD and container orchestration tools.  The tools you choose should also promote early-stage interaction with developers.
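As a simple sketch of what that rapid feedback can look like, an open source scanner such as Grype can be dropped into a CI job so the build fails when serious findings appear (the image reference is a placeholder):

grype registry.example.com/myapp:latest --fail-on high

The non-zero exit code stops the pipeline, so developers see security issues in the same place they see failing tests.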

2. Follow the 30/60/90 Rule to Keep Images Secure

Anchore recommends following the 30/60/90 rule to satisfy the guidance outlined in the DoD Cloud Computing Security Requirements Guide. This rule sets out the number of days to fix security issues: 

  • 30 days to fix critical vulnerabilities 
  • 60 days to fix high vulnerabilities
  • 90 days to fix moderate vulnerabilities 

In support of this, it is also strongly recommended to use a tool that allows security teams to update and validate vulnerability databases with new security data frequently. Not only is this necessary to satisfy Security Control RA-5(2), but using such a tool is a best practice to ensure your security data is timely and relevant.
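As one example, Grype, Anchore’s open source scanner, lets teams refresh and inspect its vulnerability database on demand:

grype db update
grype db status

The first command pulls the latest database; the second reports when the local database was built, which makes it easy to confirm that scan data is current.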

By following the 30/60/90 rule and ensuring that you update your vulnerability databases and feeds promptly, you empower your security teams to remediate new security challenges quickly and efficiently.

3. Make use of Tools that Support Container Image Allow/Deny Listing

Federal agencies should leverage container security tools that can enforce allowlisting and denylisting of container images. Maintaining allowlists and denylists is a common method of securing networks and software dependencies; however, it is less common in containerized environments.

This capability is crucial, as attackers can potentially use containers to deploy denylisted software into secure areas such as your DevOps toolchain. The elements and dependencies of a container may not always appear in a software bill of materials (SBOM) from existing scanning tools. Therefore it’s crucial that the tools used can examine the contents of a container and can enforce allowlist and denylist safeguards.

Anchore advises that container image denylisting should occur at the CI/CD stage to allow rapid feedback. By shifting security feedback left, developers receive immediate notice of issues. This allows for faster remediation, as denylisted container images, or the software contained within them, are immediately flagged to the developer.

4. Deploy a Container Security Tool that Maintains Strong Configuration Management over Container Images

Software delivery and security operations teams should maintain an accurate inventory of all software they deploy on any federal information system. This inventory gives both teams accurate situational awareness of their systems and enables more precise decision-making.

Anchore advises federal agencies to implement a container-native security tool that can systematically deconstruct and inspect container images for all known software packages and display findings for information security personnel in an organized and timely manner.

5. Use a Container Scanning Tool That Runs on IL-2 Through IL-6

The DoD and federal agencies must leverage tools that keep any vulnerability data regarding the information system within their authorization boundaries. However, many FedRAMP vulnerability scanning tools require an agent that connects to the vendor’s external cloud environment. The DoD designates this as interconnectivity between DoD/federal systems and the tool vendor, which would rule out the use of any agent/cloud-based tool within an IL-6 classified environment.

Where organizations still choose to implement an agent-based container security tool, they are then responsible for ensuring that the security vendor maintains an up-to-date accreditation for their cloud environment. The environment must also have the relevant RMF/FedRAMP security controls that the federal information system can inherit during the ATO process. In addition, any DoD or federal agency should ensure the agent-based tool can run in both classified/unclassified environments.

Learn how Anchore brings DevSecOps to DoD software factories.

6. Express Security Policy as Code

Where possible, select tools that enable your teams to define security policy as code. These tools enable security teams to establish and automate best practices that they can push to tools, either across the network or in more secure environments.

Expressing security policy as code also enables your ops teams to manage systems using existing software development life cycle (SDLC) techniques. For example, policy as code enables the versioning of security policies. Now teams can compare policy versions for configuration drift or other unexpected changes.
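As a minimal sketch, assuming the policy bundle lives in version control as policy_bundle.json (a hypothetical file name), ordinary tooling already gives you history and drift detection:

git log --oneline -- policy_bundle.json
git diff v1.0 v1.1 -- policy_bundle.json

The first command shows every change to the policy over time; the second compares two tagged versions so unexpected changes stand out.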

In essence, it enables the policies themselves to be subjected to the same level of rigor as the code they are applied against.

Policy as code shifts the onus of implementing new security policies left onto developers, so it can be important not to tighten container security policies too far in one step. Versioning also enables agencies to improve and tighten security policy over time.

This iterative approach towards improving security stops over-intrusive security policies from stalling development in the CI/CD pipeline. It prevents the emergence of any culture clash between developers and security operations. Security teams can begin with a policy base that delivers on minimum compliance standards and develop this over time towards evolving best practices.

Conclusion

Think of a FedRAMP Compliance Checklist as more than just a documented list of activities your teams need to perform to get your application FedRAMPed. Rather, think of it as a methodical and strategic approach for your developers and security teams to follow as part of holistic and proactive strategies for secure software development and government compliance. 

Download our FedRAMP containers checklist to help jump start your organization’s FedRAMP compliance checklist.

7 Tips to Create a DevSecOps Open Source Strategy

DevSecOps open source convergence isn’t always apparent to business stakeholders. Here at Anchore, we’re believers in the open sourcing of DevSecOps because open source software (OSS) is foundational to cloud-native software development. 

The Relationship between DevSecOps and Open Source

Open source technologies play a decisive role in how businesses and government agencies build their DevOps toolchains and capabilities. Entire companies have grown around open source DevOps and DevSecOps tools, offering enterprise-grade services and support for corporate and government customers. 

DevSecOps Adoption IRL

The adoption of DevSecOps across the public sector and industries such as financial services and healthcare has been full of challenges. Some may even call DevSecOps adoption aspirational.

Adopting DevSecOps starts with shifting left with security. Work on minimizing software code vulnerabilities begins day 1 of the project, not as the last step before release. You also need to ensure that all your team members, including developers and operations teams, share responsibility for following security practices as part of their daily work. Then you must integrate security controls, processes, and tools at the start of your current DevOps workflow to enable automated security checks at each stage of your delivery pipeline.

Open Source in the Thick of DevSecOps

DevOps and DevSecOps can find their roots in the open source culture. DevOps principles have a lot in common with open source principles.

Software containers and Kubernetes are perhaps the best-known examples of open source tools advancing DevSecOps. Containers are part of a growing open source movement that embodies some essential principles of DevSecOps, especially collaboration and automation. These tools can also help mitigate common threats such as outdated images, embedded malware, and insecure software or libraries.

The advantages of open source for DevSecOps include:

  • No dependency on proprietary formats like you would get with vendor-developed applications
  • Access to a vibrant open source community of developers and advocates trying to solve real-world problems
  • An inclusive meritocracy where good ideas can come from anywhere, not just a product manager or sales rep who’s a few layers removed from the problems users encounter every day during their work.

Creating a DevSecOps Open Source Strategy

Here are some tips about how to set a DevSecOps open source strategy:

1. Presenting Open Source to your Organization’s Leadership

While open source technologies are gaining popularity across commercial and federal enterprises, it doesn’t always mean that your management are open source advocates. Here are some tips for presenting open source DevSecOps solutions to your leadership team:

  • Open source technologies for a DevSecOps toolchain offer a low entry barrier to build a proof of concept to show the value of DevSecOps to your leadership team. Presenting a live demo of a toolchain carries much more weight than another PowerPoint presentation over another Zoom call.
  • Proper DevSecOps transformation requires a roadmap that moves your enterprise from the waterfall software development life cycle (SDLC) or DevOps to DevSecOps. Open source tools have a place on that roadmap.
  • Know the strengths and weaknesses of the open source tools you’re proposing for your DevSecOps toolchain, especially for compliance reporting.
  • Remember, implementing open source tools in your DevSecOps toolchain carries costs in work hours, implementation, operations, and security.

2. Establish OSS Governance Standards as an Organization

There are many ways that OSS can enter your DevSecOps pipeline outside of normal software procurement. Since OSS doesn’t come with a price tag, it’s easy for OSS to bypass your standard software procurement processes, and even your expense reports for that matter. If you’re building cloud-native applications at any sort of scale, you need to start wrapping some ownership and accountability around OSS.

Smaller organizations could assign a developer ownership and accountability over the OSS in their portion of the project. This developer would be responsible for generating the software bill of materials (SBOM) for the OSS under their responsibility.

Depending on the size of your development organization and use of OSS, it may make more sense to establish a centralized OSS tools team inside your development organization.

3. Place Collaboration before Bureaucracy

The mere words “software procurement” evoke images of bureaucracy and red tape in developers’ eyes, especially if they work for a large corporation or government agency. You don’t want to repeat that experience with OSS procurement. DevSecOps offers you culture change, best practices, and new tools to improve collaboration.

Here are some ways to message how open source procurement will be different for your developers from the usual enterprise software procurement process:

  • Count your developers and cybersecurity teams as full stakeholders and tap into their open source experience
  • Open and maintain communication channels between developers, legal, and business stakeholders through the establishment of an OSS center of excellence (CoE), open source program office (OSPO), or similar working group
  • Communicate with your developers through appropriate channels such as Slack or Zoom when you need input and feedback

4. Educate Your Stakeholders About the Role of OSS in DevSecOps

While your development teams may be all about OSS, that doesn’t mean the rest of your business stakeholders are. Use stakeholder concerns about the current security climate as an opportunity to discuss how OSS helps improve the security of your software development efforts, including:

  • OSS means more visibility into the code for your cybersecurity team, unlike proprietary software code 
  • OSS tools serve as the foundation of the DevSecOps toolchain, whether it’s code and vulnerability scanning, automation, testing, or container orchestration
  • DevSecOps and OSS procurement processes enable you to establish consistent security practices

5. Upgrade Your OSS Procurement Function

Your OSS procurement may still be entirely ad hoc, and there’s no judgment if that’s served your organization well thus far. However, we’re entering a new era of security and accountability as the software supply chain becomes an attack vector. While there’s no conclusive evidence that OSS played a role in recent software supply chain breaches, OSS procurement can set an example for the rest of your organization. A well-executed OSS procurement cycle brings OSS directly into your DevSecOps toolchain.

Here are some upgrades you can make to OSS procurement:

  • Establish an OSS center of excellence or go one step further and establish an open source program office to bring together OSS expertise inside your organization and drive OSS procurement priorities.
  • Seek out an executive sponsor for OSS procurement, because OSS adoption and procurement inside some enterprises aren’t easy; you will be navigating internal challenges, politics, and bureaucracy. A chief technology officer or VP of development is a natural candidate for this role. An executive-level sponsor can champion your efforts and provide high-level support to ensure that OSS becomes a priority for your development organization.
  • Encourage developer involvement in the OSS community, not only because it’s good for their careers but also because your organization benefits from the ideas they bring back to in-house projects.

6. Make Risk Management Your Co-Pilot

Your development team assumes responsibility for the OSS it uses: keeping it secure and ensuring your teams run the latest versions and security updates. Such work can take developers away from client-facing and billable projects. There are corporate cultures, especially in professional services and system integration, where developers must meet quotas for billable work. Maintaining OSS behind the scenes, when a customer isn’t necessarily paying, is a hard sell to management sensitive to their profit and loss.

A more cavalier approach is to move fast and assume the OSS in question is being kept up to date and secure by a robust volunteer effort.

Another option is outsourcing your OSS security and maintenance and paying for somebody else to worry about it. This solution can be expensive, even if you can find a vendor with the appropriate skills and experience.

7. Bring Together Developers + Business for DevSecOps Open Source Success

Software procurement in the enterprise world is an area of expertise all its own. When you take steps toward creating a more formalized OSS procurement cycle, it takes a cross-functional team to succeed, both with procurement and with the governance that follows. An Open Source Program Office can be the ideal home for just such a cross-functional team.

Your contracts and legal teams often don’t understand technology, much less OSS. Likewise, your developers won’t be knowledgeable about the latest in software licensing. 

Such a coming together won’t happen without leadership support and maybe even a little culture change in some organizations.

DevSecOps: Open Source to Enterprise Software

Compliance, whether it’s the United States government’s FedRAMP or commercial compliance programs such as Sarbanes-Oxley (SOX) for publicly traded companies and the Payment Card Industry Data Security Standard (PCI DSS) in the financial services industry, brings high stakes. For example, mission-critical government cloud applications can’t go live without passing an authority to operate (ATO). Financial and healthcare institutions face stiff fines and penalties if their applications fail compliance audits.

Beyond that, the breach of the week making headlines in mainstream and technology media is also driving DevSecOps decisions. Companies and federal agencies are doing what they can to avoid becoming another cybersecurity news story.

Such high stakes present a challenge for organizations moving to DevSecOps. Relying solely on open source solutions for a DevSecOps toolchain puts the onus of maintenance and patching on internal teams. There also comes a point when, for tools such as container scanning, your organization needs to look at enterprise offerings. Most often, the reason to move to an enterprise offering is compliance audits. For example, you may require enterprise-class reporting and a real-time feed of the latest vulnerability data to satisfy internal and external compliance requirements. Vendor backing and support also become a necessity.

Final Thought

A DevSecOps open source strategy comes from melding procurement, people, and DevSecOps practices together. Doing so lets your organization benefit from the innovation and security that open source offers while relying on DevSecOps practices to ensure collaboration throughout the whole development lifecycle, through to a successful product launch.

SBOM Tools: Drop an SBOM GitHub Action into your Workflow

This blog post has been archived and replaced by a supporting pillar page on anchore.com.

Anchore Enterprise 3.2 Provides Increased Visibility to Identify More Risks in the Software Supply Chain

Modern cloud-native software applications include software components and code from both internal developers and external sources such as open source communities or commercial software providers. Visibility into these components to identify vulnerabilities, security risks, misconfigurations, and bad practices is an integral part of securing the software supply chain. Anchore Enterprise 3.2 provides richer visibility into your software components so risks can be identified and quickly resolved.

Discover and Mitigate Vulnerabilities and Security Risks Found in SUSE Enterprise Linux Containers

SUSE container image scan results

Anchore Enterprise 3.2 can now scan and continuously monitor SUSE container images for any security issues present in installed SUSE packages to improve their security posture and reduce threats. SUSE packages are now included in SBOMs as well as a comprehensive list of files. Additionally, customers can apply Anchore’s customizable policy enforcement to SUSE packages and vulnerabilities.

Identify Vulnerabilities More Accurately with Our Next-Generation Scanning Engine

Anchore Enterprise 3.2 now uses our next-generation scanning engine that builds upon capabilities in our open source tool Grype and also delivers more accurate results. Users will benefit from the fast pace of innovation while gaining all of the additional features that are available in Anchore Enterprise such as false-positive management. In addition, Grype users switching to Anchore Enterprise will benefit from consistent results between the two solutions, simplifying the transition.

Note: Existing customers will need to select the next-generation engine in order to take advantage of these benefits. All new installations will default to the new scanning engine. For more information on how to switch, please see the release notes.

More Metadata Exposed in the UI for Policy Rules

New metadata tabs

Customers using Anchore Enterprise now have the ability to see additional SBOM file details in the UI that were previously available only through the API. This new UI visibility enables users to quickly and easily view data that can be instrumental in creating and tuning policy rules. The UI additions include secrets, for identifying credential information inadvertently included in container builds, and file content checks, which can be used for best practices such as making sure configurations are set correctly. The UI also now allows you to access retrieved files (files that you have designated to be saved during the scan) for further review and additional policy checks.

More Allowlist Customization Options in the UI

Allowlist customized by Trigger ID

Users now have additional Allowlist customization options in the UI. Allowlists enable development teams to continue working while issues are being investigated. Now, in addition to vulnerabilities, users can add other policy checks to Allowlists through the UI, which permits them to override specific policy violations for more accurate final pass or fail recommendations on image scans.

Expanding Container Security: Announcing Anchore Engine 1.0 and the Role of Syft and Grype

It’s been an amazing five years working with you, our users, with more than 74,000 deployments across more than 40 releases since we initially shipped Anchore Engine. Today, we are pleased to announce that the project has now reached its 1.0 milestone. Much has changed in the world of container security since our first release, but the need for scanning container images for vulnerabilities and other security risks has been a constant. 

Anchore Engine 1.0 includes a focused feature set that is the result of working directly with many organizations toward securing cloud-native development environments and also represents an update to Anchore’s overall approach to delivering DevSecOps-focused open source tools.

New Code, New Speed

Over and over again, we’ve heard that the three most important criteria for a container scanning tool are that it needs to be quick, it needs to be accurate, and it needs to be easy to integrate into existing development toolchains. To support those needs, last year we took the lessons learned over years of developing Anchore Engine and created two new command line tools: Syft and Grype. Syft generates a high-fidelity software bill of materials (SBOM) for containers and directories, and Grype performs a vulnerability analysis on the SBOMs created by Syft or against containers directly.
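As a quick illustration of how the two tools fit together, the same flow works against a local directory as well as a container image (the directory path and file name are illustrative):

syft dir:./my-project -o json > sbom.json
grype sbom:./sbom.json

Syft records what is in the project; Grype then matches that inventory against vulnerability data without needing to re-scan the source.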

With the release of 1.0, we’ve now refactored Anchore Engine to embed these stateless tools for core scanning functions, improving the speed of container image scans and ensuring parity between stand-alone stateless Syft/Grype scans and those produced in the stateful Anchore Engine service. We’ve also cut the time for the initial sync of the vulnerability DB from hours to seconds, getting new users up-and-running even faster than before.

Feed Service Deprecation

Prior to the 1.0 release, deployments of Anchore Engine periodically connected to our public feed service, hosted in the cloud. The vulnerability data would then be pulled down and merged into your local Anchore Engine database. This merge process often took a while due to the per-record insert-or-update process.

With the new Grype-based scanner, the vulnerability data is now managed by Grype itself through a single file transfer that updates the entire vulnerability database atomically. The vulnerability data is generated from the same sources as the public feed service, ensuring that you’ll see no drop in distro or package ecosystem coverage. Since Anchore Engine 1.0 will no longer use the public feed service, we plan to sunset the non-Grype feed service on April 4, 2022, a date chosen to give all users of Anchore Engine time to plan and execute upgrades to Anchore Engine 1.0+. After April 4, 2022, any existing deployments of Anchore Engine prior to 1.0 will continue to operate but will no longer receive new vulnerability data updates.

New Tools for CI/CD

When Anchore Engine was created, scanning was centered around the container registry. Now, with GitHub Actions, GitLab Runners, Azure Pipelines, and the ever-present Jenkins, scanning in the CI/CD pipeline is fast becoming the norm. We originally created our inline scanner script (inline_scan) to wrap around Anchore Engine and facilitate this workflow, and it did its job well. With Syft and Grype now delivering the same capabilities as inline_scan (and much more) in a faster and more efficient fashion, we are also retiring that project.

As of 1.0, we will no longer be updating inline_scan with new releases of Anchore Engine, and we will stop updating the existing inline_scan images after January 10, 2022. Similarly, our existing native CI/CD integrations based on inline_scan will be, or have already been, updated to use Grype internally.

The Road Ahead

Going forward, Syft and Grype will be the best choice for developers and DevOps engineers that need to integrate scanning into their CI/CD pipelines. Anchore Engine 1.0 will continue to play the role of providing persistent storage for the results of CI/CD scans as well as automated registry scanning. Because Anchore Engine 1.0 is built on the common core of Syft and Grype, you will get a consistent result regardless of where you need scans performed and which tools you use.

With the foundational role that SBOMs will play in software supply chain security and the fast moving changes to the various CI/CD platforms, we have an ambitious roadmap for Syft and Grype. Our goal is to make Syft the best open source tool for generating an SBOM and Grype the best tool for reporting discovered vulnerabilities. Anchore Engine will continue to receive updates and improvements, but with registry scanning requirements being relatively static we are not planning any major new capabilities at this time.

Our commercial solution, Anchore Enterprise, will continue to be focused on helping security teams manage the whole security lifecycle with centralized policy control, audits, and integrations with enterprise platforms. We are committed to ensuring that users who choose to use Syft, Grype, and Anchore Engine have a quick and easy path if and when they are ready to make the transition.

Anchore is committed to open source, recognizing that it is the best way to give DevOps teams the tools they need to move fast. Whether you use Syft, Grype, or Anchore Engine, we look forward to working together with you on your DevSecOps journey. You can connect with us on Discourse or GitHub. We’ll also be hosting a webinar on October 20, 2021 to walk you through our open source and enterprise solutions and explain the role and capabilities of each. Register here.

Important Resources

Anchore Engine 1.0

Syft

Grype 

Anchore Enterprise

Compare Anchore open source and enterprise solutions

Important Dates

  • October 1, 2021 – Anchore Engine 1.0 Available
  • October 20, 2021 – Webinar on Anchore open source and enterprise solutions. Register here.
  • January 10, 2022 – The inline_scan project will be retired. Users can switch to using Syft/Grype for stateless security scans. Executions of inline_scan will continue to function but will no longer receive vulnerability data updates. inline_scan will not be updated with Engine 1.0; it will remain on 0.10.x until retirement.
  • April 4, 2022 – The public feed service, ancho.re, will no longer be available. Anchore Engine users need version 1.0+ with the v2 scanner to receive new vulnerability data updates. Existing deployments will continue to function but will no longer receive vulnerability data updates.

The 3 Shades of SecDevOps

We live and work in a time of Peak Ops. DevOps. DevSecOps. GitOps. And SecDevOps, to name a few. It can be confusing to discern the reality through the marketing spin. However, SecDevOps is one new form of Ops that’s worth keeping in mind as you face new and emerging security and compliance challenges as your organization pulls out of the pandemic.

Here’s what I call the three shades of SecDevOps definitions:

SecDevOps: The Ops Definition

SecDevOps — also called rugged DevOps — places security first in the development process. SecDevOps and DevSecOps differ in the order of security considerations during the software development life cycle (SDLC). It's a nascent school of thought that goes so far as to pit SecDevOps against DevOps.

SecDevOps requires a thorough understanding of how the application works to identify how it can be vulnerable. Such an understanding gives you a clearer idea of how you can protect your application from security threats. Threat modeling during the SDLC is an industry best practice for gaining such an understanding.

There are two distinct parts to SecDevOps:

Security as Code

Security as Code (SaC) is when you build security into the tools and practices in your DevOps pipeline. Static application security testing (SAST) and dynamic application security testing (DAST) solutions automatically scan applications coming through the pipeline. SaC places priority on automation over manual processes. Manual processes do remain in place for security-critical components of the application. Implementing SaC is an essential element of DevOps toolchains and workflows.

Infrastructure as Code

Infrastructure as Code (IaC) refers to a set of DevOps tools for setting up and updating infrastructure components to ensure a hardened and controlled deployment environment. The same code development rules used for application code are applied to managing operations infrastructure, rather than the manual changes or one-off scripts that are still common today. With IaC, mitigating a system problem means deploying a configuration-controlled server instead of patching and updating servers already in production.

SecDevOps uses continuous and automated security testing starting before the application goes into production. It implements issue tracking to ensure the early identification of any defects. It also leverages automation and testing to provide effective security tests throughout the software development lifecycle.

SecDevOps: The DevSecOps Synonym 

Then again, some organizations use the term SecDevOps synonymously with DevSecOps. There's nothing wrong here. For example, a government agency focusing on security may use the term to mean DevSecOps. It's a matter of semantics: they want to emphasize the importance of security in their software development.

SecDevOps: The Marketing Spin Definition

The Ops market is full of competition. It’s natural for marketers to want to spin the definition of SecDevOps so that it best suits the products and solutions that their company is selling to prospective customers. The best way to digest a marketing spin definition is to define what SecDevOps means for your organization. Don’t let salespeople define SecDevOps for you.

Final thoughts

Regardless of your school of thought about the three shades of SecDevOps, it’s about the people, culture, processes, and technology. A positive outcome of our current age of Peak Ops is that we all have a lot to learn from other schools of Ops thought, so soak in the SecDevOps definition and see what you can learn from it to apply to your organization’s DevSecOps practices.

Drop an SBOM: How to Secure your Software Supply Chain Using Open Source Tools

In the past few years, the number of software supply chain attacks against companies has skyrocketed. The incessant threat is pushing organizations to start figuring out their own solutions to supply chain security. The recent Executive Order on Improving the Nation’s Cybersecurity also raises new requirements for software used by the United States government.

Securing the software supply chain is no easy task! Software supply chains continue to grow in complexity. Software may come from open source projects, commercial software vendors, and your internally-developed code. And with today’s cloud-native, container-centric practices, development teams are consuming, building, and deploying more software today than they ever have before.

So this raises the question: “Is your team deploying software that might lead to the next headline-grabbing supply chain hack?”

Some supply chain hacks happen when software consumers don’t realize they are using vulnerable software, while other hacks can occur when the origin or contents of the software has been spoofed by malicious actors.

If you’d like to avoid falling victim to these types of attacks, keep reading.

To start off, let’s step backward from the worst-case scenario…

  • I don’t want my company to make headlines by having a massive security breach, so…
  • I don’t want to deploy any software artifacts that are known to have vulnerabilities, so…
  • I need to know which of my installed software packages are vulnerable, so…
  • I need to know what my installed software packages are, so…
  • I need to analyze my artifacts to determine what software they contain.

The Ingredients of Supply Chain Security

Any effective solution to securing your supply chain must include two ingredients: transparency and trust. What does that mean?

Transparency: Discovering What is There

Inevitably, it all starts with knowing what software is being used. You need an accurate list of “ingredients” (such as libraries, packages, or files) that are included in a piece of software. This list of “ingredients” is known as a software bill of materials (SBOM). Once we have an SBOM for any piece of software we create or use, we can begin to answer critical questions about the security of our software supply chain.

It’s important to note that SBOMs themselves also serve as input to other types of analyses. A noteworthy example of this is vulnerability scanning — discovering known security problems with a piece of software based on previously published vulnerability reports. Detecting and mitigating vulnerabilities goes a long way toward preventing security incidents.

In the case of software deployed in containers, developers can use SBOMs and vulnerability scans together to provide better transparency into container images. When performing these two types of analyses within a CI/CD pipeline, we need to realize two things:

  1. Each time we create a new container image (i.e. an image with a unique digest), we only need to generate an SBOM once. And that SBOM can be forever associated with that unique image. Nice!
  2. Even though that unique image never changes, it’s vital to continually scan for vulnerabilities. Many people scan for vulnerabilities once an image is built, and then move on. But new vulnerabilities are discovered and published every day, so it’s vital to periodically re-scan any existing images we’re already consuming or distributing to identify whether they are impacted by new vulnerabilities (a minimal sketch of this pattern follows this list).
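As a rough illustration of that generate-once, scan-often pattern with Syft and Grype, where $IMAGE is a placeholder for your image reference:

# generate the SBOM once, when the image is built
syft "$IMAGE" -o json > sbom.json

# re-scan the saved SBOM on a schedule, without re-pulling or re-analyzing the image
grype sbom:./sbom.json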

Trust: Relying on What is There

While artifacts such as an SBOM or vulnerability report provide critical information about software at various points in the supply chain, software consumers need to ensure that they can rely on the origin and integrity of these artifacts.

Keep in mind that software consumers can be customers or users outside your organization or they can be other teams within your organization. In either case, you need to establish “trust”.

One of the foundational approaches to implementing “trust” is for software producers to generate artifacts (including SBOMs and vulnerability reports) that attest to the contents and security status of the software, and then sign those artifacts. Software consumers can then verify the software, SBOM, and vulnerability report for an accurate picture of both the contents and security status of the software they are using.

To implement signing and attestation, development teams have to figure out how to create the SBOM and vulnerability reports, which crypto technology to use for signing, how to manage the keys used for signing, and how to jam these new tools into their existing pipelines. It’s not uncommon to see a “trust” solution get misimplemented, or even neglected altogether.

Ideally, solving for trust would be easy and automated. If it were, development teams would be much more likely to implement it.

What might that look like? Let’s take a look at how we can build transparency and trust into an automated workflow.

Building the Workflow

To accomplish this, we’re going to use three open source CLI tools that are gaining traction in the trust and transparency spaces:

  • Cosign: container signing, verification, and storage in an OCI registry (one of the tools in the Sigstore project)
  • Syft: software bill of materials generator for container images and filesystems
  • Grype: vulnerability scanner for container images and filesystems

If you learn best by seeing a working example, we have one! Check out https://github.com/luhring/example-container-image-supply-chain-security. We’re using GitHub Actions and GitHub’s container registry, but these practices apply just as well to any CI system and container registry.

In our example, we’ve created a container image that’s been intentionally designed to have vulnerabilities, using this Dockerfile. But you should apply the steps below to your own container images that your team already builds.

Signing the Image

Since we’re adding trust and analysis for a container image, the first step is to provide a way to trust the origin and integrity of the container image itself. This means we need to ensure that the container image is signed.

For this, we’ll use Cosign. Cosign is a fantastic tool for signing and verifying container images and related artifacts. It can generate a public/private key pair for us, or it can hook into an existing key management system. For the simplicity of this demonstration, we’ll use “cosign.key” and “cosign.pub” files that Cosign generates for us.

Outside of our CI workflow, we’ll run this command, set a password for our private key, and store these files in GitHub as secrets.

cosign generate-key-pair
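For example, one hedged way to store those files as repository secrets is with the GitHub CLI; this assumes gh is installed and authenticated, and the secret names are placeholders you’d choose yourself:

# store the private key and its password as GitHub Actions secrets
gh secret set COSIGN_PRIVATE_KEY < ./cosign.key
gh secret set COSIGN_PASSWORD --body "the-password-you-chose"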

Then in our workflow, we can use these keys in our Cosign commands, such as here to sign our image:

cosign sign -key ./cosign.key "$IMAGE"

Conveniently, this command also pushes the new signature to the registry for us.
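Anyone holding the matching public key can check that signature before pulling or deploying the image. A minimal sketch:

cosign verify -key ./cosign.pub "$IMAGE"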

Analyzing the Image

Now that we have an image we can trust, we can begin asking critical questions about what’s inside this image.

Creating an SBOM

Let’s start by generating an SBOM for the image. For this, we’ll use Syft. Syft is a great tool for discovering what’s inside an image and creating SBOMs that can be leveraged downstream.

syft "registry:$IMAGE" -o json > ./sbom.syft.json

Having an SBOM on file for a container image is important because it lets others observe and further analyze the software packages found in the image. But we can’t forget: other people need to be able to trust our SBOM!

Cosign lets us create attestations for container images. Attestations allow us to make a claim about an image (such as what software is present) in such a way that can be cryptographically verified by others that depend on this information.

cosign attest -predicate ./sbom.syft.json -key ./cosign.key "$IMAGE"

Like with the “sign” command, Cosign takes care of pushing our attestation to the registry for us.

Scanning for Vulnerabilities

Okay, now that we have an SBOM that we can trust, it’s critical to our security that we understand what vulnerabilities have been reported for the software packages in our image. For this, we’ll use Grype, a powerful, CLI-based vulnerability scanner.

We’ll use Grype to scan for vulnerabilities using the SBOM from Syft as the target.

grype sbom:./sbom.syft.json -o json > ./vulnerability-report.grype.json
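If you also want the pipeline itself to gate on these results, Grype can return a non-zero exit code when it finds vulnerabilities at or above a chosen severity. For example:

grype sbom:./sbom.syft.json --fail-on high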

Just as we did with our SBOM, we’re going to “attest” this vulnerability report for our image, which allows others to trust the results of our scan.

cosign attest -predicate ./vulnerability-report.grype.json -key ./cosign.key "$IMAGE"

Remember that it’s crucial that we continuously scan for vulnerabilities since new vulnerabilities are reported every day. In our example repo, we’ve set up a nightly pipeline that looks for the latest SBOM, verifies the attestation using Cosign, and if valid, uses the SBOM to perform a new vulnerability scan.
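As a rough sketch of those nightly steps, reusing the jq pattern from the verification example later in this post (the exact attestation layout may vary by Cosign version):

# verify the attestations, extract the attested Syft SBOM, and re-scan it
cosign verify-attestation -key ./cosign.pub "$IMAGE" \
  | jq --slurp 'map(.payload | @base64d | fromjson | .predicate.Data | fromjson | select(.descriptor.name == "syft")) | first' > ./sbom.syft.json
grype sbom:./sbom.syft.json -o json > ./vulnerability-report.grype.json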

Why Did We Do All of That?

Armed with an attested SBOM and vulnerability report, consumers of our image can depend on both the contents of the software and our scan results to understand what software vulnerabilities are present in the container image we’ve shipped.

Here’s how someone could leverage the trust and transparency we’ve created by verifying the attestations in their own workflow:

cosign verify-attestation -key ./cosign.pub "$IMAGE"

If the attestation can be verified, cosign retrieves all of the attestation data available for the container image. Depending on the scenario, there can be a large amount of attestation data. But this opens up a lot of options for downstream users that are depending on the ability to trust the container images they’re consuming.

Here’s an example of how to use an attestation to find all of the reported vulnerabilities for a container image. (This command assumes you’ve downloaded the cosign.pub public key file from our example).

cosign verify-attestation -key ./cosign.pub ghcr.io/luhring/example@sha256:1f2d8339eda7df7945ece6d3f72a3198bf9a0e92f3f937d4cf37adcbd21a006a | jq --slurp 'map(.payload | @base64d | fromjson | .predicate.Data | fromjson | select(.descriptor.name == "grype")) | first | .matches | map(.vulnerability.id) | unique'

Yes, it’s a long command! But that’s only because we’re using the console. This gets much easier with tooling designed to verify signatures and attestations on our behalf.

Final Thoughts

Now is the time to invest in the security of your software supply chain. Equipped with the knowledge above, you can start to bake trust and transparency into your own pipelines right now. This mission can be daunting — it’s difficult or impossible without great tooling.

We’ve now built a workflow that uses Cosign, Syft, and Grype to:

  1. provide transparency into the contents and the security of our container image, and
  2. create trust that the output from any given step is valid to use as the input to the next step.

With Cosign, Syft, and Grype, you’re much closer to securing your supply chain than you were before. Here’s how to get started with each:

And most importantly, remember that these tools are entirely open source. Please consider giving back to these communities! Add issues, open PRs, and talk to us in the Sigstore and Anchore community Discourse forum. The biggest challenges are overcome when great minds come together to create solutions.

7 Principles of DevSecOps Automation

This post has been archived and replaced by a supporting pillar page.

5 DevSecOps Best Practices for Hybrid Teams

As we put away our beach chairs and pool toys, now that Labor Day is past us, it’s time to refresh your DevSecOps best practices if your organization is moving employees back to the office on a part-time basis. While your developers should capitalize on their remote work wins, hybrid work can require different approaches than what has been in place during the past 18+ months.

Here are some DevSecOps practices to consider if your development teams are moving to a hybrid work model:

1. Reinforce Trust and Relationships 

The pandemic-forced remote work we’ve all been through has provided invaluable collaboration, empathy, and trust lessons. Your work to continuously improve trust and relationships on your teams doesn’t stop when some team members begin to make their way back to the office.

A challenge to be wary of with hybrid DevSecOps teams is that some team members get face time with managers and executives in the office while remote employees don’t. A common concern is that two (or more) classes of employees develop in your organization.

There can be cultural issues at play here. Then again, work from home (WFH) anxiety and paranoia can be real for some people. Pay close attention and keep communication between team members open as you venture into hybrid work. Provide parity for your meetings by giving onsite and remote participants an equal platform. Another good rule is to communicate calmly and with candor. Such acts will help reinforce trust across your teams.

2. Review your DevOps/DevSecOps Toolchain Security

The move to remote work opened commercial and public sector enterprises to new attacks as remote work pushed endpoints outside the traditional network perimeter. In pre-pandemic times, endpoint security in commercial and public sector organizations was very much centralized.

Securing the DevSecOps pipeline is, in some ways, an underserved security discussion. The DevOps and DevSecOps communities spend so much time discussing delivery velocity and shifting security left that the actual security of the toolchain, including identity and access management (IAM), zero trust architecture (ZTA), and other measures, gets far less attention. The benefit of these measures is that only authorized employees can access your toolchain.

Use the move to hybrid work to review and refresh your toolchain security against “man in the middle” and other attacks that target hybrid teams.

3. Improve your DevSecOps Tools and Security Reporting

End-to-end traceability gains added importance as more of your executives and stakeholders return to a new state of normalcy. Use your organization’s move to hybrid work to improve security and development tools reporting across your pipelines. There are some reasons for this refresher:

  • Deliver additional data to your management and stakeholders about project progress through your pipelines during your hybrid work move. Be proactive and work with stakeholders during the transition to see if they have additional reporting requirements for their management.
  • Update your security reporting to reflect the new hybrid working environment that spans both inside and outside your traditional endpoints and network perimeter.
  • Give your team the most accurate, data-driven picture of the current state of software development and security across your projects.

4. Standardize on a Dev Platform

Hybrid work reinforces the need for your developers to work on a standardized platform such as GitLab or GitHub. The platform can serve as a centralized, secure hub for software code and project artifacts accessible to your developers, whether they are working from home or in the office. Each platform also includes reporting tools that can help you further communicate with your management about the progress and security of your projects. 

If your developers are already standardized on a platform, use the move to hybrid work to learn and implement new features. For example, GitLab 14 now integrates Grype for container security, and GitHub includes GitHub Actions, which makes it easy to automate CI/CD workflows.

5. Refine your Automation Practices

DevSecOps automation isn’t meant to be a one-and-done process. It requires constant analysis and feedback from your developers. With automation, look for areas to improve, such as change management and other tasks that you need to adapt to hybrid work. Make it a rule: if hybrid work changes a workflow for your teams, it’s a new opportunity to automate!

Final thoughts

If you view DevOps and, in turn, DevSecOps as opportunities for continuous improvement, then DevSecOps best practices for hybrid work are another step in your DevSecOps journey. Treat it as the same learning experience as when your organization sent your team home in the early days of COVID-19. 

DevOps Supply Chain Security: A Case for DevSecOps

DevOps supply chain security is becoming another use case for DevSecOps as enterprises seek innovative solutions to secure this attack vector. In the 2021 Anchore Software Supply Chain Report, 60% of respondents consider securing the software supply chain a top or significant focus area. DevSecOps gives enterprises the foundational tools and processes to support this security focus.

Anatomy of a Software Supply Chain Attack

A software supply chain is analogous to a manufacturing supply chain in the auto industry. It includes anything that impacts your software, especially open source and custom software components. The sources for these components come from outside an organization such as an open source software (OSS) project, third-party vendor, contractor, or partner.

The National Institute of Standards and Technology (NIST) has a concise and easy-to-understand definition of software supply chain attack:

A software supply chain attack occurs when a cyber threat actor infiltrates a software vendor’s network and employs malicious code to compromise the software before the vendor sends it to their customers. 

Many organizations see increased value from in-house software development by adopting open source technology and containers to build and package software for the cloud quickly. Usually branded as Digital Transformation, this shift comes with trade-offs rarely highlighted by vendors and boutique consulting firms selling the solutions. You can get past these trade-offs with OSS by establishing an open source program office (OSPO) to manage your OSS governance.

These risks are not limited to criminal hacking, and fragility in your supply chain comes in many forms. One type of risk comes from single contributors who could object morally to the use of their software, like what happened when one developer decided he didn’t like President Trump’s support of ICE and pulled his package from NPM. Or, unbeknownst to your legal team, you could distribute software without a proper license, as with any container that uses Alpine Linux as the base image. 

Why DevSecOps for Software Supply Chain Security?

DevSecOps practices focus on breaking down silos, improving collaboration, and of course, shifting security left to integrate it early in the development process before production. These and other DevSecOps practices are foundational to secure cloud-native software development.

Software supply chain security in the post SolarWinds and Codecov world is continuously evolving. Some of the brightest minds in commercial and public sector cybersecurity are stepping up to mitigate the risks of potential software supply chain attacks. It’s a nearly impossible task currently. 

Here are some reasons why DevSecOps is a must for software supply chain security:

Unify your CI/CD Pipeline

The sooner you can unify your CI/CD pipeline, the sooner you can implement controls, allowing your security controls to shift left, according to InfoWorld. Implementing multiple controls across multiple systems is a recipe for disaster.

Unifying your CI/CD pipeline also gives you another opportunity to level-set on current tool standards and upgrade tools as necessary to improve security and compliance.

Target Dependencies in Software Code

A DevSecOps toolchain gives you the tools, processes, and analytics to target dependencies in the software code coursing through your software supply chain. Less than half of our software supply chain survey respondents report scanning open source software (OSS) containers and using private repositories for dependencies.

Unfortunately, there’s no perfect solution to detecting your software dependencies. Thus, you need to resort to multiple solutions across your DevSecOps toolchain and software supply chain. Here are some traditional solutions:

  • Implement software container scanning using a tool such as Anchore Enterprise (of course!) at critical points across your supply chain, such as before checking containers into your private repository
  • Analyze code dependencies specified in the manifest file or lock files
  • Track and analyze dependencies that your build process pulls into the release candidate
  • Examine build artifacts before they enter your registry via tools and processes

The appropriate targeting of software dependencies raises the stature of the software bill of materials (SBOM) as a potent software supply chain security measure. 

Use DevSecOps Collaboration to Break Down DevOps Supply Chain Barriers

DevSecOps isn’t just about tools and processes. It also instills improvements in culture, especially for cross-team collaboration. While DevSecOps culture is a work in progress for the average enterprise (and it should be that way), a renewed focus on software supply chain security is cause to extend your DevSecOps culture to the contractors and third-party suppliers that make up your software supply chain.

DevSecOps frees your security team from being the last stop before production. They are free to be more proactive at earlier stages of the software supply chain through frameworks, automated testing, and improved processes. Collaborating with the security team takes on some extra dimensions with software supply chain security because they’ll deal with some additional considerations:

  • Onboarding OSS securely to their supply chain
  • Intaking third-party vendor technologies while maintaining security and compliance
  • Collaborating with contractor and partner security teams as a player-coach to integrate their code into their final product

Structure DevSecOps with a Framework and Processes

As companies continue to move to the cloud, it’s becoming increasingly apparent they should integrate DevSecOps into their cloud infrastructure. Some pain points will likely arise, but their duration will be short and their payoffs large, according to InfoQ.

A DevSecOps framework brings accountability and standardization leading to an improved security posture. It should encompass the following:

  • Visibility into dependencies through the use of automated container scanning and SBOM generation
  • Automation of CI/CD pipelines through the use of AI/ML tools and other emerging technologies
  • Mastery over the data that your pipelines generate gives your technology and cybersecurity stakeholders the actionable intelligence they require to respond effectively to technical issues in the build lifecycle and cybersecurity incidents

Final Thoughts

As more commercial and public sector enterprises focus on improving the security posture of their software supply chains, DevSecOps provides the framework, tools, and culture change that can serve as a foundation for software supply chain security. Just as important, DevSecOps also provides the means to pivot and iterate on your software supply chain security in the interests of continuous improvement.

Want to learn more about supply chain security? Download our Expert Guide to Software Supply Chain Security White Paper!

4 Kubernetes Security Best Practices

Kubernetes security best practices are a necessity now that Kubernetes is becoming a de facto standard for container orchestration. Many of the best practices focus on securing Kubernetes workloads, and managers, developers, and sysadmins need to make a habit of instituting them early in their move to Kubernetes orchestration.

Earlier this year, respondents to the Anchore 2021 Software Supply Chain Security Report replied that they use a median of 5 container platforms. That’s testimony to the growing importance of Kubernetes in the market. ”Standalone” Kubernetes (that are not part of a PaaS service) are used most often by 71 percent of respondents. These instances may be run on-premise, through a hosting provider, or on a cloud provider’s infrastructure. The second most used container platform is Amazon ECS (56%), a platform-as-a-service (PaaS) offering. Tied for third place (53%) are Amazon EKS, Azure Kubernetes Services, and Red Hat OpenShift.

A common industry definition for a workload is the amount of activity performed, or capable of being performed, within a specified period by a program or application running on a computer. The definition is often loosely applied and can describe a simple “hello world” program or a complex monolithic application. Today, the terms workload, application, software, and program are used interchangeably.

Best Practices

Here are some Kubernetes security best practices to keep in mind:

1. Enable Role-Based Access Control

Implementing and configuring Role-Based Access Control (RBAC) is necessary when securing your Kubernetes environment and workloads.

Kubernetes 1.6 and later enable RBAC by default (later for HAProxy); however, if you’ve upgraded since then and haven’t changed your configuration, you should double-check it. Due to how Kubernetes authorization controllers are combined, you will have to enable RBAC and disable legacy Attribute-Based Access Control (ABAC).

Once you start enforcing RBAC, you still need to use it effectively. You should avoid cluster-wide permissions in favor of namespace-specific permissions. Don’t give just anyone cluster admin privileges, even for debugging; it is much more secure to grant access only as needed.
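For illustration, a namespace-scoped, read-only grant can be created with kubectl; the role, user, and namespace names here are placeholders:

# allow a specific user to read pods only within one namespace
kubectl create role pod-reader --verb=get,list,watch --resource=pods --namespace=dev-team
kubectl create rolebinding pod-reader-binding --role=pod-reader --user=jane --namespace=dev-team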

2. Perform Vulnerability Scanning of Containers in the Pipeline

Setting up automated Kubernetes vulnerability scanning of containers in your DevSecOps pipelines and registries is essential to workload security. When you automate visibility, monitoring, and scanning across the container lifecycle, you can remediate more issues in development before your containers reach your production environment.
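As one hedged example, a pipeline step using an open source scanner such as Grype can block a build when serious findings appear; the image reference and severity threshold below are placeholders:

grype registry:registry.example.com/team/app:1.2.3 --fail-on critical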

Another element of this best practice is to have the tools and processes in place to enable the scanning of Kubernetes secrets and private registries. This is another essential step as software supply chain security continues to gain a foothold across industries. 

3. Keep a Secret

A secret in Kubernetes contains sensitive information, such as a password or token. Even though a pod cannot access the secrets of another pod, it’s vital to keep a secret separate from an image or pod. A person with access to the image would also have access to the secret. This is especially true for complex applications that handle numerous processes and have public access.
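For illustration, creating the secret out-of-band keeps the sensitive value out of the image entirely; the secret name and value below are placeholders:

kubectl create secret generic db-credentials --from-literal=password='s3cr3t'

The pod then references db-credentials through an environment variable or volume mount rather than baking the password into the image.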

4. Follow your CSP’s Security Guidelines

If you’re running Kubernetes in the cloud, then you want to consult your cloud service provider’s guidelines for container workload security. Here are links to documentation from the major CSPs:

Along with these security guidelines, you may want to consider cloud security certifications for your cloud and security team members. CSPs are constantly evolving their security offerings, so just consulting the documentation when you need it may not be enough for your organization’s security and compliance posture.

Final thought

Kubernetes security best practices need to become second nature to operations teams as their Kubernetes adoption grows. IT management needs to work with their teams to ensure the best practices in this post and others make it into standard operating procedures if they aren’t already.

Want to learn more about container security practices? Check out our Container Security Best Practices That Scale webinar, now on-demand!

Cloud Migration Security Challenges: 5 Ways DevSecOps Can Help

DevSecOps is playing a growing role in cloud migrations, especially in the public sector. Even before the Executive Order on Improving the Nation’s Cybersecurity, agencies had to approach cloud migrations with an eye on security to ensure their cloud projects met FedRAMP compliance.

Here are some ways that DevSecOps can help your agency or organization meet cloud migration challenges:

1. Improves Information Processing

When a DoD or other government program moves to the cloud and a DevSecOps model, it fundamentally transforms how they interact with data. DevSecOps gives government agency and DoD programs the tools, processes, and frameworks to develop applications quickly and capitalize on data, helping them respond to data-intensive mission challenges such as big data analysis, fraud detection, and trend analysis.

Information is power now more than ever, considering government responses to natural disasters, COVID-19, and other threats on the world stage. For example, DevSecOps gives development teams in the public sector a new ability to migrate legacy applications to the cloud securely, enabling access so they can open them up to a new hybrid workforce.

2. Provides Security by Design for New Cloud Projects

“Shift security left” is a common refrain about DevSecOps. More importantly, DevSecOps brings security by design to public sector cloud projects.

When you consider DevSecOps as part of your program’s cloud migration strategy, DevOps and security teams can collaborate on workload protection, secure landing zones, operating models, network segmentation, and the implementation of zero trust architecture (ZTA) because both teams get input and buy-in during the design phase in regards to functional requirements, data flows, and workstreams. 

DevSecOps, by its nature, also provides the feedback loops and collaboration channels that you don’t find in the public sector’s legacy model of long-term contracts, multiple vendors, and silos between developers, cybersecurity, stakeholders, and constituents.

3. Automation of Builds and Testing

Automation is becoming one of the keys to security and overall success with public sector cloud projects. Implementing a DevSecOps toolchain or upgrading your existing DevOps toolchain for DevSecOps provides the tools for automation of container security scanning and compliance checks.

With some government contracting pundits saying up to 80% of agency IT staff’s daily work is just keeping the lights on, moving technical staff to more mission-critical and strategic work will benefit the program. A cloud migration — by its very nature — requires some time for your teams to learn and harness the latest cloud services. Being able to retask team members from fairly rote tasks such as running software builds to critical tasks such as implementing new cloud services benefits government programs small and large and, in turn, the taxpayer.

4. Supports Secure Iteration of Cloud Applications

Following a DevSecOps methodology gives you a secure method for iterating on application features. For example, let’s say your agency is moving a legacy application to the cloud. Moving legacy agency applications to the cloud requires a process that secures the application and its data from inside the agency data center all the way into the cloud. If the choice is made to refactor your application, your users can benefit from new cloud services that improve security and user experience (UX).

DevSecOps adds a new layer of security over these everyday development tasks:

  • Adding new features using DevSecOps can help the project gain the delivery velocity of a consumer app store versus the quarterly or yearly feature releases common to public sector software development
  • Allowing applications to take advantage of containers and microservices architectures
  • Enabling application optimization using the cloud service provider’s infrastructure that wasn’t previously available in agency data centers

Another option is to rebuild a legacy application for the cloud. Moving to DevSecOps and containers brings with it significant code changes. Still, such an investment could be worth it depending on the purpose of the application, and the changing user and constituent landscape as remote and hybrid work grow in dominance.

5. Sets a Foundation for a Security Culture

DevSecOps and moving to the cloud require a cultural transformation for today’s public sector agencies to meet cloud migration security challenges. Bringing DevSecOps into your program’s cloud migration process is another step in making security part of everybody’s job.  When your cloud migration and development teams adopt DevSecOps, it opens up new opportunities for reporting that enable you to best communicate the progress and security status of your cloud migrations to your internal stakeholders. 

DevSecOps and Cloud Benefits in Full View

The DoD and the public sector are gradually realizing the benefits of DevSecOps and the cloud. Bringing DevSecOps into your cloud migration framework gives you new tools to maintain security and compliance of your legacy applications and data as they leave your agency data centers and make their journey to the cloud.

Download our Expert Guide to DevOps to DevSecOps Transformation to learn more about DevSecOps to help prepare for your next cloud migration security challenges!

Advancing Software Security with Technical Innovation

As we explore the various roles and responsibilities at Anchore, one of the critical areas is building the roadmap for our enterprise product.  Anchore Enterprise is a continuous security and compliance platform for cloud-native applications. Our technology helps secure the software development process and is in use by enterprises like NVIDIA and eBay as well as government agencies like the U.S. Air Force and Space Force. 

As news of software supply chain breaches continue to make headlines and impact software builds across industries, the team at Anchore works each day to innovate and refine new technology to support secure and compliant software builds. 

With this, Anchore is thrilled to announce an opening for the role of Principal Product Manager. Our Vice President of Product, Neil Levine, weighs in on what he sees as key elements to this role:  

“Product managers are lucky in that we get to work with almost every part of an organization and are able to use both our commercial and technical skills. In larger organizations, a role like this often gets more proscribed and the ability to exercise a variety of functions is limited. Anchore is a great opportunity for any PM who wants to enjoy roaming across a diverse range of projects and teams. In addition to that, you get to work in one of the most important and relevant parts of the cybersecurity market that is addressing literal front-page news.”

Are you passionate about security, cloud infrastructure or open-source markets? Then apply for this role on our job board.

The Power of Policy-as-Code for the Public Sector

As the public sector and businesses face unprecedented security challenges in light of software supply chain breaches and the move to remote, and now hybrid, work, the time for policy-as-code is now.

Here’s a look at the current and future states of policy-as-code and the potential it holds for security and compliance in the public sector:

What is Policy-as-Code?

Policy-as-code is the practice of writing code to define and manage the policies you create for container security and other related security concerns. Your IT staff can automate those policies to support policy compliance throughout your DevSecOps toolchain and production systems. Policy-as-code is expressed in a high-level language and stored in text files.

Your agency is most likely getting exposure to policy-as-code through cloud services providers (CSPs). Amazon Web Services (AWS) offers policy-as-code via the AWS Cloud Development Kit. Microsoft Azure supports policy-as-code through Azure Policy, a service that provides both built-in and user-defined policies across categories that map the various Azure services such as Compute, Storage, and Azure Kubernetes Services (AKS).

Benefits of Policy-as-Code

Here are some benefits your agency can realize from policy-as-code:

  • Capturing the information and logic behind your security and compliance policies as code removes the risk of “oral history,” where sysadmins may or may not pass down policy information to their successors during a contract transition.
  • When you render security and compliance policies as code in plain text files, you can use various DevSecOps and cloud management tools to automate the deployment of policies into your systems.
  • Policy-as-code provides guardrails for your automated systems. As your agency moves to the cloud, the number of automated systems only grows, and a responsible growth strategy protects those systems from performing dangerous actions; policy-as-code is a suitable way to verify their activities.
  • A longer-term goal would be to manage your compliance and security policies in your version control system of choice with all the benefits of history, diffs, and pull requests for managing software code.
  • You can now test policies with automated tools in your DevSecOps toolchain.

Public Sector Policy Challenges

As your agency moves to the cloud, it faces new challenges with policy compliance while adjusting to novel ways of managing and securing IT infrastructure:

Keeping Pace with Government-Wide Compliance & Cloud Initiatives

FedRAMP compliance has become a domain specialty unto itself. While the United States federal government maintains control over the policies behind FedRAMP and its future updates and changes, FedRAMP compliance has become its own industry with specialized consultants and toolsets that promise to get an agency’s cloud application through the FedRAMP approval process.

As government cloud initiatives such as Cloud Smart become more important, the more your agency can automate the management and testing of security policies, the better. Automation reduces human error because it does away with the manual and tedious management and testing of security policies.

Automating Cloud Migration and Management

Large cloud initiatives bring with them the automation of cloud migration and management. Cloud-native development projects that accompany cloud initiatives need to consider continuous compliance and security solutions to protect their software containers.

Maintaining Continuous Transparency and Accountability

Continuous transparency is fundamental to FedRAMP and other government compliance programs. Automation and reporting are two fundamental building blocks. The stakes for reporting are only going to increase as the mandates of the Executive Order on Improving the Nation’s Cybersecurity become reality for agencies.

Achieving continuous transparency and accountability requires that an enterprise have the right tools, processes, and frameworks in place to monitor, report, and manage employee behaviors throughout the application delivery life cycle.

Securing the Agency Software Supply Chain

Government agencies are multi-vendor environments with heterogeneous IT infrastructure, including cloud services, proprietary tools, and open source technologies. The recent release of the Container Platform SRG is going to drive more requirements for the automation of container security across Department of Defense (DoD) projects.

Looking to learn more about how to utilize a policy-based security posture to meet DoD compliance standards like cATO or CMMC? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories. Get caught up with the content below:

Policy-as-Code: Current and Future States

The future of policy-as-code in government could go in two directions. The same technology principles that apply to rendering technology and security policies as code can also render any government policy as code. An example of that is the work that 18F is prototyping for SNAP (Supplemental Nutrition Assistance Program) food stamp eligibility.

Policy-as-code can also serve as another automation tool for FedRAMP and Security Technical Implementation Guide (STIG) testing as more agencies move their systems to the cloud. Look for the backend tools that can make this happen gradually to improve over the next few years.

Managing Cultural and Procurement Barriers

Compliance and security are integral elements of federal agency working life, whether it’s the DoD supporting warfighters worldwide or civilian government agencies managing constituent data to serve the American public better.

The concept of policy-as-code brings to mind being able to modify policy bundles on the fly and pushing changes into your DevSecOps toolchain via automation. While theoretically possible with policy-as-code in a DevSecOps toolchain, the reality is much different. Industry standards and CISO directives govern policy management at a much slower and measured cadence than the current technology stack enables.

API integration also enables you to integrate your policy-as-code solution into third-party tools such as Splunk and other operational support systems that your organization may already use as your standards.

Automation

It’s best to avoid manual intervention for managing and testing compliance policies. Automation should be a top requirement for any policy-as-code solution, especially if your agency is pursuing FedRAMP or NIST certification for its cloud applications.

Enterprise Reporting

Internal and external compliance auditors bring with them varying degrees of reporting requirements. It’s essential to have a policy-as-code solution that can support a full range of reporting requirements that your auditors and other stakeholders may present to your team.

Enterprise reporting requirements range from customizable GUI reporting dashboards to APIs that enable your developers to integrate policy-as-code tools into your DevSecOps team’s toolchain.

Vendor Backing and Support

As your programs venture into policy compliance, failing a compliance audit can be a costly mistake. You want to choose a policy-as-code solution for your enterprise compliance requirements with a vendor behind it for technical support, service level agreements (SLAs), software updates, and security patches.

Vendor backing matters for technical support in particular: policy-as-code isn’t a technology to support using only your own internal IT staff (at least in the beginning).

With policy-as-code being a newer technology option, a fee-based solution backed by a vendor also gets you access to their product management. As a customer, you want a vendor that will let you access their product roadmap and see the future.

Interested to see how the preeminent DoD Software Factory Platform used a policy-based approach to software supply chain security in order to achieve a cATO and allow any DoD programs that built on their platform to do the same? Read our case study or watch our on-demand webinar with Major Camdon Cady.

The Broad Impact of Software Supply Chain Attacks

The broad impact of software supply chain attacks is clear in the findings of our recent 2021 Anchore Supply Chain Security Report. As malicious actors continue to advance the threat landscape in creative and alarming ways, Anchore commissioned a survey of 400+ enterprises with at least 1,000 employees to find out how real the impact is.

A whopping 64% of respondents to our survey reported that a supply chain attack had affected them in the last year. Furthermore, a third of those respondents report that the impact on their organizations was moderate or significant.

Scanning Challenges Abound

Enterprises facing these supply chain attacks also have to work through container scanning challenges. 86% of respondents reported challenges in identifying vulnerabilities. Too many false positives are a challenge for 77% of the respondents. On average, respondents estimate that 44% of vulnerabilities found are false positives. Getting developers to spend time on remediating issues was a challenge for 77% of respondents.

Corporate and government agency moves to DevOps and DevSecOps mean collaboration among development, security, and operations teams is more important than ever before. 77% of organizations are designating Security Champions within Dev teams to facilitate tighter collaboration.

Chart: Organizations affected by software supply chain attacks in the last 12 months

Enterprise Security Focus: The Software Supply Chain 

Against a backdrop of recent high-profile software supply chain attacks, 46 percent of respondents indicated that they have a significant focus on securing the software supply chain while an additional 14 percent have prioritized it as a top focus. 

Very few (3%) of the respondents indicated that software supply chain security isn’t a priority at all.

Chart: Focus on securing the software supply chain

The DevOps Toolchain: An Enterprise Blind Spot

Experts have identified development platforms and DevOps toolchains as a significant risk point for software supply chain security. When attackers compromise a toolchain or development platform, they gain access to all the different applications that move through your development pipeline. This opens the door for bad actors to insert malicious code or backdoors that can be exploited once the developer deploys the software in production or (even worse) shipped to customers. 

A critical best practice is to leverage infrastructure-as-code (IaC) to configure each platform or tool in the development process and ensure it is secured properly. Just over half of respondents are using IaC to secure these various platforms.

Chart: Using IaC to secure the DevOps toolchain

Do you want more insights into container and software supply chain security? Download the Anchore 2021 Software Supply Chain Security Report!

5 Tips for Improving your DevOps Methodology Post-COVID

The time is now to review your current DevOps methodology and look for areas of improvement. The fog is lifting off our pandemic-enforced lockdowns and your teams have most definitely learned a lot during the past year-plus of remote work. Most of all, your teams have had to stretch and pivot because your in-place development methodologies weren’t ideal for remote or hybrid team working models.

1. Conduct a Post Mortem of your DevOps Methodology During the Pandemic

DevOps is about continuous learning and feedback. As your organization opens up to a hybrid work model or doubles down on remote work, take the time to take stock of how your methodology fared during the pandemic.

Talk to your development and operations teams about what technology worked during the pandemic for them. Ask about your toolchains, automation, and security. Did your teams have to do any workarounds in the remote world? If so, make sure you capture and document them for future reference and for the benefit of your organization.

Also, reflect on the temperature of your DevOps culture. Did your teams not miss a beat once you moved to a remote working model? Did working remotely adversely affect your collaboration and communication between team members? Are there concerns about how a hybrid work model affects team collaboration and communications?

Such a post-mortem doesn’t mean calling yet another meeting. You can push out questions through your Slack channels plus meetings and standups that are already on your calendars.

2. Review your Definition of DevOps Success

It’s important to have a definition of DevOps success for your organization. Some common measurements of DevOps success include:

  • Availability and uptime
  • Work in progress
  • Repository speed
  • Deployment frequency
  • Deployment stability

The first step here is to evaluate whether your pre-pandemic measurements of DevOps success still apply today. Review your current measurements against your operations and performance during the past year-plus of remote working. Be sure to document any changes to your measurements and communicate that to your DevOps teams.

3. Update Developer Onboarding Training with your DevOps Methodology

While you shift your operations to a post-pandemic working model, whether remote or hybrid, take the time to review and update your developer onboarding training. Here are some examples of onboarding training items to capture or update and communicate:

  • Communications and collaboration channels for escalation of issues and problem solving
  • Accounts and access to cloud services
  • Documentation of your DevOps Methodology in written and graphical form
  • Training on your toolchain in written and graphical form (Bonus points for a video)

Standardize practices and create reference materials to ensure that your teams are following the DevOps methodology you set for your organization, not each developer’s own interpretation of DevOps. Some developers may also carry forward DevOps practices from their previous employer.

4. Automate your Workflows

According to the 2021 Anchore Software Supply Chain Security Report, only 47% of the respondents have automated remediation workflows to help developers fix issues that are identified in scanning.

Now is the time to review the automation (or lack thereof) of your remediation workflows. While you’re at it, take stock of other automation opportunities across your DevOps pipelines to help improve the productivity of your development organization.

5. Conduct a DevOps to DevSecOps Transformation

Cybersecurity lessons are all around us now. There comes a time when even a finely tuned and maintained DevOps methodology needs to support additional security requirements. As new breaches make headlines, new government executive orders (EOs) hit the street, and attack vectors multiply, now could be the time for your organization to make a full-blown DevOps to DevSecOps transformation. Making such a transformation is the best way to lock down the security of your DevOps toolchains and software supply chain.

Final Thoughts

A DevOps methodology should represent the current state of a development organization complete with lessons learned. Taking a continuous improvement approach especially after such a life-changing event as the pandemic makes good business sense. Work to maintain your development team’s agility and prepare to face challenges in the new world of work together.

Do you want to learn more about DevOps pipelines? Check out our on-demand webinar, How To Secure Your DevOps Pipeline In a Post-SolarWinds World!

What’s Critical Software? NIST Responds

Part of President Biden’s May 12, 2021 Cybersecurity Executive Order (EO) is for the National Institute of Standards and Technology (NIST) to define critical software, in support of the goal of strengthening the security of government-purchased software.

NIST recently delivered on that aim, and the definition of critical software has the potential to influence government IT as a whole. Plus, the companies that sell IT services and software into the federal government could feel the impact in unexpected ways. Let’s take a closer look…

The Meaning of Critical Software

Software runs the United States federal government. It powers the processing of government benefits. Government researchers on the front lines of the pandemic rely on cloud computing power. America’s warfighters rely on software to run logistics, process real-time intelligence, and support R&D. Here’s the official NIST definition of critical software:

EO-critical software is defined as any software that has, or has direct software dependencies upon, one or more components with at least one of these attributes:

  • is designed to run with elevated privilege or manage privileges;
  • has direct or privileged access to networking or computing resources;
  • is designed to control access to data or operational technology;
  • performs a function critical to trust; or,
  • operates outside of normal trust boundaries with privileged access.

This definition strikes at the heart of software in government and DoD infrastructure. Critical software manages access to mission-critical systems that play key roles in government programs including support of the warfighter, intelligence community (IC), and other government programs that support the health and vitality of United States citizens and the economy.

The preliminary list of software categories includes software at every level of the technology stack:

  • Identity, credential, and access management (ICAM)
  • Operating systems, hypervisors, container environments
  • Web browsers
  • Endpoint security
  • Network control
  • Network protection
  • Network monitoring and configuration
  • Operational monitoring and analysis
  • Remote scanning
  • Remote access and configuration management
  • Backup/recovery and remote storage

NTIA defines “Critical to trust” as “categories of software used for security functions such as network control, endpoint security, and network protection.”

Critical Software: Definition + Dependencies

As required by the order, NIST’s definition of critical software focuses heavily on software that has elevated privileges or controls access to an organization’s computing resources along with related direct software dependencies. Here’s the definition from NIST:

“Direct software dependencies,” means, for a given component or product, “other software components (e.g., libraries, packages, modules) that are directly integrated into, and necessary for operation of, the software instance in question. This is not a systems definition of dependencies and does not include the interfaces and services of what are otherwise independent products.”

The direct software dependencies mentioned are where the definition of critical software gets interesting. Agency and DoD software development bring together the work of multiple vendors and open source projects. Government programs are going to need a level of software transparency that maybe wasn’t in their previous requests for proposals (RFPs).

It’s the NIST definition of dependencies that gives new meaning to critical software. Dependencies mean software libraries, packages, and modules that are necessary for software operations. NTIA has quite a task deciding how deep a software bill of materials (SBOM) should go when capturing software composition.

Industry Feedback & SBOM Visibility

Industry feedback has been sent to the NTIA asking for flexibility, especially on how deep an SBOM should be required to go in describing its transitive dependencies, according to NextGov.

While there’s still an industry need for a definitive SBOM standard, the comments focus on SBOM limitations due to software versioning and identification issues. They also cite the importance of context when it comes to vulnerability information. Lastly, they mention the level of effort and resources required to prepare SBOMs.

While the SBOM isn’t a new thing in software development, it’s not in as wide use as you’d expect. President Biden’s cybersecurity executive order has the potential to finally give the SBOM some teeth.

To be effective, SBOM generation requires automation and, just as importantly, a definitive industry standard. Automated generation of SBOMs is a natural step in a DevOps or DevSecOps toolchain. Platform-centric toolchains can make this a new feature or API-based integration. Traditional DevOps/DevSecOps toolchains are open for integration as well.
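
As a rough illustration of that automation, here’s a minimal sketch of a pipeline step that generates an SBOM with Syft and stores it alongside the build artifacts. The image name, output path, and output format flag are assumptions to check against your own pipeline and Syft version.

```python
#!/usr/bin/env python3
"""Sketch: generate an SBOM as an automated pipeline step.

Assumes Syft is on the PATH and that `syft <target> -o spdx-json` is a valid
invocation for your Syft version; the image name and output path are
placeholders for whatever your pipeline actually builds.
"""
import pathlib
import subprocess
import sys

IMAGE = "registry.example.com/team/app:latest"        # hypothetical build output
SBOM_PATH = pathlib.Path("artifacts/app.spdx.json")   # stored with build artifacts

def generate_sbom(image: str, out_path: pathlib.Path) -> None:
    out_path.parent.mkdir(parents=True, exist_ok=True)
    result = subprocess.run(
        ["syft", image, "-o", "spdx-json"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # Fail the pipeline step so a missing SBOM never goes unnoticed.
        sys.exit(f"SBOM generation failed: {result.stderr}")
    out_path.write_text(result.stdout)
    print(f"SBOM written to {out_path}")

if __name__ == "__main__":
    generate_sbom(IMAGE, SBOM_PATH)
```

Storing the SBOM as a build artifact means later pipeline stages, auditors, or downstream consumers can inspect exactly what shipped without rebuilding the image.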

Government acquisitions and procurement officials have yet to enter the critical software discussion. The definition that NTIA is championing is going to have an impact on how requests for proposals (RFPs), statements of work (SOWs), and even other transaction authority (OTA) procurement vehicles are written. Post-cybersecurity EO RFPs must now account for these new software transparency requirements.

Such additional considerations may expand the scope of RFPs. Expect a learning curve from agency contracting officers and respondents alike during the first round of IT proposals.

Do you want to generate SBOMs on the OSS in your development projects? Download Syft, our open source CLI tool for generating a Software Bill of Materials (SBOM) from container images and filesystems.

Settling into a Culture of Kindness

Blake Hearn (he/him) joined Anchore in February 2020 as a DevSecOps Engineer on the Customer Success team, marking the start of both Blake’s professional career and entry into DevSecOps.  In this Humans of Anchore profile, we sat down with Blake to talk about learning new skill sets, a culture of kindness, and lessons from leadership.   

From his start at Anchore, Blake has been immersed in a team of kind and supportive people offering him the mentorship, resources, and encouragement needed to be successful.

“The whole team really helped me learn at a fast rate. They created training materials and testing environments for me to learn, checked in with me frequently, and even recommended some certifications which played a huge role in building a foundational knowledge of DevSecOps.  A year and a half ago I didn’t know anything about Docker, Jenkins or Kubernetes and now I’m using them every day.” 

Blake’s support system reaches far beyond his direct team, extending all the way to the executives and co-founders of the company. 

“I’ve had a really great experience with my managers and the leadership team. Being able to reach out to the CEO or CTO is amazing.  Dan Nurmi (CTO/Co-Founder) has open office hours each week where I can bring my technical questions and feel comfortable doing so. Everyone at Anchore is really collaborative. I can ask anyone a question and they are more than willing to help.” 

In his role, Blake spends most of his day working on the Platform One team at the Department of Defense (DoD) partnering with engineers from companies across the industry to help deliver software solutions faster and more securely across the DoD.

“It’s been a really good opportunity for me to learn from both my Anchore team and my Platform One team. My role requires a lot of custom Helm templating and testing updates on various Kubernetes clusters.  We are putting our minds together to come up with solutions and do groundbreaking work.”

Looking ahead, Blake is eager to continue his learning journey. “I’m excited to continue learning from others and get into new skill sets. Recently, I’ve learned a little bit about the operational side of Machine Learning (ML) and how ML could be used in cybersecurity. Next, I would like to get into penetration testing to help improve the security posture of products and services. I think that would provide a huge benefit to our customers – especially with the supply chain attacks we’ve seen recently in the news.”

In summarizing his time at Anchore, Blake is grateful for the support system he has found: “I didn’t think companies like Anchore existed – where the company’s culture is so kind, everyone is really smart, works well together, and you have direct access to leadership.  No other company I’ve seen compares to Anchore.” 

Interested in turning your dreams into reality? Check out our careers page for our open roles anchore.com/careers

 

Developing Passionate and Supportive Leaders

Anchore’s management program is founded on passionate people leaders who are kind, open, and invest in their team’s success.  Finding passionate leaders means opening the door to opportunities for all employees. We empower Anchorenauts to apply for management roles and participate in a cross-functional interview process.     

A few months into Dan Luhring’s (he/him) time at Anchore, a management role opened up in the Engineering organization.  When the Director of Engineering asked if anyone on the team was interested in pursuing the role, Dan immediately raised his hand. 

“When I interviewed for the manager position with the leadership team, I was glad that I was going through a formal process because it made me realize that Anchore understands how vitally important great managers are to the success of the company.”

Upon joining the Anchore management team, all leaders go through a robust training program where they learn more about different communication and working styles, coaching conversations, and the guiding principle of Anchore’s management philosophy: building trusting relationships.

“I love our manager training series.  I thought the role-playing exercises were really thoughtfully done and have been missing from manager training I’ve done in the past. Between the training sessions, ongoing employee programs, and overall partnership, I feel really supported by our People Operations team in my role.” 

Anchore’s continuous performance model enables our managers to set a strong foundation of trust and clear communication from day one.  Although Dan had already been working with his team before becoming a manager, the Stay Interviews gave Dan even more insight into his new direct reports. 

“I got a ton of value out of the Stay Interviews with my direct reports. It’s really useful to know what motivates people, how they like to receive recognition and feedback, and what their long-term career goals are.  It made me more aware of their professional interests outside of their day-to-day responsibilities. Because I know the motivators of my direct reports, I can assign special projects based on individual interest, even if it’s not something they do in their core role.”  

Reflecting on his opportunity to join the management team, Dan is excited to be part of making Anchore a great place to work and continuing to lead his team based on trust.    

“There are things that Anchore gets right that I find to be really unique. We are thoughtful about who we promote into the management team.  We have great support and autonomy with helpful programs and tools to facilitate trusting relationships, really caring about the people who report to us and wanting to help them achieve their career goals.”

Interested in becoming a team leader like Dan? View Anchore’s current openings here.

Anchore Enterprise 3.1 Streamlines End-to-End Container Security

Container security is a team sport.  Development teams need to avoid delays by finding and fixing security issues early in development, while DevOps teams must check compliance before they deploy. Security teams must continuously monitor for new vulnerabilities that impact production environments. Collaboration among these teams is required for efficient and effective security processes. Anchore Enterprise 3.1 adds new capabilities to expand automation of container security across these stages from development to production. This will advance the team’s collective goals and ultimately, speed to market.

Runtime Image Monitoring for Continuous Security

Anchore Enterprise 3.1 makes it easy to monitor your running containers and quickly evaluate images for security and compliance risks. Security teams can now watch entire Kubernetes clusters, gain visibility into overall risk in production, and be alerted of new vulnerabilities. Our new UI makes it easy to layer in our extensive policy language and start using Anchore’s admission controller to enforce security across your critical Kubernetes clusters. Watch a video of Runtime Image Monitoring in action.

New AnchoreCTL Client Automates Pipeline Scanning 

Designed for use with Anchore Enterprise, AnchoreCTL is a new command-line client that makes it easier to automate container scanning within the CI/CD pipeline. With AnchoreCTL, customers can distribute scanning tasks across their CI/CD platforms and pipelines, increasing throughput and reducing time-to-analyze for Anchore Enterprise. 

AnchoreCTL incorporates the capabilities of Anchore open source tools Syft (SBOM generator) and Grype (vulnerability scanner) while adding support for the reporting and compliance APIs in Anchore Enterprise. AnchoreCTL is also fully supported under the Anchore Enterprise support agreement and SLAs. Those who are ready to move from open source to Anchore Enterprise will benefit from an easy migration path from Syft and Grype to AnchoreCTL.  AnchoreCTL can be installed through a binary, container, or a growing number of package managers. Watch a video of AnchoreCTL here.

Simplified STIG Compliance for US Federal Agencies

The Federal Edition of Anchore Enterprise 3.1 greatly simplifies the process of DISA Security Technical Implementation Guide (STIG) checks for containers running in a Kubernetes cluster. With Anchore’s new cloud-native STIG tool, REM, federal agencies can fully automate what was once a time-consuming manual process. The results of STIG checks are aggregated and correlated within Anchore Enterprise, providing security teams with a single pane of glass to report on STIG compliance issues along with vulnerabilities and other compliance checks. Watch a video of STIG Compliance Checks.

Kubernetes Adoption by the Numbers

Our recent 2021 Anchore Supply Chain Security Survey sheds some light on Kubernetes adoption and growth in the enterprise as it pertains to running container workloads. 

For this blog post, container platforms are Kubernetes-based platforms that run containerized applications, whether during development and testing, staging, or production. These platforms may run in house, through a hosting provider, or from a cloud provider or another vendor.

K8s Stands Alone

Perhaps the most interesting Kubernetes stat in the survey is that 71% of respondents are using a “standalone” version of Kubernetes that’s not part of a platform as a service (PaaS); instead, it’s run on-premises or on cloud infrastructure as a service (IaaS).

The second most used container platform is Amazon Elastic Container Service (ECS) at 56%.

53% of the respondents are using Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service, and Red Hat OpenShift for their container management and orchestration.

We’re at an interesting point for Kubernetes adoption, as these numbers show. While there’s a well-known Kubernetes skills gap, organizations are still relying on their own teams, most likely augmented with outside contractors, to deploy and operate Kubernetes. While the major cloud service providers (CSPs) are the logical platform for outsourcing Kubernetes infrastructure and the related backend management tasks, the numbers show they are still gaining mindshare in the Kubernetes market.

Container Platforms Used

K8s and Large Workloads

Cloud-native software development is now delivering at enterprise scale on major development projects that include 1,000+ containers. Here’s the spread of Kubernetes adoption on these business and mission-critical projects:

  • Standalone Kubernetes (7%)
  • Amazon ECS (7%)
  • Amazon EKS (7%)
  • Azure Kubernetes Service (7%)
  • SUSE-Rancher Labs (6%)

This tight spread of K8s platforms paints an interesting picture of the scale at which these large enterprise projects operate. Standalone Kubernetes, Amazon ECS, Amazon EKS, and Azure AKS are all tied. The continued presence of standalone Kubernetes is a testimony to early adopters and the growing reliance on open source software in large enterprises.

It’ll be interesting to revisit this question next year after large enterprises have gone through more than a year of COVID-19 driven cloud migrations which could give CSP offerings a decided advantage in the new world of work.

Looking Forward

Kubernetes is still experiencing exponential growth. The Kubernetes responses in our survey speak to a future that’s still being written.

The complexities around deploying and operating Kubernetes still remain and aren’t going to disappear anytime soon. That means that the open source projects and CSPs offering Kubernetes solutions are going to have to focus more on simplicity and usability in future releases. Along with that comes a renewed commitment for outreach, documentation, and training for their Kubernetes offerings.

Do you want more insights into container and software supply chain security? Download the Anchore 2021 Software Supply Chain Security Report!

A Custom Approach to Software Security Solutions

We’re hiring a Product Marketing Manager! In this week’s Be Yourself, With Us, SVP of Marketing Kim Weins shares the exciting opportunities within the role. 

“Product marketing at a startup like Anchore provides a lot of room to leave your stamp, since our product is evolving quickly based on problems our customers need to solve,” said Kim.

Anchore’s customer base ranges from large enterprises like NVIDIA and eBay to government agencies like the U.S. Space Force and the U.S. Air Force. Being nimble to create custom solutions is critical for our expanding software security products.

“On top of that, we’re in a rapidly growing industry with a solution at the nexus of cloud, containers and security. There’s immense potential for what Anchore can provide for customers and the Product Marketing Manager is going to have a huge impact on how these solutions are communicated to the rest of the industry,” she continued.

Are you passionate about the future of software security and curious about the next innovation that will help secure data and prevent cyberattacks? Then consider joining our marketing team. Visit this link to apply.

Secure the Software Supply Chain: 5 Insights from the 2021 Anchore Software Supply Chain Security Report

The challenge of building and maintaining a secure software supply chain continues to vex enterprise IT leaders. We recently surveyed IT, security, and development leaders in the Anchore 2021 Software Supply Chain Security Report to get some insights into these challenges they and their teams face daily.

Here’s a preview of our survey results:

Highlights from the Survey

Container usage is on the rise as these highlights from the survey show:

  • 65% of the respondents replied they are at intermediate or advanced levels of container maturity
  • 84% plan to increase container use and 29% will increase container use significantly. Respondents use containers for both internal applications and software products they sell.
  • 38% of advanced container users see containerized apps as a higher supply chain risk versus 16% of beginner container users

1. Open Source is the Top Container Security Challenge 

Developers incorporate a significant amount of open source software (OSS) in the containerized applications they build. As a result, 23% of respondents rank securing OSS containers as the number one challenge. In a tie for second place (19%) is understanding the security of code that an organization writes themselves and understanding the full software bill-of-materials (SBOM).  SBOMs are a critical part of President Biden’s Executive Order because they are the foundation for many security and compliance practices.

Open Source is the Top Container Security Challenge

2. Software Supply Chain Attacks Cut Deep

With over 18,000 organizations affected by the SolarWinds attack alone, a software supply chain attack has affected a significant majority (64%) of respondents within the last twelve months. Over a third of the respondents report that the impact of a software supply chain attack on their organizations was moderate or significant.

Software Supply Chain Attacks Cut Deep

3. Containers and Software Supply Chain Risk

We saw an interesting statistic in the survey with 38% of advanced container users seeing containerized apps as a higher supply chain risk versus just 16% of beginner container users. 

These stats paint an intriguing picture of container adoption entering middle age when you look at the rise of Docker containers since 2013 leading into the current generation of cloud-native development. Long-time container users recognize the security risks of containers inside the supply chain, while a new generation of developers adopting containers is starting on its own learning journey.


4. OSS, SBOM, and Container Security Rank as Challenges

Open source software (OSS) ranks as a top container security challenge according to 23% of the survey respondents. Meanwhile, the software bill of materials (SBOM) is a top challenge for 19% of the respondents.

Another interesting insight from the survey is that some enterprises still underestimate OSS, pegging it at just 26% of their components and code, while industry benchmarks such as the Synopsys 2020 Open Source Security and Risk Analysis Report point to OSS comprising 70% or more of the components and code in today’s applications.

Bar chart showing comparison of top container security challenges.

5. The Truth about False Positives

We hear a lot about container vulnerability scanning challenges and the damage that false positives can do to a security team’s credibility. Survey respondents laid out their top three challenges:

  • Identifying vulnerabilities (86%) 
  • Receiving too many false positives (77%)
  • Getting developers to remediate issues (77%)

On average, survey respondents estimate that 44% of vulnerabilities they find are false positives.

 

Do you want more insights to help build and maintain a secure software supply chain? Download the Anchore 2021 Software Supply Chain Security Report!

Carving a Career Path That Fits

Startups come with many opportunities – the ability to partner directly with leadership, to move quickly with decision making, and to work on a variety of projects across the organization. At Anchore, we have intentionally designed our internal programs to provide employees with equitable opportunities for mentorship and career growth. 

Through our continuous performance model, we built opportunity into the foundation of our company culture. We do this by ensuring every employee (regardless of background, tenure, or level) feels empowered to raise their hand with an idea, question, or express interest in a new project. Anchorenauts have ample opportunity to expand their skills as they work towards short-term and long-term career goals.  

Instead of focusing solely on linear career paths, we give employees the opportunity to pursue other roles or career aspirations.  

Andre Neufville (he/him) joined Anchore in November 2019 on the Customer Success team, with a focus on designing solutions that integrate container security best practices into customer DevSecOps workflows.  “My role was to interface with customers and prospects, help them understand how to operate Anchore in containerized environments and integrate container scanning in their development workflows, but there was also an added sales aspect that I hadn’t done before.”

Client service wasn’t always the focus for Andre.  Prior to Anchore, he worked on systems administration and network security. 

“Early on I developed an interest in understanding the components of a secure enterprise network and used that knowledge to design better security around systems and network architectures. At the same time, cloud adoption grew and companies began developing modernization strategies for enterprise infrastructure. I transitioned to the role of a Cloud Security Architect in which I was able to apply my previous experience to advise customers on how to secure their cloud infrastructure and workloads.”

When Anchore’s InfoSecurity and IT team was expanding, Andre expressed interest in the role during a continuous performance discussion and was supported by his manager to pursue the opportunity. The IT Security Engineer role proved to be the perfect opportunity to combine his past experiences and current interests (Andre is also in the process of getting his Master’s degree in Cybersecurity Technology).

“In the past, I partnered with and advised customers on architecting solutions without the ownership of seeing it through. The InfoSec role has given me an opportunity to apply the same principles internally, but rather than just advising how it should be implemented, I get to follow through and look for areas of improvement. The whole end-to-end approach really intrigued me and my general affinity towards security that I’ve had in all my roles. I’m grateful for the opportunity to be a part of our internal IT Security initiatives and look forward to learning and growing in the role.”

Supporting employees to pursue alternative career opportunities within our organization is an integral part of Anchore’s culture – truly embodying our core values of kindness, openness, and ownership. For more on our open roles, check out our careers page here.

3 Tips for getting Stakeholder Buy-in for DevSecOps

Gaining stakeholder buy-in for DevSecOps comes with some upfront work. You don’t want to present to your department’s leadership, much less your C-Suite, to talk about DevSecOps unless you have an accurate picture of where your development teams are currently and where they need to go in the future.

Here are three tips for preparing to get stakeholder buy-in for DevSecOps:

1. Analyze your Development Process Maturity

Whether DevSecOps is just the next step in your DevOps journey or you’re making your initial foray into DevSecOps straight from a waterfall SDLC, a critical step in the first phase is to analyze the maturity of your software development process. Your analysis should include:

  • Document any current state processes
  • Gather any reporting data about your current development processes
  • Interview key developers about what’s working and not currently working in your development processes
  • Interview key security team members about what’s working well and what’s not working in their processes and procedures that support your applications in development and production

Before presenting this information to your stakeholders, distill it down into the key points in a format that’ll resonate with your stakeholders. For example, if you work in a data-driven organization, then let the numbers tell your story. Non-technical stakeholders may also need a quick DevOps to DevSecOps education that hones in on how DevSecOps benefits their piece of the business. 

2. Define DevSecOps for your Organization

Software vendor marketing and the OSS community each put their spin on the definition of DevSecOps. Therefore, as part of your outreach, it’s important to define DevSecOps for your organization, including:

  • What DevSecOps means to your organization
  • The expected outcomes after moving to DevSecOps
  • The tools and processes your organization is putting into place to ensure employee success

Spare your teams from any misunderstandings and document your DevSecOps definition. Post that definition in a place that’s accessible to all your team members and stakeholders. It’s not about creating a project charter for your DevOps to DevSecOps transformation, but defining your true north.

3. Plan for a DevSecOps Culture

Like DevOps, you can’t buy DevSecOps. Your managers and key technology team members need to work together to foster the cultural philosophies that take your organization from its DevOps foundation through a DevSecOps transformation.

Culture can be a squishy word to some stakeholders. It’s important to couch DevSecOps culture in business terms with an eye for how it benefits the organization. A simple way to do this is to create a DevSecOps roadmap with milestones for each major transformation point, including:

Continuous Feedback and Interaction

Cross-functional DevSecOps teams may collaborate remotely, which can create challenges with continuous feedback. It’s not about a manager delivering feedback on the DevSecOps team’s performance. Instead, it’s about enabling teams to collaborate more effectively. ChatOps tools such as Slack, Microsoft Teams, and Mattermost can now replace email for DevSecOps teams. As technology such as artificial intelligence (AI) improves, you can expect to see more automation through chatbots.
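
As a small illustration, here’s a hedged sketch of pushing a scan summary into a ChatOps channel through an incoming webhook instead of email. The webhook URL and payload shape are placeholders; Slack, Teams, and Mattermost each document their own webhook formats.

```python
#!/usr/bin/env python3
"""Sketch: push a scan summary into a ChatOps channel instead of email.

The webhook URL is a placeholder; most ChatOps tools offer incoming webhooks,
though the exact payload shape varies by tool and should be checked against
its documentation.
"""
import json
import urllib.request

WEBHOOK_URL = "https://chat.example.com/hooks/SECRET"  # hypothetical webhook

def post_summary(new_findings: int, image: str) -> None:
    # A short, channel-friendly message; richer formatting is tool-specific.
    message = {
        "text": f"{new_findings} new vulnerabilities found in {image}. "
                "See the latest scan report for details."
    }
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(message).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    post_summary(new_findings=3, image="registry.example.com/team/app:latest")
```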

Container-Based Architectures

The shift to cloud-native applications is driving the adoption of container-based delivery models. DevSecOps plays a critical role in the move to container-based architectures, which can be a cultural change in and of itself for DevOps teams. A proper and robust implementation of containers changes developer and operations cultures because it changes how architects design solutions, how programmers create code, and how operations teams maintain production applications.

Team Autonomy

Like DevOps, DevSecOps is no place for micromanagers at any level of your organization. A standard part of DevSecOps culture is enabling your teams to choose their own tools and create their processes based on the way they work. DevSecOps also promotes distributed decision models to support greater innovation and delivery velocity.

Automation

DevSecOps extensively embeds automation for security checks and remediation workflows directly into DevOps processes and toolchains. An automation strategy that extends to security is a sign of a healthy DevSecOps culture. 
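
One common form this automation takes is a pipeline gate that blocks a build when a scan finds serious issues. Below is a minimal sketch assuming a scanner like Grype that supports a --fail-on severity option; the image name and severity threshold are placeholders for your own pipeline values.

```python
#!/usr/bin/env python3
"""Sketch: a pipeline gate that blocks deployment on serious findings.

Assumes your Grype version supports the `--fail-on <severity>` flag, which
makes Grype exit non-zero when a match at or above that severity is found;
the image name and threshold are placeholders.
"""
import subprocess
import sys

IMAGE = "registry.example.com/team/app:latest"  # hypothetical image under test
THRESHOLD = "high"                              # block on high or critical

def security_gate(image: str, threshold: str) -> int:
    # Let the scanner's exit code drive the pipeline result.
    result = subprocess.run(["grype", image, "--fail-on", threshold])
    if result.returncode != 0:
        print(f"Security gate failed: findings at or above '{threshold}' severity.")
    return result.returncode

if __name__ == "__main__":
    sys.exit(security_gate(IMAGE, THRESHOLD))
```

Because the gate simply returns the scanner’s exit code, any CI system that treats a non-zero exit as failure will stop the build without extra configuration.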

DevSecOps Training for Developers

Another step to security becoming part of everyone’s job is to provide security training for your developers. Training could take the form of in-house developer training in casual formats such as Lunch and Learns or more formal training classes conducted by your organization’s training department. Another option is to send your developers to a third-party training provider to get the requisite security training. Depending on your security ambitions (and budget), there is always the option to send your DevOps team members to get a vendor certification such as the DevSecOps Foundation certification from the DevOps Institute or the Certified DevSecOps Professional (CDP) from practical-devsecops.com.

Final Thought

Preparing your case to advance your DevOps journey with data, a current state picture of your development processes, and a plan to transform your development team culture lets you meet your stakeholders with the facts and strategy they need to grant budget and staffing to the effort.

Behind the Scenes of Startup Team Strategies

Building products in an emerging tech space requires a highly collaborative and creative Engineering team. We sat down with Chief Architect & Director of Engineering Zach Hill (he/him) to understand more about creating an environment of psychological safety and operating with a growth mindset.

“At a high level, psychological safety is about fostering an environment where people feel comfortable saying ‘I don’t know’ – because it’s safe to do so and we’ll figure it out together. We’re a fast-paced startup with high-tech products in a fairly emerging industry, and that means much of what we are working on is developing as we go and we are all learning and growing together.”

Anchore’s values of kindness, openness, and ownership play an integral role in the Engineering team culture. 

“You can see it in our Slack channels. You can see it in the way people on the team interact with one another and with their managers. We have a highly collaborative team that’s open to asking for help and being mentors to one another.  It’s an exciting time to be an Engineer at Anchore.” 

If you are interested in joining Zach’s team and working at a company that values kindness, ownership and openness, we’re hiring across our Engineering organization.

The Current State of the Container Registry

A container registry is becoming a necessity for organizations using containers in cloud-native development projects because it enables them to reuse software components that have already been through a vulnerability scan and other compliance checks. 

Here’s a look at the current state of container registries:

What’s a Container Registry?

A container registry, sometimes called a container hub, is a centralized repository of container images that an organization develops, vets, and secures to support the reuse of containers. When your organization establishes a container registry, they gain the technical foundation to create a reuse strategy that can help increase development velocity through the strategic reuse of software. For example, when a team reuses containers that have already gone through vulnerability scans and other security checks there’s no need to repeat those checks on the container at points during the DevOps lifecycle.
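
One lightweight way to encourage that reuse is to check, before a build runs, that every base image comes from the organization’s vetted registry. The sketch below is illustrative only; the registry prefix and Dockerfile location are assumptions, and it only handles simple FROM lines.

```python
#!/usr/bin/env python3
"""Sketch: enforce reuse of vetted images from an internal registry.

The registry prefix and Dockerfile path are placeholders; the idea is simply
that any base image outside the approved registry has not been through the
organization's vulnerability and compliance checks.
"""
import pathlib
import sys

APPROVED_PREFIX = "registry.example.com/approved/"  # hypothetical internal hub

def base_images(dockerfile: pathlib.Path) -> list[str]:
    # Collect the image reference from every FROM instruction.
    images = []
    for line in dockerfile.read_text().splitlines():
        parts = line.split()
        if parts and parts[0].upper() == "FROM":
            images.append(parts[1])
    return images

def check(dockerfile: pathlib.Path) -> int:
    unapproved = [img for img in base_images(dockerfile)
                  if not img.startswith(APPROVED_PREFIX)]
    for img in unapproved:
        print(f"Base image not from the approved registry: {img}")
    return 1 if unapproved else 0

if __name__ == "__main__":
    sys.exit(check(pathlib.Path("Dockerfile")))
```

Run as a pre-build step, a check like this nudges teams toward images that have already passed the registry’s vetting instead of pulling arbitrary public images.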

Container Registry Use Cases

Here are some examples of container hub use cases that are bubbling up right now:

Healthcare

As more healthcare applications migrate to the public cloud and to mobile apps because of COVID-driven telehealth initiatives, a container hub offers healthcare institutions such as a regional hospital system a secure and centralized repository of containers that are in compliance and now available for reuse.

For example, telehealth is changing user expectations for user experience (UX) and security.  It’s incumbent on healthcare institutions to implement a container hub that can help their developers meet the changing market expectation for slick and consumer-like experiences they would expect with any app they download from Google Play or the iTunes Store.

Financial Services

Another industry ripe for container registries is financial services. The financial services industry had been embracing the cloud even pre-pandemic to feed consumer demand for online banking services. In turn, this strategic move fuels the need for a DevOps to DevSecOps transformation and for containers to support the security and compliance the industry requires to protect the personal and financial information of its customers.

A container registry inside a financial institution offers its developers and outside partners secure and vetted reusable containers that they can reuse across projects.

Public Sector & DoD

Public sector agencies and Department of Defense (DoD) programs are prime candidates for container hubs because they have security and compliance requirements they must maintain to protect government data and applications from attacks.

An example of a DoD container registry is Iron Bank (more on that later), which serves as a repository of standard container images for Platform One, an innovative cloud and DevSecOps initiative. Iron Bank offers DoD developers hardened containers they can use across cloud projects they’re building to run on Platform One infrastructure. Other container registries are certain to come online as other DoD elements move forward with their own large-scale cloud initiatives.

Learn how Anchore brings DevSecOps to DoD software factories.

Container Registry Examples

Here are examples of industry-standard container registries:

Docker Hub

Docker Hub is perhaps the best-known example of a container registry. It’s a cloud-based repository open to the public in which Docker users and partners create, test, store, and distribute container images. Docker tools point to Docker Hub by default.

GitLab

GitLab offers a secure and private registry for Docker images that integrates directly with their industry-standard version control platform.

The Future is the Industry-Specific Container Registry

As Platform One and the NVIDIA NGC show, there are some benefits to industry-specific container hubs, including:

  • A platform for cross-industry collaboration amongst developers and even market competitors to help solve industry-level and even society-level challenges such as COVID-19
  • A central repository showing best practices in container creation and security for everybody to learn from
  • A “container ethos” much in line with the open source ethos that can help support organizations early in their container adoption journey with secure and vetted containers they can download and use in their own projects

Corporate and government program-level container registries are a natural launchpad for industry-level container registries as alliances and partnerships find the need to connect with developers outside their normal sphere of influence.

Do you want to learn more about container security best practices? Check out our Container Security Best Practices that Scale On-Demand Webinar.

Riding the Wave of Container Security

Robert Prince (he/him) joined Anchore in May 2020 as a Senior Automation/Release Engineer, going back to his roots as an individual contributor after several years in leadership roles. In this Humans of Anchore profile, we sat down with Robert to talk about his transition back to development, having a safe work environment to explore and learn, and riding the wave of container security.

Leadership roles come with a host of responsibilities like budgeting, managing people and their professional development, building and maintaining relationships with strategic partners, and reporting to the board. Though he enjoyed those responsibilities while he held them, Robert wanted to connect with hands-on development again. Thankfully Anchore presented him with the perfect opportunity to do so.

“I feel comfortable putting my head down, doing tactical work, and not having to worry about people managing or strategic decision making. One reason for that is Anchore’s leadership team: it’s obvious that they’re at the top of their game, and to me that is very comforting.

A lot of companies talk about being kind to each other, but it’s more than talk here; kindness is non-negotiable, and that’s one of the things that pulled me in and keeps me here. Coming from environments where that hasn’t always been the case, it’s taken me some time to internalize it. Once I understood that this company provides a kindness-based, trust first culture – I found it freeing. It lets me focus with less distraction.”

Robert is part of the DevOps group, focusing on automation of tools and the release process.

“The container orchestration and security space is hot right now. Software development supply chain is a concept that few outside of infrastructure tech talked much about before. Now people are starting to pay attention. Anchore is at the center of what people actually need right now – it’s really fun to be involved in a company that is riding that wave.

You can almost see the change in infrastructure and tooling happening in real time. It reminds me of the massive change that happened when cloud computing was commoditized: some things got much simpler but when you go beyond “Hello, World” – there’s new layers of complexity. It means that you’re constantly learning while applying what you already know. I don’t have all the answers but I’m with a well rounded team. Sometimes I mentor folks, and sometimes they mentor me. I feel lucky to be part of Anchore.”

Cybersecurity Executive Order Brings FedRAMP Changes Aplenty

On May 12, 2021, President Biden’s Executive Order on Improving the Nation’s Cybersecurity finally hit the street. Amongst all its goodness about the software bill of materials (SBOM), software supply chain security, and cybersecurity, there’s some good news about FedRAMP, and these developments are going to be a major step forward for government cloud security, compliance, and the government cloud community.

Here are some FedRAMP highlights from the executive order (EO):

Security Principles for Cloud Service Providers

Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) are increasingly providing secure platforms for the next generation of federal government applications. Case in point, federal government agencies spent $6.6 billion on cloud computing in fiscal 2020. That figure is up from $6.1 billion in fiscal 2019, according to a government spending analysis by Bloomberg Government, as reported by NextGov. Section 3, Modernizing Federal Government Cybersecurity, states:

The Secretary of Homeland Security acting through the Director of CISA, in consultation with the Administrator of General Services acting through the Federal Risk and Authorization Management Program (FedRAMP) within the General Services Administration, shall develop security principles governing Cloud Service Providers (CSPs) for incorporation into agency modernization efforts. 

We’ll have to wait and see how this might affect the container orchestration and security offerings of the major CSPs and the opportunities it may open for the system integrators (SIs) and the innovative SBIRs across the federal information technology ecosystem.

Federal Cloud Security Strategy in 90 Days

The EO also positions FedRAMP — as part of the General Services Administration — as part of a task force to create a federal cloud security strategy in 90 days and then provide guidance to federal agencies. Here’s the quote from the EO:

Within 90 days of the date of this order, the Director of OMB, in consultation with the Secretary of Homeland Security acting through the Director of CISA, and the Administrator of General Services acting through FedRAMP, shall develop a Federal cloud-security strategy and provide guidance to agencies accordingly. Such guidance shall seek to ensure that risks to the FCEB from using cloud-based services are broadly understood and effectively addressed, and that FCEB Agencies move closer to Zero Trust Architecture.

There have been a few runs at a federal-level cloud strategy over the past few years. Most recently, there’s Cloud Smart, which seeks to redefine cloud computing, modernization, and maturity, and tackles security concerns such as continuous data security and FedRAMP. Cloud Smart replaces Cloud First, an earlier government-wide initiative to provide a cloud strategy for federal agencies.

Federal agencies and technology firms serving the government should pay attention to the development of this new cloud security strategy as it’ll influence future cloud procurements. The ambitious 90-day goal to deliver this strategy leaves virtually no time for feedback from the government’s industry partners.

90 Days to a Cloud Technical Reference Architecture

With large government cloud initiatives such as the United States Air Force’s Platform One gaining mindshare, other parts of the Department of Defense (DoD) and civilian agencies are certain to follow suit with large-scale secure cloud initiatives. The EO also mandates the creation of a cloud security technical reference architecture:

Within 90 days of the date of this order, the Secretary of Homeland Security acting through the Director of CISA, in consultation with the Director of OMB and the Administrator of General Services acting through FedRAMP, shall develop and issue, for the FCEB, cloud-security technical reference architecture documentation that illustrates recommended approaches to cloud migration and data protection for agency data collection and reporting.

Just like the cloud security strategy, 90 days is an ambitious goal for a cloud technical security reference architecture. It’ll be interesting to see how much this architecture will draw upon the experience and lessons learned from Platform One, Cloud One, and other large scale cloud initiatives across the DoD and civilian agencies.

FedRAMP Training, Outreach, and Collaboration

FedRAMP accreditation and compliance is no easy task. The EO mandates establishing a training program to provide agencies with training and the tools to manage FedRAMP requests. There’s no mention of training for the SI and government contractor community at this stage, but it’s almost a certainty that the mandated FedRAMP training will find its way out to that community.

The EO also calls for improving communications with CSPs, which normally falls under the FedRAMP PMO. Considering the complexities of FedRAMP, improving communication should be an ongoing process, and the automation and standardization of communications that the EO touts could remove some of the human error that can occur when communicating a technical status.

Automation is also due to extend across the FedRAMP life cycle, including assessment, authorization, continuous monitoring, and compliance. This development can help make the much-heralded Continuous ATO a reality for more agencies. It also opens the door for more innovation as SIs seek out startup partners and SBIR contracts to bring innovative companies from outside the traditional government contractor community to satisfy those new automation requirements.

Learn how Anchore helps automate FedRAMP vulnerability scans. 

Final Thoughts

Cloud security concerns are universal across the commercial and public sectors. Biden’s EO strikes all the right chords at first glance because it elevates the SBOM as a cybersecurity priority. It also gives FedRAMP some much-needed support at a time when federal agencies continue to face new and emerging threats.

Want to learn more about containers and FedRAMP? Check out our 7 Must-Dos To Expedite FedRAMP for Containers webinar now available on-demand!

Latest Cybersecurity Executive Order Requires an SBOM

This blog post has been archived and replaced by a supporting pillar page; visitors are automatically redirected there.

GitOps vs. DevOps: How GitOps plays in a DevOps and DevSecOps World

Operations models are coming at us fast and furious these days. DevOps and DevSecOps adoption and maturity have only increased during the pandemic. It’s now incumbent on DevOps and DevSecOps teams to get all their system configurations under the same level of control and governance as their application source code and containers. That’s right, it’s time for a new operations model. Say hello to GitOps!

What is GitOps?

GitOps practices empower development teams to perform traditional IT operations tasks. The more organizations move to continuous integration (CI) and continuous delivery (CD) and apply automation to their testing, delivery, deployment, and governance, the more opportunities they have to implement GitOps to streamline infrastructure tasks that DevOps doesn’t necessarily automate and factor into its workflows. It also lets your teams take advantage of backend data through the application of analytics, giving your stakeholders actionable insights on what’s happening up and down your pipelines and in your cloud infrastructure.

One of the many strengths of GitOps, when compared to DevOps, is that it enables DevOps teams to loosen restrictions between development and operations sequences. GitOps is also repository centric, with the project’s configuration files and deployment parameters residing in the same repository as the application source code. These strengths mean GitOps supports rapid development and complex changes, all while minimizing reliance on the complex scripts that traditionally dominate such tasks. GitOps also emphasizes the use of Git, a single and often already familiar tool for developers. GitOps is gaining attention because of the complexities around configuring Kubernetes (K8s). When a Kubernetes shop moves to GitOps, they can manage their K8s configuration right along with their application source code. If they make a configuration mistake, they can roll back to their last known good configuration, as the sketch below illustrates.
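
Here’s a minimal sketch of that rollback idea using plain Git commands; the manifest directory and the “known good” tag name are assumptions, and in practice a GitOps operator such as Argo CD or Flux would reconcile the cluster from the repository for you.

```python
#!/usr/bin/env python3
"""Sketch: roll a GitOps config directory back to its last known good commit.

"Known good" here is whatever commit was last tagged by a successful deploy
(the `deploy-ok` tag name is a placeholder); real GitOps operators handle
this reconciliation automatically.
"""
import subprocess

CONFIG_DIR = "k8s/"           # hypothetical path holding the manifests
KNOWN_GOOD_TAG = "deploy-ok"  # hypothetical tag set by the deploy pipeline

def git(*args: str) -> str:
    return subprocess.run(["git", *args], capture_output=True,
                          text=True, check=True).stdout.strip()

def rollback_config() -> None:
    # Restore the config directory to the state recorded at the known-good tag,
    # then commit the rollback so history stays append-only (a GitOps tenet).
    # Note: the commit step errors out if nothing actually changed.
    git("checkout", KNOWN_GOOD_TAG, "--", CONFIG_DIR)
    git("commit", "-m", f"Roll back {CONFIG_DIR} to {KNOWN_GOOD_TAG}")

if __name__ == "__main__":
    rollback_config()
```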

Critics of GitOps cite its lack of visibility into environments, despite proponents seeing visibility as one of its strong suits. The criticism stems from the fact that the data teams see resides in plain text in Git, which critics view as workable only for simple configurations and setups.

Comparing GitOps vs. DevOps

We’re reaching a peak level of ops in the IT industry right now, and it’s easy to get them all confused, even without the help of corporate marketing departments. The easiest way to distinguish DevOps from GitOps is to think of DevOps as a pipeline mechanism that enhances software delivery, while GitOps is a software development mechanism that improves developer productivity.

These ops models bleed together because of continuous integration/continuous delivery (CI/CD) and the componentization of software through containers. 

GitOps complements an overall DevOps strategy and your toolchains. By introducing GitOps into their workflows, a DevOps team can experiment with new infrastructure configurations. If the team finds that the changes don’t behave as expected, they can roll back the changes using Git history.

GitOps and Secure Development

Many of the same contrasts between DevOps and GitOps remain in GitOps vs. DevSecOps. GitOps is developer-centric. DevSecOps is security-focused with software containers playing a growing role in how DevSecOps teams secure software during development, testing, and in production.

As DevSecOps adds more security and analytics to the toolchain, teams can easily extend their security measures to secure their Git repositories. The merging of DevSecOps and GitOps should be interesting to watch.

DevOps, DevSecOps, and GitOps in the Future

Evolution is constant in the IT operations world. While some foresee that DevOps and DevSecOps will merge, GitOps will certainly continue to augment DevOps and DevSecOps toolchains and processes, offering teams better tools to manage configurations and lifting some pressure off the Ops team so they can focus on more strategic work.

How Core Values Can Foster Open Performance Discussions

Kindness, Openness, and Ownership.  These are the core values that Anchore was built on and our team members exhibit every day.  At the core of these values are the underlying themes of trust, empathy, and communication, which are paramount to our continuous performance model.  We designed this model to create manager and employee relationships that enable and empower every employee to feel safe to raise their hand with new ideas and ask for help, creating open two-way communication.

Every other month, employees sit down with their managers for a Top 2 conversation, where they discuss two things that went well and two things that can be focused on in the coming months.  Most importantly, this feedback goes both ways.  Employees have a regular opportunity to share feedback with their manager (wanting more frequent check-ins, interest in stretch projects, etc.) and managers have regular opportunities to coach employees, providing the tools and resources they need to be successful. 

Additionally, every six months managers sit down with their direct reports for a Stay Interview, providing a dedicated time to discuss motivators, communication styles, and long-term career goals. With this understanding, the feedback and opportunities presented can be deliberately aligned with each person’s individual goals and objectives. 

To learn more about how our continuous performance model has given our team members the tools to build a trusting and open relationship, we sat down with Brandon Lee (he/him), Senior Accountant, and his manager Alaina Frye (she/her), Sr. Director, Finance, Accounting and RevOps. 

Brandon has worked at various sized companies, from large financial services firms to small startups, all of which had infrequent and unclear performance models (or none at all). When he joined Anchore, Brandon welcomed the opportunity to participate in a robust and regular performance development program. 

“I think what we have with the Anchore Top 2 discussions and Stay Interviews is pretty awesome and has enabled Alaina and I to have a very open and honest relationship,” said Brandon.  “Because the Top 2 meetings occur so often, it’s easy for us to reflect on what went well in the previous months, as well as help identify some of the processes that we can continue to enhance and refine. The transparency and open dialogue we have in our Stay Interviews is really helpful for my career growth and happiness at Anchore. I enjoy the opportunity to share my short-term and long-term career goals in a very candid way and really appreciate the continuous support in accomplishing my goals.”

Alaina, whose experience also ranges from large financial services firms to small startups, has become a champion of Anchore’s Top 2’s and Stay Interviews.  As a manager, they give her the ability to have constant communication with her direct reports. This ensures that expectations and goals are clear on both sides – ensuring strong communication all around. 

“Top 2’s help everyone digest feedback because we have this set framework and recurring time to discuss performance regularly. It facilitates the opportunity to receive feedback and then work together on action plans,” said Alaina. “Some months there is specific feedback about how I can better support Brandon, but other months we sit down and talk at a higher level about process improvements we want to make.”

Even ad-hoc feedback conversations outside of Top 2’s have become more natural because Brandon and Alaina have built a foundation in their sessions, understanding how each other thinks, communicates, and what motivates them.  This sense of trust and psychological safety with one another has opened the door to real time feedback opportunities. 

Through Alaina and Brandon’s embodiment of Anchore’s values they have cultivated a strong relationship where they can learn and grow – together.  If you are interested in working on a team that fosters kindness, trust and open communication, head to our careers page. 

5 Open Source Procurement Best Practices

SolarWinds and now Codecov point to the need for enterprises to better manage how they procure and intake open source software (OSS) into their DevOps lifecycle. While OSS is “free,” it’s not without internal costs as you procure the software and bring it to bear in your enterprise software.

Here are five open source procurement best practices to consider:

1. Establish Ownership over OSS for your organization

Just as OSS is becoming foundational to your software development efforts, shouldn’t it also be foundational to your org chart?

We’re lucky at Anchore to have an open source tools team as part of our engineering group. Our CTO and product management team also have deep roots in the OSS world. Having OSS expertise in our development organization means there is ownership over open source software. These teams serve our overall organization plus current and prospective customers.

You have a couple of options for establishing ownership over OSS for your organization:

  • Develop strong relationships with the OSS communities behind the software you plan to integrate into your enterprise software. For example, support can take the form of paying your developers to contribute to the code base. You can also choose to be a corporate sponsor of initiatives and community events.
  • Task an appropriate developer or development team to “own” the OSS components they’re integrating into the software they’re developing.
  • Stand up a centralized open source team if you have the budget and the business need, and they can serve as your internal OSS experts on security and integration.

These are just a few of the options for establishing ownership. Ultimately, your organization needs to commit the management and developer support to ensure you have the proper tools and frameworks in place to procure OSS securely.

 

2. Do your research and ask the right questions

Due diligence and research are a necessity when procuring OSS for your enterprise projects. Either your developers or your open source team has to take the lead in asking the right questions about the OSS projects you plan to include in your enterprise software. Procuring enterprise software requires a lot of work on the part of legal, contracts, and procurement teams to work through the intricacies of contracts, licensing, support, and other related business matters. There’s none of that when you procure OSS. However, that doesn’t mean you shouldn’t put guard rails in place to protect your enterprise, because sometimes you may not even realize what OSS your developers are deploying to production. Here are some questions that might arise:

  • Who’s maintaining the code?
  • Will they continue to maintain it as long as we need it?
  • Who do we contact if something goes wrong?

It’s not about your developers becoming a shadow procurement department. Rather, it’s putting their skills and experience to work a little differently to perform the due diligence they might do when researching enterprise software. The only difference here is your developers need to think through the “what ifs” that come if an OSS project goes stagnant or doesn’t deliver on its potential.

3. Set up a Standard OSS Procurement Process

A key step is to set up and document a standard OSS process that’s replicable across your organization to set a standard for the onboarding process. Be sure to tap into the expertise of your IT, DevOps, cybersecurity, risk management, and procurement teams when creating the process.

You also should catalog all OSS that meet the approval process set by your cross-functional team in a database or other central repository. This is a common best practice in some large enterprises, but keeping it up to date comes at an expense.

4. Generate an SBOM for your OSS 

OSS rarely ships with a software bill of materials (SBOM), a necessary element for conducting vulnerability scans. It’s up to you to adjust your DevOps processes and put the tools in place for whoever owns OSS in your development organization. Generating an SBOM for OSS can take place at one or more phases in your DevOps toolchain.
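If you use Syft for this step, generating the SBOM is a single command. A minimal sketch (the image reference and output file name below are placeholders for your own components):

    # Produce a CycloneDX-format SBOM for a container image you are onboarding
    syft registry.example.com/vendor/app:1.4.2 -o cyclonedx-json > app-sbom.cdx.json

The resulting file can then travel with the component through the rest of your toolchain and feed later vulnerability scans.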

5. Put OSS Maintenance in Place

When you’ve adopted an OSS component and integrated it into your software, you still need to have a method in place to maintain that source code. It’s a logical role if you have a dedicated open source team in-house and such work is accounted for in their budget, charter, and staffing. If you don’t have such a team, then the maintenance work would fall to a developer and that risks shifting priorities, especially if your developers are billable to client projects. The last option is to outsource the OSS maintenance to a third party firm or contractor, and that can be easier said than done, as the expertise can be hard to find (and sometimes costly!).

Then again, you can always roll the dice and hope that the OSS project stays on top of maintaining its source code and software with the necessary security updates and patches well into the future.

OSS Procurement and your Enterprise

The time is now to review and improve how your enterprise procures and maintains OSS. Doing the job right requires relationship building with the OSS community plus building internal processes and governance over OSS.

 

Do you want to generate SBOMs on the OSS in your development projects? Download Syft, our open source CLI tool for generating a Software Bill of Materials (SBOM) from container images and filesystems.

Blending Passion and Performance to Advance Innovation

As we explore the various roles and responsibilities at Anchore, one critical area is maintaining our interactions with the open source community. Anchore’s roots are deep in open source, and this area remains vital to our organization today. As a company we may expand our offerings, but the technology feedback and engagement we receive from our users in the community drives and inspires our team.

As we continually innovate and cultivate the newest technologies for secure and compliant software development, Anchore is thrilled to be hiring a Developer Advocate for our open source tools.

 

Our Vice President of Product, Neil Levine, weighed in on what he sees as the key elements to this role:

“Hiring a developer advocate is critical for Anchore as we look to grow adoption of our open source tools and evangelize the benefits of DevSecOps practices. We make our tools open source to reduce friction and encourage conversation with the developer community. This role is the essential glue between those users and the Anchore engineering team, so we can ensure that we are advancing state-of-the-art concepts when it comes to developing secure software. This role will help not just Anchore, but the broader software community.”

Are you passionate about DevSecOps and open source projects? Then apply for this role on our job board.

#NowHiring

5 Reasons AI and ML are the Future of DevSecOps

As the tech industry continues to gather lessons learned from the SolarWinds and Codecov breaches, it’s safe to say that artificial intelligence (AI) and machine learning (ML) are going to play a role in the future of DevSecOps. Enterprises are already experimenting with AI and ML in the hopes of reaping future returns in security and developer productivity.

While even DevSecOps teams with the budget and time to be early adopters are still figuring out how to implement AI and ML at scale, it’s time more teams look to the future:

1. Cloud-Native DevSecOps tools and the Data they Generate

As enterprises rely more on cloud-native platforms for their DevSecOps toolchains, they also need to put the tools, frameworks, and processes in place to make the best use of the backend data that their platforms generate. Artificial intelligence and machine learning will enable DevSecOps teams to get their data under management faster while making it actionable for technology and business stakeholders alike.

There’s also the prospect that AI and machine learning will offer DevOps teams a different view of development tasks and enable organizations to create a new set of metrics.

Wins and losses in the cloud-native application market may very well be decided by which development teams and independent software vendors (ISVs) turn their data into actionable intelligence. Actionable intelligence gives stakeholders and developers a view into what their developers and sysadmins are doing right in terms of security and operations.

2. Data-Backed Support for the Automation of Container Scanning

As the automation of container scanning becomes a standard requirement for commercial and public sector enterprises, so will the requirements to capture and analyze the security data and the software bill of materials (SBOM) that come with containers advancing through your toolchains.

The DevSecOps teams of the future are going to require next-generation tools to capture and analyze the data that comes from automated vulnerability scanning of containers in their DevSecOps toolchains. AI and ML support for container vulnerability scanning offers a balance of autonomy and speed, helping capture and communicate incident and trend data for analysis and action by developers and security teams.

3. Support for Advanced DevSecOps Automation

It’s a safe assumption that automation is only going to mature and advance. It’s quite possible that AI and ML will take on the repetitive legwork that powers operations tasks such as software management and other rote duties that fill up the schedules of present-day operations teams.

While AI and ML won’t completely replace operations teams, these technologies may certainly shape the future of operations duties. While there’s always the fear that automation will replace human workers, the reality is going to be closer to ops teams becoming automation managers.

4. DevOps to DevSecOps Transformation

The SolarWinds and Codecov breaches are the perfect prompts for enterprises to make the transformation from DevOps to DevSecOps to protect their toolchains and software supply chains. Not to mention, cloud migrations by commercial and government enterprises are going to require better analytics over the development and operational data their teams and projects currently produce for on-premises applications.

5. DevSecOps to NoOps Transformation

Beyond DevSecOps lies NoOps, a state where an enterprise automates so much that it no longer needs a dedicated operations team. While the NoOps trend has been around for the past ten years, it still ranks as a forward-looking trend for the average enterprise.

However, there are lessons you can learn now from NoOps in how it conceptualizes the future of operations automation that you can start applying to your DevOps and DevSecOps pipelines, even today.

Final thoughts

For the mature DevSecOps shop of the future to remain competitive, it must make the best use of data from the backend systems in its toolchain, SBOMs, and container vulnerability scanning. Artificial intelligence and machine learning are becoming the ideal technology solutions for enterprises to reach their future DevSecOps potential.

Celebrating Anchore’s Fifth Birthday

This is a special guest post from our CEO, Saïd Ziouani, to celebrate and reflect on five years of the Anchore journey.

As we celebrate Anchore’s fifth birthday this month, and reflect on our journey thus far, I am truly humbled at what our talented team of professionals has accomplished in such a short period of time. Anchore was founded on three core values: Kindness, Openness, and Ownership. Our employees (affectionately called Anchorenauts) exemplify each of those values every day, and are the backbone of the company.

When Dan Nurmi, Co-founder, and I got together back in 2016 to start Anchore, container technology was still in the early stages, but adoption was starting to take shape at a pace that was like nothing we’d seen in the past. We could see how security and compliance would need to be re-imagined to a “continuous” approach that would allow developers to deliver innovation quickly and securely. We then realized that coupling the container adoption movement with developer-led security (or “shift left”) was going to be the foundational play for our next adventure.

Today, after five wonderful years of innovation, building a team, raising capital and instilling a strong operational foundation, I’m pleased to see how far we have come as a company. At 75 people strong, we are excited to be helping Fortune 100 companies such as eBay, NVIDIA and Cisco and government agencies such as the U.S. Air Force and Navy to develop secure cloud-native applications. And we’ve been honored to work alongside DevOps leaders such as GitLab, GitHub and Cloudbees to advance DevSecOps practices.

As we look forward, the next five years at Anchore will be full of new innovations as we help organizations secure their software supply chains in a world of increasing threats. We also seek to develop and inspire the next generation of engineers, technologists and leaders, both within Anchore and in the larger open source and technology community.

I’m even more excited now than I was the day Dan and I founded the company. The thrill of being at the forefront of such amazing and dynamic technology is more than we expected. As Anchorenauts have heard me say many times in the past, “it’s really all about the journey.” At Anchore, we surround ourselves with hardworking, kind individuals, all driving toward a common goal of building a technology that contributes to ensuring a safer and more secure world. I’m grateful to our industry partners, valued customers and all Anchorenauts — from those who’ve been with us since the early days to those who have embraced the journey with us more recently. We look forward to continuing to build this amazing company together!

2 SBOM & Supply Chain Security News Items to Watch

We aren’t about to stop hearing about the need for a software bill of materials (SBOM) and software supply chains security anytime soon. You can expect more news about a Presidential executive order about SBOMs and a new software supply chain breach at Codecov that we’re all still learning more about.

Impending Executive Order about SBOMs

The fallout from the SolarWinds supply chain attack is behind the U.S. federal government considering issuing an executive order that would require vendors to provide a software bill of materials (SBOM) with the software they sell to or create for a customer.

One of the potential benefits of this EO is that we might finally see a boost to some of the excellent industry and cross-industry work being done out there to better track software dependencies and related metadata. Hopefully, we’ll see SPDX, CycloneDX, SWID, and the National Telecommunications and Information Administration (NTIA) play new and collaborative roles within government and industry once this EO hits the street. 

An EO of this magnitude also sends a powerful message to government and industry about the risks of vulnerabilities that come from software dependencies. There’s also the potential of a knowledge gap that both government and industry will need to bridge. Look for security vendors to pivot their messaging and thought leadership to fill this gap.

Codecov Supply Chain Breach

Codecov — makers of a tool that lets development teams measure the testing coverage of their codebase — could be the latest high-profile software supply chain breach adding new fuel to the impending federal government EO.

Reports point to attackers exploiting a bug in Codecov’s Docker image creation process to gain access to the Bash Uploader script, which maps out development environments and reports back to the development team. The modified script harvested user credentials, enabling the attackers to access and exfiltrate data directly from the continuous integration environment.

CEO Jerrod Engelberg published an update on the Codecov site warning that any credentials, authentication tokens, or keys run through an affected customer’s CI process were exposed, giving attackers access to application code, data stores, and git repositories.

The Codecov breach brings up the harsh realities of the need to secure the DevSecOps toolchain for government and commercial enterprises. Nowadays, any focus on application security must also include the toolchain.

Be Proactive about SBOMs and Supply Chain Security

News of the impending executive order and recent news about Codecov mean the time is now to become more proactive about your organization’s SBOM adoption. Here are some actions you can take to be proactive about SBOMs and supply chain security:

  • Review your current DevOps or DevSecOps process with your development and operations teams and look for natural points to introduce the requirement for an SBOM as an entry gate.
  • Become conversant in the major SBOM standards (SPDX, SWID, and CycloneDX) because we’ve yet to see a full-court push for a single industry standard. It’s also a good time to monitor the SBOM work the NTIA is doing.
  • Implement a tool to generate SBOMs from container images and file systems if you haven’t already done so. Download and take Syft for a spin; it’s our open source CLI tool and library for generating a Software Bill of Materials from container images and filesystems (a quick example follows this list).
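As a rough sketch of what that looks like in practice, Syft can emit the same inventory as either an SPDX or a CycloneDX document (it does not produce SWID tags); the image name below is just an example:

    # Emit the SBOM in whichever standard your partners or auditors expect
    syft nginx:1.25 -o spdx-json > nginx-sbom.spdx.json
    syft nginx:1.25 -o cyclonedx-xml > nginx-sbom.cdx.xml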

A Family Approach to Startup Life

When Chad Olds (he/him) joined Anchore in February 2020 as VP of Sales-Americas, his goal was to build a collaborative, high-performing sales organization.  The first year was filled with many unexpected challenges, most notably a global pandemic. This led him through an action-packed year beginning as a “team of one” and ending with an incredibly talented team of Account Executives, Solution Engineers, and Sales Development Representatives.  

When the pandemic hit, Chad learned quickly how much work is involved with raising three children and being present as a parent while balancing a demanding career. It completely changed the expectations and needs in his household.  

“What I learned, even before the pandemic, is that taking care of the kids is a lot of work, and it is absolutely unfair for me to think that my wife, Brittany, who owns a small business, should be expected to take on full parent duties 24/7.”  

Chad knew how important it was to participate and share in the demands required in caretaking, and finding a way to balance the ownership and responsibility was a priority.

With a career in sales spanning 15 years, Chad’s focus was aligning himself with a company that understood the importance of finding an effective balance between work and life. At Anchore he found a sense of trust in managing personal schedules that flex with an individual’s needs.  It’s not always possible to predict needs or delegate work to others at a start-up, but he knew that with proper planning and prioritization, he had the support to make it happen.   

“I changed my work schedule to help take on more of the morning responsibilities for our family. Things like help make breakfast, get the kids dressed and hair brushed. Essentially help them get ready to start the school day, which, for any parents out there, can attest that this alone can be a day’s work!”

Chad realized that even with the adjustment of helping with the morning routine, it wasn’t enough.  He wanted to support his wife in having more time to herself.  “I started blocking time during the week to spend time with my kids while Brittany was able to take the time she needed to stay balanced and healthy. It was fantastic for both of us! One of the things I appreciate about Anchore is that I don’t feel the need to hide spending time with my family. It’s something that our leadership team fully supports.”  

Being able to show up at work and contribute at the highest level involves having a life outside of work – whatever that may look like to each person.  Chad believes that burnout can happen quickly, especially at a start-up where the workload is vast, and the pressure is high. 

“I want my team to really know their friends and family.  I want them to enjoy what they do every day.  It’s about working smart, and prioritizing early and often to ensure you’re able to get done what you need to get done, while also being able to show up in other areas of your life fully, without distraction.  It is incredibly meaningful for me to not only give that support to my team in achieving what is most important to them, but to receive that level of support from my leadership as well.”

You can keep up with Chad and his series Colds Unfair Advantage on LinkedIn.

Taking A Healthy Viewpoint

Since Anchore’s inception, healthcare has been a central tenet for CEO Saïd Ziouani. “We want everyone at Anchore to focus on creating and being a part of something really special here. Employees should not have to worry that their physical and mental health is not being taken care of. It is, and will always be, a priority for us.”

In this Mission: Impact health profile, we sat down with Shannon Goulding to hear about her wellness journey with Anchore’s benefits program.

“Last year I joined Anchore and upon enrolling in benefits, was blown away at the number of plans that were fully covered by the employer – my experience of the industry standard is that the lowest HDHP-type plans were the only ones covered at 100%. While I consider myself a generally healthy person, I enrolled in a low-deductible PPO at ZERO extra cost to me, so I buckled down and got serious about using my insurance. I saw all the specialists that I had been putting off for years due to cost, and for lack of a better phrase, got my act together!

Thanks to having comprehensive insurance from my employer, I now can afford the things that I have realized are a necessity as I advance in a challenging career in talent acquisition at a startup, during a totally unprecedented season of life. As a result I now wear glasses, resulting in fewer headaches. Plus I have easy and affordable access to tools that are helping me keep my mental health in check.

It’s a win-win for me AND for Anchore because when I’m happier and healthier, I’m a much better recruiter!”

 

Software Supply Chain Security: Now is the Time to Act

It’s time to make evaluating and mitigating software supply chain security attacks top of mind as government agencies, corporations, industry analysts, and security firms try to chart a course forward for supply chain security after the SolarWinds hack.

Security Challenges 

Here are some software supply chain security challenges you should keep at top of mind now and in the future:

  • Software updates are a well-known best practice but can also introduce risk. With SolarWinds, customers received software that was signed but compromised. In following this best practice, they did just what the attacker wanted by installing the compromised software on their systems.
  • Software behavior monitoring — another best practice — met its match in the stealthy and patient attackers who created so much damage inside SolarWinds before their discovery.
  • Source code reviews have their limits, as the SolarWinds hack shows. Some reports point out that attackers had control of the SolarWinds build environment, making it possible for them to insert malicious code without the knowledge of the SolarWinds Orion development team.

When traditional security practices and solutions such as these fail on such a grand scale, it becomes time to reevaluate how software supply chain security works in organizations of all types. 

The SolarWinds hack exposes many of the significant drawbacks of today’s supply chains to the light of new and changing cybersecurity realities. 

Changing supply chain security means galvanizing your teams and counterpart teams in all the commercial partners and vendors that touch your supply chain to become true partners with open communication lines, collaboration, and knowledge sharing.

While large corporations may vet software vendors’ security through questionnaires or independent assessments, more still needs to be done to reduce risks across the software supply chain. Work beyond that initial questionnaire and subsequent onboarding means focusing on automated vulnerability scans and other methods to shore up your process for bringing in software components or applications.

Security, development, and IT teams must collaborate to ensure sufficient security checks and remediation of issues at each software supply chain stage. Those compliance processes must apply to software from all sources, whether open source, commercial vendors, or internal developers. 

Best Practices

There isn’t a single security solution that can secure your software supply chain from attacks. The gravity of the SolarWinds attack is an invitation for you and your software supply chain partners to collaborate and reassess the security, governance, communications, and collaboration needs across your supply chain. Here are some best practices you are bound to see and experience in the post-SolarWinds world:

  • Improve relationships and collaboration
  • Improve governance of software onboarding 
  • Harden your build environment
  • Require an SBOM for all partners and vendors
  • Implement Defense in Depth
  • Apply “Zero Trust” to software supply chain security 
  • Create a “kill chain” for your software supply chain

Read our White Paper

Today, software supply chain security requires continuous awareness, collaboration, and new strategies. This is no time to sit still.

Software Supply Chain Security, Best Practices for Cloud-Native Application Development

The SBOM + Threat Intelligence are the Future of Software Supply Chain Security

As organizations open up the software bill of materials (SBOM) to their security teams, a future in which the SBOM serves as source data for threat intelligence is becoming abundantly clear. Applying intelligence to SBOM data is a natural step in a world where DevOps and DevSecOps teams use a range of tools and technologies such as AIOps and analytics to gain actionable intelligence from their backend data.

Here’s a look at how the SBOM and threat intelligence together spell the future of software supply chain security:

SBOMs Today

We’re reaching a critical point with the role of the SBOM in today’s enterprise. After the SolarWinds hack, it’s incumbent on government agencies and businesses to open up the SBOM to their security teams. Options to make this work include:

  • Offer training to your internal teams about SBOM basics with an accompanying briefing about what your organization expects to get out of them.
  • Give your security team the tools and support to make the SBOM the first “gate” before third-party software code and components enter your software supply chain.
  • Put in the tools and processes to generate SBOMs as part of your DevSecOps toolchain and processes (see the sketch after this list).
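Whatever CI system you use, that last option usually boils down to two extra steps after the image build: generate the SBOM, then scan it. A minimal sketch using Syft and Grype (the image name and severity threshold are placeholders you would tune):

    # Generate an SBOM for the image you just built, then archive it as a build artifact
    syft myregistry.example.com/payments-api:1.0.3 -o spdx-json > sbom.spdx.json

    # Scan the SBOM and fail the pipeline if anything at or above High severity is found
    grype sbom:./sbom.spdx.json --fail-on high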

Threat Intelligence Today

Threat intelligence, also known as cyber threat intelligence (CTI), is, according to WhatIs.com, organized, analyzed, and refined information about current or potential attacks. An entire industry has arisen around threat intelligence, including platforms, open source feeds, and fee-based data feeds.

Choosing a threat intelligence platform means knowing your goals and requirements for data collection, and how the platform presents its threat analysis and reports to your security team. If you already have a threat intelligence platform in place that your security team manages, it’s time to work with your team and, if necessary, the vendor, to explore options for pulling SBOM data into your threat intelligence reporting. There’s no single hard and fast answer here (at least not yet), but the platform’s application programming interface (API) is the logical starting point.

The SBOM and Threat Intelligence in the Future

The SBOM is an under-realized threat intelligence option. For example, let’s say that you want to integrate an open source software (OSS) project into an enterprise software project you have underway for an important customer. The project is highly functional and shows excellent potential to serve as a key feature in your solution. Then the OSS project suffers a security incident. There are also vulnerabilities appearing in the same OSS project weekly.

A commercial or open source vulnerability scanner can only tell you whether some piece of that OSS project is vulnerable right now. It doesn’t give you any status or analysis that alerts you to the project’s history in the “vulnerability of the week club.”

While that OSS project may still look appealing on features, its pattern of weekly vulnerabilities is invisible to the scanner. Suppose you also have a commercial option that fills most of the requirements but has suffered only two vulnerabilities over its entire lifetime. That’s probably the safer bet. We need to reach a point where we couple SBOMs with intelligence to help raise the role and importance of the SBOM as a key security data source.

Even if you have the staffing budget to hire 100 people and their whole role in life is to determine the threat status of your open source dependencies, they still need a technology solution that enables them to narrow down what they examine as third-party software enters your pipelines. 
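One lightweight way to start building that kind of history is to re-scan the same SBOM on a schedule and record how many findings come back each time. A rough sketch, assuming Grype’s JSON output (whose top-level matches array lists findings) and a previously generated SBOM file:

    # Run weekly: append the current finding count for this component to a history file
    grype sbom:./libfoo-sbom.spdx.json -o json | jq '.matches | length' >> libfoo-vuln-history.txt

A component whose count keeps climbing week after week is exactly the “vulnerability of the week club” signal that a single point-in-time scan hides.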

Final Thoughts

As businesses and governments continue to tackle the rise of software supply chain attacks, they and the vendors that serve them need to look at coupling SBOMs with some form of intelligence. Threat intelligence is a logical bet. It’s time for threat intelligence platform vendors and their customers to work collaboratively on solutions to add SBOMs to their feeds by default.

Do you want to generate SBOMs on the OSS in your development projects? Download Syft, our open source CLI tool for generating a Software Bill of Materials (SBOM) from container images and filesystems.

It All Started With a Fish Tank

It all started with a fish tank…  You don’t hear that often, but for Anchore employees Touré Dunnon and Amy Oxley, this hobby was just the thing to start their Anchorenaut friendship.  

Touré’s 250 gallon saltwater aquarium is the real deal. It has an impressive mechanical and biological filtration system that holds 400 gallons total (2500 lbs). Touré grows seaweed and keeps the water extra clean for his nine fish from Fiji and the Caribbean.  

This advanced level of aquatic life didn’t happen overnight.  Touré learned his love of fish through his Dad, who started him out with a 10 gallon saltwater tank when he was 13.  After college, Touré got back into aquariums with his two daughters who help manage the water changes, clean the tank, and feed the fish.  Since he was interested in becoming a marine biologist as a kid, Touré is hoping his daughters will be inspired to pursue that path when they get older.

Outside of his fish tank fatherhood, Touré is a Senior Software Engineer on the Anchore platform team, primarily working on policy engine with a key focus on keeping active containers within compliance.  

Meanwhile in Texas, Amy’s three-year-old daughter Fynn’s obsession with the Finding Nemo movie piqued her interest in starting a fish tank hobby. It wasn’t until seeing Touré’s aquarium during an Anchore All-Hands virtual meeting that she was inspired to commit. Amy’s 40 gallon freshwater tank has 11 fish, complete with schools of tetra, catfish, and shrimp.

While still aspiring to make her tank more automated (and eventually upgrade to a saltwater tank as “saltwater fish are way cooler”), Amy and Fynn love to count the fish and learn their names. 

Amy is the Senior Manager of the IT and Information Security team, filling her days with managing Anchore’s internal systems for both ease of use and compliance while maintaining and managing the company’s security initiatives.

Touré and Amy’s friendship has continued to grow – with Touré being a fountain (dare we say, an aquarium) of knowledge for Amy as she has embarked on her fish tank journey, being her go-to person for questions on everything from water changes to the ideal plants and fish to purchase next. They even connected on their hobby of woodworking, and strategize on how to build stands and support systems for their fish tanks.

In a distributed company during an unprecedented time, Touré and Amy’s friendship is an example of the unconventional ways people can make a connection through something as simple as a video conference background. You can keep up with Touré and Amy (and their aquatic hobbies) on LinkedIn.

Plugging an SBOM into your DevSecOps Process

The software bill of materials (SBOM) is gaining renewed attention and notoriety post-SolarWinds. More companies and government agencies seek deeper transparency into the software components entering their software supply chain.

While there are critics who believe the SBOM is a misguided concept for DevSecOps, the continuing evolution of DevSecOps, not to mention the automation it brings to development teams today, now makes SBOMs a foundational aspect of the DevSecOps process.

SBOMs: The New Gate to the DevSecOps Pipeline

It’s time to treat the software bill of materials as a barrier to entry for software components entering your DevSecOps pipeline, not just your container repository. Requiring an SBOM for all software entering your pipeline has become a common-sense best practice. You have three options for obtaining SBOMs:

  • Gain full cooperation from the software vendor, regardless of whether they’re a partner of your organization, with the SBOM included as part of their delivery 
  • Implement software composition analysis tools that require particular expertise 
  • Implement a container vulnerability scanning tool that enables you to generate SBOMs for the containers entering your pipeline 

Moving software bill of materials generation to the left is just another step in moving security left. Depending on your particular business processes, compliance requirements, and pipeline gateway requirements, it’s essential to add a documented or automated process (or better yet, a combination of the two) that ensures each software component has an accompanying SBOM before it enters your development environment.

As an example, let’s say your developers are taking advantage of an open source software (OSS) component in a cloud-native application built for one of your most important customers. Most OSS projects don’t have the resources or staff to generate an SBOM; it’s simply not something they do. Nothing stops your teams, however, from creating an OSS onboarding process and putting the right tool in place to generate the SBOM for the OSS themselves before the software even hits your development environment and software repositories.
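For instance, if that onboarding step uses Syft, the team can point it at the unpacked source tree or vendored directory before the component is ever committed. A minimal sketch (the directory path is purely illustrative):

    # Generate an SPDX SBOM for an OSS component sitting in a staging directory
    syft dir:./incoming/libwidget-2.8 -o spdx-json > libwidget-2.8-sbom.spdx.json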

SBOM, Say What?

Industry surveys and various industry mutterings suggest that less than half of the companies out there are creating SBOMs for their software. That’s not because the SBOM is a misguided concept for DevSecOps; it’s because organizations simply do not build SBOM creation into their DevSecOps processes yet. You control the gates at all phases of your DevSecOps toolchain. It’s up to you to put the tools and processes in place upfront (“to the left”) to ensure that all software from your in-house teams, contractors, partners, vendors, and OSS projects enters your DevSecOps toolchain, and ultimately your enterprise software supply chain, with an SBOM.

Accountability for the SBOM is a New Priority

Accountability for SBOMs seems to get lost in the rush to deliver new software. It’s time for that to change.

Beyond putting in the tools and processes to capture an SBOM at contract time or have your team generate an SBOM for OSS, here are some examples of how you can build in SBOM accountability inside your organization:

  • Include SBOM as a requirement for contractual deliverables from your vendors and partners, making it part of new software development, updates, and patches they deliver as part of a project.
  • Establish the SBOM as a method for cross-functional team collaboration early in the project lifecycle because your contracts, finance, development, and cybersecurity teams all benefit from a well-formed SBOM to help them accomplish critical project-related tasks.
  • Deputize a developer or development team to “own” or shepherd OSS components through your DevSecOps toolchain, giving them the responsibility of generating an SBOM for each component.
  • Elevate the role of your cybersecurity team in the SBOM discussion by mandating that the SBOM serves as the basis for vulnerability scans.

Final Thoughts

Your development, security, and operations teams have probably already put a lot of work into creating the culture, processes, and frameworks that enable your organization to leverage DevSecOps. Factoring the SBOM into those processes is simply the next iteration of that work.

Do you want to generate SBOMs on the OSS in your development projects? Download Syft, our open source command-line interface (CLI) tool and Go library for generating a Software Bill of Materials (SBOM) from container images and filesystems.

The Software Bill of Materials (SBOM) through an Open Source Lens

This post has been archived, and its content has been folded into a supporting pillar page on the Anchore blog.

Bringing Gratitude into the Workplace: Meet Emily Long

 

Emily Long (she/her) joined Anchore one week after the pandemic shut down the U.S. in March 2020; she was employee number 25. In her Chief People Officer role, Emily led the build-out of the G&A (General and Administrative, self-titled “Great and Awesome”) functions made up of Finance/Accounting, IT, Information Security, Recruiting, HR, Legal/Compliance, L&D (Learning and Development), and DEI (Diversity, Equity and Inclusion). Though she’d be the first to tell you the best part was hiring “the incredibly talented team that really does the work.”

Through her value system and leadership approach focused on empowerment, team dynamics, and trust, Emily’s impact has extended to her recent move into the Chief Operating Officer role where the Customer Success organization has joined forces with the G&A team.

In this Humans of Anchore profile, we (virtually) sat with Emily to hear about her journey over the past year, what the organization has accomplished, and what inspires her:

“I feel a deep sense of gratitude that I’ve been part of Anchore during the growth we have experienced over the last year. That gratitude is attached to what makes our growth special – that it’s always been about us accomplishing this together, as a collective company and team. Everyone here believes in what we’re doing, and every single person that works here is part of that success. Being able to partner with a team that is low ego, and high in humility and kindness, makes those wins as a company that much more fulfilling.

Everything that we accomplish at Anchore – whether that be product features, financials, training materials, or metrics – has people behind it. Teams of people collaborating together to solve problems. If we don’t understand the person that is developing code or supporting our customers, we aren’t telling the whole story. We can’t fully understand the quantitative outputs we get without deeply understanding the qualitative inputs that create it. Notably, the people behind the data.

What makes us different is not putting more or less focus on the technical or non-technical side of our business, rather putting an equal focus on both. We believe that everything is connected and we can get increased technical innovation through empowerment of our team members. And not just by saying it is important – but doing something about it. I can honestly say I’ve never worked somewhere that has focused on this more – a true example of this was hiring me originally as Chief People Officer at employee number 25.

We have worked hard to ensure the way we operate gives every team member a sense of belonging – that we take the time to understand how each unique person works, exploring their ideas, and hearing their concerns. This community of empowerment exists in a crew of almost 70 Anchorenauts. We have this infrastructure built to enable us to continue this as we scale because we have invested the time, energy, and resources through internal education, individual ownership, and structural support. We believe this is key to our success, for every employee.

Anyone who has worked closely with me hears me say all the time that I genuinely believe that people are fundamentally good. Most of the time when I’ve seen people become defensive, shut down, or act out in some way at work it has been a result of insecurity or a lack of trust that they’ve learned through past experiences – and I’ve been there before. There is honestly nothing better and more inspiring than watching someone shed away those walls they’ve built by experiencing what trust really looks like and truly stepping into their power. I get to witness this at Anchore all of the time – and each time I’m filled with a deep sense of pride and gratitude.

My ultimate goal is to have everyone at every level here at Anchore believe in, and have a path to achieve, that limitless potential that lives in each of us. Working somewhere with people that believe in each other, want to be part of a greater good, and are willing to hand the mic to someone that needs it more than they do… now there’s something really beautiful about that.”

We’re debuting our Anchorenaut logo

As we continue our culture-first series, this Friday we’re debuting our Anchorenaut logo (pronouns they/them).

By definition, an Anchorenaut is someone who embodies our company values and what being an Anchore employee encompasses: kindness, openness, and ownership. We have a tight-knit team here, even though we’re dispersed geographically across the globe. This character serves as a symbol of how, together, we’re real people uniting every day to advance software security.

Does this resonate with you? See our open roles here: https://lnkd.in/edDC7bf

 

At Anchore we’re passionate about our products and our industry

At Anchore we’re passionate about our products and our industry, but we’re equally committed to building a company with amazing people, incredible career opportunities, and an ability to make a difference. We’re thrilled to start sharing more about who we are and what matters to us through the launch of our culture-first series.

On Fridays, you can expect to learn more about who Anchore is. We’ll give you a closer look at:

The Humans of Anchore: The people (including pets and little ones!) who help shape our company.

Be Yourself. With Us: A highlight reel of new jobs and a glimpse into the people you could be working with at Anchore.

Mission: Impact: This is where we show you our programs and initiatives and how they enable us to live out our core values every day.

So, come learn more about why we’re excited to work here. And maybe a little about how you can make that a reality for you, too, someday. Come be yourself. With us.

https://hubs.li/H0G636d0

Curious what it’s like in a startup?

Curious what it’s like in a startup? As we continue our culture-first series, today we’re diving into the jobs and people at Anchore. All startups are different; at Anchore we focus on ensuring all employees, from individual contributors to the exec team, are given the opportunity to challenge themselves and explore new skill sets.

We talked to Support Engineer Tyran H. in the UK about his time on the team.

“Anchore is my first encounter working at an actual startup and is an amazing place to experience the real deal. Plus, I also have the opportunity to learn and develop technologies at the forefront of the tech world.”

Not only is Tyran part of our growing customer success team, but he was also Anchore’s first UK-based employee.

“As the first overseas hire, being welcomed as part of the family to help Anchore grow from the ground up has made settling in easy. It feels more like working on a passion-project with a group of friends than ACTUAL work, which is a massive bonus!”

Want to join Tyran and our team? Check out our latest job listings here.

From Olympic Athlete to DevOps Engineer

When Alfredo Deza came to work here at Anchore, it was early in the startup phase; he was employee #16. His path to Anchore began after a storied upbringing in his native country of Peru, where Alfredo competed as a high jumper in the Athens 2004 Summer Olympics. He then shifted his determination and perseverance to studying how to become a developer.

“My goal is to translate the stamina and work ethics from athletics to my work as an engineer. Having discipline, not letting my guard down, and doing things the right way has enabled me to propel forward in my career,” said Deza.

After building a strong skill set in software coding and engineering, he pursued a career as a software engineer and still fuels his passion for computer programming in his free time. 

Alfredo has co-authored the book “Python for DevOps” and is currently writing another book on machine learning. He teaches courses on Python and CI/CD, and was recently an expert panelist at GitHub Universe 2020.

When he’s not mentoring the next generation of developers and engineers, Alfredo spends time with his wife and three children. He consciously tries to expose his kids to new experiences and let them guide their own interests, in fact, his oldest recently taught himself to play the piano.

“When you carve your path to success with effort, you can apply the principles of ownership and see great results. If you do what you say and live with objectives, amazing things will happen.”

The White House Executive Order Is a Call to Action for Software Supply Chain Security 

Last week, President Biden released an executive order (EO) mandating a sweeping review of the federal government’s supply chains. In response to high-profile software supply chain hacks most notably the recent SolarWinds hack, the order went beyond physical supply chains to address the security of software supply chains as well.

The federal government depends on supply chains to provide material support for everything from personal protective equipment (PPE) for frontline workers to furniture for State Department facilities worldwide to complex IT systems across the DoD and civilian government agencies. Fortune 1000 corporation supply chains serve a similar role to companies and often stretch to suppliers around the world. The fact that this order makes direct mention of software supply chains is an indicator of the role software plays in our daily lives.

Treat this Executive Order as a Call to Action

The sheer complexity of software supply chain security means it’s time for commercial and public sector DevSecOps teams to learn from one another. The executive order makes the following request for a review of the information technology software, data, and services that the government purchases with its $53.36 billion IT budget.

“The Secretary of Commerce and the Secretary of Homeland Security, in consultation with the heads of appropriate agencies, shall submit a report on supply chains for critical sectors and subsectors of the information and communications technology (ICT) industrial base (as determined by the Secretary of Commerce and the Secretary of Homeland Security), including the industrial base for the development of ICT software, data, and associated services.”

While this EO directly hits systems integrators (SIs) and other businesses that do work for the Department of Defense and civilian government agencies, the EO also applies to manufacturers in the semiconductor and other industrial bases that serve as foundations to the economic prosperity and security of the United States. 

Key Considerations of Software Supply Chain Security

Every software supply chain security initiative in the aftermath of SolarWinds needs to encompass two major aspects. First, it must cover the security of the workloads or applications being constructed, and second, it must cover the security of the DevSecOps toolchain (e.g., the tools, platforms, and infrastructure) used to build the software. While cybersecurity teams have long focused on securing production infrastructure, the SolarWinds breach has called attention to the need to better secure the systems used by developers to build software applications.

Secure the Workload

Securing the software workload is a common practice that starts with setting policies to govern the use of source code from public repositories such as GitHub and ensuring software in your private repositories is properly vetted and secured. Policies should also extend to employees and contractors housing your source code in their personal source code repository accounts. Another crucial security step is to secure your source code “secrets,” such as application programming interface (API) keys, OAuth tokens, and passwords, that can provide authorized access to your application. Secrets left in source code are available to all repository contributors, whether the code is cloned, copied, or distributed. If an attacker gains access to your source code repository, your secrets can be co-opted for a malicious attack.
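Even a crude check can catch the most obvious leaks before a dedicated secret scanner is in place. A minimal, illustrative sketch (the AWS access key pattern is just one example of a secret shape, and src/ is a placeholder path); it is no substitute for a purpose-built secret-scanning tool:

    # Flag anything that looks like an AWS access key ID committed to the source tree
    grep -rEn 'AKIA[0-9A-Z]{16}' src/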

Beyond the usual source code security concerns, there is also the prospect of intellectual property (IP) theft if an attacker accesses your source code repository. High-profile software source code leaks have struck both the consumer and defense industries for various reasons.

Because modern, cloud-native applications are often composed of a myriad of containers from open source and software suppliers, securing containers should take the form of a zero-trust strategy. Because containers may enter the software development process at many different points, organizations must embed continuous compliance and security checks at each stage of the development lifecycle. You can use open source or commercial tools to scan containers for vulnerabilities and generate a software bill of materials (SBOM) to ensure the software components match the vendor contract or documentation.
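With our own open source tools, that check might look like the following sketch; the image reference is a placeholder for a container you receive from a supplier:

    # Inventory the supplier's image so you can compare it against their documentation...
    syft vendor-registry.example.com/appliance:2.1 -o table

    # ...and scan the same image for known vulnerabilities before it moves down the pipeline
    grype vendor-registry.example.com/appliance:2.1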

Secure the Toolchain

The next key security move is securing the software development toolchain: the entire set of tools and platforms used to build your workloads. Focus on the security of your DevSecOps toolchains, including your continuous integration/continuous delivery (CI/CD) platforms. Such systems can become targets for man-in-the-middle (MitM) and other attacks. Security measures include:

  • Protecting distributed applications
  • Securing wired and wireless access points
  • Mandating VPN access for remote workers
  • Creating policies and controls to protect against unapproved shadow DevOps tools

Ensuring the security of your test/QA environments should be an integral part of your QA infrastructure build-out. It helps to have QA staff with a grounding in secure architecture to make this happen. If nobody on your QA team has that skill, look for ways to assign a member of your security or operations team to support the build-out of your QA infrastructure.

Security of production infrastructure is the area where most security teams already spend significant effort, including implementing threat detection, runtime application self-protection (RASP), and related measures. Expanding the security focus to include the systems used in earlier stages of the software development process provides additional layers of protection and improves the overall security of the software supply chain. 

Take Action

Now is the time to reevaluate your software supply chain security. Here are some immediate steps to take:

  • Audit your source code repositories, especially access management and secrets security.
  • Examine the feasibility of a zero-trust security strategy for your source code and work with your DevSecOps teams to create a roadmap/plan on how your organization can implement such a strategy to protect your source code.
  • Implement tools to enable your DevSecOps team to generate an SBOM for open source and commercial software components you are onboarding to your toolchain.
  • Audit your DevOps toolchain security and architecture if you haven’t already done so, with an eye on protecting distributed applications, securing wired and wireless access points, mandating VPN access for remote workers, and creating policies and controls to protect against shadow DevOps tools. 
  • Audit the security of your Test/QA infrastructure, focusing on the architecture and then the same security elements as the rest of your DevOps toolchain security audit.
  • Reinforce your application production security with threat detection, RASP, and other tools to shield your production applications from attackers.

Request a demo today to learn how Anchore Enterprise can help secure your DevOps toolchains and software supply chains!

Charting your DevSecOps Stakeholder Spectrum

The adoption of DevSecOps touches more than just your technology and security stakeholders within your organization. There’s a full spectrum of DevSecOps stakeholders spanning technology, security, and even your business units.

The full DevSecOps Stakeholder spectrum includes:

Technology Stakeholders

The obvious stakeholders to feel some positive effects and challenges of moving to DevSecOps are your technology leaders, such as your chief technology officer (CTO), chief information officer (CIO), and engineering VP.

Their motivations typically include developer productivity. Security teams also can become more productive because of a DevSecOps transformation through automation and adjustments to job roles and processes.

DevSecOps is a mighty robust preventative measure to keep these stakeholders and their teams from getting caught up in expensive security remediation efforts that draw attention away from their regular duties.

An essential role for the technology stakeholder is to be the internal champion for DevSecOps or be the one to empower somebody on their senior staff to be that champion. The DevSecOps champion at the stakeholder level needs prompting to represent your organization’s current and future needs at high-level strategy and budget discussions.

Security Stakeholders

If your organization has a chief information security officer (CISO) or chief security officer (CSO), they are a significant element in your DevSecOps stakeholder spectrum.

Duties of a security stakeholder focus on managing and maintaining the security posture of build environments, the software supply chain, and end products. The CISO, often with the CIO, may also represent the organization publicly on security matters such as a recent attack on or breach within your organization.

Business Stakeholders

You can’t dismiss the role of business stakeholders in DevSecOps either. These are the business unit leaders who may feel the most impact from DevSecOps. The good news is that such effects are positive if the units work with the technology team to put the right processes, frameworks, and content in place to tell the story of how DevSecOps benefits your organization. Here are some typical DevSecOps stakeholders on the business or back-office side of your organization:

Sales 

Your sales leaders and representatives gain many benefits from DevSecOps that you can’t gloss over. Positioning the benefits of DevSecOps with sales leaders and their teams can help them land prospective customers.

If your company has clients in the public sector or in the financial services and healthcare industries, DevSecOps can help your applications achieve compliance more quickly since your organization has shifted security and compliance left.

DevSecOps is becoming an emerging requirement on DoD and civilian government agency procurement vehicles. If your business works with these entities, then you want to arm your sales stakeholders with the correct talking points about your company’s DevSecOps efforts.

Marketing

DevOps culture transformation shouldn’t just be about your development, operations, and security teams. Marketing stakeholders such as your chief marketing officer (CMO), VP of marketing, or marketing director need visibility into your DevSecOps efforts just like other parts of your organization.

Your marketing stakeholders need a share in the collective responsibility to ensure that the software your organization delivers meets expectations and is a market fit for the business customer you’re pursuing.

Marketing teams supporting the launch of new products and services need constant visibility into DevSecOps project progress. Likewise, developers gain a view of marketing activities. The days of surprises in marketing collateral should be no more in a DevOps culture. DevOps also offers sales organizations a conduit to communicate customer feedback and requirements into the development cycle, so incremental releases can include customer-requested features.

Automation is a priority in DevSecOps. It’s up to you to educate your marketing team on how automation changes how your organization delivers software internally and externally. 

Finance

The chief financial officer (CFO) is increasingly seen as a strategic role considering the pandemic’s effects on business. Similar positions in federal government agencies are seeing a similar shift as agencies juggle budgets to support their mission, constituents, and employees.

Even the finance department has a potential role in your DevSecOps process. While an accountant may not be billing their time to your DevOps projects, there’s work for them with facets of software license management, plus your cloud spending.

Cloud economics and cloud cost optimization are integral elements of digital transformation projects these days, just like DevOps. Don’t forget to add the finance team to meetings when building out reporting requirements for the DevOps toolchain, cloud migration, and cloud management solutions that’ll power your software development efforts.

Legal

Your organization’s chief legal officer (CLO) or outside counsel is another link in the DevSecOps stakeholder spectrum. Legal counsel is helpful as software licensing becomes more complex and open source and commercial software components come together in product development. There are also potential legal issues around licensing and contracts as you establish software supply chains, which is where having a legal stakeholder comes in handy.

When you make legal counsel part of your DevSecOps stakeholder spectrum, you can count on the right software licensing questions and concerns being raised before your organization makes a costly intellectual property (IP) or licensing mistake.

Final Thoughts

DevSecOps not only transforms how you develop and secure software, it also transforms your business or agency business units. Like it or not, DevSecOps makes software development a truly cross-functional effort. It’s up to you to bring the DevSecOps stakeholders together to ensure the success of your DevSecOps initiatives.

Your DevSecOps Toolchain: 6 Steps to Integrate Security Into DevOps

This post has been archived, and its content has been folded into a supporting pillar page on the Anchore blog.

Creating a DevOps to DevSecOps Framework for your Organization

A DevOps to DevSecOps transformation works best with a structured framework acting as governance. When you approach such a transformation, putting structure around it allows you and your teams to stop, ask questions, and iterate on potential changes to your existing DevOps processes.

Here’s a simple framework to help ensure an orderly DevOps to DevSecOps transformation:

1. Outreach and Education

A move from DevOps to DevSecOps is far from strictly a technological affair. Such a transformation will significantly impact your developers, sysadmins, project management, and stakeholders. A DevOps to DevSecOps transformation touches other business units by enabling them to deliver projects to their customers at a higher velocity and more securely. There may also be changes in how your other business groups interact with your development teams on new feature requirements and related matters.

Outreach and education with developers can take a couple of forms. First, you want to seek developer participation in your transformation. It’s time to create internal advocates and champions for DevSecOps. That’s a little easier to do if your organization is already a DevOps shop. However, you’re going to need to extend your outreach efforts to your security team as well. When you build advocates and champions for the move from DevOps to DevSecOps, your development teams become self-policing against the fear, uncertainty, and doubt that can spread among developers who aren’t ready to make the move. You can only do so much to change developer attitudes if you’re not working with them daily and don’t understand their pain points.

Sysadmin outreach plays out much the same way as developer outreach. Build allies. Work with them through any potential changes to their job duties, especially for bringing security checks and scans into their daily work.

There are also outreach and education considerations that may take you outside your IT department to educate the business about DevSecOps. A move to DevSecOps is going to affect the executive management team that oversees the DevOps teams, so you want to set expectations about the benefits of DevSecOps and automation.

Exiting the outreach and education phase means you’ve met with your developers, sysadmins, stakeholders, and management. You’ve also delivered DevSecOps training, whether in-house or through an outside provider, to help upskill your developers and sysadmins and teach them about the benefits of DevSecOps. During this phase, security training may also extend to secure coding and vendor training on the DevSecOps security tools you plan to implement.

This phase is also a good time to survey the market and seek potential vendors to help you integrate security. Visit their websites. Watch their online demos. Ask thoughtful questions about their products and services in online DevSecOps communities such as DevOps Chat and The DevOps Institute. You should also seek solutions and insights from the open source community that may help you secure your DevSecOps delivery cycle.

If you work inside a large corporation or government agency, establishing a DevSecOps Center of Excellence (CoE) brings together DevSecOps expertise from across your organization and channels it into solving the technology and cultural challenges your organization might face in its move to DevSecOps.

2. Implement Security across your Toolchains

Just as you give your DevOps teams a choice in deciding on their DevOps tools, the same should apply to their DevSecOps security tools. Whether teams choose open source or vendor-based DevSecOps solutions, the only caveat is that your organization’s requirements must be met.

This phase may overlap with the Education and Outreach phase depending on your organization’s schedule and related factors.

While you’re building out your DevOps toolchains with additional security tools and features, it’s also a good time to audit the security of your DevSecOps toolchain itself. Remote DevOps teams and the growing prevalence of cloud-based DevOps tools in today’s toolchains make for attractive targets. Attackers are targeting toolchains with man-in-the-middle (MitM) attacks to compromise the development environment.

Exiting this phase means your new security measures are in place and generating your required reports. Depending on your organization’s maturity and situation, it may also mean improving the security of your access controls and endpoints against future attacks.

3. Pilot Project

There’s no better way to confirm that the tools and processes you’re putting in place for your DevOps teams to move to DevSecOps are working than a real-life pilot project. Pick a small internal project with an owner who’s keen to move to DevSecOps. Put your best people on the project and use it as a learning opportunity for your developers and sysadmins. It’s also an opportunity to educate your business stakeholders about the benefits and virtues of DevSecOps because you can show business value.

Exiting the pilot project stage can happen once the project is live and you’ve captured the lessons learned and rolled them back into your DevSecOps processes.

4. Full DevSecOps

Once you hit full DevSecOps, your job still isn’t done. When your organization hits this point, it’s time to take a continuous learning and collaborative approach to development and operations to speed up and secure your software delivery.

Don’t forget an ongoing feedback mechanism once your DevOps teams move into full-on DevSecOps. You want to take in developer and sysadmin feedback to apply lessons learned through a DevSecOps Center of Excellence (CoE) or another forum where your organization can intake the information without filtering through management, bureaucracy, or corporate politics.

DevSecOps isn’t meant to be a static state of being. You need to put your lessons learned into practice. You also need to offer ongoing training to your teams moving to DevSecOps.

Final Thoughts

Taking a systematic approach, such as an adoption framework, to moving from DevOps to DevSecOps provides your teams and stakeholders with enough structure to transition their projects and job roles into this new way of developing software securely. It also allows you to discuss your progress with your DevOps teams and their stakeholders at mutually agreed times during your transformation.

5 Ways a DevOps to DevSecOps Transformation Changes Teams for the Better

Whether your organization is moving from DevOps to DevSecOps or making the initial step from a traditional waterfall software development life cycle (SDLC) to DevSecOps, you need to account for how DevSecOps is going to change your teams.

Here are five changes your teams can expect when your organization moves to DevSecOps:

1. Security becomes part of everyone’s job.

Perhaps the most significant change that comes to DevOps teams when they take the next step in their DevOps journey to move to a DevSecOps model is that security becomes part of everyone’s job.

It starts with incorporating security from day one of the project, whether it’s a new cloud application your organization is launching or an update to an existing application. When you take this first step, security stops being the “Department of No,” and the “us versus them” dynamic fades when you’re seeking approvals or collaborating on resolving security issues.

Another step to security becoming part of everyone’s job is to provide security training for your developers. Training could take the form of in-house developer training in casual formats such as Lunch and Learns or more formal training classes conducted by your organization’s training department. Another option is to send your developers to a third-party training provider to get the requisite security training. Depending on your security ambitions (and budget), there is always the option to send your DevOps team members to get a DevSecOps vendor certification such as the DevSecOps Foundation certification from the DevOps Institute or the Certified DevSecOps Professional (CDP) from practical-devsecops.com.

Finally, it’s also critical to document secure coding standards for developer onboarding and their later reference to ensure security becomes a part of the developer’s job.

2. Priorities may change

Trust between your security and DevOps teams will not improve overnight. The key to building trust between these teams, traditionally at odds in some organizations, is prioritizing results. Both teams need to set priorities that are best for project delivery and the overall business. For example, some software bugs the team encounters during development may not be enough to halt a product release, and the development and security teams need to agree together on which issues block the release and which can be fixed in a later iteration.

3. Instills a Fail Fast culture

Unfortunately, in some corporate cultures, even the thought of failure can paralyze software development projects or keep the teams working in an endless loop redoing work to avoid releasing a product.

When “Fail Fast” becomes part of team culture, developers identify bugs as they build. That’s a big contrast to the days when developers or QA would work on bugs during the last few days (or hours!) before product launch. DevSecOps culture enables developers to take the time to fix issues in development versus spending hours or days fixing the issue once your application is in production.

Elements of a “Fail Fast” culture include:

  • Failure becomes a learning experience for the team rather than a career-ending incident.
  • Teams document lessons learned from the failure and put in the tools and processes to ensure the failure doesn’t happen again.
  • Testing, QA, and remediation are recurring aspects of the DevSecOps lifecycle, enabling developers to find bugs and issues during development before your application hits production.
  • Asking for help becomes the rule rather than the exception, with team members not worrying about losing face in front of their fellow team members and management.

“Fail Fast” cultures aren’t born overnight. Creating such a culture requires management support, accompanied by building trust across the teams who work together to deliver software. Most of all, management needs to lead by example and show their teams that failure is a learning opportunity, so take the extra time to put actions behind words if you’re a manager or stakeholder.

4. Increases Transparency

DevSecOps is no place for job security through obscurity. The DevSecOps culture warrants transparency between developers, security, and operations teams during their work. Increasing transparency requires effort from everybody to open up collaboration between groups.

Here are some examples of how DevSecOps increases transparency:

  • Security teams enter the DevOps lifecycle, ensuring developers, security, and operations teams see everything through the same lens while working together.
  • Teams begin to use the “same language” since they are now working together during the development lifecycle.
  • As engagement between development, security, and operations teams grows, it can finally be possible for these groups to see they are all aligned to a single goal and start dropping the “us versus them” attitude that can sometimes affect how these teams collaborate with each other.

Another element of transparency that you should monitor on your teams is engagement. Be prepared to work with teams and staff who might have been working in silos either deliberately or through no fault of their own.

5. Treat Metrics as your Missing DevOps Team Member

Tools across the DevSecOps toolchain are chock full of data for teams and their stakeholders to track. DevSecOps culture enables teams to tell their data-driven story to internal stakeholders such as executives and project sponsors.

Some metrics to consider capturing during your delivery lifecycle include:

  • Reduced Total Security Tickets Opened
  • Discovery of Preproduction Vulnerabilities
  • Reduced Time-to-Remediate
  • Reduced Failed Security Tests
  • Percentage of Security Audits Passed

When choosing DevSecOps tools, make sure that analytics and reporting tools are prominent in your requirements.  Also, be prepared to iterate on metrics and reporting as you and your stakeholders learn more about your organization’s reporting capabilities and needs.

DevSecOps and Other Business Units

Also, keep in mind that many of the changes that DevSecOps demands of development and operations teams also trickle out to the business units that sit around the periphery of software development projects. For example, business stakeholders will have to factor more security requirements into their project requirements. Your executives have the opportunity to tap into more backend data from your DevSecOps toolchain and view actionable data through dashboards that your teams can set up and configure.

DevSecOps also provides your compliance auditors with new options for tapping into security and application data that would have required extra efforts in the days of waterfall software development.

Final thoughts: Building the Better DevSecOps Team

Setting up a DevSecOps team for success isn’t about just throwing some security tools into your existing DevOps toolchain and considering it done. It’s about cultural transformation and transparency. The first employees to feel that change will be in your development, operations, and security groups. These are also the team members whose buy-in you need for your DevSecOps vision.

Anchore Enterprise 3.0 introduces New Features to Secure the Software Supply Chain

Hopefully, heralding the start of what is a happier new year for everyone, today we are pleased to announce the availability of Anchore Enterprise 3.0. Over the past 18 months since our last major release, much has happened in the world of software security (and beyond!). From the software supply chain becoming a national security issue to the major developer platforms prioritizing DevSecOps in their roadmaps, the practice of hardening the entire software-delivery lifecycle is now front and center of all organizations. 

We’ve taken a hard look at the fundamental challenges of taking a true “shift left” approach to cloud-native security. Finding vulnerabilities or flagging issues in software is not difficult. Every piece of software has something that could be of concern. The critical challenge is doing it in a way that reduces developer friction by avoiding noise, providing relevant context, and offering clear remediation steps. Let’s drill into the major features of the 3.0 release, which help our customers achieve these goals. Check out our launch video:

Bringing the Kubernetes Context to Container Alerts 

Anchore Enterprise has integrated with Kubernetes for a while, blocking images from being deployed if they fail to meet security standards. With 3.0, we’ve taken that a step further and now connect into the operational Kubernetes environment to catalog the running instances with our Kubernetes Inventory feature. We can flag any containers that have active vulnerabilities or are failing policy or compliance checks among workloads that have run or are running in Kubernetes. By marking the relevant image digest in your registry as currently running in production and as a security concern, developers can more easily prioritize their response to alerts.
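
As a rough illustration of what a runtime inventory involves, here is a minimal sketch (not Anchore’s implementation) that uses the official Python Kubernetes client to list every image currently running in a cluster. Correlating those image digests with scan results is what lets teams prioritize alerts against what is actually deployed.

    # Minimal sketch of a runtime image inventory: list every image currently
    # running in the cluster so scan results can be prioritized against what is
    # actually deployed. Assumes the official "kubernetes" Python client and
    # kubeconfig access; illustrative only, not Anchore's implementation.
    from collections import Counter
    from kubernetes import client, config

    def running_image_inventory() -> Counter:
        config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
        v1 = client.CoreV1Api()
        images = Counter()
        for pod in v1.list_pod_for_all_namespaces(watch=False).items:
            for status in (pod.status.container_statuses or []):
                # status.image_id usually carries the registry digest, the stable
                # identifier to correlate with scan results
                images[status.image_id or status.image] += 1
        return images

    if __name__ == "__main__":
        for image, count in running_image_inventory().most_common():
            print(f"{count:3d}  {image}")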

A Distributed Model for CI/CD Scanning

Many security tools in the container space wait for an image to be published to a registry by the CI/CD system before scanning it, adding performance overhead and placing the burden of scanning on a central security platform.  To distribute the processing effort and improve pipeline speed by reducing network traffic, we’ve completely refactored how we integrate with GitLab, GitHub, and other popular CI/CD tools. Our new Go-based Pipeline Scanner tool creates the Software Bill of Materials (SBOM) from the container image locally in the CI/CD platform itself and sends the results to the central Anchore system. Anchore then sends back the policy result to pass or fail the CI job. This new deployment model improves the operational cost of running the Anchore system while also simplifying the security scan’s operational overhead in the pipeline.
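
To make the distributed pattern concrete, here is a minimal sketch of what a CI job in this model does, using the open source Syft CLI as a stand-in for the local SBOM generation step. The upload endpoint, token, and response fields are hypothetical placeholders, not Anchore’s actual API.

    # Sketch of the distributed CI pattern: generate the SBOM where the image was
    # built, then hand only the SBOM to a central service for policy evaluation.
    # "syft" is the open source SBOM generator; the endpoint, token, and response
    # fields below are hypothetical placeholders, not Anchore's actual API.
    import os
    import subprocess
    import requests  # third-party HTTP client

    def generate_sbom(image: str) -> bytes:
        # Runs Syft locally in the CI job; only the SBOM leaves the runner.
        result = subprocess.run(
            ["syft", image, "-o", "json"],
            check=True, capture_output=True,
        )
        return result.stdout

    def submit_sbom(sbom: bytes) -> bool:
        resp = requests.post(
            os.environ["SBOM_ENDPOINT"],  # hypothetical central service URL
            data=sbom,
            headers={
                "Authorization": f"Bearer {os.environ['SBOM_TOKEN']}",
                "Content-Type": "application/json",
            },
            timeout=60,
        )
        resp.raise_for_status()
        # Assume the service answers with a policy verdict the job can act on.
        return resp.json().get("policy_status") == "pass"

    if __name__ == "__main__":
        sbom = generate_sbom(os.environ.get("CI_IMAGE", "alpine:latest"))
        raise SystemExit(0 if submit_sbom(sbom) else 1)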

Helping Security Teams help Developers

You have security information from the pipeline scan and situational awareness from the Kubernetes inventory. What should a developer do next? Our Remediation Action Plans are a brand new addition to our user interface, allowing security teams to generate clear instructions for developers on how to resolve security alerts. Pre-populated suggestions created by Anchore Enterprise can be combined with contextual information and then sent out to popular messaging or ticketing systems.  This powerful combination can help developers, who are probably not security experts, understand the options available to them, resolve issues faster, and allow additional research to be passed along by the security team. 

Reducing False Positives 

False positives are the bane of all security teams, wasting time and effort on wild goose chases. On the container input side, we added a “Hints” capability in 2.4 which allowed developers to explicitly describe software content to help improve vulnerability matching and reduce false positives. In 3.0, we’ve added a new False Positive Management feature so security teams can modify artifact metadata after it has been extracted by Anchore’s system. This allows gaps or inaccuracies in the data generated during a scan to be fixed, ensuring better fidelity with associated fields in the vulnerability database. This is especially useful for certain language artifacts, such as Java, which either have inaccurate metadata or no metadata at all.

Finally, DevSecOps teams often deal with false positives by allowlisting specific packages. This provides a quick way to unblock a pipeline. However, allowlists tend to linger, turning what is often intended to be a temporary reprieve for an image into a permanent exception. This in itself can become a security concern as fixes are never followed up. With our new Allowlist Expiration feature, security teams can ensure that exceptions to deployments are time-limited with a customizable expiration.
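
To illustrate why expiring exceptions matter, here is a conceptual sketch of a time-limited allowlist entry. The field names are hypothetical and this is not Anchore Enterprise’s actual data model; it only shows the idea of a reprieve that lapses on its own instead of lingering.

    # Conceptual sketch of a time-limited allowlist entry; field names are
    # hypothetical and this is not Anchore Enterprise's actual data model.
    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone
    from typing import Optional

    @dataclass
    class AllowlistEntry:
        vulnerability_id: str   # e.g. "CVE-2021-0001"
        package: str
        reason: str
        expires_at: datetime

        def is_active(self, now: Optional[datetime] = None) -> bool:
            now = now or datetime.now(timezone.utc)
            return now < self.expires_at

    # A 30-day reprieve: the pipeline unblocks today, but the exception lapses
    # automatically instead of becoming a permanent blind spot.
    entry = AllowlistEntry(
        vulnerability_id="CVE-2021-0001",
        package="example-lib",
        reason="fix scheduled for next sprint",
        expires_at=datetime.now(timezone.utc) + timedelta(days=30),
    )
    print(entry.is_active())  # True until the expiration date passes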

Looking Forward

Anchore Enterprise 3.0 will continue to receive regular updates throughout the year. We plan to keep adding more runtime features and deeper integrations with CI/CD platforms while improving the fidelity of security information. As ever, we look forward to hearing from customers, and do sign up for our upcoming Anchore 3.0 webinar, How to Secure Containers Across The SDLC.

DevSecOps and Defense in Depth for Software Supply Chain Security

One challenge that needs addressing in the software supply chain security fight is the balance between agility and redundancy in enterprise security strategies. There’s no better example of that than the recommendations about moving to DevSecOps and implementing Defense in Depth to improve your software supply chain security.

DevSecOps and Software Supply Chain Security

The shift left movement that DevSecOps offers can be vital to securing software build environments. DevSecOps is the next step beyond DevOps, a cultural change that brings security into DevOps rapid release cycles.

DevSecOps is built for agility and velocity. It relies on a range of open source tools to automate the software build cycle. It’s also not uncommon for organizations to put their own spin on DevOps and DevSecOps to meet their security and compliance requirements. There’s plenty of room for enterprises adopting DevSecOps to “build to suit,” which can make it challenging to maintain DevSecOps standards across vendors serving a software supply chain.

The cultural changes that DevSecOps brings to software development can almost be more important than the tooling because it brings security concerns into the development lifecycle versus making security the last stop (and the last night) before applications hit production. The DevSecOps culture stresses:

  • Transparency yields trust with sharing between the DevOps and security teams inside enterprises
  • Shared goals and metrics, with DevOps and security teams cooperating to achieve the compliance and security outcomes they are both measured on

While often an ideal, these cultural norms have a lot of applicability to securing the software supply chain. Transparency along the software supply chain builds trust. That can play a couple of different ways. When you build trust with your vendor teams along your supply chain, it becomes easier to share information and collaborate on security and operational challenges. Many of us are also working under challenging personal and professional circumstances during this pandemic. It only helps that you have clear lines of communication open to set expectations. You also want to create an environment where your team, not to mention vendors, can feel safe asking questions and bringing up technology and business issues.

Defense in Depth and Software Supply Chain Security

Another security technique bound to gain attention in the fight against software supply chain hacks is Defense in Depth. Typically, a security strategy of large enterprises with big budgets, Defense in Depth employs multiple layers of security controls so that if one layer fails, other layers remain operating.

No enterprise can say that its systems are 100% secure. That goes for any organization working on your software supply chain. Otherwise, there’d be no need for such drastic security measures as Defense in Depth. Nor would you need system redundancies because attackers wouldn’t be able to exploit your systems. In reality, the state of software supply chain security isn’t going to change much in the next year or even five years. Thus, it behooves security teams across the supply chain to look to security measures such as Defense in Depth to put in “sea walls” with the attitude that eventually, a wave may crash over the wall.

Defense in Depth includes three layers of controls:

  • Physical layer, which controls the physical access to IT systems, including fences and human guards.
  • Technical controls such as fingerprint readers, authentication, and data encryption that prevent access. 
  • Administrative controls are an organization’s policies and procedures to ensure security and compliance requirements are met. Policies include hiring, onboarding, and other processes that govern how technology teams do their work.

There’s no real cultural shift that Defense in Depth brings with it. Yet it’s essential to consider how introducing system redundancies affects developers’ and sysadmins’ routine day-to-day work. Specific job roles and metrics would undoubtedly have to adjust to running, managing, and securing redundant systems.

DevSecOps and Defense in Depth

There are many questions about how coopetition between DevSecOps and Defense in Depth could work for the average enterprise. Both security strategies have their purposes.

The cultural aspects of DevSecOps, especially when it comes to transparency, still relate very well to Defense in Depth. Sooner or later, large enterprises scrutinizing their software supply chain security need to start paying attention to the people aspect of software supply chain security — transparency, insider threat, security training, and communications. 

The people aspects of software supply chain security are bound to come under additional scrutiny in some large enterprises. There are some lessons for everybody to learn from how DevSecOps handles culture and metrics that can transfer over to Defense in Depth.

System redundancies are where DevSecOps and Defense in Depth are at odds. The reference architecture of the typical DevSecOps toolchain is lean and mean, without redundancies. Some organizations even allow their development teams to choose their tools to build out their toolchains.

Somehow, the smart enterprises will cherry-pick from DevSecOps and Defense in Depth to create a solution within budget that can improve their software supply chain security.

Final Thoughts

Risk mitigation around software supply chains is going through an awakening post-SolarWinds. DevSecOps and Defense in Depth both help mitigate a range of significant security risks. The gravity of a software supply chain attack brings home the reality that despite your best preparations, it’s essential to acknowledge that you’ll be hacked. It may not be your enterprise directly but could be one of the vendors along your supply chain. So, you must do the best you can as a security organization. Put in the right tools. Institute best practices. Train your developers and security teams in best practices. Incentivize your vendors to follow suit.  But most of all, you should have a response plan in place to augment your security tools and strategy.

 

5 Critical Job Skills for Software Supply Chain Security Professionals

When auditing your software supply chain security, it’s important not to forget building and maintaining the job skills of your software supply chain security team. Building skills amongst your software supply chain security team and setting expectations for skills and experience amongst your supply chain vendors is a prudent investment as you prepare for a potential attack in your future.

Here are some job skills to build and refresh amongst your technology teams and vendors who make up your software supply chain:

1. DevSecOps

The agile nature of the software supply chain, combined with its operational complexities, necessitates people with DevSecOps skills and experience. DevSecOps brings your security team and tools into the DevOps life cycle. Designating DevSecOps as a desired job skill for your internal and vendor software supply chain teams also gives your teams a common framework, operational expectations, and terminology that can help improve operations across your supply chain.

Building upon and validating DevSecOps skills is still a nascent activity. There are few industry certifications right now for DevSecOps. The DevSecOps Foundation certification from The DevOps Institute is one certification you can have your software supply chain team members pursue to level set DevSecOps skills across your teams and vendors. If DevSecOps certifications aren’t workable because of timing and availability, then consider the DevSecOps courses on learning sites such as LinkedIn Learning, Cloud Academy, or A Cloud Guru. 

Another option for DevSecOps skills training is to create your own internal training program for all engineers and architects involved in your software supply chain. Your team members who are in charge of software supply chain security need to partner with every vendor team that touches any part of your product during the DevSecOps life cycle to validate that security is an integral element of the vendor’s software delivery organization’s tools, processes, and culture. 

2. Oral and Written Communication (Soft Skills)

Securing the software supply chain of today requires that all the participants have soft skills. Relationship building is critical behind the scenes of software supply chain security as you often have to interact with executive decision-makers, management stakeholders, and counterparts on their vendor teams.

Extending the need for soft skills outside your own enterprise, dealing with vendors and suppliers across your supply chain in times of regular business and crisis situations requires strong oral and written communications skills and even empathy.

Building up soft skills takes practice. While there are various online platforms that offer soft skills training, sometimes the best training is having your managers and team leads set the example so you establish a culture where soft skills are seen as a benefit and not a weakness.

3. Analytical Skills

Software supply chains add additional levels of complexity to software delivery. With more complexity comes more opportunities for things to break down at points across your software supply chain.

Building up analytics skills on your teams can take a couple of forms. Most commonly, it’s thought of as a personal learning pursuit. However, DevOps teams have the advantage of using retrospectives and post-mortems to showcase the analytical thinking skills of their senior team members. These meetings also give you the opportunity to put a structure or framework around troubleshooting and analysis.

4. Cloud Architecture

As the cloud is playing a predominant role in the software supply chain, it’s time to make wise staffing investments in cloud architects. Yes, cloud architects are an in-demand role, as Google shows in their use of trusted cloud computing to secure their own software supply chains.

While Google is an extreme example of how cloud architecture skills play into software supply chain security, cloud infrastructure is a growing attack vector. You want that skill set in your organization. It’s also a skill set you want across your vendors.

Cloud architecture is an in-demand job skill. Fortunately, cloud architect training options abound. Each of the major cloud services providers has solution architect certifications. Your employees and partners can take the training and even their certification tests online.

5. Documentation

You can’t run a software supply chain, with its needs for processes, frameworks, and policies, on oral history, email inboxes, or Slack channels alone. You need to create a documentation culture with the job skills to go with it. Outside of the security requirements you place in your vendor contracts and RFPs, written documentation is necessary to educate your vendors about the standard security practices they must follow to remain in good standing within your software supply chain.

For example, a best practice is for vendors to document their software and hardware design and build processes to ensure the processes are repeatable and measurable. Such documentation should already be part of the cost of doing business if your product must meet compliance standards such as FedRAMP, Sarbanes Oxley (SOX), or the Health Insurance Portability and Accountability Act.

Building documentation skills isn’t about throwing contract technical writers at the problem to produce some documentation for your supply chain as part of a rapid-fire, one-and-done project. Rather, documentation needs to become part of team members’ jobs and your vendor requirements. Options for building documentation skills include:

  • Embed technical writers or editors amongst your teams to act as writing coaches for technical staff tasked to document the systems and processes they support
  • Create documentation templates for the major document types you require from vendors, and provide job aids, documentation kickoff meetings, and follow-up support
  • Showcase examples of well-done documentation in team and vendor meetings

Final thoughts

As your teams work to improve their support of software supply chain security, you’re going to encounter many judgment calls. The job skills in this blog post all have one thing in common: they require continuous learning in order for your internal teams and vendors to be successful. As the software supply chain becomes the latest attack vector for nation-state and other attackers, it’s in your best interest to give your teams and vendors the tools they need to succeed.

7 Trends Lining Up to Fight Software Supply Chain Attacks

Software supply chain attacks are going to be forever on the minds of CISOs and DevSecOps teams as commercial and public sector enterprises look for ways to avoid the headlines as the next SolarWinds.

Now’s the time for technology, collaboration, and compliance processes to come together to help protect software supply chains. Here are seven trends that paint a future picture of how it may all work:

1. Compliance Strengthens in the Face of Supply Chain Security

We’ve yet to see a full response from the compliance world — HIPAA, Sarbanes Oxley (SOX), and PCI-DSS —  in the aftermath of the SolarWinds compromise. Responses from the compliance community are certain to come out as healthcare, financial services, and other compliance-governed industries seek advice and counsel about what makes for a compliant software supply chain to their auditors.

One compliance standard built for multi-vendor environments is the United States Government’s Federal Risk and Authorization Management Program (FedRAMP). When you peel away the bureaucracy and the politics, the federal government is just one big compliance play. They have the experience of integrating multiple vendors from large systems integrators, taking small to midsize companies into their software supply chain while maintaining security and compliance. However, despite all the goodness of FedRAMP, there’s little adoption outside of businesses that do government work.

2. Defense in Depth Protects the Software Supply Chain

Defense in Depth is a strategy where you treat every piece of software that you bring into your software supply chain as a potential malicious actor, regardless of whether the source is your own developers, external suppliers, or open source. You run and monitor the software accordingly.

However, Defense in Depth is extremely expensive to implement in practice.

3. The Rise of Code as an Attack Vector

A software supply chain attack, such as we saw with SolarWinds, will put a renewed focus on code as an attack vector. DevSecOps, with the “shift left” culture it brings, will increasingly become a necessity for development teams up and down the software supply chain. Part of the shift-left culture is to equip your developers with lightweight tools to use locally to scan their code.

You can expect to see other mitigation strategies on the rise, such as developing a robust code composition strategy with a detailed software bill of materials (SBOM). Such strategies come with a fully documented and verified chain of custody for all software code entering the supply chain. Your developers can also run a secrets scanning tool on all code repositories and include active monitoring. You should also have your developers or in-house cybersecurity team search for your code in public Git repositories, especially GitHub and GitLab. Finding your code once means there’s bound to be more of it out there.

Another strategy is to invest in tools to monitor code integrity and git misconfiguration. Deploy these tools in your DevSecOps pipelines and ensure that your supply partners are using the same or similar tools in their DevSecOps toolchains.

4. OSS and the Future of Supply Chain Security

There are bound to be more questions than ever about open source software (OSS) in enterprise software supply chains across industries and governments. While OSS is foundational to leading DevOps, DevSecOps, and cloud-first initiatives, that will not stop technical and business leaders from asking even more questions about sourcing this critical software.

There are still more questions than answers about what the post-SolarWinds world may look like for major OSS projects prevalent in the enterprise. It may lend even more credence to the Red Hat business model and perhaps breed more companies taking ownership over some more facets of OSS.

Some in the OSS community are calling for more corporate open source citizenship, with for-profit companies and even government agencies donating money, time, and expertise back to the OSS projects they use most to improve those projects’ code security.

On the security front, it could mean that enterprises depending on OSS adopt mandatory code scanning tools to vet all the open source code entering the toolchain. It could even lead to a formal onboarding process for open source code.

5. DevSecOps for Everybody!

DevSecOps could become one cost of entry for partners in a software supply chain soon. While criticisms about the validity of DevSecOps abound in some technology industry circles, the current situation offers the DevSecOps movement a time to prove itself.  The benefits of well-executed DevSecOps across software supply chain providers include:

  • A common language for developers, security, and operations teams
  • Improved access to actionable data from backend development, build, and QA systems for reporting to stakeholders and auditors
  • A development model that relates well to iterative software development and automated security testing at each phase of the development cycle

In the future, larger enterprises may demand contractually that all their suppliers follow a DevOps/DevSecOps model of development. In reality, such standardization of practices could be difficult, if not impossible, to enforce. Still, startups already standardized on agile and DevOps gain a leg up if they have salespeople who can bring home large enterprise accounts.

6. Transparency Grows in the Software Supply Chain

DevOps and DevSecOps have driven home the need for transparency in the development lifecycle. Transparency needs to grow throughout the software supply chain during 2021 and beyond. One solution to make that happen is in-toto, a software supply chain security framework that currently includes integrations with Git, Docker, Datadog, and other open source build tools.

7. Zero Trust across the Supply Chain

As concerns over software supply chain security grow in 2021 and beyond, zero trust security solutions may come to bear in the next generation of software supply chain security. However, there are some caveats to this approach. Zero trust security is still an emerging category (albeit with great potential). My colleague, Andre Neufville, wrote about how zero trust security can protect containers. It’s a technology bound to find itself in the software supply chain of the future because it treats all code as equally malicious while giving developers and security teams the tools and processes to help mitigate the associated risks.

Final Thoughts

There’s no denying that software supply chain security will forever remain a challenge. Even so, that doesn’t mean your organization can’t be forward-thinking in the technologies and strategies you put in place to mitigate risks across your software supply chain.

Preparing for Future Software Supply Chain Attacks

Questions around software supply chain attacks aren’t leaving the industry conversation anytime soon because of the SolarWinds attack. It’s time to review your software supply chain security fundamentals. Now that we’re in 2021, we can all expect newfound attention on securing the supply chain inside business and government.

Let’s first define the role of the software supply chain in modern software development.

Software Supply Chain Explained

To prepare themselves for software supply chain attacks, teams need to understand the software supply chain’s operational role in their product development and services activities. 

Much of the traditional security focus inside commercial and public sector enterprises is about compliance with end-user security. DevSecOps — still in its infancy — is starting early adopters on a journey to bring security into the start of the DevOps life cycle. It’s also breaking down the traditional silos that exist between developers and security.

Now enter the software supply chain, which follows a similar model as a manufacturing supply chain. One of the software supply chain’s primary jobs is to ensure that the right code is being developed for the program’s most essential features. When you consider the scale of enterprise applications, the correct code encompasses multiple applications, a potentially exhausting list of application features, plus internal and sometimes third-party development teams to maintain existing applications and create new code. Such a scale and complexity make it a growing attack vector.

So much has changed about large-scale software development over the last decade. A significant change is that today’s software supply chain includes a sourcing step. It works similarly to the sourcing step in a traditional supply chain, where your organization manages relationships with suppliers. The sourcing step is also where an organization buys parts or materials that are more efficient or cost-effective to outsource. For example, in enterprise software development, stakeholders use the sourcing step to purchase security software for integration into the products they’re developing if security isn’t one of their strengths. It’s also the step where organizations decide whether to use open source software as the basis of their products or as an integration option for features.

Whether the source is an open source project, a fledgling startup, or an offshore firm, organizations must put in place the tools and processes to analyze the quality and security of components entering their software supply chain. Such an analysis should include the following factors:

  • Developer documentation, especially for the product’s application programming interface (API)
  • Software support through community forums or fee-based arrangements with the developer
  • Commercial and open source software licensing agreements
  • Security features in the software

It’s also raising questions in the halls of Congress about whether the U.S. government has an adequate framework to assess the security of products upon which the government relies, according to CyberScoop.

We’ll be discussing the intricacies of open source software in the corporate software supply chain in a future blog post.

Software Supply Chain Security Fundamentals

Here are some fundamentals of software supply chain security to brush up on as you look to improve your supply chain security in 2021:

Practice Basic Cyber Hygiene

As with so much of cybersecurity as a discipline, security basics sit at the top of the list for maintaining supply chain security.

Basic cyber hygiene starts with installing industry-standard antivirus and anti-malware software on any machine or mobile device that accesses the supply chain.

Another step is to set strong passwords, multifactor authentication, device encryption, and regular software updates for any machine or mobile device with access to the supply chain. You can enforce these policies from your enterprise mobility management (EMM) platform.

Other hygiene practices include using network firewalls to protect your software supply chain. You also need to back your systems up and clean their hard drives on a regular schedule.

Include Software Supply Chain Attacks in your Threat Models

When creating or simply updating your threat models, be sure to include supply chain attacks. While many analysts and pundits say that SolarWinds did nothing wrong, that’s no excuse for you not to factor software supply chain attacks into your threat models.

Institute Proper Risk Management for your Supply Chain

The technology risk management discussion has mostly been devoid of the software supply chain, unfortunately.

My colleague Andre Neufville, an Anchore solution architect, speaks to the wisdom of instituting proper risk management in the DevSecOps pipeline and some other advice that you can also apply to your supply chain best practices.

Work with your Partners to improve Security Accountability

It’s one thing to manage your technology stack and supporting infrastructure; it’s another thing to secure, and enforce security requirements on, the development partners you have in your supply chain. You can look at contractual measures to ensure security, with enforceable penalties if they’re broken. Unfortunately, such contractual agreements can be challenging to enforce.

You can also seek out third-party vendors who already adhere to your industry’s necessary compliance standards.

Implement Defense in Depth

Another option to explore if you have the budget is to implement defense in depth, where you treat every piece of software you bring into your supply chain as a malicious actor. It doesn’t matter if you source the software from your internal DevSecOps teams, a third-party supplier, an open source software project, or a combination of sources.

Defense in Depth requires your organization to put in the tools and processes to monitor everything that enters your supply chain. It’s an expensive measure to implement and out of reach for all but the largest of enterprises.

Key Takeaways

Remember that as software supply chain attacks continue to mount in the future, tactics will change, but the basic cybersecurity fundamentals will remain in place. Your DevSecOps team needs to work with your auditors and cybersecurity team to ensure that your supply chain security adheres to your required standards. Here are some key takeaways:

  • Software supply chain security is the new hot button security concern for 2021.
  • Start with the cybersecurity basics when securing your software supply chain, including strong passwords, multifactor authentication, and regular software updates.
  • Include software supply chain attacks in your organization’s threat models if you aren’t doing that already.
  • Institute proper risk management for your supply chain using the same practices you’re already applying to your organization’s software and business risk.
  • Implement Defense in Depth treating everything that enters your supply chain as malicious (an expensive option!).

5 DevSecOps Myths to Dispel in 2021

DevSecOps seems to attract its share of myths. As we go into 2021, it’s time that we as an industry work to dispel those myths for our prospective customers, customers, and internal stakeholders across our organizations.

Here are some common DevSecOps myths we can all work on dispelling in 2021:

1. Organizations lose control when they move to DevSecOps.

Software development has a legacy of long development timelines in both business and the public sector. There are long quality assurance cycles with a final assessment by a security team at the end of the process. A move to DevSecOps may seem like a loss of control to project managers, developers, QA, and security teams who are used to working on development projects following traditional waterfall software development methodologies.

Dispelling the myth that your organization will lose control once you move to DevSecOps takes a multi-faceted approach. Internal training for your technology and business teams can be a powerful force to quell this misconception for starters. Then, when you tell the story of a DevSecOps pilot project, be sure to include facts around how the DevSecOps toolchain improved security and compliance coverage.

Another exciting way to dispel stories about loss of control is to focus on the new reporting options for developers, security analysts, and business stakeholders that can now be made available because of the tools and processes you’ve put in place for DevSecOps.

2. You can buy DevSecOps.

Marketing departments, PR agencies, and vendors are all trying to ride the DevSecOps trend to increase sales. The message that you can buy DevSecOps from a vendor — after all, it’s just a tool or a suite of tools — is a myth that DevSecOps has inherited from DevOps. Sales and marketing reps perpetuate this myth on sales calls all the time.

Part of any DevSecOps pilot should be education and outreach to stakeholders and influencers inside and outside your IT groups. Your non-developers are still going to feel some cultural changes that DevSecOps adoption brings to organizations.

3. DevSecOps is about Speed and Speed Only.

There’s an ongoing myth that DevSecOps is about speed and speed only. Improving software delivery velocity is but one aspect. Automation helps speed deployments while improving software quality and compliance.

4. DevSecOps requires an elite senior-level Development Team.

There’s a mistaken sentiment out there that DevSecOps is only for an elite team of senior-level developers working as a tight group with specialized training, certifications, and tools. There’s no secret society of DevSecOps.

You shoot down this myth by keeping lines of communication open between your DevSecOps delivery teams and the rest of your organization. Provide a DevSecOps overview to your business stakeholders to teach them the benefits of DevSecOps in business terms they can understand. Ask what support your business and technology teams need to communicate with each other, because one of the tenets of DevSecOps is transparency, after all.

5. DevSecOps isn’t for Remote Teams.

A program manager once told me that remote teams couldn’t do DevOps. Well, COVID-19 has proven him wrong. Enough said. The same myth follows DevSecOps around as well.

Let’s say you have a team that’s finding success with DevSecOps during the pandemic. You still need to capture and communicate the success stories and the lessons learned from working on DevSecOps as a remote team. At some point (maybe), your organization will return to everyday life back in the office. Anecdotes of DevSecOps success during COVID-19 will not be enough for some critics, so take the extra steps to capture data, metrics, and positive feedback from your internal and external customers.

Final Thoughts 

DevSecOps is another technology change that employees have to keep up with. Some employees will embrace the changes with passion. Others will see DevSecOps as a disruption to their daily routines. DevSecOps myths take root in between these groups. Start your 2021 with a campaign to improve your communication and education about DevSecOps to dispel such myths.

2021 DevSecOps Predictions: A Year of Growth and “Shift-Left”

As a company, Anchore has been tracking the growth of DevSecOps in the market and among our commercial and public sector customers during the past year. DevSecOps kept progressing despite everything that was going on with the pandemic.

 Our team recently got together and made some predictions about how DevSecOps will fare in 2021:

Shift Left Grows from Objective to Best Practice

“Shift-left will become more of a practice than an objective. In 2021, I predict that more dev teams will embrace shift-left concepts in a more pragmatic way,” predicts Dan Nurmi, CTO of Anchore. “While early on, much of the messaging around shift-left security was taken as ‘moving’ responsibility from so-called ‘right’ (production, run-time, with responsibilities being on operators) to ‘left’ (closer to the source code, with responsibilities being on software developers), the more realistic perspective is to embrace shift-left as ‘spreading’ the responsibilities rather than wholesale ‘moving’ them.

“In practical terms, I predict that as more quality security/compliance tools exist that integrate into a DevSecOps automation design, the reality and value of being able to detect, report, and remediate security, compliance, and best-practice violations at *every* stage of an SDLC will become the norm.”

Shift Compliance Left Becomes Reality

Compliance is ready for shift-left treatment, Nurmi also predicted.  There is significant overlap between many aspects of an organization’s compliance requirements and the practices that exist for ensuring secure software development and delivery.  In the same way that shift-left has become a rallying cry for more efficiently handling secure software delivery, we predict that in 2021 the industry will begin looking at how a similar approach (if not identical) can apply to solving organizational compliance requirements, particularly as they pertain to the organization’s own internal use of software and software services.

DevSecOps grows outside of Compliance-based Industries

“Given the increasing number of digital assets and the average cost of a cyberattack, it is critical for organizations to constantly be looking for weaknesses in their attack surfaces. In 2021, we will see more organizations than ever adopt DevSecOps into their cybersecurity strategies, or risk having their integrity and reputations destroyed,” Blake Hearn, DevSecOps engineer for Anchore, predicts.

“2020 has been a year of change for many aspects of people’s lives, especially technology. Up to this point, DevSecOps has mostly operated in industries with heavy security mandates: defense, healthcare, and finance,” adds Michael Simmons, DevSecOps engineer at Anchore. “I see DevSecOps spreading to other sectors as cybercrime rises, given the importance of software in people’s lives in the pandemic world.”

“Additionally, California consumer data protection laws came into effect in 2020. Any business that operates in California needs to abide by these rules,” Simmons added. “Because of this, I see DevSecOps spreading into more mainstream industries and technology companies as they move towards maintaining compliance.”

DevSecOps continues to Grow into a Data Play

“Opinions on the growth of artificial intelligence (AI) in DevOps and DevSecOps vary. I see the release of AWS DevOps Guru as a sign that DevOps and DevSecOps will grow into even more data-driven activities well into 2021 and beyond,” predicted Will Kelly, technical marketing manager for Anchore. 

“With so many DevSecOps teams moving to remote work, it only makes sense to maximize the use of backend data to maximize the effectiveness and efficiency of those teams. AI and machine learning tools are where we’re going to see that happen for real.”

DevSecOps in 2021

2021 is bound to be an exciting year of growth and maturing for DevSecOps as enterprises continue to lean into DevSecOps tools and strategies to apply lessons they learned during COVID-19.

2021 Container Predictions: The Year of Containers Walking Fast

So many of us will be glad when 2020 is over and one for the history books. On the bright side, it has been an excellent year for container technologies. Recently, some Anchore employees made their predictions for the container market in 2021:

2021: The Year of Containers “Walking Fast”

“If we look at container adoption as a matter of crawl, walk, run, 2021 is looking like many in the industry will be walking fast,” predicts Dan Nurmi, CTO of Anchore. “We’ve seen many mid to large organizations choose containers to realize their ultimate objective of delivering fast, stable, highly-automated, secure SDLC processes. Up until now, the greatest success we’ve seen has been in smaller R&D and greenfield projects.  Moving forward, organizations will be building on the tools and techniques delivered by these successful projects to drive container adoption further into critical production application environments.” 

He adds that many of these container projects have shown real value without sacrificing design characteristics.

Containers drop their Bad Reputation

He predicts that in 2021, now that many successful container-based designs have been proven out, organizations and designers will begin seeing characteristics of containers as beneficial rather than as problems. He explains that flexibility of software choice for developers, clear and trackable content, and quick updates and deployments are aspects containers readily provide and can be leveraged rather than resisted, now that the technology exists to overcome early concerns such as stability, security, monitoring, and provenance tracking.

Containers aren’t the enemy, advises Nurmi.  “Whenever there is an innovation/evolution in the developer infrastructure space, there is an immediate and legitimate outpouring of concerns about the challenges and problems that appear when shifting to something new.  Container technology supports a compelling enough set of values to make the change worthwhile. However, with the availability of new and ever-improved tooling, we’re now seeing organizations overcoming many of these initial concerns, ranging from software provenance to security and monitoring.”

Growing Container Usage Demands Better Tooling

More and more technology corners that haven’t adopted containerization (or have, but not entirely) will continue on the path toward greater usage, predicts Alfredo Deza, a senior software engineer at Anchore. He adds, “With that usage, the demand for better tooling will follow. The need for integrations everywhere and anywhere for containers will continue, and more tools will default to a containerized installation only.

“In areas like Machine Learning, containerization is becoming more prominent, and more cloud support dedicated to containers and machine learning will follow up,” Deza further predicts.

Multi-cloud goes Mainstream

Multi-cloud will become mainstream in 2021, according to Paul Novarese, a senior sales engineer at Anchore. Unlike multi-region techniques that are mostly availability tactics, multi-cloud is a strategic way to avoid vendor lock-in.

Serverless Container Platform Adoption Expands

Infrastructure becomes more of a hindrance, and the adoption of serverless container platforms (such as Fargate) expands, Novarese predicts.  In a serverless universe, security solutions that rely on sidecar containers, agents, or kernel modules become obsolete – deep image awareness and continuous compliance become more and more important.

Rise of Docker Build Alternatives

There are already a lot of options for building containers without Docker. In 2021, these options will continue to gain momentum, and alternatives to Dockerfile may be just as popular as docker build, according to Adam Hevenor, principal product manager at Anchore. “I am keeping my eye on Cloud Native Buildpacks, which was recently brought into the Incubation stage by the CNCF.” 

Rise of the Rest of the Registries

With changes to the Docker Hub usage caps, we can expect to see more and more projects use alternative registries, predicts Hevenor. It remains to be seen whether open source projects will move out of Docker Hub. Still, offerings from GitHub, Amazon Web Services, and Google are going to become increasingly common places to keep your container images.

Beyond Kubernetes (towards Serverless)

While Kubernetes has captured most enterprises’ mindshare, managing and upgrading Kubernetes is still a big challenge for operators, stated Hevenor. He predicts that, with the announcement of Lambda’s support for containers and the growth of Google Cloud Run and Azure Functions, you can expect to see more and more enterprises consider serverless alternatives to Kubernetes.

Container Adoption on Microsoft Windows will Double in 2021

Mike Suding, a sales engineer for Anchore, predicts that container usage and adoption on Microsoft Windows will double compared to 2020, thanks to the law of small numbers and Microsoft’s history of excelling when its executive team gets behind a product or service.

Containers in 2021

The Anchore team has high expectations for container acceptance and adoption in the market next year, even as enterprises begin their post-COVID-19 recovery and many digital transformation projects reach completion.

Securing the DevSecOps Pipeline

We live in an unprecedented era of remote work due to COVID-19. Now is the time to review the security of your DevSecOps pipeline to ensure that the tools and workflows powering your software development are secure from attack.

Here are some tips to consider as you evaluate your approach to integrating security at each stage of the development lifecycle.

Implement Threat Modeling

Threat modeling not only helps security teams define security requirements and assess the underlying risks associated with new and existing applications; it fosters ongoing communication between security and development teams. Integrating threat modeling tools in the development lifecycle promotes collaboration between the teams on the system architecture and provides a consistent method for tracking key information. Microsoft’s Threat Modeling Tool and OWASP’s Threat Dragon are popular, freely available tools used in DevSecOps pipelines to conduct threat modeling.

Utilize IDE Extensions

Utilizing IDE extensions to identify vulnerabilities and security flaws as developers are writing code is an easy way to catch security issues early on. It also serves as a way to educate developers on good coding practices. 

Run Peer Code Reviews

Implementing peer code reviews is another method to ensure developers are using secure coding practices. Code review practices can improve the quality of your organization’s code base because they foster collaboration between reviewers and the developers writing the code, facilitating knowledge sharing, consistency, and legibility of code.

Implement Pre-Commit Hooks

Exposing secrets such as application programming interface (API) keys, database credentials, service tokens, and private keys in source code repositories occurs more frequently than you might think and can be costly for organizations. In 2019, security researchers discovered over 200,000 unique secrets on GitHub. Integrating pre-commit hooks into code repositories can prevent your secrets from being pushed inadvertently.
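For example, a secret scanning hook can be wired in with the pre-commit framework. The following .pre-commit-config.yaml is a minimal sketch, assuming the open source gitleaks scanner; the repository URL and pinned revision shown are illustrative, not taken from this article:

# .pre-commit-config.yaml -- hedged sketch: block commits that appear to contain secrets
repos:
  - repo: https://github.com/gitleaks/gitleaks   # assumed upstream hook repository
    rev: v8.18.4                                 # pin to a release that exists in your environment
    hooks:
      - id: gitleaks

After running pre-commit install once per clone, the hook executes on every git commit and rejects changes that look like they contain credentials.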

Bolster your DevSecOps workflow with Automated Security Scanning and Testing 

As development teams push toward faster release cadences, automated security scanning and testing are critical for identifying vulnerabilities and other issues early in the development lifecycle.

For containerized applications, container security scanning tools evaluate images and their underlying file systems for vulnerabilities, secrets, exposed ports, elevated privileges, and other misconfigurations that may be introduced either from public base images or through developer mistakes.
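As a hedged illustration of what this can look like in a pipeline, a CI job can run an open source scanner such as Grype against the freshly built image and fail the build above a severity threshold (the image name below is a placeholder):

# hedged sketch: scan the image that was just built and break the pipeline on high-severity findings
grype registry.example.com/myapp:latest --fail-on high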

With the proliferation of open source software in recent years, modern applications are often built largely from third-party dependencies. There are advantages to utilizing OSS; however, if not carefully inspected, it can introduce vulnerabilities and other issues. Dependency checking tools can analyze the dependencies in your code to identify issues such as vulnerabilities and the use of non-compliant licenses.

Static application security testing (SAST) should be integrated into the pipeline to automatically scan every code change as it is committed. Initiating workflows from scan results can facilitate immediate feedback leading to quicker remediation. Dynamic application security testing (DAST) is a good way to evaluate your running applications for vulnerabilities that may be missed by SAST tools. 
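As one illustrative, hedged example, an open source SAST tool such as Semgrep can be dropped into the commit stage of a pipeline; the flags below follow Semgrep’s documented CLI but should be verified against the version you use:

# hedged sketch: run community SAST rules against the repository and exit non-zero on findings
pip install semgrep
semgrep --config auto --error .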

Build Secure Immutable Infrastructure in the Cloud

The adoption of DevOps and cloud-hosted services facilitated the practice of Infrastructure-as-Code (IaC), in which enterprise services can be architected, committed as code, and deployed in an automated fashion. While this has allowed IT teams to quickly deploy enterprise applications and services, it has also made it harder for security teams to identify issues in IaC before those applications and services reach production. Static security scanning tools can analyze infrastructure code such as Terraform and CloudFormation templates for misconfigurations and other security issues early in the development lifecycle and provide feedback in an automated fashion.
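To make this concrete, here is a hedged Terraform sketch of the kind of misconfiguration an IaC scanner is designed to catch; the resource is purely illustrative:

# hedged example: an S3 bucket definition most IaC scanners would flag
resource "aws_s3_bucket" "app_logs" {
  bucket = "example-app-logs"
  acl    = "public-read"   # publicly readable bucket, a common finding
  # no server-side encryption or access logging configured, which scanners also flag
}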

Practices such as configuration management and baseline configurations help facilitate immutable infrastructure that is deployed consistently from a defined set of requirements, and they allow you to continuously monitor infrastructure for inadvertent or unauthorized changes.

Utilize Secrets Management

As we discussed earlier in this post, exposed secrets can be an organizational nightmare. However, ensuring that sensitive information is protected is no small task either. This is where secrets management comes into play. Every organization should have a set of tools and processes to protect passwords, API keys, SSH keys, and other secrets. Besides providing a secure method for storing secrets, secrets management can also facilitate other best practices such as auditing, role-based access control, and lifecycle management.
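As a small, hedged sketch of what this looks like in practice with a tool such as HashiCorp Vault (the secret path and field names are placeholders):

# store the credential once in the secrets manager
vault kv put secret/myapp/db password='s3cr3t-value'
# inject it into the environment at deploy time instead of hardcoding it in source
export DB_PASSWORD="$(vault kv get -field=password secret/myapp/db)"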

Final Thoughts

Communication and collaboration among your team members should underpin every tip in this post. Your new or renewed focus on DevSecOps pipeline security should also become part of any internal processes you have in place to govern tool security and maintenance.

DevOps to DevSecOps Cultural Transformation: The Next Step

Part of any DevOps to DevSecOps transformation is cultural transformation. While you’ve probably taken steps to strengthen your development and operations cultures to embrace the concepts and tools that power DevOps, there’s going to be more work to do to transform your burgeoning corporate DevOps culture to fully embrace DevSecOps.

DevSecOps is a growing movement in the commercial and public sectors to incorporate security further into the software delivery process. Monitoring and analytics in the continuous integration/continuous delivery (CI/CD) pipeline expose software risks and vulnerabilities to DevSecOps teams for follow-up action and remediation.

Here are some next steps to grow your DevOps culture to DevSecOps:

1. Position DevSecOps as the Next Step

DevOps is a journey for many in the IT industry. It takes time and investments in staffing, tools, processes, and security to move from a traditional waterfall-driven software development life cycle (SDLC) to DevOps. There are two scenarios for organizations who want to move to DevSecOps:

  • Moving straight from a waterfall SDLC, skipping traditional DevOps, and moving right to DevSecOps
  • Moving from DevOps to DevSecOps through upgrading the CI/CD toolchain with a range of security automation tools, shifting security left, and bringing security team members into the development cycle

Either DevSecOps adoption scenario needs outreach and training support to communicate expectations, next steps, and changes to your developers, sysadmins, and security team. Work closely with your teams during the move to DevSecOps to answer their questions and take their feedback on progress and changes, while giving the transformation project a chance to pivot based on lessons learned along the way.

2. Move from Gated Processes to Shared Responsibility

DevOps depends on gates between each stage. Managers, stakeholders, and even entire development organizations can justify these gates because they provide a sense of security for troubleshooting, halting delivery, or stakeholder inquiries into the project. 

DevSecOps substitutes mutual accountability for those gates. Mutual accountability comes about through process changes and improving collaboration between your development, security, and operations teams through cross-functional teams supported by the proper technology tools and executive sponsorship.

3. Communicate about Security outside your IT Department

Such a new and enduring focus on security during the application delivery life cycle means you have to keep communications and outreach channels open with your stakeholders and user community. You need to create strong internal communications about how DevSecOps is changing how your teams deliver software and the benefits they can expect from this transformation.

You need to extend your security education to other departments, such as your sales and marketing teams. For example, moving to a DevSecOps model gives your marketing team a reason to create security-focused messaging and collateral that your sales team can use to reach prospective and existing customers who are security-conscious.

4. Make Security no longer “Us vs. Them”

Gone are the days when the cybersecurity team was the “Team of No” and security testing took place right before product launch. Today, consumers and enterprise customers expect rapid, app-store-style updates. It’s time to dismantle the vestiges of “us vs. them” and make security a priority in your application development from project kickoff. Do everything you can, process- and tool-wise, to move away from the stress of incident- and issue-driven security responses that push fixing security issues to the end of your development life cycle.

Building collaboration between your DevOps and security teams starts with:

  • Building security into each stage of your CI/CD workflow
  • Integrating mandatory security checks into your code reviews
  • Integrating automated container security scanning into your container supply chain

Beyond these incremental steps to build collaboration between your teams, it helps when managers and team leads set the example for collaboration. Organizational culture and internal politics can breed rivalries that interfere with collaboration, if not with the entire DevOps cycle.

5. Target Developers’ Baggage

Developers bring the best practices and bad habits of every previous employer and past contract with them. Plenty of developers can separate that baggage from their current work, but some struggle to sort it out on their own. DevOps and DevSecOps definitions and implementations also vary from shop to shop. Not to mention, COVID-19 is raising stress levels at home and at work, causing work to slip.

Some common ways to target developer baggage include:

  • Focusing on developer experience (DX) when selecting your development tools and CI/CD toolchain
  • Communicating your processes during employee onboarding in terms of frameworks that capture approved tools, processes, and expectations for your developers, QA, and system administrators

Final Thoughts

Culture can be the most essential but often misunderstood portion of a DevOps transformation. I’m fond of the old saying that “you can’t buy DevOps.” The same goes for DevSecOps. The security and compliance implications of DevSecOps mean you need to go even further with your security outreach and communications to help push cultural transformation forward.

Package Blocklists Are Not Foolproof

As organizations progress in their software container adoption journeys, they realize that they need image scanning beyond simple vulnerability checks.  As security teams develop more sophisticated image policies, many implement package blocklists to keep unnecessary code such as curl and sshd out of their images. Curl can be a handy tool in development and debugging, but attackers can also use it to download malicious code into an otherwise trusted container.  There are two primary scenarios security teams want to protect against:

  • An attacker compromises a container that has curl in it and then uses curl to bring compromised code into the environment
  • A developer uses curl to download unapproved code, configurations, or binaries from unvetted sources (e.g. random GitHub repositories) during the build process (this could be malicious or inadvertent)

For the first scenario, a simple blocklisting of the curl package will cover most cases.  If we can produce an image that we know curl is not installed on, we’ve effectively mitigated an entire class of potential attacks.

Note: The policy rules, Dockerfiles, and other files used in this article are available from GitHub.

Blocklisting curl

Blocklisting packages is a pretty straightforward process.  We just need a simple policy rule:

Anchore Enterprise policy rule:

Gate: packages
Trigger: blacklist
Parameter: name=curl
Action: stop
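In an Anchore policy bundle, a rule like this is expressed as JSON. The snippet below is a hedged sketch of that shape; the rule id is a placeholder:

{
  "id": "blocklist-curl-rule",
  "gate": "packages",
  "trigger": "blacklist",
  "action": "STOP",
  "params": [
    { "name": "name", "value": "curl" }
  ]
}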

We’ll build an example image with this Dockerfile to test the rule:

# example dockerfile that will use curl to download source, anchore
# will stop this with a simple package blocklist on curl

FROM alpine:latest
WORKDIR /
RUN apk update && apk add --no-cache build-base curl

# download source and build
RUN curl -o - https://codeload.github.com/kevinboone/solunar_cmdline/zip/master | unzip -d / -
RUN cd /solunar_cmdline-master && make clean && make && cp solunar /bin/solunar

HEALTHCHECK --timeout=10s CMD /bin/date || exit 1
USER 65534:65534
CMD ["-c", "London"]
ENTRYPOINT ["/bin/solunar"]

OK, let’s build, push, and scan the image.

pvn@gyarados ~/curl_example# export ANCHORE_CLI_USER=admin
pvn@gyarados ~/curl_example# export ANCHORE_CLI_PASS=foobar
pvn@gyarados ~/curl_example# export ANCHORE_CLI_URL=http://anchore.example.com:8228/v1

pvn@gyarados ~/curl_example# docker build -t pvnovarese/curl_example:simple .
Sending build context to Docker daemon  112.1kB
Step 1/12 : FROM alpine:latest
 ---> a24bb4013296

[...]

Successfully built 799a36c3cb2d
Successfully tagged pvnovarese/curl_example:simple

pvn@gyarados ~/curl_example# docker push pvnovarese/curl_example:simple
The push refers to repository [docker.io/pvnovarese/curl_example]

[...]

pvn@gyarados ~/curl_example# anchore-cli image add --dockerfile ./Dockerfile pvnovarese/curl_example:simple

[...]

Simple curl Example

As expected, the package blocklist caught the installed curl package and the image fails the policy evaluation.
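If you want to reproduce the check from the command line, a hedged sketch with anchore-cli looks like this (evaluate check exits non-zero when the evaluation fails, which is what lets a CI job break the build):

pvn@gyarados ~/curl_example# anchore-cli evaluate check pvnovarese/curl_example:simple --detail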

Multi-stage Builds Add Complexity

We’ve increased our protection against developers using curl to bring unknown code from random places on the internet into our environment.

But what if the developer uses a multi-stage build?  If you’re not familiar with multi-stage builds, they are frequently used to create more compact docker images.  The most common pattern is that a first stage is used to build the software, then the binaries and other artifacts produced are transferred to the final stage, leaving behind the source code, build tools, and other bits that are vital to building the code but aren’t needed to run the code.  The build-stage container then is discarded and only the final lean container with the bare necessities moves on.

Since those intermediate-stage containers are ephemeral, Anchore Enterprise doesn’t have access to them and can only scan the final image.  Because of this, many things that happen during the actual build process can avoid detection.  A developer can install curl in the intermediate build-stage container, pull down unvetted code, and then copy a compromised binary to a final stage image without installing curl in that final image.

### example multistage build - in this case, a simple package blocklist
### will NOT stop this, since curl only is installed in the intermediate
### "builder" image and doesn't exist in the final image.  To stop this,
### we can look for curl in the RUN commands in the Dockerfile.

### Stage 1
FROM alpine:latest as builder
WORKDIR /solunar_cmdline-master
RUN apk update && apk add --no-cache build-base curl

### Download the upstream source with curl
RUN curl -o - https://codeload.github.com/kevinboone/solunar_cmdline/zip/master | unzip -d / -
RUN make clean && make

### Stage 2
FROM alpine:latest

HEALTHCHECK NONE
WORKDIR /usr/local/bin
COPY --from=builder /solunar_cmdline-master/solunar /usr/local/bin/solunar


# if you want to use a particular localtime,
# uncomment this and set zoneinfo appropriately
# RUN apk add --no-cache tzdata bash && cp /usr/share/zoneinfo/America/Chicago /etc/localtime


USER 65534:65534
CMD ["-c", "London"]
ENTRYPOINT ["/usr/local/bin/solunar"]

The final image output from this example is a completely standard alpine:latest image with a single binary copied in. Our simple package blocklist won’t catch this: the multistage image passes the policy evaluation even though curl was installed and used as part of the build process. Only the final image is checked against the package blocklist.

Our sample image passes the policy evaluation even though curl was used in the build process because the package is not installed in the final image.

To increase our protection, we should check for RUN instructions in the Dockerfile that call curl in addition to our package blocklist rule.

Two new policy rules are added:

Gate: dockerfile
Trigger: instruction
Parameter: instruction=RUN check=like value=.curl.
Action: stop

Gate: dockerfile
Trigger: no dockerfile provided
Action: stop

We’ve added two rules.  The first will fail the image on any RUN instruction in the Dockerfile that includes “curl”, and the second will fail the image if no Dockerfile is submitted with the image.  We then re-evaluate with this new policy bundle (note that we don’t need to re-scan, we’re just applying the new policy to the same image) and get the desired failure:

Note: No changes were made to the Dockerfile from the previous run, and we did not rebuild the image – we only changed the policy rules.
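A hedged sketch of that re-evaluation with anchore-cli follows; the policy bundle filename, bundle id, and image tag are placeholders rather than values from the original run:

pvn@gyarados ~/curl_example# anchore-cli policy add ./dockerfile_curl_policy.json
pvn@gyarados ~/curl_example# anchore-cli policy activate <bundle-id>
pvn@gyarados ~/curl_example# anchore-cli evaluate check pvnovarese/curl_example:multistage --detail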

This time, our policy evaluation caught both the installation of curl into the intermediate container and the actual execution of curl to download the unauthorized code.  Either of these alone is enough to cause the policy evaluation to fail as desired.

Also, in this case, our package blocklist was not triggered, since the final image still doesn’t contain the curl package.

Conclusion

Package blocklists can be quite useful, but they are not foolproof. Often, whether or not a particular package is present in the final image is much less of a concern than how that image was constructed and how it is used, so looking only at the final image isn’t enough. Anchore Enterprise’s deep image introspection includes analysis of the Dockerfile used to create the image, which allows the policy engine to enforce more best practices than simple image inspection alone.

Policies, Dockerfiles, and Jenkinsfiles used for this article can be found in my GitHub.

The Journey from DevOps to DevSecOps

Digital transformation, improved security, and compliance are the key drivers pushing corporations and government agencies to adopt DevSecOps. Some organizations will experience a journey from DevOps to DevSecOps, depending on their DevOps maturity. 

Defining DevOps and DevSecOps for your Organization

There’s a growing list of definitions for DevOps and DevSecOps out there. Some come from vendor marketing, and a few of the definitions come from new perspectives about bringing together development, security, and operations teams.

For the purposes of this blog post, DevOps combines cultural philosophies, practices, and tools that increase an organization’s ability to deliver software and services at high velocity. DevOps enables teams to develop and improve products faster than organizations using traditional software development and infrastructure management processes.

DevSecOps — by definition — brings cybersecurity and security operations tools and strategies such as container vulnerability scanning automation into your organization’s existing or new DevOps toolchain.

In the next few years, it’s a safe bet that the definition of DevSecOps will subsume the DevOps definition as corporations and public sector agencies continue to increase their security focus across the software delivery life cycle.

Moving from DevOps to DevSecOps: Step by Step

Moving from DevOps to DevSecOps is, in many ways, just another step in your DevOps journey: your development and operations teams take another step left and bring their colleagues in security along for the trip.

  1. Start with a Small Proof of Concept Project

Starting with a small proof-of-concept project is always the best way to help your teams prepare for any technology or process changes. Choosing a small pilot project for DevSecOps lets you test adjustments and additions to your tools and processes. Your small pilot project could take one of the following forms:

  • A solution architect or small project team building out your current DevOps pipeline (or creating a new one) with additional security tools such as Anchore Toolbox or, even better, Anchore Enterprise at each stage to support automated scanning of your containers. This pilot project is ideal if you need to demonstrate additional security features to your management and project stakeholders, such as your customers.
  • A small project team running an application development project through your sparkling new DevSecOps toolchain. An example of such a project is an update to a small, non-business-critical application that your organization uses internally.

Pilot projects such as these require little startup investment if you use open source tools. However, if your organization has to build and maintain applications that must meet compliance requirements, you’ll probably have to be selective and choose open source security tools that provide the reporting capabilities your auditors require.

  2. Go Agile to Deliver Code in Iterative Releases

Delivering your software code using agile methodologies in small scope iterative releases helps your DevSecOps teams check for code and container vulnerabilities through quality assurance gates embedded across your development life cycle.

  3. Implement Automated Testing across your Toolchain

Automation is integral across a DevSecOps delivery process, especially for testing. Test automation shouldn’t replace human testers; rather, running automated tests and dependency checks enables your testers to focus on the most critical issues preventing you from achieving compliance.
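As a hedged example of an automated dependency check, you can generate an SBOM for the project directory with Syft and scan it with Grype as part of the test stage:

# hedged sketch: inventory the project's dependencies, then fail the job on high-severity findings
syft dir:. -o json > sbom.json
grype sbom:./sbom.json --fail-on high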

  4. Invest in Upskilling your Developers and Testers

Part of shifting security left with DevSecOps is training your developers and testers in security principles. These days that means online training from a vendor or other training providers. It also means letting your developers attend industry conferences. With national and regional technology and security conferences online, this is easy to do.

Another way to invest in upskilling is to support your developers pursuing DevSecOps, DevOps, and cloud-focused certifications. For example, there’s a Certified DevSecOps Professional Certification from Practical DevSecOps and a DevSecOps Foundation Certification from the DevOps Institute.

  5. Involve your Developers in Security Discussions

Just as you bring your development and operations teams out of their silos, you need to get your developers into the security discussion. A move to DevSecOps shifts security left, so it sits throughout your software development life cycle versus being the last step before product release.

Everybody on the project team is accountable for security in a DevSecOps environment. Your organization can only reach this accountability level when you empower your teams with expertise and resources to respond to and mitigate security threats within the toolchain and before the threats hit production.

  6. Treat Compliance like another Team Member

Failing compliance audits means an expensive, time-consuming, and sometimes litigious process to return systems to compliance. DevSecOps gives you the methodologies, framework, and tools to help your organization’s systems achieve continuous compliance at every stage of your delivery life cycle.

  7. Adopt Regular Security Practices across your Teams

DevSecOps practices mean using regular scans, code reviews, and penetration tests to ensure your applications and cloud infrastructure are secure against insider and external threats.

Final Thoughts

Taking the journey from DevOps to DevSecOps is the ultimate story of shifting security left for commercial and public sector enterprises. Some organizations will go straight to DevSecOps, leapfrogging from a traditional waterfall software development life cycle. Others will mature and strengthen their DevOps processes to become more security-focused across their delivery life cycle.