Precogs for Software To Spot Vulnerabilities?

There are some movies that provide an immediate dose of entertainment for two hours and are instantly forgotten afterwards. Others lurk within you and constantly resurface to make you think about their ideas or concepts. The 2002 movie Minority Report is one of the latter. In it, a police department is set up to investigate “precrime” based on foreknowledge provided by psychic humans called “precogs”. The dilemma of penalizing people who have not actually done anything is an interesting philosophical conundrum that resonates with contemporary issues. One example is the potential for insurance companies to deny coverage to people who show a genetic predisposition to certain illnesses, even while they are not ill.

In the modern world rather than the future shown in the movie, computer crime and, more broadly, data breaches are now so common that we barely notice them, despite the fact they often have material impacts on us as individuals (see: Equifax). Fortunately, we actually do have something close to precogs in the software world which, while not allowing us to arrest criminals, do allow us to know when something is really likely to happen and do something about it.

Many vendors and government agencies produce long lists of known software vulnerabilities that have a good chance of being exploited. Yet, the reality is that most organizations don’t do anything with them because they don’t even know they are running the affected software or because they do know what is running but don’t have the time to fix it. 

I recently joined Anchore as VP of Products, motivated by the opportunity to fix this problem. Like many, I’ve been amazed at the huge uptake of containers across the industry and, as a long-time open source advocate, excited about the way they have allowed companies to take advantage of the vast ecosystem of open source software. However, I’ve also been cognizant that this new wave of adoption has increased the attack surface for companies and made the challenge of securing dynamic and heterogeneous environments even harder.

In meeting with the team at Anchore, it was clear that they really understood containers and had gone a long way toward solving the problem. The solution that Anchore has built not only tells you what software you are running (by scanning your repos) but enables teams to prevent bad software from being deployed in the first place, using customizable policies that react to defects found in operating system and software library packages, as well as to poorly implemented best practices. By enabling so-called DevSecOps processes, Anchore can help development teams become more efficient and spread the load of security responsibility – the only way we can tackle the mountain of vulnerabilities that come out every day. It may not quite be precogs, but it’s pretty close.

I’ve been creating and deploying infrastructure software for over 20 years, so I have probably contributed a fair number of security flaws to the world. I’m looking forward to joining the other side and working with our customers to make the new cloud-native world a more secure one.

Answers to your Top 3 Compliance Questions

Policy first is a distinguishing tenet for Anchore as a product in today’s container security marketplace. When it comes to policy, we at Anchore receive a lot of questions from customers about different compliance standards and guidelines, and about how the Anchore platform can help meet those requirements. Today, we will review the top three (in no particular order) policy and compliance questions we receive, to demonstrate how Anchore can alleviate some of the policy and compliance woes of bringing a container security tool into your tech stack.

How Can Anchore Help Me Satisfy NIST 800-53 Controls?

We receive a lot of questions about how Anchore can help organizations meet compliance baselines that deal heavily with the implementation of NIST 800-53 controls. As a result, we discuss many of the controls we satisfy in our federal white paper on container security. At a high level, Anchore helps organizations satisfy the requirements of RA-5 Vulnerability Scanning, SI-2 Flaw Remediation, and CA-7 Continuous Monitoring.

However, Anchore does more than help organizations with vulnerability scanning and policy enforcement for containers. As part of our process, Anchore provides an in-depth inspection of images as they pass through the Anchore analyzers, which enforce whitelisted and blacklisted attributes such as ports/protocols, types of images, and types of OS, as described in our previous blog post. Anchore Enterprise users can customize and enforce whitelisting/blacklisting within the Anchore Enterprise UI: navigating to the Whitelists tab shows the whitelists present in the current DoD security policies bundle.

As a result, organizations can comply with configuration management controls as well, specifically CM-7(5) Authorized Software – Whitelisting and CM-7(4) Unauthorized Software – Blacklisting. To prevent unauthorized software from entering your image, simply select the “Whitelist/Blacklist Images” tab as demonstrated below, which allows you to blacklist an OS, image, or package:
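Under the hood, those UI selections are stored in the policy bundle itself. As a rough sketch (the id, name, and values below are illustrative, not from a shipped bundle), a blacklisted-image entry in a bundle can look like this:

```json
{
  "blacklisted_images": [
    {
      "id": "blacklist-latest-example",
      "name": "Block latest-tagged images",
      "registry": "*",
      "repository": "*",
      "image": {
        "type": "tag",
        "value": "latest"
      }
    }
  ]
}
```

Any image whose registry, repository, and tag match an entry here fails evaluation regardless of what the policy rules would otherwise say.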

How Does Anchore Help Organizations Meet the Guidelines Specified in NIST 800-190: Application Container Security Guide?

Anchore provides a policy first approach to automated vulnerability and compliance scanning for Docker images. With customizable policies at the center of Anchore Engine itself, we provide the capability to react swiftly as new federal security policies are published. NIST 800-190 was no different for the Anchore team. NIST 800-190 specifies: “Organizations should automate compliance with container runtime configuration standards. Documented technical implementation guidance, such as the Center for Internet Security Docker Benchmark…”

Out of the box, Anchore provides a CIS policy bundle for open source and Enterprise users alike, which allows you to check Host Configuration, Docker Daemon Configuration, Docker Daemon Configuration Files, Container Images and Build File, and Container Runtime. Below, we can see how the latest Postgres image stacks up against the CIS Benchmarks called out in NIST 800-190:

Anchore platform displaying image analysis.

From here, we would recommend hardening the image to comply with the CIS benchmarks before advancing this image into production.

Is Anchore FIPS 140-2 Validated?

Anchore is not a FIPS 140-2 validated product, nor is it a FIPS 140-2 compliant product. However, it’s important to explain why Anchore has no plans to become FIPS 140-2 validated. NIST describes the applicability of FIPS 140-2 as follows:

 “This standard is applicable to all Federal agencies that use cryptographic-based security systems to protect sensitive information in computer and telecommunication systems (including voice systems) as defined in Section 5131 of the Information Technology Management Reform Act of 1996, Public Law 104-106. This standard shall be used in designing and implementing cryptographic modules that Federal departments and agencies operate or are operated for them under contract…”

A majority of the products on the FIPS 140-2 validated products list use encryption as a protection mechanism in networking hardware, or in hardware and software that identifies and authenticates users into an environment, functionality that is outside the scope of the Anchore product. Anchore believes it is important to protect the sensitive information generated by Anchore scanning; however, Anchore does not provide FIPS 140-2 validated protection of that information. Rather, we believe it is the responsibility of the team managing an Anchore deployment to protect the data Anchore generates, which can be done using FIPS 140-2 validated products. In 2018, Docker became the first container-relevant vendor to have a FIPS 140-2 validated product with the Docker Enterprise Edition Crypto Library. Furthermore, no other container security tools on the market are FIPS 140-2 validated.

Conclusion

Although we covered only NIST standards in this post, due to their wide use and popularity among our customers, Anchore Enterprise is a policy first tool that gives teams the flexibility to adapt their container vulnerability scanning in a timely fashion to comply with any compliance standard across various markets. Please contact the Anchore team if you are having trouble enforcing a compliance standard or if there is a custom Anchore policy bundle we can create in line with your current compliance needs.

Using Anchore to Identify Secrets in Container Images

Building containerized applications inherently brings up the question of how best to give these applications access to any sensitive information they may need. This sensitive information often takes the form of secrets, passwords, or other credentials. This week I decided to explore a couple of bad practices and common shortcuts, along with some simple checks you can configure using both Anchore Engine and Anchore Enterprise to integrate into your testing and achieve a more polished security model for your container image workloads.

Historically, I’ve seen a couple “don’ts” for giving containers access to credentials:

  • Including directly in the contents of the image
  • Defining a secret in a Dockerfile with ENV instruction

The first should be an obvious no. Including sensitive information within a built image gives anyone who has access to the image access to those passwords, keys, and credentials. I’ve also seen secrets placed inside the container image using the ENV instruction. Dockerfiles are likely managed somewhere, and exposing secrets in them in clear text is a practice that should be avoided. A recommended best practice is not only to check for keys and passwords as your images are being built, but also to implement the proper set of tools for true secrets management (not the “don’ts” above). There is an excellent article by HashiCorp on Why We Need Dynamic Secrets which is a good place to start.

Using the ENV instruction

Below is a quick example of using the ENV instruction to define a variable called AWS_SECRET_KEY. AWS access keys consist of two parts: an access key ID and a secret access key. These credentials can be used with AWS CLI or API operations and should be kept private.

FROM node:6

RUN mkdir -p /home/node/ && apt-get update && apt-get -y install curl
COPY ./app/ /home/node/app/

ENV AWS_SECRET_KEY="1234q38rujfkasdfgws"

For argument’s sake, let’s pretend I built this image and ran the container with the following command:

$ docker run --name bad_container -d jvalance/node_critical_fail
$ docker ps | grep bad_container
3bd970d05f16        jvalance/node_critical_fail     "/bin/sh -c 'node /h…"   13 seconds ago      Up 12 seconds         22/tcp, 8081/tcp         bad_container

And now exec into it with the following: docker exec -ti 3bd970d05f16 /bin/bash to bring up a shell. Then run the env command:

# env 
YARN_VERSION=1.12.3
HOSTNAME=3bd970d05f16
PWD=/
HOME=/root
AWS_SECRET_KEY=1234q38rujfkasdfgws
NODE_VERSION=6.16.0
TERM=xterm
SHLVL=1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
_=/usr/bin/env

Now you can see that I’ve just given anyone with access to this container the ability to grab any environment variable I’ve defined with the ENV instruction.

Similarly with the docker inspect command:

$ docker inspect 3bd970d05f16 -f "{{json .Config.Env}}"
["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","NODE_VERSION=6.16.0","YARN_VERSION=1.12.3","AWS_SECRET_KEY=1234q38rujfkasdfgws"]

Storing Credentials in Files Inside the Image

Back to our example of a bad Dockerfile:

FROM node:6

RUN mkdir -p /home/node/ && apt-get update && apt-get -y install curl
COPY ./app/ /home/node/app/

ENV AWS_SECRET_KEY="1234q38rujfkasdfgws"

Here we are copying the contents of the app directory into /home/node/app inside the image. Why is this bad? Here’s an image of the directory structure:

AWS directory structure image.

and specifically the contents of the credentials file:

# credentials

[default]
aws_access_key_id = 12345678901234567890
aws_secret_access_key = 1a2b3c4d5e6f1a2b3c4d5e6f1a2b3c4d5e6f1a2b
[kuber]
aws_access_key_id = 12345678901234567890
aws_secret_access_key = 1a2b3c4d5e6f1a2b3c4d5e6f1a2b3c4d5e6f1a2b

Same as I did before, I’ll try to find the creds in the container.

/home/node/app# cat .aws/credentials 
[default]
aws_access_key_id = 12345678901234567890
aws_secret_access_key = 1a2b3c4d5e6f1a2b3c4d5e6f1a2b3c4d5e6f1a2b
api_key = 0349r5ufjdkl45
[kuber]
aws_access_key_id = 12345678901234567890
aws_secret_access_key = 1a2b3c4d5e6f1a2b3c4d5e6f1a2b3c4d5e6f1a2b

Checking for the Above with Anchore

At Anchore, a core focus of ours is conducting a deep image inspection to give users comprehensive insight into the contents of their container images, and to provide the ability to define flexible policy rules that enforce security and best practices. By understanding that container images are composed of far more than just lists of packages, Anchore takes a comprehensive approach, providing users the ability to check for the above examples.

Using the policy mechanisms of Anchore, users can define a collection of checks, whitelists, and mappings (encapsulated as a self-contained Anchore policy bundle document). Anchore policy bundles can be authored to encode a variety of rules, including (but not limited to) Dockerfile line checks for the presence of credentials. Although I will never recommend the bad practices used in the above examples, we should be checking for them nonetheless.

Policy Bundle

A policy bundle is a single JSON document, which is composed of:

  • Policies
  • Whitelists
  • Mappings
  • Whitelisted Images
  • Blacklisted Images

The policies component of a bundle defines the checks to make against an image and the actions to recommend if the checks find a match.
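At the top level, a bundle is roughly shaped like the sketch below (the id and name are illustrative; each list holds the corresponding components described above):

```json
{
  "id": "example-bundle",
  "name": "Example Security Bundle",
  "version": "1_0",
  "policies": [],
  "whitelists": [],
  "mappings": [],
  "whitelisted_images": [],
  "blacklisted_images": []
}
```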

Example policy component of a policy bundle:

"name": "Critical Security Policy",
  "policies": [
    {
      "comment": "Critical vulnerability,  secrets, and best practice violations",
      "id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6",
      "name": "default",
      "rules": [
        {
          "action": "STOP",
          "gate": "dockerfile",
          "id": "38428d50-9440-42aa-92bb-e0d9a03b662d",
          "params": [
            {
              "name": "instruction",
              "value": "ENV"
            },
            {
              "name": "check",
              "value": "like"
            },
            {
              "name": "value",
              "value": "AWS_.*KEY"
            }
          ],
          "trigger": "instruction"
        },
        {
          "action": "STOP",
          "gate": "secret_scans",
          "id": "509d5438-f0e3-41df-bb1a-33013f23e31c",
          "params": [],
          "trigger": "content_regex_checks"
        },...

The first policy rule uses the dockerfile gate and instruction trigger to look for AWS environment variables that may be defined in the Dockerfile.

The second policy rule uses the secret scans gate and content regex checks trigger to look for AWS_SECRET_KEY and AWS_ACCESS_KEY within the container image.

It is worth noting that an analyzer_config.yaml file defines the regular expressions used by the secret_scans gate.
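To make the mechanics concrete, here is a minimal sketch, not Anchore’s actual implementation, of how regexes in the spirit of the analyzer_config.yaml defaults would flag the credential strings from the earlier example. The pattern names and expressions below are illustrative; the exact patterns shipped with Anchore may differ.

```python
import re

# Illustrative patterns modeled on the kind of AWS-credential regexes
# defined in analyzer_config.yaml (the shipped patterns may differ).
# An AWS access key ID is 20 alphanumeric characters; a secret access
# key is a 40-character base64-style string.
PATTERNS = {
    "AWS_ACCESS_KEY": r"(?i)aws_access_key_id( *=+ *)(?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9])",
    "AWS_SECRET_KEY": r"(?i)aws_secret_access_key( *=+ *)(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=])",
}

# Sample lines lifted from the credentials file shown above.
credentials = """\
[default]
aws_access_key_id = 12345678901234567890
aws_secret_access_key = 1a2b3c4d5e6f1a2b3c4d5e6f1a2b3c4d5e6f1a2b
"""

def scan(text):
    """Return (pattern_name, line) pairs for every line a pattern matches."""
    hits = []
    for name, pattern in PATTERNS.items():
        for line in text.splitlines():
            if re.search(pattern, line):
                hits.append((name, line))
    return hits

for name, line in scan(credentials):
    print(f"{name}: {line}")
```

Running this against the sample file flags both the access key ID and the secret access key lines, which is exactly the behavior the secret_scans gate relies on when the content_regex_checks trigger fires.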

For the purposes of this post, I’ve analyzed an image that includes the two bad practices discussed earlier and evaluated the analyzed image against a policy bundle that contains the rule definitions above. It should catch the poor practices!

Here is a screenshot of the Anchore Enterprise UI Policy Evaluation table:

Anchore Enterprise UI policy evaluation table overview.

The check output column clearly informs us what Anchore found for each trigger ID line item and, importantly, shows the STOP action, which determines the final result of the policy evaluation.

We can see very clearly that these policy rule definitions have caught both the ENV variable and credentials file. If this were plugged into a continuous integration pipeline, we could fail the build on this particular container image and put the responsibility on the developer to fix, rebuild, and never ship this image to a production registry.
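As a sketch of what that pipeline wiring can look like, here is a hypothetical GitLab CI job (the job name, stage, and image variables are illustrative; it assumes anchore-cli is installed and configured to reach a running Anchore Engine via its standard environment variables). Because `anchore-cli evaluate check` exits non-zero when the final action is STOP, the job failing is what blocks the image:

```yaml
# Hypothetical CI gate; names and variables are illustrative.
anchore_gate:
  stage: test
  script:
    # Submit the freshly built image for analysis.
    - anchore-cli image add "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
    # Block until the analysis completes.
    - anchore-cli image wait "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
    # Evaluate against the active policy bundle; a STOP result exits
    # non-zero and fails this job, keeping the image out of production.
    - anchore-cli evaluate check "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" --detail
```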

Putting this in Practice

In summary, it is extremely important to put checks in place with a tool like Anchore that keep pace with your container image build frequency. For secrets management, an overall best practice I recommend is using a secret store like Vault to handle the storage of sensitive data. Depending on the orchestrator you are using for your containers, there are some options. For Kubernetes, there is Kubernetes Vault. Staying within the HashiCorp suite, there are options here as well for dynamic secrets: Vault Integration and Retrieving Dynamic Secrets.

The above is an excellent system to have in place. I will continue to advocate for including image scanning and policy enforcement as a mandatory step in continuous integration pipelines because it directly aligns with the practice of bringing security as far left in the development lifecycle as possible to catch issues early. Taking a step back to plan and put in place solutions for managing secrets for your containers, and securing your images, will drastically improve your container security stance from end to end and allow you to deploy with confidence.