Installing Anchore with a Single Command Using Helm

Helm is the package manager for Kubernetes, inspired by package managers such as Homebrew, yum, npm and apt. Applications are packaged in charts: collections of files that contain the definitions and configuration of the resources to be deployed to a Kubernetes cluster. Helm was created by Deis, who donated the project to the Cloud Native Computing Foundation (CNCF).
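For reference, a chart follows a standard directory layout (the file names below are Helm conventions, not specific to any one application):

mychart/
  Chart.yaml      # chart metadata: name, version, description
  values.yaml     # default configuration values, overridable at install time
  templates/      # templated Kubernetes manifests, rendered using the values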

Helm makes it simple to package and deploy applications, including versioning, upgrade and rollback. Helm does not replace Docker images; in fact, Docker images are deployed by Helm into a Kubernetes cluster.

Helm is comprised of two components: a server-side service running on the Kubernetes cluster, called Tiller, and the client-side component, helm. Using Helm, applications packaged as charts can be deployed and managed using a single command:

$ helm install myApp

We have recently packaged the Anchore Engine as a Helm Chart to simplify deployment on Kubernetes. Now Anchore can be installed in a highly scalable environment with a single command.

Within 3 minutes you can have an Anchore Engine installed and running in your Kubernetes environment. The following guide requires:

  • A running Kubernetes Cluster
  • kubectl configured to access your Kubernetes cluster
  • Helm binary installed and available in your path
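Before proceeding, you can verify the last two prerequisites (these commands assume Helm 2, which this guide uses):

$ kubectl cluster-info
$ helm version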

Tiller, the server-side component of Helm, should be installed in your Kubernetes cluster. To install Tiller, run the following command:

$ helm init
$HELM_HOME has been configured at /home/username/.helm
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
⎈ Happy Helming! ⎈

If Tiller has already been installed you will receive a warning message that can safely be ignored.

Next, we need to ensure that we have an up-to-date list of Helm Charts.

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

By default, the Anchore Engine chart will deploy an Anchore Engine container along with a PostgreSQL database container; however, this behavior can be overridden if you have an existing PostgreSQL service available.
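As a sketch, using an existing database typically involves disabling the bundled PostgreSQL subchart; the key below is illustrative and should be confirmed against the chart's values.yaml:

$ helm install --name anchore-demo stable/anchore-engine \
       --set postgresql.enabled=false

You would then supply your external database connection details through the chart's database configuration values.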

In addition to the database, the chart creates two deployments:

  • Core Services: The core services deployment includes the external API, notification service, Kubernetes webhook, catalog and queuing service.
  • Worker: The worker service runs the image analysis and can be scaled up to handle the concurrent evaluation of images.

In this example, we will deploy the database, core services and a single worker. Please refer to the documentation for more sophisticated deployments including scaling worker nodes.

The installation can be completed with a single command:

$ helm install --name anchore-demo stable/anchore-engine

If the server-side component, Tiller, is not installed, you will see the following error message:
Error: could not find tiller

You may wish to configure the Anchore Engine to synchronize policies from the Anchore Cloud service, allowing you to use the free graphical policy editor to build policies and whitelists and map these to your own repositories and images.

If you have not already created an account on the Anchore Cloud, you can sign up for free at anchore.io/signup.

You can pass your username and password to the Helm chart either by using command line options or by creating a values.yaml file containing these parameters.

In the following example, the anchore.io username (an email address; user@example.com below is a placeholder) and password are passed using command line options.

Note: In addition to passing your authentication credentials, we also need to enable synchronization of policy bundles and disable anonymous access.

$ helm install --name anchore-demo stable/anchore-engine \
       --set coreConfig.policyBundleSyncEnabled=True \
       --set globalConfig.users.admin.anchoreIOCredentials.useAnonymous=False \
       --set globalConfig.users.admin.anchoreIOCredentials.user=user@example.com \
       --set globalConfig.users.admin.anchoreIOCredentials.password=verysecret

Alternatively, the updated values file can be passed as a parameter to the installation.

$ helm install --name anchore-demo stable/anchore-engine --values=values.yaml
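For reference, a values.yaml equivalent to the command-line options shown above would look like the following (a minimal sketch; every other chart value keeps its default):

coreConfig:
  policyBundleSyncEnabled: True
globalConfig:
  users:
    admin:
      anchoreIOCredentials:
        useAnonymous: False
        user: user@example.com
        password: verysecret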

In both examples the --name parameter is optional; if omitted, a name will be randomly assigned to your deployment.

The Helm installation should complete in a matter of seconds after which time it will output details of the deployed resources showing the secrets, configMaps, volumes, services, deployments and pods that have been created.

In addition, some further help text providing URLs and a quick start will be displayed.

Running helm list (or helm ls) will show your deployment:

$ helm ls
NAME         REVISION UPDATED                  STATUS   CHART                NAMESPACE
anchore-demo 1        Wed Jan 20 10:46:10 2018 DEPLOYED anchore-engine-0.1.0 default

We can use kubectl to show the deployments on the Kubernetes cluster.

$ kubectl get deployments
NAME                                DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
anchore-demo-anchore-engine-core    1       1       1          0         1m
anchore-demo-anchore-engine-worker  1       1       1          1         1m
anchore-demo-postgresql             1       1       1          1         1m

When the engine is started for the first time it will perform a full synchronization of feed data, including CVE vulnerability data. This first sync may last for several minutes during which time the service will be responsive but will queue up images for analysis pending successful completion of the feed sync.

The Anchore Engine exposes a REST API; however, the easiest way to interact with it is through the Anchore CLI, which can be installed using Python pip.

$ pip install anchorecli

Documentation for installing the CLI on Mac, Linux and Windows can be found in the wiki.

The Anchore CLI can be configured using command line options, environment variables or a configuration file. See the getting started wiki for details.
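For example, the connection parameters can be passed explicitly on each invocation using the CLI's --u, --p and --url options:

$ anchore-cli --u admin --p foobar --url http://anchore-demo-anchore-engine.default.svc.cluster.local:8228/v1/ system status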

In this example, we will use environment variables.

ANCHORE_CLI_USER=admin
ANCHORE_CLI_PASS=foobar

The password can be retrieved from Kubernetes by accessing the secrets passed to the container.

ANCHORE_CLI_PASS=$(kubectl get secret --namespace default anchore-demo-anchore-engine -o jsonpath="{.data.adminPassword}" | base64 --decode; echo)

Note: The secret name in this example, anchore-demo-anchore-engine, was retrieved from the output of the helm install or helm status command.

The helm installation or status command will also show the Anchore Engine URL, for example:

ANCHORE_CLI_URL=http://anchore-demo-anchore-engine.default.svc.cluster.local:8228/v1/
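Putting it all together, the variables must be exported so the CLI can read them; the secret name and URL come from your own helm status output:

$ export ANCHORE_CLI_USER=admin
$ export ANCHORE_CLI_PASS=$(kubectl get secret --namespace default anchore-demo-anchore-engine -o jsonpath="{.data.adminPassword}" | base64 --decode)
$ export ANCHORE_CLI_URL=http://anchore-demo-anchore-engine.default.svc.cluster.local:8228/v1/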

To provide external access, you can use kubectl to expose the external API port, 8228, to the internet.

$ kubectl expose deployment anchore-demo-anchore-engine-core \
       --type=LoadBalancer \
       --name=anchore-engine \
       --port=8228

service "anchore-engine" exposed

The external IP can be retrieved from the Kubernetes cluster using the get service call:

$ kubectl get service anchore-engine

NAME           CLUSTER-IP   EXTERNAL-IP PORT(S)        AGE
anchore-engine 10.27.245.63 <pending>   8228:31622/TCP 22s

If the external IP is shown as pending, try re-running the command after a minute.

$ kubectl get service anchore-engine

NAME           CLUSTER-IP   EXTERNAL-IP    PORT(S)        AGE
anchore-engine 10.27.245.63 35.186.160.168 8228:31622/TCP 49s

In this example the Anchore URL should be set to:

ANCHORE_CLI_URL=http://35.186.160.168:8228/v1

Now you can use the Anchore CLI to analyze and report on images.

For example:

To view the status of the Anchore Engine:

$ anchore-cli system status
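This is also a convenient point to watch the initial feed sync described earlier; the status of each feed can be listed with:

$ anchore-cli system feeds list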

To add an image to be analyzed:

$ anchore-cli image add docker.io/library/alpine:latest
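Analysis runs asynchronously, so a newly added image will initially show as not_analyzed; to block until analysis completes you can use the wait subcommand:

$ anchore-cli image wait docker.io/library/alpine:latest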

To list images:

$ anchore-cli image list

To list CVEs found in an image:

$ anchore-cli image vuln library/alpine:latest os

You can follow the Getting Started Guide to learn more about using the Anchore Engine including adding subscriptions, evaluating policies and inspecting images.
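For instance, once an image has been analyzed, a policy evaluation can be run against it:

$ anchore-cli evaluate check docker.io/library/alpine:latest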

Handling False Positives

If, like me, you're subscribed to receive updates for popular base images such as CentOS, then this morning you may have received an email alert from Anchore.

Here, you are receiving a warning that a new, HIGH severity CVE was just found in the CentOS image. You can read more about the vulnerability in Red Hat’s security advisory RHSA-2018:0102 which covers the impact of CVE-2017-3145 on the BIND DNS package.

As you can see from reading the advisory, an attacker could “potentially use this flaw to make named, acting as a DNSSEC validating resolver, exit unexpectedly … via a specially crafted DNS request.”

However, the base CentOS image does not include the BIND DNS package. It does include the bind-license package, which contains a single text file with copyright information for BIND. While the security advisory lists all bind-* packages, the copyright license file obviously cannot be exploited by a specially crafted DNS request!

While this CVE can safely be ignored in the security vulnerability page for library/centos:latest or any images built from this base image, it is likely that your policy checks will fail this image due to the high severity vulnerability.

In my environment, I use the Global Whitelist feature for this very reason. It allows me to add an exception to ensure that the RHSA-2018:0102 vulnerability does not incorrectly block my CentOS or RHEL images.

In the screenshot below you can see that I have whitelisted RHSA-2018:0102 and in the package field I have specified the bind-license package to ensure that we only whitelist this package and not a binary package that is actually exploitable.
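For users of the open source engine who edit policy bundles by hand, the equivalent whitelist entry looks roughly like this (a sketch of a single whitelist item; the id value is arbitrary, and the gate and trigger_id fields follow the policy bundle's whitelist format, pairing the advisory with the affected package):

{
  "id": "rhsa-2018-0102-bind-license",
  "gate": "vulnerabilities",
  "trigger_id": "RHSA-2018:0102+bind-license"
}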

Using the free Anchore Cloud service you can receive notifications for image updates; paid subscribers also receive policy and CVE updates such as the one covered by this blog.

Scanning Images on Amazon Elastic Container Registry (ECR)

The Anchore Engine supports analyzing images from any Docker V2 compatible registry; however, when accessing an Amazon ECR registry, extra steps must be taken to handle Amazon Web Services authentication.

The Anchore Engine will attempt to download images from any registry without requiring further configuration. For example, running the following command:

$ anchore-cli image add prod.example.com/myapp/foo:latest

This instructs the Anchore Engine to download the myapp/foo:latest image from the prod.example.com registry. Unless otherwise configured, the Anchore Engine will try to pull the image from the registry without authentication.

In the following example, we fail to add an image for analysis due to an error.

$ anchore-cli image add prod.example.com/myapp/bar:latest
Error: image cannot be found/fetched from registry
HTTP Code: 404

In many cases it is not possible to distinguish between an image that does not exist and an image that you are not authorized to access since many registries do not wish to disclose the existence of private resources to unauthenticated users.

The Anchore Engine can store credentials used to access your private registries.

Running the following command lists the defined registries.

$ anchore-cli registry list

Registry                                                User            
docker.io                                               anchore
quay.io                                                 anchore
registry.example.com                                    johndoe
123456789012.dkr.ecr.us-east-1.amazonaws.com            ABC

Here we can see that four registries have been defined. When pulling an image, the Anchore Engine checks to see if any credentials have been defined for the registry. If none are present, the Anchore Engine will attempt to pull the image without authentication; if credentials are defined, all metadata access and image pulls for that registry will use the specified username and password.

Registries can be added using the following syntax:

$ anchore-cli registry add REGISTRY USERNAME PASSWORD

The REGISTRY parameter should include the fully qualified hostname and port number of the registry, for example registry.anchore.com:5000.

Amazon AWS typically uses keys instead of traditional usernames and passwords. These keys consist of an access key ID and a secret access key. While it is possible to use the aws ecr get-login command to create an access token, the token expires after 12 hours, so it is not appropriate for use with the Anchore Engine; a user would need to update their registry credentials regularly. So when adding an Amazon ECR registry to the Anchore Engine, you should pass the aws_access_key_id and aws_secret_access_key.

For example:

$ anchore-cli registry add \
             1234567890.dkr.ecr.us-east-1.amazonaws.com \
             MY_AWS_ACCESS_KEY_ID \
             MY_AWS_SECRET_ACCESS_KEY \
             --registry-type=awsecr

The registry-type parameter instructs the Anchore Engine to handle these credentials as AWS credentials rather than as a traditional username and password. Currently, the Anchore Engine supports two types of registry authentication: standard username and password for most Docker V2 registries, and Amazon ECR. In this example we specified the registry type on the command line; if this parameter is omitted, the CLI will attempt to guess the registry type from the URL, which follows a standard format.

The Anchore Engine will use the AWS access key and secret access key to generate authentication tokens to access the Amazon ECR registry, and it will manage regeneration of these tokens, which typically expire after 12 hours.

In addition to supporting AWS access key credentials, Anchore also supports the use of IAM roles for authenticating with Amazon ECR if the Anchore Engine is run on an EC2 instance.

In this case, you can configure the Anchore Engine to inherit the IAM role from the EC2 instance hosting the engine.

When launching the EC2 instance that will run the Anchore Engine you need to specify a role that includes the AmazonEC2ContainerRegistryReadOnly policy.

While this is best performed using a CloudFormation template, you can configure it manually from the launch instance wizard:

  • Select Create new IAM role.
  • Under the type of trusted entity, select EC2.
  • Ensure that the AmazonEC2ContainerRegistryReadOnly policy is selected.
  • Give a name to the role and add this role to the instance you are launching.
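If you prefer the AWS CLI to the console wizard, an equivalent sketch might look like this (the role name matches the example output below; ec2-trust-policy.json is a hypothetical trust policy document allowing ec2.amazonaws.com to assume the role):

$ aws iam create-role --role-name ECR-ReadOnly \
       --assume-role-policy-document file://ec2-trust-policy.json
$ aws iam attach-role-policy --role-name ECR-ReadOnly \
       --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
$ aws iam create-instance-profile --instance-profile-name ECR-ReadOnly
$ aws iam add-role-to-instance-profile --instance-profile-name ECR-ReadOnly \
       --role-name ECR-ReadOnly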

On the running EC2 instance you can manually verify that the instance has inherited the correct role by running the following command:

$ curl http://169.254.169.254/latest/meta-data/iam/info
{
  "Code" : "Success",
  "LastUpdated" : "2018-01-12T18:45:12Z",
  "InstanceProfileArn" : "arn:aws:iam::123456789012:instance-profile/ECR-ReadOnly",
  "InstanceProfileId" : "ABCDEFGHIJKLMNOP"
}

By default, support for inheriting the IAM role is disabled. It can be enabled by adding the following entry to the top of the Anchore Engine config.yaml file:

allow_awsecr_iam_auto: True

When IAM support is enabled, instead of passing the access key and secret access key, use "awsauto" for both the username and password. This instructs the Anchore Engine to inherit the role from the underlying EC2 instance.

$ anchore-cli registry add \
               1234567890.dkr.ecr.us-east-1.amazonaws.com \
               awsauto \
               awsauto \
               --registry-type=awsecr

You can learn more about the Anchore Engine and how to scan your container images, whether they are hosted on cloud-based registries such as Docker Hub and Amazon ECR or on private Docker V2 compatible registries hosted on-premises.