Install Anchore Enterprise on Amazon EKS with Helm

In this post, I will walk through the installation of Anchore Enterprise 2.0 on Amazon EKS with Helm. Anchore maintains a Helm chart that I will use to install the necessary Anchore components.

Prerequisites

  • A running Amazon EKS cluster with worker nodes launched. See EKS Documentation for more information.
  • Helm client and server installed and configured to your EKS cluster.

Note: We’ve written a blog post titled Introduction to Amazon EKS which details how to get started on the above prerequisites.

In my opinion, the prerequisites are the most difficult part of the installation; the Anchore Helm chart itself makes the installation process straightforward.

Once you have an EKS cluster up and running with worker nodes launched, you can verify it via the following command:

$ kubectl get nodes
NAME                                       STATUS   ROLES    AGE   VERSION
ip-10-0-1-66.us-east-2.compute.internal    Ready    <none>   1d    v1.12.7
ip-10-0-3-15.us-east-2.compute.internal    Ready    <none>   1d    v1.12.7
ip-10-0-3-157.us-east-2.compute.internal   Ready    <none>   1d    v1.12.7

Anchore Helm Chart Configuration

To configure the Helm chart, create a custom anchore_values.yaml file and pass it at install time. Anchore offers many configuration options; for the purposes of this document, I will change only the minimum needed to get Anchore Enterprise installed. For reference, there is an anchore_values.yaml file in this repository that you may include in your installation.

Note – For this installation, I will be configuring ingress and using an ALB ingress controller. You can read more about Kubernetes Ingress with AWS ALB Ingress Controller.

Configurations

Ingress

I’ve added the following to my anchore_values.yaml file under the ingress section:

ingress:
  enabled: true
  # Use the following paths for GCE/ALB ingress controller
  apiPath: /v1/*
  uiPath: /*
  # apiPath: /v1/
  # uiPath: /
  # Uncomment the following lines to bind on specific hostnames
  # apiHosts:
  #   - anchore-api.example.com
  # uiHosts:
  #   - anchore-ui.example.com
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing

Anchore Engine API service

I’ve added the following to my anchore_values.yaml file under the Anchore API section:

# Pod configuration for the anchore engine api service.
anchoreApi:
  replicaCount: 1

  # Set extra environment variables. These will be set on all api containers.
  extraEnv: []
  # - name: foo
  #   value: bar

  # kubernetes service configuration for anchore external API
  service:
    type: NodePort
    port: 8228
    annotations: {}

Note – Changed service type to NodePort.

Anchore Enterprise Global

I’ve added the following to my anchore_values.yaml file under the Anchore Enterprise global section:

anchoreEnterpriseGlobal:
  enabled: true

Note – Enabled enterprise components.

Anchore Enterprise UI

I’ve added the following to my anchore_values.yaml file under the Anchore Enterprise UI section:

anchoreEnterpriseUi:
  # kubernetes service configuration for anchore UI
  service:
    type: NodePort
    port: 80
    annotations: {}
    sessionAffinity: ClientIP

Note – Changed service type to NodePort.

This should be all you need to change in the chart.

AWS EKS Configurations

Download the ALB ingress controller manifest:

wget https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.0.1/docs/examples/alb-ingress-controller.yaml

Update cluster-name with the EKS cluster name in alb-ingress-controller.yaml, then download the RBAC role manifest:

wget https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.0.1/docs/examples/rbac-role.yaml

From the AWS console, create an IAM policy and manually update the EKS subnets for auto-discovery.

In the IAM console, create a policy using the contents of the template iam-policy.json. Attach the IAM policy to the EKS worker nodes role.

Add the following tags to your cluster's public subnets:

kubernetes.io/cluster/demo-eks-cluster : shared
kubernetes.io/role/elb : ''
kubernetes.io/role/internal-elb : ''

Deploy the rbac-role and alb ingress controller.

kubectl apply -f rbac-role.yaml

kubectl apply -f alb-ingress-controller.yaml

Deploy Anchore Enterprise

Enterprise services require an Anchore Enterprise license, as well as credentials with permission to the private Docker repositories that contain the enterprise images.

Create a Kubernetes secret containing your license file.

kubectl create secret generic anchore-enterprise-license --from-file=license.yaml=<PATH/TO/LICENSE.YAML>

Create a Kubernetes secret containing Docker Hub credentials with access to the private anchore enterprise repositories.

kubectl create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=<DOCKERHUB_USER> --docker-password=<DOCKERHUB_PASSWORD> --docker-email=<EMAIL_ADDRESS>
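The chart must then reference this pull secret from your anchore_values.yaml. A sketch of that setting follows; the key name below matches the chart's default naming, but verify it against the values.yaml of the chart version you install:

```yaml
# In anchore_values.yaml (key name per the chart's values.yaml; verify for your chart version)
anchoreGlobal:
  imagePullSecretName: anchore-enterprise-pullcreds
```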

Run the following command to deploy Anchore Enterprise:

helm install --name anchore-enterprise stable/anchore-engine -f anchore_values.yaml

It will take the system several minutes to bootstrap. You can check on the status of the pods by running kubectl get pods:

$ kubectl get pods
NAME                                                              READY   STATUS    RESTARTS   AGE
anchore-cli-5f4d697985-hhw5b                                      1/1     Unknown   0          4h
anchore-cli-5f4d697985-rdm9f                                      1/1     Running   0          14m
anchore-enterprise-anchore-engine-analyzer-55f6dd766f-qxp9m       1/1     Running   0          9m
anchore-enterprise-anchore-engine-api-bcd54c574-bx8sq             4/4     Running   0          9m
anchore-enterprise-anchore-engine-catalog-ddd45985b-l5nfn         1/1     Running   0          9m
anchore-enterprise-anchore-engine-enterprise-feeds-786b6cd9mw9l   1/1     Running   0          9m
anchore-enterprise-anchore-engine-enterprise-ui-758f85c859t2kqt   1/1     Running   0          9m
anchore-enterprise-anchore-engine-policy-846647f56b-5qk7f         1/1     Running   0          9m
anchore-enterprise-anchore-engine-simplequeue-85fbd57559-c6lqq    1/1     Running   0          9m
anchore-enterprise-anchore-feeds-db-668969c784-6f556              1/1     Running   0          9m
anchore-enterprise-anchore-ui-redis-master-0                      1/1     Running   0          9m
anchore-enterprise-postgresql-86d56f7bf8-nx6mw                    1/1     Running   0          9m

Run the following command for details on the deployed ingress resource:

$ kubectl describe ingress
Name:             anchore-enterprise-anchore-engine
Namespace:        default
Address:          6f5c87d8-default-anchoreen-d4c9-575215040.us-east-2.elb.amazonaws.com
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /v1/*   anchore-enterprise-anchore-engine-api:8228 (<none>)
        /*      anchore-enterprise-anchore-engine-enterprise-ui:80 (<none>)
Annotations:
  alb.ingress.kubernetes.io/scheme:  internet-facing
  kubernetes.io/ingress.class:       alb
Events:
  Type    Reason  Age   From                    Message
  ----    ------  ----  ----                    -------
  Normal  CREATE  18m   alb-ingress-controller  LoadBalancer 6f5c87d8-default-anchoreen-d4c9 created, ARN: arn:aws:elasticloadbalancing:us-east-2:472757763459:loadbalancer/app/6f5c87d8-default-anchoreen-d4c9/42defe8939465e2c
  Normal  CREATE  18m   alb-ingress-controller  rule 2 created with conditions [{ Field: "path-pattern", Values: ["/*"] }]
  Normal  CREATE  18m   alb-ingress-controller  rule 1 created with conditions [{ Field: "path-pattern", Values: ["/v1/*"] }]

I can see above that an ELB has been created and I can navigate to the specified address:

Anchore Enterprise login screen.

Once I login to the UI and begin to analyze images, I can see the following vulnerability and policy evaluation metrics displaying on the dashboard.

Anchore Enterprise platform dashboard.

Conclusion

You now have an installation of Anchore Enterprise up and running on Amazon EKS. The complete contents for the walkthrough are available by navigating to the GitHub repo here. For more info on Anchore Engine or Enterprise, you can join our community Slack channel, or request a technical demo.

Vulnerability Remediation Requirements for Internet-Accessible Systems

The Department of Homeland Security recently issued Binding Operational Directive 19-02, “Vulnerability Remediation Requirements for Internet-Accessible Systems.” A binding operational directive is a compulsory direction to federal, executive branch departments and agencies for purposes of safeguarding federal information and information systems. Federal agencies are required to comply with DHS-developed directives.

As the development and deployment of internet-accessible systems increases across federal agencies, it is imperative for these agencies to identify and remediate any known vulnerabilities currently impacting the systems they manage. The purpose of BOD 19-02 is to highlight the importance of security vulnerability identification and remediation for internet-facing systems, and to lay out the required actions for agencies when vulnerabilities are identified through Cyber Hygiene scanning. The Cybersecurity and Infrastructure Security Agency (CISA) leverages Cyber Hygiene scanning results to identify cross-government trends and persistent constraints, and to help impacted agencies overcome the technical and resource challenges that prevent the rapid remediation of security vulnerabilities. These Cyber Hygiene scans are conducted in accordance with Office of Management and Budget (OMB) Memorandum 15-01: Fiscal Year 2014-2015 Guidance on Improving Federal Information Security and Privacy Management Practices, under which the NCCIC conducts vulnerability scans of agencies’ internet-accessible systems to identify vulnerabilities and configuration errors. The output of these scans is known as Cyber Hygiene reports, which score any identified vulnerabilities with the Common Vulnerability Scoring System (CVSS).

“To ensure effective and timely remediation of critical and high vulnerabilities identified through Cyber Hygiene scanning, federal agencies shall complete the following actions:”

Review and Remediate Critical and High Vulnerabilities

Review Cyber Hygiene reports issued by CISA and remediate any critical and high vulnerabilities detected on internet-facing systems:

  • Critical vulnerabilities must be remediated within 15 calendar days of initial detection.
  • High vulnerabilities must be remediated within 30 calendar days of initial detection.

How Anchore Fits In

As federal agencies continue to transform their software development, it is necessary for them to incorporate proper security solutions purpose-built to identify and prevent vulnerabilities that are native to their evolving technology stack.

Anchore is a leading provider of container security and compliance enforcement solutions designed for open-source users and enterprises. Anchore provides vulnerability and policy management tools built to surface comprehensive container image package and data content, protect against security threats, and incorporate an actionable policy enforcement language capable of evolving as compliance needs change. These tools are flexible and robust enough to provide the security and policy controls that regulated industry verticals need in order to adopt cloud-native technologies in a DevSecOps environment.

One of the critical points of focus here is leveraging Anchore to identify known vulnerabilities in container images. Anchore accomplishes this by first performing a detailed analysis of the container image, identifying all known operating system packages and third-party libraries. Following this, Anchore will map any known vulnerabilities to the identified packages within the analyzed image.

Viewing Vulnerabilities in the UI

Anchore Enterprise customers can view identified vulnerabilities for analyzed images by logging into the UI and navigating to the image in question.

View identified vulnerabilities for analyzed images in Anchore platform.

In the above image, we can see that CVE-2019-3462 is of high severity, is linked to the OS package apt-1.0.9.8.4, and has a fix available in version 1.0.9.8.5. Also presented in the UI is a link to the source of the CVE information. Based on the requirements of BOD 19-02, this high vulnerability will need to be remediated within 30 calendar days of initial detection.

Note – A list of vulnerabilities can also be viewed using the Anchore CLI, which can be configured to communicate with a running Anchore service.
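As a sketch of how that CLI output might be triaged in a script: `anchore-cli image vuln <image> os` prints one vulnerability per row with the severity in the third column, so a small filter can surface only the BOD 19-02-relevant findings. The `filter_high` helper below is hypothetical, not part of the CLI, and the sample rows stand in for real output:

```shell
# Keep only High/Critical rows; severity is the third whitespace-separated column
# of `anchore-cli image vuln <image> os` output.
filter_high() { awk '$3 == "High" || $3 == "Critical"'; }

# With a configured anchore-cli this would be piped from the service, e.g.:
#   anchore-cli image vuln docker.io/library/debian:latest os | filter_high
printf 'CVE-2019-3462 apt-1.0.9.8.4 High\nCVE-0000-0000 libfoo Low\n' | filter_high
```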

Also, the dashboard view provides a higher-level presentation of the vulnerabilities impacting all images scanned with Anchore.

Anchore dashboard provides higher-level presentation of the vulnerabilities.

Viewing Vulnerabilities in the Build Phase

Anchore scanning can be integrated directly into the build phase of the software development lifecycle to identify security vulnerabilities, and potentially fail builds, to prevent vulnerable container images from making their way into production registries and environments. This point of integration is typically the fastest path to vulnerability identification and remediation for development teams.

Anchore provides a Jenkins plugin that will need to be configured to communicate with an existing Anchore installation. The Anchore Jenkins plugin surfaces security and policy evaluation reports directly in the Jenkins UI and as JSON artifacts.

Common vulnerabilities and exposures list in Jenkins.

Note – For more information on how custom Anchore policies can be created to fulfill specific compliance requirements, contact us, or navigate to our open-source policy hub for examples.

Registry Integration

For organizations not scanning images during the build phase, Anchore can be configured to integrate directly with any docker_v2 container registry to continuously scan the repositories or tags.

Ongoing Vulnerability Identification

It is not uncommon for vulnerabilities to be published days or weeks after an image has been scanned. To address this, Anchore can be configured to subscribe to vulnerability updates. For example, if a user is subscribed to the library/nginx:latest image tag and a new vulnerability is added which matches a package in the subscribed nginx image, Anchore can send out a Slack notification. This alerting functionality is especially critical for the BOD 19-02 directive as the remediation requirements are time-sensitive, and agencies should be alerted of new threats ASAP.
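The subscription workflow above can be sketched with the CLI. The `sub_key` helper below is just a hypothetical convenience for building the registry/repository:tag key that Anchore subscriptions expect; the activation command itself assumes a configured anchore-cli:

```shell
# Build the registry/repository:tag subscription key Anchore uses.
sub_key() { printf '%s/%s:%s' "$1" "$2" "$3"; }

# Activating vulnerability-update notifications would then look like:
#   anchore-cli subscription activate vuln_update "$(sub_key docker.io library/nginx latest)"
sub_key docker.io library/nginx latest
```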

Conclusion

Anchore continues to provide solutions for the government, enterprises, and open-source users, built to support the adoption of container technologies. By understanding that containers are more than just CVEs and lists of packages, Anchore takes a container-native approach to image scanning and provides end-users with a complete suite of policy and compliance checks designed to support a variety of industry verticals from the U.S. Government and F100 enterprises to start-ups.

Create an Open Source Secure Container Based CI/CD Pipeline

Docker gives developers the ability to streamline packaging, storage, and deployment of applications at great scale. With the increased use of container technologies across software development teams, securing these images becomes challenging. Because of this increased flexibility and agility, security checks for these images need to be woven into an automated pipeline and become part of the development lifecycle.

Common Tooling

Prior to any implementation, it is important to standardize on a common set of tools that will be critical components for addressing the above requirement. The four tools that will be discussed today are as follows:

Jenkins

Continuous integration tools like Jenkins will be driving the workload for any automated pipeline to run successfully. The three tools below will be used throughout an example development lifecycle.

Docker Registry

Docker images are stored and delivered through registries. Typically, only trusted and secure images should be accessible through Docker registries that developers can pull from.

Anchore

Anchore will scan images and create a list of packages, files, and artifacts. From this, Anchore has the ability to define and enforce custom policies and send the results of these back in the form of a pass or fail.

Notary

Notary is Docker’s platform for providing trusted delivery of images. It does this by signing images, distributing them to a registry, and ensuring that only trusted images can be distributed and utilized.

Example CI build steps:

  1. Developer commits code to repository.
  2. Jenkins job begins to build a new Docker image, bringing in any code changes just made.
  3. Once the image build completes, it is scanned by Anchore and checked against user-defined policies.
  4. If the Anchore checks pass, the image is signed by Notary and pushed to a Docker registry.
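Steps 3 and 4 hinge on gating the pipeline on Anchore's verdict. A minimal sketch of that gate follows; the `gate` helper is illustrative, and in a real CI step the wrapped command would be `anchore-cli evaluate check` against the freshly built image:

```shell
# Run a policy check command; only sign and push when it passes.
gate() {
  if "$@"; then
    echo "policy check passed - signing and pushing"
  else
    echo "policy check failed - aborting build" >&2
    return 1
  fi
}

# In a Jenkins shell step this would be something like:
#   gate anchore-cli evaluate check "$IMAGE" && docker push "$IMAGE"
gate true                           # stand-in for a passing evaluation
gate false || echo "build stopped"  # stand-in for a failing evaluation
```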

Anchore Policies

As mentioned above, Anchore is the key component for ensuring that only secure images progress through the next stages of the build pipeline. In greater detail, Anchore scans images and creates a manifest of packages. From this manifest, it can run checks for image vulnerabilities, and it can periodically check whether new vulnerabilities have been published that directly impact a package contained within a relevant image manifest. Anchore can be integrated with common CI tools (such as Jenkins) or run ad hoc from a command line, and these integrations can enforce policy checks that potentially fail builds.

Anchore checks provide the most value through a proper CI model. Having the ability to split up acceptable base images and application layers is critical for appropriate policy check abstraction, and multiple Anchore gates specific to each of these image layers are fundamental to the overall success of Anchore policies. As an example, prior to promotion and push into a registry, a trusted base image will need to pass Anchore checks for Dockerfile best practices (for example, a USER is defined and no SSH port is open) as well as operating system package vulnerability checks.

Secondary to the above, once a set of base images has been signed (by Notary) and pushed into a trusted registry, all ‘application specific’ images must be built from them. It is the responsibility of whoever builds these images to make sure the appropriate base images are used. Base layer inheritance applies here, and only signed images from the trusted registry will be able to pass the next set of Anchore policy checks. These checks focus not only on the signed and approved base layer images but also, depending on the application layer dependencies, check for any npm or Python packages that contain published vulnerabilities. Policies can also be created that enforce Dockerfile and image best practices; as an example, Anchore can verify that a particular base image is in use via a regex check, and these regular expressions can be used to enforce policies specific to image layers, files, and more.

While the above is just one example of how to implement, secure, and enforce images throughout their lifecycle, it is important to understand the differences between the tools and the separate functions each plays. Without tools like Anchore, it is easy to see how insecure or untrusted images can make their way into registries and production environments. By leveraging gated checks with Anchore, not only do you control which images can be used, but teams can also begin to adopt the core functionality of the other tools outlined above in a more secure fashion.

Anchore & Slack, Container Security Notifications

With Anchore, you can subscribe to tags and images to receive notifications when images are updated, when CVEs are added or removed, and when the policy status of an image changes, so you can take a proactive approach to security and compliance. Staying on top of these notifications allows the appropriate remediation and triage to take place. One of the most common alerting tools Anchore users leverage is Slack.

How to Configure Slack Webhooks to Receive Anchore Notifications via Azure Functions

In this example, we will walk through how to configure Slack webhooks to receive Anchore notifications. We will consume the webhook with an Azure Function and pass the notification data into a Slack channel.

You will need the following:

Slack Configuration

Configure incoming webhooks to work with the Slack application you would like to send Anchore notifications to. The Slack documentation gives a very detailed walkthrough on how to set this up.

It should look similar to the configuration below (I am just posting to the #general channel):

Slack webhook setup for workspace.

Azure Initial Configuration

Once you have an Azure account, begin by creating a Function App. In this example I will use the following configuration:

Create function app for webhook test.

Choose the In-Portal development environment and then Webhook + API:

Azure configuration for Javascript.

Once the function has been set up, navigate to the Integrate tab and edit the configuration:

Azure integrate tab to edit configuration.

Finally, select ‘Get function URL’ to retrieve the URL for the function we’ve just created. It should look similar to this format:

https://jv-test-anchore-webhook.azurewebsites.net/api/general/policy_eval/admin

Anchore Engine Configuration

If you have not set up Anchore Engine, there are a couple of choices:

Once you have a running Anchore Engine, we need to configure the engine to send webhook notifications to the URL of our Function App in Azure.
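On the engine side, this lives in the webhooks section of config.yaml. The fragment below is a sketch following the general shape of Anchore Engine's webhook configuration, using the Azure Function URL from this walkthrough; verify the field names against the config.yaml shipped with your engine version:

```yaml
# Sketch of the webhooks section in Anchore Engine's config.yaml
webhooks:
  webhook_user: null
  webhook_pass: null
  ssl_verify: false
  general:
    # The engine substitutes <notification_type> and <userId> at send time,
    # e.g. .../api/general/policy_eval/admin as seen earlier.
    url: "https://jv-test-anchore-webhook.azurewebsites.net/api/general/<notification_type>/<userId>"
```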

Once the configuration is complete, you will need to activate a subscription; you can follow the documentation link above for more info on that.

In this example, I have subscribed to a particular tag and am listening for ‘policy_eval’ changes. From the documentation:

“This class of notification is triggered if a Tag to which a user has subscribed has a change in its policy evaluation status. The policy evaluation status of an image can be one of two states: Pass or Fail. If an image that was previously marked as Pass changes status to Fail or vice-versa then the policy update notification will be triggered.”

Azure Function Code

I kept this as minimal as possible in order to keep it open-ended. In short, Anchore will send the notification data to the webhook endpoint we’ve specified; we just need to write some code to consume it and then send it to Slack.

You can view the code here.

Quick note: In the example, the alert to Slack is very basic. However, feel free to experiment with the notification data that Anchore sends to Azure and configure the POST data to Slack.

Testing

In my example, I’m going to swap between two policy bundles and evaluate them against an image and tag I’ve subscribed to. The easiest way to accomplish this is via the CLI or the API.

The CLI command to activate a policy:

anchore-cli policy activate <PolicyID>

The CLI command to evaluate an image:tag against the newly activated policy:

anchore-cli evaluate check docker.io/jvalance/sampledockerfiles:latest

This should trigger a notification, given that I’ve modified the policy bundles to create two different final actions. In my example, I’m toggling the exposed port 22 rule in the default bundle between ‘WARN’ and ‘STOP’.

Once Anchore has finished evaluating the image against the newly activated policy, a notification should be created and sent out to our Azure Function App. Based on the logic we’ve written, we will handle the request, and send out a Slack notification to our Slack app that has been set up to receive incoming webhooks.

You should be able to view the notification in the Slack workspace and channel:

Slack notification tested successfully.

Anchore & Enforcing Alpine Linux Docker Images Vulnerability

A security vulnerability affects the official Alpine Docker images (>=3.3): they contain a NULL password for the root user. This vulnerability is tracked as CVE-2019-5021. With over 10 million downloads, Alpine Linux is one of the most popular Linux distributions on Docker Hub. In this post, I will demonstrate the issue by taking a closer look at two Alpine Docker images, configure Anchore Engine to identify the risk within the vulnerable image, and show the final output of an Anchore policy evaluation.

Finding the Issue

In builds of the Alpine Docker image (>=3.3), the /etc/shadow file shows the root user password field without a password or lock specifier set. We can see this by running an older Alpine Docker image:

# docker run docker.io/alpine:3.4 cat /etc/shadow | head -n1
root:::0:::::

With no ! or password set, this will now be the condition we wish to check with Anchore.
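That condition is easy to probe for locally before involving Anchore. A quick sketch using the same regex we will hand to the analyzer below; the `check_null_root` helper is illustrative, and the printf lines stand in for piping in a real /etc/shadow first line:

```shell
# Return non-zero when the root entry has neither a password hash nor a lock specifier.
check_null_root() { grep -qE '^root:::0:::::$' && return 1 || return 0; }

# With a real image this would be:
#   docker run docker.io/alpine:3.4 cat /etc/shadow | head -n1 | check_null_root
printf 'root:::0:::::\n'  | check_null_root || echo "null root password detected"
printf 'root:!::0:::::\n' | check_null_root && echo "root account locked"
```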

To see this condition addressed with the latest version of Alpine, run the following command:

# docker run docker.io/alpine:latest cat /etc/shadow | head -n1
root:!::0:::::

Configuring Anchore Secret Search Analyzer

We will now set up Anchore to search for this particular pattern during image analysis, in order to properly identify the known issue.

Anchore comes with a number of pre-installed patterns that search for certain types of secrets and keys, each with a named pattern that can be matched later in an Anchore policy definition. We can add a new pattern to the analyzer_config.yaml Anchore Engine configuration file and start up Anchore with this configuration. The new analyzer_config.yaml should include a new pattern, which we’ve named ‘ALPINE_NULL_ROOT’:

# Section in analyzer_config.yaml
# Options for any analyzer module(s) that takes customizable input
...
secret_search:
  match_params:
    - MAXFILESIZE=10000
    - STOREONMATCH=n
  regexp_match:
    ...
    - "ALPINE_NULL_ROOT=^root:::0:::::$"

Note – By default, an installation of Anchore comes bundled with a default analyzer_config.yaml file. To address this particular issue, modify the analyzer_config.yaml file as shown above. To make sure the configuration changes make their way into your installation of Anchore Engine, create an analyzer_config.yaml file and properly mount it into the Anchore Engine analyzer service.
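In a Kubernetes deployment, one way to get the file into the analyzer is a ConfigMap mount. The fragment below is a sketch; the volume names, ConfigMap name, and mount path are all illustrative, so check your chart or deployment for the path the analyzer actually reads:

```yaml
# In the analyzer Deployment spec (illustrative names and path)
volumeMounts:
  - name: analyzer-config
    mountPath: /anchore_service/analyzer_config.yaml
    subPath: analyzer_config.yaml
volumes:
  - name: analyzer-config
    configMap:
      name: anchore-analyzer-config
```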

Create an Anchore Policy Specific to this Issue

Next, I will create a policy bundle containing a policy rule that explicitly looks for any matches of the ALPINE_NULL_ROOT regex created above. If any matches are found, the Anchore policy evaluation will fail.

# Anchore ALPINE_NULL_ROOT policy bundle (IDs are illustrative)
{
  "id": "alpinenull",
  "version": "1_0",
  "name": "Default bundle",
  "comment": "Default bundle",
  "whitelisted_images": [],
  "blacklisted_images": [],
  "mappings": [
    {
      "id": "mapping-1",
      "name": "default",
      "registry": "*",
      "repository": "*",
      "image": { "type": "tag", "value": "*" },
      "policy_id": "alpinenullroot_policy",
      "whitelist_ids": [ "global_whitelist" ]
    }
  ],
  "policies": [
    {
      "id": "alpinenullroot_policy",
      "name": "Alpine null root check",
      "comment": "Fail any image whose secret search matches ALPINE_NULL_ROOT",
      "version": "1_0",
      "rules": [
        {
          "id": "rule-1",
          "action": "STOP",
          "gate": "secret_scans",
          "trigger": "content_regex_checks",
          "params": [
            { "name": "content_regex_name", "value": "ALPINE_NULL_ROOT" }
          ]
        }
      ]
    }
  ],
  "whitelists": [
    {
      "id": "global_whitelist",
      "name": "Global Whitelist",
      "version": "1_0",
      "items": []
    }
  ]
}

Note: The above is an entire policy bundle, which is needed to effectively evaluate analyzed images. The key section is the policies section, where we use the secret_scans gate with the content_regex_checks trigger and the content_regex_name parameter set to ALPINE_NULL_ROOT.

Conduct Policy Evaluation

Once this policy has been added and activated in an existing Anchore Engine deployment, we can conduct an analysis and policy evaluation of the vulnerable Alpine Docker image (v3.4) via the following command:

# anchore-cli evaluate check docker.io/library/alpine:3.4 --detail
Image Digest: sha256:0325f4ff0aa8c89a27d1dbe10b29a71a8d4c1a42719a4170e0552a312e22fe88
Full Tag: docker.io/library/alpine:3.4
Image ID: b7c5ffe56db790f91296bcebc5158280933712ee2fc8e6dc7d6c96dbb1632431
Status: fail
Last Eval: 2019-05-09T05:02:32Z
Policy ID: alpinenull
Final Action: stop
Final Action Reason: policy_evaluation

Gate          Trigger               Detail                                                                                                              Status
secret_scans  content_regex_checks  Secret search analyzer found regexp match in container: file=/etc/shadow regexp=ALPINE_NULL_ROOT=^root:::0:::::$    stop

In the above output, we can see that the secret search analyzer found a regular expression match in the Alpine 3.4 Docker image we analyzed. Because we associated a stop action with this policy rule, the overall policy evaluation fails.

Given that Alpine is one of the most widely used Docker images, and the impacted versions of it are particularly recent, it is recommended to update to a new version of the image that is not impacted or modify the image to disable the root account.

How Tremolo Security Deploys Anchore on Openshift

When you see a breach in the headlines, it usually reads something like “Known vulnerability exploited to…” Whatever was stolen or broken was compromised because of a bug that had been discovered and fixed by the developers, but not patched in production. Patching is hard.

It’s much harder than most security professionals are willing to admit. It’s not hard because running an upgrade script is hard; patching is hard because, without a comprehensive testing suite, you never know if an update is going to break your application or systems.

At Tremolo Security, we have already blogged about how we approach patching our dependencies in Unison and OpenUnison. With our release of Orchestra to automate security and compliance in Kubernetes in the past few weeks, we wanted to apply the same approach to our containers. We turned to Anchore’s open source Anchore Engine to scan the containers we publish and make sure they’re kept up to date. In this post we’re going to talk about our use case, why we chose to use Anchore and how we deployed Anchore to scan and update our containers.

Publishing Patched Containers

Our use case for container scanning is a bit different than most. In a typical enterprise, you want a secure registry with containers that are continuously scanned for known vulnerabilities and compliance with policies. At Tremolo Security, we wanted to make sure that the containers we publish are already patched and kept updated continuously. We work very hard to create an easily patched solution, and we want our customers to feel confident that the containers they obtain from us have been kept up to date.

When we first started publishing containers, we relied on Dockerhub’s automatic builds to publish our containers whenever one of the base images (CentOS or Ubuntu) was updated. This wasn’t good enough for us: the base containers were usually patched once per month, and that was too slow. We’d have customers come to us and say “we scanned your containers and there are patches available.” We wanted to make sure that as patches became available they were immediately integrated into our containers.

Why We Chose Anchore

We were first introduced to Anchore a few years ago when they were guests on TWIT.tv’s FLOSS Weekly, a podcast about free and open source software. We had submitted our container for a scan by both Anchore’s service and a well-known provider’s service and received very different results. We tweeted our question to Anchore and they responded with a great blog post explaining how they take into account Red Hat’s updates to CVEs in CentOS for far better and more accurate scan results. That deep level of understanding made it clear to us this was a partner we wanted to work with.

Deploying Anchore

Now for the fun part: we wanted to deploy Anchore’s open source engine on our own infrastructure. We use OKD for various functions at Tremolo Security, including our CI/CD pipeline and publishing. OKD out of the box is far more restrictive than most Kubernetes distributions. It doesn’t allow privileged containers by default, and its use of SELinux is very powerful but can be very limiting for containers that are not built to run unrestricted. Finally, OKD doesn’t rely on Helm or Tiller, but Anchore’s chart does.

Helm and Tiller

I don’t like to deploy anything in my cluster that has cluster-admin access unless I know and am in control of how and when it’s used. Helm and Tiller have never given me those warm-and-fuzzies, so we don’t use them. That said, we needed them to deploy Anchore, so we deployed Tiller into Anchore’s project (namespace). When we deployed Tiller, we gave it a service account that only had administrator access in the Anchore project. As soon as we had Anchore working, we immediately destroyed the Helm and Tiller deployments.

Writing Ephemeral Data

The first issue we ran into was that the containers that make up Anchore’s engine write ephemeral data inside their containers. Most Kubernetes distributions will let you do this based on file system permissions, but not OKD. When you need to write ephemeral data, you need to update your pods to use emptyDir volumes. We went through each of the deployments in the Helm charts to add the volume mounts:

  - name: service-config-volume
    mountPath: /anchore_service_config
  - name: logs
    mountPath: /var/log/anchore
  - name: run
    mountPath: /var/run
  - name: scratch
    mountPath: /scratch

and the corresponding emptyDir volumes:

  - name: service-config-volume
    emptyDir: {}
  - name: logs
    emptyDir: {}
  - name: run
    emptyDir: {}
  - name: scratch
    emptyDir: {}

Unprivileged Containers

The next issue we ran into was that the analyzer needed root access; thankfully, the other containers do not. We created a service account and added it to the privileged SCC in OKD. SCCs (security context constraints) serve the same role in OKD that pod security policies serve in upstream Kubernetes clusters.
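That grant can be sketched as follows. The service account name (anchore-analyzer) and project name (anchore) are hypothetical stand-ins, not the names from our actual deployment:

```yaml
# Hypothetical names: a dedicated service account for the analyzer in the
# "anchore" project. After creating it, add it to the built-in privileged SCC:
#   oc adm policy add-scc-to-user privileged -z anchore-analyzer -n anchore
apiVersion: v1
kind: ServiceAccount
metadata:
  name: anchore-analyzer
  namespace: anchore
```

The analyzer deployment’s pod spec then references this service account via serviceAccountName, while the remaining deployments keep an unprivileged account.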

Deleting Helm

Once Anchore was running, did I mention we deleted Helm?

Scanning and Updating Containers

Once Anchore was running, we needed to add our containers and make sure they were updated appropriately. While anchore-cli works great for one-off commands, it wasn’t going to scale for us: we publish variants of nearly a dozen containers for Ubuntu, CentOS, and RHEL. The great thing, though, is that Anchore is cloud-native and the CLI just uses an API!

We decided to create a poor man’s operator. An operator is a pattern in the cloud-native world that says, “Take all the repetitive stuff admins do and automate it.” For instance, the operators we’re building for OpenUnison and MyVirtualDirectory will automate certificate management and trusts. Typically an operator revolves around a custom resource definition (CRD); when an instance of the CRD is updated, the operator brings the environment into line with the configuration of the custom resource. We call ours a poor man’s operator because instead of watching a custom resource, we created a CronJob that runs through the containers listed in a custom resource and, if updates are available, calls a webhook to begin rebuilding.
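A minimal sketch of that CronJob follows. The image name, schedule, and namespace are hypothetical illustrations; the real implementation lives in our repository:

```yaml
# Sketch of the "poor man's operator": a CronJob that periodically checks
# the containers listed in our custom resource against Anchore's API and
# calls a rebuild webhook when updates are available. Names are hypothetical.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: anchore-image-scan
  namespace: anchore
spec:
  schedule: "0 * * * *"    # run the check hourly
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: scan
              image: tremolosecurity/anchore-os-image-scan   # hypothetical tag
          restartPolicy: OnFailure
```

The job itself is stateless; all state lives in Anchore and in the custom resource, which is what keeps this approach simple.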

The great thing about this approach was that we could just add new containers to our custom resource and it would add them to Anchore’s scans. No fuss, no muss! We’re all open source friends here, so we published our code: https://github.com/TremoloSecurity/anchore-os-image-scan.

Closing The Loop

Anchore has given Tremolo Security a great platform for keeping our containers patched. Knowing when we walk into a customer that their scans will give the best possible results is a competitive differentiator for us. We have enjoyed working with Anchore for the last few years and look forward to working with them for many more to come!

Anchore 2.0 is Now Built on the Red Hat Universal Base Image

Earlier this week Red Hat announced an exciting new offering for developers, technology partners, and users: the Red Hat Universal Base Image (UBI). Anchore is excited to announce that as of Anchore Enterprise 2.0 (including the OSS Anchore Engine), core Anchore container images will now be based on the Red Hat UBI.

As an organization that develops software that is primarily distributed to end-users as a collection of container images, we have derived great value and agility through the isolation and encapsulation that comes with developing on, building, testing, and distributing software using containers.

The Anchore services themselves are applications that utilize underlying libraries, dependencies and utilities that are typically provided by most Linux OS distributions, and as such our container images have historically been based on either CentOS or Ubuntu base images.

It is a testament to the effectiveness of container isolation that even though Anchore has changed which OS base image we’ve used, the user experience of running/upgrading Anchore across these changes has remained largely unchanged. However, there have been users who have asked for more from the underlying OS that Anchore services are built upon – specifically the ability to match the supported container and underlying OS infrastructure, and access to support options from the OS vendor for container-based service deployments. Up until now, we have not been able to provide crystal clear recommendations around these topics to our users.

“Red Hat is pleased to welcome Anchore as one of the first partners to adopt the Universal Base Image” said Lars Herrmann, senior director, Ecosystem Program, Red Hat. “We believe the availability of more freely redistributable, well-curated base images can simplify the development process for our partners and enhance the support experience of our mutual customers.”

The Red Hat Universal Base Image is derived directly from Red Hat Enterprise Linux, and is freely available and redistributable, enabling technology partners and application developers such as ourselves to build and distribute our container-based applications, all based on a familiar and trusted Red Hat based OS.

As an application developer, the availability of the UBI short-circuits complications that can arise from users and customers of ours, who are asking for OS-level support for our application, and many other use cases where a supported container OS environment is required (in particular, within large enterprises and regulated industries). In addition, UBI has made the Red Hat OSS software ecosystem fully accessible, when it comes to delivering end-to-end (from development, through build, to distribution) container-based software. Anchore users can now utilize familiarity with Red Hat software for system diagnosis/deep inspection within the Anchore containers (based on UBI), and most importantly can now “turn on” official Red Hat support for any base OS concerns when running on Red Hat Enterprise Linux or Red Hat OpenShift, in addition to the specialized support available from Anchore for our own services.

We’re excited to be an early adopter of the UBI offering from Red Hat, and believe that moving to UBI as our base container image clearly and immediately improves the options available to ourselves (as application developers) and to all users of Anchore, across the board.

For more information on the Anchore Enterprise 2.0 launch, as well as the Red Hat Universal Base Image announcements and material, please refer to the following links.

Learn More About the Red Hat Universal Base Image

Learn More About Anchore Enterprise 2.0

Announcing Anchore Enterprise Version 2.0

We’re truly excited today to announce the immediate availability of Anchore Enterprise version 2.0, the latest OSS and Enterprise software from Anchore that provides users with the tools and techniques needed to enforce container security, compliance, and best-practices requirements with usable, flexible, cross-organization, and above all time-saving technology. This release is based on the all-new (and also available today) OSS Anchore Engine version 0.4.0.

New Features of Enterprise 2.0

Building on top of the existing Anchore Enterprise 1.2 release, Anchore Enterprise version 2.0 adds major new features and architectural updates that collectively represent the technical expression of discussions, experiences, and feedback from customers and users of Anchore over the last several years. As we continue to gain in-depth insight into the challenges that Dev/Ops and Sec/Ops groups face, we’re observing container-based deployments becoming the norm rather than the exception for production workloads.

As a consequence, the size, responsiveness, information retrieval and reporting breadth, and operational needs demanded of Anchore in its role as an essential piece of policy-based security and compliance infrastructure have grown in kind.

The overarching purpose of the new features and design of the 2.0 version of Anchore Enterprise is to directly address the challenges of continued growth and scale by extending the enterprise integration capabilities of Anchore, establishing an architecture that grows alongside our users’ demanding throughput and scale requirements, and offering even more insight into users’ container image environments through rich new APIs and reporting capabilities, all in addition to the rich set of enforcement capabilities included with Anchore Enterprise’s flexible policy engine.

The major new features and resources launched as part of Anchore Enterprise 2.0 include:

  • GUI Dashboard: new configurable landing page for users of the Enterprise UI, presenting complex information summaries and metrics time series for deep insight into the collective status of your container image environment.
  • Enterprise Reporting Service: an entirely new service that runs alongside existing Anchore Enterprise services and exposes the full corpus of container image information available to Anchore Engine via a flexible GraphQL interface.
  • LDAP Integration: Anchore Enterprise can now be configured to integrate with your organization’s LDAP/AD identity management system, with flexible mappings of LDAP information to Anchore Enterprise’s RBAC account and user subsystem.
  • Red Hat Universal Base Image: all Anchore Enterprise container images have been re-platformed atop the recently announced Red Hat Universal Base Image, bringing more enterprise-grade software and support options to users deploying Anchore Enterprise in Red Hat environments.
  • Anchore Engine 0.4.0: Anchore Enterprise is built on top of the OSS Anchore Engine, which has received many new features and updates as well (see below for details).
  • New Documentation and Resources: Alongside the release of Anchore Engine 0.4.0, we’ve launched a brand new documentation site that provides a more flexible structure, versioned documentation sets, and greatly enhanced feedback and contribution capabilities.
  • New Support Portal: customers of Anchore Enterprise 2.0 are now provided with full access to a new support portal for better ticket tracking and feature request submissions.

Anchore Engine OSS

Anchore Enterprise 2.0 is built on top of Anchore Engine version 0.4.0 – a new version of the fully functional core services that drive all Anchore deployments. Anchore Engine has received a number of new features and other new project updates:

  • Automated Data Management: new automation capabilities and rules allow simplified management of the volume of analysis data while still supporting audit capabilities. New data tiers support flexible management of Anchore data resources as your deployment grows and scales over time.
  • Policy Hub: centralized repository of Anchore policies, accessible by all Anchore users, where pre-canned policies are available either to be used directly or as a starting point for your own policy definitions.
  • Rootless Analyzers: new implementation of the core image analysis capabilities of Anchore which no longer require any special access to handle the high variability found within container images, while still providing the deep inspection needed for powerful security and compliance enforcement.
  • Red Hat Universal Base Image: all Anchore Engine container images have been re-platformed atop the recently announced and freely available Red Hat Universal Base Image, bringing more enterprise-grade software and support options to users deploying Anchore Engine in Red Hat environments.

For a full description of new features, improvements and fixes available in Anchore Engine OSS, click here.

Once again, we would like to sincerely thank all of our open-source users, customers and contributors for all of the spirited discussion, feedback, and code contributions that are all part of this latest release of Anchore Engine OSS! If you’re new to Anchore, we would like nothing more than to have you join our community!

Anchore Enterprise 2.0 Available Now

With Anchore Enterprise 2.0, available immediately, our goal has been to include a brand new set of large scale and enterprise-focused updates for all Anchore users that can be utilized immediately by upgrading existing deployments of Anchore Enterprise or Anchore Engine OSS.

For users looking for comprehensive solutions to the unique challenges of securing and enforcing best-practices and compliance to existing CI/CD, container monitoring and control frameworks, and other container-native pipelines, we sincerely hope you enjoy our latest release of Anchore software and other resources – we look forward to working with you!

For more information on requesting a trial, or getting started with Anchore Enterprise 2.0, please direct your browser to the Anchore Enterprise page.

Use Anchore Policies to Reach CIS Docker Benchmark

As Docker usage has greatly increased, it has become increasingly important to gain a better understanding of how to securely configure and deploy Dockerized applications. The Center for Internet Security has published the CIS Docker 1.13 Benchmark, which provides consensus-based guidance from subject matter experts to help users and organizations achieve secure Docker usage and configuration.

We previously published a blog on how Anchore can help achieve NIST 800-190 compliance. This post will detail how Anchore can help with certain sections of the CIS Docker 1.13 Benchmark. The publication focuses on five areas that are specific to Docker:

  • Host Configuration
  • Docker daemon configuration
  • Docker daemon configuration files
  • Container Images and Build File
  • Container Runtime

Anchore is a service that analyzes Docker images pre-runtime and applies user-defined acceptance policies to allow automated container image validation and certification. Anchore is commonly used with a CI tool such as Jenkins to streamline container image builds in an automated fashion. The critical component in helping achieve any sort of compliance is the Anchore policy bundle. With policy bundles, users have full control over which policy rules their Docker images must adhere to, and can fail builds or warn users based on the outcome of these evaluations.

Scoring Information

A scoring status indicates whether compliance with a given recommendation impacts the assessed target’s benchmark score.

Scored

Failure to comply with “Scored” recommendations will decrease the final benchmark score. Compliance with “Scored” recommendations will increase the final benchmark score.

Not Scored

Failure to comply with “Not Scored” recommendations will not decrease the final benchmark score. Compliance with “Not Scored” recommendations will not increase the final benchmark score.

Profile Definitions

The following configuration profiles are defined by this Benchmark:

Level 1 – Docker

Items in this profile intend to:

  • Be practical and prudent
  • Provide a clear security benefit
  • Not inhibit the utility of the technology beyond acceptable means

Level 2 – Docker

Items in this profile exhibit one or more of the following characteristics:

  • Are intended for environments or use cases where security is paramount
  • Act as a defense-in-depth measure
  • May negatively inhibit the utility or performance of the technology

1. Host Configuration

Security controls specific to the host configuration are not achievable with Anchore.

2. Docker Daemon Configuration

Security controls specific to the Docker daemon configuration are not achievable with Anchore.

3. Docker Daemon Configuration Files

Security controls specific to the Docker daemon configuration files are not achievable with Anchore.

4. Container Images and Build Files

Docker container images and their corresponding Dockerfiles govern how a container will behave when running. It is important to use the appropriate base images, and best practices when creating Dockerfiles to secure your containerized applications and infrastructure.

4.1 Create a user for the container (Scored)

Create a non-root user for the container in the Dockerfile for the container image. It is generally good practice to run a Docker container as a non-root user.

When creating Dockerfiles, make sure the USER instruction exists. This can be achieved with an Anchore policy check for the presence of the USER instruction, as well as a check that the effective user is not root.
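As a sketch, a pair of policy rules along these lines can express both checks. The gate and trigger names follow the Anchore Engine dockerfile gate; verify the exact parameter names against your Anchore version:

```json
[
  {
    "gate": "dockerfile",
    "trigger": "instruction",
    "action": "warn",
    "params": [
      { "name": "instruction", "value": "USER" },
      { "name": "check", "value": "not_exists" }
    ]
  },
  {
    "gate": "dockerfile",
    "trigger": "effective_user",
    "action": "stop",
    "params": [
      { "name": "users", "value": "root" },
      { "name": "type", "value": "blacklist" }
    ]
  }
]
```

The first rule warns when no USER instruction exists at all; the second stops the evaluation outright when the effective user resolves to root.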

4.2 Use trusted base images for containers (Not Scored)

Ensure that container images come from trusted sources. Official repositories are Docker images curated and optimized by the Docker community or the vendor. As an organizational best practice, setting up a trusted Docker registry that your developers push to and pull from is a secure approach. Configuration and use of Docker Content Trust with Notary helps achieve this.

Anchore helps with this when built into a secure CI pipeline. For example, once an image has been built, it is scanned and analyzed by Anchore; if it passes the Anchore policy evaluation, it is safe to push to a designated trusted Docker registry. If the image does not pass the Anchore checks, it does not get pushed to a registry. Anchore policies can also be set up to make sure base images come from trusted registries.
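A base-image restriction can be sketched with a dockerfile-gate rule like the following; the registry and image names are hypothetical examples, and the parameter names should be verified against your Anchore version:

```json
{
  "gate": "dockerfile",
  "trigger": "instruction",
  "action": "stop",
  "params": [
    { "name": "instruction", "value": "FROM" },
    { "name": "check", "value": "not_in" },
    { "name": "value", "value": "registry.example.com/base/centos:7,registry.example.com/base/ubuntu:18.04" }
  ]
}
```

Any image whose FROM line falls outside the approved list fails the evaluation before it can reach the trusted registry.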

4.3 Do not install unnecessary packages in the container (Not Scored)

It is generally best practice not to install anything outside the usage scope of the container. Installing additional software packages that are not utilized increases the attack surface of the container.

Anchore policies can be set to look for only an approved list of software packages, or to check the FROM instruction for a slimmed-down base image. By using minimal base images such as Alpine, not only is the size of the image greatly decreased, but the attack surface of the container is decreased as well.

4.4 Scan and rebuild the images to include security patches (Not Scored)

Images should be scanned frequently. If vulnerabilities are discovered within images, they should be patched/fixed, rebuilt, and pushed to the registry for instantiation.

Anchore scans can be conducted as part of a normal CI pipeline, which ensures the frequency of scans stays in line with image builds. Anchore vulnerability feeds are continuously updated as new vulnerabilities are made public. By watching image repositories and tags within Anchore, webhook notifications can be configured to alert the appropriate teams when new vulnerabilities impact a watched image or tag.
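The receiving end of such a webhook can be sketched in a few lines. The payload shape below (subscription_type / subscription_key) is a simplified, hypothetical rendering of an Anchore vuln_update notification, and the watched-tag list is an illustration; consult your Anchore version's notification schema for the real fields:

```python
# Sketch of a webhook handler's rebuild decision, assuming a simplified
# notification payload. The real Anchore notification schema carries more
# fields; treat this shape as a hypothetical illustration.

WATCHED_TAGS = {
    "docker.io/example/app-centos:latest",   # hypothetical watched tags
    "docker.io/example/app-ubuntu:latest",
}

def tags_to_rebuild(notification: dict) -> list:
    """Return the watched tags that a vuln_update notification affects."""
    if notification.get("subscription_type") != "vuln_update":
        return []
    tag = notification.get("subscription_key", "")
    return [tag] if tag in WATCHED_TAGS else []

sample = {
    "subscription_type": "vuln_update",
    "subscription_key": "docker.io/example/app-centos:latest",
}
print(tags_to_rebuild(sample))   # the sample tag is watched, so it is returned
```

From there, the handler would trigger a rebuild job for each returned tag, closing the loop between vulnerability publication and a patched image.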

Anchore policy checks during the CI pipeline can be set up to stop container images with vulnerable software packages from ever reaching a trusted registry.

4.5 Enable Content trust for Docker (Scored)

Enable content trust for Docker and use digital signatures with a tool like Notary to ensure that only trusted Docker images can be pushed to a registry.

While this is not directly enforceable by Anchore, setting up Anchore policy checks within a CI pipeline to only sign images that have passed an evaluation is part of a secure CI best practice.

4.6 Add HEALTHCHECK instruction to the container image (Scored)

Add the HEALTHCHECK instruction to your Dockerfiles. This ensures the Docker engine will periodically check the running container against that instruction. Based on the output of the health check, Docker can stop a non-working container and instantiate a new one.

Anchore policy checks can be configured to ensure the HEALTHCHECK instruction is present within a Dockerfile.

4.7 Do not use update instructions alone in the Dockerfile (Not Scored)

Make sure not to use an update instruction alone or on a single line in a Dockerfile. Doing so caches the update layer, which could prevent a fresh update when the Docker image is built again.

Anchore policy checks can be configured to look for regular expressions matching an update instruction alone or on a single line, and to issue a warning notification when one is found.

4.8 Remove setuid and setgid permissions in the images (Not Scored)

Remove setuid and setgid permissions in the images to prevent privilege escalation attacks in the containers.

Anchore policy checks can be set to only allow setuid and setgid permissions on executables that need them. These permissions can be removed at build time by explicitly stating the following in the Dockerfile:

RUN find / -perm +6000 -type f -exec chmod a-s {} \; || true

4.9 Use COPY instead of ADD in Dockerfile (Not Scored)

Use the COPY instruction instead of the ADD instruction in Dockerfiles.

Anchore policy checks can be set up to warn when the ADD instruction is present in a Dockerfile.

4.10 Do not store secrets in Dockerfiles (Not Scored)

Do not store secrets in Dockerfiles.

Anchore policy checks can be configured to look for secrets (AWS keys, API keys, or other regular expressions) that may be present within an image.
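A secrets rule can be sketched against the secret_scans gate, which ships with named regexes for common credential formats; the trigger and parameter names below should be verified against your Anchore version:

```json
{
  "gate": "secret_scans",
  "trigger": "content_regex_checks",
  "action": "stop",
  "params": [
    { "name": "content_regex_name", "value": "AWS_ACCESS_KEY" }
  ]
}
```

Additional rules with other named regexes (for example, AWS secret keys or generic API keys) can be added alongside this one to broaden coverage.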

4.11 Install verified packages only (Not Scored)

Verify the authenticity of packages before installing them in the image.

Since Anchore can inspect the Dockerfile, policy checks can be configured to only permit approved packages to be installed during a Docker build.

5. Container Runtime

Although Anchore focuses mainly on pre-runtime, there are countermeasures that can be taken during the build stage, prior to instantiation, to help mitigate container runtime threats.

5.6 Do not run ssh within containers (Scored)

An SSH server should not be running within the container.

Anchore policies can be configured to check for exposed port 22.
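As a sketch, the dockerfile gate's exposed-ports trigger can express this; verify the parameter names against your Anchore version:

```json
{
  "gate": "dockerfile",
  "trigger": "exposed_ports",
  "action": "stop",
  "params": [
    { "name": "ports", "value": "22" },
    { "name": "type", "value": "blacklist" }
  ]
}
```

Switching the type to a whitelist of the ports your application actually needs extends the same rule to cover sections 5.7 and 5.8 below.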

5.7 Do not map privileged ports within containers (Scored)

TCP/IP port numbers below 1024 are considered privileged ports. Normal users and processes are not allowed to use them, for various security reasons.

Anchore policies can be configured to check for these exposed ports.

5.8 Only open needed ports on container (Scored)

Dockerfiles for container images should only define the ports needed for the container’s usage.

Anchore policies can be configured to check that only the needed ports are exposed.

Conclusion

The findings above outline which sections of the CIS Docker Benchmark can be achieved with Anchore and Anchore policies. It is highly recommended that other tools be used in combination with a secure CI image pipeline in order to accomplish a more complete CIS Docker Benchmark score.

One of the easiest ways to get started with achieving the Docker CIS Benchmark is to use the Anchore Policy Bundle below:

Anchore Policy for Docker CIS

Get started with the Anchore Policy for Docker CIS Benchmark on the Anchore Policy Hub.