Driving Open Source Container Security Forward

A little over seven months ago we announced the open source Anchore Engine project, and since then we have seen hundreds of organizations deploy Anchore Engine to add security and compliance to their container environments.

Most organizations build their container infrastructure with open source solutions:

  • Linux for the container host
  • Docker for container runtime
  • Jenkins for CI/CD
  • Kubernetes for orchestration
  • Prometheus for monitoring

When Anchore was formed there was an obvious gap in open source container security, and our goal was to fill that gap with a best-of-breed container scanning solution that added not just reporting but policy-based compliance. At the same time as we were working on Anchore, CoreOS released the Clair project, which provided an open source vulnerability scanner. We are big fans of the work CoreOS has done in the container community, so we looked into that project but saw a number of gaps. First, its focus was reporting on operating system CVEs (vulnerabilities). While CVE scanning is an important first step, it is just the tip of the iceberg: a container security and compliance tool should be looking at policies that cover licensing, secrets, configuration, etc. The second challenge we saw was that Clair was focused more on the registry use case, which, given Clair's use in the CoreOS Quay registry, made perfect sense. So we built a series of tools to address container scanning and compliance from the ground up. Since then we have been glad to see more open source container security solutions come to market, such as Sysdig’s Falco runtime security project.

In building the Anchore Engine our philosophy has been to keep the core engine open source and feature-complete while providing value-added services on top of the engine – for example, a user interface in addition to the API and CLI, and added enterprise integrations. A user should be able to secure their CI/CD pipeline with our open source engine without requiring a commercial product and without sharing their container and vulnerability data with third parties – everything should work on-premises for free. Of course, we are happy to sell you an enterprise offering on top of the open source solution, and if you are ever not satisfied with our enterprise offering you should be able to remove the added services and roll back to the fully functional open source engine.

Roughly every month we have released an update to the open source project, and this week we are proud to announce the 0.2.0 release, which adds a number of interesting new features including Prometheus integration, improved Debian vulnerability reporting and a number of scalability-related enhancements that allow our users to handle thousands of builds a day.

Prometheus Integration

Prometheus is an open source event monitoring system with a time series database inspired by Google’s internal monitoring tools (Borgmon). Prometheus has rapidly become the de facto standard for monitoring and metrics in cloud-native environments.
Anchore Engine 0.2.0 adds support for exposing metrics for consumption by Prometheus, enabling metrics collection, reporting and monitoring of the Anchore Engine itself.
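
As a quick illustration, here is a minimal sketch of how you might pull the engine's metrics by hand before wiring up a Prometheus scrape job. The port, path and credentials shown are assumptions based on a typical default deployment, so adjust them for your own setup.

```python
# Minimal sketch: fetch Prometheus-format metrics from an Anchore Engine service.
# The URL, /metrics path and admin credentials below are assumptions based on a
# typical default deployment -- adjust for your environment.
import requests

ENGINE_METRICS_URL = "http://localhost:8228/metrics"  # assumed API service port and path
AUTH = ("admin", "foobar")                             # assumed default credentials

resp = requests.get(ENGINE_METRICS_URL, auth=AUTH, timeout=10)
resp.raise_for_status()

# Prometheus exposition format is plain text; print the first few metric lines.
for line in resp.text.splitlines()[:20]:
    print(line)
```

In a real deployment you would not poll this endpoint yourself; you would add the engine services as scrape targets in your Prometheus configuration and let Prometheus collect and store the metrics.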

Improved Debian CVE Reporting

The Anchore Engine and the Anchore Feed service have been extended to track the Debian-specific no-DSA flag, which indicates that while the upstream package version is vulnerable to a given CVE, the Debian build of the package is not, either because of build options or the build environment. In previous versions of the Anchore Engine, whitelists were used to filter these records from policy output; with Anchore Engine 0.2.0 these CVEs are not shown in the default CVE report or within the policy output.

Scalability Improvements

Anchore Engine 0.2.0 includes a number of features to simplify scale-out deployments of Anchore Engine on Kubernetes, Amazon ECS and other large-scale environments. Many features have been added to allow Anchore Engine to support thousands of builds a day and hundreds of thousands of images stored within the Anchore database:

  • Support for running multiple core services (catalog, API, queue and policy engine). Previous releases had supported the scale-out of analyzer workers only.
  • Support for storing analysis and other data in external storage systems such as Amazon S3, Swift and clustered file systems, in addition to the native database support (see the configuration sketch below).
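
To give a feel for what pointing the engine at external object storage involves, here is an illustrative sketch of such a configuration stanza, expressed as a Python dictionary that could be dumped to YAML. The key names, driver name and bucket values are assumptions made for illustration only; consult the Anchore Engine documentation for the authoritative configuration schema.

```python
# Illustrative sketch only: the key names and values below are assumptions about
# what an external object-storage stanza in the engine's configuration might look
# like; check the Anchore Engine documentation for the real schema.
import yaml  # pip install pyyaml

archive_config = {
    "archive": {
        "compression": {"enabled": True},
        "storage_driver": {
            "name": "s3",  # hypothetical driver name; other backends such as Swift may be available
            "config": {
                "bucket": "anchore-archive",   # hypothetical bucket name
                "region": "us-east-1",
                "access_key": "<AWS_ACCESS_KEY>",
                "secret_key": "<AWS_SECRET_KEY>",
            },
        },
    }
}

print(yaml.safe_dump(archive_config, default_flow_style=False))
```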

You can read more about the changes in the online documentation or in the changelog on GitHub.

We are currently working on a number of exciting new features for delivery over the next couple of months including:

  • Support for matching NVD vulnerabilities in software libraries including Java, Python, Ruby and Node.js.
  • Support for scanning nested Java archives, e.g. JAR files stored in WAR files stored in EAR files.
  • Layer reporting – exposing image layer data in the Anchore CLI and API.
  • Layer-based policies – allowing policies such as “only allow images built on selected base images.”

No Excuses, Start Scanning

One of the most popular features of the Anchore Cloud service is the ability to deep dive into any container image and inspect its contents to see what files, packages and software libraries make up the image. Before I import any public image into my development environment, I check the list of security vulnerabilities in the image, if any, and the policy status (does it fail basic compliance checks?), and then I dig into the contents tab to see what operating system packages and libraries are in the image. I am still surprised at just how large many images are.


This content view allows you to dig into every artifact in the image – what operating system packages and what Node.js NPM modules are present, including details such as their licenses and versions, as well as how they were pulled in – for example, multiple copies of the same module pulled in as dependencies of other modules.

While this level of inspection is useful before you pull in a new public Docker image, it is even more useful when applied to your own internal images.

When most people talk about container security and compliance the focus is on security vulnerabilities: “Do I have any critical or high vulnerabilities in my image?” As we have covered previously, CVEs are just the tip of the iceberg; organizations should be looking at policies that cover licensing, secrets, configuration, etc. Many organizations that we talk to see the value in policy-based compliance and are planning to implement container scanning as part of their CI/CD workflows, but are not ready to make the investment required to add checkpoints and gates within their build or deployment infrastructure.

When the Equifax news broke about their massive breach caused by an unpatched Apache Struts vulnerability, I think every CIO in every organization was on the phone with their operations team and developers to ask if they had a vulnerable version of Apache Struts. While it’s simple to find out what version of a library you are running today on your servers, do you know what was running on your production cluster last week, last month, last year?

Even if you do not have the time or resources to invest in securing your CI/CD pipeline today with policies, reports and compliance checks, it will take less than 10 minutes to download Anchore’s open source Engine, point it at your container registry and start scanning. The Anchore Engine will discover new tags and images pushed to your repos, download and analyze them, and maintain a history of tags and images over time. When you are ready to start putting policies in place, reporting on vulnerabilities, or gating deployments based on compliance checks, you will already have data at hand to help you track trends, compare images and run reports on changes over time. We find many organizations just using this data to produce detailed build summaries or changelogs.
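
If you prefer to script that first scan rather than drive it from the CLI, the sketch below shows roughly what the flow looks like against the engine's REST API: submit an image tag, wait for analysis to finish, then fetch the OS vulnerability report. The base URL, endpoint paths, default credentials and response field names are assumptions about a typical deployment, so treat this as a starting point rather than a reference.

```python
# Rough sketch of scripting a first scan against a local Anchore Engine.
# The base URL, endpoint paths, admin credentials and response fields are
# assumptions about a typical default deployment; check the API docs for specifics.
import time
import requests

BASE = "http://localhost:8228/v1"   # assumed API service endpoint
AUTH = ("admin", "foobar")          # assumed default credentials
TAG = "docker.io/library/debian:latest"

# Ask the engine to pull and analyze the image.
resp = requests.post(f"{BASE}/images", json={"tag": TAG}, auth=AUTH, timeout=30)
resp.raise_for_status()
digest = resp.json()[0]["imageDigest"]   # assumed response shape

# Poll until analysis completes (or give up after ~10 minutes).
status = "not_analyzed"
for _ in range(60):
    record = requests.get(f"{BASE}/images/{digest}", auth=AUTH, timeout=30).json()[0]
    status = record["analysis_status"]
    if status in ("analyzed", "analysis_failed"):
        break
    time.sleep(10)

# Fetch and print the operating system vulnerability report.
if status == "analyzed":
    vulns = requests.get(f"{BASE}/images/{digest}/vuln/os", auth=AUTH, timeout=30).json()
    for v in vulns.get("vulnerabilities", []):
        print(v.get("vuln"), v.get("package"), v.get("severity"))
```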

Get started for free today, either with Anchore’s cloud service or by downloading and running the open source Anchore Engine on-premises.

Welcome to the Container Chronicle

Things change rapidly in the fast, fluid world of containers, and sometimes it’s hard to keep up. So we’re starting a new newsletter called The Container Chronicle to help you stay on top of everything newsworthy, from cloud to Kubernetes, Docker to DevOps, and beyond.

We will periodically be sending out The Container Chronicle, with the first edition shipping out this morning, but in case you aren’t subscribed yet we’ve included it below so you don’t miss out. If you’d like to subscribe and stay on top of important industry news, fill out the form at the bottom of the page and we will make sure it hits your inbox!

March ended on a high with the release of Kubernetes 1.10, but April is already shaping up to be a busy month in the world of containers, and we are only halfway through.

Docker + Java 10 = ❤️

The month began with the general availability of Java 10, which includes a number of interesting new features, the most significant of which for container users is the ability of the Java runtime to recognize memory and CPU constraints applied to the container using cgroups. Previous versions of the Java runtime were not aware of resource constraints applied to the container in which they were running, requiring manual configuration of JVM parameters. With Java 10, memory and CPU limits are automatically detected and accounted for by the JVM’s resource management.

The folks at Docker produced a great blog covering the details:

Improved Docker Container Integration with Java 10

OCI Locks in a Distribution Specification

The Open Container Initiative announced a new project to standardize the container distribution specification. The Docker Registry API specification is already the de facto standard for distributing container images. Any time you push or pull an image, your Docker (or compatible) client is using the Docker registry API to interact with the registry.

All the major registry providers already support this API, but the specification was controlled by a single vendor. While Docker has proven to be a good citizen in the open source community, having a single vendor dictate standards is not conducive to cross-vendor collaboration. As happened previously with the image and runtime specifications, Docker has now donated the specification to the Open Container Initiative (OCI), which has adopted it and will continue to drive it forward. The OCI includes industry leaders such as Amazon, Docker, Google, IBM, Microsoft and Red Hat. You can read more about the announcement at The New Stack.

Canary in the Kayenta

Google and Netflix announced the Kayenta project, which was jointly developed by the two companies and is now licensed as an Apache 2 project under the umbrella of the Spinnaker continuous delivery platform. Kayenta is an automated canary analysis tool. The idea behind canary analysis is that you push a new release of a service or program to a small number of users. Since only a few users get the new release, any problems are limited to a small subset of users and the release can easily be rolled back. If the release proves successful, the test audience can be expanded. Unlike the original canary in a coal mine, no animals are actually harmed during these test deployments.

You can read more about Kayenta on Google’s blog or on ZDNet.

Docker Embraces Kubernetes in Docker EE


Yesterday Docker announced the release of Docker Enterprise Edition 2.0, which continues to support Docker’s own Swarm orchestration system but also adds support for Kubernetes. Docker Inc. is not alone in shifting focus away from its own orchestration platform to Kubernetes; only a few short weeks ago we saw Mesosphere announce Kubernetes-as-a-service integrated with their DC/OS offering.

While Kubernetes clearly won the short-lived orchestration war, the real beneficiaries are the end users, who can now standardize on a single platform that can be deployed on public clouds, on-premises or even on a stack of Raspberry Pis. This standardization helps drive a rich ecosystem of vendors providing value-added solutions that can now focus on a single, open source platform.

Thanks for hanging with us in this first edition of The Container Chronicle. You’ll see us again soon (but not too soon) so keep an eye out for our next newsletter.