Future of Container Technology & Open Container Initiative

Open Container Initiative – August 23, 2016

The Open Container Initiative (OCI), an open source project for creating open industry standards around container formats and runtime, today announced that Anchore, ContainerShip, EasyStack and Replicated have joined The Linux Foundation and the Open Container Initiative.

Today’s enterprises demand portable, agile and interoperable developer and sysadmin tools. The OCI was launched with the express purpose of developing standards for the container format and runtime that will give everyone the ability to fully commit to container technologies today without worrying that their current choice of infrastructure, cloud provider or DevOps tool will lock them in. Their choices can instead be guided by choosing the best tools for the applications they are building.

“The rapid growth and interest in container technology over the past few years has led to the emergence of a new ecosystem of startups offering container-based solutions and tools,” said Chris Aniszczyk, Executive Director of the OCI. “We are very excited to welcome these new members as we work to develop standards that will aid container portability.”

The OCI currently has nearly 50 members. Anchore, ContainerShip, EasyStack and Replicated join existing members including Amazon Web Services, Apcera, Apprenda, AT&T, ClusterHQ, Cisco, CoreOS, Datera, Dell, Docker, EMC, Fujitsu Limited, Goldman Sachs, Google, Hewlett Packard Enterprise, Huawei, IBM, Infoblox, Intel, Joyent, Kismatic, Kyup, Mesosphere, Microsoft, Midokura, Nutanix, Odin, Oracle, Pivotal, Polyverse, Portworx, Rancher Labs, Red Hat, Resin.io, Scalock, Sysdig, SUSE, Twistlock, Twitter, Univa, Verizon Labs, VMware and Weaveworks.

Read the complete and original announcement on Open Container Initiative.

How are Containers Really Being Used?

Our friends at ContainerJournal and DevOps.com are running a survey to learn how you are using containers today and what your plans are for the future.

We’ve seen a number of surveys over the last couple of years and heard some incredible statistics on the growth of Docker usage and of containers in general. For example, we learned last week that Docker Hub had surpassed five billion pulls. The ContainerJournal survey digs deeper to uncover details about the whole stack that users are running.

For example: who do you get your container runtime from? Where do you store your images? How do you handle orchestration?

Some of the questions are especially interesting to the team here at Anchore as they cover how you create and maintain the images that you use. For example, do you pull application images straight from Docker Hub, do you just pull base operating system images and add your own application layers, or perhaps you build your own operating system images from scratch?

And no matter how you initially obtain your image, how do you ensure that it contains the right content, from the operating system at the lowest layer all the way up to the application tier? While it’s easy to build and pull images, maintaining those images is another matter; for example, how often are those images updated?

Please head over to ContainerJournal and fill out the survey by clicking the button below.

TNS Research: A Scan of the Container Vulnerability Scanner Landscape

Lawrence Hecht – The New Stack – August 5, 2016

Container registries and vulnerability scanners are often bundled together, but they are not the same thing. Code scanning may occur at multiple points in a container deployment workflow. Some scanners are bundled with existing solutions, while others are point solutions. Their differences can be measured by the data sources they use, what is being checked, and the actions that are automatically taken as the result of a scan.

Read the original and complete article at The New Stack.

Extending Anchore with Jenkins

Jenkins is one of the most popular Continuous Integration/Continuous Delivery platforms in production today. Jenkins has over a million active users, and according to last year’s CloudBees State of Jenkins survey, 95% of Jenkins users are already using Docker or plan to start using it within 12 months. A CI/CD build system is a very important part of any organization’s automation toolkit, and Anchore has some clear integration points with these tools. In this blog post, I’ll describe and illustrate a simple way to manually integrate Anchore’s open source container image validation engine into a Jenkins-based CI/CD environment. It’s worth noting that this is only one possible method of integration between Anchore and Jenkins, and a different approach may be more suitable for your environment. We’d love to hear from you if you find a new way to use Anchore in your CI/CD pipeline!

Anchore allows you to specify “gates”: checks that are performed on a container image before it moves to the next stage of development. Gates can check for required or disallowed packages, properties of the image’s Dockerfile, the presence of known vulnerabilities, and so on. The gate subsystem is easily extended to add your own conditions, such as application configuration or versioning requirements.

Gates have been designed to run as part of an automated CI/CD pipeline. A popular workflow is to have an organization’s CI/CD pipeline respond to newly-committed Dockerfiles by building images, running tests, and so on. A good place to run Anchore’s gates is between the build of the image and the next phase, whether that’s a battery of tests or the promotion of an application to the next stage of production. The workflow looks like this:

  1. Developer commits an updated Dockerfile to Git
  2. A Jenkins job is triggered based on that commit
  3. A new container image is built as part of the Jenkins job
  4. Anchore is invoked to analyze the image
  5. The status of that image’s gates is checked

At this point, the CI pipeline can make a decision on whether to promote this newly-created and analyzed image to the next stage of development. Gates have three possible statuses: GO, WARN, and STOP. These are fairly self-explanatory: an image whose gates all return GO should be promoted to the next stage. Images with any WARN statuses may need further inspection but may be allowed to continue. An image with a gate that returns a STOP status should not move forward in the pipeline.
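In a shell-driven pipeline, that decision can be scripted against the gate command’s exit status. Here’s a minimal sketch, assuming that anchore gate exits non-zero when any gate returns STOP (the image name is a placeholder):

# Promote only if no gate returned STOP
if anchore gate --image my-app-image; then
    echo "All gates returned GO or WARN; promoting image"
else
    echo "At least one gate returned STOP; halting the pipeline"
    exit 1
fi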

Let’s walk through a simplified example. For clarity, I’ve got my Docker, Anchore, and Jenkins instances all on the same virtual machine; production configurations will likely be different. (I’m running Jenkins 2.7.1, Docker 1.11.2, and the latest version of Anchore installed via pip.)
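If you’d like to follow along, Anchore itself can be installed with pip, assuming Python and pip are already available on the machine:

pip install anchore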

The first thing we need to do is create a build job. This is not intended to be a general-purpose Jenkins tutorial, so drop by the Jenkins documentation if you need some help. Our Jenkins job will poll a GitHub repository containing our very simple Dockerfile. A minimal sketch of such a Dockerfile (the base image and installed package here are illustrative) looks like this:
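# Illustrative only: a trivial image built on a baseline OS image
FROM centos:7
RUN yum update -y
RUN yum install -y wget
CMD ["/bin/bash"]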

The relevant section of our Jenkins build job is a shell build step containing the following commands:
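docker build -t anchore-test .
anchore analyze --image anchore-test --dockerfile Dockerfile
anchore gate --image anchore-test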

These commands do the following:

docker build -t anchore-test .

This command instructs Docker to build a new image from the Dockerfile in the directory of the cloned Git repository; the trailing dot tells Docker to use that directory as the build context. The image’s name is “anchore-test”.

anchore analyze --image anchore-test --dockerfile Dockerfile

This command calls Anchore to analyze the newly-created image.

anchore gate --image anchore-test

This command runs through the Anchore “gates” to determine if the newly-generated image is suitable for use in our environment.

Let’s look at the output from this build:

Whoops! Our build failed. It looks like we triggered a couple of gates here. The first one, “PKGDIFF”, is reporting an action of “STOP”. If you look at the “CheckOutput” column, it says: “Package version in container is different from baseline for pkg - tzdata”. This means that somewhere along the way the package version of tzdata changed, probably because our Dockerfile does a “yum update -y”. Let’s try removing that command; maybe we should instead stick to the baseline image that our container team has provided.
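With that line removed, our illustrative Dockerfile shrinks to:

# Illustrative only: stays on the baseline package versions
FROM centos:7
RUN yum install -y wget
CMD ["/bin/bash"]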

So let’s make that edit in the Dockerfile, commit the change, and re-run the build. Here’s the output from the new build:

Success! We’ve passed all of the gates. You can change which gates apply to which images and how they are configured by running:

anchore gate --image anchore-test --editpolicy

(You’ll be dropped into the editor specified by the VISUAL or EDITOR environment variables, usually vim.)

Our policy currently looks like this:

DOCKERFILECHECK:NOTAG:STOP
DOCKERFILECHECK:SUDO:GO
DOCKERFILECHECK:EXPOSE:STOP:ALLOWEDPORTS=22
DOCKERFILECHECK:NOFROM:STOP
SUIDDIFF:SUIDFILEDEL:GO
SUIDDIFF:SUIDMODEDIFF:STOP
SUIDDIFF:SUIDFILEADD:STOP
PKGDIFF:PKGVERSIONDIFF:STOP
PKGDIFF:PKGADD:WARN
PKGDIFF:PKGDEL:WARN
ANCHORESEC:VULNHIGH:STOP
ANCHORESEC:VULNLOW:GO
ANCHORESEC:VULNCRITICAL:STOP
ANCHORESEC:VULNMEDIUM:WARN
ANCHORESEC:VULNUNKNOWN:GO

You can read all about gates and policies in our documentation; each line takes the form GATE:TRIGGER:ACTION, with optional parameters after a final colon (as in the ALLOWEDPORTS example above). Let’s try one more thing: let’s make the PKGDIFF checks non-fatal and re-enable our yum update command in the Dockerfile.

In the policy editor, we’ll change these lines:

PKGDIFF:PKGVERSIONDIFF:STOP
PKGDIFF:PKGADD:WARN

To this:

PKGDIFF:PKGVERSIONDIFF:GO
PKGDIFF:PKGADD:GO

Then save and exit. We’ll also edit the Dockerfile, re-add the “RUN yum update -y” line, and commit and push the change.
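For reference, the commit and push might look like this (the commit message and branch name are illustrative):

git add Dockerfile
git commit -m "Re-enable yum update now that the PKGDIFF gates are non-fatal"
git push origin master

Then let’s run the Jenkins job again and see what happens.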

Now you can see that although Anchore still detects an added package and a changed version, those gates are no longer fatal thanks to our reconfiguration, and the build completes successfully.

This is just a very simple example of what can be done with Anchore gates in a CI/CD environment. We are planning to implement a full Jenkins plugin for a more streamlined integration, so stay tuned for that. There are also more gates to explore, and you can extend the system to add your own. If you have questions, comments, or want to share how you’re using Anchore, let us know!