How are Containers Really Being Used?

Our friends at ContainerJournal and Devops.com are running a survey to learn how you are using containers today and your plans for the future.

We’ve seen a number of surveys over the last couple of years and heard some incredible statistics on the growth of Docker usage and of containers in general. For example, we learned last week that DockerHub had reached over 5 billion pulls. The ContainerJournal survey digs deeper to uncover details about the whole stack that users are running.

For example: who do you get your container runtime from? Where do you store your images? How do you handle orchestration?

Some of the questions are especially interesting to the team here at Anchore as they cover how you create and maintain the images that you use. For example, do you pull application images straight from Docker Hub, do you just pull base operating system images and add your own application layers, or perhaps you build your own operating system images from scratch?

And no matter how you initially obtain your image, how do you ensure that it contains the right content, starting from the lowest layer of the image with the operating system, all the way up to the application tier? While it’s easy to build and pull images, the maintenance of those images is another matter, e.g. how often are those images updated?

Please head over to ContainerJournal and fill out the survey by clicking the button below.

TNS Research: A Scan of the Container Vulnerability Scanner Landscape

Lawrence Hecht – The New Stack – August 5, 2016

Container registries and vulnerability scanners are often bundled together, but they are not the same thing. Code scanning may occur at multiple points in a container deployment workflow. Some scanners are bundled with existing solutions, while others are point solutions. Their differences can be measured by the data sources they use, what is being checked, and the actions automatically taken as the result of a scan.

Read the original and complete article at The New Stack.

Extending Anchore with Jenkins

Jenkins is one of the most popular Continuous Integration/Continuous Delivery platforms in production today. Jenkins has over a million active users, and according to last year’s CloudBees State of Jenkins survey, 95% of Jenkins users are already using or plan to start using Docker within 12 months. A CI/CD build system is a very important part of any organization’s automation toolkit, and Anchore has some clear integration points with these tools. In this blog post, I’ll describe and illustrate a simple way to manually integrate Anchore’s open source container image validation engine into a Jenkins-based CI/CD environment. It’s worth noting that this is only one possible method of integration between Anchore and Jenkins, and a different approach may be more suitable for your environment. We’d love to hear from you if you find a new way to use Anchore in your CI/CD pipeline!

Anchore allows you to specify “gates” — checks that are performed on a container image before it moves to the next stage of development. These gates cover things like required or disallowed packages, properties of the image’s Dockerfile, the presence of known vulnerabilities, and so on. The gate subsystem is easily extended to add your own conditions: perhaps application configuration, versioning requirements, etc.

Gates have been designed to run as part of an automated CI/CD pipeline. A popular workflow is to have an organization’s CI/CD pipeline respond to newly-committed Dockerfiles, building images, running tests, and so on. A good place to run Anchore’s Gates would be in between the build of the image and the next phase: whether it’s a battery of tests, or maybe a promotion of an application to the next stage of production. The workflow looks like this:

  1. Developer commits an updated Dockerfile to Git
  2. A Jenkins job is triggered based on that commit
  3. A new container image is built as part of the Jenkins job
  4. Anchore is invoked to analyze the image
  5. The status of that image’s gates is checked

At this point, the CI pipeline can make a decision on whether to allow this newly-created and analyzed image to the next stage of development. Gates have three possible statuses: GO, WARN, STOP. They are fairly self-explanatory: an image whose gates all pass GO should be promoted to the next stage. Images with any WARN statuses may need further inspection but may be allowed to continue. An image with a gate that returns a STOP status should not move forward in the pipeline.
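The anchore gate command exits non-zero when the evaluation ends in STOP (a behavior worth verifying against the version you have installed), so the pipeline decision can be a plain shell check; a minimal sketch:

anchore gate --image anchore-test
if [ $? -ne 0 ]; then
    echo "Gate evaluation returned STOP, failing the build"
    exit 1
fi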

Let’s walk through a simplified example. For clarity, I’ve got my Docker, Anchore, and Jenkins instances all on the same virtual machine. Production configurations will likely be different. (I’m running Jenkins 2.7.1, Docker 1.11.2, and the latest version of Anchore from PIP.)

The first thing we need to do is create a Build Job. This is not intended to be a general-purpose Jenkins tutorial, so drop by the Jenkins Documentation if you need some help. Our Jenkins job will poll a GitHub repository containing our very simple Dockerfile, which looks like this:
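A Dockerfile along these lines fits the walkthrough below (a sketch; it assumes a CentOS base image provided by your container team):

FROM centos:7
RUN yum update -y
CMD ["/bin/bash"]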

The relevant section of our Jenkins build job looks like this:
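In the job’s shell build step we run three commands:

docker build -t anchore-test .
anchore analyze --image anchore-test --dockerfile Dockerfile
anchore gate --image anchore-test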

These commands do the following:

docker build -t anchore-test .

This command instructs Docker to build a new image based on the Dockerfile in the directory of the cloned Git repository. The image’s name is “anchore-test”.

anchore analyze --image anchore-test --dockerfile Dockerfile

This command calls Anchore to analyze the newly-created image.

anchore gate --image anchore-test

This command runs through the Anchore “gates” to determine if the newly-generated image is suitable for use in our environment.

Let’s look at the output from this build:
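In rough form (reconstructed from the description below), the gate rows that matter were:

PKGDIFF / PKGVERSIONDIFF / STOP : Package version in container is different from baseline for pkg - tzdata
FINAL / FINAL / STOP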

Whoops! Our build failed. Looks like we triggered a couple of gates here. The first one, “PKGDIFF”, is reporting an action of “STOP”. If you look at the “CheckOutput” column, it says: “Package version in container is different from baseline for pkg – tzdata”. This means that somewhere along the way the version of the tzdata package changed, probably because our Dockerfile does a “yum update -y”. Let’s try removing that command; maybe we should instead stick to the baseline image that our container team has provided.

So let’s edit the Dockerfile, remove that line, commit the change, and re-run the build. Here’s the output from the new build:

Success! We’ve passed all of the gates. You can change which gates apply to which images and how they are configured by running:

anchore gate --image anchore-test --editpolicy

(You’ll be dropped into the editor specified by the VISUAL or EDITOR environment variables, usually vim.)

Our policy currently looks like this:

DOCKERFILECHECK:NOTAG:STOP
DOCKERFILECHECK:SUDO:GO
DOCKERFILECHECK:EXPOSE:STOP:ALLOWEDPORTS=22
DOCKERFILECHECK:NOFROM:STOP
SUIDDIFF:SUIDFILEDEL:GO
SUIDDIFF:SUIDMODEDIFF:STOP
SUIDDIFF:SUIDFILEADD:STOP
PKGDIFF:PKGVERSIONDIFF:STOP
PKGDIFF:PKGADD:WARN
PKGDIFF:PKGDEL:WARN
ANCHORESEC:VULNHIGH:STOP
ANCHORESEC:VULNLOW:GO
ANCHORESEC:VULNCRITICAL:STOP
ANCHORESEC:VULNMEDIUM:WARN
ANCHORESEC:VULNUNKNOWN:GO

You can read all about gates and policies in our documentation. Let’s try one more thing: let’s relax the “PKGDIFF:PKGVERSIONDIFF” and “PKGDIFF:PKGADD” policies to “GO”, and re-enable our yum update command in the Dockerfile.

In the policy editor, we’ll change these lines:

PKGDIFF:PKGVERSIONDIFF:STOP
PKGDIFF:PKGADD:WARN

To this:

PKGDIFF:PKGVERSIONDIFF:GO
PKGDIFF:PKGADD:GO

And save and exit. We’ll also edit the Dockerfile, re-add the “RUN yum update -y” line, and commit and push the change. Then let’s run the Jenkins job again and see what happens.

Now you can see that although Anchore still detects an added package and a changed version, because we’ve reconfigured those gates, it’s not a fatal error and the build completes successfully.

This is just a very simple example of what can be done with Anchore gates in a CI/CD environment. We are planning on implementing a full Jenkins plugin for a more streamlined integration, so stay tuned for that. There are also more gates to explore, and you can extend the system to add your own. If you have questions, comments, or want to share how you’re using Anchore, let us know!

Signed, Sealed, Deployed

Red Hat recently blogged about their progress in adding support for container image signing. A particularly interesting and most welcome aspect of the design is the way that the binary signature file can be decoupled from the registry and distributed separately. The blog makes interesting reading and I’d strongly recommend reading through it; I’m sure you’ll appreciate the design. And of course, the code is available online.

Red Hat is, along with the other Linux distributors, well versed in the practice of signing software components to allow end-users to verify that they are running authentic code, and is in the process of extending this support to container images. The approach described is different from the one taken previously by Docker Inc. Rather than comparing the two approaches, however, I wanted to talk at a high level about the benefits of image signing, along with some commentary about trust.

In the physical world, we are all used to using our signature to confirm our identity.

Probably the most common example is when we are signing a paper check or using an electronic signature pad during a sales transaction. How many times have you signed your name so quickly that you do not even recognize the signature yourself? How many times in recent memory has a cashier or server compared the signature written with the one on the back of your credit card? In my experience that check happens maybe one time in ten, and even then it is little more than a token gesture; the two signatures may not even have matched.

That leads me to the first important observation: a signature mechanism is only useful if it is checked. Obviously, when vendors such as Docker Inc, Red Hat, and others implement an image signing and validation system, the enforcement will be built into all layers, so that, for example, a Red Hat delivered image will be validated by a Red Hat provided Docker runtime to ensure it’s signed by a valid source.

However, it’s likely that the images you deploy in your enterprise won’t just be images downloaded from a registry; they will be images built on top of those images, or perhaps built from scratch. So for image signing to provide the required level of security, all images created within your enterprise should also be signed, and those signatures should be validated before an image is deployed. Some early users of image signing that we have talked to have used it less as a way of tracking the provenance of images and more as a method of showing that an image has not been modified between leaving the CI/CD pipeline and being deployed on their container host.

Before we dig into the topic of image signing it’s worth discussing what a signature actually represents.

The most common example of signatures in our day-to-day life is in our web browsers, where we look for the little green padlock in the address bar. The padlock indicates that the connection from our browser to the web server is encrypted, but most importantly it confirms that you are talking to the expected website.

The use of TLS/SSL certificates allows your browser to validate that when you connect to https://www.example.com the content displayed actually came from example.com.

So in this example, the signature was used to confirm the source of the (web) content. Over many years we have been trained NOT to type our credit card details into a site that is NOT delivered through HTTPS.

But that does not mean that you would trust your credit card details to any site that uses HTTPS.

The same principle applies to the use of image signatures. If you download an image signed by Red Hat, Docker Inc, or any other vendor, you can be assured that the image did come from that vendor. The level of confidence you have in the contents of the image is based on the level of trust you already have in the vendor. For example, you would likely not run an image signed by l33thackerz, even though it may include a valid signature.
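The mechanics are the same as for any detached signature. With GPG, for example, you can sign an artifact and later verify who produced it, but the verification says nothing about whether the content is trustworthy (file names here are just examples):

gpg --armor --detach-sign manifest.json        # produces manifest.json.asc
gpg --verify manifest.json.asc manifest.json   # confirms the signer, not the content quality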

As enterprises move to a DevOps model with containers we’re seeing a new software supply chain, which often begins with a base image pulled from DockerHub or a vendor registry.

This base image may be modified by the operations team to include extra packages or to customize specific configuration files. The resulting image is then published in the local registry to be used by the development team as the base image for their application container. In many organizations, we are starting to see other participants in this supply chain, for example, a middleware team may publish an image containing an application server that is in turn used by an application team.

For the promise of image signing to be fulfilled, at each stage of this supply chain each team must sign the image, so that the ‘chain of custody’ can be validated throughout the software development lifecycle. As we covered previously, those signatures only serve to prove the source of an image; at any point in the supply chain, from the original vendor of the base image all the way through the development process, the images may be modified. At any step a mistake may be made: an outdated package that contains known bugs or vulnerabilities may be used, an insecure option may be set in an application’s configuration file, or secrets such as passwords or API keys may be stored in the image.

Signing an image will not prevent insecure or otherwise non-compliant images from being deployed; however, as part of a post-mortem, it will provide a way of tracking down when the vulnerability or bug was introduced.

During each stage of the supply chain, detailed checks should be performed on the image to ensure that the image complies with your site-specific policies.

These policies could cover security, starting with the ubiquitous CVE scan but then going further to analyze the configuration of key security components. For example, you could have the latest version of the Apache web server but have configured the wrong set of TLS cipher suites, leading to insecure communication. In addition to security, your policies could cover application-specific configurations to comply with best practices or to enable consistency and predictability.
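As a concrete illustration, a policy check on an Apache httpd configuration could flag the first pair of directives and pass the second (values are examples only):

SSLProtocol all
SSLCipherSuite ALL:RC4                 # permits the weak RC4 ciphers

SSLProtocol all -SSLv2 -SSLv3
SSLCipherSuite HIGH:!aNULL:!RC4        # restricts connections to stronger suites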

Anchore’s goal is to provide a toolset that allows developers, operations, and security teams to maintain full visibility of the ‘chain of custody’ as containers move through the development lifecycle while providing the visibility, predictability, and control needed for production deployment.

With Anchore’s tools, the analysis and policy evaluation could be run during each stage of the supply chain allowing the signatures to attest to both the source of the image and also the compliance of the image’s contents.

In summary, we believe that image signing is an important part of the security and integrity of your software supply chain; however, signatures alone will not ensure the integrity of your systems.

Webinar – Introduction to the Anchore Project

Today we delivered Anchore’s first webinar, where we gave an introduction to Anchore’s open source project and discussed how we can democratize certification through the use of open source.

A primary concern for enterprises adopting Docker is security, most notably the governance and compliance of the containers that they are deploying. In the past, as we moved from physical server deployments to virtual machines, we saw similar issues and spoke about “VM sprawl”, but containers are set to exponentially outgrow VM deployments. It’s almost too easy to pull an application image from a public registry and run it; within seconds you can deploy an application in production without even knowing what’s under the covers.

Organizations want to have confidence in their deployments, to know that when they deploy an application it will work, it will be secure, it can be maintained and it will be performant.

In the past, this confidence came through certification. Commercial Linux vendors such as Red Hat, SuSE and others set the standard and worked with hardware and software vendors on certification programs to give a level of assurance to end-users that the operating system would run reliably on their hardware, and also to offer insurance in the form of enterprise-grade commercial support if they encountered issues.

Today the problem is more complex and there can no longer be just a single certification. For example, the requirements of a financial services company are different from the requirements of a healthcare company handling medical records and these are different from the needs of a federal institution and so on. Even the needs of individual departments within any given organization may be different.

What is needed now is the ability for IT operations and security to be able to define their own certification requirements which may differ even from application to application, allowing them to define these policies and evaluate them before applications are deployed into production.

What we are talking about is the democratization of certification.

Rather than having certification in the hands of a small number of vendors or standards bodies, we want to allow organizations to define what certification means to them.

Anchore’s goal is to provide a toolset that allows developers, operations, and security teams to maintain full visibility of the ‘chain of custody’ as containers move through the development lifecycle while providing the visibility, predictability, and control needed for production deployment.

Please tune into the webinar, where we go a level deeper to discuss the challenges around container certification, explain how an open source, democratized approach can help end-users, and introduce our open source tooling.

Extending Anchore with Lynis

Add Lynis Scanning to Anchore Image Analysis

Note: You will need the latest Anchore code from GitHub (https://github.com/anchore/anchore) to follow this procedure.

In this post, we focus on solving a common problem faced when building out a container-based deployment environment: taking an existing tool or practice for deciding whether application code is ready to be deployed, and applying it to the steady stream of container images that flow in from developers on their way to production. With Anchore, we show that many existing tools and techniques can be applied to container images easily, in a way that leads to a ‘fail fast’ property where things are checked early in the CI/CD pipeline (pre-execution).

To illustrate this idea, we walk through the process of adding a new analyzer/gate to Anchore. Specifically, I would like to scan every container image with the open source ‘Lynis’ Linux security auditing utility, and then be able to use the Anchore policy system to make decisions based on the result of the Lynis scan. Once complete, every container image analyzed by Anchore will include a Lynis report, and every analyzed image will be subject to the Lynis gate checker.

The process is broken down into two parts: first, we write an ‘analyzer’ that is responsible for running the Lynis scan whenever any container is analyzed with Anchore, and second, we write a ‘gate’ which takes as input the result of the Lynis scan and emits triggers based on what it finds. From there, we can use the normal Anchore policy strings to make STOP/WARN/GO suggestions based on the triggers the gate emits.

Writing the Lynis Analyzer Module

First, I use the anchore tool to set up a module development environment.
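This is the setup-module-dev operation from the anchore toolbox (shown here without optional arguments; your version may accept a destination directory):

# anchore toolbox setup-module-dev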

Note the output, which shows the exact paths on your system. I run the example command just to make sure everything is sane:

# /tmp/3355618.anchoretmp/anchore-modules/analyzers/analyzer-example.sh 0f192147631d72486538039c51ef9557be11865030be2951a0fbe94ef66db618 /tmp/3355618.anchoretmp/data /tmp/3355618.anchoretmp/data/0f192147631d72486538039c51ef9557be11865030be2951a0fbe94ef66db618 /tmp/3355618.anchoretmp

RESULT: pfiles found in image, review key/val data stored in:

/tmp/3355618.anchoretmp/data/0f192147631d72486538039c51ef9557be11865030be2951a0fbe94ef66db618/analyzer_output/analyzer-example/pfiles

Since I want to write a Python module (instead of the included example shell script), I’ll start with an existing anchore Python analyzer script and call it ‘10_lynis_report.py’:

# cp /usr/lib/python2.7/site-packages/anchore/anchore-modules/analyzers/10_package_list.py /tmp/3355618.anchoretmp/anchore-modules/analyzers/10_lynis_report.py

I’ll trim most of the code out and change the ‘analyzer_name’ to a new name for this module; I’ve chosen ‘lynis_report’.

Next, I’ll add my code, which first downloads the Lynis scanner from a URL and creates a tarball that contains Lynis. Then the code uses an anchore utility routine that takes the input tarball and the input container image, runs an instance of the container with the input tarball staged and available, and executes the Lynis scanner. The routine returns the stdout/stderr output of the executed container along with the contents of a specified file from within the container (in this case, the Lynis report data itself). The last thing the analyzer does is write the Lynis report data to the anchore output directory for later use.

While writing this code, we use the following command each time to iterate and get the analyzer working the way we would like (i.e. when the lynis.report output file contains the Lynis report data itself, we know the analyzer is working properly):

# /tmp/3355618.anchoretmp/anchore-modules/analyzers/10_lynis_report.py 0f192147631d72486538039c51ef9557be11865030be2951a0fbe94ef66db618 /tmp/3355618.anchoretmp/data /tmp/3355618.anchoretmp/data/0f192147631d72486538039c51ef9557be11865030be2951a0fbe94ef66db618 /tmp/3355618.anchoretmp

# cat /tmp/3355618.anchoretmp/data/0f192147631d72486538039c51ef9557be11865030be2951a0fbe94ef66db618/analyzer_output/lynis_report/lynis.report

The finished module is here:

#!/usr/bin/env python

import sys
import os
import json
import requests
import tarfile

import anchore.anchore_utils

analyzer_name = "lynis_report"

# standard anchore analyzer bootstrap: parse the command line that anchore
# passes to every analyzer module
try:
    config = anchore.anchore_utils.init_analyzer_cmdline(sys.argv, analyzer_name)
except Exception as err:
    print str(err)
    sys.exit(1)

imgname = config['imgid']
outputdir = config['dirs']['outputdir']
unpackdir = config['dirs']['unpackdir']

if not os.path.exists(outputdir):
    os.makedirs(outputdir)

# download lynis and stage it in a tarball that anchore can inject into the
# container (staging file names here are illustrative)
try:
    datafile_dir = '/tmp/'
    url = 'https://cisofy.com/files/lynis-2.2.0.tar.gz'
    r = requests.get(url)
    lynis_tgz = '/'.join([datafile_dir, 'lynis.tgz'])
    TFH = open(lynis_tgz, 'wb')
    TFH.write(r.content)
    TFH.close()

    lynis_data_tarfile = '/'.join([datafile_dir, 'lynis_staging.tar'])
    tar = tarfile.open(lynis_data_tarfile, mode='w', format=tarfile.PAX_FORMAT)
    tar.add(lynis_tgz, arcname='/lynis.tgz')
    tar.close()

except Exception as err:
    print "ERROR: cannot stage lynis tarball: " + str(err)
    sys.exit(1)

# run the lynis scan inside an instance of the image, fetch the report file
# back out, and store it in the analyzer output directory for the gate to use
FH = open(outputdir + "/lynis.report", 'w')
try:
    fileput = lynis_data_tarfile
    (o, f) = anchore.anchore_utils.run_command_in_container(image=imgname, cmd="tar zxvf /lynis.tgz && cd /lynis && sh lynis audit system --quick", fileget="/var/log/lynis-report.dat", fileput=fileput)
    FH.write(' '.join(["LYNIS-REPORT-JSON", json.dumps(f)]))
except Exception as err:
    print str(err)

FH.close()

NOTE: this module is basic code meant only as a demonstration; it does not include error/fault checking, as that would add code unrelated to the purpose of this post.

Writing the Lynis Gate Module

The process of writing a gate is very similar to writing an analyzer – there are a few input differences and output file expectations, but the general process is the same. I will start with an existing anchore gate module and trim the functional code:

# cp /usr/lib/python2.7/site-packages/anchore/anchore-modules/gates/20_check_pkgs.py /tmp/3355618.anchoretmp/anchore-modules/gates/10_lynis_gate.py

Here is the module with the functional code trimmed out:

#!/usr/bin/env python

import sys
import os
import re

import anchore.anchore_utils

try:
    config = anchore.anchore_utils.init_gate_cmdline(sys.argv, "LYNIS report checker")
except Exception as err:
    print str(err)
    sys.exit(1)

if not config:
    sys.exit(0)

imgid = config['imgid']
imgdir = config['dirs']['imgdir']
analyzerdir = config['dirs']['analyzerdir']
comparedir = config['dirs']['comparedir']
outputdir = config['dirs']['outputdir']

try:
    params = config['params']
except:
    params = None

if not os.path.exists(imgdir):
    sys.exit(0)

# code will go here

sys.exit(0)

Next, we need to set up the input by putting the imageId that we’re testing against into an input file for the gate, and then we can run the module manually and check the output iteratively until we’re happy.

# echo 0f192147631d72486538039c51ef9557be11865030be2951a0fbe94ef66db618 > /tmp/3355618.anchoretmp/querytmp/inputimages

# /tmp/3355618.anchoretmp/anchore-modules/gates/10_lynis_gate.py /tmp/3355618.anchoretmp/querytmp/inputimages /tmp/3355618.anchoretmp/data/ /tmp/3355618.anchoretmp/data/0f192147631d72486538039c51ef9557be11865030be2951a0fbe94ef66db618/gates_output/ PARAM=True

# cat /tmp/3355618.anchoretmp/data/0f192147631d72486538039c51ef9557be11865030be2951a0fbe94ef66db618/gates_output/LYNISCHECK

The finished module is here:

#!/usr/bin/env python

import sys
import os
import re
import json
import traceback

import anchore.anchore_utils

try:
    config = anchore.anchore_utils.init_gate_cmdline(sys.argv, "LYNIS report checker")
except Exception as err:
    traceback.print_exc()
    print "ERROR: " + str(err)
    sys.exit(1)

if not config:
    sys.exit(0)

imgid = config['imgid']
imgdir = config['dirs']['imgdir']
analyzerdir = config['dirs']['analyzerdir']
comparedir = config['dirs']['comparedir']
outputdir = config['dirs']['outputdir']

try:
    params = config['params']
except:
    params = None

if not os.path.exists(imgdir):
    sys.exit(0)

# the gate writes its triggers to a file named after the gate (LYNISCHECK)
output = '/'.join([outputdir, 'LYNISCHECK'])
OFH = open(output, 'w')

try:
    # read back the report stored by the lynis analyzer (path layout as
    # shown earlier in this post)
    FH = open('/'.join([imgdir, 'analyzer_output', 'lynis_report', 'lynis.report']), 'r')
    lynis_report = False
    for l in FH.readlines():
        l = l.strip()
        (k, v) = re.match(r'(\S*)\s*(.*)', l).group(1, 2)
        if k == 'LYNIS-REPORT-JSON':
            lynis_report = json.loads(v)
    FH.close()

    # walk the key=value lines of the lynis report and emit a trigger for
    # each warning, suggestion, and vulnerable package
    if lynis_report:
        for l in lynis_report.splitlines():
            l = l.strip()
            if l and not re.match(r'^\s*#.*', l) and re.match(r'.*=.*', l):
                (k, v) = re.match(r'(\S*)=(.*)', l).group(1, 2)
                if str(k) == 'warning[]':
                    # output a trigger
                    OFH.write('LYNISWARN ' + str(v) + '\n')
                elif str(k) == 'suggestion[]':
                    OFH.write('LYNISSUGGEST ' + str(v) + '\n')
                elif str(k) == 'vulnerable_package[]':
                    OFH.write('LYNISPKGVULN ' + str(v) + '\n')

except Exception as err:
    traceback.print_exc()
    print "ERROR: " + str(err)

OFH.close()
sys.exit(0)

NOTE: this module is basic code meant only as a demonstration; it does not include error/fault checking, as that would add code unrelated to the purpose of this post.

Tie the Two Together

Now that we’re finished writing and testing the module, we can drop the new analyzer/gate modules into anchore and use the anchore CLI as normal.  First we copy the new modules into a location where anchore can use them:

cp /tmp/3355618.anchoretmp/anchore-modules/analyzers/10_lynis_report.py ~/.anchore/user-scripts/analyzers/
cp /tmp/3355618.anchoretmp/anchore-modules/gates/10_lynis_gate.py ~/.anchore/user-scripts/gates/

Next, we run the normal analyze operation which will now include the lynis analyzer:

anchore analyze --force --image ubuntu --imagetype none

Then, we can add new lines to the image’s policy that describe what actions to output if the new gate emits its triggers:

anchore gate --image ubuntu --editpolicy

# opens an editor, where you can add the following lines to the existing image’s policy
LYNISCHECK:LYNISPKGVULN:STOP
LYNISCHECK:LYNISWARN:WARN
LYNISCHECK:LYNISSUGGEST:GO

Finally, we can run the normal anchore gate, and see the resulting triggers showing up alongside the other anchore gates:

anchore gate --image ubuntu

0f192147631d: evaluating policies …
0f192147631d: evaluated.
+--------------+---------------+------------+--------------+---------------------------------+------------+
| ImageID      | Repo/Tag      | Gate       | Trigger      | CheckOutput                     | GateAction |
+--------------+---------------+------------+--------------+---------------------------------+------------+
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | BOOT-5180|Determine runlevel | GO |
| | | | | and services at startup|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | KRNL-5788|Check the output of | GO |
| | | | | apt-cache policy manually to | |
| | | | | determine why output is | |
| | | | | empty|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | AUTH-9262|Install a PAM module | GO |
| | | | | for password strength testing | |
| | | | | like pam_cracklib or | |
| | | | | pam_passwdqc|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | AUTH-9286|Configure minimum | GO |
| | | | | password age in | |
| | | | | /etc/login.defs|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | AUTH-9286|Configure maximum | GO |
| | | | | password age in | |
| | | | | /etc/login.defs|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | AUTH-9328|Default umask in | GO |
| | | | | /etc/login.defs could be more | |
| | | | | strict like 027|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | AUTH-9328|Default umask in | GO |
| | | | | /etc/init.d/rc could be more | |
| | | | | strict like 027|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | FILE-6310|To decrease the | GO |
| | | | | impact of a full /home file | |
| | | | | system, place /home on a | |
| | | | | separated partition|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | FILE-6310|To decrease the | GO |
| | | | | impact of a full /tmp file | |
| | | | | system, place /tmp on a | |
| | | | | separated partition|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | FILE-6310|To decrease the | GO |
| | | | | impact of a full /var file | |
| | | | | system, place /var on a | |
| | | | | separated partition|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | FILE-6336|Check your /etc/fstab | GO |
| | | | | file for swap partition mount | |
| | | | | options|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | STRG-1840|Disable drivers like | GO |
| | | | | USB storage when not used, to | |
| | | | | prevent unauthorized storage or | |
| | | | | data theft|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | STRG-1846|Disable drivers like | GO |
| | | | | firewire storage when not used, | |
| | | | | to prevent unauthorized storage | |
| | | | | or data theft|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | PKGS-7370|Install debsums | GO |
| | | | | utility for the verification of | |
| | | | | packages with known good | |
| | | | | database.|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISPKGVULN | tzdata | STOP |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISWARN | PKGS-7392|Found one or more | WARN |
| | | | | vulnerable packages.|M|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | PKGS-7392|Update your system | GO |
| | | | | with apt-get update, apt-get | |
| | | | | upgrade, apt-get dist-upgrade | |
| | | | | and/or unattended-upgrades|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | PKGS-7394|Install package apt- | GO |
| | | | | show-versions for patch | |
| | | | | management purposes|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | NETW-3032|Install ARP | GO |
| | | | | monitoring software like | |
| | | | | arpwatch|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | FIRE-4590|Configure a | GO |
| | | | | firewall/packet filter to | |
| | | | | filter incoming and outgoing | |
| | | | | traffic|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | LOGG-2130|Check if any syslog | GO |
| | | | | daemon is running and correctly | |
| | | | | configured.|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISWARN | LOGG-2130|No syslog daemon | WARN |
| | | | | found|H|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISWARN | LOGG-2138|klogd is not running, | WARN |
| | | | | which could lead to missing | |
| | | | | kernel messages in log | |
| | | | | files|L|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | BANN-7126|Add a legal banner to | GO |
| | | | | /etc/issue, to warn | |
| | | | | unauthorized users|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | BANN-7130|Add legal banner to | GO |
| | | | | /etc/issue.net, to warn | |
| | | | | unauthorized users|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | ACCT-9622|Enable process | GO |
| | | | | accounting|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | ACCT-9626|Enable sysstat to | GO |
| | | | | collect accounting (no | |
| | | | | results)|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | ACCT-9628|Enable auditd to | GO |
| | | | | collect audit information|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | TIME-3104|Use NTP daemon or NTP | GO |
| | | | | client to prevent time | |
| | | | | issues.|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | FINT-4350|Install a file | GO |
| | | | | integrity tool to monitor | |
| | | | | changes to critical and | |
| | | | | sensitive files|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | TOOL-5002|Determine if | GO |
| | | | | automation tools are present | |
| | | | | for system management|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | KRNL-6000|One or more sysctl | GO |
| | | | | values differ from the scan | |
| | | | | profile and could be | |
| | | | | tweaked|-|-| | |
| 0f192147631d | ubuntu:latest | LYNISCHECK | LYNISSUGGEST | HRDN-7230|Harden the system by | GO |
| | | | | installing at least one malware | |
| | | | | scanner, to perform periodic | |
| | | | | file system scans|-|-| | |
| 0f192147631d | ubuntu:latest | ANCHORESEC | VULNLOW | Low Vulnerability found in | GO |
| | | | | package – glibc (CVE-2015-5180 | |
| | | | | – http://people.ubuntu.com | |
| | | | | /~ubuntu- | |
| | | | | security/cve/CVE-2015-5180) | |
| 0f192147631d | ubuntu:latest | ANCHORESEC | VULNMEDIUM | Medium Vulnerability found in | WARN |
| | | | | package – coreutils | |
| | | | | (CVE-2016-2781 – | |
| | | | | http://people.ubuntu.com | |
| | | | | /~ubuntu- | |
| | | | | security/cve/CVE-2016-2781) | |
| 0f192147631d | ubuntu:latest | ANCHORESEC | VULNLOW | Low Vulnerability found in | GO |
| | | | | package – shadow (CVE-2013-4235 | |
| | | | | – http://people.ubuntu.com | |
| | | | | /~ubuntu- | |
| | | | | security/cve/CVE-2013-4235) | |
| 0f192147631d | ubuntu:latest | ANCHORESEC | VULNMEDIUM | Medium Vulnerability found in | WARN |
| | | | | package – glibc (CVE-2016-3706 | |
| | | | | – http://people.ubuntu.com | |
| | | | | /~ubuntu- | |
| | | | | security/cve/CVE-2016-3706) | |
| 0f192147631d | ubuntu:latest | ANCHORESEC | VULNLOW | Low Vulnerability found in | GO |
| | | | | package – glibc (CVE-2016-1234 | |
| | | | | – http://people.ubuntu.com | |
| | | | | /~ubuntu- | |
| | | | | security/cve/CVE-2016-1234) | |
| 0f192147631d | ubuntu:latest | ANCHORESEC | VULNMEDIUM | Medium Vulnerability found in | WARN |
| | | | | package – bzip2 (CVE-2016-3189 | |
| | | | | – http://people.ubuntu.com | |
| | | | | /~ubuntu- | |
| | | | | security/cve/CVE-2016-3189) | |
| 0f192147631d | ubuntu:latest | ANCHORESEC | VULNMEDIUM | Medium Vulnerability found in | WARN |
| | | | | package – util-linux | |
| | | | | (CVE-2016-2779 – | |
| | | | | http://people.ubuntu.com | |
| | | | | /~ubuntu- | |
| | | | | security/cve/CVE-2016-2779) | |
| 0f192147631d | ubuntu:latest | FINAL | FINAL | | STOP |
+--------------+---------------+------------+--------------+---------------------------------+------------+

Peek Into Your Containers With 3 Simple Commands

If you are just looking to run a common Linux application such as Tomcat or WordPress, it’s far simpler to download a pre-packaged image from DockerHub than to install the application from scratch. But with tens of thousands of images on DockerHub, you are likely to find many variations of the application in question; even within official repositories you may find multiple versions of an application.

In previous blog posts, we have introduced the Anchore open source project which provides a rich toolset to allow developers, operations, and security teams to maintain full visibility of the ‘chain of custody’ as containers move through the development lifecycle.

In our last blog, we covered a couple of simple use cases, allowing a user to dig into the contents of a container looking at specific files or packages. In this blog post, I wanted to introduce you to three interesting features within Anchore.

There are seven top-level commands within the Anchore command-line tools. These can be seen by running the anchore command with no other options:

Command  Description
analyze  Perform analysis on specified image IDs
explore  Search, report and query specified image IDs
gate  Perform and view gate evaluation on selected images
subscriptions  Manage local subscriptions
sync  Synchronize images and metadata
system  Anchore system-level operations
toolbox  Useful tools and operations on images and containers

In previous blog posts, we have presented the analyze, explore and gate commands, but in this blog post, we wanted to highlight a couple of the lesser-known features in the toolbox that we found very useful in our day to day use of containers.

Running anchore toolbox will show the sub-commands available:

Command  Description
setup-module-dev  Setup a module development environment
show  Show image summary information
show-dockerfile  Generate (or display actual) image Dockerfile
show-familytree  Show image family tree image IDs
show-layers  Show image layer IDs
show-taghistory  Show history of all known repo/tags for image
unpack  Unpack and Squash image to local filesystem

While Docker allows applications to be packaged as easily distributed containers, transparently providing both the underlying operating system and the application, you often need to know exactly what operating system this application is built upon. This information may be required to fulfill compliance or audit requirements in your organization or to ensure that you are only deploying operating systems for which you have commercial support agreements.

If you are lucky then the full description of the container on the DockerHub portal contains details about the operating system used. But in many cases, this information isn’t presented.

One way to ascertain which operating system is used is to download and run the image and inspect the file system; however, that’s a manual and time-consuming process. The show command presents a simple way to retrieve this information.

Taking a look at nginx, the most popular image on DockerHub:
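Running the show command against the image (the invocation mirrors the show-dockerfile example later in this post):

# anchore toolbox --image=nginx:latest show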

IMAGEID='0d409d33b27e47423b049f7f863faa08655a8c901749c2b25b93ca67d01a470d'
REPOTAGS='docker.io/nginx:latest'
DISTRO='debian'
DISTROVERS='8'
SHORTID='0d409d33b27e'
PARENTID=''
BASEID='0d409d33b27e47423b049f7f863faa08655a8c901749c2b25b93ca67d01a470d'

Here we see the latest image is built on Debian version 8 (Jessie).

Another useful toolbox function is show-taghistory, which shows the known tags for a given image. Here we can see that the latest image is also tagged as 1.11 and 1.11.1:
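Again the invocation follows the same pattern as the other toolbox examples:

# anchore toolbox --image=nginx:latest show-taghistory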

+--------------+---------------------+-------------------------+
|   ImageId    |         Date        |        KnownTags        |
+--------------+---------------------+-------------------------+
| 0d409d33b27e | Wed Jun 15 14:37:03 | nginx:1.11,nginx:1.11.1 |
|              |         2016        |      ,nginx:latest      |
| 0d409d33b27e | Wed Jul 13 15:57:51 | nginx:1.11,nginx:1.11.1 |
|              |         2016        |      ,nginx:latest      |
| 0d409d33b27e | Wed Jul 13 16:35:14 |  docker.io/nginx:latest |
|              |         2016        |                         |
+--------------+---------------------+-------------------------+

The final toolbox feature I want to highlight is one that many users do not know is available: the ability to retrieve the Dockerfile for a given image. The show-dockerfile command will either display the Dockerfile, if it was available during the image analysis phase, or generate one from the image.

This information may be useful if you wish to look under the covers to understand how the container was created or to check for any potential issues with the container content. The contents of the dockerfile may also be used within our ‘gates’ feature, for example allowing you to specify that specific ports may not be exposed.

# anchore toolbox --image=nginx:latest show-dockerfile
--- ImageId ---
0d409d33b27e

--- Mode ---
Guessed

Here the mode Guessed indicates that the dockerfile was generated by the tool during image analysis.

There are other toolbox commands that include the ability to show the family tree of an image, display the image layers, or unpack the image to the local filesystem.

If you haven’t already installed Anchore and begun scanning your container images, take a look at the installation and quick-start guides on our wiki at https://github.com/anchore/anchore/wiki.

Anchore Use Cases

We just released the first version of the open-source Anchore command-line tools and we’re excited for the container community to take a look at what we’ve done and provide feedback. This blog post will outline a couple of basic use cases for some of the queries you can run using the tools, and hopefully, give you some ideas for integrating Anchore into your container image management workflow.

Anchore scans container images and records a great deal of information about them: package and file lists, image hierarchies and family trees to track provenance and changes, and maps known security vulnerabilities to the packages installed on your container images. The command-line tools provide a number of ways to query this data.

If you haven’t already installed Anchore and begun scanning your container images, take a look at our installation and quick-start guides.

Once you’re set up, let’s run a couple of basic package queries. Maybe you want to confirm that a certain library of a specific version is installed across all of your images, for consistency: there’s nothing worse than the dependency hell of a couple of mismatched libraries causing issues throughout your infrastructure. Or maybe your organizational policies require that a certain monitoring package be installed consistently on all of your production containers. These are questions that Anchore can quickly and easily answer.

Here’s an example command that searches a file containing a list of image ids for the “curl” package, and reports the version found:
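The invocation is modeled on the base-status query shown further down; the query name here is illustrative and may differ in your version:

# anchore explore --imagefile ~/myimages.txt query has-package curl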

+--------------+-----------------------+------------+---------+----------------------+
| ImageID      | Repo/Tag              | QueryParam | Package | Version              |
+--------------+-----------------------+------------+---------+----------------------+
| 6a77ab6655b9 | centos:6              | curl       | curl    | 7.19.7-52.el6        |
| 20c80ee30a09 | ryguyrg/neo4j-panama- | curl       | curl    | 7.38.0-4+deb8u3      |
|              | papers:latest         |            |         |                      |
| 8fe6580be3ef | slackbridge:latest    | curl       | curl    | 7.43.0-1ubuntu2.1    |
| db688f102aeb | devbad:latest         | curl       | curl    | 7.29.0-25.el7.centos |
+--------------+-----------------------+------------+---------+----------------------+

That’s pretty simple. How about something a little bit more interesting? Since Anchore has the ability to correlate information about all of your container images together, it can make useful suggestions based not just on the contents of one image, but on all of your images. For example, the “base-status” query will show you whether a particular image is up to date relative to its base image:

# anchore explore --imagefile ~/myimages.txt query base-status all
+--------------+-----------------------+---------------+-----------------------+------------+--------------+--------------------+
| InputImageId | InputRepo/Tag         | CurrentBaseId | CurrentBaseRepo/Tag   | Status     | LatestBaseId | LatestBaseRepo/Tag |
+--------------+-----------------------+---------------+-----------------------+------------+--------------+--------------------+
| db688f102aeb | devbad:latest         | db688f102aeb  | devbad:latest         | up-to-date | N/A          | N/A                |
| 20c80ee30a09 | ryguyrg/neo4j-panama- | 20c80ee30a09  | ryguyrg/neo4j-panama- | up-to-date | N/A          | N/A                |
|              | papers:latest         |               | papers:latest         |            |              |                    |
| 8fe6580be3ef | slackbridge:latest    | 0b4516a442e7  | ubuntu:wily           | up-to-date | N/A          | N/A                |
| 89fbcb00e7a2 | devgood:latest        | 2fa927b5cdd3  | ubuntu:latest         | up-to-date | N/A          | N/A                |
| 6a77ab6655b9 | centos:6              | 6a77ab6655b9  | centos:6              | up-to-date | N/A          | N/A                |
+--------------+-----------------------+---------------+-----------------------+------------+--------------+--------------------+

If the status is ‘up-to-date’, it means that the container image the input image was initially built from (e.g. what was specified in the FROM line of the input image’s Dockerfile) is currently the same as it was when the image was originally built. A status of ‘out-of-date’ means that rebuilding the input image with the same Dockerfile would result in a different final image, since the base has since been updated (indicated by the LatestBaseId column). This query can be used to determine how ‘fresh’ the analyzed container images are with respect to their base images, and could trigger an action to rebuild and redeploy the application containers if they are getting too far out of date from their bases.
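For example, a CI job could key a rebuild off this query; a minimal sketch (the image list file and the rebuild wiring are assumptions specific to your setup):

#!/bin/sh
# fail the stage if any analyzed image is out of date against its base
if anchore explore --imagefile ~/myimages.txt query base-status all | grep -q out-of-date; then
    echo "base image updated, rebuild and redeploy required"
    exit 1
fi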

Anchore’s query and analysis infrastructure is pluggable, so you can write your own! Stay tuned for more interesting and useful ways to use the data that we collect: with Anchore’s help, your container infrastructure will be slim, up-to-date, and secure.

Anchore Open Source Release is Live

Whether it’s security, orchestration, management or monitoring, there are many projects, products and companies vying to provide users a way to successfully deploy their apps at scale, with a minimum amount of friction. All of these projects are trying to solve a runtime problem with containers, or performing simple security vulnerability scanning; the big question of what happens in the pre-production cycle remains open, a period I’ll call the “Dark Ages of the Container Lifecycle”.

With traditional IT models this problem was largely addressed by standardizing on commercial Linux distributions such as Red Hat’s Enterprise Linux, now the gold standard within Fortune 1000 companies. This helped aggregate and certify the Linux distribution with thousands of ISVs, providing a production-ready “golden image,” and ensuring enterprise-grade support. Today, that certification process for containers is mostly self-driven and highly unpredictable, with many stakeholders and no single “throat to choke.”

Anchore Open Source Release

This week’s Anchore open source release addresses a major challenge in today’s container technology space and provides a platform for the open source community to participate and share ideas. Our open source release gives users the ability to pick from a vetted list of containers, analyze new containers, and inspect existing ones — either in the public domain or behind a firewall. In the past, these tasks were left to the user, creating an even bigger challenge and widening the gap between developers and operations. Anchore bridges the gap between Dev and Ops.

Data Analytics meets Container Compute

An unprecedented amount of churn (more than any other technology in the past, with over a billion downloads) illustrates the tremendous amount of information exchange at stake, and at risk from container sprawl. Managing all this data — today and over the coming years — becomes a challenging geometric problem, to say the least. Container dependencies and relationships, security checks, functional dependencies, versioning, and so on, all become incredibly hard to manage. This will widen the gap between Dev and Ops, and in turn make transparency and predictability paramount for operations and security teams.

Pre-production data for production readiness

Tens of gigabytes of information are now at the fingertips of Anchore users. Today, our open source release provides this data for the top 10 most downloaded application containers, including Ubuntu, NginX, Redis and MySQL, with new ones to follow as the need arises. Our hosted service is continuously tracking and analyzing every update and upgrade while keeping track of earlier versions for completeness. This data can then be used as a baseline to set and enforce policies, coupled with a proactive notification mechanism that lets users see potential vulnerabilities and critical bugs in a timely fashion. Anchore will provide operations and security teams the confidence necessary to deploy in production.

Anchore longer term

We are still in the first inning of a very long game in IT. Security, orchestration and management challenges are incrementally being addressed by small and large companies alike. The transformational effect containerization will have on IT will bring about new and interesting challenges. Future releases of Anchore, starting with our beta release next month, will address the data aspects of containers, provide actionable advice based on that data, and bring about more transparency. Most importantly, Anchore promises the predictability and control needed for mission-critical production deployments.

Introducing Anchore for Docker Technology Demo & System

Today, we are going to show how Anchore technology fits into a Docker-based container workflow to provide trust, insight, and validation to your container ecosystem without inhibiting the flexibility, agility, and speed of development that makes container-based deployment platforms so valuable. This post will walk through using Anchore in a deployment scenario for a container-based application.

And we’ll also discuss the container registry, curation, and management capabilities of Anchore as well as analysis, control, inspection, and review of containers.

The rest of this document is organized around a container-based application deployment workflow composed of the following basic phases:

  1. Creation of trusted well-known base containers
  2. Creation and validation of application containers built by developers or a CI/CD system
  3. Analysis of containers to determine acceptance for production use prior to deployment

This post will present both an operations and a developer perspective on each phase, and describes how Anchore operates in each one.

Setup and Creating Anchore Curated Containers

Starting with the operations perspective, the first step is to create a registry that hosts trusted containers and exposes them to development teams for use as the base containers from which application containers are built. This is the job of the Anchore registry management tool. The tool creates a local registry and orchestrates the images pulled from Docker Hub (or another public-facing registry) with the image analysis metadata provided by the Anchore service.

So, let’s first create a local registry and sync it to Anchore. Starting with an installed registry tool, run the init command to initialize the registry and do an initial sync:

[root@tele ~]# anchore-registry init
Creating new anchore anchore-reg at /root/.local/
[root@tele ~]#

After the command is run, the registry is initialized and contains some metadata about the base subscribed images as well as vulnerability data. You can view the set of subscribed containers by running the subscriptions command:

[root@tele ~]# anchore-registry subscriptions
[]
[root@tele ~]#

To subscribe to a few more containers, for example centos, ubuntu, mysql and redis, run the subscribe command. This command will not pull any data, only change the subscription values:

[root@tele ~]# anchore-registry subscribe centos ubuntu mysql redis
Subscribing to containers [u'centos', u'ubuntu', u'mysql', u'redis']
Checking sources: [u'ubuntu', u'centos', u'busybox', u'postgres', u'mysql',
u'registry', u'redis', u'mongo', u'couchbase', u'couchdb']
[root@tele ~]#
[root@tele ~]# anchore-registry subscriptions
[u'centos', u'ubuntu', u'mysql', u'redis']
[root@tele ~]#

To pull those containers and metadata from Anchore, run the sync command:

[root@tele ~]# anchore-registry sync
Synchronizing anchore registry with remote
[root@tele ~]#

By synchronizing with Anchore, we now have the ability to inspect the container to see what kind of analysis and information you get from a curated Anchore container. Let’s search those containers for specific packages.

We now have the ability to “navigate” the information that Anchore gathers about the subscribed containers. For example, you can find all of the containers that have a particular package installed:

[root@tele ~]# anchore --allanchore navigate --search --has-package 'ssl*'
+--------------+--------------------+----------------------+-------------------+---------------------+
| ImageId      | Current Repo/Tags  | Past Repo/Tags       | Package           | Version             |
+--------------+--------------------+----------------------+-------------------+---------------------+
| 778a53015523 | centos:latest      | centos:latest        | openssl-libs      | 1.0.1e-51.el7_2.4   |
| f48f462dde2f | devone:apr15       | devone:latest        | openssl-libs      | 1.0.1e-51.el7_2.4   |
|              | devone:latest      | devone:apr15         |                   |                     |
| 0f0e96f1f267 | redis:latest       | redis:latest         | libssl1.0.0:amd64 | 1.0.1k-3+deb8u4     |
| b72889fa879c |                    | ubuntu:latest        | libssl1.0.0:amd64 | 1.0.1f-1ubuntu2.18  |
| b72889fa879c |                    | ubuntu:latest        | libgnutls-        | 2.12.23-12ubuntu2.5 |
|              |                    |                      | openssl27:amd64   |                     |
+--------------+--------------------+----------------------+-------------------+---------------------+
[root@tele ~]#

That output shows us which images in the local docker repo contain the ssl* package.

Analyzing Changed Containers

Now, assume that a developer has built an application container using one of the curated Anchore images and has pushed that container back into the local docker repo. In order to determine if this developer container is okay to push into production, it’s helpful to see how the container changed from its parent image (in the FROM clause of the dockerfile).

Anchore provides a specific report for this that gives insight into exactly what has changed at a file, package, and checksum level. If, for example, the developer built the container with the following steps:

[root@tele ~]# cat Dockerfile
FROM centos:latest
RUN yum -y install wget
CMD ["/bin/echo", "HELLO WORLD FROM DEVONE"]
[root@tele ~]#

[root@tele ~]# docker build --no-cache=True -t devone .
...
...
Successfully built f48f462dde2f
[root@tele ~]#

First, we need to run the analysis tools on the image. For convenience we can just specify all local images (those already processed are skipped). The result of this command is locally stored analysis data for the images that have not been analyzed yet:

[root@tele ~]# anchore --image devone analyze --dockerfile ./Dockerfile
Running analyzers: 2791834d4281 ...SUCCESS
Running analyzers: f48f462dde2f ...SUCCESS
Running analyzers: 778a53015523 ...SUCCESS
Running differs: f48f462dde2f to 778a53015523...SUCCESS
Running differs: 778a53015523 to f48f462dde2f...SUCCESS
[root@tele ~]#

Now, we can view the reports and metadata that resulted from the analysis pass. With this report, we can see exactly the delta between an image and its parent:

[root@tele ~]# anchore --image devone navigate --report

CI/CD Gates

The next step is to determine if the image is acceptable to put into production. Anchore provides mechanisms to describe gating policies that are run against each image and can be used to gate an image’s entry into production (e.g., as a step in a continuous integration pipeline).

Gate policies can include things like file content changes, properties of Dockerfiles, and the presence of known vulnerabilities. To check an image against the gates, run the control --gate command. The output will show all of the gate evaluations against the image:

[root@tele ~]# anchore --image devone control --gate
+--------------+-----------------+-------------+
| ImageId      | Gate            | GateAction  |
+--------------+-----------------+-------------+
| f48f462dde2f | ANCHORECHECK    | GO          |
| f48f462dde2f | PKGDIFF         | GO          |
| f48f462dde2f | DOCKERFILECHECK | GO          |
| f48f462dde2f | SUIDDIFF        | GO          |
| f48f462dde2f | USERIMAGECHECK  | GO          |
| f48f462dde2f | NETPORTS        | GO          |
| f48f462dde2f | FINALACTION     | GO          |
+--------------+-----------------+-------------+
[root@tele ~]#

If these statuses are all GO, then that container has passed all gates and is ready for production or further functional testing in a CI/CD system.
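
In a CI/CD pipeline, that pass/fail decision can be automated by failing the build whenever any gate reports STOP. Below is a minimal sketch of such a step; it parses the tabular output shown above with grep, which is a convenience for illustration rather than a stable programmatic interface:

#!/bin/sh
# Hypothetical CI gate step: fail the build if any Anchore gate reports STOP.
# This greps the plain-text table shown above, which is an assumption made
# for illustration, not a supported interface.
IMAGE=devone

output=$(anchore --image "$IMAGE" control --gate) || exit 1
echo "$output"

if echo "$output" | grep -q 'STOP'; then
    echo "Anchore gate reported STOP for $IMAGE; blocking promotion." >&2
    exit 1
fi
echo "All gates GO for $IMAGE; proceeding to the next pipeline stage."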

Aggregate Container Introspection and Search

After some time has passed and your Docker environment has accrued more developer container images, the Anchore tools can be used to perform a variety of exploration, introspection, and search actions over the entire set of analyzed container images.

We can search the container image space for packages, files, common packages that have been added, and various types of differences between the application containers and their Anchore-curated base images. Some example queries are illustrated below.

This query shows us all images whose installed /etc/passwd file differs from the one in their base image (i.e., it has been modified either directly or indirectly):

[root@tele ~]# anchore --alldocker navigate --search --show-file-diffs /etc/passwd
+--------------+-------------------+----------------+--------------+--------------+----------------------+----------------------+
| ImageId | Current Repo/Tags | Past Repo/Tags | BaseId | File | Image MD5 | Base MD5 |
+--------------+-------------------+----------------+--------------+--------------+----------------------+----------------------+
| 3ceace5b73b0 | devfive:latest | devfive:latest | 778a53015523 | ./etc/passwd | 7073ff817bcd08c9b9c8 | 60c2b408a06eda681ced |
| | | | | | cee4b0dc7dea | a05b0cad8f8a |
| c67409e321d6 | devfive:apr15 | devfive:apr15 | 778a53015523 | ./etc/passwd | 7073ff817bcd08c9b9c8 | 60c2b408a06eda681ced |
| | | devfive:latest | | | cee4b0dc7dea | a05b0cad8f8a |
+--------------+-------------------+----------------+--------------+--------------+----------------------+----------------------+
[root@tele ~]#

The next query shows all images that are currently in an Anchore gate STOP state:

[root@tele ~]# anchore --alldocker navigate --search --has-gateaction STOP
+--------------+--------------------+--------------------+-------------+
| ImageId | Current Repo/Tags | Past Repo/Tags | Gate Action |
+--------------+--------------------+--------------------+-------------+
| 3ceace5b73b0 | devfive:latest | devfive:latest | STOP |
| 55c843b5c7a3 | devthirteen:apr15 | devthirteen:apr15 | STOP |
| | devthirteen:latest | devthirteen:latest | |
| 2785fa3ab761 | devfifteen:apr15 | devfifteen:apr15 | STOP |
| | devfifteen:latest | devfifteen:latest | |
| 4e02de1e5ca5 | devtwelve:apr15 | devtwelve:apr15 | STOP |
| | devtwelve:latest | devtwelve:latest | |
| dd490a4ef2b3 | devsix:apr15 | devsix:apr15 | STOP |
| | devsix:latest | devsix:latest | |
| a7f1bb64c477 | develeven:apr15 | develeven:apr15 | STOP |
| | develeven:latest | develeven:latest | |
| b33f58798470 | devseven:apr15 | devseven:apr15 | STOP |
| | devseven:latest | devseven:latest | |
| c67409e321d6 | devfive:apr15 | devfive:apr15 | STOP |
| | | devfive:latest | |
| f48f462dde2f | devone:apr15 | devone:latest | STOP |
| | devone:latest | devone:apr15 | |
| 0f0e96f1f267 | redis:latest | redis:latest | STOP |
| 63a92d0c131d | mysql:latest | mysql:latest | STOP |
+--------------+--------------------+--------------------+-------------+
[root@tele ~]#

This last query shows us a count of common packages that have been installed in application containers, which can be used to determine how popular certain packages are among those building containers from base images:

[root@tele ~]# anchore --alldocker navigate --search --common-packages
+--------------+-------------------+----------------+-------------------+--------------------+
| BaseId | Current Repo/Tags | Past Repo/Tags | Package | Child Images w Pkg |
+--------------+-------------------+----------------+-------------------+--------------------+
| 778a53015523 | centos:latest | centos:latest | wget | 2 |
| 778a53015523 | centos:latest | centos:latest | sudo | 2 |
| 778a53015523 | centos:latest | centos:latest | gpg-pubkey | 4 |
| 44776f55294a | ubuntu:latest | ubuntu:latest | wget | 9 |
| 44776f55294a | ubuntu:latest | ubuntu:latest | ca-certificates | 9 |
| 44776f55294a | ubuntu:latest | ubuntu:latest | libssl1.0.0:amd64 | 9 |
| 44776f55294a | ubuntu:latest | ubuntu:latest | libidn11:amd64 | 9 |
| 44776f55294a | ubuntu:latest | ubuntu:latest | openssl | 9 |
+--------------+-------------------+----------------+-------------------+--------------------+
[root@tele ~]#
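
To rank these results by popularity, the count column can be sorted with standard shell tools. A small sketch, again assuming the exact table layout shown above (the awk field positions are tied to that layout):

# Hypothetical helper: list the most commonly added packages first.
# The field numbers assume the table layout shown above, so treat this
# as an illustration rather than a stable interface.
anchore --alldocker navigate --search --common-packages \
    | awk -F'|' 'NF >= 7 && $6 + 0 > 0 { print $6 + 0, $5 }' \
    | sort -rn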

Container Image Visualizations

Anchore CLI tools can be used to view, inspect, search, and run specific queries against individual container images and sets of images, but it is often helpful to view a container image collection graphically, with coloring applied to highlight certain qualities of the images. Anchore can generate a number of such visualizations from its analysis data.

As an example, we have a visual representation of container images in a system with just 15 application containers, where each node is colored green, yellow, or red to indicate the severity of any CVE vulnerabilities present in the container image.

This visualization can be generated at any time against the full list of analyzed container images, or against a list of images derived from the set of currently deployed containers.
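
Anchore's built-in visualizations do this for you, but as a rough sketch of the idea, the STOP-gate query from the previous section can also be fed into Graphviz by hand, coloring any failing image red. The table parsing and DOT generation below are our own illustration, not an Anchore interface; a fuller version could also draw edges from each base image to its children.

#!/bin/sh
# Hypothetical sketch: render gate results as a Graphviz graph, coloring any
# image in a STOP state red. Parses the tabular CLI output shown earlier,
# which is an assumption made for illustration only.
{
    echo 'digraph images {'
    anchore --alldocker navigate --search --has-gateaction STOP |
        awk -F'|' '/STOP/ { gsub(/ /, "", $2); if ($2 != "") print "    \"" $2 "\" [style=filled, fillcolor=red];" }'
    echo '}'
} > images.dot

dot -Tpng images.dot -o images.png    # requires Graphviz to be installed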

Enterprise Networking Planet: Container Networking Challenges for Enterprises

Arthur Cole – Enterprise Networking Planet – April 28, 2016

Establishing connectivity between containers in a network fabric is one challenge; coordinating their activities is yet another. According to Computer Weekly’s Adrian Bridgwater, a key issue is predictability, which is largely a function of the enterprise’s ability to inspect, certify and synchronize container contents.

A start-up called Anchore Inc. targets this process through a common portal that application developers can use to select verified containers from established registries. In this way, they receive containers that have been pre-screened for compatibility, vulnerability and other aspects that are crucial for deploying tightly orchestrated container environments quickly and easily.

Read the original and complete article on Enterprise Networking Planet.

The Cloudcast Podcast: Trouble Inside Your Containers

Last week, our very own Tim Gerla, VP of Product, and Dan Nurmi, CTO and Co-Founder, were interviewed on an episode of The Cloudcast. Hosts Aaron Delp and Brian Gracely spoke with Tim and Dan about a number of issues, including container security, how to avoid slowing down developers, and the challenges that Anchore is attempting to solve.

You can listen to the podcast now, for free, at The Cloudcast's website. The Cloudcast is an award-winning podcast covering all things cloud computing, the AWS ecosystem, open source, DevOps, AppDev, SaaS, and SDN.

Computer Weekly: Anchore, A New Name for Container Predictability

Adrian Bridgwater – Computer Weekly – April 8, 2016

As a newly formed operational entity, Anchore Inc. has announced the formation of the company itself and (in literally the same mouthful) launched its beta program for users working with containers.

Users can sign up for the Anchore beta program now with expected availability in Q2 of 2016.

But what is Anchore and how do we achieve container predictability?

Read the original and complete article on ComputerWeekly.com.

Fortune: Stealthy Startup Says It Can Build Safer Software

Barb Darrow – Fortune – April 6, 2016

Anchore to certify software containers as ready for prime time.

Saïd Ziouani, one of the forces behind Ansible, the tool that helps automate software development and deployment, is back with a new company.

Anchore, based in Santa Barbara, Calif., is making its debut Wednesday with $2.5 million in seed money and what it says is a new way to inspect, track, and secure software containers. “We’re opening up the box,” Ziouani noted. “We can tell exactly where it came from, who touched it, and if it’s ready for mission-critical production environment or not.”

Read the original and complete article on Fortune.com.

Anchore’s Official Launch: How Did We Get Here?

If you spend any time in the technology industry, you’ll probably be struck by how quickly the world changes. A lot of promising technological trends disappear as quickly as they appear, but some have staying power. Most are familiar with the technology adoption life cycle, originally published in 1957. Its premise holds true, and we can see it in action every day.

I’ve spent most of my career in infrastructure technology, starting with rPath, where we pioneered the concept of “software appliances”—all-in-one software units containing all of the required dependencies all the way up to a minimal version of the base operating system. rPath was around for the introduction of cloud computing in 2006 when Amazon launched the first version of its Simple Storage Service (S3). Public cloud computing has outlasted the hype and become dominant throughout many industries because of its low barrier to entry, effectively limitless scale, and aggressive pricing.

Private cloud computing, however, has not been as successful. I spent five years at Eucalyptus Systems building and selling an on-premise implementation of Amazon's cloud platform. OpenStack was founded during that time, and we struggled to gain community and market adoption. An amazing number of platform companies sprang up during that period, including Cloud.com, Nebula, and Piston Cloud, and several older infrastructure service projects moved into the private cloud market: OpenQRM, OpenNebula, and Abiquo. Still, large-scale adoption of private cloud platforms was elusive. Amazon's EC2 was a major competitor, and despite the hype from OpenStack, Eucalyptus, and others, the advantages of public cloud computing didn't always translate well into on-premise environments.

Container Origins and Adoption

Unless you’ve been living in a cave (No offense to cave-dwellers! I’m envious sometimes), you’ve heard of these new things like “Docker” and “containers.” Containers are actually not new. Linux has supported containers since 2001, but only lately has container-based systems management become popular. There are a lot of advantages to running apps in their own containers; advantages we were trying to exploit at rPath by bundling all of the required dependencies into a single, minimal computing environment.

Containers promise unified environments between development, test, and production, with happier and more productive developers, greater ease of troubleshooting, fewer side effects when different system components are changed, and overall, more stable and more frequently updated applications. I spent most of 2014 skeptical of container promises thinking, “Isn’t this just virtualization again?” and, “This is more hyped than OpenStack, and look at how few production deployments of THAT exist?” But as I speak to more and more container users, I realize that adoption in production is occurring at a much faster rate than any other technological change I’ve experienced in my career.

This rapid adoption is good news for a lot of people, including container management companies, developers frustrated by slow test/release cycles, and anyone responsible for managing large-scale systems with lots of dependencies and moving parts. All of this comes with risks, however. One of the problems we struggled with at rPath was handling out-of-band changes to “appliancized” systems. There was still a long modify-test-deploy cycle. This duration sometimes led to software appliances being modified in ways that were unmanageable, taking us right back to the inflexible and expensive “golden image” model, where the carefully hand-crafted golden image was the source of truth for how an environment should be constructed. If you lost that golden image, or if you needed to make major changes, you had a lot of work to do.

Problems and Solutions

Containers face many of the same problems today, including hand-crafted, "artisan" containers, and there are still few tools to manage provenance, examine container contents, and track changes over time. While this may not be a burden for developers, it rapidly becomes a headache for those responsible for production operations and application security.

At Anchore, launched today, we are building tools to manage contents of the containers themselves, how they change over time, where they come from, and what’s inside, giving dev, test, and ops the visibility they need for reliable and secure application deployments. While early in our journey, we see the rapid and widespread adoption of container technology and are excited to watch what the container ecosystem has in store, and how we can help improve the agility, safety, and productivity of application developers throughout the industry.

Deploying Containers with Confidence

Container technology brings about a compute model that has long been sought after: agile application development and portability across heterogeneous environments, with development and operations teams aligned in ways never before possible. Well, that's the promise for now, at least.

Industry backing from the likes of Google, Red Hat, Intel, IBM, and VMware, to name a few, clearly shows the strength and staying power of containerized apps for years to come. Google, in fact, has been using container technology since long before the buzz. Docker has helped containers cross over to the mainstream, where developers can now extract value more easily and quickly.

But in reality, container technology has also brought new challenges that can make deploying in production a near-impossible task. The new compute paradigm, which in most cases forces existing infrastructures to be replatformed, is creating a shift in IT thinking. While the bare-metal-to-virtualization transition offered substantial density gains and a fairly easy migration path, containers are different. Today, new projects make up the majority of deployments, while migration of existing infrastructure continues to lag far behind.

DockerHub, the largest container repository out there today, has seen close to 1B downloads so far. Spanning operating systems, databases, web services, and many other technologies, the sheer download volume alone can intimidate anyone trying to deploy in mission-critical environments (think Linux circa 2000). With new features being added at an unprecedented pace, just keeping up with the latest ones is hard enough, let alone identifying the most stable ones.

Having spoken to hundreds of users over the past year, it is clear to us that transparency and predictability are key to bridging this gap for future production deployments of containers. A billion downloads do not necessarily equate to a stable platform; they could instead point to an enormous amount of potential risk. For peace of mind, users who need a stable platform today tend to create their own repositories as a way to mitigate that risk. Those repositories will most likely become stale over time while the upstream sources continue to evolve and mature. This shows, once again, that for many teams the agility of application development and deployment with containers outweighs the need to keep up with the latest and greatest technology in the public repositories.

This is where Anchore comes in. Our goal is to bridge that gap by creating a model of transparency and predictability that gives users, whether in development, operations, or security, the tools they need to effectively capitalize on the container compute model.

Anchore is a tool that lets everyone pick from a collection of container-based apps that not only clearly show their origin and entire history, but have also been vetted for security, vulnerabilities, and functional completeness: a set of containers that have been "Anchore certified" through collaboration with both internal and community users and tagged as production-ready. The result is a repository that is not only stable, but also includes the most up-to-date container functionality, security checks, and bug fixes.