Kubernetes Admission Controller Dynamic Policy Mappings & Modes

In December, Anchore introduced an admission controller solution for Kubernetes that gates pod execution based on Anchore analysis and policy evaluation of image content (including vulnerability scanning). It supports three different modes of operation, allowing you to tune the tradeoff between control and intrusiveness for your environments.

To summarize, those modes are:

  1. Strict Policy-Based Admission Gating Mode – Images must pass policy evaluation by Anchore Engine for admission.
  2. Analysis-Based Admission Gating Mode – Images must have been analyzed by Anchore Engine for admission.
  3. Passive Analysis Trigger Mode – No admission requirement; images are simply submitted to Anchore Engine for analysis prior to admission, and the analysis itself is asynchronous.

The multi-mode flexibility is great for customizing how strictly the controller enforces policy compliance (if at all), but on its own it does not let you apply different bundles with different policies to the same image based on annotations or labels in Kubernetes, where there is typically more context about how strictly an image should be evaluated.

Consider the following scenario:

Your cluster has two namespaces: testing and production. You'll be deploying many of the same images into those namespaces, but you want testing to use much more permissive policies than production. Let's consider the two policies:

  • testing policy – only block images with critical vulnerabilities
  • production policy – block images with high or critical vulnerabilities or that do not have a defined healthcheck

Now, let's also allow pods to run in the production environment regardless of the image content if the pod has a special label: 'breakglass=true'. These kinds of high-level policies are useful for operations work that requires temporary access using specific tools.

Such a scenario would not be achievable with the older controller. So, based on user feedback, we've added the ability to select entirely different Anchore policy bundles based on metadata in Kubernetes as well as the image tag itself. This complements Anchore's internal mapping structures within policy bundles, which give fine-grained control over which rules to apply to an image based on the image's tag or digest.

Broadly, the controller’s configuration now supports selector rules that encode a logical condition like this (in words instead of yaml):

If a metadata property's name matches SelectorKeyRegex and its value matches SelectorValueRegex, then use the specified Mode for the check, with bundle PolicyBundleId from Anchore user Username.

In YAML, the configuration configmap has a new section, which looks like:

policySelectors:
  - Selector:
      ResourceType: pod
      SelectorKeyRegex: breakglass
      SelectorValueRegex: true
    PolicyReference:
      Username: testuser
      PolicyBundleId: testing_bundle
    Mode: policy
  - Selector:
      ResourceType: namespace
      SelectorKeyRegex: name
      SelectorValueRegex: testing
    PolicyReference:
      Username: testuser
      PolicyBundleId: testing_bundle
    Mode: policy
  - Selector:
      ResourceType: namespace
      SelectorKeyRegex: name
      SelectorValueRegex: production
    PolicyReference:
      Username: testuser
      PolicyBundleId: production_bundle
    Mode: policy 
  - Selector:
      ResourceType: image
      SelectorKeyRegex: .*
      SelectorValueRegex: .*
    PolicyReference:
      Username: demouser
      PolicyBundleId: default

Next, I'll walk through configuring and deploying Anchore and the controller to behave like the example above. I'll set up two policies and two namespaces in Kubernetes to show how the selectors work. For a more detailed walk-through of the configuration and operation of the controller, see the GitHub project.

Installation and Configuration of the Controller

If you already have Anchore running in the cluster, or in a location reachable by the cluster, that will work; you can skip ahead to the user and policy setup and continue from there.

Anchore Engine install requirements:

  • A running Kubernetes cluster, v1.9+
  • The kubectl tool with configured access (this may require some RBAC configuration depending on your environment)
  • Enough resources to run Anchore Engine (a few cores and 4GB+ of RAM are recommended)

Install Anchore Engine

1. Install Anchore Engine in the cluster. There is no requirement that the installation be in the same Kubernetes cluster, or in any Kubernetes cluster at all; running it there is simply convenient for this walk-through.

helm install --name anchore stable/anchore-engine
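
Before continuing, it is worth waiting for the engine pods to come up. A quick way to watch them (assuming the release was installed into the default namespace, as above):

kubectl get pods --watch
# press Ctrl-C once all of the anchore-engine pods report Running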

2. Run a CLI container so you can easily query Anchore directly to configure a user and policy

kubectl run -i -t anchorecli --image anchore/engine-cli --restart=Always --env ANCHORE_CLI_URL=http://anchore-anchore-engine-api.default.svc.cluster.local:8228 --env ANCHORE_CLI_USER=admin --env ANCHORE_CLI_PASS=foobar

3. From within the anchorecli container, create a new account in anchore

anchore-cli account create testing

4. Add a user to the account with a set of credentials (you’ll need these later)

anchore-cli account user add --account testing testuser testuserpassword

5. As the new user, analyze some images; I'll use nginx and alpine in this walk-through and use them to test the controller later.

anchore-cli --u testuser --p testuserpassword image add alpine
anchore-cli --u testuser --p testuserpassword image add nginx 
anchore-cli --u testuser --p testuserpassword image list
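
Analysis runs asynchronously, so the images may show as "analyzing" at first. If you want to block until analysis completes before evaluating policies, anchore-cli provides a wait subcommand (optional, shown here purely as a convenience):

anchore-cli --u testuser --p testuserpassword image wait alpine
anchore-cli --u testuser --p testuserpassword image wait nginx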

6. Create a file, testing_bundle.json:

{
    "blacklisted_images": [], 
    "comment": "testing bundle", 
    "id": "testing_bundle", 
    "mappings": [
        {
            "id": "c4f9bf74-dc38-4ddf-b5cf-00e9c0074611", 
            "image": {
                "type": "tag", 
                "value": "*"
            }, 
            "name": "default", 
            "policy_id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6", 
            "registry": "*", 
            "repository": "*", 
            "whitelist_ids": [
                "37fd763e-1765-11e8-add4-3b16c029ac5c"
            ]
        }
    ], 
    "name": "Testing bundle", 
    "policies": [
        {
            "comment": "System default policy", 
            "id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6", 
            "name": "DefaultPolicy", 
            "rules": [
                {
                    "action": "WARN", 
                    "gate": "dockerfile", 
                    "id": "312d9e41-1c05-4e2f-ad89-b7d34b0855bb", 
                    "params": [
                        {
                            "name": "instruction", 
                            "value": "HEALTHCHECK"
                        }, 
                        {
                            "name": "check", 
                            "value": "not_exists"
                        }
                    ], 
                    "trigger": "instruction"
                }, 
                {
                    "action": "STOP", 
                    "gate": "vulnerabilities", 
                    "id": "b30e8abc-444f-45b1-8a37-55be1b8c8bb5", 
                    "params": [
                        {
                            "name": "package_type", 
                            "value": "all"
                        }, 
                        {
                            "name": "severity_comparison", 
                            "value": ">"
                        }, 
                        {
                            "name": "severity", 
                            "value": "high"
                        }
                    ], 
                    "trigger": "package"
                }
            ], 
            "version": "1_0"
        }
    ], 
    "version": "1_0", 
    "whitelisted_images": [], 
    "whitelists": [
        {
            "comment": "Default global whitelist", 
            "id": "37fd763e-1765-11e8-add4-3b16c029ac5c", 
            "items": [], 
            "name": "Global Whitelist", 
            "version": "1_0"
        }
    ]
}

7. Create a file, production_bundle.json:

{
    "blacklisted_images": [], 
    "comment": "Production bundle", 
    "id": "production_bundle", 
    "mappings": [
        {
            "id": "c4f9bf74-dc38-4ddf-b5cf-00e9c0074611", 
            "image": {
                "type": "tag", 
                "value": "*"
            }, 
            "name": "default", 
            "policy_id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6", 
            "registry": "*", 
            "repository": "*", 
            "whitelist_ids": [
                "37fd763e-1765-11e8-add4-3b16c029ac5c"
            ]
        }
    ], 
    "name": "production bundle", 
    "policies": [
        {
            "comment": "System default policy", 
            "id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6", 
            "name": "DefaultPolicy", 
            "rules": [
                {
                    "action": "STOP", 
                    "gate": "dockerfile", 
                    "id": "312d9e41-1c05-4e2f-ad89-b7d34b0855bb", 
                    "params": [
                        {
                            "name": "instruction", 
                            "value": "HEALTHCHECK"
                        }, 
                        {
                            "name": "check", 
                            "value": "not_exists"
                        }
                    ], 
                    "trigger": "instruction"
                }, 
                {
                    "action": "STOP", 
                    "gate": "vulnerabilities", 
                    "id": "b30e8abc-444f-45b1-8a37-55be1b8c8bb5", 
                    "params": [
                        {
                            "name": "package_type", 
                            "value": "all"
                        }, 
                        {
                            "name": "severity_comparison", 
                            "value": ">="
                        }, 
                        {
                            "name": "severity", 
                            "value": "high"
                        }
                    ], 
                    "trigger": "package"
                }
            ], 
            "version": "1_0"
        }
    ], 
    "version": "1_0", 
    "whitelisted_images": [], 
    "whitelists": [
        {
            "comment": "Default global whitelist", 
            "id": "37fd763e-1765-11e8-add4-3b16c029ac5c", 
            "items": [], 
            "name": "Global Whitelist", 
            "version": "1_0"
        }
    ]
}    

8. Add those policies for the new testuser:

anchore-cli --u testuser --p testuserpassword policy add testing_bundle.json
anchore-cli --u testuser --p testuserpassword policy add production_bundle.json

9. Verify that the alpine image will pass evaluation against the testing bundle but not the production bundle:

/ # anchore-cli --u testuser --p testuserpassword evaluate check alpine --policy testing_bundle
Image Digest: sha256:25b4d910f4b76a63a3b45d0f69a57c34157500faf6087236581eca221c62d214
Full Tag: docker.io/alpine:latest
Status: pass
Last Eval: 2019-01-30T18:51:08Z
Policy ID: testing_bundle

/ # anchore-cli --u testuser --p testuserpassword evaluate check alpine --policy production_bundle
Image Digest: sha256:25b4d910f4b76a63a3b45d0f69a57c34157500faf6087236581eca221c62d214
Full Tag: docker.io/alpine:latest
Status: fail
Last Eval: 2019-01-30T18:51:14Z
Policy ID: production_bundle

Now it's time to get the admission controller in place to use those policies.

Install and Configure the Admission Controller

1. Configure Credentials for the Admission controller to use

I'll configure a pair of credentials. The new format supports multiple credentials in the secret, so that the controller configuration can map policy bundles across multiple accounts. It is important that every username specified in the controller configuration has a corresponding entry in this secret, since it provides the password for API authentication.

Create a file, testcreds.json:

{
  "users": [
    { "username": "admin", "password": "foobar"},
    { "username": "testuser", "password": "testuserpassword"}
  ]
}

kubectl create secret generic anchore-credentials --from-file=credentials.json=testcreds.json

2. Add the stable anchore charts repository

helm repo add anchore-stable http://charts.anchore.io/stable
helm repo update

3. Create a custom test_values.yaml. In your editor, create a file named test_values.yaml in the current directory:

credentialsSecret: anchore-credentials
anchoreEndpoint: "http://anchore-anchore-engine-api.default.svc.cluster.local:8228"
requestAnalysis: true
policySelectors:
  - Selector:
      ResourceType: pod
      SelectorKeyRegex: ^breakglass$
      SelectorValueRegex: "^true$"
    PolicyReference:
      Username: testuser
      PolicyBundleId: testing_bundle
    Mode: breakglass
  - Selector:
      ResourceType: namespace
      SelectorKeyRegex: name
      SelectorValueRegex: ^testing$
    PolicyReference:
      Username: testuser
      PolicyBundleId: testing_bundle
    Mode: policy
  - Selector:
      ResourceType: namespace
      SelectorKeyRegex: name
      SelectorValueRegex: ^production$
    PolicyReference:
      Username: testuser
      PolicyBundleId: production_bundle
    Mode: policy
  - Selector:
      ResourceType: image
      SelectorKeyRegex: .*
      SelectorValueRegex: .*
    PolicyReference:
      Username: testuser
      PolicyBundleId: 2c53a13c-1765-11e8-82ef-23527761d060
    Mode: analysis
 

The 'name' values are used instead of full regexes in those instances because of a special case: if the KeyRegex is exactly the string "name", the controller looks at the resource's name (rather than a label or annotation) and matches the value regex against that name.

4. Install the controller via the chart

helm install --name controller anchore-stable/anchore-admission-controller -f test_values.yaml
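
Before registering the webhook, it is worth confirming that the controller deployed cleanly. The exact pod name depends on the release name, so here I simply check the release status and list the pods in the default namespace:

helm status controller
kubectl get pods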

5. Create the validating webhook configuration as indicated by the chart install output:

KUBE_CA=$(kubectl config view --minify=true --flatten -o json | jq '.clusters[0].cluster."certificate-authority-data"' -r)
cat > validating-webhook.yaml <<EOF
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: controller-anchore-admission-controller.admission.anchore.io
webhooks:
- name: controller-anchore-admission-controller.admission.anchore.io
  clientConfig:
    service:
      namespace: default
      name: kubernetes
      path: /apis/admission.anchore.io/v1beta1/imagechecks
    caBundle: $KUBE_CA
  rules:
  - operations:
    - CREATE
    apiGroups:
    - ""
    apiVersions:
    - "*"
    resources:
    - pods
  failurePolicy: Fail
# Uncomment this and customize to exclude specific namespaces from the validation requirement
#  namespaceSelector:
#    matchExpressions:
#      - key: exclude.admission.anchore.io
#        operator: NotIn
#        values: ["true"]
EOF

Then apply the generated validating-webhook.yaml:

kubectl apply -f validating-webhook.yaml
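
One assumption in this walk-through is that the testing and production namespaces from the scenario already exist. If they do not, create them before running the examples below:

kubectl create namespace testing
kubectl create namespace production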

Try It

To see it in action, run the alpine container in the testing namespace:

```
[zhill]$ kubectl -n testing run -it alpine --restart=Never --image alpine /bin/sh
If you don't see a command prompt, try pressing enter.
/ # exit
```

It works as expected, since that image passes policy evaluation for the testing bundle. Now try production, where the image should fail the policy check and be blocked:

```
[zhill]$ kubectl -n production run -it alpine --restart=Never --image alpine /bin/sh
Error from server: admission webhook "controller-anchore-admission-controller.admission.anchore.io" denied the request: Image alpine with digest sha256:25b4d910f4b76a63a3b45d0f69a57c34157500faf6087236581eca221c62d214 failed policy checks for policy bundle production_bundle
```

And to get around that, as defined in the configuration (test_values.yaml), adding the "breakglass=true" label allows the pod to run:

```
[zhill]$ kubectl -n production run -it alpine --restart=Never --labels="breakglass=true" --image alpine /bin/sh
If you don't see a command prompt, try pressing enter.
/ # exit 
```

Authoring Selector Rules

Selector rules are evaluated in the order they appear in the configmap value, so structure the rules from most specific to least specific. Note how in this example the breakglass rule comes first.

These selectors are filters on:

  • namespace names, labels and annotations
  • pod names, labels, and annotations
  • image references (pull string)

Each selector supports regex matching on both the key that provides the data and the data value itself. For image references, the key regex is ignored and can be an empty string; only the SelectorValueRegex is used for the match against the pull string.

Important: The match values are regex patterns, so for a full-string match you must bracket the string with ^ and $ (e.g. ^exactname$). If you do not include the begin/end anchors, the regex may match substrings rather than exact strings.
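
As a quick illustration of that pitfall, here is the same idea using grep -E as a stand-in for the controller's regex matching:

# unanchored pattern: "production" also matches inside "preproduction"
echo "preproduction" | grep -E "production"      # matches
# anchored pattern: only the exact string matches
echo "preproduction" | grep -E "^production$"    # no match
echo "production" | grep -E "^production$"       # matches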

Summary

The new controller features shown here let you specify flexible rules that determine controller behavior based on namespace and pod metadata as well as the image pull string, in order to support more sophisticated deployment strategies in Kubernetes.

As always, we love feedback, so drop us a line on Slack or file issues on GitHub.

The controller code is on GitHub, and so is the chart.

Identifying Vulnerabilities with Anchore

By far, one of the most common challenges Anchore helps its users solve is the identification of vulnerabilities within their Docker container images. Anchore's analysis tools inspect container images and generate a detailed manifest of the image, a virtual 'bill of materials' that includes official operating system packages, unofficial packages, configuration files, and language modules and artifacts. Anchore then evaluates policies against the analysis result, including vulnerability matches on the artifacts discovered in the image.

Quite often, Docker images contain both application and operating system packages. In this particular post, however, I will focus on identifying a specific vulnerable application package inside an image, walk through how it can be visualized within the Anchore Enterprise UI, and discuss what an approach to remediation might look like.

The vulnerability data you will see here comes from Snyk, as part of Anchore Enterprise. I recently wrote a post discussing Anchore's choice to add this high-quality vulnerability data source to our enterprise platform, which you can read here.

Sample Project Repo

I will be referencing the example GitHub repository located here. The idea is simple: create a war file with Maven that contains a vulnerable dependency, build a Docker image containing the war file, scan it with Anchore, and see which vulnerabilities are present. The intent is not to run this Java project or do anything outside the scope discussed above.

Viewing the Dependencies

When viewing the pom.xml file for this project I can clearly see which dependencies I will be including.

Dependencies section of pom.xml:

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.11</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>${jackson.version}</version>
    </dependency>
  </dependencies>

The vulnerable artifact I've added to this project can be found on Maven Central here and on GitHub here. I expect jackson-databind 2.9.7 to contain vulnerabilities.

Building the Project

Viewing the dependency tree

Since we are leveraging Maven to build this project, I can also use a Maven command to view the dependencies. The command mvn dependency:tree will display the dependency tree for this project as seen below.

 

mvn dependency:tree
[INFO] Scanning for projects...
[INFO] 
[INFO] ------------------------< Anchore:anchore-demo >------------------------
[INFO] Building Anchore Demo 1.0
[INFO] -----------------------------------------------------------------
[INFO] 
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ anchore-demo ---
[INFO] Anchore:anchore-demo:war:1.0
[INFO] +- junit:junit:jar:4.11:compile
[INFO] |  - org.hamcrest:hamcrest-core:jar:1.3:compile
[INFO] - com.fasterxml.jackson.core:jackson-databind:jar:2.9.7:compile
[INFO]    +- com.fasterxml.jackson.core:jackson-annotations:jar:2.9.0:compile
[INFO]    - com.fasterxml.jackson.core:jackson-core:jar:2.9.7:compile
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  1.117 s
[INFO] Finished at: 2019-01-22T19:39:56-05:00
[INFO] ------------------------------------------------------------------------

Building a war file

To create the war file as defined in the pom.xml, I will run the command mvn clean package. The important piece here is that the package phase generates the war file and places it in the target directory, as seen below.

target jvalance$ ls -la | grep anchore-demo-1.0.war
-rw-r--r--   1 jvalance  staff  1862404 Jan 22 19:50 anchore-demo-1.0.war

Building and Scanning Docker Images

For the purposes of this post, I just need to include the war file created in the previous step in a Docker image. A simple way to do this is shown in the Dockerfile below.

FROM openjdk:8-jre-alpine

# Copy target directory
COPY target/ app/
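
For reference, building and pushing this image might look like the following; the repository name here is an assumption, chosen to match the tag that gets scanned below:

docker build -t docker.io/jvalance/maven-demo:latest .
docker push docker.io/jvalance/maven-demo:latest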

Once I’ve built the image and pushed it to a container registry, I can now scan it with Anchore via the CLI command below.

anchore-cli image add docker.io/jvalance/maven-demo:latest

Viewing Vulnerabilities

Once Anchore has successfully completed its analysis of the image, I can check for non-OS vulnerabilities via the following CLI command:

anchore-cli image vuln docker.io/jvalance/maven-demo:latest non-os

## The above produces the following output:

Vulnerability ID                               Package                       Severity        Fix                     Vulnerability URL                                                   
SNYK-JAVA-COMFASTERXMLJACKSONCORE-72448        jackson-databind-2.9.7        High            ! <2.6.7.2              https://snyk.io/vuln/SNYK-JAVA-COMFASTERXMLJACKSONCORE-72448        
SNYK-JAVA-COMFASTERXMLJACKSONCORE-72448        jackson-databind-2.9.7        High            ! <2.6.7.2              https://snyk.io/vuln/SNYK-JAVA-COMFASTERXMLJACKSONCORE-72448        
SNYK-JAVA-COMFASTERXMLJACKSONCORE-72449        jackson-databind-2.9.7        High            ! <2.6.7.2              https://snyk.io/vuln/SNYK-JAVA-COMFASTERXMLJACKSONCORE-72449        
SNYK-JAVA-COMFASTERXMLJACKSONCORE-72449        jackson-databind-2.9.7        High            ! <2.6.7.2              https://snyk.io/vuln/SNYK-JAVA-COMFASTERXMLJACKSONCORE-72449        
SNYK-JAVA-COMFASTERXMLJACKSONCORE-72451        jackson-databind-2.9.7        High            ! <2.6.7.2              https://snyk.io/vuln/SNYK-JAVA-COMFASTERXMLJACKSONCORE-72451        
SNYK-JAVA-COMFASTERXMLJACKSONCORE-72451        jackson-databind-2.9.7        High            ! <2.6.7.2              https://snyk.io/vuln/SNYK-JAVA-COMFASTERXMLJACKSONCORE-72451        
SNYK-JAVA-COMFASTERXMLJACKSONCORE-72882        jackson-databind-2.9.7        High            ! >=2.0.0 <2.9.8        https://snyk.io/vuln/SNYK-JAVA-COMFASTERXMLJACKSONCORE-72882        
SNYK-JAVA-COMFASTERXMLJACKSONCORE-72882        jackson-databind-2.9.7        High            ! >=2.0.0 <2.9.8        https://snyk.io/vuln/SNYK-JAVA-COMFASTERXMLJACKSONCORE-72882        
SNYK-JAVA-COMFASTERXMLJACKSONCORE-72883        jackson-databind-2.9.7        High            ! >=2.0.0 <2.9.8        https://snyk.io/vuln/SNYK-JAVA-COMFASTERXMLJACKSONCORE-72883        
SNYK-JAVA-COMFASTERXMLJACKSONCORE-72883        jackson-databind-2.9.7        High            ! >=2.0.0 <2.9.8        https://snyk.io/vuln/SNYK-JAVA-COMFASTERXMLJACKSONCORE-72883        
SNYK-JAVA-COMFASTERXMLJACKSONCORE-72884        jackson-databind-2.9.7        High            ! >=2.0.0 <2.9.8        https://snyk.io/vuln/SNYK-JAVA-COMFASTERXMLJACKSONCORE-72884        
SNYK-JAVA-COMFASTERXMLJACKSONCORE-72884        jackson-databind-2.9.7        High            ! >=2.0.0 <2.9.8        https://snyk.io/vuln/SNYK-JAVA-COMFASTERXMLJACKSONCORE-72884 

I also have the option to log in and view the vulnerabilities via the UI.


By clicking on any of the links on the far right, I can immediately be taken to Snyk’s Vulnerability DB to view more information. Example: SNYK-JAVA-COMFASTERXMLJACKSONCORE-72448.

Snyk's vulnerability database.

For this particular vulnerability, Snyk offers remediation advice at the bottom of the page, which states: "Upgrade com.fasterxml.jackson.core:jackson-databind to version 2.6.7.2, 2.7.9.5, 2.8.11.3, 2.9.8 or higher."

Given that there are twelve known vulnerabilities found within this image, it is a best practice for a security team to go through each one and decide with the developer how best to triage it. For the simplicity of this post, if I follow the suggested remediation guidance above and upgrade my vulnerable dependency to 2.9.8, rebuild the war file, rebuild the Docker image, and scan it with Anchore, this particular vulnerability should no longer be present.

Quick Test

mvn dependency:tree output:

Anchore:anchore-demo:war:1.0
[INFO] +- junit:junit:jar:4.11:compile
[INFO] |  - org.hamcrest:hamcrest-core:jar:1.3:compile
[INFO] - com.fasterxml.jackson.core:jackson-databind:jar:2.9.8:compile
[INFO]    +- com.fasterxml.jackson.core:jackson-annotations:jar:2.9.0:compile
[INFO]    - com.fasterxml.jackson.core:jackson-core:jar:2.9.8:compile

Once I've repeated the steps shown above to rebuild the war file, rebuild the Docker image, and scan the newly built image with Anchore, I can check whether the vulnerability discussed above is still present.
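
Condensed, those repeated steps might look like this, assuming jackson.version has been bumped to 2.9.8 in pom.xml (the 2.9.8 image tag is just an assumed convention to distinguish the rebuilt image):

# rebuild the war file with the upgraded dependency
mvn clean package
# rebuild and push the Docker image
docker build -t docker.io/jvalance/maven-demo:2.9.8 .
docker push docker.io/jvalance/maven-demo:2.9.8
# re-analyze with Anchore and re-check non-os vulnerabilities
anchore-cli image add docker.io/jvalance/maven-demo:2.9.8
anchore-cli image vuln docker.io/jvalance/maven-demo:2.9.8 non-os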

Overview of vulnerabilities

Anchore showing if vulnerability is present.

None present.

I can also view the changelog for this image to get a better sense of the modification I just made.

View changelog in Anchore to confirm modification made to vulnerability.

Below I can specifically see the version change I made to the jackson-databind library.

A visual of version change made.

 

Conclusion

This was an intentionally simple example of how a vulnerable non-os package within a Docker image can be identified and fixed with Anchore. However, you can see how easily a vulnerable package can potentially wreak havoc if the appropriate checks are not in place. In practice, Docker image scanning should be a mandatory step in a CI pipeline, and development and security teams should maintain open lines of communication when vulnerabilities are discovered within images, and then move swiftly to apply the appropriate fixes.

5 CI/CD Platforms Leveraging Docker Container Technology

As containers have exploded onto the IT landscape over the last few years, more and more companies are turning to Docker to provide a quick and effective means to release software at a faster pace.

This shift has caused several Continuous Integration and Continuous Delivery (CI/CD) tools and companies to strategically create and weave new container-native solutions into their platforms.

In this blog, we’ll take a look at some of the top CI/CD players in the game and the shifts they’ve made to support their users in this brave new world of containers.

1. Jenkins

CloudBees' open source Jenkins CI/CD platform is arguably the most popular CI/CD platform available in 2019. Originally created in the early 2000s (as part of the Hudson project), Jenkins now has wide adoption across many types of organizations, helping teams automate tasks that would otherwise put a time-consuming strain on their software teams. Some of the most common uses for Jenkins include building projects, running tests, detecting bugs, analyzing code, and deploying projects.

Jenkins can be easily integrated with a Docker workflow where it manages the entire development pipeline of containerized applications.

In addition, with one of the largest open source communities among CI/CD providers, Jenkins has a wide variety of container-related plugins that deliver solutions ranging from source code management to security.

Bonus: With the Anchore plugin for Jenkins, users can quickly and easily scan Docker images in a Jenkins pipeline.

2. CircleCI

CircleCI is one of the most nimble and well-integrated of the CI platforms. Founded in 2011, CircleCI provides a state-of-the-art platform for integration and delivery that has helped hundreds of thousands of teams across the globe release their code through build automation, test automation, and a comprehensive deployment process.

CircleCI can be conveniently configured to deploy code to a number of environments including AWS EC2, AWS CodeDeploy, AWS S3, and Google Container Engine (GKE).

CircleCI natively supports the ability to build, test, or run as many Docker containers as you'd like. Users can run any Docker commands and access public and private container registries for full control over the build environment. For convenience, CircleCI also maintains several Docker images. These images are typically extensions of official Docker images and include tools especially useful for CI/CD.

Like Jenkins, CircleCI has a robust set of integrations that cater to container users.

3. Codeship

Codeship is a CI/CD tool recently acquired by CloudBees that offers efficiency, simplicity, and speed all at the same time.

Teams can use Codeship to build, test, and deploy directly from a Bitbucket or GitHub project, and its concise set of features combines integration with delivery so that your code is deployed once test automation has cleared.

With Codeship Pro, the build pipeline runs in Docker containers. This lets users take advantage of features like easy migration (large parts of your docker-compose file can be reused to set up Codeship) and updates whenever the latest stable Docker version is available.

You can learn more about how Codeship works in a containerized environment.

4. GitLab

GitLab is a rapidly growing code management platform that offers both open source and enterprise solutions for issue management, code review, and continuous integration and deployment, all within a single dashboard. While the main GitLab offering is a web-based Git repository manager with features such as issue tracking, analytics, and a wiki, GitLab also offers a CI/CD component that lets you trigger builds, run tests, and deploy code with each commit or push. You can run build jobs in a virtual machine, in a Docker container, or on another server.

Of all the CI/CD platforms, GitLab has shown a particularly strong focus on containers, even creating the GitLab Container Registry, which makes it easy to store and share container images.

By building a number of toolsets that integrate seamlessly together and focusing on a growing base of container-native users, GitLab is definitely worth a look if containers are top of mind for your company.

Check out their docs to learn how to utilize Docker images within the GitLab suite of tools.

5. Codefresh

Codefresh is another CI/CD platform that has placed a heavy focus on its container-first user base, offering Docker-in-Docker as a service for building CI/CD pipelines, with each step of a pipeline running in its own container.

The Codefresh user interface is clear, smart, and easy to understand. You can launch a project and check its working condition as soon as the project is built and the image is created. You can also choose from a number of templates to smooth the migration of your current project to containers.

Codefresh puts a big focus on Kubernetes and has some neat Helm features too; Anchore's Helm chart is listed in the Codefresh UI.

With Codefresh's suite of tools, users can easily build, test, push, and deploy images, and take advantage of a built-in Kubernetes dashboard, Docker registry, and release management, making it much easier for container users to get work done quickly and efficiently.

Learn more about how Codefresh works with containers in their documentation.

Conclusion

With the growing move to containers in 2019, we can only expect CI/CD tools to place an even heavier focus on building solutions to support containers.

This shift to a container-friendly ecosystem has helped, and will continue to help, thousands of companies see decreases in their build times, test times, and time to release.