The Real Difference Between CI & CD? Confidence

As an industry, when we talk about DevOps we tend to lump the terms CI and CD together as if they were exactly the same thing. Looking back on our blogs and collateral, we are certainly guilty of that. But there are a number of differences between CI and CD, and their implications are significant. In this blog we want to set the record straight, discuss those differences, and talk about an interesting new project that promises to simplify CI and CD for Kubernetes environments.

There are three terms that we will cover:

  • Continuous Integration
  • Continuous Delivery
  • Continuous Deployment

These three practices share a common foundation but differ in scope: how far they go in automating the build and release process.

Continuous Integration (CI)

Over recent years Continuous Integration has become the norm for engineering teams: every merge to a source control repository such as Git triggers an automatic build of the application, which then passes through automated testing. If the build fails, or if the automated testing shows regressions, the commit does not get accepted into the master branch. This methodology improves the overall quality of a product by finding problems early in the cycle.

For CI to work you need extensive and robust automated testing; successful compilation is not enough. Your application needs to run through an extensive set of automated tests to ensure that each small, incremental change does not break existing functionality. This model requires more upfront work writing tests alongside your code, often writing tests before the code is implemented, but the investment pays off in quality, velocity and resources, as the need for long manual QA cycles is drastically reduced. Automated testing should be quick so a developer can address issues rapidly and then get to work on the next test, bugfix or feature.
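
To make this concrete, here is a minimal sketch of such a pipeline as a declarative Jenkinsfile. The make targets are placeholders for whatever build and test commands your project actually uses.

```groovy
// Minimal CI sketch: every commit triggers a build followed by the
// automated test suite; a failure in either stage fails the build and
// keeps the change out of the master branch.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'   // placeholder: your compile/build step
            }
        }
        stage('Automated Tests') {
            steps {
                sh 'make test'    // placeholder: your automated test suite
            }
        }
    }
}
```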

In most of our users’ deployments we see the Anchore scan happening after a container is built and before automated testing. This allows any security and compliance issues to be flagged before automated testing, saving time and resources: there is no point testing an application that will later fail security and compliance checks. Some users run Anchore after automated testing, arguing that there is no point running a security and compliance check on broken code. Anchore is flexible enough to run in either model; we recommend running the shortest tests first, whether that is Anchore or your automated test suite.
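
As an illustration of that ordering, the sketch below places the Anchore scan between the image build and the test stage, using the anchore step from our Jenkins plugin (covered at the end of this post). The repository and image names are made up for the example.

```groovy
// Sketch: scan the image immediately after it is built so that security
// and compliance failures surface before any test time is spent on it.
pipeline {
    agent any
    stages {
        stage('Build Image') {
            steps {
                sh 'docker build -t myrepo/myapp:$BUILD_NUMBER .'
            }
        }
        stage('Anchore Scan') {
            steps {
                // The plugin reads the list of images to scan from a file
                writeFile file: 'anchore_images', text: "myrepo/myapp:${env.BUILD_NUMBER}"
                anchore name: 'anchore_images'
            }
        }
        stage('Automated Tests') {
            steps {
                sh 'make test'   // placeholder: your automated test suite
            }
        }
    }
}
```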

Continuous Delivery (CD)

Continuous Delivery builds on top of the CI process.

CI produces no deliverable; its result is a well-tested codebase in your source control system. CD goes a step further by automating the next stage of the release process, performing all the steps necessary to prepare for a deployment, such as building and packaging the application. No code is deployed to production, but because all the preparatory steps have been performed, the software can be released or deployed as required. The next step, the actual deployment, is manual.

When running with a CD model there is no need to deploy every build; you make the business decision about when to release or promote your software. The beauty of this model is that you can deploy at any time.
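
As a rough sketch, a continuous delivery pipeline in declarative Jenkinsfile form might look like the following: every green build is packaged into a deployable artifact, and the only manual step is the approval guarding the deploy. The make targets and deploy script are placeholders.

```groovy
// Sketch of continuous delivery: every passing build is packaged and
// ready to ship, but the deployment itself waits for a human decision.
pipeline {
    agent any
    stages {
        stage('Build & Test') {
            steps {
                sh 'make build test'        // placeholder build and test commands
            }
        }
        stage('Package') {
            steps {
                sh 'make package'           // produce the deployable artifact
                archiveArtifacts artifacts: 'dist/**'
            }
        }
        stage('Deploy') {
            steps {
                // The one manual step: a human approves the release
                input message: 'Release this build to production?'
                sh './deploy.sh production' // placeholder deploy script
            }
        }
    }
}
```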

Continuous Deployment

Continuous Deployment goes one step further: every commit to the source code repository for a given project is built, tested, packaged and deployed into production automatically. There are no manual steps and no final approval; if the software passes all testing, it is deployed.
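
Expressed against the delivery sketch above, the difference is small: the input approval disappears and the deploy stage runs on every green build. The commands remain placeholders.

```groovy
// Sketch of continuous deployment: identical to the delivery pipeline
// except there is no approval gate; passing all stages is the only
// requirement for a production deploy.
pipeline {
    agent any
    stages {
        stage('Build & Test') {
            steps { sh 'make build test' }
        }
        stage('Package') {
            steps { sh 'make package' }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh production'   // runs automatically, no input step
            }
        }
    }
}
```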

While the step from continuous delivery to continuous deployment may only involve removing a single manual approval, there are huge organizational implications, not least of which is the need for robust operational, monitoring and support practices. For this reason, most organizations stop at continuous delivery until they have confidence in their infrastructure, testing and procedures.

Jenkins X

The name Jenkins is synonymous with CI/CD, and in survey after survey we see its continued domination of the space. But the industry is changing rapidly: cloud deployments are now the norm, and organizations are deploying microservices, implementing DevOps practices and generally moving to a ‘cloud native’ philosophy. It’s fair to say that even with recent updates Jenkins is showing its age (or perhaps its maturity).

Recently the Jenkins community announced Jenkins X, the next generation of Jenkins, which focuses on the cloud, and more specifically on Kubernetes, with built-in DevOps best practices, extensive automation and tooling. Over the years we have become used to writing Dockerfiles, Jenkinsfiles and now Helm charts, and then piecing together tools to automate builds and deployment. The goal of Jenkins X is to automate this work and let developers concentrate on building applications, not infrastructure. You can read more about Jenkins X in their project announcement blog.

This week the Jenkins X team announced the release of their add-on for the Anchore Engine. With a single command, jx create addon anchore, Anchore scanning is automatically added to your Jenkins X pipelines, allowing every image built to be scanned for security vulnerabilities. You can then simply run jx get cve to produce a report of the security vulnerabilities in your environments.

This is just the first step in integrating security and compliance more deeply into Jenkins X. Integrating two open source projects opens up a number of interesting possibilities:

  • Policy-based scanning:
    Looking at more than just CVEs – adding support for policy checks that can include checks for secrets (keys, passwords), required packages, blacklisted packages, Dockerfile best practices, etc.
  • Automating remediation:
    Once Anchore has scanned an image, it can continually track the policy and security status of that image. For example, suppose a new vulnerability is discovered in an application that has already been built and deployed in your Kubernetes infrastructure. Anchore can send a webhook to notify Jenkins X that a vulnerability has been discovered and that a fix has been published by the operating system or library vendor. What if Jenkins X then automatically triggered a rebuild and test cycle to remediate the issue? (See the sketch after this list.)
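
Nothing like this ships today, but to show the shape of the idea, here is a purely speculative sketch using the community Generic Webhook Trigger plugin for plain Jenkins. The payload fields (image, cve) and the token are hypothetical and not part of any current Anchore or Jenkins X API.

```groovy
// Speculative sketch: rebuild and retest an image when a scanner posts
// a webhook saying a fix is available. The JSON paths below are made up.
pipeline {
    agent any
    triggers {
        GenericTrigger(
            genericVariables: [
                [key: 'IMAGE', value: '$.image'],   // image flagged by the scanner
                [key: 'CVE',   value: '$.cve']      // the vulnerability identifier
            ],
            token: 'anchore-remediation'            // shared secret in the hook URL
        )
    }
    stages {
        stage('Rebuild & Retest') {
            steps {
                echo "Rebuilding ${env.IMAGE} to pick up the fix for ${env.CVE}"
                sh 'docker build --pull -t $IMAGE .'   // --pull fetches the patched base image
                sh 'make test'                         // placeholder test suite
            }
        }
    }
}
```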

We’re excited to work with the Jenkins X team and encourage you to check out Jenkins X and the Anchore integration. But you don’t need to be running Jenkins X to take advantage of Anchore’s security and compliance scanning: you can add Anchore to your existing Jenkins projects today, whether you use freestyle or pipeline syntax, with our free Jenkins plugin.

Why CVE Scanning Still Isn’t Enough

On Thursday the Node Package Manager team removed a package from the NPMJS.org registry. You can read more about the discovery in this BleepingComputer article or in the incident report on the npm blog. The package was found to carry a malicious payload which provided a framework for a remote attacker to execute arbitrary code. While the module was removed from the NPM registry, you may already have it in your environment.

We saw something very similar last year and blogged about adding an Anchore policy to blacklist that node module. You can follow the same steps to block the getcookies module today. This will stop future deployments of images with this vulnerability and allow you to scan previously created images to ensure they do not contain this malicious content.
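
For reference, a trimmed, illustrative fragment of such a policy bundle is shown below. Treat the exact gate and trigger names as assumptions to verify against the policy documentation for your version of Anchore.

```json
{
  "id": "blacklist-getcookies",
  "version": "1_0",
  "name": "Block the malicious getcookies npm module",
  "policies": [
    {
      "id": "default",
      "version": "1_0",
      "name": "Default policy",
      "rules": [
        {
          "gate": "packages",
          "trigger": "blacklist",
          "params": [
            { "name": "name", "value": "getcookies" }
          ],
          "action": "STOP"
        }
      ]
    }
  ]
}
```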

As of today, there is no CVE published for this vulnerability in the NIST National Vulnerability Database (NVD), and since this module was not packaged by operating system distributions such as Red Hat and Debian, it will not appear in their vulnerability feeds. It can, however, easily be caught by a custom policy check in Anchore Cloud or Anchore Engine.

Two weeks ago we blogged about adding scanning to your container infrastructure even if you were not yet ready to consider policy checks or some form of gating in your CI/CD pipeline. This incident provides a great example of why scanning your environment now will pay off later.

The Container Chronicle Volume 2

When we launched the Container Chronicle newsletter we planned to publish monthly, to ensure there was enough content for a worthwhile read without it becoming too long. Well, two weeks later, there was so much interesting news, even before we covered the KubeCon announcements, that we decided to release early.

New Month, New Releases!

Red Hat announced the release of Red Hat Enterprise Linux 7.5, which includes a number of container-related improvements, including full support for OverlayFS, which becomes the default storage driver for containers, replacing device-mapper. Buildah is now fully supported, allowing you to build Docker- and OCI-compliant container images without the need for a container runtime and, more significantly, without any Docker tools. If you are wondering how Buildah should be pronounced, then you really need to hear it from Red Hat’s Dan Walsh.

Two of the most popular Linux distributions for developers announced major releases: Fedora 28 and Ubuntu 18.04 (Bionic Beaver) which is the latest long term support release from Canonical.

Microsoft announced the general availability of Azure Container Instances (ACI) which were initially previewed in the summer of 2017, allowing users to run containers directly without worrying about the underlying host OS or creating and managing clusters.

Netflix open sourced its Titus container management platform, which is built on top of Apache Mesos. While Titus is unlikely to challenge Kubernetes in the mainstream market, opening up the codebase allows the wider community to benefit from the extensive operational experience that Netflix has codified in Titus.

DigitalOcean announced an early access program for their managed Kubernetes service.

The Rancher team announced the release of Rancher 2.0 which includes the Rancher Kubernetes Engine (RKE) in addition to a unified cluster management system for managing RKE, Google Kubernetes Engine, Azure Container Service and Amazon EKS from a single interface.

News from KubeCon EMEA

Over 4,300 developers and operators attended KubeCon in Copenhagen, and there were a number of exciting announcements, including:

Red Hat Operator Framework

The CoreOS team at Red Hat announced the release of the Operator Framework, based on the Operator concept they introduced in 2016. The Framework provides a toolkit and services to help manage and deploy Kubernetes applications at scale.

Google Had a Number of Announcements

  • gVisor, a new container runtime designed to provide more isolation than regular containers but with less overhead than a virtual machine. Unlike the Kata Containers project (previously Intel Clear Containers), which relies on a lightweight virtualization approach, gVisor provides a userspace kernel implementation that exposes most Linux syscalls to the container.
  • The beta release of Stackdriver Kubernetes Monitoring, a solution that integrates signals from native Kubernetes sources, including metrics, events and logs, as well as from Prometheus instrumentation.

Buoyant announced the 1.0 release of the Linkerd service mesh.

Bitnami announced the 1.0 release of Kubeless, their Kubernetes-native serverless framework, in addition to the 1.0 release of Kubeapps, which provides a simple way to launch and manage Kubernetes applications using Helm.

Tip: Head over to the Kubeapps public hub to find a simple way to install Anchore Engine.