Deeper Analysis with Anchore

Since we announced Anchore 1.0 back in October, we have spent a great deal of time talking to our community users, partners and enterprises about their compliance and governance needs. Many of these conversations followed a similar pattern: initial excitement about Docker and container deployments, followed by concerns about security, and then the challenge of balancing the desire to support agile development and innovation with the need for compliance and security. We’ve heard that many of these users have a basic system in place to perform a first level of checks on their images, focused on CVEs, but they understand that this is not enough. In our conversations with these organizations, we spend a lot of time explaining that CVE scanning is only the tip of the iceberg, and many of our discussions then focus on how to go deeper into container inspection and analysis.

At Anchore, our focus has been to deliver tools and services that go below the surface to perform deep analysis on container images, and to allow organizations to define policies with rules governing security vulnerabilities, package whitelists and blacklists, configuration file contents, the presence of credentials in an image, manifest changes, exposed ports or any user-defined checks.

Last week we outlined a number of new features added to the Anchore Navigator, including deeper container scanning with the ability to report on Node.js NPM modules. Today we are announcing the latest release of both Anchore’s open source project and Anchore’s enterprise offering.

Over the coming weeks, we will deep dive into each of the new features in this release and outline the roadmap for the coming months.

We’ll highlight the three most significant features in the 1.0.3 release; you can get more details from the changelog in our GitHub repository.

Node.js NPM Support

In addition to the operating system packages and all files in the image, Anchore now reports on all Node.js NPM modules that are installed in the image. These software libraries are often overlooked: they are not covered by most security scanning tools and do not undergo the same level of scrutiny and governance as operating system packages, yet in many cases you’ll find more NPM packages in your image than operating system packages.
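To illustrate how this kind of reporting can work, here is a minimal sketch (not Anchore’s actual implementation) that walks an unpacked image filesystem and lists installed NPM packages by reading each package.json found under a node_modules directory. The function name and layout are illustrative assumptions:

```python
import json
import os


def find_npm_packages(rootfs):
    """Walk an unpacked image filesystem and report installed NPM packages.

    Looks for package.json files directly under any node_modules directory
    and returns (name, version) pairs. Scoped packages and other edge cases
    are deliberately ignored to keep the sketch short.
    """
    packages = []
    for dirpath, dirnames, _ in os.walk(rootfs):
        if os.path.basename(dirpath) != "node_modules":
            continue
        for dirname in dirnames:
            manifest = os.path.join(dirpath, dirname, "package.json")
            if not os.path.isfile(manifest):
                continue
            try:
                with open(manifest) as fh:
                    meta = json.load(fh)
            except (OSError, ValueError):
                continue  # unreadable or malformed manifest
            name, version = meta.get("name"), meta.get("version")
            if name and version:
                packages.append((name, version))
    return packages
```

Pointing this at a rootfs extracted with, say, `docker export` gives a quick inventory of the NPM layer that OS-package scanners never see.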

Node.js Data Feed

The enterprise offering builds on top of the NPM reporting in the open source project to allow organizations to build policies that govern the use of NPM modules in their container images. For example, an organization can blacklist specific modules, specify minimum versions, or even block the deployment of outdated modules.
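As an illustration of the kind of policy this enables, the following sketch (hypothetical, not Anchore’s policy engine) checks a list of discovered NPM modules against a blacklist and a set of minimum versions. It uses a deliberately naive dotted-integer comparison rather than full semver:

```python
def check_npm_policy(packages, blacklist=(), min_versions=None):
    """Evaluate a simple NPM policy against (name, version) pairs.

    Returns a list of human-readable violations. Version comparison is a
    naive dotted-integer compare, not full semver, for brevity.
    """
    min_versions = min_versions or {}

    def as_tuple(version):
        # "1.1.3" -> (1, 1, 3); non-numeric components are dropped
        return tuple(int(p) for p in version.split(".") if p.isdigit())

    violations = []
    for name, version in packages:
        if name in blacklist:
            violations.append("%s: blacklisted module" % name)
        floor = min_versions.get(name)
        if floor and as_tuple(version) < as_tuple(floor):
            violations.append(
                "%s: %s is older than required %s" % (name, version, floor))
    return violations
```

A CI gate could then fail the build whenever the returned list is non-empty.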

Advanced Content Policies

It is not enough to look only at the operating system packages and software packages such as NPM modules. It’s possible to have all of the latest operating system packages yet still have an image that contains security vulnerabilities or is otherwise not compliant with your operational, security or business policies. A great example of this was seen this summer, when a security researcher found source code and secrets (API keys) within a publicly accessible Vine container image.

In this release, we have added the ability to perform detailed checks against both the names and the contents of files. While this feature enables a wide variety of checks, one of the most interesting use cases is scanning an image for ‘secrets’: for example, searching for .CER or .PEM files that may contain private keys, looking for source code, or inspecting the contents of specific files for saved passwords or API keys.
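For a sense of how such a check might work, here is an illustrative Python sketch (not Anchore’s implementation) that flags files in an unpacked image whose names or contents look like secrets. The suffix and pattern lists are assumptions, not an exhaustive set:

```python
import os
import re

# File suffixes and content patterns that often indicate leaked secrets.
# Both lists are illustrative, not exhaustive.
SECRET_SUFFIXES = (".pem", ".cer", ".key")
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]?\w{16,}"),
    re.compile(r"(?i)password\s*[:=]\s*\S+"),
]


def scan_for_secrets(rootfs):
    """Flag files in an unpacked image that may contain secrets."""
    findings = []
    for dirpath, _, filenames in os.walk(rootfs):
        for filename in filenames:
            path = os.path.join(dirpath, filename)
            if filename.lower().endswith(SECRET_SUFFIXES):
                findings.append((path, "suspicious file extension"))
                continue
            try:
                with open(path, errors="ignore") as fh:
                    content = fh.read(65536)  # only inspect the head of large files
            except OSError:
                continue
            for pattern in SECRET_PATTERNS:
                if pattern.search(content):
                    findings.append((path, "matches %r" % pattern.pattern))
                    break
    return findings
```

Pattern matching like this is noisy in practice, so a real policy would pair it with whitelists for known false positives.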

These are just a few of the new features added in this release. We’ll cover these in more detail in the coming days. If you want to learn more please fill out the form below and our team will reach out to you.

Anchore Joins the Open Container Initiative

Today we formally announced that Anchore had joined the Open Container Initiative (OCI).

The OCI was established to develop standards for containers, initially focusing on the runtime format specification but later adding the container image format specification.

Container adoption is accelerating rapidly and the ecosystem is exploding with new vendors who are providing features such as orchestration, monitoring, deployment and reporting.

Standards are critical to the adoption of containers, ensuring that customers can choose their cloud provider, orchestration platform or monitoring tool without worrying about interoperability between these platforms and without being locked into one particular stack or vendor.

In the early days of the OCI, concerns were raised about the overhead sometimes seen with standards bodies, which can be bureaucratic and slow to reach agreement. There was a real concern that the standardization process might stifle innovation in a container market that had seen such rapid growth and adoption. The incredible progress made by the OCI within its first 18 months seems to have put those concerns to rest, and the OCI community is growing, with nearly all of the leading players in the container market participating in this important work.

The image format specification is of particular interest to Anchore. This format covers the low-level details of container images, including both the filesystem image and the associated metadata required to run it. Today Anchore’s container image scanning engine understands the low-level details of the Docker image format and is able to perform detailed analysis on these images. Over the coming months, Anchore will add support for the OCI image specification, allowing customers to perform analysis, compliance and certification tests on OCI images in addition to Docker images.

We are looking forward to contributing to the specification, especially in the areas of governance and compliance, and to providing open source tools and services that allow OCI images to be analyzed and validated.

Containers in Production, Is Security a Barrier?

Fintan Ryan – Redmonk – December 1, 2016


Over the last week, we have had the opportunity to work with an interesting set of data collected by Anchore (full disclosure: Anchore is a RedMonk client). Anchore collected this data by means of a user survey run in conjunction with DevOps.com. While the number of respondents is relatively small, at 338, there are some interesting questions asked, and a number of data points support wider trends we are seeing around container usage. With any data set of this nature, it is important to state that the survey results strictly reflect the members of the DevOps.com community.

The data set covered a number of areas, including container usage and plans, orchestration tools, operating system choices, CI tools and security. For this post, we will focus on the data around containers and CI.

Read the original and complete article on RedMonk.

How Fast Can You Add Image Scanning to Jenkins?

Last month we blogged about securing your Jenkins pipeline, showing how within 10 minutes you could add, for free, image scanning, analysis and compliance validation. Since then we’ve spoken to many organizations who have had the opportunity to add security to their CI/CD pipelines. It has also been pointed out that if you skip the marketing preamble, the whole process takes around 3 minutes before you are ready to analyze your first build.

So in this short blog, we want to see if we can set a record: how quickly can we really add image scanning to your CI/CD pipeline? This video was recorded on a virtual machine running Docker 1.11 and Jenkins 2.32.

Without caching or pre-loading images, our time is 2 minutes and 34 seconds from the start of the install through to kicking off the build. Can you beat that? In less time than it takes to make a coffee, you can secure your Jenkins pipeline.

Please tweet us at @anchore with the hashtag #SecureWithAnchore to let us know your times.

To learn more please contact us using the form below, or request a demo by clicking the button in the menu above.

Keeping Linux Containers Safe and Secure

Jason Baker – Opensource.com – October 4, 2016

Linux containers are helping to change the way that IT operates. In place of large, monolithic virtual machines, organizations are finding effective ways to deploy their applications inside Linux containers, providing for faster speeds, greater density, and increased agility in their operations.

While containers can bring a number of advantages from a security perspective, they come with their own set of security challenges as well. Just as with traditional infrastructure, it is critical to ensure that the system libraries and components running within a container are regularly updated in order to avoid vulnerabilities. But how do you know what is running inside of your containers? To help manage the full set of security challenges facing container technologies, a startup named Anchore is developing an open source project of the same name to bring visibility inside of Linux containers.

To learn more, I caught up with Andrew Cathrow, Anchore’s vice president of products and marketing, to discuss the open source project and the company behind it.

In a Nutshell, What is Anchore? How does the Toolset Work?

Anchore’s goal is to provide a toolset that allows developers, operations, and security teams to maintain full visibility of the ‘chain of custody’ as containers move through the development lifecycle, while providing the visibility, predictability, and control needed for production deployment. The Anchore engine consists of pluggable modules that can perform analysis (extraction of data and metadata from an image), queries (reporting against the container), and policy evaluation (where policies can be specified that govern the deployment of images).

While there are a number of scanning tools on the market, most are not open source. We believe that security and compliance products should be open source; otherwise, how could you trust them?

Anchore, in addition to being open source, has two other major differentiators that set it apart from the commercial offerings in the market.

First, we look beyond the operating system image. Scanning tools today concentrate on operating system packages, e.g. “Do you have any CVEs (security vulnerabilities) in your RPM or DEB packages?” While that is certainly important (you don’t want vulnerable packages in your image), the operating system packages are just the foundation on which the rest of the image is built. All layers need to be validated, including configuration files, language modules, middleware, etc. You can have all the latest packages, but a single incorrect configuration file can make the image insecure. The second differentiator is the ability to extend the engine by adding users’ own data, queries or policies.

Read the original and complete article on OpenSource.com.

Startup Nets $5 Million to X-ray & Secure Software Containers

Barb Darrow – Fortune – October 4, 2016


Anchore has $5 million in seed funding to attack knotty container issues.

Anchore, a startup that says it can ensure that software “containers” are safe, secure, and ready to deploy, is introducing its first product along with announcing $5 million in seed funding.

For non-techies, containers are an emerging way to package up all the building blocks in software—the file system, the tools, the core runtime—into a nice bundle, or container, that can then run on any sort of infrastructure. That means, theoretically at least, the container, as exemplified by the popular Docker, can work inside a company’s data center, on Amazon Web Services, or some other shared public cloud infrastructure. That’s a lot more flexible than before, when business software was pretty much welded to the underlying hardware.

Read the original and complete article on Fortune.

Confident Production Deployment With Anchore 1.0

It has been just a little over five months since Anchore opened its doors, and we’re happy to announce the general availability of Anchore 1.0, combining an open source platform for community participation, an on-premises enterprise offering with additional features, and the Anchore Navigator (anchore.com), a free service that provides an unparalleled level of visibility into the contents of container images.

As the adoption of containers continues to grow, enterprises are increasingly demanding more visibility and control of their container environments. Today we see operations, security and compliance teams looking to add a level of governance to container deployments that was lacking during the early gold rush. The most common approach today is container image scanning, which typically means scanning the operating system components of an image for security vulnerabilities (CVEs). While the need to scan an image for CVEs is undeniable, it should only be the first step, given that each image typically contains hundreds of operating system packages and thousands of files, along with application libraries and configuration files that are likely not part of the operating system.

Anchore 1.0 was designed to address this lack of transparency, allowing developers, operations and security teams to get visibility into the entire contents of their containers, far more than the surface CVE scans we have seen to date. Empowered with this detailed information, operations, security and compliance teams can define policies that govern the deployment of containers, including rules that cover security vulnerabilities, mandatory software packages, blacklisted software packages, required versions of software libraries, validated configuration files or any one of a hundred other tests that an enterprise may require to consider an image compliant.
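As a concrete illustration of rules like these, the sketch below (hypothetical, not Anchore’s policy format) evaluates an image’s installed OS packages against required and blacklisted package lists:

```python
def evaluate_image_policy(installed, required=(), blacklisted=()):
    """Check an image's installed OS packages against a simple policy.

    `installed` maps package name -> version, e.g. as extracted from an
    image's RPM or dpkg database. Returns (passed, violations).
    """
    violations = []
    for name in required:
        if name not in installed:
            violations.append("missing required package: %s" % name)
    for name in blacklisted:
        if name in installed:
            violations.append("blacklisted package present: %s" % name)
    return (not violations, violations)
```

Real policy engines layer many more rule types on top of this shape, such as version floors, CVE severity thresholds and configuration-file checks, but the pass/fail-with-reasons structure is the same.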

The need for visibility and compliance extends beyond point-in-time scanning of an image before deployment. In most cases, application images are built from base images downloaded from public registries. These images may be updated often, in many cases without any obvious indication that a change was made, let alone what was changed. End users face the age-old choice: stick with a known working but somewhat stale version, or use the latest, more feature-rich version and run the risk of security vulnerabilities, major bugs and overall compliance deviation.

Full transparency is no longer just a good option to have in your toolset, but a mandate for application development and operations teams alike. Using the most stable and secure baseline of an IT service should no longer translate to an antiquated version of the software. With the fast pace of innovation also comes risk, and companies, big and small, will benefit greatly from simply and easily uncovering and tracking all changes throughout the application development and production lifecycle.

Is Docker More Secure?

Over the last couple of years, much has been written about the security of Docker containers, with most of the analysis focusing on the comparison between containers and virtual machines.

Given the similar use cases addressed by virtual machines and containers, this is a natural comparison to make; however, I prefer to see the two technologies as complementary. Indeed, a large proportion of the containers deployed today run inside virtual machines; this is especially true of public cloud deployments such as Amazon’s EC2 Container Service (ECS) and Google Container Engine (GKE).

While we have seen a number of significant enhancements made to container runtimes to improve isolation, containers will continue to offer less isolation than traditional virtual machines for the foreseeable future. This is because in the container model each container shares the host kernel, so if a security exploit or kernel-related bug is triggered from a container, the host system and all running containers on that host are potentially at risk.

For use cases where absolute isolation is required, for example where an image may come from an untrusted source, virtual machines are the obvious solution. For this reason, multi-tenant systems such as public clouds and private Infrastructure as a Service (IaaS) platforms will tend to use virtual machines.

In a single-tenant use case, such as enterprise IT infrastructure, where the deployment and production pipeline can be designed and controlled with security in mind, containers offer a lightweight and simple mechanism for isolating workloads. This is the use case where we have seen exponential growth in container deployments. We are also starting to see crossover technologies, such as Intel’s Clear Containers, that run containers inside lightweight virtual machines, allowing the user to provide stronger isolation for a specific container when deemed necessary.

Within the last year or so we have seen container isolation techniques improve considerably through the use of features of the Linux kernel such as Namespaces, seccomp, cgroups, SELinux and AppArmor.

Recently Joerg Fritsch from Gartner published a research note and blog where he made the following statement:

“Applications deployed in containers are more secure than applications deployed on the bare OS”.

Following on from this note Nathan McCauley from Docker wrote a blog that dug further into this topic and referenced NCC group’s excellent white paper on Hardening Linux Containers.

The high-level message here is that “you are safer if you run all your apps in containers.” More specifically, the idea is to take applications that you would normally run on ‘bare metal’ and deploy them as containers on bare metal. This approach adds a number of extra layers of protection around these applications, reducing the attack surface, so in the case of a successful exploit against the application the damage would be limited to the container, reducing potential exposure of the other applications running on that system.

While I would agree with this recommendation, there are, as always, a number of caveats to consider. The most important of these relates to the contents of the container.

When you deploy a container you are not just deploying an application binary in a convenient packaging format, you are often deploying an image that contains an operating system runtime, shared libraries, and potentially some middleware that supports the application.

In our experience, a large proportion of end users build their containers from full operating system base images that often include hundreds of packages and thousands of files. While deploying your application within a container provides extra levels of isolation and security, you must ensure that the container is both well constructed and well maintained. In the traditional deployment model, all applications use a common set of shared libraries, so, for example, when the C runtime library glibc on the host is updated, all the applications on that system use the new library. In the container model, however, each container includes its own runtime libraries, which need to be updated individually. In addition, you may find that these containers include more libraries and binaries than are required; for example, does an nginx container need the mount binary?

As always, nothing comes without a cost. Each application you containerize needs to be maintained and monitored. But it’s clear that the advantages in security and agility provided by Docker and containers in general far outweigh the administrative overhead, which can be addressed with the appropriate policies and tooling; this is where Anchore can help.

Anchore provides tooling and a service that gives unparalleled insight into the contents of your containers, whether you are building your own container images or using images from third parties. Using Anchore’s tools an organization can gain deep insight into the contents of their containers and define policies that are used to validate the contents of those containers before they are deployed. Once deployed, Anchore will be able to provide proactive notification if a container that was previously certified based on your organization’s policies moves out of compliance – for example, if a security vulnerability is found in a package you have deployed.

So far, container scanning solutions have concentrated on the operating system packages: inspecting the RPM or dpkg databases, reporting on the versions of packages installed, and correlating those with known CVEs. However, the operating system packages are just one of many components in an image, which may also include configuration files, non-packaged files on the filesystem, and software artifacts such as pip, Gem, NPM and Java archives. Compliance with your deployment standards means more than just the latest packages; it means the right packages (required packages, no blacklisted packages), the right software artifacts, the right configuration files, and so on.

Our core engine has already been open sourced and our commercial offering will be available later this month.

Future of Container Technology & Open Container Initiative

Open Container Initiative – August 23, 2016

The Open Container Initiative (OCI), an open source project for creating open industry standards around container formats and runtime, today announced that Anchore, ContainerShip, EasyStack and Replicated have joined The Linux Foundation and the Open Container Initiative.

Today’s enterprises demand portable, agile and interoperable developer and sysadmin tools. The OCI was launched with the express purpose of developing standards for the container format and runtime that will give everyone the ability to fully commit to container technologies today without worrying that their current choice of infrastructure, cloud provider or DevOps tool will lock them in. Their choices can instead be guided by choosing the best tools for the applications they are building.

“The rapid growth and interest in container technology over the past few years has led to the emergence of a new ecosystem of startups offering container-based solutions and tools,” said Chris Aniszczyk, Executive Director of the OCI. “We are very excited to welcome these new members as we work to develop standards that will aid container portability.”

The OCI currently has nearly 50 members. Anchore, ContainerShip, EasyStack and Replicated join existing members including Amazon Web Services, Apcera, Apprenda, AT&T, ClusterHQ, Cisco, CoreOS, Datera, Dell, Docker, EMC, Fujitsu Limited, Goldman Sachs, Google, Hewlett Packard Enterprise, Huawei, IBM, Infoblox, Intel, Joyent, Kismatic, Kyup, Mesosphere, Microsoft, Midokura, Nutanix, Odin, Oracle, Pivotal, Polyverse, Portworx, Rancher Labs, Red Hat, Resin.io, Scalock, Sysdig, SUSE, Twistlock, Twitter, Univa, Verizon Labs, VMware and Weaveworks.

Read the complete and original announcement on Open Container Initiative.

How are Containers Really Being Used?

Our friends at ContainerJournal and DevOps.com are running a survey to learn how you are using containers today and your plans for the future.

We’ve seen a number of surveys over the last couple of years and heard some incredible statistics on the growth of Docker usage and of containers in general; for example, we learned last week that Docker Hub had reached over 5 billion pulls. The ContainerJournal survey digs deeper to uncover details about the whole stack that users are running.

For example: who do you get your container runtime from? Where do you store your images? How do you handle orchestration?

Some of the questions are especially interesting to the team here at Anchore, as they cover how you create and maintain the images you use. For example, do you pull application images straight from Docker Hub, do you pull base operating system images and add your own application layers, or do you build your own operating system images from scratch?

And no matter how you initially obtain your images, how do you ensure that they contain the right content, from the lowest layer of the image (the operating system) all the way up to the application tier? While it’s easy to build and pull images, maintaining those images is another matter, e.g. how often are they updated?

Please head over to ContainerJournal and fill out the survey by clicking the button below.