Benefits of Static Image Inspection and Policy Enforcement

In this post, I will dive deeper into the key benefits of a comprehensive container image inspection and policy as code framework.
A couple of key terms:

  • Comprehensive Container Image Inspection: Complete analysis of a container image to identify its entire contents: OS & non-OS packages, libraries, licenses, binaries, credentials, secrets, and metadata. Importantly: storing this information in a Software Bill of Materials (SBOM) for later use.
  • Policy as Code Framework: a structure and language for policy rule creation, management, and enforcement represented as code. Importantly: This allows for software development best practices to be adopted such as version control, automation, and testing.

What Exactly Comes from a Complete Static Image Inspection?

A deeper understanding. Container images are complex and require a complete analysis to fully understand all of their contents. An inspection can uncover a great deal of useful data. Some examples are:

  • Ports specified via the EXPOSE instruction
  • Base image / Linux distribution
  • Username or UID to use when running the container
  • Any environment variables set via the ENV instruction
  • Secrets or keys (ex. AWS credentials, API keys) in the container image filesystem
  • Custom configurations for applications (ex. httpd.conf for Apache HTTP Server)

In short, a deeper insight into what exactly is inside of container images allows teams to make better decisions on what configurations and security standards they would prefer their production software to have.

How to Use the Above Data in Context?

While we can likely agree that access to the above data for container images is a good thing from a visibility perspective, how can we use it effectively to produce higher-quality software? The answer is through policy management.

Policy management allows us to create and edit the rules we would like to enforce. Oftentimes these rules fall into one of three buckets: security, compliance, or best practice. Typically, a policy author creates sets of rules and describes the circumstances under which certain behaviors or properties are allowed or not. Unfortunately, authors are often restricted to setting policy rules with a GUI or even a Word document, which makes rules difficult to transfer, repeat, version, or test. Policy as code solves this by representing policies in human-readable text files, allowing them to adopt software practices such as version control, automation, and testing. Importantly, a policy as code framework includes a mechanism to enforce the rules created.

With containers, standardizing on a common set of best practices for software vulnerabilities, package usage, secrets management, Dockerfiles, etc. is an excellent place to start. Some examples of policy rules are:

  • Should all Dockerfiles have an effective USER instruction? Yes. If undefined, warn me.
  • Should the FROM instruction only reference a set of “trusted” base images? Yes. If not from the approved list, fail this policy evaluation.
  • Are AWS keys ever allowed inside of the container image filesystem? No. If they are found, fail this policy evaluation.
  • Are containers coming from DockerHub allowed in production? No. If they attempt to be used, fail this policy evaluation.

The above examples demonstrate how the Dockerfile analysis and secrets found during the image inspection can prove extremely useful when creating policy. Most importantly, all of these policy rules are created to map to information available prior to running a container.

Integrating Policy Enforcement

With policy rules clearly defined as code and shared across multiple teams, the enforcement component can be freely integrated into the Continuous Integration / Continuous Delivery workflow. The concept of “shifting left” is important to follow here: the more testing and checks individuals and teams incorporate further left in their software development pipelines, the less costly it is when changes need to be made. Simply put, prevention is better than a cure.

Integration as Part of a CI Pipeline

Incorporating container image inspection and policy rule enforcement into new or existing CI pipelines immediately adds security and compliance requirements as part of the build, blocking important security risks from ever making their way into production environments. For example, if a policy rule explicitly disallows a container image with a root user defined in the Dockerfile, failing the build pipeline of a non-compliant image before it is pushed to a production registry is a fundamental quality gate to implement. Developers are then typically forced to remediate the issue that caused the build failure and modify their commit to reflect compliant changes.

Here is how this process works with Anchore:

Anchore provides an API endpoint where the CI pipeline can send an image for analysis and policy evaluation. This provides simple integration into any workflow, agnostic of the CI system being used. When the policy evaluation is complete, Anchore returns a PASS or FAIL output based on the policy rules defined. From this, the user can choose whether or not to fail the build pipeline.

Integration with Kubernetes Deployments

Adding an admission controller to gate execution of container images in Kubernetes in accordance with policy standards can be a critical method to validate what containers are allowed to run on your cluster. Very simply: admit the containers I trust, reject the ones I don’t. Some examples of this are:

  • Reject an image if it is being pulled directly from DockerHub.
  • Reject an image if it has high or critical CVEs that have fixes available.

This integration allows Kubernetes operators to enforce policy and security gates for any pod that is requested on their clusters before they even get scheduled.

This same pre-runtime gating is available with Anchore and the Anchore Kubernetes Admission Controller.

The key takeaway from both of these points of integration is that they are occurring before ever running a container image. Anchore provides users with a full suite of policy checks which can be mapped to any detail uncovered during the image inspection. When discussing this with customers, we often hear, “I would like to scan my container images for vulnerabilities.” While this is a good first step to take, it is the tip of the iceberg when it comes to what is available inside of a container image.

Conclusion

With immutable infrastructure, once a container image artifact is created, it does not change. To make changes to the software, good practice tells us to build a new container image, push it to a container registry, kill the existing container, and start a new one. As explained above, containers provide us with a wealth of useful static information gathered during an inspection, so another good practice is to use this information as soon as it is available and wherever it makes sense in the development workflow. The more policies that can be created and enforced as code, the faster and more effectively IT organizations will be able to deliver secure software to their end customers.

Success With Anchore, Best Practices from our Customers

Successful container and CI/CD security encompasses not only vulnerability analysis but also a mindset based on integrating security into every step of the Software Development Life Cycle (SDLC). At Anchore, we believe incorporating early and frequent scanning with policy enforcement can help reduce overall security risk. This blog shares some of the elements that have helped our customers be successful with Anchore.

Scan Early/Scan Often

Anchore allows you to start analyzing right away, without changing your existing processes. There is no downside in putting an `anchore-cli image add <new image>` at the end of your CI/CD pipeline, and then exploring how to use the results of vulnerability scans or policy evaluations later. Since all images added to Anchore are there until you decide to remove them, analysis can be revisited later and new policies can be applied as your organizational needs evolve.

Scanning early catches vulnerabilities and policy violations prior to deploying into production. By scanning during the CI/CD pipeline, issues can be resolved prior to runtime, narrowing the focus to issues that are solely runtime-related at that point. This “Shift Left” mentality moves application quality and security considerations closer to the developer, allowing issues to be addressed sooner in the delivery chain. Whether it’s CI/CD build plugins (Jenkins, CircleCI, etc.) or repository image scanning, adding security analysis to your delivery pipeline can reduce the time it takes to resolve issues as well as lower the costs associated with fixing security issues in production.

To learn more about Anchore’s CI/CD integrations, take a look at our CI/CD documentation.

To learn more about repository image analysis, see our Analyzing Images documentation.

Custom Policy Creation

At Anchore, we believe in more than just CVEs. Anchore policies act as a one-stop shop for checking Dockerfile best practices and for keeping policy enforcement in line with your organizational security standards, such as secret storage and application configuration within your container. At a high level, policy bundles contain the policies themselves, whitelists, mappings, whitelisted images, and blacklisted images.

Policies can be configured to be compliant with NIST, ISO, and banking regulations, among many others. As industry regulations and auditing regularly affect the time to deployment, performing policy checks early in the CI/CD pipeline can help increase the speed of deployments without sacrificing auditing or regulation requirements. At a finer-grained level, custom policies can enforce organizational best practices at an earlier point in the pipeline, enabling cross-group buy-in between developers and security personnel.

To learn more about working with Anchore policies, please see our Working with Policies documentation.

Policy Enforcement with Notifications

To build upon the above topic, another best practice is enabling notifications. With a typical CI/CD process, build failures prompt notifications to fix the build, whether it is due to a missing dependency or simply a typo. With Anchore, builds can be configured to fail when an analysis or a policy evaluation fails, prompting attention to the issue.

Taking this a step further, Anchore enables notifications through webhooks that can be used to notify the appropriate personnel when there is an update to a CVE or when a policy evaluation status changes. Anchore lets you subscribe to tags and images to receive notifications when images are updated, when CVEs are added or removed, and when the policy status of an image changes, so you can take a proactive approach to ensuring security and compliance. Staying on top of these notifications allows the appropriate remediation and triage to take place.

To learn more about using webhooks for notifications, please see our Webhook Configuration documentation.

For an example of how notifications can be integrated with Slack, please see our Using Anchore and Slack for Container Security Notifications blog.

Archiving Old Analysis Data

There may be times that older image analysis data is no longer needed in your working set but, for security compliance reasons, the data needs to be retained. Adding an image to the archive includes all analyses, policy evaluations, and tags for an image, allowing you to delete the image from your working set. Manually moving images to an archive can be cumbersome and time-consuming, but automating the process reduces the number of images in your working set while still retaining the analysis data.

Archiving analysis data backs it up, allowing it to be removed from the working set; it can always be moved back should a policy change, an organizational shift occur, or you simply want it back in the working set. Archiving image data keeps the live set of images in line with what is current; over time, it becomes cumbersome to continuously run policy evaluations and vulnerability scans against images that are old and potentially unimportant. Archiving them keeps the working set lighter. Anchore’s archiving service makes it simple to automatically archive images and their data by adding rules to the analysis archive. With such rules, images can be automatically archived based on their analyzed date (older than a specified number of days), specific tags, and the number of images, making it simpler to work with the newer images your organization is concerned with while maintaining the analysis data of older ones.

To learn more about archiving old analysis data, please see our Using the Analysis Archive documentation.

To learn more about working with archiving rules, please see our Working with Archive Rules documentation.

Leveraging External Object Storage to Offload Database Storage

Anchore Engine uses a PostgreSQL database to store structured data for images, tags, policies, subscriptions, and metadata about images by default, but other types of data in the system are less structured and tend to be larger pieces of data. Because of that, there are benefits to supporting key-value access patterns for things like image manifests, analysis reports, and policy evaluations. For such data, Anchore has an internal object storage interface that, while defaulted to use the same PostgreSQL database for storage, can be configured to use external object storage providers to support simpler capacity management and lower costs.
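As a sketch (not a definitive configuration), the S3 driver is enabled in Anchore Engine's config.yaml roughly like this; exact key names vary between versions, and the bucket and region values are hypothetical:

```yaml
services:
  catalog:
    object_store:
      compression:
        enabled: true            # compress documents before storing
        min_size_kbytes: 100
      storage_driver:
        name: s3
        config:
          bucket: anchore-engine-objects   # hypothetical bucket name
          region: us-east-1
          # credentials supplied via IAM role or access/secret keys
```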

Offloading bulk data from the database eliminates the need to scale out PostgreSQL while improving its performance. As the database grows, queries against it and writes of new data slow down, in turn slowing Anchore itself. By leveraging an external object store and removing bulk data from PostgreSQL, only the relevant image metadata is stored there, while other important data is stored externally and can be archived at lower cost.

To learn more about using any of our supported external object storage drivers, please see our Object Storage documentation.

Conclusion

Leveraging some of the best practices that have made our customers successful can help your organization achieve the same success with Anchore. As an open-source community, we value feedback and hearing about what best practices the community has developed.

Anchore Talk Webinar, Redefining the Software Supply Chain

We are pleased to announce Anchore Talks, a series of short webinars to help improve Kubernetes and Docker security best practices. We believe it is important to have excellent security measures in place when adopting containers, and that drives every decision we make when developing Anchore Enterprise and Anchore Engine. These talks, no longer than 15 minutes each, will share our perspective on the challenges and opportunities presented to today’s DevSecOps professionals and offer clear, actionable advice for securing the build pipeline.

Containers can create quite a few headaches for security professionals because they increase velocity and allow developers to pull from a wider variety of software. Fortunately, they can also offer more efficient tracking and oversight for your software supply chain, making it much easier to scan, find and patch vulnerabilities during the build process. Using containers, security can be baked in from the start, keeping the velocity of the build process high.

Anchore VP of Product Neil Levine has prepared our first Anchore Talk on this new approach to security, starting with how developers can source containers responsibly and finishing with container immutability and its impact on audits and compliance. You won’t want to miss this brief 10-15 minute talk, live on October 28th starting at 10 am PST! It will also be available on-demand once you have signed up for a BrightTalk account. If keeping systems secure is your full-time job, we have some exciting content coming your way.

Anchore and Google Distroless

The most recent open source release of Anchore Engine (0.5.1), which is also available as part of Anchore Enterprise 2.1, added support for Google Distroless containers. But what are they and why is the addition notable?

When containers were first starting to be adopted, it was natural for many users to think of them as stripped-down virtual machines which booted faster. Indeed, if you look at the container images published by the operating system vendors, you can see that in most instances they take their stock distribution and remove all the parts they consider unnecessary. This still leaves images that are pretty large, in the hundreds of megabytes, and so some alternative distributions have become popular, notably Alpine, which is based on BusyBox and the musl C library and has its roots in the embedded space. Now images can be squeezed into the tens of megabytes, enabling faster builds and downloads and a reduced surface area for vulnerabilities.

However, these images still ape VMs, enabling shell access and containing package managers designed to let users grow and modify them. Google wanted a different approach: a container image treated as essentially a language runtime environment, curated by the application teams themselves, to which the only thing added is the actual application. The resulting family of images, known as Distroless, is only slightly larger than thin distros like Alpine but, by contrast, has better compatibility by using standard libraries (e.g. glibc rather than musl).

As Google Distroless images are based on Debian packages, Anchore is now able to scan and report on any security findings in the base images as well as in the language files installed.

The images are all hosted on the Google Container Registry (GCR) and are available with Java and C (with experimental support also available for Python, NPM, Node and .Net). We can add them using the regular syntax for Anchore Engine on the CLI:

anchore-cli image add gcr.io/distroless/java:11

Being so small, the images are typically scanned in a minute or less. Using the Anchore Enterprise GUI, you can see the image is detected as being Debian.

Looking at its contents, you can see the image has very little in it: only 19 Debian packages, including libc6.

As standard Debian packages, Anchore can scan these and alert for any vulnerabilities. If there are fixes available, you can configure Anchore to trigger a rebuild of the image.

The standard Anchore policy generates only one warning for this image, relating to the lack of a Dockerfile health check; other than that, given its lean nature, the image is vulnerability-free.

If you are using a compiled binary application like Java, another new feature allows you to add the hash of the binary to the Anchore policy check, which means you can enforce a strict compliance check on every build that goes through your CI/CD. This ensures that no other modifications are being made to the base images other than the application being layered on top.

Users who still need access to a shell for debugging or viewing locally stored log files may prefer to use Alpine or other minimal images, but for those fully invested in the cloud-native deployment model, where containers conform to 12-factor best practices, Google Distroless images are a great asset to have in your development process.

You can find more information about Google Distroless on GitHub, and existing users of either Anchore Engine or Anchore Enterprise just need to download the latest version to enable support.

Anchore Engine 0.5.1 Release

We are pleased to announce the immediate availability of Anchore Engine 0.5.1, the latest point update to our open source software from Anchore that helps users enforce container security, compliance, and best practice requirements. This update not only adds bug fixes and performance improvements but also adds a new policy gate check and support for Google’s distroless images.

Google’s distroless images are helping businesses tighten up security while speeding up the build, scan, and patch process for DevOps teams. Because these images only contain the application’s resources and runtime dependencies, the attack surface is significantly reduced and the process of scanning for and patching vulnerabilities becomes much simpler. Using distroless container images can help DevOps teams save time and become more agile in their development pipeline while keeping security at the forefront. For more documentation on distroless container images, take a look here.

Also in this release, our engineers have taken policy checks to the next level with our secret search gate. Previously, the secret search gate made sure sensitive information was not left in plain sight for attackers to exploit. Now you can also use it to make sure required entries are not missing from a configuration file within your image.

If you haven’t already deployed Anchore Engine, you can stand it up alongside your favorite cloud native tools and begin hardening your container images while adhering to federally accepted compliance standards and best practices.

We are incredibly thankful for our open source community and can’t wait to share more project updates! For more information about the release, check out our release notes.

Visit AWS Marketplace For Anchore Engine on EKS

In this post, I will walk through the steps required to deploy the Anchore Engine Marketplace Container Image Solution on Amazon EKS with Helm. Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that makes it easy for users to run Kubernetes on AWS without needing to install and operate their own clusters. For many users looking to deploy Anchore Engine, Amazon EKS is a simple choice to reap the benefits of Kubernetes without the operational overhead.

Prerequisites

Before you begin, please make sure you have fulfilled the prerequisites detailed below. At a minimum, you should be comfortable working with the command-line and have a general understanding of how to work with Kubernetes applications.

  • A running Amazon EKS cluster with worker nodes launched. See EKS Documentation for more information on this setup.
  • Helm client and server installed and configured with your EKS cluster.
  • Anchore CLI installed on localhost.

Once you have an EKS cluster up and running with worker nodes launched, you can verify via the following command.

$ kubectl get nodes
NAME                             STATUS   ROLES    AGE   VERSION
ip-192-168-2-164.ec2.internal    Ready    <none>   10m   v1.14.6-eks-5047ed
ip-192-168-35-43.ec2.internal    Ready    <none>   10m   v1.14.6-eks-5047ed
ip-192-168-55-228.ec2.internal   Ready    <none>   10m   v1.14.6-eks-5047ed

Anchore Engine Marketplace Listing

Anchore Engine allows users to bring industry-leading open source container security and compliance to their container landscape in EKS. Deployment is done using the Anchore Engine Helm Chart, which can be found on GitHub. So if you are already running an EKS cluster with Helm configured, you can now deploy Anchore Engine directly from the AWS marketplace to tighten up your container security posture.

To get started, navigate to the Anchore Engine Marketplace Listing, and select “Continue to Subscribe”, “Continue to Configuration”, and “Continue to Launch”.

On the Launch Configuration screen, select “View container image details”

Selecting this will present the popup depicted below. This will display the Anchore Engine container images you will be required to pull down and use with your deployment.
There are two container images required for this deployment: Anchore Engine and PostgreSQL.

Next, follow the steps on the popup to verify you are able to pull down the required images (Anchore Engine and Postgres) from Amazon ECR.

Anchore Custom Configuration

Before deploying the Anchore software, you will need to create a custom anchore_values.yaml file to pass to the Anchore Engine Helm Chart during your installation. The reason for this is that the default Helm chart references different container images than the ones on AWS Marketplace. Additionally, in order to expose the application on the public internet, you will need to configure ingress resources.

As mentioned above, you will need to reference the Amazon ECR Marketplace images in this Helm chart. You can do so by populating your custom anchore_values.yaml file with image location and tag as shown below.

postgresql:
  image: 709373726912.dkr.ecr.us-east-1.amazonaws.com/e4506d98-2de6-4375-8d5e-10f8b1f5d7e3/cg-3671661136/docker.io/library/postgres
  imageTag: v.0.5.0-latest
  imagePullPolicy: IfNotPresent
anchoreGlobal:
  image: 709373726912.dkr.ecr.us-east-1.amazonaws.com/e4506d98-2de6-4375-8d5e-10f8b1f5d7e3/cg-3671661136/docker.io/anchore/anchore-engine
  imageTag: v.0.5.0-latest
  imagePullPolicy: IfNotPresent

Note: Since the container images live in a private ECR registry, you will also need to create a secret with valid Docker credentials in order to fetch them.

Example Steps to Create a Secret

# Run where kubectl is available, and make sure to replace account, region, etc.
# Set environment variables
ACCOUNT=123456789
REGION=my-region
SECRET_NAME=${REGION}-ecr-registry
EMAIL=user@example.com   # can be anything

#
# Fetch token (which will expire in 12 hours)
#

TOKEN=`aws ecr --region=$REGION get-authorization-token --output text --query authorizationData[].authorizationToken | base64 -d | cut -d: -f2`

#
# Create registry secret
#
kubectl create secret docker-registry $SECRET_NAME --docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com --docker-username=AWS --docker-password="${TOKEN}" --docker-email="${EMAIL}"

Once you have successfully created the secret, you will need to add ImagePullSecrets to a service account.

I recommend reading more about how you can add ImagePullSecrets to a service account here.

Ingress (Optional)

One of the simplest ways to expose Kubernetes applications on the public internet is through ingress. On AWS, an ALB ingress controller can be used. It is important to note that this step is optional, as you can still run through a successful installation of the software without it. You can read more about Kubernetes Ingress with AWS ALB Ingress Controller here.

Anchore Ingress Configurations

Just as we did above, any changes to the Helm chart configuration should be made in your anchore_values.yaml file.

Ingress

First, you should create an ingress section in your anchore_values.yaml file as shown in the code block below. The key properties here are apiPath and annotations.

ingress:
  enabled: true
  # Use the following paths for GCE/ALB ingress controller
  apiPath: /v1/*
  # uiPath: /*
    # apiPath: /v1/
    # uiPath: /
    # Uncomment the following lines to bind on specific hostnames
    # apiHosts:
    #   - anchore-api.example.com
    # uiHosts:
    #   - anchore-ui.example.com
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing

Anchore Engine API Service

Next, you can create an anchoreApi section in your anchore_values.yaml file as shown in the code block below. The key property here is changing service type to NodePort.

# Pod configuration for the anchore engine api service.
anchoreApi:
  replicaCount: 1

  # Set extra environment variables. These will be set on all api containers.
  extraEnv: []
    # - name: foo
    #   value: bar

  # kubernetes service configuration for anchore external API
  service:
    type: NodePort
    port: 8228
    annotations: {}

AWS EKS Configurations

Once the Anchore configuration is complete, you can move to the EKS specific configuration. The first step is to create an IAM policy to give the Ingress controller we will be creating the proper permissions. In short, you need to allow permission to work with ec2 resources and create a load balancer.

Create the IAM Policy to Give the Ingress Controller the Right Permissions

  1. Go to the IAM Console.
  2. Choose the section Roles and search for the NodeInstanceRole of your EKS worker nodes.
  3. Create and attach a policy using the contents of the template iam-policy.json

Next, deploy RBAC Roles and RoleBindings needed by the AWS ALB Ingress controller from the template below:

wget https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.0.0/docs/examples/rbac-role.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.0.0/docs/examples/rbac-role.yaml

Update ALB Ingress

Download the ALB Ingress manifest and update the cluster-name section with the name of your EKS cluster name.

wget https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.0.1/docs/examples/alb-ingress-controller.yaml
# Name of your cluster. Used when naming resources created
            # by the ALB Ingress Controller, providing distinction between
            # clusters.
            - --cluster-name=anchore-prod

Deploy the AWS ALB Ingress controller YAML:

kubectl apply -f alb-ingress-controller.yaml

Installation

Now that all of the custom configurations are completed, you are ready to install the Anchore software.

First, ensure you have the latest Helm Charts by running the following command:

helm repo update

Install Anchore Engine

Next, run the following command to install the Anchore Engine Helm chart in your EKS cluster:

helm install --name anchore-engine -f anchore_values.yaml stable/anchore-engine

The command above will install Anchore Engine using the custom anchore_values.yaml file you’ve created.

You will need to give the software a few minutes to bootstrap.

In order to see the ingress resource we have created, run the following command:

$ kubectl describe ingress
Name:             anchore-enterprise-anchore-engine
Namespace:        default
Address:          xxxxxxx-default-anchoreen-xxxx-xxxxxxxxx.us-east-1.elb.amazonaws.com
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     
        /v1/*   anchore-enterprise-anchore-engine-api:8228 (192.168.42.122:8228)
Annotations:
  alb.ingress.kubernetes.io/scheme:  internet-facing
  kubernetes.io/ingress.class:       alb
Events:
  Type    Reason  Age   From                    Message
  ----    ------  ----  ----                    -------
  Normal  CREATE  14m   alb-ingress-controller  LoadBalancer 904f0f3b-default-anchoreen-d4c9 created, ARN: arn:aws:elasticloadbalancing:us-east-1:077257324153:loadbalancer/app/904f0f3b-default-anchoreen-d4c9/4b0e9de48f13daac
  Normal  CREATE  14m   alb-ingress-controller  rule 1 created with conditions [{    Field: "path-pattern",    Values: ["/v1/*"]  }]

The output above shows you that a Load Balancer has been created in AWS with an address you can hit in the browser. A great tool to validate that the software is up and running is the Anchore CLI. Additionally, you can use this tool to verify that the API route hostname is configured correctly:

Note: Read more on Configuring the Anchore CLI

$ anchore-cli --url http://anchore-engine-anchore-engine.apps.54.84.147.202.nip.io/v1 --u admin --p foobar system status
Service analyzer (anchore-enterprise-anchore-engine-analyzer-cfddf6b56-9pwm9, http://anchore-enterprise-anchore-engine-analyzer:8084): up
Service apiext (anchore-enterprise-anchore-engine-api-5b5bffc79f-vmwvl, http://anchore-enterprise-anchore-engine-api:8228): up
Service simplequeue (anchore-enterprise-anchore-engine-simplequeue-dc58c69c9-5rmj9, http://anchore-enterprise-anchore-engine-simplequeue:8083): up
Service policy_engine (anchore-enterprise-anchore-engine-policy-84b6dbdfd-fvnll, http://anchore-enterprise-anchore-engine-policy:8087): up
Service catalog (anchore-enterprise-anchore-engine-catalog-b88d4dff4-jhm4t, http://anchore-enterprise-anchore-engine-catalog:8082): up

Engine DB Version: 0.0.11
Engine Code Version: 0.5.0

Conclusion

With Anchore installed on EKS, Security and DevOps teams can seamlessly integrate comprehensive container image inspection and policy enforcement into their CI/CD pipeline to ensure that images are analyzed thoroughly for known vulnerabilities before deploying them into production. This not only avoids the pain of finding and remediating vulnerabilities at runtime, but also allows end users to define and enforce custom security policies that meet their company's internal policies and any applicable regulatory security standards. We are happy to provide users with the added simplicity of deploying Anchore software on Amazon EKS with Helm as a validated AWS Marketplace container image solution.

Anchore Engine Available in Azure Marketplace

We are pleased to announce the immediate availability of Anchore Engine in the Azure marketplace.

Microsoft has grown its cloud native development and DevOps offerings significantly in the past two years. The Azure offerings available today such as Azure Container Instances (ACI), Azure Kubernetes Service (AKS), and Azure Pipelines give enterprises and agencies the tools they need to build scalable, cloud native applications. With Azure, Microsoft helps organizations innovate and grow while saving time and money, enabling business transformation and increased competitiveness.

At Anchore, we have a similar mission. We want organizations to innovate quickly with containers but be confident that the software they ship is safe. Our comprehensive container image inspection and analysis solution is a perfect fit for the kind of innovative enterprises and agencies that use Azure. That is why we are proud to make it available through the Azure Marketplace.

Give it a try! If you don’t already have an Azure account, you can get one for free. Then, check out our marketplace page to get started.

Anchore Enterprise 2.1 Features Single Sign-On (SSO)

With the release of Anchore Enterprise 2.1 (based on Anchore Engine v0.5.0), we are happy to announce integration with external identity providers that support SAML 2.0. Adding support for external identity providers allows users to enable Single Sign-On for Anchore, reducing the number of user stores that an enterprise needs to maintain.

Authentication / Authorization

SAML is an open standard for exchanging authentication and authorization (auth-n/auth-z) data between an identity provider (IdP) and a service provider (SP). As an SP, Anchore Enterprise 2.1 can be configured to use an external IdP such as Keycloak for auth-n/auth-z user transactions.

When using SAML SSO, users log into the Anchore Enterprise UI via the external IdP without ever passing credentials to Anchore. Information about the user is passed from the IdP to Anchore, which initializes the user's identity internally from that data. After the first sign-in, the username exists without credentials in Anchore, and additional RBAC configuration can be applied to the identity directly by Anchore administrators. This allows Anchore administrators to control access for their own users without also needing access to a corporate IdP system.

Integrating Anchore Enterprise with Keycloak

The JBoss Keycloak auth-n/auth-z IdP is a widely used and open-source identity management system that supports integration with applications via SAML and OpenID Connect. It also can operate as an identity broker between other providers such as LDAP or other SAML providers and applications that support SAML or OpenID Connect.

In addition to Keycloak, other SAML-supporting IdPs could be used, such as Okta or Google's Cloud Identity SSO. There are four key requirements an IdP must meet in order to integrate successfully with Anchore:

  1. It must support HTTP Redirect binding.
  2. It should support signed assertions and signed documents. While this blog doesn't use either of these, signed assertions and documents are highly recommended in a production environment.
  3. It must allow unsigned client requests from Anchore.
  4. It must allow unencrypted requests and responses.

The following is an example of how to configure a new client entry in Keycloak and how to configure Anchore to use it, permitting UI login via Keycloak SSO.

Deploying Keycloak and Anchore

For this example, I used the latest Keycloak image from Docker Hub (Keycloak v7.0.0). The default docker-compose file for Anchore Enterprise 2.1 includes options to enable OAuth. By default, these options are commented out. Uncommenting `ANCHORE_OAUTH_ENABLED` and `ANCHORE_AUTH_SECRET` will enable SSO.

Using the following docker-compose file, I can deploy Keycloak with its own Postgres DB:

version: '3'

volumes:
  postgres_data:
      driver: local

services:
  postgres:
      image: postgres
      volumes:
        - postgres_data:/var/lib/postgresql/data
      environment:
        POSTGRES_DB: keycloak
        POSTGRES_USER: keycloak
        POSTGRES_PASSWORD: password
  keycloak:
      image: jboss/keycloak
      environment:
        DB_VENDOR: POSTGRES
        DB_ADDR: postgres
        DB_DATABASE: keycloak
        DB_USER: keycloak
        DB_SCHEMA: public
        DB_PASSWORD: password
        KEYCLOAK_USER: admin
        KEYCLOAK_PASSWORD: Pa55w0rd
      ports:
        - 8080:8080
        - 9990:9990
      depends_on:
        - postgres
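Assuming the file above is saved as keycloak-compose.yaml (the filename is my choice, not part of the compose file), the stack can be brought up in the background; a sketch:

```shell
# Launch Keycloak and its Postgres DB (shown as a comment so this sketch
# has no side effects; run it from the directory holding the file):
#   docker-compose -f keycloak-compose.yaml up -d

# Per the 8080:8080 port mapping above, the Keycloak console for this
# image generation is then served under /auth:
KEYCLOAK_URL="http://localhost:8080/auth"
echo "${KEYCLOAK_URL}"
```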

Next, I can deploy Anchore Enterprise with the following docker-compose file:

# All-in-one docker-compose deployment of a full anchore-enterprise service system
---
version: '2.1'
volumes:
  anchore-db-volume:
    # Set this to 'true' to use an external volume. In which case, it must be created manually with "docker volume create anchore-db-volume"
    external: false
  anchore-scratch: {}
  feeds-workspace-volume:
    # Set this to 'true' to use an external volume. In which case, it must be created manually with "docker volume create feeds-workspace-volume"
    external: false
  enterprise-feeds-db-volume:
    # Set this to 'true' to use an external volume. In which case, it must be created manually with "docker volume create enterprise-feeds-db-volume"
    external: false

services:
  # The primary API endpoint service
  engine-api:
    image: docker.io/anchore/anchore-engine:v0.5.0
    depends_on:
    - anchore-db
    - engine-catalog
    #volumes:
    #- ./config-engine.yaml:/config/config.yaml:z
    ports:
    - "8228:8228"
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
    - ANCHORE_ENDPOINT_HOSTNAME=engine-api
    - ANCHORE_DB_HOST=anchore-db
    - ANCHORE_DB_PASSWORD=mysecretpassword
    - ANCHORE_AUTHZ_HANDLER=external
    - ANCHORE_EXTERNAL_AUTHZ_ENDPOINT=http://enterprise-rbac-authorizer:8228
    - ANCHORE_ENABLE_METRICS=false
    - ANCHORE_LOG_LEVEL=INFO
    # Uncomment both ANCHORE_OAUTH_ENABLED and ANCHORE_AUTH_SECRET to enable SSO feature of anchore-enterprise
    - ANCHORE_OAUTH_ENABLED=true
    - ANCHORE_AUTH_SECRET=supersharedsecret
    command: ["anchore-manager", "service", "start",  "apiext"]
  # Catalog is the primary persistence and state manager of the system
  engine-catalog:
    image: docker.io/anchore/anchore-engine:v0.5.0
    depends_on:
    - anchore-db
    #volumes:
    #- ./config-engine.yaml:/config/config.yaml:z
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    expose:
    - 8228
    environment:
    - ANCHORE_ENDPOINT_HOSTNAME=engine-catalog
    - ANCHORE_DB_HOST=anchore-db
    - ANCHORE_DB_PASSWORD=mysecretpassword
    - ANCHORE_ENABLE_METRICS=false
    - ANCHORE_LOG_LEVEL=INFO
    # Uncomment both ANCHORE_OAUTH_ENABLED and ANCHORE_AUTH_SECRET to enable SSO feature of anchore-enterprise
    - ANCHORE_OAUTH_ENABLED=true
    - ANCHORE_AUTH_SECRET=supersharedsecret
    command: ["anchore-manager", "service", "start",  "catalog"]
  engine-simpleq:
    image: docker.io/anchore/anchore-engine:v0.5.0
    depends_on:
    - anchore-db
    - engine-catalog
    #volumes:
    #- ./config-engine.yaml:/config/config.yaml:z
    expose:
    - 8228
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
    - ANCHORE_ENDPOINT_HOSTNAME=engine-simpleq
    - ANCHORE_DB_HOST=anchore-db
    - ANCHORE_DB_PASSWORD=mysecretpassword
    - ANCHORE_ENABLE_METRICS=false
    - ANCHORE_LOG_LEVEL=INFO
    # Uncomment both ANCHORE_OAUTH_ENABLED and ANCHORE_AUTH_SECRET to enable SSO feature of anchore-enterprise
    - ANCHORE_OAUTH_ENABLED=true
    - ANCHORE_AUTH_SECRET=supersharedsecret
    command: ["anchore-manager", "service", "start",  "simplequeue"]
  engine-policy-engine:
    image: docker.io/anchore/anchore-engine:v0.5.0
    depends_on:
    - anchore-db
    - engine-catalog
    #volumes:
    #- ./config-engine.yaml:/config/config.yaml:z
    expose:
    - 8228
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
    - ANCHORE_ENDPOINT_HOSTNAME=engine-policy-engine
    - ANCHORE_DB_HOST=anchore-db
    - ANCHORE_DB_PASSWORD=mysecretpassword
    - ANCHORE_ENABLE_METRICS=false
    - ANCHORE_LOG_LEVEL=INFO
    # Uncomment the ANCHORE_FEEDS_* environment variables (and uncomment the feeds db and service sections at the end of this file) to use the on-prem feed service
    #- ANCHORE_FEEDS_URL=http://enterprise-feeds:8228/v1/feeds
    #- ANCHORE_FEEDS_CLIENT_URL=null
    #- ANCHORE_FEEDS_TOKEN_URL=null
    # Uncomment both ANCHORE_OAUTH_ENABLED and ANCHORE_AUTH_SECRET to enable SSO feature of anchore-enterprise
    - ANCHORE_OAUTH_ENABLED=true
    - ANCHORE_AUTH_SECRET=supersharedsecret
    command: ["anchore-manager", "service", "start",  "policy_engine"]
  engine-analyzer:
    image: docker.io/anchore/anchore-engine:v0.5.0
    depends_on:
    - anchore-db
    - engine-catalog
    #volumes:
    #- ./config-engine.yaml:/config/config.yaml:z
    expose:
    - 8228
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
    - ANCHORE_ENDPOINT_HOSTNAME=engine-analyzer
    - ANCHORE_DB_HOST=anchore-db
    - ANCHORE_DB_PASSWORD=mysecretpassword
    - ANCHORE_ENABLE_METRICS=false
    - ANCHORE_LOG_LEVEL=INFO
    # Uncomment both ANCHORE_OAUTH_ENABLED and ANCHORE_AUTH_SECRET to enable SSO feature of anchore-enterprise
    - ANCHORE_OAUTH_ENABLED=true
    - ANCHORE_AUTH_SECRET=supersharedsecret
    volumes:
    - anchore-scratch:/analysis_scratch
    - ./analyzer_config.yaml:/anchore_service/analyzer_config.yaml:z
    command: ["anchore-manager", "service", "start",  "analyzer"]
  anchore-db:
    image: "postgres:9"
    volumes:
    - anchore-db-volume:/var/lib/postgresql/data
    environment:
    - POSTGRES_PASSWORD=mysecretpassword
    expose:
    - 5432
    logging:
      driver: "json-file"
      options:
        max-size: 100m
  enterprise-rbac-authorizer:
    image: docker.io/anchore/enterprise:v0.5.0
    volumes:
    - ./license.yaml:/license.yaml:ro
    #- ./config-enterprise.yaml:/config/config.yaml:z
    depends_on:
    - anchore-db
    - engine-catalog
    expose:
    - 8089
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
    - ANCHORE_ENDPOINT_HOSTNAME=enterprise-rbac-authorizer
    - ANCHORE_DB_HOST=anchore-db
    - ANCHORE_DB_PASSWORD=mysecretpassword
    - ANCHORE_ENABLE_METRICS=false
    - ANCHORE_LOG_LEVEL=INFO
    # Uncomment both ANCHORE_OAUTH_ENABLED and ANCHORE_AUTH_SECRET to enable SSO feature of anchore-enterprise
    - ANCHORE_OAUTH_ENABLED=true
    - ANCHORE_AUTH_SECRET=supersharedsecret
    command: ["anchore-enterprise-manager", "service", "start",  "rbac_authorizer"]
  enterprise-rbac-manager:
    image: docker.io/anchore/enterprise:v0.5.0
    volumes:
    - ./license.yaml:/license.yaml:ro
    #- ./config-enterprise.yaml:/config/config.yaml:z
    depends_on:
    - anchore-db
    - engine-catalog
    ports:
    - "8229:8228"
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
    - ANCHORE_ENDPOINT_HOSTNAME=enterprise-rbac-manager
    - ANCHORE_DB_HOST=anchore-db
    - ANCHORE_DB_PASSWORD=mysecretpassword
    - ANCHORE_AUTHZ_HANDLER=external
    - ANCHORE_EXTERNAL_AUTHZ_ENDPOINT=http://enterprise-rbac-authorizer:8228
    - ANCHORE_ENABLE_METRICS=false
    - ANCHORE_LOG_LEVEL=INFO
    # Uncomment both ANCHORE_OAUTH_ENABLED and ANCHORE_AUTH_SECRET to enable SSO feature of anchore-enterprise
    - ANCHORE_OAUTH_ENABLED=true
    - ANCHORE_AUTH_SECRET=supersharedsecret
    command: ["anchore-enterprise-manager", "service", "start",  "rbac_manager"]
  enterprise-reports:
    image: docker.io/anchore/enterprise:v0.5.0
    volumes:
    - ./license.yaml:/license.yaml:ro
    depends_on:
    - anchore-db
    - engine-catalog
    ports:
    - "8558:8228"
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
    - ANCHORE_ENDPOINT_HOSTNAME=enterprise-reports
    - ANCHORE_DB_HOST=anchore-db
    - ANCHORE_DB_PASSWORD=mysecretpassword
    - ANCHORE_ENABLE_METRICS=false
    - ANCHORE_AUTHZ_HANDLER=external
    - ANCHORE_EXTERNAL_AUTHZ_ENDPOINT=http://enterprise-rbac-authorizer:8228
    - ANCHORE_LOG_LEVEL=INFO
    # Uncomment both ANCHORE_OAUTH_ENABLED and ANCHORE_AUTH_SECRET to enable SSO feature of anchore-enterprise
    - ANCHORE_OAUTH_ENABLED=true
    - ANCHORE_AUTH_SECRET=supersharedsecret
    command: ["anchore-enterprise-manager", "service", "start",  "reports"]
  enterprise-ui-redis:
    image: "docker.io/library/redis:4"
    expose:
    - 6379
    logging:
      driver: "json-file"
      options:
        max-size: 100m
  enterprise-ui:
    image: docker.io/anchore/enterprise-ui:v0.5.0
    volumes:
    - ./license.yaml:/license.yaml:ro
    #- ./config-ui.yaml:/config/config-ui.yaml:z
    depends_on:
    - engine-api
    - enterprise-ui-redis
    - anchore-db
    ports:
    - "3000:3000"
    logging:
      driver: "json-file"
      options:
        max-size: 100m
    environment:
    - ANCHORE_ENGINE_URI=http://engine-api:8228/v1
    - ANCHORE_RBAC_URI=http://enterprise-rbac-manager:8228/v1
    - ANCHORE_REDIS_URI=redis://enterprise-ui-redis:6379
    - ANCHORE_APPDB_URI=postgres://postgres:mysecretpassword@anchore-db:5432/postgres
    - ANCHORE_REPORTS_URI=http://enterprise-reports:8228/v1
    - ANCHORE_POLICY_HUB_URI=https://hub.anchore.io
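The Anchore stack comes up the same way; note that the volume mounts above expect a valid license.yaml (and analyzer_config.yaml) in the working directory. The compose filename here is an assumption:

```shell
# Launch the full Anchore Enterprise stack (shown as a comment so this
# sketch has no side effects; requires license.yaml and
# analyzer_config.yaml alongside the file, per the volume mounts above):
#   docker-compose -f anchore-enterprise-compose.yaml up -d

# Per the port mappings above, the UI and API endpoints land on:
ANCHORE_UI_URL="http://localhost:3000"
ANCHORE_API_URL="http://localhost:8228/v1"
echo "${ANCHORE_UI_URL} ${ANCHORE_API_URL}"
```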

Once all containers are deployed, we can move into configuring SSO.

Configure the Keycloak Client

Adding a SAML client in Keycloak can be done following the instructions provided by SAML Clients in the Keycloak documentation.

  • Once logged into the Keycloak UI, navigate to Clients and select Add Client.
  • Enter http://localhost:3000/service/sso/auth/keycloak as the Client ID.
      • This will be used later in the Anchore Enterprise SSO configuration.
  • In the Client Protocol dropdown, choose SAML.
  • Enter http://localhost:3000/service/sso/auth/keycloak as the Client SAML Endpoint.
  • Select Save.
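For repeatable setups, the same client can also be created non-interactively with Keycloak's bundled admin CLI; a hedged sketch (the kcadm.sh path and flags match the jboss/keycloak image of this era — verify them against your version):

```shell
# The Client ID doubles as the SAML endpoint, per the steps above.
CLIENT_ID="http://localhost:3000/service/sso/auth/keycloak"

# Inside the running Keycloak container (commands shown as comments; they
# assume the admin credentials from the Keycloak compose file earlier):
#   /opt/jboss/keycloak/bin/kcadm.sh config credentials \
#     --server http://localhost:8080/auth --realm master \
#     --user admin --password Pa55w0rd
#   /opt/jboss/keycloak/bin/kcadm.sh create clients -r master \
#     -s clientId="$CLIENT_ID" -s protocol=saml
echo "${CLIENT_ID}"
```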

With the client added, I can now configure the SSO-relevant sections. The majority of the defaults provided by Keycloak are sufficient for the purposes of this blog; however, some configurations do need to be changed.

  • Adding a Name helps identify the client in a user-friendly manner.
  • Adding a Description gives users more information about the client.
  • Set Client Signature Required to Off.
      • In this blog, I’m not setting up client public keys or certs in the SAML Tab, so I’m turning off validation.
  • Set Force POST Binding to Off.
      • Anchore requires the HTTP Redirect Binding to work, so this setting must be off to enable that.
  • Set Force Name ID Format to On.
      • This tells Keycloak to ignore any requested name ID policy and use the value configured in the admin console under Name ID Format.
  • Ensure Name ID Format is set to Username.
      • This should be the default.
  • Enter http://localhost:3000/service/sso/auth/keycloak as a Valid Redirect URI.
  • Ensure http://localhost:3000/service/sso/auth/keycloak is set as the Master SAML Processing URL.
      • This should be the default.
  • Expand Fine Grain SAML Endpoint Configuration and add http://localhost:3000/service/sso/auth/keycloak to Assertion Consumer Service Redirect Binding URL.

The configuration should look like the screenshot below. Select Save.

I can now download the metadata XML to import into Anchore Enterprise.

  • Select the Installation tab.
  • Choose Mod Auth Mellon files from the Format Option dropdown.
  • Select Download.
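What lands on disk is a .zip archive; the only file Anchore needs from it is idp-metadata.xml. Extraction might look like this (the archive name is a guess — use whatever Keycloak actually downloaded):

```shell
# Hypothetical archive name; adjust to the real download.
MELLON_ZIP="keycloak-mellon.zip"
IDP_METADATA="idp-metadata.xml"

# Pull just the IdP metadata out of the archive (shown as a comment so
# this sketch has no side effects):
#   unzip -p "${MELLON_ZIP}" "${IDP_METADATA}" > "${IDP_METADATA}"
echo "${IDP_METADATA}"
```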

Configure Anchore Enterprise SSO

Next, I will configure the Anchore Enterprise UI to use Keycloak for SSO.

  • Once logged into the Anchore Enterprise UI as Admin, navigate to Configuration.
  • Select SSO from the column on the left.
  • Select Let’s Add One under the SSO tab.

I will add the following configurations to the fields on the next screen. Several fields will be left blank, as they are not necessary for this blog.

  • Enter keycloak for the Name.
  • Enter -1 for the ACS HTTPS Port.
      • This is the port to use for HTTPS to the ACS (Assertion Consumer Service, in this case, the UI). It is only needed if you need to use a non-standard https port.
  • Enter http://localhost:3000/service/sso/auth/keycloak for the SP Entity ID.
      • The service provider entity ID must match the client ID used in the Keycloak configuration above.
  • Enter http://localhost:3000/service/sso/auth/keycloak for the ACS URL.
  • Enter keycloakusers for Default Account.
      • This can be any account name (existing or not) that you’d like the users to be members of.
  • Select read-write from the Default Role dropdown.
  • From the .zip file downloaded from Keycloak in the section above, copy the contents of idp-metadata.xml into IDP Metadata XML.
  • Uncheck Require Signed Assertions.
  • The configuration should look like the series of screenshots below. Select Save.

After logging out of the Anchore Enterprise UI, there is now an option to authenticate with Keycloak.

After selecting the Keycloak login option, I am redirected to the Keycloak login page. I can now log in with an existing Keycloak user, in this case, “example”.


The “example” user did not exist in my Anchore environment but was added upon successful login via Keycloak.

Conclusion

I have successfully gone through the configuration for both the Keycloak Client and Anchore Enterprise SSO. I hope this step-by-step procedure is helpful in setting up SSO for your Anchore Enterprise solution. For more information on Anchore Enterprise 2.1 SSO support, please see Anchore SSO Support. For the full Keycloak and other examples, see Anchore SSO Examples.

GCP Marketplace Certifies Anchore Engine

Containers make developing and deploying applications for multi and hybrid-cloud environments a whole lot easier. But they also require new best practices from development and operations teams to keep security paramount in the process. To keep up, industry-leading DevOps teams have quickly been switching to more portable and agile platforms with the flexibility to speed up building, deploying, and managing cloud-native software. You need the best tools to make the best software.

Both Anchore and Google are committed to helping developers like you build better, safer software more quickly and have been pioneers in the container space since the earliest days. So we are proud to announce that Anchore Engine is now available in the GCP Marketplace. If you are a user of Google Cloud Platform, you can stand up Anchore Engine and start addressing your container security objectives quickly and easily. If you aren’t a user of GCP, maybe the combination of Anchore Engine and GCP together will convince you to give it a try.

You can view Anchore Engine in the GCP Marketplace here.

Getting Anchore Engine certified to be in the GCP Marketplace is exciting for the Anchore team and we can’t wait to help you get started. Please don’t hesitate to reach out with questions by joining our community on Slack. We’d love to hear from you.