Announcing Anchore Enterprise 2.2

Just in time for the holidays, Anchore Enterprise 2.2, our latest update, is now generally available to all of our customers. This release focuses on third-party integrations for sending notifications and on a new system dashboard that helps customers view the status of their systems. The new enterprise release is based on open source Anchore Engine 0.6.0, also available now.

New Integrations with GitHub, Jira, Slack & Microsoft Teams

Anchore Enterprise is commonly used in a CI/CD pipeline, alongside a container registry, or with a Kubernetes admission controller to analyze and report on container image issues. When an image fails a policy check, you typically want to notify your developers as soon as possible so they can fix the issue. With our new integrations, these notifications can now be sent to popular workflow tools (or via plain old email if you prefer), enabling the information to be used as part of existing processes.

Notifications can optionally be separated by account, by type (system or user) and by level (info, warn, error), which allows you to send alerts about security vulnerabilities to one set of users and notifications about the Anchore system itself to another.

Importantly, notifications are sent not only at the time of the initial scan, but also when a new vulnerability is detected in a previously scanned image, or when a policy change causes an image to be marked as “out of compliance”. The notification service is a fantastic way of creating remediation workflows from the security team to the developers, or as part of an automated system. Look for upcoming Anchore integrations with other systems.
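
For open source Anchore Engine users, a similar capability is available via webhooks configured in config.yaml. A minimal sketch (the receiver hostname, port, and path are illustrative; the general section acts as a catch-all for any notification type):

webhooks:
  webhook_user: null
  webhook_pass: null
  ssl_verify: false
  # 'general' is the catch-all handler; <notification_type> and <userId>
  # are templated into the URL when a notification fires
  general:
    url: "http://webhook-receiver.example.com:9090/general/<notification_type>/<userId>"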

System Dashboard and Feed Sync Status

Anchore Enterprise is a distributed application consisting of many parts, including a database, a message queue, a report engine, a policy engine and so on. To help users see the status of each component, we’ve added a new system dashboard which makes it easier to troubleshoot issues and understand the roles of the various services.

The dashboard also reports which vulnerability data sources have been successfully downloaded. Anchore Enterprise downloads a complete set of vulnerability data for local use, reducing the need to send data back and forth over the internet and enabling air-gapped operations. This way, you can be confident you are receiving data from all relevant sources and that the data is up to date, which is critical for securing your container images.
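
You can verify the same service and feed status from the command line with the Anchore CLI:

# Show the status of each Anchore service
anchore-cli system status

# Show each vulnerability feed group and when it last synced
anchore-cli system feeds list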

Looking Into 2020

We are planning one more release in the 2.x series for early 2020. After that, we will focus on version 3 of the product, which will significantly expand Anchore’s policy-based security capabilities by supporting all aspects of the container’s journey, from code to cloud. As more companies adopt DevSecOps practices, we hear feedback from our users that every step of the software development lifecycle should be governed by clear policies that prevent the introduction of inadvertent or malicious flaws. We look forward to hearing feedback from our users about their experiences with Anchore Enterprise 2.2 and collaborating on the next phase of the Anchore roadmap.

GitHub Actions Reduces Barrier for Improving Security

GitHub has been a key vendor in making the developer experience friction-free and many of the features they announced this week at their GitHub Universe conference continue to set the standard.

What was notable at the event this week was that security has now been added to the friction-free mantra and, for anyone who has worked in the security industry, this is not a combination of words you typically hear. Indeed, security is mostly seen as a friction-adder par excellence, so it was really encouraging to see security as the core theme of the day 2 keynote, along with multiple product announcements and talks. Ensuring that security can be added to container workflows with as little overhead as possible is at the core of Anchore’s mission and a key driver of DevSecOps practices generally. The fact that GitHub, as the largest host of open source code in the world, is getting behind this is great for everyone in the community.

As we announced two days ago, we spent a number of weeks collaborating with GitHub to produce our Anchore Container Scan action. As Zach Hill, Chief Architect at Anchore, and Steve Winton, Senior Partner Engineer at GitHub, demonstrated at one of the breakouts, starting with as little as 4 lines of YAML, you can add Anchore to a CI/CD workflow to generate a full scan of a container and use the output to pass or fail a build. It is hard to conceive of a simpler way to add security to the software development workflow. No manual crafting of Jenkins build jobs, no post-hoc scanning of a container registry – just a simple event-driven model that takes a few minutes to run.
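
As a minimal sketch of such a workflow (input names follow the action's early releases and may differ in later versions; the image name is illustrative):

name: Container Scan
on: [push]

jobs:
  anchore-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      # Build the image locally so the action can scan it
      - run: docker build -t localbuild/testimage:latest .
      # Scan with Anchore; fail the build if the policy evaluation fails
      - uses: anchore/scan-action@v1
        with:
          image-reference: "localbuild/testimage:latest"
          fail-build: true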

The ability to piece multiple actions together is the most interesting part of the GitHub Actions story. The obvious workflow for developers to instrument is to build a container with their code, scan it with Anchore, push it to the GitHub Packages registry and then deploy it with one of the AWS, Azure or Google cloud actions. But linking this to other security capabilities in GitHub is where it gets interesting. You could programmatically: create GitHub issues with information about security issues found and how to resolve them for developers to act on; create security notifications (or even a CVE) about your product for users to see; or push all the resulting data from your scans to a database for security researchers to mine.

We do seem to be at a moment in the industry where the scale of the problem is clear, the urgency to fix is now felt more broadly within organizations, and, finally, the tools and processes to start fixing it are becoming credible. By removing the friction, GitHub and others are hopefully reducing the cost of improving security while making the benefit ever more clear.

As we continue to develop the Anchore Container Scan action, we’re keen to hear your ideas about how we can improve it to support these types of workflows. So please provide feedback in the repo or drop us an email.

Anchore for GitHub Actions

Today at GitHub Universe, we are announcing the availability of the Anchore Container Scan action for GitHub. Actions allow developers to automate CI/CD workflows, easily integrating tools like Anchore into their build processes. This new action was designed for teams looking to introduce security into their development processes. You can find the action in the GitHub Marketplace.

At Anchore, our mission is to enable secure container-based workflows without compromising velocity. By adding Anchore Container Scan into their build process, development teams can gain deep visibility into the contents of their images and create custom policies that ensure compliance. That means discovering and remediating vulnerabilities before publishing images…without adding manual steps that slow everything down.

If you want to learn more about the Anchore Container Scan action, watch our latest webinar where Zach Hill, Chief Architect at Anchore, provides a quick overview and demonstration.

The Delivery Hero Story, Inviting Security to the Party

Last week, the team at Delivery Hero posted the first in a series of articles about bolstering container security and compliance in their DevOps container orchestration model using Anchore Engine. We think they did a fantastic job explaining their goals and sharing the progress they have made. Their article is a great read for those who are grappling with the same challenges.

We believe it’s important to incorporate security best practices early in the development process, and the Restaurant Partner Solutions team at Delivery Hero has done so with Anchore Engine while keeping up with over one million daily orders. So if you haven’t yet read about their project, please take a look at the full article.

Benefits of Static Image Inspection and Policy Enforcement

In this post, I will dive deeper into the key benefits of a comprehensive container image inspection and policy-as-code framework.
A couple of key terms:

  • Comprehensive Container Image Inspection: Complete analysis of a container image to identify its entire contents: OS and non-OS packages, libraries, licenses, binaries, credentials, secrets, and metadata. Importantly, this information is stored in a Software Bill of Materials (SBOM) for later use.
  • Policy-as-Code Framework: A structure and language for policy rule creation, management, and enforcement, represented as code. Importantly, this allows software development best practices such as version control, automation, and testing to be adopted.

What Exactly Comes from a Complete Static Image Inspection?

A deeper understanding. Container images are complex and require a complete analysis to fully understand all of their contents. The picture above shows all of the useful data an inspection can uncover. Some examples are:

  • Ports specified via the EXPOSE instruction
  • Base image / Linux distribution
  • Username or UID to use when running the container
  • Any environment variables set via the ENV instruction
  • Secrets or keys (ex. AWS credentials, API keys) in the container image filesystem
  • Custom configurations for applications (ex. httpd.conf for Apache HTTP Server)

In short, deeper insight into exactly what is inside container images allows teams to make better decisions about the configurations and security standards they want their production software to meet.
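
With Anchore, this inspection data is retained and queryable after analysis; for example, from the CLI:

# List the OS packages discovered in an analyzed image
anchore-cli image content <image> os

# Other content types include: files, npm, gem, python, java
anchore-cli image content <image> files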

How to Use the Above Data in Context?

While we can likely agree that access to the above data for container images is a good thing from a visibility perspective, how can we use it effectively to produce higher-quality software? The answer is through policy management.

Policy management allows us to create and edit the rules we would like to enforce. Often these rules fall into one of three buckets: security, compliance, or best practice. Typically, a policy author creates sets of rules and describes the circumstances under which certain behaviors or properties are allowed or not. Unfortunately, authors are often restricted to setting policy rules with a GUI or even a Word document, which makes rules difficult to transfer, repeat, version, or test. Policy-as-code solves this by representing policies in human-readable text files, which allows them to adopt software practices such as version control, automation, and testing. Importantly, a policy-as-code framework includes a mechanism to enforce the rules created.

With containers, standardizing on a common set of best practices for software vulnerabilities, package usage, secrets management, Dockerfiles, etc. is an excellent place to start. Some examples of policy rules are:

  • Should all Dockerfiles have effective USER instruction? Yes. If undefined, warn me.
  • Should the FROM instruction only reference a set of “trusted” base images? Yes. If not from the approved list, fail this policy evaluation.
  • Are AWS keys ever allowed inside of the container image filesystem? No. If they are found, fail this policy evaluation.
  • Are containers coming from DockerHub allowed in production? No. If they attempt to be used, fail this policy evaluation.

The above examples demonstrate how the Dockerfile analysis and the secrets found during image inspection can prove extremely useful when creating policy. Most importantly, all of these policy rules map to information available prior to running a container.
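
As a sketch of what this looks like as code, the bundle below encodes the first and third examples above using Anchore's JSON policy bundle format (gate and trigger names follow Anchore Engine's documented policy checks of this era; verify field names against your version before use):

{
  "id": "example-bundle",
  "version": "1_0",
  "name": "Example policy bundle",
  "whitelists": [],
  "mappings": [
    {
      "id": "default-mapping",
      "name": "default",
      "registry": "*",
      "repository": "*",
      "image": {"type": "tag", "value": "*"},
      "policy_id": "example-policy",
      "whitelist_ids": []
    }
  ],
  "policies": [
    {
      "id": "example-policy",
      "version": "1_0",
      "name": "Example rules",
      "rules": [
        {
          "id": "rule-1",
          "gate": "dockerfile",
          "trigger": "effective_user",
          "action": "WARN",
          "params": [
            {"name": "users", "value": "root"},
            {"name": "type", "value": "blacklist"}
          ]
        },
        {
          "id": "rule-2",
          "gate": "secret_scans",
          "trigger": "content_regex_checks",
          "action": "STOP",
          "params": [
            {"name": "content_regex_name", "value": "AWS_ACCESS_KEY"}
          ]
        }
      ]
    }
  ]
}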

Integrating Policy Enforcement

With policy rules clearly defined as code and shared across multiple teams, the enforcement component can be integrated freely into the Continuous Integration / Continuous Delivery workflow. The concept of “shifting left” is important here: the more testing and checks individuals and teams incorporate earlier in their software development pipelines, the less costly it is when changes need to be made. Simply put, prevention is better than a cure.

Integration as Part of a CI Pipeline

Incorporating container image inspection and policy rule enforcement into new or existing CI pipelines immediately adds security and compliance requirements as part of the build, blocking important security risks from ever making their way into production environments. For example, if a policy rule explicitly disallows a root user defined in the Dockerfile, failing the build pipeline of a non-compliant image before it is pushed to a production registry is a fundamental quality gate. Developers are then prompted to remediate the issue that caused the build failure and modify their commit to be compliant.

Below depicts how this process works with Anchore:

Anchore provides an API endpoint where the CI pipeline can send an image for analysis and policy evaluation. This provides simple integration into any workflow, agnostic of the CI system being used. When the policy evaluation is complete, Anchore returns a PASS or FAIL output based on the policy rules defined. From this, the user can choose whether or not to fail the build pipeline.
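
In practice, a pipeline stage can drive this through the Anchore CLI (the registry path and tag here are placeholders):

# Submit the freshly built image for analysis
anchore-cli image add registry.example.com/myapp:${BUILD_TAG}

# Block until the analysis completes
anchore-cli image wait registry.example.com/myapp:${BUILD_TAG}

# Evaluate against the active policy bundle; this exits non-zero on a
# FAIL result, which in turn fails the CI job
anchore-cli evaluate check registry.example.com/myapp:${BUILD_TAG}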

Integration with Kubernetes Deployments

Adding an admission controller that gates the execution of container images in Kubernetes according to your policy standards is a critical way to validate which containers are allowed to run on your cluster. Very simply: admit the containers I trust, reject the ones I don’t. Some examples of this are:

  • Reject an image if it is being pulled directly from DockerHub.
  • Reject an image if it has high or critical CVEs that have fixes available.

This integration allows Kubernetes operators to enforce policy and security gates for any pod that is requested on their clusters before they even get scheduled.

Below depicts how this process works with Anchore and the Anchore Kubernetes Admission Controller:

The key takeaway from both of these points of integration is that they are occurring before ever running a container image. Anchore provides users with a full suite of policy checks which can be mapped to any detail uncovered during the image inspection. When discussing this with customers, we often hear, “I would like to scan my container images for vulnerabilities.” While this is a good first step to take, it is the tip of the iceberg when it comes to what is available inside of a container image.

Conclusion

With immutable infrastructure, once a container image artifact is created, it does not change. To make changes to the software, good practice tells us to build a new container image, push it to a container registry, kill the existing container, and start a new one. As explained above, an inspection gathers a wealth of useful static information about a container, so another good practice is to use this information as soon as it is available, wherever it makes sense in the development workflow. The more policies that can be created and enforced as code, the faster and more effectively IT organizations will be able to deliver secure software to their end customers.

Looking to learn more about how to utilize a policy-based security posture to meet DoD compliance standards like cATO or CMMC? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories. Get caught up with the content below:

Success With Anchore, Best Practices from our Customers

Successful container and CI/CD security encompass not only vulnerability analysis but also a mindset based on integrating security with every step of the Software Development Life Cycle (SDLC). At Anchore, we believe incorporating early and frequent scanning with policy enforcement can help reduce overall security risk. This blog shares some of the elements that have helped our customers be successful with Anchore.

Scan Early/Scan Often

Anchore allows you to start analyzing right away, without changing your existing processes. There is no downside to putting an `anchore-cli image add <new image>` at the end of your CI/CD pipeline and exploring how to use the results of vulnerability scans or policy evaluations later. Since all images added to Anchore remain until you decide to remove them, analyses can be revisited later and new policies can be applied as your organizational needs evolve.
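
A minimal sketch of that pattern (the image name is a placeholder):

# At the end of the pipeline: submit the image; analysis is asynchronous,
# so this adds no gating step to the build
anchore-cli image add registry.example.com/myapp:latest

# Later, revisit the stored analysis as needs evolve
anchore-cli image vuln registry.example.com/myapp:latest all
anchore-cli evaluate check registry.example.com/myapp:latest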

Scanning early catches vulnerabilities and policy violations before deployment into production. By scanning during the CI/CD pipeline, issues can be resolved before runtime, narrowing the later focus to issues that are solely runtime-related. This “shift left” mentality moves application quality and security considerations closer to the developer, allowing issues to be addressed sooner in the delivery chain. Whether through CI/CD build plugins (Jenkins, CircleCI, etc.) or repository image scanning, adding security analysis to your delivery pipeline can reduce the time it takes to resolve issues as well as lower the costs associated with fixing security issues in production.

To learn more about Anchore’s CI/CD integrations, take a look at our CI/CD documentation.

To learn more about repository image analysis, see our Analyzing Images documentation.

Custom Policy Creation

At Anchore, we believe in more than just CVEs. Anchore policies act as a one-stop shop for checking Dockerfile best practices and for keeping enforcement in line with your organizational security standards, such as secret storage and application configuration within your container. At a high level, policy bundles contain the policies themselves, whitelists, mappings, whitelisted images, and blacklisted images.

Policies can be configured to be compliant with NIST, ISO, and banking regulations, among many others. As industry regulations and auditing regularly affect the time to deployment, performing policy checks early in the CI/CD pipeline can help increase the speed of deployments without sacrificing auditing or regulation requirements. At a finer-grained level, custom policies can enforce organizational best practices at an earlier point in the pipeline, enabling cross-group buy-in between developers and security personnel.

To learn more about working with Anchore policies, please see our Working with Policies documentation.

Policy Enforcement with Notifications

To build upon the above topic, another best practice is enabling notifications. With a typical CI/CD process, build failures prompt notifications to fix the build, whether it is due to a missing dependency or simply a typo. With Anchore, builds can be configured to fail when an analysis or a policy evaluation fails, prompting attention to the issue.

Taking this a step further, Anchore enables notifications through webhooks that can notify the appropriate personnel when a CVE is updated or a policy evaluation status changes. By subscribing to tags and images, you can receive notifications when images are updated, when CVEs are added or removed, and when the policy status of an image changes, letting you take a proactive approach to security and compliance. Staying on top of these notifications allows remediation and triage to happen promptly.
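
Subscriptions are managed per tag; for example, with the CLI:

# Receive notifications when this tag is updated, when its CVE list
# changes, or when its policy evaluation status changes
anchore-cli subscription activate tag_update docker.io/library/node:latest
anchore-cli subscription activate vuln_update docker.io/library/node:latest
anchore-cli subscription activate policy_eval docker.io/library/node:latest

# Review what is currently subscribed
anchore-cli subscription list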

To learn more about using webhooks for notifications, please see our Webhook Configuration documentation.

For an example of how notifications can be integrated with Slack, please see our Using Anchore and Slack for Container Security Notifications blog.

Archiving Old Analysis Data

There may be times that older image analysis data is no longer needed in your working set but, for security compliance reasons, the data needs to be retained. Adding an image to the archive includes all analyses, policy evaluations, and tags for an image, allowing you to delete the image from your working set. Manually moving images to an archive can be cumbersome and time-consuming, but automating the process reduces the number of images in your working set while still retaining the analysis data.

Archiving analysis data backs it up and removes it from the working set; it can always be moved back should a policy change, an organizational shift occur, or you simply want it back. Keeping the live set of images limited to what is current also matters for performance: over time, it becomes cumbersome to continuously run policy evaluations and vulnerability scans against images that are old and potentially unimportant. Anchore’s archiving service makes this simple to automate via rules added to the analysis archive. With such rules, images can be archived automatically based on their analyzed date (older than a specified number of days), specific tags, or the number of images, making it easier to work with the newer images your organization cares about while retaining the analysis data of older ones.
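
A sketch of such a rule, created via the analysis archive API (the endpoint path and field names here follow the archive documentation of this era and should be verified against your version; the credentials are the defaults used elsewhere in this post):

# Archive analyses older than 90 days, keeping the most recent tag version
curl -u admin:foobar -X POST -H "Content-Type: application/json" \
  -d '{"analysis_age_days": 90, "tag_versions_newer": 1, "transition": "archive", "system_global": true}' \
  http://localhost:8228/v1/archives/rules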

To learn more about archiving old analysis data, please see our Using the Analysis Archive documentation.

To learn more about working with archiving rules, please see our Working with Archive Rules documentation.

Leveraging External Object Storage to Offload Database Storage

By default, Anchore Engine uses a PostgreSQL database to store structured data for images, tags, policies, subscriptions, and image metadata, but other data in the system is less structured and tends to be much larger. For that data, key-value access patterns make more sense: image manifests, analysis reports, and policy evaluations. Anchore therefore has an internal object storage interface that, while it defaults to the same PostgreSQL database, can be configured to use external object storage providers for simpler capacity management and lower costs.

Offloading this bulk data eliminates the need to scale out PostgreSQL while speeding up its performance. As the database grows, queries against it and writes to it slow down, in turn slowing Anchore’s productivity. By leveraging an external object store and removing bulk data from PostgreSQL, only the relevant image metadata is stored in the database, while the other data is stored externally and can be archived at lower cost.
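
As a sketch, an S3-backed object store is configured in the catalog service section of config.yaml roughly as follows (bucket name, region, and credentials are illustrative; the top-level key has been named archive in some older releases, so check the driver documentation linked below for your version):

services:
  catalog:
    object_store:
      compression:
        enabled: true
        min_size_kbytes: 100
      storage_driver:
        name: s3
        config:
          access_key: MY_ACCESS_KEY
          secret_key: MY_SECRET_KEY
          region: us-east-1
          bucket: anchore-object-store   # bulk data lands here, not in PostgreSQL
          create_bucket: true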

To learn more about using any of our supported external object storage drivers, please see our Object Storage documentation.

Conclusion

Leveraging some of the best practices that have made our customers successful can help your organization achieve the same success with Anchore. As an open-source community, we value feedback and hearing about what best practices the community has developed.

Anchore Talk Webinar, Redefining the Software Supply Chain

We are pleased to announce Anchore Talks, a series of short webinars to help improve Kubernetes and Docker security best practices. We believe it is important to have excellent security measures in place when adopting containers, and that drives every decision we make when developing Anchore Enterprise and Anchore Engine. These talks, no longer than 15 minutes each, will share our perspective on the challenges and opportunities presented to today’s DevSecOps professionals and offer clear, actionable advice for securing the build pipeline.

Containers can create quite a few headaches for security professionals because they increase velocity and allow developers to pull from a wider variety of software. Fortunately, they can also offer more efficient tracking and oversight for your software supply chain, making it much easier to scan, find and patch vulnerabilities during the build process. Using containers, security can be baked in from the start, keeping the velocity of the build process high.

Anchore VP of Product Neil Levine has prepared our first Anchore Talk on this new approach to security, starting with how developers can source containers responsibly and finishing with container immutability and its impact on audits and compliance. You won’t want to miss this brief 10-15 minute talk, live on October 28th at 10 am PST! It will also be available on-demand once you have signed up for a BrightTALK account. If keeping systems secure is your full-time job, we have some exciting content coming your way.

Anchore and Google Distroless

The most recent open source release of Anchore Engine (0.5.1), which is also available as part of Anchore Enterprise 2.1, added support for Google Distroless containers. But what are they and why is the addition notable?

When containers were first being adopted, it was natural for many users to think of them as stripped-down virtual machines that booted faster. Indeed, if you look at the container images published by the operating system vendors, you can see that in most instances they take their stock distribution and remove all the parts they consider unnecessary. This still leaves images that are pretty large, in the hundreds of megabytes, and so some alternative distributions have become popular, notably Alpine, which is based on BusyBox and the musl C library and has its roots in the embedded space. Now images can be squeezed into the tens of megabytes, enabling faster builds and downloads and a reduced surface area for vulnerabilities.

However, these images still ape VMs, enabling shell access and containing package managers designed to let users grow and modify them. Google wanted a different approach, one that treats a container image as essentially a language runtime environment curated by the application teams themselves; the only thing that should be added to it is the application itself. The resulting family of images, known as Distroless, is only slightly larger than thin distros like Alpine but, by contrast, has better compatibility by using standard libraries (e.g., glibc rather than musl).

As Google Distroless images are based on Debian packages, Anchore is now able to scan and report on any security findings in the base images as well as in the language files installed.

The images are all hosted on the Google Container Registry (GCR) and are available with Java and C (with experimental support also available for Python, NPM, Node and .Net). We can add them using the regular syntax for Anchore Engine on the CLI:

anchore-cli image add gcr.io/distroless/java:11

Being so small, the images are typically scanned in a minute or less. Using the Anchore Enterprise GUI, you can see the image is detected as being Debian:

Looking at its contents, you can see the image has very little in it – only 19 Debian packages, including libc6:
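
You can pull the same details from the CLI once the analysis completes:

# List the Debian packages found in the distroless image
anchore-cli image content gcr.io/distroless/java:11 os

# Evaluate the image against the active policy bundle
anchore-cli evaluate check gcr.io/distroless/java:11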

As standard Debian packages, Anchore can scan these and alert for any vulnerabilities. If there are fixes available, you can configure Anchore to trigger a rebuild of the image.

Only one warning is generated on the image by the standard Anchore policy, relating to the lack of a Dockerfile health check; other than that, this image, given its lean nature, is vulnerability free.

If you are using a compiled binary application like Java, another new feature allows you to add the hash of the binary to the Anchore policy check, which means you can enforce a strict compliance check on every build that goes through your CI/CD pipeline. This ensures that no modifications are made to the base image other than the application being layered on top.

Users who still need access to a shell for debugging or viewing locally stored log files may still prefer Alpine or other minimal images, but for those fully invested in the cloud-native deployment model, where containers conform to 12-factor best practices, Google Distroless images are a great asset to have in your development process.

You can find more information about Google Distroless on GitHub, and existing users of either Anchore Engine or Anchore Enterprise need only download the latest version to enable support.

Anchore Engine 0.5.1 Release

We are pleased to announce the immediate availability of Anchore Engine 0.5.1, the latest point update to our open source software that helps users enforce container security, compliance, and best-practice requirements. This update adds bug fixes and performance improvements, as well as a new policy gate check and support for Google’s distroless images.

Google’s distroless images are helping businesses tighten up security while speeding up the build, scan, and patch process for DevOps teams. Because these images only contain the application’s resources and runtime dependencies, the attack surface is significantly reduced and the process of scanning for and patching vulnerabilities becomes much simpler. Using distroless container images can help DevOps teams save time and become more agile in their development pipeline while keeping security at the forefront. For more documentation on distroless container images, take a look here.

Also in this release, our engineers have taken policy checks to the next level with our secret search gate. Previously, the secret search gate ensured that sensitive information was not left in plain sight for attackers to exploit. Now you can also use it to verify that required content is not missing from configuration files within your image.

If you haven’t already deployed Anchore Engine, you can stand it up alongside your favorite cloud-native tools and begin hardening your container images and adhering to federally accepted compliance standards and best practices.

We are incredibly thankful for our open source community and can’t wait to share more project updates! For more information about the release, check out our release notes.

Visit AWS Marketplace For Anchore Engine on EKS

In this post, I will walk through the steps required to deploy the Anchore Engine Marketplace Container Image Solution on Amazon EKS with Helm. Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that makes it easy for users to run Kubernetes on AWS without needing to install and operate their own clusters. For many users looking to deploy Anchore Engine, Amazon EKS is a simple choice to reap the benefits of Kubernetes without the operational overhead.

Prerequisites

Before you begin, please make sure you have fulfilled the prerequisites detailed below. At a minimum, you should be comfortable working with the command-line and have a general understanding of how to work with Kubernetes applications.

  • A running Amazon EKS cluster with worker nodes launched. See EKS Documentation for more information on this setup.
  • Helm client and server installed and configured with your EKS cluster.
  • Anchore CLI installed on localhost.

Once you have an EKS cluster up and running with worker nodes launched, you can verify via the following command.

$ kubectl get nodes
NAME                             STATUS   ROLES    AGE   VERSION
ip-192-168-2-164.ec2.internal    Ready    <none>   10m   v1.14.6-eks-5047ed
ip-192-168-35-43.ec2.internal    Ready    <none>   10m   v1.14.6-eks-5047ed
ip-192-168-55-228.ec2.internal   Ready    <none>   10m   v1.14.6-eks-5047ed

Anchore Engine Marketplace Listing

Anchore Engine allows users to bring industry-leading open source container security and compliance to their container landscape in EKS. Deployment is done using the Anchore Engine Helm Chart, which can be found on GitHub. So if you are already running an EKS cluster with Helm configured, you can now deploy Anchore Engine directly from the AWS marketplace to tighten up your container security posture.

To get started, navigate to the Anchore Engine Marketplace Listing, and select “Continue to Subscribe”, “Continue to Configuration”, and “Continue to Launch”.

On the Launch Configuration screen, select “View container image details”.

Selecting this will present the popup depicted below, displaying the Anchore Engine container images you will be required to pull down and use with your deployment. Two container images are required: Anchore Engine and PostgreSQL.

Next, follow the steps on the popup to verify you are able to pull down the required images (Anchore Engine and Postgres) from Amazon ECR.

Anchore Custom Configuration

Before deploying the Anchore software, you will need to create a custom anchore_values.yaml file to pass to the Anchore Engine Helm Chart during installation. The reason is that the default Helm chart references different container images than the ones on AWS Marketplace. Additionally, in order to expose the application on the public internet, you will need to configure ingress resources.

As mentioned above, you will need to reference the Amazon ECR Marketplace images in this Helm chart. You can do so by populating your custom anchore_values.yaml file with image location and tag as shown below.

postgresql:
  image: 709373726912.dkr.ecr.us-east-1.amazonaws.com/e4506d98-2de6-4375-8d5e-10f8b1f5d7e3/cg-3671661136/docker.io/library/postgres
  imageTag: v.0.5.0-latest
  imagePullPolicy: IfNotPresent
anchoreGlobal:
  image: 709373726912.dkr.ecr.us-east-1.amazonaws.com/e4506d98-2de6-4375-8d5e-10f8b1f5d7e3/cg-3671661136/docker.io/anchore/anchore-engine
  imageTag: v.0.5.0-latest
  imagePullPolicy: IfNotPresent

Note: Since the container images live in a private ECR registry, you will also need to create a secret with valid Docker credentials in order to fetch them.

Example Steps to Create a Secret

# RUN me where kubectl is available,& make sure to replace account,region etc
# Set ENV vars
ACCOUNT=123456789
REGION=my-region
SECRET_NAME=${REGION}-ecr-registry
EMAIL=user@example.com   # can be anything

#
# Fetch token (which will expire in 12 hours)
#

TOKEN=`aws ecr --region=$REGION get-authorization-token --output text --query authorizationData[].authorizationToken | base64 -d | cut -d: -f2`

#
# Create registry secret
#
kubectl create secret docker-registry $SECRET_NAME --docker-server=https://${ACCOUNT}.dkr.ecr.${REGION}.amazonaws.com --docker-username=AWS --docker-password="${TOKEN}" --docker-email="${EMAIL}"

Once you have successfully created the secret, you will need to add ImagePullSecrets to a service account.

I recommend reading more about how you can add ImagePullSecrets to a service account here.
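
As a minimal sketch, the secret created above can be attached to the default service account like so (adjust the service account name to match your deployment):

# Make pods using the default service account pull with the ECR secret
kubectl patch serviceaccount default \
  -p "{\"imagePullSecrets\": [{\"name\": \"${SECRET_NAME}\"}]}"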

Ingress (Optional)

One of the simplest ways to expose Kubernetes applications on the public internet is through ingress. On AWS, an ALB ingress controller can be used. It is important to note that this step is optional, as you can still run through a successful installation of the software without it. You can read more about Kubernetes Ingress with AWS ALB Ingress Controller here.

Anchore Ingress Configurations

Just as we did above, any changes to the Helm chart configuration should be made in your anchore_values.yaml file.

Ingress

First, you should create an ingress section in your anchore_values.yaml file as shown in the code block below. The key properties here are apiPath and annotations.

ingress:
  enabled: true
  # Use the following paths for GCE/ALB ingress controller
  apiPath: /v1/*
  # uiPath: /*
    # apiPath: /v1/
    # uiPath: /
    # Uncomment the following lines to bind on specific hostnames
    # apiHosts:
    #   - anchore-api.example.com
    # uiHosts:
    #   - anchore-ui.example.com
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing

Anchore Engine API Service

Next, you can create an anchoreApi section in your anchore_values.yaml file as shown in the code block below. The key property here is changing service type to NodePort.

# Pod configuration for the anchore engine api service.
anchoreApi:
  replicaCount: 1

  # Set extra environment variables. These will be set on all api containers.
  extraEnv: []
    # - name: foo
    #   value: bar

  # kubernetes service configuration for anchore external API
  service:
    type: NodePort
    port: 8228
    annotations: {}

AWS EKS Configurations

Once the Anchore configuration is complete, you can move on to the EKS-specific configuration. The first step is to create an IAM policy to give the ingress controller you will be creating the proper permissions. In short, you need to allow permission to work with EC2 resources and to create a load balancer.

Create the IAM Policy to Give the Ingress Controller the Right Permissions

  1. Go to the IAM Console.
  2. Choose the section Roles and search for the NodeInstanceRole of your EKS worker nodes.
  3. Create and attach a policy using the contents of the template iam-policy.json

Next, deploy RBAC Roles and RoleBindings needed by the AWS ALB Ingress controller from the template below:

wget https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.0.0/docs/examples/rbac-role.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.0.0/docs/examples/rbac-role.yaml

Update ALB Ingress

Download the ALB Ingress manifest and update the cluster-name section with the name of your EKS cluster name.

wget https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.0.1/docs/examples/alb-ingress-controller.yaml
# Name of your cluster. Used when naming resources created
            # by the ALB Ingress Controller, providing distinction between
            # clusters.
            - --cluster-name=anchore-prod

Deploy the AWS ALB Ingress controller YAML:

kubectl apply -f alb-ingress-controller.yaml

Installation

Now that all of the custom configurations are completed, you are ready to install the Anchore software.

First, ensure you have the latest Helm Charts by running the following command:

helm repo update

Install Anchore Engine

Next, run the following command to install the Anchore Engine Helm chart in your EKS cluster:

helm install --name anchore-engine -f anchore_values.yaml stable/anchore-engine

The command above will install Anchore Engine using the custom anchore_values.yaml file you’ve created.

You will need to give the software a few minutes to bootstrap.

In order to see the ingress resource we have created, run the following command:

$ kubectl describe ingress
Name:             anchore-enterprise-anchore-engine
Namespace:        default
Address:          xxxxxxx-default-anchoreen-xxxx-xxxxxxxxx.us-east-1.elb.amazonaws.com
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     
        /v1/*   anchore-enterprise-anchore-engine-api:8228 (192.168.42.122:8228)
Annotations:
  alb.ingress.kubernetes.io/scheme:  internet-facing
  kubernetes.io/ingress.class:       alb
Events:
  Type    Reason  Age   From                    Message
  ----    ------  ----  ----                    -------
  Normal  CREATE  14m   alb-ingress-controller  LoadBalancer 904f0f3b-default-anchoreen-d4c9 created, ARN: arn:aws:elasticloadbalancing:us-east-1:077257324153:loadbalancer/app/904f0f3b-default-anchoreen-d4c9/4b0e9de48f13daac
  Normal  CREATE  14m   alb-ingress-controller  rule 1 created with conditions [{    Field: "path-pattern",    Values: ["/v1/*"]  }]

The output above shows you that a Load Balancer has been created in AWS with an address you can hit in the browser. A great tool to validate that the software is up and running is the Anchore CLI. Additionally, you can use this tool to verify that the API route hostname is configured correctly:

Note: Read more on Configuring the Anchore CLI

$ anchore-cli --url http://anchore-engine-anchore-engine.apps.54.84.147.202.nip.io/v1 --u admin --p foobar system status
Service analyzer (anchore-enterprise-anchore-engine-analyzer-cfddf6b56-9pwm9, http://anchore-enterprise-anchore-engine-analyzer:8084): up
Service apiext (anchore-enterprise-anchore-engine-api-5b5bffc79f-vmwvl, http://anchore-enterprise-anchore-engine-api:8228): up
Service simplequeue (anchore-enterprise-anchore-engine-simplequeue-dc58c69c9-5rmj9, http://anchore-enterprise-anchore-engine-simplequeue:8083): up
Service policy_engine (anchore-enterprise-anchore-engine-policy-84b6dbdfd-fvnll, http://anchore-enterprise-anchore-engine-policy:8087): up
Service catalog (anchore-enterprise-anchore-engine-catalog-b88d4dff4-jhm4t, http://anchore-enterprise-anchore-engine-catalog:8082): up

Engine DB Version: 0.0.11
Engine Code Version: 0.5.0

Conclusion

With Anchore installed on EKS, Security and DevOps teams can seamlessly integrate comprehensive container image inspection and policy enforcement into their CI/CD pipeline to ensure that images are analyzed thoroughly for known vulnerabilities before deploying them into production. This will not only avoid the pain of finding and remediating vulnerabilities at runtime but also allow the end-user to define and enforce custom security policies to meet their specific company’s internal policies and any applicable regulatory security standards. We are happy to provide users with the added simplicity of deploying Anchore software on Amazon EKS with Helm as a validated AWS Marketplace container image solution.