[Updated post as of October 1, 2020]

In this post, I will walk through the steps for deploying Anchore Enterprise v2.4 on Amazon EKS with Helm. Anchore currently maintains a Helm Chart which we will use to install the necessary Anchore services.


Prerequisites

  • A running Amazon EKS cluster with worker nodes launched. See the EKS documentation for more information.
  • Helm (v3) client installed and configured.
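Both tools can be sanity-checked from the shell before starting (your version numbers will differ from mine):

$ kubectl version --short
$ helm version --short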

Before we proceed, let’s confirm our cluster is up and running and we can access the kube-api server of our cluster:

Note: Since we will be deploying all services including the database as pods in the cluster, I have deployed a three-node cluster with (2) m5.xlarge and (1) t3.large instances for a basic deployment. I’ve also given the root volume of each node 65GB (195GB total) since we will be using the cluster for persistent storage of the database service.

$ kubectl get nodes
NAME                                       STATUS   ROLES    AGE   VERSION
ip-10-0-1-66.us-east-2.compute.internal    Ready    <none>   1d    v1.16.12-eks
ip-10-0-3-15.us-east-2.compute.internal    Ready    <none>   1d    v1.16.12-eks
ip-10-0-3-157.us-east-2.compute.internal   Ready    <none>   1d    v1.16.12-eks

Configuring the Ingress Controller

The ALB Ingress Controller triggers the creation of an Application Load Balancer (ALB) and the necessary supporting AWS resources whenever an Ingress resource is created on the cluster with the kubernetes.io/ingress.class: alb annotation.

To support external access to the Enterprise UI and Anchore API, we will need the cluster to create an ALB for our deployment.

To enable the ALB Ingress Controller pod to create the load balancer and required resources, we need to update the IAM role of the worker nodes and tag the cluster subnets the ingress controller should associate the load balancer with.

  • Download the sample IAM Policy from AWS and attach it to your worker node role either via console or aws-cli.
  • Add the following tags to your cluster’s public subnets:
Key                                    Value
kubernetes.io/cluster/<cluster-name>   shared
kubernetes.io/role/elb                 1
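If you prefer to script these two steps, the commands below sketch one way to do it with the AWS CLI. The policy URL matches the v1.1.8 release used throughout this post; the policy name, role name, account ID, subnet IDs, and cluster name are placeholders you will need to substitute for your environment:

# Download the sample policy and create it in IAM
$ curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/iam-policy.json
$ aws iam create-policy --policy-name ALBIngressControllerIAMPolicy --policy-document file://iam-policy.json

# Attach the policy to the worker node role
$ aws iam attach-role-policy --role-name <worker-node-role> --policy-arn arn:aws:iam::<account-id>:policy/ALBIngressControllerIAMPolicy

# Tag the cluster's public subnets so the controller can discover them
$ aws ec2 create-tags --resources <subnet-id-1> <subnet-id-2> --tags Key=kubernetes.io/cluster/<cluster-name>,Value=shared Key=kubernetes.io/role/elb,Value=1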

Next, we need to create a Kubernetes service account in the kube-system namespace, a cluster role, and a cluster role binding for the ALB Ingress Controller to use:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/rbac-role.yaml

With the service account and cluster role resources deployed, download the AWS ALB Ingress Controller deployment manifest to your working directory:

$ wget https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/alb-ingress-controller.yaml

Under the container specifications of the manifest, uncomment the --cluster-name flag and enter the name of your cluster:

 # Name of your cluster. Used when naming resources created
 # by the ALB Ingress Controller, providing distinction between
 # clusters.
 - --cluster-name=<eks_cluster_name>

Save and close the deployment manifest, then deploy it to the cluster:

$ kubectl apply -f alb-ingress-controller.yaml
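It may take a moment for the controller to start. You can confirm the pod is running with a label selector (the label below matches the v1.1.8 manifest; verify it against your copy of the file):

$ kubectl -n kube-system get pods -l app.kubernetes.io/name=alb-ingress-controller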

Installing the Anchore Engine Helm Chart

To add the chart repository, run the following command:

$ helm repo add anchore https://charts.anchore.io

"anchore" has been added to your repositories

Confirm the repository was added successfully:

$ helm repo list
NAME      URL
anchore   https://charts.anchore.io
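You can also confirm the chart itself is visible from the newly added repo:

$ helm search repo anchore/anchore-engine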

Deploying Anchore Enterprise

For the purposes of this post, we will focus on getting a basic deployment of Anchore Enterprise running. For a complete set of configuration options you may include in your installation, refer to the values.yaml file in our charts repository.

Note: Refer to our blog post Configuring Anchore Enterprise on EKS for a walkthrough of common production configuration options including securing the Application Load Balancer/Ingress Controller deployment, using S3 archival and configuring a hosted database service such as Amazon RDS.

Configure Namespace and Credentials

First, let’s create a new namespace for the deployment:

$ kubectl create namespace anchore

namespace/anchore created

Enterprise services require an active Anchore Enterprise subscription (which is supplied via license file), as well as Docker credentials with permission to the private docker repositories that contain the enterprise images.

Create a Kubernetes secret in the anchore namespace with your license file:

Note: You will need to reference the exact path to your license file on your localhost. In the example below, I have copied my license to my working directory.

$ kubectl -n anchore create secret generic anchore-enterprise-license --from-file=license.yaml=./license.yaml

secret/anchore-enterprise-license created

Next, create a secret containing the Docker Hub credentials with access to the private anchore enterprise repositories:

$ kubectl -n anchore create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=<DOCKERHUB_USER> --docker-password=<DOCKERHUB_PASSWORD> --docker-email=<EMAIL_ADDRESS>

secret/anchore-enterprise-pullcreds created


Ingress

Create a new file named anchore_values.yaml in your working directory and create an ingress section with the following contents:


ingress:
  enabled: true

  # Use the following paths for GCE/ALB ingress controller
  apiPath: /v1/*
  uiPath: /*

  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing

Engine API

Below the ingress section add the following block to configure the Enterprise API:

Note: To expose the API service, we set the service type to NodePort instead of the default ClusterIP.

anchoreApi:
  replicaCount: 1

  # kubernetes service configuration for anchore external API
  service:
    type: NodePort
    port: 8228
    annotations: {}

Enable Enterprise Deployment

Next, add the following to your anchore_values.yaml file below the anchoreApi section:

anchoreEnterpriseGlobal:
  enabled: true

Enterprise UI

Like the API service, we’ll need to expose the UI service to ensure it is accessible outside the cluster. Copy the following section at the end of your anchore_values.yaml file:

anchoreEnterpriseUi:
  enabled: true
  image: docker.io/anchore/enterprise-ui:latest
  imagePullPolicy: IfNotPresent

  # kubernetes service configuration for anchore UI
  service:
    type: NodePort
    port: 443
    annotations: {}
    labels: {}
    sessionAffinity: ClientIP
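Putting the sections together, the complete anchore_values.yaml for this basic deployment looks roughly like the following. The top-level key names follow the anchore-engine chart's values.yaml; double-check them against the chart version you install:

ingress:
  enabled: true
  apiPath: /v1/*
  uiPath: /*
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing

anchoreApi:
  replicaCount: 1
  service:
    type: NodePort
    port: 8228
    annotations: {}

anchoreEnterpriseGlobal:
  enabled: true

anchoreEnterpriseUi:
  enabled: true
  image: docker.io/anchore/enterprise-ui:latest
  imagePullPolicy: IfNotPresent
  service:
    type: NodePort
    port: 443
    annotations: {}
    labels: {}
    sessionAffinity: ClientIP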

Deploying the Helm Chart

To install the chart, run the following command from the working directory:

$ helm install --namespace anchore <your_release_name> -f anchore_values.yaml anchore/anchore-engine

It will take the system several minutes to bootstrap. You can check on the status of the pods by running kubectl get pods:

$ kubectl -n anchore get pods
NAME                                                              READY   STATUS    RESTARTS   AGE
anchore-cli-5f4d697985-rdm9f                                      1/1     Running   0          14m
anchore-enterprise-anchore-engine-analyzer-55f6dd766f-qxp9m       1/1     Running   0          9m
anchore-enterprise-anchore-engine-api-bcd54c574-bx8sq             4/4     Running   0          9m
anchore-enterprise-anchore-engine-catalog-ddd45985b-l5nfn         1/1     Running   0          9m
anchore-enterprise-anchore-engine-enterprise-feeds-786b6cd9mw9l   1/1     Running   0          9m
anchore-enterprise-anchore-engine-enterprise-ui-758f85c859t2kqt   1/1     Running   0          9m
anchore-enterprise-anchore-engine-policy-846647f56b-5qk7f         1/1     Running   0          9m
anchore-enterprise-anchore-engine-simplequeue-85fbd57559-c6lqq    1/1     Running   0          9m
anchore-enterprise-anchore-feeds-db-668969c784-6f556              1/1     Running   0          9m
anchore-enterprise-anchore-ui-redis-master-0                      1/1     Running   0          9m
anchore-enterprise-postgresql-86d56f7bf8-nx6mw                    1/1     Running   0          9m

Run the following command to get details on the deployed ingress:

$ kubectl -n anchore get ingress


NAME                     HOSTS   ADDRESS                                                                 PORTS   AGE
support-anchore-engine   *       1a2b3c4-anchoreenterprise-f9e8-123456789.us-east-2.elb.amazonaws.com   80      4h

You should see the address of the created load balancer and can use it to navigate to the Enterprise UI:
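With the address in hand, you can also point anchore-cli at the API path to confirm the services report as up. The admin password below is the chart's default; substitute your own if you have changed it:

$ ANCHORE_CLI_URL=http://<ALB-address>/v1/ ANCHORE_CLI_USER=admin ANCHORE_CLI_PASS=foobar anchore-cli system status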

Anchore Enterprise login screen.


You now have an installation of Anchore Enterprise up and running on Amazon EKS. The complete contents of the walkthrough are available in the accompanying GitHub repo. For more information on Anchore Engine or Enterprise, you can join our community Slack channel or request a technical demo.