4 Kubernetes Security Best Practices

Kubernetes security best practices are now a necessity as Kubernetes becomes the de facto standard for container orchestration. Many of these best practices focus on securing Kubernetes workloads, and managers, developers, and sysadmins need to make a habit of instituting them early in their move to Kubernetes orchestration.

Earlier this year, respondents to the Anchore 2021 Software Supply Chain Security Report reported using a median of 5 container platforms, testimony to the growing importance of Kubernetes in the market. "Standalone" Kubernetes (not part of a PaaS service) is used most often, by 71 percent of respondents. These instances may run on-premises, through a hosting provider, or on a cloud provider's infrastructure. The second most used container platform is Amazon ECS (56%), a platform-as-a-service (PaaS) offering. Tied for third place (53%) are Amazon EKS, Azure Kubernetes Service, and Red Hat OpenShift.

A common industry definition of a workload is the amount of activity performed, or capable of being performed, within a specified period by a program or application running on a computer. The definition is often loosely applied and can describe anything from a simple "hello world" program to a complex monolithic application. Today, the terms workload, application, software, and program are used interchangeably.

Best Practices

Here are some Kubernetes security best practices to keep in mind.

1. Enable Role-Based Access Control

Implementing and configuring Role-Based Access Control (RBAC) is necessary when securing your Kubernetes environment and workloads.

Kubernetes 1.6 and later enable RBAC by default (later for some managed Kubernetes services); however, if you've upgraded since then and haven't changed your configuration, you should double-check it. Because of how Kubernetes authorization controllers are combined, you will have to both enable RBAC and disable legacy Attribute-Based Access Control (ABAC).

Once you start enforcing RBAC, you still need to use it effectively. You should avoid cluster-wide permissions in favor of namespace-specific permissions. Don’t give just anyone cluster admin privileges, even for debugging – it is much more secure to grant access only as needed.
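
As a minimal sketch of what namespace-scoped access can look like, the following Role and RoleBinding grant read-only access to pods in a single namespace (the namespace and group names here are illustrative assumptions, not values from this article):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a        # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]          # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: read-pods
subjects:
- kind: Group
  name: team-a-devs        # hypothetical group from your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Binding a narrowly scoped Role like this, rather than cluster-admin, keeps debugging access contained to the namespace that actually needs it.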

2. Perform Vulnerability Scanning of Containers in the Pipeline

Setting up automated Kubernetes vulnerability scanning of containers in your DevSecOps pipelines and registries is essential to workload security. When you automate visibility, monitoring, and scanning across the container lifecycle, you can remediate more issues in development, before your containers reach your production environment.

Another element of this best practice is to have the tools and processes in place to enable the scanning of Kubernetes secrets and private registries. This is another essential step as software supply chain security continues to gain a foothold across industries. 
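
As a rough sketch of what this can look like in practice, a CI stage can submit the freshly built image to Anchore using the anchore-cli commands shown later in this post and fail the build on a policy violation. The registry, tag, user, and environment variables below are placeholders, not values from this article:

# Submit the image built earlier in the pipeline for analysis
anchore-cli --u ci_user --p "$ANCHORE_PASS" image add registry.example.com/myapp:$BUILD_TAG

# Block until analysis completes, then fail the stage if policy evaluation fails
anchore-cli --u ci_user --p "$ANCHORE_PASS" image wait registry.example.com/myapp:$BUILD_TAG
anchore-cli --u ci_user --p "$ANCHORE_PASS" evaluate check registry.example.com/myapp:$BUILD_TAG

Because evaluate check returns a non-zero exit code when the policy result is a fail, most CI systems will stop the pipeline at that point.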

3. Keep a Secret

A secret in Kubernetes contains sensitive information, such as a password or token. Even though a pod cannot access the secrets of another pod, it’s vital to keep a secret separate from an image or pod. A person with access to the image would also have access to the secret. This is especially true for complex applications that handle numerous processes and have public access.
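
For example, instead of baking a credential into an image, you can create the secret separately and inject it at runtime. A minimal sketch (the names, key, and image are illustrative placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials           # hypothetical secret name
type: Opaque
stringData:
  password: "changeme"           # placeholder value; set via your secrets workflow
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: app
    image: registry.example.com/demo-app:latest   # placeholder image
    env:
    - name: DB_PASSWORD          # injected at runtime, never stored in the image
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password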

4. Follow your CSP’s Security Guidelines

If you're running Kubernetes in the cloud, consult your cloud service provider's guidelines for container workload security; each of the major CSPs publishes documentation on the topic.

Along with these security guidelines, you may want to consider cloud security certifications for your cloud and security team members. CSPs are constantly evolving their security offerings, so consulting the documentation only when you need it may not be enough for your organization's security and compliance posture.

Final thought

Kubernetes security best practices need to become second nature to operations teams as their Kubernetes adoption grows. IT management needs to work with their teams to ensure the best practices in this post and others make it into standard operating procedures if they aren’t already.

Want to learn more about container security practices? Check out our Container Security Best Practices That Scale webinar, now on-demand!

Kubernetes Adoption by the Numbers

Our recent 2021 Anchore Supply Chain Security Survey sheds some light on Kubernetes adoption and growth in the enterprise as it pertains to running container workloads. 

For this blog post, container platforms based on Kubernetes are platforms that run containerized applications, whether during development and testing, staging, or production. These platforms may run in house, through a hosting provider, or come from a cloud provider or another vendor.

K8s Stands Alone

Perhaps the most interesting Kubernetes stat in the survey is that 71% of respondents use a "standalone" version of Kubernetes that isn't part of a platform-as-a-service (PaaS) offering; instead, it runs on-premises or on cloud infrastructure as a service (IaaS).

The second most used container platform is Amazon Elastic Container Service (ECS) at 56%.

Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service, and Red Hat OpenShift are each used by 53% of respondents for container management and orchestration.

We're at an interesting point for Kubernetes adoption, as these numbers show. While there's a well-known Kubernetes skills gap, organizations are still relying on their own teams, most likely augmented with outside contractors, to deploy and operate Kubernetes. And while the major cloud service providers (CSPs) are the logical platform for outsourcing Kubernetes infrastructure and the related backend management tasks, the numbers show they are still gaining mindshare in the Kubernetes market.

Container Platforms Used

K8s and Large Workloads

Cloud-native software development is now delivering at enterprise scale on major development projects that include 1,000+ containers. Here's the spread of container platform adoption on these business- and mission-critical projects:

  • Standalone Kubernetes (7%)
  • Amazon ECS (7%)
  • Amazon EKS (7%)
  • Azure Kubernetes Service (7%)
  • SUSE-Rancher Labs (6%)

This tight spread of K8s platforms paints an interesting picture of the scale at which these large enterprise projects operate. Standalone Kubernetes, Amazon ECS, Amazon EKS, and Azure Kubernetes Service (AKS) are all tied. The continued presence of standalone Kubernetes is a testimony to early adopters and the growing reliance on open source software in large enterprises.

It'll be interesting to revisit this question next year, after large enterprises have gone through more than a year of COVID-19-driven cloud migrations, which could give CSP offerings a decided advantage in the new world of work.

Looking Forward

Kubernetes is still experiencing exponential growth. The Kubernetes responses in our survey speak to a future that’s being written as we speak. 

The complexities of deploying and operating Kubernetes remain and aren't going to disappear anytime soon. That means the open source projects and CSPs offering Kubernetes solutions will have to focus more on simplicity and usability in future releases, along with a renewed commitment to outreach, documentation, and training for their Kubernetes offerings.

Do you want more insights into container and software supply chain security? Download the Anchore 2021 Software Supply Chain Security Report!

Deploying Anchore Enterprise 2.4 on AWS Elastic Kubernetes Services (EKS) with Helm

[Updated post as of October 1, 2020]

In this post, I will walk through the steps for deploying Anchore Enterprise v2.4 on Amazon EKS with Helm. Anchore currently maintains a Helm Chart which we will use to install the necessary Anchore services.

Prerequisites

  • A running Amazon EKS cluster with worker nodes launched. See EKS Documentation for more information.
  • Helm (v3) client installed and configured.

Before we proceed, let's confirm the cluster is up and running and that we can access its kube-api server:

Note: Since we will be deploying all services including the database as pods in the cluster, I have deployed a three-node cluster with (2) m5.xlarge and (1) t3.large instances for a basic deployment. I’ve also given the root volume of each node 65GB (195GB total) since we will be using the cluster for persistent storage of the database service.

$ kubectl get nodes
NAME                                       STATUS   ROLES    AGE   VERSION
ip-10-0-1-66.us-east-2.compute.internal    Ready    <none>   1d    v1.16.12-eks
ip-10-0-3-15.us-east-2.compute.internal    Ready    <none>   1d    v1.16.12-eks
ip-10-0-3-157.us-east-2.compute.internal   Ready    <none>   1d    v1.16.12-eks

Configuring the Ingress Controller

The ALB Ingress Controller triggers the creation of an Application Load Balancer (ALB) and the necessary supporting AWS resources whenever an Ingress resource is created on the cluster with the kubernetes.io/ingress.class: alb annotation.

To support external access to the Enterprise UI and Anchore API, we will need the cluster to create an ALB for our deployment.

To enable the ALB Ingress Controller pod to create the load balancer and required resources, we need to update the IAM role of the worker nodes and tag the cluster subnets the ingress controller should associate the load balancer with.

  • Download the sample IAM Policy from AWS and attach it to your worker node role either via console or aws-cli.
  • Add the following tags to your cluster’s public subnets (an example CLI command follows this list):

Key                                     Value
kubernetes.io/cluster/<cluster-name>    shared
kubernetes.io/role/elb                  1
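
If you prefer the CLI to the console, the same tags can be applied with a command along these lines (the subnet IDs and cluster name are placeholders):

aws ec2 create-tags \
  --resources subnet-0aaa1111 subnet-0bbb2222 \
  --tags Key=kubernetes.io/cluster/<cluster-name>,Value=shared Key=kubernetes.io/role/elb,Value=1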

Next, we need to create a Kubernetes service account in the kube-system namespace, a cluster role, and a cluster role binding for the ALB Ingress Controller to use:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/rbac-role.yaml

With the service account and cluster role resources deployed, download the AWS ALB Ingress Controller deployment manifest to your working directory:

$ wget https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/alb-ingress-controller.yaml

Under the container specifications of the manifest, uncomment  --cluster-name=  and enter the name of your cluster:

# REQUIRED
 # Name of your cluster. Used when naming resources created
 # by the ALB Ingress Controller, providing distinction between
 # clusters.
 - --cluster-name=<eks_cluster_name>

Save and close the deployment manifest, then deploy it to the cluster:

$ kubectl apply -f alb-ingress-controller.yaml
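
To confirm the ingress controller started, check for its pod in the kube-system namespace (the generated pod name will differ in your cluster):

$ kubectl -n kube-system get pods | grep alb-ingress-controller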

Installing the Anchore Engine Helm Chart

To add the Anchore chart repository, run the following command:

$ helm repo add anchore https://charts.anchore.io

"anchore" has been added to your repositories

Confirm the repository was added successfully:

$ helm repo list
NAME    URL
anchore https://charts.anchore.io

Deploying Anchore Enterprise

For the purposes of this post, we will focus on getting a basic deployment of Anchore Enterprise running. For a complete set of configuration options you may include in your installation, refer to the values.yaml file in our charts repository.

Note: Refer to our blog post Configuring Anchore Enterprise on EKS for a walkthrough of common production configuration options including securing the Application Load Balancer/Ingress Controller deployment, using S3 archival and configuring a hosted database service such as Amazon RDS.

Configure Namespace and Credentials

First, let’s create a new namespace for the deployment:

$ kubectl create namespace anchore

namespace/anchore created

Enterprise services require an active Anchore Enterprise subscription (which is supplied via license file), as well as Docker credentials with permission to the private docker repositories that contain the enterprise images.

Create a Kubernetes secret in the anchore namespace with your license file:

Note: You will need to reference the exact path to your license file on your localhost. In the example below, I have copied my license to my working directory.

$ kubectl -n anchore create secret generic anchore-enterprise-license --from-file=license.yaml=./license.yaml

secret/anchore-enterprise-license created

Next, create a secret containing the Docker Hub credentials with access to the private anchore enterprise repositories:

$ kubectl -n anchore create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=<DOCKERHUB_USER> --docker-password=<DOCKERHUB_PASSWORD> --docker-email=<EMAIL_ADDRESS>

secret/anchore-enterprise-pullcreds created

Ingress

Create a new file named anchore_values.yaml in your working directory and create an ingress section with the following contents:

ingress:
  enabled: true
  # Use the following paths for GCE/ALB ingress controller
  apiPath: /v1/*
  uiPath: /*
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing

Engine API

Below the ingress section add the following block to configure the Enterprise API:

Note: To expose the API service, we set the service type to NodePort instead of the default ClusterIP

anchoreApi:
  replicaCount: 1

  # kubernetes service configuration for anchore external API
  service:
    type: NodePort
    port: 8228
    annotations: {}

Enable Enterprise Deployment

Next, add the following to your anchore_values.yaml file below the anchoreApi section:

anchoreEnterpriseGlobal:
    enabled: true

Enterprise UI

Like the API service, we’ll need to expose the UI service to ensure it is accessible outside the cluster. Copy the following section at the end of your anchore_values.yaml file:

anchoreEnterpriseUi:
  enabled: true
  image: docker.io/anchore/enterprise-ui:latest
  imagePullPolicy: IfNotPresent

  # kubernetes service configuration for anchore UI
  service:
    type: NodePort
    port: 443
    annotations: {}
    labels: {}
    sessionAffinity: ClientIP

Deploying the Helm Chart

To install the chart, run the following command from the working directory:

$ helm install --namespace anchore <your_release_name> -f anchore_values.yaml anchore/anchore-engine

It will take the system several minutes to bootstrap. You can check on the status of the pods by running kubectl get pods:

$ kubectl -n anchore get pods
NAME                                                              READY   STATUS    RESTARTS   AGE
anchore-cli-5f4d697985-rdm9f                                      1/1     Running   0          14m
anchore-enterprise-anchore-engine-analyzer-55f6dd766f-qxp9m       1/1     Running   0          9m
anchore-enterprise-anchore-engine-api-bcd54c574-bx8sq             4/4     Running   0          9m
anchore-enterprise-anchore-engine-catalog-ddd45985b-l5nfn         1/1     Running   0          9m
anchore-enterprise-anchore-engine-enterprise-feeds-786b6cd9mw9l   1/1     Running   0          9m
anchore-enterprise-anchore-engine-enterprise-ui-758f85c859t2kqt   1/1     Running   0          9m
anchore-enterprise-anchore-engine-policy-846647f56b-5qk7f         1/1     Running   0          9m
anchore-enterprise-anchore-engine-simplequeue-85fbd57559-c6lqq    1/1     Running   0          9m
anchore-enterprise-anchore-feeds-db-668969c784-6f556              1/1     Running   0          9m
anchore-enterprise-anchore-ui-redis-master-0                      1/1     Running   0          9m
anchore-enterprise-postgresql-86d56f7bf8-nx6mw                    1/1     Running   0          9m

Run the following command to get details on the deployed ingress:

$ kubectl -n anchore get ingress
NAME                     HOSTS   ADDRESS                                                                 PORTS   AGE
support-anchore-engine   *       1a2b3c4-anchoreenterprise-f9e8-123456789.us-east-2.elb.amazonaws.com   80      4h

You should see the address for the created load balancer and can use it to navigate to the Enterprise UI:

Anchore Enterprise login screen.
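
Before logging in, you can optionally point anchore-cli at the exposed API to confirm it is reachable through the load balancer. A quick sketch, using the ALB hostname from the ingress output and placeholder admin credentials:

$ anchore-cli --url http://<ALB-hostname>/v1 --u admin --p <admin-password> system status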

Conclusion

You now have an installation of Anchore Enterprise up and running on Amazon EKS. The complete contents for the walkthrough are available by navigating to the GitHub repo here. For more info on Anchore Engine or Enterprise, you can join our community Slack channel, or request a technical demo.

3 Best Practices for Detecting Attack Vectors on Kubernetes Containers

In recent years, the adoption of microservices and containerized architectures has continually been on the rise, with everyone from small startups to major corporations joining the push into the container world. According to VMware, 59 percent of large organizations surveyed use Kubernetes to deploy their applications into production. As organizations move toward deploying containers in production, keeping security at the forefront of development becomes more critical: while containers are ideally immutable, they are just another application exposed to security vulnerabilities. The potential impact of a compromise of the underlying container orchestrator can be massive, making securing your applications one of the most important aspects of deployment.

Securing Infrastructure

Securing the underlying infrastructure that Kubernetes runs on is just as important as securing the servers that run traditional applications. There are many security guides available, but keeping the following points in mind is a great place to start.

  • Secure and configure the underlying host. Checking your configuration against CIS Benchmarks is recommended as CIS Benchmarks provide clear sets of standards for configuring everything from operating systems to cloud infrastructure.
  • Minimize administrative access to Kubernetes nodes. Restricting access to the nodes in your cluster is the basis of preventing insider threats and reducing the ability to elevate commands for malicious users. Most debugging and other tasks can typically be handled without directly accessing the node.
  • Control network access to sensitive ports. Ensuring that your network limits access to commonly known ports, such as port 22 for SSH access or ports 10250 and 10255 used by the Kubelet, restricts access to your network and limits the attack surface for malicious users. Security Groups (AWS), Firewall Rules (GCP), and Azure Firewall (Azure) are simple, straightforward ways to control access to your network resources (see the example after this list).
  • Rotate infrastructure access credentials frequently. Setting shorter lifetimes on secrets, keys, or access credentials makes it more difficult for an attacker to make use of that credential. Following recommended credential rotation schedules greatly reduces the ability of an attacker to gain access.
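
As a concrete illustration of the network-access item above, restricting SSH on an AWS node security group to an internal admin range might look like the following (the security group ID and CIDR are placeholders):

# Allow SSH only from an internal admin range; avoid any 0.0.0.0/0 rule on port 22
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 22 \
  --cidr 10.0.100.0/24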

Securing Kubernetes

Ensuring the configuration of Kubernetes and any secrets is another critical component to securing your organization’s operational infrastructure. Here are some helpful tips to focus on when deploying to Kubernetes.

  • Encrypt secrets at rest. Kubernetes uses an etcd database to store any information accessible via the Kubernetes API, such as Secrets and ConfigMaps; essentially the actual and desired state of the entire system. Encrypting this data helps protect the entire system (see the sketch after this list).
  • Enable audit logging. Kubernetes clusters have the option to enable audit logging, keeping a chronological record of calls made to the API. They can be useful for investigating suspicious API requests, for collecting statistics, or for creating monitoring alerts for unwanted API calls.
  • Control the privileges containers are allowed. Limiting access to a container is crucial to prevent privilege escalation. Kubernetes includes pod security policies that can be used to enforce privileges. Container applications should be written to run as a non-root user, and administrators should use a restrictive pod security policy to prevent applications from escaping their container.
  • Control access to the Kubelet. A Kubelet’s HTTPS endpoint exposes APIs which give access to data of varying sensitivity, and allow you to perform operations with varying levels of power on the node and within containers. By default, Kubelet allows unauthorized access to the API, so securing it is recommended for production environments.
  • Enable TLS for all API traffic. Kubernetes expects all API communication within the cluster to be encrypted by TLS, and while the Kubernetes APIs and most installation methods encrypt this by default, API communication in deployed applications may not be encrypted. Administrators should pay close attention to any applications that communicate over unencrypted API calls as they are exposed to potential attacks.
  • Control which nodes pods can access. Kubernetes does not restrict pod scheduling on nodes by default, but it is a best practice to leverage Kubernetes’ in-depth pod placement policies, including labels, nodeSelector, and affinity/anti-affinity rules.
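
For the first item in this list, on clusters where you control the API server, encryption of Secrets at rest is driven by an EncryptionConfiguration file passed to kube-apiserver via --encryption-provider-config. A minimal sketch (the key material is a placeholder):

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>   # placeholder key material
      - identity: {}   # fallback so previously stored, unencrypted data can still be read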

Securing Containerized Applications

Aside from how it is deployed, an application that runs in a container is subject to the same vulnerabilities as running it outside a container. At Anchore, we focus on helping identify which vulnerabilities apply to your containerized applications, and the following are some of many key takeaways that we’ve learned.

  • Scan early, scan often. Shifting security left in the DevSecOps pipeline helps organizations identify potential vulnerabilities early in the process. Shift Left with a Real World Guide to DevSecOps walks you through the benefits of moving security earlier in the DevSecOps workflow.
  • Incorporate vulnerability analysis into CI/CD. Several of our blog posts cover integrating Anchore with CI/CD build pipelines. We also have documentation on integrating with some of the more widely used CI/CD build tools.
  • Multi-staged builds to keep software compilation out of runtime. Take a look at our blog post on Cryptocurrency Mining Attacks for some information on how Anchore can help prevent vulnerabilities and how multi-stage builds come into play.

With the shift towards containerized production deployments, it is important to understand how security plays a role in each level of the infrastructure; from the underlying hosts to the container orchestration platform, and finally to the container itself. By keeping these guidelines in mind, the focus on security shifts from being an afterthought to being included in every step of the DevSecOps workflow.

Need a better solution for managing container vulnerabilities? Anchore’s Kubernetes vulnerability scanning can help.

Getting Started with Helm, Kubernetes and Anchore

We see a lot of people asking about standing up Anchore for local testing on their laptop and in the past, we’ve detailed how to use Docker to do so. Lately, I have been frequently asked if there’s a way to test and learn with Anchore on a laptop using the same or similar deployment methods as what would be used in a larger deployment.

Anchore installation is preferably done via a Helm chart. We can mirror this on a laptop using MiniKube, as opposed to the instructions to use docker-compose to install Anchore. MiniKube is a small testing instance of Kubernetes you can install on your laptop, whether you use Windows, Linux or macOS. Instructions on installing the initial MiniKube virtual machine are here.

Prerequisites differ by platform, so read closely. On macOS, you need only install VirtualBox and Homebrew, then issue the following command:

brew cask install minikube kubernetes-cli

Once the installation is complete, you can start your minikube instance with the following command:

minikube start

Once minikube has started, we can grab helm from the Kubernetes GitHub repository:

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh

Or on macOS:

brew install kubernetes-helm

That will install the latest version of Helm for us to use. Let’s now create a role for helm/tiller to use. Place the following in a file called clusterrole.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'

To create the cluster role, let’s run this command:

kubectl create -f clusterrole.yaml

Now we’ll create a service account to utilize this role with these commands:

kubectl create serviceaccount -n kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

Let’s now initialize helm:

helm init --service-account tiller

We can verify if that worked with the following command:

kubectl --namespace kube-system get pods

In that output, you should see a line showing a namespace item of “tiller-deploy” with a status of “running.”

Once we have that installed, let’s install Anchore via the helm chart:

helm install --name anchore-demo stable/anchore-engine

This will install a demo instance of Anchore engine that allows anonymous access. You may want to consult our documentation on helm installs here for more detailed or specific types of configurations to install.
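
You can confirm the engine pods are starting with a quick check (the pod names will vary with your release name):

kubectl get pods
helm status anchore-demo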

Hopefully, you now have a local copy of Anchore to use on your local development processes using MiniKube and Helm.

Kubernetes Admission Controller Dynamic Policy Mappings & Modes

In December, Anchore introduced an admission controller for Kubernetes that gates pod execution based on Anchore image analysis and policy evaluation of image content. It supports three different modes of operation, allowing you to tune the tradeoff between control and intrusiveness for your environments.

To summarize, those modes are:

  1. Strict Policy-Based Admission Gating Mode – Images must pass policy evaluation by Anchore Engine for admission.
  2. Analysis-Based Admission Gating Mode – Images must have been analyzed by Anchore Engine for admission.
  3. Passive Analysis Trigger Mode – No admission requirement, but images are submitted for analysis by Anchore Engine prior to admission. The analysis itself is asynchronous.

The multi-mode flexibility is great for customizing how strictly the controller enforces compliance with policy (if at all), but it does not allow you to use different bundles with different policies for the same image based on annotations or labels in Kubernetes, where there is typically more context about how strictly an image should be evaluated.

Consider the following scenario:

Your cluster has two namespaces: testing and production. You'll be deploying many of the same images into those namespaces, but you want testing to use much more permissive policies than production. Let's consider the two policies:

  • testing policy – only block images with critical vulnerabilities
  • production policy – block images with high or critical vulnerabilities or that do not have a defined healthcheck

Now, let's also allow pods to run in the production environment regardless of the image content if the pod has a special label: 'breakglass=true'. These kinds of high-level policies are useful for operations work that requires temporary access using specific tools.

Such a scenario would not be achievable with the older controller. So, based on user feedback we’ve added the ability to select entirely different Anchore policy bundles based on metadata in Kubernetes as well as the image tag itself. This complements Anchore’s internal mapping structures within policy bundles that give fine-grained control over which rules to apply to an image based on the image’s tag or digest.

Broadly, the controller’s configuration now supports selector rules that encode a logical condition like this (in words instead of yaml):

If a metadata property's name matches SelectorKeyRegex and its value matches SelectorValueRegex, then use the specified Mode for the check, with bundle PolicyBundleId from Anchore user Username.

In YAML, the configuration configmap has a new section, which looks like:

policySelectors:
  - Selector:
      ResourceType: pod
      SelectorKeyRegex: breakglass
      SelectorValueRegex: true
    PolicyReference:
      Username: testuser
      PolicyBundleId: testing_bundle
    Mode: policy
  - Selector:
      ResourceType: namespace
      SelectorKeyRegex: name
      SelectorValueRegex: testing
    PolicyReference:
      Username: testuser
      PolicyBundleId: testing_bundle
    Mode: policy
  - Selector:
      ResourceType: namespace
      SelectorKeyRegex: name
      SelectorValueRegex: production
    PolicyReference:
      Username: testuser
      PolicyBundleId: production_bundle
    Mode: policy 
  - Selector:
      ResourceType: image
      SelectorKeyRegex: .*
      SelectorValueRegex: .*
    PolicyReference:
      Username: demouser
      PolicyBundleId: default

Next, I'll walk through configuring and deploying Anchore and the controller to behave like the above example. I'll set up two policies and two namespaces in Kubernetes to show how the selectors work. For a more detailed walk-thru of the configuration and operation of the controller, see the GitHub project.

Installation and Configuration of the Controller

If you already have Anchore running in the cluster, or in a location reachable by the cluster, that will work; you can skip ahead to the user and policy setup and continue from there.

Anchore Engine install requirements:

  • Running Kubernetes cluster v1.9+
  • Configured kubectl tool with configured access (this may require some rbac config depending on your environment)
  • Enough resources to run anchore engine (a few cores and 4GB+ of RAM is recommended)

Install Anchore Engine

1. Install Anchore Engine in the cluster. There is no requirement that the installation be in the same k8s cluster or any k8s cluster at all; installing it here is simply for convenience.

helm install --name anchore stable/anchore-engine

2. Run a CLI container to easily query anchore directly to configure a user and policy

kubectl run -i -t anchorecli --image anchore/engine-cli --restart=Always --env ANCHORE_CLI_URL=http://anchore-anchore-engine-api.anchore.svc.cluster.local:8228 --env ANCHORE_CLI_USER=admin --env ANCHORE_CLI_PASS=foobar

3. From within the anchorecli container, create a new account in anchore

anchore-cli account create testing

4. Add a user to the account with a set of credentials (you’ll need these later)

anchore-cli account user add --account testing testuser testuserpassword

5. As the new user, analyze some images (nginx and alpine in this walk-thru). I'll use those for testing the controller later.

anchore-cli --u testuser --p testuserpassword image add alpine
anchore-cli --u testuser --p testuserpassword image add nginx 
anchore-cli --u testuser --p testuserpassword image list

6. Create a file, testing_bundle.json:

{
    "blacklisted_images": [], 
    "comment": "testing bundle", 
    "id": "testing_bundle", 
    "mappings": [
        {
            "id": "c4f9bf74-dc38-4ddf-b5cf-00e9c0074611", 
            "image": {
                "type": "tag", 
                "value": "*"
            }, 
            "name": "default", 
            "policy_id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6", 
            "registry": "*", 
            "repository": "*", 
            "whitelist_ids": [
                "37fd763e-1765-11e8-add4-3b16c029ac5c"
            ]
        }
    ], 
    "name": "Testing bundle", 
    "policies": [
        {
            "comment": "System default policy", 
            "id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6", 
            "name": "DefaultPolicy", 
            "rules": [
                {
                    "action": "WARN", 
                    "gate": "dockerfile", 
                    "id": "312d9e41-1c05-4e2f-ad89-b7d34b0855bb", 
                    "params": [
                        {
                            "name": "instruction", 
                            "value": "HEALTHCHECK"
                        }, 
                        {
                            "name": "check", 
                            "value": "not_exists"
                        }
                    ], 
                    "trigger": "instruction"
                }, 
                {
                    "action": "STOP", 
                    "gate": "vulnerabilities", 
                    "id": "b30e8abc-444f-45b1-8a37-55be1b8c8bb5", 
                    "params": [
                        {
                            "name": "package_type", 
                            "value": "all"
                        }, 
                        {
                            "name": "severity_comparison", 
                            "value": ">"
                        }, 
                        {
                            "name": "severity", 
                            "value": "high"
                        }
                    ], 
                    "trigger": "package"
                }
            ], 
            "version": "1_0"
        }
    ], 
    "version": "1_0", 
    "whitelisted_images": [], 
    "whitelists": [
        {
            "comment": "Default global whitelist", 
            "id": "37fd763e-1765-11e8-add4-3b16c029ac5c", 
            "items": [], 
            "name": "Global Whitelist", 
            "version": "1_0"
        }
    ]
}

7. Create a file, production_bundle.json:

{
    "blacklisted_images": [], 
    "comment": "Production bundle", 
    "id": "production_bundle", 
    "mappings": [
        {
            "id": "c4f9bf74-dc38-4ddf-b5cf-00e9c0074611", 
            "image": {
                "type": "tag", 
                "value": "*"
            }, 
            "name": "default", 
            "policy_id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6", 
            "registry": "*", 
            "repository": "*", 
            "whitelist_ids": [
                "37fd763e-1765-11e8-add4-3b16c029ac5c"
            ]
        }
    ], 
    "name": "production bundle", 
    "policies": [
        {
            "comment": "System default policy", 
            "id": "48e6f7d6-1765-11e8-b5f9-8b6f228548b6", 
            "name": "DefaultPolicy", 
            "rules": [
                {
                    "action": "STOP", 
                    "gate": "dockerfile", 
                    "id": "312d9e41-1c05-4e2f-ad89-b7d34b0855bb", 
                    "params": [
                        {
                            "name": "instruction", 
                            "value": "HEALTHCHECK"
                        }, 
                        {
                            "name": "check", 
                            "value": "not_exists"
                        }
                    ], 
                    "trigger": "instruction"
                }, 
                {
                    "action": "STOP", 
                    "gate": "vulnerabilities", 
                    "id": "b30e8abc-444f-45b1-8a37-55be1b8c8bb5", 
                    "params": [
                        {
                            "name": "package_type", 
                            "value": "all"
                        }, 
                        {
                            "name": "severity_comparison", 
                            "value": ">="
                        }, 
                        {
                            "name": "severity", 
                            "value": "high"
                        }
                    ], 
                    "trigger": "package"
                }
            ], 
            "version": "1_0"
        }
    ], 
    "version": "1_0", 
    "whitelisted_images": [], 
    "whitelists": [
        {
            "comment": "Default global whitelist", 
            "id": "37fd763e-1765-11e8-add4-3b16c029ac5c", 
            "items": [], 
            "name": "Global Whitelist", 
            "version": "1_0"
        }
    ]
}    

8. Add those policies for the new testuser:

anchore-cli --u testuser --p testuserpassword policy add testing_bundle.json
anchore-cli --u testuser --p testuserpassword policy add production_bundle.json

9. Verify that the alpine image will pass the testing bundle evaluation but not the production bundle:

/ # anchore-cli --u testuser --p testuserpassword evaluate check alpine --policy testing_bundle
Image Digest: sha256:25b4d910f4b76a63a3b45d0f69a57c34157500faf6087236581eca221c62d214
Full Tag: docker.io/alpine:latest
Status: pass
Last Eval: 2019-01-30T18:51:08Z
Policy ID: testing_bundle

/ # anchore-cli --u testuser --p testuserpassword evaluate check alpine --policy production_bundle
Image Digest: sha256:25b4d910f4b76a63a3b45d0f69a57c34157500faf6087236581eca221c62d214
Full Tag: docker.io/alpine:latest
Status: fail
Last Eval: 2019-01-30T18:51:14Z
Policy ID: production_bundle

Now it's time to get the admission controller in place to use those policies.

Install and Configure the Admission Controller

1. Configure Credentials for the Admission controller to use

I'll configure a pair of credentials. The new format supports multiple credentials in the secret so that the controller configuration can map policy bundles across multiple accounts. It is important that every username specified in the controller configuration has a corresponding entry in this secret to provide the password for API auth.

Create a file, testcreds.json:

{
  "users": [
    { "username": "admin", "password": "foobar"},
    { "username": "testuser", "password": "testuserpassword"}
  ]
}

kubectl create secret generic anchore-credentials --from-file=credentials.json=testcreds.json

2. Add the stable anchore charts repository

helm repo add anchore-stable http://charts.anchore.io/stable
helm repo update

3. Create a custom test_values.yaml. In your editor, create a file named test_values.yaml in the current directory:

credentialsSecret: anchore-credentials
anchoreEndpoint: "http://anchore-anchore-engine-api.default.svc.cluster.local:8228"
requestAnalysis: true
policySelectors:
  - Selector:
      ResourceType: pod
      SelectorKeyRegex: ^breakglass$
      SelectorValueRegex: "^true$"
    PolicyReference:
      Username: testuser
      PolicyBundleId: testing_bundle
    Mode: breakglass
  - Selector:
      ResourceType: namespace
      SelectorKeyRegex: name
      SelectorValueRegex: ^testing$
    PolicyReference:
      Username: testuser
      PolicyBundleId: testing_bundle
    Mode: policy
  - Selector:
      ResourceType: namespace
      SelectorKeyRegex: name
      SelectorValueRegex: ^production$
    PolicyReference:
      Username: testuser
      PolicyBundleId: production_bundle
    Mode: policy
  - Selector:
      ResourceType: image
      SelectorKeyRegex: .*
      SelectorValueRegex: .*
    PolicyReference:
      Username: testuser
      PolicyBundleId: 2c53a13c-1765-11e8-82ef-23527761d060
    Mode: analysis
 

The 'name' values are used instead of full regexes in those instances because if the KeyRegex is exactly the string "name", the controller will look at the resource name instead of a label or annotation and match the value regex against that name.
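
For contrast, a selector that matches on a namespace label rather than the namespace name could look like the following entry (the label key, value, and bundle here are hypothetical):

  - Selector:
      ResourceType: namespace
      SelectorKeyRegex: ^environment$
      SelectorValueRegex: ^qa$
    PolicyReference:
      Username: testuser
      PolicyBundleId: testing_bundle
    Mode: policy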

4. Install the controller via the chart

helm install --name controller anchore-stable/anchore-admission-controller -f test_values.yaml

5. Create the validating webhook configuration as indicated by the chart install output:

KUBE_CA=$(kubectl config view --minify=true --flatten -o json | jq '.clusters[0].cluster."certificate-authority-data"' -r)
cat > validating-webhook.yaml <<EOF
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: controller-anchore-admission-controller.admission.anchore.io
webhooks:
- name: controller-anchore-admission-controller.admission.anchore.io
  clientConfig:
    service:
      namespace: default
      name: kubernetes
      path: /apis/admission.anchore.io/v1beta1/imagechecks
    caBundle: $KUBE_CA
  rules:
  - operations:
    - CREATE
    apiGroups:
    - ""
    apiVersions:
    - "*"
    resources:
    - pods
  failurePolicy: Fail
# Uncomment this and customize to exclude specific namespaces from the validation requirement
#  namespaceSelector:
#    matchExpressions:
#      - key: exclude.admission.anchore.io
#        operator: NotIn
#        values: ["true"]
EOF

Then apply the generated validating-webhook.yaml:

kubectl apply -f validating-webhook.yaml

Try It

To see it in action, run the alpine container in the testing namespace:

```
[zhill]$ kubectl -n testing run -it alpine --restart=Never --image alpine /bin/sh
If you don't see a command prompt, try pressing enter.
/ # exit
```

It works as expected, since that image passes policy evaluation for that bundle. Now try production, where it should fail the policy checks and be blocked:

```
[zhill]$ kubectl -n production run -it alpine --restart=Never --image alpine /bin/sh
Error from server: admission webhook "controller-anchore-admission-controller.admission.anchore.io" denied the request: Image alpine with digest sha256:25b4d910f4b76a63a3b45d0f69a57c34157500faf6087236581eca221c62d214 failed policy checks for policy bundle production_bundle
```

And to get around that, as defined in the configuration (test_values.yaml), if you add the "breakglass=true" label, the pod will be allowed:

```
[zhill]$ kubectl -n production run -it alpine --restart=Never --labels="breakglass=true" --image alpine /bin/sh
If you don't see a command prompt, try pressing enter.
/ # exit 
```

Authoring Selector Rules

Selector rules are evaluated in the order they appear in the configmap value, so structure the rules to match from most to least specific filters. Note how in this example the breakglass rule is first.

These selectors are filters on:

  • namespace names, labels and annotations
  • pod names, labels, and annotations
  • image references (pull string)

Each selector provides regex support for both the key that provides the data and the data value itself. For image references, the key regex is ignored and can be an empty string; only the SelectorValueRegex is used for the match against the pull string.

Important: The match values are regex patterns, so for a full string match you must bracket the string with ^ and $ (e.g. ^exactname$). If you do not include the begin/end anchors, the regex may match substrings rather than exact strings.

Summary

The new controller features shown here let you specify flexible rules for determining controller behavior based on namespace and pod metadata, as well as the image pull string, in order to support more sophisticated deployment strategies in Kubernetes.

As always, we love feedback, so drop us a line on Slack or file issues on GitHub

The controller code is on Github and so is the chart.

Admission Control in Kubernetes with Anchore

Our focus at Anchore is analyzing, validating, and evaluating Docker images against custom policies to give users visibility into, control of, and confidence in their container images before they ever execute. And it's open source. In this post, learn how to use the new Anchore admission controller for Kubernetes to gate execution of Docker images in Kubernetes according to criteria expressed in Anchore policies, such as security vulnerabilities, package manifests, image build instructions, image source, and the other aspects of image content that Anchore Engine can expose via policy. For a more complete list, see the documentation.

The Anchore admission controller implements a handler for Kubernetes’s Validating Webhook payloads specifically configured to validate Pod objects and the image references they contain.

This is a well-established pattern for Kubernetes clusters and admission controllers.

The Anchore admission controller supports three different modes of operation, allowing you to tune the tradeoff between control and intrusiveness for your environments.

Strict Policy-Based Admission Gating Mode

This is the strictest mode and will admit only images that are already analyzed by Anchore and receive a “pass” on policy evaluation. This enables you to ensure, for example, that no image is deployed into the cluster that has a known high-severity CVE with an available fix, or any of a number of other conditions. Anchore’s policy language (found here) supports sophisticated conditions on the properties of images, vulnerabilities, and metadata. If you have a check or condition that you want to evaluate that you’re not sure about, please let us know!

Examples of Anchore Engine policy rules that are useful in a strict admission environment:

  • Reject an image if it is being pulled from dockerhub directly
  • Reject an image that has high or critical CVEs that have a fix available, but allow high-severity if no fix is available yet
  • Reject an image if it contains a blacklisted package (rpm, deb, apk, jar, python, npm, etc), where you define the blacklist
  • Never reject images from a specific registry/repository (e.g. internal infra images that must be allowed to run)

Analysis-Based Admission Gating Mode

Admit only images that are analyzed and known to Anchore, but do not execute or require a policy evaluation. This is useful in cases where you'd like to enforce the requirement that all images be deployed via a CI/CD pipeline, for example, that itself manages the Kubernetes image scanning with Anchore, while allowing the CI/CD process to determine what should run based on factors outside the context of the image or k8s itself.

Passive Analysis Trigger Mode

Trigger an Anchore analysis of images, but do not block execution on analysis completion or policy evaluation of the image. This is a way to ensure that all images that make it to deployment (test, staging, or prod) are guaranteed to have some form of analysis audit trail available and a presence in reports and notifications that are managed by Anchore Engine.

Installation and Configuration of the Controller

Requirements:

  • Running Kubernetes cluster v1.9+
  • Configured kubectl tool with configured access (this may require some rbac config depending on your environment)
  • Enough resources to run anchore engine (a few cores and 4GB+ of RAM is recommended)

Install Anchore Engine

1. Install Anchore Engine in the cluster. There is no requirement that the installation be in the same k8s cluster or any k8s cluster; I use it here simply for convenience.

helm install --name demo stable/anchore-engine

2. Run a CLI container so we can easily query anchore directly to configure a user and policy

kubectl run -i -t anchorecli --image anchore/engine-cli --restart=Always --env ANCHORE_CLI_URL=http://demo-anchore-engine-api.default.svc:8228 --env ANCHORE_CLI_USER=admin --env ANCHORE_CLI_PASS=foobar

3. From within the anchorecli container, verify the system is responding (it may take a few minutes to fully bootstrap so you may need to run this a few times until it returns all services in the “up” state). The second command will wait until the security feeds are all synced and cve data is available.

anchore-cli system status

This should show the system version and services. If the command hangs for a moment, that is normal during service bootstrap; you may need to cancel and re-run the command as the infrastructure comes up in k8s. Once system status returns successfully, run a wait to make sure the system is fully initialized. This may take some time since it requires all vulnerability feed data to be synced.

anchore-cli system wait

4. From within the anchorecli container, create a new anchore account

anchore-cli account add demo

5. Add a user to the account with a set of credentials (you’ll need these later)

anchore-cli account user add --account demo controller admissioncontroller123

Now, exit the container

6. Create a new CLI container using the new credentials; I'll refer to this as the ctluser_cli container.

kubectl run -i --tty anchore-controller-cli --restart=Always --image anchore/engine-cli --env ANCHORE_CLI_USER=controller --env ANCHORE_CLI_PASS=admissioncontroller123 --env ANCHORE_CLI_URL=http://demo-anchore-engine-api.default.svc:8228/v1/

From within the ctluser_cli container, analyze an image to verify things work:

anchore-cli image add alpine
anchore-cli image list

7. Exit the anchore-controller-cli container

Configure Credentials

The helm chart and controller support two ways of passing the Anchore Engine credentials to the controller:

  • Directly in the chart via values.yaml or on the CLI: --set anchore.username=admissionuser --set anchore.password=mysupersecretpassword
  • Using Kubernetes Secrets: kubectl create secret generic anchore-creds --from-literal=username=admissionuser --from-literal=password=mysupersecretpassword. Then, on chart install/upgrade, set it via the CLI (--set anchore.credentialsSecret=<name of secret>) or set the key in values.yaml.

NOTE: Using a secret is highly recommended since it will not be visible in any ConfigMaps

For this post I’ll use a secret:

kubectl create secret generic anchore-credentials --from-literal=username=controller --from-literal=password=admissioncontroller123

Next, on to the controller itself.

Install and Configure the Admission Controller

I’ll start by using the controller in Passive mode, and then show how to add the policy gating.

1. Back on your localhost, get the admission controller chart from Github

git clone https://github.com/anchore/anchore-charts
cd anchore-charts/stable/anchore-admission-controller

2. Save the following yaml to my_values.yaml

anchore:
  endpoint: "http://demo-anchore-engine-api.default.svc:8228"
  credentialsSecret: anchore-credentials

3. Install the controller chart

helm install --name democtl -f my_values.yaml .

4. Run the get_validating_webhook_config.sh script included in the GitHub repo to grab the validating webhook configuration. It will output validating-webhook.yaml:

./files/get_validating_webhook_config.sh democtl

5. Activate the configuration

kubectl apply -f validating-webhook.yaml

6. Verify it's working:

kubectl run ubuntu --image ubuntu --restart=Never
kubectl attach -i -t <ctluser_cli>
anchore-cli image list

You should see the ‘ubuntu’ tag available and analyzing/analyzed in Anchore. That is the passive-mode triggering the analysis.

For example:

zhill@localhost anchore-admission-controller]$ kubectl run -i -t ubuntu --image ubuntu --restart=Never
If you don't see a command prompt, try pressing enter.
root@ubuntutest:/# exit
exit
[zhill@localhost anchore-admission-controller]$ kubectl logs test2-anchore-admission-controller-7c47fb85b4-n5v7z 
...
I1207 13:30:52.274424       1 main.go:148] Checking image: ubuntu
I1207 13:30:52.274448       1 main.go:193] Performing passive validation. Will request image analysis and always allow admission
I1207 13:30:55.180722       1 main.go:188] Returning status: &AdmissionResponse{UID:513100b2-fa24-11e8-9154-d06131dd3541,Allowed:true,Result:&k8s_io_apimachinery_pkg_apis_meta_v1.Status{ListMeta:ListMeta{SelfLink:,ResourceVersion:,Continue:,},Status:Success,Message:Image analysis for image ubuntu requested and found mapped to digest sha256:acd85db6e4b18aafa7fcde5480872909bd8e6d5fbd4e5e790ecc09acc06a8b78,Reason:,Details:nil,Code:0,},Patch:nil,PatchType:nil,}
...

And in the ctluser_cli container, I can confirm the image was added and analyzed:

/ # anchore-cli image get ubuntu
Image Digest: sha256:acd85db6e4b18aafa7fcde5480872909bd8e6d5fbd4e5e790ecc09acc06a8b78
Parent Digest: sha256:6d0e0c26489e33f5a6f0020edface2727db9489744ecc9b4f50c7fa671f23c49
Analysis Status: analyzed
Image Type: docker
Image ID: 93fd78260bd1495afb484371928661f63e64be306b7ac48e2d13ce9422dfee26
Dockerfile Mode: Guessed
Distro: ubuntu
Distro Version: 18.04
Size: 32103814
Architecture: amd64
Layer Count: 4
Annotations: requestor=anchore-admission-controller

Full Tag: docker.io/ubuntu:latest

Also, note that the controller has added an Annotation on the anchore image to indicate that it was analyzed at the request of the admission controller. This is useful for later requests to Anchore itself so you know which images were analyzed by the controller compared to those that may have been added as part of CI/CD.

Great! Next, I’ll walk through using the policy gating mode.

Using Strict Policy-Based Admission

In policy gating mode, images must both be analyzed and pass a policy evaluation in order to be admitted.

It's important to note that the controller requires that images already be analyzed prior to the admission request. This is because the analysis can take more than a few seconds, and possibly much longer depending on the wait queue, so admission decisions do not wait on an analysis submission and completion.

Configure a Specific Policy

It’s likely that the same policy used for something like CI/CD is not appropriate for execution gating. Anchore Engine directly supports multiple “policy bundles”. In a production environment, you’ll probably want to set a custom policy bundle for the admission controller to use.

1. So, let’s attach to the ctluser_cli pod again and add a new policy

kubectl attach -i -t < ctluser_cli pod>

2. Now, from within the ctluser_cli container shell:

Create a file, policy.json with the following content (or create a similar policy in the Enterprise UI if you’re an Enterprise customer):

{
  "id": "admissionpolicy",
  "version": "1_0",
  "name": "AdmissionControllerDefaultPolicy",
  "comments": "",
  "policies": [
    {
      "id": "Default",
      "version": "1_0",
      "name": "Default",
      "comments": "Default policy for doing cve checks",
      "rules": [
        {
          "id": "cverule1",
          "gate": "vulnerabilities",
          "trigger": "package",
          "params": [ 
            {"name": "package_type", "value": "all"},
            {"name": "severity", "value": "low"},
            {"name": "severity_comparison", "value": ">="}
          ],
          "action": "STOP"
        }
      ]
    }
  ],  
  "whitelists": [],
  "mappings": [
    {
      "name": "Default",
      "registry": "*",
      "repository": "*",
      "image": {
        "type": "tag",
        "value": "*"
      },
      "policy_ids": ["Default"],
      "whitelist_ids": []
    }
  ],
  "whitelisted_images": [],
  "blacklisted_images": []  
}

For this example, I'm using a policy that triggers on low-severity vulnerabilities just to show how the gating works. A more appropriate production severity would be high or critical, to avoid blocking too many images.

To save the policy:

anchore-cli policy add policy.json

3. Update your my_values.yaml to be:

anchore:
  endpoint: "http://demo-anchore-engine-api.default.svc:8228"
  credentialsSecret: anchore-credentials
  policybundle: admissionpolicy
enableStrictGating: true

4. Remove the webhook config to disable admission requests during the upgrade of the controller

kubectl delete validatingwebhookconfiguration/demo-anchore-admission-controller.admission.anchore.io

There are cleaner ways to upgrade that avoid this, such as using distinct namespaces and namespace selectors, but that is a bit beyond the scope of this post.

5. And upgrade the deployment

helm upgrade -f my_values.yaml --force democtl .

6. Ensure the controller pod got updated. I’ll delete the pod and let the deployment definition recreate it with the new configmap mounted

kubectl delete po -l release=democtl

7. Re-apply the webhook config

kubectl apply -f validating-webhook.yaml

8. To show that it’s working, use an image that has not been analyzed yet.

kubectl run -i -t ubuntu2 --image ubuntu --restart=Never

You will see an error response from Kubernetes that the pod could not be executed due to failing policy.

[zhill@localhost anchore-admission-controller]$ kubectl run -i -t ubuntu2 --image ubuntu --restart=Never
Error from server: admission webhook "demo-anchore-admission-controller.admission.anchore.io" denied the request: Image ubuntu with digest sha256:acd85db6e4b18aafa7fcde5480872909bd8e6d5fbd4e5e790ecc09acc06a8b78 failed policy checks for policy bundle admissionpolicy    

Configuring How the Controller Operates

The controller is configured via a ConfigMap that is mounted as a file into the container. The helm chart exposes a few values to simplify that configuration process. For a full set of configuration options, see the chart.

Caveats

Currently, there is no Docker registry credential coordination between k8s and Anchore. For Anchore to be able to pull and analyze images, you must configure it to have access to your image registries. For more detail on how to do this, see the documentation.
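
For example, a private registry can be registered with Anchore Engine via the anchore-cli registry command (the registry hostname and credentials below are placeholders):

anchore-cli registry add registry.example.com registry_user registry_password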

Future Work and Feedback

  • Mutating Webhook Support
    • Integration into workflows that leverage existing policy systems like the Open Policy Agent, and/or integrating such an agent directly into this controller to expand its context to enable admission decisions based on combinations of image analysis context and k8s object context.
  • Enhanced policy mapping capabilities
    • Dynamically map which policy bundle to evaluate based on labels and/or annotations
  • Enhanced Audit trail and configurability via CRDs
    • Leverage API extensions to allow users to query k8s APIs for analysis information without special tooling.

We love feedback, so drop us a line on Slack or file issues on GitHub.

The controller code is on GitHub, and so is the chart.

Anchore Engine on Azure Kubernetes Service Cluster with Helm

This post will walk through deploying an AKS Cluster using the Azure CLI. Once the cluster has been deployed, Anchore Engine will be installed and run via Helm on the cluster. Following the install, I will configure Anchore to authenticate with Azure Container Registry (ACR) and analyze an image.

Prerequisites

Create Azure Resource Group and AKS Cluster

In order to create a cluster, a resource group must first be created in Azure.

Azure CLI:

az group create --name anchoreAKSCluster --location eastus

Once the resource group has been created, we can create a cluster. The following command creates a cluster named anchoreAKSCluster with three nodes.

Azure CLI:

az aks create --resource-group anchoreAKSCluster --name anchoreAKSCluster --node-count 3 --enable-addons monitoring --generate-ssh-keys

Once the cluster has been created, use kubectl to manage the cluster. To install it locally, use the following command:

Azure CLI:

az aks install-cli

Configure kubectl to connect to the cluster you just created:

Azure CLI:

az aks get-credentials --resource-group anchoreAKSCluster --name anchoreAKSCluster

In order to verify a successful connection, run the following:

kubectl get nodes

Kubernetes Dashboard

To view the Kubernetes Dashboard for your cluster run the following command:

Azure CLI:

az aks browse --resource-group anchoreAKSCluster --name anchoreAKSCluster

Helm Configuration

Prior to deploying Helm in an RBAC-enabled cluster, you must create a service account and role binding for the Tiller service.

Create a file named helm-rbac.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-dashboard
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-dashboard
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

Run the following command to create the account and role binding:

kubectl apply -f helm-rbac.yaml

To deploy Tiller in the AKS cluster run the following command:

helm init --service-account tiller

Install Anchore

We will deploy Anchore Engine via the latest Helm chart release. For a detailed description of the chart options, view the GitHub repo.

helm install --name anchore-demo stable/anchore-engine

Following this, we can use kubectl get deployments to show the deployments.

Output:

$ kubectl get deployments
NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
anchore-demo-anchore-engine-core     1/1     1            1           5m36s
anchore-demo-anchore-engine-worker   1/1     1            1           5m36s
anchore-demo-postgresql              1/1     1            1           5m36s

Expose API port externally:

kubectl expose deployment anchore-demo-anchore-engine-core --type=LoadBalancer --name=anchore-engine --port=8228

Output:

service/anchore-engine exposed

View service and External IP:

kubectl get service anchore-engine

Output:

NAME             TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)          AGE
anchore-engine   LoadBalancer   10.0.56.241   40.117.232.147   8228:31027/TCP   12m

Assuming you have anchore-cli installed, you can pass the EXTERNAL-IP to the CLI as the --url parameter.

View the status of Anchore:

anchore-cli --url http://40.117.232.147:8228/v1 --u admin --p foobar system status

Output:

Service simplequeue (anchore-demo-anchore-engine-core-6447cb7464-cp295, http://anchore-demo-anchore-engine:8083): up
Service analyzer (anchore-demo-anchore-engine-worker-746cf99f7c-rkprd, http://10.244.2.8:8084): up
Service kubernetes_webhook (anchore-demo-anchore-engine-core-6447cb7464-cp295, http://anchore-demo-anchore-engine:8338): up
Service policy_engine (anchore-demo-anchore-engine-core-6447cb7464-cp295, http://anchore-demo-anchore-engine:8087): up
Service catalog (anchore-demo-anchore-engine-core-6447cb7464-cp295, http://anchore-demo-anchore-engine:8082): up
Service apiext (anchore-demo-anchore-engine-core-6447cb7464-cp295, http://anchore-demo-anchore-engine:8228): up

Engine DB Version: 0.0.7
Engine Code Version: 0.2.4

It is recommended to add the URL, username, and password as environment variables to avoid passing them with every anchore-cli command. See the repo for more info.
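
For example, anchore-cli reads ANCHORE_CLI_URL, ANCHORE_CLI_USER, and ANCHORE_CLI_PASS from the environment, so the setup for this deployment would look like:

export ANCHORE_CLI_URL=http://40.117.232.147:8228/v1
export ANCHORE_CLI_USER=admin
export ANCHORE_CLI_PASS=foobar

anchore-cli system status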

You are now ready to begin analyzing images.

Creating a Container Registry in Azure

First, create a resource group.

Azure CLI:

az group create --name anchoreContainerRegistryGroup --location eastus

Create a container registry.

Azure CLI:

az acr create --resource-group anchoreContainerRegistryGroup --name anchoreContainerRegistry001 --sku Basic

Verify that you can log in to the newly created ACR.

Azure CLI:

az acr login --name anchoreContainerRegistry001

Push Image to ACR

In order to push an image to your newly created container registry, you must have an image. I’ve already pulled an image from my Docker Hub account via the following command:

docker pull jvalance/sampledockerfiles:latest

Once I have the image locally, it needs to be tagged with the fully qualified name of the ACR login server. This can be obtained via the following command:

Azure CLI:

az acr list --resource-group anchoreContainerRegistryGroup --query "[].{acrLoginServer:loginServer}" --output table

Output:

AcrLoginServer
--------------------------------------
anchorecontainerregistry001.azurecr.io

Run the following commands to tag and push the image:

docker tag jvalance/sampledockerfiles anchorecontainerregistry001.azurecr.io/sampledockerfiles:latest

docker push anchorecontainerregistry001.azurecr.io/sampledockerfiles:latest

View your pushed image in ACR.

Azure CLI:

az acr repository list --name anchorecontainerregistry001 --output table

Output:

Result
-----------------
sampledockerfiles

Now that we have an image in ACR, we can add the created registry to Anchore.

Add the Created Registry to Anchore and Begin Analyzing Images

With the anchore-cli, we can easily add the container registry to Anchore and analyze the image. You will need the following:

  • --registry-type: docker_v2
  • Registry: myregistryname.azurecr.io
  • Username: Username of ACR account
  • Password: Password of ACR account

To obtain the credentials of the ACR account run the following command:

Azure CLI:

az acr credential show --name anchorecontainerregistry001

Output:

{
  "passwords": [
    {
      "name": "password",
      "value": "********"
    },
    {
      "name": "password2",
      "value": "********"
    }
  ],
  "username": "anchoreContainerRegistry001"
}

Run the following command to add the registry to Anchore:

anchore-cli registry add --registry-type <Type> <Registry> <Username> <Password>
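
For the registry created in this walkthrough, that comes out to something like the following, with the password taken from the az acr credential show output above:

anchore-cli registry add --registry-type docker_v2 anchorecontainerregistry001.azurecr.io anchoreContainerRegistry001 <password>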

View the added registry:

anchore-cli registry list

Output:

Registry                                      Type             User                               
anchoreContainerRegistry001.azurecr.io        docker_v2        anchoreContainerRegistry001

Once we have configured the registry, we can analyze the image we just pushed with the following command:

anchore-cli image add anchoreContainerRegistry001.azurecr.io/sampledockerfiles:latest

We can view the analyzed image via the image list command:

anchore-cli image list

Output:

Full Tag                                                               Image ID                                                                Analysis Status        
anchoreContainerRegistry001.azurecr.io/sampledockerfiles:latest        be4e57961e68d275be8600c1d9411e33f58f1c2c025cf3af22e3901368e02fe1        analyzed             

Conclusion

Following these examples, we can see how simple it is to deploy an AKS cluster with a running Anchore Engine service and, if we are using ACR as a primary container registry, how easily Anchore can be set up to scan any images that reside within the registry.

How to integrate Kubernetes with Anchore Engine

By integrating Anchore and Kubernetes, you can ensure that only trusted and secure images are deployed and run in your Kubernetes environment.

Overview

Anchore provides the ability to inspect, query, and apply policies to container images prior to deployment in your private container registry, ensuring that only images that meet your organization’s policies are deployed in your Kubernetes environment.

Anchore can be integrated with Kubernetes using admission controllers to ensure that images are validated before being launched. This ensures that images that fall out of compliance, for example, due to new security vulnerabilities discovered, can be blocked from running within your environment. Anchore can be deployed standalone or as a service running within your Kubernetes environment.
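
To make the mechanism concrete, the integration hinges on a ValidatingWebhookConfiguration that routes pod creation requests to the controller for a policy verdict. The following is only a minimal sketch; the service name, namespace, path, and CA bundle are placeholders, and the validate-webhook.yaml generated for the admission controller chart is the authoritative version:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: demo-anchore-admission-controller.admission.anchore.io
webhooks:
  - name: demo-anchore-admission-controller.admission.anchore.io
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        namespace: default                          # placeholder namespace
        name: demo-anchore-admission-controller     # placeholder service name
        path: /validate                             # placeholder path
      caBundle: <base64-encoded CA certificate>
    failurePolicy: Fail                             # reject the pod if the check cannot complete
    sideEffects: None
    admissionReviewVersions: ["v1"]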

Getting Started with Integration

How to Integrate Anchore and Kubernetes

We have recently packaged the Anchore Engine as a Helm Chart to simplify deployment on Kubernetes. Now Anchore can be installed in a highly scalable environment with a single command.

Within 3 minutes you can have an Anchore Engine installed and running in your Kubernetes environment. The following guide requires:

  • A running Kubernetes Cluster
  • kubectl configured to access your Kubernetes cluster
  • Helm binary installed and available in your path

Tiller, the server-side component of Helm, should be installed in your Kubernetes cluster. To install Tiller, run the following command:

$ helm init
$HELM_HOME has been configured at /home/username/.helm
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
⎈ Happy Helming! ⎈

If Tiller has already been installed, you will receive a warning message that can safely be ignored.

Next, we need to ensure that we have an up-to-date list of Helm charts.

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

By default, the Anchore Engine chart will deploy an Anchore Engine container along with a PostgreSQL database container; however, this behavior can be overridden if you have an existing PostgreSQL service available.
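
As a rough sketch of that override (the keys shown here are assumptions that may differ between chart versions, so check the chart’s values.yaml), you would disable the bundled database and point the engine at your existing service via a custom values file:

# external-db-values.yaml -- illustrative keys only
postgresql:
  enabled: false                            # do not deploy the bundled PostgreSQL container
  externalEndpoint: mydb.example.com:5432   # assumed key for an existing PostgreSQL service
  postgresUser: anchoreengine
  postgresPassword: <password>
  postgresDatabase: anchore

$ helm install --name anchore-demo -f external-db-values.yaml stable/anchore-engine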

In addition to the database, the chart creates two deployments:

  • Core Services: The core services deployment includes the external API, notification service, Kubernetes webhook, catalog, and queuing service.
  • Worker: The worker service runs the image analysis and can be scaled up to handle concurrent evaluation of images.

In this example, we will deploy the database, core services, and a single worker. Please refer to the documentation for more sophisticated deployments, including scaling worker nodes (a quick scaling example follows the install command below).

The installation can be completed with a single command:

$ helm install --name anchore-demo stable/anchore-engine
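
As mentioned above, analysis throughput comes from the worker deployment. The chart exposes a value for the worker replica count at install time (see the chart for the exact key); as a quick post-install illustration, the worker deployment created by this release can also be scaled directly:

$ kubectl scale deployment anchore-demo-anchore-engine-worker --replicas=3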

Read the Documentation

Read the documentation on Anchore integration with Kubernetes and get started with the integration.