DevSecOps and the Next Generation of Digital Transformation

COVID-19 is accelerating the digital transformation of commercial and public sector enterprises around the world. However, digital transformation brings along new digital assets (such as applications, websites, and databases), increasing an enterprise’s attack surface. To prevent costly breaches, protect reputation, and maintain customer relationships, enterprises undergoing digital transformation have begun implementing a built-in and bottom-up security approach: DevSecOps.

Ways Enterprises Can Start Implementing DevSecOps

DevSecOps requires sharing the responsibility of security across development and operations teams. It involves empowering development, DevOps, and IT personnel with security information and tools to identify and eliminate threats as early as possible. Here are a few ways enterprises that are undergoing digital transformation can start implementing DevSecOps:

    • Analyze Front-End Code. Cybercriminals love to target front-end code due to its high number of reported vulnerabilities and security issues. Use CI/CD pipelines to detect security flaws early and share that information with developers so they can fix the issues. It’s also a good idea to make sure that attackers haven’t injected any malicious code – containers can be a great way to ensure immutability.
    • Sanitize Sensitive Data. Today, several open source tools can detect personally identifiable information (PII), secrets, access keys, etc. Running a simple check for sensitive data can be exponentially beneficial – a leaked credential in a GitHub repository could mean game over for your data and infrastructure.
    • Utilize IDE Extensions. Developers use integrated development environments and text editors to create and modify code. Why not take advantage of open source extensions that can scan local directories and containers for vulnerabilities? You can’t detect security issues much earlier in the SDLC than that!
    • Integrate Security into CI/CD. There are many open source Continuous Integration/Continuous Delivery tools available such as Jenkins, GitLab CI, Argo, etc. Enterprises should integrate one or more security solutions into their current and future CI/CD pipelines. A good solution would include alerts and events that allow developers to resolve the security issue prior to pushing anything into production.
    • Go Cloud Native. As mentioned earlier, containers can be a great way to ensure immutability. Paired with a powerful orchestration tool, such as Kubernetes, containers can completely transform the way we run distributed applications. There are many great benefits to “going cloud-native,” and several ways enterprises can protect their data and infrastructure by securing their cloud-native applications.
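To make the "sanitize sensitive data" step above concrete: even a single pattern match can catch the most obvious leaks before they reach a remote repository. The sketch below is deliberately minimal – the directory, file, and key are fabricated for the demo, and purpose-built scanners such as gitleaks or trufflehog cover far more patterns:

```shell
# Demo: flag AWS-style access key IDs in a source tree before commit/CI.
# The directory and file contents here are fabricated for illustration.
mkdir -p /tmp/secret-scan-demo
echo 'aws_key = "AKIAIOSFODNN7EXAMPLE"' > /tmp/secret-scan-demo/config.py

# "AKIA" followed by 16 uppercase letters/digits is the classic access key ID shape.
if grep -rEl 'AKIA[A-Z0-9]{16}' /tmp/secret-scan-demo; then
    echo "possible AWS access key found -- failing the pipeline" >&2
fi
```

In a real pipeline, the `grep` (or scanner) exit status would gate the build rather than just print a warning.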

Successful Digital Transformation with DevSecOps

From government agencies to fast food chains, DevSecOps has enabled enterprises to quickly and securely transform their services and assets, even during a pandemic. For example, the US Department of Defense Enterprise DevSecOps Services Team has reduced the average time it takes for software to be approved for military use from years to days. For the first time ever, that same team managed to update the software on a spy plane mid-flight!

On the commercial side of things, we’ve seen the pandemic force many businesses and enterprises to adopt new ways of doing things, especially in the food industry. For example, with restaurant seating shut down, Chick-fil-A had to rely heavily on its drive-thru, curbside, and delivery services. Where do those services begin? Software applications! Chick-fil-A famously uses GitOps, Kubernetes, and AWS and controls large amounts of sensitive data for all of its customers, making it critical that Chick-fil-A implements DevSecOps instead of just DevOps. Imagine if your favorite fast food chain was hacked and your data was stolen – that would be extremely detrimental to business. With the suspiciously personalized ads that I receive on the Chick-fil-A app, there’s also reason to believe that Chick-fil-A has implemented DevSecMLOps, but that’s a topic for another discussion.

A Beginner’s Guide to Anchore Enterprise

[Updated post as of October 22, 2020]

While many Anchore Enterprise users are familiar with our open source Anchore Engine tool and have a good understanding of the way Anchore works, getting started with the additional features provided by the full product may at first seem overwhelming.

In this blog, we will walk through some of the major capabilities of Anchore Enterprise in order to help you get the most value from our product. From basic user interface (UI) usage to enabling third-party notifications, the following sections describe some common things to first explore when adopting Anchore Enterprise.

The Enterprise User Interface

Perhaps the most notable feature of Anchore Enterprise is the addition of a UI to help you navigate various features of Anchore, such as adding images and repositories, configuring policy bundles and whitelists, and scheduling or viewing reports.

The UI helps simplify the usability of Anchore by allowing you to perform normal Anchore actions without requiring a strong understanding of command-line tooling. This means that instead of editing a policy bundle as a JSON file, you can instead use a simple-to-use GUI to directly add or edit policy bundles, rule definitions, and other policy-based features.
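For a sense of what the UI is abstracting away, a hand-edited policy bundle is JSON along these lines. This is a minimal, illustrative sketch rather than a complete or authoritative bundle – consult the policy documentation for the full schema:

```json
{
  "id": "example-bundle",
  "version": "1_0",
  "name": "Example policy bundle",
  "policies": [
    {
      "id": "example-policy",
      "version": "1_0",
      "name": "Default policy",
      "rules": [
        {
          "id": "rule-1",
          "gate": "vulnerabilities",
          "trigger": "package",
          "action": "STOP",
          "params": [
            { "name": "package_type", "value": "all" },
            { "name": "severity_comparison", "value": ">=" },
            { "name": "severity", "value": "high" }
          ]
        }
      ]
    }
  ],
  "whitelists": [],
  "mappings": []
}
```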

Check out our documentation for more information on getting started with the Anchore Enterprise UI.

Advanced Vulnerability Feeds

With the move to Anchore Enterprise, you have the ability to include third-party entitlements that grant access to enhanced vulnerability feed data from Risk Based Security’s VulnDB. You can also analyze Windows-based containers using vulnerability data provided by Microsoft Security Research Center (MSRC).

Additionally, feed sync statuses can be viewed directly in the UI’s System Dashboard, giving you insight into the status of the data feeds along with the health of the underlying Anchore services. You can read more about enabling and configuring Anchore to use a localized feed service.

Note: Enabling the on-premises (localized) feeds service is required to enable VulnDB and Windows feeds, as these feed providers are not included in the data provided by our feed service.

Enterprise Authentication

In addition to Role-Based Access Controls (RBAC) to enhance user and account management, Anchore Enterprise includes the ability to configure an external authentication provider using LDAP, or OAuth / SAML.

Single Sign-On can be configured via OAuth / SAML support, allowing you to configure Anchore Enterprise to use an external Identity Provider such as Keycloak, Okta, or Google-SSO (among others) in order to fit into your greater organizational identity management workflow.

You can configure these features from the system dashboard provided by the UI, making the integrations straightforward to set up and monitor.

Take a look at our RBAC, LDAP, or our SSO documentation for more information on authentication/authorization options in Anchore Enterprise.

Third-Party Notifications

By using our Notifications service, you can configure your Anchore Enterprise deployment to send alerts to external endpoints (Email, GitHub, Slack, and more) about system events such as policy evaluation results, vulnerability updates, and system errors.

Notification endpoints can be configured and managed through the UI, along with the specific events that fit your organizational needs. The currently supported endpoints are:

  • Email—Send notifications to a specific SMTP mail service
  • GitHub—Version control for software development using Git
  • JIRA—Issue tracking and agile product management software by Atlassian
  • Slack—Team collaboration software tools and online services by Slack Technologies
  • Teams—Team collaboration software tools and online services by Microsoft
  • Webhook—Send notifications to a specific API endpoint

For more information on managing notifications in Anchore Enterprise, take a look at our documentation on notifications.

Conclusion

In this blog, we provided a high-level overview of several features to explore when first starting out with Anchore Enterprise. There are multiple other features that we didn’t touch on, so check out our product comparison page for a list of other features included in Anchore Enterprise vs. our open-source Engine offering.

Take a look at our FAQs for more information.

Our Top 5 Strategies for Modern Container Security

[Updated post as of October 15, 2020]

At Anchore, we’re fortunate to be part of the journey of many technology teams as they become cloud-native. We would like to share what we know.

Over the past several years, we’ve observed many teams perform microservice application modernization using containers as the basic building blocks. Using Kubernetes, they dynamically orchestrate these software units and optimize their resource utilization. Aside from the adoption of new technologies, we’ve seen cultural transformations as well.

For example, organizational silos are being broken down to provide an environment for “shifting left,” with the shared goal of incorporating as much validation as possible before a software release. One specific area of transformation that fascinates us here is how cloud-native is modernizing both development and security practices, along with CI/CD and operations workflows.

Below, we discuss how foundational elements of modern container image security, combined with improved development practices, enhance software delivery overall. For the purposes of this blog, we’ll focus mainly on the image build and the surrounding process within the CI stages of the software development lifecycle.

Here is some high-level guidance all technology teams using containers can implement to increase their container image security posture.

  1. Use minimal base images: Use minimal base images only containing necessary software packages from trusted sources. This will reduce the attack surface of your images, meaning there is less to exploit, and it will make you more confident in your deployment artifacts. To address this, Red Hat introduced Universal Base Images designed for applications that contain their own dependencies. UBIs also undergo regular vulnerability checking and are continuously maintained. Other examples of minimal base images are Distroless images, maintained by Google, and Alpine Linux images.
  2. Go daemonless: Moving away from the Docker CLI and daemon client/server model to a “daemonless” fork/exec model provides advantages. Traditionally, with the Docker container platform, image build, registry, and container operations happen through what is known as the daemon. Not only does this create a single point of failure, but Docker operations are conducted by a user with full root authority. More recently, tools such as Podman, Buildah, and Skopeo (we use Skopeo inside of Anchore Engine) were created to address the challenges of building images, working with registries, and running containers. For a bit more information on the security benefits of using Podman vs. Docker, read this article by Dan Walsh.
  3. Require image signing: Require container images to be signed to verify their authenticity. By doing so you can verify that your images were pushed by the correct party. Image authenticity can be verified with tools such as Notary, and both Podman and Skopeo (discussed above) also provide image signing capabilities. Taking this a step further, you can require that CI tools, repositories, and all other steps in the CI pipeline cryptographically sign every image they process with a software supply chain security framework such as in-toto.
  4. Inspect deployment artifacts: Inspect container images for vulnerabilities, misconfigurations, credentials, secrets, and bespoke policy rule violations prior to being promoted to a production registry and certainly before deployment. Container analysis tools such as Anchore can perform deep inspection of container images, and provide codified policy enforcement checks which can be customized to fit a variety of compliance standards. Perhaps the largest benefit of adding security testing with gated policy checks earlier in the container lifecycle is that you will spend less time and money fixing issues post-deployment.
  5. Create and enforce policies: For each of the above, tools selected should have the ability to generate codified rules to enable a policy-driven build and release practice. Once chosen they can be integrated and enforced as checkpoints/quality control gates during the software development process in CI/CD pipelines.
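As one concrete way to wire points 4 and 5 into CI, a GitHub Actions job can fail the build when an image violates a severity cutoff. The workflow below is a hypothetical sketch: the action name and inputs reflect Anchore’s published scan-action, but treat the exact field names as assumptions to verify against its README, and the image name is invented:

```yaml
# Hypothetical CI gate: fail the pipeline if the built image has
# high-severity (or worse) findings. Image name is illustrative.
name: container-security-gate
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build image
        run: docker build -t localbuild/testimage:latest .
      - name: Scan image
        uses: anchore/scan-action@v2
        with:
          image: "localbuild/testimage:latest"
          fail-build: true
          severity-cutoff: high
```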

How Improved Development Practices Help

The above can be quite challenging to implement without modernizing development in parallel. One development practice we’ve seen change the way organizations are able to adopt supply chain security in a cloud-native world is GitOps. The declarative constructs of containers and Kubernetes configurations, coupled with infrastructure-as-code tools such as Terraform, provide the elements for teams to fully embrace the GitOps methodology. Git now becomes the single source of truth for infrastructure and application configuration, along with policy-as-code documents. This practice allows for improved knowledge sharing, code reviews, and self-service, while at the same time providing a full audit trail to meet compliance requirements.

Final Thought

The key benefit of adopting modern development practices is the ability to deliver secure software faster and more reliably. By shifting as many checks as possible into an automated testing suite as part of CI/CD, issues are caught early, before they ever make their way into a production environment.

Here at Anchore, we’re always interested in finding out more about your cloud-native journey, and how we may be able to help you weave security into your modern workflow.

Adopt Zero Trust to Safeguard Containers

In a time where remote access has shifted from the exception to the new normal, users require access to enterprise applications and services from outside the traditional boundaries of an enterprise network. The rising adoption of microservices and containerized applications have further complicated things. Containers and their underlying infrastructure don’t play well within the boundaries of traditional network security practices, which typically emphasize security at the perimeter. As organizations look for ways to address these challenges, strategies such as the Zero Trust model have gained traction in securing containerized workloads.

What is the Zero Trust Model?

Forrester Research introduced the Zero Trust model in 2010, emphasizing a new approach to security: “never trust, always verify.” The belief was that traditional security methodologies focused on securing the internal perimeter were no longer sufficient and that any entity accessing enterprise applications and services needed to be authenticated, authorized, and continuously validated, whether inside or outside of the network perimeter, before being granted or keeping access to applications and their data. 

Since then, cloud adoption and the rise of the distributed enterprise model have seen organizations looking to adopt these principles at a time when security threats and breaches have become commonplace. Google, a regular early adopter of new technological trends, released a series of whitepapers and other publications in 2014 detailing its implementation of the Zero Trust model in a project known as BeyondCorp.

Zero Trust and Containerized Workloads

So how can organizations apply Zero Trust principles on their containerized workloads?

Use Approved Images

A containerized environment gives you the ability to bring up new applications and services quickly using free and openly distributed software rather than building them yourself. There are advantages to using open source software, but it also presents the inherent risk of introducing vulnerabilities and other issues into your environment. Restricting deployments to images that have been vetted and approved can greatly reduce your attack surface and ensure only trusted applications and services are deployed into production.

Implement Network Policies

Container networking introduces complexity: nodes, pods, containers, and service endpoints are each assigned IP addresses, typically on different network ranges, and require interconnectivity to function properly. As a result, these endpoints are generally configured to communicate freely by default. Implementing network policies and micro-segmentation enforces explicit controls around the traffic and data flowing between these entities, ensuring that only permitted communications are established.
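A default-deny posture is the usual starting point for this. In Kubernetes it can be expressed with a NetworkPolicy like the following (the namespace is illustrative), after which traffic is re-enabled only through explicit allow policies:

```yaml
# Deny all ingress and egress for every pod in the namespace by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: demo        # illustrative namespace
spec:
  podSelector: {}        # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```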

Secure Endpoints

In traditional enterprise networks, workloads are often assigned static IP addresses as an identifier and controls are placed around which entities can access certain IP addresses. Containerized applications are typically short-lived, resulting in a dynamic environment with large IP ranges, making it harder to track and audit network connections. To secure these endpoints and the communications between them, organizations should focus on continuously validating and authorizing identities. An emphasis should also be placed on encrypting any communications between endpoints.
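One common way to get encrypted, identity-verified traffic between short-lived workloads is mutual TLS from a service mesh. Assuming Istio is in use (an assumption, not something every environment will have), a single PeerAuthentication resource enforces mTLS namespace-wide; the namespace is illustrative:

```yaml
# Require mTLS for all workload-to-workload traffic in the namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: require-mtls
  namespace: demo   # illustrative
spec:
  mtls:
    mode: STRICT
```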

Implement Identity-Based Policies

One of the most important aspects of Zero Trust is ensuring that no entity, inside or outside the perimeter, is authorized to access privileged data and systems without first validating and confirming their identity. As previously mentioned, IP-based validation is no longer sufficient in a containerized environment. Instead, enterprises should enforce policies based on the identities of the actual workloads running in their environments. Role-based access control can facilitate the implementation of fine-grained access policies based on an entity’s characteristics while employing a least-privilege approach further narrows the scope of access by ensuring that any entity requiring privileged access is granted only the minimum level of permissions required to perform a set of actions. 
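In Kubernetes terms, least privilege looks like a narrowly scoped Role bound to a specific workload identity (a service account) rather than a broad group of human users. The names below are illustrative:

```yaml
# Grant read-only access to pods in one namespace to one service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: demo            # illustrative namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo
subjects:
  - kind: ServiceAccount
    name: app-sa             # illustrative workload identity
    namespace: demo
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```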

Final Thoughts

Container adoption has become a point of emphasis for many organizations in their digital transformation strategies. While there are many benefits to containers and microservices, organizations must be careful not to combine new technologies with archaic enterprise security methodologies. As organizations devise new strategies for securing containerized workloads in a modernized infrastructure, the Zero Trust model can serve as a framework for success. 

The Story Behind Anchore Toolbox

As tool builders, we interact daily with teams of developers, operators, and security professionals working to achieve efficient and highly automated software development processes.  Our goal with this initiative is to provide a technology-focused space for ourselves and the community to build and share a variety of open-source tools to provide data gathering, security, and other capabilities in a form specifically designed for inclusion in developer and developer infrastructure workflows.

This post will share the reasoning, objectives, future vision, and methods for joining and contributing to this new project from Anchore.

Why Anchore Toolbox?

Over the last few years, we’ve witnessed a significant effort in the industry to adopt highly automated, modern software delivery lifecycle (SDLC) management processes. As a container security and compliance technology provider, we often find ourselves deeply involved in security and compliance discussions with practitioners, and in the general design of new, automation-oriented developer infrastructure systems. Development teams are looking to build automated security and compliance data collection and controls directly into their SDLC processes. We believe there is an opportunity to translate many of the lessons learned along the way into small, granular tools specifically (and importantly!) designed to be used within a modern developer/CI/CD environment. Toward this objective, we’ve adopted a UNIX-like philosophy for projects in the Toolbox. Each tool is a stand-alone element with a particular purpose that your team can combine with other tools to construct more comprehensive flows. This model lends itself to useful manual invocation. We also find it works well when integrating these types of operations into existing CI/CD platforms such as GitHub, GitLab, Atlassian BitBucket, Azure Pipelines, and CloudBees as they continue to add native security and compliance interfaces.

What’s Available Today?

We include two tools in Anchore Toolbox to start – Syft, a software bill of materials generator, and Grype, a container image/code repository vulnerability scanner. Syft and Grype are fast and efficient software analysis tools that come from our experience building technologies that provide deep container image analysis and security data.

To illustrate how we envision DevSecOps teams using these tools in practice, we’ve included a VS Code extension for Grype and a new version of the Anchore Scan GitHub action, based on Grype, that supplies container image security findings to GitHub’s recently launched code scanning feature set. 

Both Syft and Grype are lightweight command-line tools by design. We wrote them in Go, making them very straightforward additions to any developer/developer infrastructure workflow. There’s no need to install any language-specific environments or struggle with configurations to pass information in and out of a container instance. To support interoperability with many SBOM, security, and compliance data stores, you can choose to generate results in human-readable, JSON, and CycloneDX formats.

Future of Anchore Toolbox

We’re launching the Anchore Toolbox with what we believe are important and fundamental building block elements that by themselves fill in essential aspects of the modern SDLC story, but we’re just getting started.  We would love nothing more than to hear from anyone in the community who shares our enthusiasm for bringing the goals of security, compliance, and insight automation ever closer.  We look forward to continuing the discussion and working with you to improve our existing projects and to bring new tools into the Toolbox!

For more information – check out the following resources to start using Anchore Toolbox today.

Introducing Anchore Toolbox: A New Collection of Open Source DevSecOps Tools

Anchore Toolbox is a collection of lightweight, single-purpose, easy-to-use, open source DevSecOps tools that Anchore has developed for developers and DevOps teams who want to build their continuous integration/continuous delivery (CI/CD) pipelines.

We’re building Toolbox to support the open source DevSecOps community by providing easy-to-use, just-in-time tools available at the command line interface (CLI). Our goal is for Toolbox to serve a fundamentally different need than Anchore Enterprise by offering DevSecOps teams single-purpose tools optimized for speed and ease of use.

The first tools to debut as part of Anchore Toolbox are Syft and Grype:

Syft

We built Syft from the ground up to be an open source analyzer that serves developers who want to “shift left” and scan their projects still in development. You can use Syft to scan a container image, but also a directory inside your development project.

Syft tells you what’s inside your super complicated project or container and builds you a detailed software bill of materials (SBOM). You can output an SBOM from Syft as a text file, table, or JavaScript Object Notation (JSON) file, and Syft includes native output support for the CycloneDX format.

Installing Syft

We provide everything you need, including full documentation for installing Syft over on GitHub.

Grype

Grype is an open source project to scan your project or container for known vulnerabilities. Grype uses the latest information from the same Anchore feed services as Anchore Engine. You can use Grype to identify vulnerabilities in most Linux operating system packages and language artifacts, including NPM, Python, Ruby, and Java.

Grype provides output similar to Syft, including table, text, and JSON. You can use Grype on container images or just directories. 

Installing Grype

We provide everything you need, including full documentation for installing Grype over on GitHub.

Anchore’s Open Source Portfolio and DevSecOps

Open source is a building block of today’s DevSecOps toolchain and integral to the growth of the DevSecOps community at large. Anchore Toolbox is part of our strategy to contribute to both the open source and DevSecOps communities and do our part to advance container security practices.

The Anchore Open Source Portfolio also includes two other elements:

  • Out-of-the-box integrations that connect Anchore open source technologies with common CI/CD platforms and developer tools with current integrations including GitHub Actions, Azure Pipelines, BitBucket Pipes, and Visual Studio Code
  • Anchore Engine, a persistent service that stores SBOMs and scan results for historical analysis and API-based interaction

Learn more about Anchore Toolbox

The best way to learn about Syft and Grype is to use them! Also, stay tuned this week for a blog on Thursday, October 8, 2020, from Dan Nurmi, Anchore CTO, who tells the story behind Anchore Toolbox and offers a look forward at what we plan to do with open source as a company.

Join the Anchore Community on Slack to learn more about Toolbox developments and interact with our online community, file issues, and give feedback about your experience with these new tools.

Deploying Anchore Enterprise 2.4 on AWS Elastic Kubernetes Services (EKS) with Helm

[Updated post as of October 1, 2020]

In this post, I will walk through the steps for deploying Anchore Enterprise v2.4 on Amazon EKS with Helm. Anchore currently maintains a Helm Chart which we will use to install the necessary Anchore services.

Prerequisites

  • A running Amazon EKS cluster with worker nodes launched. See EKS Documentation for more information.
  • Helm (v3) client installed and configured.

Before we proceed, let’s confirm our cluster is up and running and we can access the kube-api server of our cluster:

Note: Since we will be deploying all services including the database as pods in the cluster, I have deployed a three-node cluster with (2) m5.xlarge and (1) t3.large instances for a basic deployment. I’ve also given the root volume of each node 65GB (195GB total) since we will be using the cluster for persistent storage of the database service.

$ kubectl get nodes
NAME                                       STATUS  ROLES   AGE  VERSION
ip-10-0-1-66.us-east-2.compute.internal    Ready   <none>  1d   v1.16.12-eks
ip-10-0-3-15.us-east-2.compute.internal    Ready   <none>  1d   v1.16.12-eks
ip-10-0-3-157.us-east-2.compute.internal   Ready   <none>  1d   v1.16.12-eks

Configuring the Ingress Controller

The ALB Ingress Controller triggers the creation of an Application Load Balancer (ALB) and the necessary supporting AWS resources whenever an Ingress resource is created on the cluster with the kubernetes.io/ingress.class: alb annotation.

To support external access to the Enterprise UI and Anchore API, we will need the cluster to create an ALB for our deployment.

To enable the ALB Ingress Controller pod to create the load balancer and required resources, we need to update the IAM role of the worker nodes and tag the cluster subnets the ingress controller should associate the load balancer with.

  • Download the sample IAM Policy from AWS and attach it to your worker node role either via console or aws-cli.
  • Add the following tags to your cluster’s public subnets:
Key                                    Value
kubernetes.io/cluster/<cluster-name>   shared
kubernetes.io/role/elb                 1

Next, we need to create a Kubernetes service account in the kube-system namespace, a cluster role, and a cluster role binding for the ALB Ingress Controller to use:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/rbac-role.yaml

With the service account and cluster role resources deployed, download the AWS ALB Ingress Controller deployment manifest to your working directory:

$ wget https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/alb-ingress-controller.yaml

Under the container specifications of the manifest, uncomment --cluster-name= and enter the name of your cluster:

# REQUIRED
 # Name of your cluster. Used when naming resources created
 # by the ALB Ingress Controller, providing distinction between
 # clusters.
 - --cluster-name=<eks_cluster_name>

Save and close the deployment manifest, then deploy it to the cluster:

$ kubectl apply -f alb-ingress-controller.yaml

Installing the Anchore Engine Helm Chart

To add the chart repository, run the following command:

$ helm repo add anchore https://charts.anchore.io

"anchore" has been added to your repositories

Confirm the chart repository was added successfully:

$ helm repo list
NAME    URL
anchore https://charts.anchore.io

Deploying Anchore Enterprise

For the purposes of this post, we will focus on getting a basic deployment of Anchore Enterprise running. For a complete set of configuration options you may include in your installation, refer to the values.yaml file in our charts repository.

Note: Refer to our blog post Configuring Anchore Enterprise on EKS for a walkthrough of common production configuration options including securing the Application Load Balancer/Ingress Controller deployment, using S3 archival and configuring a hosted database service such as Amazon RDS.

Configure Namespace and Credentials

First, let’s create a new namespace for the deployment:

$ kubectl create namespace anchore

namespace/anchore created

Enterprise services require an active Anchore Enterprise subscription (which is supplied via license file), as well as Docker credentials with permission to the private docker repositories that contain the enterprise images.

Create a Kubernetes secret in the anchore namespace with your license file:

Note: You will need to reference the exact path to your license file on your localhost. In the example below, I have copied my license to my working directory.

$ kubectl -n anchore create secret generic anchore-enterprise-license --from-file=license.yaml=./license.yaml

secret/anchore-enterprise-license created

Next, create a secret containing the Docker Hub credentials with access to the private Anchore Enterprise repositories:

$ kubectl -n anchore create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=<DOCKERHUB_USER> --docker-password=<DOCKERHUB_PASSWORD> --docker-email=<EMAIL_ADDRESS>

secret/anchore-enterprise-pullcreds created

Ingress

Create a new file named anchore_values.yaml in your working directory and add an ingress section with the following contents:

ingress:
  enabled: true
  # Use the following paths for GCE/ALB ingress controller
  apiPath: /v1/*
  uiPath: /*
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing

Engine API

Below the ingress section add the following block to configure the Enterprise API:

Note: To expose the API service, we set the service type to NodePort instead of the default ClusterIP.

anchoreApi:
  replicaCount: 1

  # kubernetes service configuration for anchore external API
  service:
    type: NodePort
    port: 8228
    annotations: {}

Enable Enterprise Deployment

Next, add the following to your anchore_values.yaml file below the anchoreApi section:

anchoreEnterpriseGlobal:
    enabled: true

Enterprise UI

Like the API service, we’ll need to expose the UI service to ensure it is accessible outside the cluster. Copy the following section at the end of your anchore_values.yaml file:

anchoreEnterpriseUi:
  enabled: true
  image: docker.io/anchore/enterprise-ui:latest
  imagePullPolicy: IfNotPresent

  # kubernetes service configuration for anchore UI
  service:
    type: NodePort
    port: 443
    annotations: {}
    labels: {}
    sessionAffinity: ClientIP

Deploying the Helm Chart

To install the chart, run the following command from the working directory:

$ helm install --namespace anchore <your_release_name> -f anchore_values.yaml anchore/anchore-engine

It will take the system several minutes to bootstrap. You can check on the status of the pods by running kubectl get pods:

$ kubectl -n anchore get pods 

NAME                                                              READY   STATUS    RESTARTS   AGE
anchore-cli-5f4d697985-rdm9f                                      1/1     Running   0          14m
anchore-enterprise-anchore-engine-analyzer-55f6dd766f-qxp9m       1/1     Running   0          9m
anchore-enterprise-anchore-engine-api-bcd54c574-bx8sq             4/4     Running   0          9m
anchore-enterprise-anchore-engine-catalog-ddd45985b-l5nfn         1/1     Running   0          9m
anchore-enterprise-anchore-engine-enterprise-feeds-786b6cd9mw9l   1/1     Running   0          9m
anchore-enterprise-anchore-engine-enterprise-ui-758f85c859t2kqt   1/1     Running   0          9m
anchore-enterprise-anchore-engine-policy-846647f56b-5qk7f         1/1     Running   0          9m
anchore-enterprise-anchore-engine-simplequeue-85fbd57559-c6lqq    1/1     Running   0          9m
anchore-enterprise-anchore-feeds-db-668969c784-6f556              1/1     Running   0          9m
anchore-enterprise-anchore-ui-redis-master-0                      1/1     Running   0          9m
anchore-enterprise-postgresql-86d56f7bf8-nx6mw                    1/1     Running   0          9m

Run the following command to get details on the deployed ingress:

$ kubectl -n anchore get ingress

NAME    HOSTS   ADDRESS  PORTS   AGE

support-anchore-engine   *       1a2b3c4-anchoreenterprise-f9e8-123456789.us-east-2.elb.amazonaws.com   80      4h

You should see the address for the created ingress and can use it to navigate to the Enterprise UI:

Anchore Enterprise login screen.

Conclusion

You now have an installation of Anchore Enterprise up and running on Amazon EKS. The complete contents for the walkthrough are available by navigating to the GitHub repo here. For more info on Anchore Engine or Enterprise, you can join our community Slack channel, or request a technical demo.

Compliance’s Role in Container Image Security and Vulnerability Scanning

Compliance is the practice of adhering to a set of standards for recommended security controls, laid out by a particular agency or industry, that an application must meet or face stiff penalties. Today, most enterprises must observe regulations designed to protect information and assets, ranging from the Center for Internet Security (CIS) benchmarks to the Health Insurance Portability and Accountability Act (HIPAA). As with most things in compliance, it’s how an agency or company configures its applications and services that counts. While vulnerability scanning and image analysis are crucial parts of container security, ensuring that images comply with organizational and industry regulations extends beyond merely looking for vulnerabilities.

NIST SP 800-190

An example of such an agency is the National Institute of Standards and Technology (NIST). NIST is a non-regulatory government agency that develops technology, metrics, and standards to drive innovation and economic competitiveness at U.S.-based organizations in the science and technology industry. Companies providing products and services to the federal government are often required to meet NIST security mandates. NIST provides guidance with Special Publication (SP) 800-190, which addresses the security concerns associated with application container technologies.

CIS Docker Benchmark

The Center for Internet Security (CIS), with its CIS 1.13.0 Docker compliance guide, provides a more general set of recommended compliance guidelines. A CIS 1.13.0 policy bundle that addresses the compliance regulations outlined by CIS is available on Anchore’s Policy Hub, making it simple to enforce these checks with Anchore out of the box. Many common CIS compliance checks are already implemented in the CIS policy bundle or have examples for end users to customize, and any Anchore policy bundle can be extended, or new bundles created, to tailor checks directly to application and industry recommendations.

Enforcing Compliance with Anchore

As outlined in this previous blog post written by our very own Jeremy Valance, enforcing compliance with Anchore is a straightforward and flexible way to adhere to varying industry regulations. Given the variance of compliance needs across different enterprises, having a flexible and robust policy engine becomes necessary for organizations needing to stick to one or many sets of standards. With Anchore, development and security teams can harden their container security posture by adding an image scanning step to their CI, reporting back on CVEs, and fine-tuning policies to meet compliance requirements. Putting compliance checks in place ensures that only container images that meet the standards outlined by a particular agency or industry will be allowed to make their way into production-ready environments. 
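As an illustrative sketch (not Anchore’s published CIS bundle), a policy-as-code rule that stops builds containing high-severity vulnerabilities might look like this; the bundle and rule identifiers here are placeholders:

```json
{
  "id": "example-bundle",
  "version": "1_0",
  "name": "Example compliance bundle",
  "policies": [
    {
      "id": "default-policy",
      "version": "1_0",
      "name": "Default policy",
      "rules": [
        {
          "id": "rule-1",
          "gate": "vulnerabilities",
          "trigger": "package",
          "action": "STOP",
          "params": [
            { "name": "package_type", "value": "all" },
            { "name": "severity", "value": "high" },
            { "name": "severity_comparison", "value": ">=" }
          ]
        }
      ]
    }
  ]
}
```

Because bundles are plain JSON, they can be version-controlled alongside application code and evaluated automatically in CI.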

You can find more information on working with Anchore policies here.

The Importance of Building Trust in Cloud Security, A Shared Responsibility With DevOps Teams

Overall, the world is moving toward the cloud. Companies across the globe are recognizing the merit of overcoming infrastructure challenges by using cloud services. While moving to cloud infrastructure solves many complex problems, it introduces new challenges. Chief among them is the security of the business-critical information that companies now store in cloud infrastructure.

Storing data in cloud infrastructure is easy and convenient, but it comes with a whole new set of technical challenges for DevOps engineers. Cloud services provide a highly configurable environment that can be adapted to any application, but it is a new environment, and engineers must learn to configure it properly: user accounts must be tracked and given appropriate permissions, applications must be secured, and so must the infrastructure running them.

Misconfigured cloud systems are a significant risk for data breaches in which a company can lose important data. These losses can do incredible damage to a company, costing not only revenue but also customer trust and reputation. Such costly mistakes, more often than not, stem from a misconfigured system: user accounts with higher privileges than they need, web servers exposed to the public when they shouldn’t be, or multi-factor authentication not being required when it should be.

Overall, the cloud has a lot to offer: highly performant, scalable infrastructure, along with toolsets that give DevOps engineers control over their systems from top to bottom. However, this improved way of deploying and controlling production software is accompanied by a new set of security challenges that come from having to learn a whole new system. To secure business-critical systems, tooling must be developed so that DevOps engineers can ensure only secure software runs in production handling business-critical information. The landscape for production software is changing so quickly, and the margin for error is so small, that there must be a focus not only on automated deployment but on automated security as well. Infrastructure must be audited to ensure security, and applications must be audited before deployment, during deployment, and while running.

It is the responsibility of DevOps Engineers to ensure that the software running business-critical systems is secure. With such an extensive and highly configurable system offered by cloud providers, many small misconfigurations can fall through the cracks. The best way to overcome the challenges of ensuring software security is to develop automation using security tooling to ensure your system conforms to the requirements. Once automation has been put in place, it will ensure that any system goes through the same rigorous process and security checks before it makes it into production. This helps reduce the number of misconfigurations due to human error, and it will help increase the overall trustworthiness of production software.

Cloud infrastructure has so much to offer to improve the overall performance and data handling for companies today. However, it also comes with a whole new set of challenges that DevOps Engineers must face.

As companies put more and more of their information into the cloud, it falls on DevOps Engineers to ensure that data is safely managed. The cloud, by its nature, is highly configurable, and thus the security of the workloads running on it is subject to the configuration of the system. This configuration ultimately falls on the shoulders of DevOps Engineers, who must learn how to configure the system properly. To configure complex cloud systems, tooling and automation must be used to provide engineers a way to deploy software so that it is secure and trustworthy. Deploying software in this manner helps alleviate the complexity introduced by cloud systems and allows the engineers some peace of mind when their production software handles business-critical information.

Container Security & Automation, How To Implement And Keep Up With CI/CD

A major issue in modern software development is that most organizations are quick to adopt containers and automation, but remain behind the curve in adopting the DevSecOps processes that ensure container security. By sharing the responsibility of security across all software teams, organizations can begin to identify vulnerabilities earlier in their SDLC (software development lifecycle) and ingrain security and compliance into their current and future CI/CD (Continuous Integration/Continuous Delivery) workflows.

Empowering Developers Before CI/CD

One of the first steps an organization should take toward sharing the responsibility of security across all teams is to empower its developers with visibility and knowledge of security threats. As the ones who initially create and improve code, developers need to be aware of the weaknesses in the packages and libraries they are using. Since developers typically work on local machines, Anchore has created open source CLI tools that enable developers to generate SBOMs (software bills of materials) and identify vulnerabilities not only in container images but also in code and filesystems. Currently in pre-release, Syft and Grype are ideal for projects in development and will soon include automated vulnerability scanning with IDE (integrated development environment) plugins. This allows development and security teams to communicate and remediate security threats before wasting time or money on operational resources.
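As a sketch of that local workflow (assuming Syft and Grype are installed, and using alpine:latest and a local ./my-app directory purely as stand-ins):

```shell
# Generate an SBOM for a container image and save it as JSON
syft alpine:latest -o json > sbom.json

# Scan the same image for known vulnerabilities
grype alpine:latest

# Scan a local project directory instead of an image
grype dir:./my-app
```

Running these checks on a laptop, before any pipeline is involved, is what moves vulnerability detection to the earliest point in the SDLC.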

Automating Security During CI

Once the development and security teams have acknowledged and accepted the threat level in a software project or feature, they may decide to run the code through a CI pipeline. These pipelines are usually owned by DevOps (development operations) Engineers and may include stages like building a container image, running tests, and pushing the image to a registry. 

In order to share the responsibility of security across teams, an organization should ensure there is a vulnerability scanning and compliance stage in every pipeline. With easy integration into CI tools such as GitLab CI, Jenkins, and AWS CodeBuild, Anchore Enterprise 2.4.0 makes it simple for operations teams to incorporate malware scanning, base image comparisons, and enhanced vulnerability feeds to discover vulnerable points in the attack surface that the development team may have missed. When Anchore finds such a vulnerable point, subsequent pipeline stages can be configured to fail and the operations team can be alerted, so that development and security teams can work to resolve the issue.
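A minimal sketch of such a scanning stage in GitLab CI might look like the following; the job name, image tag, and Anchore Engine endpoint are placeholders, not a definitive configuration:

```yaml
stages:
  - build
  - scan
  - publish

container_scan:
  stage: scan
  image: anchore/engine-cli:latest                               # hypothetical image tag
  variables:
    ANCHORE_CLI_URL: "http://anchore-api.example.com:8228/v1"    # placeholder endpoint
  script:
    # Queue the freshly built image for analysis
    - anchore-cli image add "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
    # Block until analysis completes
    - anchore-cli image wait "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
    # Evaluate against policy; a failed check exits non-zero and fails the job
    - anchore-cli evaluate check "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```

Because the evaluate step exits non-zero on a policy failure, the publish stage never runs for a non-compliant image.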

Ensuring Compliance During CD

When a feature is ready and it comes time to deploy into production through an orchestration tool such as Kubernetes, it is important that organizations remain vigilant in their “final evaluation” of a container image before runtime. The security team may have requirements like blocking containers from using specific packages, ports, or user permissions. The organization may have a mandated level of compliance to achieve such as DISA, NIST, or PCI DSS compliance. Anchore makes it simple for the security team to enforce security and compliance checks with policy as code.

Additionally, the Anchore Admission Controller can ensure non-compliant containers are blocked from being deployed. Regardless of whether someone is attempting to deploy containers with a CD tool like Argo or by creating a pod, deployment, or stateful set, the Anchore Admission Controller will evaluate each container against the security team’s policy before deciding to deploy or not deploy.

Conclusion

As attackers are constantly looking to take advantage of vulnerable points, organizations should be looking for their own vulnerable points. By sharing the responsibility of security across all software teams, modern organizations can begin identifying threats earlier and automating container security processes in their CI/CD workflows.