Using Grype to Identify GitHub Action Vulnerabilities

About a month ago, GitHub announced a moderate security vulnerability in the GitHub Actions runner that can allow environment variable and path injection in workflows that log untrusted data to STDOUT. You can read the disclosure here for more details. Since we build and maintain a GitHub Action of our own at Anchore, this announcement was one we paid close attention to. While I’m sure many folks have already updated their GitHub Actions accordingly, I thought this would be a good opportunity to set up a CI workflow as if I were developing my own GitHub Action and step through the options Anchore provides for identifying this particular vulnerability.

To start with, I created an example repository in GitHub, demonstrating a very basic hello-world GitHub Action and workflow configuration. The configuration below scans the current directory of the project I am working on with the Anchore Container Scan Action. Under the hood, the tool scanning this directory is called Grype, an open-source project we built here at Anchore.

name: Scan current directory CI
on: [push]
jobs:
  anchore_job:
    runs-on: ubuntu-latest
    name: Anchore scan directory
    steps:
    - name: Checkout
      uses: actions/checkout@v2
    - name: Scan current project
      id: scan
      uses: anchore/scan-action@v2
      with:
        path: "./"
        fail-build: true
        acs-report-enable: true
    - name: upload Anchore scan SARIF report
      uses: github/codeql-action/upload-sarif@v1
      with:
        sarif_file: ${{ steps.scan.outputs.sarif }}

On push, I can navigate to the Actions tab and find the latest build. 

Build Output

The build output above shows a build failure due to vulnerabilities identified in the project of severity level medium or higher. To find out more information about these specific issues, I can jump over to the Security tab.

All CVEs open

Once here, we can click on the vulnerability linked to the disclosure discussed above. 

Open CVE

We can see the GHSA, and make the necessary updates to the @actions/core dependency we are using. While this is just a basic example, it paints a clear picture that adding security scans to CI workflows doesn’t have to be complicated. With the proper tools, it becomes quite simple to obtain actionable information about the software you’re building. 

If we wanted to take this a step further “left” in the software development lifecycle (SDLC), I could install Grype for Visual Studio Code, an extension for discovering project vulnerabilities while working locally in VS Code. 

Grype vscode

Here, for the same hello-world GitHub Action, I get visibility into vulnerabilities while working locally on my workstation and can resolve issues before pushing to my source code repository. In just a few minutes, I’ve also added two security checkpoints to the development lifecycle, spreading out my checks and giving myself more chances to catch issues early.
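If you prefer the terminal to an editor extension, the same signal is available from the Grype CLI itself. A minimal sketch, assuming the install script location and --fail-on flag documented in the Grype README:

# Install the Grype CLI (install script location per the Grype README)
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin

# Scan the current project directory and exit non-zero on medium or higher findings,
# mirroring the fail-build behavior of the CI job above
grype dir:. --fail-on medium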

Just for good measure, once I update my dependencies and push to GitHub, my CI job is now successfully passing the Anchore scan, and the security issues that were opened have now been closed and resolved. 

All CVEs closed

CVE closed

While this was just a simple demonstration of what is possible, at Anchore we think of these types of checks as basic good hygiene. The more places in the development workflow where developers get security information about the code they’re writing, the better positioned they’ll be to promote shared security principles across their organization and build high-quality, secure software.

Free Download: Inside the Anchore Technology Suite: Open Source to Enterprise

Open source is foundational to much of what we do here at Anchore. It’s at the core of Anchore Enterprise, our complete container security workflow solution for enterprise DevSecOps. Anchore Toolbox is our collection of lightweight, single-purpose open source tools for the analysis and scanning of software projects.

Each tool has its place in the DevSecOps journey, depending on your organization’s requirements and eventual goals.

Our free guide explains the following:

  • The role of containers in DevSecOps transformation
  • Features of Anchore Enterprise and Anchore Toolbox
  • Ideal use cases for Anchore Enterprise
  • Ideal use cases for Anchore Toolbox
  • Choosing the right Anchore tool for your requirements

To learn more about how Anchore Toolbox and Anchore Enterprise can fit into your DevSecOps journey, please download our free guide.

Configuring Anchore Enterprise on AWS Elastic Kubernetes Service (EKS)

In previous posts, we’ve demonstrated how to create a Kubernetes cluster on AWS Elastic Kubernetes Service (EKS) and how to deploy Anchore Enterprise in your EKS cluster. The focus of this post is to demonstrate how to configure a more production-like deployment of Anchore with integrations such as SSL support, RDS database backend and S3 archival.

Prerequisites: a running EKS cluster with Anchore Enterprise deployed via the Anchore Helm chart, as covered in the posts mentioned above.

Configuring the Ingress/Application Load Balancer

Anchore’s Helm Chart provides a deployment template for configuring an ingress resource for your Kubernetes deployment. EKS supports the use of an AWS Elastic Load Balancing Application Load Balancer (ALB) ingress controller, an NGINX ingress controller or a combination of both.

For the purposes of this demonstration, we will focus on deploying the ALB ingress controller using the Helm chart.

To enable ingress deployment in your EKS cluster, simply add the following ingress configuration to your anchore_values.yaml:

Note: If you haven’t already, make sure to create the necessary RBAC roles, role bindings and service deployment required by the AWS ALB Ingress controller. See ALB Ingress Controller for more details.

ingress:
  enabled: true
  labels: {}
  apiPath: /v1/*
  uiPath: /*
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing

Specify Custom Security Groups/Subnets

By default, the ingress controller will deploy a public-facing application load balancer and create a new security group allowing access to your deployment from anywhere over the internet. To prevent this, we can update the ingress annotations to include additional information such as a custom security group resource. This will enable you to use an existing security group within the cluster VPC with your defined set of rules to access the attached resources.

To specify a security group, simply add the following to your ingress annotations and update the value with your custom security group id:

alb.ingress.kubernetes.io/security-groups: "sg-012345abcdef"

We can also specify the subnets we want the load balancer to be associated with upon deployment. This may be useful if we want to attach our load balancer to the cluster’s public subnets and have it route traffic to nodes attached to the cluster’s private subnets.

To manually specify which subnets the load balancer should be associated with upon deployment, update your annotations with the following value:

alb.ingress.kubernetes.io/subnets: "subnet-1234567890abcde, subnet-0987654321edcba"

To test the configuration, apply the Helm chart:

helm install <deployment_name> anchore/anchore-engine -f anchore_values.yaml

Next, describe your ingress controller configuration by running kubectl describe ingress

You should see the DNS name of your load balancer next to the address field and under the ingress rules, a list of annotations including the specified security groups and subnets.

Note: If the load balancer did not deploy successfully, review the following AWS documentation to ensure the ingress controller is properly configured.

Configure SSL/TLS for the Ingress

You can also configure an HTTPS listener for your ingress to secure connections to your deployment.

First, create an SSL certificate using AWS Certificate Manager and specify a domain name to associate with your certificate. Note the ARN of your new certificate and save it for the next step.

Next, update the ingress annotations in your anchore_values.yaml with the following parameter and provide the certificate ARN as the value.

alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm::"

Additionally, we can configure the Enterprise UI to listen on HTTPS or a different port by adding the following annotations to the ingress with the desired port configuration. See the following example:

alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}, {"HTTP": 80}]'
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
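Putting these pieces together, the ingress section of anchore_values.yaml might look like the following sketch; the security group, subnet IDs, and certificate ARN are placeholders to replace with your own values:

ingress:
  enabled: true
  apiPath: /v1/*
  uiPath: /*
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/security-groups: "sg-012345abcdef"
    alb.ingress.kubernetes.io/subnets: "subnet-1234567890abcde, subnet-0987654321edcba"
    alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:us-west-2:123456789012:certificate/<certificate-id>"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}, {"HTTP": 80}]'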

Next, install the chart if this is a new deployment:

helm install <deployment_name> anchore/anchore-engine -f anchore_values.yaml

Or upgrade your existing deployment:

helm upgrade <deployment_name> anchore/anchore-engine -f anchore_values.yaml

To confirm the updates were applied, run kubectl describe ingress and verify your certificate ARN, as well as the updated port configurations, appear in your annotations.

Analysis Archive Storage Using AWS S3

AWS’s S3 Object Storage allows users to store and retrieve data from anywhere in the world. It can be particularly useful as an archive system. For more information on S3, please see the documentation from Amazon.

Both Anchore Engine and Anchore Enterprise can be configured to use S3 as an archiving solution, and some form of archiving is highly recommended for a production-ready environment. In order to set this up on your EKS cluster, you must first ensure that your use case is in line with Anchore’s archiving rules. Anchore stores image analysis results in two locations. The first is the working set, where an image is stored initially after its analysis completes; images in the working set are available for queries and policy evaluation. The second is the archive set; analysis data stored there is not directly available for policy evaluation or queries, but it is less resource-intensive to keep and can always be loaded back into the working set when needed. More information about Anchore and archiving can be found here.
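Images can be moved between the two sets on demand. A hedged sketch using the anchore-cli analysis-archive subcommands (verify the exact commands against your Anchore version; the digest is a placeholder):

# Move an analyzed image from the working set into the archive set
anchore-cli analysis-archive images add sha256:0123456789abcdef...

# Restore it to the working set later for policy evaluation or queries
anchore-cli analysis-archive images restore sha256:0123456789abcdef...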

To enable S3 archival, copy the following to the catalog section of your anchore_values.yaml:

anchoreCatalog:
  replicaCount: 1

  archive:
    compression:
      enabled: true
      min_size_kbytes: 100
    storage_driver:
      name: s3
      config:
        bucket: ""

        # A prefix for keys in the bucket if desired (optional)
        prefix: ""
        # Create the bucket if it doesn't already exist
        create_bucket: false
        # AWS region to connect to if 'url' is not specified; if both are set, 'url' takes precedence
        region: us-west-2

By default, Anchore will attempt to access an existing bucket specified under the config > bucket value. If you already have an S3 bucket, put its name in the bucket parameter and leave create_bucket set to false. If you do not have a bucket created, set create_bucket to true and one will be created for you. You also need to specify the AWS region where the bucket resides (typically the same region as your EKS cluster) with the region parameter.

Note: Whether you specify an existing bucket resource or set create_bucket to true, the cluster nodes require permissions to perform the necessary API calls to the S3 service. There are two ways to configure authentication:

Specify AWS Access and Secret Keys

To specify access and secret keys for an IAM identity with permissions to your bucket resource, update the storage driver configuration in your anchore_values.yaml with the following parameters and the appropriate values:

# For Auth can provide access/secret keys or use 'iamauto' which will use an instance profile or any credentials found in normal aws search paths/metadata service
        access_key: XXXX
        secret_key: YYYY

Use Permissions Attached to the Node Instance Profile

The second method for configuring access to the bucket is to leverage the instance profile of your cluster nodes. This eliminates the need to manage separate access and secret keys, since the catalog service uses the IAM role attached to the underlying instance. To configure this, update the storage driver configuration in your anchore_values.yaml with the following and ensure iamauto is set to true:

# For Auth can provide access/secret keys or use 'iamauto' which will use an instance profile or any credentials found in normal aws search paths/metadata service
        iamauto: true

You must also ensure that the role associated with your cluster nodes has GetObject, PutObject and DeleteObject permissions to your S3 bucket (see a sample policy below).

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],      "Resource": ["arn:aws:s3:::test"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::test/*"]
    }
  ]
}

Once all of these steps are completed, deploy the Helm chart by running:

helm install <deployment_name> anchore/anchore-engine -f anchore_values.yaml

Or the following, if upgrading an existing deployment:

helm upgrade <deployment_name> anchore/anchore-engine -f anchore_values.yaml

Note: If your cluster nodes reside in private subnets, they must have outbound connectivity in order to access your S3 bucket.

For cluster deployments where nodes are hosted in private subnets, a NAT gateway can be used to route traffic from your cluster nodes outbound through the public subnets. More information about creating and configuring NAT gateways can be found here.

Another option is to configure a VPC gateway allowing your nodes to access the S3 service without having to route traffic over the internet. More information regarding VPC endpoints and VPC gateways can be found here.
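As a sketch of that second option, an S3 gateway endpoint can be added to the cluster VPC with the AWS CLI; the VPC ID, route table ID, and region below are placeholders:

# Add an S3 gateway endpoint to the cluster VPC so nodes in private subnets
# can reach S3 without routing traffic over the internet
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-west-2.s3 \
  --route-table-ids rtb-0123456789abcdef0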

Using Amazon RDS as an External Database

By default, Anchore will deploy a database service within the cluster for persistent storage using a standard PostgreSQL Helm chart. For production deployments, it is recommended to use an external database service that provides more resiliency and supports features such as automated backups. For EKS deployments, we can offload Anchore’s database tier to PostgreSQL on Amazon RDS.

Note: Your RDS instance must be accessible to the nodes in your cluster in order for Anchore to access the database. To enable connectivity, the RDS instance should be deployed in the same VPC/subnets as your cluster and at least one of the security groups attached to your cluster nodes must allow connections to the database instance. For more information, read about configuring access to a database instance in a VPC.

To configure the use of an external database, update your anchore_values.yaml with the following section and ensure enabled is set to “false”.

postgresql:
  enabled: false

Under the postgres section, add the following parameters and update them with the appropriate values from your RDS instance.

  postgresUser: 
  postgresPassword: 
  postgresDatabase: 
  externalEndpoint: 

With the section configured, your database values should now look something like this:

postgresql:
  enabled: false
  postgresUser: anchoreengine
  postgresPassword: anchore-postgres,123
  postgresDatabase: postgres
  externalEndpoint: abcdef12345.jihgfedcba.us-east-1.rds.amazonaws.com

To bring up your deployment run:

helm install <deployment_name> anchore/anchore-engine -f anchore_values.yaml

Finally, run kubectl get pods to confirm the services are healthy and the local postgresql pod isn’t deployed in your cluster.

Note: The above steps can also be applied to deploy the feeds postgresql database on Amazon RDS by updating the anchore-feeds-db section instead of the postgresql section of the chart.

Encrypting Database Connections Using SSL Certificates with Amazon RDS

Encrypting RDS connections is a best practice to ensure the security and integrity of your Anchore deployment that uses external database connections.

Enabling SSL on RDS

AWS provides the necessary certificates to enable SSL with your RDS deployment. Download rds-ca-2019-root.pem from here. In order to require SSL connections on an RDS PostgreSQL instance, the rds.force_ssl parameter needs to be set to 1 (on). Setting this to 1 also causes the instance to set the ssl parameter to 1 (on) and to modify the database’s pg_hba.conf file to support SSL. See more information about RDS PostgreSQL SSL configuration.
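As a sketch of how this could be done with the AWS CLI (the parameter group name, family, and instance identifier are placeholders):

# Create a parameter group that forces SSL, then attach it to the RDS instance
aws rds create-db-parameter-group \
  --db-parameter-group-name anchore-postgres-ssl \
  --db-parameter-group-family postgres12 \
  --description "Require SSL connections to the Anchore database"

aws rds modify-db-parameter-group \
  --db-parameter-group-name anchore-postgres-ssl \
  --parameters "ParameterName=rds.force_ssl,ParameterValue=1,ApplyMethod=pending-reboot"

aws rds modify-db-instance \
  --db-instance-identifier anchore-db \
  --db-parameter-group-name anchore-postgres-ssl \
  --apply-immediately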

Configuring Anchore to take advantage of SSL is done through the Helm chart. Under the anchoreGlobal section of the chart, set certStoreSecretName to the certificate file we downloaded from AWS in the previous section (see the example below).

anchoreGlobal:
   certStoreSecretName: rds-ca-2019-root.pem
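For the certificate to be available inside the Anchore pods, it is typically provided through a Kubernetes secret referenced by the chart. A hedged sketch (the secret name anchore-certs is illustrative; how the secret name and keys map to certStoreSecretName depends on your chart version, so check the chart’s documentation):

# Create a secret containing the RDS CA bundle in the namespace where Anchore is deployed
kubectl create secret generic anchore-certs --from-file=rds-ca-2019-root.pem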

Under the dbConfig section, set ssl to true and set sslRootCertName to the same value as certStoreSecretName. Make sure to update the postgresql and anchore-feeds-db sections to disable the local container deployment of these services and to specify the RDS database values (see the previous section on configuring RDS with Anchore for details). If you are running Enterprise, the dbConfig section under anchoreEnterpriseFeeds should also be updated to include the certificate name under sslRootCertName.

dbConfig:
    timeout: 120
    ssl: true
    sslMode: verify-full
    sslRootCertName: rds-ca-2019-root.pem
    connectionPoolSize: 30
    connectionPoolMaxOverflow: 100

Once these settings have been configured, run a Helm upgrade to apply the changes to your cluster.

Conclusion

The Anchore Helm chart provided on GitHub allows users to quickly get a deployment running on their cluster, but it is not necessarily a production-ready environment. The sections above showed how to configure the ingress/application load balancer, enable HTTPS, archive image analysis data to an AWS S3 bucket, and set up an external RDS instance with SSL-encrypted connections. All of these steps will help ensure that your Anchore deployment is production-ready and prepared for anything you throw at it.

 

Enforcing the DoD Container Image and Deployment Guide with Anchore Federal

The latest version of the DoD Container Image and Deployment Guide details technical and security requirements for container image creation and deployment within a DoD production environment. Sections 2 and 3 of the guide include security practices that teams must follow to limit the footprint of security flaws during the container image build process. These sections also discuss best security practices and correlate them to the corresponding security control family with Risk Management Framework (RMF) commonly used by cybersecurity teams across DoD.

Anchore Federal is a container scanning solution used to validate DoD compliance and security standards, such as continuous authorization to operate (cATO), across images, as explained in the DoD Container Hardening Process Guide. Anchore’s policy-first approach places policy where it belongs: at the forefront of the development lifecycle, assessing compliance and security issues in a shift-left approach. Scanning policies within Anchore are fully customizable based on specific mission needs, providing more in-depth insight into compliance irregularities that may exist within a container image. This level of granularity is achieved through specific security gates and triggers that generate automated alerts, allowing teams to enforce the best practices discussed in Section 2 of the Container Image and Deployment Guide as their developers build.

Anchore Federal uses a specific DoD scanning policy that enforces a wide array of gates and triggers aligned with the security practices in the DoD Container Image and Deployment Guide. For example, you can configure the Dockerfile gate and its corresponding triggers to monitor for security issues such as privileged access. You can also configure the Dockerfile gate to check for exposed unauthorized ports, validate that images are built from approved base images, and check for unauthorized disclosure of secrets and sensitive files, among others.

Anchore Federal’s DoD scanning policy is already enabled to validate the detailed list of best practices in Section 2 of the Container Image and Deployment Guide.

Looking to learn more about how to achieve container hardening at DoD levels of security? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.

Next Steps

Anchore Federal is a battle-tested solution that has been deployed to secure DoD’s most critical workloads. Anchore Federal exists to provide cleared professional services and software to DoD mission partners and the US Intelligence Community in building their DevSecOps environments. Learn more about how Anchore Federal supports DoD missions.

Anchore Federal Now Part of the DoD Container Hardening Process

The latest version of the Department of Defense (DoD) Container Hardening Process Guide includes Anchore Federal as an approved container scanning tool. This hardening process is critical because it allows for a measurement of risk that an Authorizing Official (AO) assesses while rendering their decision to authorize the container. DoD programs can use this guide as a source of truth to know they are following DISA container security best practices.

Currently, the DoD is in the early stages of container adoption and security. As containers become more integral to secure software applications, the focus shifts to making sure DoD systems are built using DoD-compliant container images and to mitigating the risks associated with using container images. For example, the United States Air Force Platform One initiative includes Iron Bank, a repository of DoD-compliant container images available for reuse across authorized DoD program offices and weapon systems.

Here are some more details about how Anchore factors into the DoD Container Hardening Process:

Container Scanning Guidelines

The DISA container hardening SRG relies heavily on best practices already utilized at Platform One. Anchore Federal services work alongside the US Air Force at Platform One to build, harden, and scan container images from vendors in Repo1 as the Platform One team adds secure images to Iron Bank. Automation of container scanning of each build within a DevSecOps pipeline is the primary benefit of the advised approach discussed in Section 2.3 of the SRG. Anchore encourages our customers to read the Scanning Process section of the DoD Container Hardening Process Guide to learn more about the container scanning process.

Serving as a mandatory check as part of a container scanning process is an ideal use case for Anchore Federal in the DoD and public sector agencies. Our application programming interface (API) makes it very easy to integrate with DevSecOps environments and validate your builds for security and DoD compliance by automating Anchore scanning inside your pipeline.
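As one illustration, a minimal pipeline gate using the anchore-cli client might look like the following sketch; the endpoint, credentials, and image name are placeholders:

# Point the CLI at your Anchore deployment (URL and credentials are placeholders)
export ANCHORE_CLI_URL=https://anchore.example.com/v1
export ANCHORE_CLI_USER=admin
export ANCHORE_CLI_PASS=example-password

anchore-cli image add registry.example.com/app:latest       # submit the image for analysis
anchore-cli image wait registry.example.com/app:latest      # block until analysis completes
anchore-cli evaluate check registry.example.com/app:latest  # exits non-zero on policy failure, failing the build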

Anchore scanning against the DoD compliance standards involves assessing the image by checking for Common Vulnerabilities and Exposures (CVEs), embedded malware, and other security requirements found in Appendix B: DoD hardened Containers Cybersecurity Requirements. 

An Anchore scan report containing the output is fed back to the developer and forwarded to the project’s security stakeholders to enable a Continuous Authority to Operate (c-ATO) workflow, which satisfies the requirements for the Findings Mitigation Reporting step of the process recommended by the Container Hardening Guide. The report output also serves as a source of truth for approvers accepting the risks associated with each image.

Scanning Reports & Image Approval 

After personnel review the Anchore compliance reports and complete the mitigation reporting, they report these findings to the DevSecOps approver, who determines whether the results warrant approving the container based on the level of risk presented within each image. Upon approval, the images move to the approved registry in Iron Bank, accessible to developers across DoD programs.

Next Step

Anchore Federal is a battle-tested solution that has been deployed to secure DoD’s most critical workloads. Anchore Federal exists to provide cleared professional services and software to DoD mission partners and the US Intelligence Community in building their DevSecOps environments. Learn more about how Anchore Federal supports DoD missions.

AI and the Future of DevSecOps

Many companies have been investing heavily in Artificial Intelligence (AI) over the past few years. It has enabled cars to drive themselves, doctors to pick up on various diseases earlier, and even create works of art. Such a powerful technology can impact nearly every aspect of human life. We want to explore what that looks like in the realm of application security and DevSecOps.

Addressing DevSecOps Challenges With AI

Maintaining compliance is crucial for any organization. Health care providers have to stay within the Health Insurance Portability and Accountability Act (HIPAA) requirements, financial companies have similar obligations, and other companies have their own requirements for protecting user data. These regulations also change frequently: HIPAA, for example, has had hundreds of minor updates and six major updates since its creation in 1996. Often these requirements arrive faster than humans can keep up with. AI can help make sure these requirements aren’t missed and are implemented properly in any delivered code.

Additionally, AI is turning application security from a “sometimes” thing into an “always” thing for many companies, transforming testing from a laborious manual process into something that can run in a pipeline.

AI functions loosely like a human brain: with neural networks and backpropagation, it mimics how the brain changes to adapt to new situations. In this way, it can be leveraged to adjust to changes in code and infrastructure automatically.

 

The Future of “DevSecAIOps”

Another critical aspect of DevSecOps that is sometimes difficult to maintain is the speed of code delivery. Securing pipelines will always add time due to added complexity and the need for human interaction within the pipeline. An example of this is a developer needing to change code to remove specific vulnerabilities found during a security scan. This is an aspect of DevSecOps that can benefit from the introduction of artificial intelligence. AI can change its own code through neural networks and backpropagation, so, logically, it could be used to make these changes to vulnerable code and get that code through the pipeline rapidly.

Additionally, AI can bring the expertise of the few cybersecurity experts to many companies and organizations. Though artificial intelligence can accomplish tasks that humans usually do, training models to function to a human standard is a data- and labor-intensive process. Once they reach that level, however, they can be used by many people and, in the case of DevSecOps, can assist companies that cannot have DevSecOps engineers working on their pipelines.

Conclusion

The usefulness of artificial intelligence far outweighs the buzz of it in society. It has allowed many companies to iterate their technologies at speeds that simply weren’t possible before. With these rapid advancements, however, the importance of maintaining that same cadence in the realms of application security and DevSecOps cannot be overstated. By taking advantage of AI like other technologies are, DevSecOps can make sure that these rapidly developed technologies are powered by secure and stable code when they reach the user.

Understanding your Software Supply Chain Risk

Many organizations have seen increased value from in-house software development by adopting open source technology and containers to quickly build and package software for the cloud. Usually branded as Digital Transformation, this shift comes with trade-offs not often highlighted by the vendors and boutique consulting firms selling the solutions. The reality is that moving fast can break things, and without proper constraints you can expose your organization to significant security, legal, and reputational risks.

These are not entirely new revelations. Security experts have long known that supply chains are an incredibly valuable attack surface for hackers. Software supply chain attacks have been used to exfiltrate credit card data, conduct (alleged) nation-state surveillance, and cash out ATMs. The widespread adoption of open source projects and the use of containers and registries have given hackers new opportunities for harm.

Supply Chain Exposure Goes Beyond Security

These risks are not limited to criminal hacking; fragility in your supply chain comes in many forms. One type of risk comes from single contributors who could object morally to the use of their software, as happened when one developer decided he didn’t like Trump’s support of ICE and pulled his package from NPM. Or, unbeknownst to your legal team, you could be distributing software without a proper license, as is the case with any container that uses Alpine Linux as the base image.

Fortunately, these risks are not unknowable. A number of open source tools exist for scanning for CVEs, and recent projects are helping to standardize the Software Bill of Materials, making it easier to check your containers for license and security risks. Knowing is of course only half the battle – securing your supply chain is the end goal. This is where the unique capabilities of Anchore Enterprise can be applied: creating, managing, and enforcing policy lets you enforce the constraints that matter most to your organization while still allowing teams to move quickly by building on top of open source and container tooling.

Smart Contracts for your Supply Chain

Most sizable organizations have already established best practices around their software supply chain. Network security, tool mandates, and release practices all help to decrease your organization’s risk – but they all are fallible. Where humans are involved, they are sure to choose convenience over security, especially when urgency is involved.

This is the idea behind the Open Policy Agent (OPA) Kubernetes project which can prevent certain containers images from being scheduled, and even integrate with service mesh to route network traffic away from suspicious containers.

At Anchore, we believe that catching security issues at runtime is costly and focus on controlling your path to production through an independent policy engine. By defining policy, and leveraging our toolbox in your pipelines you can enforce the appropriate policy for your organization, team, and environment.

This powerful capability gives you the ability to allow development teams to use tools that are convenient to them during the creative process but enforce a more strict packaging process. For example, you might want to ensure that all production containers are pulled from a privately managed registry. This gives you greater control and less exposure, but how can you enforce this? Below is an example policy rule you can apply using Anchore Enterprise to prevent container images from being pulled from Docker Hub.

"denylisted_images": [

   {
     "id": "9b6e8f3b-3f59-44cb-83c7-378b9ba750f7",
     "image": {
       "type": "tag",
       "value": "*"
     },

     "name": "Deny use of Dockerhub Images",
     "registry": "dockerhub.io",
     "repository": "*"
   }
 ],

By adding this to a policy, you can warn teams when they are pulling a publicly accessible image and make your central IT team aware of the violation. This simple contract serves as a building block for developing “compliance-as-code” within your organization. This is just one example, of course; you could also search for secrets, personally identifiable information (PII), or any variety of combinations.
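For instance, a rule using Anchore’s secret scanning gate might look like the sketch below; the gate and trigger names follow the secret_scans gate documented for Anchore policies, but verify them against the policy reference for your version, and the id is an arbitrary placeholder:

{
  "id": "c1e7a2f0-0000-0000-0000-000000000001",
  "gate": "secret_scans",
  "trigger": "content_regex_checks",
  "action": "STOP",
  "params": [
    { "name": "content_regex_name", "value": "AWS_ACCESS_KEY" }
  ]
}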

Supply Chain Driven Design

For CIOs and CSOs, focusing on the role of compliance when designing your software supply chain is crucial not only for managing risk, but also for improving the efficiency and productivity of your organization. Technology leaders that do this quickly will maintain distinct agility when a crisis hits and stand out from their peers in the industry by innovating faster and more consistently. Anchore Enterprise gives you the building blocks to design your supply chain based on the trade-offs that make the most sense for your organization.

More Links & References

How one programmer broke the internet

NPM Typo Squatting attack

How a supply chain attack lead to millions of stolen credit cards

Kubecon Supply Chain Talk

DevSecOps and the Next Generation of Digital Transformation

COVID-19 is accelerating the digital transformation of commercial and public sector enterprises around the world. However, digital transformation brings along new digital assets (such as applications, websites, and databases), increasing an enterprise’s attack surface. To prevent costly breaches, protect reputation, and maintain customer relationships, enterprises undergoing digital transformation have begun implementing a built-in and bottom-up security approach: DevSecOps.

Ways Enterprises Can Start Implementing DevSecOps

DevSecOps requires sharing the responsibility of security across development and operations teams. It involves empowering development, DevOps, and IT personnel with security information and tools to identify and eliminate threats as early as possible. Here are a few ways enterprises that are undergoing digital transformation can start implementing DevSecOps:

    • Analyze Front End Code. Cybercriminals love to target front end code due to its high number of reported vulnerabilities and security issues. Use CI/CD pipelines to detect security flaws early and share that information with developers so they can fix the issue. It’s also a good idea to make sure that attackers haven’t injected any malicious code – containers can be a great way to ensure immutability.
    • Sanitize Sensitive Data. Today, several open source tools can detect personally identifiable information (PII), secrets, access keys, etc. Running a simple check for sensitive data can be exponentially beneficial – a leaked credential in a GitHub repository could mean game over for your data and infrastructure.
    • Utilize IDE Extensions. Developers use integrated development environments and text editors to create and modify code. Why not take advantage of open source extensions that can scan local directories and containers for vulnerabilities? You can’t detect security issues much earlier in the SDLC than that!
    • Integrate Security into CI/CD. There are many open source Continuous Integration/Continuous Delivery tools available, such as Jenkins, GitLab CI, and Argo. Enterprises should integrate one or more security solutions into their current and future CI/CD pipelines. A good solution includes alerts and events that allow developers to resolve security issues before anything is pushed into production (see the sketch after this list).
    • Go Cloud Native. As mentioned earlier, containers can be a great way to ensure immutability. Paired with a powerful orchestration tool, such as Kubernetes, containers can completely transform the way we run distributed applications. There are many great benefits to “going cloud-native,” and several ways enterprises can protect their data and infrastructure by securing their cloud-native applications.
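As one illustration of the CI/CD point above, a minimal GitLab CI job that scans the project with Grype might look like the following sketch; the job name, base image, and failure threshold are arbitrary choices, and the install URL follows the Grype README:

grype-scan:
  stage: test
  image: alpine:latest
  script:
    # Install Grype, then scan the repository and fail the pipeline on high or critical findings
    - apk add --no-cache curl
    - curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
    - grype dir:. --fail-on high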

Successful Digital Transformation with DevSecOps

From government agencies to fast food chains, DevSecOps has enabled enterprises to quickly and securely transform their services and assets, even during a pandemic. For example, the US Department of Defense Enterprise DevSecOps Services Team has changed the average amount of time it takes for software to become approved for military use to days instead of years. For the first time ever, that same team managed to update the software on a spy plane that was in-flight!

On the commercial side of things, we’ve seen the pandemic force many businesses and enterprises to adopt new ways of doing things, especially in the food industry. For example, with restaurant seating shut down, Chick-fil-A had to rely heavily on its drive-thru, curbside, and delivery services. Where do those services begin? Software applications! Chick-fil-A obviously uses GitOps, Kubernetes, and AWS and controls large amounts of sensitive data for all of its customers, making it critical that Chick-fil-A implements DevSecOps instead of just DevOps. Imagine if your favorite fast food chain was hacked and your data was stolen – that would be extremely detrimental to business. With the suspiciously personalized ads that I receive on the Chick-fil-A app, there’s also reason to believe that Chick-fil-A has implemented DevSecMLOps, but that’s a topic for another discussion.

A Beginner’s Guide to Anchore Enterprise

[Updated post as of October 22, 2020]

While many Anchore Enterprise users are familiar with our open source Anchore Engine tool and have a good understanding of the way Anchore works, getting started with the additional features provided by the full product may at first seem overwhelming.

In this blog, we will walk through some of the major capabilities of Anchore Enterprise in order to help you get the most value from our product. From basic user interface (UI) usage to enabling third-party notifications, the following sections describe some common things to first explore when adopting Anchore Enterprise.

The Enterprise User Interface

Perhaps the most notable feature of Anchore Enterprise is the addition of a UI to help you navigate various features of Anchore, such as adding images and repositories, configuring policy bundles and whitelists, and scheduling or viewing reports.

The UI helps simplify the usability of Anchore by allowing you to perform normal Anchore actions without requiring a strong understanding of command-line tooling. This means that instead of editing a policy bundle as a JSON file, you can instead use a simple-to-use GUI to directly add or edit policy bundles, rule definitions, and other policy-based features.

Check out our documentation for more information on getting started with the Anchore Enterprise UI.

Advanced Vulnerability Feeds

With the move to Anchore Enterprise, you have the ability to include third-party entitlements that grant access to enhanced vulnerability feed data from Risk Based Security’s VulnDB. You can also analyze Windows-based containers using vulnerability data provided by Microsoft Security Research Center (MSRC).

Additionally, feed sync statuses can be viewed directly in the UI’s System Dashboard, giving you insight into the status of the data feeds along with the health of the underlying Anchore services. You can read more about enabling and configuring Anchore to use a localized feed service.

Note: Enabling the on-premise (localized) feeds service is required to enable VulnDB and Windows feeds, as these feed providers are not included in the data provided by our feed service.

Enterprise Authentication

In addition to Role-Based Access Controls (RBAC) to enhance user and account management, Anchore Enterprise includes the ability to configure an external authentication provider using LDAP, or OAuth / SAML.

Single Sign-On can be configured via OAuth / SAML support, allowing you to configure Anchore Enterprise to use an external Identity Provider such as Keycloak, Okta, or Google-SSO (among others) in order to fit into your greater organizational identity management workflow.

You can use the system dashboard provided by the UI to configure these features, making integration straightforward and easy to view.

Take a look at our RBAC, LDAP, or our SSO documentation for more information on authentication/authorization options in Anchore Enterprise.

Third-Party Notifications

By using our Notifications service, you can configure your Anchore Enterprise deployment to send alerts to external endpoints (Email, GitHub, Slack, and more) about system events such as policy evaluation results, vulnerability updates, and system errors.

Notification endpoints can be configured and managed through the UI, along with the specific events that fit your organizational needs. The currently supported endpoints are:

  • Email—Send notifications to a specific SMTP mail service
  • GitHub—Version control for software development using Git
  • JIRA—Issue tracking and agile product management software by Atlassian
  • Slack—Team collaboration software tools and online services by Slack Technologies
  • Teams—Team collaboration software tools and online services by Microsoft
  • Webhook—Send notifications to a specific API endpoint

For more information on managing notifications in Anchore Enterprise, take a look at our documentation on notifications.

Conclusion

In this blog, we provided a high-level overview of several features to explore when first starting out with Anchore Enterprise. There are multiple other features that we didn’t touch on, so check out our product comparison page for a list of other features included in Anchore Enterprise vs. our open-source Engine offering.

Take a look at our FAQs for more information.

Our Top 5 Strategies for Modern Container Security

[Updated post as of October 15, 2020]

At Anchore, we’re fortunate to be part of the journey of many technology teams as they become cloud-native. We would like to share what we know.

Over the past several years, we’ve observed many teams perform microservice application modernization using containers as the basic building blocks. Using Kubernetes, they dynamically orchestrate these software units and optimize their resource utilization. Aside from the adoption of new technologies, we’ve seen cultural transformations as well.

For example, organizational silos are broken down to provide an environment for “shifting left,” with the shared goal of incorporating as much validation as possible before a software release. One area of transformation that is particularly fascinating to us is how cloud-native is modernizing both development and security practices, along with CI/CD and operations workflows.

Below, we discuss how foundational elements of modern container image security, combined with improved development practices, enhance software delivery overall. For the purposes of this blog, we’ll focus mainly on the image build and the surrounding process within the CI stages of the software development lifecycle.

Here is some high-level guidance all technology teams using containers can implement to increase their container image security posture.

  1. Use minimal base images: Use minimal base images only containing necessary software packages from trusted sources. This will reduce the attack surface of your images, meaning there is less to exploit, and it will make you more confident in your deployment artifacts. To address this, Red Hat introduced Universal Base Images designed for applications that contain their own dependencies. UBIs also undergo regular vulnerability checking and are continuously maintained. Other examples of minimal base images are Distroless images, maintained by Google, and Alpine Linux images.
  2. Go daemonless: Moving away from the Docker CLI and daemon client/server model to a “daemonless” fork/exec model provides advantages. Traditionally, with the Docker container platform, image build, registry, and container operations happen through what is known as the daemon. Not only does this create a single point of failure, but Docker operations are conducted by a user with full root authority. More recently, tools such as Podman, Buildah, and Skopeo (we use Skopeo inside of Anchore Engine) were created to address the challenges of building images, working with registries, and running containers (a short daemonless workflow sketch follows this list). For a bit more information on the security benefits of using Podman vs Docker, read this article by Dan Walsh.
  3. Require image signing: Require container images to be signed to verify their authenticity. By doing so you can verify that your images were pushed by the correct party. Image authenticity can be verified with tools such as Notary, and both Podman and Skopeo (discussed above) also provide image signing capabilities. Taking this a step further, you can require that CI tools, repositories, and all other steps in the CI pipeline cryptographically sign every image they process with a software supply chain security framework such as in-toto.
  4. Inspect deployment artifacts: Inspect container images for vulnerabilities, misconfigurations, credentials, secrets, and bespoke policy rule violations prior to being promoted to a production registry and certainly before deployment. Container analysis tools such as Anchore can perform deep inspection of container images, and provide codified policy enforcement checks which can be customized to fit a variety of compliance standards. Perhaps the largest benefit of adding security testing with gated policy checks earlier in the container lifecycle is that you will spend less time and money fixing issues post-deployment.
  5. Create and enforce policies: For each of the above, tools selected should have the ability to generate codified rules to enable a policy-driven build and release practice. Once chosen they can be integrated and enforced as checkpoints/quality control gates during the software development process in CI/CD pipelines.
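To make points 2 through 4 concrete, here is a hedged sketch of a daemonless build, scan, and sign flow using Podman, Grype, and Skopeo; the image name, registry, and signing identity are placeholders:

# Build the image without a Docker daemon
podman build -t registry.example.com/app:1.0 .

# Export and inspect the artifact before promoting it
podman save -o app.tar registry.example.com/app:1.0
grype docker-archive:app.tar --fail-on high

# Sign the image while copying it to the production registry
skopeo copy --sign-by security@example.com \
  containers-storage:registry.example.com/app:1.0 \
  docker://registry.example.com/app:1.0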

How Improved Development Practices Help

The above can be quite challenging to implement without modernizing development in parallel. One development practice we’ve seen change the way organizations are able to adopt supply chain security in a cloud-native world is GitOps. The declarative constructs of containers and Kubernetes configurations, coupled with infrastructure-as-code tools such as Terraform provide the elements for teams to fully embrace the GitOps methodology. Git now becomes the single source of truth for infrastructure and application configuration, along with policy-as-code documents. This practice allows for improved knowledge sharing, code reviews, and self-service, while at the same time providing a full audit trail to meet compliance requirements.

Final Thought

The key benefit of adopting modern development practices is the ability to deliver secure software faster and more reliably. By shifting as many checks as possible into an automated testing suite as part of CI/CD, issues are caught early, before they ever make their way into a production environment.

Here at Anchore, we’re always interested in finding out more about your cloud-native journey, and how we may be able to help you weave security into your modern workflow.