Enforcing the DoD Container Image and Deployment Guide with Anchore Federal

The latest version of the DoD Container Image and Deployment Guide details technical and security requirements for container image creation and deployment within a DoD production environment. Sections 2 and 3 of the guide include security practices that teams must follow to limit the footprint of security flaws during the container image build process. These sections also discuss security best practices and map them to the corresponding security control families within the Risk Management Framework (RMF) commonly used by cybersecurity teams across the DoD.

Anchore Federal is a container scanning solution used to validate compliance with DoD security standards, such as continuous authorization to operate (cATO), across images, as explained in the DoD Container Hardening Process Guide. Anchore's policy-first approach places policy where it belongs: at the forefront of the development lifecycle, assessing compliance and security issues in a shift-left approach. Scanning policies within Anchore are fully customizable based on specific mission needs, providing more in-depth insight into compliance irregularities that may exist within a container image. This level of granularity is achieved through specific security gates and triggers that generate automated alerts. As a result, teams can enforce the best practices discussed in Section 2 of the Container Image and Deployment Guide as developers build.

Anchore Federal uses a specific DoD scanning policy that enforces a wide array of gates and triggers aligned with the DoD Container Image and Deployment Guide's security practices. For example, you can configure the Dockerfile gate and its corresponding triggers to monitor for security issues such as privileged access. You can also configure the Dockerfile gate to detect unauthorized exposed ports, validate that images are built from approved base images, and check for unauthorized disclosure of secrets and sensitive files, among other checks.
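As an illustrative sketch only (not Anchore's actual gate implementation), the kinds of Dockerfile checks described above can be modeled as simple gate-and-trigger rules; the authorized ports and secret patterns here are assumptions:

```python
import re

# Illustrative, simplified sketch of Dockerfile gate/trigger checks; not
# Anchore's actual implementation. Ports and patterns are assumptions.
AUTHORIZED_PORTS = {80, 443, 8080}

def check_dockerfile(dockerfile_text):
    """Return (trigger, detail) findings for a Dockerfile's contents."""
    findings = []
    for raw in dockerfile_text.splitlines():
        line = raw.strip()
        upper = line.upper()
        if upper.startswith("USER ") and line.split()[-1].lower() == "root":
            findings.append(("effective_user", "container runs as root"))
        if upper.startswith("EXPOSE"):
            for port in line.split()[1:]:
                if int(port.split("/")[0]) not in AUTHORIZED_PORTS:
                    findings.append(("exposed_port", "unauthorized port " + port))
        if re.search(r"SECRET|PRIVATE_KEY|PASSWORD", upper):
            findings.append(("secret_scan", "possible secret in: " + line))
    return findings

report = check_dockerfile("""\
FROM registry1.example/approved/base:1.0
EXPOSE 8080 2222
USER root
ENV DB_PASSWORD=changeme
""")
for trigger, detail in report:
    print(trigger + ": " + detail)
```

A real policy engine evaluates many more gates, but the pattern is the same: each trigger produces a finding that can warn or stop a build.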

Anchore Federal's DoD scanning policy is already enabled to validate the detailed list of best practices in Section 2 of the Container Image and Deployment Guide.

Looking to learn more about how to achieve container hardening at DoD levels of security? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories.

Next Steps

Anchore Federal is a battle-tested solution that has been deployed to secure DoD’s most critical workloads. Anchore Federal exists to provide cleared professional services and software to DoD mission partners and the US Intelligence Community in building their DevSecOps environments. Learn more about how Anchore Federal supports DoD missions.

Anchore Federal Now Part of the DoD Container Hardening Process

The latest version of the Department of Defense (DoD) Container Hardening Process Guide includes Anchore Federal as an approved container scanning tool. This hardening process is critical because it allows for a measurement of risk that an Authorizing Official (AO) assesses while rendering their decision to authorize the container. DoD programs can use this guide as a source of truth to know they are following DISA container security best practices.

Currently, the DoD is in the early stages of container adoption and security. As containers become more integral to secure software applications, the focus shifts to making sure DoD systems are built using DoD-compliant container images and to mitigating the risks associated with using container images. For example, the United States Air Force Platform One initiative includes Iron Bank, a repository of DoD-compliant container images available for reuse across authorized DoD program offices and weapon systems.

Here are some more details about how Anchore factors into the DoD Container Hardening Process:

Container Scanning Guidelines

The DISA container hardening SRG relies heavily on best practices already in use at Platform One. Anchore Federal services work alongside the US Air Force at Platform One to build, harden, and scan container images from vendors in Repo1 as the Platform One team adds secure images to Iron Bank. Automated scanning of each container build within a DevSecOps pipeline is the primary benefit of the approach advised in Section 2.3 of the SRG. Anchore encourages our customers to read the Scanning Process section of the DoD Container Hardening Process Guide to learn more about the container scanning process.

Serving as a mandatory check as part of a container scanning process is an ideal use case for Anchore Federal in the DoD and public sector agencies. Our application programming interface (API) makes it very easy to integrate with DevSecOps environments and validate your builds for security and DoD compliance by automating Anchore scanning inside your pipeline.
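As a sketch of that mandatory-check integration, a CI gating step might look like the following. The endpoint path and response fields are illustrative assumptions, not the documented Anchore Enterprise API; consult the API reference for the real schema:

```python
# Hypothetical sketch of gating a CI pipeline on a policy evaluation result.
# The URL path, query parameter, and response shape are assumptions made for
# illustration; they are not the documented Anchore Enterprise API.
import json
import urllib.request

def fetch_evaluation(base_url, image_digest, tag, token):
    """Fetch a policy evaluation from a (hypothetical) scanning endpoint."""
    url = f"{base_url}/images/{image_digest}/check?tag={tag}"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def gate_build(evaluation):
    """Return the pipeline exit code: 0 only if the policy evaluation passed."""
    status = evaluation.get("status", "fail")
    return 0 if status == "pass" else 1

# In CI you would call fetch_evaluation(...); here we gate on a stubbed result.
exit_code = gate_build({"status": "fail", "detail": "denylisted image"})
print("pipeline exit code:", exit_code)
```

A nonzero exit code fails the pipeline stage, which is what makes the scan a mandatory check rather than an advisory one.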

Anchore scanning against the DoD compliance standards involves assessing the image by checking for Common Vulnerabilities and Exposures (CVEs), embedded malware, and other security requirements found in Appendix B: DoD Hardened Containers Cybersecurity Requirements.

An Anchore scan report containing the output is fed back to the developer and forwarded to the project's security stakeholders to enable a continuous authorization to operate (cATO) workflow, which satisfies the requirements of the Findings Mitigation Reporting step of the process recommended by the Container Hardening Guide. The report output also serves as a source of truth for approvers accepting the risks associated with each image.

Scanning Reports & Image Approval 

After personnel review the Anchore compliance reports and complete the mitigation reporting, they report these findings to the DevSecOps approver, who determines whether the results warrant approving the container based on the level of risk presented within each image. Upon approval, the images move to the approved registry in Iron Bank, accessible to developers across DoD programs.

AI and the Future of DevSecOps

Many companies have been investing heavily in Artificial Intelligence (AI) over the past few years. It has enabled cars to drive themselves, helped doctors detect various diseases earlier, and even created works of art. Such a powerful technology can impact nearly every aspect of human life. We want to explore what that looks like in the realm of application security and DevSecOps.

Addressing DevSecOps Challenges With AI

Maintaining compliance is crucial for any organization. Health care providers must remain within the requirements of the Health Insurance Portability and Accountability Act (HIPAA). Financial companies have similar requirements, and other companies have their own obligations around protecting user data. These regulations also change frequently. For example, HIPAA has had hundreds of minor updates and six major updates since its creation in 1996. Often, these requirements arrive faster than humans can keep up with. AI can help ensure that these requirements aren't missed and are implemented properly in any delivered code.

Additionally, AI is turning application security from a "sometimes" thing into an "always" thing for many companies. It speeds up testing from a laborious manual process into something that can be run in a pipeline.

AI loosely mimics the human brain: with neural networks and backpropagation, it adapts to new situations much as the brain does. In this way, it can be leveraged to adjust to changes in code and infrastructure automatically.

The Future of “DevSecAIOps”

Another critical aspect of DevSecOps that is sometimes difficult to maintain is the speed of code delivery. Securing pipelines will always add time due to added complexity and the need for human interaction within the pipeline; an example is a developer needing to change code to remove specific vulnerabilities found during a security scan. This is an aspect of DevSecOps that can benefit from the introduction of artificial intelligence. Because AI can adjust its own behavior through neural networks and backpropagation, it could, logically, be used to make these changes to vulnerable code and move that code through the pipeline rapidly.

Additionally, AI can bring the expertise of the few cybersecurity experts to many companies and organizations. Though artificial intelligence can accomplish tasks that humans usually do, training models to function at a human standard is a data- and labor-intensive process. But once they function at that level, they can be used by many people and, in the case of DevSecOps, can assist companies that cannot staff DevSecOps engineers to work on their pipelines.

Conclusion

The usefulness of artificial intelligence far exceeds the buzz around it. It has allowed many companies to iterate on their technologies at speeds that simply weren't possible before. With these rapid advancements, however, the importance of maintaining that same cadence in application security and DevSecOps cannot be overstated. By taking advantage of AI as other technologies have, DevSecOps can ensure that these rapidly developed technologies are powered by secure and stable code when they reach the user.

Understanding your Software Supply Chain Risk

Many organizations have seen increased value from in-house software development by adopting open source technology and containers to quickly build and package software for the cloud. Usually branded as digital transformation, this shift comes with trade-offs not often highlighted by the vendors and boutique consulting firms selling the solutions. The reality is that moving fast can break things, and without proper constraints you can expose your organization to significant security, legal, and reputational risks.

These are not entirely new revelations. Security experts have long known that supply chains are an incredibly valuable attack surface to hackers. Software supply chain attacks have been used to exfiltrate credit card data, to conduct (alleged) nation-state surveillance, and to cash out ATMs. The widespread adoption of open source projects and the use of containers and registries have given hackers new opportunities for harm.

Supply Chain Exposure Goes Beyond Security

These risks are not limited to criminal hacking; fragility in your supply chain comes in many forms. One type of risk comes from single contributors who could object morally to the use of their software, as happened when one developer decided he didn't like Trump's support of ICE and pulled his package from NPM. Or, unbeknownst to your legal team, you could be distributing software without a proper license, as is the case with any container that uses Alpine Linux as the base image.

Fortunately, these risks are not unknowable. A number of open source tools exist for scanning for CVEs, and recent projects are helping to standardize the Software Bill of Materials (SBOM) to make it easy to check your containers for license and security risks. Knowing, of course, is only half the battle; securing your supply chain is the end goal. This is where the unique capabilities of Anchore Enterprise can be applied. Creating, managing, and enforcing policy lets you enforce the constraints most applicable to your organization while still allowing teams to move quickly by building on top of open source and container tooling.
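To make the SBOM idea concrete, here is a toy license check over a simplified bill of materials. Real SBOMs follow standards such as SPDX or CycloneDX, and the field names and denied-license list here are illustrative only:

```python
# Toy license check over a simplified SBOM; real SBOMs follow standards such
# as SPDX or CycloneDX, and the field names here are illustrative only.
DENIED_LICENSES = {"AGPL-3.0"}

def license_risks(sbom):
    """Flag components whose license is denied or missing entirely."""
    risks = []
    for comp in sbom:
        lic = comp.get("license")
        if lic in DENIED_LICENSES:
            risks.append((comp["name"], "denied license " + lic))
        elif lic is None:
            risks.append((comp["name"], "license unknown"))
    return risks

sbom = [
    {"name": "musl", "license": "MIT"},
    {"name": "somelib", "license": "AGPL-3.0"},
    {"name": "vendored-blob"},  # no license metadata at all
]
for name, reason in license_risks(sbom):
    print(name + ": " + reason)
```

The "license unknown" case is often the riskier one in practice: missing metadata means your legal exposure cannot be assessed at all.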

Smart Contracts for your Supply Chain

Most sizable organizations have already established best practices around their software supply chain. Network security, tool mandates, and release practices all help to decrease your organization's risk, but they are all fallible. Where humans are involved, they are sure to choose convenience over security, especially in urgent situations.

This is the idea behind the Open Policy Agent (OPA) Kubernetes project, which can prevent certain container images from being scheduled and can even integrate with a service mesh to route network traffic away from suspicious containers.

At Anchore, we believe that catching security issues at runtime is costly and focus on controlling your path to production through an independent policy engine. By defining policy, and leveraging our toolbox in your pipelines you can enforce the appropriate policy for your organization, team, and environment.

This capability lets development teams use tools that are convenient to them during the creative process while you enforce a stricter packaging process. For example, you might want to ensure that all production containers are pulled from a privately managed registry. This gives you greater control and less exposure, but how can you enforce it? Below is an example policy rule you can apply using Anchore Enterprise to prevent container images from being pulled from Docker Hub.

"denylisted_images": [
  {
    "id": "9b6e8f3b-3f59-44cb-83c7-378b9ba750f7",
    "image": {
      "type": "tag",
      "value": "*"
    },
    "name": "Deny use of Dockerhub Images",
    "registry": "dockerhub.io",
    "repository": "*"
  }
],

By adding this to a policy, you can warn teams that they are pulling a publicly accessible image and make your central IT team aware of the violation. This simple contract serves as a building block for developing "compliance-as-code" within your organization. This is just one example, of course; you could also search for secrets, personally identifiable information (PII), or any combination of checks.

Supply Chain Driven Design

For CIOs and CSOs, focusing on the role of compliance when designing your software supply chain is crucial for not only managing risk, but also to improve the efficiency and productivity of your organization. Technology leaders that do this quickly will maintain distinct agility when a crisis hits, and stand out from their peers in the industry by innovating faster and more consistently. Anchore Enterprise gives you the building blocks to design your supply chain based on the trade-offs that make the most sense for your organization.

More Links & References

How one programmer broke the internet

NPM Typo Squatting attack

How a supply chain attack led to millions of stolen credit cards

Kubecon Supply Chain Talk

DevSecOps and the Next Generation of Digital Transformation

COVID-19 is accelerating the digital transformation of commercial and public sector enterprises around the world. However, digital transformation brings along new digital assets (such as applications, websites, and databases), increasing an enterprise’s attack surface. To prevent costly breaches, protect reputation, and maintain customer relationships, enterprises undergoing digital transformation have begun implementing a built-in and bottom-up security approach: DevSecOps.

Ways Enterprises Can Start Implementing DevSecOps

DevSecOps requires sharing the responsibility of security across development and operations teams. It involves empowering development, DevOps, and IT personnel with security information and tools to identify and eliminate threats as early as possible. Here are a few ways enterprises that are undergoing digital transformation can start implementing DevSecOps:

    • Analyze Front-End Code. Cybercriminals love to target front-end code due to its high number of reported vulnerabilities and security issues. Use CI/CD pipelines to detect security flaws early and share that information with developers so they can fix the issues. It’s also a good idea to make sure that attackers haven’t injected any malicious code; containers can be a great way to ensure immutability.
    • Sanitize Sensitive Data. Today, several open source tools can detect personally identifiable information (PII), secrets, access keys, etc. Running a simple check for sensitive data can be exponentially beneficial – a leaked credential in a GitHub repository could mean game over for your data and infrastructure.
    • Utilize IDE Extensions. Developers use integrated development environments and text editors to create and modify code. Why not take advantage of open source extensions that can scan local directories and containers for vulnerabilities? You can’t detect security issues much earlier in the SDLC than that!
    • Integrate Security into CI/CD. There are many open source Continuous Integration/Continuous Delivery tools available such as Jenkins, GitLab CI, Argo, etc. Enterprises should integrate one or more security solutions into their current and future CI/CD pipelines. A good solution would include alerts and events that allow developers to resolve the security issue prior to pushing anything into production.
    • Go Cloud Native. As mentioned earlier, containers can be a great way to ensure immutability. Paired with a powerful orchestration tool, such as Kubernetes, containers can completely transform the way we run distributed applications. There are many great benefits to “going cloud-native,” and several ways enterprises can protect their data and infrastructure by securing their cloud-native applications.
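As a minimal sketch of the "sanitize sensitive data" step above, a pre-commit or pipeline scan might look like this. The patterns are illustrative assumptions; real secret scanners use far richer rule sets:

```python
import re

# Minimal illustration of scanning text for leaked credentials before it
# ships; these patterns are examples only, and real secret scanners use far
# richer and more precise rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    "generic_password": re.compile(r"password\s*=\s*\S+", re.IGNORECASE),
}

def scan_text(text):
    """Return the sorted names of secret patterns found in the given text."""
    return sorted(name for name, pat in SECRET_PATTERNS.items() if pat.search(text))

hits = scan_text("db_password = hunter2\nkey = AKIAABCDEFGHIJKLMNOP")
print(hits)  # ['aws_access_key', 'generic_password']
```

Running a check like this on every commit is cheap insurance; a single leaked credential in a repository can compromise your data and infrastructure.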

Successful Digital Transformation with DevSecOps

From government agencies to fast food chains, DevSecOps has enabled enterprises to quickly and securely transform their services and assets, even during a pandemic. For example, the US Department of Defense Enterprise DevSecOps Services Team has reduced the average time it takes for software to be approved for military use from years to days. For the first time ever, that same team managed to update the software on a spy plane that was in flight!

On the commercial side of things, the pandemic has forced many businesses and enterprises to adopt new ways of doing things, especially in the food industry. For example, with restaurant seating shut down, Chick-fil-A has had to rely heavily on its drive-thru, curbside, and delivery services. Where do those services begin? Software applications! Chick-fil-A obviously uses GitOps, Kubernetes, and AWS, and it controls large amounts of sensitive data for all of its customers, making it critical that Chick-fil-A implements DevSecOps instead of just DevOps. Imagine if your favorite fast food chain were hacked and your data stolen; that would be extremely detrimental to business. With the suspiciously personalized ads that I receive on the Chick-fil-A app, there's also reason to believe that Chick-fil-A has implemented DevSecMLOps, but that's a topic for another discussion.

A Beginner’s Guide to Anchore Enterprise

[Updated post as of October 22, 2020]

While many Anchore Enterprise users are familiar with our open source Anchore Engine tool and have a good understanding of the way Anchore works, getting started with the additional features provided by the full product may at first seem overwhelming.

In this blog, we will walk through some of the major capabilities of Anchore Enterprise in order to help you get the most value from our product. From basic user interface (UI) usage to enabling third-party notifications, the following sections describe some common things to first explore when adopting Anchore Enterprise.

The Enterprise User Interface

Perhaps the most notable feature of Anchore Enterprise is the addition of a UI to help you navigate various features of Anchore, such as adding images and repositories, configuring policy bundles and whitelists, and scheduling or viewing reports.

The UI simplifies Anchore's usability by letting you perform normal Anchore actions without requiring a strong understanding of command-line tooling. This means that instead of editing a policy bundle as a JSON file, you can use an easy-to-use GUI to add or edit policy bundles, rule definitions, and other policy-based features.

Check out our documentation for more information on getting started with the Anchore Enterprise UI.

Advanced Vulnerability Feeds

With the move to Anchore Enterprise, you have the ability to include third-party entitlements that grant access to enhanced vulnerability feed data from Risk Based Security’s VulnDB. You can also analyze Windows-based containers using vulnerability data provided by Microsoft Security Research Center (MSRC).

Additionally, feed sync statuses can be viewed directly in the UI’s System Dashboard, giving you insight into the status of the data feeds along with the health of the underlying Anchore services. You can read more about enabling and configuring Anchore to use a localized feed service.

Note: Enabling the on-premises (localized) feed service is required to enable the VulnDB and Windows feeds, as these feed providers are not included in the data provided by our feed service.

Enterprise Authentication

In addition to Role-Based Access Controls (RBAC) to enhance user and account management, Anchore Enterprise includes the ability to configure an external authentication provider using LDAP, or OAuth / SAML.

Single Sign-On can be configured via OAuth / SAML support, allowing you to configure Anchore Enterprise to use an external Identity Provider such as Keycloak, Okta, or Google-SSO (among others) in order to fit into your greater organizational identity management workflow.

You can use the system dashboard provided by the UI to configure these features, making integration straightforward and easy to view.

Take a look at our RBAC, LDAP, or our SSO documentation for more information on authentication/authorization options in Anchore Enterprise.

Third-Party Notifications

By using our Notifications service, you can configure your Anchore Enterprise deployment to send alerts to external endpoints (Email, GitHub, Slack, and more) about system events such as policy evaluation results, vulnerability updates, and system errors.

Notification endpoints can be configured and managed through the UI, along with the specific events that fit your organizational needs. The currently supported endpoints are:

  • Email—Send notifications to a specific SMTP mail service
  • GitHub—Version control for software development using Git
  • JIRA—Issue tracking and agile product management software by Atlassian
  • Slack—Team collaboration software tools and online services by Slack Technologies
  • Teams—Team collaboration software tools and online services by Microsoft
  • Webhook—Send notifications to a specific API endpoint
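As a sketch of how such events might be routed to configured endpoints, consider the following. The event names and payload shape are assumptions for illustration, not the actual Notifications service schema:

```python
# Illustrative routing of events to configured notification endpoints; the
# event names and payload shape are assumptions, not the actual Anchore
# Notifications service schema.
ROUTES = {
    "policy_eval": ["slack", "email"],
    "vuln_update": ["jira"],
    "system_error": ["webhook"],
}

def route_event(event_type, message, routes=ROUTES):
    """Return (endpoint, payload) pairs for endpoints subscribed to the event."""
    payload = {"event": event_type, "message": message}
    return [(endpoint, payload) for endpoint in routes.get(event_type, [])]

for endpoint, payload in route_event("policy_eval", "image failed DoD policy check"):
    print("notify " + endpoint + ": " + payload["message"])
```

In the product itself this mapping is configured through the UI; the point of the sketch is simply that each event type fans out to whichever endpoints your organization subscribes.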

For more information on managing notifications in Anchore Enterprise, take a look at our documentation on notifications.

Conclusion

In this blog, we provided a high-level overview of several features to explore when first starting out with Anchore Enterprise. There are multiple other features that we didn’t touch on, so check out our product comparison page for a list of other features included in Anchore Enterprise vs. our open-source Engine offering.

Take a look at our FAQs for more information.

Our Top 5 Strategies for Modern Container Security

[Updated post as of October 15, 2020]

At Anchore, we’re fortunate to be part of the journey of many technology teams as they become cloud-native, and we would like to share what we’ve learned.

Over the past several years, we’ve observed many teams perform microservice application modernization using containers as the basic building blocks. Using Kubernetes, they dynamically orchestrate these software units and optimize their resource utilization. Aside from the adoption of new technologies, we’ve seen cultural transformations as well.

For example, organizational silos are breaking down to provide an environment for “shifting left,” with the shared goal of incorporating as much validation as possible before a software release. One area of transformation that is particularly fascinating to us is how cloud-native is modernizing both development and security practices, along with CI/CD and operations workflows.

Below, we discuss how foundational elements of modern container image security, combined with improved development practices, enhance software delivery overall. For the purposes of this blog, we’ll focus mainly on the image build and the surrounding process within the CI stages of the software development lifecycle.

Here is some high-level guidance all technology teams using containers can implement to increase their container image security posture.

  1. Use minimal base images: Use minimal base images only containing necessary software packages from trusted sources. This will reduce the attack surface of your images, meaning there is less to exploit, and it will make you more confident in your deployment artifacts. To address this, Red Hat introduced Universal Base Images designed for applications that contain their own dependencies. UBIs also undergo regular vulnerability checking and are continuously maintained. Other examples of minimal base images are Distroless images, maintained by Google, and Alpine Linux images.
  2. Go daemonless: Moving away from the Docker CLI and daemon client/server model to a “daemonless” fork/exec model provides advantages. Traditionally, with the Docker container platform, image build, registry, and container operations happen through what is known as the daemon. Not only does this create a single point of failure, but Docker operations are conducted by a user with full root authority. More recently, tools such as Podman, Buildah, and Skopeo (we use Skopeo inside of Anchore Engine) were created to address the challenges of building images, working with registries, and running containers. For more information on the security benefits of using Podman vs. Docker, read this article by Dan Walsh.
  3. Require image signing: Require container images to be signed to verify their authenticity. By doing so you can verify that your images were pushed by the correct party. Image authenticity can be verified with tools such as Notary, and both Podman and Skopeo (discussed above) also provide image signing capabilities. Taking this a step further, you can require that CI tools, repositories, and all other steps in the CI pipeline cryptographically sign every image they process with a software supply chain security framework such as in-toto.
  4. Inspect deployment artifacts: Inspect container images for vulnerabilities, misconfigurations, credentials, secrets, and bespoke policy rule violations prior to being promoted to a production registry and certainly before deployment. Container analysis tools such as Anchore can perform deep inspection of container images, and provide codified policy enforcement checks which can be customized to fit a variety of compliance standards. Perhaps the largest benefit of adding security testing with gated policy checks earlier in the container lifecycle is that you will spend less time and money fixing issues post-deployment.
  5. Create and enforce policies: For each of the above, tools selected should have the ability to generate codified rules to enable a policy-driven build and release practice. Once chosen they can be integrated and enforced as checkpoints/quality control gates during the software development process in CI/CD pipelines.
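As a minimal sketch of the verification idea behind point 3, here is a digest check. Real image signing with Notary or Podman/Skopeo verifies a cryptographic signature over the image manifest, not just a hash, but the gate logic is similar:

```python
import hashlib

# Simplified illustration of digest pinning: verify that a pulled artifact
# matches the digest recorded at build time. Real image signing (Notary,
# Podman/Skopeo) also verifies a cryptographic signature over the manifest.
def verify_digest(blob: bytes, pinned_digest: str) -> bool:
    actual = "sha256:" + hashlib.sha256(blob).hexdigest()
    return actual == pinned_digest

layer = b"example image layer contents"
pinned = "sha256:" + hashlib.sha256(layer).hexdigest()  # recorded at build time

print(verify_digest(layer, pinned))        # True: artifact unchanged
print(verify_digest(b"tampered", pinned))  # False: fails the gate
```

Pinning by digest rather than by mutable tag is what makes the check meaningful: a tag can silently move, but a digest cannot.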

How Improved Development Practices Help

The above can be quite challenging to implement without modernizing development in parallel. One development practice we’ve seen change the way organizations are able to adopt supply chain security in a cloud-native world is GitOps. The declarative constructs of containers and Kubernetes configurations, coupled with infrastructure-as-code tools such as Terraform provide the elements for teams to fully embrace the GitOps methodology. Git now becomes the single source of truth for infrastructure and application configuration, along with policy-as-code documents. This practice allows for improved knowledge sharing, code reviews, and self-service, while at the same time providing a full audit trail to meet compliance requirements.

Final Thought

The key benefit of adopting modern development practices is the ability to deliver secure software faster and more reliably. By shifting as many checks as possible into an automated testing suite as part of CI/CD, issues are caught early, before they ever make their way into a production environment.

Here at Anchore, we’re always interested in finding out more about your cloud-native journey, and how we may be able to help you weave security into your modern workflow.

Adopt Zero Trust to Safeguard Containers

In a time where remote access has shifted from the exception to the new normal, users require access to enterprise applications and services from outside the traditional boundaries of an enterprise network. The rising adoption of microservices and containerized applications has further complicated things. Containers and their underlying infrastructure don’t play well within the boundaries of traditional network security practices, which typically emphasize security at the perimeter. As organizations look for ways to address these challenges, strategies such as the Zero Trust model have gained traction in securing containerized workloads.

What is the Zero Trust Model?

Forrester Research introduced the Zero Trust model in 2010, emphasizing a new approach to security: “never trust, always verify.” The belief was that traditional security methodologies focused on securing the internal perimeter were no longer sufficient. Any entity accessing enterprise applications and services needed to be authenticated, authorized, and continuously validated, whether inside or outside of the network perimeter, before being granted or retaining access to applications and their data.

Since then, cloud adoption and the rise of the distributed enterprise model have seen organizations looking to adopt these principles at a time when security threats and breaches have become commonplace. Google, a regular early adopter of new technological trends, released a series of whitepapers and other publications in 2014 detailing its implementation of the Zero Trust model in a project known as BeyondCorp.

Zero Trust and Containerized Workloads

So how can organizations apply Zero Trust principles on their containerized workloads?

Use Approved Images

A containerized environment gives you the ability to bring up new applications and services quickly using free and openly distributed software rather than building them yourself. There are advantages to using open source software, but it also presents the inherent risk of introducing vulnerabilities and other issues into your environment. Restricting the use of images to those that have been vetted and approved can greatly reduce the attack surface and ensure only trusted applications and services are deployed into production.
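A minimal sketch of such an approved-image check might look like this; the registry and repository prefixes are hypothetical, not real endpoints:

```python
# Hypothetical admission check allowing only images from vetted locations;
# the registry and repository prefixes below are illustrative assumptions.
APPROVED_PREFIXES = (
    "registry.internal.example.com/approved/",
    "ironbank.example.mil/",
)

def is_approved(image_ref: str) -> bool:
    """Allow only image references that come from an approved prefix."""
    return image_ref.startswith(APPROVED_PREFIXES)

print(is_approved("ironbank.example.mil/redhat/ubi8:8.4"))  # True
print(is_approved("docker.io/library/nginx:latest"))        # False
```

In practice this check runs as an admission control or policy gate, so an unvetted image is rejected before it is ever scheduled.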

Implement Network Policies

Container networking introduces complexities of its own: nodes, pods, containers, and service endpoints are assigned IP addresses, typically on different network ranges, and require interconnectivity to function properly. As a result, each of these endpoints is generally configured to communicate freely by default. Implementing network policies and micro-segmentation enforces explicit controls around the traffic and data flowing between these entities, ensuring that only permitted communications are established.

Secure Endpoints

In traditional enterprise networks, workloads are often assigned static IP addresses as an identifier and controls are placed around which entities can access certain IP addresses. Containerized applications are typically short-lived, resulting in a dynamic environment with large IP ranges, making it harder to track and audit network connections. To secure these endpoints and the communications between them, organizations should focus on continuously validating and authorizing identities. An emphasis should also be placed on encrypting any communications between endpoints.

Implement Identity-Based Policies

One of the most important aspects of Zero Trust is ensuring that no entity, inside or outside the perimeter, is authorized to access privileged data and systems without first validating and confirming its identity. As previously mentioned, IP-based validation is no longer sufficient in a containerized environment. Instead, enterprises should enforce policies based on the identities of the actual workloads running in their environments. Role-based access control can facilitate the implementation of fine-grained access policies based on an entity’s characteristics, while a least-privilege approach further narrows the scope of access by ensuring that any entity requiring privileged access is granted only the minimum level of permissions required to perform a given set of actions.
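A least-privilege, role-based policy can be expressed concretely in Kubernetes RBAC. The Role below (names are illustrative) grants read-only access to pods in a single namespace rather than cluster-wide rights, and would be bound to a specific workload identity via a RoleBinding:

```yaml
# Illustrative least-privilege Role: read-only access to pods in one
# namespace, instead of broad cluster-level permissions.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: payments
rules:
  - apiGroups: [""]            # "" refers to the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
```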

Final Thoughts

Container adoption has become a point of emphasis for many organizations in their digital transformation strategies. While there are many benefits to containers and microservices, organizations must be careful not to combine new technologies with archaic enterprise security methodologies. As organizations devise new strategies for securing containerized workloads in a modernized infrastructure, the Zero Trust model can serve as a framework for success. 

The Story Behind Anchore Toolbox

As tool builders, we interact daily with teams of developers, operators, and security professionals working to achieve efficient and highly automated software development processes.  Our goal with this initiative is to provide a technology-focused space for ourselves and the community to build and share a variety of open-source tools to provide data gathering, security, and other capabilities in a form specifically designed for inclusion in developer and developer infrastructure workflows.

This post will share the reasoning, objectives, future vision, and methods for joining and contributing to this new project from Anchore.

Why Anchore Toolbox?

Over the last few years, we’ve witnessed a significant effort in the industry to adopt highly automated, modern software delivery lifecycle (SDLC) management processes. As a container security and compliance technology provider, we often find ourselves deeply involved in security and compliance discussions with practitioners, and in the design of new, automation-oriented developer infrastructure systems. Development teams are looking to build automated security and compliance data collection and controls directly into their SDLC processes. We believe there is an opportunity to translate many of the lessons learned along the way into small, granular tools specifically (and importantly!) designed to be used within a modern developer/CI/CD environment.

Toward this objective, we’ve adopted a UNIX-like philosophy for projects in the Toolbox. Each tool is a stand-alone element with a particular purpose that your team can combine with other tools to construct more comprehensive flows. This model lends itself to useful manual invocation, and we also find it works well when integrating these types of operations into existing CI/CD platforms such as GitHub, GitLab, Atlassian BitBucket, Azure Pipelines, and CloudBees as they continue to add native security and compliance interfaces.

What’s Available Today?

Anchore Toolbox starts with two tools: Syft, a software bill of materials generator, and Grype, a container image and code repository vulnerability scanner. Syft and Grype are fast, efficient software analysis tools that draw on our experience building technologies for deep container image analysis and security data.

To illustrate how we envision DevSecOps teams using these tools in practice, we’ve included a VS Code extension for Grype and a new version of the Anchore Scan GitHub action, based on Grype, that supplies container image security findings to GitHub’s recently launched code scanning feature set. 

Both Syft and Grype are lightweight command-line tools by design. We wrote them in Go, making them straightforward additions to any developer or developer-infrastructure workflow: there’s no need to install a language-specific runtime or struggle with configurations to pass information in and out of a container instance. To support interoperability with SBOM, security, and compliance data stores, you can generate results in human-readable, JSON, and CycloneDX formats.

Future of Anchore Toolbox

We’re launching Anchore Toolbox with what we believe are fundamental building blocks that address essential aspects of the modern SDLC, but we’re just getting started. We would love nothing more than to hear from anyone in the community who shares our enthusiasm for bringing the goals of security, compliance, and insight automation ever closer. We look forward to continuing the discussion and working with you to improve our existing projects and to bring new tools into the Toolbox!

For more information, check out the following resources to start using Anchore Toolbox today.

Introducing Anchore Toolbox: A New Collection of Open Source DevSecOps Tools

Anchore Toolbox is a collection of lightweight, single-purpose, easy-to-use, open source DevSecOps tools that Anchore has developed for developers and DevOps teams who want to build out their continuous integration/continuous delivery (CI/CD) pipelines.

We’re building Toolbox to support the open source DevSecOps community by providing easy-to-use, just-in-time tools available at the command line interface (CLI). Our goal is for Toolbox to serve a fundamentally different need than Anchore Enterprise by offering DevSecOps teams single-purpose tools optimized for speed and ease of use.

The first tools to debut as part of Anchore Toolbox are Syft and Grype:

Syft

We built Syft from the ground up to be an open source analyzer that serves developers who want to “shift left” and scan their projects while still in development. You can use Syft to scan not only a container image but also a directory inside your development project.

Syft tells you what’s inside your project or container, however complicated, and builds you a detailed software bill of materials (SBOM). You can output an SBOM from Syft as a text file, table, or JavaScript Object Notation (JSON) file, and Syft includes native output support for the CycloneDX format.
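Because the JSON output is machine-readable, an SBOM feeds naturally into downstream automation. The sketch below parses a tiny, hand-written sample shaped like Syft’s JSON output (an "artifacts" list with name/version/type entries); a real SBOM would come from running Syft with JSON output rather than from an inline string:

```python
import json

# Hand-written sample in the shape of Syft's JSON output; real data would
# come from a Syft scan of an image or directory.
sbom_json = """
{
  "artifacts": [
    {"name": "openssl", "version": "1.1.1k", "type": "rpm"},
    {"name": "requests", "version": "2.25.1", "type": "python"}
  ]
}
"""

sbom = json.loads(sbom_json)
for pkg in sbom["artifacts"]:
    # Print one line per discovered package.
    print(f'{pkg["name"]} {pkg["version"]} ({pkg["type"]})')
```

A script like this could, for instance, diff the package inventory between two builds or flag unexpected additions to a base image.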

Installing Syft

We provide everything you need, including full documentation for installing Syft over on GitHub.

Grype

Grype is an open source tool that scans your project or container for known vulnerabilities, using the latest information from the same Anchore feed services as Anchore Engine. You can use Grype to identify vulnerabilities in most Linux operating system packages and in language artifacts, including NPM, Python, Ruby, and Java.

Grype provides output formats similar to Syft’s, including table, text, and JSON. You can use Grype on container images or on plain directories.
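JSON output makes it easy to gate a pipeline on scan results. The sketch below mirrors the shape of Grype’s JSON output (a "matches" list carrying vulnerability and artifact details), but the findings themselves are made-up sample data with deliberately fake CVE identifiers, not real scan results:

```python
# Made-up sample findings in the shape of Grype's JSON "matches" output.
matches = [
    {"vulnerability": {"id": "CVE-0000-0001", "severity": "High"},
     "artifact": {"name": "openssl", "version": "1.1.1k"}},
    {"vulnerability": {"id": "CVE-0000-0002", "severity": "Low"},
     "artifact": {"name": "bash", "version": "5.0"}},
]

# Severities that should fail the build.
FAIL_ON = {"High", "Critical"}

blocking = [m for m in matches if m["vulnerability"]["severity"] in FAIL_ON]
for m in blocking:
    v, a = m["vulnerability"], m["artifact"]
    print(f'{v["id"]} ({v["severity"]}) in {a["name"]} {a["version"]}')

# A CI step would exit non-zero here to stop the pipeline.
exit_code = 1 if blocking else 0
```

Filtering on severity like this is the usual pattern for turning raw scan output into a pass/fail CI decision.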

Installing Grype

We provide everything you need, including full documentation for installing Grype over on GitHub.

Anchore’s Open Source Portfolio and DevSecOps

Open source is a building block of today’s DevSecOps toolchain and integral to the growth of the DevSecOps community at large. Anchore Toolbox is part of our strategy to contribute to both the open source and DevSecOps communities and do our part to advance container security practices.

The Anchore Open Source Portfolio also includes two other elements:

  • Out-of-the-box integrations that connect Anchore open source technologies with common CI/CD platforms and developer tools with current integrations including GitHub Actions, Azure Pipelines, BitBucket Pipes, and Visual Studio Code
  • Anchore Engine, a persistent service that stores SBOMs and scan results for historical analysis and API-based interaction

Learn more about Anchore Toolbox

The best way to learn about Syft and Grype is to use them! Also, stay tuned this week for a blog on Thursday, October 8, 2020, from Dan Nurmi, Anchore CTO, who tells the story behind Anchore Toolbox and offers a look forward at what we plan to do with open source as a company.

Join the Anchore Community on Slack to learn more about Toolbox developments and interact with our online community, file issues, and give feedback about your experience with these new tools.