Anchore Enterprise 3.2 Provides Increased Visibility to Identify More Risks in the Software Supply Chain

Modern cloud-native software applications include software components and code from both internal developers and external sources such as open source communities or commercial software providers. Visibility into these components to identify vulnerabilities, security risks, misconfigurations, and bad practices is an integral part of securing the software supply chain. Anchore Enterprise 3.2 provides richer visibility into your software components so risks can be identified and quickly resolved.

Discover and Mitigate Vulnerabilities and Security Risks Found in SUSE Enterprise Linux Containers

Screenshot: SUSE container image scan results

Anchore Enterprise 3.2 can now scan and continuously monitor SUSE container images for security issues in installed SUSE packages, helping customers improve their security posture and reduce threats. SUSE packages are now included in SBOMs along with a comprehensive list of files. Additionally, customers can apply Anchore’s customizable policy enforcement to SUSE packages and vulnerabilities.

Identify Vulnerabilities More Accurately with Our Next-Generation Scanning Engine

Anchore Enterprise 3.2 now uses our next-generation scanning engine, which builds upon the capabilities of our open source tool Grype and delivers more accurate results. Users benefit from Grype’s fast pace of innovation while gaining the additional features available in Anchore Enterprise, such as false-positive management. In addition, Grype users switching to Anchore Enterprise will see consistent results between the two solutions, simplifying the transition.

Note: Existing customers will need to select the next-generation engine in order to take advantage of these benefits. All new installations will default to the new scanning engine. For more information on how to switch, please see the release notes.

More Metadata Exposed in the UI for Policy Rules

Screenshot: new metadata tabs

Customers using Anchore Enterprise can now see additional SBOM file details in the UI that were previously available only through the API. This new visibility enables users to quickly and easily view data that is instrumental in creating and tuning policy rules. The additions include secrets, for identifying credential information inadvertently included in container builds, and file content checks, which can be used to enforce best practices such as verifying that configurations are set correctly. The UI also now allows you to access retrieved files (files that you have designated to be saved during the scan) for further review and additional policy checks.

More Allowlist Customization Options in the UI

Screenshot: Allowlist customized by Trigger ID

Users now have additional Allowlist customization options in the UI. Allowlists enable development teams to continue working while issues are being investigated. In addition to vulnerabilities, users can now add other policy checks to Allowlists through the UI, which permits them to override specific policy violations for a more accurate final pass-or-fail recommendation on image scans.

Expanding Container Security: Announcing Anchore Engine 1.0 and the Role of Syft and Grype

It’s been an amazing five years working with you, our users, with more than 74,000 deployments across more than 40 releases since we initially shipped Anchore Engine. Today, we are pleased to announce that the project has now reached its 1.0 milestone. Much has changed in the world of container security since our first release, but the need for scanning container images for vulnerabilities and other security risks has been a constant. 

Anchore Engine 1.0 includes a focused feature set that is the result of working directly with many organizations toward securing cloud-native development environments and also represents an update to Anchore’s overall approach to delivering DevSecOps-focused open source tools.

New Code, New Speed

Over and over again, we’ve heard that the three most important criteria for a container scanning tool are that it needs to be quick, accurate, and easy to integrate into existing development toolchains. To support those needs, last year we took the lessons learned over years of developing Anchore Engine and created two new command line tools: Syft and Grype. Syft generates a high-fidelity software bill of materials (SBOM) for containers and directories, and Grype performs a vulnerability analysis on the SBOMs created by Syft or against containers directly.
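To make that concrete, here is a minimal sketch of the two tools working together; the image name is a placeholder, and output flags may vary across releases:

syft alpine:3.14 -o json > ./sbom.syft.json    # generate an SBOM for a container image
grype sbom:./sbom.syft.json                    # scan that SBOM for known vulnerabilities
grype alpine:3.14                              # or point Grype at the image directly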

With the release of 1.0, we’ve now refactored Anchore Engine to embed these stateless tools for core scanning functions, improving the speed of container image scans and ensuring parity between stand-alone stateless Syft/Grype scans and those produced by the stateful Anchore Engine service. We’ve also cut the time for the initial sync of the vulnerability DB from hours to seconds, getting new users up and running even faster than before.

Feed Service Deprecation

Prior to the 1.0 release, deployments of Anchore Engine periodically connected to our public feed service, hosted in the cloud. The vulnerability data would then be pulled down and merged into your local Anchore Engine database. This merge often took a while due to the per-record insert-or-update process.

With the new Grype-based scanner, the vulnerability data is now managed by Grype itself using a single file transfer that updates the entire vulnerability database atomically. The vulnerability data is generated from the same sources as the public feed service, ensuring that you’ll see no drop in distro or package ecosystem coverage. Since Anchore Engine 1.0 no longer uses the public feed service, we plan to sunset the non-Grype feed service on April 4, 2022, a date chosen to give all users of Anchore Engine time to plan and execute upgrades to Anchore Engine 1.0+. After April 4, 2022, existing deployments of Anchore Engine prior to 1.0 will continue to operate but will no longer receive new vulnerability data updates.
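You can see the same single-file update model in stand-alone Grype; as a rough sketch (subcommand output varies by version):

grype db update    # fetch the latest vulnerability database in a single transfer
grype db status    # show the build date and location of the local database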

New Tools for CI/CD

When Anchore Engine was created, scanning was centered around the container registry. Now, with GitHub Actions, GitLab Runners, Azure Pipelines, and the ever-present Jenkins, scanning in the CI/CD pipeline is fast becoming the norm. We originally created our inline scanner script (inline_scan) to wrap around Anchore Engine and facilitate this workflow, and it did its job well. With Syft and Grype now delivering the same capabilities as inline_scan (and much more) in a faster and more efficient fashion, we are also retiring that project.
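For teams replacing inline_scan, a stateless scan can be a single Grype invocation; the image name below is a placeholder, and --fail-on sets the severity that breaks the build:

grype registry.example.com/team/app:latest --fail-on high    # exit non-zero on High or Critical findings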

As of 1.0, we will no longer be updating inline_scan with new releases of Anchore Engine, and we will stop updating the existing inline_scan images after January 10, 2022. Similarly, our existing native CI/CD integrations based on inline_scan will be, or have already been, updated to use Grype internally.

The Road Ahead

Going forward, Syft and Grype will be the best choice for developers and DevOps engineers who need to integrate scanning into their CI/CD pipelines. Anchore Engine 1.0 will continue to play the role of providing persistent storage for the results of CI/CD scans as well as automated registry scanning. Because Anchore Engine 1.0 is built on the common core of Syft and Grype, you will get consistent results regardless of where you need scans performed and which tools you use.

With the foundational role that SBOMs will play in software supply chain security and the fast-moving changes to the various CI/CD platforms, we have an ambitious roadmap for Syft and Grype. Our goal is to make Syft the best open source tool for generating an SBOM and Grype the best tool for reporting discovered vulnerabilities. Anchore Engine will continue to receive updates and improvements, but with registry scanning requirements being relatively static, we are not planning any major new capabilities at this time.

Our commercial solution, Anchore Enterprise, will continue to focus on helping security teams manage the whole security lifecycle with centralized policy control, audits, and integrations with enterprise platforms. We are committed to ensuring that users who choose Syft, Grype, and Anchore Engine have a quick and easy path if and when they are ready to make the transition.

Anchore is committed to open source, recognizing that it is the best way to give DevOps teams the tools they need to move fast. Whether you use Syft, Grype, or Anchore Engine, we look forward to working together with you on your DevSecOps journey. You can connect with us on Slack or GitHub. We’ll also be hosting a webinar on October 20, 2021 to walk you through our open source and enterprise solutions and explain the role and capabilities of each. Register here.

Important Resources

Anchore Engine 1.0

Syft

Grype 

Anchore Enterprise

Compare Anchore open source and enterprise solutions

Important Dates

  • October 1, 2021 – Anchore Engine 1.0 Available
  • October 20, 2021 – Webinar on Anchore open source and enterprise solutions. Register here.
  • January 10, 2022 – The inline_scan project will be retired. Users can switch to Syft/Grype for stateless security scans. Executions of inline_scan will continue to function but will no longer receive vulnerability data updates. inline_scan will not be updated with Engine 1.0; it will remain on 0.10.x until retirement.
  • April 4, 2022 – The public feed service, ancho.re, will no longer be available. Anchore Engine users need version 1.0+ with the v2 scanner to receive new vulnerability data updates. Existing deployments will continue to function but will no longer receive vulnerability data updates.

The 3 Shades of SecDevOps

We live and work in a time of Peak Ops. DevOps. DevSecOps. GitOps. And SecDevOps, to name a few. It can be confusing to discern the reality through the marketing spin. However, SecDevOps is one new form of Ops that’s worth keeping in mind as you face new and emerging security and compliance challenges as your organization pulls out of the pandemic.

Here’s what I call the three shades of SecDevOps definitions:

SecDevOps: The Ops Definition

SecDevOps — also called rugged DevOps — places security first in the development process. SecDevOps and DevSecOps differ in the order of security considerations during the software development life cycle (SDLC). It’s a nascent school of thought, one that goes as far as framing the choice as SecDevOps vs. DevOps.

SecDevOps requires a thorough understanding of how the application works to identify how it can be vulnerable. Such an understanding gives you a clearer idea of how you can protect your application from security threats. Threat modeling during the SDLC is an industry best practice for gaining such an understanding.

There are two distinct parts in SecDevOps:

Security as Code

Security as Code (SaC) is when you build security into the tools and practices in your DevOps pipeline. Static application security testing (SAST) and dynamic application security testing (DAST) solutions automatically scan applications coming through the pipeline. SaC places priority on automation over manual processes. Manual processes do remain in place for security-critical components of the application. Implementing SaC is an essential element of DevOps toolchains and workflows.

Infrastructure as Code

Infrastructure as Code (IaC) refers to a set of DevOps tools for setting up and updating infrastructure components to ensure a hardened and controlled deployment environment. The same rules that govern code development are used to manage operations infrastructure, replacing the manual changes and one-off scripts that are still common today. With IaC, mitigating a system problem means deploying a configuration-controlled server rather than patching and updating servers already in production.

SecDevOps uses continuous and automated security testing starting before the application goes into production. It implements issue tracking to ensure the early identification of any defects. It also leverages automation and testing to provide effective security tests throughout the software development lifecycle.

SecDevOps: The DevSecOps Synonym 

Then again, some organizations use the term SecDevOps synonymously with DevSecOps. There’s nothing wrong here. For example, a government agency focusing on security may use the term to mean DevSecOps. The difference is semantic: they want to emphasize the importance of security in their software development.

SecDevOps: The Marketing Spin Definition

The Ops market is full of competition. It’s natural for marketers to want to spin the definition of SecDevOps so that it best suits the products and solutions that their company is selling to prospective customers. The best way to digest a marketing spin definition is to define what SecDevOps means for your organization. Don’t let salespeople define SecDevOps for you.

Final thoughts

Regardless of your school of thought about the three shades of SecDevOps, it’s about the people, culture, processes, and technology. A positive outcome of our current age of Peak Ops is that we all have a lot to learn from other schools of Ops thought, so soak in the SecDevOps definition and see what you can learn from it to apply to your organization’s DevSecOps practices.

Drop an SBOM: How to Secure your Software Supply Chain Using Open Source Tools

In the past few years, the number of software supply chain attacks against companies has skyrocketed. The incessant threat is pushing organizations to start figuring out their own solutions to supply chain security. The recent Executive Order on Improving the Nation’s Cybersecurity also raises new requirements for software used by the United States government.

Securing the software supply chain is no easy task! Software supply chains continue to grow in complexity. Software may come from open source projects, commercial software vendors, and your internally-developed code. And with today’s cloud-native, container-centric practices, development teams are consuming, building, and deploying more software today than they ever have before.

So this begs the question: “Is your team deploying software that might lead to the next headline-grabbing supply chain hack?”

Some supply chain hacks happen when software consumers don’t realize they are using vulnerable software, while other hacks can occur when the origin or contents of the software has been spoofed by malicious actors.

If you’d like to avoid falling victim to these types of attacks, keep reading.

To start off, let’s step backward from the worst-case scenario…

  • I don’t want my company to make headlines by having a massive security breach, so…
  • I don’t want to deploy any software artifacts that are known to have vulnerabilities, so…
  • I need to know which of my installed software packages are vulnerable, so…
  • I need to know what my installed software packages are, so…
  • I need to analyze my artifacts to determine what software they contain.

The Ingredients of Supply Chain Security

Any effective solution to securing your supply chain must include two ingredients: transparency and trust. What does that mean?

Transparency: Discovering What is There

Inevitably, it all starts with knowing what software is being used. You need an accurate list of “ingredients” (such as libraries, packages, or files) that are included in a piece of software. This list of “ingredients” is known as a software bill of materials (SBOM). Once we have an SBOM for any piece of software we create or use, we can begin to answer critical questions about the security of our software supply chain.

It’s important to note that SBOMs themselves also serve as input to other types of analyses. A noteworthy example of this is vulnerability scanning — discovering known security problems with a piece of software based on previously published vulnerability reports. Detecting and mitigating vulnerabilities goes a long way toward preventing security incidents.

In the case of software deployed in containers, developers can use SBOMs and vulnerability scans together to provide better transparency into container images. When performing these two types of analyses within a CI/CD pipeline, we need to realize two things:

  1. Each time we create a new container image (i.e. an image with a unique digest), we only need to generate an SBOM once. And that SBOM can be forever associated with that unique image. Nice!
  2. Even though that unique image never changes, it’s vital to continually scan for vulnerabilities. Many people scan for vulnerabilities once an image is built, and then move on. But new vulnerabilities are discovered and published every day (literally) — so it’s vital to periodically scan any existing images we’re already consuming or distributing to identify if they are impacted by new vulnerabilities.

Trust: Relying on What is There

While artifacts such as an SBOM or vulnerability report provide critical information about software at various points in the supply chain, software consumers need to ensure that they can rely on the origin and integrity of these artifacts.

Keep in mind that software consumers can be customers or users outside your organization or they can be other teams within your organization. In either case, you need to establish “trust”.

One of the foundational approaches to implementing “trust” is for software producers to generate artifacts (including SBOMs and vulnerability reports) that attest to the contents of the software, and then sign those artifacts. Software consumers can then verify the software, SBOM, and vulnerability report for an accurate picture of both the contents and security status of the software they are using.

To implement signing and attestation, development teams have to figure out how to create the SBOM and vulnerability reports, which cryptographic technology to use for signing, how to manage the signing keys, and how to jam these new tools into their existing pipelines. It’s not uncommon to see a “trust” solution get implemented incorrectly, or even neglected altogether.

Ideally, solving for trust would be easy and automated. If it were, development teams would be much more likely to implement it.

What might that look like? Let’s take a look at how we can build transparency and trust into an automated workflow.

Building the Workflow

To accomplish this, we’re going to use three open source CLI tools that are gaining traction in the trust and transparency spaces:

  • Cosign: container signing, verification, and storage in an OCI registry (one of the tools in the Sigstore project)
  • Syft: software bill of materials generator for container images and filesystems
  • Grype: vulnerability scanner for container images and filesystems

If you learn best by seeing a working example, we have one! Check out https://github.com/luhring/example-container-image-supply-chain-security. We’re using GitHub Actions and GitHub’s container registry, but these practices apply just as well to any CI system and container registry.

In our example, we’ve created a container image that’s been intentionally designed to have vulnerabilities, using this Dockerfile. But you should apply the steps below to your own container images that your team already builds.

Signing the Image

Since we’re adding trust and analysis for a container image, the first step is to provide a way to trust the origin and integrity of the container image itself. This means we need to ensure that the container image is signed.

For this, we’ll use Cosign. Cosign is a fantastic tool for signing and verifying container images and related artifacts. It can generate a public/private key pair for us, or it can hook into an existing key management system. For the simplicity of this demonstration, we’ll use “cosign.key” and “cosign.pub” files that Cosign generates for us.

Outside of our CI workflow, we’ll run this command, set a password for our private key, and store these files in GitHub as secrets.

cosign generate-key-pair
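One way to store the generated files as GitHub secrets from the command line is the GitHub CLI; the secret names here are our own choices, not anything Cosign requires:

gh secret set COSIGN_KEY < ./cosign.key                    # private key, under a hypothetical secret name
gh secret set COSIGN_PASSWORD --body "$COSIGN_PASSWORD"    # key password, assumed to be in an environment variable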

Then in our workflow, we can use these keys in our Cosign commands, such as here to sign our image:

cosign sign -key ./cosign.key "$IMAGE"

Conveniently, this command also pushes the new signature to the registry for us.
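The flip side of signing is verification: anyone holding our public key can check the signature. A minimal sketch, using the same Cosign 1.x flag style as the rest of this post:

cosign verify -key ./cosign.pub "$IMAGE"    # succeeds only if the image signature matches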

Analyzing the Image

Now that we have an image we can trust, we can begin asking critical questions about what’s inside this image.

Creating an SBOM

Let’s start by generating an SBOM for the image. For this, we’ll use Syft. Syft is a great tool for discovering what’s inside an image and creating SBOMs that can be leveraged downstream.

syft "registry:$IMAGE" -o json > ./sbom.syft.json

Having an SBOM on file for a container image is important because it lets others observe and further analyze the software packages found in the image. But we can’t forget: other people need to be able to trust our SBOM!

Cosign lets us create attestations for container images. Attestations allow us to make a claim about an image (such as what software is present) in such a way that can be cryptographically verified by others that depend on this information.

cosign attest -predicate ./sbom.syft.json -key ./cosign.key "$IMAGE"

Like with the “sign” command, Cosign takes care of pushing our attestation to the registry for us.

Scanning for Vulnerabilities

Okay, now that we have an SBOM that we can trust, it’s critical to our security that we understand what vulnerabilities have been reported for the software packages in our image. For this, we’ll use Grype, a powerful, CLI-based vulnerability scanner.

We’ll use Grype to scan for vulnerabilities using the SBOM from Syft as the target.

grype sbom:./sbom.syft.json -o json > ./vulnerability-report.grype.json

Just as we did with our SBOM, we’re going to “attest” this vulnerability report for our image, which allows others to trust the results of our scan.

cosign attest -predicate ./vulnerability-report.grype.json -key ./cosign.key "$IMAGE"

Remember that it’s crucial that we continuously scan for vulnerabilities since new vulnerabilities are reported every day. In our example repo, we’ve set up a nightly pipeline that looks for the latest SBOM, verifies the attestation using Cosign, and if valid, uses the SBOM to perform a new vulnerability scan.
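A rough sketch of that nightly job, assuming the SBOM file and $IMAGE variable from the earlier steps are available:

cosign verify-attestation -key ./cosign.pub "$IMAGE" > /dev/null || exit 1    # stop if the attestation can't be verified
grype sbom:./sbom.syft.json -o json > ./vulnerability-report.grype.json       # rescan against today's vulnerability data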

Why Did We Do All of That?

Armed with an attested SBOM and vulnerability report, consumers of our image can depend on both the contents of the software and our scan results to understand what software vulnerabilities are present in the container image we’ve shipped.

Here’s how someone could leverage the trust and transparency we’ve created by verifying the attestations in their own workflow:

cosign verify-attestation -key ./cosign.pub "$IMAGE"

If the attestation can be verified, Cosign retrieves all of the attestation data available for the container image. Depending on the scenario, there can be a large amount of attestation data, but this opens up a lot of options for downstream users who depend on the ability to trust the container images they’re consuming.

Here’s an example of how to use an attestation to find all of the reported vulnerabilities for a container image. (This command assumes you’ve downloaded the cosign.pub public key file from our example).

cosign verify-attestation -key ./cosign.pub ghcr.io/luhring/example@sha256:1f2d8339eda7df7945ece6d3f72a3198bf9a0e92f3f937d4cf37adcbd21a006a | jq --slurp 'map(.payload | @base64d | fromjson | .predicate.Data | fromjson | select(.descriptor.name == "grype")) | first | .matches | map(.vulnerability.id) | unique'

Yes, it’s a long command! But that’s only because we’re using the console. This gets much easier with tooling designed to verify signatures and attestations on our behalf.

Final Thoughts

Now is the time to invest in the security of your software supply chain. Equipped with the knowledge above, you can start to bake trust and transparency into your own pipelines right now. This mission can be daunting — it’s difficult or impossible without great tooling.

We’ve now built a workflow that uses Cosign, Syft, and Grype to:

  1. provide transparency into the contents and the security of our container image, and
  2. create trust that the output from any given step is valid to use as the input to the next step.

With Cosign, Syft, and Grype, you’re much closer to securing your supply chain than you were before. Here’s how to get started with each:

  • Cosign: https://github.com/sigstore/cosign
  • Syft: https://github.com/anchore/syft
  • Grype: https://github.com/anchore/grype

And most importantly, remember that these tools are entirely open source. Please consider giving back to these communities! Add issues, open PRs, and talk to us in the Sigstore and Anchore community Slack channels. The biggest challenges are overcome when great minds come together to create solutions.

5 DevSecOps Best Practices for Hybrid Teams

As we put away our beach chairs and pool toys, now that Labor Day is past us, it’s time to refresh your DevSecOps best practices if your organization is moving employees back to the office on a part-time basis. While your developers should capitalize on their remote work wins, hybrid work can require different approaches than what has been in place during the past 18+ months.

Here are some DevSecOps practices to consider if your development teams are moving to a hybrid work model:

1. Reinforce Trust and Relationships 

The pandemic-forced remote work we’ve all been through has provided invaluable collaboration, empathy, and trust lessons. Your work to continuously improve trust and relationships on your teams doesn’t stop when some team members begin to make their way back to the office.

A challenge to be wary of with hybrid DevSecOps teams is that some team members get face time with managers and executives in the office while remote employees don’t. A common concern is that two (or more) classes of employees develop in your organization.

There can be cultural issues at play here. Then again, work from home (WFH) anxiety and paranoia can be real for some people. Pay close attention and keep communication between team members open as you venture into hybrid work. Provide parity in your meetings by giving onsite and remote participants an equal platform. Another good rule is to communicate calmly and with candor. Such acts will help reinforce trust across your teams.

2. Review your DevOps/DevSecOps Toolchain Security

The move to remote work opened commercial and public sector enterprises to new attacks as endpoints spread outside the traditional network perimeter. In pre-pandemic times, endpoint security in these organizations was very much centralized.

Securing the DevSecOps pipeline is an underserved security discussion in some ways. The DevOps and DevSecOps communities spend so much time discussing delivery velocity and shifting security left that the actual security of the toolchain, including identity and access management (IAM), zero trust architecture (ZTA), and other security measures, gets overlooked. The benefit of these measures is that only authorized employees can access your toolchain.

Use the move to hybrid work to review and refresh your toolchain security against “man-in-the-middle” and other attacks from adversaries looking to target hybrid teams.

3. Improve your DevSecOps Tools and Security Reporting

End-to-end traceability gains added importance as more of your executives and stakeholders return to a new state of normalcy. Use your organization’s move to hybrid work to improve security and development tools reporting across your pipelines. There are some reasons for this refresher:

  • Deliver additional data to your management and stakeholders about project progress through your pipelines during your hybrid work move. Be proactive and work with stakeholders during the transition to see if they have additional reporting requirements for their management.
  • Update your security reporting to reflect the new hybrid working environment that spans both inside and outside your traditional endpoints and network perimeter.
  • Give your team the most accurate, data-driven picture of the current state of software development and security across your projects.

4. Standardize on a Dev Platform

Hybrid work reinforces the need for your developers to work on a standardized platform such as GitLab or GitHub. The platform can serve as a centralized, secure hub for software code and project artifacts accessible to your developers, whether they are working from home or in the office. Each platform also includes reporting tools that can help you further communicate with your management about the progress and security of your projects. 

If your developers are already standardized on a platform, use the move to hybrid work to learn and implement new features. For example, GitLab 14 now integrates Grype for container security, and GitHub Actions makes it easy to automate CI/CD workflows.

5. Refine your Automation Practices

DevSecOps automation isn’t meant to be a one-and-done process. It requires constant analysis and feedback from your developers. With automation, look for areas to improve, such as change management and other tasks that you need to adapt to hybrid work. Make it a rule: if hybrid work changes a workflow for your teams, it’s a new opportunity to automate!

Final thoughts

If you view DevOps and, in turn, DevSecOps as opportunities for continuous improvement, then DevSecOps best practices for hybrid work are another step in your DevSecOps journey. Treat it as the same learning experience as when your organization sent your team home in the early days of COVID-19. 

DevOps Supply Chain Security: A Case for DevSecOps

DevOps supply chain security is becoming another use case for DevSecOps as enterprises seek innovative solutions to secure this attack vector. In the 2021 Anchore Software Supply Chain Report, 60% of respondents consider securing the software supply chain a top or significant focus area. DevSecOps gives enterprises the foundational tools and processes to support this security focus.

Anatomy of a Software Supply Chain Attack

A software supply chain is analogous to a manufacturing supply chain in the auto industry. It includes anything that impacts your software, especially open source and custom software components. The sources for these components come from outside an organization such as an open source software (OSS) project, third-party vendor, contractor, or partner.

The National Institute of Standards and Technology (NIST) has a concise and easy-to-understand definition of software supply chain attack:

A software supply chain attack occurs when a cyber threat actor infiltrates a software vendor’s network and employs malicious code to compromise the software before the vendor sends it to their customers. 

Many organizations see increased value from in-house software development by adopting open source technology and containers to build and package software for the cloud quickly. Usually branded as Digital Transformation, this shift comes with trade-offs rarely highlighted by vendors and boutique consulting firms selling the solutions. You can get past these trade-offs with OSS by establishing an open source program office (OSPO) to manage your OSS governance.

These risks are not limited to criminal hacking, and fragility in your supply chain comes in many forms. One type of risk comes from single contributors who could object morally to the use of their software, as happened when one developer decided he didn’t like President Trump’s support of ICE and pulled his package from NPM. Or, unbeknownst to your legal team, you could distribute software without a proper license, as with any container that uses Alpine Linux as the base image.

Why DevSecOps for Software Supply Chain Security?

DevSecOps practices focus on breaking down silos, improving collaboration, and of course, shifting security left to integrate it early in the development process before production. These and other DevSecOps practices are foundational to secure cloud-native software development.

Software supply chain security in the post SolarWinds and Codecov world is continuously evolving. Some of the brightest minds in commercial and public sector cybersecurity are stepping up to mitigate the risks of potential software supply chain attacks. It’s a nearly impossible task currently. 

Here are some reasons why DevSecOps is a must for software supply chain security:

Unify your CI/CD Pipeline

The sooner you can unify your CI/CD pipeline, the sooner you can implement controls, allowing your security controls to shift left, according to InfoWorld. Implementing multiple controls across multiple systems is a recipe for disaster.

Unifying your CI/CD pipeline also gives you another opportunity to level-set current tool standards and upgrade tools as necessary to improve security and compliance.

Target Dependencies in Software Code

A DevSecOps toolchain gives you the tools, processes, and analytics to target dependencies in the software code coursing through your software supply chain. Less than half of our software supply chain survey respondents report scanning open source software (OSS) containers and using private repositories for dependencies.

Unfortunately, there’s no perfect solution for detecting your software dependencies, so you need to combine multiple solutions across your DevSecOps toolchain and software supply chain. Here are some traditional solutions (see the sketch after this list):

  • Implement software container scanning using a tool such as Anchore Enterprise (of course!) at critical points across your supply chain, such as before checking containers into your private repository
  • Analyze code dependencies specified in the manifest file or lock files
  • Track and analyze dependencies that your build process pulls into the release candidate
  • Examine build artifacts before they enter your registry via tools and processes

The appropriate targeting of software dependencies raises the stature of the software bill of materials (SBOM) as a potent software supply chain security measure. 

Use DevSecOps Collaboration to Break Down DevOps Supply Chain Barriers

DevSecOps isn’t just about tools and processes. It also instills improvements in culture, especially cross-team collaboration. While DevSecOps culture remains a work in progress for the average enterprise, and it should be that way, a renewed focus on software supply chain security is cause for you to extend your DevSecOps culture to the contractors and third-party suppliers that make up your software supply chain.

DevSecOps frees your security team from being the last stop before production. They are free to be more proactive at earlier stages of the software supply chain through frameworks, automated testing, and improved processes. Collaborating with the security team takes on some extra dimensions with software supply chain security because they’ll deal with some additional considerations:

  • Onboarding OSS securely into the supply chain
  • Intaking third-party vendor technologies while maintaining security and compliance
  • Collaborating with contractor and partner security teams as a player-coach to integrate their code into the final product

Structure DevSecOps with a Framework and Processes

As companies continue to move to the cloud, it’s becoming increasingly apparent they should integrate DevSecOps into their cloud infrastructure. Some pain points will likely arise, but their duration will be short and their payoffs large, according to InfoQ.

A DevSecOps framework brings accountability and standardization leading to an improved security posture. It should encompass the following:

  • Visibility into dependencies through the use of automated container scanning and SBOM generation
  • Automation of CI/CD pipelines through the use of AI/ML tools and other emerging technologies
  • Mastery over the data that your pipelines generate, giving your technology and cybersecurity stakeholders the actionable intelligence they require to respond effectively to technical issues in the build lifecycle and to cybersecurity incidents

Final Thoughts

As more commercial and public sector enterprises focus on improving the security posture of their software supply chains, DevSecOps provides the framework, tools, and culture change that can serve as a foundation for software supply chain security. Just as important, DevSecOps also provides the means to pivot and iterate on your software supply chain security in the interests of continuous improvement.

Want to learn more about supply chain security? Download our Expert Guide to Software Supply Chain Security White Paper!

4 Kubernetes Security Best Practices

Kubernetes security best practices are a necessity now that Kubernetes is becoming the de facto standard for container orchestration. Many of these best practices focus on securing Kubernetes workloads, and managers, developers, and sysadmins need to make a habit of instituting them early in their move to Kubernetes orchestration.

Earlier this year, respondents to the Anchore 2021 Software Supply Chain Security Report replied that they use a median of 5 container platforms. That’s testimony to the growing importance of Kubernetes in the market. “Standalone” Kubernetes (not part of a PaaS service) is used most often, by 71 percent of respondents. These instances may run on-premise, through a hosting provider, or on a cloud provider’s infrastructure. The second most used container platform is Amazon ECS (56 percent), a platform-as-a-service (PaaS) offering. Tied for third place (53 percent) are Amazon EKS, Azure Kubernetes Service, and Red Hat OpenShift.

A common industry definition for a workload is the amount of activity performed, or capable of being performed, within a specified period by a program or application running on a computer. The definition is often loosely applied and can describe a simple “hello world” program or a complex monolithic application. Today, the terms workload, application, software, and program are often used interchangeably.

Best Practices

Here are some Kubernetes security best practices to keep in mind:

1. Enable Role-Based Access Control

Implementing and configuring Role-Based Access Control (RBAC) is necessary when securing your Kubernetes environment and workloads.

Kubernetes 1.6 and later enable RBAC by default (later for HAProxy); however, if you’ve upgraded since then and haven’t changed your configuration, you should double-check it. Because of how Kubernetes authorization controllers are combined, you will have to both enable RBAC and disable legacy Attribute-Based Access Control (ABAC).

Once you start enforcing RBAC, you still need to use it effectively. You should avoid cluster-wide permissions in favor of namespace-specific permissions. Don’t give just anyone cluster admin privileges, even for debugging – it is much more secure to grant access only as needed.
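A minimal sketch of namespace-scoped access; the namespace, role, and user names are placeholders:

kubectl create namespace team-a
kubectl create role pod-reader --verb=get,list,watch --resource=pods -n team-a          # read-only access to pods
kubectl create rolebinding alice-pod-reader --role=pod-reader --user=alice -n team-a    # grant alice only what she needs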

2. Perform Vulnerability Scanning of Containers in the Pipeline

Setting up automated Kubernetes vulnerability scanning of containers in your DevSecOps pipelines and registries is essential to workload security. When you automate visibility, monitoring, and scanning across the container lifecycle, you can remediate more issues in development before your containers reach your production environment.

Another element of this best practice is to have the tools and processes in place to enable the scanning of Kubernetes secrets and private registries. This is another essential step as software supply chain security continues to gain a foothold across industries. 
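A sketch of what such a pipeline gate might look like with open source scanners before an image is checked into a private registry; the image name and severity threshold are illustrative:

syft registry:registry.example.com/payments:build-42 -o json > ./sbom.json    # hypothetical build artifact
grype sbom:./sbom.json --fail-on high                                         # block promotion on serious findings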

3. Keep a Secret

A secret in Kubernetes contains sensitive information, such as a password or token. Even though a pod cannot access the secrets of another pod, it’s vital to keep a secret separate from an image or pod. A person with access to the image would also have access to the secret. This is especially true for complex applications that handle numerous processes and have public access.
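As a minimal sketch of keeping credentials out of the image, you can define a Kubernetes Secret in the cluster and reference it from the pod spec instead of baking it into the image; the names and values are placeholders:

kubectl create secret generic db-credentials --from-literal=username=appuser --from-literal=password='example-only'
kubectl get secret db-credentials -o yaml    # stored in the cluster, not in the image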

4. Follow your CSP’s Security Guidelines

If you’re running Kubernetes in the cloud, then you want to consult your cloud service provider’s guidelines for container workload security. Here are links to documentation from the major CSPs:

Along with these security guidelines, you may want to consider cloud security certifications for your cloud and security team members. CSPs are constantly evolving their security offerings, and consulting the documentation only when you need it may not be enough for your organization’s security and compliance posture.

Final thought

Kubernetes security best practices need to become second nature to operations teams as their Kubernetes adoption grows. IT management needs to work with their teams to ensure the best practices in this post and others make it into standard operating procedures if they aren’t already.

Want to learn more about container security practices? Check out our Container Security Best Practices That Scale webinar, now on-demand!

Cloud Migration Security Challenges: 5 Ways DevSecOps Can Help

DevSecOps is playing a growing role in cloud migrations, especially in the public sector. Even before the Executive Order on Improving the Nation’s Cybersecurity, agencies had to approach cloud migrations with an eye on security to ensure their cloud projects met FedRAMP compliance.

Here are some ways that DevSecOps can help your agency or organization meet cloud migration challenges:

1. Improves Information Processing

When a DoD or other government program moves to the cloud and a DevSecOps model, it fundamentally transforms how the program interacts with data. DevSecOps gives government agencies and DoD programs the tools, processes, and frameworks to develop applications quickly and capitalize on data, helping them respond to data-intensive mission challenges such as big data analysis, fraud detection, and trend analysis.

To say information is power is an understatement, considering government responses to natural disasters, COVID-19, and other threats on the world stage. For example, DevSecOps gives development teams in the public sector a new ability to migrate legacy applications to the cloud securely and open them up to a new hybrid workforce.

2. Provides Security by Design for New Cloud Projects

“Shift security left” is a common refrain about DevSecOps. More importantly, DevSecOps brings security by design to public sector cloud projects.

When you consider DevSecOps as part of your program’s cloud migration strategy, DevOps and security teams can collaborate on workload protection, secure landing zones, operating models, network segmentation, and the implementation of zero trust architecture (ZTA), because both teams get input and buy-in during the design phase regarding functional requirements, data flows, and workstreams.

DevSecOps, by its nature, also provides the feedback loops and collaboration channels that you don’t find in the public sector’s legacy model of long-term contracts, multiple vendors, and silos between developers, cybersecurity, stakeholders, and constituents.

3. Automation of Builds and Testing

Automation is becoming one of the keys to security and overall success with public sector cloud projects. Implementing a DevSecOps toolchain or upgrading your existing DevOps toolchain for DevSecOps provides the tools for automation of container security scanning and compliance checks.

With some government contracting pundits saying up to 80% of agency IT staff’s daily work is just keeping the lights on, moving technical staff to more mission-critical and strategic work will benefit the program. A cloud migration — by its very nature — requires some time for your teams to learn and harness the latest cloud services. Being able to retask team members from fairly rote tasks such as running software builds to critical tasks such as implementing new cloud services benefits government programs small and large and, in turn, the taxpayer.

4. Supports Secure Iteration of Cloud Applications

Following a DevSecOps methodology gives you a secure method for iterating on application features. For example, let’s say your agency is moving a legacy application to the cloud. That move requires a process that secures the application and its data on the journey from inside the agency data center into the cloud. If you choose to refactor the application, your users can benefit from new cloud services that improve security and user experience (UX).

DevSecOps adds a new layer of security over these everyday development tasks:

  • Adding new features using DevSecOps can help the project gain the delivery velocity of a consumer app store versus the quarterly or yearly feature releases common to public sector software development
  • Allowing applications to take advantage of containers and microservices architectures
  • Enabling application optimization using the cloud service provider’s infrastructure that wasn’t previously available in agency data centers

Another option is to rebuild a legacy application for the cloud. Moving to DevSecOps and containers brings with it significant code changes. Still, such an investment could be worth it depending on the purpose of the application, and the changing user and constituent landscape as remote and hybrid work grow in dominance.

5. Sets a Foundation for a Security Culture

DevSecOps and moving to the cloud require a cultural transformation for today’s public sector agencies to meet cloud migration security challenges. Bringing DevSecOps into your program’s cloud migration process is another step in making security part of everybody’s job. When your cloud migration and development teams adopt DevSecOps, it opens up new opportunities for reporting that enable you to best communicate the progress and security status of your cloud migrations to your internal stakeholders.

DevSecOps and Cloud Benefits in Full View

The DoD and the public sector are gradually realizing the benefits of DevSecOps and the cloud. Bringing DevSecOps into your cloud migration framework gives you new tools to maintain security and compliance of your legacy applications and data as they leave your agency data centers and make their journey to the cloud.

Download our Expert Guide to DevOps to DevSecOps Transformation to learn more about DevSecOps to help prepare for your next cloud migration security challenges!

Advancing Software Security with Technical Innovation

As we explore the various roles and responsibilities at Anchore, one of the critical areas is building the roadmap for our enterprise product. Anchore Enterprise is a continuous security and compliance platform for cloud-native applications. Our technology helps secure the software development process and is in use by enterprises like NVIDIA and eBay as well as government agencies like the U.S. Air Force and Space Force.

As news of software supply chain breaches continues to make headlines and impact software builds across industries, the team at Anchore works each day to innovate and refine new technology to support secure and compliant software builds.

With this, Anchore is thrilled to announce an opening for the role of Principal Product Manager. Our Vice President of Product, Neil Levine, weighs in on what he sees as key elements of this role:

“Product managers are lucky in that we get to work with almost every part of an organization and are able to use both our commercial and technical skills. In larger organizations, a role like this often gets more prescribed and the ability to exercise a variety of functions is limited. Anchore is a great opportunity for any PM who wants to enjoy roaming across a diverse range of projects and teams. In addition to that, you get to work in one of the most important and relevant parts of the cybersecurity market that is addressing literal front-page news.”

Are you passionate about security, cloud infrastructure or open-source markets? Then apply for this role on our job board.