What is Software Composition Analysis (SCA)?

This blog post has been archived and replaced by the supporting pillar page that can be found here: https://anchore.com/wp-admin/post.php?post=987475061&action=edit

The blog post is meant to remain “public” so that it will continue to show on the /blog feed. This will help discoverability for people browsing the blog and potentially help SEO. If it is clicked on it will automatically redirect to the pillar page.

Open Source and foreign influence, should we panic?

Updated 2025-09-08 to add notes about the similar fast-glob package.

Wired recently published an article titled Security Researchers Warn a Widely Used Open Source Tool Poses a ‘Persistent’ Risk to the US, which paints a dire picture of a popular open source Go package named easyjson. If you read the article, this sounds like it could be a real problem, so how much panic is appropriate? To spoil the big conclusion: not much.

There’s a similar article about another open source package posing a potential risk, fast-glob in this instance. It’s the same basic idea, and again there is zero cause for concern at this time. Both articles have been all bark and no bite.

So what’s the deal? Are adversaries using open source as a trojan horse into our software? They are, without question. Remember XZ Utils or tj-actions/changed-files? Those were both well-resourced attacks against important open source components. It’s clear that open source is a target for attackers. We can name two examples; there are likely more.

But what about easyjson and fast-glob? Are these supply chain attacks? So far it doesn’t look like it. There is no evidence that using the easyjson or fast-glob libraries creates a risk for an organization. Could this change someday? Absolutely, but the same is true of any other open source library. The potential risk from a Russian company controlling a popular open source library probably isn’t the important detail.

Let’s look at some examples.

Pulling all this data is a lot of work, but there are some quick things anyone can observe in a web browser. Let’s use a couple of popular NPM packages. NPM makes this list easy to find, which is why I’m using it, but the example applies to anything on GitHub.

If we dig into the owners of those widely used repositories, the only one that lists a real location is React: Menlo Park, California, USA, the headquarters of Meta. Where are the other repositories located? We don’t really know. It’s also worth pointing out that all of those repositories have many contributors from all over the world. Just because a project is controlled by an organization in one country doesn’t mean all contributions come from that country.

We know easyjson and fast-glob are from Russia because no one is trying to hide this fact. The organization that holds the easyjson repository is Mail.ru, a Russian company, and it lists its location as Russia. The fast-glob package is held by an open source maintainer who resides in Russia. If either wanted to conduct nefarious activities against open source, this isn’t the best way to do it.

There are some lessons in this though.

Knowing exactly what software you have is critically important for keeping systems secure and running smoothly. Imagine you need to find every place you’re using easyjson or fast-glob. Could you do it quickly? Probably not. Today’s software has a lot of hidden parts and pieces. If you don’t have a clear inventory of all those pieces, a software bill of materials (SBOM), finding something like easyjson or fast-glob will take forever, and you might miss something. If there’s a security problem, that delay can cause serious trouble and leave you vulnerable. Being able to quickly find and fix these kinds of issues matters when most of our software is open source.
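To make the inventory question concrete, here is a minimal sketch of searching a directory of SBOMs for a package. It assumes CycloneDX-style JSON files, one per application, which tools like Syft can generate; the directory layout and the `find_component` helper are invented for illustration:

```python
import json
from pathlib import Path

def find_component(sbom_dir: str, name: str) -> list[str]:
    """Return the SBOM files under sbom_dir that contain a component called name."""
    hits = []
    for path in Path(sbom_dir).glob("*.json"):
        sbom = json.loads(path.read_text())
        # CycloneDX SBOMs list packages under the "components" key.
        for component in sbom.get("components", []):
            if component.get("name") == name:
                hits.append(path.name)
                break
    return hits
```

With one SBOM per application kept in a known place, “where are we using easyjson?” becomes a seconds-long query instead of a multi-day hunt.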

The issue of open source sovereignty introduces complex challenges in today’s interconnected world. If organizations and governments decide to prioritize understanding the origins of their open source dependencies, they immediately encounter a fundamental question: which countries warrant the most scrutiny? Establishing such a list risks geopolitical bias and may not accurately reflect the actual threat landscape. Furthermore, the practicalities of tracking the geographical origins of open source contributions are significant. Developers and maintainers operate globally, and attributing code contributions to a specific nation-state is fraught with difficulty. IP address geolocation can be easily circumvented, and self-reported location data is unreliable, especially in the context of malicious actors who would intentionally falsify such information. This raises serious doubts about relying on geographical data for assessing open source security risks. It necessitates exploring alternative or supplementary methods for ensuring the integrity and trustworthiness of the open source software supply chain, methods that move beyond simplistic notions of national origin.

For a long time, we’ve kind of just trusted open source stuff without really checking it out. Organizations grab these components and throw them into their systems, and so far that’s mostly worked. Things are changing though. People are getting more worried about vulnerabilities, and there are new rules coming out, like the Cyber Resilience Act, that are going to make us be more careful with software. We’re probably going to have to check things out before we use them, keep an eye on them for security issues, and update them regularly. Basically, just assuming everything’s fine isn’t going to cut it anymore. We need to start being a lot more aware of security. This means organizations are going to have to learn new ways to work and change how they do things to make sure their software is safe and follows the rules.

Wrapping up

The origin of easyjson and fast-glob being traced back to Russia raises a valid point about the perception and utilization of open source software. While the geographical roots of a project don’t inherently signify malicious intent, this instance serves as a potent reminder that open source is not simply “free stuff” devoid of obligations for its users. The responsibility for ensuring the security and trustworthiness of the software we integrate into our projects lies squarely with those who build and deploy it.

Anchore has two tools, Syft and Grype, that can help us take responsibility for the open source software we use. Syft can generate SBOMs, making sure we know what we have. We can then use Grype to scan those SBOMs for vulnerabilities, making sure our software isn’t an actual threat to our environments. When a backdoor is found in an open source package, as happened with XZ Utils, Grype will light up like a Christmas tree to let you know there’s a problem.

The EU Cyber Resilience Act (CRA) shifts this burden of responsibility onto software builders. This approach acknowledges the practical limitations of expecting individual open source developers, who often contribute their time and effort voluntarily, to shoulder the comprehensive security and maintenance demands of widespread software usage. Instead of relying on the goodwill and diligence of unpaid contributors to conduct our due diligence, the CRA framework encourages a more proactive and accountable stance from the entities that commercially benefit from and distribute software, including open source components.

This shift in perspective is crucial for the long-term health and security of the software ecosystem. It fosters a culture of proactive risk assessment, thorough vetting of dependencies, and ongoing monitoring for vulnerabilities. By recognizing open source as a valuable resource that still requires careful consideration and due diligence, rather than a perpetually free and inherently secure commodity, we can collectively contribute to a more resilient and trustworthy digital landscape. The focus should be on building secure systems by responsibly integrating open source components, rather than expecting the open source community to single-handedly guarantee the security of every project that utilizes their code.



EU CRA SBOM Requirements: Overview & Compliance Tips

This blog post has been archived and replaced by the supporting pillar page that can be found here: https://anchore.com/wp-admin/post.php?post=987475103&action=edit


FedRAMP Continuous Monitoring: Overview & Checklist

This blog post has been archived and replaced by the supporting pillar page that can be found here:
https://anchore.com/wp-admin/post.php?post=987474886&action=edit


A Complete Guide to Container Security

This blog post has been archived and replaced by the supporting pillar page that can be found here: https://anchore.com/wp-admin/post.php?post=987474704&action=edit


Benefits of Static Image Inspection and Policy Enforcement

In this post, I will dive deeper into the key benefits of a comprehensive container image inspection and policy-as-code framework.
A couple of key terms:

  • Comprehensive Container Image Inspection: a complete analysis of a container image to identify its entire contents: OS and non-OS packages, libraries, licenses, binaries, credentials, secrets, and metadata. Importantly, this information is stored in a Software Bill of Materials (SBOM) for later use.
  • Policy-as-Code Framework: a structure and language for policy rule creation, management, and enforcement, represented as code. Importantly, this allows software development best practices to be adopted, such as version control, automation, and testing.

What Exactly Comes from a Complete Static Image Inspection?

A deeper understanding. Container images are complex and require a complete analysis to fully understand all of their contents. The picture above shows all of the useful data an inspection can uncover. Some examples are:

  • Ports specified via the EXPOSE instruction
  • Base image / Linux distribution
  • Username or UID to use when running the container
  • Any environment variables set via the ENV instruction
  • Secrets or keys (ex. AWS credentials, API keys) in the container image filesystem
  • Custom configurations for applications (ex. httpd.conf for Apache HTTP Server)

In short, a deeper insight into what exactly is inside of container images allows teams to make better decisions on what configurations and security standards they would prefer their production software to have.

How to Use the Above Data in Context?

While we can likely agree that access to the above data for container images is a good thing from a visibility perspective, how can we use it effectively to produce higher-quality software? The answer is through policy management.

Policy management allows us to create and edit the rules we would like to enforce. Oftentimes these rules fall into one of three buckets: security, compliance, or best practice. Typically, a policy author creates sets of rules and describes the circumstances under which certain behaviors or properties are allowed. Unfortunately, authors are often restricted to setting policy rules with a GUI or even a Word document, which makes rules difficult to transfer, repeat, version, or test. Policy-as-code solves this by representing policies in human-readable text files, which allows them to adopt software practices such as version control, automation, and testing. Importantly, a policy-as-code framework includes a mechanism to enforce the rules created.

With containers, standardizing on a common set of best practices for software vulnerabilities, package usage, secrets management, Dockerfiles, etc. is an excellent place to start. Some examples of policy rules are:

  • Should all Dockerfiles have an effective USER instruction? Yes. If undefined, warn me.
  • Should the FROM instruction only reference a set of “trusted” base images? Yes. If not from the approved list, fail this policy evaluation.
  • Are AWS keys ever allowed inside of the container image filesystem? No. If they are found, fail this policy evaluation.
  • Are containers coming from DockerHub allowed in production? No. If someone attempts to use one, fail this policy evaluation.

The above examples demonstrate how the Dockerfile analysis and secrets found during the image inspection can prove extremely useful when creating policy. Most importantly, all of these policy rules are created to map to information available prior to running a container.
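As a sketch of what policy as code can look like in practice, the rules above can be written as small functions over the metadata collected during static image inspection. Everything here, the field names and the rule set alike, is invented for illustration; Anchore’s actual policy language is richer than this:

```python
# Each rule inspects image metadata gathered during static inspection
# and returns "pass", "warn", or "fail". All fields are illustrative.

def check_user(image: dict) -> str:
    # Dockerfiles should set an effective USER; warn if undefined.
    return "pass" if image.get("user") else "warn"

def check_base_image(image: dict, trusted: set[str]) -> str:
    # FROM must reference an approved ("trusted") base image.
    return "pass" if image.get("base_image") in trusted else "fail"

def check_secrets(image: dict) -> str:
    # AWS keys are never allowed in the image filesystem.
    return "fail" if image.get("aws_keys_found") else "pass"

def evaluate(image: dict, trusted: set[str]) -> str:
    """Combine individual rule results into a final policy verdict."""
    results = [check_user(image),
               check_base_image(image, trusted),
               check_secrets(image)]
    if "fail" in results:
        return "FAIL"
    return "WARN" if "warn" in results else "PASS"
```

Because the rules are plain text files in a repository, they can be versioned, reviewed, and unit-tested like any other code, which is exactly what a GUI or a Word document cannot offer.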

Integrating Policy Enforcement

With policy rules clearly defined as code and shared across multiple teams, the enforcement component can be freely integrated into the Continuous Integration / Continuous Delivery workflow. The concept of “shifting left” is important here: the more testing and checks teams can incorporate earlier (further left) in their software development pipelines, the less costly it will be when changes need to be made. Simply put, prevention is better than a cure.

Integration as Part of a CI Pipeline

Incorporating container image inspection and policy rule enforcement into new or existing CI pipelines immediately adds security and compliance requirements to the build, blocking important security risks from ever making their way into production environments. For example, if a policy rule explicitly disallows a root user in the Dockerfile, failing the build of a non-compliant image before it is pushed to a production registry is a fundamental quality gate. Developers are then forced to remediate the issue that caused the build failure and modify their commit to comply.

Below depicts how this process works with Anchore:

Anchore provides an API endpoint where the CI pipeline can send an image for analysis and policy evaluation. This provides simple integration into any workflow, agnostic of the CI system being used. When the policy evaluation is complete, Anchore returns a PASS or FAIL output based on the policy rules defined. From this, the user can choose whether or not to fail the build pipeline.
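On the pipeline side, that integration reduces to a small gate: read the evaluation back from the API and translate it into the build step’s exit status. A minimal sketch, assuming a hypothetical JSON response with a `status` field (not Anchore’s actual response shape):

```python
def gate_build(evaluation: dict, fail_build_on_policy: bool = True) -> int:
    """Turn a policy evaluation into a CI exit code.

    `evaluation` stands in for the JSON body the CI job reads back from
    the analysis API, e.g. {"image": "...", "status": "FAIL"}; the field
    names are invented for illustration. A non-zero return value is what
    most CI systems treat as a failed build step.
    """
    status = evaluation.get("status", "FAIL")  # be conservative on missing data
    if status == "PASS":
        return 0
    # The user can still choose to let a FAIL verdict through (warn-only mode).
    return 1 if fail_build_on_policy else 0
```

The `fail_build_on_policy` flag mirrors the choice described above: the evaluation always runs, but each team decides whether a FAIL verdict actually breaks the pipeline.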

Integration with Kubernetes Deployments

Adding an admission controller to gate execution of container images in Kubernetes in accordance with policy standards can be a critical method to validate what containers are allowed to run on your cluster. Very simply: admit the containers I trust, reject the ones I don’t. Some examples of this are:

  • Reject an image if it is being pulled directly from DockerHub.
  • Reject an image if it has high or critical CVEs that have fixes available.

This integration allows Kubernetes operators to enforce policy and security gates for any pod that is requested on their clusters before they even get scheduled.
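A toy version of that admission decision, covering the two example rules above, might look like the following. The `admit_image` helper and its metadata fields are hypothetical; a real Kubernetes admission controller receives an AdmissionReview request and responds with an allow or deny verdict:

```python
def admit_image(image: dict) -> tuple[bool, str]:
    """Return (admitted, reason) for a container image requested on the cluster."""
    # Rule 1: reject images pulled directly from DockerHub.
    if image.get("registry") == "docker.io":
        return False, "image pulled directly from DockerHub"
    # Rule 2: reject images with high or critical CVEs that have fixes available.
    fixable = [v for v in image.get("vulnerabilities", [])
               if v["severity"] in ("High", "Critical") and v.get("fix_available")]
    if fixable:
        return False, f"{len(fixable)} fixable high/critical CVEs"
    return True, "admitted"
```

The verdict is computed entirely from data gathered before the container runs, which is the point of the whole approach: trust decisions happen at scheduling time, not after a workload is already executing.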

Below depicts how this process works with Anchore and the Anchore Kubernetes Admission Controller:

The key takeaway from both of these points of integration is that they are occurring before ever running a container image. Anchore provides users with a full suite of policy checks which can be mapped to any detail uncovered during the image inspection. When discussing this with customers, we often hear, “I would like to scan my container images for vulnerabilities.” While this is a good first step to take, it is the tip of the iceberg when it comes to what is available inside of a container image.

Conclusion

With immutable infrastructure, once a container image artifact is created, it does not change. To make changes to the software, good practice tells us to build a new container image, push it to a container registry, kill the existing container, and start a new one. As explained above, containers provide us with tons of useful static information gathered during an inspection, so another good practice is to use this information, as soon as it is available, and where it makes sense in the development workflow. The more policies which can be created and enforced as code, the faster and more effective IT organizations will be able to deliver secure software to their end customers.

Looking to learn more about how to utilize a policy-based security posture to meet DoD compliance standards like cATO or CMMC? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories. Get caught up with the content below: