Anchore Enterprise 5.0: New, Free Self-Service Trial

This week we’re proud to announce the immediate availability of the Anchore Enterprise 5.0 free trial. If you’ve been curious about how the platform works, or wondering how it can complement your application security measures, you can now get access to a 15-day free trial.

To get started, just click here, fill out a short form, and you will immediately receive instructions via email on how to spin up your free 15-day trial in your own AWS account. Please note that only AWS is supported at this time, so if you’d like to launch a trial on-prem or with another cloud provider, please reach out to us directly.

With just a few clicks, you’ll be up and running with the latest release of Anchore Enterprise, which includes a new API, improvements to our reporting interface, and so much more. In fact, we have pre-populated the trial with data that will allow you to explore the many features Anchore Enterprise has to offer. Take a look at the screenshots below for a glimpse behind the scenes.

Malware Scanning

Kubernetes Runtime Integration

Vulnerability Reports

We invite you to learn more about Anchore Enterprise 5.0 with a free 15-day trial here. Or, if you’ve got other questions, set up a call with one of our specialists here.

Unpacking the Power of Policy at Scale in Anchore

Generating a software bill of materials (SBOM) is starting to become common practice. Is your organization using them to their full potential? Here are a couple of questions Anchore can help you answer with SBOMs and the power of our policy engine:

  • How far off are we from meeting the security requirements that Iron Bank, NIST, CIS, and DISA put out around container images?
  • How can I standardize the way our developers build container images to improve security without disrupting the development team’s output?
  • How can I best prioritize this endless list of security issues for my container images?
  • I’m new to containers. Where do I start on securing them?

If any of those questions still need answering at your organization and you have five minutes, you’re in the right place. Let’s dive in.

If you’re reading this you probably already know that Anchore creates developer tools to generate SBOMs, and has been doing so since 2016. Beyond SBOM generation, Anchore truly shines when it comes to its policy capabilities. Every company operates differently — some need to meet strict compliance standards while others are focused on refining their software development practices for enhanced security. No matter where you are in your container security journey today, Anchore’s policy framework can help improve your security practices.

Anchore Enterprise takes a tailored approach to policy and enforcement, which means that whether you’re a healthcare provider abiding by stringent regulations or a startup eager to fortify its digital defenses, Anchore has you covered. Our granular controls allow teams to craft policies that align perfectly with their security goals.

Exporting Policy Reports with Ease

Anchore also has a nifty command line tool called anchorectl that allows you to grab SBOMs and the policy results related to those SBOMs. There are a lot of cool things you can do with a little bit of scripting and all the data that Anchore Enterprise stores. We are going to cover one example in this post.

Once Anchore has created and stored an SBOM for a container image, you can quickly get policy results related to that image. The following anchorectl command will evaluate an image against the docker-cis-benchmark policy bundle:

anchorectl image details <image-id> -p docker-cis-benchmark

That command will return the policy result in a few seconds. Let’s say your organization develops 100 images and you want to meet the CIS benchmark standard. You wouldn’t want to assess each of these images individually; that sounds exhausting.

To solve this problem, we have created a script that can iterate over any number of images, merge the results into a single policy report, and export it as a single file. This allows you to make strategic decisions about how you can most effectively move towards compliance with the CIS benchmark (or any standard).
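To give a concrete sense of the approach, here is a minimal sketch of what such a script might look like. It reuses the anchorectl invocation shown above; the -o json output flag and the shape of the returned JSON (a "findings" list with gate/trigger/action fields) are assumptions for illustration, so check them against your anchorectl version — the real policy-report.py may differ in detail. Tabs are used as the separator to match the import steps later in this post.

#!/usr/bin/env python3
# Illustrative sketch: evaluate a list of images against a policy bundle with
# anchorectl and merge the findings into one tab-separated report.
import csv
import json
import subprocess

IMAGES = ["registry.example.com/app-a:latest",  # hypothetical image names
          "registry.example.com/app-b:latest"]
POLICY = "docker-cis-benchmark"

with open("aggregated_output.tsv", "w", newline="") as out:
    writer = csv.writer(out, delimiter="\t")  # tabs, since findings often contain commas
    writer.writerow(["image", "gate", "trigger", "action"])
    for image in IMAGES:
        # Same command as shown above, with JSON output requested (assumed flag)
        proc = subprocess.run(
            ["anchorectl", "image", "details", image, "-p", POLICY, "-o", "json"],
            capture_output=True, text=True, check=True,
        )
        for finding in json.loads(proc.stdout).get("findings", []):  # assumed field name
            writer.writerow([image, finding.get("gate"), finding.get("trigger"), finding.get("action")])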

In this example, I ran the script against 30 images in my Anchore deployment. Now we can take a holistic look at how far off we are from CIS compliance. Here are a few metrics that stand out:

  • 26 of the 30 images are running as ‘root’
  • 46.9% of our total vulnerabilities have fixes available (4,978/10,611)
  • ADD instructions are being used in 70% of our images
  • Health checks missing in 80% of our images
  • 14 secrets (all from the same application team)
  • 1 malware hit (Cryptominer Casey is at it again)

As a security team member, I didn’t write any of this code myself, which means I need to work with my developer colleagues on the product/application teams to clear up these security issues. Usually this means an email that educates my colleagues on how to utilize health checks, prefer COPY over ADD in Dockerfiles, declare a non-privileged user instead of root, and upgrade packages with fixes available (e.g., via Dependabot). Finally, I would prioritize investigating for myself how that malware made its way into that image.

This example illustrates how storing SBOMs and applying policy rules against them at scale can streamline your path to your container security goals.

Visualizing Your Aggregated Policy Reports

While this raw data is useful in and of itself, there are times when you may want to visualize the data in a way that is easier to understand.  While Anchore Enterprise does provide some dashboarding capabilities, it is not and does not aim to be a versatile dashboarding tool. This is where utilizing an observability vendor comes in handy.

In this example, I’ll be using New Relic as they provide a free tier that you can sign up for and begin using immediately. However, other providers such as Datadog and Grafana would also work quite well for this use case. 

Importing your Data

  1. Download the tsv-to-json.py script
  2. Save the data produced by the policy-report.py script as a TSV file
    • We use tabs as the separator because commas appear in many of the items contained in the report.
  3. Run the tsv-to-json.py script against the TSV file:
python3 tsv-to-json.py aggregated_output.tsv > test.json
  4. Sign up for a New Relic account here
  5. Find your New Relic Account ID and License Key
    • Your Account ID appears in your browser’s address bar after you log in to New Relic, and your License Key can be found on the right-hand side of the screen upon initial login.
  6. Use curl to push the data to New Relic:
gzip -c test.json | curl \
-X POST \
-H "Content-Type: application/json" \
-H "Api-Key: <YOUR_NEWRELIC_LICENSE_KEY>" \
-H "Content-Encoding: gzip" \
https://insights-collector.newrelic.com/v1/accounts/<YOUR_NEWRELIC_ACCOUNT_ID>/events \
--data-binary @-

Visualizing Your Data

New Relic uses the New Relic Query Language (NRQL) to perform queries and render charts based on the resulting data set.  The tsv-to-json.py script you ran earlier converted your TSV file into a JSON file compatible with New Relic’s event data type.  You can think of each collection of events as a table in a SQL database.  The tsv-to-json.py script will automatically create an event type for you, combining the string “Anchore” with a timestamp.
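For reference, here is a minimal sketch of what that conversion might look like; the real tsv-to-json.py may differ in detail, but the essentials described above are: read tab-separated rows, attach an eventType attribute of "Anchore" plus a timestamp, and emit a JSON array that the New Relic Event API accepts.

#!/usr/bin/env python3
# Minimal sketch: convert a TSV report into New Relic-compatible event JSON.
import csv
import json
import sys
import time

event_type = f"Anchore{int(time.time())}"  # e.g. Anchore1698686488

events = []
with open(sys.argv[1], newline="") as f:
    for row in csv.DictReader(f, delimiter="\t"):  # tab-separated input
        row["eventType"] = event_type  # every New Relic event needs an eventType
        events.append(row)

print(json.dumps(events))  # redirect to a file, e.g. > test.json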

To create a dashboard in New Relic containing charts, you’ll need to write some NRQL queries.  Here is a quick example:

FROM Anchore1698686488 SELECT count(*) FACET severity

This query will count the total number of entries in the event type named Anchore1698686488 and group them by the associated vulnerability’s severity. You can experiment with creating your own, or start by importing a template we have created for you here.

Wrap-Up

The security data that your tools create is only as good as the insights you can derive from it. In this blog post, we covered a way to help security practitioners turn a mountain of security data into actionable and prioritized security insights, which can help your organization improve its security posture and meet compliance standards more quickly. That said, this workflow assumes you are already an Anchore Enterprise customer.

Looking to learn more about how to utilize a policy-based security posture to meet DoD compliance standards like cATO or CMMC? One of the most popular technology shortcuts is to utilize a DoD software factory. Anchore has been helping organizations and agencies put the Sec in DevSecOps by securing traditional software factories, transforming them into DoD software factories. Get caught up with the content below:

Your Guide to Software Compliance, from Federal Policy to Industry Standards

Let’s be real, cybersecurity compliance is like health insurance: massively complicated, mind-numbing to learn about, and really important when something goes wrong. Complying with cybersecurity laws has only become more challenging in the past few years as the US federal government and European Union have both been accelerating their efforts to modernize cybersecurity legislation and regulations.

This accelerating pace of influence and involvement of governments worldwide is impacting all businesses that use software to operate (which is to say, all businesses). Not only because the government is being more prescriptive with the requirements that have to be met in order to operate a business but also because of the financial penalties involved with non-compliance.

This guide will help you understand how cybersecurity laws and regulations impact your business and how to think about cybersecurity compliance so that you don’t run afoul of non-compliance fines.

What is Cybersecurity Compliance?

Cybersecurity compliance is the practice of conforming to established standards, regulations, and laws to protect digital information and systems from cybersecurity threats. By implementing specific policies, procedures, and controls, organizations meet the requirements set by various governing bodies. This enables these organizations to demonstrate their commitment to cybersecurity best practices and legal mandates.

Consider the construction of a house. Just as architects and builders follow blueprints and building codes to ensure the house is safe, sturdy, and functional, cybersecurity compliance serves as the “blueprint” for organizations in the digital world. These guidelines and standards ensure that the organization’s digital “structure” is secure, resilient, and trustworthy. By adhering to these blueprints, organizations not only protect their assets but also create a foundation of trust with their stakeholders, much like a well-built house stands strong and provides shelter for its inhabitants.

Why is Cybersecurity Compliance Important?

At its core, the importance of cybersecurity compliance can be distilled into one critical aspect: the financial well-being of an organization. Typically when we list the benefits of cybersecurity compliance, we are forced to use imprecise ideas like “enhanced trust” or “reputational safeguarding,” but the common thread connecting all these benefits is the tangible and direct impact on an organization’s bottom line. In this case, it is easier to understand the benefits of cybersecurity compliance by instead looking at the consequences of non-compliance.

  • Direct Financial Penalties: Regulatory bodies can impose substantial fines on organizations that neglect cybersecurity standards. According to the IBM Cost of a Data Breach Report 2023, the average company can expect to pay approximately $40,000 USD in fines due to a data breach. Bear in mind that this figure is an average; a black swan event can lead to a significantly different outcome. A prime example is the TJX Companies data breach in 2006: TJX faced a staggering $40.9 million fine for non-compliance with PCI DSS standards after the credit card information of more than 45 million customers was exposed.
  • Operational Disruptions: Incidents like ransomware attacks can halt operations, leading to significant revenue loss.
  • Loss of Customer Trust: A single data breach can result in a mass exodus of clientele, leading to decreased revenue.
  • Reputational Damage: The long-term financial effects of a tarnished reputation can be devastating, from stock price drops to reduced market share.
  • Legal Fees: Lawsuits from affected parties can result in additional financial burdens.
  • Recovery Costs: Addressing a cyber incident, from forensic investigations to public relations efforts, can be expensive.
  • Missed Opportunities: Non-compliance can lead to lost contracts and business opportunities, especially with entities that mandate cybersecurity standards.

An Overview of Cybersecurity Laws and Legislation

This section will give a high-level overview of cybersecurity laws, standards and the governing bodies that exert their influence on these laws and standards.

Government Agencies that Influence Cybersecurity Regulations

Navigating the complex terrain of cybersecurity regulations in the United States is akin to understanding a vast network of interlinked agencies, each with its own charter to protect various facets of the nation’s digital and physical infrastructure. This ecosystem is a tapestry woven with the threads of policy, enforcement, and standardization, where agencies like the Cybersecurity and Infrastructure Security Agency (CISA), the National Institute of Standards and Technology (NIST), and the Department of Defense (DoD) play pivotal roles in crafting the guidelines and directives that shape the nation’s defense against cyber threats.

The White House and legislative bodies contribute to this web by issuing executive orders and laws that direct the course of cybersecurity policy, while international standards bodies such as the International Organization for Standardization (ISO) offer a global perspective on best practices. Together, these entities form a collaborative framework that influences the development, enforcement, and evolution of cybersecurity laws and standards, ensuring a unified approach to protecting the integrity, confidentiality, and availability of information systems and data.

  1. Cybersecurity and Infrastructure Security Agency (CISA)
    • Branch of the Department of Homeland Security (DHS) that oversees cybersecurity for critical infrastructure for the US federal government
    • Houses critical cybersecurity services such as the National Cybersecurity and Communications Integration Center (NCCIC), United States Computer Emergency Readiness Team (US-CERT), National Coordinating Center for Communications (NCC) and NCCIC Operations & Integration (NO&I)
    • Issues Binding Operational Directives, such as BOD 22-01: Reducing the Significant Risk of Known Exploited Vulnerabilities, which require federal agencies to take action
  2. National Institute of Standards and Technology (NIST)
  3. Department of Defense (DoD)
    • Enforces the Defense Federal Acquisition Regulation Supplement (DFARS), which mandates NIST SP 800-171 compliance for defense contractors
    • Introduced the Cybersecurity Maturity Model Certification (CMMC) for the defense industrial base (DIB), which builds a certification around the security controls in NIST SP 800-171
    • Releases memorandums that amend other cybersecurity laws and standards specific to the defense industrial base (DIB), such as the Continuous Authorization To Operate (cATO) memo
  4. The White House
    • Issues executive orders (EOs) that direct federal agencies to take specific actions related to cybersecurity (e.g., in May 2021, President Biden issued the “Executive Order on Improving the Nation’s Cybersecurity”)
    • Launches policy initiatives that prioritize cybersecurity, leading to the development of new regulations or the enhancement of existing ones
    • Releases strategy documents to align agencies around a national vision for cybersecurity (e.g., the National Cybersecurity Strategy)
  5. International Organization for Standardization (ISO)
    • Develops and publishes international standards, including those related to information security
    • Roughly equivalent to NIST but for European countries
    • Its influence extends beyond Europe in practice, though not officially
  6. European Union Agency for Cybersecurity (ENISA)
    • The EU’s agency dedicated to achieving a high common level of cybersecurity across member states
    • Roughly equivalent to CISA but for European states
  7. The Federal Bureau of Investigation (FBI)
    • Investigates cyber attacks, including those by nation-states, hacktivists, and criminals; investigations can set legal precedent
    • Leads the National Cyber Investigative Joint Task Force (NCIJTF) to coordinate interagency investigation efforts
    • Collaborates with businesses, academic institutions, and other organizations to share threat intelligence and best practices through the InfraGard program
  8. Federal Trade Commission (FTC)
    • Takes legal action against companies failing to protect consumer data
    • Publishes guidance for businesses on how to protect consumer data and ensure privacy
    • Recommends new legislation or changes to existing laws related to consumer data protection and cybersecurity
  9. U.S. Secret Service
    • Investigates cyber crimes, specifically financial crimes; investigations can set legal precedent
    • Manages the Electronic Crimes Task Forces (ECTFs) focusing on cyber intrusions, bank fraud, and data breaches
  10. National Security Agency (NSA)
    • Collects and analyzes signals intelligence (SIGINT) related to cyber threats
    • Established the Cybersecurity Directorate to unify foreign intelligence and cyber defense missions for national security systems and the defense industrial base (DIB)
    • Conducts extensive research in cybersecurity, cryptography, and related fields; innovations and findings from this research often influence broader cybersecurity standards and practices
  11. Department of Health and Human Services (HHS)
    • Enforces the Health Insurance Portability and Accountability Act (HIPAA), ensuring the protection of health information
    • Oversees the Office for Civil Rights (OCR), which enforces HIPAA’s Privacy and Security Rules
  12. Food and Drug Administration (FDA)
    • Regulates the cybersecurity of medical devices, specifically Internet of Things (IoT) devices
    • Provides guidance to manufacturers on cybersecurity considerations for medical devices
  13. Securities and Exchange Commission (SEC)
    • Requires public companies to disclose material cybersecurity risks and incidents
    • Enforces the Sarbanes-Oxley Act (SOX) implications for cybersecurity, ensuring the integrity of financial data

U.S. Cybersecurity Laws and Standards to Know

Navigating the complex web of U.S. cybersecurity regulations can often feel like wading through an alphabet soup of acronyms. We have tried to highlight some of the most important and give context on how the laws, standards and regulations interact, overlap or build on each other.

  1. Federal Information Security Management Act (FISMA)
    • Law that requires federal agencies and their contractors to implement comprehensive cybersecurity measures
    • Many of the standards and recommendations of the NIST Special Publication series on cybersecurity are a response to the mandate of FISMA
  2. Federal Risk and Authorization Management Program (FedRAMP)
    • Standard for assessing the security of cloud/SaaS products and services used by federal agencies
    • FedRAMP certification is the manifestation of the FISMA law for cloud services
  3. Defense Federal Acquisition Regulation Supplement (DFARS)
  4. Cybersecurity Maturity Model Certification (CMMC)
    • Certification to prove that DoD contractors are in compliance with the cybersecurity practices and processes required in DFARS
    • For many years DFARS was not enforced; CMMC is a certification process designed to close this gap
  5. SOC 2 (System and Organization Controls 2)
    • Compliance framework for auditing and reporting on controls related to the security, availability, confidentiality, and privacy of a system
    • Very popular certification for cloud/SaaS companies to maintain as a way to assure clients that their information is managed in a secure and compliant manner
  6. Payment Card Industry Data Security Standard (PCI DSS)
    • Establishes security standards for organizations that handle credit cards
    • Organizations must comply with this standard in order to process or store payment data
  7. Health Insurance Portability and Accountability Act (HIPAA)
    • Protects the privacy and security of health information for consumers
    • Organizations must comply with this standard in order to process or store electronic health records
  8. NIST Cybersecurity Framework
    • Provides a policy framework to guide private sector organizations in the U.S. to assess and improve their ability to prevent, detect, and respond to cyber incidents
    • While voluntary, many organizations adopt this framework to enhance their cybersecurity posture
  9. NIST Secure Software Development Framework
    • Standardized, industry-agnostic set of best practices that can be integrated into any software development process to mitigate the risk of vulnerabilities and improve the security of software products
    • More specific security controls than NIST 800-53 that still meet the controls outlined in the Control Catalog regarding secure software development practices
  10. CCPA (California Consumer Privacy Act)
    • Statute to enhance privacy rights and consumer protection and prevent misuse of consumer data
    • While only applicable to businesses operating in California, it is considered the most likely candidate to be adopted by other states
  11. Gramm-Leach-Bliley Act (GLBA)
    • Protects consumers’ personal financial information held by financial institutions
    • Financial institutions must explain their information-sharing practices and safeguard sensitive data
  12. Sarbanes-Oxley Act (SOX)
    • Addresses corporate accounting scandals and mandates accurate financial reporting
    • Public companies must implement stringent measures to ensure the accuracy and integrity of financial data
  13. Children’s Online Privacy Protection Act (COPPA)
    • Protects the online privacy of children under 13
    • Websites and online services targeting children must obtain parental consent before collecting personally identifiable information (PII)

EU Cybersecurity Laws and Standards to Know

  1. EU 881/2019 (Cybersecurity Act)
    • The law that codifies the mandate for ENISA to assist EU member states in dealing with cybersecurity issues and promote cooperation
    • Creates an EU-wide cybersecurity certification framework for member states to aim for when creating their own local legislation
  2. NIS2 (Revised Directive on Security of Network and Information Systems)
    • A law that requires a high level of security for network and information systems across various sectors in the EU
    • A more specific set of security requirements than the cybersecurity certification framework of the Cybersecurity Act
  3. ISO/IEC 27001
    • An international standard that provides the criteria for establishing, implementing, maintaining, and continually improving an information security management system (ISMS)
    • Roughly equivalent to NIST 800-37, the Risk Management Framework
    • Also includes a compliance and certification component; when combined with ISO/IEC 27002 it is roughly equivalent to FedRAMP
  4. ISO/IEC 27002
    • An international standard that provides more specific controls and best practices that assist in meeting the more general requirements outlined in ISO/IEC 27001
    • Roughly equivalent to NIST 800-53, the Control Catalog
  5. General Data Protection Regulation (GDPR)
    • A comprehensive data protection and privacy law
    • Non-compliance can result in significant fines, up to 4% of an organization’s annual global turnover or €20 million (whichever is greater)

How to Streamline Cybersecurity Compliance in your Organization

Ensuring cybersecurity compliance is a multifaceted challenge that requires a strategic approach tailored to an organization’s unique operational landscape. The first step is to identify the specific laws and regulations applicable to your organization, which can vary based on geography, industry, and business model. Whether it’s adhering to financial regulations like GLBA and SOX, healthcare standards such as HIPAA, or public sector requirements like FedRAMP and CMMC, understanding your compliance obligations is crucial. 

While this guide can’t give prescriptive steps for any organization to meet their individual needs, we have put together a high-level set of steps to consider when developing a cybersecurity compliance program.

Determine Which Laws and Regulations Apply to Your Organization

  1. Geography
    • US-only; if your business only operates in the United States then you only need to be focused on compliance with US laws
    • EU-only; if your business only operates in the European Union then you only need to be focused on compliance with EU laws
    • Global; if your business operates in both jurisdictions then you’ll need to consider compliance with both laws
  2. Industry
    • Financial Services; financial services firms have to comply with the GLBA and SOX laws but if they don’t process credit card payments they might not need to be concerned with PCI-DSS
    • E-commerce; any organization that processes payments, especially via credit card will need to adhere to PCI-DSS but not likely many other compliance frameworks
    • Healthcare; any organization that processes or stores data that is defined as protected health information (PHI) will need to comply with HIPAA requirements
    • Federal; any organization that wants to do business with a federal agency will need to be FedRAMP compliant
    • Defense; any defense contractor that wants to do business with the DoD will need to maintain CMMC compliance
    • B2B; there isn’t a law that mandates cybersecurity compliance for B2B relationships but many companies will only do business with companies that maintain SOC2 compliance
  3. Business Model
    • Data storage; if your organization stores data but does not process or transmit the data then your requirements will differ. For example, if you offer a cloud-based data storage service and a customer uses your service to store PHI, they are required to be HIPAA-compliant but you are considered a Business Associate and do not need to comply with HIPAA specifically
    • Data processing; if your organization processes data but does not store the data then your requirements will differ. For example, if you process credit card transactions but don’t store the credit card information, you will probably need to comply with PCI-DSS but maybe not GLBA and SOX
    • Data transmission; if your organization transmits data but does not process or store the data then your requirements will differ. For example, if you run an internet service provider (ISP), credit card transactions and PHI will traverse your network, but you won’t need to be HIPAA or PCI-DSS compliant

Conduct a Gap Analysis

Current State Assessment: Evaluate the current cybersecurity posture and practices against the required standards and regulations.

Identify Gaps: Highlight areas where the organization does not meet required standards.

These steps can either be done manually or automatically. Anchore Enterprise offers organizations an automated, policy-based approach to scanning their entire application ecosystem and identifying which software is non-compliant with a specific framework.

If you’re interested in learning more, check out our webinar titled "Policy-Based Compliance for Containers: CIS, NIST, and More".

Prioritize Compliance Needs

Risk-based Approach: Prioritize gaps based on risk. Address high-risk areas first.

Business Impact: Consider the potential business impact of non-compliance, such as fines, reputational damage, or business disruption.

Develop a Compliance Roadmap

Short-term Goals: Address immediate compliance requirements and any quick wins.

Long-term Goals: Plan for ongoing compliance needs, continuous monitoring, and future regulatory changes.

Implement Controls and Solutions

Technical Controls: Deploy cybersecurity solutions that align with compliance requirements, such as encryption, firewalls, intrusion detection systems, etc.

Procedural Controls: Establish and document processes and procedures that support compliance, such as incident response plans or data handling procedures.

Another important security solution, specifically targeting software supply chain security, is a vulnerability scanner. Anchore Enterprise is a modern, SBOM-based software composition analysis platform that combines software vulnerability scanning with a monitoring solution and a policy-based component to automate the management of software vulnerabilities and regulation compliance.

If you’re interested in learning more, we have detailed our strategy in a blog post titled "A Policy Based Approach to Container Security & Compliance" and spelled out the benefits in a separate blog post called "The Power of Policy-as-Code for the Public Sector".

Monitor and Audit

Continuous Monitoring: Use tools and solutions to continuously monitor the IT environment for compliance.

Regular Audits: Conduct internal and external audits to ensure compliance and identify areas for improvement.

Being able to find vulnerabilities with a scanner at a point in time, or evaluate a system against specific compliance policies, is a great first step for a security program. Being able to do each of these things continuously, in an automated fashion, and to know the exact state of your system at any point in time is even better. Anchore Enterprise integrates security and compliance features into a continuously updated dashboard, enabling minute-by-minute insight into the security and compliance of a software system.

Document Everything

Maintain comprehensive documentation of all compliance-related activities, decisions, and justifications. This is crucial for demonstrating compliance during audits.

Engage with Stakeholders

Regularly communicate with internal stakeholders (e.g., executive team, IT, legal) and external ones (e.g., regulators, auditors) to ensure alignment and address concerns.

Review and Adapt

Stay Updated: Regulatory landscapes and cybersecurity threats evolve. Stay updated on changes to ensure continued compliance.

Feedback Loop: Use insights from audits, incidents, and feedback to refine the compliance strategy.

How Anchore Can Help

Anchore is a leading software supply chain security company that has built a modern, SBOM-powered software composition analysis (SCA) platform that helps organizations meet and exceed the security standards in the above guide.

As we have learned working with Fortune 100 enterprises and federal agencies, including the Department of Defense, an organization’s supply chain security can only be as good as the depth of their data on their supply chain and the automation of processing the raw data into actionable insights. Anchore Enterprise provides an end-to-end software supply chain security system with total visibility, deep inspection, automated enforcement, expedited remediation and trusted reporting to deliver the actionable insights to make a software system compliant.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Additional Compliance Resources

Introducing Anchore Enterprise 5.0

Today, we are pleased to announce the release of Anchore Enterprise 5.0, which is now Generally Available for download. This is a major release representing a step change from the Anchore Enterprise 4.x code base, containing several new features and improvements to the foundational API.

It’s been over a year and a half since Anchore Enterprise 4.0 was released, and it’s been a tumultuous time in software security. The rate of critical flaws being discovered in open source software has been ever-increasing, and the regulatory response driven by the U.S. government’s Executive Order is now being felt in the market. We’ve always been proud of our customer base and have worked hard to ensure that we are delivering real value to them in response to these dynamics. Improvements to security posture usually come less from novel techniques than from making the existing hard tasks easier. We’d like to thank all our customers who contributed their feedback and insights into making 5.0 the foundation for their security workflows.

Anchore Enterprise 5.0 continues our mission of delivering new features on a fast and regular cadence to our customers while also giving us the opportunity to redesign some of the core parts of the product to make the day-to-day life of operators and users easier. 5.0 will now be the foundation for a series of major new features we are planning over the next 12 months.

Simplified Reporting 

We have introduced a new design for the reporting section of the graphical user interface. The underlying functionality is the same as in 4.x, but the UI now features a more intuitive layout for creating and managing your reports. New reports start with a clean layout where new filters can be added and then saved as a template. Scheduled and unscheduled reports are summarized in a single clean overview.

Single, unified API and Helm Chart for simpler operations

Previously, Anchore exposed its capabilities through multiple API endpoints, making it hard to create integration workflows. 5.0 unifies them under the new v2 API and makes them available under a single endpoint. This new API makes it easier to write scripts and integrations without coding against different endpoints. In addition, we’ve created a new streamlined Helm chart for deploying Anchore in Kubernetes environments, ensuring that all configuration options are easily accessed in a single location.

Easier Vulnerability Matching Logic

Reducing false positives is an ever-present goal for every security team. Anchore Enterprise is based on Syft and Grype, our flagship open source projects, and we are continually evaluating the best matching logic and vulnerability feeds for the highest quality results. With 5.0, we’ve made it easier for users to control which vulnerability feeds should be used for which language ecosystems. New sources such as GitHub’s Advisory Database often provide a higher quality experience for Java, which continues to be ubiquitous in the enterprise.

We invite you to learn more about Anchore Enterprise 5.0 in a demo call with one of our specialists or sign up for a free 15-day trial here.

SBOMs & Vulnerability Scanners: Better Together

In the world of software development, two mega-trends have emerged in the past decade that have reshaped the industry. First, the practice of building applications on a foundation of open-source software components and, second, the adoption of DevOps principles to automate the build and delivery of software. While these innovations have accelerated the pace of software getting into the hands of users, they’ve also introduced new challenges, particularly in the realm of security.

As software teams race to deliver applications at breakneck speeds, security often finds itself playing catch-up, leading to potential vulnerabilities and risks. But what if there was a way to harmonize rapid software delivery with robust security measures? 

In this post, we’ll explore the tension between engineering and security, the transformative role of Software Bill of Materials (SBOMs), and how modern approaches to software composition analysis (SCA) are paving the way for a secure, efficient, and integrated software development lifecycle.

The rise of open-source software ushered in an era where developers had innumerable off-the-shelf components to construct their applications from. These building blocks eliminated the need to reinvent the wheel, allowing developers to focus on innovating on top of the already existing foundation that had been built by others. By leveraging pre-existing, community-tested components, software teams could drastically reduce development time, ensuring faster product releases and more efficient engineering cycles. However, this boon also brought about a significant challenge: blindspots. Developers often found themselves unaware of all the ingredients that made up their software.

Enter the second mega-trend: DevOps tools, with special emphasis on CI/CD build pipelines. These tools promised (and delivered) faster, more reliable software testing, building, and delivery. This ultimately meant that not only was the creation of software accelerated via open-source components, but the build process of manufacturing the software into a state that a user could consume was also sped up. But, as Uncle Ben reminds us, “with great power comes great responsibility”. The accelerated delivery meant that any security issues, especially those lurking in the blindspots, found their way into production at the new accelerated pace enabled by open-source software components and DevOps tooling.

The Strain on Legacy Security Tools in the Age of Rapid Development

This double shot of productivity boosts for engineering teams began to strain their security-oriented counterparts. The legacy security tools that security teams had been relying on were designed for a different era. They were created when software development lifecycles were measured in quarters or years rather than weeks or months, so they could afford to be leisurely with their process.

The tools that were originally developed to ensure that an application’s supply chain was secure were called software composition analysis (SCA) platforms. They began as a method for scanning open source software for licensing information, to prevent corporations from running into legal issues as their developers used open-source components. They scanned every software artifact in its entirety—a painstakingly slow process, especially if you wanted to run a scan during every step of software integration and delivery (e.g., source, build, stage, delivery, production).

As the wave of open-source software and DevOps principles took hold, a tug-of-war began to form between security teams, who wanted thoroughness, and software teams, who were racing against time. Organizations found themselves at a crossroads, choosing between slowing down software delivery to manage security risks or pushing ahead and addressing security issues reactively.

SBOMs to the Rescue!

But what if there was a way to bridge this gap? Enter the Software Bill of Materials (SBOM). An SBOM is essentially a comprehensive list of components, libraries, and modules that make up a software application. Think of it as an ingredient list for your software, detailing every component and its origin.

In the past, security teams had to scan each software artifact during the build process for vulnerabilities, a method that was not only time-consuming but also less efficient. With the sheer volume and complexity of modern software, this approach was akin to searching for a needle in a haystack.

SBOMs, on the other hand, provide a clear and organized view of all software components. This clarity allows security teams to swiftly scan their software component inventory, pinpointing potential vulnerabilities with precision. The result? A revolution in the vulnerability scanning process. Faster scans meant more frequent checks. And with the ability to re-scan their entire catalog of applications whenever a new vulnerability is discovered, organizations are always a step ahead, ensuring they’re not just reactive but proactive in their security approach.

In essence, organizations could now enjoy the best of both worlds: rapid software delivery without compromising on security. With SBOMs, the balance between speed and security isn’t just achievable; it’s the new standard.

How do I Implement an SBOM-powered Vulnerability Scanning Program?

Okay, we have the context (i.e. the history of how the problem came about) and we have a solution. The next question becomes: how do you bring this all together to integrate this vision of the future with the reality of your software development lifecycle?

Below we outline the high-level steps an organization might take to adopt this solution in its software integration and delivery processes:

  1. Research and select the best SBOM generation and vulnerability scanning tools. (Hint: We have some favorites!)
  2. Educate your developers about SBOMs. Need guidance? Check out our detailed post on getting started with SBOMs.
  3. Store the generated SBOMs in a centralized repository.
  4. Create a system to pull vulnerability feeds from reputable sources. If you need a starting point, read our post on how to get started.
  5. Regularly scan your catalog of SBOMs for vulnerabilities, storing the results alongside the SBOMs.
  6. Integrate your SBOM generation and vulnerability scanning tooling into your CI/CD build pipeline to automate this process.
  7. Implement a query system to extract insights from your catalog of SBOMs.
  8. Create a tool to visualize your software supply chain’s security health.
  9. Create a system to alert on newly discovered vulnerabilities in your application ecosystem.
  10. Integrate a policy enforcement system into your developers’ workflows, CI/CD pipelines, and container orchestrators to automatically prevent vulnerabilities from leaking into build or production environments.
  11. Maintain the entire system and continue to improve on it as new vulnerabilities are discovered, new technologies emerge and development processes evolve.
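As a minimal illustration of steps 1, 3, and 5 above, here is a sketch that uses Syft and Grype, Anchore’s open source SBOM generation and vulnerability scanning tools. The image names and directory layout are placeholders, a production system would use a real SBOM repository rather than a local folder, and the --file output flag is an assumption that may vary across Syft versions.

#!/usr/bin/env python3
# Sketch: generate an SBOM per image with Syft, store it centrally, and
# periodically re-scan the stored SBOMs with Grype as new vulnerabilities publish.
import pathlib
import subprocess

IMAGES = ["alpine:3.18", "python:3.11-slim"]  # placeholder image list
SBOM_DIR = pathlib.Path("sbom-catalog")
SBOM_DIR.mkdir(exist_ok=True)

# Steps 1 and 3: generate each SBOM once and store it in a central location
for image in IMAGES:
    sbom_path = SBOM_DIR / (image.replace("/", "_").replace(":", "_") + ".json")
    subprocess.run(["syft", image, "-o", "json", "--file", str(sbom_path)], check=True)

# Step 5: re-scan the SBOM catalog on a schedule; no image pull is needed
for sbom_path in sorted(SBOM_DIR.glob("*.json")):
    subprocess.run(["grype", f"sbom:{sbom_path}"], check=True)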

Alternatively, consider investing in a comprehensive platform that offers all these features, either as a SaaS or on-premises solution, instead of building this entire system yourself. If you need some guidance in determining whether it makes more sense to build or buy, we have put together a post outlining the key signs to watch for when considering whether to outsource this function.

How Anchore can Help you Achieve your Vulnerability Scanning Dreams

The previous section is a bit tongue-in-cheek, but it is also a realistic portrait of how to build a scalable vulnerability scanning program in the Cloud Native era. Open-source software and container pipelines have changed the face of the software industry for the better, but as with any complex system there are always unintended side effects. Being able to deliver software more reliably at a faster cadence was an amazing step forward, but doing it securely got left behind.

Anchore Enterprise was built specifically to address this challenge. It is the manifestation of the list of steps outlined in the previous section on how to build an SBOM-powered software composition analysis (SCA) platform. Integrating into your existing DevOps tools, Anchore Enterprise is a turnkey solution for the management of software supply chain security. If you’d rather buy than build and save yourself the blood, sweat and tears that go into designing an end-to-end SCA platform, we’re looking forward to talking to you.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Guide to SBOMs: What They are and Their Role in Cybersecurity

In the dynamic landscape of software development, the past decade has witnessed two transformative shifts that have redefined the industry’s trajectory. The first is the widespread adoption of open-source software components, providing developers with a vast repository of pre-built modules to streamline their work. The second is the embrace of DevOps principles, automating and accelerating the software build and delivery process. Together, these shifts promised unprecedented efficiency and speed. However, they also introduced a labyrinth of complexity, with software compositions becoming increasingly intricate and opaque. 

This complexity, coupled with the relentless pace of modern development cycles, created a pressing need for a solution that could offer clarity amidst the chaos. This is the backdrop against which the Software Bill of Materials (SBOM) emerged. This guide delves into the who, what, why and how of SBOMs. Whether you’re a developer, a security professional, or simply someone keen on understanding the backbone of modern software security, this guide offers insights that will equip you with the knowledge to navigate all of the gory details of SBOMs.

What is a Software Bill of Materials (SBOM)? 

A software bill of materials (SBOM) is a structured list of software components, modules, and libraries that are included in an application. Similar to the nutrition labels on the back of the foods that you buy, an SBOM is a list of ingredients that the software is composed of. We normally think of SBOMs as an artifact of the software development process. As a developer builds an application using different open-source components, they are also creating a list of ingredients; an SBOM is the digital artifact of this list.

To fully extend the metaphor, creating a modern software application is analogous to crafting a gourmet dish. When you savor a dish at a restaurant, what you experience is the final, delicious result. Behind that dish, however, is a complex blend of ingredients sourced from various producers, each contributing to the dish’s unique flavor profile. Just as a dish might have tomatoes from Italy, spices from India, olive oil from Spain, and fresh herbs from a local garden, a software application is concocted from individual software components (i.e., software dependencies). These components, like ingredients in a dish, are meticulously combined to create the final product. Similarly, while you interact with a seamless software interface, behind the scenes, it’s an intricate assembly of diverse open source software components working in harmony.

Why are SBOMs important?

SBOMs are one of the most powerful security tools that you can use. Large-scale software supply chain attacks that affected SolarWinds, Codecov, and Log4j highlight the need for organizations to understand the software components—and the associated risk—of the software they create or use. SBOMs are critical not only for identifying security vulnerabilities and risks in software. They are also key for understanding how that software changes over time, potentially introducing new risks or threats. 

Knowing what’s in software is the first step to securing it. Increasingly, organizations are developing and using cloud-native software that runs in containers. Consider the complexity of these containerized applications that have hundreds—sometimes thousands—of components from commercial vendors, partners, custom-built software, and open source software (OSS). Each of these pieces is a potential source of risk and vulnerabilities.

Generating SBOMs enables you to create a trackable inventory of these components. Yet, despite the importance of SBOMs for container security practices, only 36% of the respondents to the Anchore 2022 Software Supply Chain Report produce an SBOM for the containerized apps they build, and only 27% require an SBOM from their software suppliers.

SBOMs Use-Cases

An organization can use SBOMs for many purposes. The data inside an SBOM has internal uses such as:

  • Compliance review
  • Security assessments
  • License compliance
  • Quality assurance

Additionally, you can share an SBOM externally for compliance and customer audits. For security and development teams, SBOMs serve a similar purpose to a bill of materials in other industries. For example, automotive manufacturers must track the tens of thousands of parts coming from a wide range of suppliers when manufacturing a modern car. All it takes is one faulty part to ruin the final product.

Cloud-native software faces similar challenges. Modern applications use significant amounts of open source software that depends on other open source software components which in turn incorporate further open source components. They also include internally developed code, commercial software, and custom software developed by partners. 

Combining components and code from such a wide range of sources introduces additional risks and potential for vulnerabilities at each step in the software development lifecycle. As a result, SBOMs become a critical foundation for getting a full picture of the “ingredients” in any software application over the course of the development lifecycle.

Collecting SBOMs from software suppliers and generating SBOMs throughout the process to track component inventory changes and identify security issues is an integral first step to ensuring the overall security of your applications.

Security and development teams can either request SBOMs from their software suppliers or generate SBOMs themselves. Having the ability to generate SBOMs internally is currently the better approach: teams can produce multiple SBOMs throughout the development process to track component changes and search for vulnerabilities as new issues become known in software.

SBOMs can help to alleviate the challenges faced by both developers and security teams by:

  • Understanding risk exposure inherent in open source and third-party tools
  • Reducing development time and cost by exposing and remediating issues earlier in the cycle
  • Identifying license and compliance requirements

Ultimately, SBOMs are a source of truth. To ensure product integrity, development and security teams must quickly and accurately establish:

  • Specific versions of the software in use
  • Location of the software in their builds and existing products

SBOM Security Benefits

There are many SBOM security benefits for your organization. At the core of any effective solution for securing your software supply chain is transparency. Let’s dive into what SBOM security means with regard to these ingredients and why transparency is so vital.

Transparency = Discovering What is in There

It all starts with knowing what software is being used. You need an accurate list of “ingredients” (such as libraries, packages, and files) that are included in a piece of software. This list of “ingredients” is known as a software bill of materials. Once you have an SBOM for any piece of software you create or use, you can begin to answer critical questions about the security of your software supply chain.

It’s important to note that SBOMs themselves can also serve as input to other types of analyses. A noteworthy example of this is vulnerability scanning. Typically, vulnerability scanning is a term for discovering known security problems with a piece of software based on previously published vulnerability reports. Detecting and mitigating vulnerabilities goes a long way toward preventing security incidents.

In the case of software deployed in containers, developers can use SBOMs and vulnerability scans together to provide better transparency into container images. When performing these two types of analyses within a CI/CD pipeline, you need to realize two things:

Each time you create a new container image (i.e. an image with a unique digest), you only need to generate an SBOM once. And that SBOM can be forever associated with that unique image. 

Even though that unique image never changes, it’s vital to continually scan for vulnerabilities. Many people scan for vulnerabilities once an image is built, and then move on. But new vulnerabilities are discovered and published every day (literally) — so it’s vital to periodically scan any existing images you’re already consuming or distributing to identify if they are impacted by new vulnerabilities. Using an SBOM means you can quickly and confidently scan an application for new vulnerabilities.

Why SBOMs Matter for Software Supply Chain Security

Today’s software is complex; that is why SBOMs have become the foundation of software supply chain security. The role of an SBOM is to provide transparency about the software components of an application, providing a foundation for vulnerability analysis and other security assessments.

For example, organizations that have a comprehensive SBOM for every software application they buy or build can instantly identify the impact of new zero-day vulnerabilities, such as the Log4Shell vulnerability in Log4j, and discern its exact location for faster remediation. Similarly, they can evaluate the provenance and operational risk of open source components to comply with internal policies or industry standards. These are critical capabilities when it comes to maintaining and actively managing a secure software supply chain. 

The importance of the SBOM was highlighted in the 2021 U.S. Executive Order to Improve the Nation’s Cybersecurity. The Executive Order directs federal agencies to “publish minimum SBOM standard” and define criteria regarding “providing a purchaser a software bill of materials (SBOM) directly or publish to a public website.” This Executive Order is having a ripple effect across the industry, as software suppliers that sell to the U.S. federal government will increasingly need to provide SBOMs for the software they deliver. Over time these standards will spread as companies in other industries begin to mirror the federal requirements in their own software procurement efforts.

If you’re looking for a deep dive into the world of software supply chain security, we have written a comprehensive guide to the subject.

What is an SBOM made of? What’s inside? 

Each modern software application typically includes a large number of open source and commercial components coming from a wide range of sources. An SBOM is a structured list of components, modules, and libraries that are included in a given piece of software that provides the developer with visibility into that application. Think of an SBOM like a list of ingredients that evolves throughout the software development lifecycle as you add new code or components.

Examples of items included in an SBOM are:

  • A data format that catalogs all the software in an application, including deeply nested dependencies
  • Data tracked includes details such as dependency name, version, and language ecosystem
  • SBOM data files can catalog files not associated with the operating system or included dependency

Anchore Enterprise supports SPDX, CycloneDX, and Syft formats. This is a continually evolving space with new formats introduced periodically. To learn about the latest on SBOM formats, see the Anchore blog here.
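As an illustration of the items listed above, here is a deliberately minimal CycloneDX-style document with a single component; real SBOMs generated by tools such as Syft contain far more detail (licenses, checksums, nested dependencies, and file-level metadata).

#!/usr/bin/env python3
# Build and print a minimal CycloneDX-style SBOM fragment for illustration only.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {
            "type": "library",
            "name": "log4j-core",  # dependency name
            "version": "2.14.1",   # dependency version
            "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1",  # ecosystem-aware identifier
        }
    ],
}
print(json.dumps(sbom, indent=2))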

Who needs SBOMs? 

When it comes to who needs SBOMs, they are mainly used by DevSecOps practitioners and compliance teams for audits, license monitoring, and compliance with industry-specific regulations. However, with the rise of software supply chain attacks (like the SolarWinds hack and the recent Log4Shell vulnerability in Log4j), SBOM use is now on the radar for both security and development teams alike.

Security Teams

SBOMs play a critical role for security teams, especially when it comes to vulnerability scanning. It is much quicker and easier to scan a library of SBOMs than it is to scan all of your software applications, and in the event of a zero-day vulnerability, every minute counts. 

SBOMs can also be leveraged by security teams to prioritize issues for remediation based on their presence and location and to create policies specific to software component attributes such as vendor, version, or package type.

Development Teams

Development teams use SBOMs to track the open source, commercial, and custom-built software components that they use across the applications they develop, manage, and operate. This assists development teams by reducing time spent on rework by helping to manage dependencies, identify security issues for remediation early, and ensure that developers are using approved code and sources.

Fueling the cross-functional use of SBOMs is the Executive Order on Improving the Nation’s Cybersecurity, in which President Biden issued an SBOM requirement that plays a prominent role in securing software supply chains.

The Current State of SBOMs

The current state of SBOMs is complex and evolving. The risks of software supply chain attacks are real, with almost two-thirds of enterprises impacted by a software supply chain attack in the last year, according to the Anchore 2022 Software Supply Chain Report.

To stem these rising threats, the Executive Order outlines new requirements for SBOMs along with other security measures for software used by federal agencies. Until now, the use of SBOMs by cybersecurity teams has been limited to the largest, most advanced organizations. However, as a consequence of these two forces (rising attacks and new federal requirements), the use of SBOMs is on the cusp of a rapid transformation.

With governments and large enterprises leading the way, standardized SBOMs are poised to become a baseline requirement for all software as it moves through the supply chain. As a result, organizations that produce or consume software need the ability to generate, consume, manage, and leverage SBOMs as a foundational element of their cybersecurity efforts.

In recent years we have seen threat actors shift their focus to third-party software suppliers. Rather than attacking their targets directly, they aim to compromise software at the build level, introducing malicious code that can later be executed once that software has been deployed, giving the attacker access to new corporate networks. Now, instead of taking down one target, supply chain attacks can potentially create a ripple effect that could affect hundreds, even thousands, of unsuspecting targets. 

Open source software can also be an attack vector if it contains unremediated vulnerabilities.

SBOMs are a critical foundation for securing against software supply chain attacks. By integrating SBOM generation into the development cycle, developers and security teams can identify and manage the software in their supply chains and catch compromised components early, before they reach runtime and wreak havoc. Additionally, SBOMs allow organizations to create a data trail that provides an extended view of the supply chain history of a particular product.

Say Goodbye to False Positives

You might be in for a bit of a surprise when running the latest version of Grype: potential vulnerabilities you may have become accustomed to seeing are no longer there! Keep calm. This is a good thing. We made your life easier! Today we released an improvement to Grype that is the culmination of months of work and testing and that will dramatically improve the results you see; in fact, some ecosystems can see up to an 80% reduction in false positives! If you’re reading this, you may have used Grype in the past and seen things you weren’t expecting, or you may just be curious how we’ve achieved an improvement like this. Let’s dig in.

The surprising source of false positives

The process of scanning for vulnerabilities involves several different factors, but, without a doubt, one of the most important is for Grype to have accurate data: both when identifying software artifacts and when applying vulnerability data against those artifacts. To address the latter, Anchore provides a database (GrypeDB), which aggregates multiple data sources that Grype uses to assess whether components are vulnerable. This data includes the GitHub Advisory Database and the National Vulnerability Database (NVD), along with several other more specific data sources like those provided by Debian, Red Hat, Alpine, and more.

Once Grype has a set of artifacts identified, vulnerability matching can take place. This matching works well, but inevitably some vulnerabilities are incorrectly excluded (false negatives) and others incorrectly included (false positives). Either kind of false result is a problem, and false positives account for a significant share of the issues reported against Grype over the years.

One of the biggest problems we’ve encountered is that the data sources used to build the Grype database use different identifiers. For example, the GitHub Advisory Database uses data that includes a package’s ecosystem, name, and version, while NVD uses the Common Platform Enumeration (CPE). These identifiers involve trade-offs, the most important of which is how accurately a package can be matched against a vulnerability record. In particular, the GitHub Advisory Database is partitioned by ecosystem, such as npm or Python, whereas the NVD data generally does not make this distinction. The result is that a Python package named “foo” might match vulnerabilities against another “foo” in a different ecosystem. Taking a closer look at reports from the community, it is apparent that the most common cause of reported false positives is CPE matching.

Focusing on the negative

After experimenting with a number of options for improving vulnerability matching, ultimately one of the simplest solutions proved most effective: stop matching with CPEs.

The first question you might ask is: won’t this result in a lot of false negatives? And, secondly, if we’re not matching against CPEs, what are we matching against? Grype has already been using GitHub Advisory Database data for vulnerability matching, so we simply leaned into it. Thankfully, we already have a way to test that this change doesn’t result in a significant increase in false negatives: the Grype quality gate.

One of the things we’ve put in place for Grype is a quality gate, which uses manually labeled vulnerability information to validate that a change in Grype hasn’t significantly affected the vulnerability match results. Every pull request and push to main runs the quality gate, which compares the previously released version of Grype against the newly introduced changes to ensure the matching hasn’t become worse. In our set of test data, we have been able to reduce false positive matches by 2,000+, while only seeing 11 false negatives.
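Conceptually, the gate boils down to simple set comparisons. The sketch below is an illustration of the idea only, not the actual implementation (which lives in the Grype repository and is considerably more involved):

# A "match" is a (package, version, vulnerability ID) triple; `labels` is the
# hand-curated ground truth. One result set comes from the released Grype,
# the other from the candidate change.
def score(matches: set, labels: set) -> tuple[int, int]:
    return len(matches - labels), len(labels - matches)  # (FPs, FNs)

def gate_passes(released: set, candidate: set, labels: set) -> bool:
    old_fp, old_fn = score(released, labels)
    new_fp, new_fn = score(candidate, labels)
    # Fail if the candidate introduces more false positives or negatives.
    return new_fp <= old_fp and new_fn <= old_fn

labels    = {("log4j-core", "2.11.1", "CVE-2021-44228")}
released  = labels | {("foo", "1.0", "CVE-2020-0001")}  # one false positive
candidate = {("log4j-core", "2.11.1", "CVE-2021-44228")}
print(gate_passes(released, candidate, labels))  # True: fewer FPs, no new FNs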

Instead of focusing on how we reduce the false positives, we can now focus on a much smaller set of false negatives to see why they were missed. In our sample data set, this is due to 11 Java JARs that don’t have Maven group, artifact, or version information, which brings up the next area of improvement: Java artifact identification.

When first exploring the option to stop CPE matching, there were a lot more than 11 false negatives, but it was still a manageable number; fewer than 200 false negatives are a lot easier to handle than thousands of false positives. Focusing on these, we found that almost all were cases where Java JARs were not being identified properly, so we improved this, too. Today, it’s still not perfect, the main reason being that some JARs simply don’t have enough information to be identified accurately without some sort of external data (and we have some ideas for handling these cases, too). However, the majority of JARs do have enough information to be identified accurately. To make sure we weren’t regressing on this front, we downloaded 25+ GB of JARs, scanned them, and validated that we extract the correct names and versions. Much of this information ends up being included in the labeled vulnerability data we use to test every commit to Grype.

This change doesn’t mean all CPE matching is turned off by default, however. There are some types of artifacts that Grype still needs to use CPE matching for. Binaries, for example, are not present in the GitHub Advisory Database and Alpine only provides entries for things that are fixed, so we need to continue using CPE matching to determine the vulnerabilities before querying for fix information there. But, for ecosystems supported by the GitHub Advisory Database, we can confidently use this data and prevent the plethora of false positives associated with CPE matching.

GitHub + Grype for the win

The next question you might ask is: how is the GitHub Advisory Database better? There are many reasons that the GitHub data is great, but the things that are most important for Grype are data quality, updatability, and community involvement.

The GitHub Advisory Database is already a well-curated, machine-readable collection of vulnerability data. A surprising amount of the public vulnerability data that exists isn’t very machine readable or high quality, and while a large volume of data that needs updates isn’t a problem by itself, it becomes one when providing those updates is nearly impossible. GitHub can review the existing public vulnerability data and update it with relevant details by correcting descriptions, package names, version information, and inaccurate severities, along with all the rest of the captured information. Being able to update the data quickly and easily is vital to maintaining a quality data set.

And it’s not just GitHub that can contribute these corrections: because the GitHub Advisory Database is stored in a public GitHub repository, anyone with a GitHub account can submit updates. If you notice an incorrect version or a spelling mistake in a description, the fix is one pull request away. And since GitHub repositories are historical archives, in addition to submitting fixes you can look back in time at discussions, decisions, and questions. Much of the public vulnerability data today lacks that transparency: decisions might be made in private or by a single person, with no record of why. With the GitHub Advisory Database, we can see who did what, when, and why. A strong community is what makes open source work, and applying the open source model to vulnerability data works just as well.

We’ve got your back

We believe this change will be a significant improvement for all Grype users, but we don’t know everyone’s situation. Since Grype is a versatile tool, it’s easy to re-enable CPE matching if that’s something you still need. Just add the appropriate options to your .grype.yaml file or use the corresponding environment variables (see the Grype configuration for all the options), for example:
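A minimal sketch, re-enabling CPE matching for a couple of ecosystems (option names here follow the Grype configuration reference; double-check them against your installed version):

# .grype.yaml
match:
  java:
    using-cpes: true
  javascript:
    using-cpes: true

The same options can be set via environment variables, e.g. GRYPE_MATCH_JAVA_USING_CPES=true.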

We want to ensure Grype is the best vulnerability scanner that exists, which is a lofty goal. Today we made a big stride towards this goal. There will always be more work to do: better package detection, better vulnerability detection, and better vulnerability data. Grype and GrypeDB are open source projects, so if you would like to help, please join us.

But today, we celebrate saying goodbye to lots of false positives, so keep calm and scan on: your list of vulnerabilities just got shorter!

The Complete Guide to Software Supply Chain Security

The mega-trends of the containerization of applications and the rise of open-source software components have sped up the velocity of software delivery. This evolution, while offering significant benefits, has also introduced complexity and challenges to traditional software supply chain security. 

Anchore was founded on the belief that the legacy security solutions of the monolith era could be rebuilt to deliver on the promise of speed without sacrificing security. Anchore is trusted by Fortune 100 companies and the most exacting federal agencies across the globe because it has delivered on this promise.

If you’d like to learn more about how the Anchore Enterprise platform is able to accomplish this, feel free to book a time to speak with one of our specialists.

If you’re looking to get a better understanding of how software supply chains operate, where the risks lie and best practices on how to manage the risks, then keep reading.

An Overview of Software Supply Chains 

Before you can understand how to secure the software supply chain, it’s important to understand what the software supply chain is in the first place. A software supply chain is all of the individual software components that make up a software application. 

Software supply chains are similar to physical supply chains. When you purchase an iPhone all you see is the finished product. Behind the final product is a complex web of component suppliers that are then assembled together to produce an iPhone. Displays and camera lenses from a Japanese company, CPUs from Arizona, modems from San Diego, lithium ion batteries from a Canadian mine; all of these pieces come together in a Shenzhen assembly plant to create a final product that is then shipped straight to your door. In the same way that an iPhone is made up of a screen, a camera, a CPU, a modem, and a battery, modern applications are composed of individual software components (i.e. dependencies) that are bundled together to create the finished product. 

With the rise of open source software, most of these components are open source frameworks, libraries, and operating systems. Specifically, 70-90% of modern applications are built utilizing open source software components. Before the ascent of open source software, applications were typically developed with proprietary, in-house code, without a large and diverse set of software “suppliers”. In that environment, the entire “supply chain” consisted of employees of the company, which reduced the complexity of managing all of these teams. The move to cloud-native and DevSecOps design patterns dramatically sped up the delivery of software, with the complication that coordinating all of the open source software suppliers became significantly more complex.

This shift in the way software is developed impacts essentially all modern software. It means that all businesses and government agencies are waking up to the realization that they are operating a software supply chain, whether they want to or not.

One of the ways this new supply chain complexity is being tamed is with the software bill of materials (SBOM). A software bill of materials (SBOM) is a structured list of software components, modules, and libraries that are included in a given piece of software. Similar to the nutrition labels on the back of the foods that you buy, SBOMs are a list of ingredients that go into the software that your applications consume. We normally think of SBOMs as an artifact of the development process. As a developer is “manufacturing” their application using different dependencies they are also building a “recipe” based on the ingredients.

What is software supply chain security? 

Software supply chain security is the process of finding vulnerabilities in the components that make up an application and preventing them from impacting the applications that use those components. Going back to the iPhone analogy from the previous section, in the same way that an attacker could target one of the iPhone suppliers to modify a component before the iPhone is assembled, a software supply chain threat actor could do the same but target an open source package that is then built into a commercial application.

Given the size and prevalence of open source software components in modern applications, the supply chain is only as secure as its weakest link. The iceberg image has become a somewhat overused meme of software supply chain security, but it has become overused precisely because it explains the situation so well: the small visible tip is your own code, while the bulk of the application sits below the waterline in dependencies you didn’t write.

A different analogy is to view the open source software components your application is built on as a pyramid. Your application’s supply chain is all of the open source components that your proprietary business logic sits on top of. The rub is that each of the components you use has its own pyramid of dependencies. The foundation of your app might look solid, but there is always the potential that, if you follow the dependency chain far enough down, you will find a vulnerability that could topple the entire structure.

This gives adversaries their opening. A single compromised package allows attackers to manipulate all of the packages “downstream” of their entrypoint.

This reality was viscerally felt by the software industry (and all industries that rely on the software industry, meaning all industries) during the Log4j incident. 

Common Software Supply Chain Risks

Software development is a multifaceted process, encompassing various components and stages. From the initial lines of code to the final deployment in a production environment, each step presents potential risks for vulnerabilities to creep in. As organizations increasingly integrate third-party components and open-source libraries into their applications, understanding the risks associated with the software supply chain becomes paramount. This section delves into the common risks that permeate the software supply chain, offering insights into their origins and implications.

Source Code

Supply chain risks start with the code itself. Below are the most common risks associated with a software supply chain when generating custom first-party code:

  1. Insecure first-party code

Custom code is the first place to be aware of risk in the supply chain. If the code written by your developers isn’t secure then your application will be vulnerable at its foundation. Insecure code is any application logic that can be manipulated to perform a function that wasn’t originally intended by the developer.

For example, suppose a developer writes a login function that checks the user database to verify that a supplied username and password match. If an attacker can craft a payload that instead causes the function to delete the entire user database, that is insecure code.
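To make this concrete, here is a hypothetical sketch in Python (the function and table names are invented for this illustration, and a real system would hash passwords rather than compare them in plain text):

import sqlite3

# VULNERABLE: user input is spliced directly into the SQL string.
# A username like  x' OR '1'='1  changes the meaning of the query,
# and with many database drivers a crafted payload can do far worse.
def login_insecure(db: sqlite3.Connection, username: str, password: str) -> bool:
    query = f"SELECT 1 FROM users WHERE name = '{username}' AND password = '{password}'"
    return db.execute(query).fetchone() is not None

# SAFER: a parameterized query treats user input strictly as data,
# so an attacker's payload can never become SQL syntax.
def login_safer(db: sqlite3.Connection, username: str, password: str) -> bool:
    query = "SELECT 1 FROM users WHERE name = ? AND password = ?"
    return db.execute(query, (username, password)).fetchone() is not None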

  2. Source code management (SCM) compromise

Source code is typically stored in a centralized repository so that all of your developers can collaborate on the same codebase. An SCM is itself software, and it can be vulnerable just like your first-party code. If an adversary gains access to your SCM through a vulnerability in the software or through social engineering, they will be able to manipulate your source code at the foundation.

  3. Developer environments

Developer environments are powerful productivity tools for your engineers, but they are another potential source of risk for an organization. Most integrated development environments come with a plug-in system so that developers can customize their workflows for maximum efficiency. These plug-in systems typically also have a marketplace associated with them. In the same way that a malicious Chrome browser plug-in can compromise a user’s laptop, a malicious developer plug-in can gain access to a “trusted” engineer’s development system and piggyback on this trusted access to manipulate the source code of an application.

3rd-Party Dependencies (Open source or otherwise)

Third-party software is really just first-party software written by someone else, in the same way that the cloud is just servers run by someone else. Third-party software dependencies are therefore exposed to all of the same risks as your own first-party code in the section above. But since it isn’t your code, you have to deal with those risks in a different way. Below we lay out the two risks associated with third-party dependencies:

  1. Known vulnerabilities (CVEs, etc)

Known vulnerabilities are insecure or malicious code that has been identified in a third-party dependency. Typically the maintainer of a third-party dependency will fix their insecure code when they are notified and publish an update. Sometimes if the vulnerability isn’t a priority they won’t address it for a long time (if ever). If your developers rely on this dependency for your application then you have to assume the risk.

  2. Unknown vulnerabilities (zero-days)

Unknown vulnerabilities are insecure or malicious code that hasn’t yet been discovered. These vulnerabilities can lie dormant in a codebase for months, years, or even decades. When they are finally uncovered and announced, there is typically a worldwide scramble by any business that uses software (i.e. almost all businesses) to figure out whether they use the affected dependency and how to protect themselves from exploitation. Attackers scramble too, working to determine who is running the vulnerable software and crafting exploits to take advantage of businesses that are slow to react.

Build Pipeline & Artifact Repository

  1. Build pipeline compromise

A software build pipeline is a system that pulls the original source code from an SCM, pulls all of the third-party dependencies from their source repositories, and compiles and optimizes the code into a binary that can then be stored in an artifact repository. Like an SCM, it is itself software composed of both first- and third-party code, which means it carries all of the same risks to its source code and software dependencies.

Organizations deal with these risks differently than the developers of the build systems because they do not control this code. Instead the risks are around managing who has access to the build system and what they can do with their access. Risks range from modifying where the build system is pulling source code from to modifying the build instructions to inject malicious or vulnerable code into previously secure source.

  2. Artifact registry compromise

An artifact registry is a centralized repository of fully built applications (typically in the format of a container image) from which a deployment orchestrator pulls software in order to run it in a production environment. It is also software, similar to a build pipeline or SCM, and has the same associated risks as mentioned before.

Typically, the risks of registries are managed through how trust is established between the registry and the build system, or any other system or person that has access to it. Risks range from an attacker poisoning the registry with an untrusted container to an attacker gaining privileged access to the repository and modifying a container in place.

Production

  1. Deployment orchestrator compromise

A deployment orchestrator is a system that pulls pre-built software binaries and runs the applications on servers. It is another type of software system similar to a build pipeline or SCM and has the same associated risks as mentioned before.

Typically, the risks of orchestrators are managed through trust relationships between the orchestrator and the artifact registry, or any other system or person that has access to it. Risks range from an attacker manipulating the orchestrator into deploying an untrusted container to an attacker gaining privileged access to the orchestrator and modifying a running container or manifest.

  2. Production environment compromise

The production environment is the application running on a server after being deployed by an orchestrator. It is the software system built from the original source code that fulfills users’ requests: the final product of the software supply chain. Its risks differ from those of most other systems because it typically serves users outside the organization, about whom far less is known than internal users.

Examples of software supply chain attacks

As reliance on third-party components and open-source libraries grows, so does the potential for vulnerabilities in the software supply chain. Several notable incidents have exposed these risks, emphasizing the need for proactive security and a deep understanding of software dependencies. In this section, we explore significant software supply chain attacks and the lessons they impart.

SolarWinds (2020)

In one of the most sophisticated supply chain attacks, malicious actors compromised the update mechanism of SolarWinds’ Orion software. This breach allowed the attackers to distribute malware to approximately 18,000 customers. The attack had far-reaching consequences, affecting numerous government agencies, private companies, and critical infrastructure.

Lessons Learned: The SolarWinds attack underscored the importance of securing software update mechanisms and highlighted the need for continuous monitoring and validation of software components.

Log4j (2021)

In late 2021, a critical vulnerability was discovered in the Log4j logging library, a widely used Java-based logging utility. Dubbed “Log4Shell,” this vulnerability allowed attackers to execute arbitrary code remotely, potentially gaining full control over vulnerable systems. Given the ubiquity of Log4j in various software applications, the potential impact was massive, prompting organizations worldwide to scramble for patches and mitigation strategies.

Lessons Learned: The Log4j incident underscored the risks associated with ubiquitous open-source components. It highlighted the importance of proactive vulnerability management, rapid response to emerging threats, and the need for organizations to maintain an updated inventory of third-party components in their software stack.

NotPetya (2017)

Originating from the compromised software update mechanism of a Ukrainian accounting software package, NotPetya spread rapidly across the globe. Masquerading as ransomware, its primary intent was data destruction. Major corporations, including Maersk, FedEx, and Merck, faced disruptions, leading to financial losses amounting to billions.

Lessons Learned: NotPetya highlighted the dangers of nation-state cyber warfare and the need for robust cybersecurity measures, even in seemingly unrelated software components.

Node.js Packages coa and rc

In November 2021, two widely used npm packages, coa and rc, were compromised. Malicious versions of these packages were published to the npm registry, attempting to run a script to access sensitive information from users’ .npmrc files. The compromised versions were downloaded thousands of times before being identified and removed.

Lessons Learned: This incident emphasized the vulnerabilities in open-source repositories and the importance of continuous monitoring of dependencies. It also highlighted the need for developers and organizations to verify the integrity of packages before installation and to be wary of unexpected package updates.

JuiceStealer Malware

JuiceStealer is malware spread through a technique known as typosquatting on PyPI (the Python Package Index). Malicious packages were seeded on PyPI, intending to infect users with the JuiceStealer malware, which is designed to steal sensitive browser data. The attack involved a complex chain, including phishing emails to PyPI developers.

Lessons Learned: JuiceStealer showcased the risks of typosquatting in package repositories and the importance of verifying package names and sources. It also underscored the need for repository maintainers to have robust security measures in place to detect and remove malicious packages promptly.

Node.js Packages colors and faker

In January 2022, the developer behind popular npm libraries colors and faker intentionally sabotaged both packages in an act of “protestware.” This move affected thousands of applications, leading to broken builds and potential security risks. The compromised versions were swiftly removed from the npm registry.

Lessons Learned: This incident highlighted the potential risks associated with relying heavily on open-source libraries and the actions of individual developers. It underscored the importance of diversifying dependencies, having backup plans, and the need for the open-source community to address developer grievances constructively.

Standards and Best Practices for Preventing Attacks

There are a number of different initiatives to define best practices for software supply chain security. Organizations ranging from the National Institute of Standards and Technology (NIST) to the Cloud Native Computing Foundation (CNCF) to the Open Source Security Foundation (OpenSSF) have created fantastically detailed documentation on their recommendations for achieving an optimally secure supply chain.

Choosing any of these standards is better than choosing none; you can even cherry-pick from each of them to create a program best tailored to the risk profile of your organization. If you’d prefer to stick to one for simplicity’s sake and need some help deciding, Anchore has detailed our thoughts on the pros and cons of each software supply chain standard here.

Below is a concise summary of each of the major standards to help get you started:

National Institute of Standards and Technology (NIST)

NIST has a few different standards that are worth noting. We’ve ordered them from the broadest to the most specific and, coincidentally, chronologically as well.

NIST SP 800-53, “Security and Privacy Controls for Information Systems and Organizations”

NIST 800-53, aka the Control Catalog, is the granddaddy of NIST security standards. It has had a long life and evolved alongside the security landscape. Typically paired with NIST 800-37, the Risk Management Framework (RMF), this pair of standards creates a one-two punch that not only produces a highly secure environment for protecting classified and confidential information but also sets organizations up to more easily comply with federal compliance standards like FedRAMP.

Software supply chain security (SSCS) topics first began filtering into NIST 800-53 in 2013, but it wasn’t until 2020 that the Control Catalog was updated to break SSCS out into its own section. If your goal is to get up and running with SSCS as quickly as possible, this standard will be overkill. If your goal is to build toward FedRAMP and NIST 800-53 compliance as well as a secure software development process, this standard is for you. If you’re looking for something more specific, one of the next two standards might be for you.

If you need a comprehensive guide to NIST 800-53 or its spiritual sibling, NIST 800-37, we have put together both. You can find a detailed but comprehensible guide to the Control Catalog here and the same plain-English deep dive into NIST 800-37 here.

NIST SP 800-161, “Cybersecurity Supply Chain Risk Management Practices for Systems and Organizations”

NIST 800-161 is an interesting application of both the RMF and the Control Catalog for supply chain security specifically. The controls in NIST 800-161 take the base controls from NIST 800-53 and provide guidance on how to achieve more specific outcomes for the controls. For the framework, NIST 800-161 takes the generic RMF and creates a version that is tailored to SSCS. 

NIST 800-161 is a comprehensive standard that will guide your organization to create a development process with its primary output being highly secure software and systems. 

NIST SP 800-218, “Secure Software Development Framework (SSDF)”

NIST 800-218, the SSDF, is an even more refined standard than NIST 800-161. The SSDF targets the software developer as the audience and gives even more tailored recommendations on how to create secure software systems.

If you’re a developer attempting to build secure software that complies with all of these standards, we have an ongoing blog series that breaks down the individual controls that are part of the SSDF.

NIST SP 800-204D, “Strategies for the Integration of Software Supply Chain Security in DevSecOps CI/CD Pipelines”

Focused specifically on cloud-native architectures and Continuous Integration/Continuous Delivery (CI/CD) pipelines, NIST 800-204D is significantly more specific than any of the previous standards. That being said, if the primary insertion point for software supply chain security in your organization is the DevOps team, this standard will have the greatest impact on your overall software supply chain security.

Also, it is important to note that this standard is still a draft and will likely change as it is finalized.

Open Source Security Foundation (OpenSSF)

A project of the Linux Foundation, the Open Source Security Foundation is a cross-industry organization that focuses on the security of the open source ecosystem. Since most third-party dependencies are open source, the OpenSSF carries a lot of weight in the software supply chain security domain.

Supply-chain Levels for Software Artifacts (SLSA)

If an SBOM is an ingredients label for a product, then SLSA (pronounced “salsa”) is the food safety handling guidelines for the factory where the product is made. It focuses primarily on updating traditional DevOps workflows with signed attestations about the quality of the software that is produced.

Google originally donated the framework and has been using an internal version of SLSA since 2013, which it requires for all of its production workloads.

You can view the entire framework on its dedicated website here.

Secure Supply Chain Consumption Framework (S2C2F) 

The S2C2F is similar to SLSA but much broader in scope. It gives recommendations for securing the entire software supply chain, drawing on traditional security practices such as vulnerability scanning. It touches on signed attestations, but not at the same depth as SLSA.

The S2C2F was built and donated by Microsoft, where it has been used and refined internally since 2019.

You can view the entire list of recommendations on its GitHub repository.

Cloud Native Computing Foundation (CNCF)

The CNCF is also a project of the Linux Foundation but is focused on the entire ecosystem of open-source, cloud-native software. The Security Technical Advisory Group at the CNCF has a vested interest in supply chain security because the majority of the software that is incubated and matured at the CNCF is part of the software development lifecycle.

Software Supply Chain Best Practices White Paper

The Security Technical Advisory Group at the CNCF created a best practices white paper that was heralded as a huge step forward for the security of software supply chains. Its creation was led by the CTO of Docker and the Chief Open Source Officer at Isovalent, and it captures over 50 recommended practices for securing the software supply chain.

You can view the full document here.

Types of Supply Chain Compromise

This document isn’t a standard or a set of best practices; instead, it supports the best practices white paper by defining a full list of supply chain compromises.

Catalog of Supply Chain Compromises

This isn’t a standard or best practices document either. It is instead a detailed history of the significant supply chain breaches that have occurred over the years, helpful for understanding the history that informed the best practices detailed in the accompanying white paper.

How Anchore Can Help 

Anchore is a leading software supply chain security company that has built a modern, SBOM-powered software composition analysis (SCA) platform that helps organizations incorporate many of the software supply chain best practices that are defined in the above guides.

As we have learned working with Fortune 100 enterprises and federal agencies, including the Department of Defense, an organization’s supply chain security can only be as good as the depth of their data on their supply chain and the automation of processing the raw data into actionable insights. Anchore Enterprise provides an end-to-end software supply chain security system with total visibility, deep inspection, automated enforcement, expedited remediation and trusted reporting to deliver the actionable insights to make a supply chain as secure as possible.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Detecting Exploits within your Software Supply Chain

SBOMs. What are they good for? At Anchore, we see SBOMs (software bills of materials) as the foundation of an application’s supply chain hierarchy. Upon this foundation you can build a lot of powerful features, such as the ability to detect vulnerabilities in your open source dependencies before they are pushed to production. An unintended side effect of giving users the power to see deeply into their application’s dependencies and detect the vulnerabilities in those dependencies is that the process can sometimes surface hundreds of vulnerabilities.

We’ve seen customer applications that generate 400+ known vulnerabilities! This creates an information overload that typically ends in the application developer ignoring the results, because it is too much effort to triage and remediate each one. Knowing that an application is riddled with vulnerabilities is better than not knowing, but excessive information does not lead to actionable insights.

Anchore Enterprise solves this challenge by pairing vulnerability data (e.g. CVEs, etc) with exploit data (e.g. KEV, etc). By combining these two data sources we can create actionable insight by showing users both the vulnerabilities in their applications and which vulnerabilities are actually being exploited. Actively exploited vulnerabilities are significantly higher risk and can be prioritized for triage and remediation first. In this blog post, we’ll discuss how we do that and how it can save both your security team and application developers time.

How Does Anchore Enterprise Help You Find Exploits in Your Application Dependencies?

What is an Exploited Vulnerability?

“Exploited” is an important distinction because it means that not only does a vulnerability exist, but a payload also exists that can reliably trigger the vulnerability and cause an application to execute unintended functionality (e.g. leaking the contents of a database or deleting all of the data in it). For instance, almost all bank vaults in the world are vulnerable to an asteroid strike “deleting” all of the contents of the safe, but no one has developed a system to reliably cause an asteroid to strike bank vaults. Maybe Elon Musk can make this happen in a few more years, but today this vulnerability isn’t exploitable. It is important for organizations to prioritize exploited vulnerabilities because the potential for damage is significantly greater.

Source High-Quality Data on Exploits

In order to find vulnerabilities that are exploitable, you need high-quality data from security researchers who are either crafting exploits for known vulnerabilities or analyzing attack data for payloads that trigger an exploit in a live application. Thankfully, there are two exceedingly high-quality databases that publish this information publicly and regularly: the Known Exploited Vulnerabilities (KEV) Catalog and the Exploit Database (Exploit-DB).

The KEV Catalog is a database of known exploited vulnerabilities that is published and maintained by the US government through the Cybersecurity and Infrastructure Security Agency (CISA). It is updated regularly; CISA typically adds 1-5 new KEVs every week.

While not an exploit database itself, the National Vulnerability Database (NVD) is an important source of exploit data because it checks all of the vulnerabilities that it publishes and maintains against the Exploit-DB and embeds the relevant identifiers when a match is found.

Anchore Enterprise ingests both of these data feeds and stores the data in a centralized repository. Once this data is structured and available to your organization it can then be used to determine which applications and their associated dependencies are exploitable.

Map Data on Exploits to Your Application Dependencies

Now that you have a quality source of data on known exploited vulnerabilities, you need to determine whether any of these exploits exist in your applications and/or the dependencies they are built with. The industry-standard method for storing information on applications and their dependency supply chain is a software bill of materials (SBOM).

After you have an SBOM for your application you can then cross-reference the dependencies against both a list of known vulnerabilities and a list of known exploited vulnerabilities. The output of this is a list of all of the applications in your organization that are vulnerable to exploits.

If done manually, via something like a spreadsheet, this can quickly become a tedious process. Anchore Enterprise automates it by generating SBOMs for all of your applications and running scans of those SBOMs against vulnerability and exploit databases.
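Under the hood, the core of that cross-reference is simple set logic. Here is a rough sketch of the manual version, using Grype’s JSON output and CISA’s published KEV catalog (the KEV URL and the JSON field names are assumptions based on the public feeds at the time of writing; verify them before relying on this):

import json
import urllib.request

# CISA's KEV catalog feed (confirm the current URL on cisa.gov).
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def load_kev_ids() -> set[str]:
    with urllib.request.urlopen(KEV_URL) as resp:
        catalog = json.load(resp)
    return {entry["cveID"] for entry in catalog["vulnerabilities"]}

def exploited_findings(grype_report_path: str) -> list[tuple[str, str, str]]:
    # Grype's JSON report lists findings under "matches".
    with open(grype_report_path) as f:
        report = json.load(f)
    kev_ids = load_kev_ids()
    return [
        (m["artifact"]["name"], m["artifact"]["version"], m["vulnerability"]["id"])
        for m in report["matches"]
        if m["vulnerability"]["id"] in kev_ids
    ]

# Usage:  grype -o json myapp:latest > report.json
for name, version, cve in exploited_findings("report.json"):
    print(f"PRIORITIZE: {name}@{version} is affected by exploited {cve}")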

How Does Anchore Enterprise Help You Prioritize Remediation of Exploits in Your Application Dependencies?

Once we’ve used Anchore Enterprise to detect CVEs in our containers that are also exploitable per the KEV or Exploit-DB lists, we can take the severity score back into account alongside this contextual evidence. We need to know two things for each finding: what is its severity, and can we accept the risk of leaving that vulnerable code in our application or container?

If we look back to the Log4j event in December of 2021, that particular vulnerability scored a 10 on the CVSS. That score alone provides little detail on how dangerous the vulnerability is. If a CVE is filed against any given piece of software and the NVD researchers cannot reach the authors of the code, it is assigned a score of 10 and the worst case is assumed.

However, if we have applied our KEV and Exploit-DB bundles and determined that we do indeed have a critical vulnerability with active known exploits and evidence that it is being exploited in the wild, AND the severity exceeds our personal or organizational risk thresholds, then we know we need to take action immediately.

Everyone has questioned the utility of the SBOM, but Anchore Enterprise is making that question an afterthought. Moving past the basics of generating an SBOM and detecting CVEs, Anchore Enterprise automatically maps exploit data to specific packages in your software supply chain, allowing you to generate reports and notifications for your teams. By analyzing this higher-quality information, you can determine which vulnerabilities actually pose a threat to your organization and, in turn, make more intelligent decisions about which to fix and which to accept, saving your organization time and money.

Wrap Up

Returning to our original question: what are SBOMs good for? It turns out the answer is scaling the process of finding and prioritizing vulnerabilities in your organization’s software supply chain.

In today’s increasingly complex software landscape, the importance of securing your application’s supply chain cannot be overstated. Traditional SBOMs have empowered organizations to identify vulnerabilities but often left them inundated with too much information, rendering the data less actionable. Anchore Enterprise revolutionizes this process by not only automating the generation of SBOMs but also cross-referencing them against reputable databases like KEV Catalog and Exploit-DB to isolate actively exploited vulnerabilities. By focusing on the vulnerabilities that are actually being exploited in the wild, your security team can prioritize remediation efforts more effectively, saving both time and resources. 

Anchore Enterprise moves beyond merely detecting vulnerabilities to providing actionable insights, enabling organizations to make intelligent decisions on which risks to address immediately and which to monitor. Don’t get lost in the sea of vulnerabilities; let Anchore Enterprise be your compass in navigating the choppy waters of software security.

If you’d like to learn more about the Anchore Enterprise platform or speak with a member of our team, feel free to book a time to speak with one of our specialists.

Introducing Grype Explain

Since we released Grype three years ago (in September 2020), one of the most frequent questions we’ve gotten is, “why is image X vulnerable to vulnerability Y?” Today, we’re introducing a new sub-command to help users answer this question: Grype Explain.

Now, when users are surprised to see some CVE they’ve never heard of in their Grype output, they can ask Grype to explain itself: grype -o json alpine:3.7 | grype explain --id CVE-2021-42374. We’re asking the community to please give it a try, and if you have feedback or questions, let us know.

The goal of Grype Explain is to help operators evaluate a reported vulnerability so that they can decide what, if any, action to take. To demonstrate, let’s look at a simple scenario.

First, an operator who deploys a file called fireline.hpi into production sees some vulnerabilities:

❯ grype fireline.hpi | grep Critical

✔ Vulnerability DB                [no update available]
✔ Indexed file system
✔ Cataloged packages              [35 packages]
✔ Scanned for vulnerabilities     [36 vulnerabilities]
├── 10 critical, 14 high, 9 medium, 3 low, 0 negligible
└── 14 fixed

bcel                 6.0-SNAPSHOT  6.6.0     java-archive    GHSA-97xg-phpr-rg8q  Critical
commons-collections  3.1           3.2.2     java-archive    GHSA-fjq5-5j5f-mvxh  Critical
dom4j                1.6.1         2.0.3     java-archive    GHSA-hwj3-m3p6-hj38  Critical
fastjson             1.2.9         1.2.31    java-archive    GHSA-xjrr-xv9m-4pw5  Critical
fastjson             1.2.9                   java-archive    CVE-2022-25845       Critical
fastjson             1.2.9                   java-archive    CVE-2017-18349       Critical
log4j-core           2.11.1        2.12.2    java-archive    GHSA-jfh8-c2jp-5v3q  Critical
log4j-core           2.11.1        2.12.2    java-archive    GHSA-7rjr-3q55-vv33  Critical
log4j-core           2.11.1                  java-archive    CVE-2021-45046       Critical
log4j-core           2.11.1                  java-archive    CVE-2021-44228       Critical

Wait, isn’t CVE-2021-44228 Log4Shell? I thought we patched that! The operator asks for an explanation of the vulnerability:

❯ grype -q -o json fireline.hpi | grype explain --id CVE-2021-44228

[0000]  WARN grype explain is a prototype feature and is subject to change

CVE-2021-44228 from nvd:cpe (Critical)

Apache Log4j2 2.0-beta9 through 2.15.0 (excluding security releases 2.12.2, 2.12.3, and 2.3.1)
JNDI features used in configuration, log messages, and parameters do not protect against attacker
controlled LDAP and other JNDI related endpoints. An attacker who can control log messages or log
message parameters can execute arbitrary code loaded from LDAP servers when message lookup
substitution is enabled. From log4j 2.15.0, this behavior has been disabled by default. From
version 2.16.0 (along with 2.12.2, 2.12.3, and 2.3.1), this functionality has been completely
removed. Note that this vulnerability is specific to log4j-core and does not affect log4net,
log4cxx, or other Apache Logging Services projects.

Related vulnerabilities:
    - github:language:java GHSA-jfh8-c2jp-5v3q (Critical)
Matched packages:
    - Package: log4j-core, version: 2.11.1
      PURL: pkg:maven/org.apache.logging.log4j/[email protected]
      Match explanation(s):
          - github:language:java:GHSA-jfh8-c2jp-5v3q Direct match (package name, version, and
            ecosystem) against log4j-core (version 2.11.1).
          - nvd:cpe:CVE-2021-44228 CPE match on `cpe:2.3:a:apache:log4j:2.11.1:*:*:*:*:*:*:*`.
      Locations:
          - /fireline.hpi:WEB-INF/lib/fireline.jar:lib/firelineJar.jar:log4j-core-2.11.1.jar
URLs:
    - https://nvd.nist.gov/vuln/detail/CVE-2021-44228
    - https://github.com/advisories/GHSA-jfh8-c2jp-5v3q

Right away this gives us some information an operator might need:

  • Where’s the vulnerable file?
    • /fireline.hpi:WEB-INF/lib/fireline.jar:lib/firelineJar.jar:log4j-core-2.11.1.jar
    • The nested path tells the operator that a jar inside a jar inside the .hpi file is responsible for the vulnerability.
  • How was it matched?
    • Seeing both a CPE match on cpe:2.3:a:apache:log4j:2.11.1:*:*:*:*:*:*:* and a GHSA match on pkg:maven/org.apache.logging.log4j/[email protected] gives the operator confidence that this is a real match. 
  • What’s the URL where I can read more about it?
    • Links to the NVD and GHSA sites for the vulnerability are printed out so the operator can easily learn more.

Based on this information, the operator can assess the severity of the issue and know what to patch.

We hope that Grype Explain will help users better understand and respond faster to vulnerabilities in their applications. Do you have feedback on how Grype Explain could be improved? Please let us know!