At its core, any security tool is only as good as the data it uses. This is the age-old principle of “garbage in, garbage out.” If your security tooling is working with stale, incomplete, or inaccurate information, it will produce unreliable results. This leaves you with a false sense of security.
To protect your software supply chain and wrestle with the ever-increasing wave of CVEs effectively, you need a constant stream of high-quality, up-to-date vulnerability data that is matched to your software and SBOMs. This is especially critical for reacting to zero-day vulnerabilities, where every second counts.
This post will take you “under the hood” of Anchore Enterprise’s hosted data service to show how its stream of vulnerability feeds is engineered to deliver the timely, accurate and enriched data needed to remediate vulnerabilities fast.
A lot has been changing in Anchore to provide better results with more data at your fingertips: the introduction of the KEV and EPSS datasets, the addition of secondary CVSS scores in Anchore Enterprise 5.20, and matched CPEs in 5.24. We will also touch on how this data can be further extended with the introduction and support of VEX (Vulnerability Exploitability eXchange) and VDR (Vulnerability Disclosure Report).
Let’s get started!
Introducing the Anchore Data Service
The landscape of software vulnerabilities is never static. New threats emerge daily, data sources are constantly updated, and upstream feeds can suffer from API changes, data inconsistencies, or inaccuracies.
Managing this high-velocity, often-chaotic stream of information is a full-time job. This is where the Anchore Data Service comes in as a delivery vehicle. The service shoulders the heavy lifting by continuously ingesting, analyzing, and correlating the latest vulnerability intelligence from sources like Red Hat, Canonical, GitHub, NVD, CISA KEV, EPSS and much more.
Our security team additionally publishes ‘patches’ for this data, correcting upstream errors, suppressing known false positives, and enriching records to ensure maximum accuracy. The end result is a curated, high-fidelity set of intelligence feeds that are available to your Anchore Enterprise deployment. Whether in the cloud, on-premises, or even fully air-gapped, Anchore Data Service gives you a single, trustworthy source of truth for all your vulnerability scanning.
We document the curation workflow in detail over on our vulnerability management docs pages; in this article we will unpack how this data is first made available and then how it can be utilized.
How does the Anchore Data Service work?
Anchore Data Service is designed for both robustness and flexibility, catering to internet-connected and fully air-gapped environments alike. The magic happens through a dedicated internal service that acts as the central hub for the vulnerability data your deployment downloads.
In an internet-connected deployment, your Anchore Enterprise deployment pulls data directly from the Anchore Data Service, hosted at https://data.anchore-enterprise.com. You only need to allow outbound HTTPS traffic (TCP port 443) from your Anchore instance. The deployment’s data syncer service periodically reaches out to this endpoint and checks for new feed data. If it finds an update, it downloads and distributes it across your deployment.
For air-gapped deployments with no internet connectivity, Anchore provides a simple, secure mechanism for updating vulnerability data. Using the command-line tool, anchorectl, on a low-side (internet-connected) machine, you can download the entire vulnerability data feed as a single bundle. Then simply transfer (or “sneakernet”) this bundle into your air-gapped network. Finally, using a local copy of anchorectl in your high-side environment, upload the feed data. This gives you full control over the data flow while maintaining a strict air gap.
The data lifecycle: From publication to matching
A question we often hear is, ‘When a newly discovered vulnerability is published, how long does it take to go from security advisory all the way to a finding on an image in your deployment?’ As with most questions in IT, the answer is: it depends. But let’s go under the hood and look at the nuances of how this works.
Let’s run through the steps so you can see how this works end to end.
Step 1: Anchore pulls upstream data from vendors and other sources, compiles it, and publishes it to the hosted Anchore Data Service. This happens every 6 hours. (Our OSS tooling publishes data only every 24 hours.)
Step 2: The data syncer service in an Anchore Enterprise deployment runs every hour and checks the hosted Anchore Data Service for new vulnerability data.
Step 3: Once downloaded, the data syncer service communicates with the Anchore Policy Engine and other internal components to update internal databases. This makes the new dataset, or set of feeds, available within your deployment.
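The three steps above follow a familiar poll-and-update pattern. The sketch below is purely illustrative (not Anchore’s actual implementation): a syncer compares the versions of each feed it holds locally against what the remote service advertises, downloads anything newer, and hands the new datasets on for distribution. All class and feed names here are hypothetical.

```python
class FakeRemote:
    """Stand-in for the hosted data service endpoint (illustrative only)."""
    def __init__(self, versions, data):
        self.versions, self.data = versions, data

    def list_versions(self):
        # Advertise the current version of each feed dataset.
        return dict(self.versions)

    def download(self, feed):
        # Fetch the full dataset bundle for one feed.
        return self.data[feed]

def sync_feeds(remote, local_state):
    """Download any feed datasets newer than what is held locally."""
    updated = []
    for feed, version in remote.list_versions().items():
        if local_state.get(feed) != version:
            local_state[feed] = version                  # record new version
            updated.append((feed, remote.download(feed)))  # hand off downstream
    return updated

# Local deployment is one version behind on the "nvd" feed.
remote = FakeRemote({"nvd": "v2", "ubuntu": "v1"},
                    {"nvd": ["example-record"], "ubuntu": []})
local_state = {"nvd": "v1", "ubuntu": "v1"}
updated = sync_feeds(remote, local_state)
```

Only the out-of-date feed is fetched; feeds already at the latest version are skipped, which keeps each hourly check cheap.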
With new data available to an Anchore Enterprise deployment, there are a few mechanics that determine how it will be utilized. For ad-hoc requests (via the web UI or API) for vulnerabilities on an artifact, the system returns the latest results. For example, if the SBOM shows log4j-core-2.14.0.jar, the policy engine searches the latest vulnerability data for any entries where log4j-core is the affected package and 2.14.0 falls within a vulnerable version range. When a match is found, a vulnerability is reported.
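That matching step can be sketched in a few lines. This is a deliberate simplification: real matchers handle ecosystem-specific version semantics (pre-release tags like 2.0-beta9, epochs, distro back-ports), whereas this sketch assumes plain dotted numeric versions. The vulnerability record below is a hypothetical shape, not Anchore’s schema.

```python
def parse_version(v):
    """Parse a plain dotted version like '2.14.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(pkg_name, pkg_version, record):
    """record: {'package': name, 'introduced': version, 'fixed': version}."""
    if pkg_name != record["package"]:
        return False
    v = parse_version(pkg_version)
    # Vulnerable if the version falls in [introduced, fixed).
    return parse_version(record["introduced"]) <= v < parse_version(record["fixed"])

# Hypothetical record in the spirit of CVE-2021-45046: fixed in 2.16.0.
record = {"package": "log4j-core", "introduced": "2.0.0", "fixed": "2.16.0"}
```

With this record, log4j-core 2.14.0 matches (it sits inside the vulnerable range), while 2.16.0 and packages with other names do not.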
The importance of having a high-quality SBOM cannot be overstated. If your SBOM is older, it can be worth rescanning with the latest version of anchorectl/Syft to get improved SBOM quality and therefore better results.
You need not re-scan an image, as the stored SBOM acts as the reference point for mapping vulnerabilities. By default, the reporting system rebuilds reports against the latest vulnerability data on a cycle timer (anchoreConfig.reports_worker.cycle_timers.reports_image_load), which defaults to every 600 seconds and is configurable if you require fresher reports.
Finally, if you have a subscription watching for vulnerability data, you can get notified of any changes to that data. For example, if a user is subscribed to the library/nginx:latest tag and on the 12th of September 2025 a new vulnerability was added to the Debian 9 vulnerability feed that matched a package in the library/nginx:latest image, this would trigger a notification.
Notifications can be configured to hit a webhook, Slack, email and/or other endpoints. This subscription is checked on a cycle of every four hours, which is also configurable (anchoreConfig.catalog.cycle_timers.vulnerability_scan).
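For Helm-based deployments, the two cycle timers mentioned above might be tuned together in a values file along these lines. This is a hypothetical sketch built from the key paths named in this post; check your chart version for the exact structure before using it.

```yaml
# Hypothetical Helm values sketch; key paths are from the timers named above.
anchoreConfig:
  reports_worker:
    cycle_timers:
      reports_image_load: 600      # seconds; rebuild reports every 10 minutes (default)
  catalog:
    cycle_timers:
      vulnerability_scan: 14400    # seconds; re-check subscriptions every 4 hours (default)
```

Lowering these values gives fresher reports and faster notifications at the cost of more background work in the deployment.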
Note: this subscription is disabled by default. There are other similar subscriptions for policy and tags that might also play a role here; for example, you may want to be notified when a policy result for a CVE changes (e.g., a CVE’s severity changes from unknown to critical).
The data lifecycle: Trust but verify…the feeds
At this point you can see that the Anchore Data Service is critical to your deployment’s continued operation. Because of this, and the ever-changing nature of upstream data, we provide the Anchore Data Service status page. This page offers real-time operational health of all our backend services, including the critical vulnerability data feeds, package information, and security advisories that your Anchore Enterprise instance relies on.
If you ever suspect an issue with data synchronization or feed updates, this status page should be your first troubleshooting step. It allows you to immediately verify if Anchore is experiencing an outage or performing maintenance, saving your team valuable time in diagnosing whether an issue is local to your environment or an external upstream problem.
You can also easily verify the status of the vulnerability feeds in your own deployment using anchorectl:
# See a list of all feeds and their last sync time
$ anchorectl system feeds list

This command will show you each feed group (e.g., vulnerabilities, ClamAV, nvdv2, ubuntu, etc.), the number of records, and when each was last updated. You can also log in to the Anchore Enterprise Web UI with admin permissions and head to the System -> Health page to see the feed list, timestamps, and other details like record counts.
When you see recent timestamps, you know the data is flowing correctly. Importantly, each feed has its own timestamp representing the last time the service pulled data from the upstream source; if no new data has been published upstream, this timestamp won’t be updated. The policy engine and relevant policy packs have rules defined to raise a policy failure if upstream data is missing or stale. For example, the FedRAMP policy bundle will flag any evaluation using data older than 48 hours.
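The staleness rule just described is simple to express in code. The sketch below (an illustration, not Anchore’s policy engine) flags any feed whose last successful sync is older than the 48-hour threshold used by the FedRAMP policy bundle; the feed names and timestamps are made up for the example.

```python
from datetime import datetime, timedelta, timezone

# FedRAMP policy bundle threshold described in the text.
MAX_AGE = timedelta(hours=48)

def stale_feeds(last_synced, now=None):
    """Return the names of feeds whose last sync exceeds MAX_AGE.

    last_synced: {feed_name: datetime of last successful sync}
    """
    now = now or datetime.now(timezone.utc)
    return [feed for feed, ts in last_synced.items() if now - ts > MAX_AGE]

# Illustrative feed state: one fresh feed, one stale feed.
now = datetime(2025, 9, 12, tzinfo=timezone.utc)
feeds = {
    "nvd": now - timedelta(hours=2),
    "ubuntu": now - timedelta(hours=50),
}
flagged = stale_feeds(feeds, now)
```

A check like this makes the failure mode explicit: a feed with no recent timestamp either has no new upstream data or is genuinely stuck, and a policy failure forces a human to find out which.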
What’s in the data and why it matters
The Anchore Data Service distributes a few different types of datasets that can aid prioritization in your remediation workflow.
For malware checks, Anchore Enterprise utilizes ClamAV, which is disabled by default but can be enabled for all centralized image scans. This data is constantly updated and pulled from upstream ClamAV, and checks on your container image filesystem happen at scan time. This offers extra insight into your container images beyond pure vulnerabilities; we won’t dig much further into it here, but it is certainly a strong signal to utilize when determining if your software is safe for production.
Another useful dataset is the Known Exploited Vulnerabilities (KEV) catalog, produced by CISA. It is a list of known exploited vulnerabilities; if a CVE is actively being exploited in the wild, it will be on this list.
Finally, there is the Exploit Prediction Scoring System (EPSS) dataset, which provides a score and a percentile ranking. These are based on modeled data and estimate the probability of a CVE being exploited in the next 30 days.
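Taken together, KEV, EPSS and CVSS give a natural triage ordering. The sketch below shows one hypothetical policy choice (not an Anchore algorithm): anything on the KEV list comes first, ties are broken by EPSS score, then by CVSS severity. The findings and scores are invented for the example.

```python
def triage_key(finding):
    """Sort key for a finding: {'cve': str, 'kev': bool, 'epss': float, 'cvss': float}.

    KEV membership dominates (actively exploited), then likelihood of
    exploitation (EPSS), then raw severity (CVSS).
    """
    return (finding["kev"], finding["epss"], finding["cvss"])

findings = [
    {"cve": "CVE-A", "kev": False, "epss": 0.02, "cvss": 9.8},  # severe but unlikely
    {"cve": "CVE-B", "kev": True,  "epss": 0.10, "cvss": 7.5},  # exploited in the wild
    {"cve": "CVE-C", "kev": False, "epss": 0.65, "cvss": 6.1},  # likely to be exploited
]
ranked = sorted(findings, key=triage_key, reverse=True)
```

Note how the ordering differs from sorting on CVSS alone: the KEV-listed CVE-B jumps the queue despite its lower severity, which is exactly the behavior these extra signals are meant to enable.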
Anchore Data Service maintains numerous vulnerability feeds from upstream sources. Each entry includes useful metadata and context, such as:
- Vulnerability Identifiers: CVE or other unique ID (e.g., GHSA), as well as external data source URLs that act as guidance and/or reference.
- Severity Score: The CVSS v2 or v3 score and vector, helping you prioritize what to fix first. Secondary scores can also be presented if needed.
- Affected Type: The type of software/ecosystem the software resides in.
- Affected Package: The name of the software package or library.
- Affected Versions: The specific version or version range that is vulnerable.
- Fix Information: The version in which a fix is available.
- CPE, CPE 2.3 and PURL: These are the primary ways to “name” a piece of software so Anchore can find and match it to known vulnerabilities. Each has strengths and weaknesses, but PURL generally helps get the most accurate matching.
- Package path: Where is this package located? Don’t forget you might have multiple instances of the same software in one image.
- Feed/feed group: Which upstream data feed was used to match this CVE data.
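To make the CPE and PURL fields in the list above concrete, the sketch below builds both identifiers for the log4j example used in this post. The string formats follow the public purl specification and the CPE 2.3 formatted-string binding; the specific vendor/product values are illustrative, and real tooling would also percent-encode special characters, which is omitted here.

```python
def make_purl(ecosystem, namespace, name, version):
    """Build a Package URL (purl), e.g. pkg:maven/org.apache/.../name@version."""
    return f"pkg:{ecosystem}/{namespace}/{name}@{version}"

def make_cpe23(vendor, product, version):
    """Build a CPE 2.3 formatted string for an application ('a' part).

    The trailing wildcards cover update, edition, language, sw_edition,
    target_sw, target_hw and other.
    """
    return f"cpe:2.3:a:{vendor}:{product}:{version}:*:*:*:*:*:*:*"

purl = make_purl("maven", "org.apache.logging.log4j", "log4j-core", "2.14.0")
cpe = make_cpe23("apache", "log4j", "2.14.0")
```

Notice that the purl carries the exact artifact coordinates from the build ecosystem, while the CPE names the vendor and product as NVD knows them; the gap between those two views is precisely why matching on CPE alone can produce false positives.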
Utilizing this information helps with matching as well as remediation. It’s the difference between a result that just tells you “your container has Log4j” and one that tells you “your container is using log4j-core version 2.15, which is vulnerable to CVE-2021-45046 (GHSA-7rjr-3q55-vv33) with a critical severity of 9, a fix is available in version 2.16, and that fix has been available for four years.”
Beyond what is included in the OSS Grype vulnerability feeds, Anchore Enterprise offers additional feeds like Microsoft MSRC vulnerability data and exclusions.
But wait, there’s more:
- Severity Score – Secondary CVSS: Anchore can now be configured to show the highest secondary CVSS score if a primary NVD score has not been provided for a CVE. This is useful because some vulnerabilities have two CNA (CVE Numbering Authority) upstream sources associated with the same package/vulnerability.
- Matched CPEs: This field contains a list of CPEs that were matched for the vulnerable package. It provides more context around how the vulnerability was identified. This extra data might help you understand the match Anchore used and identify any false positives.
During the scan, various attributes will be used from the SBOM package artifacts to match to the relevant vulnerability data. Depending on the ecosystem, the most important of these package attributes tend to be Package URL (purl) and/or Common Platform Enumeration (CPE).
The Anchore analyzer attempts a best-effort guess of the CPE candidates for a given package, as well as the purl, based on the metadata available at the time of analysis. For example, for Java packages, the manifest contains multiple different version specifications but sometimes stores erroneous version data.
Luckily there are some processes to help facilitate better matching and get you the most accurate results:
- Enrichment: Due to the known issues with the NVD, Anchore Enterprise enhances the quality of its data for analysis by enriching the information obtained from the NVD. This process involves human intervention to review and correct the data. Once this manual process is completed, the cleaned and refined data is stored in the Anchore Enrichment Database.
- Vulnerability Match Exclusions: These allow us to remove a vulnerability from the findings for a specific set of match criteria.
- Corrections & Hints: Anchore Enterprise lets you adapt the SBOM and its contained packages, and also provide a correction that updates a given package’s metadata so that attributes (including CPEs and Package URLs) are corrected at the time Anchore performs a vulnerability scan.
- Vendor Data First: We surface both NVD and vendor data, but recommend, and by default surface, vendor-specific data first. Vendors understand best how a package has been compiled and installed, and therefore the impact of a known vulnerability; they also have the most accurate fix information.
One of the most impactful features recently released in the Anchore Enterprise ecosystem is support for the open standards VEX (Vulnerability Exploitability eXchange) and VDR (Vulnerability Disclosure Report). While these deserve a deep dive of their own, their core value is simple: they allow you to apply vulnerability annotations, like “this CVE is not applicable because the code path is not executed” or “under investigation,” directly to your SBOMs.
Not only this, but you will soon also be able to leverage VEX documents provided by upstream vendors like Red Hat, for example if you use UBI9 base images that contain CVEs. This means you can eliminate noise and save significant manual triage time with confidence. Because Anchore Enterprise supports image ancestry and inheritance detection, these time savings multiply across every image in your environment. Furthermore, you can share these annotations with customers and auditors, streamlining their adoption and compliance process.
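As a flavor of what such an annotation looks like, here is a minimal OpenVEX-style statement asserting that a Log4j CVE does not affect a product because the vulnerable code path is never executed. This is a trimmed sketch following the public OpenVEX format: a complete document also carries author and timestamp metadata, and the product identifier below is illustrative.

```json
{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "statements": [
    {
      "vulnerability": { "name": "CVE-2021-45046" },
      "products": [
        { "@id": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.0" }
      ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path"
    }
  ]
}
```

A scanner that consumes this document can suppress the finding for exactly this product while leaving it visible everywhere else, which is what makes VEX safer than a blanket allowlist.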
For some, simply being able to leverage the mix of available data, like CVSS v2/v3 scores, or to extract the PURL for downstream use cases, such as matching and publishing discovered vulnerabilities and their data points into other systems like a SIEM, is a must. In a larger enterprise, this helps connect data systems, facilitate organizational automation, and drive consistency across results from disparate systems.
A common example: some compliance scenarios require NVD-specific results, and while we lean on vendor data first, surfacing NVD results is absolutely supported. Anchore Enterprise makes this simple with 100% API coverage as well as a powerful notifications system providing rich exposure to the underlying data.
Summary
In vulnerability management, the “garbage in, garbage out” trap must be avoided: any tooling fed incomplete, outdated, or inaccurate data leads to false positives, missed threats, and wasted effort. In addition, the ability to utilize additional signals from data sources like EPSS and KEV can truly assist your remediation and prioritization efforts in the face of the never-ending wave of vulnerabilities.
This is why Anchore invests heavily in our vulnerability data feed. We do the relentless, complex work of ingesting, correlating, and curating data so you don’t have to. The result is a reliable, timely, and high-fidelity intelligence feed engineered to power your security operations, no matter your environment. By letting Anchore manage the data chaos, you gain confidence that your entire security posture is based on the latest intelligence. This allows your team to stop chasing data and focus on what matters: finding and fixing vulnerabilities.