This article was updated February 2026.
Containers that haven’t been scanned for vulnerabilities have no business reaching production. Yet many CI/CD pipelines do exactly that — build an image, push it to a registry, and deploy it without ever checking what’s inside. Anchore Enterprise solves this by adding a security gate to your pipeline, ensuring only compliant and vulnerability-free containers make it through. This is where generating, analyzing, and storing SBOMs comes into play.
In this post, I’ll walk you through two approaches for integrating Anchore into an Azure DevOps pipeline using AnchoreCTL: distributed analysis, where SBOM generation happens locally on the pipeline agent, and centralized analysis, where images are pushed to a staging registry and scanned by Anchore Enterprise.
Starting Point: A Simple Build Pipeline
Here’s a typical Azure DevOps pipeline that builds a Docker image and pushes it directly to a production registry:
trigger:
- master

resources:
- repo: self

stages:
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: examplerepo.io/simpleserver
        dockerfile: Dockerfile
        containerRegistry: production
        tags: |
          $(Build.BuildId)

This works, but there’s a gap: nothing validates the security posture of the image before it lands in production. Let’s fix that by adding Anchore Enterprise as a security stage between the build and the production push.
Common Prerequisites
Regardless of which analysis approach you choose, you’ll need:
- A running instance of Anchore Enterprise. Refer to the documentation to get set up.
- An Azure Pipeline. Refer to the Azure documentation to get set up.
- An Azure Key Vault group for Anchore credentials. You’ll need the URL and credentials for your Anchore Enterprise instance. You can authenticate with either a username and password or an API key. API keys are the recommended approach: they can be scoped, rotated, and revoked independently of user accounts, making them a better fit for CI/CD pipelines. See the API keys documentation for setup instructions. Store your credentials in an Azure Key Vault group called anchoreCredentials, and make sure to lock any sensitive variables so they stay secret. You can also use environment variables directly in the pipeline, but that’s less secure, and as a representative of a security organization, I wouldn’t recommend it.
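As a sketch of how locked variables reach a script step, secrets can be mapped in through a step’s env: section rather than expanded inline with macro syntax, which keeps them out of the rendered command line in logs. The variable names below are assumptions matching the anchoreCredentials group described above:

```yaml
steps:
- script: |
    # The AnchoreCTL environment variables are populated from the env: mapping below.
    anchorectl system status
  displayName: Example of consuming locked secrets
  env:
    ANCHORECTL_URL: $(anchore_url)
    ANCHORECTL_USERNAME: $(anchore_user)
    ANCHORECTL_PASSWORD: $(anchore_pass)  # secret variables must be mapped explicitly
```

Note that Azure Pipelines does not automatically expose secret variables as environment variables to scripts; the explicit env: mapping is what makes them available.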
Distributed Analysis
Distributed analysis is the simpler of the two approaches. Instead of pushing your image to a staging registry for Anchore Enterprise to pull and scan, AnchoreCTL generates the SBOM locally in the pipeline agent and sends it to Anchore Enterprise for policy evaluation. This means you don’t need a staging registry or any special registry permissions for Anchore — the image never leaves the build agent until it’s been approved.
Pipeline
trigger:
- master

resources:
- repo: self

variables:
- name: imageRef
  value: 'production/simpleserver:$(Build.BuildId)'
- group: anchoreCredentials

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: Docker@2
      displayName: Build image
      inputs:
        command: build
        repository: simpleserver
        dockerfile: Dockerfile
        tags: |
          $(Build.BuildId)

- stage: Security
  displayName: Security scan stage
  dependsOn: Build
  jobs:
  - job: Security
    displayName: Security
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: curl -X GET "https://$(anchore_endpoint)/v2/system/anchorectl?operating_system=linux&architecture=amd64" -H "accept: */*" | tar -zx anchorectl
      displayName: Install AnchoreCTL
    - script: |
        export PATH=$PATH:$HOME/.local/bin
        export ANCHORECTL_URL=$(anchore_url)
        export ANCHORECTL_USERNAME=$(anchore_user)
        export ANCHORECTL_PASSWORD=$(anchore_pass)
        # Or use an API key instead of username/password:
        # export ANCHORECTL_USERNAME=_api_key
        # export ANCHORECTL_PASSWORD=$(api_token)
        anchorectl image add $(imageRef) --from docker:simpleserver:$(Build.BuildId) --dockerfile Dockerfile --wait
        anchorectl image vulnerabilities $(imageRef)
        anchorectl image check $(imageRef)
      displayName: Anchore Security Scan

- stage: Production
  # Push the image to your production registry and deploy

How It Works
The key difference is the --from flag on anchorectl image add. This tells AnchoreCTL to generate the SBOM from a local Docker source (docker:simpleserver:$(Build.BuildId)) rather than expecting Anchore Enterprise to pull the image from a remote registry to generate the SBOM. AnchoreCTL generates the SBOM on the pipeline agent and uploads it to Anchore Enterprise, which then performs vulnerability matching and policy evaluation. The first argument to image add ($(imageRef)) is the tag that Anchore Enterprise will use to identify the image in its database; it doesn’t need to be a pullable registry path. The --wait flag blocks until analysis is complete, and then image vulnerabilities and image check work the same as in the centralized approach.
This is the recommended approach for most pipelines because it requires less infrastructure setup and avoids the overhead of a staging registry. However, if you’ve already pushed the image to a registry in a previous step, you can swap out the --from docker flag for --from registry to have AnchoreCTL pull that exact image for local analysis. This has a practical benefit for supply chain tracking: when AnchoreCTL analyzes an image from a registry, it captures the registry-assigned digest, which stays consistent as that image moves through your environments into production. A locally built image has its own digest, but that digest can change once the image is pushed to a registry — making it harder to trace the same image across your pipeline.
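As a sketch of what that variant might look like, the scan step below swaps in --from registry. It assumes a stagedImage variable pointing at the tag pushed earlier (that variable is not defined in the distributed pipeline above; it’s borrowed from the centralized example later in this post):

```yaml
- script: |
    # Pull the already-pushed image from the registry for local SBOM generation,
    # so the analysis records the registry-assigned digest.
    anchorectl image add $(stagedImage) --from registry --dockerfile Dockerfile --wait
    anchorectl image vulnerabilities $(stagedImage)
    anchorectl image check $(stagedImage)
  displayName: Anchore Security Scan (registry source)
```

The only change from the pipeline above is the image source; the vulnerability and policy steps are unchanged.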
Failing a Pipeline Based on a Policy Result
To use the policy evaluation as a pipeline gate, pass the --fail-based-on-results flag (or -f for short) to anchorectl image check. This tells AnchoreCTL to return a non-zero exit code if the policy evaluation result is stop, which automatically fails the pipeline stage.
anchorectl image check $(imageRef) -f

Here’s an example of what a failed evaluation looks like:
✔ Evaluated against policy [failed]
Tag: docker.io/anchore/test_images:convertigo-7.9.2
Digest: sha256:b649023ebd9751db65d2f9934e3cfeeee54a010d4ba90ebaab736100a1c34d7d
Policy ID: anchore_secure_default
Last Evaluation: 2026-02-20T17:19:26Z
Evaluation: fail
Final Action: stop
Reason: policy_evaluation
error: 1 error occurred:
        * failed policies

The non-zero exit code ensures the pipeline fails before reaching the Production stage, preventing non-compliant images from being pushed to your production registry.
TIP: One-Time Analysis
Anchore Enterprise also supports a one-time analysis, where a user can check vulnerabilities and policy evaluation results without storing the SBOM in Anchore Enterprise. This is typically used for quick feedback during development, letting developers address potential issues before pushing to a staging or production registry. For Enterprise customers curious about one-time analysis, check out our CI/CD doc page about it, or reach out to the customer success team via our Support Portal.
Centralized Analysis
Centralized analysis uses a more traditional model: images are pushed to a staging registry, and Anchore Enterprise pulls them, generates SBOMs, and scans them directly using its analyzer service component. Because Anchore Enterprise has direct access to the image, it can also run a malware scan using ClamAV (if enabled; see Malware Scanning), something distributed analysis does not support as of this writing. Enabling malware scans does increase scan time, since every file in the image has to be checked, but it also enables specific malware policy checks. The centralized approach is useful when you need Anchore Enterprise to have direct access to the image layers, or when your workflow mandates a malware scan of the image contents.
Additional Prerequisites
Beyond the common prerequisites, for our example centralized analysis requires:
- A staging registry for pre-scanned images. The build stage will push images here first so Anchore Enterprise can scan them before they’re promoted to production. I’m using an Azure Container Registry, created with Terraform:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "blog" {
  name     = "blog"
  location = "West US"
}

resource "azurerm_container_registry" "blog" {
  name                = "staging"
  resource_group_name = azurerm_resource_group.blog.name
  location            = azurerm_resource_group.blog.location
  sku                 = "Standard"
  admin_enabled       = true
}

- A service connection in Azure DevOps. This allows your pipeline to push to the staging registry. Set one up by navigating to your project’s service connections, selecting Docker Registry, then Azure Container Registry, and authenticating against the staging registry you just created.
- Registry permissions for Anchore. Your Anchore Enterprise instance needs to be able to pull images from the staging registry. Follow the steps in the Anchore documentation to configure registry credentials.
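Since the Terraform above enables the ACR admin user, one way to obtain pull credentials to register with Anchore Enterprise is via the Azure CLI. A minimal sketch (the registry name matches the Terraform example; adjust for your environment):

```shell
# Sketch: fetch the staging registry's admin username and password so they can
# be registered with Anchore Enterprise as pull credentials.
REGISTRY_NAME="staging"
if command -v az >/dev/null 2>&1; then
  # JMESPath query trims the output down to just the fields Anchore needs.
  az acr credential show --name "$REGISTRY_NAME" \
    --query "{user:username, pass:passwords[0].value}"
else
  echo "Azure CLI not installed; command shown for illustration only"
fi
```

A dedicated service principal or token with pull-only scope is a tighter-privilege alternative to the admin user.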
Pipeline
trigger:
- master

resources:
- repo: self

variables:
- name: stagedImage
  value: 'staging/simpleserver:$(Build.BuildId)'
- name: productionImage
  value: 'production/simpleserver:$(Build.BuildId)'
- group: anchoreCredentials

stages:
- stage: Build
  # Build and push the image to the staging registry

- stage: Security
  displayName: Security scan stage
  dependsOn: Build
  jobs:
  - job: Security
    displayName: Security
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: curl -X GET "https://$(anchore_endpoint)/v2/system/anchorectl?operating_system=linux&architecture=amd64" -H "accept: */*" | tar -zx anchorectl
      displayName: Install AnchoreCTL
    - script: |
        export PATH=$PATH:$HOME/.local/bin
        export ANCHORECTL_URL=$(anchore_url)
        export ANCHORECTL_USERNAME=$(anchore_user)
        export ANCHORECTL_PASSWORD=$(anchore_pass)
        anchorectl image add $(stagedImage) --dockerfile Dockerfile --wait
        anchorectl image vulnerabilities $(stagedImage)
        anchorectl image check $(stagedImage) -f
      displayName: Anchore Security Scan

- stage: Production
  # Push the image to your production registry and deploy

How It Works
In this model, the Build stage pushes the image to the staging registry. The Security stage then tells Anchore Enterprise to pull and analyze the image using anchorectl image add (without the --from flag). Anchore Enterprise fetches the image from the staging registry, performs its analysis, and stores the results. The --wait flag blocks until the scan completes, and then image vulnerabilities and image check evaluate the results as before. The Dockerfile is still provided when the image is added, which Enterprise can now store and associate with the image. This also enables Dockerfile-specific policy checks, such as validating the effective user ID or flagging exposed ports that shouldn’t be.
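For a feel for what such a check looks like, here is a sketch of a single policy rule in Anchore’s policy bundle format that stops images whose effective user is root. Treat the field names and parameters as illustrative and verify them against the policy documentation for your Enterprise version:

```json
{
  "id": "rule-effective-user-root",
  "gate": "dockerfile",
  "trigger": "effective_user",
  "action": "stop",
  "params": [
    { "name": "users", "value": "root" },
    { "name": "type", "value": "blacklist" }
  ]
}
```

Rules like this live inside a policy within a policy bundle, and the action feeds directly into the pass/fail result that anchorectl image check -f gates on.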
This approach provides a fully stateful scan where Anchore Enterprise has direct access to the image and the SBOM is maintained and stored in the Anchore Enterprise deployment. Teams can retrieve results after the fact to generate reports, audit compliance, or document justifications for passed or failed scans.
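Once the Security stage passes, the Production stage promotes the scanned image. A minimal sketch of what that stage might look like, pulling the staged image, re-tagging it, and pushing it to production (the service connection names here are assumptions):

```yaml
- stage: Production
  displayName: Promote to production
  dependsOn: Security
  jobs:
  - job: Promote
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    # Assumes service connections named 'staging' and 'production' exist;
    # log in to both so the agent can pull from one and push to the other.
    - task: Docker@2
      inputs:
        command: login
        containerRegistry: staging
    - task: Docker@2
      inputs:
        command: login
        containerRegistry: production
    - script: |
        docker pull $(stagedImage)
        docker tag $(stagedImage) $(productionImage)
        docker push $(productionImage)
      displayName: Promote scanned image
```

Re-tagging the exact staged image, rather than rebuilding, ensures the artifact that reaches production is the one Anchore Enterprise actually scanned.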
Next Steps
Whether you choose distributed or centralized analysis, adding Anchore to your Azure DevOps pipeline takes only a handful of YAML additions. Consider exploring Anchore’s policy bundles to customize exactly which vulnerabilities or compliance violations should block a release. You can also integrate scan results into Azure DevOps dashboards or set up notifications to alert your team when a scan fails. Some customers use both approaches: distributed analysis in CI pipelines for faster feedback, and centralized analysis in CD pipelines for a full malware scan. So many possibilities!
For more on Anchore’s capabilities, check out the official documentation.